Microsoft AI Business Professional AB-730 Exam Questions

Page: 1 / 14
Total 77 questions
Question 1

You are a medical researcher.

You need to ensure that Microsoft 365 Copilot responds to you in a professional manner that is appropriate to your field. What is the best approach to achieve the goal?



Answer : B

Custom instructions allow users to define persistent preferences for tone, style, and response behavior across conversations. Microsoft documentation explains that Custom instructions help tailor Copilot responses to professional roles, communication styles, and domain expectations.

For a medical researcher requiring consistently professional, field-appropriate responses, configuring Custom instructions in Settings ensures that Copilot maintains the desired tone and terminology in future interactions.

Attaching examples influences only a single conversation. Saving prompts to the Prompt Gallery aids reuse but does not enforce tone globally. Copilot Memory stores factual preferences but is not intended for style configuration.

Therefore, the best and most comprehensive approach is to configure Custom instructions in Settings.


Question 2

You create a Microsoft 365 Copilot notebook and add a file named Process.docx from a local folder. Yesterday, you updated Process.docx in the local folder. What will occur when you chat in the notebook?



Answer : C

Microsoft 365 Copilot notebooks use the version of a file that was uploaded or attached at the time it was added to the notebook. When a document such as Process.docx is added from a local folder, Copilot references that uploaded snapshot of the file. If the file is later modified locally, the notebook does not automatically sync or refresh with the updated local version unless the updated file is re-uploaded.

Microsoft guidance on grounding and file references explains that Copilot works with the specific content stored in Microsoft 365 or the attached artifact within the notebook session. Since the updated version remains in the local folder and was not reattached, Copilot continues to use the originally added version.

Therefore, during subsequent chats in the notebook, Copilot references only the original uploaded version of Process.docx.



Question 3

You use Microsoft 365 Copilot.

You discover that you had a conversation that used a knowledge source that contains confidential information.

You need to delete the conversation data without requiring administrative approval. You must retain your other conversations, if possible.

What should you use?



Answer : D

Microsoft 365 Copilot follows Microsoft's enterprise data governance and privacy principles, which allow users to manage their own conversation history where appropriate. According to Microsoft AI Business Professional guidance, users can review and delete individual Copilot conversation histories directly from their personal account settings without requiring administrative intervention.

Option D is correct because the My Account portal in Microsoft 365 Copilot allows individual users to manage their activity history, including deleting specific conversations. This enables targeted removal of sensitive interactions while retaining other conversation data.

Option A is incorrect because the Copilot app itself does not provide full account-level activity management capabilities. Options B and C are administrative tools used for tenant-wide governance, compliance, retention policies, and eDiscovery. These portals typically require administrative privileges and are not intended for individual user self-service deletion of specific conversations.

Therefore, to delete a single confidential Copilot conversation without affecting other chats and without requiring administrator approval, the correct tool is the My Account portal in Microsoft 365 Copilot.


Question 4

You receive the following response to a prompt: "Sorry, it looks like I can't respond to this. Let's try a different topic."

What is a possible cause of the response?



Answer : B

Microsoft 365 Copilot follows Microsoft's Responsible AI principles and enforces strict content safety policies. When a prompt violates safety guidelines, such as by containing harmful, abusive, illegal, or restricted content, the system may refuse to generate a response. The refusal message shown is consistent with safety filtering behavior.

Generative AI systems include moderation layers that evaluate prompts before generating output. If the prompt is classified as unsafe or non-compliant with policy, Copilot blocks the request and encourages the user to try a different topic.

A vague prompt typically results in a generic or clarifying response rather than a refusal. There is no fixed limit of five requests per prompt. Exceeding the context window usually results in truncation or processing errors, not a safety-based refusal message.

Therefore, the most likely cause of the response is that the prompt contains language that violates safety guidelines.



Question 5

You ask Microsoft 365 Copilot to create a report based on information from the web. You verify the response and discover that some information is fictional.

What is this an example of?



Answer : B

This scenario is an example of fabrication, which is commonly referred to in generative AI contexts as a hallucination. Fabrication occurs when an AI system generates information that appears credible but is factually incorrect, invented, or unsupported by verifiable sources.

According to Microsoft AI Business Professional guidance, large language models predict text based on patterns learned during training. They do not "know" facts in the human sense. As a result, when asked to generate reports using web-based information, the model may produce plausible-sounding but fictional details if sufficient grounding or reliable sources are not provided.

Deepfake refers specifically to synthetic media such as manipulated images, audio, or video. Overreliance describes a human behavior risk where users trust AI outputs without verification. Prompt injection is a malicious technique designed to manipulate model behavior. Bias refers to systematic unfairness in outputs.

In this case, the presence of fictional information in the generated report directly aligns with fabrication, making option B the correct answer.


Question 6

You are creating a custom analytics agent in the Microsoft 365 Copilot app. The agent will use Microsoft Excel files that contain sales data as knowledge.

You need to ensure that the agent can create visualizations, perform mathematical operations, create aggregations, and analyze the data in the files.

What should you add to the agent?



Answer : A

When building a custom analytics agent in Microsoft 365 Copilot that must process structured data from Excel files, advanced analytical capabilities are required. According to Microsoft AI Business Professional guidance, tasks such as performing mathematical calculations, generating aggregations, creating charts, and conducting structured data analysis require programmatic execution capabilities rather than simple text generation.

A code interpreter enables the agent to run Python-based analytical operations in a secure execution environment. This allows the agent to manipulate datasets, compute totals and averages, perform grouping and filtering, and generate visualizations such as bar charts or line graphs based on the Excel data. The interpreter bridges the gap between natural language instructions and executable analytical logic.
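For illustration, the kind of Python a code interpreter could run over sales data might look like the following sketch. The column names ("Region", "Units", "Price") and file handling are hypothetical, not taken from the exam scenario.

```python
# Hypothetical sketch of analysis a code interpreter could run over
# Excel sales data. In a real agent session the data would come from
# pd.read_excel("sales.xlsx"); an inline frame keeps this self-contained.
import pandas as pd

sales = pd.DataFrame({
    "Region": ["East", "West", "East", "West"],
    "Units":  [10, 4, 6, 8],
    "Price":  [2.5, 3.0, 2.5, 3.0],
})

# Mathematical operation: compute revenue per row.
sales["Revenue"] = sales["Units"] * sales["Price"]

# Aggregation: total revenue grouped by region.
by_region = sales.groupby("Region")["Revenue"].sum()
print(by_region)

# Visualization: a bar chart of the aggregated totals
# (requires matplotlib in the execution environment).
try:
    import matplotlib
    matplotlib.use("Agg")  # non-interactive backend for headless runs
    ax = by_region.plot.bar(title="Revenue by region")
    ax.figure.savefig("revenue_by_region.png")
except ImportError:
    pass  # plotting is optional for this sketch
```

This mirrors the three capability classes the question names: row-level math, grouped aggregation, and chart generation, none of which plain text generation can perform.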

An image generator is designed for creative visual content and is unrelated to structured data analytics. Suggested prompts and templates improve usability and consistency but do not provide computational or visualization capabilities.

Therefore, to enable mathematical operations, aggregation, data analysis, and visualization of Excel sales data, the correct component to add to the agent is a code interpreter.


Question 7

You use Microsoft 365 Copilot.

You regularly upload the same five files to Copilot chats.

You need to simplify referencing the files in the chats.

What are two ways to achieve the goal?



Answer : B, D

Microsoft 365 Copilot provides structured ways to persist and reuse knowledge sources to avoid repeatedly uploading the same files.

Creating a notebook that references the five files (Option B) allows those documents to remain attached within a persistent workspace. Copilot can then consistently ground responses in those files without re-uploading them for every conversation.

Creating an agent with a defined knowledge source (Option D) also provides a reusable solution. By configuring the five files as part of the agent's knowledge base, the agent can automatically reference them in future interactions.

Saving a prompt does not persist file attachments. Fine-tuning models is not part of standard Microsoft 365 Copilot user workflows. Uploading a ZIP file does not improve reference management and may reduce accessibility of individual documents.

Therefore, the correct solutions are to create a notebook that references the files and to create an agent with a knowledge source.

