WGU QFO1 Practical Applications of Prompt Exam Questions

Total 50 questions
Question 1

Which prompting technique involves using information generated by an initial prompt to guide the AI's response to a second prompt?



Answer: A

The Generated Knowledge technique is a two-step optimization process. In the first step, the user asks the AI to generate a set of relevant facts, rules, or background information about a topic. In the second step, this newly 'generated knowledge' is incorporated into a follow-up prompt to improve the accuracy of the final answer. This is particularly useful when the AI needs to perform a task that requires specific domain expertise that might not be immediately 'top-of-mind' for the model.

For example, if you want the AI to write a medical summary, you might first ask it to 'List the current guidelines for treating hypertension' (Generated Knowledge). Then, you use that list in a second prompt: 'Based on these guidelines, evaluate this patient's case.' This technique prevents the AI from relying purely on its general training data and instead forces it to use a 'grounded' set of facts as a reference point. It is a powerful way to reduce hallucinations because the model is essentially building its own 'contextual library' before attempting the main task. This sequential approach grounds the final output in explicitly stated facts rather than leaving the model to rely solely on its implicit, probabilistic recall.
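As a minimal sketch of this two-step pattern (not tied to any particular provider), the `call_llm` helper below is a hypothetical placeholder for whatever text-generation API is actually in use; the key point is that the first response is pasted verbatim into the second prompt:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real text-generation API call.
    Replace the body with a call to your model provider of choice."""
    return f"[model response to: {prompt[:40]}...]"

# Step 1: ask the model to generate the background knowledge first.
knowledge = call_llm(
    "List the current clinical guidelines for treating hypertension "
    "as concise bullet points."
)

# Step 2: feed that generated knowledge back in as explicit context
# for the real task, so the answer is grounded in the listed facts.
answer = call_llm(
    "Using only the guidelines below, evaluate this patient's case.\n\n"
    f"Guidelines:\n{knowledge}\n\n"
    "Patient case: <case details go here>"
)
print(answer)
```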


Question 2

A user uses an AI model to predict weather patterns. However, the model consistently predicts temperatures that are off by about five degrees. Which form of bias is associated with this phenomenon?



Answer: B

The phenomenon where an AI consistently produces results that deviate from the truth by a specific margin (in this case, five degrees) is known as Measurement bias. This typically occurs when the data used to train the model was collected using faulty, poorly calibrated, or inconsistent tools. If the thermometers used to gather the historical weather data were all consistently off by five degrees, the AI will learn and replicate that systemic error as if it were a factual pattern.

Unlike 'Sampling bias' (which involves who or what is included in the data) or 'Confirmation bias' (which involves the user seeking data that fits their beliefs), Measurement bias is a technical flaw in the data collection phase. It is particularly dangerous because the model may appear to be 'consistent' and 'reliable,' but it is actually consistently wrong. In the field of AI ethics and data integrity, identifying measurement bias is crucial because it requires the user to go back to the source sensors or the data entry process to find the 'skew.' Correcting this bias isn't a matter of changing the prompt, but rather of re-calibrating the training data to ensure it accurately reflects the real-world environment it is meant to predict.
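A minimal sketch, using made-up numbers, of how a consistent skew can be distinguished from random noise: if predictions are compared against readings from an independently calibrated reference sensor and the mean error is large while its spread is small, the fault lies in the measurement process behind the training data rather than in the prompt.

```python
# Illustrative values only: model predictions vs. readings from a calibrated sensor.
predictions  = [72.1, 65.3, 80.4, 58.9]   # degrees, as predicted by the model
ground_truth = [67.0, 60.5, 75.2, 54.0]   # degrees, independently verified

errors = [p - t for p, t in zip(predictions, ground_truth)]
mean_error = sum(errors) / len(errors)    # ~5.0: a consistent offset
spread = max(errors) - min(errors)        # ~0.4: very little random variation

# A large, stable mean error points to measurement bias in the source data,
# which must be re-calibrated; rewording the prompt will not remove the skew.
print(f"mean error: {mean_error:.1f} degrees, spread: {spread:.1f} degrees")
```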


Question 3

A company released a new sports watch, and an advertiser wants to use generative AI to help produce a text-based advertisement for the watch that explains its features. Which prompt engineering solution is most likely to achieve this goal?



Answer: A

To achieve a high-quality, accurate advertisement, the most effective solution is to give a list of features that should be highlighted. In prompt engineering, this is known as providing 'input data' or 'grounding.' Without a specific list of features, the AI will likely 'hallucinate' capabilities for the sports watch---such as a 100-day battery life or a built-in laser---that the product does not actually possess.

By providing a concrete list (e.g., 'GPS tracking, heart rate monitor, 50m water resistance, and sapphire glass'), the user provides the AI with the raw materials needed to construct the ad. This shifts the AI's role from 'fictional writer' to 'creative editor.' The model can then focus on persuasive language and structural formatting rather than inventing technical specifications. This is the standard professional approach for marketing teams: use the prompt to establish the 'facts' and let the AI handle the 'flair.' It ensures the resulting text is both creative and factually grounded, which is the primary requirement for any commercial advertisement.
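A minimal sketch of this grounding approach, again using a hypothetical `call_llm` placeholder and an invented feature list:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real text-generation API call."""
    return f"[model response to: {prompt[:40]}...]"

# The feature list supplies the 'facts'; the instructions supply the 'flair'.
features = [
    "GPS tracking",
    "heart rate monitor",
    "50m water resistance",
    "sapphire glass",
]

prompt = (
    "Write a short, upbeat advertisement for a new sports watch.\n"
    "Mention only the following features and do not invent any others:\n- "
    + "\n- ".join(features)
)
print(call_llm(prompt))
```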


Question 4

Which principle of ethics is ensured by creating mechanisms to assign responsibility for AI actions and decisions?



Answer: A

The principle of Accountability is centered on the requirement that there must be an identifiable person or entity responsible for the outcomes of an AI system's actions. As AI systems become more autonomous, the 'responsibility gap' becomes a significant ethical risk. Establishing accountability means creating clear frameworks---legal, organizational, and technical---to ensure that when an AI makes a mistake (such as an incorrect medical diagnosis or a biased financial decision), there is a mechanism for recourse, explanation, and correction.

In the context of prompt engineering, accountability is often managed through 'human-in-the-loop' systems. This ensures that while the AI may generate the initial draft or decision-making logic, a human remains the ultimate authority who 'signs off' on the result. Accountability also involves 'Auditability'---the ability for third parties to review the AI's logs and decision-making history. Without accountability, AI deployment can lead to 'organized irresponsibility,' where no one takes ownership of systemic failures. By embedding accountability into the lifecycle of an AI project, organizations protect themselves and their users, ensuring that the technology serves as a tool for human progress rather than an unchecked black box.
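One possible shape for such a human-in-the-loop checkpoint, sketched with hypothetical helpers (`call_llm` as before) and an append-only JSON-lines audit log:

```python
import json
from datetime import datetime, timezone

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real text-generation API call."""
    return f"[draft generated for: {prompt[:40]}...]"

def reviewed_generation(prompt: str, reviewer: str) -> str:
    """Generate a draft, require explicit human sign-off, and record who approved it."""
    draft = call_llm(prompt)
    decision = input(f"{reviewer}, approve this draft? [y/n]\n{draft}\n> ")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,                  # the accountable human, not the model
        "prompt": prompt,
        "draft": draft,
        "approved": decision.strip().lower() == "y",
    }
    with open("audit_log.jsonl", "a") as log:  # append-only trail for later audits
        log.write(json.dumps(record) + "\n")
    return draft if record["approved"] else "Rejected: returned to a human author."
```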


Question 5

A lawyer needs to interact with a database to search for cases relating to college admissions. What is a benefit of writing effective prompts when interacting with the database?



Answer: C

For professionals dealing with vast amounts of specialized information, such as lawyers, the primary benefit of effective prompt engineering is that it prevents sifting through irrelevant results. Legal databases are massive, containing millions of precedents, statutes, and opinions. A vague prompt like 'Find cases about schools' would return thousands of results, most of which would be useless for a specific case regarding college admissions.

By using specific keywords, Boolean logic, and contextual constraints within the prompt (e.g., 'Search for U.S. Supreme Court cases from 2000--2023 specifically addressing affirmative action in private university undergraduate admissions'), the lawyer drastically narrows the search field. This precision is the essence of effective prompting in a professional environment. It saves significant time and cognitive energy by ensuring that the AI or search algorithm acts as a high-resolution filter. This 'signal-to-noise' optimization allows the professional to focus on the high-value task of legal analysis rather than the low-value task of manual data sorting. Effective prompts turn a mountain of data into a curated list of relevant evidence.
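A small, hypothetical illustration of how those constraints might be assembled into a single search prompt rather than typed ad hoc; every field and value here is invented for the example:

```python
def build_search_prompt(topic, court, years, required_terms):
    """Assemble a narrowly scoped legal-search prompt instead of a vague one-liner."""
    return (
        f"Search for {court} cases decided between {years[0]} and {years[1]} "
        f"that specifically address {topic}. "
        f"Only include opinions mentioning: {' AND '.join(required_terms)}. "
        "For each case, return the name, year, and a one-sentence holding."
    )

prompt = build_search_prompt(
    topic="affirmative action in private university undergraduate admissions",
    court="U.S. Supreme Court",
    years=(2000, 2023),
    required_terms=["college admissions", "equal protection"],
)
print(prompt)
```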


Question 6

Which factor should be considered when writing generative AI prompts?



Answer: C

When engineering a prompt, determining the 'Scope' is vital for achieving a high-quality response. Scope refers to the boundaries and breadth of the request. A prompt with a scope that is too broad (e.g., 'Tell me everything about history') will result in a superficial, overly generalized, and likely unhelpful response. Conversely, a prompt with a scope that is too narrow might exclude necessary context.

Effective prompt engineering involves 'right-sizing' the scope to match the user's specific needs. This includes defining the timeframe, the specific sub-topics to be covered, and the level of detail required. By managing the scope, the user prevents the AI from 'hallucinating' or filling in gaps with irrelevant information. It also helps manage the model's token limit and ensures that the most important information is prioritized in the output. While factors like uniqueness or location might be relevant in very specific niche cases, 'Scope' is a universal pillar of prompt construction. It ensures that the AI stays focused on the task at hand, delivering a concentrated and accurate response that fits within the user's practical requirements.
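As an illustrative sketch, scope can be made explicit by filling in a few fields (topic, timeframe, sub-topics, level of detail) before the prompt is assembled; the example values are invented:

```python
# Invented example values; the point is that every scope boundary is stated explicitly.
scope = {
    "topic": "the adoption of wearable fitness technology",
    "timeframe": "2015-2024",
    "subtopics": ["heart rate tracking", "sleep monitoring", "data privacy"],
    "detail": "about 300 words for a general business audience",
}

prompt = (
    f"Summarize {scope['topic']} between {scope['timeframe']}. "
    f"Cover only these sub-topics: {', '.join(scope['subtopics'])}. "
    f"Length and audience: {scope['detail']}."
)
print(prompt)
```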


Question 7

What is a risk associated with failing to include a goal when writing a prompt?



Answer: C

Failing to include a clear goal creates a significant risk of receiving inaccurate responses. In the context of AI, 'inaccuracy' doesn't just mean a factual error; it also refers to an output that is 'off-target' for the user's intent. Without a goal (the specific outcome the user wants to achieve), the AI is forced to make assumptions about what the user wants. These assumptions are often based on the most common patterns in its training data, which may not align with the user's actual needs.

For example, if a user provides context about a product but doesn't state the goal (e.g., 'Write a product description,' 'Critique this product,' or 'Compare this product to X'), the AI might simply summarize the text provided. This response is 'inaccurate' because it fails to fulfill the user's unspoken requirement. This lack of direction leads to a 'hallucination of intent,' where the AI provides a response that is technically coherent but practically useless. Clearly defining the goal is the most effective way to anchor the AI's logic, ensuring that the generated content is accurate in terms of both facts and function.
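A brief, hypothetical contrast of the same context with and without a stated goal (the product name and details are invented):

```python
product_context = (
    "The Alto X2 is a lightweight sports watch with GPS tracking, "
    "a heart rate monitor, and 50m water resistance."
)

# Without a goal, the model must guess the intent and will likely just summarize.
vague_prompt = product_context

# Stating the goal anchors the output to the outcome the user actually wants.
goal_prompt = (
    f"{product_context}\n\n"
    "Goal: write a three-sentence product description for an online store listing, "
    "ending with a call to action."
)
print(goal_prompt)
```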

