WGU Practical Applications of Prompt Engineering (QFO1) Exam Questions

Page: 1 / 14
Total 50 questions
Question 1

A person is using generative AI to create a social media post. Why is it important to write an effective prompt?



Answer : C

Writing an effective prompt is essential because it provides the logical framework the AI needs to process a request; above all, it prevents nonsensical output. Generative AI models are statistical engines that predict the next most likely token (a word or word fragment). Without a clear, well-structured prompt that includes instructions and context, the model can easily lose the 'thread' of logic, leading to 'hallucinations' or sequences of text that are grammatically correct but logically incoherent or irrelevant to the user's goal.

In the context of social media, where brevity and impact are key, an ineffective prompt might result in a post that uses the wrong hashtags, misses the brand voice, or includes bizarre metaphors that don't make sense to the audience. While no prompt can 'ensure' a post will be well-received by humans (Option B) or guarantee absolute originality (Option D), a structured prompt guides the AI to stay within the bounds of human logic. By providing specific constraints (e.g., 'Write a 20-word caption about coffee in a joyful tone'), the user ensures the output is a sensible, usable piece of content rather than a random string of related words.
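The constrained caption prompt described above (length, topic, tone) can be sketched as a small template function. This is a minimal illustration, not any specific product's API; the function name and fields are hypothetical.

```python
# Hypothetical sketch: composing a constrained social-media prompt.
# Names are illustrative; no specific model API is assumed.
def build_caption_prompt(topic: str, tone: str, word_limit: int) -> str:
    """Combine the instruction with explicit constraints in one prompt."""
    return (
        f"Write a {word_limit}-word social media caption about {topic} "
        f"in a {tone} tone. Use plain language and avoid jargon."
    )

prompt = build_caption_prompt("coffee", "joyful", 20)
```

Keeping the constraints inside the prompt text itself is what steers the model toward a usable, on-brand caption instead of a loosely related ramble.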


Question 2

What is the principle of ethics that is ensured by explaining AI system decision-making to stakeholders and users?



Answer : A

Transparency in AI ethics refers to the degree to which an AI system's internal logic, data sources, and decision-making processes are visible and understandable to humans. It is the direct antidote to the 'Black Box' problem. When an AI system provides a recommendation, the principle of transparency ensures that stakeholders (such as regulators, developers, and end-users) can understand the 'why' behind the output. This is often achieved through 'Explainable AI' (XAI) techniques.

In practical prompt engineering, transparency is optimized by instructing the model to provide its reasoning. For example, using 'Chain of Thought' prompting forces the AI to list the steps it took to arrive at a conclusion. This makes the interaction transparent because the user can see if the AI relied on faulty logic or biased data. Transparency builds trust; if a user understands how an AI reached a conclusion, they are more likely to adopt the technology. Furthermore, transparency is a prerequisite for other ethical principles like Fairness and Accountability, as you cannot fix a bias or hold a system accountable if you cannot see how it functions internally.
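The 'Chain of Thought' technique mentioned above can be sketched as a wrapper that appends a reasoning instruction to any task. The wording and function name are illustrative assumptions, not a standard API.

```python
# Minimal chain-of-thought sketch: ask the model to expose its reasoning.
# The exact phrasing is an illustrative assumption.
def with_reasoning(task: str) -> str:
    """Append an instruction that asks the model to show its steps."""
    return (
        f"{task}\n\n"
        "Think step by step. List each step of your reasoning, "
        "then state your final answer on a separate line."
    )

prompt = with_reasoning(
    "A loan of $1,200 is repaid in 12 equal monthly payments. "
    "How much is each payment?"
)
```

Because the response now includes the intermediate steps, a reviewer can inspect whether the model relied on faulty logic, which is exactly the transparency benefit described above.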


Question 3

There have been complaints that deepfake videos on a social media platform are being circulated that show public figures making false statements. Which area of ethical concern does this situation demonstrate?



Answer : A

The rise of deepfakes---AI-generated synthetic media that convincingly depict people saying or doing things they never did---falls squarely under the ethical concern of Misinformation and manipulation. This represents a significant challenge to the 'Information Integrity' of digital platforms. By creating realistic but false content, generative AI can be used to influence elections, damage reputations, or incite social unrest.

This ethical concern highlights the 'dual-use' nature of AI. While the same technology can be used for harmless entertainment or high-end film production, in the hands of bad actors, it becomes a tool for 'cognitive hacking.' Prompt engineering optimization in this context involves developing guardrails within AI models to prevent the generation of content involving public figures or non-consensual imagery. It also involves the use of AI to detect deepfakes by identifying microscopic inconsistencies in pixels or heart-rate signatures that are invisible to the human eye. Addressing misinformation requires a combination of technical watermarking, robust platform policies, and user education to ensure that the boundary between reality and AI-generated fiction remains clear.


Question 4

An AI model was trained on historical loan data. A loan officer has noticed that the model disproportionately suggests refusing loans to people who live in a particular area. What is the type of bias described in the scenario?



Answer : D

The scenario describes Algorithmic bias, which occurs when an AI system reflects and potentially amplifies the prejudices or inequalities present in the historical data it was trained on. In this case, if historical lending practices were discriminatory toward specific neighborhoods (a practice known as 'redlining'), the AI model treats the resulting 'denial' patterns as a mathematical rule. It learns that living in a certain zip code is a predictor of loan failure, even if the individual applicants are creditworthy.

This is a major ethical concern in prompt engineering and AI deployment because the 'bias' is not a glitch in the code, but a reflection of systemic human bias encoded into the model's logic. It differs from 'Sampling bias' (which would occur if the model only looked at one city) or 'Measurement bias' (which involves faulty sensors). Algorithmic bias is particularly insidious because it can give discriminatory decisions a 'veneer of objectivity,' making it harder for human operators to spot the unfairness. Addressing this requires rigorous data auditing and the use of 'fairness constraints' to ensure that the AI does not penalize individuals based on protected characteristics or proxy variables like geography.


Question 5

A user is crafting a prompt and includes both the goal and the context within the text of the prompt. What is a benefit of crafting the prompt in this way?



Answer : A

Combining a clear goal with rich context is the gold standard for achieving greater interaction effectiveness. The goal tells the AI what to achieve (the destination), while the context explains the circumstances surrounding the task (the map). When these two elements are present, the AI can generate a response that is not only factually correct but also relevant to the user's specific situation. Effectiveness in AI interactions is measured by how closely the output meets the user's needs on the first try.

When a prompt lacks a goal, the AI might provide a great summary of a topic but fail to perform the required action. When it lacks context, it might perform the action in a way that is inappropriate for the audience. By merging them, the user minimizes 'drift'---the tendency for AI to wander into irrelevant topics. This leads to a more professional, tailored, and high-quality interaction. In practical scenarios, such as drafting a corporate policy or creating a marketing strategy, the synergy between goal and context ensures that the AI understands the 'big picture,' resulting in a much more effective and usable first draft.
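The goal-plus-context pairing described above can be sketched as a simple two-field template. This is a hedged illustration; the function and field labels are invented for clarity, not drawn from any framework.

```python
# Illustrative goal + context prompt template (names are assumptions).
def goal_context_prompt(goal: str, context: str) -> str:
    """Merge the destination (goal) with the map (context) in one prompt."""
    return (
        f"Context: {context}\n"
        f"Goal: {goal}\n"
        "Respond with a draft that satisfies the goal and fits the context."
    )

prompt = goal_context_prompt(
    goal="Draft a one-paragraph remote-work policy announcement.",
    context="Audience: a 50-person software company; tone: friendly but formal.",
)
```

Stating both fields explicitly reduces the 'drift' described above: the model knows what to produce and for whom on the first attempt.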


Question 6

A person wants to use AI to make a technical document easier to comprehend. Which prompt engineering solution is most effective to achieve this goal?



Answer : D

The most effective way to optimize AI for clarity and comprehension is to include reading-level limitations. While 'summarizing' (Option B) shortens the text, it doesn't necessarily make the remaining language simpler. However, specifying a 'tenth-grade reading level' (or 'Explain it like I'm five') provides the AI with a very specific linguistic constraint. It forces the model to swap complex jargon for common synonyms, use shorter sentence structures, and avoid passive voice.

This technique is a form of Output Constraint. Reading levels are well-defined metrics that AI models can emulate because they have been trained on vast amounts of graded educational material. By setting this boundary, the user ensures the output is accessible to a broader audience without losing the core technical meaning. In practical professional settings---such as translating a medical white paper for a patient or a legal contract for a small business owner---this type of prompting is essential. It transforms dense, 'impenetrable' text into actionable information, demonstrating how specific constraints can be used to reformat and simplify complex data sets effectively.
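The reading-level constraint described above can be sketched as a rewrite prompt. The grade-level phrasing is one common convention; the function name is a hypothetical placeholder.

```python
# Sketch of a reading-level output constraint (illustrative wording).
def simplify_prompt(document: str, grade_level: int) -> str:
    """Ask the model to rewrite text at a specified reading level."""
    return (
        f"Rewrite the following text at a grade-{grade_level} reading level. "
        "Replace jargon with common words, use short sentences, and keep "
        "the technical meaning intact.\n\n"
        f"Text:\n{document}"
    )

prompt = simplify_prompt(
    "Mitochondria perform oxidative phosphorylation to synthesize ATP.", 10
)
```

The constraint works because graded reading levels are well-represented in training data, so the model has a concrete target style to emulate.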


Question 7

A team of historians wants to use AI-based tools to aid in the research of the history of Europe's agricultural equipment. What is the importance of writing effective prompts in the research?



Answer : C

In academic and historical research, the sheer volume of available data can easily lead to 'scope creep' or tangential exploration. Writing effective prompts is crucial because it ensures that researchers remain focused on their specific inquiry. When dealing with a broad subject like 'Europe's agricultural equipment,' an unstructured prompt might return a generalized history of farming. However, an effective prompt---specifying the region (e.g., Western Europe), the era (e.g., the Industrial Revolution), and the specific type of equipment (e.g., steam-powered threshing machines)---acts as a navigational guide for the AI.

This focus is essential for maintaining the integrity of the research process. It prevents the AI from generating irrelevant 'filler' content and forces the output to adhere to the specific historical parameters defined by the team. While AI can assist in synthesizing information, it cannot determine the 'importance' of research (which is a human value judgment) nor should it replace the need for multiple sources (as verification is still required). By refining the prompt to include specific constraints and objectives, historians can use AI as a precision tool to uncover specific data points and trends, ensuring that the resulting analysis stays aligned with the original research goals.
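The scoped research prompt described above (region, era, equipment type) can be sketched as a template that makes each constraint explicit. This is a minimal sketch; the field names and closing instruction are illustrative assumptions.

```python
# Sketch of a scope-constrained research prompt (illustrative fields).
def research_prompt(region: str, era: str, equipment: str) -> str:
    """Narrow a broad historical topic with explicit scope constraints."""
    return (
        "You are assisting historical research on European agricultural "
        "equipment. Limit your answer to the constraints below.\n"
        f"Region: {region}\n"
        f"Era: {era}\n"
        f"Equipment type: {equipment}\n"
        "Note which claims a historian should verify against primary sources."
    )

prompt = research_prompt(
    "Western Europe", "Industrial Revolution", "steam-powered threshing machines"
)
```

Each explicit field acts as the 'navigational guide' described above, keeping the output inside the team's defined historical parameters.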

