PMI Certified Professional in Managing AI PMI-CPMAI Exam Practice Test

Page: 1 / 14
Total 102 questions
Question 1

A company needs to launch an AI application quickly to be the first to the market. The project team has decided to use pretrained models for their current AI project iteration.

What is a key result of leveraging pretrained models?



Answer : A

Within PMI-CPMAI, one of the key strategic levers for AI projects is reusing existing AI assets, including pretrained models, to accelerate delivery and reduce initial development complexity. PMI describes pretrained and foundation models as allowing organizations to "leverage previously learned representations so that teams can focus effort on adaptation, integration, and value realization rather than building models from scratch." This often results in a shorter experimentation cycle, reduced training time, and faster deployment, especially when speed-to-market is a primary objective.

PMI emphasizes that such reuse is particularly valuable in early iterations or minimum viable products (MVPs), where the aim is to "deliver functional AI capability quickly, validate value hypotheses, and gather user feedback." While the team still needs to handle integration, fine-tuning, and risk controls, the heavy lifting of initial training on massive datasets has already been done by the pretrained model provider. This is contrasted with full custom model development, which PMI characterizes as more resource-intensive and time-consuming, requiring substantial data preparation, training, and optimization. Potential challenges such as compatibility or scalability must be managed, but they are not the primary effect identified by PMI. The most central and intended result of using pretrained models in this context is that the overall project timeline is reduced, enabling the company to reach the market faster.
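The contrast drawn above, adaptation versus building from scratch, can be sketched in miniature. Everything below is illustrative and not from PMI material: the "pretrained" feature extractor is a toy stand-in for a real foundation model's frozen representation, and only the small adaptation head is trained.

```python
# Sketch: adapt a frozen "pretrained" feature extractor instead of
# training end to end. The extractor is a hypothetical stand-in for a
# real pretrained model's learned representation.

def pretrained_features(text: str) -> list[float]:
    """Frozen, reused representation (toy stand-in, never retrained)."""
    return [len(text) / 100.0, text.count(" ") / 10.0, text.count("!") / 5.0]

class LightweightHead:
    """Only this small layer is trained during adaptation."""
    def __init__(self, n_features: int):
        self.weights = [0.0] * n_features
        self.bias = 0.0

    def predict(self, features: list[float]) -> float:
        return self.bias + sum(w * f for w, f in zip(self.weights, features))

    def fit(self, samples, labels, lr=0.1, epochs=50):
        # Simple SGD on a linear head: cheap compared to full training.
        for _ in range(epochs):
            for feats, y in zip(samples, labels):
                err = y - self.predict(feats)
                self.bias += lr * err
                self.weights = [w + lr * err * f
                                for w, f in zip(self.weights, feats)]

# Usage: the extractor is reused as-is; only the head is fit.
texts = ["great product!!!", "ok", "terrible!!", "fine"]
labels = [1.0, 0.0, 1.0, 0.0]   # toy target: "emphatic" vs. not
feats = [pretrained_features(t) for t in texts]
head = LightweightHead(n_features=3)
head.fit(feats, labels)
```

The design point is the split itself: the expensive representation is inherited, so the team's effort goes into the thin adaptation layer and integration, which is what shortens the timeline.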


Question 2

A healthcare provider plans to deploy an AI system to predict patient readmissions. The project manager needs to conduct a risk assessment to ensure patient safety and data integrity.

What is an effective method to help ensure the AI system adheres to ethical standards?



Answer : A

According to the PMI Certified Professional in Managing AI (PMI-CPMAI) framework, ensuring that an AI system adheres to ethical standards, particularly in high-risk domains such as healthcare, requires establishing mechanisms that promote transparency, accountability, fairness, and human interpretability. PMI-CPMAI highlights that one of the most effective methods to accomplish this is the use of an explainability framework.

PMI's Responsible AI guidance states that "ethical assurance requires that stakeholders can understand how an AI model arrives at its decisions, especially when outcomes impact human safety or well-being." Explainability frameworks provide clear, interpretable insights into model reasoning, feature importance, and decision pathways. This transparency supports multiple ethical principles:

* fairness (by identifying potential biases),

* accountability (by documenting the basis of predictions),

* trustworthiness (by enabling clinicians to validate or override predictions), and

* patient safety (by ensuring decisions are understandable and clinically appropriate).

PMI-CPMAI emphasizes that explainability is especially critical in healthcare because medical decisions must be defensible, reviewable, and aligned with clinical judgment. The guidance states: "Opaque AI systems pose elevated ethical risk in regulated environments; explainable AI reduces this risk by enabling practitioners to interrogate and validate model outputs."

While the other options support overall risk management, they do not directly ensure ethical adherence:

* B. Stakeholder impact analysis identifies affected parties but does not ensure ethical behavior.

* C. Continuous monitoring supports safety and performance but does not inherently make decisions explainable.

* D. Data encryption protects confidentiality but does not address ethical reasoning or fairness.

Thus, the method most directly aligned with ensuring ethical standards during risk assessment is A. Using an explainability framework.
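One widely used model-agnostic technique that an explainability framework might include is permutation feature importance: a feature matters if shuffling its values degrades accuracy. The sketch below is illustrative only; the rule-based "model," the records, and the field names are all invented stand-ins for a real readmission-risk classifier.

```python
import random

# Sketch: permutation importance, a model-agnostic explainability check.
# Shuffle one feature across rows; the accuracy drop estimates how much
# the model actually relies on that feature.

def toy_model(row: dict) -> int:
    """Toy stand-in: predict readmission (1) if prior admissions are high."""
    return 1 if row["prior_admissions"] >= 2 else 0

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    permuted = [{**r, feature: v} for r, v in zip(rows, shuffled)]
    return accuracy(model, rows, labels) - accuracy(model, permuted, labels)

rows = [
    {"prior_admissions": 0, "age": 40},
    {"prior_admissions": 3, "age": 65},
    {"prior_admissions": 1, "age": 70},
    {"prior_admissions": 4, "age": 55},
]
labels = [0, 1, 0, 1]  # toy ground truth matching the rule

prior_imp = permutation_importance(toy_model, rows, labels, "prior_admissions")
age_imp = permutation_importance(toy_model, rows, labels, "age")  # 0.0: unused feature
```

Reports like this let clinicians see which inputs drive a prediction, which is exactly the fairness and accountability check the explanation above describes: a feature the model ignores (here, age) shows zero importance.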


Question 3

A development team is tasked with creating an AI system to assist physicians with diagnosing medical conditions. They encountered cases where symptoms do not always lead to well-defined diagnoses.

Which approach should the project manager integrate to handle the inherent uncertainty?



Answer : A

For AI systems supporting high-stakes medical decisions, PMI-CPMAI and responsible AI guidance emphasize human-in-the-loop oversight as the primary way to manage inherent uncertainty and risk. In clinical diagnosis, symptoms are often ambiguous, overlapping across multiple conditions, and influenced by patient history and context. No matter how advanced the model, there will be edge cases, rare diseases, and conflicting signals.

Rather than attempting to eliminate uncertainty purely through more complex models, more input variables, or ever-growing rule sets, best practice is to design the AI as a decision-support tool, not an autonomous decision-maker. That means physicians retain ultimate responsibility, reviewing AI suggestions, overriding them when clinically necessary, and using their expertise to weigh patient-specific factors the model may not capture.

Human-in-the-loop design also supports explainability and trust: clinicians can question outputs, cross-check with other evidence, and provide feedback that can be used later for model improvement. CPMAI's lifecycle framing for regulated and safety-critical domains is clear: when outcomes materially affect health or life, the appropriate way to handle uncertainty is to keep a human in the loop for all decision-making, which aligns directly with option A.
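The human-in-the-loop pattern described above can be sketched as a triage step in which the model only suggests and a clinician always decides. All names, the confidence threshold, and the diagnoses below are hypothetical, not clinical guidance.

```python
# Sketch: route every AI suggestion through a human review step.
# The model only *suggests*; a clinician decision is always required.

REVIEW_THRESHOLD = 0.85  # illustrative: below this, flag for extra scrutiny

def triage(prediction: str, confidence: float) -> dict:
    """Package a model output as a suggestion awaiting human sign-off."""
    return {
        "suggestion": prediction,
        "confidence": confidence,
        "needs_close_review": confidence < REVIEW_THRESHOLD,
        "final_decision": None,  # set only by the physician, never the model
    }

def physician_signoff(case: dict, accepted: bool, physician_dx: str) -> dict:
    """Human-in-the-loop: the clinician accepts the suggestion or overrides it."""
    case["final_decision"] = case["suggestion"] if accepted else physician_dx
    return case

# Ambiguous symptoms -> low confidence -> flagged, and the physician overrides.
case = triage("viral pneumonia", confidence=0.62)
case = physician_signoff(case, accepted=False, physician_dx="bacterial pneumonia")
```

The key structural choice is that `final_decision` can only be written by the sign-off step, so uncertainty is absorbed by clinical judgment rather than automated away.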


Question 4

An AI project team has identified a gap in their data knowledge and experience. They need to address this issue in order to proceed with their AI implementation.

What is the most effective solution?



Answer : D

Within PMI-CPMAI guidance on AI readiness and capability enablement, a clearly identified gap in data knowledge and experience is treated as a critical skills and competency risk. The framework emphasizes that AI projects are highly dependent on data literacy: an understanding of data sources, structure, quality, and regulatory constraints. When such gaps exist, PMI-consistent practice is to bring in specialized expertise to both support the current initiative and uplift the organization's internal capabilities.

Hiring an external data consultant provides immediate access to deep data expertise, including data modeling, governance, privacy, and AI-specific data requirements. This expert can perform targeted assessments, help define data strategies, guide data preparation, and deliver focused training or coaching to the project team. PMI-CPMAI stresses that leveraging external SMEs is often the most effective way to de-risk complex AI implementations when internal skills are insufficient, especially in early stages or high-stakes domains.

Options such as deploying abstract "frameworks" or "protocols" do not, by themselves, close a human expertise gap. A comprehensive internal data immersion program may be useful long-term, but it first requires guidance on what to learn and how to structure that learning. Therefore, the most effective and actionable solution to proceed with implementation is hiring an external data consultant to provide targeted guidance and training.


Question 5

An AI project team needs to consider compliance with data regulations and explainability standards as requirements for a new AI solution.

At what point in the project should the requirements be approached?



Answer : B

In PMI-CPMAI-aligned practice, compliance requirements such as data protection regulations (e.g., privacy laws, data residency) and explainability standards are treated as business and regulatory constraints, not as late technical details. They must therefore be identified and incorporated during the business understanding phase. At this stage, the project manager and stakeholders clarify the problem statement, success criteria, risk appetite, and constraints under which the AI solution must operate. That includes explicitly stating: which regulations apply, what level of transparency or explainability is required, which stakeholders must be able to understand model outputs, and which decisions must remain under human control.

By capturing these requirements early, they directly influence the choice of AI pattern, model families, data sources, architecture, and governance mechanisms. If these constraints are postponed until data preparation or final testing, the team risks discovering that the chosen models are too opaque, the data cannot legally be used as collected, or additional documentation and controls are needed that fundamentally change scope and timeline. CPMAI stresses that responsible AI and regulatory compliance are "built in from the beginning," so the correct point to approach these requirements is the business understanding phase.


Question 6

A project team is working on an AI project that requires strict adherence to data privacy regulations. The team is in the initial stages of data collection and aggregation.

Which task will help to ensure regulatory compliance?



Answer : A

In the PMI-CPMAI perspective on responsible AI and data governance, regulatory compliance starts with knowing exactly what data you have and how sensitive it is. Before you can design controls, encryption schemes, or risk plans, you must first perform a data audit and classification to identify personal, sensitive, and regulated data elements, as well as their sources, flows, and storage locations. This aligns with the guidance that early in the AI lifecycle, project teams should create a clear data inventory and mapping to understand which datasets fall under privacy regulations (such as health, financial, or personally identifiable information).

By conducting a thorough data audit to identify sensitive information, the project team can determine which regulations apply, what consent or legal basis is required, and where to apply specific safeguards (access controls, anonymization, retention limits, etc.). Encryption and broader risk management plans are important, but they are secondary steps that rely on the foundational insight gained from the audit. Verbal commitments from stakeholders have no formal regulatory standing. Therefore, in the initial stages of data collection and aggregation, the task that most directly supports regulatory compliance is a thorough data audit to identify sensitive information.
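A first-pass version of such an audit can be sketched as a pattern scan over sampled column values. This is illustrative only: the regexes are deliberately simplified, the table is invented, and a real audit also covers lineage, consent, legal basis, and storage locations.

```python
import re

# Sketch: first-pass data audit that classifies columns by scanning
# sample values for sensitive patterns, producing a data inventory
# that downstream safeguards (access controls, anonymization) rely on.

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def audit_columns(table: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each column name to the sensitive-data types found in it."""
    findings = {}
    for column, values in table.items():
        hits = sorted(
            label for label, rx in PATTERNS.items()
            if any(rx.search(v) for v in values)
        )
        if hits:
            findings[column] = hits
    return findings

sample = {
    "patient_contact": ["alice@example.com", "555-867-5309"],
    "visit_reason": ["checkup", "follow-up"],
}
report = audit_columns(sample)  # only the contact column is flagged
```

The output of the scan tells the team which columns fall under privacy regulations before any controls are designed, which is the sequencing the explanation above emphasizes.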


Question 7

An AI project for a financial technology client is at risk due to potential inaccuracies in data aggregation.

What is the first step the project manager should take to mitigate the risk?



Answer : C

When an AI initiative faces risk due to potential inaccuracies in data aggregation, PMI-CPMAI-aligned practice says the very first action is to understand the data characteristics before taking any corrective measures. This includes clarifying data sources, aggregation logic, granularity, formats, lineage, and quality dimensions (completeness, consistency, accuracy, timeliness, and validity). By doing so, the project manager and data team can determine where and why aggregation errors are arising, and whether they stem from upstream systems, ETL/ELT pipelines, joining logic, or business rules.

PMI's AI data lifecycle guidance stresses that you cannot reliably "fix" freshness, delete records, or visualize results until you have a structured understanding of the data landscape and its transformation steps. Jumping to deletion (option B) can worsen bias or information loss, and focusing only on freshness (option A) or visualization (option D) treats symptoms rather than root cause.

Therefore, the correct first step in mitigating this type of risk is to understand the data characteristics (option C), which then informs targeted remediation actions, improved aggregation logic, and robust data quality controls aligned with the AI solution's objectives and risk appetite.
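The quality dimensions listed above (completeness, validity, timeliness) can be sketched as a simple profiling step run before any remediation. The field names, rules, and records below are illustrative stand-ins for a real aggregated feed.

```python
from datetime import date

# Sketch: profile aggregated records against basic quality dimensions
# *before* attempting fixes, so remediation targets the real root cause.

def profile(records: list[dict]) -> dict:
    """Compute completeness, validity, and timeliness rates for a toy feed."""
    n = len(records)
    complete = sum(1 for r in records
                   if all(r.get(k) is not None
                          for k in ("amount", "date", "account")))
    valid_amount = sum(1 for r in records
                       if isinstance(r.get("amount"), (int, float))
                       and r["amount"] >= 0)
    timely = sum(1 for r in records
                 if r.get("date") is not None and r["date"] <= date.today())
    return {
        "completeness": complete / n,   # no missing required fields
        "validity": valid_amount / n,   # amounts are non-negative numbers
        "timeliness": timely / n,       # no future-dated records
    }

records = [
    {"amount": 120.0, "date": date(2024, 1, 5), "account": "A-1"},
    {"amount": -30.0, "date": date(2024, 1, 6), "account": "A-2"},  # invalid amount
    {"amount": 55.0,  "date": None,             "account": "A-3"},  # incomplete
]
report = profile(records)
```

A profile like this localizes where aggregation is going wrong (missing fields versus bad values versus stale data), which is the "understand first" step the explanation identifies as option C.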

