You are the chief privacy officer of a medical research company that would like to collect and use sensitive data about cancer patients, such as their names, addresses, race and ethnic origin, medical histories, insurance claims, pharmaceutical prescriptions, eating and drinking habits and physical activity.
The company will use this sensitive data to build an AI algorithm that will spot common attributes to help predict whether seemingly healthy people are more likely to get cancer. However, the company is unable to obtain consent from enough patients to collect the minimum data needed to train its model.
Which of the following solutions would most efficiently balance privacy concerns with the lack of available data during the testing phase?
Answer : C
Utilizing synthetic data to offset the lack of patient data is an efficient solution that balances privacy concerns with the need for sufficient data to train the model. Synthetic data can be generated to simulate real patient data while avoiding the privacy issues associated with using actual patient data. This approach allows for the development and testing of the AI algorithm without compromising patient privacy, and it can be refined with real data as it becomes available. Reference: AIGP Body of Knowledge on Data Privacy and AI Model Training.
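As an illustration only (not part of the exam material), the synthetic-data approach described above can be sketched in a few lines: fit simple per-feature distributions to a small consented cohort, then sample new records from them. All names and values here are hypothetical, and this sketch ignores cross-feature correlations, which real synthetic-data generators (e.g. GAN- or copula-based tools) would model.

```python
import random
import statistics

def fit_feature_stats(records, feature):
    """Estimate mean/stdev of one numeric feature from a consented cohort."""
    values = [r[feature] for r in records]
    return statistics.mean(values), statistics.stdev(values)

def generate_synthetic(records, features, n, seed=0):
    """Sample n synthetic rows, feature by feature, from fitted Gaussians.

    Independent per-feature sampling is a deliberate simplification; it
    preserves marginal statistics but not joint structure.
    """
    rng = random.Random(seed)
    stats = {f: fit_feature_stats(records, f) for f in features}
    return [
        {f: rng.gauss(mu, sigma) for f, (mu, sigma) in stats.items()}
        for _ in range(n)
    ]

# Tiny hypothetical consented cohort
cohort = [
    {"age": 52, "bmi": 27.1},
    {"age": 61, "bmi": 30.4},
    {"age": 47, "bmi": 24.9},
]
synthetic = generate_synthetic(cohort, ["age", "bmi"], n=100)
```

No real patient record is copied into the synthetic set, which is why this pattern reduces (though does not eliminate) privacy exposure during model development and testing.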
A shipping service based in the US is looking to expand its operations into the EU. It utilizes an in-house developed multimodal AI model that analyzes all personal data collected from shipping senders and recipients, and optimizes shipping routes and schedules based on this data.
As they expand into the EU, all of the following descriptions should be included in the technical documentation for their AI model EXCEPT?
Answer : B
The EU AI Act outlines what must be included in technical documentation for high-risk systems. These requirements are designed to support conformity assessment, transparency, and traceability.
From the AI Governance in Practice Report 2024:
''It mandates drawing up technical documentation... must include a general description of the AI system, the intended purpose, and a detailed description of the elements and development process.'' (p. 34)
''Documentation... includes training, testing, evaluation procedures, and appropriateness of performance metrics.'' (p. 34--35)
The risk management system is addressed separately through a risk management plan, not within the technical documentation itself.
Thus:
A, C, and D are explicitly required in the technical documentation.
B, while important, is part of the risk management process, not a required section of technical documentation.
All of the following issues are unique for proprietary AI model deployments EXCEPT?
Answer : C
Bias is a common risk across both proprietary and open-source models, and is not unique to proprietary deployments. All AI systems --- regardless of origin --- require evaluation for fairness, accuracy, and representativeness.
From the AI Governance in Practice Report 2024:
''Bias, discrimination and fairness challenges are present in both open and closed models, regardless of how the model is sourced.'' (p. 41)
Scenario:
A company using AI for resume screening understands the risks of algorithmic bias and the evolving legal requirements across jurisdictions. It wants to implement the right governance controls to prevent reputational damage from misuse of the AI hiring tool.
Which of the following measures should the company adopt to best mitigate its risk of reputational harm from using the AI tool?
Answer : A
The correct answer is A. Pre- and post-deployment testing ensures bias, accuracy, and fairness are evaluated and corrected as needed, which is essential for reputational risk mitigation.
From the AIGP Body of Knowledge:
''Testing AI systems before and after deployment is critical to ensure performance, fairness, and compliance. Failing to do so may result in reputational damage and legal exposure.''
AI Governance in Practice Report 2024 (Bias/Fairness and Risk Sections):
''System impact assessments, testing, and post-deployment monitoring are necessary to identify and mitigate risks... This supports both compliance and public trust.''
Testing is proactive, unlike indemnification (which transfers risk after damage), or requiring manual review (which defeats automation).
If it is possible to provide a rationale for a specific output of an AI system, that system can best be described as?
Answer : C
If it is possible to provide a rationale for a specific output of an AI system, that system can best be described as explainable. Explainability in AI refers to the ability to interpret and understand the decision-making process of the AI system. This involves being able to articulate the factors and logic that led to a particular output or decision. Explainability is critical for building trust, enabling users to understand and validate the AI system's actions, and ensuring compliance with ethical and regulatory standards. It also facilitates debugging and improving the system by providing insights into its behavior.
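To make the idea concrete (as an illustrative sketch, not exam material), one simple form of explainability is decomposing a linear model's output into per-feature contributions: each weight-times-value term is part of the rationale for that specific prediction. All weights and feature names below are hypothetical.

```python
def explain_linear_prediction(weights, features):
    """Return (prediction, ranked contributions) for a linear model.

    contribution_i = weight_i * feature_i, and the prediction is their
    sum, so the ranked list is itself a rationale for this output.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    prediction = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return prediction, ranked

# Hypothetical risk-score model and one input
weights = {"exercise_hours": -0.8, "smoking_years": 1.5, "age": 0.03}
features = {"exercise_hours": 2.0, "smoking_years": 10.0, "age": 50.0}
pred, rationale = explain_linear_prediction(weights, features)
# rationale ranks features by how strongly each moved this prediction
```

Linear models are transparent by construction; for black-box models, post-hoc techniques (e.g. feature-attribution methods) aim to produce a comparable per-output rationale.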
CASE STUDY
Please use the following to answer the next question:
XYZ Corp., a premier payroll services company that employs thousands of people globally, is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.
It has become time consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.
To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. It has been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, provided it would achieve its goals and comply with all applicable laws.
The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company are responsible for integrating and deploying technology solutions into the organization's operations in a responsible, cost-effective manner.
The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Its concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.
Which other stakeholder groups should be involved in the selection and implementation of the AI hiring tool?
Answer : A
In the selection and implementation of the AI hiring tool, involving Finance and Legal is crucial. The Finance team is essential for assessing cost implications, budget considerations, and financial risks. The Legal team is necessary to ensure compliance with applicable laws and regulations, including those related to data privacy, employment, and anti-discrimination. Involving these stakeholders ensures a comprehensive evaluation of both the financial viability and legal compliance of the AI tool, mitigating potential risks and aligning with organizational objectives and regulatory requirements.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general purpose large language model (''LLM''). In particular, ABC intends to use its historical customer data---including applications, policies, and claims---and proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed t
Answer : B
Providing applicants with information about the model's capabilities and limitations would not directly support fairness testing by the compliance team. Fairness testing focuses on evaluating the model's decisions for biases and ensuring equitable treatment across different demographic groups, rather than informing applicants about the model.
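As a brief illustration of what such fairness testing can look like in practice (an assumed sketch, not part of the exam material), a compliance team might compare selection rates across demographic groups and flag a disparate-impact ratio below the "four-fifths" rule of thumb. The group labels and decisions below are hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool). Returns rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Min/max selection-rate ratio across groups.

    Values below ~0.8 are commonly treated as a signal of potential
    adverse impact warranting closer review.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group A selected 8/10, group B selected 5/10
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 5 + [("B", False)] * 5)
ratio = disparate_impact_ratio(decisions)
```

This is only one metric; a full fairness review would also examine error rates, calibration, and outcomes across intersecting groups.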