IAPP AIGP Artificial Intelligence Governance Professional Exam Practice Test

Question 1

CASE STUDY

A premier payroll services company that employs thousands of people globally is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.

It has become time-consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.

To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. They have been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable laws.

The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company deploy technology solutions into the organization's operations in a responsible, cost-effective manner.

The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Their concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.

The organization continues planning the adoption of an AI tool to support hiring, but is concerned about potential bias in content generated by AI systems and how that could affect public perception.

Which of the following measures should the company adopt to best mitigate its risk of reputational harm from using the AI tool?



Answer : A

''Testing AI tools pre- and post-deployment helps ensure they perform as expected and do not introduce bias, privacy issues, or fairness concerns. This mitigates reputational and legal risk.''

The AI Governance in Practice Report 2024 further reinforces:

''Ongoing monitoring and testing post-deployment allows organizations to catch and correct unintended impacts... especially important in HR and hiring contexts.''
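
For illustration only (not drawn from the AIGP materials, and all group names and figures below are hypothetical): a minimal sketch of the kind of pre- and post-deployment check the quoted guidance describes. It computes selection rates per applicant group from screening outcomes and flags any group whose rate falls below four-fifths of the highest group's rate, a common heuristic for spotting potential disparate impact that would warrant review.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Share of applicants advanced by the screening tool, per group.

    outcomes: iterable of (group, advanced) pairs, where advanced is a bool.
    """
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        advanced[group] += int(passed)
    return {g: advanced[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` of the highest
    group's rate (the "four-fifths rule" heuristic)."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Hypothetical post-deployment sample of screening outcomes.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)

rates = selection_rates(sample)
print(rates)                          # {'group_a': 0.4, 'group_b': 0.25}
print(disparate_impact_flags(rates))  # {'group_b': 0.625} -> below 0.8, warrants review
```

Run on validation data before deployment and periodically afterward, this kind of check supports the ongoing monitoring the report calls for; it is a screening heuristic, not a legal determination of bias.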


Question 2

Which of the following best defines an "AI model"?



Answer : D

An AI model is best defined as a program that has been trained on a set of data to find patterns within that data. This definition captures the essence of machine learning, where the model learns from the data to make predictions or decisions. Reference: AIGP BODY OF KNOWLEDGE, which provides a detailed explanation of AI models and their training processes.
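
As a purely illustrative sketch of that definition (scikit-learn is an assumed dependency, and the toy data is invented): the program below is trained on a small labeled dataset, and the fitted parameters encode the pattern it found, which it then applies to new inputs.

```python
# Illustrative only: a tiny "program trained on a set of data to find patterns".
from sklearn.linear_model import LogisticRegression

# Toy training data: years of relevant experience -> whether the example was labeled positive.
X_train = [[0.5], [1.0], [2.0], [3.0], [4.0], [5.0]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)            # training: fit parameters to the data

new_inputs = [[1.5], [4.5]]
print(model.predict(new_inputs))       # apply the learned pattern to unseen inputs
print(model.predict_proba(new_inputs)) # predicted class probabilities
```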


Question 3

What is the primary purpose of an AI impact assessment?



Answer : D

The correct answer is D. AI Impact Assessments are primarily used to identify and manage risks and harms associated with AI systems.

From the AIGP Body of Knowledge:

''The goal of an AI impact assessment is to ensure that risks are identified, evaluated, and mitigated prior to or during development and deployment.''

As further confirmed in the AI Governance in Practice Report 2024 (Part III):

''Risk-based tools like DPIAs and Algorithmic Impact Assessments help identify potential risks to individuals and society, enabling organizations to implement mitigation plans and safeguards.''

While benefits may be noted in such assessments, the core objective is to manage risks and promote responsible AI.


Question 4

A company is creating a mobile app to enable individuals to upload images and videos, and analyze this data using ML to provide lifestyle improvement recommendations. The signup form has the following data fields:

1. First name

2. Last name

3. Mobile number

4. Email ID

5. New password

6. Date of birth

7. Gender

In addition, the app obtains a device's IP address and location information while in use.

Which GDPR privacy principles does this violate?



Answer : A

The GDPR privacy principles that this scenario violates are Purpose Limitation and Data Minimization. Purpose Limitation requires that personal data be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. Data Minimization mandates that personal data collected should be adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed. In this case, collecting extensive personal information (e.g., IP address, location, gender) and potentially using it beyond the necessary scope for the app's functionality could violate these principles by collecting more data than needed and possibly using it for purposes not originally intended.
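
As a hedged illustration of data minimization (field names are hypothetical, and what counts as "necessary" depends on the app's documented purpose): a signup record limited to what account creation plausibly requires, with remaining fields collected only when a specific purpose is documented.

```python
# Illustrative only: a signup record trimmed to what the stated purpose requires.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MinimalSignup:
    # Needed to create and secure the account.
    email: str
    password_hash: str            # store a salted hash, never the raw password
    # Kept only if the recommendation feature documents a need for it, at coarse
    # granularity (e.g., birth year rather than full date of birth).
    birth_year: Optional[int] = None

# Gender, precise location, device IP address, etc. would be added only if a specific,
# documented purpose requires them (purpose limitation) and retained no longer than necessary.
print(MinimalSignup(email="user@example.com", password_hash="<hashed>", birth_year=1990))
```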


Question 5

CASE STUDY

A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.

The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.

The marketing company has:

* Entered into a contract with the technology company with suitable representations and warranties.

* Completed an impact assessment on the LLM for this intended use.

* Built technical guidance on how to measure and mitigate bias in the LLM.

* Enabled technical aspects of transparency, explainability, robustness and privacy.

* Followed applicable regulatory requirements.

* Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.

The technology company has:

* Provided guidance and resources to developers to address environmental concerns.

* Built technical guidance on how to measure and mitigate bias in the LLM.

* Provided tools and resources to measure bias specific to the LLM.

* Enabled technical aspects of transparency, explainability, robustness and privacy.

* Mapped and mitigated potential societal harms and large-scale impacts.

* Followed applicable regulatory requirements and industry standards.

* Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.

The agency has taken governance actions such as:

* Conducting an impact assessment

* Providing legal disclosures

* Enabling bias mitigation and explainability

* Complying with regulatory requirements

All of the following should be included in the marketing company's disclosures about the use of the LLM EXCEPT?



Answer : B

The correct answer is B -- Proprietary methods. While transparency is important, organizations are not obligated to disclose proprietary algorithms, methods, or trade secrets in public disclosures.

From the AIGP Body of Knowledge -- Transparency & Disclosures:

''AI system users should disclose the purpose, capabilities, limitations, and applicable legal context---but not sensitive IP.''

AI Governance in Practice Report 2024 (Transparency Section) states:

''Disclosure requirements balance public understanding with the need to protect proprietary business interests. Proprietary training methods are not expected to be disclosed.''

Thus, while it's best practice to disclose the intended purpose, legal compliance, and system limitations, internal proprietary techniques are usually excluded.


Question 6

CASE STUDY

Please use the following to answer the next question:

XYZ Corp., a premier payroll services company that employs thousands of people globally, is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.

It has become time-consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.

To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. They have been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable laws.

The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company are responsible for integrating and deploying technology solutions into the organization's operations in a responsible, cost-effective manner.

The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Their concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.

The frameworks that would be most appropriate for XYZ's governance needs would be the NIST AI Risk Management Framework and?



Answer : C

The IEEE Ethical System Design Risk Management Framework (IEEE 7000-2021) would be most appropriate for XYZ Corp.'s governance needs in addition to the NIST AI Risk Management Framework. The IEEE standard specifically addresses ethical concerns during system design, which is crucial for ensuring the responsible use of AI in hiring. It complements the NIST framework by focusing on ethical risk management, aligning well with XYZ Corp.'s goals of deploying AI responsibly and mitigating associated risks.


Question 7

Each of the following actors is typically engaged in the AI development life cycle EXCEPT?



Answer : B

Typically, actors involved in the AI development life cycle include data architects (who design the data frameworks), socio-cultural and technical experts (who ensure the AI system is socio-culturally aware and technically sound), and legal and privacy governance experts (who handle the legal and privacy aspects). Government regulators, while important, are not directly engaged in the development process but rather oversee and regulate the industry. Reference: AIGP BODY OF KNOWLEDGE and AI development frameworks.

