A UK company has designed a facial recognition model to support border control.
The EU AI Act would apply to the model in all of the following situations EXCEPT?
Answer : C
The EU AI Act applies extraterritorially, meaning it affects entities outside the EU when their AI systems impact individuals within the EU. However, it does not apply to systems that are developed, sold, or used entirely outside of the EU, such as in the UK, unless they affect the EU market or individuals.
From the AI Governance in Practice Report 2025:
''The act imposes regulatory obligations... depending on their capabilities, reach and computing power, certain GPAI systems are considered to present systemic risk and attract broadly similar obligations to those applicable to high-risk AI systems.'' (p. 7)
''The EU AI Act is the world's first comprehensive AI regulation... requirements apply to providers, deployers, importers, and distributors of AI systems when such systems are placed on the EU market.'' (p. 7--8)
Thus:
A. Open source release does not exclude applicability if deployed in the EU.
B. Deployment at an EU border clearly invokes jurisdiction.
D. Training by an EU company creates jurisdictional links.
C. Deployment only at UK checkpoints, with no EU use or impact, is outside scope.
Which stakeholder is responsible for lawful collection of data for the training of the foundational AI model?
Answer : C
Data aggregators are third parties that collect and license data from various sources. They are responsible for ensuring the lawful collection and proper usage rights of the data they distribute, especially when such data is used to train foundational AI models.
From the AI Governance in Practice Report 2025:
''As organizations have neither proximity to how third-party data was first collected nor direct control over the data governance practices of third parties, an organization can benefit from carrying out its own legal due diligence and third-party risk management.'' (p. 19)
''Legal due diligence may include verification of the personal data's lawful collection by the data broker...'' (p. 19)
This confirms that data aggregators bear the legal and ethical burden to verify that data has been lawfully collected and is appropriately licensed for use, including in AI training.
A. The marketing agency and D. its client may use data, but they rely on upstream providers for its lawful origin.
B. The tech company may train the model but depends on lawful sourcing by data aggregators.
MULTI-SELECT
Please select 3 of the 5 options below. No partial credit will be given.
In cooperation with an Italian University, a US technology company called Nudge 2078, with no offices or employees in other countries, is developing an AI system which subtly encourages "good" behaviors, such as saving money and exercising. The two companies plan to conduct some experiments on volunteers. However, the study protocol is misleading on the purpose of the study, the experiments that will be conducted and its relevance to the application.
Which regulations are most likely applicable?
Answer : A, C, D
The correct answers are A, C, and D because multiple regulatory regimes apply based on jurisdiction, data use, and deceptive practices. The EU AI Act applies because the system is being developed and tested in cooperation with an EU-based university and involves behavioral influence, which may fall under prohibited or high-risk AI practices, especially where manipulation or deception is present. The General Data Protection Regulation applies due to the processing of personal data of EU-based participants, particularly given the misleading study protocol, which violates principles such as transparency and informed consent. The Federal Trade Commission Act applies to the US company because deceptive or unfair practices, such as misleading participants about the purpose of experiments, fall under FTC enforcement. The Digital Services Act and PCI standards are not relevant to this specific use case.
A deployer discovers that a high-risk AI recruiting system has been making widespread errors, resulting in harms to the rights of a considerable number of EU residents who are denied consideration for jobs for improper reasons such as ethnicity, gender and age.
According to the EU AI Act, what should the company do first?
Answer : A
Under the EU AI Act, serious incidents involving high-risk AI systems must be reported. The deployer is required to promptly inform the provider and the relevant authorities about the issue.
From theAI Governance in Practice Report 2025:
''Serious incidents involving high-risk systems... must be reported to the provider and relevant market surveillance authority.'' (p. 35)
''Timely reporting is required when AI systems result in or may result in violations of fundamental rights.'' (p. 35)
CASE STUDY
A company is considering the procurement of an AI system designed to enhance the security of IT infrastructure. The AI system analyzes how users type on their laptops, including typing speed, rhythm and pressure, to create a unique user profile. This data is then used to authenticate users and ensure that only authorized personnel can access sensitive resources.
All of the following are obligations of the company as a data controller when implementing its AI system EXCEPT?
Answer : A
The correct answer is A. While the location of processors may have implications (such as for data transfers under GDPR), there is no absolute requirement that third-party processors be based in the same country.
From the AI Governance in Practice Report 2025 and ILT Guide:
''Data controllers are responsible for ensuring that third-party processors have adequate protections, but not necessarily that they reside in the same jurisdiction. What is required is legal safeguards (e.g., SCCs) for international transfers, not same-country location.''
In contrast, DPIAs, DSARs, and implementation of technical/organizational safeguards are explicitly required under GDPR and responsible AI frameworks.
===========
Scenario:
A distributor operating in the EU is responsible for selling imported high-risk AI systems to businesses. The distributor wants to ensure they fulfill all applicable obligations under the EU AI Act.
All of the following are obligations of a distributor of high-risk AI systems under the EU AI Act EXCEPT?
Answer : C
The correct answer is C. Registration in the EU database is an obligation of providers of high-risk AI systems, not distributors.
From the AIGP ILT Guide -- Roles & Obligations Module:
''Distributors must verify CE marking, ensure instructions for use are provided, inform authorities of risks, and take corrective action when necessary. However, registration duties in the EU database lie with the provider.''
Also from the AI Governance in Practice Report 2025:
''The AI Act differentiates responsibilities for developers, providers, importers, and distributors. Only providers of high-risk systems are obligated to register their systems in the EU AI Database.''
Distributors focus on verification and communication, not formal registration.
Which of the following are subjects covered by a typical impact assessment?
Answer : D
The correct answer is D because typical AI impact assessments focus on evaluating risks to individuals and society, particularly in areas such as fundamental rights, data protection, and safety. Frameworks like Data Protection Impact Assessments and Fundamental Rights Impact Assessments are designed to assess how AI systems may affect privacy, fairness, human rights, and potential harm to users. These assessments are core components of AI governance and are often required or recommended by regulations such as the GDPR and the EU AI Act. While options A, B, and C reference technical or operational considerations, they do not capture the broader societal and legal impacts that impact assessments are intended to address. AI governance emphasizes a human-centric approach, ensuring systems are safe, lawful, and respectful of individual rights before deployment.