ISACA Advanced in AI Security Management (AAISM) Exam Practice Test

Page: 1 / 14
Total 255 questions
Question 1

Which of the following is the MOST important consideration for an organization that has decided to adopt AI to leverage its competitive advantage?



Answer : A

AAISM's governance guidance emphasizes that adopting AI for competitive advantage must begin with a comprehensive strategic roadmap for integration. This roadmap aligns AI adoption with business objectives, sets priorities, defines milestones, and ensures coordination across functions. Risk management, training, and tool procurement are essential, but they are tactical steps that follow once the strategic direction is defined. Without a roadmap, adoption becomes fragmented and risks misalignment with business strategy. The most important consideration at the adoption stage is therefore creating a strategic integration roadmap.


AAISM Exam Content Outline -- AI Governance and Program Management (Strategy and Roadmapping)

AI Security Management Study Guide -- Business Alignment of AI Initiatives

Question 2

Embedding unique identifiers into AI models would BEST help with:



Answer : B

The AAISM framework explains that embedding unique identifiers---such as digital watermarks or model fingerprints---enables organizations to trace and verify model provenance. This technique is used for tracking ownership and intellectual property rights over models, particularly when sharing, licensing, or distributing AI systems. While identifiers may support certain security functions, their primary control objective is ownership verification, not preventing access, bias removal, or adversarial detection. The correct alignment with AAISM controls is tracking ownership.


AAISM Exam Content Outline -- AI Technologies and Controls (Model Provenance and Watermarking)

AI Security Management Study Guide -- Ownership and Accountability of Models
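The provenance idea above can be illustrated with a minimal sketch (not from the AAISM materials; the registry, owner name, and weight values are hypothetical). A real watermark is embedded in the model's parameters themselves; here, hashing the serialized weights serves as the simplest possible unique identifier for ownership verification.

```python
import hashlib

def model_fingerprint(weights: list[float]) -> str:
    """Derive a reproducible identifier from serialized model weights.

    Hashing the parameters stands in for a true embedded watermark:
    any party holding the same weights derives the same fingerprint.
    """
    serialized = ",".join(f"{w:.8f}" for w in weights).encode("utf-8")
    return hashlib.sha256(serialized).hexdigest()

# Hypothetical ownership registry mapping fingerprints to rights holders
registry: dict[str, str] = {}

weights = [0.12, -0.5, 1.7]
fp = model_fingerprint(weights)
registry[fp] = "Acme AI Lab"

# Later, verify the provenance of a distributed copy of the model
assert registry.get(model_fingerprint([0.12, -0.5, 1.7])) == "Acme AI Lab"
```

Because the fingerprint is deterministic, it supports ownership claims when models are shared or licensed, which is exactly the control objective the explanation identifies.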

Question 3

A PRIMARY objective of responsibly providing AI services is to:



Answer : C

AAISM emphasizes that the primary objective of responsible AI is to establish and maintain trust in AI-driven decisions and predictions. Trust is achieved through transparency, accountability, fairness, and governance. While confidentiality and integrity are critical technical objectives, they are not the overarching purpose of responsible AI service provision. Autonomy and learning ability are features of AI, but without trust, adoption and compliance falter. The correct answer is that responsible AI services must focus on building trust in AI outcomes.


AAISM Exam Content Outline -- AI Governance and Program Management (Responsible AI Principles)

AI Security Management Study Guide -- Trust and Ethical AI Adoption

Question 4

Which of the following is the MOST effective use of AI-enabled tools in a security operations center (SOC)?



Answer : A

The most effective SOC application of AI is detecting subtle, hard-to-find attack patterns, thereby reducing false negatives.

AAISM technical control guidance notes that AI in SOCs is best applied to:

Enhance detection accuracy and sensitivity to anomalies.

Assist analysts in identifying hidden patterns that traditional rule-based systems miss.

Augment---not replace---human decision-making for high-confidence outcomes.

Options B and C incorrectly shift responsibility entirely to AI, which contradicts governance principles requiring human oversight. Option D is useful for efficiency, but the primary effectiveness comes from improving detection quality.

Therefore, the most effective use is to reduce false negatives and detect subtle attacks.
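The detection-plus-human-oversight pattern above can be sketched with a minimal anomaly scorer (illustrative only, not an AAISM-prescribed technique; the failed-login counts are invented). Observations far from the baseline are flagged and routed to an analyst rather than acted on automatically.

```python
from statistics import mean, stdev

def anomaly_scores(values: list[int]) -> list[float]:
    """Z-score each observation against the baseline distribution."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical hourly failed-login counts; the last hour is a subtle spike
counts = [3, 4, 2, 5, 3, 4, 3, 12]
scores = anomaly_scores(counts)

# Flag outliers for human review -- the AI augments, not replaces, the analyst
flagged = [i for i, s in enumerate(scores) if s > 2.0]
for hour in flagged:
    print(f"hour {hour}: score {scores[hour]:.2f} -> escalate to analyst")
```

Only the anomalous hour crosses the threshold; the escalation step keeps a human in the decision loop, consistent with the governance principle the explanation cites.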


Question 5

A model producing contradictory outputs based on highly similar inputs MOST likely indicates the presence of:



Answer : B

The AAISM study framework describes evasion attacks as attempts to manipulate or probe a trained model during inference by using crafted inputs that appear normal but cause the system to generate inconsistent or erroneous outputs. Contradictory results from nearly identical queries are a typical symptom of evasion, as the attacker is probing decision boundaries to find weaknesses. Poisoning attacks occur during training, not inference, while membership inference relates to exposing whether data was part of the training set, and model exfiltration involves extracting proprietary parameters or architecture. The clearest indication of contradictory outputs from similar queries therefore aligns directly with the definition of evasion attacks in AAISM materials.


AAISM Study Guide -- AI Technologies and Controls (Adversarial Machine Learning and Attack Types)

ISACA AI Security Management -- Inference-time Attack Scenarios
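The boundary-probing symptom described above can be demonstrated with a toy classifier (purely illustrative; the weights, threshold, and fraud-detection framing are assumptions, not AAISM content). Two nearly identical inputs straddling the decision boundary produce contradictory labels, which is the signal an evasion attacker searches for at inference time.

```python
def classify(features: list[float]) -> str:
    """Toy linear classifier: label as fraud when the weighted score exceeds 0.5."""
    weights = [0.6, 0.4]
    score = sum(f * w for f, w in zip(features, weights))
    return "fraud" if score > 0.5 else "legitimate"

# Two highly similar inputs on opposite sides of the decision boundary
a = classify([0.50, 0.50])  # score 0.500
b = classify([0.52, 0.50])  # score 0.512
print(a, b)  # contradictory outputs from near-identical queries
```

An attacker issuing many such near-duplicate queries is mapping the decision boundary to craft inputs the model misclassifies, which is why this output pattern points to evasion rather than poisoning or membership inference.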

Question 6

A large pharmaceutical company using a new AI solution to develop treatment regimens is concerned about potential hallucinations with the introduction of real-world data. Which of the following is MOST likely to reduce this risk?



Answer : B

AAISM materials identify human-in-the-loop governance as the most effective safeguard against risks such as hallucinations in AI systems used in high-stakes domains like healthcare. By ensuring that human experts validate outputs before they influence patient treatment decisions, organizations preserve accountability, safety, and accuracy. Penetration testing is a cybersecurity measure that does not address hallucination risk. AI impact analysis helps evaluate systemic effects but does not directly prevent faulty outputs. Data validation improves input quality but cannot fully prevent generative hallucinations. The key safeguard is human-in-the-loop oversight.


AAISM Study Guide -- AI Governance and Program Management (Human Oversight in High-Risk AI)

ISACA AI Security Management -- Mitigating Hallucinations in Generative AI
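The human-in-the-loop control can be sketched as a simple release gate (a hypothetical illustration; the function names and the lambda standing in for clinician sign-off are assumptions). No AI recommendation reaches the treatment plan without explicit expert approval.

```python
from typing import Callable, Optional

def human_in_the_loop(ai_output: str,
                      reviewer_approves: Callable[[str], bool]) -> Optional[str]:
    """Release an AI recommendation only after explicit expert sign-off.

    Rejected outputs return None and never reach the treatment plan,
    preserving human accountability for the final decision.
    """
    if reviewer_approves(ai_output):
        return ai_output
    return None

# A stand-in reviewer callback; in practice this is a clinician's review step
released = human_in_the_loop("adjust dosage to 20mg", lambda out: "dosage" in out)
blocked = human_in_the_loop("unsupported regimen", lambda out: False)
```

The design choice is that the gate fails closed: absent approval, nothing is released, which mirrors the accountability emphasis in the explanation above.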

Question 7

Which of the following technologies can be used to manage deepfake risk?



Answer : C

The AAISM study material highlights blockchain as a control mechanism for managing deepfake risk because it provides immutable verification of digital media provenance. By anchoring original data signatures on a blockchain, organizations can verify authenticity and detect tampered or synthetic content. Data tagging helps organize content but does not guarantee authenticity. MFA and adaptive authentication strengthen identity security but do not address content manipulation risks. Blockchain's immutability and traceability make it the recognized technology for mitigating deepfake challenges.


AAISM Study Guide -- AI Technologies and Controls (Emerging Controls for Content Authenticity)

ISACA AI Governance Guidance -- Blockchain for Data Integrity and Deepfake Mitigation
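The anchoring mechanism above can be sketched with a toy append-only hash chain (a simplified stand-in for a real blockchain; the media bytes and chain structure are illustrative assumptions). Media whose digest was anchored at publication verifies as authentic; altered or synthetic content does not.

```python
import hashlib

def block_hash(prev_hash: str, media_digest: str) -> str:
    """Chain each entry to its predecessor so history cannot be rewritten silently."""
    return hashlib.sha256((prev_hash + media_digest).encode()).hexdigest()

def anchor(chain: list[tuple[str, str]], media_bytes: bytes) -> None:
    """Append the media's digest to the append-only chain at publication time."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    prev = chain[-1][0] if chain else "0" * 64
    chain.append((block_hash(prev, digest), digest))

def is_authentic(chain: list[tuple[str, str]], media_bytes: bytes) -> bool:
    """A clip is authentic only if its exact digest was anchored earlier."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return any(d == digest for _, d in chain)

chain: list[tuple[str, str]] = []
anchor(chain, b"original press video")
print(is_authentic(chain, b"original press video"))   # anchored at publication
print(is_authentic(chain, b"deepfaked press video"))  # no matching anchor
```

Even a one-byte deepfake alteration changes the digest, so verification fails; the chained hashes make retroactive tampering with the anchor record detectable, which is the immutability property the explanation relies on.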
