In a new supply chain management system, AI models used by participating parties are interactively connected to generate advice in support of management decision making. Which of the following is the GREATEST challenge related to this architecture?
Answer : A
The AAISM governance framework notes that in multi-party AI ecosystems, the greatest challenge is ensuring clear accountability for AI outputs. When models from different parties interact, responsibility for errors, bias, or harmful recommendations can be unclear, leading to disputes and compliance gaps. While aggregate risk assessment and error identification are significant, they are secondary to the fundamental governance requirement of establishing transparent lines of responsibility. Without defined accountability, no stakeholder can reliably manage or mitigate risks. Therefore, the greatest challenge in such a distributed architecture is responsibility for AI outputs.
AAISM Study Guide -- AI Governance and Program Management (Accountability in Multi-Party Systems)
ISACA AI Governance Guidance -- Roles and Responsibilities in AI Collaboration
Which of the following is a key risk indicator (KRI) for an AI system used for threat detection?
Answer : D
AAISM materials emphasize that in operational AI systems, key risk indicators (KRIs) must reflect risks to performance and reliability rather than technical design factors alone. In the case of threat detection, the most relevant KRI is the frequency of system overrides by human analysts, as this indicates a lack of trust, frequent false positives, or poor detection accuracy. Training epochs, model depth, and training time are technical metrics but do not directly measure operational risk. Analyst overrides represent a practical measure of system effectiveness and risk.
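The override-based KRI described above can be sketched in a few lines. This is an illustrative example only: the record fields, the `override_rate` helper, and the 10% escalation threshold are hypothetical assumptions, not values prescribed by AAISM.

```python
# Hypothetical sketch: computing an analyst-override KRI for an
# AI threat-detection system. Field names and the 10% threshold
# are illustrative assumptions, not AAISM-mandated values.

def override_rate(alerts: list) -> float:
    """Fraction of AI-generated verdicts that human analysts overrode."""
    if not alerts:
        return 0.0
    overridden = sum(1 for a in alerts if a["analyst_overrode"])
    return overridden / len(alerts)

alerts = [
    {"id": 1, "analyst_overrode": False},
    {"id": 2, "analyst_overrode": True},
    {"id": 3, "analyst_overrode": False},
    {"id": 4, "analyst_overrode": True},
]

rate = override_rate(alerts)
KRI_THRESHOLD = 0.10  # escalate if more than 10% of verdicts are overridden
if rate > KRI_THRESHOLD:
    print(f"KRI breach: override rate {rate:.0%} exceeds threshold")
```

A rising override rate is the operational signal: it quantifies loss of analyst trust in the system's verdicts, which technical metrics such as training epochs or model depth cannot capture.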
AAISM Study Guide -- AI Risk Management (Operational KRIs for AI Systems)
ISACA AI Security Management -- Monitoring AI Effectiveness
An organization is reviewing an AI application to determine whether it is still needed. Engineers have been asked to analyze the number of incorrect predictions against the total number of predictions made. Which of the following is this an example of?
Answer : C
AAISM guidance identifies the ratio of incorrect predictions to total predictions (the error rate) as a key performance indicator (KPI) for evaluating AI model effectiveness. KPIs provide measurable values for assessing performance against objectives. Model validation is broader and occurs before production use, testing the model against predefined standards. Control self-assessment relates to governance processes, not predictive accuracy. Explainable decision-making refers to interpretability, not error-rate evaluation. Analyzing incorrect predictions against total predictions is therefore a performance measure, making it a KPI.
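The KPI in question is simple arithmetic, shown here as a minimal sketch. The prediction data and the `error_rate` helper are illustrative assumptions for demonstration.

```python
# Minimal sketch of the KPI described above: incorrect predictions
# divided by total predictions (the error rate). Data is illustrative.

def error_rate(predictions: list, actuals: list) -> float:
    """KPI: share of predictions that did not match the actual outcome."""
    assert len(predictions) == len(actuals)
    incorrect = sum(p != a for p, a in zip(predictions, actuals))
    return incorrect / len(predictions)

preds   = [1, 0, 1, 1, 0, 1, 0, 0]
actuals = [1, 0, 0, 1, 0, 1, 1, 0]
print(f"Error rate KPI: {error_rate(preds, actuals):.0%}")  # 2 of 8 wrong
```

Tracking this value over time against a target threshold is what distinguishes a KPI (an ongoing performance measure) from one-off model validation performed before deployment.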
AAISM Exam Content Outline -- AI Governance and Program Management (Performance Metrics and KPIs)
AI Security Management Study Guide -- Accuracy and Error Metrics
Which of the following is the MOST effective use of AI-enabled tools in a security operations center (SOC)?
Answer : A
The most effective SOC application of AI is detecting subtle, hard-to-find attack patterns, thereby reducing false negatives.
AAISM technical control guidance notes that AI in SOCs is best applied to:
- Enhance detection accuracy and sensitivity to anomalies.
- Assist analysts in identifying hidden patterns that traditional rule-based systems miss.
- Augment, rather than replace, human decision-making for high-confidence outcomes.
Options B and C incorrectly shift responsibility entirely to AI, which contradicts governance principles requiring human oversight. Option D is useful for efficiency, but the primary benefit comes from improved detection quality.
Therefore, the most effective use is to reduce false negatives and detect subtle attacks.
During the creation of a new large language model (LLM), an organization procured training data from multiple sources. Which of the following is MOST likely to address the CISO's security and privacy concerns?
Answer : B
AAISM guidance highlights data minimization as a critical practice for addressing both security and privacy concerns. By ensuring that only the minimum necessary data is collected and retained, the organization reduces the risk of sensitive information being exposed or misused during training. Data augmentation expands data but does not mitigate privacy risk. Classification organizes data but does not limit exposure. Data discovery helps locate sources but does not directly reduce risks. The control that directly aligns with privacy-by-design principles is data minimization.
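Data minimization can be made concrete with a small sketch: retain only the fields the training objective requires and drop everything else before data enters the pipeline. The field names and `minimize` helper below are hypothetical examples, not an AAISM specification.

```python
# Hedged sketch of data minimization before LLM training: keep only
# fields needed for the training objective and drop the rest.
# The allowed-field list is a hypothetical example.

ALLOWED_FIELDS = {"text", "language"}  # minimum needed for training

def minimize(record: dict) -> dict:
    """Strip a raw source record down to the allowed fields only."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "text": "Invoice approved for shipment #4421",
    "language": "en",
    "customer_email": "jane@example.com",  # sensitive: dropped
    "ip_address": "203.0.113.7",           # sensitive: dropped
}
print(minimize(raw))
```

Because the sensitive fields never reach the training set, they cannot be memorized by the model or exposed in a breach, which is why minimization addresses security and privacy at once.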
AAISM Exam Content Outline -- AI Risk Management (Data Privacy and Minimization)
AI Security Management Study Guide -- Privacy Safeguards in AI Training
The PRIMARY ethical concern of generative AI is that it may:
Answer : B
AAISM materials emphasize that the primary ethical concern with generative AI is the risk to information integrity. Generative models can create content that appears authentic but is fabricated, misleading, or manipulated. This undermines trust in information ecosystems and can have wide-reaching social, legal, and organizational impacts. While confidentiality breaches and bias are concerns, they are not the central ethical issue inherent to generative models. Availability is less relevant in this context. The most pressing concern is that generative AI may compromise the integrity of information.
AAISM Study Guide -- AI Risk Management (Ethical Risks of Generative AI)
ISACA AI Security Management -- Integrity Concerns in Generative Systems
A large language model (LLM) has been manipulated to provide advice that serves an attacker's objectives. Which of the following attack types does this situation represent?
Answer : D
AAISM categorizes the manipulation of an LLM at inference time, where crafted inputs cause outputs to serve attacker objectives, as an evasion attack. Evasion attacks exploit weaknesses in the model's decision-making boundaries by altering queries to produce compromised or misleading outputs. Privilege escalation refers to unauthorized access rights, data poisoning targets the training phase, and model inversion reconstructs training data. In this case, manipulation of outputs to align with an attacker's goals reflects an evasion attack.
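The boundary-crossing mechanism described above can be illustrated with a toy model. The threshold classifier, scores, and perturbation below are invented for illustration; real evasion attacks against LLMs involve crafted prompts rather than numeric scores, but the principle of pushing an input across a decision boundary at inference time is the same.

```python
# Toy illustration of an evasion attack: a crafted perturbation at
# inference time pushes an input across the model's decision boundary,
# so the already-trained model emits the attacker's preferred output.
# The classifier and numbers are illustrative, not from AAISM.

def classify(score: float, threshold: float = 0.5) -> str:
    """Stand-in model: flags an input as malicious above a score threshold."""
    return "malicious" if score >= threshold else "benign"

original_score = 0.62   # the model correctly flags this input
perturbation = -0.15    # attacker-crafted tweak to the input
evasive_score = original_score + perturbation

print(classify(original_score))  # malicious
print(classify(evasive_score))   # benign: the attack evaded detection
```

Note that the model itself is untouched: unlike data poisoning (which corrupts training) or model inversion (which extracts training data), evasion manipulates only the input presented at inference time.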
AAISM Exam Content Outline -- AI Risk Management (Adversarial Attack Types)
AI Security Management Study Guide -- Evasion and Manipulation Risks