Which approach should an organization prioritize to effectively verify the security of its AI models?
Answer: B
The AAISM standard explicitly states that traditional penetration tests alone are insufficient for AI systems. Effective AI security testing requires:
* AI-specific threat modeling (e.g., data poisoning, prompt injection, model theft)
* Adversarial attack simulations (white-box, black-box, gradient-based attacks)
* Evaluation of robustness and manipulation resistance
Option B captures these requirements precisely.
Options A, C, and D do not address AI-specific attack vectors.
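To make the black-box simulation idea above concrete, here is a minimal, illustrative sketch (not an AAISM-prescribed tool; the function name and epsilon budget are assumptions) of a query-only evasion probe that randomly perturbs an input within an eps-ball and reports the first variant the model misclassifies:

```python
import random

def black_box_evasion(predict, x, true_label, eps=0.3, tries=200, seed=0):
    """Query-only (black-box) evasion probe: randomly perturb x within an
    eps-ball and return the first perturbed input the model misclassifies,
    or None if no evasion is found within the query budget."""
    rng = random.Random(seed)
    for _ in range(tries):
        x_adv = [xi + rng.uniform(-eps, eps) for xi in x]
        if predict(x_adv) != true_label:
            return x_adv
    return None
```

A model that yields an evasion under such a small perturbation budget fails the robustness evaluation; gradient-based (white-box) attacks probe the same property more efficiently when model internals are available.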
=============================================
An AI research team is developing a natural language processing model that relies on several open-source libraries. Which of the following is the team's BEST course of action to ensure the integrity of the software packages used?
Answer: B
AAISM's technical control guidance emphasizes that when using open-source libraries, the best safeguard for integrity is to scan the packages for malware before installation. This ensures that compromised or malicious code does not enter the AI system environment. Maintaining lists aids consistency but not security. Always using the latest versions may introduce unverified vulnerabilities. Retraining models addresses functionality but not software integrity. Therefore, the strongest protective measure is pre-installation malware scanning of open-source packages.
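Malware scanning is typically paired with integrity verification of the downloaded artifact itself. The sketch below is a minimal illustration in Python (the function name and the source of the trusted digest are assumptions, not an AAISM-mandated control): it compares a downloaded package file's SHA-256 digest against a trusted value before installation proceeds.

```python
import hashlib

def verify_package_hash(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded package's SHA-256 digest against a trusted value
    (e.g., one published by the project or pinned in a lock file)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

In practice this check complements, rather than replaces, malware scanning: a matching hash proves the artifact was not tampered with in transit, while scanning inspects the content itself.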
AAISM Exam Content Outline -- AI Technologies and Controls (Software Supply Chain Security)
AI Security Management Study Guide -- Open-Source Package Risk Mitigation
============================================
When preparing for an AI incident, which of the following should be done FIRST?
Answer: B
AAISM incident response guidance states the first foundational step is forming a cross-functional AI-aware incident response team, including model developers, data stewards, security leads, and compliance officers. Without the team established, recovery (C), containment (D), or communication channels (A) cannot be effectively designed or executed.
============================================
Which phase of the AI data life cycle presents the GREATEST inherent risk?
Answer: D
AAISM identifies training as the phase with the highest inherent risk because this is where:
* data poisoning can occur
* sensitive data may be exposed
* bias can be introduced
* model inversion risks originate
* security and privacy vulnerabilities are embedded
Preparation (C) carries risk but is less critical. Maintenance (B) and monitoring (A) involve operational safeguards, not foundational risk creation.
============================================
Within an incident handling process, which of the following would BEST help restore end user trust with an AI system?
Answer: C
Restoring end user trust during incident handling requires visible, immediate assurance that system outcomes are safe and appropriate. AAISM prescribes human oversight and approval gates for high-risk AI decisions, with human validation of outputs before use as a primary control to maintain trust while technical remediation is underway. Prioritization (A) and monitoring (B) aid operations but do not directly rebuild user confidence in outcomes. Post-incident improvements (D) are essential for long-term assurance but do not provide the immediate trust restoration that supervised, human-validated outputs deliver.
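The approval-gate control described above can be sketched as a simple wrapper that holds high-risk outputs for human review instead of releasing them automatically. This is an illustrative toy (the class name, risk threshold, and queue mechanism are assumptions, not AAISM specifications):

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalGate:
    """Hold high-risk AI outputs for human validation before release."""
    risk_threshold: float = 0.7
    review_queue: list = field(default_factory=list)

    def submit(self, output: str, risk_score: float):
        if risk_score >= self.risk_threshold:
            self.review_queue.append(output)  # queued for a human reviewer
            return None                       # nothing is released automatically
        return output                         # low risk: release directly
```

During incident handling the threshold can be lowered (even to 0.0) so that every output passes through human validation until trust in the system is re-established.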
============================================
Which defense is MOST effective against cyberattacks that alter input data to avoid detection?
Answer: A
AAISM lists adversarial training as the strongest method to harden models against input manipulation attacks. By exposing models to adversarial examples during training, the system learns to resist evasion techniques.
Access restriction (B) protects confidentiality but does not prevent evasion. Monitoring (C) is reactive rather than preventive. Differential privacy (D) protects individual records in the data, not the model against adversarial inputs.
============================================
An attacker crafts inputs to a large language model (LLM) to exploit output integrity controls. Which of the following types of attacks is this an example of?
Answer: A
According to the AAISM framework, prompt injection is the act of deliberately crafting malicious or manipulative inputs to override, bypass, or exploit the model's intended controls. In this case, the attacker is targeting the integrity of the model's outputs by exploiting weaknesses in how it interprets and processes prompts. Jailbreaking is a subtype of prompt injection specifically designed to override safety restrictions, while evasion attacks target classification boundaries in other ML contexts, and remote code execution refers to system-level exploitation outside of the AI inference context. The most accurate classification of this attack is prompt injection.
AAISM Exam Content Outline -- AI Technologies and Controls (Prompt Security and Input Manipulation)
AI Security Management Study Guide -- Threats to Output Integrity
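One simple input-side mitigation for the prompt-injection pattern described above is a heuristic screen applied before user text reaches the model. The sketch below is purely illustrative (the pattern list and function name are assumptions; real defenses layer classifiers, output validation, and privilege separation rather than relying on a denylist):

```python
import re

# Illustrative denylist of common instruction-override phrasings.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?(system|above) prompt",
    r"reveal (your|the) (system prompt|instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag user input that matches known instruction-override phrasings."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A flagged input would be rejected or routed for review; because attackers rephrase freely, such screening only raises the attack cost and must be combined with output-integrity checks downstream.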