iSQI CT-AI Certified Tester AI Testing Exam Practice Test

Page: 1 / 14
Total 80 questions
Question 1

Max. Score: 2

AI-enabled medical devices are nowadays used to automate certain parts of medical diagnostic processes. Since these are life-critical processes, the relevant authorities are considering introducing suitable certifications for these AI-enabled medical devices. This certification may involve several facets of AI testing (I - V).

I. Autonomy

II. Maintainability

III. Safety

IV. Transparency

V. Side Effects

Which ONE of the following options contains the three MOST required aspects to be satisfied for the above scenario of certification of AI-enabled medical devices?

SELECT ONE OPTION



Answer : C

For AI-enabled medical devices, the most required aspects for certification are safety, transparency, and side effects. Here's why:

Safety (Aspect III): Critical for ensuring that the AI system does not cause harm to patients.

Transparency (Aspect IV): Important for understanding and verifying the decisions made by the AI system.

Side Effects (Aspect V): Necessary to identify and mitigate any unintended consequences of the AI system.

Why Not Other Options:

Autonomy and Maintainability (Aspects I and II): While important, they are secondary to the immediate concerns of safety, transparency, and managing side effects in life-critical processes.


Question 2

When verifying that an autonomous AI-based system is acting appropriately, which of the following are MOST important to include?



Answer : C

When verifying autonomous AI-based systems, a critical aspect is ensuring that they maintain an appropriate level of autonomy while only requesting human intervention when necessary. If an AI system unnecessarily asks for human input, it defeats the purpose of autonomy and can:

Slow down operations.

Reduce trust in the system.

Indicate improper confidence thresholds in decision-making.

This is particularly crucial in autonomous vehicles, AI-driven financial trading, and robotic process automation, where excessive human intervention would hinder performance.
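The idea above can be sketched as a test of an autonomous component's escalation behavior. This is a minimal illustrative sketch, not from any standard library: the `Agent` class, its `decide` method, and the confidence threshold are all assumptions made for the example.

```python
# Hypothetical sketch: verify an autonomous component requests human
# intervention only when its decision confidence is below a threshold.
# `Agent`, `decide`, and the threshold value are illustrative assumptions.

INTERVENTION_THRESHOLD = 0.6  # assumed confidence cutoff

class Agent:
    def decide(self, confidence: float) -> str:
        # Act autonomously when confident; escalate to a human otherwise.
        if confidence >= INTERVENTION_THRESHOLD:
            return "act"
        return "request_human"

def intervention_rate(agent, confidences):
    """Fraction of decisions escalated to a human."""
    results = [agent.decide(c) for c in confidences]
    return results.count("request_human") / len(results)

agent = Agent()
# High-confidence inputs: the agent should almost never escalate.
rate = intervention_rate(agent, [0.9, 0.85, 0.7, 0.95, 0.65])
print(rate)  # 0.0 for this all-confident batch
```

A test suite for autonomy could then assert that the intervention rate stays below an agreed bound on representative high-confidence workloads.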

Why are the other options incorrect?

A. Test cases to verify that the system automatically confirms the correct classification of training data: this is relevant for verifying training consistency but not for autonomy validation.

B. Test cases to detect the system appropriately automating its data input: while relevant, data automation does not directly address the verification of autonomy.

D. Test cases to verify that the system automatically suppresses invalid output data: this focuses on output filtering rather than decision-making autonomy.

Thus, the most critical test case for verifying autonomous AI-based systems is ensuring that they do not unnecessarily request human intervention.

Reference from ISTQB Certified Tester AI Testing Study Guide:

Section 8.2 - Testing Autonomous AI-Based Systems states that it is crucial to test whether the system requests human intervention only when necessary and does not disrupt autonomy.


Question 3

Which of the following aspects is a challenge when handling test data for an AI-based system?



Answer : A

Handling test data in AI-based systems presents numerous challenges, particularly in terms of data privacy and confidentiality. AI models often require vast amounts of training data, some of which may contain personal, sensitive, or confidential information. Ensuring compliance with data protection laws (e.g., GDPR, CCPA) and implementing secure data-handling practices is a major challenge in AI testing.

Why is Option A Correct?

Data Privacy Regulations

AI-based systems frequently process personal data, such as images, names, and transaction details, leading to privacy concerns.

Compliance with regulations such as GDPR (General Data Protection Regulation) and CCPA (California Consumer Privacy Act) requires proper anonymization, encryption, or redaction of sensitive data before using it for testing.

Data Security Challenges

AI models may leak confidential information if proper security measures are not in place.

Protecting training and test data from unauthorized access is crucial to maintaining trust and compliance.

Legal and Ethical Considerations

Organizations must obtain legal approval before using certain datasets, especially those containing health records, financial data, or personally identifiable information (PII).

Testers may need to employ synthetic data or data masking techniques to minimize exposure risks.
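The data-masking technique mentioned above can be sketched as follows. This is a minimal, assumed example: the field names, the salt value, and the `mask_record` helper are all illustrative, not part of any specific framework.

```python
# Hypothetical sketch of data masking for test data: replace direct
# identifiers with stable pseudonyms before the data enters a test set.
# Field names, salt, and helper names are illustrative assumptions.
import hashlib

def pseudonymize(value: str, salt: str = "test-salt") -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:10]

def mask_record(record: dict, pii_fields=("name", "email")) -> dict:
    """Mask configured PII fields; leave other features untouched."""
    return {
        k: pseudonymize(v) if k in pii_fields else v
        for k, v in record.items()
    }

raw = {"name": "Alice", "email": "alice@example.com", "age": 34}
masked = mask_record(raw)
print(masked["age"])               # non-PII feature preserved: 34
print(masked["name"] != "Alice")   # True: identifier replaced
```

Because the pseudonyms are stable, records belonging to the same person still link together in the test set, which keeps the data useful for testing while removing direct identifiers.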

Why Are the Other Options Incorrect?

(B) Output data or intermediate data

While analyzing output data is important, it does not pose a significant challenge compared to handling personal or confidential test data.

(C) Video frame speed or aspect ratio

These are technical challenges in processing AI models but do not fall under data privacy or ethical considerations.

(D) Data frameworks or machine learning frameworks

Choosing an appropriate ML framework (e.g., TensorFlow, PyTorch) is important, but it is not a major challenge related to test data handling.

Reference from ISTQB Certified Tester AI Testing Study Guide

Handling personal or confidential data is a critical challenge in AI testing: 'Personal or otherwise confidential data may need special techniques for sanitization, encryption, or redaction. Legal approval for use may also be required.'

Thus, option A is the correct answer, as data privacy and confidentiality are major challenges when handling test data for AI-based systems.


Question 4

Which ONE of the following tests is MOST likely to describe a useful test to help detect different kinds of biases in the ML pipeline?

SELECT ONE OPTION



Answer : B

Detecting biases in the ML pipeline involves various tests to ensure fairness and accuracy throughout the ML process.

Testing the distribution shift in the training data for inappropriate bias (A): This involves checking if there is any shift in the data distribution that could lead to bias in the model. It is an important test but not the most direct method for detecting biases.

Test the model during model evaluation for data bias (B): This is a critical stage where the model is evaluated to detect any biases in the data it was trained on. It directly addresses potential data biases in the model.

Testing the data pipeline for any sources for algorithmic bias (C): This test is crucial as it helps identify biases that may originate from the data processing and transformation stages within the pipeline. Detecting sources of algorithmic bias ensures that the model does not inherit biases from these processes.

Check the input test data for potential sample bias (D): While this is an important step, it focuses more on the input data and less on the overall data pipeline.

Hence, the test most likely to help detect different kinds of biases in the ML pipeline is B: testing the model during model evaluation for data bias.
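Evaluating the model for data bias, as described above, often means comparing a quality metric across subgroups of the evaluation data. Below is a minimal assumed sketch: the group labels, the accuracy metric, and the disparity threshold are illustrative choices, not prescribed by the syllabus.

```python
# Hypothetical sketch: during model evaluation, compare a simple metric
# (accuracy) across data subgroups to surface potential data bias.
# Group labels and the disparity threshold are illustrative assumptions.

def group_accuracy(y_true, y_pred, groups):
    """Accuracy per subgroup, e.g. split by a protected attribute."""
    acc = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        acc[g] = sum(t == p for t, p in pairs) / len(pairs)
    return acc

def bias_check(y_true, y_pred, groups, max_gap=0.1):
    """Flag the evaluation if subgroup accuracies diverge too much."""
    acc = group_accuracy(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values()) <= max_gap, acc

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
ok, acc = bias_check(y_true, y_pred, groups)
print(ok, acc)  # a large accuracy gap between groups fails the check
```

A failed check during model evaluation is a prompt to inspect the training data for sample or inappropriate bias, not proof of a specific cause.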


ISTQB CT-AI Syllabus Section 8.3 on Testing for Algorithmic, Sample, and Inappropriate Bias discusses various tests that can be performed to detect biases at different stages of the ML pipeline.

Sample Exam Questions document, Question #32 highlights the importance of evaluating the model for biases.

Question 5

Which ONE of the following options describes a scenario of A/B testing the LEAST?

SELECT ONE OPTION



Answer : C

A/B testing, also known as split testing, is a method used to compare two versions of a product or system to determine which one performs better. It is widely used in web development, marketing, and machine learning to optimize user experiences and model performance. Here's why option C is the least descriptive of an A/B testing scenario:

Understanding A/B Testing:

In A/B testing, two versions (A and B) of a system or feature are tested against each other. The objective is to measure which version performs better based on predefined metrics such as user engagement, conversion rates, or other performance indicators.

Application in Machine Learning:

In ML systems, A/B testing might involve comparing two different models, algorithms, or system configurations on the same set of data to observe which yields better results.

Why Option C is the Least Descriptive:

Option C describes comparing the performance of an ML system on two different input datasets. This scenario focuses on the input data variation rather than the comparison of system versions or features, which is the essence of A/B testing. A/B testing typically involves a controlled experiment with two versions being tested under the same conditions, not different datasets.

Clarifying the Other Options:

A. A comparison of two different websites for the same company to observe from a user acceptance perspective: this is a classic example of A/B testing where two versions of a website are compared.

B. A comparison of two different offers in a recommendation system to decide on the more effective offer for the same users: this is another example of A/B testing in a recommendation system.

D. A comparison of the performance of two different ML implementations on the same input data: this fits the A/B testing model where two implementations are compared under the same conditions.
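The controlled comparison in option D can be sketched as follows. The two toy "models" and the accuracy metric are illustrative stand-ins; the point is that both variants are scored on the same input data.

```python
# Hypothetical A/B-style comparison: two ML implementations are scored
# on the SAME input data and their accuracies are compared.
# The threshold "models" below are illustrative stand-ins.

def model_a(x):
    return x > 0.5          # variant A: higher decision threshold

def model_b(x):
    return x > 0.4          # variant B: lower decision threshold

def accuracy(model, xs, labels):
    preds = [model(x) for x in xs]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Identical evaluation data for both variants -- the controlled condition
# that distinguishes A/B testing from comparing different input datasets.
xs = [0.1, 0.45, 0.6, 0.9]
labels = [False, True, True, True]

acc_a = accuracy(model_a, xs, labels)
acc_b = accuracy(model_b, xs, labels)
print("A:", acc_a, "B:", acc_b)
```

Varying only the implementation while holding the data fixed is what makes this a valid A/B experiment; varying the data instead (as in option C) would confound the comparison.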


ISTQB CT-AI Syllabus, Section 9.4, A/B Testing, explains the methodology and application of A/B testing in various contexts.

'Understanding A/B Testing' (ISTQB CT-AI Syllabus).

Question 6

Which of the following are the three activities in the data acquisition activities for data preparation?



Answer : C

According to the ISTQB Certified Tester AI Testing (CT-AI) syllabus, data acquisition, a critical step in data preparation for machine learning (ML) workflows, consists of three key activities:

Identification: This step involves determining the types of data required for training and prediction. For example, in a self-driving car application, data types such as radar, video, laser imaging, and LiDAR (Light Detection and Ranging) data may be identified as necessary sources.

Gathering: After identifying the required data types, the sources from which the data will be collected are determined, along with the appropriate collection methods. An example could be gathering financial data from the International Monetary Fund (IMF) and integrating it into an AI-based system.

Labeling: This process involves annotating or tagging the collected data to make it meaningful for supervised learning models. Labeling is an essential activity that helps machine learning algorithms differentiate between categories and make accurate predictions.

These activities ensure that the data is suitable for training and testing machine learning models, forming the foundation of data preparation.
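The three activities can be pictured as successive pipeline stages. This is a toy sketch only: the data types, the stand-in records, and the even/odd labeling rule are all invented for illustration.

```python
# Hypothetical sketch of the three data-acquisition activities
# (identification, gathering, labeling) as pipeline steps.
# Data types, records, and the labeling rule are illustrative.

def identify_data_types():
    """Identification: decide which data types the model needs."""
    return ["camera", "lidar"]

def gather(data_types):
    """Gathering: collect raw samples from the chosen sources."""
    # Stand-in records; a real system would pull from sensors or APIs.
    return [{"type": t, "value": i} for i, t in enumerate(data_types)]

def label(samples):
    """Labeling: annotate each sample for supervised learning."""
    return [dict(s, label=("even" if s["value"] % 2 == 0 else "odd"))
            for s in samples]

dataset = label(gather(identify_data_types()))
print(len(dataset))  # 2
```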


Question 7

Before deployment of an AI based system, a developer is expected to demonstrate in a test environment how decisions are made. Which of the following characteristics does decision making fall under?



Answer : A

Explainability in AI-based systems refers to the ease with which users can determine how the system reaches a particular result. It is a crucial aspect when demonstrating AI decision-making, as it ensures that decisions made by AI models are transparent, interpretable, and understandable by stakeholders.

Before deploying an AI-based system, a developer must validate how decisions are made in a test environment. This process falls under the characteristic of explainability because it involves clarifying how an AI model arrives at its conclusions, which helps build trust in the system and meet regulatory and ethical requirements.

Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:

ISTQB CT-AI Syllabus (Section 2.7: Transparency, Interpretability, and Explainability)

'Explainability is considered to be the ease with which users can determine how the AI-based system comes up with a particular result'.

'Most users are presented with AI-based systems as 'black boxes' and have little awareness of how these systems arrive at their results. This ignorance may even apply to the data scientists who built the systems. Occasionally, users may not even be aware they are interacting with an AI-based system'.

ISTQB CT-AI Syllabus (Section 8.6: Testing the Transparency, Interpretability, and Explainability of AI-based Systems)

'Testing the explainability of AI-based systems involves verifying whether users can understand and validate AI-generated decisions. This ensures that AI systems remain accountable and do not make incomprehensible or biased decisions'.

Contrast with Other Options:

Autonomy (B): Autonomy relates to an AI system's ability to operate independently without human oversight. While decision-making is a key function of autonomy, the focus here is on demonstrating the reasoning behind decisions, which falls under explainability rather than autonomy.

Self-learning (C): Self-learning systems adapt based on previous data and experiences, which is different from making decisions understandable to humans.

Non-determinism (D): AI-based systems are often probabilistic and non-deterministic, meaning they do not always produce the same output for the same input. This can make testing and validation more challenging, but it does not relate to explaining the decision-making process.

Conclusion: Since the question explicitly asks about the characteristic under which decision-making falls when being demonstrated before deployment, explainability is the correct choice because it ensures that AI decisions are transparent, understandable, and accountable to stakeholders.
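One common way a developer might demonstrate decision-making in a test environment is to report per-feature contributions to a score. The linear model, feature names, and weights below are illustrative assumptions; real systems might use SHAP- or LIME-style tooling instead.

```python
# Hypothetical explainability sketch: report each feature's contribution
# to a linear scoring model's decision. Weights and feature names are
# illustrative assumptions, not from any real system.

WEIGHTS = {"income": 0.5, "debt": -0.8, "age": 0.1}  # assumed model weights

def explain(features: dict):
    """Return the decision plus each feature's contribution to it."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= 0 else "reject"
    return decision, contributions

decision, why = explain({"income": 3.0, "debt": 2.0, "age": 4.0})
print(decision)
for name, contrib in sorted(why.items()):
    # Signed contributions show WHY the model reached its decision.
    print(f"{name}: {contrib:+.2f}")
```

Presenting the signed contributions alongside the decision lets stakeholders verify, in the test environment, how the result was reached, which is exactly what the explainability characteristic demands.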

