Amazon AIF-C01 AWS Certified AI Practitioner Exam Practice Test

Page: 1 / 14
Total 177 questions
Question 1

A pharmaceutical company wants to analyze user reviews of new medications and provide a concise overview for each medication. Which solution meets these requirements?



Answer : B

Amazon Bedrock provides large language models (LLMs) that are optimized for natural language understanding and text summarization tasks, making it the best choice for creating concise summaries of user reviews. Time-series forecasting, classification, and image analysis (Rekognition) are not suitable for summarizing textual data. Reference: AWS Bedrock Documentation.
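
For illustration, a minimal sketch of how review text could be summarized with an LLM on Amazon Bedrock using boto3's bedrock-runtime Converse API; the model ID, Region, and sample reviews are placeholder assumptions, not part of the exam question:

import boto3

# Placeholder model ID; any text-generation model enabled in the account could be used.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

reviews = [
    "This medication relieved my symptoms within two days.",
    "Mild drowsiness at first, but it worked well overall.",
]

prompt = (
    "Summarize the following user reviews of a medication in two sentences:\n\n"
    + "\n".join(f"- {review}" for review in reviews)
)

# Send the prompt as a user message and read back the model's summary.
response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])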


Question 2

A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.

Which factor will drive the inference costs?

A. Number of tokens consumed
B. Temperature value
C. Amount of data used to train the LLM
D. Total training time

Answer : A

In generative AI models, such as those built on Amazon Bedrock, inference costs are driven by the number of tokens processed. A token can be as short as one character or as long as one word, and the more tokens consumed during the inference process, the higher the cost.

Option A (Correct): 'Number of tokens consumed': This is the correct answer because the inference cost is directly related to the number of tokens processed by the model.

Option B: 'Temperature value' is incorrect as it affects the randomness of the model's output but not the cost directly.

Option C: 'Amount of data used to train the LLM' is incorrect because training data size affects training costs, not inference costs.

Option D: 'Total training time' is incorrect because it relates to the cost of training the model, not the cost of inference.
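
A rough sketch of how token-driven pricing plays out; the per-1,000-token prices below are hypothetical placeholders, not actual AWS pricing:

# Per-1,000-token prices are hypothetical placeholders, not actual AWS pricing.
INPUT_PRICE_PER_1K = 0.003   # USD per 1,000 input tokens (placeholder)
OUTPUT_PRICE_PER_1K = 0.015  # USD per 1,000 output tokens (placeholder)

def estimate_inference_cost(input_tokens: int, output_tokens: int) -> float:
    # Cost scales with the number of tokens consumed, not with the temperature
    # setting, training data size, or training time.
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 2,000-token prompt that produces a 500-token response.
print(f"${estimate_inference_cost(2000, 500):.4f}")  # prints $0.0135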

AWS AI Practitioner Reference:

Understanding Inference Costs on AWS: AWS documentation highlights that inference costs for generative models are largely based on the number of tokens processed.


Question 3

A company wants to create a new solution by using AWS Glue. The company has minimal programming experience with AWS Glue.

Which AWS service can help the company use AWS Glue?

A. Amazon Q Developer
B. AWS Config
C. Amazon Personalize
D. Amazon Comprehend

Answer : A

AWS Glue is a serverless data integration service that enables users to extract, transform, and load (ETL) data. For a company with minimal programming experience, Amazon Q Developer provides an AI-powered assistant that can generate code, explain AWS services, and guide users through tasks like creating AWS Glue jobs. This makes it an ideal tool to help the company use AWS Glue effectively.

Exact Extract from AWS AI Documents:

From the AWS Documentation on Amazon Q Developer:

'Amazon Q Developer is an AI-powered assistant that helps developers by generating code, answering questions about AWS services, and providing step-by-step guidance for tasks such as building ETL pipelines with AWS Glue. It is designed to assist users with varying levels of expertise, including those with minimal programming experience.'

(Source: AWS Documentation, Amazon Q Developer Overview)

Detailed Explanation:

Option A: Amazon Q Developer

This is the correct answer. Amazon Q Developer can assist the company by generating AWS Glue scripts, explaining Glue concepts, and providing guidance on setting up ETL jobs, which is particularly helpful for users with limited programming experience.

Option B: AWS Config

AWS Config is used for tracking and managing resource configurations and compliance, not for assisting with coding or using services like AWS Glue. This option is incorrect.

Option C: Amazon Personalize

Amazon Personalize is a machine learning service for building recommendation systems, not for assisting with data integration or AWS Glue. This option is irrelevant.

Option D: Amazon Comprehend

Amazon Comprehend is an NLP service for analyzing text, not for helping users write code or use AWS Glue. This option does not meet the requirements.
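
For context, this is the kind of minimal AWS Glue ETL script (PySpark) that Amazon Q Developer could help generate or explain for a team with little programming experience; the database, table, and bucket names are placeholders:

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table that a Glue crawler has already cataloged (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Simple transform: keep only the columns needed downstream.
trimmed = source.select_fields(["order_id", "customer_id", "order_total"])

# Write the result to S3 in Parquet format (placeholder bucket).
glue_context.write_dynamic_frame.from_options(
    frame=trimmed,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()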


AWS Documentation: Amazon Q Developer Overview (https://aws.amazon.com/q/developer/)

AWS Glue Developer Guide: Introduction to AWS Glue (https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html)

AWS AI Practitioner Learning Path: Module on AWS Developer Tools and Services

Question 4

A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks.

Which solution will meet this requirement?



Answer : C

To manage the flow of data from Amazon S3 to SageMaker Studio notebooks securely, using a VPC with an S3 endpoint is the best solution.

Amazon SageMaker and S3 Integration:

Configuring SageMaker to use a Virtual Private Cloud (VPC) with an S3 endpoint allows the data flow between Amazon S3 and SageMaker Studio notebooks to occur over a private network.

This setup ensures that traffic between SageMaker and S3 does not traverse the public internet, enhancing security and performance.

Why Option C is Correct:

Secure Data Transfer: Ensures secure, private connectivity between SageMaker and S3, reducing exposure to potential security risks.

Direct Access to S3: Using an S3 endpoint in a VPC allows direct access to data in S3 without leaving the AWS network.

Why Other Options are Incorrect:

A. Amazon Inspector: Focuses on identifying security vulnerabilities, not managing data flow.

B. Amazon Macie: Monitors for sensitive data but does not manage data flow between S3 and SageMaker.

D. S3 Glacier Deep Archive: Is a storage class for archiving data, not for managing active data flow.
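
A minimal sketch, assuming placeholder VPC and route table IDs, of creating the S3 gateway endpoint with boto3 so that SageMaker-to-S3 traffic stays on the AWS network:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs for the VPC that SageMaker Studio is attached to.
VPC_ID = "vpc-0123456789abcdef0"
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"

# A gateway endpoint for S3 routes traffic to S3 through the AWS network
# instead of over the public internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[ROUTE_TABLE_ID],
)

print(response["VpcEndpoint"]["VpcEndpointId"])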


Question 5

What are tokens in the context of generative AI models?

A. Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units.
B. Tokens are mathematical representations of words.
C. Tokens are the pre-trained weights of a model.
D. Tokens are prompts or instructions given to a model.

Answer : A

Tokens in generative AI models are the smallest units that the model processes, typically representing words, subwords, or characters. They are essential for the model to understand and generate language, breaking down text into manageable parts for processing.

Option A (Correct): 'Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units': This is the correct definition of tokens in the context of generative AI models.

Option B: 'Mathematical representations of words' describes embeddings, not tokens.

Option C: 'Pre-trained weights of a model' refers to the parameters of a model, not tokens.

Option D: 'Prompts or instructions given to a model' refers to the queries or commands provided to a model, not tokens.
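
A very rough illustration of splitting text into tokens; real models use learned subword tokenizers (for example, byte-pair encoding), so actual token boundaries and counts will differ:

import re

# Naive illustration only: split text into words and punctuation marks.
# Production LLMs use learned subword tokenizers, not a simple regex.
def naive_tokenize(text: str) -> list:
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("Tokens are the basic units a model operates on.")
print(tokens)       # ['Tokens', 'are', 'the', 'basic', 'units', 'a', 'model', 'operates', 'on', '.']
print(len(tokens))  # 10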

AWS AI Practitioner Reference:

Understanding Tokens in NLP: AWS provides detailed explanations of how tokens are used in natural language processing tasks by AI models, such as in Amazon Comprehend and other AWS AI services.


Question 6

A retail company is tagging its product inventory. A tag is automatically assigned to each product based on the product description. The company created one product category by using a large language model (LLM) on Amazon Bedrock in few-shot learning mode.

The company collected a labeled dataset and wants to scale the solution to all product categories.

Which solution meets these requirements?



Answer : D

When you have a labeled dataset and need to scale a generative AI solution for more complex or diverse product categories, fine-tuning the foundation model with your dataset is the best approach for consistent, accurate tagging.

D is correct:

''Fine-tuning a foundation model with your labeled data allows the model to generalize to new categories and improve tagging accuracy for your inventory.'' (Reference: Amazon Bedrock Fine-Tuning, AWS Generative AI)


A (zero-shot) and B (prompt templates) do not leverage the labeled data or scale as accurately.

C (continued pre-training) uses unlabeled data, so it does not make use of the labeled dataset the company collected.
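
A minimal sketch of starting a fine-tuning (model customization) job on Amazon Bedrock with boto3; the job name, IAM role ARN, base model ID, S3 URIs, and hyperparameter values are placeholder assumptions, and valid hyperparameter names vary by base model:

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# All names, ARNs, and S3 URIs below are placeholders.
response = bedrock.create_model_customization_job(
    jobName="product-tagging-fine-tune",
    customModelName="product-tagger-v1",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",  # uses the labeled dataset, unlike continued pre-training
    trainingDataConfig={"s3Uri": "s3://example-bucket/labeled-products/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/custom-model-output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)

print(response["jobArn"])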


Question 7

A company's large language model (LLM) is experiencing hallucinations.

How can the company decrease hallucinations?

A. Set up Agents for Amazon Bedrock to supervise the model training.
B. Use data pre-processing and remove any data that causes hallucinations.
C. Decrease the temperature inference parameter for the model.
D. Use a foundation model (FM) that is trained to not hallucinate.

Answer : C

Hallucinations in large language models (LLMs) occur when the model generates outputs that are factually incorrect, irrelevant, or not grounded in the input data. To mitigate hallucinations, adjusting the model's inference parameters, particularly the temperature, is a well-documented approach in AWS AI Practitioner resources. The temperature parameter controls the randomness of the model's output. A lower temperature makes the model more deterministic, reducing the likelihood of generating creative but incorrect responses, which are often the cause of hallucinations.

Exact Extract from AWS AI Documents:

From the AWS documentation on Amazon Bedrock and LLMs:

'The temperature parameter controls the randomness of the generated text. Higher values (e.g., 0.8 or above) increase creativity but may lead to less coherent or factually incorrect outputs, while lower values (e.g., 0.2 or 0.3) make the output more focused and deterministic, reducing the likelihood of hallucinations.'

(Source: AWS Bedrock User Guide, Inference Parameters for Text Generation)

Detailed Explanation:

Option A: Set up Agents for Amazon Bedrock to supervise the model training. Agents for Amazon Bedrock are used to automate tasks and integrate LLMs with external tools, not to supervise model training or directly address hallucinations. This option is incorrect as it does not align with the purpose of Agents in Bedrock.

Option B: Use data pre-processing and remove any data that causes hallucinations. While data pre-processing can improve model performance, identifying and removing specific data that causes hallucinations is impractical because hallucinations are often a result of the model's generative process rather than specific problematic data points. This approach is not directly supported by AWS documentation for addressing hallucinations.

Option C: Decrease the temperature inference parameter for the model. This is the correct approach. Lowering the temperature reduces the randomness in the model's output, making it more likely to stick to factual and contextually relevant responses. AWS documentation explicitly mentions adjusting inference parameters like temperature to control output quality and mitigate issues like hallucinations.

Option D: Use a foundation model (FM) that is trained to not hallucinate. No foundation model is explicitly trained to 'not hallucinate,' as hallucinations are an inherent challenge in LLMs. While some models may be fine-tuned for specific tasks to reduce hallucinations, this is not a standard feature of foundation models available on Amazon Bedrock.
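
A minimal sketch of passing a low temperature through the Converse API on Amazon Bedrock; the model ID and prompt are placeholder assumptions:

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID; any text model enabled in the account could be used.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "List the active ingredients in aspirin."}]}],
    # A low temperature makes the output more deterministic, which helps reduce
    # hallucinated (creative but incorrect) responses.
    inferenceConfig={"temperature": 0.2, "topP": 0.9, "maxTokens": 256},
)

print(response["output"]["message"]["content"][0]["text"])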


AWS Bedrock User Guide: Inference Parameters for Text Generation (https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html)

AWS AI Practitioner Learning Path: Module on Large Language Models and Inference Configuration

Amazon Bedrock Developer Guide: Managing Model Outputs (https://docs.aws.amazon.com/bedrock/latest/devguide/)
