A company wants to use large language models (LLMs) with Amazon Bedrock to develop a chat interface for the company's product manuals. The manuals are stored as PDF files.
Which solution meets these requirements MOST cost-effectively?
Answer : A
Using Amazon Bedrock with large language models (LLMs) allows for efficient utilization of AI to answer queries based on context provided in product manuals. To achieve this cost-effectively, the company should avoid unnecessary use of resources.
Option A (Correct): 'Use prompt engineering to add one PDF file as context to the user prompt when the prompt is submitted to Amazon Bedrock': This is the most cost-effective solution. By using prompt engineering, only the relevant content from one PDF file is added as context to each query. This approach minimizes the amount of data processed, which helps in reducing costs associated with LLMs' computational requirements.
Option B: 'Use prompt engineering to add all the PDF files as context to the user prompt when the prompt is submitted to Amazon Bedrock' is incorrect. Including all PDF files would increase costs significantly due to the large context size processed by the model.
Option C: 'Use all the PDF documents to fine-tune a model with Amazon Bedrock' is incorrect. Fine-tuning a model is more expensive than using prompt engineering, especially if done for multiple documents.
Option D: 'Upload PDF documents to an Amazon Bedrock knowledge base' is incorrect because Amazon Bedrock does not have a built-in knowledge base feature for directly managing and querying PDF documents.
AWS AI Practitioner Reference:
Prompt Engineering for Cost-Effective AI: AWS emphasizes the importance of using prompt engineering to minimize costs when interacting with LLMs. By carefully selecting relevant context, users can reduce the amount of data processed and save on expenses.
A company wants to use generative AI to increase developer productivity and software development. The company wants to use Amazon Q Developer.
What can Amazon Q Developer do to help the company meet these requirements?
Answer : C
Amazon Q Developer is a tool designed to assist developers in increasing productivity by generating code snippets, managing reference tracking, and handling open-source license tracking. These features help developers by automating parts of the software development process.
Option A (Correct): 'Create software snippets, reference tracking, and open-source license tracking': This is the correct answer because these are key features that help developers streamline and automate tasks, thus improving productivity.
Option B: 'Run an application without provisioning or managing servers' is incorrect as it refers to AWS Lambda or AWS Fargate, not Amazon Q Developer.
Option C: 'Enable voice commands for coding and providing natural language search' is incorrect because this is not a function of Amazon Q Developer.
Option D: 'Convert audio files to text documents by using ML models' is incorrect as this refers to Amazon Transcribe, not Amazon Q Developer.
AWS AI Practitioner Reference:
Amazon Q Developer Features: AWS documentation outlines how Amazon Q Developer supports developers by offering features that reduce manual effort and improve efficiency.
A company deployed an AI/ML solution to help customer service agents respond to frequently asked questions. The questions can change over time. The company wants to give customer service agents the ability to ask questions and receive automatically generated answers to common customer questions. Which strategy will meet these requirements MOST cost-effectively?
Answer : D
RAG combines large pre-trained models with retrieval mechanisms to fetch relevant context from a knowledge base. This approach is cost-effective as it eliminates the need for frequent model retraining while ensuring responses are contextually accurate and up to date. Reference: AWS RAG Techniques.
A company is using domain-specific models. The company wants to avoid creating new models from the beginning. The company instead wants to adapt pre-trained models to create models for new, related tasks.
Which ML strategy meets these requirements?
Answer : B
Transfer learning is the correct strategy for adapting pre-trained models for new, related tasks without creating models from scratch.
Transfer Learning:
Involves taking a pre-trained model and fine-tuning it on a new dataset for a related task.
This approach is efficient because it leverages existing knowledge from a model trained on a large dataset, requiring less data and computational resources than training a new model from scratch.
Why Option B is Correct:
Adaptation of Pre-trained Models: Allows for adapting existing models to new tasks, which aligns with the company's goal of not starting from scratch.
Efficiency and Speed: Speeds up the model development process by building on the knowledge of pre-trained models.
Why Other Options are Incorrect:
A . Increase the number of epochs: Does not address the strategy of reusing pre-trained models.
C . Decrease the number of epochs: Similarly, does not apply to adapting pre-trained models.
D . Use unsupervised learning: Does not involve using pre-trained models for new tasks.
A company is building an AI application to summarize books of varying lengths. During testing, the application fails to summarize some books. Why does the application fail to summarize some books?
Answer : D
Comprehensive and Detailed
Foundation models have a context window (max tokens), which limits the size of the input text (prompt + instructions).
If the input (e.g., a very long book) exceeds this limit, the model cannot process it, causing failure.
Temperature (A) and Top P (C) control randomness, not input size.
Fine-tuning (B) is irrelevant to input truncation failures.
Reference:
AWS Documentation -- Amazon Bedrock Model Parameters (context size limits)
A company wants to set up private access to Amazon Bedrock APIs from the company's AWS account. The company also wants to protect its data from internet exposure.
Answer : D
Comprehensive and Detailed
AWS PrivateLink enables private connectivity between your VPC and supported AWS services (like Amazon Bedrock) without sending traffic over the public internet.
CloudFront (A) is for CDN and content delivery, not private service connections.
AWS Glue (B) is for ETL/data catalog, not networking.
Lake Formation (C) provides governance for data lakes, not API network isolation.
Reference:
AWS Documentation -- Access Amazon Bedrock with PrivateLink
Which prompting attack directly exposes the configured behavior of a large language model (LLM)?
Answer : D
Comprehensive and Detailed
A prompt template defines how the model is structured and guided (system prompts, roles, guardrails).
An attack that reveals or leaks this prompt template is known as a prompt extraction attack.
The other options (persona switching, exploiting friendliness, ignoring prompts) describe adversarial techniques but do not directly expose the internal configured behavior.
Reference:
AWS Responsible AI -- Prompt Injection & Extraction Attacks