Amazon AWS Certified Developer - Associate DVA-C02 Exam Practice Test

Page: 1 / 14
Total 368 questions
Question 1

A company has implemented a pipeline in AWS CodePipeline. The company is using a single AWS account and does not use AWS Organizations. The company needs to test its AWS CloudFormation templates in its primary AWS Region and a disaster recovery Region.

Which solution will meet these requirements with the MOST operational efficiency?



Answer : B


Question 2

A developer is preparing to deploy an AWS CloudFormation stack for an application from a template that includes an IAM user. The developer needs to configure the application's resources to retain the IAM user after successful creation. However, the developer also needs to configure the application to delete the IAM user if the stack rolls back.



Answer : B

Why Option B is Correct: The RetainExceptOnCreate deletion policy ensures that the IAM user is retained after successful stack creation but is deleted if the stack creation fails or rolls back. This meets both requirements.

Why Other Options are Incorrect:

Option A: The Retain policy retains the resource regardless of stack status and does not delete the IAM user upon rollback.

Option C: Updating the service role for termination protection does not address the specific deletion behavior for the IAM user.

Option D: Stack policy controls updates, not resource deletion behavior during rollbacks.

AWS Documentation Reference:

CloudFormation DeletionPolicy Attribute
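The behavior described above can be expressed as a minimal template fragment. This is an illustrative sketch, not the exam's actual template; the logical ID and user name are made up.

```yaml
Resources:
  AppUser:                                # illustrative logical ID
    Type: AWS::IAM::User
    # Retained after a successful stack creation, but deleted if the
    # create operation fails and the stack rolls back.
    DeletionPolicy: RetainExceptOnCreate
    Properties:
      UserName: app-service-user          # illustrative name
```

Note that `DeletionPolicy` is a resource attribute, not a property, so it sits at the same level as `Type`.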


Question 3

A company is creating a new application that gives users the ability to upload and share short video files. The average size of the video files is 10 MB. After a user uploads a file, a message needs to be placed into an Amazon Simple Queue Service (Amazon SQS) queue so the file can be processed. The files need to be accessible for processing within 5 minutes.

Which solution will meet these requirements MOST cost-effectively?



Answer : B

Why Option B is Correct:

Amazon S3 Standard provides immediate access to files and is cost-effective for files that need to be accessed within 5 minutes.

By placing only the S3 object location in the SQS message, you avoid transferring large files through the queue, which is both more efficient and scalable.

Why Other Options are Incorrect:

Option A: S3 Glacier Deep Archive is designed for archival storage with retrieval times measured in hours (standard retrievals complete within 12 hours), which does not meet the 5-minute requirement.

Option C: Amazon EBS is designed for block storage attached to EC2 instances, which adds unnecessary complexity and cost.

Option D: SQS is not designed to handle large file content directly and has message size limits (256 KB).

AWS Documentation Reference:

Amazon S3 Overview

Amazon SQS Best Practices
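The option B pattern can be sketched as follows. This is a minimal illustration, not production code: the bucket, key, and queue names are hypothetical, and the actual boto3 upload/enqueue calls are shown only as comments since they require AWS credentials.

```python
import json

# Store the video in S3 Standard and put only a pointer to it on the SQS
# queue. SQS messages are capped at 256 KB, so the ~10 MB file itself never
# travels through the queue.

def build_s3_pointer_message(bucket: str, key: str) -> str:
    """Return the SQS message body: a JSON pointer to the uploaded object."""
    return json.dumps({"bucket": bucket, "key": key})

# With boto3, the upload and enqueue steps would look roughly like:
#   boto3.client("s3").upload_file("video.mp4", "video-uploads", "videos/abc.mp4")
#   boto3.client("sqs").send_message(
#       QueueUrl=queue_url,
#       MessageBody=build_s3_pointer_message("video-uploads", "videos/abc.mp4"),
#   )

message = build_s3_pointer_message("video-uploads", "videos/abc.mp4")
print(message)
```

The consumer then reads the message, fetches the object from S3, and processes it, which keeps the queue payload tiny regardless of file size.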


Question 4

A company is implementing an application on Amazon EC2 instances. The application needs to process incoming transactions. When the application detects a transaction that is not valid, the application must send a chat message to the company's support team. To send the message, the application needs to retrieve the access token to authenticate by using the chat API.

A developer needs to implement a solution to store the access token. The access token must be encrypted at rest and in transit. The access token must also be accessible from other AWS accounts.

Which solution will meet these requirements with the LEAST management overhead?



Question 5

In a move toward using microservices, a company's management team has asked all development teams to build their services so that API requests depend only on that service's data store. One team is building a Payments service which has its own database; the service needs data that originates in the Accounts database. Both are using Amazon DynamoDB.

What approach will result in the simplest, decoupled, and reliable method to get near-real-time updates from the Accounts database?



Answer : D
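The answer options are not reproduced here, but the canonical decoupled, near-real-time pattern for this scenario is DynamoDB Streams on the Accounts table with a consumer (for example, an AWS Lambda function) that copies the needed data into the Payments service's own store. The sketch below assumes that design; the table, attribute, and field names are hypothetical, and the record shape follows the DynamoDB Streams event format.

```python
# Hypothetical Lambda handler for a DynamoDB Streams trigger on the Accounts
# table. Each record carries the item's new image in DynamoDB's
# attribute-value format; the handler extracts the fields the Payments
# service cares about.

def handler(event, context):
    updates = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            updates.append({
                "account_id": new_image["AccountId"]["S"],
                "balance": float(new_image["Balance"]["N"]),
            })
    # In a real deployment, each update would be written to the Payments
    # service's own DynamoDB table here (e.g. via a boto3 put_item call).
    return updates

# Synthetic stream event for local testing:
sample_event = {
    "Records": [
        {
            "eventName": "MODIFY",
            "dynamodb": {
                "NewImage": {"AccountId": {"S": "acct-123"}, "Balance": {"N": "42.5"}}
            },
        }
    ]
}
print(handler(sample_event, None))
```

Because the Payments service reads only from its own copy, its API requests depend solely on its own data store, which is exactly what the management team asked for.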


Question 6

A company built an online event platform. For each event, the company organizes quizzes and generates leaderboards that are based on the quiz scores. The company stores the leaderboard data in Amazon DynamoDB and retains the data for 30 days after an event is complete. The company then uses a scheduled job to delete the old leaderboard data.

The DynamoDB table is configured with a fixed write capacity. During the months when many events occur, the DynamoDB write API requests are throttled when the scheduled delete job runs.

A developer must create a long-term solution that deletes the old leaderboard data and optimizes write throughput.

Which solution meets these requirements?



Answer : A

DynamoDB TTL (Time to Live): A native feature that automatically deletes items after a specified expiration time.

Efficiency: Eliminates the need for scheduled deletion jobs, optimizing write throughput by avoiding potential throttling conflicts.

Seamless Integration: TTL works directly within DynamoDB, requiring minimal development overhead.


DynamoDB TTL documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
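DynamoDB TTL expects an epoch-seconds number in a designated attribute, so each leaderboard item only needs a timestamp set 30 days past the event's end. A minimal sketch of that calculation follows; the table and attribute names in the comments are illustrative, and the boto3 calls are shown as comments because they require AWS credentials.

```python
from datetime import datetime, timedelta, timezone

# DynamoDB TTL deletes an item once the current time passes the epoch-seconds
# value stored in the table's TTL attribute.

def ttl_epoch(event_end: datetime, retention_days: int = 30) -> int:
    """Epoch seconds at which DynamoDB should expire the item."""
    return int((event_end + timedelta(days=retention_days)).timestamp())

# With boto3, the item write would look roughly like:
#   table.put_item(Item={"EventId": "evt-1", "Score": 980,
#                        "ExpiresAt": ttl_epoch(event_end)})
# TTL itself is enabled once per table, e.g.:
#   client.update_time_to_live(TableName="Leaderboard",
#       TimeToLiveSpecification={"Enabled": True, "AttributeName": "ExpiresAt"})

end = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(ttl_epoch(end))  # epoch seconds for 2024-07-01 00:00:00 UTC
```

TTL deletions are performed by DynamoDB in the background and do not consume the table's provisioned write capacity, which is why this removes the throttling conflict with application writes.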

Question 7

A developer has built an application that inserts data into an Amazon DynamoDB table. The table is configured to use provisioned capacity. The application is deployed on a burstable nano Amazon EC2 instance. The application logs show that the application has been failing because of a ProvisionedThroughputExceededException error.

Which actions should the developer take to resolve this issue? (Select TWO.)



Answer : B, C

Requirement Summary:

DynamoDB Provisioned Mode

ProvisionedThroughputExceededException error occurring

App hosted on a small burstable EC2 instance

Option A: Move to a larger EC2 instance

May improve local performance, but does not solve DynamoDB provisioned throughput limits.

Option B: Increase DynamoDB RCUs

Valid: This directly increases the read throughput capacity of the table.

Helps handle more traffic and reduce throughput exceptions.

Option C: Reduce frequency via exponential backoff

Best practice: Using exponential backoff and jitter reduces request pressure and spreads retries out to avoid spiking.

Option D: Increase frequency of retries by reducing delay

Opposite of best practice. Increases load, worsening the issue.

Option E: Change to on-demand mode

Also valid in general, but the question is scoped to the existing provisioned capacity mode.

Since minimizing cost or changing the workload type is not mentioned, and scaling the existing model is the focus, B and C are the best answers.

ProvisionedThroughputExceededException: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HandlingErrors.html

Exponential backoff: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html

Capacity modes: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
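The exponential backoff with jitter recommended in option C can be sketched as below. This is an illustrative helper, not an AWS SDK API (boto3's built-in retry modes already do something similar); the throttling stub stands in for a real DynamoDB call raising ProvisionedThroughputExceededException.

```python
import random

# Retry a throttled call with exponential backoff and full jitter: the delay
# cap doubles each attempt, and the actual wait is a random value below the
# cap so that concurrent clients do not retry in lockstep.

def call_with_backoff(operation, max_attempts=5, base_delay=0.05, sleep=lambda s: None):
    """Retry `operation`, sleeping up to base_delay * 2**attempt between tries."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(random.uniform(0, base_delay * (2 ** attempt)))

# Stub that throttles twice, then succeeds:
attempts = {"n": 0}
def flaky_read():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("ProvisionedThroughputExceededException")
    return {"Item": {"id": "1"}}

print(call_with_backoff(flaky_read))  # succeeds on the third attempt
```

The injectable `sleep` parameter keeps the sketch testable without real delays; in production you would pass `time.sleep`.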

