Amazon SAP-C02 AWS Certified Solutions Architect - Professional Exam Practice Test

Page: 1 / 14
Total 461 questions
Question 1

A solutions architect is creating an AWS CloudFormation template from an existing, manually created, non-production AWS environment. The CloudFormation template can be destroyed and recreated as needed. The environment contains an Amazon EC2 instance. The EC2 instance has an instance profile that the EC2 instance uses to assume a role in a parent account.

The solutions architect recreates the role in a CloudFormation template and uses the same role name. When the CloudFormation template is launched in the child account, the EC2 instance can no longer assume the role in the parent account because of insufficient permissions.

What should the solutions architect do to resolve this issue?



Answer : A

Edit the Trust Policy:

Go to the IAM console in the parent account and locate the role that the EC2 instance needs to assume.

Edit the trust policy of the role to ensure that it correctly allows the sts:AssumeRole action for the role ARN in the child account.

Update the Role ARN:

Verify that the target role ARN specified in the trust policy matches the role ARN created by the CloudFormation stack in the child account.

If necessary, update the ARN to reflect the correct role in the child account.

Save and Test:

Save the updated trust policy and ensure there are no syntax errors.

Test the setup by attempting to assume the role from the EC2 instance in the child account. Verify that the instance can successfully assume the role and perform the required actions.

When a role is deleted, IAM replaces its ARN in any trust policies with the role's unique principal ID, so a recreated role with the same name is no longer trusted until the trust policy is updated with the new role's ARN. Updating the trust policy ensures that the EC2 instance in the child account can assume the role in the parent account, resolving the permission issue.
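For illustration, the following is a minimal boto3 sketch of updating the parent role's trust policy. The account ID, role name, and child role ARN are hypothetical placeholders, not values from the scenario.

import json
import boto3

iam = boto3.client("iam")  # credentials for the parent account

# Hypothetical identifiers used only for illustration
PARENT_ROLE_NAME = "CrossAccountTargetRole"
CHILD_ROLE_ARN = "arn:aws:iam::222222222222:role/Ec2InstanceRole"

# Trust policy that allows the recreated role in the child account to assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": CHILD_ROLE_ARN},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Replace the role's assume-role (trust) policy document with the updated principal ARN
iam.update_assume_role_policy(
    RoleName=PARENT_ROLE_NAME,
    PolicyDocument=json.dumps(trust_policy),
)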

Reference

AWS IAM Documentation on Trust Policies


Question 2

A solutions architect has deployed a web application that serves users across two AWS Regions under a custom domain. The application uses Amazon Route 53 latency-based routing. The solutions architect has associated weighted record sets with a pair of web servers in separate Availability Zones for each Region.

The solutions architect runs a disaster recovery scenario. When all the web servers in one Region are stopped, Route 53 does not automatically redirect users to the other Region.

Which of the following are possible root causes of this issue? (Select TWO)



Answer : D, E

Evaluate Target Health Setting:

Ensure that the 'Evaluate Target Health' setting is enabled for the latency alias resource record sets in Route 53. This setting helps Route 53 determine the health of the resources associated with the alias record and redirect traffic appropriately.

HTTP Health Checks:

Configure HTTP health checks for all weighted resource record sets. Health checks monitor the availability and performance of the web servers, allowing Route 53 to reroute traffic to healthy servers in case of a failure.

Verify that the health checks are correctly set up and associated with the resource record sets. This ensures that Route 53 can detect server failures and redirect traffic to the servers in the other Region.

By enabling the 'Evaluate Target Health' setting and configuring HTTP health checks, Route 53 can effectively manage traffic during failover scenarios, ensuring high availability and reliability.
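As a minimal boto3 sketch of both fixes, the example below creates an HTTP health check and upserts a latency alias record with 'Evaluate Target Health' enabled. The hosted zone ID, domain, IP address, and load balancer values are hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

# Hypothetical values for illustration only
HOSTED_ZONE_ID = "Z123EXAMPLE"
DOMAIN = "app.example.com"

# HTTP health check for one of the weighted web servers
route53.create_health_check(
    CallerReference="web-server-us-east-1a",
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": "203.0.113.10",   # placeholder web server IP
        "Port": 80,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Latency alias record that evaluates the health of its target
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder ELB hosted zone ID
                        "DNSName": "dualstack.my-alb.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)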

Reference

AWS Route 53 Documentation on Latency-Based Routing

AWS Architecture Blog on Cross-Account and Cross-Region Setup


Question 3

A company has a web application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A recent marketing campaign has increased demand. Monitoring software reports that many requests have significantly longer response times than before the marketing campaign.

A solutions architect enabled Amazon CloudWatch Logs for API Gateway and noticed that errors are occurring on 20% of the requests. In CloudWatch, the Lambda function's Throttles metric represents 1% of the requests and the Errors metric represents 10% of the requests. Application logs indicate that, when errors occur, there is a call to DynamoDB.

What change should the solutions architect make to improve the current response times as the web application becomes more popular?



Answer : B

Enable DynamoDB Auto Scaling:

Navigate to the DynamoDB console and select the table experiencing high demand.

Go to the 'Capacity' tab and enable auto scaling for both read and write capacity units. Auto scaling adjusts the provisioned throughput capacity automatically in response to actual traffic patterns, ensuring the table can handle the increased load.

Configure Auto Scaling Policies:

Set the minimum and maximum capacity units to define the range within which auto scaling can adjust the provisioned throughput.

Specify target utilization percentages for read and write operations, typically around 70%, to maintain a balance between performance and cost.

Monitor and Adjust:

Use Amazon CloudWatch to monitor the auto scaling activity and ensure it is effectively handling the increased demand.

Adjust the auto scaling settings if necessary to better match the traffic patterns and application requirements.

By enabling DynamoDB auto scaling, you ensure that the database can handle the fluctuating traffic volumes without manual intervention, improving response times and reducing errors.
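A minimal boto3 sketch of the configuration is shown below, using the Application Auto Scaling API. The table name and capacity bounds are assumptions chosen for illustration.

import boto3

autoscaling = boto3.client("application-autoscaling")

TABLE = "table/WebAppData"  # hypothetical DynamoDB table name

for dimension, metric in [
    ("dynamodb:table:ReadCapacityUnits", "DynamoDBReadCapacityUtilization"),
    ("dynamodb:table:WriteCapacityUnits", "DynamoDBWriteCapacityUtilization"),
]:
    # Allow auto scaling to adjust provisioned capacity within these bounds
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId=TABLE,
        ScalableDimension=dimension,
        MinCapacity=5,
        MaxCapacity=1000,
    )

    # Target-tracking policy keeps consumed capacity near 70% of provisioned capacity
    autoscaling.put_scaling_policy(
        PolicyName=f"{metric}-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId=TABLE,
        ScalableDimension=dimension,
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {"PredefinedMetricType": metric},
        },
    )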

Reference

AWS Compute Blog on Using API Gateway as a Proxy for DynamoDB

AWS Database Blog on DynamoDB Accelerator (DAX)


Question 4

A company has an application that analyzes and stores image data on premises. The application receives millions of new image files every day. Files are an average of 1 MB in size. The files are analyzed in batches of 1 GB. When the application analyzes a batch, the application zips the images together. The application then archives the images as a single file in an on-premises NFS server for long-term storage.

The company has a Microsoft Hyper-V environment on premises and has compute capacity available. The company does not have storage capacity and wants to archive the images on AWS. The company needs the ability to retrieve archived data within 1 week of a request.

The company has a 10 Gbps AWS Direct Connect connection between its on-premises data center and AWS. The company needs to set bandwidth limits and schedule archived images to be copied to AWS during non-business hours.

Which solution will meet these requirements MOST cost-effectively?



Answer : B

Deploy DataSync Agent:

Install the AWS DataSync agent as a VM in your Hyper-V environment. This agent facilitates the data transfer between your on-premises storage and AWS.

Configure Source and Destination:

Set up the source location to point to your on-premises NFS server where the image batches are stored.

Configure the destination location to be an Amazon S3 bucket with the Glacier Deep Archive storage class. This storage class is cost-effective for long-term storage with retrieval times of up to 12 hours.

Create DataSync Tasks:

Create and configure DataSync tasks to manage the data transfer. Schedule these tasks to run during non-business hours to minimize bandwidth usage during peak times. The tasks will handle the copying of data batches from the NFS server to the S3 bucket.

Set Bandwidth Limits:

In the DataSync configuration, set bandwidth limits to control the amount of data being transferred at any given time. This ensures that your network's performance is not adversely affected during business hours.

Delete On-Premises Data:

After DataSync reports that the data has been successfully copied to S3 Glacier Deep Archive, delete the archived batches from the on-premises NFS server. This frees on-premises storage capacity while the data remains securely archived on AWS.

This approach leverages AWS DataSync for efficient, secure, and automated data transfer, and S3 Glacier Deep Archive for cost-effective long-term storage.
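The following is a minimal boto3 sketch of the DataSync setup with a bandwidth limit and a non-business-hours schedule. The agent ARN, NFS hostname, bucket ARN, IAM role ARN, and throughput limit are hypothetical placeholders.

import boto3

datasync = boto3.client("datasync")

# Hypothetical ARNs and hostnames for illustration
AGENT_ARN = "arn:aws:datasync:us-east-1:111111111111:agent/agent-0example"
BUCKET_ARN = "arn:aws:s3:::image-archive-example"
BUCKET_ACCESS_ROLE_ARN = "arn:aws:iam::111111111111:role/DataSyncS3AccessRole"

# Source: the on-premises NFS server that holds the zipped image batches
source = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/exports/image-archives",
    OnPremConfig={"AgentArns": [AGENT_ARN]},
)

# Destination: S3 with the Glacier Deep Archive storage class
destination = datasync.create_location_s3(
    S3BucketArn=BUCKET_ARN,
    Subdirectory="/archives",
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn": BUCKET_ACCESS_ROLE_ARN},
)

# Task: throttle to ~500 MB/s and run nightly at 01:00 UTC (non-business hours)
datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="nightly-image-archive",
    Options={"BytesPerSecond": 500 * 1024 * 1024},
    Schedule={"ScheduleExpression": "cron(0 1 * * ? *)"},
)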

Reference

AWS DataSync Overview

AWS Storage Blog on DataSync Migration

Amazon S3 Transfer Acceleration Documentation


Question 5

A company uses AWS Organizations to manage its development environment. Each development team at the company has its own AWS account. Each account has a single VPC and CIDR blocks that do not overlap.

The company has an Amazon Aurora DB cluster in a shared services account. All the development teams need to work with live data from the DB cluster.

Which solution will provide the required connectivity to the DB cluster with the LEAST operational overhead?



Answer : B

Create a Transit Gateway:

In the shared services account, create a new AWS Transit Gateway. This serves as a central hub to connect multiple VPCs, simplifying the network topology and management.

Configure Transit Gateway Attachments:

Attach the VPC containing the Aurora DB cluster to the transit gateway. This allows the shared services VPC to communicate through the transit gateway.

Create Resource Share with AWS RAM:

Use AWS Resource Access Manager (AWS RAM) to create a resource share for the transit gateway. Share this resource with all development accounts. AWS RAM allows you to securely share your AWS resources across AWS accounts without needing to duplicate them.

Accept Resource Shares in Development Accounts:

Instruct each development team to log into their respective AWS accounts and accept the transit gateway resource share. This step is crucial for enabling cross-account access to the shared transit gateway.

Configure VPC Attachments in Development Accounts:

Each development account needs to attach their VPC to the shared transit gateway. This allows their VPCs to route traffic through the transit gateway to the Aurora DB cluster in the shared services account.

Update Route Tables:

Update the route tables in each VPC to direct traffic intended for the Aurora DB cluster through the transit gateway. This ensures that network traffic is properly routed between the development VPCs and the shared services VPC.

Using a transit gateway simplifies the network management and reduces operational overhead by providing a scalable and efficient way to interconnect multiple VPCs across different AWS accounts.
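As a rough boto3 sketch of the shared services account side, the example below creates the transit gateway, attaches the shared VPC, and shares the gateway through AWS RAM. The VPC ID, subnet IDs, and development account numbers are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")   # shared services account
ram = boto3.client("ram")

# Hypothetical IDs and account numbers for illustration
SHARED_VPC_ID = "vpc-0sharedexample"
SHARED_SUBNET_IDS = ["subnet-0aexample", "subnet-0bexample"]
DEV_ACCOUNT_IDS = ["222222222222", "333333333333"]

# 1. Create the transit gateway in the shared services account
tgw = ec2.create_transit_gateway(Description="Shared dev connectivity")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]
tgw_arn = tgw["TransitGateway"]["TransitGatewayArn"]

# 2. Attach the VPC that contains the Aurora DB cluster
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId=SHARED_VPC_ID,
    SubnetIds=SHARED_SUBNET_IDS,
)

# 3. Share the transit gateway with the development accounts through AWS RAM
ram.create_resource_share(
    name="dev-transit-gateway-share",
    resourceArns=[tgw_arn],
    principals=DEV_ACCOUNT_IDS,
)

Each development account would then accept the share, create its own transit gateway VPC attachment, and update its route tables as described above.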

Reference

AWS Database Blog on RDS Proxy for Cross-Account Access

AWS Architecture Blog on Cross-Account and Cross-Region Aurora Setup

DEV Community on Managing Multiple AWS Accounts with Organizations


Question 6

An events company runs a ticketing platform on AWS. The company's customers configure and schedule their events on the platform. The events result in large increases of traffic to the platform. The company knows the date and time of each customer's events.

The company runs the platform on an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS cluster consists of Amazon EC2 On-Demand Instances that are in an Auto Scaling group. The Auto Scaling group uses a predictive scaling policy.

The ECS cluster makes frequent requests to an Amazon S3 bucket to download ticket assets. The ECS cluster and the S3 bucket are in the same AWS Region and the same AWS account. Traffic between the ECS cluster and the S3 bucket flows across a NAT gateway.

The company needs to optimize the cost of the platform without decreasing the platform's availability.

Which combination of steps will meet these requirements? (Select TWO)



Answer : A, B

Gateway VPC Endpoint for S3:

Create a gateway VPC endpoint for Amazon S3 in your VPC. This allows instances in your VPC to communicate with Amazon S3 without going through the internet, reducing data transfer costs and improving security.

Add Spot Instances to ECS Cluster:

Add another ECS capacity provider that uses an Auto Scaling group of Spot Instances. Configure this new capacity provider to share the load with the existing On-Demand Instances by setting an appropriate weight in the capacity provider strategy. Spot Instances offer significant cost savings compared to On-Demand Instances.

Configure Capacity Provider Strategy:

Adjust the ECS service's capacity provider strategy to utilize both On-Demand and Spot Instances effectively. This ensures a balanced distribution of tasks across both instance types, optimizing cost while maintaining availability.

By implementing a gateway VPC endpoint for S3 and incorporating Spot Instances into the ECS cluster, the company can significantly reduce operational costs without compromising on the availability or performance of the platform.
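The following boto3 sketch illustrates both steps. The VPC ID, route table IDs, Auto Scaling group ARN, cluster name, and capacity provider names are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")
ecs = boto3.client("ecs")

# Hypothetical identifiers for illustration
VPC_ID = "vpc-0example"
ROUTE_TABLE_IDS = ["rtb-0aexample", "rtb-0bexample"]
SPOT_ASG_ARN = "arn:aws:autoscaling:us-east-1:111111111111:autoScalingGroup:uuid:autoScalingGroupName/ecs-spot-asg"
CLUSTER = "ticketing-cluster"

# 1. Gateway VPC endpoint so S3 traffic bypasses the NAT gateway
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",  # placeholder Region
    RouteTableIds=ROUTE_TABLE_IDS,
)

# 2. Capacity provider backed by an Auto Scaling group of Spot Instances
ecs.create_capacity_provider(
    name="spot-capacity",
    autoScalingGroupProvider={
        "autoScalingGroupArn": SPOT_ASG_ARN,
        "managedScaling": {"status": "ENABLED", "targetCapacity": 100},
    },
)

# 3. Split tasks between the existing On-Demand and the new Spot capacity providers
ecs.put_cluster_capacity_providers(
    cluster=CLUSTER,
    capacityProviders=["on-demand-capacity", "spot-capacity"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "on-demand-capacity", "base": 1, "weight": 1},
        {"capacityProvider": "spot-capacity", "weight": 3},
    ],
)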

Reference

AWS Cost Optimization Blog on VPC Endpoints

AWS ECS Documentation on Capacity Providers


Question 7

A company that develops consumer electronics with offices in Europe and Asia has 60 TB of software images stored on premises in Europe. The company wants to transfer the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created daily and must be encrypted in transit. The company needs a solution that does not require custom development to automatically transfer all existing and new software images to Amazon S3.

What is the next step in the transfer process?



Answer : A

Deploy AWS DataSync Agent:

Install the DataSync agent on your on-premises environment. This can be done by downloading the agent as a virtual appliance and deploying it on VMware ESXi, Hyper-V, or KVM hypervisors.

Configure Source and Destination Locations:

Set up the source location pointing to your on-premises storage where the software images are currently stored.

Configure the destination location to point to your Amazon S3 bucket in the ap-northeast-1 Region.

Create and Schedule DataSync Tasks:

Create a DataSync task to automate the transfer process. This task will specify the source and destination locations and set options for how the data should be transferred.

Schedule the task to run at intervals that suit your data transfer requirements, ensuring new images are transferred as they are created.

Encryption in Transit:

AWS DataSync automatically encrypts data in transit using TLS, ensuring that your data is secure during the transfer process.

Monitoring and Management:

Use the DataSync console or the AWS CLI to monitor the progress of your data transfers and manage the tasks.

AWS DataSync is an efficient solution that automates and accelerates the process of transferring large amounts of data to AWS, handling encryption, data integrity checks, and optimizing network usage without requiring custom development.
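As a small boto3 sketch of the monitoring step, the example below starts an execution of an already-created DataSync task and polls it until it completes; DataSync encrypts the transfer with TLS by default. The task ARN is a hypothetical placeholder.

import time
import boto3

datasync = boto3.client("datasync")

# Hypothetical task ARN for the Europe -> ap-northeast-1 transfer
TASK_ARN = "arn:aws:datasync:ap-northeast-1:111111111111:task/task-0example"

# Start a run of the task and verify data integrity after the transfer
execution = datasync.start_task_execution(
    TaskArn=TASK_ARN,
    OverrideOptions={"VerifyMode": "POINT_IN_TIME_CONSISTENT"},
)

# Poll the execution until it finishes
while True:
    status = datasync.describe_task_execution(
        TaskExecutionArn=execution["TaskExecutionArn"]
    )["Status"]
    if status in ("SUCCESS", "ERROR"):
        print(f"Task execution finished with status {status}")
        break
    time.sleep(60)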

Reference

AWS Storage Blog on DataSync

AWS DataSync Documentation

