Amazon SOA-C02 AWS Certified SysOps Administrator - Associate Exam Practice Test

Page: 1 / 14
Total 557 questions
Question 1

[Security and Compliance]

A company is rolling out a new version of its website. Management wants to deploy the new website in a limited rollout to 20% of the company's customers. The company uses Amazon Route 53 for its website's DNS solution.

Which configuration will meet these requirements?



Answer : D

To achieve a limited rollout of the new website to 20% of the company's customers using Amazon Route 53, a weighted routing policy is the most appropriate solution.

Weighted Routing Policy:

Weighted routing lets you associate multiple resources with a single domain name and choose how much traffic is routed to each resource.

Configuration:

Open the Route 53 console.

Select the hosted zone and choose 'Create Record Set.'

Create two records for your domain:

One record for the original resource with a weight of 80.

Another record for the new resource with a weight of 20.
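The same two weighted records can also be created through the ChangeResourceRecordSets API. A minimal change batch might look like the following sketch (the domain name, IP addresses, and set identifiers are hypothetical placeholders):

```json
{
  "Comment": "80/20 weighted rollout (hypothetical values)",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "original-site",
        "Weight": 80,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.10" }]
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "new-site",
        "Weight": 20,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.20" }]
      }
    }
  ]
}
```

Route 53 sends each record a share of traffic equal to its weight divided by the sum of all weights, so weights of 80 and 20 yield an 80%/20% split.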

Amazon Route 53 Weighted Routing


Question 2

[Networking and Content Delivery]

A company currently runs its infrastructure within a VPC in a single Availability Zone. The VPC is connected to the company's on-premises data center through an AWS Site-to-Site VPN connection attached to a virtual private gateway. The on-premises route tables route all VPC networks to the VPN connection, and communication between the two environments is working correctly. A SysOps administrator created new VPC subnets within a new Availability Zone and deployed new resources within the subnets. However, communication cannot be established between the new resources and the on-premises environment.

Which steps should the SysOps administrator take to resolve the issue?



Answer : A

Adding a Route to the Route Tables:

When new subnets are created, they need appropriate routing to ensure communication with on-premises networks.

Steps:

Go to the AWS Management Console.

Navigate to VPC.

Select the route table associated with the new subnets.

Choose 'Edit routes.'

Add a new route with the destination CIDR block of the on-premises network.

For the target, select the virtual private gateway (VGW).

This ensures that traffic destined for the on-premises network is routed correctly through the VPN connection.
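After the change, the route table associated with the new subnets would contain entries along these lines (all CIDR blocks and IDs shown are hypothetical placeholders):

```
Destination      Target
10.0.0.0/16      local                  (VPC CIDR, present by default)
172.16.0.0/16    vgw-0abc1234def567890  (on-premises CIDR -> virtual private gateway)
```

Alternatively, enabling route propagation from the virtual private gateway on that route table would add the on-premises routes automatically instead of maintaining them as static entries.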

AWS VPC Route Tables


Question 3

[High Availability, Backup, and Recovery]

A SysOps administrator configured AWS Backup to capture snapshots from a single Amazon EC2 instance that has one Amazon Elastic Block Store (Amazon EBS) volume attached. On the first snapshot, the EBS volume has 10 GiB of data. On the second snapshot, the EBS volume still contains 10 GiB of data, but 4 GiB have changed. On the third snapshot, 2 GiB of data have been added to the volume, for a total of 12 GiB.

How much total storage is required to store these snapshots?



Answer : B

AWS EBS snapshots are incremental: after the initial full snapshot, only the blocks that have changed since the previous snapshot are saved. Here's how the storage adds up in this scenario:

First Snapshot: Captures all 10 GiB of data.

Second Snapshot: Only 4 GiB have changed, so only these changed blocks are stored.

Third Snapshot: An additional 2 GiB of data are added, making only these new 2 GiB stored.

Thus, the total storage required is 10 GiB (initial snapshot) + 4 GiB (second snapshot) + 2 GiB (third snapshot) = 16 GiB.
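The incremental accounting above can be sketched as a small calculation (a simplification that ignores block-level granularity and any compression):

```python
# Incremental EBS snapshot storage: the first snapshot stores all data;
# each later snapshot stores only the blocks changed or added since the last one.
snapshots = [
    {"changed_gib": 10},  # first snapshot: full 10 GiB
    {"changed_gib": 4},   # second snapshot: 4 GiB of existing data changed
    {"changed_gib": 2},   # third snapshot: 2 GiB of new data added
]

total_gib = sum(s["changed_gib"] for s in snapshots)
print(total_gib)  # 16
```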

AWS Documentation Reference:

Details on how EBS snapshots store data can be found here: Amazon EBS Snapshots.


Question 4

[High Availability, Backup, and Recovery]

An existing, deployed solution uses Amazon EC2 instances with Amazon EBS General Purpose SSD volumes, an Amazon RDS PostgreSQL database, an Amazon EFS file system, and static objects stored in an Amazon S3 bucket. The Security team now mandates that at-rest encryption be turned on immediately for all aspects of the application, without creating new resources and without any downtime.

To satisfy the requirements, which one of these services can the SysOps administrator enable at-rest encryption on?



Answer : D

To encrypt existing Amazon S3 objects with a single request, you can use S3 Batch Operations. You provide S3 Batch Operations with a list of objects to operate on, and it calls the respective API to perform the specified operation on each object. You can use the copy operation to copy the existing unencrypted objects and write the new, encrypted objects to the same bucket. A single S3 Batch Operations job can perform the specified operation on billions of objects containing exabytes of data.

https://docs.aws.amazon.com/efs/latest/ug/efs-enforce-encryption.html


Question 5

[Monitoring, Reporting, and Automation]

A company is uploading important files as objects to Amazon S3. The company needs to be informed if an object is corrupted during the upload.

What should a SysOps administrator do to meet this requirement?



Answer : B

Content-MD5 Header:

The Content-MD5 header provides an MD5 checksum of the object being uploaded. Amazon S3 uses this checksum to verify the integrity of the object.

Steps:

When uploading an object to S3, calculate the MD5 checksum of the object.

Include the Content-MD5 header with the base64-encoded MD5 checksum value in the upload request.

This ensures that S3 can detect if the object is corrupted during the upload process.
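Computing the header value is straightforward. Here is a sketch using only Python's standard library (the object body shown is a placeholder, not an actual file from the scenario):

```python
import base64
import hashlib

body = b"important file contents"  # placeholder payload

# S3 expects the Content-MD5 header to be the base64-encoded
# 128-bit MD5 digest of the request body. If the digest S3 computes
# on receipt does not match, the upload is rejected with an error.
content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
print(content_md5)
```

Because an MD5 digest is always 16 bytes, the resulting header value is always 24 base64 characters.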

PUT Object - Amazon Simple Storage Service


Question 6

[Monitoring, Reporting, and Automation]

A company hosts an internal application on Amazon EC2 On-Demand Instances behind an Application Load Balancer (ALB). The instances are in an Amazon EC2 Auto Scaling group. Employees use the application to provide product prices to potential customers. The Auto Scaling group is configured with a dynamic scaling policy and tracks average CPU utilization of the instances.

Employees have noticed that sometimes the application becomes slow or unresponsive. A SysOps administrator finds that some instances are experiencing a high CPU load. The Auto Scaling group cannot scale out because the company is reaching the EC2 instance service quota.

The SysOps administrator needs to implement a solution that provides a notification when the company reaches 70% or more of the EC2 instance service quota.

Which solution will meet these requirements in the MOST operationally efficient manner?



Answer : C

To monitor and receive alerts when the EC2 instance service quota usage reaches 70% or more:

Service Quotas Console: Navigate to the Service Quotas console within AWS and identify the specific quota for EC2 instances.

Create a CloudWatch Alarm: Directly from the Service Quotas console, set up a CloudWatch alarm for the EC2 instance quota metric. Configure the alarm to trigger when the quota utilization reaches or exceeds 70%.

Notification Setup: Link this alarm to an Amazon SNS topic that will send a notification to relevant stakeholders or systems when the quota usage threshold is breached.

This method provides an automated, straightforward way to monitor resource limits and ensures that stakeholders are promptly notified, enabling them to take proactive measures to manage the quota and prevent service disruption.
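Under the hood, the alarm the Service Quotas console creates is a CloudWatch metric-math alarm that divides current usage by the quota. A sketch of such an alarm definition follows; the dimension values shown are illustrative and vary by quota, and the alarm name and threshold are assumptions:

```json
{
  "AlarmName": "ec2-ondemand-quota-70pct",
  "ComparisonOperator": "GreaterThanOrEqualToThreshold",
  "Threshold": 70,
  "EvaluationPeriods": 1,
  "Metrics": [
    {
      "Id": "usage",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Usage",
          "MetricName": "ResourceCount",
          "Dimensions": [
            { "Name": "Service", "Value": "EC2" },
            { "Name": "Type", "Value": "Resource" },
            { "Name": "Resource", "Value": "vCPU" },
            { "Name": "Class", "Value": "Standard/OnDemand" }
          ]
        },
        "Period": 300,
        "Stat": "Maximum"
      },
      "ReturnData": false
    },
    {
      "Id": "pct",
      "Expression": "usage / SERVICE_QUOTA(usage) * 100",
      "Label": "Quota utilization (%)",
      "ReturnData": true
    }
  ]
}
```

The alarm's actions would then point at the SNS topic that notifies stakeholders.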


Question 7

[Monitoring, Reporting, and Automation]

A data analytics application is running on an Amazon EC2 instance. A SysOps administrator must add custom dimensions to the metrics collected by the Amazon CloudWatch agent.

How can the SysOps administrator meet this requirement?



Answer : D

Objective:

Add custom dimensions to the metrics collected by the Amazon CloudWatch agent.

Using append_dimensions:

The append_dimensions field in the Amazon CloudWatch agent configuration file allows adding custom dimensions to the metrics collected.

Dimensions help categorize and filter metrics in CloudWatch for more granular insights.

Steps to Implement:

Step 1: Edit the Amazon CloudWatch agent configuration file (commonly located at /opt/aws/amazon-cloudwatch-agent/bin/config.json).

Step 2: Add the append_dimensions field under the desired metrics section, specifying the custom dimensions in key-value pairs:

{
  "metrics": {
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}",
      "CustomDimensionKey": "CustomDimensionValue"
    }
  }
}

Step 3: Restart the CloudWatch agent so it picks up the edited configuration:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a stop

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json

AWS Reference:

Amazon CloudWatch Agent Configuration

Why Other Options Are Incorrect:

Option A: Writing a custom script is unnecessary as the CloudWatch agent natively supports appending dimensions.

Option B: EventBridge rules do not interact with CloudWatch metrics for adding dimensions.

Option C: AWS Lambda functions are not required for this use case.

