Amazon AWS Certified CloudOps Engineer - Associate SOA-C03 Exam Practice Test

Page: 1 / 14
Total 65 questions
Question 1

A company is storing backups in an Amazon S3 bucket. These backups must not be deleted for at least 3 months after creation.

What should the CloudOps engineer do?



Answer : B

Per the AWS Cloud Operations and Data Protection documentation, S3 Object Lock enforces write-once-read-many (WORM) protection on objects for a defined retention period.

There are two modes:

Compliance mode: Even the root user cannot delete or modify objects during the retention period.

Governance mode: Privileged users with special permissions can override lock settings.

For regulatory or audit requirements that prohibit deletion, Compliance mode is the correct choice. When configured with a 3-month retention period, all backup objects are protected from deletion until expiration, ensuring compliance with data retention mandates.
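
As an illustration, the configuration could be applied with a minimal boto3 sketch like the one below, assuming a hypothetical bucket name and treating the 3-month retention as 90 days (Object Lock can only be enabled when the bucket is created):

import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket creation time.
# (Outside us-east-1, a CreateBucketConfiguration with a LocationConstraint is also required.)
s3.create_bucket(
    Bucket="example-backup-bucket",  # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)

# Default retention: compliance mode for roughly 3 months (90 days).
# Objects written to the bucket cannot be deleted or overwritten until
# their retention period expires, even by the root user.
s3.put_object_lock_configuration(
    Bucket="example-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",
                "Days": 90,
            }
        },
    },
)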

Thus, Option B is the correct CloudOps solution for immutable S3 backups.


Question 2

A medical research company uses an Amazon Bedrock-powered AI assistant with agents and knowledge bases to provide physicians with quick access to medical study protocols. The company needs to generate audit reports that contain user identities, usage data for Bedrock agents, access data for knowledge bases, and interaction parameters.

Which solution will meet these requirements?



Answer : A

As per the AWS Cloud Operations, Bedrock, and Governance documentation, AWS CloudTrail is the authoritative service for capturing API activity and audit trails across AWS accounts. For Amazon Bedrock, CloudTrail records all user-initiated API calls, including interactions with agents and knowledge bases, together with the associated model and invocation parameters.

Using CloudTrail Lake, organizations can store, query, and analyze CloudTrail events directly without needing to export data. CloudTrail Lake supports SQL-like queries for generating audit and compliance reports, enabling the company to retrieve information such as user identity, API usage, timestamp, model or agent ID, and invocation parameters.
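
For example, assuming a CloudTrail Lake event data store already exists and collects the Bedrock events, a report query could be issued with boto3 roughly as sketched below (the event data store ID and the date filter are placeholders):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholder ID of an existing CloudTrail Lake event data store.
EVENT_DATA_STORE_ID = "EXAMPLE1-2345-6789-abcd-ef0123456789"

# CloudTrail Lake SQL: select identity, event name, timestamp, and
# request parameters for all Amazon Bedrock API activity in the period.
query = f"""
    SELECT userIdentity.arn, eventName, eventTime, requestParameters
    FROM {EVENT_DATA_STORE_ID}
    WHERE eventSource = 'bedrock.amazonaws.com'
      AND eventTime > '2025-01-01 00:00:00'
"""

query_id = cloudtrail.start_query(QueryStatement=query)["QueryId"]

# Queries run asynchronously; in practice, poll describe_query until the
# query finishes, then page through the results.
results = cloudtrail.get_query_results(QueryId=query_id)
for row in results.get("QueryResultRows", []):
    print(row)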

In contrast, CloudWatch focuses on operational metrics and log streaming, not API-level identity data. OpenSearch or Flink would add unnecessary complexity and cost for this use case.

Thus, the AWS-recommended CloudOps best practice is to leverage CloudTrail with CloudTrail Lake to maintain auditable, queryable API activity for Bedrock workloads, fulfilling governance and compliance requirements.


Question 3

A company's architecture team must receive immediate email notifications whenever new Amazon EC2 instances are launched in the company's main AWS production account.

What should a CloudOps engineer do to meet this requirement?



Answer : B

As per the AWS Cloud Operations and Event Monitoring documentation, the most efficient method for event-driven notification is to use Amazon EventBridge to detect specific EC2 API events and trigger a Simple Notification Service (SNS) alert.

EventBridge continuously monitors AWS service events, including the RunInstances API call, which signals the creation of new EC2 instances. When such an event occurs, EventBridge sends it to an SNS topic, which then immediately emails the subscribed recipients, in this case the architecture team.
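
A minimal boto3 sketch of this wiring is shown below; the rule name and SNS topic ARN are hypothetical, and the topic is assumed to already have the architecture team's email addresses subscribed and a resource policy that allows EventBridge to publish to it:

import boto3

events = boto3.client("events")

# Hypothetical SNS topic that emails the architecture team.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ec2-launch-alerts"

# Match RunInstances API calls recorded by CloudTrail.
events.put_rule(
    Name="notify-on-ec2-launch",
    EventPattern="""{
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["RunInstances"]
        }
    }""",
    State="ENABLED",
)

# Route matched events to the SNS topic for immediate email delivery.
events.put_targets(
    Rule="notify-on-ec2-launch",
    Targets=[{"Id": "architecture-team-sns", "Arn": TOPIC_ARN}],
)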

This combination provides real-time, serverless notifications with minimal management. SQS (Option C) is designed for queue-based processing, not direct user alerts. User data scripts (Option A) and custom polling with Lambda (Option D) introduce unnecessary operational complexity and latency.

Hence, Option B is the correct and AWS-recommended CloudOps design for immediate launch notifications.


Question 4

A company has an application running on EC2 that stores data in an Amazon RDS for MySQL Single-AZ DB instance. The application requires both read and write operations, and the company needs failover capability with minimal downtime.

Which solution will meet these requirements?



Answer : A

According to the AWS Cloud Operations and Database Reliability documentation, Amazon RDS Multi-AZ deployments provide high availability and automatic failover by maintaining a synchronous standby replica in a different Availability Zone.

In the event of an instance failure, planned maintenance, or an Availability Zone outage, Amazon RDS automatically promotes the standby to primary with minimal downtime (typically 60-120 seconds). The failover is transparent to applications because the DB endpoint remains the same.
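
The conversion itself is a single modification call; a boto3 sketch, using a hypothetical DB instance identifier, might look like this:

import boto3

rds = boto3.client("rds")

# Convert the existing Single-AZ instance to Multi-AZ.
# A synchronous standby is provisioned in another Availability Zone.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql-db",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,  # apply now instead of waiting for the maintenance window
)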

By contrast, read replicas (Option B) are asynchronous and do not provide automated failover. Auto Scaling (Option C) applies to EC2, not RDS. RDS Proxy (Option D) improves connection management but does not add redundancy.

Thus, Option A, converting the RDS instance into a Multi-AZ deployment, delivers the required high availability and business continuity with minimal operational effort.


Question 5

A company needs to enforce tagging requirements for Amazon DynamoDB tables in its AWS accounts. A CloudOps engineer must implement a solution to identify and remediate all DynamoDB tables that do not have the appropriate tags.

Which solution will meet these requirements with the LEAST operational overhead?



Answer : C

According to the AWS Cloud Operations, Governance, and Compliance documentation, AWS Config provides managed rules that automatically evaluate resource configurations for compliance. The "required-tags" managed rule allows CloudOps teams to specify mandatory tags (e.g., Environment, Owner, CostCenter) and automatically detect non-compliant resources such as DynamoDB tables.

Furthermore, AWS Config supports automatic remediation through AWS Systems Manager Automation runbooks, enabling correction actions (for example, adding missing tags) without manual intervention. This automation minimizes operational overhead and ensures continuous compliance across multiple accounts.
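
Deploying the managed rule can be sketched with boto3 as follows; the rule name and tag keys are examples, and a remediation action would then be attached with put_remediation_configurations pointing at an appropriate Systems Manager Automation runbook:

import json
import boto3

config = boto3.client("config")

# Managed "required-tags" rule scoped to DynamoDB tables only.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "dynamodb-required-tags",  # example name
        "Scope": {"ComplianceResourceTypes": ["AWS::DynamoDB::Table"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({
            "tag1Key": "Environment",
            "tag2Key": "Owner",
            "tag3Key": "CostCenter",
        }),
    }
)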

Using a custom Lambda function (Options A or B) introduces unnecessary management complexity, while EventBridge rules alone (Option D) do not provide resource compliance tracking or historical visibility.

Therefore, Option C provides the most efficient, fully managed, and compliant CloudOps solution.


Question 6

A company is migrating a legacy application to AWS. The application runs on EC2 instances across multiple Availability Zones behind an Application Load Balancer (ALB). The target group routing algorithm is set to weighted random, and the application requires session affinity (sticky sessions).

After deployment, users report random application errors that were not present before migration, even though target health checks are passing.

Which solution will resolve these errors?



Answer : A

According to the AWS Cloud Operations and Elastic Load Balancing documentation, Application Load Balancer (ALB) supports multiple routing algorithms to distribute requests among targets:

Round robin (default)

Least outstanding requests (LOR)

Weighted random

When applications require session affinity, AWS recommends using "least outstanding requests" as the load balancing algorithm because it reduces latency, distributes load evenly, and ensures consistent target responsiveness during high traffic.

Using weighted random routing with sticky sessions can cause sessions to be routed inconsistently if one target's capacity fluctuates, leading to session mismatches and application errors, especially when user sessions rely on instance-specific state.

Disabling cross-zone balancing (Option C) or adjusting deregistration delay (Option D) does not address routing inconsistency. Anomaly mitigation (Option B) protects against target performance degradation, not sticky-session misrouting.

Therefore, the correct solution is Option A: changing the target group's routing algorithm to least outstanding requests ensures smoother, more predictable session handling and resolves the random application errors.
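
The change is a single target group attribute update; a boto3 sketch with a hypothetical target group ARN follows:

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARN of the application's ALB target group.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/app-tg/0123456789abcdef"
)

# Switch the routing algorithm to least outstanding requests;
# sticky sessions remain enabled on the target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "load_balancing.algorithm.type",
         "Value": "least_outstanding_requests"},
    ],
)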


Question 7

A company has a workload that is sending log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A CloudOps engineer needs to monitor the p90 statistic of this field over time.

What should the CloudOps engineer do to meet this requirement?



Answer : B

To analyze and visualize custom statistics such as the p90 latency (90th percentile), a CloudWatch metric must be generated from the log data. The correct method is to create a metric filter that extracts the latency value from each log event and publishes it as a CloudWatch metric. Once the metric is published, percentile statistics (p90, p95, etc.) can be displayed in CloudWatch dashboards or alarms.

AWS documentation states:

"You can use metric filters to extract numerical fields from log events and publish them as metrics in CloudWatch. CloudWatch supports percentile statistics such as p90 and p95 for these metrics."

Contributor Insights (Option A) is for analyzing frequent contributors, not numeric distributions. Subscription filters (Option C) are used for log streaming, and Application Insights (Option D) provides monitoring of application health but not custom p90 statistics. Hence, Option B is the CloudOps-aligned, minimal-overhead solution for percentile latency monitoring.
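
As a sketch, assuming the workload writes JSON log events containing a numeric latencyMs field to a hypothetical log group, the metric filter and a p90 alarm could be created with boto3 like this:

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Extract the numeric latency field from each JSON log event
# and publish it as a custom CloudWatch metric.
logs.put_metric_filter(
    logGroupName="/app/workload",            # hypothetical log group
    filterName="latency-metric-filter",
    filterPattern="{ $.latencyMs = * }",
    metricTransformations=[{
        "metricName": "ApplicationLatency",
        "metricNamespace": "Custom/App",
        "metricValue": "$.latencyMs",
        "unit": "Milliseconds",
    }],
)

# Monitor the p90 percentile of the published metric
# (the same statistic can be graphed on a CloudWatch dashboard).
cloudwatch.put_metric_alarm(
    AlarmName="app-latency-p90",
    Namespace="Custom/App",
    MetricName="ApplicationLatency",
    ExtendedStatistic="p90",
    Period=300,
    EvaluationPeriods=1,
    Threshold=500,               # example threshold in milliseconds
    ComparisonOperator="GreaterThanThreshold",
)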

References (AWS CloudOps Documents / Study Guide):

* AWS Certified CloudOps Engineer - Associate (SOA-C03) Exam Guide - Domain 1: Monitoring and Logging

* Amazon CloudWatch Logs -- Metric Filters

* AWS Well-Architected Framework -- Operational Excellence Pillar

