A company applies user-defined tags to AWS resources. Twenty days after applying the tags, the company notices that the tags cannot be used to filter views in the AWS Cost Explorer console.
What is the reason for this issue?
Answer : B
User-defined tags must be explicitly activated as cost allocation tags in the AWS Billing and Cost Management console before they can be used in Cost Explorer. Simply applying tags to resources is not sufficient.
Once activated, cost allocation tags can take up to 24 hours to appear in Cost Explorer, but they will not appear at all if activation is not performed. The 30-day delay applies only to historical reporting after activation, not to visibility itself.
Cost and Usage Reports and Budgets are not prerequisites for Cost Explorer filtering.
Therefore, the issue occurs because the tags were not activated for cost allocation.
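The activation step described above can be sketched as building the request for Cost Explorer's UpdateCostAllocationTagsStatus API. This is a minimal illustration; the tag keys ("team", "project") are hypothetical, and applying the request requires billing permissions.

```python
def activation_request(tag_keys):
    """Build the request body for Cost Explorer's
    UpdateCostAllocationTagsStatus API, marking each
    user-defined tag key Active for cost allocation."""
    return {
        "CostAllocationTagsStatus": [
            {"TagKey": key, "Status": "Active"} for key in tag_keys
        ]
    }

# To apply it for real (requires Billing/Cost Explorer permissions):
#   import boto3
#   boto3.client("ce").update_cost_allocation_tags_status(
#       **activation_request(["team", "project"]))
```

Until a request like this is applied, user-defined tags remain inactive and never surface as Cost Explorer filters.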
A company uses an AWS Lambda function to process user uploads to an Amazon S3 bucket. The Lambda function runs in response to Amazon S3 PutObject events.
A SysOps administrator needs to set up monitoring for the Lambda function. The SysOps administrator wants to receive a notification through an Amazon Simple Notification Service (Amazon SNS) topic if the function takes more than 10 seconds to process an event.
Which solution will meet this requirement?
Answer : B
Comprehensive and Detailed Explanation From Exact Extract of AWS CloudOps Documents:
AWS Lambda automatically publishes operational metrics to Amazon CloudWatch, including Duration, which represents the time a function invocation takes to run. To alert when processing time exceeds 10 seconds, the most direct and operationally efficient solution is to create a CloudWatch alarm on the Lambda Duration metric and configure the alarm action to publish to an SNS topic. This meets the monitoring requirement without requiring log parsing or additional query mechanisms.
Option B fits best because it uses the built-in metrics that Lambda already emits: CloudWatch metrics are near real-time, require minimal configuration, and alarms are a standard CloudOps practice for proactive notification. The alarm can be configured with an appropriate statistic (for example, Maximum, or a percentile such as p99) and a threshold of 10,000 milliseconds, ensuring the operations team is notified before performance degrades further.
Option A is incorrect because PostRuntimeExtensionsDuration is not the primary runtime metric for function execution time, and extracting runtime from logs is unnecessary. Option C changes the function timeout to 10 seconds, which would cause failures rather than simply notifying on slow executions. Option D is more operationally complex because it relies on log queries; CloudWatch alarms are more straightforward when a native metric exists.
AWS Lambda Developer Guide -- Monitoring functions with CloudWatch metrics (Duration)
Amazon CloudWatch User Guide -- Creating alarms and SNS notifications
AWS SysOps Administrator Study Guide -- Monitoring and alerting patterns
A company has a multi-account AWS environment that includes the following:
* A central identity account that contains all IAM users and groups
* Several member accounts that contain IAM roles
A SysOps administrator must grant permissions for a particular IAM group to assume a role in one of the member accounts. How should the SysOps administrator accomplish this task?
Answer : B
The correct answer is B because cross-account role assumption requires two explicit permissions. AWS CloudOps documentation states that the target role must trust the principal, and the principal must be allowed to call sts:AssumeRole.
In the member account, the role's trust policy must name the identity account as a trusted principal (IAM groups cannot themselves be principals in a trust policy). In the identity account, the IAM group must have an inline or attached policy that allows the sts:AssumeRole action on the target role's ARN. This dual configuration enables secure and controlled cross-account access.
Option A is incorrect because trust policies cannot be attached to IAM groups. Option C is incorrect because sts:PassRole is used for passing roles to AWS services, not for assuming roles. Option D is incorrect because an inline policy on the role itself does not grant the group permission to assume it; the assuming principal needs its own sts:AssumeRole permission.
This approach aligns precisely with AWS CloudOps guidance for multi-account IAM design.
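The two policy documents described above can be sketched as follows. The account IDs and role name are hypothetical; the key point is that the trust policy names the identity account, not the group, while the group's policy names the role ARN.

```python
MEMBER_ACCOUNT = "222222222222"    # hypothetical member account ID
IDENTITY_ACCOUNT = "111111111111"  # hypothetical identity account ID
ROLE_ARN = f"arn:aws:iam::{MEMBER_ACCOUNT}:role/OpsRole"

# Trust policy on the role in the member account: trust the
# identity account (groups cannot be trust-policy principals).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{IDENTITY_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Policy attached to the IAM group in the identity account:
# allow members to assume the specific role in the member account.
group_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": ROLE_ARN,
    }],
}
```

Both halves are required: the trust policy alone lets the account in, and the group policy alone grants the call, but access works only when the two agree.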
IAM User Guide -- Cross-Account Role Access
AWS SysOps Administrator Study Guide -- Identity and Access Management
AWS Well-Architected Framework -- Security Pillar
A company has a stateful web application that is hosted on Amazon EC2 instances in an Auto Scaling group. The instances run behind an Application Load Balancer (ALB) that has a single target group. The ALB is configured as the origin in an Amazon CloudFront distribution. Users are reporting random logouts from the web application.
Which combination of actions should a CloudOps engineer take to resolve this problem? (Select TWO.)
Answer : B, E
Stateful applications require session persistence to ensure that subsequent requests from the same user are routed to the same backend instance. When CloudFront is used in front of an ALB, session-related cookies must be forwarded correctly; otherwise, CloudFront may strip the cookies or serve cached responses, the ALB loses session affinity, and users experience session loss and random logouts.
Configuring cookie forwarding in the CloudFront cache behavior ensures that session cookies (such as authentication tokens) are forwarded to the ALB and not stripped or cached incorrectly. Without this configuration, CloudFront may serve cached responses that do not align with the user's active session state, leading to authentication issues.
On the ALB side, sticky sessions (session affinity) must be enabled on the target group to ensure that requests with the same session cookie are consistently routed to the same EC2 instance. ALB stickiness uses a load balancer-generated cookie (or, optionally, an application-based cookie) to bind a user session to a specific target, which is critical for stateful applications that store session data in memory.
Option A affects load distribution efficiency but does not address session persistence. Option C (header forwarding) is unnecessary unless the application explicitly stores session state in headers, which is uncommon. Option D applies only when using multiple target groups and listener rules, which is not the case here.
Together, enabling cookie forwarding in CloudFront and sticky sessions at the ALB target group resolves the logout issue by maintaining consistent session routing from the user through CloudFront to the same backend instance.
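The two settings above can be sketched as boto3-style configuration fragments. The stickiness duration and cookie-forwarding choice are illustrative assumptions; the target group ARN is omitted.

```python
# 1. ALB target group: enable load balancer-generated cookie stickiness.
stickiness_attributes = [
    {"Key": "stickiness.enabled", "Value": "true"},
    {"Key": "stickiness.type", "Value": "lb_cookie"},
    {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
]
# Applied via:
#   elbv2.modify_target_group_attributes(
#       TargetGroupArn=..., Attributes=stickiness_attributes)

# 2. CloudFront cache behavior (legacy ForwardedValues settings):
# forward all cookies so the session cookie reaches the ALB
# instead of being stripped at the edge.
forwarded_values = {
    "QueryString": True,
    "Cookies": {"Forward": "all"},
}
```

With both in place, the session cookie survives the CloudFront hop and the ALB can keep routing the user to the same instance.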
A company hosts a static website in an Amazon S3 bucket, accessed globally via Amazon CloudFront. The Cache-Control max-age header is set to 1 hour, and Maximum TTL is set to 5 minutes. The CloudOps engineer observes that CloudFront is not caching objects for the expected duration.
What is the reason for this issue?
Answer : D
As per the AWS Cloud Operations and Content Delivery documentation, CloudFront determines cache behavior by evaluating both origin headers (e.g., Cache-Control and Expires) and distribution-level TTL settings.
When Cache-Control max-age conflicts with the Maximum TTL configured in CloudFront, the shorter TTL value takes precedence. This results in CloudFront caching content for only 5 minutes instead of 1 hour, despite the origin headers suggesting a longer duration.
AWS documentation describes this behavior: when both origin cache headers and CloudFront TTL settings are defined, CloudFront applies the more restrictive caching period. This mismatch causes the perceived performance drop, because CloudFront revalidates content with the origin far more often than expected.
Therefore, Option D is correct: the cache-duration settings conflict with each other, leading to unexpected caching behavior.
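The clamping behavior described above can be expressed as a small helper. This is a sketch of the rule for when the origin sends a Cache-Control max-age: the effective caching time is clamped to the distribution's [Minimum TTL, Maximum TTL] range.

```python
def effective_ttl(origin_max_age, minimum_ttl=0, maximum_ttl=300):
    """Effective CloudFront caching time (seconds) when the origin
    sends Cache-Control max-age: clamped to [Minimum TTL, Maximum TTL]."""
    return max(minimum_ttl, min(origin_max_age, maximum_ttl))

# Origin header says 1 hour, but Maximum TTL is 5 minutes:
print(effective_ttl(3600, maximum_ttl=300))  # 300
```

This reproduces the scenario in the question: the 1-hour max-age is overridden by the 5-minute Maximum TTL, so objects are cached for only 300 seconds.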
A CloudOps engineer is maintaining a web application that uses an Amazon CloudFront web distribution, an Application Load Balancer (ALB), Amazon RDS, and Amazon EC2 in a VPC. All services have logging enabled. The CloudOps engineer needs to investigate HTTP Layer 7 status codes from the web application.
Which log sources contain the status codes? (Select TWO.)
Answer : C, D
Layer 7 (application-layer) HTTP status codes such as 200, 404, and 500 are generated by web-facing services that process HTTP requests. In this architecture, both CloudFront and the Application Load Balancer (ALB) operate at Layer 7 and record HTTP response information in their access logs.
ALB access logs include detailed request and response data such as client IP address, request path, target response status code, and latency. These logs are essential for analyzing how backend EC2 instances respond to client requests.
CloudFront access logs record viewer requests and responses at the edge locations. These logs also include HTTP status codes returned to the client, making them critical for understanding end-user experience and edge-level behavior.
VPC Flow Logs capture network-level (Layer 3 and 4) traffic metadata such as source IP, destination IP, ports, and protocol. They do not contain HTTP status codes. AWS CloudTrail logs API calls to AWS services and does not capture application response codes. RDS logs contain database-related information, not HTTP responses.
Therefore, the correct sources for HTTP Layer 7 status codes are ALB access logs and CloudFront access logs.
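To illustrate the ALB access log format mentioned above, here is a minimal parser that pulls the elb_status_code field (the ninth space-separated field) from a log entry. The sample line is fabricated for illustration; a plain split is safe here because the fields before the quoted request string contain no embedded spaces.

```python
def alb_status_code(log_line):
    """Extract the elb_status_code field (9th space-separated
    field) from an ALB access log entry."""
    return int(log_line.split(" ")[8])

# Fabricated sample entry in the ALB access log layout:
sample = ('http 2023-01-01T00:00:00.000000Z app/my-alb/abc123 '
          '203.0.113.10:54321 10.0.0.5:80 0.001 0.004 0.000 500 500 '
          '101 223 "GET http://example.com:80/ HTTP/1.1" "curl/8.0" - -')
print(alb_status_code(sample))  # 500
```

A full parser would also handle the quoted request and user-agent fields, but for status-code analysis the leading fixed-position fields are enough.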
A company has an application that collects notifications from thousands of alarm systems. Notifications include alarm notifications and information notifications. All notifications are stored in an Amazon Simple Queue Service (Amazon SQS) queue. Amazon EC2 instances in an Auto Scaling group process the messages.
A CloudOps engineer needs to prioritize alarm notifications over information notifications.
Which solution will meet these requirements?
Answer : D
Amazon SQS does not support message prioritization within a single queue. To prioritize certain messages, AWS recommends using multiple queues and processing them in priority order.
By separating alarm notifications into one queue and information notifications into another, the application can poll and process the alarm queue first. This ensures that critical alarm messages are handled immediately, even during high load.
Scaling faster or using SNS fanout does not guarantee priority. DynamoDB streams are unrelated to SQS message ordering.
Therefore, separate queues provide a simple, scalable, and effective prioritization mechanism.
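The two-queue priority pattern above can be sketched with in-memory queues standing in for SQS. With real SQS this would be two ReceiveMessage calls issued in the same order: drain the alarm queue first, fall back to the information queue only when it is empty.

```python
from collections import deque

def next_message(alarm_queue, info_queue):
    """Return the next message to process, always preferring
    the alarm queue over the information queue."""
    if alarm_queue:
        return alarm_queue.popleft()
    if info_queue:
        return info_queue.popleft()
    return None  # both queues empty

alarms = deque(["ALARM: sensor 7 triggered"])
infos = deque(["INFO: heartbeat", "INFO: battery ok"])

print(next_message(alarms, infos))  # ALARM: sensor 7 triggered
print(next_message(alarms, infos))  # INFO: heartbeat
```

Because the consumer always checks the alarm queue first, alarm notifications are processed ahead of any backlog of information notifications, which is exactly the prioritization a single SQS queue cannot provide.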