[Deployment, Provisioning, and Automation]
A company has a public web application that experiences rapid traffic increases after advertisements appear on local television. The application runs on Amazon EC2 instances that are in an Auto Scaling group. The Auto Scaling group is not keeping up with the traffic surges after an advertisement runs. The company often needs to scale out to 100 EC2 instances during the traffic surges.
The instance startup times are lengthy because of a boot process that creates machine-specific data caches that are unique to each instance. The exact timing of when the advertisements will appear on television is not known. A SysOps administrator must implement a solution so that the application can function properly during the traffic surges.
Which solution will meet these requirements?
Answer : A
To address the issue of slow startup times during unexpected traffic surges, a warm pool for the Auto Scaling group is an effective solution:
Warm Pool Concept: A warm pool allows you to maintain a set of pre-initialized or partially initialized EC2 instances that are not actively serving traffic but can be quickly brought online when needed.
Management of Instances: Instances in the warm pool can be kept in a stopped state and then started much more quickly than launching new instances, as the machine-specific data caches are already created.
Scalability and Responsiveness: During a traffic surge, especially an unpredictable one triggered by an advertisement, instances from the warm pool can be activated rapidly to handle the increased load. The application stays responsive without the usual delay from the lengthy boot process.
This method significantly reduces the time to scale out by utilizing pre-warmed instances, enhancing the application's ability to cope with sudden and substantial increases in traffic.
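As a sketch of how the warm pool could be configured, the AWS CLI `put-warm-pool` command attaches a warm pool to an existing Auto Scaling group. The group name and sizes below are hypothetical and would need to match your environment; `Stopped` keeps pre-initialized instances powered off (so the data caches survive) while minimizing cost.

```shell
# Attach a warm pool of pre-initialized, stopped instances to the
# Auto Scaling group (group name and sizes are illustrative).
aws autoscaling put-warm-pool \
    --auto-scaling-group-name my-web-asg \
    --pool-state Stopped \
    --min-size 30 \
    --max-group-prepared-capacity 100
```

Instances launched into the warm pool run the full boot process (building the machine-specific caches) once, then stop; a scale-out event starts them instead of launching cold instances.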
[Monitoring, Reporting, and Automation]
A company hosts an internet web application on Amazon EC2 instances. The company is replacing the application with a new AWS Lambda function. During a transition period, the company must route some traffic to the legacy application and some traffic to the new Lambda function. The company needs to use the URL path of each request to determine the routing.
Which solution will meet these requirements?
Answer : D
To route traffic based on the URL path during a transition period where both an EC2-based legacy application and a new AWS Lambda function are in use:
Use of Application Load Balancer (ALB): ALBs support advanced request routing based on the URL path, among other criteria. This capability allows the ALB to evaluate the URL path of incoming requests and route them appropriately to either the legacy EC2 instances or the Lambda function.
Path-Based Routing Rules: Configure the ALB with rules that specify which URL paths should be directed to the EC2 instances and which should be routed to the Lambda function. For example, requests to /legacy/* might go to the EC2 instances, while /new/* could be directed to the Lambda function.
Integration with Lambda: ALBs can directly invoke Lambda functions in response to HTTP requests, making them ideal for scenarios where both server-based and serverless components are used in tandem.
This setup not only facilitates a smooth transition by enabling simultaneous operation of both components but also leverages the native capabilities of ALBs to manage traffic based on application requirements effectively.
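The path-based rule described above could be sketched with the AWS CLI as follows. The listener and target group ARNs are placeholders; a Lambda-type target group with the function registered, and an instance-type target group for the EC2 fleet, are assumed to already exist.

```shell
# Route /new/* to the Lambda target group (ARNs are hypothetical).
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def \
    --priority 10 \
    --conditions Field=path-pattern,Values='/new/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/lambda-tg/123

# Route /legacy/* to the EC2 target group.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def \
    --priority 20 \
    --conditions Field=path-pattern,Values='/legacy/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/ec2-tg/456
```

Rule priority determines evaluation order; any request matching neither pattern falls through to the listener's default action.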
[Monitoring, Reporting, and Automation]
A company wants to track its AWS costs in all member accounts that are part of an organization in AWS Organizations. Managers of the member accounts want to receive a notification when the estimated costs exceed a predetermined amount each month. The managers are unable to configure a billing alarm. The IAM permissions for all users are correct.
What could be the cause of this issue?
Answer : A
For member accounts in AWS Organizations to receive notifications about estimated costs exceeding a predetermined amount, billing alerts must be enabled in the management/payer account.
Enable Billing Alerts in the Management Account:
Open the AWS Billing and Cost Management console.
In the navigation pane, choose Billing preferences.
Under Billing preferences, ensure that Receive Billing Alerts is enabled.
Create a Budget and Set Up Notifications:
Open the AWS Budgets console.
Create a budget and configure it to monitor the estimated costs.
Set up notifications for when the budget thresholds are exceeded and specify the email addresses of the managers in the member accounts.
By enabling billing alerts in the management account, you allow member accounts to receive notifications about their estimated costs.
Setting Up Alerts and Notifications
Creating an AWS Budget
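The budget-and-notification step above can also be scripted. This sketch uses the AWS CLI `budgets create-budget` command; the account ID, budget amount, and manager email address are all hypothetical.

```shell
# Create a monthly cost budget with an email notification when the
# forecasted spend exceeds the limit (values are illustrative).
aws budgets create-budget \
    --account-id 111122223333 \
    --budget '{"BudgetName":"monthly-cost-budget","BudgetLimit":{"Amount":"1000","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
    --notifications-with-subscribers '[{"Notification":{"NotificationType":"FORECASTED","ComparisonOperator":"GREATER_THAN","Threshold":100,"ThresholdType":"PERCENTAGE"},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"manager@example.com"}]}]'
```

`NotificationType` can be `ACTUAL` instead of `FORECASTED` if the managers want alerts only after spend has actually crossed the threshold.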
[Security and Compliance]
A SysOps administrator created an Amazon VPC with an IPv6 CIDR block, which requires access to the internet. However, access from the internet towards the VPC is prohibited. After adding and configuring the required components to the VPC, the administrator is unable to connect to any of the domains that reside on the internet.
What additional route destination rule should the administrator add to the route tables?
Answer : D
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
To allow outbound IPv6 traffic from instances in your VPC to the internet, but prevent inbound traffic from the internet, you should use an egress-only internet gateway.
Egress-Only Internet Gateway:
An egress-only internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the internet and prevents the internet from initiating an IPv6 connection with your instances.
Routing Configuration:
Open the Amazon VPC console.
Select the route table associated with your VPC.
Add a route for IPv6 traffic (destination ::/0) to the egress-only internet gateway.
Egress-Only Internet Gateways
Routing for IPv6 Traffic
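The routing configuration above could be applied with the AWS CLI roughly as follows; the VPC, route table, and gateway IDs are placeholders.

```shell
# Create the egress-only internet gateway for the VPC (IDs are illustrative).
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0abc1234

# Route all outbound IPv6 traffic (::/0) through the egress-only gateway,
# using the gateway ID returned by the previous command.
aws ec2 create-route \
    --route-table-id rtb-0abc1234 \
    --destination-ipv6-cidr-block ::/0 \
    --egress-only-internet-gateway-id eigw-0abc1234
```

Because the gateway is egress-only, return traffic for connections initiated from inside the VPC is allowed, but the internet cannot initiate IPv6 connections to the instances.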
[Security and Compliance]
A SysOps administrator manages the caching of an Amazon CloudFront distribution that serves pages of a website. The SysOps administrator needs to configure the distribution so that the TTL of individual pages can vary. The TTL of the individual pages must remain within the maximum TTL and the minimum TTL that are set for the distribution.
Which solution will meet these requirements?
Answer : B
To allow the TTL (Time to Live) of individual pages to vary while adhering to the maximum and minimum TTL settings configured for the Amazon CloudFront distribution, setting cache behaviors directly at the origin is most effective:
Use Cache-Control Headers: By configuring the Cache-Control: max-age directive in the HTTP headers of the objects served from the origin, you can specify how long an object should be cached by CloudFront before it is considered stale.
Integration with CloudFront: When CloudFront receives a request for an object, it checks the cache-control header to determine the TTL for that specific object. This allows individual objects to have their own TTL settings, as long as they are within the globally set minimum and maximum TTL values for the distribution.
Operational Efficiency: This method does not require any additional AWS services or modifications to the distribution settings. It leverages HTTP standard practices, ensuring compatibility and ease of management.
Implementing the TTL management through cache-control headers at the origin provides precise control over caching behavior, aligning with varying content freshness requirements without complex configurations.
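If the origin is an Amazon S3 bucket, a per-object TTL can be set at upload time by attaching a `Cache-Control` header, as in this sketch (bucket name and TTL value are hypothetical):

```shell
# Upload a page with a 5-minute TTL; CloudFront honors this value as long
# as it falls between the distribution's minimum and maximum TTL.
aws s3 cp page.html s3://my-origin-bucket/page.html \
    --cache-control "max-age=300" \
    --content-type "text/html"
```

For a custom origin (for example, a web server), the equivalent is to emit `Cache-Control: max-age=300` in each page's HTTP response headers; CloudFront clamps the value to the distribution's configured minimum and maximum TTL.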
[Monitoring, Reporting, and Automation]
A SysOps administrator must analyze Amazon CloudWatch logs across 10 AWS Lambda functions for historical errors. The logs are in JSON format and are stored in Amazon S3. Errors sometimes do not appear in the same field, but all errors begin with the same string prefix.
What is the MOST operationally efficient way for the SysOps administrator to analyze the log files?
Answer : B
To analyze CloudWatch logs across multiple AWS Lambda functions for historical errors, the most operationally efficient way is to use AWS Glue and Amazon Athena.
AWS Glue and Amazon Athena:
AWS Glue can crawl the data in S3, creating a catalog that makes it queryable.
Amazon Athena allows you to run SQL queries on the data cataloged by AWS Glue.
Steps to Implement:
Set up an AWS Glue crawler to index the logs stored in S3.
Configure the crawler to create a table in the AWS Glue Data Catalog.
Use Amazon Athena to query the table for errors using SQL. Since errors have a common string prefix, you can use SQL queries to filter and find these errors.
AWS Glue
Amazon Athena
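After the Glue crawler has cataloged the logs, the query step could look like this sketch. The database, table, column, and output bucket names are hypothetical; the `LIKE 'ERROR%'` predicate assumes the common error prefix is the string `ERROR`.

```shell
# Run an Athena query that finds log entries beginning with the error
# prefix, regardless of which field position they appear in.
aws athena start-query-execution \
    --query-string "SELECT * FROM lambda_logs WHERE message LIKE 'ERROR%'" \
    --query-execution-context Database=logs_db \
    --result-configuration OutputLocation=s3://my-athena-results/
```

The command returns a query execution ID; results can then be fetched with `aws athena get-query-results` once the query completes.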
[Monitoring, Reporting, and Automation]
An ecommerce company uses an Amazon ElastiCache (Redis) cluster for in-memory caching of popular product queries on a shopping website. A SysOps administrator views Amazon CloudWatch metrics data for the ElastiCache cluster and notices a large number of evictions. The SysOps administrator needs to implement a solution to reduce the number of evictions. The solution also must keep the popular queries cached.
Which solution will meet these requirements with the LEAST operational overhead?
Answer : D
Evictions in Redis occur when the cache reaches its memory limit and existing keys are removed, according to the configured eviction policy, to make room for new writes. To reduce evictions and retain more data:
Increase node size (memory capacity) or
Add more nodes (sharding)
From ElastiCache Redis eviction documentation:
High eviction count suggests memory pressure. To reduce evictions, consider increasing node size.
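Scaling up to a larger node type is a single operation, as in this sketch; the replication group ID and node type are hypothetical and should be chosen to fit the working set.

```shell
# Scale the Redis replication group to a larger node type to relieve
# memory pressure (IDs and node type are illustrative).
aws elasticache modify-replication-group \
    --replication-group-id my-redis \
    --cache-node-type cache.r6g.xlarge \
    --apply-immediately
```

Because this is one managed operation with no application changes, it carries less operational overhead than re-partitioning data or tuning eviction policies by hand.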