A company has an application that runs on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster on Amazon EC2 instances. The application has a UI that uses Amazon DynamoDB and data services that use Amazon S3 as part of the application deployment.
The company must ensure that the EKS Pods for the UI can access only Amazon DynamoDB and that the EKS Pods for the data services can access only Amazon S3. The company uses AWS Identity and Access Management (IAM).
Which solution meets these requirements?
Answer : A
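The full option text is not shown here, but the standard pattern for scoping EKS Pod permissions this way is IAM Roles for Service Accounts (IRSA): each workload's Kubernetes service account is mapped to its own IAM role, so the UI Pods receive a DynamoDB-only role and the data-service Pods an S3-only role. Below is a minimal boto3 sketch under that assumption; the OIDC provider ID, account ID, namespace, and service account names are all hypothetical, and in practice you would scope tighter policies than the managed full-access ones used here.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical values -- replace with your cluster's OIDC provider and account.
OIDC_PROVIDER = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
ACCOUNT_ID = "111122223333"

def create_pod_role(role_name, namespace, service_account, policy_arn):
    """Create an IAM role assumable only by one Kubernetes service account."""
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC_PROVIDER}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            # Restrict the role to exactly one service account.
            "Condition": {
                "StringEquals": {
                    f"{OIDC_PROVIDER}:sub": f"system:serviceaccount:{namespace}:{service_account}"
                }
            },
        }],
    }
    iam.create_role(RoleName=role_name,
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)

# UI Pods: DynamoDB only. Data-service Pods: S3 only.
create_pod_role("ui-pods-role", "app", "ui-sa",
                "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess")
create_pod_role("data-pods-role", "app", "data-sa",
                "arn:aws:iam::aws:policy/AmazonS3FullAccess")
```

Each service account is then annotated with its role ARN (the eks.amazonaws.com/role-arn annotation), so Pods using that service account receive only that role's permissions.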
A company runs a multi-tier web application that hosts news content. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database.
A solutions architect needs to make the application more resilient to periodic increases in request rates.
Which architecture should the solutions architect implement? (Select TWO.)
Answer : B, D
Aurora Replicas: Provide read scalability and high availability. They allow offloading read traffic from the primary database instance.
AWS Global Accelerator: Provides improved availability and performance by routing user requests to the optimal endpoint using AWS's global network.
"Aurora Replicas can be used to increase read scalability and availability."
--- Aurora Replicas
"AWS Global Accelerator improves the availability and performance of your applications with global users."
--- AWS Global Accelerator
Together, these enhance both the database and network layer resilience.
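On the database side, adding an Aurora Replica is simply creating another DB instance inside the existing cluster; Aurora then serves reads through the cluster's reader endpoint. A minimal boto3 sketch, with a hypothetical cluster identifier and instance class:

```python
import boto3

rds = boto3.client("rds")

# Add an Aurora Replica (reader) to an existing cluster.
# "news-aurora-cluster" is a hypothetical cluster identifier.
rds.create_db_instance(
    DBInstanceIdentifier="news-aurora-reader-1",
    DBClusterIdentifier="news-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",  # must match the cluster's engine
)

# Point read traffic at the cluster's reader endpoint so reads are
# load-balanced across all replicas.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="news-aurora-cluster"
)["DBClusters"][0]
print(cluster["ReaderEndpoint"])
```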
A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.
The company wants to optimize the cost of archiving the data.
Answer : B
The SELECT INTO OUTFILE S3 feature allows you to export Amazon Aurora MySQL data directly to Amazon S3 with minimal operational overhead. This method is efficient and cost-effective for archiving historical data.
You can configure S3 Lifecycle rules to transition the exported data to lower-cost storage (e.g., S3 Glacier or S3 Standard-IA) and eventually delete it after 5 years.
No need for additional ETL tools like Glue or DataBrew unless complex transformations are required.
Exporting data from Aurora MySQL to S3
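As a concrete illustration, the export is a single SQL statement run against the cluster; the cluster must have an IAM role with S3 write access associated via the aurora_select_into_s3_role (or aws_default_s3_role) cluster parameter. A minimal sketch using the pymysql driver, with hypothetical table, bucket, and connection details; the exact S3 URI form (s3:// vs. s3-region://) depends on the Aurora MySQL version:

```python
import pymysql  # assumes: pip install pymysql

# Hypothetical connection details for the Aurora MySQL cluster.
conn = pymysql.connect(
    host="my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
    user="admin", password="...", database="orders_db",
)

# Export inventory rows older than 5 years directly to S3. The cluster's
# associated IAM role (aurora_select_into_s3_role) must allow writes
# to the target bucket.
export_sql = """
    SELECT * FROM inventory
    WHERE updated_at < DATE_SUB(NOW(), INTERVAL 5 YEAR)
    INTO OUTFILE S3 's3://my-archive-bucket/inventory/export'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n';
"""
with conn.cursor() as cur:
    cur.execute(export_sql)
conn.commit()
```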
A company is planning to deploy a data processing platform on AWS. The data processing platform is based on PostgreSQL. The company stores the data that the platform must process on premises.
To comply with regulations, the company must not migrate the data to the cloud. However, the company wants to use AWS managed data analytics solutions.
Which solution will meet these requirements?
Answer : C
AWS Outposts extends AWS infrastructure and services to on-premises locations. Running Amazon EMR on Outposts allows for processing data that resides locally while benefiting from the managed services of EMR. This enables compliance with data residency requirements and provides scalability and manageability for analytics.
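Operationally, an EMR cluster on Outposts is launched like any other EMR cluster; the on-premises placement comes from targeting a subnet that resides on the Outpost. A minimal boto3 sketch with hypothetical identifiers (release label, instance types, and subnet ID are illustrative only):

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launching EMR on an Outpost: Ec2SubnetId below is assumed to be a
# subnet created on the Outpost, which keeps processing on premises
# while EMR itself remains a managed service.
response = emr.run_job_flow(
    Name="onprem-postgres-analytics",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "Ec2SubnetId": "subnet-0outpost0example",  # hypothetical Outpost subnet
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```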
A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.
The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days.
Which solution will meet these requirements MOST cost-effectively?
Answer : C
Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still providing high availability and durability.
Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and reduces storage costs significantly.
Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures automatic management of the data lifecycle, moving files to a lower-cost storage class without manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is no longer needed.
Amazon S3 Storage Classes
S3 Lifecycle Configuration
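A minimal boto3 sketch of the lifecycle rule described above, with a hypothetical bucket name; 4 years is expressed as roughly 1,460 days:

```python
import boto3

s3 = boto3.client("s3")

# Transition to Standard-IA after 30 days, delete after 4 years (~1460 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-files-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "ia-after-30d-expire-4y",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 1460},
        }]
    },
)
```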
A company runs an application on Amazon EC2 instances. The instances need to access an Amazon RDS database by using specific credentials. The company uses AWS Secrets Manager to store the credentials that the EC2 instances must use. Which solution will meet this requirement?
Answer : A
IAM Role: Attaching an IAM role to an EC2 instance profile is a secure way to manage permissions without embedding credentials.
AWS Secrets Manager: Grants controlled access to database credentials and automatically rotates secrets if configured.
Identity-Based Policy: Ensures the IAM role only has access to specific secrets, enhancing security.
AWS Secrets Manager Documentation
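With the role attached to the instance profile, application code on the instance can fetch the credentials at runtime with no keys on disk; boto3 picks up the role's credentials automatically. A minimal sketch, assuming a hypothetical secret name and a JSON secret value:

```python
import json
import boto3

# The instance profile's role supplies AWS credentials automatically;
# nothing is hard-coded on the instance.
secrets = boto3.client("secretsmanager")

# "prod/app/rds-credentials" is a hypothetical secret name.
resp = secrets.get_secret_value(SecretId="prod/app/rds-credentials")
creds = json.loads(resp["SecretString"])

db_user, db_password = creds["username"], creds["password"]
```

The identity-based policy on the role should grant secretsmanager:GetSecretValue on that secret's ARN only, which is what keeps the role scoped to the specific secret.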
A company hosts an application in an Amazon EC2 Auto Scaling group. The company has observed that during periods of high demand, new instances take too long to join the Auto Scaling group and serve the increased demand. The company determines that the root cause of the issue is the long boot time of the instances in the Auto Scaling group. The company needs to reduce the time required to launch new instances to respond to demand. Which solution will meet this requirement?
Answer : B
A warm pool is an Auto Scaling feature that keeps instances in a pre-initialized state so they can quickly join the active group when scaling is required. This reduces the time needed for instance bootstrapping and makes new capacity available almost instantly. Option A only increases capacity limits but does not address slow boot times. Option C merely extends grace periods without solving the delay. Option D forces overprovisioning, which is wasteful and not aligned with cost optimization. Using a warm pool (B) directly addresses the problem by reducing response time to scaling events.
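A minimal boto3 sketch of adding a warm pool to an existing group, with a hypothetical group name; instances in the pool are kept stopped (no compute charges) but fully initialized, so a scale-out only has to start them rather than boot and bootstrap from scratch:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# "web-asg" is a hypothetical Auto Scaling group name.
autoscaling.put_warm_pool(
    AutoScalingGroupName="web-asg",
    PoolState="Stopped",  # pre-initialized, but not billed for compute
    MinSize=2,            # always keep at least 2 instances warmed
    InstanceReusePolicy={"ReuseOnScaleIn": True},
)
```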