[Design Secure Architectures]
A company is performing a security review of its Amazon EMR API usage. The company's developers use an integrated development environment (IDE) that is hosted on Amazon EC2 instances. The IDE is configured to authenticate users to AWS by using access keys. Traffic between the company's EC2 instances and EMR cluster uses public IP addresses.
A solutions architect needs to improve the company's overall security posture. The solutions architect needs to reduce the company's use of long-term credentials and to limit the amount of communication that uses public IP addresses.
Which combination of steps will MOST improve the security of the company's architecture? (Select TWO.)
Answer : B, D
[Design Secure Architectures]
A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes.
Which solution will meet these requirements with the LEAST operational overhead?
Answer : C
S3 Storage Lens is a cloud storage analytics feature that provides organization-wide visibility into object storage usage and activity across multiple AWS accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart upload object count as one of the metrics that it collects and displays on an interactive dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet format to an S3 bucket for further analysis. This solution will meet the requirements with the least operational overhead, as it does not require any code development or policy changes.
Reference 1 explains how to use S3 Storage Lens to gain insights into S3 storage usage and activity.
Reference 2 describes the concept and benefits of multipart uploads.
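Storage Lens itself is configured in the S3 console or through the S3 Control API, but for context, the following minimal boto3 sketch shows a per-bucket spot check of the same underlying objects that the Storage Lens incomplete-multipart-upload metric counts. The bucket name is a placeholder, and this is not the organization-wide Storage Lens reporting path the answer describes.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-orders-bucket"  # hypothetical bucket name

# list_multipart_uploads returns uploads that were started but never
# completed or aborted -- the objects behind the incomplete-multipart-upload
# metric that S3 Storage Lens reports.
paginator = s3.get_paginator("list_multipart_uploads")
for page in paginator.paginate(Bucket=BUCKET):
    for upload in page.get("Uploads", []):
        print(upload["Key"], upload["UploadId"], upload["Initiated"])
```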
[Design High-Performing Architectures]
A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code.
Which solution meets these requirements?
Answer : A
To convert an existing Single-AZ DB instance to a Multi-AZ deployment, use the 'Modify' option for your DB instance in the AWS Management Console.
https://aws.amazon.com/rds/features/multi-az/
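The console 'Modify' flow corresponds to the RDS ModifyDBInstance API. A minimal boto3 sketch is shown below; the instance identifier is a placeholder, and ApplyImmediately=False defers the conversion to the next maintenance window.

```python
import boto3

rds = boto3.client("rds")

# Convert an existing Single-AZ DB instance to a Multi-AZ deployment.
# The identifier is a hypothetical example for this sketch.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-postgres",
    MultiAZ=True,
    ApplyImmediately=False,  # apply during the next maintenance window
)
```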
[Design Resilient Architectures]
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture consists of web servers and application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The architecture includes an Amazon Aurora database cluster that extends across multiple Availability Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
Answer : D
This option is the most efficient because it deploys the web tier and the application tier to a second Region, which provides high availability and redundancy for the application. It also uses an Amazon Aurora global database, a feature that allows a single Aurora database to span multiple AWS Regions, deploying the database in the primary Region and the second Region for low-latency global reads and fast recovery from a Regional outage. Amazon Route 53 health checks with a failover routing policy to the second Region route traffic to healthy endpoints across Regions, and promoting the secondary to primary as needed preserves data consistency by allowing write operations in only one Region at a time. This solution meets the requirement of expanding globally while ensuring that the application has minimal downtime.

Option A is less efficient because it extends the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region, which could incur higher costs and complexity than deploying them separately. It uses an Aurora global database to deploy the database in the primary Region and the second Region, which is correct, but it does not use Amazon Route 53 health checks with a failover routing policy to the second Region, which could result in traffic being routed to unhealthy endpoints.

Option B correctly deploys the web tier and the application tier to a second Region and adds an Aurora PostgreSQL cross-Region Aurora Replica in the second Region, which provides read scalability across Regions. However, it does not use an Aurora global database, which provides faster replication and recovery than cross-Region replicas. It uses Amazon Route 53 health checks with a failover routing policy to the second Region, which is correct, but it does not promote the secondary to primary as needed, which could result in data inconsistency or loss.

Option C correctly deploys the web tier and the application tier to a second Region and creates an Aurora PostgreSQL database in the second Region, which provides data redundancy across Regions. However, it relies on AWS Database Migration Service (AWS DMS) to replicate the primary database to the second Region rather than an Aurora global database or cross-Region replicas, which provide faster replication and recovery. It uses Amazon Route 53 health checks with a failover routing policy to the second Region, which is correct.
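For illustration, the Route 53 piece of this design could look like the minimal boto3 sketch below, which upserts the PRIMARY half of a failover record pair pointing at the primary Region's load balancer. The hosted zone ID, health check ID, record name, and load balancer values are placeholders; a matching record with Failover set to SECONDARY would point at the second Region.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-region",
                    "Failover": "PRIMARY",
                    # Placeholder health check monitoring the primary Region's endpoint
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                    "AliasTarget": {
                        # Placeholder alias values for the primary Region's load balancer
                        "HostedZoneId": "Z35SXDOTRQ7X7K",
                        "DNSName": "primary-alb-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```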
[Design Resilient Architectures]
A company wants to use Amazon Elastic Container Service (Amazon ECS) to run its on-premises application in a hybrid environment. The application currently runs in containers on premises.
The company needs a single container solution that can scale in an on-premises, hybrid, or cloud environment. The company must run new application containers in the AWS Cloud and must use a load balancer for HTTP traffic.
Which combination of actions will meet these requirements? (Select TWO.)
Answer : A, B
Understanding the Requirement: The company needs a container solution that can scale across on-premises, hybrid, and cloud environments, with a load balancer for HTTP traffic.
Analysis of Options:
Fargate Launch Type and ECS Anywhere: Using Fargate for cloud-based containers and ECS Anywhere for on-premises containers provides a unified management experience across environments without needing to manage infrastructure.
Application Load Balancer: Suitable for HTTP traffic and can distribute requests to the ECS services, ensuring scalability and performance.
Network Load Balancer: Typically used for TCP/UDP traffic, not specifically optimized for HTTP traffic.
EC2 Launch Type for ECS and ECS Anywhere with Fargate: Involves managing infrastructure for EC2 instances, increasing operational overhead.
Best Combination of Solutions:
ECS with Fargate Launch Type and ECS Anywhere: This provides flexibility and scalability across hybrid environments with minimal operational overhead.
Application Load Balancer: Optimized for HTTP traffic, ensuring efficient load distribution and scaling for the ECS services.
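For context, a minimal boto3 sketch of the cloud half of this design follows: an ECS service on the Fargate launch type registered with an Application Load Balancer target group. The cluster, task definition, subnet, security group, and target group ARN are placeholders, and the on-premises containers would run on ECS Anywhere capacity (the EXTERNAL launch type) registered to the same cluster.

```python
import boto3

ecs = boto3.client("ecs")

# Run the new application containers on Fargate behind an ALB target group.
# All identifiers below are hypothetical placeholders for this sketch.
ecs.create_service(
    cluster="hybrid-app-cluster",
    serviceName="web-frontend",
    taskDefinition="web-frontend:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
            "containerName": "web",
            "containerPort": 80,
        }
    ],
)
```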
Amazon ECS on AWS Fargate
Amazon ECS Anywhere
Application Load Balancer
[Design Resilient Architectures]
A company's order system sends requests from clients to Amazon EC2 instances. The EC2 instances process the orders and then store the orders in a database on Amazon RDS. Users report that they must reprocess orders when the system fails. The company wants a resilient solution that can process orders automatically if a system outage occurs.
What should a solutions architect do to meet these requirements?
Answer : C
To meet the company's requirements of having a resilient solution that can process orders automatically in case of a system outage, the solutions architect needs to implement a fault-tolerant architecture. Based on the given scenario, a potential solution is to move the EC2 instances into an Auto Scaling group and configure the order system to send messages to an Amazon Simple Queue Service (Amazon SQS) queue. The EC2 instances can then consume messages from the queue.
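A minimal boto3 sketch of that decoupling pattern is shown below; the queue name is a placeholder and the order-processing logic is a stand-in.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]  # placeholder name

# Producer side: the order system enqueues the order instead of calling an
# EC2 instance directly, so the request survives an instance outage.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "1234"}')

# Consumer side (running on the Auto Scaling group instances): poll, process,
# then delete. If processing fails, the message becomes visible again and is
# retried automatically instead of requiring users to resubmit the order.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in resp.get("Messages", []):
    print("processing order:", message["Body"])  # stand-in for real processing
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```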
[Design Resilient Architectures]
A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak hours occur at the same time each day. Application users report slow application performance at the start of peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the application works properly at the start of peak hours.
Which solution will meet these requirements?
Answer : D
Understanding the Requirement: The application experiences slow performance at the start of peak hours, but normalizes after a few hours. The goal is to ensure proper performance at the beginning of peak hours.
Analysis of Options:
Application Load Balancer: Ensures proper traffic distribution but does not address the need to have sufficient instances running at the start of peak hours.
Dynamic Scaling Policy Based on Memory or CPU Utilization: While dynamic scaling reacts to usage metrics, it may not preemptively scale in anticipation of peak hours, leading to delays as new instances are launched and become available.
Scheduled Scaling Policy: This allows the Auto Scaling group to launch instances ahead of time, ensuring that enough instances are available and ready to handle the increased load right at the start of peak hours.
Best Solution:
Scheduled Scaling Policy: This approach ensures that new instances are launched and ready before peak hours begin, addressing the slow performance issue at the start of peak periods.
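A minimal boto3 sketch of such a scheduled action follows; the Auto Scaling group name, capacities, and cron expression (evaluated in UTC) are placeholder values chosen for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Pre-warm the group 30 minutes before the daily peak so instances are
# in service when peak traffic starts. All values are hypothetical.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="shopping-app-asg",
    ScheduledActionName="pre-peak-scale-out",
    Recurrence="30 7 * * *",  # 07:30 UTC daily, ahead of an 08:00 UTC peak
    MinSize=6,
    MaxSize=12,
    DesiredCapacity=8,
)
```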
Scheduled Scaling for Amazon EC2 Auto Scaling