[Design Resilient Architectures]
A company is building an application in the AWS Cloud. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 for the DNS.
The company needs a managed solution with proactive engagement to detect and protect against DDoS attacks.
Which solution will meet these requirements?
Answer : D
AWS Shield Advanced is designed to provide enhanced protection against DDoS attacks with proactive engagement and response capabilities, making it the best solution for this scenario.
AWS Shield Advanced: This service provides advanced protection against DDoS attacks. It includes detailed attack diagnostics, 24/7 access to the AWS DDoS Response Team (DRT), and financial protection against DDoS-related scaling charges. Shield Advanced also integrates with Route 53 and the Application Load Balancer (ALB) to ensure comprehensive protection for your web applications.
Route 53 and ALB Protection: By adding your Route 53 hosted zones and ALB resources to AWS Shield Advanced, you ensure that these components are covered under the enhanced protection plan. Shield Advanced actively monitors traffic and provides real-time attack mitigation, minimizing the impact of DDoS attacks on your application.
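The sketch below illustrates how this could look with boto3; the resource ARNs, protection names, and contact details are placeholders, and the account is assumed to already have an active Shield Advanced subscription.

```python
import boto3

shield = boto3.client("shield")

# Protect the Application Load Balancer (placeholder ARN).
shield.create_protection(
    Name="web-alb-protection",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "loadbalancer/app/web-alb/50dc6c495c0c9188",
)

# Protect the Route 53 hosted zone (placeholder ARN).
shield.create_protection(
    Name="web-dns-protection",
    ResourceArn="arn:aws:route53:::hostedzone/Z1234567890ABC",
)

# Register emergency contacts (placeholders), then turn on proactive
# engagement so the Shield Response Team reaches out during detected events.
shield.associate_proactive_engagement_details(
    EmergencyContactList=[{
        "EmailAddress": "security@example.com",
        "PhoneNumber": "+15555550100",
        "ContactNotes": "24/7 on-call security team",
    }]
)
shield.enable_proactive_engagement()
```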
Why Not Other Options?:
Option A (AWS Config): AWS Config is a configuration management service and does not provide DDoS protection or detection capabilities.
Option B (AWS WAF): While AWS WAF can help mitigate some types of attacks, it does not provide the comprehensive DDoS protection and proactive engagement offered by Shield Advanced.
Option C (GuardDuty): GuardDuty is a threat detection service that identifies potentially malicious activity within your AWS environment, but it is not specifically designed to provide DDoS protection.
AWS Reference:
AWS Shield Advanced - Overview of AWS Shield Advanced and its DDoS protection capabilities.
Integrating AWS Shield Advanced with Route 53 and ALB - Detailed guidance on how to protect Route 53 and ALB with AWS Shield Advanced.
[Design High-Performing Architectures]
An ecommerce company is experiencing an increase in user traffic. The company's store is deployed on Amazon EC2 instances as a two-tier web application consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing significant delays in sending timely marketing and order confirmation email to users. The company wants to reduce the time it spends resolving complex email delivery issues and minimize operational overhead.
What should a solutions architect do to meet these requirements?
Answer : B
Amazon SES is a cost-effective and scalable email service that enables businesses to send and receive email using their own email addresses and domains. Configuring the web instance to send email through Amazon SES is a simple and effective solution that can reduce the time spent resolving complex email delivery issues and minimize operational overhead.
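As a minimal sketch (the sender identity and recipient address are placeholders, and the sending domain is assumed to already be verified in SES), the web tier could send a confirmation email with boto3 like this:

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Send an order-confirmation email through Amazon SES.
ses.send_email(
    Source="orders@example.com",                      # verified SES identity (placeholder)
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order confirmation"},
        "Body": {"Text": {"Data": "Thank you for your order. It is on its way!"}},
    },
)
```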
[Design Resilient Architectures]
A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application's availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?
Answer : C
Application availability: An NLB cannot assure the availability of the application itself because it operates at the network and transport layers and has no awareness of the application. Its health checks typically confirm only that a target responds at the TCP level, so application-level HTTP errors go undetected. An ALB operates at layer 7 and can determine availability with an HTTP health check against a specific path, verifying that the response code matches the expected values. Replacing the NLB with an ALB and enabling HTTP health checks allows unhealthy instances to be detected and replaced by the Auto Scaling group automatically, without custom scripts or code.
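A sketch of the idea with boto3 (the target group name, VPC ID, health check path, and Auto Scaling group name are assumptions): the ALB target group performs HTTP health checks against the application, and the Auto Scaling group uses ELB health checks so failing instances are replaced automatically.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Target group for the ALB with an HTTP health check against the web service.
elbv2.create_target_group(
    Name="web-service-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",            # assumed application health endpoint
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,
    Matcher={"HttpCode": "200-299"},      # only 2xx responses count as healthy
)

# Have the Auto Scaling group use the load balancer health check so it
# terminates and replaces instances that fail the HTTP check.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-service-asg",   # placeholder ASG name
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```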
[Design Secure Architectures]
A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is on each instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL encryption and decryption is causing the compute capacity of the web servers to reach their maximum limit.
What should a solutions architect do to increase the application's performance?
Answer : D
https://aws.amazon.com/certificate-manager/:
'With AWS Certificate Manager, you can quickly request a certificate, deploy it on ACM-integrated AWS resources, such as Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. It also enables you to create private certificates for your internal resources and manage the certificate lifecycle centrally.'
Importing the company's certificate into ACM and terminating SSL at an Application Load Balancer moves the encryption and decryption work off the EC2 instances, freeing their compute capacity for the application itself.
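A hedged sketch of the flow with boto3 (the certificate files, load balancer ARN, and target group ARN are placeholders): import the existing certificate into ACM, then attach it to an HTTPS listener on the ALB so TLS terminates at the load balancer.

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Import the company's existing certificate and private key into ACM.
with open("certificate.pem", "rb") as cert_file, open("private-key.pem", "rb") as key_file:
    imported = acm.import_certificate(
        Certificate=cert_file.read(),
        PrivateKey=key_file.read(),
    )

# Terminate TLS on the ALB with an HTTPS listener that uses the imported
# certificate (placeholder ARNs).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "loadbalancer/app/web-alb/50dc6c495c0c9188",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": imported["CertificateArn"]}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                          "targetgroup/web-targets/73e2d6bc24d8a067",
    }],
)
```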
[Design Secure Architectures]
A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.
Which storage solution meets these requirements?
Answer : B
Amazon S3 Intelligent-Tiering is the most cost-effective solution for this scenario, providing both high availability and durability while adjusting automatically to changing access patterns. It moves data across two access tiers: one optimized for frequent access and another for infrequent access, based on usage patterns. This tiering ensures that the company avoids paying for unused storage while also keeping frequently accessed data in a more accessible tier.
Key AWS references and benefits of S3 Intelligent-Tiering:
High Durability and Availability: Amazon S3 offers 99.999999999% durability and 99.9% availability for objects stored, ensuring data is always protected.
Automatic Tiering: Data is automatically moved between tiers based on access patterns, making it ideal for workloads with unpredictable or variable access patterns.
No Retrieval Fees: Unlike S3 One Zone-IA or Glacier, there are no retrieval fees, making this more cost-effective in scenarios where access patterns vary over time.
AWS Documentation: According to the AWS Well-Architected Framework under the Cost Optimization Pillar, S3 Intelligent-Tiering is recommended for storage when access patterns change over time, as it minimizes costs while maintaining availability.
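As a small illustration (the bucket and key names are placeholders), objects can be written directly into the Intelligent-Tiering storage class with boto3:

```python
import boto3

s3 = boto3.client("s3")

# Store the object in S3 Intelligent-Tiering so it is moved between access
# tiers automatically based on how often it is read.
s3.put_object(
    Bucket="example-user-data-bucket",       # placeholder bucket
    Key="users/12345/profile.json",          # placeholder key
    Body=b'{"name": "example"}',
    StorageClass="INTELLIGENT_TIERING",
)
```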
[Design High-Performing Architectures]
A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range.
Which design change should the solutions architect recommend?
Answer : C
The most suitable design change for the company's application is to request strongly consistent reads for the table. This change will ensure that the requests to the table return the latest data, reflecting the updates from all prior write operations.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB supports two types of read consistency: eventually consistent reads and strongly consistent reads. By default, DynamoDB uses eventually consistent reads, unless users specify otherwise [1].
Eventually consistent reads are reads that may not reflect the results of a recently completed write operation. The response might not include the changes because of the latency of propagating the data to all replicas. If users repeat their read request after a short time, the response should return the updated data. Eventually consistent reads are suitable for applications that do not require up-to-date data or can tolerate eventual consistency [1].
Strongly consistent reads are reads that return a result that reflects all writes that received a successful response prior to the read. Users can request a strongly consistent read by setting the ConsistentRead parameter to true in their read operations, such as GetItem, Query, or Scan. Strongly consistent reads are suitable for applications that require up-to-date data or cannot tolerate eventual consistency [1].
The other options are not correct because they do not address the issue of read consistency or are not relevant for the use case. Adding read replicas to the table is not correct because this option is not supported by DynamoDB. Read replicas are copies of a primary database instance that can serve read-only traffic and improve availability and performance. Read replicas are available for some relational database services, such as Amazon RDS or Amazon Aurora, but not for DynamoDB [2]. Using a global secondary index (GSI) is not correct because this option is not related to read consistency. A GSI is an index that has a partition key and an optional sort key that are different from those on the base table. A GSI allows users to query the data in different ways, with eventual consistency [3]. Requesting eventually consistent reads for the table is not correct because this option is already the default behavior of DynamoDB and does not solve the problem of requests not returning the latest data.
References:
1. Read consistency - Amazon DynamoDB
2. Working with read replicas - Amazon Relational Database Service
3. Working with global secondary indexes - Amazon DynamoDB
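A minimal sketch of a strongly consistent read with boto3 (the table name and key schema are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# ConsistentRead=True returns a result that reflects all writes that
# received a successful response before the read.
response = dynamodb.get_item(
    TableName="UserData",                    # placeholder table
    Key={"UserId": {"S": "12345"}},          # placeholder key schema
    ConsistentRead=True,
)
item = response.get("Item")
```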
[Design Cost-Optimized Architectures]
A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?
Answer : A
'Typically, you configure buckets to be Requester Pays buckets when you want to share data but not incur charges associated with others accessing the data. For example, you might use Requester Pays buckets when making available large datasets, such as zip code directories, reference data, geospatial information, or web crawling data.'
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html
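A minimal sketch of enabling Requester Pays on the bucket with boto3 (the bucket name is a placeholder), so the requesting account pays the request and data-transfer costs:

```python
import boto3

s3 = boto3.client("s3")

# Turn on Requester Pays so accounts that download the dataset are billed
# for the requests and data transfer instead of the bucket owner.
s3.put_bucket_request_payment(
    Bucket="survey-data-bucket",             # placeholder bucket
    RequestPaymentConfiguration={"Payer": "Requester"},
)
```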