A company is moving data from an on-premises data center to the AWS Cloud. The company must store all its data in an Amazon S3 bucket. To comply with regulations, the company must also ensure that the data will be protected against overwriting indefinitely.
Which solution will ensure that the data in the S3 bucket cannot be overwritten?
Answer : A
S3 Versioning protects data from being overwritten: with versioning enabled, a PUT to an existing key creates a new object version rather than replacing the old one, so earlier data is always recoverable.
''When you enable versioning, Amazon S3 stores every version of every object. With versioning, you can preserve, retrieve, and restore every version of every object stored in an S3 bucket.''
--- S3 Versioning
Because versioning does not expire, this satisfies the regulatory requirement to protect the data against overwriting indefinitely.
Incorrect Options:
B: Object Lock with retention period is time-bound, not indefinite.
C: Legal hold blocks deletion but doesn't directly prevent overwriting.
D: Storage Lens is analytics, not protection.
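As a minimal sketch, enabling versioning comes down to a single API payload (the shape accepted by boto3's put_bucket_versioning); the bucket name below is a placeholder:

```python
# Sketch: request payload that enables S3 Versioning on a bucket.
# Once Status is "Enabled", a PUT to an existing key creates a new
# object version instead of replacing the old object, so prior data
# is never lost to an overwrite.
versioning_request = {
    "Bucket": "example-compliance-bucket",  # hypothetical bucket name
    "VersioningConfiguration": {"Status": "Enabled"},
}

# e.g. boto3.client("s3").put_bucket_versioning(**versioning_request)
```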
A company stores customer data in a multitenant Amazon S3 bucket. Each customer's data is stored in a prefix that is unique to the customer. The company needs to migrate data for specific customers to a new, dedicated S3 bucket that is in the same AWS Region as the source bucket. The company must preserve object metadata such as creation date and version IDs.
After the migration is finished, the company must delete the source data for the migrated customers from the original multitenant S3 bucket.
Which combination of solutions will meet these requirements with the LEAST overhead? (Select THREE.)
Answer : A, B, F
The combination of these solutions provides an efficient and automated way to migrate data while preserving metadata and ensuring cleanup:
Create a new S3 bucket with versioning enabled (Option A) to preserve object metadata such as version IDs during migration.
Use S3 Batch Operations (Option B) to efficiently copy data from specific prefixes in the source bucket to the destination bucket, ensuring minimal overhead.
Use an S3 Lifecycle policy (Option F) to automatically delete the data from the source bucket after it has been migrated, reducing manual intervention.
Option C (CopyObject API): This approach would require more manual scripting and effort.
Option D (Same-Region Replication): SRR is designed for ongoing replication, not for one-time migrations.
Option E (DataSync): DataSync adds more complexity than necessary for this task.
AWS Reference:
S3 Batch Operations
S3 Lifecycle Policies
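The cleanup step can be sketched as a lifecycle configuration scoped to a migrated customer's prefix (this is the payload shape accepted by put_bucket_lifecycle_configuration; the prefix, rule ID, and timing below are illustrative):

```python
# Sketch: lifecycle rule that deletes objects under one migrated
# customer's prefix. S3 Batch Operations would perform the copy first;
# this rule then removes the source data without manual intervention.
lifecycle_config = {
    "Rules": [
        {
            "ID": "delete-migrated-customer-a",   # hypothetical rule name
            "Filter": {"Prefix": "customer-a/"},  # hypothetical customer prefix
            "Status": "Enabled",
            "Expiration": {"Days": 1},
            # In a versioned bucket, also clean up noncurrent versions:
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
        }
    ]
}

# e.g. s3.put_bucket_lifecycle_configuration(
#          Bucket="multitenant-bucket", LifecycleConfiguration=lifecycle_config)
```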
A company collects data from sensors. The company needs a cloud-based solution to store and transform the sensor data to make critical decisions. The solution must store the data for up to 2 days. After 2 days, the solution must delete the data. The company needs to use the transformed data in an automated workflow that has manual approval steps.
Which solution will meet these requirements?
Answer : A
Amazon SQS with a 2-day retention ensures the data lives just as long as needed. EventBridge Pipes allow direct integration between event producers and consumers, with optional filtering and transformation. AWS Step Functions supports manual approval steps, which fits the workflow requirement perfectly.
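The 2-day retention window maps directly to the SQS MessageRetentionPeriod attribute, which is expressed in seconds (the allowed range is 60 seconds to 14 days). A sketch of the queue-creation payload, with a placeholder queue name:

```python
# Sketch: SQS queue attributes for a 2-day retention window.
# After 172,800 seconds, SQS deletes unconsumed messages automatically,
# which satisfies the "delete after 2 days" requirement with no extra code.
two_days_in_seconds = 2 * 24 * 60 * 60  # 172800

queue_request = {
    "QueueName": "sensor-data-queue",  # hypothetical queue name
    "Attributes": {"MessageRetentionPeriod": str(two_days_in_seconds)},
}

# e.g. boto3.client("sqs").create_queue(**queue_request)
```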
=============
A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.
The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days.
Which solution will meet these requirements MOST cost-effectively?
Answer : C
Amazon S3 Standard-IA: This storage class is designed for data that is accessed less frequently but requires rapid access when needed. It offers lower storage costs compared to S3 Standard while still providing high availability and durability.
Access Patterns: Since the files are frequently accessed in the first 30 days and rarely accessed afterward, transitioning them to S3 Standard-IA after 30 days aligns with their access patterns and reduces storage costs significantly.
Lifecycle Policy: Implementing a lifecycle policy to transition the files to S3 Standard-IA ensures automatic management of the data lifecycle, moving files to a lower-cost storage class without manual intervention. Deleting the files after 4 years further optimizes costs by removing data that is no longer needed.
Amazon S3 Storage Classes
S3 Lifecycle Configuration
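The lifecycle policy described above can be sketched as a single rule: transition to S3 Standard-IA after 30 days, then expire after roughly 4 years (1,460 days, assuming 365-day years). This is the payload shape accepted by put_bucket_lifecycle_configuration:

```python
# Sketch: lifecycle rule matching the stated access pattern.
# Objects stay in S3 Standard for the frequently accessed first 30 days,
# move to the cheaper STANDARD_IA class afterward (still immediately
# accessible), and are deleted once the 4-year retention ends.
lifecycle_config = {
    "Rules": [
        {
            "ID": "standard-ia-after-30-days",  # hypothetical rule name
            "Filter": {"Prefix": ""},           # apply to every object
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
            "Expiration": {"Days": 4 * 365},    # ~4 years
        }
    ]
}
```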
A solutions architect needs to design a system to process incoming work items immediately. Processing can take up to 30 minutes and involves calling external APIs, executing multiple states, and storing intermediate states.
The solution must scale with variable workloads and minimize operational overhead.
Which combination of steps meets these requirements? (Select TWO.)
Answer : B, E
AWS Step Functions is the recommended service for orchestrating multi-step, long-running workflows with state tracking, retries, and external API calls. It reduces operational overhead by eliminating the need for custom orchestration logic.
API Gateway receiving work items and sending them to SQS (Option E) provides buffering, elasticity, and decoupling, ensuring immediate ingestion regardless of backend load.
Option A forces Lambda to handle long execution paths; Lambda's 15-minute maximum execution time makes it unsuitable for 30-minute, multi-state workflows. Option C triggers Lambda directly without buffering. Option D uses fixed EC2 instances, which do not scale dynamically.
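The orchestration piece can be sketched as an Amazon States Language (ASL) definition; the state names and resource ARNs below are placeholders, but the shape shows why Step Functions fits: per-task timeouts well beyond Lambda's limit, built-in retries, and state passed between steps automatically.

```python
import json

# Sketch: ASL definition for a two-step workflow that calls an external
# API and then persists intermediate state. Resource ARNs are placeholders.
state_machine = {
    "StartAt": "CallExternalApi",
    "States": {
        "CallExternalApi": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",  # placeholder
            "TimeoutSeconds": 1800,  # allow the full 30 minutes
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}
            ],
            "Next": "StoreIntermediateState",
        },
        "StoreIntermediateState": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",  # placeholder
            "End": True,
        },
    },
}

definition_json = json.dumps(state_machine)  # passed to create_state_machine
```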
=====================================================
A company needs a solution to give customers the ability to upload encrypted files to a directory in an Amazon S3 bucket by using SFTP. After customers upload files, the solution must automatically decrypt the files and move them to a second directory within the same S3 bucket for downstream processing.
The solution must not require authentication services. The solution must fully automate all post-upload operations and require minimal ongoing operational overhead.
Which solution will meet these requirements? (Select THREE.)
Answer : A, C, E
The correct answers are A, C, and E because the company needs a fully managed solution for SFTP uploads to Amazon S3, followed by automatic decryption and movement of files to another directory with minimal operational overhead. AWS Transfer Family is the best fit because it provides a managed SFTP endpoint directly integrated with Amazon S3. Configuring the S3 bucket as the home directory enables customers to upload files without the company needing to manage its own file transfer servers.
The requirement to avoid separate authentication services and minimize operational work is well served by native AWS Transfer Family workflows. A workflow can automatically run post-upload steps on files as they arrive. The DECRYPT action is specifically designed to decrypt uploaded encrypted files as part of the managed workflow. After decryption, the COPY action can place the processed file into the second directory in the same S3 bucket for downstream processing.
Option B is less appropriate because using S3 events and Lambda adds custom orchestration where a native Transfer Family workflow already handles the need more simply. Option D is incorrect because polling with an external script introduces unnecessary infrastructure and operational overhead. Option F is also incorrect because AWS Batch polling and custom decryption logic is far more complex than a managed file-transfer workflow.
AWS best practices favor managed services and native workflow features to reduce custom code and infrastructure management. Therefore, the best solution is to use AWS Transfer Family for SFTP uploads, along with Transfer Family workflow DECRYPT and COPY actions to automate the full post-upload process.
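A hedged sketch of the Steps payload for a Transfer Family managed workflow (the shape accepted by the CreateWorkflow API): bucket names, keys, and the exact nested field names should be verified against current AWS documentation before use.

```python
# Sketch: Transfer Family workflow with a DECRYPT step followed by a
# COPY step. All names, buckets, and keys are hypothetical placeholders.
workflow_steps = [
    {
        "Type": "DECRYPT",
        "DecryptStepDetails": {
            "Name": "decrypt-upload",        # hypothetical step name
            "Type": "PGP",
            "SourceFileLocation": "${original.file}",
            "DestinationFileLocation": {
                "S3FileLocation": {
                    "Bucket": "customer-bucket",  # placeholder
                    "Key": "decrypted/",
                }
            },
        },
    },
    {
        "Type": "COPY",
        "CopyStepDetails": {
            "Name": "move-to-processing",    # hypothetical step name
            "DestinationFileLocation": {
                "S3FileLocation": {
                    "Bucket": "customer-bucket",  # placeholder
                    "Key": "processing/",
                }
            },
            "OverwriteExisting": "TRUE",
        },
    },
]

# e.g. boto3.client("transfer").create_workflow(
#          Description="decrypt-and-move", Steps=workflow_steps)
```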
A company sets up an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect must design a solution to provide access to the accounts for several thousand employees. The company has an existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS.
Which solution will meet these requirements?
Answer : C
AWS IAM Identity Center:
IAM Identity Center provides centralized access management for multiple AWS accounts within an organization and integrates seamlessly with existing identity providers (IdPs) through SAML 2.0 federation.
It allows users to authenticate using their existing IdP credentials and gain access to AWS resources without the need to create and manage separate IAM users in each account.
IAM Identity Center also simplifies provisioning and de-provisioning users, as it can automatically synchronize users and groups from the external IdP to AWS, ensuring secure and managed access.
Integration with Existing IdP:
The solution involves configuring IAM Identity Center to connect to the company's IdP using SAML. This setup allows employees to log in with their existing credentials, reducing the complexity of managing separate AWS credentials.
Once connected, IAM Identity Center handles authentication and authorization, granting users access to the AWS accounts based on their assigned roles and permissions.
Why the Other Options Are Incorrect:
Option A: Creating separate IAM users for each employee is not scalable or efficient. Managing thousands of IAM users across multiple AWS accounts introduces unnecessary complexity and operational overhead.
Option B: Using AWS root users with synchronized passwords is a security risk and goes against AWS best practices. Root accounts should never be used for day-to-day operations.
Option D: AWS Resource Access Manager (RAM) is used for sharing AWS resources between accounts, not for federating access for users across accounts. It doesn't provide a solution for authentication via an external IdP.
AWS Reference:
AWS IAM Identity Center
SAML 2.0 Integration with AWS IAM Identity Center
By setting up IAM Identity Center and connecting it to the existing IdP, the company can efficiently manage access for thousands of employees across multiple AWS accounts with a high degree of operational efficiency and security. Therefore, Option C is the best solution.
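Once IAM Identity Center is connected to the IdP, access is granted by assigning a permission set to an IdP-synced group for each account. A sketch of the request shape for the sso-admin CreateAccountAssignment API; every ARN and ID below is a placeholder:

```python
# Sketch: assigning a permission set to a group (synchronized from the
# external IdP) for one of the organization's member accounts.
# All identifiers are hypothetical.
assignment_request = {
    "InstanceArn": "arn:aws:sso:::instance/ssoins-EXAMPLE",
    "TargetId": "111122223333",           # one of the 10 member accounts
    "TargetType": "AWS_ACCOUNT",
    "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    "PrincipalType": "GROUP",             # group synced from the IdP
    "PrincipalId": "idp-group-guid",      # placeholder group identifier
}

# e.g. boto3.client("sso-admin").create_account_assignment(**assignment_request)
```

Repeating the assignment per account (or automating it) scales to thousands of employees because membership is managed in the IdP, not in AWS.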