Amazon SAP-C02 AWS Certified Solutions Architect - Professional Exam Practice Test

Page: 1 / 14
Total 435 questions
Question 1

A company needs to gather data from an experiment in a remote location that does not have internet connectivity. During the experiment, sensors that are connected to a local network will generate 6 TB of data in a proprietary format over the course of 1 week. The sensors can be configured to upload their data files to an FTP server periodically, but the sensors do not have their own FTP server. The sensors also do not support other protocols. The company needs to collect the data centrally and move the data to object storage in the AWS Cloud as soon as possible after the experiment.

Which solution will meet these requirements?



Answer : C

For collecting data from remote sensors without internet connectivity, using an AWS Snowcone device with an Amazon EC2 instance running an FTP server presents a practical solution. This setup allows the sensors to upload data to the EC2 instance via FTP, and after the experiment, the Snowcone device can be returned to AWS for data ingestion into Amazon S3. This approach minimizes operational complexity and ensures efficient data transfer to AWS for further processing or storage.
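
As a rough illustration only (the question itself includes no code), ordering the device might look like the following boto3 sketch. The bucket name, IAM role ARN, shipping address ID, and FTP-server AMI ID are placeholders, and the exact job options depend on the account setup:

import boto3

snowball = boto3.client("snowball", region_name="us-east-1")  # region is an assumption

# Order a Snowcone import job that ships the device with a pre-staged FTP-server AMI
# and imports the collected files into an S3 bucket when the device is returned.
response = snowball.create_job(
    JobType="IMPORT",
    SnowballType="SNC1_HDD",                          # Snowcone (HDD) device type
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-experiment-data"}      # placeholder bucket
        ],
        "Ec2AmiResources": [
            {"AmiId": "ami-0123456789abcdef0"}        # placeholder AMI that runs the FTP server
        ],
    },
    AddressId="ADID00000000-0000-0000-0000-000000000000",              # placeholder shipping address
    RoleARN="arn:aws:iam::111122223333:role/SnowconeImportRole",       # placeholder IAM role
    Description="Remote experiment data collection",
)
print("Snowcone job ID:", response["JobId"])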


Question 2

A company has a Windows-based desktop application that is packaged and deployed to the users' Windows machines. The company recently acquired another company that has employees who primarily use machines with a Linux operating system. The acquiring company has decided to migrate and rehost the Windows-based desktop application to AWS.

All employees must be authenticated before they use the application. The acquiring company uses Active Directory on premises but wants a simplified way to manage access to the application on AWS for all the employees.

Which solution will rehost the application on AWS with the LEAST development effort?



Answer : C

Amazon AppStream 2.0 offers a streamlined solution for rehosting a Windows-based desktop application on AWS with minimal development effort. By creating an AppStream 2.0 image that includes the application and using an On-Demand fleet for streaming, the application becomes accessible from any device, including Linux machines. AppStream 2.0 user pools can be used for authentication, simplifying access management without the need for extensive changes to the application or infrastructure.
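
A minimal boto3 sketch of that setup is shown below; the image name, fleet and stack names, instance type, and user email are assumed placeholders, and error handling is omitted:

import boto3

appstream = boto3.client("appstream", region_name="us-east-1")  # region is an assumption

# On-Demand fleet that streams the packaged Windows application from a prebuilt image.
appstream.create_fleet(
    Name="desktop-app-fleet",
    ImageName="desktop-app-image",            # placeholder AppStream 2.0 image
    InstanceType="stream.standard.medium",
    FleetType="ON_DEMAND",
    ComputeCapacity={"DesiredInstances": 2},
)
appstream.create_stack(Name="desktop-app-stack")
appstream.associate_fleet(FleetName="desktop-app-fleet", StackName="desktop-app-stack")
appstream.start_fleet(Name="desktop-app-fleet")

# AppStream 2.0 user pool entry, so employees authenticate without extra directory integration work.
appstream.create_user(UserName="employee@example.com", AuthenticationType="USERPOOL")
appstream.batch_associate_user_stack(
    UserStackAssociations=[
        {
            "StackName": "desktop-app-stack",
            "UserName": "employee@example.com",
            "AuthenticationType": "USERPOOL",
        }
    ]
)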


Question 3

A company wants to use Amazon WorkSpaces in combination with thin client devices to replace aging desktops. Employees use the desktops to access applications that work with clinical trial data. Corporate security policy states that access to the applications must be restricted to only company branch office locations. The company is considering adding an additional branch office in the next 6 months.

Which solution meets these requirements with the MOST operational efficiency?



Answer : A

Utilizing an IP access control group rule with the list of public addresses from branch offices and associating it with the Amazon WorkSpaces directory is the most operationally efficient solution. This method ensures that access to WorkSpaces is restricted to specified locations, aligning with the corporate security policy. This approach offers simplicity and flexibility, especially with the potential addition of a new branch office, as updating the IP access control group is straightforward.
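
For illustration, assuming two example branch-office CIDR ranges, the configuration could be scripted roughly as follows with boto3; adding the future branch office is then a single update_rules_of_ip_group call:

import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")  # region is an assumption

# IP access control group listing the public address ranges of the branch offices (placeholders).
group = workspaces.create_ip_group(
    GroupName="branch-office-access",
    GroupDesc="Allow WorkSpaces access only from company branch offices",
    UserRules=[
        {"ipRule": "203.0.113.0/24", "ruleDesc": "Branch office 1"},
        {"ipRule": "198.51.100.0/24", "ruleDesc": "Branch office 2"},
    ],
)

# Associate the group with the WorkSpaces directory (placeholder directory ID).
workspaces.associate_ip_groups(DirectoryId="d-1234567890", GroupIds=[group["GroupId"]])

# When the new branch office opens, replace the rule list with one that includes its range.
workspaces.update_rules_of_ip_group(
    GroupId=group["GroupId"],
    UserRules=[
        {"ipRule": "203.0.113.0/24", "ruleDesc": "Branch office 1"},
        {"ipRule": "198.51.100.0/24", "ruleDesc": "Branch office 2"},
        {"ipRule": "192.0.2.0/24", "ruleDesc": "New branch office"},
    ],
)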


Question 4

A company needs to implement disaster recovery for a critical application that runs in a single AWS Region. The application's users interact with a web frontend that is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application writes to an Amazon RDS for MySQL DB instance. The application also outputs processed documents that are stored in an Amazon S3 bucket.

The company's finance team directly queries the database to run reports. During busy periods, these queries consume resources and negatively affect application performance.

A solutions architect must design a solution that will provide resiliency during a disaster. The solution must minimize data loss and must resolve the performance problems that result from the finance team's queries.

Which solution will meet these requirements?



Answer : C

Implementing a disaster recovery strategy that minimizes data loss and addresses performance issues involves creating a read replica of the RDS DB instance in a separate region and directing the finance team's queries to this replica. This solution alleviates the performance impact on the primary database. Using Amazon S3 Cross-Region Replication (CRR) ensures that processed documents are available in the disaster recovery region. In the event of a disaster, the read replica can be promoted to a standalone DB instance, and EC2 instances can be launched from pre-created AMIs to serve the web frontend, thereby ensuring resiliency and minimal data loss.
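
A simplified boto3 sketch of the main pieces follows; the identifiers, Regions, bucket names, and IAM role are placeholders, and both S3 buckets are assumed to already have versioning enabled (a prerequisite for CRR):

import boto3

# Cross-Region read replica, created from the DR Region and used for the finance team's reports.
rds_dr = boto3.client("rds", region_name="us-west-2")                  # DR Region is an assumption
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:app-db",  # placeholder ARN
    DBInstanceClass="db.r6g.large",
    SourceRegion="us-east-1",          # lets boto3 generate the required pre-signed URL
)

# Replicate the processed documents to a bucket in the DR Region.
s3 = boto3.client("s3", region_name="us-east-1")
s3.put_bucket_replication(
    Bucket="app-documents",            # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",          # placeholder IAM role
        "Rules": [
            {
                "ID": "replicate-documents",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::app-documents-dr"},
            }
        ],
    },
)

# During a disaster, promote the replica to a standalone, writable DB instance.
rds_dr.promote_read_replica(DBInstanceIdentifier="app-db-replica")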


Question 5

A company is designing an AWS environment for a manufacturing application. The application has been successful with customers, and the application's user base has increased. The company has connected the AWS environment to the company's on-premises data center through a 1 Gbps AWS Direct Connect connection. The company has configured BGP for the connection.

The company must update the existing network connectivity solution to ensure that the solution is highly available, fault tolerant, and secure.

Which solution will meet these requirements MOST cost-effectively?



Answer : A

To enhance the network connectivity solution's availability, fault tolerance, and security in a cost-effective manner, adding a dynamic private IP AWS Site-to-Site VPN as a secondary path is a viable option. This VPN serves as a resilient backup for the Direct Connect connection, ensuring continuous data flow even if the primary path fails. Implementing MACsec (Media Access Control Security) on the Direct Connect connection further secures the data in transit by providing encryption, thus addressing the security requirement. This solution strikes a balance between cost and operational efficiency, avoiding the higher expenses associated with provisioning an additional Direct Connect connection.
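
As a rough, hedged sketch of the two pieces (not a complete network build), the boto3 calls could look like the following; the connection ID, gateway IDs, ASN, and Secrets Manager ARN are placeholders, and a private IP VPN additionally requires an existing transit gateway attachment for the Direct Connect connection:

import boto3

# Encrypt the Direct Connect link with MACsec by associating a pre-created CKN/CAK secret.
dx = boto3.client("directconnect", region_name="us-east-1")            # region is an assumption
dx.associate_mac_sec_key(
    connectionId="dxcon-ffabc123",                                      # placeholder connection ID
    secretARN="arn:aws:secretsmanager:us-east-1:111122223333:secret:macsec-ckn-cak",  # placeholder
)

# Add a BGP-based (dynamic) Site-to-Site VPN as a secondary path.
ec2 = boto3.client("ec2", region_name="us-east-1")
cgw = ec2.create_customer_gateway(
    BgpAsn=65010,                       # placeholder on-premises ASN
    PublicIp="203.0.113.10",            # placeholder customer gateway address
    Type="ipsec.1",
)
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    TransitGatewayId="tgw-0123456789abcdef0",                           # placeholder transit gateway
    Options={
        "StaticRoutesOnly": False,      # dynamic (BGP) routing
        # For a private IP VPN, also set OutsideIpAddressType="PrivateIpv4" and
        # TransportTransitGatewayAttachmentId to the Direct Connect attachment ID.
    },
)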


Question 6

A company wants to migrate an Amazon Aurora MySQL DB cluster from an existing AWS account to a new AWS account in the same AWS Region. Both accounts are members of the same organization in AWS Organizations.

The company must minimize database service interruption before the company performs DNS cutover to the new database.

Which migration strategy will meet this requirement?



Answer : B

The best migration strategy to meet the requirement of minimizing database service interruption before the DNS cutover is to use AWS DMS to migrate data between the two Aurora DB clusters. AWS DMS can perform continuous replication of data with high availability, and it can also consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3 [1]. AWS DMS supports homogeneous migrations, such as migrating from one Aurora MySQL DB cluster to another, as well as heterogeneous migrations between different database platforms [2]. AWS DMS also supports cross-account migrations, as long as the source and target databases are in the same AWS Region [3].
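
A condensed boto3 sketch of such a DMS setup is shown below; the endpoint hostnames, credentials, and table-mapping rule are placeholders, waiters and error handling are omitted, and in practice the replication instance would run in a subnet group with network access to both clusters:

import boto3, json

dms = boto3.client("dms", region_name="us-east-1")  # region is an assumption

instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="aurora-migration",
    ReplicationInstanceClass="dms.t3.medium",
    AllocatedStorage=50,
)["ReplicationInstance"]

source = dms.create_endpoint(
    EndpointIdentifier="aurora-source",
    EndpointType="source",
    EngineName="aurora",                                        # Aurora MySQL-compatible
    ServerName="source-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",   # placeholder
    Port=3306, Username="admin", Password="placeholder",
)["Endpoint"]

target = dms.create_endpoint(
    EndpointIdentifier="aurora-target",
    EndpointType="target",
    EngineName="aurora",
    ServerName="target-cluster.cluster-abc.us-east-1.rds.amazonaws.com",   # placeholder
    Port=3306, Username="admin", Password="placeholder",
)["Endpoint"]

# Full load plus change data capture (CDC) keeps the target in sync until the DNS cutover.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="aurora-full-load-and-cdc",
    SourceEndpointArn=source["EndpointArn"],
    TargetEndpointArn=target["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "1",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]}),
)["ReplicationTask"]

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)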

The other options are not optimal for the following reasons:

Option A: Taking a snapshot of the existing Aurora database and restoring it in the new account would require downtime during the snapshot and restore process, which could be significant for large databases. Moreover, any changes made to the source database after the snapshot was taken would not be replicated to the target database, resulting in data inconsistency [4].

Option C: Using AWS Backup to share an Aurora database backup from the existing AWS account to the new AWS account would have the same drawbacks as option A, as AWS Backup uses snapshots to create backups of Aurora databases.

Option D: Using AWS Application Migration Service to migrate data between the two Aurora DB clusters is not a valid option, as AWS Application Migration Service is designed to migrate applications, not databases, to AWS. AWS Application Migration Service can migrate applications from on-premises or other cloud environments to AWS, using agentless or agent-based methods.


[1] What Is AWS Database Migration Service? - AWS Database Migration Service

[2] Sources for Data Migration - AWS Database Migration Service

[3] AWS Database Migration Service FAQs

[4] Working with DB Cluster Snapshots - Amazon Aurora

[5] Backing Up and Restoring an Amazon Aurora DB Cluster - Amazon Aurora

[6] What is AWS Application Migration Service? - AWS Application Migration Service

Question 7

A company wants to design a disaster recovery (DR) solution for an application that runs in the company's data center. The application writes to an SMB file share and creates a copy on a second file share. Both file shares are in the data center. The application uses two types of files: metadata files and image files.

The company wants to store the copy on AWS. The company needs the ability to use SMB to access the data from either the data center or AWS if a disaster occurs. The copy of the data is rarely accessed but must be available within 5 minutes.

Which solution will meet these requirements MOST cost-effectively?



Answer : C

The correct solution is to use an Amazon S3 File Gateway to store the copy of the SMB file share on AWS. An S3 File Gateway enables on-premises applications to store and access objects in Amazon S3 using the SMB protocol. The S3 File Gateway can also be accessed from AWS using the SMB protocol, which provides the ability to use the data from either the data center or AWS if a disaster occurs. The S3 File Gateway supports tiering of data to different S3 storage classes based on the file type. This allows the company to optimize storage costs by using S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files, which are rarely accessed but must be available within 5 minutes, and S3 Glacier Deep Archive, the lowest-cost storage class and well suited to long-term retention of rarely accessed data, for the image files. This solution is the most cost-effective because it does not require any additional hardware, software, or replication services.
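
A hedged boto3 sketch of the storage side follows; the gateway ARN, IAM role, bucket name, and the assumption that image files live under an images/ prefix are placeholders, and the gateway itself must already be activated:

import boto3

storagegateway = boto3.client("storagegateway", region_name="us-east-1")  # region is an assumption

# SMB file share backed by S3, writing new objects to S3 Standard-IA by default.
storagegateway.create_smb_file_share(
    ClientToken="dr-share-token-1",
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",  # placeholder
    Role="arn:aws:iam::111122223333:role/FileGatewayRole",                            # placeholder
    LocationARN="arn:aws:s3:::dr-file-share",                                         # placeholder bucket
    DefaultStorageClass="S3_STANDARD_IA",
    Authentication="GuestAccess",       # ActiveDirectory could be used instead
)

# Lifecycle rule that moves the image files (assumed to sit under an images/ prefix)
# to S3 Glacier Deep Archive, while the metadata files stay in S3 Standard-IA.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="dr-file-share",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "images-to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": "images/"},
                "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)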

The other solutions are incorrect because they either use more expensive or unnecessary services or components, or they do not meet the requirements. For example:

Solution A is incorrect because it uses AWS Outposts with Amazon S3 storage, which is a very expensive and complex solution for the scenario in the question. AWS Outposts is a service that extends AWS infrastructure, services, APIs, and tools to virtually any data center, co-location space, or on-premises facility. It is designed for customers who need low latency and local data processing. Amazon S3 storage on Outposts provides a subset of S3 features and APIs to store and retrieve data on Outposts. However, this solution does not provide SMB access to the data on Outposts, which requires a Windows EC2 instance on Outposts as a file server. This adds more cost and complexity to the solution, and it does not provide the ability to access the data from AWS if a disaster occurs.

Solution B is incorrect because it uses Amazon FSx File Gateway and Amazon FSx for Windows File Server Multi-AZ file system that uses SSD storage, which are both more expensive and unnecessary services for the scenario in the question. Amazon FSx File Gateway is a service that enables on-premises applications to store and access data in Amazon FSx for Windows File Server using the SMB protocol. Amazon FSx for Windows File Server is a fully managed service that provides native Windows file shares with the compatibility, features, and performance that Windows-based applications rely on. However, this solution does not meet the requirements because it does not provide the ability to use different storage classes for the metadata files and image files, and it does not provide the ability to access the data from AWS if a disaster occurs. Moreover, using a Multi-AZ file system that uses SSD storage is overprovisioned and costly for the scenario in the question, which involves rarely accessed data that must be available within 5 minutes.

Solution D is incorrect because it uses an S3 File Gateway that uses S3 Standard-IA for both the metadata files and image files, which is not the most cost-effective solution for the scenario in the question. S3 Standard-IA is a storage class that offers high durability, availability, and performance for infrequently accessed data. However, it is more expensive than S3 Glacier Deep Archive, which is the lowest-cost storage class and suitable for long-term retention of data that is rarely accessed. Therefore, using S3 Standard-IA for the image files, which are likely to be larger and more numerous than the metadata files, is not optimal for the storage costs.


What is S3 File Gateway?

Using Amazon S3 storage classes with S3 File Gateway

Accessing your file shares from AWS
