A database specialist needs to enable IAM authentication on an existing Amazon Aurora PostgreSQL DB cluster. The database specialist already has modified the DB cluster settings, has created IAM and database credentials, and has distributed the credentials to the appropriate users.
What should the database specialist do next to establish the credentials for the users to use to log in to the DB cluster?
Answer : B
Explanation from Amazon documents:
Amazon Aurora PostgreSQL supports IAM authentication, a method of using AWS Identity and Access Management (IAM) to manage database access. IAM authentication allows you to use IAM users and roles to control who can access your Aurora PostgreSQL DB cluster, instead of relying on a traditional database username and password. IAM authentication also improves security by replacing long-lived database passwords with short-lived authentication tokens.
To enable IAM authentication on an existing Aurora PostgreSQL DB cluster, the database specialist needs to do the following:
Modify the DB cluster settings to enable IAM database authentication. This can be done using the AWS Management Console, the AWS CLI, or the RDS API.
Create IAM and database credentials for each user who needs access to the DB cluster. The IAM credentials consist of an access key ID and a secret access key. The database credential is a database user that has been granted the rds_iam role; the user name referenced in the IAM policy's rds-db:connect resource must match the database user name.
Distribute the IAM and database credentials to the appropriate users. The users must keep their credentials secure and not share them with anyone else.
Run the generate-db-auth-token command with the user names to generate a temporary password for the users. This AWS CLI command generates an authentication token that is valid for 15 minutes. The token is a long string that is used in place of a password when the users connect to the DB cluster with a SQL client.
Therefore, option B is the correct solution to establish the credentials for the users to use to log in to the DB cluster. Option A is incorrect because adding the users' IAM credentials to the Aurora cluster parameter group is not necessary or possible. A cluster parameter group is a collection of DB engine configuration values that define how a DB cluster operates. Option C is incorrect because adding the users' IAM credentials to the default credential profile and using the AWS Management Console to access the DB cluster is not supported or secure. The default credential profile is a file that stores your AWS credentials for use by AWS CLI or SDKs. The AWS Management Console does not allow you to connect to an Aurora PostgreSQL DB cluster using IAM authentication. Option D is incorrect because using an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint is not supported or secure. AWS STS is a service that enables you to request temporary, limited-privilege credentials for IAM users or federated users. The DB cluster API endpoint is an endpoint that allows you to perform administrative actions on your DB cluster using RDS API calls.
A company performs an audit on various data stores and discovers that an Amazon S3 bucket is storing a credit card number. The S3 bucket is the target of an AWS Database Migration Service (AWS DMS) continuous replication task that uses change data capture (CDC). The company determines that this field is not needed by anyone who uses the target data. The company has manually removed the existing credit card data from the S3 bucket.
What is the MOST operationally efficient way to prevent new credit card data from being written to the S3 bucket?
Answer : A
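Option A presumably relies on a DMS table-mapping transformation rule, which drops the column before it is written to the target without touching the source database. A sketch of such a rule follows; the schema, table, and column names are hypothetical:

```json
{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "1",
      "rule-name": "drop-credit-card-column",
      "rule-action": "remove-column",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "sales",
        "table-name": "orders",
        "column-name": "credit_card_number"
      }
    }
  ]
}
```

Because the rule is applied by the existing replication task itself, no additional infrastructure is needed, which is what makes this approach operationally efficient.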
A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.
How can the database specialist minimize the performance degradation after failover?
Answer : A
Explanation from Amazon documents:
Amazon Aurora PostgreSQL supports cluster cache management, which is a feature that helps reduce the impact of failover on query performance by preserving the cache of the primary DB instance on one or more Aurora Replicas. Cluster cache management allows you to assign a promotion priority tier to each DB instance in your Aurora DB cluster. The promotion priority tier determines the order in which Aurora Replicas are considered for promotion to the primary instance after a failover. The lower the numerical value of the tier, the higher the priority.
By enabling cluster cache management for the Aurora DB cluster and setting the promotion priority for the writer DB instance and replica to tier-0, the database specialist can minimize the performance degradation after failover. This solution will ensure that the primary DB instance and one Aurora Replica have the same cache contents and are in the same promotion priority tier. In the event of a failover, Aurora will promote the tier-0 replica to the primary role, and the cache will be preserved. This will reduce the number of cache misses and improve query performance after failover.
Therefore, option A is the correct solution to minimize the performance degradation after failover. Option B is incorrect because setting the promotion priority for the writer DB instance and replica to tier-1 will not preserve the cache after failover. Aurora will first try to promote a tier-0 replica, which may have a different cache than the primary instance. Option C is incorrect because enabling Query Plan Management and performing a manual plan capture will not affect the cache behavior after failover. Query Plan Management is a feature that helps you control query execution plans and improve query performance by creating and enforcing custom execution plans. Option D is incorrect because enabling Query Plan Management and forcing the query optimizer to use the desired plan will not affect the cache behavior after failover. Forcing the query optimizer to use a desired plan may improve query performance by avoiding suboptimal plans, but it will not prevent cache misses after failover.
A company has an Amazon Redshift cluster with database audit logging enabled. A security audit shows that raw SQL statements that run against the Redshift cluster are being logged to an Amazon S3 bucket. The security team requires that authentication logs are generated for use in an intrusion detection system (IDS), but the security team does not require SQL queries.
What should a database specialist do to remediate this issue?
Answer : C
An online bookstore uses Amazon Aurora MySQL as its backend database. After the online bookstore added a popular book to the online catalog, customers began reporting intermittent timeouts on the checkout page. A database specialist determined that increased load was causing locking contention on the database. The database specialist wants to automatically detect and diagnose database performance issues and to resolve bottlenecks faster.
Which solution will meet these requirements?
Answer : A
Explanation from Amazon documents:
Performance Insights is an Amazon RDS feature, available for Aurora MySQL, that helps you quickly assess the load on your database and determine when and where to take action. Performance Insights displays a dashboard that shows the database load in terms of average active sessions (AAS), which is the average number of sessions that are actively running SQL statements at any given time. Performance Insights also shows the top SQL statements, waits, hosts, and users that are contributing to the database load.
Amazon DevOps Guru is a fully managed service that helps you improve the operational performance and availability of your applications by detecting operational issues and recommending specific actions for remediation. Amazon DevOps Guru applies machine learning to automatically analyze data such as application metrics, logs, events, and traces for behaviors that deviate from normal operating patterns. Amazon DevOps Guru supports Amazon RDS as a resource type and can monitor the performance and availability of your RDS databases.
By turning on Performance Insights for the Aurora MySQL database and configuring and turning on Amazon DevOps Guru for RDS, the database specialist can automatically detect and diagnose database performance issues and resolve bottlenecks faster. This solution will allow the database specialist to monitor the database load and identify the root causes of performance problems using Performance Insights, and receive actionable insights and recommendations from Amazon DevOps Guru to improve the operational performance and availability of the database.
Therefore, option A is the correct solution to meet the requirements. Option B is not sufficient because creating a CPU usage alarm will only notify the database specialist when the CPU utilization is high, but it will not help diagnose or resolve the database performance issues. Option C is not efficient because using the Amazon RDS query editor to get the process ID of the query that is causing the database to lock and running a command to end the process will require manual intervention and may cause data loss or inconsistency. Option D is not efficient because using the SELECT INTO OUTFILE S3 statement to query data from the database and saving the data directly to an Amazon S3 bucket will incur additional time and cost, and using Amazon Athena to analyze the files for long-running queries will not help prevent or resolve locking contention on the database.
An ecommerce company is running Amazon RDS for Microsoft SQL Server. The company is planning to perform testing in a development environment with production data. The development environment and the production environment are in separate AWS accounts. Both environments use AWS Key Management Service (AWS KMS) encrypted databases with both manual and automated snapshots. A database specialist needs to share a KMS encrypted production RDS snapshot with the development account.
Which combination of steps should the database specialist take to meet these requirements? (Select THREE.)
Answer : B, D, E
Explanation from Amazon documents:
To share an encrypted Amazon RDS snapshot with another account, you need to do the following:
Share the custom KMS key that was used to encrypt the snapshot with the development account by updating the key policy.
Share the manual snapshot with the development account by specifying the AWS account ID of the target account.
Copy the shared snapshot in the development account by using a KMS key of the target account.
Therefore, options B, D, and E are the correct steps to meet the requirements. Option A is incorrect because you can't share an automated snapshot; you must first copy it to create a manual snapshot. Option C is incorrect because you can't share a snapshot that is encrypted with the default KMS key. Option F is unnecessary because the production account does not need access to the development account's KMS key.
A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.
This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.
The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.
Which solution will meet these requirements?
Answer : B
Explanation from Amazon documents:
A ledger database maintains an immutable, append-only journal of changes that is cryptographically verifiable, so historical records cannot be altered after they are written; keeping the history in such a store, separate from the large operational table, also keeps queries on the history fast. Therefore, option B is the best solution that meets the requirements of preventing modification of the historical records and maximizing the speed of the queries. Option A is not suitable because DynamoDB is a key-value and document database that does not provide a ledger-like transaction log. Option C is not suitable because Aurora PostgreSQL is a relational database that does not guarantee immutability of the historical records. Option D is not suitable because Redshift is a data warehouse that is optimized for analytical queries on large datasets, not for storing and querying individual records.