Amazon DBS-C01 AWS Certified Database - Specialty Exam Practice Test

Page: 1 / 14
Total 322 questions
Question 1

A database specialist needs to enable IAM authentication on an existing Amazon Aurora PostgreSQL DB cluster. The database specialist already has modified the DB cluster settings, has created IAM and database credentials, and has distributed the credentials to the appropriate users.

What should the database specialist do next to establish the credentials for the users to use to log in to the DB cluster?



Answer : B


Explanation from Amazon documents:

Amazon Aurora PostgreSQL supports IAM authentication, which is a method of using AWS Identity and Access Management (IAM) to manage database access. IAM authentication allows you to use IAM users and roles to control who can access your Aurora PostgreSQL DB cluster, instead of using a traditional database username and password. IAM authentication also improves security by using short-lived authentication tokens instead of stored database passwords.

To enable IAM authentication on an existing Aurora PostgreSQL DB cluster, the database specialist needs to do the following:

Modify the DB cluster settings to enable IAM database authentication. This can be done using the AWS Management Console, the AWS CLI, or the RDS API.

Create IAM and database credentials for each user who needs access to the DB cluster. The IAM credentials consist of an access key ID and a secret access key. The database credentials consist of a database user that is granted the rds_iam role; no database password is required. The database user name must match the user name referenced in the IAM policy that allows the rds-db:connect action.

Distribute the IAM and database credentials to the appropriate users. The users must keep their credentials secure and not share them with anyone else.

Run the generate-db-auth-token command with the user names to generate a temporary password for the users. This command is part of the AWS CLI and it generates an authentication token that is valid for 15 minutes. The authentication token is a string that has the same format as a password. The users can use this token as their password when they connect to the DB cluster using a SQL client.
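For illustration, a minimal Python (boto3 with psycopg2) sketch of this token-based login is shown below; the cluster endpoint, Region, and database user name are placeholder assumptions, not values from the question.

```python
import boto3
import psycopg2

# Placeholder values -- substitute the cluster endpoint, Region, and IAM-enabled DB user.
HOST = "my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"
PORT = 5432
USER = "app_user"
REGION = "us-east-1"

rds = boto3.client("rds", region_name=REGION)

# Generates a signed authentication token that is valid for 15 minutes.
token = rds.generate_db_auth_token(
    DBHostname=HOST, Port=PORT, DBUsername=USER, Region=REGION
)

# The token is used in place of a password; SSL is required for IAM authentication.
conn = psycopg2.connect(
    host=HOST, port=PORT, user=USER, password=token,
    dbname="postgres", sslmode="require"
)
```

The same token can also be produced with the AWS CLI command aws rds generate-db-auth-token and passed to a SQL client such as psql as the password.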

Therefore, option B is the correct solution to establish the credentials for the users to use to log in to the DB cluster. Option A is incorrect because adding the users' IAM credentials to the Aurora cluster parameter group is not necessary or possible. A cluster parameter group is a collection of DB engine configuration values that define how a DB cluster operates. Option C is incorrect because adding the users' IAM credentials to the default credential profile and using the AWS Management Console to access the DB cluster is not supported or secure. The default credential profile is a file that stores your AWS credentials for use by AWS CLI or SDKs. The AWS Management Console does not allow you to connect to an Aurora PostgreSQL DB cluster using IAM authentication. Option D is incorrect because using an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint is not supported or secure. AWS STS is a service that enables you to request temporary, limited-privilege credentials for IAM users or federated users. The DB cluster API endpoint is an endpoint that allows you to perform administrative actions on your DB cluster using RDS API calls.


Question 2

A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database.

Which solution will meet this requirement with the LEAST operational overhead?



Answer : C


Explanation from Amazon documents:

Amazon RDS for Oracle is a fully managed relational database service that supports Oracle Database. Amazon RDS for Oracle provides several features to monitor the performance and health of your database, such as RDS Performance Insights, Enhanced Monitoring, Amazon CloudWatch, and AWS CloudTrail.

RDS Performance Insights is a feature that helps you quickly assess the load on your database and determine when and where to take action. RDS Performance Insights displays a dashboard that shows the database load in terms of average active sessions (AAS), which is the average number of sessions that are actively running SQL statements at any given time. RDS Performance Insights also shows the top SQL statements, waits, hosts, and users that are contributing to the database load.

Enhanced Monitoring is a feature that provides metrics in real time for the operating system (OS) that your DB instance runs on. Enhanced Monitoring metrics include CPU utilization, memory, file system, disk I/O, network I/O, process list, and thread count. Enhanced Monitoring allows you to view how different threads use the CPU and how much memory each thread consumes.

By enabling RDS Performance Insights and Enhanced Monitoring for the RDS for Oracle DB instance, the database specialist can monitor the latency of the database with the least operational overhead. This solution will allow the database specialist to use the RDS console or API to enable these features and view the metrics and dashboards without installing any additional software or tools. This solution will also provide comprehensive and granular information about the database load and resource utilization.
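As a minimal sketch, assuming a placeholder instance identifier and monitoring role ARN, both features can be enabled on an existing instance with a single ModifyDBInstance call (boto3 shown here):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder instance identifier and IAM role ARN for Enhanced Monitoring.
rds.modify_db_instance(
    DBInstanceIdentifier="my-oracle-instance",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,   # days (free retention tier)
    MonitoringInterval=60,                  # seconds; 0 disables Enhanced Monitoring
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```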

Therefore, option C is the correct solution to meet the requirement. Option A is not optimal because publishing RDS Performance Insights metrics to Amazon CloudWatch and adding AWS CloudTrail filters to monitor database performance will incur additional operational overhead and cost. Amazon CloudWatch is a service that collects monitoring and operational data in the form of logs, metrics, and events. AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to you. These services are useful for monitoring performance trends and auditing activities, but they are not necessary for monitoring latency in real time. Option B is not optimal because installing Oracle Statspack and enabling the performance statistics feature will require manual intervention and configuration on the RDS for Oracle DB instance. Oracle Statspack is a tool that collects, stores, and displays performance data for Oracle Database. The performance statistics feature is an option that enables Statspack to collect additional statistics such as wait events, latches, SQL statements, segments, and rollback segments. These tools are useful for performance tuning and troubleshooting, but they are not as easy to use as RDS Performance Insights and Enhanced Monitoring. Option D is not relevant because creating a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables will not help monitor the latency of the database. A DB parameter group is a collection of DB engine configuration values that define how a DB instance operates. AllocatedStorage is the allocated storage size in gibibytes (GiB), DBInstanceClassMemory is the amount of memory available to an instance class in bytes, and DBInstanceVCPU is the number of virtual CPUs available to an instance class. These are formula variables that can be referenced in parameter group expressions to size engine settings; they do not provide any monitoring or metrics information. Note also that enabling RDS Performance Insights alone would not expose OS-level metrics such as CPU utilization or memory usage, which is why Enhanced Monitoring is part of the correct solution.


Question 3

A company performs an audit on various data stores and discovers that an Amazon S3 bucket is storing a credit card number. The S3 bucket is the target of an AWS Database Migration Service (AWS DMS) continuous replication task that uses change data capture (CDC). The company determines that this field is not needed by anyone who uses the target data. The company has manually removed the existing credit card data from the S3 bucket.

What is the MOST operationally efficient way to prevent new credit card data from being written to the S3 bucket?



Answer : A
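For background on the mechanism this scenario involves: AWS DMS can drop an individual column from replicated data by adding a table-mapping transformation rule with the remove-column action to the task. The boto3 sketch below illustrates that pattern only; the task ARN, schema, table, and column names are placeholders.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Table mapping with a transformation rule that drops the credit card column
# before rows are written to the S3 target (all names below are placeholders).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders",
            "object-locator": {"schema-name": "sales", "table-name": "orders"},
            "rule-action": "include",
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "drop-card-number",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "sales",
                "table-name": "orders",
                "column-name": "credit_card_number",
            },
            "rule-action": "remove-column",
        },
    ]
}

# The replication task must be stopped before it can be modified, then resumed.
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
    TableMappings=json.dumps(table_mappings),
)
```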


Question 4

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.

How can the database specialist minimize the performance degradation after failover?



Answer : A


Explanation from Amazon documents:

Amazon Aurora PostgreSQL supports cluster cache management, which is a feature that helps reduce the impact of failover on query performance by preserving the cache of the primary DB instance on one or more Aurora Replicas. Cluster cache management allows you to assign a promotion priority tier to each DB instance in your Aurora DB cluster. The promotion priority tier determines the order in which Aurora Replicas are considered for promotion to the primary instance after a failover. The lower the numerical value of the tier, the higher the priority.

By enabling cluster cache management for the Aurora DB cluster and setting the promotion priority for the writer DB instance and replica to tier-0, the database specialist can minimize the performance degradation after failover. This solution will ensure that the primary DB instance and one Aurora Replica have the same cache contents and are in the same promotion priority tier. In the event of a failover, Aurora will promote the tier-0 replica to the primary role, and the cache will be preserved. This will reduce the number of cache misses and improve query performance after failover.
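A minimal boto3 sketch of that configuration follows, assuming a custom cluster parameter group and placeholder instance identifiers; cluster cache management is controlled by the apg_ccm_enabled cluster parameter together with the promotion tier of each instance.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Turn on cluster cache management in the (custom) cluster parameter group.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-aurora-pg-cluster-params",
    Parameters=[{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)

# Put the writer and the designated reader in promotion tier 0 so the reader's
# cache stays synchronized and it is promoted first on failover.
for instance_id in ("aurora-writer-1", "aurora-reader-1"):
    rds.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        PromotionTier=0,
        ApplyImmediately=True,
    )
```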

Therefore, option A is the correct solution to minimize the performance degradation after failover. Option B is incorrect because cluster cache management requires the writer DB instance and the designated replica to be in promotion priority tier-0; placing them in tier-1 means the replica's cache is not kept synchronized with the writer, so the cache is not preserved after failover. Option C is incorrect because enabling Query Plan Management and performing a manual plan capture will not affect the cache behavior after failover. Query Plan Management is a feature that helps you control query execution plans and improve query performance by creating and enforcing custom execution plans. Option D is incorrect because enabling Query Plan Management and forcing the query optimizer to use the desired plan will not affect the cache behavior after failover. Forcing the query optimizer to use a desired plan may improve query performance by avoiding suboptimal plans, but it will not prevent cache misses after failover.


Question 5

A company has an Amazon Redshift cluster with database audit logging enabled. A security audit shows that raw SQL statements that run against the Redshift cluster are being logged to an Amazon S3 bucket. The security team requires that authentication logs are generated for use in an intrusion detection system (IDS), but the security team does not require SQL queries.

What should a database specialist do to remediate this issue?



Answer : C
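For background on the mechanism involved: Amazon Redshift audit logging produces a connection log (authentication attempts), a user log, and a user activity log, and only the user activity log contains the raw SQL text. That log is controlled by the enable_user_activity_logging parameter in the cluster parameter group. The boto3 sketch below turns that parameter off while leaving audit logging itself enabled; the parameter group name is a placeholder.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Stop logging raw SQL statements; connection and user logs continue to be
# delivered to the S3 audit-logging bucket for the IDS to consume.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-redshift-params",   # placeholder custom parameter group
    Parameters=[{
        "ParameterName": "enable_user_activity_logging",
        "ParameterValue": "false",
    }],
)
# Associated clusters must be rebooted for the parameter change to take effect.
```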


Question 6

A company is using AWS CloudFormation to provision and manage infrastructure resources, including a production database. During a recent CloudFormation stack update, a database specialist observed that changes were made to a database resource that is named ProductionDatabase. The company wants to prevent changes to only ProductionDatabase during future stack updates.

Which stack policy will meet this requirement?

A.

B.

C.

D.



Answer : A
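As background on stack policy syntax, a policy that blocks updates to one logical resource while allowing all other updates generally takes the shape sketched below; this is an illustration of the pattern (applied with boto3 and a placeholder stack name), not a reproduction of any of the options.

```python
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Deny every Update action on the ProductionDatabase logical resource,
# while explicitly allowing updates to all other resources in the stack.
stack_policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "Update:*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "Update:*",
            "Resource": "*",
        },
    ]
}

cfn.set_stack_policy(
    StackName="my-production-stack",  # placeholder stack name
    StackPolicyBody=json.dumps(stack_policy),
)
```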


Question 7

A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average.

What is the MOST cost-effective solution?



Answer : C


Explanation from Amazon documents:

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is a storage class for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, throughput, and low latency of S3 Standard, with a low per GB storage price and a per GB retrieval fee. S3 Standard-IA is designed for long-lived and infrequently accessed data. Examples include disaster recovery, backups, and long-term data retention.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Athena scales automatically, executing queries in parallel, so results are fast even with large datasets and complex queries.

The news portal can use S3 Standard-IA to store its metadata about posts and comments, which are not frequently looked up or updated. This way, the portal can benefit from the low storage cost of S3 Standard-IA ($0.0125 per GB per month) and the high durability and availability of S3. The portal can also use Athena to query the data stored in S3 using SQL, without having to set up any servers or databases. The portal pays only for the amount of data scanned by each query ($5 per TB scanned) and can optimize the query cost by partitioning, compressing, and converting the data into columnar formats.
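To make the query side concrete, a minimal boto3 sketch of an occasional Athena lookup against the metadata in S3 is shown below; the database, table, and output location names are placeholder assumptions.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Occasional lookup against metadata stored in S3; Athena charges only for
# the data scanned, so partitioned or columnar data keeps the cost low.
response = athena.start_query_execution(
    QueryString="SELECT * FROM posts_metadata WHERE post_id = '12345'",
    QueryExecutionContext={"Database": "news_portal"},                   # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder bucket
)
print(response["QueryExecutionId"])
```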

Therefore, option C is the most cost-effective solution for the news portal's use case. Option A is not cost-effective because DynamoDB on-demand capacity mode charges for every read and write request ($0.25 per million read request units and $1.25 per million write request units), regardless of how frequently the data is accessed. Purchasing reserved capacity can reduce the cost, but reserved capacity applies only to provisioned capacity mode and requires a minimum commitment of 100 capacity units. Option B is not suitable because ElastiCache for Redis is an in-memory data store that provides sub-millisecond latency but is billed per node-hour, which makes holding 120 GB in memory far more expensive than S3 Standard-IA storage. ElastiCache for Redis is also not designed for long-term data storage, but for caching frequently accessed data. Option D is less cost-effective because the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class reduces storage cost but still charges for every read and write request, and its per GB storage price remains well above that of S3 Standard-IA.

