Microsoft DP-420 Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB Exam Practice Test

Page: 1 / 14
Total 125 questions
Question 1

You have an Azure subscription.

You plan to create an Azure Cosmos DB for NoSQL database named DB1 that will store author and book data for authors who have each published up to ten books. Typical and frequent queries of the data will include:

* All books written by an individual author

* The synopsis of individual books

You need to recommend a data model for DB1. The solution must meet the following requirements:

* Support transactional updates of the author and book data.

* Minimize read operation costs.

What should you recommend?



Answer : D


Question 2

You are building an application that will store data in an Azure Cosmos DB for NoSQL account. The account uses Session as its default consistency level and is used by five other applications. The account has a single read-write region and 10 additional read regions.

Approximately 20 percent of the items stored in the account are updated hourly.

Several users will access the new application from multiple devices.

You need to ensure that the users see the same item values consistently when they browse from the different devices. The solution must not affect the other applications.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.



Answer : B, C


Question 3

You need to create a database in an Azure Cosmos DB for NoSQL account. The database will contain three containers named coll1, coll2, and coll3. The coll1 container will have unpredictable read and write volumes. The coll2 and coll3 containers will have predictable read and write volumes. The expected maximum throughput for coll1 and coll2 is 50,000 request units per second (RU/s) each.

How should you provision the containers while minimizing costs?



Answer : B

Azure Cosmos DB offers two capacity modes: provisioned throughput and serverless. Provisioned throughput mode lets you configure an amount of throughput, expressed in request units per second (RU/s), on your databases and containers; you are billed for the throughput you provision, regardless of how many RUs are actually consumed. Serverless mode lets you run database operations without configuring any capacity in advance; you are billed for the RUs consumed by your database operations and the storage consumed by your data.

To create a database that minimizes costs, you should consider the following factors:

* The read and write volumes of your containers

* The predictability and variability of your traffic

* The latency and throughput requirements of your application

* The geo-distribution and availability needs of your data

Based on these factors, one possible option is B: Create a provisioned throughput account, set the throughput for coll1 to Autoscale, and set the throughput for coll2 and coll3 to Manual.

This option has the following advantages:

* It allows you to handle unpredictable read and write volumes for coll1 by using Autoscale, which automatically adjusts the provisioned throughput based on the current load.

* It allows you to handle predictable read and write volumes for coll2 and coll3 by using Manual, which lets you specify a fixed amount of provisioned throughput that meets your performance needs.

* It allows you to optimize your costs by paying only for the throughput you need for each container.

* It allows you to enable geo-distribution for your account if you need to replicate your data across multiple regions.

This option also has some limitations, such as:

* It may not be suitable for scenarios where all containers have intermittent or bursty traffic that is hard to forecast or has a low average-to-peak ratio.

* It may not be optimal for scenarios where all containers have low or sporadic traffic that does not justify provisioned capacity.

* It may not support availability zones or multi-master replication for your account.

Depending on your specific use case and requirements, you may need to choose a different option. For example, you could use a serverless account if all containers have low or sporadic traffic that does not require predictable performance or geo-distribution. Alternatively, you could use a provisioned throughput account with Manual for all containers if all containers have stable and consistent traffic that requires predictable performance or geo-distribution.
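
As a concrete sketch of option B, the containers could be created as follows with the azure-cosmos Python SDK (v4). The endpoint, key, and partition key paths are placeholders, and coll3's manual throughput value is assumed for illustration, since the question only states the 50,000 RU/s maximum for coll1 and coll2.

```python
# Sketch only: endpoint, key, and partition key paths are placeholders.
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
database = client.create_database_if_not_exists("db1")

# coll1: unpredictable traffic -> Autoscale with a 50,000 RU/s ceiling.
# Autoscale bills for the highest RU/s the container scaled to each hour.
database.create_container_if_not_exists(
    id="coll1",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=50000),
)

# coll2: predictable traffic -> Manual (standard) throughput at 50,000 RU/s.
database.create_container_if_not_exists(
    id="coll2",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=50000,
)

# coll3: predictable traffic -> Manual throughput sized to its expected load
# (10,000 RU/s is an assumed value; the question does not state coll3's maximum).
database.create_container_if_not_exists(
    id="coll3",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=10000,
)
```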


Question 4

You have a container named container1 in an Azure Cosmos DB for NoSQL account named account1 that uses Session as its default consistency level. The average size of an item in container1 is 20 KB.

You have an application named App1 that uses the Azure Cosmos DB SDK and performs a point read on the same set of items in container1 every minute.

You need to minimize the consumption of the request units (RUs) associated with the reads performed by App1. What should you do?



Question 5

You have a database named db1 in an Azure Cosmos DB for NoSQL account. You have a third-party application that is exposed through an OData endpoint. You need to migrate data from the application to a container in db1. What should you use?



Answer : B

You can migrate data from various data sources to Azure Cosmos DB using different tools and methods. The choice of migration tool depends on factors such as the data source, the Azure Cosmos DB API, the size of the data, and the expected migration duration. Some of the common migration tools are:

* Azure Cosmos DB Data Migration tool: This is an open-source tool that can import data to Azure Cosmos DB from sources such as JSON files, MongoDB, SQL Server, CSV files, and Azure Cosmos DB collections. This tool supports the SQL API and the Table API of Azure Cosmos DB.

* Azure Data Factory: This is a cloud-based data integration service that can copy data from various sources to Azure Cosmos DB using connectors. This tool supports the SQL API, MongoDB API, Cassandra API, Gremlin API, and Table API of Azure Cosmos DB.

* Azure Cosmos DB live data migrator: This is a command-line tool that can migrate data from one Azure Cosmos DB container to another container within the same or a different account. This tool supports live migration with minimal downtime and works with any Azure Cosmos DB API.

For your scenario, if you want to migrate data from a third-party application that is exposed through an OData endpoint to a container in Azure Cosmos DB for NoSQL, you should use Azure Data Factory. Azure Data Factory has an OData connector that can read data from an OData source and write it to an Azure Cosmos DB sink using the SQL API. You can create a copy activity in Azure Data Factory that specifies the OData source and the Azure Cosmos DB sink, and run it on demand or on a schedule.
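
In production the copy activity above is fully managed by Azure Data Factory. Purely to illustrate the underlying OData-to-Cosmos flow, here is a hand-rolled Python sketch; the feed URL, entity fields, account credentials, and container name are all hypothetical placeholders.

```python
# Illustration of the data flow only; Azure Data Factory performs this as a
# managed copy activity. All endpoints, keys, and names are placeholders.
import requests
from azure.cosmos import CosmosClient, PartitionKey

ODATA_URL = "https://example.com/odata/Products"  # hypothetical OData v4 feed

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
database = client.create_database_if_not_exists("db1")
container = database.create_container_if_not_exists(
    id="products", partition_key=PartitionKey(path="/id")
)

url = ODATA_URL
while url:
    page = requests.get(url, headers={"Accept": "application/json"}).json()
    for entity in page.get("value", []):
        # Cosmos DB items need a string "id"; "ProductID" is an assumed field.
        entity["id"] = str(entity["ProductID"])
        container.upsert_item(entity)
    # An OData v4 response includes a continuation link while pages remain.
    url = page.get("@odata.nextLink")
```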


Question 6

You have an Azure Cosmos DB for NoSQL account named account1 that is configured for automatic failover. The account has a single read-write region in West US and a read region in East US.

You run the following PowerShell command.

What is the effect of running the command?



Question 7

You have an Azure Cosmos DB for NoSQL account that has multiple write regions.

You need to receive an alert when requests that target the database exceed the available request units per second (RU/s).

Which Azure Monitor signal should you use?



Answer : C

Azure Monitor is a service that provides comprehensive monitoring for Azure resources, including Azure Cosmos DB. You can use Azure Monitor to collect, analyze, and alert on metrics and logs from your Azure Cosmos DB account. You can create alerts for Azure Cosmos DB using Azure Monitor based on the metrics, activity log events, or Log Analytics logs on your account.

For your scenario, if you want to receive an alert when requests that target the database exceed the available request units per second (RU/s), you should use the Document Quota metric. This metric measures the percentage of RU/s consumed by your account or container. You can create an alert rule on this metric from the Azure portal by following these steps:

1. In the Azure portal, select the Azure Cosmos DB account you want to monitor.

2. Under the Monitoring section of the sidebar, select Alerts, and then select New alert rule.

3. In the Create alert rule pane, fill out the Scope section by selecting your subscription name and the resource type (Azure Cosmos DB accounts).

4. In the Condition section, select Add condition and choose Document Quota from the list of signals.

5. In the Configure signal logic pane, specify the operator and threshold value for your alert condition. For example, choose Greater than or equal to as the operator and 90 as the threshold to receive an alert when your RU/s consumption reaches 90 percent or more of your provisioned throughput.

6. In the Alert rule details section, specify a name and description for your alert rule.

7. In the Actions section, select Add action group and choose how you want to receive notifications for your alert. For example, you can choose Email/SMS/Push/Voice as the action type and enter your email address or phone number as the receiver.

8. Review your alert rule settings and select Create alert rule to save it.
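
The same rule can also be created programmatically. Below is a minimal sketch using the azure-mgmt-monitor Python package, mirroring the 90 percent threshold from the steps above; the subscription, resource group, account, and action group identifiers are placeholders, and DocumentQuota is assumed to be the metric's programmatic name.

```python
# Sketch only: all identifiers below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertAction,
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<subscription-id>"
account_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

alert = MetricAlertResource(
    location="global",
    description="Alert when Document Quota reaches 90 percent",
    severity=3,
    enabled=True,
    scopes=[account_id],
    evaluation_frequency="PT5M",
    window_size="PT5M",
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="DocumentQuota90",
                metric_name="DocumentQuota",  # assumed programmatic metric name
                metric_namespace="Microsoft.DocumentDB/databaseAccounts",
                time_aggregation="Average",
                operator="GreaterThanOrEqual",
                threshold=90,
            )
        ]
    ),
    actions=[MetricAlertAction(action_group_id="<action-group-resource-id>")],
)

client.metric_alerts.create_or_update("<rg>", "cosmos-quota-alert", alert)
```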

