Microsoft DP-203 Data Engineering on Microsoft Azure Exam Practice Test

Page: 1 / 14
Total 331 questions
Question 1

You have an Azure Data Factory pipeline named pipeline1 that includes a Copy activity named Copy1. Copy1 has the following configurations:

* The source of Copy1 is a table in an on-premises Microsoft SQL Server instance that is accessed by using a linked service connected via a self-hosted integration runtime.

* The sink of Copy1 uses a table in an Azure SQL database that is accessed by using a linked service connected via an Azure integration runtime.

You need to maximize the amount of compute resources available to Copy1. The solution must minimize administrative effort.

What should you do?



Answer : A


Question 2

You have an Azure Synapse Analytics dedicated SQL pool named Pool1. Pool1 contains a table named table1.

You load 5 TB of data into table1.

You need to ensure that column store compression is maximized for table1.

Which statement should you execute?



Answer : B
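For context (the answer options are not shown here): in a dedicated SQL pool, Microsoft's documented way to maximize columnstore compression after a large load is to rebuild the columnstore index, which recompresses all rowgroups. A hedged sketch, assuming table1 has a clustered columnstore index:

```sql
-- Rebuild all indexes on table1 so every rowgroup is recompressed,
-- maximizing columnstore compression after the 5 TB load.
ALTER INDEX ALL ON table1 REBUILD;
```

A rebuild also merges open or undersized rowgroups into full, compressed ones, which is why it yields better compression than leaving the post-load state as-is.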


Question 3

You have an Azure data factory connected to a Git repository that contains the following branches:

* main: Collaboration branch

* abc: Feature branch

* xyz: Feature branch

You save changes to a pipeline in the xyz branch.

You need to publish the changes to the live service.

What should you do first?



Answer : D


Question 4

You have an Azure Synapse Analytics dedicated SQL pool named Pool1.

Pool1 contains two tables named SalesFact_Staging and SalesFact. Both tables have a matching number of partitions, all of which contain data.

You need to load data from SalesFact_Staging to SalesFact by switching a partition.

What should you specify when running the ALTER TABLE statement?



Answer : B
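For context (the answer options are not shown here): partition switching is done with ALTER TABLE ... SWITCH PARTITION. Because the target partitions in this scenario already contain data, dedicated SQL pools support truncating the target as part of the switch. A hedged sketch; the partition number below is illustrative, not taken from the question:

```sql
-- Switch partition 1 of the staging table into partition 1 of SalesFact.
-- TRUNCATE_TARGET = ON empties the (non-empty) target partition first,
-- which a plain SWITCH would otherwise reject.
ALTER TABLE SalesFact_Staging SWITCH PARTITION 1
    TO SalesFact PARTITION 1
    WITH (TRUNCATE_TARGET = ON);
```

The switch is a metadata-only operation, so it moves the data without physically copying rows.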


Question 5

You have an Azure data factory that connects to a Microsoft Purview account. The data factory is registered in Microsoft Purview.

You update a Data Factory pipeline.

You need to ensure that the updated lineage is available in Microsoft Purview.

What should you do first?



Answer : D


Question 6

You have two Azure Blob Storage accounts named account1 and account2.

You plan to create an Azure Data Factory pipeline that will use scheduled intervals to replicate newly created or modified blobs from account1 to account2.

You need to recommend a solution to implement the pipeline. The solution must meet the following requirements:

* Ensure that the pipeline only copies blobs that were created or modified since the most recent replication event.

* Minimize the effort to create the pipeline.

What should you recommend?



Answer : A


Question 7

You have an Azure data factory named ADM that contains a pipeline named Pipeline1.

Pipeline1 must execute every 30 minutes with a 15-minute offset.

You need to create a trigger for Pipeline1. The trigger must meet the following requirements:

* Backfill data from the beginning of the day to the current time.

* If Pipeline1 fails, ensure that the pipeline can re-execute within the same 30-minute period.

* Ensure that only one concurrent pipeline execution can occur.

* Minimize development and configuration effort.

Which type of trigger should you create?



Answer : D
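For context (the answer options are not shown here): of the Data Factory trigger types, the tumbling window trigger is the one that natively supports backfill from a past start time, per-window retry, and a concurrency limit. A hedged, illustrative JSON sketch of such a trigger definition; the name and start time are placeholders, not values from the question:

```json
{
  "name": "Trigger1",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Minute",
      "interval": 30,
      "startTime": "2024-01-01T00:15:00Z",
      "maxConcurrency": 1,
      "retryPolicy": { "count": 1, "intervalInSeconds": 300 }
    },
    "pipeline": {
      "pipelineReference": {
        "referenceName": "Pipeline1",
        "type": "PipelineReference"
      }
    }
  }
}
```

Setting startTime to 15 minutes past the hour produces the 30-minute cadence with a 15-minute offset, maxConcurrency of 1 keeps executions serialized, and retryPolicy allows a failed window to re-run within its period.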

