Salesforce Certified Platform Data Architect (Plat-Arch-201) Exam Practice Test

Question 1

Cloud Kicks currently has a Public Read/Write sharing model for the company's Contacts. The Cloud Kicks management team requests that only the owner of a contact record be allowed to delete that contact.

What should an Architect do to meet these requirements?



Answer : B

Creating a "before delete" trigger that checks whether the current user is the record owner can meet the requirement of allowing only the owner of a contact record to delete that contact. A trigger is a piece of Apex code that executes before or after a record is inserted, updated, deleted, or undeleted. A "before delete" trigger can prevent the deletion of a record by calling the addError() method on it.
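For illustration, a minimal sketch of such a trigger, assuming a trigger name and error message of our own choosing (neither is specified in the question), might look like this:

    trigger RestrictContactDeletion on Contact (before delete) {
        for (Contact c : Trigger.old) {
            // Block the deletion when the current user does not own the record.
            if (c.OwnerId != UserInfo.getUserId()) {
                c.addError('Only the owner of this contact can delete it.');
            }
        }
    }

Because the error is added in the before delete context, Salesforce rolls back the deletion of that record and displays the message to the user.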


Question 2

Universal Containers is exporting 40 million Account records from Salesforce using Informatica Cloud. The ETL tool fails and the query log indicates a full table scan time-out failure. What is the recommended solution?



Question 3

Cloud Kicks needs to purge detailed transactional records from Salesforce. The data should be aggregated at a summary level and available in Salesforce.

What are two automated approaches to fulfill this goal? (Choose two.)



Question 4

Universal Containers (UC) has a variety of systems across its technology landscape, including Salesforce, legacy enterprise resource planning (ERP) applications, and homegrown CRM tools. UC has decided to consolidate all customer, opportunity, and order data into Salesforce as part of its master data management (MDM) strategy.

What are the three key steps that a data architect should take when merging data from multiple systems into Salesforce? (Choose three.)



Answer : C, D, E

The three key steps that a data architect should take when merging data from multiple systems into Salesforce are:

Analyze each system's data model and perform gap analysis. This step involves understanding the structure and meaning of the data in each system, identifying the common and unique data elements, and mapping the data fields between the systems. This step also involves assessing the quality and consistency of the data, and identifying any data cleansing or transformation needs.

Utilize an ETL tool to merge, transform, and de-duplicate data. This step involves using an ETL tool to connect to the source systems, extract the data, apply any data transformations or validations, and load the data into Salesforce. This step also involves applying de-duplication rules or algorithms to avoid creating duplicate records in Salesforce.

Work with stakeholders to define record and field survivorship rules. This step involves collaborating with the business users and owners of the data to determine which records and fields should be retained or overwritten in case of conflicts or discrepancies. This step also involves defining the criteria and logic for record and field survivorship, and implementing them in the ETL tool or in Salesforce.
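As a sketch of how a field survivorship rule could eventually be implemented in Apex (the class name, the choice of the Phone field, the hypothetical Source_Last_Updated__c custom field, and the most-recently-updated-wins rule are all illustrative assumptions, not part of the exam answer):

    public class SurvivorshipUtil {
        // Field survivorship rule: a non-null value from the more recently
        // updated source record wins. Source_Last_Updated__c is a hypothetical
        // custom field stamped by the ETL tool with the source system's
        // last-modified timestamp.
        public static void applyPhoneSurvivorship(Account winner, Account duplicate) {
            if (duplicate.Phone != null &&
                (winner.Phone == null ||
                 duplicate.Source_Last_Updated__c > winner.Source_Last_Updated__c)) {
                winner.Phone = duplicate.Phone;
            }
        }
    }

In practice, rules like this are often configured in the ETL tool itself; the Apex version simply makes the decision logic concrete.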

Creating new fields to store additional values from all the systems is not a key step, but rather a possible outcome of the gap analysis; it may not be necessary or desirable to create new fields for every value from every system, as doing so can result in redundant or irrelevant data. Installing a third-party AppExchange tool to handle the merge is likewise not a key step, but rather one possible option when choosing an ETL tool, and it may not be the best option depending on the organization's requirements, budget, and preferences.


Question 5

Universal Containers (UC) is preparing to implement Sales Cloud and would like its users to have read-only access to an account record if they have access to one of its child opportunity records. How should a data architect implement this sharing requirement between objects?



Question 6

A large retail company has recently chosen Salesforce as its CRM solution. It has the following record counts:

2,500,000 accounts

25,000,000 contacts

During an initial performance test, the data architect noticed extremely slow response times for reports and list views.

What should a data architect do to solve the performance issue?



Answer : B

The correct answer is B: add custom indexes on frequently searched Account and Contact fields. Custom indexes improve the performance of queries and reports by indexing specific fields that are often used in filters or joins; with indexes on frequently searched Account and Contact fields, the query optimizer can apply selective filters instead of scanning all 27.5 million rows, which speeds up reports and list views. Loading only the data that users are permitted to access, limiting data loading to the 2,000 most recent records, and creating a skinny table are also possible approaches, but they are either not feasible, not scalable, or not self-service (skinny tables must be created by Salesforce Support).
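For illustration (the Region__c custom field and its filter value are hypothetical), a custom index makes a frequently used filter selective, so a query like the following can use the index rather than scanning the full table:

    // With a custom index on Region__c, the query optimizer can treat this
    // filter as selective instead of scanning all 25 million contact rows.
    List<Contact> regionalContacts = [
        SELECT Id, Name, Account.Name
        FROM Contact
        WHERE Region__c = 'Northeast'
        LIMIT 200
    ];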


Question 7

Universal Containers (UC) has a Salesforce instance with over 10,000 Account records and has noticed similar, but not identical, Account names and addresses. What should UC do to ensure proper data quality?



Answer : C

Enabling Account de-duplication by creating matching rules in Salesforce and then mass merging the flagged duplicates is what UC should do to ensure proper data quality for its Account records. Matching rules define how Salesforce identifies duplicate Accounts based on criteria such as name, address, and phone number; the duplicates they surface can then be merged, which simplifies and automates the de-duplication process and improves data quality. The other options are more time-consuming, costly, or error-prone ways of ensuring proper data quality.
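As a sketch of the merge operation itself (the class and method names are illustrative; in practice the matching rules and merge tooling identify the duplicate pairs), Apex supports merging a duplicate Account into a master record with the merge DML statement:

    public class AccountMergeUtil {
        // Merge one duplicate Account into a master record. The merge DML
        // statement reparents the duplicate's related records (contacts,
        // opportunities, etc.) onto the master and then deletes the duplicate.
        public static void mergeDuplicate(Id masterId, Id duplicateId) {
            Account master = [SELECT Id FROM Account WHERE Id = :masterId];
            Account duplicate = [SELECT Id FROM Account WHERE Id = :duplicateId];
            merge master duplicate;
        }
    }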

