Salesforce Certified Data Cloud Consultant (Data-Con-101) Exam Practice Test

Page: 1 / 14
Total 170 questions
Question 1

Which two common use cases can be addressed with Data Cloud?

Choose 2 answers



Answer : A, C

Data Cloud is a data platform that can help customers connect, prepare, harmonize, unify, query, analyze, and act on their data across various Salesforce and external sources. Some of the common use cases that can be addressed with Data Cloud are:

Understand and act upon customer data to drive more relevant experiences. Data Cloud can help customers gain a 360-degree view of their customers by unifying data from different sources and resolving identities across channels. Data Cloud can also help customers segment their audiences, create personalized experiences, and activate data in any channel using insights and AI.

Harmonize data from multiple sources with a standardized and extendable data model. Data Cloud can help customers transform and cleanse their data before using it, and map it to a common data model that can be extended and customized. Data Cloud can also help customers create calculated insights and related attributes to enrich their data and optimize identity resolution.

The other two options are not common use cases for Data Cloud. Data Cloud does not provide data governance or backup and disaster recovery features, as these are typically handled by other Salesforce or external solutions.


Learn How Data Cloud Works

About Salesforce Data Cloud

Discover Use Cases for the Platform

Understand Common Data Analysis Use Cases

Question 2

A customer needs to integrate in real time with Salesforce CRM.

Which feature accomplishes this requirement?



Answer : A

The correct answer is A, streaming transforms. Streaming transforms are a feature of Data Cloud that enables real-time data integration with Salesforce CRM. They use the Data Cloud Streaming API to synchronize micro-batches of updates between the CRM data source and Data Cloud in near-real time [1], so Data Cloud always has the most current and accurate CRM data for segmentation and activation [2].

The other options are incorrect for the following reasons:

B. Data model triggers. Data model triggers are a feature of Data Cloud that allows custom logic to be executed when data model objects are created, updated, or deleted [3]. Data model triggers do not integrate data with Salesforce CRM; they manipulate data within Data Cloud.

C. Sales and Service bundle. The Sales and Service bundle is a feature of Data Cloud that provides pre-built data streams, data model objects, segments, and activations for Sales Cloud and Service Cloud data sources [4]. It does not integrate data in real time with Salesforce CRM; it ingests data at scheduled intervals.

D. Data actions and Lightning web components. Data actions and Lightning web components are features of Data Cloud that allow custom user interfaces and workflows to be built and embedded in Salesforce applications [5]. They do not integrate data with Salesforce CRM; they display and interact with data within Salesforce applications.


1: Load Data into Data Cloud

2: Data Streams in Data Cloud

3: Data Model Triggers in Data Cloud unit on Trailhead

4: Sales and Service Bundle in Data Cloud unit on Trailhead

5: Data Actions and Lightning Web Components in Data Cloud unit on Trailhead


Question 3

A consultant is planning the ingestion of a data stream that has profile information, including a mobile phone number. To ensure that the phone number can be used for future SMS campaigns, they need to confirm the phone number field is in the proper E.164 phone number format. However, the phone numbers in the file appear to be in varying formats.

What is the most efficient way to guarantee that the various phone number formats are standardized?
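The standardization the question asks about is typically done with a formula expression applied during data stream ingestion; the normalization logic itself is simple. Here is a minimal Python sketch of that logic, assuming bare ten-digit numbers belong to the North American Numbering Plan (country code +1) — an illustration only, not a Data Cloud formula:

```python
import re

def to_e164(raw: str, default_country_code: str = "1") -> str:
    """Normalize a loosely formatted phone number to E.164 (+<country><number>).

    Assumes bare 10-digit numbers belong to the default country (+1 here).
    """
    digits = re.sub(r"\D", "", raw)           # strip spaces, dashes, parentheses
    if raw.strip().startswith("+"):           # already carries a country code
        return "+" + digits
    if len(digits) == 10:                     # bare national number
        return "+" + default_country_code + digits
    if len(digits) == 11 and digits.startswith(default_country_code):
        return "+" + digits                   # national number with leading 1
    raise ValueError(f"Cannot normalize {raw!r} to E.164")

# Varying input formats all normalize to the same E.164 value
for raw in ["(415) 555-2671", "415-555-2671", "+1 415 555 2671", "14155552671"]:
    print(to_e164(raw))  # +14155552671
```

The point the exam question is driving at is that this kind of cleanup should happen once, at ingestion time, rather than downstream in every segment or activation.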



Question 4

A customer wants to create segments of users based on their Customer Lifetime Value. However, the source data that will be brought into Data Cloud does not include that key performance indicator (KPI).

Which sequence of steps should the consultant follow to achieve this requirement?



Answer : A

To create segments of users based on their Customer Lifetime Value (CLV), the consultant should follow the sequence Ingest Data > Map Data to Data Model > Create Calculated Insight > Use in Segmentation.

The first step is to ingest the source data into Data Cloud using data streams [1]. The second step is to map the source data to the data model, which defines the structure and attributes of the data [2]. The third step is to create a calculated insight, a derived attribute computed from the source or unified data [3]. In this case, the calculated insight is the CLV, which can be calculated using a formula or query over the sales order data [4]. The fourth step is to use the calculated insight in segmentation, the process of creating groups of individuals or entities based on their attributes and behaviors. With the CLV calculated insight, the consultant can segment users by the predicted revenue from the lifespan of their relationship with the brand.

The other options are incorrect because they do not follow this sequence. Options B and C are incorrect because a calculated insight cannot be created before the data is ingested and mapped, since it depends on data model objects [3]. Option D is incorrect because creating a calculated insight before mapping the data is not recommended, as the insight may not reflect the correct data model structure and attributes [3].

Reference: Data Streams Overview, Data Model Objects Overview, Calculated Insights Overview, Calculating Customer Lifetime Value (CLV) With Salesforce, Segmentation Overview
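In Data Cloud a calculated insight such as CLV is defined against data model objects, but the underlying arithmetic is simple aggregation. A minimal Python sketch with hypothetical customer IDs and order totals, treating CLV as the sum of order totals per customer and then segmenting on a threshold:

```python
from collections import defaultdict

# Hypothetical sales-order rows after ingestion and mapping: (customer_id, order_total)
orders = [
    ("cust-1", 120.0), ("cust-1", 80.0),
    ("cust-2", 15.0),
    ("cust-3", 300.0), ("cust-3", 450.0),
]

# Step 3 (calculated insight): derive CLV as total revenue per customer
clv = defaultdict(float)
for customer_id, total in orders:
    clv[customer_id] += total

# Step 4 (segmentation): group customers whose CLV meets a threshold
high_value = sorted(c for c, v in clv.items() if v >= 200)
print(clv["cust-1"])   # 200.0
print(high_value)      # ['cust-1', 'cust-3']
```

This mirrors the exam's point: the KPI must be derived (step 3) from ingested, mapped data (steps 1 and 2) before it can drive a segment (step 4).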


Question 5

During a privacy law discussion with a customer, the customer indicates they need to honor requests for the right to be forgotten. The consultant determines that Consent API will solve this business need.

Which two considerations should the consultant inform the customer about?

Choose 2 answers



Answer : C, D

When advising a customer about using the Consent API in Salesforce to comply with requests for the right to be forgotten, the consultant should focus on two primary considerations:

Data deletion requests are submitted for Individual profiles (Answer C): The Consent API in Salesforce is designed to handle data deletion requests specifically for individual profiles. This means that when a request is made to delete data, it is targeted at the personal data associated with an individual's profile in the Salesforce system. The consultant should inform the customer that the requests must be specific to individual profiles to ensure accurate processing and compliance with privacy laws.

Data deletion requests submitted to Data Cloud are passed to all connected Salesforce clouds (Answer D): When a data deletion request is made through the Consent API in Salesforce Data Cloud, the request is not limited to the Data Cloud alone. Instead, it propagates through all connected Salesforce clouds, such as Sales Cloud, Service Cloud, Marketing Cloud, etc. This ensures comprehensive compliance with the right to be forgotten across the entire Salesforce ecosystem. The customer should be aware that the deletion request will affect all instances of the individual's data across the connected Salesforce environments.


Question 6

A customer notices that their consolidation rate has recently increased. They contact the consultant to ask why.

What are two likely explanations for the increase?

Choose 2 answers



Answer : A, D

The consolidation rate is a metric that measures the amount by which source profiles are combined to produce unified profiles in Data Cloud, calculated as 1 - (number of unified profiles / number of source profiles). A higher consolidation rate means that more source profiles are matched and merged into fewer unified profiles, while a lower consolidation rate means that fewer source profiles are matched and more unified profiles are created. There are two likely explanations for why the consolidation rate has recently increased for a customer:

New data sources have been added to Data Cloud that largely overlap with the existing profiles. This means that the new data sources contain many profiles that are similar or identical to the profiles from the existing data sources. For example, if a customer adds a new CRM system that has the same customer records as their old CRM system, the new data source will overlap with the existing one. When Data Cloud ingests the new data source, it will use the identity resolution ruleset to match and merge the overlapping profiles into unified profiles, resulting in a higher consolidation rate.

Identity resolution rules have been added to the ruleset to increase the number of matched profiles. This means that the customer has modified their identity resolution ruleset to include more match rules or more match criteria that can identify more profiles as belonging to the same individual. For example, if a customer adds a match rule that matches profiles based on email address and phone number, instead of just email address, the ruleset will be able to match more profiles that have the same email address and phone number, resulting in a higher consolidation rate.
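Both explanations above reduce the number of unified profiles relative to source profiles. The effect on the metric can be checked with a quick sketch of the formula given earlier, consolidation rate = 1 - (unified profiles / source profiles), using hypothetical profile counts:

```python
def consolidation_rate(source_profiles: int, unified_profiles: int) -> float:
    """Consolidation rate as defined above: 1 - (unified / source)."""
    return 1 - unified_profiles / source_profiles

# Baseline: 100 source profiles merge into 40 unified profiles
before = consolidation_rate(100, 40)

# After adding an overlapping source or broader match rules: 150 source
# profiles still resolve to only 45 unified profiles, so the rate rises
after = consolidation_rate(150, 45)

print(before, after)
```

Here the rate moves from 0.60 to 0.70: more source profiles collapsing into proportionally fewer unified profiles is exactly what overlapping data sources or additional match rules produce.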


Question 7

A consultant is setting up Data Cloud for a multi-brand organization and is using data spaces to segregate its data for various brands.

While starting the mapping of a data stream, the consultant notices that they cannot map the object for one of the brands.

What should the consultant do to make the object available for a new data space?



Answer : D

When setting up Data Cloud for a multi-brand organization, if a consultant cannot map an object for one of the brands during data stream setup, they should navigate to the Data Space tab and select the object to include it in the new data space. Here's why:

Understanding the Issue

The consultant is using data spaces to segregate data for different brands.

While mapping a data stream, they notice that an object is unavailable for one of the brands.

This indicates that the object has not been associated with the new data space.

Why Navigate to the Data Space Tab?

Data Spaces and Object Availability:

Objects must be explicitly added to a data space before they can be used in mappings or transformations within that space.

If an object is missing, it means it has not been included in the data space configuration.

Solution Approach :

By navigating to the Data Space tab, the consultant can add the required object to the new data space.

This ensures the object becomes available for mapping and use in the data stream.

Steps to Resolve the Issue

Step 1: Navigate to the Data Space Tab

Go to Data Cloud > Data Spaces and locate the new data space for the brand.

Step 2: Add the Missing Object

Select the data space and click Edit.

Add the required object (e.g., a Data Model Object or Data Lake Object) to the data space.

Step 3: Save and Verify

Save the changes and return to the data stream setup.

Verify that the object is now available for mapping.

Step 4: Complete the Mapping

Proceed with mapping the object in the data stream.

Why Not Other Options?

A. Create a new data stream and map the second data stream to the data space: Creating a new data stream is unnecessary if the issue is simply object availability in the data space.

B. Copy data from the default data space to a new DMO using the Data Copy feature and link this DMO to the new data space: This is overly complex and not required if the object can simply be added to the data space.

C. Create a batch transform to split data between different data spaces: Batch transforms are used for data processing, not for resolving object availability issues.

Conclusion

The correct solution is to navigate to the Data Space tab and select the object to include it in the new data space. This makes the object available for mapping and resolves the issue efficiently.

