Salesforce Certified Data Cloud Consultant (Data-Con-101) Exam Practice Test

Question 2

A global fashion retailer operates online sales platforms across AMER, EMEA, and APAC. The data formats for customer, order, and product information vary by region, and compliance regulations require data to remain unchanged in the original data sources. The company also requires a unified view of customer profiles for real-time personalization and analytics.

Given these requirements, which transformation approach should the company implement to standardize and cleanse incoming data streams?



Answer: B

Given the requirements to standardize and cleanse incoming data streams while keeping the original data unchanged in compliance with regional regulations, the best approach is to implement batch data transformations. Here's why:

Understanding the Requirements

The global fashion retailer operates across multiple regions (AMER, EMEA, APAC), each with varying data formats for customer, order, and product information.

Compliance regulations require the original data to remain unchanged in the source systems.

The company needs a unified view of customer profiles for real-time personalization and analytics.

Why Batch Data Transformations?

Batch Transformations for Standardization:

Batch data transformations allow you to process large volumes of data at scheduled intervals.

They can standardize and cleanse data (e.g., converting different date formats, normalizing product names) without altering the original data in the source systems.

Compliance with Regulations:

Since the original data remains unchanged in the source systems, batch transformations comply with regional regulations.

The transformed data is stored in a separate layer (e.g., a new Data Lake Object or Unified Profile) for downstream use.

Unified Customer Profiles:

After transformation, the cleansed and standardized data can be used to create a unified view of customer profiles in Salesforce Data Cloud.

This enables real-time personalization and analytics across regions.

Steps to Implement This Solution

Step 1: Identify Transformation Needs

Analyze the differences in data formats across regions (e.g., date formats, currency, product IDs).

Define the rules for standardization and cleansing (e.g., convert all dates to ISO format, normalize product names).
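As a rough illustration of the kind of rule set defined in this step, the following minimal Python sketch converts region-specific date formats to ISO 8601 and normalizes product names. The field names, regions, and formats are assumptions for the example only; this is not Data Cloud transform syntax.

```python
from datetime import datetime

# Hypothetical sample records; field names and values are illustrative only.
amer_record = {"order_date": "12/18/2023", "product_name": "  JACKET "}
emea_record = {"order_date": "18-12-2023", "product_name": "jacket"}

# Assumed region-specific date formats to be mapped to ISO 8601.
DATE_FORMATS = {"AMER": "%m/%d/%Y", "EMEA": "%d-%m-%Y", "APAC": "%Y/%m/%d"}

def standardize(record: dict, region: str) -> dict:
    """Return a cleansed copy of the record; the source record stays untouched."""
    parsed = datetime.strptime(record["order_date"], DATE_FORMATS[region])
    return {
        "order_date": parsed.date().isoformat(),                 # "2023-12-18"
        "product_name": record["product_name"].strip().title(),  # "Jacket"
    }

print(standardize(amer_record, "AMER"))
print(standardize(emea_record, "EMEA"))
```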

Step 2: Create Batch Transformations

Use Data Cloud's Batch Transform feature to apply the defined rules to incoming data streams.

Schedule the transformations to run at regular intervals (e.g., daily or hourly).

Step 3: Store Transformed Data Separately

Store the transformed data in a new Data Lake Object (DLO) or Unified Profile.

Ensure the original data remains untouched in the source systems.

Step 4: Enable Unified Profiles

Use the transformed data to create a unified view of customer profiles in Salesforce Data Cloud.

Leverage this unified view for real-time personalization and analytics.

Why Not Other Options?

A. Implement streaming data transformations: Streaming transformations are designed for real-time processing but may not be suitable for large-scale standardization and cleansing tasks. Additionally, they might not align with compliance requirements to keep the original data unchanged.

C. Transform data before ingesting into Data Cloud: Transforming data before ingestion would require modifying the original data in the source systems, violating compliance regulations.

D. Use Apex to transform and cleanse data: Using Apex is overly complex and resource-intensive for this use case. Batch transformations are a more efficient and scalable solution.

Conclusion

By implementing batch data transformations, the global fashion retailer can standardize and cleanse its data while complying with regional regulations and enabling a unified view of customer profiles for real-time personalization and analytics.


Question 3

Northern Trail Outfitters (NTO) creates a calculated insight to compute recency, frequency, monetary (RFM) scores on its unified individuals. NTO then creates a segment based on these scores that it activates to a Marketing Cloud activation target.

Which two actions are required when configuring the activation?

Choose 2 answers



Question 4

A customer is concerned that the consolidation rate displayed in identity resolution is quite low compared to their initial estimations.

Which configuration change should a consultant consider in order to increase the consolidation rate?



Question 5

A company wants to include certain personalized fields in an email by including related attributes during the activation in Data Cloud. It notices that some values, such as purchased product names, do not have consistent casing in Marketing Cloud Engagement. For example, purchased product names appear as follows: Jacket, jacket, shoes, SHOES. The company wants to normalize all names to proper case and replace any null values with a default value.

How should a consultant fulfill this requirement within Data Cloud?



Answer: D

To normalize purchased product names (e.g., converting casing to proper case and replacing null values with a default value) within Salesforce Data Cloud, the best approach is to create a batch data transform that generates a new DLO. Here's the detailed explanation:

Understanding the Problem : The company wants to ensure that product names in Marketing Cloud Engagement are consistent and properly formatted. The inconsistencies in casing (e.g., 'Jacket,' 'jacket,' 'shoes,' 'SHOES') and the presence of null values need to be addressed before activation.

Why Batch Data Transform?

A batch data transform allows you to process large volumes of data in bulk, making it ideal for cleaning and normalizing datasets.

By creating a new DLO, you ensure that the original data remains intact while providing a clean, transformed dataset for downstream use cases like email personalization.

Steps to Implement This Solution :

Step 1: Navigate to the Data Streams section in Salesforce Data Cloud and identify the data stream containing the purchased product names.

Step 2: Create a new batch data transform by selecting the relevant data stream as the source.

Step 3: Use transformation functions to normalize the product names:

Apply the PROPER() function to convert all product names to proper case.

Use the COALESCE() function to replace null values with a default value (e.g., 'Unknown Product').
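As a rough illustration of the intended logic (not Data Cloud formula syntax), this minimal Python sketch mirrors what PROPER() combined with COALESCE() would achieve; the default value "Unknown Product" is an assumption for the example.

```python
from typing import Optional

DEFAULT_PRODUCT_NAME = "Unknown Product"  # assumed default value for this example

def normalize_product_name(raw: Optional[str]) -> str:
    """Rough Python equivalent of COALESCE(PROPER(product_name), default)."""
    if raw is None or raw.strip() == "":
        return DEFAULT_PRODUCT_NAME     # replace null/blank values
    return raw.strip().title()          # proper case: "SHOES" -> "Shoes"

samples = ["Jacket", "jacket", "shoes", "SHOES", None]
print([normalize_product_name(s) for s in samples])
# ['Jacket', 'Jacket', 'Shoes', 'Shoes', 'Unknown Product']
```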

Step 4: Configure the batch data transform to output the results into a new DLO. This ensures that the transformed data is stored separately from the original dataset.

Step 5: Activate the new DLO for use in Marketing Cloud Engagement. Ensure that the email templates pull product names from the transformed DLO instead of the original dataset.

Why Not Other Options?

A. Create a streaming insight with a data action: Streaming insights are designed for real-time processing and are not suitable for bulk transformations like normalizing casing or replacing null values.

B. Use formula fields when ingesting at the data stream level: Formula fields are useful for simple calculations but are limited in scope and cannot handle complex transformations like null value replacement. Additionally, modifying the ingestion process may not be feasible if the data stream is already in use.

C. Create one batch data transform per data stream: This approach is inefficient and redundant. Instead of creating multiple transforms, a single batch transform can handle all the required changes and output a unified, clean dataset.

By creating a batch data transform that generates a new DLO, the company ensures that the product names are consistently formatted and ready for use in personalized emails, improving the overall customer experience.


Question 6

A consultant has an activation that is set to publish every 12 hours, but has discovered that updates to the data prior to activation are delayed by up to 24 hours.

Which two areas should a consultant review to troubleshoot this issue?

Choose 2 answers



Question 7

Every day, Northern Trail Outfitters uploads a summary of the last 24 hours of store transactions to a new file in an Amazon S3 bucket, and files older than seven days are automatically deleted. Each file contains a timestamp in a standardized naming convention.

Which two options should a consultant configure when ingesting this data stream?

Choose 2 answers



Answer: B, C

When ingesting data from an Amazon S3 bucket, the consultant should configure the following options:

The refresh mode should be set to "Upsert", which means that new and updated records will be added or updated in Data Cloud, while existing records will be preserved. This ensures that the data is always up to date and consistent with the source.
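To make the distinction concrete, here is a minimal Python sketch of upsert behavior; the record IDs and fields are hypothetical and only illustrate the semantics, not Data Cloud internals.

```python
# Records are keyed by an ID: existing keys are updated, new keys are inserted,
# and records absent from the incoming file are left untouched.
existing = {
    "TXN-001": {"amount": 120.00, "status": "complete"},
    "TXN-002": {"amount": 45.50, "status": "complete"},
}
incoming = [
    {"id": "TXN-002", "amount": 45.50, "status": "refunded"},   # updates TXN-002
    {"id": "TXN-003", "amount": 89.99, "status": "complete"},   # inserts TXN-003
]

for rec in incoming:
    existing[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}

print(existing)  # TXN-001 preserved, TXN-002 updated, TXN-003 added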

The filename should contain a wildcard to accommodate the timestamp, which means that the file name pattern should include a variable part that matches the timestamp format. For example, if the file name is store_transactions_2023-12-18.csv, the wildcard could be store_transactions_*.csv. This ensures that the ingestion process can identify and process the correct file every day.
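The following Python sketch shows how such a wildcard pattern matches the daily timestamped files. The file names are examples, and the matching here only illustrates the pattern logic, not the actual ingestion engine.

```python
from fnmatch import fnmatch

# Assumed file names following the standardized naming convention; the wildcard
# pattern picks up each day's file regardless of its timestamp.
pattern = "store_transactions_*.csv"
files_in_bucket = [
    "store_transactions_2023-12-18.csv",
    "store_transactions_2023-12-17.csv",
    "inventory_2023-12-18.csv",
]

matches = [f for f in files_in_bucket if fnmatch(f, pattern)]
print(matches)  # only the store_transactions_* files match
```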

The other options are not necessary or relevant for this scenario:

Deletion of old files is a feature of the Amazon S3 bucket, not the Data Cloud ingestion process. Data Cloud does not delete any files from the source, nor does it require the source files to be deleted after ingestion.

Full Refresh is a refresh mode that deletes all existing records in Data Cloud and replaces them with the records from the source file. This is not suitable for this scenario, as it would result in data loss and inconsistency, especially if the source file only contains the summary of the last 24 hours of transactions. Reference: Ingest Data from Amazon S3, Refresh Modes

