
Salesforce Data Cloud Myths: Truth or False?

If you’ve been keeping an eye on LinkedIn, attending recent Salesforce events, or watching Indiana Fever games, you’ve probably noticed that Salesforce isn’t just about their CRM anymore. It’s now CRM + DATA + AI. One could argue that Salesforce has always been about these three elements, with Einstein AI having been around for years and data being crucial for CRM functionality. So why does it matter now? What’s new and exciting? The answer lies in Data Cloud.

Data Cloud represents Salesforce’s latest evolution of the product formerly known as Salesforce Genie. It’s now central to how organizations can scale and grow in an era where data is the new currency.

Data Cloud is the fastest-growing product in Salesforce’s history, pushing new boundaries of innovation. Salesforce is heavily investing in this tool to provide customers with better access to their data, enabling actionable insights to achieve organizational goals.

As Data Cloud rapidly develops, potential clients often have questions about its function and how it can address their challenges. Through roundtables, presentations, webinars, and articles, we’ve noticed that these questions are becoming increasingly complex. At the same time, misconceptions about Data Cloud are also emerging. I’d like to address some common myths and provide the facts.

MYTH: Data Cloud requires MuleSoft

Salesforce designs its solutions to work seamlessly together, with continuous investment in their development. Data Cloud can ingest data from multiple systems and platforms, offering several out-of-the-box (OOTB) connectors. While many connectors are for Salesforce products, there are also connector options for SFTPs, Snowflake, AWS, websites, and more. The extensive list of connectors includes prebuilt solutions via MuleSoft Anypoint Exchange, highlighting Salesforce’s commitment to integrating commonly used tools beyond the Salesforce ecosystem.

[Image: Interface of the window that appears when a user chooses to add a new data stream within Salesforce Data Cloud]

Using the MuleSoft Anypoint Platform can accelerate connecting commonly used data sources, but it isn’t required. The Data Cloud ingestion process, like the broader Salesforce platform, supports a variety of integration solutions for connecting and harmonizing data effectively.

MYTH: Data Cloud will de-duplicate your data

When discussing Data Cloud, the term “harmonize” often causes confusion. Harmonizing data means standardizing your data model. For instance, a Contact record in Salesforce has specific fields we want to leverage. However, data from an HR system might structure similar information differently. Effective unification requires the alignment of these tables so that information in, for example, Salesforce’s “Mobile Phone” field matches information in the HR system’s “Phone 2” field. Data Cloud maps fields to a common data model, ensuring consistent representation.
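To make the idea of harmonization concrete, here is a minimal sketch of mapping source-specific field names onto a shared model. All field names and the mapping structure are illustrative assumptions, not Data Cloud’s actual schema or API.

```python
# Hypothetical field mappings from two source systems to one common model.
# The source field names ("MobilePhone", "Phone 2", etc.) are assumptions
# for illustration only.
COMMON_MODEL_MAPPINGS = {
    "salesforce": {"Name": "full_name", "MobilePhone": "mobile_phone", "Email": "email"},
    "hr_system": {"Employee Name": "full_name", "Phone 2": "mobile_phone"},
}

def harmonize(record: dict, source: str) -> dict:
    """Translate one source record into the common data model."""
    mapping = COMMON_MODEL_MAPPINGS[source]
    return {common: record[src] for src, common in mapping.items() if src in record}

sf_contact = {"Name": "Ryan Blake", "MobilePhone": "555-0100", "Email": "ryan@example.com"}
hr_row = {"Employee Name": "Ryan Blake", "Phone 2": "555-0100"}

print(harmonize(sf_contact, "salesforce"))
print(harmonize(hr_row, "hr_system"))
```

After harmonization, both records expose the same `mobile_phone` key even though the underlying systems named the field differently, which is exactly what makes the next step (matching individuals across sources) possible.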

Consider, for example, having two Ryan Blakes in your Salesforce org with different mobile numbers. The first question would be about managing duplicates in Salesforce, but let’s focus on Data Cloud. When Data Cloud ingests your Salesforce data, it retains the original source information, identifying both Ryan Blakes. After harmonizing the data, it undergoes “Identity Resolution,” which uses declaratively created rules to match individuals based on attributes like email, address, device ID, or phone number. This process assigns matched records a shared identifier, known as the Unified Individual ID, that links them together.

In the case of duplicate Ryan Blakes, Data Cloud will recognize the duplicates, create a unified ID spanning both records, and retain the original source IDs. It identifies matching attributes (like address) but won’t automatically de-duplicate Salesforce records. This intentional design choice means Salesforce is not attempting to be a Master Data Management (MDM) system.
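A toy version of this matching step, under stated assumptions, might look like the following. This is not Data Cloud’s actual resolution algorithm; it simply shows the principle that records matching on a chosen attribute receive one shared unified ID while their source IDs stay untouched.

```python
# Illustrative rule-based identity resolution: group records by one match
# attribute and stamp each group with a shared unified ID. Source IDs are
# never modified, mirroring Data Cloud's non-destructive behavior.
import hashlib
from collections import defaultdict

def resolve(records: list, match_key: str) -> list:
    groups = defaultdict(list)
    for rec in records:
        groups[rec[match_key]].append(rec)
    for key, recs in groups.items():
        # Derive a stable ID from the matched attribute value.
        unified_id = hashlib.sha256(key.encode()).hexdigest()[:32]
        for rec in recs:
            rec["unified_individual_id"] = unified_id
    return records

records = [
    {"source_id": "SF-001", "name": "Ryan Blake", "address": "100 Example Street"},
    {"source_id": "SF-002", "name": "Ryan Blake", "address": "100 Example Street"},
]
resolve(records, "address")
```

Real match rules are declarative and can combine several attributes (email, phone, device ID), but the outcome is the same shape: duplicates linked under one ID, originals preserved.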

Source            | Name       | Phone | Address            | Unified Individual ID
Salesforce Org #1 | Ryan Blake |       | 100 Example Street | 6f94e3d79b24c65ee3fcfc47c43bce53
Salesforce Org #1 | Ryan Blake |       | 100 Example Street | 6f94e3d79b24c65ee3fcfc47c43bce53

MYTH: Data Cloud will create a golden record

There’s a common misconception in the Salesforce community about getting a “golden record” with Data Cloud. While the term suggests a Master Data Management (MDM) system where a single, updated record is synchronized across all systems, this is not what Data Cloud is designed to do.

Data Cloud retains the original source information and identifies matches across systems, leveraging this data to facilitate engagements. This concept is known as the Data Cloud Key Ring.

Let’s build on the previous example: Ryan Blake appears twice in Salesforce (which is leveraged for volunteer management) and once in an HR system that hosts employee information, each record with a different email address. Data Cloud identifies Ryan as the same person. If Ryan also appears in an email marketing tool (not connected to Salesforce) with yet another email address, Data Cloud links these records under a Unified Individual ID, like a key ring holding unique keys.

Source                 | Name                     | Email | Address            | Unified Individual ID
Salesforce Org #1      | Ryan Blake               |       | 100 Example Street | 6f94e3d79b24c65ee3fcfc47c43bce53
Salesforce Org #1      | Ryan Blake               |       | 100 Example Street | 6f94e3d79b24c65ee3fcfc47c43bce53
HR System              | Ryan D. Blake            |       | 100 Example Street | 6f94e3d79b24c65ee3fcfc47c43bce53
Email Marketing System | Ryan and Christine Blake |       | 100 Example Street | 6f94e3d79b24c65ee3fcfc47c43bce53

In this setup, we can reach out to Ryan Blake through our email marketing tool, recognizing him as both an employee and a volunteer, which is information not typically retained in the email marketing system.

In an MDM setup, you’d typically update all systems with a matching email address from a single source, like the HR system. However, Data Cloud doesn’t do this. Instead, it uses data from multiple systems to create actionable, personalized experiences. For instance, a newsletter could include dynamic content about volunteer opportunities and company news, tailored to Ryan’s multiple roles. This approach acknowledges Ryan’s engagements without overwriting his preferences or original data sources.
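The key ring idea can be sketched as a simple lookup structure. This is an assumption-laden illustration (the source names and roles are invented for the Ryan Blake example), not a Data Cloud data structure: one unified ID points at every linked source record, so an engagement tool can read across systems without any record being overwritten.

```python
# Hypothetical "key ring": a unified individual ID mapped to the source
# records it links. Nothing here mutates the source systems.
key_ring = {
    "6f94e3d79b24c65ee3fcfc47c43bce53": [
        {"source": "Salesforce Org #1", "source_id": "SF-001", "role": "volunteer"},
        {"source": "HR System", "source_id": "HR-883", "role": "employee"},
        {"source": "Email Marketing System", "source_id": "EM-042", "role": "subscriber"},
    ]
}

def roles_for(unified_id: str) -> set:
    """Every role this individual holds across connected systems."""
    return {rec["role"] for rec in key_ring.get(unified_id, [])}

print(sorted(roles_for("6f94e3d79b24c65ee3fcfc47c43bce53")))
```

With this view, the email marketing send can branch on Ryan being both a volunteer and an employee, even though neither fact lives in the marketing system itself.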

MYTH: You can’t ingest Custom Objects from Salesforce

This topic has caused some confusion, suggesting a significant limitation that isn’t accurate, so it’s important to clarify upfront. During the data ingestion process, you can select which objects to ingest from your Salesforce CRM org. The system identifies the API names of the objects and fields from the data source. When setting up the connection between Salesforce and Data Cloud, ensure the Data Cloud integration user has access to the necessary information, just as you would when assigning Permission Sets to Salesforce users. If the integration user has access to custom objects, you can ingest and map that information accordingly.

[Image: Interface in Salesforce Data Cloud’s data ingestion process showing the API names of connected data stream options, data connector types, and custom and standard objects]

MYTH: Data Cloud requires a Data Scientist and takes a long time to implement

We often get asked about how much Data Cloud aligns with the “low-code, no-code” approach commonly associated with Salesforce CRM, and whether it takes a long time to implement. Given the need to ingest, map data, run identity resolution, and generate insights, it might seem like a job for a data scientist. However, this isn’t necessarily the case.

Data Cloud leverages various data sources that require alignment and strategy. While understanding the comprehensive data landscape might fit a data scientist’s role, skilled Salesforce Admins regularly manage data integrations from third-party applications. Since Data Cloud focuses on putting data into action, defining use cases is crucial for strategy and planning. An effective and scalable Data Cloud implementation requires careful planning, and that planning often outweighs the time spent on the actual implementation.

I love a good analogy, and painting a room comes to mind when thinking about a Data Cloud implementation. For those who have painted or hired someone to paint a room, consider how much time is spent prepping the room compared to actually painting the room. You move furniture, cover the floor, tape the trim, fill holes, sand the walls, and only then do you get to paint. If you decide to paint another room, you repeat the prep work before applying paint. If you plan to paint the trim, it’s more efficient to prep for it upfront rather than mid-process. This doesn’t even include the time spent selecting the paint color and gathering materials.

Data Cloud is similar; thorough upfront planning and preparation significantly impact the implementation timeline and ensure you leverage data effectively to meet your goals. Identifying use cases and understanding your data sources in advance can streamline the implementation process, making your Data Cloud journey more efficient and effective.

MYTH: Data Cloud is expensive

Cost is a common question surrounding Data Cloud, as potential users want to understand the financial implications. Data Cloud operates on a consumption-based pricing model, so discussions with your Salesforce Account Executive are essential. At Cloud for Good, we approach these conversations strategically to understand the goals for Data Cloud, identify appropriate data sources, and estimate potential data volumes.

Increasingly, technology conversations emphasize the importance and value of having a data strategy. Data is the new currency, and being strategic with it (much like managing finances) creates more opportunities, especially given the vast amount of information that may reside in different systems.

In the realm of Big Data, five characteristics are often highlighted: Volume, Variety, Veracity, Value, and Velocity. These factors should be considered in the context of the consumption-based pricing model and included in discussions with Salesforce and an implementation partner like Cloud for Good. Matt Wash, from the Salesforce Data Cloud Partner Enablement Team, elaborates on these in a LinkedIn post:

  • Volume & Variety: How many systems, objects, fields, and rows are needed for my use case?
  • Veracity: Is the data of good quality and ready for harmonization?
  • Value: Does the data support actual business outcomes and KPIs?
  • Velocity: How quickly do my use cases require data processing?

As mentioned earlier, Cloud for Good, in collaboration with your Salesforce Account Executives, is ready to assist you in estimating potential consumption based on your use cases. Feel free to reach out for more information!

In Summary

Salesforce Data Cloud represents a significant evolution in managing and leveraging data within your organization. Data Cloud helps break down data silos, providing actionable insights to drive organizational goals. Despite initial misconceptions, implementing Data Cloud doesn’t require extensive coding skills or a data scientist. Instead, thorough planning and preparation, akin to prepping a room before painting, can streamline the process and maximize efficiency.

Understanding the value of a comprehensive data strategy is crucial, as data becomes the new currency. Addressing the five V’s of Big Data—Volume, Variety, Veracity, Value, and Velocity—ensures that your data supports meaningful business outcomes and KPIs.

At Cloud for Good, we recognize the importance of a strategic approach to a Data Cloud implementation. Our team of certified professionals is ready to assist you on this journey. We offer a starter package to help you get hands-on with the tool and explore its capabilities. Whether you need help understanding your data sources or defining use cases, our data practice can provide the expertise you need.

Talk to us about Data Cloud and discover how our tailored solutions can help you harness the full potential of your data.