By Francis Scudellari
If you’ve ever undertaken the task of importing data from another CRM into Salesforce, you’ve probably experienced frustration along the way. If you’re just starting a migration, there’s a good chance you will run into some, but you can minimize the pain with a heavy dose of preparation.
A lot of the frustration associated with migrations comes from the users who own the data, who can't understand, for example, why everything they see on the Bio tab in Raiser's Edge can't simply be copied as-is to the Contact page in Salesforce. There's plenty of frustration, too, for the data migrator, who has to untangle the knotted relational strings that pull scattered data into a single page on the users' screens (not to mention clean up the messiness that has accumulated over years of use).
To get to the core of why it can be hard and painful to get the data to match up between two CRMs, let’s first touch on a couple of technical concepts.
The first concept is the Model-View-Controller (MVC) framework. It's an abstract description of how Salesforce and other CRMs work. The "Model" in the MVC is the database layer of the CRM; in Salesforce this means the data objects and their fields. The "View" in the MVC is the presentation layer of the CRM; that is, the page layouts, reports and dashboards where users can see and work with the data stored by the Model. Finally, the "Controller" is the logic (in Salesforce, the standard and Apex controllers) that dictates how the View updates to present data based on users' interactions with the CRM.
That can all be a little confusing, but the key takeaway is that what the View displays as data on a single tab or page may actually be a presentation of data stored in multiple interrelated tables in the Model. The View can also present "data" that isn't stored anywhere in the Model, for example values that are calculated on the fly or are aggregations of multiple fields.
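As a rough sketch of that separation (illustrative only, not actual Salesforce code, with made-up table and field names), here's the idea in Python: the controller pulls rows from several model tables and hands the view a single assembled page, including a "field" that is stored nowhere.

```python
# A minimal MVC sketch (illustrative only -- not how Salesforce is built).
# The "Model" stores data in separate tables; the controller assembles
# one page for the "View" from several of them.

MODEL = {
    "contacts": {1: {"first": "Ada", "last": "Lovelace"}},
    "phones":   [{"contact_id": 1, "type": "Mobile", "number": "555-0100"}],
    "emails":   [{"contact_id": 1, "address": "ada@example.org"}],
}

def contact_page(contact_id):
    """Controller logic: gather related rows and hand them to the view."""
    contact = MODEL["contacts"][contact_id]
    phones = [p for p in MODEL["phones"] if p["contact_id"] == contact_id]
    emails = [e for e in MODEL["emails"] if e["contact_id"] == contact_id]
    # "Data" the view shows that is stored nowhere in the Model:
    # a full name calculated on the fly from two stored fields.
    full_name = f"{contact['first']} {contact['last']}"
    return {"name": full_name, "phones": phones, "emails": emails}

print(contact_page(1))  # one page, built from three tables
```

The point of the sketch: the single page a user sees is a construction, which is exactly why "just copy the page" is rarely possible in a migration.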
Another geeky concept that database folks are familiar with is Database Normalization. Essentially, the degree to which a database is "normalized" is reflected in how many data objects are used to store the data presented on a single page. For example, in a highly normalized database you're likely to see separate tables for phone numbers, for emails, for addresses, and even for the values of fields we define in Salesforce as picklists.
Salesforce does NOT use a very normalized data model. For example, a Contact's phone numbers are stored in typed fields rather than in a separate table where a Contact could be linked to more than one Mobile phone or Work phone. What that means for us is a data headache if we're trying to migrate from a CRM that uses a highly normalized data model.
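To make that headache concrete, here's an illustrative sketch of flattening a normalized phone table into Salesforce's typed Contact fields (the target field names are Salesforce's standard ones; the source table layout is a hypothetical stand-in):

```python
# Flatten a normalized phone table (one row per number) into
# Salesforce-style typed fields (one column per phone type).

source_phones = [
    {"contact_id": 7, "type": "Mobile", "number": "555-0101"},
    {"contact_id": 7, "type": "Work",   "number": "555-0102"},
    {"contact_id": 7, "type": "Mobile", "number": "555-0103"},  # a second mobile
]

# Map source phone types to standard Salesforce Contact fields.
FIELD_MAP = {"Mobile": "MobilePhone", "Work": "Phone", "Home": "HomePhone"}

def flatten(phones):
    """Return one row per contact; keep only the first number of each type."""
    rows = {}
    for p in phones:
        row = rows.setdefault(p["contact_id"], {})
        field = FIELD_MAP.get(p["type"])
        if field and field not in row:
            row[field] = p["number"]
    return rows

print(flatten(source_phones))
# The second Mobile number has nowhere to land -- that lost data is
# exactly the migration decision you'll have to make by hand.
```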
What's most important for users to understand about normalization is this: well-architected databases are set up so the CRM can store and manipulate data as efficiently as possible. Well-architected databases are NOT designed to make data migrations easy or efficient. That role is typically played by report engines or query tools, which replicate the data-handling of the CRM's presentation layer.
For that reason, it's often easiest to export data from a CRM using its reporting tools. This can be much more efficient than trying to export raw data tables from a highly normalized database, since the reporting tool does the work of combining fields from related tables for you.
Using reports is not a perfect solution, however. One common issue you'll find is that not all systems let you export the primary keys from the data tables. Not having those keys can be a major problem if you need to run recurring updates of constituent data from the source system. Some source systems also won't give you access to all the data, or will limit what data you can combine. Even worse, some systems might not let you export to CSV or another easily importable format.
Also, not all systems have a user-friendly report builder like Salesforce's, and it can be tricky to put together the exports you need, or you may not feel confident that you've filtered the data in a way that ensures you're getting everything you need to import. What then?
If you absolutely have to work with raw data files, all is not lost. There are still good ways to reduce the effort of data transformation.
Simple and Not so Simple Transformations in Excel
It's not the most sophisticated solution, but there's a lot you can do in Excel to transform the data exported to CSVs or other delimited text file formats from the source system. For example, you can use functions such as VLOOKUP() and SUMIF() to link up values from different tables. You can also write formulas in Excel to perform some simple and some pretty complex data transformations. This includes everything from text concatenation, to date conversion, to adding formatting to phone numbers or substituting values to enforce some data consistency.
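If you'd rather script those same transformations than maintain spreadsheet formulas, each of the operations just mentioned is a few lines of Python. This is only a sketch with hypothetical column names; the source formats (two-digit months, unformatted phone digits) are assumptions about what a typical export looks like:

```python
import re
from datetime import datetime

# One exported row with hypothetical column names.
row = {"First": "Grace", "Last": "Hopper",
       "GiftDate": "12/09/1992", "Phone": "5550199000"}

# Text concatenation (the equivalent of =A2 & " " & B2 in Excel).
full_name = f"{row['First']} {row['Last']}"

# Date conversion: source MM/DD/YYYY -> ISO format for import.
iso_date = datetime.strptime(row["GiftDate"], "%m/%d/%Y").strftime("%Y-%m-%d")

# Phone formatting: strip non-digits, then apply (XXX) XXX-XXXX.
digits = re.sub(r"\D", "", row["Phone"])
phone = f"({digits[0:3]}) {digits[3:6]}-{digits[6:10]}"

print(full_name, iso_date, phone)
```

Unlike formulas pasted down a column, a script like this is easy to re-run every time you pull a fresh export.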
Building Your Own Queries
If you have access to a favorite relational database (Access, MySQL, SQL Server) that you feel comfortable working in, you could also import the exported files there. This will allow you to define queries that recreate the source system's table joins in a more reliable way than using VLOOKUP() in Excel, where it's easy to make mistakes. If the source system uses a SQL database, often there will be predefined scripts you can use to try to rebuild the queries the CRM uses. You can also try your hand at writing your own SQL queries and scripts. There are lots of good resources on the Internet for learning SQL (such as W3Schools), and the syntax is very close to the SOQL used in Salesforce.
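Here's a small example of that approach using SQLite (which ships with Python, so there's nothing to install). The table and column names are hypothetical stand-ins for whatever the source CRM actually uses; the point is the join itself:

```python
import sqlite3

# Recreate a source-system join in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE constituents (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emails (constituent_id INTEGER, address TEXT, is_primary INTEGER);
    INSERT INTO constituents VALUES (1, 'Jean Gray'), (2, 'Scott Summers');
    INSERT INTO emails VALUES (1, 'jean@example.org', 1),
                              (1, 'jg-old@example.org', 0),
                              (2, 'scott@example.org', 1);
""")

# One row per constituent with their primary email -- the kind of
# lookup that is error-prone with VLOOKUP() but reliable in SQL.
rows = con.execute("""
    SELECT c.id, c.name, e.address
    FROM constituents c
    LEFT JOIN emails e
      ON e.constituent_id = c.id AND e.is_primary = 1
    ORDER BY c.id
""").fetchall()
print(rows)
```

The LEFT JOIN also makes it obvious, in a way a spreadsheet often doesn't, which constituents have no primary email at all.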
Of course if you go this route, you’ll need to be or to become familiar with the database architecture of the source system. Ideally, there’s good documentation to guide you in this effort, but for some systems it involves a lot of time exploring the data tables to see how they’re related, and where the key data is housed.
Using the Power of Salesforce
If you're already an expert in Salesforce, it may be easier to leverage that knowledge to handle the data transformation you need to perform. Pursuing this path, you could create custom data objects and fields in Salesforce that replicate the Model of the source system, and then build out automation that transforms and combines the data before transferring the values to the target Salesforce objects and fields.
For example, let's say that the source system stores the constituent's birth date in separate Month, Day and Year fields, and that the Year isn't always populated. You could create custom fields to store these separate values, and then build out a Process Builder automation to combine the three (when all three are present), and copy the resulting Date to the standard Birthdate field. Depending on your level of expertise, you can also build Processes and Flows that do more advanced work, like updating values from a Child to a Parent record, or creating new records on another object, although you'd have to keep system limits in mind if you're importing data in bulk.
In some cases, you can also take advantage of existing apex triggers. One common scenario for this involves importing from a data model that has normalized addresses. If you're using the Nonprofit Success Pack (NPSP), you can make use of that package's triggers to set the preferred address on Households and Contacts. After migrating all your Household Accounts and related Contacts, import all of a Household's addresses to the NPSP Address object, which is a child of Account. As long as you flag one of these addresses as the Default address, the information from that record will be automatically copied to the Account's Billing Address fields, and to the Mailing Address fields on all related Contacts in the Household.
If you're not comfortable yet writing automation in Salesforce, you can also make use of a good ETL (Extract-Transform-Load) tool to perform data transformations. These have their own learning curves. Some, such as Apsona and CRM Fusion's DemandTools, let you manipulate data with mass updates once it's been imported, and even perform some simple transformations as it's imported from CSV files. Others, such as Jitterbit or Skyvia, can act as middleware between Salesforce and a source system (as long as they can connect to the source, or to files extracted from it), and perform more advanced transformations before the data gets imported.
Seek out the Community
Another great resource all Salesforce users have is the Salesforce Community itself. The Salesforce Trailblazer Community and the Power of Us Hub are great places to engage with other users and experts when you have questions or run into obstacles with your data migration. There’s even a Data Migration focused group on the Hub.