I just returned from a meeting with one of my clients, a major player in the automotive industry, which has an integration between SAP R/3 and Salesforce. Every day they send flat-file extracts from both systems to an external agency for data matching. This got me thinking: why not add a small data quality (or customer data management) component to your integration workflow and do this automatically, rather than going to an external agency?
I’ve previously stressed the business value of data and process integration, not only for improving process efficiency but also for enabling enterprise mobility. However, today I’d like to explain why data quality and Master Data Management are key components of any data integration or enterprise mobility project.
With 25% of companies believing their data is inaccurate, 91% suffering from common data quality issues, and up to 12% of revenue wasted because of poor data quality, this looks like a small step that can have a significant business impact.
Objectives of integration and enterprise mobility
While business process integration and enterprise mobility are different projects with different touchpoints and objectives, data quality is vital for both.
In an integration project, we are trying to effectively share information between multiple systems which each contain elements of the customer record. For example, the customer’s product interest and history might be stored in the CRM, while their billing, delivery and support history sit in the ERP; the customer record is not one or the other but both of these, along with any other information held in further systems such as social platforms. The purpose of the integration project is to build automated workflows that share information between systems, to ensure that changes are reflected everywhere, and to present information from multiple systems on a single screen for the user.
In an enterprise mobility project, we typically have the same challenge of presenting information from multiple systems to the user on a single screen, but mobile brings other challenges as well. For example, typing on a small touchscreen increases the chance that critical data may be misspelled (increasing the chance of duplicating customer records); and users are also far less likely to search multiple records, as they get frustrated faster on mobile.
From this it should be clear that high quality customer data is vital for both integration and mobility projects. However, this surely isn’t news: most systems contain a duplicate record finder, and many companies invest heavily in improving their data quality, so why does this remain an issue?
Part of the problem is that customer contact data erodes (becomes outdated and incorrect) at a rate of about 40% per year for the critical name, address, and job title fields. This equates to around five changes per day in a database of 5,000 records. Further, mergers and acquisitions often create overlapping customer records, and in some cases these can comprise 40% of the customer database.
Without good quality data, your business has to tread a line between wasting time and harming your image with multiple calls to duplicated records on one side, and missing opportunities by being too cautious on the other.
The other part of the problem is that data standards, types and quality differ across multiple systems, and this makes it difficult to measure product profitability, customer lifetime value, and effectively cross-sell or upsell products.
How is it done now?
Today, customer data management is typically done through “extended strings”: the system compares pre-selected data elements and flags records as potential duplicates when these strings match. The results then need to be followed up manually, which means deciding what level of missed duplicates or false positives you are willing to accept. Typical extended strings might be:
• Postcode, last name (first 5 characters), and premise number
• Postcode, last name (first 5 characters), first name (first 5 characters)
• Last name (first 5 characters), first initial, premise number and street name
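As a rough illustration, an extended-string check like the first bullet above can be sketched in a few lines of Python. The records and field names here are hypothetical, chosen only to show the mechanism:

```python
from collections import defaultdict

# Hypothetical customer records; field names are illustrative only.
records = [
    {"id": 1, "postcode": "SW1A 1AA", "last_name": "Smith", "premise": "10"},
    {"id": 2, "postcode": "SW1A 1AA", "last_name": "Smyth", "premise": "10"},
    {"id": 3, "postcode": "SW1A 1AA", "last_name": "Smith", "premise": "10"},
]

def match_key(rec):
    """Extended string: postcode + first 5 chars of last name + premise number."""
    return (rec["postcode"].replace(" ", "").upper(),
            rec["last_name"][:5].upper(),
            rec["premise"])

groups = defaultdict(list)
for rec in records:
    groups[match_key(rec)].append(rec["id"])

# Only records whose keys match exactly are flagged as potential duplicates.
duplicates = [ids for ids in groups.values() if len(ids) > 1]
```

Here records 1 and 3 are flagged, but record 2 (“Smyth”) slips through even though it almost certainly refers to the same person.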
Unfortunately, all of these can easily fail to find a duplicate record, as shown in the image below (from http://www.helpit.com/us/datasheets/ukmatchingenginewhitepaper.pdf).
The problem is simple, and especially relevant to enterprise mobility projects. The extended strings typically used today do not take into account:
• Pronunciation
• Inconsistent, non-structured or transposed data
• Missing or incomplete data
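To make the pronunciation point concrete, here is a minimal sketch of phonetic matching using the classic Soundex algorithm. Soundex is just one of many phonetic schemes, and real matching engines use more sophisticated variants, but it shows why sound-alike names defeat exact string comparison:

```python
def soundex(name: str) -> str:
    """Classic Soundex: names that sound alike get the same 4-character code."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    if not name:
        return ""
    result = name[0].upper()          # keep the first letter as-is
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:     # skip vowels and repeated codes
            result += code
        if ch not in "hw":            # h and w do not separate equal codes
            prev = code
    return (result + "000")[:4]       # pad to four characters

# An exact comparison misses this pair; Soundex catches it.
assert "Smith" != "Smyth"
assert soundex("Smith") == soundex("Smyth") == "S530"
```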
Further, because potential matches are not graded, every potential duplicate needs to be manually reviewed with no ability to focus on delivering quick wins. Clearly, duplicated customer records are hard to spot and potentially damaging, while manually deduplicating them is laborious, lengthy and may still not be effective.
What do we need?
Just as our enterprise integration needs a smart platform that can follow business logic between systems, and enterprise mobility needs to be able to work natively across multiple channels and ecosystems from a single source, data quality also needs a smart solution that can automatically cross-check customer data across multiple systems.
This allows entire customer records to be compared even when they are composed of multiple disparate elements. However, the complexity of real-world data means we need to combine multiple sophisticated matching approaches with an intelligent scoring mechanism, so that users are presented only with the records that are neither clearly duplicates nor clearly unique.
In order to create integrated data quality that outperforms extended strings, the solution needs:
• Parsing and restructuring: the ability to identify and relocate data to its correct fields, and split unstructured names into component parts
• Phonetic matching: to convert names, addresses, and companies into phonetic equivalents and compensate for various phonetic misspellings
• Element matching: to match names with elements missing or reversed
• Name lexicons: which can cope with abbreviated forms of names
• Acronym and initial matching: to handle inconsistencies between names arising from acronyms and initials
• Business word identification: to identify key words in a business name and recognise similarities
• Non-phonetic fuzzy matching: to detect and resolve common typing or spelling errors
• Standardised strings and words: to match data in multiple languages, and recognise the same string with different ordering
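As a sketch of how graded scoring might tie these techniques together, the following combines simple fuzzy field similarity (Python’s standard-library difflib) with field weights and thresholds. The weights, thresholds, and records are assumptions for demonstration, not a production configuration:

```python
from difflib import SequenceMatcher

def field_score(a: str, b: str) -> float:
    """Fuzzy similarity between two field values, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Hypothetical weights: the name matters more to identity than the street.
WEIGHTS = {"name": 0.5, "street": 0.3, "postcode": 0.2}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted sum of per-field similarities."""
    return sum(w * field_score(rec_a[f], rec_b[f]) for f, w in WEIGHTS.items())

def grade(score: float) -> str:
    # Illustrative thresholds: in practice, tune against a reviewed sample.
    if score >= 0.9:
        return "duplicate"   # merge automatically
    if score >= 0.7:
        return "review"      # present to a user
    return "unique"          # leave alone

a = {"name": "John Smith", "street": "10 Downing St", "postcode": "SW1A 2AA"}
b = {"name": "Jon Smith", "street": "10 Downing Street", "postcode": "SW1A 2AA"}
c = {"name": "Mary Jones", "street": "22 Baker St", "postcode": "NW1 6XE"}
```

With these thresholds, the near-identical pair a/b grades as a duplicate and the unrelated pair a/c as unique, so only genuinely ambiguous pairs would ever reach a human reviewer.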
Building such a data quality solution into an enterprise mobility or integration project presents users with high-quality, integrated customer data, allowing them to work more effectively. It also makes your business stand out to your customers through good quality, consistent information about them.
The post Data Quality In Integration and Mobile Projects appeared first on David Akka Blog.