A Guide to ERP Master Data Migration

Today’s guest blog post is about MDM for SAP.

At the heart of every organization is its Enterprise Resource Planning (ERP) system, installed to simplify and integrate business operations such as supply chain, finance, human resources, procurement, and marketing into one cohesive platform.

The data stored in ERP systems comes from multiple sources and geographies, and it varies with the kinds of datasets an organization needs to maintain. These datasets help departments run analytics and project the opportunities and growth of the company.

A Deloitte report states that “data migration and generative AI are key growth areas, with 80% of global business leaders believing AI will boost business efficiency.” Despite technological advances, data migration often remains a challenge for organizations, yet it is an essential step for an efficient ERP implementation and for maintaining data quality. The process therefore needs careful planning and execution to avoid data inaccuracies and inconsistencies.

Master data is the fundamental business data like customers, products, suppliers, employees, and financial accounts that fuels an organization’s daily operations. A poorly executed migration can result in data inconsistency, business disruption, and expensive delays. Thus, it’s crucial to plan and implement master data migration meticulously.

In this guide, we will discuss the best practices, strategies, and important considerations to make your master data migration to your new ERP system smooth and successful.

What is ERP Master Data Migration?

ERP master data migration is the process of moving important data from legacy systems (or other data sources) into a new ERP system, typically when the ERP software is upgraded or replaced. Gartner reports that “75% of ERP strategies are not strongly aligned with overall business strategy, leading to confusion and lackluster results.”

In a migration such as SAP ECC to SAP S/4HANA, there are concrete steps organizations can take to build up their SAP master data management. The process includes extracting, transforming, cleansing, and loading (ETL) the data into the new ERP environment while ensuring it is accurate, consistent, and aligned with business needs.

Why is Master Data Migration Important?

The quality of your data will determine the success of your ERP implementation. Correct master data ensures:

  • Operational Efficiency: With dependable and uniform data, business operations such as procurement, sales, inventory, and accounting work harmoniously.
  • Data Consistency: Master data migration ensures the new ERP system works with harmonized and standardized information.
  • Compliance: In regulated industries, data integrity is paramount for passing audits and meeting regulatory requirements.
  • Decision Making: Quality data facilitates proper reporting and quality decision-making.

Major Challenges of ERP Master Data Migration

Prior to discussing the best practices, let us mention the major challenges most organizations encounter while performing the ERP master data migration:

  • Data Quality Issues: Unreliable, stale, or erroneous data in legacy systems.
  • Complicated Data Structures: Legacy systems can have various data formats, which are to be mapped to the new ERP system.
  • Integration across Multiple Systems: Migrating data from several unrelated systems is an integration challenge in itself.
  • Disruption to Business: Incomplete or incorrect data migration can result in operational downtime and business process delay.
A diagram of challenges that companies face while migrating their ERP master data.

ERP Master Data Migration Step-by-Step Guide

Master data migration to a new ERP system is a business-critical process that must be properly planned, executed, and verified to maintain business continuity and data integrity. This detailed guide dissects the critical steps in effective ERP data migration.

1. Planning and Preparation

The key to a successful migration is careful planning. In the absence of a solid plan, the migration process itself can soon turn chaotic and result in problems such as data loss, inconsistencies, or delays. Here’s how you can lay the groundwork for a seamless migration:

  • Scope and Data Requirements: Identify the kinds of data that must be migrated: customer information, financial data, product information, inventory, vendor information, and so on. Determine the source systems where this data currently lives and decide whether all data must be migrated at once or in phases.
  • Assess Data Quality: Conduct a thorough review of the existing data to assess its quality. Look for discrepancies, outdated information, duplicate records, and gaps. This is the time to clean up the data before it enters the new system, as poor-quality data can have far-reaching consequences in the ERP system (a minimal profiling sketch follows this list).
  • Set Clear Objectives: Define clear objectives for migration. Whether it is enhancing data accuracy, simplifying processes, or facilitating improved analytics and reporting, your objectives will inform the migration strategy. These objectives will also inform the assessment of the success of the migration process.
  • Create a Budget and Timeline: Set a firm timeline for migration, accounting for testing, data validation, and potential holdups. Be sure that you have sufficient time and budgetary resources to conduct the migration with success.
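
To make the data-quality assessment step concrete, here is a minimal profiling sketch in Python with pandas. The file and column names (legacy_customers.csv, customer_id, last_updated) are hypothetical placeholders, not references to any particular system; the point is simply to surface duplicates, missing values, and stale records before migration begins.

```python
import pandas as pd

# Hypothetical extract from a legacy system; column names are assumptions.
customers = pd.read_csv("legacy_customers.csv")

report = {
    "rows": len(customers),
    "duplicate_ids": int(customers["customer_id"].duplicated().sum()),
    "missing_by_column": customers.isna().sum().to_dict(),
    # Records untouched for years are candidates for archiving, not migration.
    "stale_records": int(
        (pd.to_datetime(customers["last_updated"]) < "2020-01-01").sum()
    ),
}
print(report)
```

Even a rough report like this helps scope the cleansing effort and informs the decision about what is worth migrating at all.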

2. Data Transformation and Mapping

Once you have a complete picture of the data that needs to be migrated, the next step is mapping and transforming it into the format the new ERP system expects. This ensures that data from all the different legacy systems fits the fields and formats of the new system.

  • Data Mapping: Determine how each data element of the old system will be represented in the new system. Customer names and addresses in the old system, for instance, can be broken down into several fields in the new system, such as first name, last name, and address. It’s important to map all data fields meticulously to ensure accuracy (see the mapping sketch after this list).
  • Data Transformation: This is the process of transforming data into a compatible form for the new ERP system. It can entail normalizing values (e.g., product codes to a standardized format) or altering the format of dates and addresses. Data transformation ensures data from different sources integrate well and consistently into the new system.
  • Data Enrichment: Missing data or out-of-date records must sometimes be filled in or updated prior to migration. Data enrichment can involve the addition of missing customer details, the updating of out-of-date financial records, or ensuring data consistency between data sets.
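
As referenced above, here is a minimal mapping-and-transformation sketch in Python. All field names (name, zip, created, prod_cd) are invented for illustration and do not come from any specific ERP; the sketch splits a combined name field, normalizes a US-style date to ISO 8601, and standardizes a product code.

```python
from datetime import datetime

def transform_customer(legacy: dict) -> dict:
    # Split a combined legacy "name" field into first/last (naive split;
    # real mappings need rules for middle names, suffixes, etc.).
    first, _, last = legacy["name"].partition(" ")
    return {
        "first_name": first,
        "last_name": last,
        "postal_code": legacy["zip"].strip(),
        # The legacy system stores US-style dates; the target expects ISO 8601.
        "created_on": datetime.strptime(
            legacy["created"], "%m/%d/%Y"
        ).date().isoformat(),
        # Normalize product codes to a standardized upper-case format.
        "product_code": legacy["prod_cd"].strip().upper(),
    }

legacy_row = {"name": "Ada Lovelace", "zip": " 10115 ",
              "created": "03/15/2019", "prod_cd": "ab-42"}
print(transform_customer(legacy_row))
```

In practice the mapping is usually captured in a mapping document signed off by business owners, and transformation code like this is generated or configured from it.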

3. Data Cleansing and Validation

Data cleansing is a critical step that safeguards the quality of the data to be migrated. Clean data results in more credible reporting and better decision-making.

  • Remove Duplicates: Identify and remove duplicate records so that duplicate data, which leads to confusion and inefficiency, does not make it into the new system.
  • Correct Inaccurate Data: Carefully scan data for inaccuracies. This could involve correcting wrong addresses, outdated prices, or incorrect product descriptions. It is important to have accurate data for seamless operations in the new ERP system.
  • Validate Data: Validate the data against business rules to confirm it is of high quality before migration. This can be done by involving key stakeholders or department heads in verifying the accuracy of the data. Validation ensures that the data being migrated is accurate and useful (a cleansing-and-validation sketch follows this list).
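
Here is the cleansing-and-validation sketch referenced above, again in Python with pandas and an assumed schema (customer_id, last_updated, credit_limit, email). It walks the three bullets in order: deduplication, correction, and rule-based validation.

```python
import pandas as pd

# Hypothetical staging extract; column names are assumptions.
df = pd.read_csv("customers_staged.csv")

# 1. Remove duplicates, keeping the most recently updated record per key.
df = (df.sort_values("last_updated")
        .drop_duplicates(subset="customer_id", keep="last"))

# 2. Correct obviously inaccurate values, e.g. negative credit limits.
df.loc[df["credit_limit"] < 0, "credit_limit"] = 0

# 3. Validate against simple business rules before loading.
invalid = df[df["email"].isna() | ~df["email"].str.contains("@", na=False)]
if not invalid.empty:
    # Route failures to stakeholders for review instead of loading them.
    invalid.to_csv("validation_failures.csv", index=False)
df = df.drop(invalid.index)
```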

4. Migration Execution

After data cleansing, transformation, and mapping are done, the next step is to execute the migration itself. This entails transferring data from the legacy systems to the new ERP system.

  • Test Migration: Always begin with a test migration in a sandbox or non-production environment. This will enable you to mimic the migration process and catch problems before the actual migration. Test with a subset of the data so you can verify data integrity and mapping accuracy (see the trial-load sketch after this list).
  • Full Migration: After successfully completing the test migration and rectifying all issues, go ahead with the full data migration. Depending on the complexity and volume of the data, the migration can occur in stages, moving data for one department at a time to reduce risk and downtime. A phased approach also makes it possible to identify and resolve issues early.
  • Data Migration Tools: Use specialized migration tools and services to automate the migration process. These tools can streamline data transfers, minimize manual errors, and accelerate the migration.
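
The trial load mentioned above might look like the following sketch. The connection strings, table name, and sample size are all assumptions; the idea is to load a small subset into a sandbox and reconcile row counts before committing to the full migration.

```python
import pandas as pd
import sqlalchemy as sa

# Connection strings, table name, and sample size are assumptions.
source = sa.create_engine("postgresql://user:pass@legacy-host/erp_old")
sandbox = sa.create_engine("postgresql://user:pass@sandbox-host/erp_new")

# Load a small, representative subset into the sandbox first.
sample = pd.read_sql("SELECT * FROM customers LIMIT 1000", source)
sample.to_sql("customers", sandbox, if_exists="append", index=False)

# Reconcile: the sandbox must contain exactly what was sent.
loaded = pd.read_sql("SELECT COUNT(*) AS n FROM customers", sandbox)
assert loaded["n"].iloc[0] == len(sample), "Row-count mismatch in test load"
```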

5. Post-Migration Testing and Monitoring

Once the data has been successfully migrated, it’s critical to conduct thorough testing to verify that everything functions as anticipated. This stage entails checking the integrity and functionality of the migrated data and ensuring that the new ERP system runs smoothly.

  • System Validation: Verify all aspects of the new ERP system that depend on the migrated data, such as reporting, system integrations, and entry forms. This confirms that the system is doing what it’s supposed to and that the data is properly reflected throughout the system (a reconciliation sketch follows this list).
  • User Acceptance Testing (UAT): Involve end-users and stakeholders to verify the usability and accuracy of data in the system. UAT confirms that the new system meets the business needs and that users are comfortable handling the migrated data. It’s critical to obtain key users’ sign-off on the readiness of the system before going live.
  • Monitor for Problems: Post-migration, keep monitoring the system for any performance problems, data inconsistencies, or user complaints. Have a good communication channel with end-users to report and correct any problems in a timely manner. Keep an eye on system performance, integrations, and data streams to make sure that everything works as expected.
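
The reconciliation sketch referenced in the list above loops over the migrated tables and compares row counts between the legacy system and the new ERP. Connection strings and table names are assumptions; real post-migration checks would also compare checksums and spot-check individual records.

```python
import pandas as pd
import sqlalchemy as sa

# Connection strings and table names are assumptions.
legacy = sa.create_engine("postgresql://user:pass@legacy-host/erp_old")
erp = sa.create_engine("postgresql://user:pass@new-host/erp_new")

for table in ["customers", "products", "vendors"]:
    a = pd.read_sql(f"SELECT COUNT(*) AS n FROM {table}", legacy)["n"].iloc[0]
    b = pd.read_sql(f"SELECT COUNT(*) AS n FROM {table}", erp)["n"].iloc[0]
    print(f"{table}: legacy={a} migrated={b} {'OK' if a == b else 'MISMATCH'}")
```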

6. Training and Support

Training and assistance are vital to familiarize the users with the new ERP system and to settle post-migration issues at the earliest.

  • End-User Training: Run training sessions for all employees involved to acquaint them with the new ERP system and how to handle the migrated data. Training should cover everything from basic data entry to sophisticated reporting functions. Hands-on sessions enable users to move around the system efficiently and cut down the learning curve.
  • Ongoing Support: Provide post-migration support to resolve issues that occur after the migration. This could involve establishing a help desk, offering FAQs, or providing troubleshooting material. Ongoing support will allow any unforeseen problems to be resolved as soon as possible, maintaining operation smoothness.
  • Documentation: Offer extensive documentation that explains new workflows, data access methods, and system features. Documentation will act as a guide for users and facilitate consistency in data handling within the new ERP system.
A roadmap showing the steps that companies must walk through for a proper migration of their ERP master data.

Best Practices for Successful ERP Master Data Migration

  • Involve Key Stakeholders: Engage key business stakeholders in the migration process. Their feedback will ensure that the data migration is business-oriented and minimizes resistance to change.
  • Use Automated Tools: Utilize data migration software and tools to automate and simplify the migration process. These tools can minimize errors, save time, and provide improved data consistency.
  • Backup Data: Always back up your data before starting the migration. This gives you a rollback strategy in case of failure.
  • Establish a Governance Framework: Set well-defined roles and responsibilities for data management prior to, during, and after the migration.
  • Prioritize Data Quality: Prioritize data quality from the start since bad data will destroy the ERP system’s success. Clean, correct, and standardized data are essential for smooth migration.

Conclusion

Master data migration is a critical activity in implementing or replacing an ERP system. By planning your data meticulously, mapping it, cleansing it, and validating it, you can steer clear of typical pitfalls and ensure that your new ERP system runs smoothly and effectively. Data migration to a new ERP system calls for a systematic approach, the appropriate tools, and cooperation from all stakeholders involved. By adhering to the best practices established in this guide, you can make your ERP master data migration smooth and position your organization for future success.

Modern Data Quality at Scale using Digna

Today’s guest blog post is from Marcin Chudeusz of DIGNA.AI, a company specializing in creating Artificial Intelligence-powered Software for Data Platforms.

Have you ever experienced the frustration of missing crucial pieces in your data puzzle? The feeling of the weight of responsibility on your shoulders when data issues suddenly arise and the entire organization looks to you to save the day? It can be overwhelming, especially when the damage has already been done. In the constantly evolving world of data management, where data warehouses, data lakes, and data lakehouses form the backbone of organizational decision-making, maintaining high-quality data is crucial. Although the challenges of managing data quality in these environments are many, the solutions, while not always straightforward, are within reach.

Data warehouses, data lakes, and lakehouses each encounter their own unique data quality challenges. These challenges range from integrating data from various sources, ensuring consistency, and managing outdated or irrelevant data, to handling the massive volume and variety of unstructured data in data lakes, which makes standardizing, cleaning, and organizing data a daunting task.

Today, I would like to introduce you to Digna, your AI-powered guardian for data quality that’s about to revolutionize the game! Get ready for a journey into the world of modern data management, where every twist and turn holds the promise of seamless insights and transformative efficiency.

Digna: A New Dawn in Data Quality Management

Picture this: you’re at the helm of a data-driven organization, where every byte of data can pivot your business strategy, fuel your growth, and steer you away from potential pitfalls. Now, imagine a tool that understands your data and respects its complexity and nuances. That’s Digna for you – your AI-powered guardian for data quality.

Goodbye to Manually Defining Technical Data Quality Rules

Gone are the days when defining technical data quality rules was a laborious, manual process. You can forget the hassle of manually setting thresholds for data quality metrics. Digna’s AI algorithm does it for you, defining acceptable ranges and adapting as your data evolves. Digna’s AI learns your data, understands it, and sets the rules for you. It’s like having a data scientist in your pocket, always working, always analyzing.

Figure 1: Digna’s AI algorithm defines acceptable ranges for data quality metrics such as missing values. Here, the acceptable count of missing values falls between 242 and 483, a range that would be hard to define as a manual technical rule.
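
Digna’s actual algorithm is proprietary and far more sophisticated, but as a rough, hypothetical illustration of the general idea in the figure, an acceptable range for a daily metric can be derived from its recent history rather than set by hand:

```python
import statistics

# Ten days of observed missing-value counts for one column (made-up data).
daily_missing = [310, 295, 402, 351, 288, 367, 330, 410, 299, 345]

# Derive the acceptable range from history instead of a hand-set threshold.
mean = statistics.mean(daily_missing)
std = statistics.stdev(daily_missing)
low, high = mean - 2 * std, mean + 2 * std
print(f"acceptable range: {low:.0f} .. {high:.0f}")

today = 612
if not low <= today <= high:
    print("anomaly: today's missing-value count falls outside the range")
```

As new observations arrive, the range is recomputed, so the check adapts as the data evolves instead of going stale the way a fixed rule does.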

Seamless Integration and Real-time Monitoring

Imagine logging into your data quality tool and being greeted with a comprehensive overview of your week’s data quality. Instant insights, anomalies flagged, and trends highlighted – all at your fingertips. Digna doesn’t just flag issues; it helps you understand them. Drill down into specific days, examine anomalies, and understand the impact on your datasets.

Whether you’re dealing with data warehouses, data lakes, or lakehouses, Digna slips in like a missing puzzle piece. It connects effortlessly to your preferred database, offering a suite of features that make data quality management a breeze. Digna’s integration with your current data infrastructure is seamless. Choose your data tables, set up data retrieval, and you’re good to go.

Figure 2: Connect seamlessly to your preferred database. Select specific tables from your database for detailed analysis by Digna.

Navigate Through Time and Visualize Data Discrepancies

With Digna, the journey through your data’s past is as simple as a click. Understand how your data has evolved, identify patterns, and make informed decisions with ease. Digna’s charts are not just visually appealing; they’re insightful. They show you exactly where your data deviated from expectations, helping you pinpoint issues accurately.

Read also: Navigating the Landscape – Modern Data Quality with Digna

Digna’s Holistic Observability with Minimal Setup

With Digna, every column in your data table gets attention. Switch between columns, unravel anomalies, and gain a holistic view of your data’s health. It doesn’t just monitor data values; it keeps an eye on the number of records, offering comprehensive analysis and deep insights with minimal configuration. Digna’s user-friendly interface ensures that you’re not bogged down by complex setups.

Figure 3: Observe how Digna tracks not just data values but also the number of records for comprehensive analysis. Transition seamlessly to Dataset Checks and witness Digna’s learning capabilities in recognizing patterns.

Real-time Personalized Alert Preferences

Digna’s alerts are intuitive and immediate, ensuring you’re always in the loop. These alerts are easy to understand and come in different colors to indicate the quality of the data. You can customize your alert preferences to match your needs, ensuring that you never miss important updates. With this simple yet effective system, you can quickly assess the health of your data and stay ahead of any potential issues. This way, you can avoid real-life impacts of data challenges.

Watch the product demo

Kickstart your Modern Data Quality Journey

Whether you prefer inspecting your data directly from the dashboard or integrating it into your workflow, I invite you to commence your data quality journey. It’s more than an inspection; it’s an exploration—an adventure into the heart of your data with a suite of features that considers your data privacy, security, scalability, and flexibility.

Automated Machine Learning

Digna leverages advanced machine learning algorithms to automatically identify anomalies and recognize trends and patterns in data. This level of automation means that Digna can efficiently process large volumes of data without human intervention, reducing errors and increasing the speed of data analysis.

The system’s ability to detect subtle and complex patterns goes beyond traditional data analysis methods. It can uncover insights that would typically be missed, thus providing a more comprehensive understanding of the data.

This feature is particularly useful for organizations dealing with dynamic and evolving data sets, where new trends and patterns can emerge rapidly.

Domain Agnostic

Digna’s domain-agnostic approach means it is versatile and adaptable across various industries, such as finance, healthcare, and telcos. This versatility is essential for organizations that operate in multiple domains or those that deal with diverse data types.

The platform is designed to understand and integrate the unique characteristics and nuances of different industry data, ensuring that the analysis is relevant and accurate for each specific domain.

This adaptability is crucial for maintaining accuracy and relevance in data analysis, especially in industries with unique data structures or regulatory requirements.

Data Privacy

In today’s world, where data privacy is paramount, Digna places a strong emphasis on ensuring that data quality initiatives are compliant with the latest data protection regulations.

The platform uses state-of-the-art security measures to safeguard sensitive information, ensuring that data is handled responsibly and ethically.

Digna’s commitment to data privacy means that organizations can trust the platform to manage their data without compromising on compliance or risking data breaches.

Built to Scale

Digna is designed to be scalable, accommodating the evolving needs of businesses ranging from startups to large enterprises. This scalability ensures that as a company grows and its data infrastructure becomes more complex, Digna can continue to provide effective data quality management.

The platform’s ability to scale helps organizations maintain sustainable and reliable data practices throughout their growth, avoiding the need for frequent system changes or upgrades.

Scalability is crucial for long-term data management strategies, especially for organizations that anticipate rapid growth or significant changes in their data needs.

Real-time Radar

With Digna’s real-time monitoring capabilities, data issues are identified and addressed immediately. This prompt response prevents minor issues from escalating into major problems, thus maintaining the integrity of the decision-making process.

Real-time monitoring is particularly beneficial in fast-paced environments where data-driven decisions need to be made quickly and accurately.

This feature ensures that organizations always have the most current and accurate data at their disposal, enabling them to make informed decisions swiftly.

Choose Your Installation

Digna offers flexible deployment options, allowing organizations to choose between cloud-based or on-premises installations. This flexibility is key for organizations with specific needs or constraints related to data security and IT infrastructure.

Cloud deployment can offer benefits like reduced IT overhead, scalability, and accessibility, while on-premises installation can provide enhanced control and security for sensitive data.

This choice enables organizations to align their data quality initiatives with their broader IT and security strategies, ensuring a seamless integration into their existing systems.

Conclusion

Addressing data quality challenges in data warehouses, lakes, and lakehouses requires a multifaceted approach. It involves the integration of cutting-edge technology like AI-powered tools, robust data governance, regular audits, and a culture that values data quality.

Digna is not just a solution; it’s a revolution in data quality management. It’s an intelligent, intuitive, and indispensable tool that turns data challenges into opportunities.

I’m not just proud of what we’ve created at DIGNA.AI; I’m most excited about the potential it holds for businesses worldwide. Join us on this journey, schedule a call with us, and let Digna transform your data into a reliable asset that drives growth and efficiency.

Cheers to modern data quality at scale with Digna!

This article was written by Marcin Chudeusz, CEO and Co-Founder of DIGNA.AI, a company specializing in creating Artificial Intelligence-powered Software for Data Platforms. Our first product, Digna, offers cutting-edge AI-powered solutions to modern data quality issues.

Contact me to discover how Digna can revolutionize your approach to data quality and kickstart your journey to data excellence.

Modern Data Quality: Navigating the Landscape

Today’s guest blog post is from Marcin Chudeusz of DIGNA.AI, a company specializing in creating Artificial Intelligence-powered Software for Data Platforms.

Data quality isn’t just a technical issue; it’s a journey full of challenges that can affect not only the operational efficiency of an organization but also its morale. As an experienced data warehouse consultant, my journey through the data landscape has been marked with groundbreaking achievements and formidable challenges. The latter, particularly in the realm of data quality in some of the most data-intensive industries, banks and telcos, have given me profound insights into the intricacies of data management. My story isn’t unique in data analytics, but it highlights the evolution necessary for businesses to thrive in the modern data environment.

Let me share with you a part of my story that has shaped my perspective on the importance of robust data quality solutions.

The Daily Battles with Data Quality

In the intricate data environments of banks and telcos, where I spent much of my professional life, data quality issues were not just frequent; they were the norm.

The Never-Ending Cycle of Reloads

Each morning would start with the hope that our overnight data loads had gone smoothly, only to find that yet again, data discrepancies necessitated numerous reloads, consuming precious time and resources. Reloads were not just a technical nuisance; they were symptomatic of deeper data quality issues that needed immediate attention.

Delayed Reports and Dwindling Trust in Data

Nothing diminishes trust in a data team like the infamous phrase “The report will be delayed due to data quality issues.” Stakeholders don’t necessarily understand the intricacies of what goes wrong—they just see repeated failures. With every delay, the IT team’s credibility took a hit.

Team Conflicts: Whose Mistake Is It Anyway?

Data issues often sparked conflicts within teams. The blame game became a routine. Was it the fault of the data engineers, the analysts, or an external data source? This endless search for a scapegoat created a toxic atmosphere that hampered productivity and satisfaction.

Read: Why Data Issues Continue to Create Conflicts and How to Improve Data Quality.

The Drag of Morale

Data quality issues aren’t just a technical problem; they’re a people problem. The complexity of these problems meant long hours, tedious work, and a general sense of frustration pervading the team. The frustration and difficulty in resolving these issues created a bad atmosphere and made the job thankless and annoying.

Decisions Built on Quicksand

Imagine making decisions that could influence millions in revenue based on faulty reports. We found ourselves in this precarious position more often than I care to admit. Discovering data issues late meant that critical business decisions were sometimes made on unstable foundations.

High Turnover: A Symptom of Data Discontent

The relentless cycle of addressing data quality issues began to wear down even the most dedicated team members. The job was not satisfying, leading to high turnover rates. It wasn’t just about losing employees; it was about losing institutional knowledge, which often exacerbated the very issues we were trying to solve.

The Domino Effect of Data Inaccuracies

Metrics are the lifeblood of decision-making, and in the banking and telecom sectors, year-to-month and year-to-date metrics are crucial. A single day’s worth of bad data could trigger a domino effect, necessitating recalculations that spanned back days, sometimes weeks. This was not just time-consuming; it was a drain on resources, among other consequences of poor data quality.

The Manual Approach to Data Quality Validation Rules

As an experienced data warehouse consultant, I initially tried to address these issues through the manual definition of validation rules. We believed that creating a comprehensive set of rules to validate data at every stage of the data pipeline would be the solution. However, this approach proved to be unsustainable and ineffective in the long run.

The problem with manual rule definition was its inherent inflexibility and inability to adapt to the constantly evolving data landscape. It was a static solution in a dynamic world. As new data sources, data transformations, and data requirements emerged, our manual rules were always a step behind, and keeping the rules up-to-date and relevant became an arduous and never-ending task.

Moreover, as the volume of data grew, manually defined rules could not keep pace with the sheer amount of data being processed. This often resulted in false positives and negatives, requiring extensive human intervention to sort out the issues. The cost and time involved in maintaining and refining these rules soon became untenable.
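
A tiny, hypothetical example of the kind of hand-written rule described here shows why the approach ages badly:

```python
# Hypothetical hand-coded rule: a fixed threshold tuned when the table
# held about a million rows. When daily volume doubles, it fires constantly
# (false positives); when a smaller new source is added, real problems
# stay under the threshold (false negatives). Every change in the data
# landscape means revisiting constants like this by hand.
MAX_NULLS_PER_DAY = 500

def nulls_ok(null_count: int) -> bool:
    return null_count <= MAX_NULLS_PER_DAY
```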

Comparison between Human, Rule, and AI-based Anomaly Detection

Embracing Automation: The Path Forward

This realization was the catalyst for the foundation of digna.ai. Danijel (Co-founder at Digna.ai) and I combined our AI and IT know-how to create AI-powered software for data warehouses. This led to our first product, Digna. We needed intelligent, automated systems that could adapt, learn, and preemptively address data quality issues before they escalated. By employing machine learning and automation, we could move from reactive to proactive, from guesswork to precision.

Automated data quality tools don’t just catch errors—they anticipate them. They adapt to the ever-changing data landscape, ensuring that the data warehouse is not just a repository of information, but a dependable asset for the organization.

Today, we’re pioneering the automation of data quality to help businesses navigate the data quality landscape with confidence. We’re not just solving technical issues; we’re transforming organizational cultures. No more blame games, no more relentless cycles of reloads—just clean, reliable data that businesses can trust.

In the end, navigating the data quality landscape isn’t just about overcoming technical challenges; it’s about setting the foundation for a more insightful, efficient, and harmonious future. This is the lesson my journey has taught me, and it is the mission that drives us forward at digna.ai.

This article was written by Marcin Chudeusz, CEO and Co-Founder of DIGNA.AI, a company specializing in creating Artificial Intelligence-powered Software for Data Platforms. Our first product, Digna, offers cutting-edge AI-powered solutions to modern data quality issues.

Contact us to discover how Digna can revolutionize your approach to data quality and kickstart your journey to data excellence.

What You Should Know About Master Data Management

Today’s guest blog post is from Benjamin Cutler of Winpure. In it, Benjamin goes through, in a nutshell, a few things you should know about master data management.

People

People have multiple phone numbers and multiple email addresses and in 2022 there must be several decades of historic contact information available for any one person. Most of us move at least once, every few years. Sometimes we go by different nicknames in different situations, some people even change their names. We hold different titles throughout the course of our careers and we change companies every few years. Only a few people in our lives know exactly how to get a hold of us, at any given time. Many of us change vehicles just as often as we change our hair color. Many of us are employees, most of us are also customers, many of us are spouses and sometimes we are grandparents, parents, aunts, uncles, and children at the same time. Sometimes we’re out enjoying ourselves and sometimes we just want to be left alone. We each have unique interests and desires, but we also have many things in common with other groups of people.

Products

Products have many different descriptions, and they come in many different variations, sizes, colors, and packaging materials. Similar products are often manufactured by different manufacturers, and they can be purchased from many different commercial outlets, at different price points. Any one product on the market at any one time will likely be available in several variations, but that product will also likely change over time as the manufacturer makes improvements. Products can be purchased, and therefore they can also be sold. They can also be returned or resold to other buyers, so there are different conditions and ways to determine product value. There are SKU and UPC numbers and other official product identification and categorization systems including UNSPSC and others, but none of them speak the same language.

Companies

Companies are made up of many different people who come and go over time. The company may change names or change ownership. It may have multiple locations which means multiple addresses and phone numbers, and they probably offer many different ways to contact them. Depending on where you look, there are probably more than a dozen different ways to find their contact information, but only some of those company listings will be correct. Companies have tax IDs and Employer IDs and DUNS IDs in the US, and there are many different systems worldwide.

Addresses

Addresses are the systems we use to identify locations. Each country and territory has its own system, so each system is different. In the US we use premise numbers, street names with and without prefixes and suffixes, unit numbers, states, counties, cities, towns, and 5- and 9-digit numeric postal codes. Addresses and address systems can change over time, and they are inherently one of the most inconsistent forms of identification. Addresses are usually riddled with errors, misspellings, and inconsistent structure and formatting, and they can be very difficult to work with. What makes this even more difficult is that the same address represented in multiple internal business systems will often be represented differently, and will rarely match the way the same address is represented externally.

Data

Data is a digital description of all of these things. Data usually comes in columns and rows and all shapes and sizes. Data about these things is captured, stored in business systems and it’s used to get work done. Need to call a contact? Check your contact data. Need to know a company’s billing address? Check your company data. Need to know something about a product? Check your product information. Need to know something about where your customers live and work or where to deliver the product? Check your address information. But here’s the thing: the information rarely matches from system to system and it’s very hard to keep up to date. This is especially difficult for a few reasons. Internally your company probably has many different business systems and many different ways of storing and representing these things, so it rarely matches internally, plus, the way that your company stores and represents this information will almost never match external information. How can you know the best way to contact your customer who has multiple phone numbers and multiple email addresses? If you’re searching some external system for updated information about some product or contact and the information doesn’t match, how do you find the new information? How can you know if your own information is correct and up to date? How can you scale your efforts to communicate with hundreds or thousands of customers at a time, communicating information that is specifically relevant for each of them? If the information doesn’t match or is not correct, how can you know who is who?
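
To see why matching records across systems is hard, consider a minimal sketch using only Python’s standard library. Real matching engines use far richer similarity models, phonetic encodings, and survivorship rules; this only illustrates that exact string equality is the wrong test. The records are invented.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Normalize lightly, then compare; 1.0 means identical strings.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Invented records for the same person held in two different systems.
crm_record = "Jonathan B. Smith, 42 Main St."
erp_record = "Jon Smith, 42 Main Street"

score = similarity(crm_record, erp_record)
print(f"similarity: {score:.2f}")
if score > 0.7:
    print("probable match: route to a data steward for confirmation")
```

Note that the two strings are not equal, yet they almost certainly describe the same person; a match threshold plus human confirmation is one common compromise.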

Relationships

The relationships across people, groups of people, products, groups of products, companies, groups of companies, and addresses are where the rubber meets the road. Business value comes from connecting companies and products or services with other people, companies, products, and services, at scale. Customers purchasing products might be interested in purchasing related products. Customers often buy things based on location. Companies selling to customers might be able to sell more if they target similar customers in similar locations. Products and services also sell well based on location, and companies can optimize sales territories and delivery routes based on the relative proximity to other locations.

People and Technology

The people and technology behind all of this find it difficult to keep up. People do things one by one, and we’re good with ambiguity. We program computers and business systems to do things faster. Computers do things programmatically and very quickly, but they’re not good with ambiguity. People can see the similarity between things that are alike, but computers and business systems cannot. People are good at troubleshooting and critical thinking, but computers and business systems are not. A computer program might be able to find the same customer in multiple systems and update that customer’s information all at once, but how can you know if the new information is the best information? Knowing that your customer probably has multiple phone numbers, addresses, and nicknames, how can you know which information is correct? Doing this at scale can be very, very difficult.

In Conclusion

Master Data Management is very difficult, but it’s fundamental to scaling your business. People can sell products door-to-door, but data and technology allow us to market, sell, deliver, and service our products and services to tens and hundreds of thousands of people in milliseconds, regardless of the distance. Most organizations still view data as a cost of doing business, but with the right investments in people, process, technology, and data management, we can scale as worldwide organizations.

Popular Entries on The Resource List

This site has a list of white papers, ebooks, reports and webinars from solution and service providers.

The aim is to give inspiration to organizations on a quest to implement or upgrade their Master Data Management (MDM), Product Information Management (PIM), and/or Data Quality Management (DQM) capability.

The list has now been online for a month, and it is time to look at which entries have so far been the most popular in terms of click-through. These are:

ROI of MDM, PIM and DQM

Have you ever wondered how to effectively evaluate the return on investment (ROI) of a Product Information Management (PIM) and Master Data Management (MDM) implementation? Then take a look at some real-life examples. Download the Enterworks ebook Exploring The ROI of PIM and MDM.

MDM, PIM and DQM market overview

Get an overview of why PIM solutions are implemented in more and more organizations, which capabilities a 2020 PIM solution needs to cover, where the market is heading, who the PIM vendors in the market are, and how this affects your purchase of PIM. Download the Dynamicweb PIM white paper The State of Product Information Management 2020.

MDM, PIM and DQM implementation

Conferences cancelled? Stuck working from home? Bring the conferences to you with a virtual MDM conference. Don’t miss this must-see 6-week live webcast series and hear what other companies are doing in the world of MDM, along with best practices and workshops by industry experts. Register for this Enterworks webcast series at the Everything Master Data Management (MDM) Virtual Conference.

Extended MDM

MDM solutions have been instrumental in solving core data quality issues in a traditional way, focusing primarily on simple master data entities such as customer or product. Organizations now face new challenges with broader and deeper data requirements to succeed in their digital transformation. Help your organization through a successful digital transformation while taking your MDM initiative to the next level. Download the Semarchy white paper Intelligent Data Hub – Taking MDM to the Next Level.

Data Quality

Businesses today face a rapidly growing mountain of content and data. Mastering this content can unlock a whole new level of Business Intelligence for your organization and impact a range of data analytics. It’s also crucial for operational excellence and digital transformation. Download the 1WorldSync and Enterworks ebook 4 Keys to Unlocking Data Quality with MDM.

Next To Come

More resources from solution and service vendors are on the way. There will also be a Case Story List with success stories from various industries. Stay tuned.

If you have comments, suggestions, and/or entries to be posted (yes, there is a very modest fee), then get in touch.
