Avoiding Pitfalls in Designing a Master Data Management (MDM) Architecture
Master Data Management (MDM), as a concept, has drawn a great deal of interest from departments heavily invested in Business Intelligence (BI) and Enterprise Performance Management (EPM) applications. MDM promises a utopian management center, a one-stop shop for the master data and metadata of quickly changing BI and EPM systems. Beyond an easier user interface for making numerous changes, MDM, as a purpose-built tool, offers data governance workflows, auditing, and archiving processes. From an information technology (IT) perspective, the end goal is that the MDM software acts as a central hub for all application administrators and designated business power users. The tool seamlessly integrates into the current production process and feeds each application in its own native format, whether delivering incremental changes or rebuilding structures outright (think dimension load rules in Essbase).
There are obvious challenges that are expected to come up in an MDM initiative, as the goals are often lofty: agreeing on governance, production processes, user interaction, and workflow. This doesn't even take into account the change management challenge of working with multiple departments that almost always have different goals and uses for the tool. Luckily, these items are generally a well-understood part of the overall effort. However, the one item that is thoroughly misunderstood is how the MDM software architecture integrates into an existing production and business process. Hint: the word "existing" is at the root of the misunderstanding.
The specific misconception I would like to tackle is the expectation that a single piece of MDM software both acts as the user interface and handles all integration with existing systems. It is likewise expected to fit into the same business production process that existed before, minus a lot of the "boxes and arrows." I may be contradicting what many 'sales' types promise: that an MDM product easily integrates with disparate systems and simplifies the architecture. What the 'sales pitch' does not clarify is that, to realize the advantages of the MDM product, a good MDM initiative also includes re-engineering and tuning the surrounding processes. Oops, they must have forgotten that part.
In several recent experiences, the biggest hurdle in gaining operational buy-in during the MDM initiative was the disillusionment that resulted from the recommendation to re-engineer existing integrations as well as add new ones. One devil's-advocate reaction summarized the sentiment perfectly: "So let me get this straight, we are going to simplify and consolidate our production process by adding additional steps?" Well, in a single-word response: "Yes!"
So how is this possible, and why is it necessary? To clarify this struggle, the diagram below clearly demarcates the MDM tool from the processes that typically happen outside of it. The diagram typically has three states:
- The current state (before MDM);
- The current state with MDM; and
- The eventual MDM goal.
The specifics of the drawing change from implementation to implementation, but the basic result across the states is an initial increase in the number of boxes and arrows, not a decrease. There are two primary reasons why this is the case:
- The MDM initiative actualizes undocumented manual business logic and processes that are often not represented in current-state architectures. After reviewing the often oversimplified current-state architecture a client provides, my two favorite questions for probing these undocumented secrets are: "OK, so is this really all there is to it?" and "Is this always how the production process works? What happens when <fill in the blank> fails?" The answers to these questions must be key architectural considerations, as they are almost always the leading indicators of why the current state struggles.
- The scope and charter of the initial MDM initiative is championed by only one or two target systems and therefore the initiative has to minimize changes to upstream systems and processes.
MDM Phase 1 implementations often strive to "sow the seeds" of consolidation but, because of project charter and scope, end up creating and adjusting current processes, resulting in more "pieces" in the architecture. Such an intermediate step is necessary to show value immediately, win organizational buy-in, and keep project length to "bites that can be chewed." There is nothing wrong with this approach, and this state is the reality for the vast majority of initial MDM initiatives. In fact, several phases for different source/target systems may each start out this way!
In future phases, however, the MDM tool becomes a true hub of the existing systems, and master data integrations are specialized on a per-application basis. Separate master data maintenance routines (common or not) cease to exist in subscriber (source/target) systems. The consolidation of business logic continues until it is completely removed from the integrations, which then serve only to communicate from system to system. Additionally, maintenance and error-handling processes and logic are candidates to be consolidated and eliminated from the source and target systems. It is at this point that the architecture morphs into what the initial MDM concept prescribed: a hub-and-spoke system.
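The target state above can be sketched as a simple publish/subscribe shape, under the assumption (from the text) that governance and business logic live only in the hub and the spokes are pure transport. The `Hub` class and subscriber callbacks below are invented for illustration; they stand in for whatever integration mechanism a real MDM product uses.

```python
# Hypothetical hub-and-spoke sketch: the MDM hub owns the master record and
# pushes every change to registered subscribers. Each spoke only delivers;
# it holds no validation or business logic of its own. Illustrative only.

class Hub:
    def __init__(self):
        self.master = {}          # key -> master-data record
        self.subscribers = []     # spoke callbacks (transport only)

    def subscribe(self, spoke):
        self.subscribers.append(spoke)

    def update(self, key, record):
        # Governance/business logic lives here, in exactly one place.
        if not record.get("name"):
            raise ValueError("governance rule: every member needs a name")
        self.master[key] = record
        for spoke in self.subscribers:
            spoke(key, record)    # spokes just communicate system to system


received = []
hub = Hub()
# Two subscriber systems (names are illustrative), e.g. a GL and a planning app.
hub.subscribe(lambda k, r: received.append(("gl", k, r["name"])))
hub.subscribe(lambda k, r: received.append(("planning", k, r["name"])))

hub.update("1000", {"name": "Cash"})
print(received)   # every subscriber sees the same master change
```

Because no spoke maintains its own copy of the logic, a change made once in the hub propagates identically everywhere, which is exactly why separate per-system maintenance routines can be retired.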
Acknowledging and accounting for this incremental effort, especially the additional integrations, is a critical step in gaining buy-in for the MDM initiative as a whole. From an overall cost perspective, it is not uncommon for these integration steps to equal the workload of the core MDM tool development. Even so, the immediate value the MDM tool provides should not be ignored. In the long run, it is always cheaper to correct un-auditable, manual, error-prone processes, so that they either cannot fail or fail in controlled scenarios with auditing and user warnings, than to take an incremental hit in user frustration and IT all-nighters at the end of every reporting period. Add to this the benefit of working out departmental and application differences in a centralized setting, instead of conceding that Bob will have to stay another weekend to contrive a way to get all applications back in sync (without, of course, ever getting a chance to adjust the business process that created the issue). Together, these justify the slow, and often painful, breaking-down process required to support an expandable and dynamic MDM solution.
Mr. Lehner specializes in master data management, system integration, data warehousing, and management reporting for the Data Services group of Ranzal & Associates.