What would the outcome have been if you had commissioned a group of traditional reference data management experts to build the World Wide Web back in the early '90s? As a renowned expert in data modeling, your only solution would be to build a data model, identifying every attribute and relationship needed for the sum of experiences demanded by the user.
You have gifts as a Data Modeler your peers can only dream of. And who cares if you end up with 25,000 attributes in your data model? You understand them far better than any mere mortal could, and you are determined to show your mastery of modeling tools. CERN will be impressed. You would tell CERN that, as long as they put the right "governance" process in place, all will be well.
Thankfully, this wasn't the approach taken at CERN when building the original World Wide Web. Non-stop change, massive data volumes, and conflicting demands and classifications of data are exactly what the World Wide Web is known for. The future of reference data management needs to embrace the same approach.
We have reached a stage where traditional reference data management approaches have outlived their useful life. The demands of reference data management in 2010 are far beyond what was envisaged 15-20 years ago, when the roots of today's data model-centric solutions were laid. The world then was one of known knowns: a couple of feeds, simple asset classes, and a handful of data consumers.