Scott Taylor has been noticing patterns and structuring fundamental data for a very long time. When he was young, his parents say that instead of building things with his Legos, he would organize and categorize the individual blocks.
At present he’s an increasingly well-known expert in the area of Master Data and has built an impressive following and client list through his writing, speaking and advising. His LinkedIn profile includes the descriptors “Data Whisperer” and “Data Brand Builder.”
The master data content space is different from what many of us think of when we think of MDM – master data management. MDM is software and, according to Gartner, a practice area and an initiative to be undertaken. But what Scott focuses on is the master data itself – that which results from all this MDM activity. The data is what does the work and carries the load; it is that “foundational, core set of content that drives a business”.
He unambiguously sees this master data as the most important data a company has. And while other data – for example, sales data – is of course hugely important, sales data can’t be relevant or accurate without a well-mastered product file, and a well-mastered customer file. And it’s that “well-mastered” bit that is the focus of Scott’s work.
Scott got the bug for this master data content notion early in his career, while he was with Trade Dimensions (now part of Nielsen). The company had developed a list – and subsequently turned it into a database – of around 30,000 store locations. For context, this was just long enough ago that, when the company would license the data to a user, they frequently had to give the customer a computer to house and run the data since customers rarely had enough hard drive space or processing power of their own.
At its most basic, the data was a comprehensive list of stores. The store data was primarily sold to product manufacturers, who needed a reliable list of stores for deliveries and the like. Scott, as a junior sales guy, was charged with selling to related service providers – brokers, sampling companies, signage companies and anyone else that distributed or merchandised around the manufacturers.
Scott began to appreciate that all these customers were using this data as a foundational part of their businesses. The reason was that each record had a unique ID that could unfailingly identify a store without relying on an address.
Scott then began an effort to rebrand this ID number and make it more prominent, calling it the TDLinx code. This identifier was, in essence, a universal language of stores and accounts for the CPG, food and beverage, tobacco, and retail categories. It was much like a UPC code, but for stores. When new stores were opened, it became essential that the new store was quickly added to the database for everyone’s use. It was definitely master data.
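The value of such an identifier is easy to illustrate. In the minimal sketch below (the IDs, addresses and field names are invented for illustration – not real TDLinx data), two parties describe the same stores with slightly different address strings, yet a shared store ID lines their records up unambiguously:

```python
# Hypothetical records from two parties that both track the same stores.
# Address strings vary in spelling, so text matching would be unreliable,
# but the shared store ID acts as a universal key.

manufacturer_records = {
    "TDX-1001": {"address": "123 Main St., Springfield"},
    "TDX-1002": {"address": "45 Oak Avenue, Shelbyville"},
}

broker_records = {
    "TDX-1001": {"address": "123 Main Street, Springfield"},  # same store, different spelling
    "TDX-1002": {"address": "45 Oak Ave, Shelbyville"},
}

# A naive address comparison fails for the very same store...
assert (manufacturer_records["TDX-1001"]["address"]
        != broker_records["TDX-1001"]["address"])

# ...but the shared ID joins the two datasets exactly.
shared_ids = sorted(manufacturer_records.keys() & broker_records.keys())
print(shared_ids)  # ['TDX-1001', 'TDX-1002']
```

This is the same logic that makes a UPC code work for products: agree once on the identifier, and everyone downstream can match records without re-solving the messy address problem.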
This fundamental learning became a bedrock principle for Scott. Over the years he began collecting more of these fundamental principles of what works with reference data, what doesn’t work, and how critical master data can be, not only to an organization but also, in the case of the TDLinx code, to a whole industry. “We got Coke and Pepsi to agree through the same data. It was exhilarating and I learned a lot about the power of common data.”
Later, at Nielsen, Scott was recruited into a role in an innovation group and got a deeper look at the master and reference data assets – the metadata – scattered all over the company. He essentially mapped their master data landscape. After Nielsen he began to consult for other data providers, such as WPP/Kantar, in the same way, helping them identify, productize and brand their existing master data and taxonomy assets.
Most recently, Scott led an effort at Dun & Bradstreet to reposition their legacy data assets (such as the DUNS number) as part of a broader and more strategic master data offering. “At D&B, master data was the second biggest business after their trade credit services, yet the term wasn’t even on the website!” He quickly became the company’s global evangelist and strategic SME on the topic of master data. “They had been doing it for decades but were saddled with confusing terminology and awkward positioning. They just didn’t use the correct vocabulary. As one partner told me, ‘You have given us a new way to talk about what we already know.’ It was as much a marketing job as anything else.”
Flash forward to today, and it’s clear that every organization collects far more data than they can reasonably use and thoroughly care for. And virtually all of them are grappling with the data at its most fundamental level – their master data – to try to get it right.
So Scott’s abiding passion happens to be greatly in demand – more than ever before. His evangelistic business today, much like it was back then, is about helping companies distill their data challenges down, and ensure that they first address that singular consideration: identifying, structuring and maintaining their master data. From that strength, all other data activities are possible. But without good master data, all those other sexy pursuits like AI, machine learning and analytics simply won’t succeed.
Scott spends his time delivering a really engaging set of keynote presentations, explanations, talks and frameworks to help companies get this fundamental piece right. He teaches both executives and practitioners how to discuss and agree on these topics, so that they can fulfill all their wildest data fantasies far more easily, accurately and effectively. “But I focus on the strategic WHY rather than the technical HOW,” he said.
And the stakes for getting this right have never been higher. It used to be that companies could keep this messy problem locked away in a server room with some tech guys buzzing around when needed. But those days are gone. Today, companies that can’t get this right will become irrelevant and simply disappear.
Some of Scott’s recent talks and content on Master Data:
To connect with Scott, you can reach him at: