There’s no mystery why dirty data is bad and clean, reliable data is good. Accurate manufacturing data management can help managers run their supply chains more efficiently, focus on producing the products customers most want to buy and wage more effective marketing campaigns.
The devil, of course, is in the details. With so much information streaming in from so many places, how can manufacturers get it all under control?
The answer comes from the right mix of data management best practices and tools, say consultants and managers who struggle with the challenge.
It’s a balancing act that’s familiar to BlueLinx Corp., an Atlanta-based building-products company that at one point ran 60 warehouses, each with its own back-end data system. Inconsistent data across the facilities was a constant problem. It sometimes led purchasing agents to buy inventory when the items were already fully stocked at a satellite facility that used a nonstandard product description or identification number.
But after initiating enterprisewide data quality discipline, BlueLinx reduced excess inventory, improved the quality and speed of product information searches, increased sales force productivity and enhanced customer service. The timing was critical given the building industry downturn and the resulting need for cost controls. “Having inventory that’s duplicate or not moving quickly is very expensive,” said Meg Hulme, BlueLinx’s director of IT application development and e-commerce.
Establishing a data quality baseline
The first step in boosting data accuracy is to establish a data quality baseline by profiling the company’s data. Profiling involves running a series of sample queries designed to flag anomalies, such as the same product identified by multiple names. The process is tedious and best suited to technicians adept at formulating SQL queries, but the payoff can be big.
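To make the profiling step concrete, here is a minimal sketch of the kind of anomaly-flagging query described above, run against an invented product table (the schema, SKUs and descriptions are illustrative, not BlueLinx’s actual data):

```python
import sqlite3

# Hypothetical product table for illustration; real schemas will differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT, description TEXT)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [
        ("PLY-100", "1/2in Plywood Sheet"),
        ("PLY-100", "Plywood Sheet 0.5 inch"),  # same SKU, different name
        ("STD-2X4", "2x4 Stud 8ft"),
    ],
)

# Profiling query: flag SKUs that appear under more than one description,
# the "same product, multiple names" anomaly mentioned in the text.
anomalies = conn.execute("""
    SELECT sku, COUNT(DISTINCT description) AS name_count
    FROM products
    GROUP BY sku
    HAVING name_count > 1
""").fetchall()

print(anomalies)  # [('PLY-100', 2)]
```

In practice, a profiling pass runs dozens of such queries, one per suspected anomaly type, and the hit counts become the quality baseline.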
When BlueLinx profiled its data a few years ago, it found so many obsolete and redundant product listings it eventually purged about two-thirds of the 300,000 items in its database. It’s an ongoing process: Three years into the data quality effort, BlueLinx is finally ready to profile its current data records. “This time around we are purging obsolete data that we’re able to identify because we’re managing the lifecycle of our products much more efficiently,” Hulme said.
With a data quality baseline in place, organizations can make effective use of tools designed to maintain information accuracy. A data warehouse is one of the biggest and most complex components in data management. But it plays the pivotal role of aggregating information from disparate production systems so planners can perform complex analyses, such as identifying cost and sales trends for the top-selling products in vertical markets.
“Data warehouses help companies effectively organize information for delivery to end users as opposed to organizing information for the ERP system, which is designed to process transactions,” said Juan Porter, president of TopDown Consulting, a San Francisco-based specialist in BI and enterprise performance management systems.
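The kind of end-user-oriented rollup Porter describes can be sketched with a toy “fact table” standing in for a warehouse (the schema and figures are invented for illustration):

```python
import sqlite3

# Toy fact table standing in for a warehouse; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_fact (product TEXT, market TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales_fact VALUES (?, ?, ?)",
    [
        ("Plywood", "residential", 1200.0),
        ("Plywood", "commercial", 800.0),
        ("Decking", "residential", 500.0),
    ],
)

# Analytical query: revenue by product, highest first -- the cross-system
# rollup a warehouse exists to serve, as opposed to ERP-style transactions.
top_sellers = conn.execute("""
    SELECT product, SUM(revenue) AS total
    FROM sales_fact
    GROUP BY product
    ORDER BY total DESC
""").fetchall()

print(top_sellers)  # [('Plywood', 2000.0), ('Decking', 500.0)]
```

A real warehouse would aggregate such facts from many production systems, but the shape of the query — group, sum, rank — is the same.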
MDM and ETL: Indispensable data management tools
An analytical master data management (MDM) system can help ensure that the information housed in a data warehouse is reliable and stays that way. MDM systems address the nitty-gritty details of how companies organize and store data. They also give users across the organization definitions and a common vernacular for describing information, Porter said.
MDM databases share their clean data with any other system that needs it, including transaction-oriented order- or inventory-management applications. They can also feed information to analytics systems and data warehouses. Because of the differing data needs of various systems, some companies run separate transactional and analytical MDM implementations.
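The single-master idea can be sketched in a few lines: downstream systems read product attributes from one authoritative store instead of keeping private copies. The class and method names below are hypothetical, not a real MDM product’s API:

```python
class MasterDataStore:
    """Sketch of an authoritative source for product master records."""

    def __init__(self):
        self._records = {}

    def upsert(self, sku, attrs):
        # All changes to master data flow through one place.
        self._records[sku] = dict(attrs)

    def get(self, sku):
        # Downstream systems (ordering, inventory, analytics) all read
        # from here rather than holding their own product tables.
        return dict(self._records[sku])


mdm = MasterDataStore()
mdm.upsert("PLY-100", {"description": "1/2in Plywood Sheet", "uom": "sheet"})

# Two consumers see the same clean record, by construction.
order_system_view = mdm.get("PLY-100")
warehouse_feed = mdm.get("PLY-100")
print(order_system_view == warehouse_feed)  # True
```

Splitting this into separate transactional and analytical stores, as some companies do, changes the plumbing but not the principle: each consumer gets its data from a designated master.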
“We want our MDM system to be the master and ruler of all product data,” Hulme said.
MDM technology itself is relatively economical, analysts report. According to some estimates, up-front data grooming represents about 90% of the cost of an MDM system, with technology installation accounting for the remaining expenses.
Once the big data management pieces are in place, manufacturers can look at a variety of smaller tools to help with data-cleansing chores. Traditionally, extract, transform and load (ETL) programs have been used to move data from an ERP system, for example, into a data warehouse. But they’ve evolved in recent years to incorporate many of the data-cleansing utilities formerly found in standalone data management tools. Experts recommend looking for ETL tools that support advanced data-grooming features.
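A hedged sketch of the transform step of such an ETL pass, with cleansing built in — whitespace and case normalization plus de-duplication on SKU. The record shapes and values are invented for illustration:

```python
def transform(rows):
    """Cleanse source rows on their way into the warehouse."""
    cleaned, seen = [], set()
    for sku, desc in rows:
        sku = sku.strip().upper()            # normalize identifier
        desc = " ".join(desc.split())        # collapse stray whitespace
        if sku not in seen:                  # drop duplicate SKUs
            seen.add(sku)
            cleaned.append((sku, desc))
    return cleaned


source_rows = [
    (" ply-100 ", "1/2in  plywood sheet"),
    ("PLY-100", "Plywood Sheet 0.5 inch"),   # duplicate SKU, dropped
    ("std-2x4", "2x4 stud 8ft"),
]
warehouse_rows = transform(source_rows)
print(warehouse_rows)
# [('PLY-100', '1/2in plywood sheet'), ('STD-2X4', '2x4 stud 8ft')]
```

Commercial ETL suites wrap this kind of logic in configurable rules; the point is that cleansing happens in flight rather than as a separate pass.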
To keep information clean after the up-front planning and implementation stages, companies will need to focus on data governance, a formal process for ensuring that each change or new entry to master data records has been verified for accuracy. Limiting the number of individuals who are authorized to modify information is one key to good governance. BlueLinx relies on a three-person master data team composed not of IT people but of supply chain experts who understand how the products relate to the forecasting and inventory-management systems.
“They are the gatekeepers,” Hulme said.
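The gatekeeper control described above — only a small, named set of stewards may change master records — can be sketched as a simple authorization check. The steward names and function are invented for illustration:

```python
# Hypothetical three-person master data team, per the gatekeeper model.
AUTHORIZED_STEWARDS = {"alice", "raj", "mei"}


def update_master_record(records, user, sku, attrs):
    """Apply a master-data change only if the user is an authorized steward."""
    if user not in AUTHORIZED_STEWARDS:
        raise PermissionError(f"{user} is not authorized to modify master data")
    records[sku] = dict(attrs)


records = {}
update_master_record(records, "alice", "PLY-100",
                     {"description": "1/2in Plywood Sheet"})

try:
    update_master_record(records, "bob", "PLY-100", {"description": "Plywood"})
except PermissionError as e:
    print(e)  # bob is not authorized to modify master data
```

Real governance adds review and audit steps on top, but the core idea is the same: every change to master data passes through a short list of accountable people.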