Given the limitations of data quality dimensions and categories as discussed in the previous post, is there a useful alternative?
Well, Suzanne Embury and colleagues think so. First, they acknowledge that data quality cannot be measured precisely; the best one can hope for is an estimate. Real-world information changes constantly, so we cannot know how good our representation of it is at any given point in time.
Data is becoming an increasingly valuable asset to companies as the tools and technology to exploit it continue to develop. However, while good-quality data, properly analysed, can bring value, the costs incurred from poor-quality data can be considerable.
A 2002 report by the Data Warehousing Institute estimated business costs of $600 billion arising from poor data quality. In 2013 Gartner suggested that data quality problems cost American companies $14.2 million a year on average, while in a recent report IBM put the total cost of poor data quality to the US economy at a staggering $3.1 trillion a year. These figures need to be interpreted with some caution (the organizations behind them supply data quality management solutions, so it is probably in their interests that such numbers are high), but their scale indicates that data quality management has the potential to deliver huge value to organizations that implement it correctly.