Why?
The volume of data owned and managed by the enterprise is growing at an exponential rate. Companies also expect their employees to absorb all this additional information and process it ever faster. Businesses want IT to maintain an expanding systems landscape while delivering new capabilities quickly and meeting demanding Service Level Agreements (SLAs). This is impossible to do manually.
Challenge:
The most common approaches to monitoring data quality are visual inspection and hand-written queries that require deep technical skills.
For example, as a business user reviews newly received orders from a customer, they either spot-check the data or export it to Excel and visually inspect it. They repeat this effort the next week and the week after that. And since these are not reconciliation checks, they miss data issues that span systems and processing periods. That's inefficient!
Then there are IT programmers. They write ad-hoc queries or reports against the databases, which run audits and validations far faster than business spot checks. However, this requires deep technical skills and domain knowledge, plus considerable time with each group of functional business users to understand the business context and their requirements for the data. And even then, IT still has to send the failures back to the business users, who validate the fixes and update the data at the source if they have permission to do so, or hand the corrections back to IT if they don't. The sketch below illustrates the kind of hand-written check involved.
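As a rough illustration only (the table and column names such as crm_orders, erp_orders, and order_total are hypothetical, and sqlite3 simply stands in for whatever database is in use), this is the sort of ad-hoc reconciliation query an IT programmer might write and rerun by hand each period:

```python
# Illustrative sketch of a hand-written cross-system reconciliation check.
# All table/column names are assumptions, not taken from any real system.
import sqlite3

RECONCILIATION_SQL = """
SELECT c.order_id
FROM   crm_orders c
LEFT JOIN erp_orders e ON e.order_id = c.order_id
WHERE  e.order_id IS NULL               -- order never reached the downstream system
   OR  c.order_total <> e.order_total   -- or the two systems disagree on the amount
"""

def find_mismatched_orders(db_path: str) -> list:
    """Return order IDs that fail the cross-system reconciliation check."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(RECONCILIATION_SQL).fetchall()
```

Writing, maintaining, and interpreting checks like this is exactly the work that demands both technical skill and business context.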
How:
Look for data quality tools that automate the execution of business rules. Automation should support batch execution, on-demand execution, and scheduled runs of the rules. It should also include robust workflow and notification capabilities: pushing audit reports to individual users by email and enabling approvals and fixes electronically. The sketch below shows the general shape of such automation.
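The following is a minimal sketch, not any specific product's API, of what automated business-rule execution could look like: rules are plain functions that return failing records, a batch run executes every rule, and failures are pushed to the rule's business owner by email. All names (check_missing_ship_date, the sample order fields, the mail addresses and local SMTP relay) are illustrative assumptions.

```python
# Minimal sketch of automated business-rule execution with email notification.
import smtplib
from email.message import EmailMessage
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    owner_email: str                            # business user notified on failure
    check: Callable[[List[Dict]], List[Dict]]   # returns the failing records

def check_missing_ship_date(orders: List[Dict]) -> List[Dict]:
    """Illustrative rule: shipped orders must have a ship date."""
    return [o for o in orders if o["status"] == "SHIPPED" and not o.get("ship_date")]

RULES = [
    Rule("Shipped orders must have a ship date",
         "orders.team@example.com",
         check_missing_ship_date),
]

def notify(rule: Rule, failures: List[Dict]) -> None:
    """Push a simple audit report to the rule owner by email."""
    msg = EmailMessage()
    msg["Subject"] = f"Data quality audit failed: {rule.name}"
    msg["From"] = "dq-bot@example.com"
    msg["To"] = rule.owner_email
    msg.set_content("\n".join(str(row) for row in failures))
    with smtplib.SMTP("localhost") as smtp:     # assumes a local mail relay
        smtp.send_message(msg)

def run_rules(records: List[Dict]) -> None:
    """Batch execution: run every rule and notify owners about failures."""
    for rule in RULES:
        failures = rule.check(records)
        if failures:
            notify(rule, failures)

if __name__ == "__main__":
    sample_orders = [
        {"order_id": 1, "status": "SHIPPED", "ship_date": "2024-01-05"},
        {"order_id": 2, "status": "SHIPPED", "ship_date": None},  # will fail the rule
    ]
    run_rules(sample_orders)
```

In practice the same run_rules entry point could be invoked on demand by a user, or scheduled (for example via cron or a job scheduler) to cover the scheduled-execution requirement; the approval-and-fix workflow would sit on top of the notifications this sketch only stubs out.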