If you are reading this article, chances are you work in HR and have a more-than-average interest in data. If so, you are probably familiar with the ‘Wall of Boudreau’. For those of you who are not: the wall famously describes the struggle organizations face in the transition from descriptive analytics to predictive analytics.
Hitting the second wall in HR measurement
Though the need for progressing from descriptive to predictive analytics is clear and more and more organizations are breaching this wall, HR already faces the next challenge.
In their great article titled “HR is hitting a second wall”, Patrick Coolen and Frank van den Brink argue that a second wall exists. In the article they rightly state that most predictive (and prescriptive) HR analytics projects are not (yet) automated or productized. Instead, analyses are made on an ad hoc or project basis.
Coolen and Van den Brink urge organizations to advance from ad-hoc analytics to continuous analytics. In this vision analytics moves from understanding what happened in the past (descriptive analytics) to trying to predict what is going to happen in the future (predictive analytics) to continuously grasping what is happening right now (continuous analytics).
Data quality as an essential prerequisite to continuous analytics
In order to make continuous analytics work it is essential that the quality of your data is in shape. Not just now, but all the time.
That is easier said than done. Data is everywhere and the possibilities to analyze it seem endless. But even in modern times, the bulk of all data is still typed in manually at some point. People who administer data make mistakes and typos, forget things, and take shortcuts.
An example is filling in a dash or a zero in a required field because it lets them submit a form more quickly. That means your data is getting polluted on a daily basis.
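This kind of pollution is exactly what an automated check can catch. Here is a minimal sketch of such a check; the field names and the set of placeholder values are illustrative assumptions, not a real HR schema:

```python
# Sketch of a data quality check that flags placeholder values
# typed into required fields. Field names and the set of junk
# values below are illustrative assumptions.

PLACEHOLDER_VALUES = {"-", "0", "n/a", "N/A", "x", "."}

def find_placeholder_entries(records, required_fields):
    """Return (record_id, field, value) for every suspicious entry."""
    exceptions = []
    for record in records:
        for field in required_fields:
            value = str(record.get(field, "")).strip()
            if value in PLACEHOLDER_VALUES or value == "":
                exceptions.append((record["id"], field, value))
    return exceptions

employees = [
    {"id": 1, "cost_center": "CC-1042", "job_code": "HR-03"},
    {"id": 2, "cost_center": "-", "job_code": "0"},  # shortcut entries
]

print(find_placeholder_entries(employees, ["cost_center", "job_code"]))
# → [(2, 'cost_center', '-'), (2, 'job_code', '0')]
```

Run daily against the administration, a check like this surfaces the dashes and zeros the moment they are entered, instead of months later during an ad hoc cleanup.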
With both prescriptive and predictive analytics, the quality of the data used can benefit from a one-time cleanup. And that is usually how it works in practice. You decide to perform an ad hoc analysis. You find out the quality of the underlying data is bad. You fix the data quality on an ad hoc basis. Your attention moves on to the next analysis. And, oh yeah… the data you’ve just cleaned gets more polluted every day.
Though this approach (sort of) works for ad hoc analyses, continuous analytics requires continuous access to reliable data. Just as you can clean your house right before your parents-in-law visit, you can clean up your data right before an ad hoc analysis.
Continuous analytics, however, is like giving your parents-in-law the key to your house. Before you can be so bold, you need to make sure your house is clean and tidy all the time.
What is continuous improvement?
Continuous monitoring of data quality is not enough. Since your data is getting polluted on a daily basis, it should also be cleaned on a daily basis. This is where continuous improvement comes in.
In a previous article, I’ve explained the concept of continuous improvement in more detail. For those of you who haven’t read it, here’s a quick recap.
What we call Continuous Improvement is the process of continuously analyzing your (HR) administration for errors and risks, following up on issues and implementing structural changes to prevent them in the future.
For each exception, the system logs who has done what and why. This not only ensures exceptions are handled, but also offers valuable feedback on the algorithms and processes used. If the exception found was in fact not a flawed record, the administrative team flags it as a so-called ‘false positive’. The input from false positives ‘trains’ the algorithms to be more precise. If the same exceptions happen over and over again, it could mean the processes are unclear or need adjustment (step 4).
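The logging and false-positive feedback described above could be sketched as follows. The class and field names are my own assumptions for illustration, not a specific product’s API:

```python
# Sketch of an exception log that records who handled what and why,
# and feeds 'false positive' flags back so a check can be tuned.
# All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class DataException:
    record_id: int
    check: str
    handled_by: str = ""
    action: str = ""
    false_positive: bool = False

@dataclass
class ExceptionLog:
    entries: list = field(default_factory=list)

    def log(self, exc, handled_by, action, false_positive=False):
        exc.handled_by = handled_by
        exc.action = action
        exc.false_positive = false_positive
        self.entries.append(exc)

    def false_positive_rate(self, check):
        """Feedback signal: a high rate suggests the check needs tuning."""
        relevant = [e for e in self.entries if e.check == check]
        if not relevant:
            return 0.0
        return sum(e.false_positive for e in relevant) / len(relevant)

log = ExceptionLog()
log.log(DataException(1, "placeholder_in_cost_center"), "alice", "corrected value")
log.log(DataException(2, "placeholder_in_cost_center"), "bob", "reviewed", false_positive=True)
print(log.false_positive_rate("placeholder_in_cost_center"))  # → 0.5
```

A rate like this gives the administrative team a concrete signal: a check whose false-positive rate keeps climbing is a candidate for sharpening, while a check that keeps firing on genuine errors points to a process that needs structural adjustment.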
This last step is very important. By making structural changes in your data-entry process, in field instructions, or in the input fields themselves, you can stop mopping and start turning down the taps.
Note that this last step is rarely, if ever, taken when working with ad hoc analyses. One reason is that the focus at that point tends to be on the analysis itself rather than on the root cause of the bad data. Another is that you often need to monitor something over a period of time before you truly understand the root cause of the problem.
The general pattern we see when applying a new check in our continuous improvement approach is a flood of exceptions in the first period. After a while, through the structural changes that are made or simply through growing awareness, the number of exceptions drops drastically and stays low.
Will it drop to zero? Usually the answer is ‘no’. That underlines the importance of continuous improvement. Even when you are constantly monitoring and improving data quality, mistakes and typos will still be made and shortcuts will occasionally still be taken. Can you imagine what happens if you don’t apply continuous improvement?
The good thing is that, because the number of exceptions drops drastically over time, resources are freed up to start adding additional checks. That way, step by step, you can extend continuous improvement to cover the data quality of a large part of your administration.
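Adding checks step by step amounts to maintaining a growing registry of checks that runs on every data refresh. A minimal sketch of that idea, with check names and record fields as illustrative assumptions:

```python
# Sketch of a growing registry of data quality checks, run on every
# data refresh. Check names and record fields are illustrative.

CHECKS = {}

def check(name):
    """Decorator that registers a check function under a name."""
    def register(fn):
        CHECKS[name] = fn
        return fn
    return register

@check("missing_cost_center")
def missing_cost_center(record):
    return record.get("cost_center", "").strip() in ("", "-")

@check("negative_fte")
def negative_fte(record):
    return record.get("fte", 0) < 0

def run_all(records):
    """Return {check_name: [ids of records that failed the check]}."""
    return {name: [r["id"] for r in records if fn(r)]
            for name, fn in CHECKS.items()}

records = [{"id": 1, "cost_center": "-", "fte": 0.8},
           {"id": 2, "cost_center": "CC-7", "fte": -1.0}]
print(run_all(records))
# → {'missing_cost_center': [1], 'negative_fte': [2]}
```

Each freed-up block of capacity translates into one more decorated function in the registry, so coverage of the administration grows without rebuilding the pipeline.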
I realize that, to most of us, data quality is probably the most boring aspect of HR analytics. We would all rather do fancy analyses and wow our managers with killer metrics. But the point is that you can never progress to continuous analytics without continuously improving your data quality. Just as, with Boudreau’s wall, we could never have progressed to ‘forecasting’ without first investing in ‘data systems and portals’.
If done well, continuous improvement can break down the second wall, opening up the endless possibilities of continuous analytics.