One of the biggest challenges with, and criticisms of, employee engagement and satisfaction surveys is that the results can be overwhelming in volume but low on insight. In larger organizations in particular, developing a single survey that caters for all the flavors of what people do can lead to a convoluted and unstructured set of questions. And when the results are reported back to the business with no effective synthesis or summary, managers struggle to know what to do with them.
This year my team and I had a chance to help rethink how to generate meaningful insight from our company-wide satisfaction survey. The objective was clear: better understand what it is that makes our people satisfied – what they are in it for – and ensure that this is understood in how we report results back to our business leaders.
It was a great opportunity to apply some powerful statistical techniques to cut through the noise in the data and narrow in on the most important insights. Even more amazing were the conclusions we drew: that no matter what part of the organization someone is in, or what kind of work they do, there is a remarkable consistency about what keeps people excited and engaged at McKinsey.
Reducing the complexity of survey results using factor analysis
Satisfaction surveys have a habit of becoming more complicated and less organized over time. Managing an ever-widening group of stakeholders in the organization can become an increasingly demanding affair.
One common consequence of this is that survey questions get added, tweaked, and adjusted as different stakeholders seek more insight into the aspects of the employee experience that matter to them. Before you know it, surveys start to lose some of their original structure and the results become more convoluted and challenging to interpret.
Rather than trying to obsessively centralize control over survey content, which can be frustrating for stakeholders, analytic techniques can be applied after the fact to reduce the complexity of the results and home in on the key themes covered by the mass of questions.
This year we used a dimensionality reduction technique called factor analysis to create sharper, more intuitive analytics of our survey results. The concept behind factor analysis is that all measurable survey responses (usually responses on a Likert scale) are a function of a smaller number of latent variables. By analyzing the correlations between survey questions, identifying groups of responses that relate strongly to each other, and applying a dash of expert judgment from a psychometrician, these latent variables can be unearthed.
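To make the idea concrete, here is a minimal sketch of factor analysis on simulated Likert-scale data. The item names, the number of factors, and the simulated loading structure are all illustrative assumptions, not details from the actual survey; scikit-learn's FactorAnalysis stands in for whatever tooling the team actually used.

```python
# Hypothetical sketch: reducing Likert-scale survey items to latent factors.
# All names and the choice of 3 factors are illustrative, not from the article.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# Simulate 500 respondents answering 9 Likert items (1-5) driven by 3 latent traits
n = 500
latent = rng.normal(size=(n, 3))                  # hidden drivers of responses
loadings = np.array([
    [0.9, 0.0, 0.0], [0.8, 0.1, 0.0], [0.7, 0.0, 0.1],   # items 1-3 -> factor 1
    [0.0, 0.9, 0.0], [0.1, 0.8, 0.0], [0.0, 0.7, 0.1],   # items 4-6 -> factor 2
    [0.0, 0.0, 0.9], [0.1, 0.0, 0.8], [0.0, 0.1, 0.7],   # items 7-9 -> factor 3
])
raw = latent @ loadings.T + rng.normal(scale=0.3, size=(n, 9))
items = pd.DataFrame(np.clip(np.round(raw * 1.2 + 3), 1, 5),
                     columns=[f"q{i}" for i in range(1, 10)])

fa = FactorAnalysis(n_components=3, rotation="varimax")
scores = fa.fit_transform(items)                  # per-respondent factor scores
loadings_est = pd.DataFrame(fa.components_.T, index=items.columns,
                            columns=["factor1", "factor2", "factor3"])
# Items that load strongly on the same factor form one interpretable theme
print(loadings_est.round(2))
```

Groups of items with large loadings on the same estimated factor are the candidates for a named latent variable; the naming step is where the psychometrician's judgment comes in.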
We ended up identifying 11 latent variables, each represented by a different combination of survey questions. Examples of these latent variables were 'Opportunities and Incentives' and 'Work life balance'. Being able to represent survey results against this smaller set of latent variables added considerable value in making them more straightforward to interpret.
Finding the things that matter most
Most surveys have a single outcome question which asks the respondent to provide a final, overall opinion on the most critical construct being measured. For example, 'To what extent do you agree with the statement "I am happy in my work"?' or 'How would you describe your overall level of satisfaction currently?'. If we regard this single question as the outcome variable, we can use statistical techniques to understand how much of the response to that outcome can be explained by each of the latent variables identified using factor analysis. Is it all about incentives? Or does the manager you work for drive your satisfaction the most?
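The basic move here can be sketched with an ordinary regression of the outcome question on the factor scores. Everything below is simulated for illustration: the two factors, their coefficients, and the sample size are assumptions, not survey results.

```python
# Illustrative sketch: regress the single outcome question on latent factor
# scores to see how much outcome variance the factors explain together.
# All data and coefficients are simulated, not from the actual survey.
import numpy as np

rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 2))               # e.g. scores for two latent variables
outcome = 0.7 * factors[:, 0] + 0.2 * factors[:, 1] + rng.normal(scale=0.5, size=300)

X = np.column_stack([np.ones(300), factors])      # add an intercept column
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
r2 = 1 - (outcome - X @ beta).var() / outcome.var()
print(beta[1:], r2)   # coefficient on each factor, overall R^2
```

The coefficients give a first sense of which factor moves the outcome most, but when predictors are correlated (as survey factors usually are) a dedicated relative importance method is needed, which is where the next section picks up.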
Relative importance analysis is a technique used to estimate the importance of individual predictors in a regression model…
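One common relative importance method, which the excerpt does not name but which illustrates the idea, is the LMG (Shapley-style) decomposition: each predictor's importance is its incremental contribution to R-squared, averaged over every ordering in which predictors could enter the model. The sketch below implements that on simulated data; the factor count and coefficients are assumptions for illustration only.

```python
# Hedged sketch of LMG relative importance: average each predictor's
# incremental R^2 contribution across all orderings of the predictors.
# Simulated data; not the article's actual analysis.
from itertools import permutations
import numpy as np

def r_squared(X, y, cols):
    """R^2 of an OLS fit of y on the given subset of columns of X."""
    if not cols:
        return 0.0
    Xs = np.column_stack([np.ones(len(y)), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return 1 - (y - Xs @ beta).var() / y.var()

def lmg_importance(X, y):
    """Average incremental R^2 per predictor over all entry orders."""
    p = X.shape[1]
    contrib = np.zeros(p)
    perms = list(permutations(range(p)))
    for order in perms:
        seen = []
        for j in order:
            before = r_squared(X, y, seen)
            seen.append(j)
            contrib[j] += r_squared(X, y, seen) - before
    return contrib / len(perms)   # shares that sum to the full-model R^2

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))                      # e.g. 3 latent-factor scores
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=400)
shares = lmg_importance(X, y)
print(shares)   # importance share of each factor in explaining the outcome
```

Because the shares sum to the full model's R-squared, they can be read as "this factor accounts for X% of explained satisfaction", which is exactly the kind of summary that makes survey results actionable for business leaders. Note the brute-force loop over orderings is only practical for a modest number of predictors.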
Click here to continue reading Keith McNulty’s article.