At one point in my career, I was working as an HR manager for Belden, a company which, at that time, was making cables and wires for shavers, TVs, and other electronic equipment. I envied some of my colleagues in the management team who “battled” for their share of the annual budget. The marketing manager, in particular, always presented nice investment calculations which included costs like promotion campaigns, advertisements, customer visits, and more.
This marketing manager always showed great creativity when presenting the benefits of his proposals, based on assumptions about market growth, growth in market share, number of customer returns, etc.
Impressed as I was, I also observed that the assumptions made by this marketing manager were not based on “rocket science” but rather on “educated guesses” (which were presented very convincingly).
I wondered why HR in that company only presented the costs of HR investments and did not show scenarios for the benefits of HR initiatives such as training. Most of us are familiar with the learning evaluation model of Kirkpatrick (1976), but I asked myself: is there a formula with which we can estimate our (financial) return on learning?
We know that learning and development potentially can contribute a lot to the “employee lifetime value” as described in this article.
These two came together when my business manager asked me to present a business case for the added value of training quality control inspectors in the factory. I was reading literature about utility analysis at that time, which gave me confidence that HR, too, had more rigorous approaches to making financial estimates of HR initiatives.
Utility analysis defined
So what is utility analysis? For Human Resource Management, utility analysis refers to a specific tool designed to estimate the institutional gain or loss anticipated to a company from various HR interventions designed to enhance the value of the workforce (Sturman, 2003).
In other words: it’s a tool to calculate the utility, or profitability, of interventions.
For learning interventions the basic model is:
Utility (U) = Benefits (B) – Costs (C)
To calculate the utility, we need to define both Benefit and Cost:
Benefits = d * SDy * t * N
d = effect size of the training
SDy = Euro value of Performance Change
t = time (number of years the training has effect)
N = number of participants
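The formula above can be sketched in a few lines of Python (the function and variable names are my own, not part of the original model):

```python
def training_benefits(d, sd_y, t, n):
    """Estimated benefit of a training program:
    effect size * euro value of one SD of performance
    * number of years the effect lasts * number of trainees."""
    return d * sd_y * t * n

def training_utility(benefits, costs):
    """Utility (U) = Benefits (B) - Costs (C)."""
    return benefits - costs
```

With the numbers used later in this article (d = 0.5, SDy = 16,000 euro, t = 1 year, N = 25, costs = 50,000 euro), `training_utility(training_benefits(0.5, 16_000, 1, 25), 50_000)` gives 150,000 euro.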
The effect size
How do you calculate the effect size?
The effect size is the difference between performance before and after a training. For example, you have someone who is not yet trained (T0). After training this person (T1), you expect him or her to be better at the thing he or she was trained in. So the score at T1 increases compared to T0 on the competency measurement or language test.
To test this effect, you can take the average (mean) score of the group at both T0 and T1. You subtract the mean at T0 from the mean at T1 and then hope that the difference is positive.
To standardize it, you divide it by the standard deviation of the pretest. The picture below represents the size of the effect of the training program.
How is d (the effect size) computed? It is simply the difference between the means of the trained group (the red curve) and the untrained group (the blue curve), expressed in standard deviation (Z-score) units. This might be the difference in average job performance, time to competency, learning, and so on.
So let’s return to the quality-control inspectors in my company Belden.
Their job performance was evaluated in terms of a work sample – that is, the number of defects identified in a small sample of products with a known number (for example, 10) of defects.
Suppose the average job performance score for employees in the trained group is 7 and for those in the untrained group is 6.5, and the standard deviation of the job-performance scores is 1.0.
The effect size can then be calculated as: d = (7 – 6.5) / 1.0 = 0.5
In other words, the performance of the trained group is half a standard deviation better than that of the untrained group.
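The calculation for the inspectors can be sketched as follows (a minimal illustration; the function name is my own):

```python
def effect_size(mean_trained, mean_untrained, sd):
    """Standardized mean difference between the trained and
    untrained groups, in standard deviation units."""
    return (mean_trained - mean_untrained) / sd

# Quality inspectors: trained mean 7, untrained mean 6.5, SD 1.0
d = effect_size(7, 6.5, 1.0)  # 0.5 -> half a standard deviation
```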
Ideally, you should work with an experimental group (receiving training) and a control group (receiving no training). In the literature you can find estimates for the effect size (e.g. Arthur et al., 2003):
d = .20 -> small effect
d = .50 -> average effect
d = .80 -> large effect
The euro value of performance change
In the formula presented above, the variable SDy is defined as the added value in euros (or dollars, etc.) of an individual who performs one standard deviation above average (that is, compared to the average performer).
When we multiply the effect size as described above by SDy we can estimate the euro value associated with the estimated performance change.
The simplest approach to estimating SDy is feasible when there are clear euro-value performance data on each employee (e.g. in sales jobs). Unfortunately, such financial data are rarely available for many types of jobs.
Another simple approach to estimating SDy involves using the following simple rule, which is based on multiple studies (e.g. Becker and Huselid, 1992).
- For low-complexity jobs, SDy is estimated to equal 40 percent of the job’s salary;
- For moderate complexity jobs, it is equal to 60 percent of salary;
- For high complexity jobs, SDy is equal to 100 percent of salary.
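This rule of thumb is easy to express in code (a small sketch; the dictionary labels are my own):

```python
# SDy as a fraction of salary, by job complexity
# (rule of thumb after Becker and Huselid, 1992)
SDY_FRACTION = {"low": 0.40, "moderate": 0.60, "high": 1.00}

def estimate_sdy(salary, complexity):
    """Estimate the euro value of one standard deviation of
    performance from annual salary and job complexity."""
    return salary * SDY_FRACTION[complexity]

# A quality inspector earning 40,000 euro in a low-complexity job:
sdy = estimate_sdy(40_000, "low")  # 16,000 euro
```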
Cost of learning
Of course, the utility formula is not complete without estimating the costs. Very often a distinction is made between:
- Direct costs: material, instructor, accommodation, travel, and administration
- Indirect costs: opportunity costs = gross daily salary trainee * course sessions * number of trainees
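Both cost categories can be combined in a short sketch (the numbers in the example are hypothetical, chosen only to illustrate the calculation):

```python
def total_training_costs(direct_costs, gross_daily_salary,
                         course_days, n_trainees):
    """Direct costs (material, instructor, accommodation, travel,
    administration) plus the opportunity cost of trainee time:
    gross daily salary * course days * number of trainees."""
    return direct_costs + gross_daily_salary * course_days * n_trainees

# Hypothetical example: 30,000 euro direct costs, 160 euro gross
# daily salary, a 5-day course, 25 trainees
costs = total_training_costs(30_000, 160, 5, 25)  # 50,000 euro
```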
The utility of the training for quality inspectors at Belden
So back to the challenge of calculating the utility of the training of the quality inspectors. We used the following information to estimate the utility:
- 25 quality inspectors to be trained
- Duration of the effect of training: 1 year
- Effect size (d): 0.5 SD
- Year salary: 40,000 euro; the SDy is 16,000 euro (using the 40% estimation rule)
- Total cost of the training program: 50,000 euro
- ∆ Utility = (d * SDy) * (N * t) – C
- ∆ Utility = (0.5 * 16,000) * (25 * 1) – 50,000 = 150,000 euro
From these findings we can also calculate the Return on Investment (ROI).
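One common way to express ROI is net benefits divided by costs; applied to the Belden numbers it would look like this (a sketch under that assumption):

```python
# Belden quality-inspector training example
benefits = 0.5 * 16_000 * 25 * 1   # d * SDy * N * t = 200,000 euro
costs = 50_000
utility = benefits - costs          # 150,000 euro
roi = utility / costs               # 3.0, i.e. a 300% return
```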
Making use of utility analysis
Utility analysis can be quite complex but in my experience, it can be explained very well. Of course we must make certain assumptions, basically in the same way as the marketing manager of Belden was making assumptions about the benefits of “his” marketing investments.
So in summary what are the main assumptions to be made in our utility formula in case you don’t have exact numbers?
- Estimate the time the training has an effect. Some training programs (e.g. learning a new software release of a system which changes regularly) have a short time frame of impact. New skills such as selection interviewing usually have a longer-lasting effect. An often-used assumption is a time frame of one year.
- Estimate the effect size of the training. Depending on the nature of the training, there are good estimates in the literature (e.g. Arthur et al., 2003) for the effect size. Remember that the average effect size is approximately 0.5 SD.
- Estimate the euro value of performance change. As indicated, you can make use of the 40% rule in the case of “low complexity jobs”. With more complex jobs, the euro value might jump to 100% of salary.
General managers and HR directors need to be able to determine where investments in human capital are necessary to contribute to the bottom line of the organization. Utility analysis is a tool that can help make this happen.