Who Are We Measuring For, Anyway?
This week we’re featuring a special guest post by Robert O. Brinkerhoff, Ed.D., an internationally recognized expert in evaluation and training effectiveness. Below, he discusses why training leaders are better off worrying about the payoff to the customer than about earning a high ROI score.
Who are we measuring for, anyway?
By: Robert O. Brinkerhoff, Ed.D.
I have been invested in, and struggling with, the measurement of training since my earliest days as a training officer in the U.S. Navy, more than forty years ago. For decades I’ve witnessed new approaches rise and fall, and almost all of them focus on the How of measurement rather than the Why. We’d be better off worrying more about customer-centric measures (metrics that help our customer get more value) than training-centric measures (metrics that help us training folks justify and defend our existence).
A good bad example, or case in point, is the preoccupation with the question “How do we isolate the value of the training intervention?” – a prescription of the several ROI procedures that have been ascendant in the recent past.
The typical ROI (return on investment) process advises practitioners to estimate and include in their calculation an “isolation” factor: the degree to which some business outcome, a sales increase for example, can be attributed to the training intervention.
In the ROI calculation, the greater the estimated training contribution, the higher the ROI score. With a business outcome worth $100, for example, training can take credit for $50 of the result if the isolation factor is estimated at 50%; if the factor were estimated at only 10%, training gets credit for just a $10 bump. Knowing that training departments love to report a nice high ROI score, it accrues to their benefit to pursue the highest possible isolation factor, since a high factor leads to a higher calculated ROI score and a low factor to a lower one. In other words, from the perspective of a training department wanting to earn high ROI scores, the perfect scenario is a business outcome estimated to be caused solely by the training: an isolation factor of 100%, which would allow the training function to take all the credit for the business value produced by a training-inspired performance improvement.
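To make the arithmetic concrete, here is a minimal sketch of the calculation described above, assuming the common “net benefit over cost” form of the ROI formula. The $25 training cost and the function names are illustrative assumptions, not figures from the text or from any particular ROI methodology.

```python
def isolated_benefit(business_outcome, isolation_pct):
    """Portion of a business outcome (in dollars) credited to training,
    given the estimated isolation factor as a percentage."""
    return business_outcome * isolation_pct / 100


def roi_percent(business_outcome, isolation_pct, training_cost):
    """ROI score as a percent: net credited benefit divided by cost.
    (Assumed 'net benefit over cost' formula, for illustration only.)"""
    benefit = isolated_benefit(business_outcome, isolation_pct)
    return (benefit - training_cost) / training_cost * 100


# The $100 outcome from the text, with a hypothetical $25 training cost:
print(isolated_benefit(100, 50))       # 50.0  -- training claims $50
print(isolated_benefit(100, 10))       # 10.0  -- training claims only $10
print(roi_percent(100, 50, 25))        # 100.0 -- a flattering ROI score
print(roi_percent(100, 10, 25))        # -60.0 -- a negative ROI score
```

The point of the sketch is simply that, with cost held fixed, the reported ROI rises and falls with the isolation estimate alone, which is exactly the incentive problem the paragraph above describes.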
But it strikes me that the customer is better off with the lowest possible isolation factor.
A “perfect” isolation score of 100% would mean that only the training drove the result; that there was absolutely zero contribution from any other factor in the performance environment. But think about this for a moment. If this is true, then the training is totally unaligned with all other performance system factors – no incentives, no coaching, no peer pressure to perform, no manager influence, etc. This training-driven perfect scenario strikes me as a worst-case scenario for the customer, as the training is flying directly into a forceful headwind.
An isolation score of zero, on the other hand, seems to me to be the perfect solution: it means the organization has figured out a way to drive a desired performance improvement and resultant business result solely by manipulating the performance system, so there is no need for any training at all. Isn’t this scenario exactly what the performance-oriented practitioners would seek? Training is always an expensive solution. If you can get to the desired business outcome without spending a dime on training – by just adjusting incentives, motivation, performance support, etc. – isn’t this what a thoughtful HRD leader should strive for?
Too many training measures seem to me to be less about doing something good for the business up front and more about figuring out how to divvy up the chips and recognition after the game is over.