Showing Effectiveness of Training on Business Performance

I am exploring methods of showing the effect of training on quality-of-service measures in IT.  Since many factors can affect quality, how does one determine a realistic correlation between training and improvements in quality of service?

Each month numerous Service Performance Measures are taken and reported. These could be used to establish a baseline at the beginning of an education initiative. 

Does anyone have suggestions on how to measure realistic effectiveness of training?

7 Replies
Dave Neal

First of all, ask your client (yourself?) what the purpose of the training is. By that, I mean why do they perceive a need in the first place? What do they hope to achieve by this training? Happier customers? How do they know that the customer is unhappy? That would be the measurement.

e.g., if too many complaints to the 1-800 number were the trigger, then success means the number of complaints drops after training.

At least, that's one approach.

Melissa Chiaramonti

Interesting.  I would love to know what the perceived need is as well. "Happier customers" is, IMHO, difficult to measure, but pairing it with something concrete, say average customer calls per issue (for example, it might take 2.5 calls to resolve an issue for each customer contact; lowering that would be really cool), might give a truer look at the impact of training. You could more confidently state that the training, if it was on technical skills or problem solving (again, for example), could be the reason for happier customers (based on those awful Likert surveys).

Let us know what approaches you are considering....

Holly MacDonald

Hi Ramon

One approach that I've had success with (although more on building a business case for training prior to development) is to identify the things that may have impacted performance (manager, large project, new hires, etc.), and then, if there are, say, 5 factors, I attribute only 20% of the change to the training. You need to identify what the performance measure is (call volume, speed of resolution, service quality, as the others have pointed out). I have found that if you are reasonable and show logic, even if the factors aren't all evenly weighted, you can show positive impact. It isn't an exact science, so there are going to be some leaps of faith! You could ensure that you have a "control" group (those that haven't had training, or it's been a while) as a way to show some correlation. You could create a "dashboard" that shows a mix of qualitative and quantitative measures. 
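Holly's weighting idea boils down to a bit of arithmetic. A minimal sketch in Python, where the even split across factors and all the numbers are illustrative assumptions, not figures from her post:

```python
# Sketch of the conservative-attribution idea: if several factors plausibly
# influenced the change, credit training with only its share of the gain.

def training_share(total_improvement_pct, n_factors, training_weight=None):
    """Attribute a conservative slice of an observed improvement to training.

    If no explicit weight is given, split the credit evenly across factors.
    """
    weight = training_weight if training_weight is not None else 1.0 / n_factors
    return total_improvement_pct * weight

# Hypothetical example: resolution speed improved 15% while 5 factors changed
# at once, so claim only 20% of that gain (3 percentage points) for training.
claimed = round(training_share(15.0, n_factors=5), 2)
print(claimed)  # 3.0
```

If the factors clearly aren't equal, pass an explicit `training_weight` instead of relying on the even split.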

Most importantly (I believe), you are making the link at all, and you are showing long-term trends, not a one-time blip. If you can correlate to the timing of the training, that is additional supporting evidence. The second most important thing is that you aren't too attached to your solution, and if it shows a negligible impact, you use that information to make a change in support of continued performance improvement. 

I hope some of those help, I'm not at my most concise this evening it seems!


Fred Marquez

Holly is sending you in the right direction.  Start at the beginning by studying and identifying specific performance attributes that require improvement, and measure them.  At the end of the training cycle you need to re-measure those performance attributes to see what the impact was.  This is called performance analysis, and it is a bit different from the usual needs-analysis work we do.  To do a good one, you need to consider interviewing multiple levels of employees so that you can get a clear picture of the performance gap and the key performance attributes that need to be measured/improved.


Beth Worthy


What I suggest is that the effectiveness of training can be measured in terms of the valuable time and resources it saves. Apart from this, realistic measurement of training effectiveness can also follow the four major levels: reaction, learning, behavior, and finally results/outcomes.
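The four levels Beth lists are Kirkpatrick's classic evaluation model. A tiny sketch pairing each level with a possible IT-service measure; the example measures are my own assumptions, not prescribed by the model:

```python
# Kirkpatrick's four levels, each mapped to a hypothetical IT-service measure.
KIRKPATRICK_LEVELS = {
    "reaction": "post-course satisfaction survey scores",
    "learning": "pre/post assessment score delta",
    "behavior": "observed change in ticket-handling practice on the job",
    "results": "service-quality metrics (e.g. resolution time, complaint volume)",
}

for level, measure in KIRKPATRICK_LEVELS.items():
    print(f"{level}: {measure}")
```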

Bob S

Hi Ramon,

The "classic" way is to do a control group in order to isolate any additional factors. Provide the training to one group but not to another and compare results before and after. Be sure all other things are equal.
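This before-and-after comparison across a trained group and a control group is essentially a difference-in-differences calculation. A minimal sketch with hypothetical numbers (the metric and values are invented for illustration):

```python
# Difference-in-differences: the change in the trained group minus the
# change in the control group isolates the effect beyond shared factors.

def diff_in_diff(trained_before, trained_after, control_before, control_after):
    """Change in the trained group minus change in the control group."""
    return (trained_after - trained_before) - (control_after - control_before)

# Hypothetical first-call-resolution rates (%):
effect = diff_in_diff(trained_before=62.0, trained_after=71.0,
                      control_before=61.0, control_after=64.0)
print(effect)  # 6.0 -- the improvement beyond what the control group saw
```

The control group's change stands in for "everything else that happened that month," which is exactly the isolation Bob describes.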

But be warned, this kind of isolated measurement has a risk associated with it. If for some reason the results do not show a significant improvement, or (gulp) show a decline, you may be in an uncomfortable position.

This is one reason, though not the only one, that many folks choose to do "whole group" metrics instead. These acknowledge that several factors may have impacted the results. But that's a good thing too. Because the best training solutions do not occur in a vacuum... rather, they are one piece of a business solution to improve performance. Things like process review, refocusing on key goals, management follow-up and more often come out of, or work alongside, a training solution.

My advice? Unless someone is forcing you to do isolated control-group testing of your training solutions, I would avoid it. Instead, focus on the measurable performance improvement for the whole group and note that, alongside training, other factors contributed to the success....and encourage your stakeholders to always consider a complete solution in the future vs. just throwing training at it.

Hope this helps,


Rich Johnstun

Measuring the effectiveness of training is always one of the "holy grails" of training. That's how we justify our existence, right?

We try to come at it from multiple directions, because there will always be some sort of performance metric you can gather, but it usually tells only part of the story.

  • The first thing we use is pre- and post-training business metrics. We check the performance metrics (whatever those might be for a particular position) at 30, 60, and 90 days both before and after the training event. Obviously this will be easier for some positions than for others.
  • Pre- and post-training evaluations. We have associates take a quick exam prior to training and usually again 30 days after. We keep it lean and concise (typically 5-8 questions). This is a good way to validate our training objectives.
  • Post-training associate self-evaluation. We run these anywhere from 30-90 days after the training, depending upon the program.  This is obviously a subjective measure, but it gives the associate a chance to provide input on how they feel they are doing and whether the training helped improve their performance.
  • Post-training manager survey. Similar timing to the associate self-evaluation. This gives the individual's manager or supervisor the ability to provide input on how they feel the training impacted their associate.
  • Finally, random audits/interviews. On a regular basis we will schedule a 10 minute interview phone call with learners and managers post training to get a more casual type of feedback. Often people are more comfortable saying things than they are putting them in writing.
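The first bullet above, comparing a business metric over 30-, 60-, and 90-day windows before and after training, can be sketched in a few lines. The data shape (a mapping of date to daily metric value) is an assumption for illustration:

```python
from datetime import date, timedelta
from statistics import mean

def window_averages(daily_metric, training_day, windows=(30, 60, 90)):
    """Average a daily metric over N-day windows before and after training_day.

    daily_metric: dict mapping datetime.date -> metric value for that day.
    Returns {window_length: (avg_before, avg_after)}; None if a window is empty.
    """
    out = {}
    for n in windows:
        before = [v for d, v in daily_metric.items()
                  if training_day - timedelta(days=n) <= d < training_day]
        after = [v for d, v in daily_metric.items()
                 if training_day < d <= training_day + timedelta(days=n)]
        out[n] = (mean(before) if before else None,
                  mean(after) if after else None)
    return out
```

Comparing the before/after pair at each window length shows whether an improvement persists or is just a post-training blip, which echoes Holly's point about long-term trends.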

All of this might seem excessive, but my company spends a lot of money on training (millions), and we train about 10,000-12,000 associates a year. Not a huge number, but enough that we see the value in doing the most we can to make sure that the ROI is there.