Training Evaluation Mistake #3

May 1, 2013

Thank you to the 947 people who completed the Knowledge of Training Evaluation Strategy quiz. If you missed it, you can still take it now and see your results at the conclusion.

This week, we continue to address each quiz question and provide background information.

Training Evaluation Mistake #3: Using Up All Training Evaluation Resources on Levels 1 & 2

The quiz question related to this topic yielded a fairly even split of responses, with 57% answering true and 43% answering false:

A detailed post-program evaluation is important to complete for every program.

This outcome is actually not surprising. Over 90% of training around the world is measured with some type of reaction sheet or Level 1 tool. However, recent Kirkpatrick writings challenge this tradition.

We believe the correct answer to the question is false. A detailed post-program evaluation is important to complete for only some, not all, programs. Resources are limited, so you need to determine whether the post-program evaluation is the place to spend a significant portion of them. It’s critical to save resources to support on-the-job application and to measure the overall results.

Here are some guidelines to help you decide when a detailed post-program evaluation is required:

When to Use a Detailed Post-Program Evaluation

• Pilot programs
• Mission-critical programs
• Programs that have been changed or enhanced

Tips for Keeping Evaluations Short for the Rest of Your Programs

For the rest of your training programs, we encourage you to think about the data you’ve collected through two lenses: usefulness and credibility. Usefulness relates to information that the program facilitator and the training department need to ensure that they are doing their jobs well. Credibility relates to the data and information stakeholders expect to see to validate the program’s value to the organization.

Typically, post-program evaluation forms primarily contain information that is useful to the trainers, but not terribly relevant to stakeholders. Keep the post-program tool short and limited to information that will be tabulated and reported.

Here are some practical tips for keeping your forms brief:

• Have the trainer gather information formatively, or during the session, on topics such as room comfort, catering and program pace
• Remove any questions from your evaluation for which the information is not directly used or applied
• For programs repeated frequently for which great amounts of data have already been collected, consider making the evaluation optional, perhaps online
• If you need comprehensive information on a new facilitator or other program change, use a dedicated observer in the classroom instead of counting on your participants to give large amounts of feedback

What other tips do you have for making the most of the evaluations that you do choose to complete? Please log in and share your comments with us below. We also welcome any of your questions and comments on this series, or any training evaluation related topic.

Additional resources:

Kirkpatrick Four Levels® Evaluation Certification Program

Kirkpatrick Four Levels® Evaluation Certificate Program

Training on Trial

The Brunei Window Washer: Bringing Business Partnership to Life

Kirkpatrick Then and Now

Getting to Kirkpatrick Levels 3 and 4 (recorded webinar)

ROE's Rising Star

Three Steps to Effectiveness