How to Measure Microlearning Effectiveness with Kirkpatrick’s Evaluation Model

If you have started implementing microlearning in your organization, it’s important to measure how much it contributes to learning, behavioral change, and performance improvement. Kirkpatrick’s evaluation model shows us how.

The Basics

To use Kirkpatrick’s evaluation model to measure your microlearning efforts, you first need to collect data by asking 6 simple questions:

  1. Who are we evaluating?
    – the learners, their managers, the trainers, or even the customers
  2. What are we evaluating?
    – knowledge, behavior, or skill that the learner needs to learn, recall, and apply
  3. Why are we evaluating?
    – the desired outcome of evaluating the people or parameters specified
  4. When are we evaluating?
    – the most appropriate time to collect data – ‘immediate’ or ‘later’
        o Immediate evaluation measures the immediate reaction or response.
        o Evaluation at a later point or points (e.g., the application of learning after 1, 3, 6, or 9 months) is usually a better measure.
  5. Where are we evaluating?
    – the location or medium where data is collected – on-the-job observation, paper-based assessments, or online surveys
  6. How are we evaluating?
    – how to collect the required information, responses, and feedback – surveys, quizzes, reports, etc.

These 6 questions form the basis for Kirkpatrick’s four levels of evaluation.

Kirkpatrick’s 4 Levels of Evaluation

Kirkpatrick’s model measures training effectiveness at 4 levels – Reaction, Learning, Behavior and Results.

The table below describes what each of these 4 evaluation levels means, and how it can be measured.

Level 1: Reaction
  What it involves: Did the learners enjoy the training? How did they react or respond to it? Measures the learner’s reaction immediately after the session on: quality of content; extent and ease of learning; relevance to the job, etc.
  Indicative examples: Surveys, ratings

Level 2: Learning
  What it involves: Did learning transfer occur? What was learned, and what was not? To what extent did learners improve their knowledge, skills, or behavior? Ideally, learners are evaluated both before (pre-test) and after the training (post-test) to measure progress.
  Indicative examples: Quizzes or assessments before and immediately after training; observations by peers and instructors. Assessments can be planned for each microlearning lesson or for the larger training initiative.

Level 3: Behavior
  What it involves: Did the training result in a change in behavior? Are learners applying the learning in the workplace? This assessment can start weeks or months after training.
  Indicative examples: In-depth surveys and interviews; feedback from managers; job performance reviews

Level 4: Results
  What it involves: Did the training achieve the desired outcomes? Are the outcomes truly the result of the training? This level requires measuring the training objective both pre- and post-training.
  Indicative examples: Comparing baseline and post-training performance over time; performance measurements based on key performance indicators (KPIs) or success factors tied to the learning objectives
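Levels 2 and 4 lend themselves to simple arithmetic: a learning gain (post-test minus pre-test) and a KPI change against a baseline. The sketch below illustrates both calculations; all scores, the KPI, and its values are hypothetical, purely for illustration.

```python
# Sketch: quantifying Level 2 (Learning) and Level 4 (Results).
# All data below is hypothetical, for illustration only.

def average(scores):
    return sum(scores) / len(scores)

# Level 2: pre-test vs. post-test quiz scores (percent) for one lesson
pre_test = [55, 60, 48, 70, 62]
post_test = [78, 85, 66, 90, 81]
learning_gain = average(post_test) - average(pre_test)

# Level 4: baseline vs. post-training KPI (e.g., a monthly error rate,
# where a negative change means improvement)
baseline_kpi = 12.4
post_training_kpi = 8.1
kpi_change_pct = (post_training_kpi - baseline_kpi) / baseline_kpi * 100

print(f"Average learning gain: {learning_gain:.1f} points")
print(f"KPI change: {kpi_change_pct:.1f}%")
```

In practice you would pair pre- and post-test scores per learner and track the KPI at the intervals (1, 3, 6, 9 months) mentioned above, but the underlying comparison is the same.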

Now let’s see how this framework translates into a format that L&D professionals can use.

Kirkpatrick’s Evaluation Format for Learning Professionals

The evaluation format given here is only indicative, and can be customized based on your learning evaluation needs. The content in BLUE in the format is only for reference, and may be deleted when creating your own evaluation format.

What’s getting evaluated?
  [Mention the exact nature of the evaluation]
  1. …
  2. …
  3. …and so on

Why evaluate?
  [Mention the issues, lacunae, problem areas, and pain points that prompted the evaluation]
  1. …
  2. …
  3. …and so on
1. Reaction
  Who’s being assessed? The learner
  Where is it done? Online
  When should you assess? Immediately after the microlearning lesson
  How do you evaluate? (suggested options) A quick survey or rating (Likert scale or stars)

2. Learning
  Who’s being assessed? The learner
  Where is it done? Online
  When should you assess? Self-assessment by the learner within each lesson; assessment of learning at the end of the lesson
  How do you evaluate? (suggested options) Scenarios or videos (or other graphic formats) with multiple-choice questions; graphics, animations, or gamified role plays; completing a report for a topic-related scenario; true/false or multiple-choice questions for self-assessment

3. Behavior
  Who’s being assessed? The learner
  Where is it done? Online
  When should you assess? Monthly or quarterly after the training
  How do you evaluate? (suggested options) Surveys with Likert-scale or open-ended questions; topic-related assessment questions

  Who’s being assessed? The manager
  Where is it done? Online
  When should you assess? Periodic assessment of a microlearning session
  How do you evaluate? (suggested options) Surveys with ‘yes-no’ questions (polar responses) and open-ended questions; options for managers to share progress and results

4. Results
  Who’s being assessed? The manager
  Where is it done? Online
  When should you assess? Quarterly assessment of a microlearning lesson
  How do you evaluate? (suggested options) Surveys; options to share trends or progress for long-term training effectiveness
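The Level 1 surveys above typically produce Likert-scale ratings, which are easy to summarize. A minimal sketch, assuming hypothetical 1–5 ratings from a post-lesson survey:

```python
# Sketch: summarizing Level 1 (Reaction) Likert-scale survey results.
# The responses below are hypothetical ratings on a 1-5 scale.
from collections import Counter

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

mean_rating = sum(responses) / len(responses)
distribution = Counter(responses)  # how many learners gave each rating
# "Top-2-box": share of learners rating 4 or 5, a common satisfaction metric
top_2_box = sum(1 for r in responses if r >= 4) / len(responses)

print(f"Mean rating: {mean_rating:.1f} / 5")
print(f"Top-2-box share: {top_2_box:.0%}")
```

Tracking the same summary per lesson over time turns a one-off reaction survey into a trend you can report alongside the Level 3 and Level 4 data.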

The Kirkpatrick evaluation format tells us that evaluation needs to be an ongoing, long-term process: measuring the effectiveness of microlearning initiatives is not a one-time activity!

By measuring effectiveness over time, you can establish performance baselines for your learners as well as your learning initiatives.

This is useful for the management team as well: it enables you to communicate effectively with your stakeholders and helps you gain their trust and confidence.

To conclude, measuring microlearning effectiveness with the Kirkpatrick evaluation framework not only gets you management support, but also gets them to recognize your contribution.
