How to Measure Microlearning Effectiveness with
Kirkpatrick’s Evaluation Model

If you have started implementing microlearning in your organization, it's important to measure how much it has contributed to learning, behavioral change, and performance improvement. Kirkpatrick's evaluation model shows us how.

The Basics

To use Kirkpatrick’s evaluation model to measure your microlearning efforts, you first need to collect data by asking 6 simple questions:

  1. Who are we evaluating?
    – the learners, their managers, the trainers, or even the customers
  2. What are we evaluating?
    – knowledge, behavior, or skill that the learner needs to learn, recall, and apply
  3. Why are we evaluating?
    – the desired outcome of evaluating the people or parameters specified
  4. When are we evaluating?
    – the most appropriate time to collect data – ‘immediate’ or ‘later’
       • Immediate evaluation measures the learner’s immediate reaction or response.
       • Evaluation at a later point or points (e.g., the application of learning after 1, 3, 6, or 9 months) is usually a better measure.
  5. Where are we evaluating?
    – the location or medium where data is collected – on-the-job observation, paper-based assessments, or online surveys
  6. How are we evaluating?
    – how to collect the required information, responses, and feedback – surveys, quizzes, reports, etc.

These six questions form the basis for Kirkpatrick’s four levels of evaluation.

Kirkpatrick’s 4 Levels of Evaluation

Kirkpatrick’s model measures training effectiveness at four levels – Reaction, Learning, Behavior, and Results.

The breakdown below describes what each of these four evaluation levels means, and how it can be measured.

Level 1: Reaction
  What it involves: Did the learners enjoy the training? How did they react or respond to it? This is the learner’s reaction to the training session, captured immediately after its completion, on:
    • Quality of content
    • Extent and ease of learning
    • Relevance to the job, etc.
  Indicative examples: Surveys, ratings

Level 2: Learning
  What it involves: Did learning transfer occur? What was learned, and what was not? To what extent did learners improve their knowledge, skills, or behavior? Ideally, learners should be evaluated both before (pre-test) and after the training (post-test) to measure progress.
  Indicative examples:
    • Quizzes or assessments before and immediately after training
    • Observations by peers and instructors
  Assessments can be planned for:
    • Each microlearning lesson
    • The larger training initiative

Level 3: Behavior
  What it involves: Did the training result in a change in behavior? Are learners applying what they learned in the workplace? This assessment can start weeks or months after training.
  Indicative examples:
    • In-depth surveys and interviews
    • Feedback from managers
    • Job performance reviews

Level 4: Results
  What it involves: Did the training achieve the desired outcomes? Are the outcomes truly the result of the training? This level requires measuring the training objective both pre- and post-training.
  Indicative examples:
    • Comparing baseline and post-training performance over time
    • Performance measurements based on key performance indicators (KPIs) or success factors tied to learning objectives
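As a toy illustration of the Level 2 and Level 4 comparisons above, here is a minimal sketch of the underlying arithmetic. All scores and KPI values are hypothetical, and the function names are my own – the model itself prescribes no particular calculation:

```python
# Hypothetical pre-test and post-test scores (percent) for one
# microlearning lesson, one entry per learner.
pre_scores = [55, 60, 40, 70, 65]
post_scores = [80, 85, 70, 90, 75]

def average(xs):
    return sum(xs) / len(xs)

# Level 2 (Learning): average improvement from pre-test to post-test.
learning_gain = average(post_scores) - average(pre_scores)

# Level 4 (Results): percent change in a KPI relative to its
# pre-training baseline.
def kpi_change(baseline, post_training):
    return (post_training - baseline) / baseline * 100

print(f"Average learning gain: {learning_gain:.1f} points")
print(f"KPI change: {kpi_change(42, 51):.1f}%")  # prints "KPI change: 21.4%"
```

The same pattern scales to any pre/post measurement: collect the baseline before the training starts, repeat the measurement at the intervals chosen in the “When are we evaluating?” question, and compare.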

Now let’s see how this framework can be turned into a practical evaluation format for L&D professionals.

Kirkpatrick’s Evaluation Format for Learning Professionals

The evaluation format given here is only indicative, and can be customized to your learning evaluation needs. The content in BLUE is for reference only, and may be deleted when creating your own evaluation format.

What’s getting evaluated? [Mention the exact nature of the evaluation]
  …and so on

Why evaluate? [Mention the issues, lacunae, problem areas, and pain points that prompted the evaluation]
  …and so on
1. Reaction
   • Who’s being assessed? The learner
   • Where is it done? Online
   • When should you assess? Immediately after the microlearning lesson
   • How do you evaluate? A quick survey or rating (Likert scale or stars)

2. Learning
   • Who’s being assessed? The learner
   • Where is it done? Online
   • When should you assess? Self-assessment by the learner within each lesson; assessment of learning at the end of the lesson
   • How do you evaluate? Scenarios or videos (or other graphic formats) with multiple-choice questions; graphics, animations, or gamified role plays; completing a report for a topic-related scenario; true/false or multiple-choice questions for self-assessment

3. Behavior
   a. The learner
      • Where is it done? Online
      • When should you assess? Monthly or quarterly after the training
      • How do you evaluate? Surveys with Likert-scale or open-ended questions; topic-related assessment questions
   b. The manager
      • Where is it done? Online
      • When should you assess? Periodically, for each microlearning session
      • How do you evaluate? Surveys with ‘yes-no’ (polar) and open-ended questions; options for managers to share progress and results

4. Results
   • Who’s being assessed? The manager
   • Where is it done? Online
   • When should you assess? Quarterly, for each microlearning lesson
   • How do you evaluate? Surveys, with options to share trends or progress for long-term training effectiveness
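To illustrate the Reaction row above, here is a minimal sketch of summarizing a post-lesson Likert-scale survey. The responses are hypothetical, and the “top-two-box” favorable-share summary is one common convention, not something the format mandates:

```python
from collections import Counter

# Hypothetical 5-point Likert responses from a post-lesson reaction survey.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

average_rating = sum(ratings) / len(ratings)
distribution = Counter(ratings)  # how many learners gave each rating

# Share of favorable responses (4 or 5), a common "top-two-box" summary.
favorable = sum(1 for r in ratings if r >= 4) / len(ratings)

print(f"Average rating: {average_rating:.1f}/5")   # prints "Average rating: 3.9/5"
print(f"Favorable (4-5): {favorable:.0%}")         # prints "Favorable (4-5): 70%"
```

Tracking these two numbers lesson by lesson gives an early signal of which microlearning assets learners respond to, before the slower Level 3 and Level 4 data arrives.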

The Kirkpatrick evaluation format tells us that evaluation needs to be an ongoing, long-term process. Measuring the effectiveness of microlearning initiatives is not a one-time activity!

By measuring effectiveness over time, you can establish performance baselines for your learners as well as your learning initiatives.

This is useful for the management team as well. It enables you to communicate effectively with your stakeholders, and helps you gain their trust and confidence.

To conclude, measuring microlearning effectiveness with the Kirkpatrick evaluation framework not only earns you management support, but also gets management to recognize your contribution.

