It’s not just the L&D team that wants to know whether the training programs it developed and delivered were successful—others do as well, including those in the C-Suite.
Evaluating training results should go deeper than a simple post-course exit poll asking learners how they felt about the training; thorough evaluation is important for organizational success.
Unfortunately, more than two-thirds of companies say that an inability to measure learning’s impact represents a challenge to achieving critical learning outcomes, according to a survey conducted by learning industry research firm Brandon Hall Group.
And as the maxim often attributed to Peter Drucker, the “father” of modern management theory, goes: “if you can’t measure it, you can’t improve it.”
No organization should continue an effort it has no way to measure or improve. So what can today’s organizations do to better evaluate training and thereby improve overall business results and ROI?
Learn how to leverage operational data and insights for value and impact. Download this free eBook to learn which operations data to identify, access, and analyze for organizational success.
Enter the Kirkpatrick Model
The Kirkpatrick Model is the best-known model for evaluating the results of training and development programs. It sounds like something known only to L&D insiders, but many of its components are sound business and operational practices that would likely be in place anyway, even when no L&D effort is being evaluated.
The four levels of the Kirkpatrick Model are:
Level 1: Reaction
This is the most basic level, measuring how participants felt about the course after the training. These are simply opinions: whether learners felt the experience was satisfying, valuable, or worth their time. Questions at this level might also cover how the learner felt about the trainer or facilitator, and even the course’s look and feel, design, or multimedia elements.
This level might seem superficial: it is entirely self-reported, and it cannot by itself reveal the impact of your training. Still, there is value here. Level 1 is an assessment of learner engagement, which your instructional designers can note and incorporate into future projects.
Moreover, a generally positive Level 1 reaction can indicate that the learner will want to put the training into action, leading to the changes in behavior measured at the higher levels of the model. A negative learning experience, and the negative Level 1 reaction that follows, gives the learner little incentive to do anything with what they just learned.
Level 2: Learning
This level analyzes whether learners truly understood the training, as seen in any increase in knowledge or acquisition of skills.
This can be done in several ways, depending on the type of training. For example, an annual compliance course required by an industry or government regulator might only need an assessment at the end of the training, with a passing or sufficiently high score meeting the needs of this level.
However, for training on a piece of software, especially when learners bring varying degrees of mastery to it, L&D might administer an assessment at the beginning of the training and then deliver the same or a similar assessment at the end of the course, to see how much people actually learned, or even needed to learn.
In this way, Level 2 of the Kirkpatrick Model determines whether the training did its job at the content level: how much of the material learners knew beforehand, what they learned from the training, and whether the course delivered anything new or noteworthy.
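The pre/post comparison described above boils down to simple arithmetic on assessment scores. As a minimal sketch, assuming hypothetical learner records and score fields (none of these names or numbers come from the article):

```python
# Sketch: comparing pre- and post-training assessment scores (Kirkpatrick Level 2).
# The learner records below are hypothetical illustration data.

def average_gain(records):
    """Return the mean score improvement across learners."""
    gains = [r["post"] - r["pre"] for r in records]
    return sum(gains) / len(gains)

learners = [
    {"name": "A", "pre": 55, "post": 85},
    {"name": "B", "pre": 70, "post": 90},
    {"name": "C", "pre": 90, "post": 92},  # already knew most of the material
]

for r in learners:
    print(f"{r['name']}: {r['pre']} -> {r['post']} (gain {r['post'] - r['pre']})")

print(f"Average gain: {average_gain(learners):.1f} points")
```

A high pre-training score with a small gain (learner C above) is exactly the “how much did they need to learn?” signal the paragraph describes.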
Level 3: Behavior
Kirkpatrick’s Level 3 goes even deeper than Level 2 and seeks to understand whether employees are actually using what they learned—and how L&D and the business can measure it.
Measuring behavioral changes is very much a challenge for learning leaders. As such, they will need to lean on the business units to deliver feedback. There might be business metrics that can prove or disprove changes in the learner’s behavior: an increase in contacts made, an increase in customer care tickets closed, a faster turnaround on a project, or any key performance indicator.
While these metrics are important, other behavioral changes may be worth noting, so managers and co-workers might also be involved in reporting any post-training changes in an employee’s behavior.
Level 4: Results
This is the highest level and goes beyond reactions, knowledge, skills, and behaviors. Level 4 determines whether the training results had a positive impact on the business or overall organization.
This might take some time to get up and running, especially for an organization new to evaluating training results. Establishing systems to report on employee performance, particularly after training has been completed, will require cross-functional teams.
While a performance boost or a newly acquired skill is welcome when evaluating Level 3, these positive changes must be translated into business results. Managers will need to connect them to an operating metric that ultimately drives Level 4 results.
Oftentimes, achieving Level 4 measurement means setting goals before the training is even developed. In this way, everyone involved, from instructional designers and subject matter experts to learners and the business unit, is aware of how the training will ultimately affect the organization as a whole.
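Once a training outcome has been tied to an operating metric with a monetary value, the ROI arithmetic itself is straightforward. A minimal sketch, assuming hypothetical dollar figures (the standard net-benefit-over-cost formula, not a calculation from the article):

```python
# Sketch: a basic training ROI calculation, building on Kirkpatrick Level 4
# results. All dollar figures are hypothetical.

def training_roi(monetary_benefit, program_cost):
    """ROI as a percentage: net benefit relative to program cost."""
    return (monetary_benefit - program_cost) / program_cost * 100

benefit = 150_000   # e.g., estimated value of productivity gains from training
cost = 60_000       # design, delivery, and learner-time costs

print(f"Training ROI: {training_roi(benefit, cost):.0f}%")
```

The hard part, as the article notes, is not this formula but attributing the benefit figure to the training in the first place.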
In this recent webinar, award-winning learning measurement expert Paul Leone, PhD, shared a practical framework for measuring and showcasing the business value of training.
Evaluating training results requires data
Unfortunately, while organizations understand the need to evaluate the impact of their training, most simply don’t do it.
Brandon Hall Group found that only about one-third (34 percent) of employee learning leaders measure Level 1 of the Kirkpatrick Model, and measurement drops at each subsequent level: only 16 percent measure Level 2, 5 percent measure Level 3, and 3 percent measure Level 4.
Clearly, there is an opportunity in measurement. Brandon Hall found that the biggest challenge to measuring learning impact is a lack of proper metrics, cited by 42 percent of respondents.
However, nearly all—91 percent—see the need to measure.
The tools exist, but L&D cannot do it alone. Organizations are starting to bring in Learning Operations Managers to help L&D uncover results using tools it did not have access to in the past.
With access to the right data, leaders can better evaluate training effectiveness and the organization will see greater ROI.
Right now, L&D teams are facing unprecedented challenges and opportunities. Learn how to overcome the former and leverage the latter in this free guide:
The Complete Guide to L&D Trends and Challenges in 2022