Learning measurement is a bit like flossing your teeth. Almost everyone–including executives, L&D team members, and other stakeholders–agrees it’s a good idea, but very few L&D teams do it consistently well. As a result, learning effectiveness measurement has often been viewed as a liability–the missing link in otherwise effective training initiatives. Learning designers worked hard to develop effective training programs. Learners said the training was enjoyable. Beyond that, there was little or nothing to connect training to results.
Now, thanks to a new emphasis on the strategic importance of a comprehensive L&D program, learning measurement strategy is becoming an asset that empowers L&D to verify its value to the company. So what are the key aspects of turning training measurement into one of L&D’s biggest assets?
Understanding the benefits of in-depth learning measurement
In-depth learning measurement offers several distinct benefits.
Identifying whether training is working at all
At the most basic level, learning effectiveness measurement tells you whether training actually produced learning. Was the goal of the training to help new hires correctly complete the onboarding paperwork within a certain length of time and with virtually no errors? Did the training initiative target a 15% increase in departmental sales volume, or a 50% reduction in on-the-job injuries? After the training, what happened? The adage "inspect what you expect" applies to learning professionals: measure outcomes so you can justify the time and effort spent on training.
Fine-tuning the mix of training provided
L&D departments that measure the learning that results from their training can move beyond "Our training was effective" to "These aspects of our training were the most effective, and these need to be modified or removed." Eventually, companies that continue to fine-tune their L&D efforts provide learning that fits specific individuals while contributing to the overall objectives of the firm.
Justifying the validity of your training budget
In a post-pandemic world where L&D budgets are being slashed, it's imperative to demonstrate that training funds were well spent. According to a 2021 study by the Brandon Hall Group, 47% of the companies polled cut their L&D budget after the pandemic.
Furthermore, 26% of those cuts reduced the L&D budget by more than half. The L&D department that follows training initiatives with in-depth measurement has an objective means to support its value proposition with statements like, "Last year, L&D training initiatives resulted in a 10% increase in sales and a training ROI of 36%." Those that rely on nebulous data may not survive the budgetary cutting block.
Determining what to measure for learning initiatives
If you're convinced that learning measurement is necessary, but confused about exactly what to measure in your learning objectives, you're not alone. According to industrial and organizational psychologist and training measurement expert Dr. Paul Leone, "Measuring and reporting whether training 'worked' is the most important part of the training journey." Yet Dr. Leone's experience indicates that the majority of companies don't measure learning results because they don't know how or what to measure.
Several frameworks provide a learning measurement strategy for L&D. One of the best-known and most widely used is the Kirkpatrick Model, which measures training on four levels: Reaction, Learning, Behavior, and Results.
Level 1–Reaction
Employees who took the training provide feedback that tells how they feel about it. Was it enjoyable? Was it helpful? Was the instructor effective? Easy to follow? Well-prepared?
Typically, L&D teams use brief surveys, smile sheets or other means to quickly gather this information. It provides the first round of post-training data that can be used to measure learning effectiveness.
Level 2–Learning
This level relies on feedback from assessments provided by L&D at various times. An initial assessment might provide a benchmark of where learners are prior to training. A post-training assessment given immediately after the training evaluates whether the training had a short-term impact. Assessments given a month or more after the training evaluate long-term learning and provide L&D with comparison data.
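The benchmark-and-comparison idea above reduces to simple arithmetic on assessment scores. A minimal sketch, using entirely hypothetical scores (none of these numbers come from the article):

```python
# Hypothetical assessment scores (0-100) for one cohort of five learners;
# all numbers are illustrative.
pre_training = [55, 62, 48, 70, 58]    # benchmark taken before training
post_training = [80, 85, 72, 90, 78]   # taken immediately after training
follow_up = [74, 80, 65, 88, 70]       # taken one month later

def average(scores):
    return sum(scores) / len(scores)

# Short-term impact: immediate post-training score vs. the benchmark.
short_term_gain = average(post_training) - average(pre_training)

# Long-term learning: how much of the gain survives a month later.
long_term_gain = average(follow_up) - average(pre_training)

print(f"Short-term gain: {short_term_gain:.1f} points")
print(f"Long-term gain:  {long_term_gain:.1f} points")
```

Comparing the two gains shows how much of the initial learning was retained, which is exactly the comparison data the delayed assessments provide.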
Level 3–Behavior
Levels 1 and 2 focus on "paper" or memory learning feedback provided by the people who participate in the training. In contrast, Level 3 evaluates how the training affects employees' work habits. Did they internalize the training so that it changed their behavior? This level requires input from managers and co-workers in addition to the employees themselves.
Level 4–Results
Level 4 looks beyond specific L&D programs and participants. At the Results level, the Kirkpatrick Model evaluates whether or not the overall training program positively impacted the company's key performance indicators. For example, did the sales training lead to an increase in sales, both overall and per salesforce member? Did safety training produce fewer workers' comp claims and save the company money? Obviously, results like these are easier to quantify than something like employee satisfaction that leads to lower turnover and higher employee loyalty.
Dr. Leone advocates a model that extends the Kirkpatrick model of learning measurement and evaluation by adding two steps.
Level 5–ROI
Level 5 uses ROI to answer the question of whether an L&D initiative provides sufficient return to the company to justify the investment in it.
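That return-on-investment question comes down to the standard ROI formula: net benefit divided by cost, expressed as a percentage. A minimal sketch with hypothetical dollar figures (chosen here to land on the 36% figure quoted earlier):

```python
def training_roi(monetary_benefits, program_costs):
    """Standard ROI formula: net benefit as a percentage of program cost."""
    return (monetary_benefits - program_costs) / program_costs * 100

# Hypothetical figures: a program costing $100,000 that produced
# $136,000 in measurable benefits (e.g., added sales margin).
roi = training_roi(136_000, 100_000)
print(f"Training ROI: {roi:.0f}%")
```

The hard part in practice isn't the arithmetic; it's isolating and converting the training's benefits into a credible dollar figure in the numerator.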
Level 6–Climates that Maximize
This level looks outside the training itself to evaluate the work situations that affect the efficacy of the training. Are managers allowing their employees to utilize what they learned? Are they responsive? Does upper management communicate the value of corporate training?
The Kirkpatrick Model, and Dr. Leone's extension of it, are just two possibilities. Cognota's operations platform for learning teams includes an Insights functionality that enables you to discover trends, identify focus areas, and understand the impact of your learning investments. Try a free trial or book a demo to see for yourself.
Gathering reliable data
Establishing top-notch training is one thing, but stakeholders expect learning teams to effectively verify that the training is top-notch. Quoting Dr. Leone again: "92% of business leaders and executives WANT to see training RESULTS," but fewer than 10% actually do. In many companies, that gap is driven by inadequate data collection and measurement, even though L&D teams know the value of timely, accurate data.
One of the difficulties in data gathering is the subjectivity of the data gathered from participants. Take, for example, this classic post-training question: "Was this training worthwhile?" So many factors affect that answer. No two participants will consider the same factors in the same way or weigh them equally.
If you’ve been asked to evaluate any training course, you’ve probably seen some version of these two, as well: “Did the instructor explain the concepts adequately?” “Was the instructor well-prepared?” Even if the questions are multiple-choice and provide some structure for the responses, “adequately” and “well-prepared” are wildly subjective.
The thorny issue here is that participants are the key to gathering useful data. Their input is essential; learning designers need it desperately. So effective data gathering relies on multiple sources of information gathered at different times and in different ways. In addition to the traditional interviews given immediately after training, savvy L&D departments also know the importance of:
- Interviewing participants before and after the training.
- Administering written or verbal assessments after the training.
- Observing participants as they perform the task(s) addressed by the training. Pre-training observations provide a benchmark, and post-training observations signal whether or not the training changed behavior.
- Participating in focus groups that permit group interaction.
As your L&D team gathers data, they need to remember the 5 Key Principles of Good Data:
- Velocity: Data must be gathered and analyzed at the right speed.
- Variety: Different types of data will be needed to tell the overall learning and performance story.
- Veracity: Data must be trustworthy and free of bias and disruptive outliers.
- Volume: The appropriate amount of data must be collected to enable the holistic measurement strategy.
- Value: Data must be selected for collection and analysis based on its ability to foster the right questions and deliver value to the business and employees.
Data gathering that follows these guidelines is far more likely to yield measurable, reliable data.
Measuring the impact of learning
Knowing how to measure learning and its impact starts by going back to basics and asking,
“What goal motivated this initiative?”
Was the primary goal to:
- Change behavior or increase skill levels at the individual learner’s level?
- Improve the efficiency, effectiveness or cohesion of a team (or teams) so that they deliver higher sales, fewer compliance issues, faster delivery times, or lower turnover?
- Support key business objectives of the organization, such as increasing ROI, improving market share, or fostering a more inclusive culture?
You'll need to consider the purpose of the training in order to prioritize and analyze the data you've gathered. The process requires collaboration among team members and the savvy to select and utilize data that fully and accurately presents L&D's overall impact.
Importance of measuring the operational efficiency and internal performance of the L&D department
It’s worth reiterating that measuring the operational efficiency and internal performance of the L&D department will become increasingly important as:
1) Business leaders expect L&D to justify–and quantify–their results; and
2) Budgets for L&D continue to shrink.
Beyond that, training for the sake of training simply isn't good enough anymore. As more and more companies build L&D into their strategic planning, executives will need to understand and utilize the concept of LearnOps in order to consolidate important processes, data sources, and tools for stakeholders across the corporation.
Effective LearnOps leads to better experiences for learners, more efficient use of resources, and L&D systems that mesh with corporate strategy and short-term goals.
Expanding the definition of learning measurement
Effective learning measurement and evaluation includes gathering, organizing, analyzing, and utilizing diverse types of data from many sources. Many L&D teams focus on gathering performance data to measure the impact of training—but what about the internal performance of L&D?
Do you know how many training requests you receive each quarter? Or where in the business they come from? How many of those requests lead to a training initiative and how many go unanswered because your team lacks the resources to fulfill them?
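Even without a dedicated platform, a team can start answering these questions with a simple tally of its request log. A minimal sketch, using a hypothetical list of requests (quarter, business unit, and whether the request was fulfilled):

```python
from collections import Counter

# Hypothetical request log: (quarter, business unit, fulfilled?) tuples.
requests = [
    ("Q1", "Sales", True),
    ("Q1", "Ops", False),
    ("Q2", "Sales", True),
    ("Q2", "HR", False),
    ("Q2", "Sales", False),
]

# How many requests arrive each quarter, and from where in the business.
per_quarter = Counter(quarter for quarter, _, _ in requests)
per_unit = Counter(unit for _, unit, _ in requests)

# How many go unanswered for lack of resources.
unfulfilled = sum(1 for _, _, fulfilled in requests if not fulfilled)

print("Requests per quarter:", dict(per_quarter))
print("Requests per unit:   ", dict(per_unit))
print(f"{unfulfilled} of {len(requests)} requests went unfulfilled")
```

Even a rough tally like this turns "we're under-resourced" into a concrete, reportable number for stakeholders.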