Choosing Learning and Development KPIs for Organizational Impact


Measuring the impact of your learning and development initiatives is critical—not only for your programs’ continuity and departmental success, but also for the advancement of the organization.

While it is generally accepted that organizations of all sizes must constantly be training their employees, understanding what’s working and what’s not can be a challenge. 

Organizations want to ensure that their employees’ skills are improving and that they are receiving the proper knowledge and support tools to excel in their roles. 

What is a learning and development KPI?

Learning and development KPIs, or key performance indicators, are quantitative measurements that can inform L&D, learning stakeholders, senior management, and even learners on program success.

And as the adage often attributed to Peter Drucker, the “father” of modern management theory, puts it: “if you can’t measure it, you can’t improve it.”

Unfortunately, more than two-thirds of companies say that an inability to measure learning’s impact represents a challenge to achieving critical learning outcomes, according to a survey conducted by learning industry research firm Brandon Hall Group.

As such, selecting, implementing, and managing KPIs is important for keeping everyone on track, with an eye on the impact of training on the organization’s bottom line.

The importance of learning and development KPIs

Without KPIs for learning and development, no one will be able to agree whether a learning experience was successful. While qualitative opinions can vary, the absence of a system to measure program attributes will lead to confusion and frustration—not only for learning leaders but ultimately for learners, who might receive poorly designed courses because previous processes weren’t measured properly.

KPIs hold value for the following individuals and groups:

Instructional designers, who can better understand which pieces of content resonated best with learners and which did not.

Learning leaders, who can understand quantitatively which courses proved successful with learners and which did not.

Learning business partners, who can determine the value of training when they see its impact on both individual and team job performance.

Senior management, who can see the financial impact of trained employees with new skill sets and increased knowledge.

Strategies for selecting the right KPIs for your organization

Just as every organization is different, so is every L&D department, and so the right KPIs will differ as well.

However, there are some overall strategies that can be used to help you and your team select the optimal KPIs for your organization.

Ensure that the training can solve a business problem

When you can connect the training to an outcome that will solve a business problem, you have found a perfect KPI to measure.

For example, successful customer care agent training would produce better-trained agents who, after completing the course, deliver swifter resolutions and more satisfied customers.

L&D could work with the business unit to determine whether tickets closed faster or customer satisfaction scores increased after agents underwent training. These metrics can be recorded for each agent and for the department as a whole. In this way, the KPIs are directly tied to business success.
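As a rough sketch of how such KPIs could be computed (the agents, ticket records, and figures below are entirely hypothetical), average resolution time and customer satisfaction can be compared before and after training:

```python
from statistics import mean

# Hypothetical ticket records: resolution time in hours and a 1-5 CSAT
# score, logged before and after agents completed the training.
tickets = [
    {"agent": "A", "period": "pre",  "hours": 9.0, "csat": 3},
    {"agent": "A", "period": "post", "hours": 5.5, "csat": 4},
    {"agent": "B", "period": "pre",  "hours": 7.0, "csat": 4},
    {"agent": "B", "period": "post", "hours": 4.0, "csat": 5},
]

def kpis(records):
    """Average resolution time and CSAT score for a set of tickets."""
    return {
        "avg_hours": mean(t["hours"] for t in records),
        "avg_csat": mean(t["csat"] for t in records),
    }

# Department-wide comparison, pre- vs. post-training; filtering on
# "agent" instead gives the same KPIs per agent.
pre = kpis([t for t in tickets if t["period"] == "pre"])
post = kpis([t for t in tickets if t["period"] == "post"])
print("pre:", pre)
print("post:", post)
```

In practice, this data would come from the ticketing or CSAT platform rather than a hard-coded list; the point is that the KPI is a simple aggregate over records the business unit already collects.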

Confirm that the data is available   

Wanting to measure everything and anything is a noble pursuit, but it’s important to verify that the data is available before deciding to include that KPI.

An example of this might be L&D’s desire to test the relevance of recent compliance training. Aside from learner satisfaction feedback, or knowledge retention scores collected after the conclusion of the course (i.e., a quiz), management might want to measure just how far the compliance training might take hold across the organization. They may want to measure fewer legal mistakes, or fewer IT security incidents; however, this data might be difficult to collect regularly at scale.

Test and iterate

Finally, you need to build in some flexibility regarding L&D KPIs. Some might seem great at the outset but prove difficult in practice. Others might elicit friction or resistance from certain teams or departments who feel they must deliver the numbers just to “tick off boxes” or satisfy a goal they do not share.

One sign that a KPI isn’t working is when the numbers don’t seem to move, or no discernible pattern emerges that would tell L&D or senior management whether the program succeeded or failed.

Mistakes to avoid when selecting KPIs

While testing and iterating on the right KPIs to use is certainly a strategy for determining the right mix, there are also some mistakes to avoid when selecting KPIs.

1. Selecting vanity metrics

Every individual or organization wants proof that their efforts are making a difference and making a positive impact on the team, department, or even organization as a whole. 

Oftentimes, individuals or teams choose “vanity” metrics to accomplish this: data that indicates positive performance (a percent increase in something) without realizing that what is being measured isn’t truly impactful.

For example, L&D may have access to an LMS showing the number of course completions. The team might notice a steady increase in the number of people taking the course. But this might not necessarily translate into behavior changes or performance improvements.

2. Ignoring negative results

A practice related to selecting vanity metrics includes ignoring negative performance to ensure that only positive results are displayed and shared with others. 

While few people would welcome disappointment, it’s important for learning leaders to incorporate metrics that might deliver bad news—not just at the beginning but moving forward as well.

Negative performance can show the team, both within L&D and in the business units, what needs to be remedied in order to keep improving the L&D function and, ultimately, the total learning experience for employees.

If only positive numbers result, others, including senior management, might suspect that the KPIs were not set up properly, or worse, that poor performance is being hidden by learning leaders on purpose.

3. Doing what other companies are doing

For L&D organizations new to selecting KPIs, it might make sense to choose metrics that other companies in the industry are using. These can serve as a good start; however, testing and iterating to decide the best activities to measure is in order, as each organization has unique processes, needs, and resources.

Teams might be hesitant to begin data collection on certain learning-related behaviors, such as course completions or assessment scores, only to cease collecting that data months later. Remember that KPIs are meant to measure behaviors that will eventually impact the overall performance of the organization. “Smaller” KPIs can be left behind if it is later determined that data is available on more meaningful and relevant outcomes.

4. Waiting too long to measure 

When a new measurement tool is introduced, or a new process is launched, the team might be hesitant to begin collecting performance data. Perhaps the team is waiting for more learners to complete a course, or they are waiting for more learning partners to complete training intake requests. However, this could be detrimental.

Early data collection demonstrates to everyone that measurement is important. As long as data is being collected, eventual tweaks and improvements can be made to the courses, assessments, support materials, and the like.

Don’t worry if it isn’t perfect. As mentioned previously, iteration and adjustments to measurement processes are key to finding the perfect fit that works for L&D leaders and business partners alike.

Examples of learning and development KPIs

Still not sure what to measure? Just as there are tried and true instructional design models to help L&D build courses, there are tried and true evaluation models to help learning teams measure the results of their work.

The Kirkpatrick Model is the best-known model for evaluating the results of training and development programs. Some might be hesitant to lean on an evaluation model developed strictly for learning industry practitioners, for fear that what is measured might not be relevant to those outside of the learning function.

However, the flexibility and adaptability of the Kirkpatrick Model have helped it endure, making it useful and relevant for other departments across the organization. 

Below is a summary of the four levels of the Kirkpatrick Model for learning measurement.

Level 1: Reaction

At this most basic level, learners are asked how they felt about the training they just completed. Questions might be as simple as:

  • Did you enjoy it?
  • Did you feel that it was valuable?
  • Did you feel that it met your expectations?
  • Did you feel that the instructor did a good job?

These questions, usually delivered via a simple post-course questionnaire, are only intended to scratch the surface. Level 1 does not tell L&D or learning managers whether the course actually fulfilled its objective and delivered knowledge or skills to the employee; the later levels do that.

However, the value of Level 1 is that it gives the L&D team an idea of how the learning experience was received, and how the team might be able to improve it.

Further, sentiment could provide an indication of the likelihood that the learner will actually implement what they just learned. 

Level 2: Learning

This next level moves on from Level 1, seeking to determine whether any skills were acquired or learning was absorbed. 

To determine this, instructional design teams will usually deliver assessments at the end of a course, in a quiz-like format to test just how much content the learner retained. 

However, Level 2 can be delivered in more creative, strategic ways to determine knowledge or skill acquisition. One variation is to deliver identical assessments, one at the beginning of the course and one at the end. An increase in correct answers on the second assessment indicates that learning took place.

However, a surprise might be when responses are identical in both, meaning that learners either learned very little or knew much of the material beforehand, and the course did not deliver anything new or noteworthy.
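A minimal sketch of that pre/post comparison, using invented answers for one learner on a five-question assessment:

```python
# Hypothetical pre- and post-course answers for one learner on the same
# five-question assessment (True = correct answer).
pre_answers  = [False, False, True, False, True]
post_answers = [True,  True,  True, False, True]

pre_score = sum(pre_answers)    # correct answers before the course
post_score = sum(post_answers)  # correct answers after the course
gain = post_score - pre_score

# A positive gain suggests learning took place; a gain of zero suggests
# the course added little, or the learner already knew the material.
print(f"pre={pre_score}/5 post={post_score}/5 gain={gain}")
```

Averaging the gain across all learners in a cohort turns this into a course-level Level 2 KPI.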

Level 2 can also be delivered on a continuing basis, perhaps one week or one month after the training, or even quarterly or annually. This also addresses the forgetting curve and can be a true measure of the ability of L&D’s courses to meet the needs of learners.

Level 3: Behavior

Kirkpatrick’s Level 3 goes even deeper than Level 2 and seeks to evaluate whether employees are actually applying what they learned in their jobs.

This level typically requires time for new behaviors to settle in. Further, this type of measurement requires L&D to work closely with the employee’s co-workers, managers, and others with whom the learner regularly works in order to garner the requisite data. If the training course has had the desired effect, it will be noticeable to everyone involved.

“Behavior can only change if the conditions for it are favorable,” notes Ashleigh Hull in eLearningIndustry. “Will the boss actually let your participant apply their new knowledge? Is there a tool or a system that has not been put in place? Does the learner have any desire or incentive to apply the learning? And what can be done to remedy these situations?”

Level 4: Results

Kirkpatrick’s final level of evaluation determines whether the learning experience positively impacted the organization as a whole. This relies on goals being set—and data collection processes put in place—before the development of the training. What changes were managers looking for? How is success defined? 

Unlike in Levels 1, 2, and 3 of the Kirkpatrick evaluation model, the learner would not be involved in supplying this data for the instructional design team or even for their manager. Level 4 happens “behind the scenes” to some extent: the organization would need processes in place to capture the benefits of the employee’s improved performance and newly acquired skills.

Examples of KPIs for learning and development, especially as they map to the different levels of the Kirkpatrick Model, include:

Employee satisfaction and engagement 

This is Level 1, where surveys post-training are issued to learners to gauge their satisfaction with the material that was just presented to them. 

While some feel that this is superficial and of little use to the organization’s overall performance improvement, strong satisfaction feedback is the first step in determining whether the learning was meaningful to employees; learners who feel engaged and energized are more likely to apply what they learned on the job.

These types of evaluations can be quantified (“On a scale of 1 to 10, how satisfied were you…”), so that the L&D team can score the satisfaction rates of their courses.
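For instance, quantified Level 1 responses could be rolled up into a per-course satisfaction score (the course names and responses below are invented):

```python
from statistics import mean

# Hypothetical post-course survey responses, 1 (poor) to 10 (excellent).
responses = {
    "Security Awareness 101": [8, 9, 7, 10, 8],
    "New Manager Basics":     [5, 6, 4, 7, 5],
}

# Per-course satisfaction score: the mean response, reported out of 10.
satisfaction = {course: mean(scores) for course, scores in responses.items()}

for course, score in satisfaction.items():
    print(f"{course}: {score:.1f}/10")
```

Tracked over time, these per-course averages let L&D compare courses against one another and spot when a revision improves (or hurts) learner satisfaction.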

Employee satisfaction and engagement are still very much important to learning leaders. According to LinkedIn Learning’s 2022 Workplace Learning Report, qualitative feedback from employees and employee engagement survey scores are the #1 and #2 ranked methods for measuring the impact of skill-building programs at organizations.

Employee knowledge and skill retention

Also delivered post-training are assessments to determine how much the employee learned while participating in the training. 

Correct and incorrect answers can be tallied, and final scores can be issued, perhaps on a recognizable scale, such as 0 to 100.

As noted above, these don’t need to be delivered only once, immediately post-training. They can be repeated over time, and L&D can observe any changes in performance. These successive evaluations can serve as a strong KPI for the learner’s ability to apply what they learned on the job.

Application to role and customer impact

This Kirkpatrick Level 3 KPI will require participation from several teams in order to collect the right data, but would be of interest to more senior managers seeking more concrete results demonstrating the impact of L&D programs.

Data-driven organizations most likely have systems in place to collect this data, such as platforms that capture time-tracking, support tickets assigned and closed, project or product contributions, and the like, in order to connect employee performance to the successful completion of tasks.

Impact on organizational metrics

For companies well underway in capturing performance-related data, setting up KPIs to understand impact on the organization as a whole would seem like a natural next step. 

Senior management of such organizations most likely has channels in place to collect employee performance data. KPIs at this level typically feed that lower-level data into a dashboard-like platform that enables a top-down review of how employees and entire teams have improved as a result of training and subsequent assessments.

Measure the success of L&D with learning operations software 

Understanding what’s working and what’s not can be difficult in fluctuating environments. However, the right technology can instantly streamline existing processes and formalize those that were previously ad-hoc and scattered across disparate tools, providing the all-important KPIs and analytics that can help you and your team better understand success.

As the first and only operations platform built specifically for L&D teams, Cognota’s LearnOps platform brings your entire workflow from intake to design to project management into one platform. Not only does this save time and create cost efficiencies, but you’ll get auto-generated insights into the performance of your team and the learning needs of your entire organization. 

Check out Cognota in action with a free trial or by speaking with the sales team to book a demo.
