CLO Series: Bror Saxberg of Kaplan

Cognota® has recently kicked off a series of interviews with current and former Chief Learning Officers and senior learning professionals. We are pleased to share the results here in our blog.

We were recently fortunate enough to catch up with Bror Saxberg, CLO of Kaplan and Board Advisor at Cognota®. This blog post incorporates his responses to questions about the challenges of designing learning systems, taking advantage of trends, and measuring ROI.


Saxberg shares the three biggest challenges he faces when designing learning experiences:

(1) Deciding which learning outcomes truly match what top experts decide and do.  From cognitive science research, “the majority of expertise isn’t available to be expressed verbally by experts anymore — they don’t even know all that their minds do!” explains Saxberg. “That makes it tricky to try to be very clear about what the right learning outcomes are.”  

(2) Getting valid and reliable evidence for learning outcomes.  Saxberg explains that this is not what most faculty or trainers have experience doing. “They are more focused on the training itself, rather than gathering high quality evidence,” he adds. There are known ways to design assessments that look for the real impact of learning outcomes, but doing so is expensive, and it is atypical to get support for it.

(3) Getting the right balance of investment against learning outcomes.  “Often there’s pressure to do something ‘cool’ rather than focus on something effective, from learners, managers, and more, yet sometimes cool gets in the way,” explains Saxberg. Given this additional pressure, it is difficult to spend enough time and effort designing learning environments correctly. Saxberg suggests that learning leaders determine which learning outcomes are the hardest or most important, then ensure that the ‘learning engineering’ around those is given the best effort.  “For the rest, it can be OK to lighten up, or use cool but less effective methods, depending on the specific need you have,” suggests Saxberg.

Saxberg reveals that his team develops mostly in-house, but sometimes uses outside developers as a load-balancing device, though the team prefers in-house work. “We’ve trained our own folks to be learning engineers, with the ability to apply best practices in learning science and learning measurement to resolve learning and business challenges,” explains Saxberg. “There aren’t many outsourcers like that out there.”

As for the biggest difficulty when designing learning, Saxberg feels that often neither instructional designers nor general managers/businesspeople know enough about how expertise and learning really work in order to prioritize investments in learning the right way. “At the top level, this means general managers and their learning counterparts aren’t sure how to decide how valuable skills are — how much it would be worth for high value, high volume, and high variability jobs to get the median performer to behave more like a top performer,” explains Saxberg.

As a result, the investment discussion in learning is reduced to a pure cost discussion, rather than one that also incorporates the returns from different methods. At the daily instructional design level, this means many designers do the work they’ve been trained to do, but are not necessarily aware of the research underpinnings that could either support what they’re doing or suggest a more effective or efficient activity.

Saxberg feels that this also gets in the way of technology use: instead of thinking through what change in learning is needed, and then selecting the technologies that make that change more reliable, available, affordable, data-rich, and personalized, “folks are picking technologies because they are the latest,” admits Saxberg.

When asked if his learning team is sufficiently agile to take advantage of rapid shifts in corporate L&D, Saxberg explains, “We’re training our learning team in the principles of how learning and learning measurement are actually best done, and then leaving them to use these principles to problem-solve within the constraints of their own learning needs within their businesses.”

“It is very much like Stanford D-School work, or other best-practice engineering:  you want the principles to be known, folks to have experience applying them, and then give them scope to use them to resolve the constraints in each learning problem as needed,” he adds.


Can you measure ROI?  Saxberg explains that, ideally, you first identify what value a given job category adds: What additional funds come in when the job is done well?  What are the costs when errors occur?

Benchmarking can help. If you have enough people in a job category, you can use their performance data to identify your top performers and see how well the job can practically be done in your organization.  By estimating how much value you could add by moving most of the people in that job category closer to that level, you can then gauge the potential of a training initiative.

Once you’ve completed the training (with pilots along the way), you can then look before and after, both at the key behaviors you were trying to get people to adopt (implementation fidelity), and at the improvements in value added.  You can then compare the costs of developing the training (relatively fixed cost) and delivering the training (variable costs), against the value added across the targeted population. 
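To make the arithmetic concrete, here is a minimal sketch of the before-and-after comparison described above. The function name, the figures, and the simple linear value model are hypothetical placeholders, not Kaplan’s method; the point is only to show fixed development costs and variable delivery costs being weighed against the value-added uplift across the targeted population.

```python
# Hypothetical ROI sketch for a training initiative, following the steps
# described above: estimate value added per performer, benchmark what top
# performers show is achievable, and compare training costs with the
# post-training uplift. All numbers are illustrative placeholders.

def training_roi(
    population: int,                  # people in the targeted job category
    value_before: float,              # avg. annual value added per person, pre-training
    value_after: float,               # avg. annual value added per person, post-training
    development_cost: float,          # relatively fixed cost of designing the training
    delivery_cost_per_person: float,  # variable cost of delivering it
) -> dict:
    """Compare value-added uplift against total training cost."""
    uplift = (value_after - value_before) * population
    total_cost = development_cost + delivery_cost_per_person * population
    return {
        "uplift": uplift,
        "total_cost": total_cost,
        "net_value": uplift - total_cost,
        "roi": (uplift - total_cost) / total_cost,
    }

if __name__ == "__main__":
    # Example: median performers add $50k/yr each; benchmarking suggests
    # $60k/yr is achievable, and a pilot indicates training closes half the gap.
    print(training_roi(
        population=500,
        value_before=50_000,
        value_after=55_000,
        development_cost=400_000,
        delivery_cost_per_person=1_200,
    ))
```

In this illustrative run, the uplift across the population far exceeds the combined fixed and variable training costs, which is exactly the kind of comparison Saxberg argues should drive investment decisions rather than delivery cost alone.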

The discussion moved to future trends in corporate learning. Saxberg predicts that all learning environments will wind up as hybrid environments, where technology is used early in training for the simplest level of skill mastery and fluency, and then experts are incorporated in the learning designs to provide coaching and feedback for increasingly cognitively complex decision-making that machines are not capable of doing. 

The interaction and learning performance data from all of this can create critical feedback loops: one at the individual level, to help speed up, slow down, or alter the individual learning experience, and the other at the learning environment level, to help designers see which areas need more work based on how difficult they prove for learners. Indeed, that data flow is likely to replace formal assessment altogether: what we want to know is whether learners can accomplish complex tasks in an expert manner, and monitored practice can tell us that.
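One way to picture those two loops is sketched below. This is not from the interview: the data shape, field names, and thresholds are assumptions for illustration, showing how the same practice log can drive both an individual pacing signal and an item-level report for designers.

```python
# Hypothetical sketch of the two feedback loops: per-learner pacing signals
# and per-item difficulty reports derived from the same practice data.
from collections import defaultdict

# Each record: (learner_id, item_id, answered_correctly)
practice_log = [
    ("ana", "item_1", True), ("ana", "item_2", False),
    ("ben", "item_1", False), ("ben", "item_2", False),
    ("cho", "item_1", True), ("cho", "item_2", True),
]

def learner_pacing(log, speed_up=0.8, slow_down=0.5):
    """Individual loop: suggest pacing changes from each learner's accuracy."""
    results = defaultdict(list)
    for learner, _, correct in log:
        results[learner].append(correct)
    pacing = {}
    for learner, answers in results.items():
        rate = sum(answers) / len(answers)
        if rate >= speed_up:
            pacing[learner] = "speed up"
        elif rate <= slow_down:
            pacing[learner] = "slow down / add support"
        else:
            pacing[learner] = "stay the course"
    return pacing

def item_report(log):
    """Environment loop: show per-item success rates so designers can spot trouble areas."""
    results = defaultdict(list)
    for _, item, correct in log:
        results[item].append(correct)
    return {item: sum(answers) / len(answers) for item, answers in results.items()}

print(learner_pacing(practice_log))  # e.g. {'ana': 'stay the course', 'ben': 'slow down / add support', ...}
print(item_report(practice_log))     # e.g. {'item_1': 0.67, 'item_2': 0.33}
```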

“I think more explicit attention by general managers to the actual value added to their companies by better learning environments will help them get the investments right,” explains Saxberg. “In this increasingly competitive world, where the quality of your decisions at the individual worker level really matter, adding a documentable billion dollars of value to your company through better training of a category of workers becomes worth investing tremendous resources, vs only trying to knock a few more dollars off the hourly cost of instructional delivery.”

Saxberg provides three additional challenges:

(4) Finding reliable ways to work on learner motivation.  Learning some skills is quite difficult (think about writing, for example), and many learners have misconceptions about their own capabilities and about how to lift them.  We need more work showing how we help students break through their own blockades to starting, persisting, and putting in mental effort on hard, important learning outcomes: a) get them to realize the value of doing it, and the value of the methods; b) get them to realize they really CAN do it; c) help them get other things out of the way, if they feel blocked; and d) make sure they’re in a positive emotional state when they tackle the learning work.  This applies to all learning environments.

(5) Getting suppliers and vendors in the learning space to embrace evidence-based thinking about their products and services, and to document that evidence.

(6) Getting banks of reliable assessment tasks, along with marking rubrics, for the most important skills in a skill category. These would help the whole profession: if professional societies developed them across organizations, every organization would benefit, rather than each trying to do its own assessment of competence.

According to Saxberg, more use of evidence-based learning approaches would significantly improve the whole learning environment.  “All of us should be asking our own teams, and our suppliers and vendors, for more evidence and sources of evidence for why what they’re offering or planning is ‘better’ than what has come before, and is adding real value to learners and their employers,” he suggests.

“It’s not easy,” concludes Saxberg, “but it’s the only way to turn learning work from a cost-focused exercise into what it is always talked about being: an investment.” 
