Don't Forget the Learning!
When we’re optimizing development for a do-more-with-less world, it’s easy to overlook the hard bit – how do we know our interventions work? Let’s look at ways to ensure your learning development does what it should.
It’s about Quality
We’ve talked about the value of a solid relationship between instructional/learning developers and subject matter experts. The experts know their domain, but they don’t necessarily understand learners and their needs. This is where a good designer partnership comes in – it aligns and organizes the expert’s knowledge so that learners can develop their own expertise. Done right, it can lead to quality outcomes.
Quality outcomes are measurable changes in behavior that meet the needs of learners and the organization alike. They have a ripple effect, with improvements spreading outward from the initial intervention.
Quality outcomes are the result of careful analysis, design, and refinement – focusing on crucial skills and knowledge; organizing that knowledge in ways that facilitate learning and practice; and asking the right questions during early sessions to home in on what works. But that middle bit – organizing to facilitate learning and practice – can be difficult. How do we ensure our learners get every bit of goodness from our work?
It’s about Motivation
It shouldn’t come as a surprise that learner engagement and motivation are significant predictors of outcomes. Motivational frameworks such as Keller’s ARCS emphasize relevance as a key element. And this is where Generative AI falls short:
- Large Language Models won’t refer to the context of learners’ work unless you supply all that information yourself. This context is one of the best ways to drive relevance (see the sketch after this list for what supplying it yourself looks like).
- Their well-known tendency to hallucinate answers can lead to incorrect or misleading content. Nothing kills motivation faster than learners having to work their way back from an error in the materials.
- LLMs aren’t great storytellers. A good story has through-lines: ideas and connections that carry from beginning to end. In a similar fashion, great instructors know how to call back to and reinforce prior knowledge over time. LLMs can’t do this, as their “context window” – akin to a person’s working memory – is too small to hold all of the past content.
- If you try to work around that limitation by feeding an LLM its own earlier output, the quality of what it produces tends to degrade over time.
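To make the first point concrete, here is a minimal sketch of what “supplying the context yourself” looks like in practice. Everything in it is hypothetical – the `learner_context` fields, the `build_prompt` helper, and the learner profile are stand-ins for whatever audience analysis and model API you actually use. The point is simply that none of this context arrives for free: you have to gather it and restate it on every request.

```python
# Minimal, hypothetical sketch: an LLM only "knows" the learner's context
# if you gather it and restate it in every prompt yourself.

def build_prompt(learner_context: dict, topic: str) -> str:
    """Fold job role, tools, and known skill gaps into the prompt text."""
    return (
        f"Write a short practice scenario about {topic}.\n"
        f"The learner is a {learner_context['role']} who uses "
        f"{', '.join(learner_context['tools'])} daily.\n"
        f"Known gaps: {learner_context['gaps']}.\n"
        "Anchor every example in that day-to-day work."
    )

# Hypothetical learner profile. In reality this comes from your own
# audience analysis, not from the model.
learner_context = {
    "role": "field service technician",
    "tools": ["handheld diagnostic unit", "work-order app"],
    "gaps": "escalation procedures for intermittent faults",
}

prompt = build_prompt(learner_context, "troubleshooting intermittent faults")
print(prompt)  # This text would then be sent to whatever model API you use.
```

Notice that the relevance comes entirely from the profile you assembled; swap in a generic profile and the model will happily produce generic, less motivating material.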
It’s about the Learning
All of these limitations point to one thing – you can’t cut out the thought process and authoring skills that drive motivation and quality outcomes. You may be able to augment stand-alone parts with AI-generated content, but don’t expect it to build the whole thing.