Managing Risks with SMEs, Part 1

June 8, 2024

At the beginning of this series, we talked about whether learning development should rest entirely on your subject experts and how instructional designers can help. In this post, we'll talk about the risks and mitigations when a subject matter expert (SME) develops learning content.

Let’s imagine a scenario where a subject matter expert is asked to develop their first course using generative AI. There are quite a few challenges and pitfalls along the way.

Risk #1: The gap between the expert and the learner

The first challenge is identifying the performances to be learned. Remember that the SME, in their mastery, has internalized their core knowledge so well that it's effectively unconscious. This can lead to foundational knowledge being glossed over.

Or the problem can go the other way: too much knowledge gets passed along. When this exceeds what learners can internalize and/or will need, the result is disengagement, misunderstanding, the wrong skills being learned, and a poor use of time.

At worst, it can lead to the fundamental problem being misidentified. On an initial call about an industrial safety problem, the experts told me "people aren't following instructions". You can imagine how poorly a "learning to follow instructions" course could go, so I dug deeper. When I asked for examples, they told me horror story after horror story, and I spotted a commonality: either 1) the gauges were broken (and a backup wasn't checked), or 2) something changed while the operator was distracted. The resulting simulation had a high failure rate on the first run, but greater than 75% correct performance on the second run.

Mitigation: Develop a scoped engagement where an ID helps uncover the fundamental performance issues. A skilled ID can probe for foundational knowledge, fit the knowledge to learner needs, and provide a blueprint for the work to follow. The artifacts from this process can play a key role in guiding future SMEs. While designing a Web Accessibility course, I realized that the skills involved were so interconnected that my engineering experts would have to break them down and sequence them. So we held a couple of working sessions where I helped them break a set of scenarios down into their component skills. The resulting multi-page document took a while to refine, but it then accelerated the rest of the development process. It also proved useful as an example to give to SMEs starting other courses.

Risk #2: Time needed to generate and refine content

An ID-led development process looks something like this:

  • The ID and SME meet to understand the performance need
  • The ID structures the resulting intervention and confirms it with the SME
  • The SME often drafts some initial content
  • The ID builds out the content while optimizing for the learners
  • The SME reviews the work regularly and the ID updates based on this

An SME working alone will need to spend significantly more time. It's tempting to skip briefing an ID and just draft a deck directly. That works well if you're writing for peers and will be in the classroom yourself, but it becomes much harder when the content needs to stand on its own. SMEs often do full rewrites after testing and gathering feedback, as they come to understand the learners' needs. A skilled ID can start closer to the final structure and make incremental revisions instead of full rewrites.

Could using generative AI reduce this time? If your content is complex in any way, AI generation may carry more overhead than using a talented writer. Consider the limitations of current large language models (LLMs):

  1. They rely heavily on the public web for training data. This data mostly represents the general public view on a subject. It may not contain current information, and LLMs tend to “hallucinate” facts where data is lacking.
  2. You often have to prompt with an example of the expert answer if you want it used in the result. For example, ask an LLM how to develop a course, and then ask how to develop one using ADDIE (or the approach of your choice). The two results will be quite different (see the sketch after this list).
  3. You can also upload documents to most LLMs, but it’s not clear the LLM will extract the correct answer. Again, you may need to guide it to the answer by example.
  4. You’ll need to review the results carefully. Note that generative AI generally doesn’t revise its own content but generates a new version with the requested changes, potentially with fresh errors.
  5. Editing and revision time often outstrips the original writing time, and prompt development is itself an iterative process that adds more. There's a real risk that SME-plus-AI writing and revision time will exceed that of content developed by an ID and/or writer.
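To make point #2 concrete, here's a minimal sketch comparing a generic prompt with an expert-framed one. It assumes the OpenAI Python client purely for illustration; the model name and prompt text are placeholders, and the same idea applies to Gemini or any other chat-based system.

```python
# Minimal sketch of point #2: the same request asked generically, then framed
# with the expert approach plus a worked example. The OpenAI client, model
# name, and prompt text are placeholders; adapt to your own tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Generic request: the answer reflects the public web's average view.
generic = ask("Outline the steps to develop a training course.")

# Expert-framed request: name the approach (ADDIE here) and include a short
# worked example so the model builds on your way of working.
expert_framed = ask(
    "Outline the steps to develop a training course using the ADDIE model.\n"
    "Here is a worked example from a previous course:\n"
    "- Analysis: interviewed operators about gauge-failure incidents\n"
    "- Design: scenario-based simulation with feedback on each decision\n"
    "Follow the same structure for a new course on web accessibility."
)

print(generic)
print(expert_framed)
```

Comparing the two outputs side by side is often the quickest way to show an SME why the framing matters.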

Mitigation: Use the first few modules to develop a set of guidelines and a job aid for research and prompting. Have a standard workflow to review and update documents. If you’re using a chat-based AI, give the sessions a consistent name and/or save the link to promote reuse.

Risk #3: Garbage in, Garbage out

Today’s generative AIs are statistical reproduction machines — they’re trained to match requests to generated content, whether that’s a question and answer or an image from its description. As mentioned before, their factual content often comes from the web at large and is refreshed periodically.

This means that the inputs to the AI need to contain your own carefully selected examples. Some systems, like Google's Gemini Pro, can do the searching and selection for you. Most systems will let you upload your own documents (but be careful: if a non-expert could come to the wrong conclusion from them, so can the AI). The best results come when you give the model worked examples to build on.
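As a rough illustration of the "use your own material" point, here's one way to ground the model when an upload feature isn't available or trusted: paste the relevant excerpt into the context and instruct the model to stay within it. The file name, model, and prompts below are all hypothetical.

```python
# Sketch of grounding the model in your own source material: include the
# relevant excerpt in the prompt and tell the model to answer only from it.
# The document, model name, and prompts are placeholders for illustration.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Hypothetical skill-breakdown document produced with your ID
source = Path("accessibility_skill_breakdown.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the reference material below. If it does "
                "not cover the question, say so instead of guessing.\n\n"
                f"REFERENCE MATERIAL:\n{source}"
            ),
        },
        {
            "role": "user",
            "content": (
                "Draft three practice questions on keyboard navigation, "
                "with feedback for each incorrect answer."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Even with this guardrail, an expert still needs to review the output, since the model can misread the source just as a non-expert might.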

Ultimately, this means whoever is writing the prompts also needs solid research skills and the ability to "search" the prompt space with alternate phrasings. It's a specialized skill in itself, and current prompting courses are often model-specific (they can't be applied to another AI, or even carried across major updates of the same system).

Mitigation: Focus on capturing high-quality prompt examples, documenting the processes used by your most successful prompters, and building an internal community of practice around your AI use.

Next time

Next time, we’ll look at the costs associated with using generative AI and putting the burden on the SME. See you then.