Forum Discussion

smous
Community Member
6 days ago

What if most course review pain points come from the same root issue?

I’ve been noticing something during course reviews that I can’t unsee anymore.

A lot of the most common pain points in L&D tend to show up together:

  • courses that feel unclear even when the content is accurate,
  • long review cycles with lots of subjective feedback,
  • inconsistent standards across modules or teams,
  • stakeholder disagreement about “what good looks like,”
  • cognitive overload on otherwise solid screens,
  • accessibility gaps caught late,
  • endless rewrites that don’t really improve the experience,
  • and little in the way of a shared measure of quality.

Individually, these look like separate problems.

But together, they start to feel like a standards and clarity gap, not a content gap.

Lately, I’ve been exploring whether having a shared review lens — one that looks at clarity, consistency, accessibility, and experience as a whole — could reduce a lot of this friction earlier in the process.

I’m curious:

  • Do these issues tend to cluster for you too?
  • Where do reviews usually break down?
  • What’s helped you create more alignment, if anything?

I’m interested in patterns more than tools.

1 Reply

  • HoneyTurner
    Community Member

    I think my experience is a bit different. I moved up through the company from the most elementary role through team leader and account specialist, then took a sideways leap into course development. That means I have very in-depth knowledge of all the content I'm being asked to develop, as well as personal experience both as the learner and as the one guiding others through pressure points and educational deficiencies. Additionally, my supervisor and I seem to have the same style when it comes to breaking bigger concepts into smaller, easy-to-access content.

    So, the big hurdles are:

    • during the scripting/outline stage, getting input from the people the course is meant to help.
      • Most often they cite time as an issue and use avoidance techniques to slow down change.
        • We've learned to proceed with the course design and ask very specific questions along the way so they only need to make one tiny decision at a time. 
        • We also give them the opportunity at the very beginning to tell their story: what does their current educational method look like, what are the results, the timelines, and the shortcomings? People like talking about themselves freely a lot more than they like initiating change. It also lets them hear how inconsistently the topic is currently being handled, so they understand why a unified learning tool is needed. Even if they're not enthusiastic participants in the development, they're excited to use the final outcome.
    • during the actual main development stage, I have a couple of people I bounce ideas off. They give feedback and will brainstorm verbiage, functionality, and styling.
      • They're both really busy, but not avoidant.
        • Using an instant messenger seems to work best. It means they're notified when I need them, but if they're not available, it doesn't get buried the way email does. Everything outstanding is right there the next time we do connect.
    • during the end-phase review; this is where things break down for us most.
      • Endless waiting for people to actually do the reviews, and reviews that make you wonder whether they used a critical eye or just enjoyed a break from their normal activities.
        • We use a two-fold approach here: a chart that describes the current level of development, the next steps (the specific points that need critiquing), and a colour-coded tag indicating who is responsible for each step, all dated so it's clear how long something has been delayed. This is paired with what amounts to nagging: getting eyes on that chart on a regular basis. I also have a recurring meeting with my supervisor to make sure my projects and needs don't get buried behind other meetings.
      • Almost all the change requests we get at this stage are minor and easy to implement.
        • We achieve this by keeping our review pool small. If we include content that may be perceived differently by different segments (Canadian, new employees, temp workers, California, etc.), we choose just one representative from each, so collectively they review the overall content but we don't get conflicting recommendations for the specialties.
        • We also distinguish between reviewers who have a critical eye and know the content well and the decision makers, so that where feedback is incongruent we can present the options to the decision makers for a final decision instead of bouncing back and forth over opinion.