11 Replies
Cheryl Kent

Hi Tammy,

I guess it depends on what your learning object is intended to do.

Not sure I can narrow it down to one question, but here are three that help give us confidence that our etutorials are hitting the mark:

Did this etutorial help you to find the information you were looking for?

Did you learn what you needed to know?

Do you feel more comfortable using the resource(s) mentioned in this tutorial?

Our etutorials are pretty short and focused.

It would be interesting to see what other people suggest.

David Tait

Hi Tammy,

Like Cheryl said it's difficult to narrow this down to one question but I'll try:

How likely are you to recommend this training to a co-worker? 

As well as the rating scale I'd add a text entry field asking for further feedback to back up their rating.

Ideally I'd then look for a pattern in the scores and comments to see if there were any issues to address to help improve future training.

Tammy Knoll-Anderson

Hi Cheryl and David,

Here's what I landed on (from eLearning diva extraordinaire, Christy Tucker, who sussed out the research and pointed me in the direction of Will Thalheimer). 

From one to five stars, how capable will you be to put what you’ve learned into practice? Choose one option to rate your ability to use the skills taught.

I am NOT ready.
I need MORE GUIDANCE.
I need MORE EXPERIENCE.
I am FULLY COMPETENT.
I am CAPABLE at an EXPERT LEVEL.

Please leave a comment to let us know why you chose your rating.

By Will Thalheimer https://www.worklearning.com/2016/07/06/smile-sheet-questions-new-examples-july-2016/

Thanks Christy!

Tammy Knoll-Anderson

Hi David, 

I've always been a NPS fan (coming from a call center background), but it never really seemed to fit well with T&D. This article by Will Thalheimer seems to get at the heart of why it isn't as meaningful as it could be. 

https://www.worklearning.com/2018/01/09/replacement-for-the-net-promoter-score-for-learning-assessments/
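For anyone unfamiliar with how NPS is actually computed, here's a minimal sketch of the conventional calculation (the 0-10 scale and the 9-10 promoter / 0-6 detractor thresholds are the standard NPS convention, not anything specific to an LMS):

```python
def net_promoter_score(ratings):
    """Standard NPS: % promoters minus % detractors, rounded to a whole number."""
    promoters = sum(1 for r in ratings if r >= 9)   # rated 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)  # rated 0 through 6
    return round(100 * (promoters - detractors) / len(ratings))

# Two promoters and two detractors cancel out:
print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # -> 0
```

Part of Thalheimer's critique is visible even in this toy example: the 7s and 8s ("passives") vanish from the score entirely, so very different response distributions can produce the same number.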

Let me know your thoughts.

Thanks!

Tammy

Bob S

Hi Tammy,

Your premise of "one question" is fine as a hypothetical to stimulate conversation amongst this group, but as you know it's rarely a good idea to ask a single feedback survey question and draw conclusions from it. Will Thalheimer makes this point as well.

For example... In the excellent question posed above (which we've used extensively, btw), I might answer "I am CAPABLE at an EXPERT LEVEL." What you don't know (if that is all you ask) is that I went into the training as an expert and you bored me to tears and wasted my time.

The reverse could also be true... I might rate my readiness as quite low, not because of the content or completeness of the course, but because I personally have concerns about one very narrow piece and feel anxious about the lack of depth in that one area, which has been a pain point on occasion.

You get the idea. I just wanted to remind everyone that meaningful feedback may need to be a bit more expansive... if we really want it to be meaningful.

Thanks for starting the conversation!

Dave Ferguson

To extend Tammy's example, as an element of a large instructor-led program, I've used a follow-up survey, sent a few weeks after formal training. The survey listed specific skills from the training in user-centered language ("Produce the ABC report," "Update a customer store profile"), and asked people to report how confident they felt about their ability using a four-point scale:

 1: CAN'T DO
 2: CAN DO SOMEWHAT
 3: CAN DO
 4: CAN DO EASILY

This was still self-reporting, with its inherent limits; our team thought of it as "Kirkpatrick level 1.5." 

To David Tait's point, we found higher scores (3.25 and above) for the most important skills -- those that were more central to the job. And with a numeric scale, it was easy to summarize reports by calendar period or geographic region. We used a bar chart with groups for the major modules from the course.
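The kind of roll-up described above takes only a few lines. A minimal sketch, where the regions, module names, and scores are made-up placeholders rather than data from the actual survey:

```python
from collections import defaultdict

# Hypothetical responses: (region, module, score on the 1-4 confidence scale).
responses = [
    ("EMEA", "Reports",  3), ("EMEA", "Reports",  4),
    ("EMEA", "Profiles", 2), ("APAC", "Reports",  3),
    ("APAC", "Profiles", 4), ("APAC", "Profiles", 3),
]

# Group scores by (region, module), then average each group.
grouped = defaultdict(list)
for region, module, score in responses:
    grouped[(region, module)].append(score)

summary = {key: sum(scores) / len(scores) for key, scores in grouped.items()}
for (region, module), avg in sorted(summary.items()):
    print(f"{region} / {module}: {avg:.2f}")
```

The same grouping works for calendar periods instead of regions; feeding `summary` into any charting tool gives the grouped bar chart Dave mentions.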

We would include representative free-form comments. I do recall one respondent who added his own point for some questions: "5 - CAN DO IN MY SLEEP."

Tammy Knoll-Anderson

Hi Bob,

Absolutely get it! I'm working with an LMS and authoring tool that include a built-in course rating survey question. The client wants us to use that. Because of that, I was trying to get the most out of the one-question requirement.

As the feedback comes in, I hope to be able to recommend improvements to their surveys and move them away from the pre-built feature to a more well-rounded and robust feedback option.

I appreciate the dialog and discussion!

Thanks!

Tammy

David Tait

Hi Tammy,

I'm enjoying the discussion you've started. I think it's clear that there isn't one single question capable of drilling into enough key areas to be effective.

Certainly the link you posted to Thalheimer contains a more detailed set of options to choose from than a simple 1-5 rating scale, but it's still flawed inasmuch as survey feedback is always going to be relative to the person giving it. For example, I might find a course highly effective but you might not. It might not be that the course itself is badly designed or written; it might just be that one of us prefers elearning and the other has a different learning preference.

Without giving the learner the ability to feed back via free-text entry, I feel we'll never be able to interpret the ratings meaningfully enough to improve future courses.

Cheryl Kent

Hi,

This has been an interesting discussion.

You may be asking questions to check whether a user has learnt something. You may be asking a question to assess whether your elearning object is satisfactory, interesting, exciting, etc. As previous commentators have mentioned, it is all subjective.

We try to encourage users to complete our feedback form by indicating that we use the feedback to improve our service. We do include a free-text entry box too.

I did like the suggestion, though, of asking whether a learner would recommend the training to a colleague.