0% Errors Unrealistic?

We all know that no matter what we do, how hard we try, or how often we test before we launch, sooner or later someone is going to have a problem completing a course. Sometimes it is user error. Sometimes the LMS loses connection with the eLearning. Sometimes neither we nor anyone else can figure out what is causing the error for a learner.

Although we strive for 0% errors, this may be unrealistic. Or is it?

  1. Should there be an acceptable percentage of errors with any eLearning launch?
  2. What would you consider to be an acceptable number of completion errors for a launch to 100,000 learners?
  3. Can you point to any documentation that references a standard number?

Just wanted to see what others were thinking. Thank you for sharing your thoughts.


1 Reply
Nicole Legault

Hey Philip!

This is a really interesting question, thanks for posting it here in the E-Learning Heroes community.

I was actually just having this conversation with another Instructional Designer at Articulate, and we both agree we don't love setting a passing score of 100%. However, in certain cases (for example, when creating compliance training) that might be a requirement so you can prove that learners understood all the information. It really depends on the situation.

As far as I know there are no real guidelines for setting e-learning scores. A lot of people tend to go with an 80% pass rate... that is also the default passing score for a Storyline quiz, but why? I'm not sure. Probably because a score of around 80% indicates the learner likely understood MOST of the content, but leaves some room for error. 

The formula or thinking behind setting pass/fail scores is something I've dug into a bit in the past, and the most useful resource I've been able to find is this handy document: Passing Scores: A manual for setting standards of performance on educational and occupational tests (pretty dry name, right? haha). It's old (as you can tell from the cover!) but it's actually filled with some really useful tidbits to help you decide how to set your scores. It covers all the important methods (Nedelsky, Ebel, Angoff) and talks about the borderline-group method, etc.

If you're doing some research you might want to look into something called "standard setting". Here's another article I found helpful: http://www.fisdap.net/blog/how_set_cut_score 

From the article: 

"Even after using this rigorous process, there's still a possibility that standard error could result in a competent student failing the exam. To mitigate the effect of error on pass rates, all Fisdap exams include a 5% standard error of measurement (colloquially known as the "fudge factor"). This buffer helps remove the influence of factors such as ambiguous test items, test-taker fatigue, and guessing. By protecting against these variables, we keep students from missing the cut off "by one question," so to speak."

That paragraph would seem to advise against setting a passing score of 100%. If you allow a roughly 5% margin of error, about 95% is the highest passing score you'd want to set.
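Just to make that buffer idea concrete, here's a tiny sketch in Python (the function name and the example scores are my own, made up for illustration — the article only describes the general idea of subtracting a 5% standard error of measurement from the cut score):

```python
def buffered_cut_score(raw_cut: float, sem: float = 0.05) -> float:
    """Lower a raw cut score by the standard error of measurement.

    Both values are fractions, e.g. a raw cut of 0.85 and a 5% SEM.
    The result is clamped so it never goes below zero.
    """
    return round(max(0.0, raw_cut - sem), 4)

# A panel-derived cut score of 85% becomes an operational pass mark of 80%.
print(buffered_cut_score(0.85))  # 0.8
```

So even if your subject-matter experts land on a "true" cut score of 100%, applying that kind of buffer would pull the operational passing score down to about 95%.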


When choosing how many points to assign to each question, think about each one: how important is this question? How serious would it be if they got it wrong in real life? How often do they do this task or need the information at hand? These factors should all influence how many points the question is worth. You might have some easier questions that are worth fewer points, and some harder questions that are worth more points.

I won't lie, it can become tricky to calculate all these point combos and decide on a final passing score (math is certainly not MY forte!), but that's what you need to do. When calculating the passing score and points for some complicated branching scenarios, I had to use a calculator and make a little spreadsheet with my numbers (yes, it can get that serious!).
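If a spreadsheet feels heavy, the same point-weighting arithmetic fits in a few lines of Python. The question names, weights, and the 80% threshold below are all made-up examples, not anything prescribed:

```python
# Hypothetical point values: each question is weighted by how serious a
# wrong answer would be in real life and how often the task comes up.
questions = {
    "easy_recall":      5,   # low-stakes, frequently used knowledge
    "procedure_step":  10,   # moderately important task
    "safety_critical": 20,   # serious consequences if wrong in real life
}

total_points = sum(questions.values())                 # 35
pass_rate = 0.80                                       # e.g. Storyline's default
points_to_pass = round(pass_rate * total_points, 2)    # 28.0

print(f"{points_to_pass} of {total_points} points needed to pass")
```

Note how the weighting changes what "80%" means: here a learner could miss the two lighter questions and still pass, but missing the safety-critical one alone would sink them.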

Hope this is helpful... and hope others from the community may chime in. This is a really interesting and valuable ID topic!