Evaluating the eLearning - the E factor

Aug 18, 2015

After the eLearning runs, when Learning Management System history reports are available and you can see the numbers of correct and incorrect answers on tests, what do you look for to tell whether poor scores stem from the eLearning module, the question wording, or the student? How do you determine the effectiveness of the eLearning?

Bob S

Hi Lori,

Depending on your LMS and how you have things set up, these kinds of reports can be invaluable. A couple of things we look for...

  • What % of learners successfully answered each question? Glaring differences between questions can indicate a badly worded question or confusing/missing content.
  • What are the most common wrong answers? In other words, for those who got the question wrong, which answer did they pick most often? If one answer stands out as far more common than the others, that can point to a "trick" distractor or misleading content in the course.
  • What % of learners passed the exam on their first attempt, and of those who didn't, what % passed on the second? The ratio of first- to second-attempt successes can indicate how difficult your quiz is overall, and/or whether the relevant information is "buried" in the clutter of the course rather than called out clearly. (A quick way to compute all three checks is sketched below.)
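
If your LMS only gives you a raw response export rather than these reports, all three checks are easy to compute yourself. Here's a minimal Python sketch, assuming a hypothetical CSV export named quiz_responses.csv with one row per learner/attempt/question and columns learner_id, attempt, question_id, answer, correct (1/0); the file name, column names, and 80% pass mark are all assumptions, so adjust them to whatever your LMS actually exports.

```python
import csv
from collections import Counter, defaultdict

PASS_MARK = 0.8  # assumed pass threshold -- match your exam settings

# Assumed export format (hypothetical; most LMSs can produce something
# similar): one row per learner/attempt/question, e.g.
#   learner_id,attempt,question_id,answer,correct
with open("quiz_responses.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# 1) % of learners answering each question correctly (first attempts only,
#    so retakes don't inflate the numbers)
first = [r for r in rows if r["attempt"] == "1"]
per_q = defaultdict(lambda: [0, 0])          # question_id -> [correct, total]
for r in first:
    per_q[r["question_id"]][1] += 1
    per_q[r["question_id"]][0] += r["correct"] == "1"
for q, (ok, total) in sorted(per_q.items()):
    print(f"{q}: {ok / total:.0%} correct")

# 2) Most common wrong answer per question (distractor analysis)
wrong = defaultdict(Counter)                 # question_id -> wrong-answer counts
for r in first:
    if r["correct"] != "1":
        wrong[r["question_id"]][r["answer"]] += 1
for q, counts in sorted(wrong.items()):
    answer, n = counts.most_common(1)[0]
    print(f"{q}: most common wrong answer = {answer!r} ({n} learners)")

# 3) First-attempt pass rate, and second-attempt pass rate among
#    learners who failed the first time
scores = defaultdict(lambda: [0, 0])         # (learner, attempt) -> [correct, total]
for r in rows:
    key = (r["learner_id"], int(r["attempt"]))
    scores[key][1] += 1
    scores[key][0] += r["correct"] == "1"
passed = {k: ok / total >= PASS_MARK for k, (ok, total) in scores.items()}
learners = {lid for lid, _ in passed}
first_passers = [lid for lid in learners if passed.get((lid, 1))]
retakers = [lid for lid in learners if passed.get((lid, 1)) is False]
second_passers = [lid for lid in retakers if passed.get((lid, 2))]
print(f"First-attempt pass rate: {len(first_passers) / len(learners):.0%}")
if retakers:
    print(f"Second-attempt pass rate among first-time failers: "
          f"{len(second_passers) / len(retakers):.0%}")
```

Restricting the per-question stats to first attempts is deliberate: retake answers tend to be better informed and would mask a badly worded question.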

Note: the information above may not be available, depending on your particular LMS or how you have set up courses/exams. This is one reason many administrators/IDs pull the quizzes out separately, or even (gulp) use the LMS vendor's quizzing tool, so they can get question-level data like this.

Great question by the way, and I hope this helps!
