Do elearning courses need quizzes?

Sep 01, 2011

Just received an interesting question from a blog reader. Rather than replying offline, I thought it would be valuable to share it here for discussion.


“Do e-learning courses have to include a test or quiz? I have several designers on my team who don’t think courses need tests, but our business partners insist the courses contain at least a final quiz.
Is it acceptable to build courses that don’t include tests or quizzes?”

What do you think? Do you build courses without assessments? If you do, what types of courses don't include assessments?

Is a non-graded practice interaction equivalent to a graded assessment?

15 Replies
Steve Flowers

Hmm... I've gone both ways on this issue and my current view is that if you're gonna spend time building it, you may as well get some type of confidence measurement.

I've rarely built courses without some type of assessment gating. However, sometimes these are built without score recording. In some industries, recording a score for a course makes folks uneasy (I've seen a few in the financial industry with this view).

I would counter these questions with some questions of my own:

  • If you're not measuring the level of change in the learner, is your solution really a course?

I think sometimes folks like to package info-coasters and hang them on the LMS. These are loosely (and, in my opinion, inaccurately) called courses. But really, if you're not sure there's some type of change in the learner... wouldn't it be more accurate to call this an ePamphlet? Not that a tutorial or ePamphlet isn't valuable. They are. But a tutorial is more performance support than learning or education. An ePamphlet is just information and isn't a heap better than a print document.

  • If you don't care enough to measure the impact of your solution on learners / performers, what do you care about?

I think you can sometimes (though rarely) get more valuable data from a Level 4 measurement than from a Level 2 or even a Level 3 evaluation. Looking at real impact on the organization is a far better thermometer if you can swing it. But golly, sometimes it's really hard to get honest data from a Level 4. Big effort. Sometimes the reward isn't worth that effort.

  • Are you including a post-test as a matter of course?

Good assessments are part of a good instructional strategy. Excellent assessments are written BEFORE the content has been designed. I think many (if not most) designers sweep through the content to find trivia nuggets to add to a post-test. Heck, I've done this and it's a terrible way to build an assessment.

  • What's your assessment measuring?

Sometimes I see folks adding mandatory pre- and post-tests to collect a delta measurement. While I don't think this is always a terrible idea, it doesn't always seem like an informed strategy. We'll most often apply a single test that can be accessed at any time; it functions as either a pre-test or a post-test. We designed the thing to solve a specific K&S gap for a known audience, we tested the solution in a controlled study using delta measurements, and it works. Why the heck would we need a delta measurement for every enrollment after the efficacy study?
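For illustration, here's a rough Python sketch of what that delta measurement could look like during the efficacy study. The scores, scale, and function names are made up for the example, not taken from any particular tool.

```python
# Minimal sketch of a pre/post "delta" measurement for an efficacy study.
# Assumes each learner has a pre-test and post-test score on a 0-100 scale.

def learning_gain(pre: float, post: float) -> float:
    """Raw delta: how many points the learner improved."""
    return post - pre

def normalized_gain(pre: float, post: float) -> float:
    """Hake-style normalized gain: improvement as a share of the room to improve."""
    if pre >= 100:
        return 0.0
    return (post - pre) / (100 - pre)

scores = [(55, 80), (70, 85), (40, 75)]  # hypothetical (pre, post) pairs for a pilot group
avg_gain = sum(learning_gain(p, q) for p, q in scores) / len(scores)
avg_norm = sum(normalized_gain(p, q) for p, q in scores) / len(scores)
print(f"average raw gain: {avg_gain:.1f} points, normalized gain: {avg_norm:.2f}")
```

Once a pilot shows a healthy gain, re-collecting the delta for every later enrollment adds little beyond what the study already proved.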

  • Can your test serve other purposes?

Tests are actually really great learning instruments. Research has shown that providing an opportunity to *fail at guessing* before instruction can improve learning quality. Test questions with consequence can also heighten retention. Stress biology produces varied results, but much of the time the feedback attached to a question with real consequence will be retained better than feedback loosely attached to a casually presented knowledge check.

Tests can also be a great mechanism for providing *credit for stuff you know already* and can be used to tailor a presentation to minimize redundant torture.

When combined with other data points, tests can help to improve course designs. It would be a shame not to have the right data to inform scientifically driven design improvements when the time comes... So, in short: unless you have a darn good reason not to include a test, there are plenty of good reasons TO include some type of performance-gating challenge. Sometimes this is a test. Sometimes it can be a practice activity that requires a score or level of performance to continue. Depends on the situation.

Sheila Bulthuis

This is a topic I seem to come back to often, with both colleagues and clients.  I agree with much of what Steve said, but I usually end up on the other side of the fence: “If you don’t have a pretty good reason for testing, why do it?”  And my other frequent comment is, “If you are going to include a test, make it a good one.  Just putting in a bunch of multiple-choice and true/false questions is not necessarily testing anything meaningful.”

 

In fact, I often put it this way with clients:

 

If you’re telling me we need a test to ensure they understand and/or can apply the content in the course, then we need to build a test that truly does that.  If we’re training on a software application, we need a test that actually tests their ability to use the system, not a test that determines whether they remember the maximum number of characters a particular field can hold (sadly, a real example…).  If we’re training on ethics, we need a test that gives them ethically ambiguous situations and asks them to determine what they’d do, not a test that checks to see if they can select the correct definition of “ethical” (another real example!).  But good tests are difficult to write – it’s so much faster and easier to just write the content and then search through it for nuggets to test on!  I press hard for either a truly meaningful test or none at all – but I often lose that battle, and I’ve definitely written some tests I’m not proud of. :)

 

When I do get agreement that the test needs to be a good one, I ask:  If we build such a test, and someone passes it, will you feel confident that they have the knowledge/skills/abilities you want them to have?  If yes, then we should give people the option to “test out” of the training.  People who already know what they’re doing will pass and don’t even need to take the course.  People who don’t pass (or choose not to take the pre-test) will take the training, and then test to make sure they understood it.  Of course, this wouldn’t apply to something like a brand-new software system that no one has ever seen, but it does apply in a lot of situations.
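To make that test-out routing concrete, here's a minimal sketch of how the decision could work. The pass mark and the names are illustrative assumptions, not from any particular LMS.

```python
# Sketch of the "test out" option: if a learner clears the bar on the optional
# pre-test, credit them and skip the course; otherwise deliver the course and
# follow it with the same assessment.
from typing import Optional

PASS_MARK = 80  # hypothetical mastery threshold, in percent

def route_learner(pretest_score: Optional[float]) -> str:
    """Return the next step; pretest_score is None if the learner skipped the pre-test."""
    if pretest_score is not None and pretest_score >= PASS_MARK:
        return "mark_complete"          # credit for what they already know
    return "deliver_course_then_test"   # take the training, then the same assessment

print(route_learner(92))    # -> mark_complete
print(route_learner(None))  # -> deliver_course_then_test
```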

 

All that said, I’ve done courses with and without a test or quiz.  I much prefer “learning quizzes,” where instead of a grade the learner gets real-time feedback on answers and proceeds accordingly – as Steve says, it’s very effective.  And if the test is graded, what’s the outcome of that grade?  What happens to learners who don’t pass the test?  There should be remediation, and possibly re-testing.

 

To me, like so many other things, the answer to “Do e-learning courses need tests?” is: “It depends.”  It depends on the content, the audience, the goals of the training, the preferences (or mandates) of the client, etc.  But like all the other parts of the course, I think it should be well thought out and purposeful, not something that’s done because “we always do it this way.”

 

My two cents. :)

Steve Flowers

Great points, Sheila. I think my main apprehension about excluding a test is that much of the time you'll end up with a course (I use this label very liberally in this case) that doesn't sufficiently challenge the learner to solve any problems or to... think.

If the course is gated with sufficient challenge, I can see plenty of opportunities to avoid the post-test doldrums. In this case, the course itself is filled with small challenges and speed bumps for "next-button ratcheters" that make the experience meaningful and worthy of production and distribution.

I just don't think a course is worth building if some kind of authentic challenge isn't levied. A mindless activity is a waste o' time at each end of the supply chain. If a course is a learning experience in itself, sometimes there isn't enough of a reason to include a test in every enrollment.

Frankly, most courses I see are 99% content-centric and less than 1% skill/task-centric. While I think most content-centric courses should never have been commissioned, it doesn't change the fact that more content-centric courses hit student enrollments every day than skill-centric ones. In most of these cases, stakeholders indicated that it was important that folks know "these things". Logically, it only makes sense to ensure that they do... It's a paradox. A paradox that is best countered up front -- but rarely countered. If a client insists on focusing performance at the level of Bloom's Knowledge verbs, a knowledge-based test is the medicine they signed up for. Bleh, the thought of such misguided engagements causes visceral distress... :P

On a lighter and more constructive note... here's another way we've used tests. I view L2 evals primarily as an opportunity to test the solution -- more a test of the solution than a test of the learner in the case of eLearning solutions. If you know your learner, you know your skill and knowledge gap, you have designed your solution well, and your solution works in your test cases... it stands to reason that it will work again, every time, until one of the driving variables changes significantly. Once you've proven with relative certainty that the solution works with your target audience (through a statistically significant sample using pre/post tests and surveys), these "training wheels" can come off the solution until evidence surfaces that it no longer does its job.
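As a sketch of what "statistically significant" could mean here, assuming paired pre/post scores from a pilot group, one option is a paired t-test (SciPy used below; the data and the 0.05 cutoff are illustrative assumptions):

```python
# Check whether a pilot group's pre/post delta is unlikely to be chance.
from scipy import stats

pre  = [52, 61, 48, 70, 55, 63, 58, 66]   # hypothetical pilot pre-test scores
post = [78, 74, 69, 88, 71, 80, 77, 85]   # same learners, post-test scores

t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on the deltas
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Delta unlikely to be chance: the solution appears to work for this audience.")
else:
    print("No significant delta: revisit the design before rolling it out.")
```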

Neat discussion. I don't think there's one right answer all of the time. But my big ol' gut still swings in the direction of meaningful challenge. And sometimes, most times when it's important enough to train, that challenge is in the assessment and under measurement. :)

Meg Bertapelle

I agree with pretty much everything I've read above - so I think also, it depends!

We typically are tasked with providing an "assessment" of anatomy, disease state, and procedure information so we have some "proof" that our sales force is prepared to support that particular surgical procedure in the field.  Now, while I think everyone in the whole chain knows that these online assessments do NOT really prove much of anything, it's what was possible at the time of development with the resources available.  There is movement toward more of a role-play and/or field-observation type of assessment, but those are hard to do.  In future/new program and course development, we try to build in some practice "simulations" of situations they might encounter to improve our assessments, but it's a pretty slow process, and of course, re-doing the existing ones is lower on the priority list.

I don't think every "course" needs to have an assessment - but then, as I think Steve said, it's not really a "course" but more of a performance support or reference tool.  Which is fine with me - watch it when you want to brush up before the sales call or before supporting the procedure & make sure you remember the details you need to.  Personally, I would never expect the sales force to remember everything that they need to know for their jobs all at once...  there's WAAAAAYYYY too much information - especially detailed and situationally-specific information!  I HOPE that they use our courses as reference to prepare for the next case/meeting/whatever, and as long as they know where to find it, it works technically, and it helps, then I've done my job.

As an ID, I would really like to have assessments that actually do measure some sort of performance improvement, but that's a misty future state that we're (hopefully) heading toward.  We now have the resources to begin, and the management support to protect our endeavors, so it's an exciting time! :)

David Glow

David, 

The wording of your request asks several key questions:


1. “Do e-learning courses have to include a test or quiz? /
Is it acceptable to build courses that don’t include tests or quizzes?”

 2. "Do you build courses without assessments? If you do, what types of courses don't include assessments?"

3. "Is a non-graded, practice interaction equivalent to a graded assessment?"

I agree with the "it depends" perspective that has been offered, but wanted to add a few points from my own experience (6 years specifically dedicated to assessment administration in a highly regulated industry).

The bottom line I'd offer is "capability or learning outcome must be demonstrated in some measurable way" -- I think this is the key thing that distinguishes "does a course need a test" from "do we need to assess that skills can be applied on the job"?  (And it may not be "a course" -- it is usually a combination of learning resources and tools that drives the behavior.)

I think too often, developers think of a single-item (course), in-event assessment (usually based on content knowledge, not skills application).  Although there is a place for this in the learning field, we need to push to extend the measure outside a singular learning event and look at both a larger learning ecosystem and measurement that extends to outcomes after the learning event (what matters more: that I can pass a course test, or that I can perform on the job?). Of course, this introduces a myriad of factors to address: timing, controlling influencing factors, etc.

So, "is it acceptable to build courses that don't include tests of quizzes?"- certainly. Project Management courses, especially at advanced levels, share techniques to be applied to projects that quizzes or even fairly robust simulation testing won't fully predict the true ability of the employee to apply the knowledge and skills in their own work context. So, in this case, non-graded practice interaction (let's just call it work) is tracked in a manner to determine the learning results. A lot of methods are used to isolate impacts from influencing factors not related to training (admittedly, it isn't a perfect measure, but neither is testing in most cases).  In my opinion, even with it's imperfections, this measure is actually a more accurate measure of learning outcomes than most testing deployed (I agree with Meg's observation).  

However, there are assessments for different reasons -- such as developmental assessment: checking foundation knowledge and understanding of baseline concepts via quizzes to correct any misinformation before letting users progress, or basic practice. Another favorite trick of mine is to start a learning solution with a test. Find out where folks are, so they know where they stand and what they need to focus on to improve, and illustrate the areas where they are performing adequately (testing out where appropriate to optimize time on task).
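A small sketch of that diagnostic idea, with hypothetical topics, threshold, and data: score the up-front test by topic so the learner sees where to focus and where to test out.

```python
# Score a diagnostic pre-test by topic to drive "focus here" vs "test out".
from collections import defaultdict

MASTERY = 0.8  # assumed cut line for "performing adequately"

# (topic, answered_correctly) for each item the learner attempted -- made-up data
responses = [
    ("regulations", True), ("regulations", False), ("regulations", True),
    ("reporting",   True), ("reporting",   True),
    ("ethics",      False), ("ethics",     False), ("ethics",     True),
]

totals, correct = defaultdict(int), defaultdict(int)
for topic, is_correct in responses:
    totals[topic] += 1
    correct[topic] += is_correct

for topic in totals:
    score = correct[topic] / totals[topic]
    status = "test out" if score >= MASTERY else "focus here"
    print(f"{topic:12s} {score:4.0%}  -> {status}")
```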

Also, in sales training, it is often much more insightful not to measure or track 'formal' knowledge items (steps in the sales process, or simulations that train on how to handle objections). Generally the knowledge is easy to memorize, and the simulations have to be so "black and white" for fair assessment that they are too simple compared to real-world situations.  Also, sales professionals often know WHAT to do, but the HOW (mojo) is off. Formal testing can't see nuances of style, conversational timing, guiding conversations, active listening, eye contact...  Role plays, secret-shopper evals, or even uploading recorded videos of a rehearsed sales presentation for review by one's peers will yield MUCH better information in this "soft" skills area (a rubric to structure the feedback is helpful).

So, is it okay to develop learning solutions without tests or quizzes? Sure. But I don't think it's okay to develop a learning solution without some method of determining outcomes. Tests/quizzes embedded in courses just aren't always the best tool (actually, they are almost never the best tool for "the endgame").

One last example: internet security training. Better to give folks a course on malware and phishing and test at the end? That checks basic understanding, but the heightened awareness biases the results because of the context.  What's a better test of what they take from the course? Phish your own employees and see how many fail. Now, that's a solid assessment (and better that a fake phish test finds the weaknesses in your security than real cybercriminals!)
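If it helps, here's a toy sketch of scoring such a campaign, with made-up departments and numbers, where the failure rate itself is the assessment rather than any quiz score:

```python
# Summarize a simulated phishing campaign: who clicked, who went further.
campaign = [
    {"dept": "Finance", "sent": 120, "clicked_link": 18, "entered_credentials": 5},
    {"dept": "Sales",   "sent": 200, "clicked_link": 46, "entered_credentials": 12},
    {"dept": "IT",      "sent": 60,  "clicked_link": 3,  "entered_credentials": 0},
]

for row in campaign:
    click_rate = row["clicked_link"] / row["sent"]
    compromise_rate = row["entered_credentials"] / row["sent"]
    print(f"{row['dept']:8s} click rate {click_rate:5.1%}, credential entry {compromise_rate:5.1%}")
```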

Steve Flowers

That's a great reminder, David. And I think it's right on the money. The best performance assessments are... authentic performance. Remembering those real-world opportunities to capture "what they really do" is the key to success in so many cases. And eLearning isn't always the best channel to challenge and measure the aggregate performance picture. 

We don't always have control over the long-term strategy, and many times the customer simply isn't interested in anything other than a shrink-wrapped, self-contained electronic package. I think this leads to a tendency to bolt on an abstraction of the performance measurement in the form of multiple-choice trivia. As far as the commissioned effort is concerned, it checks a box. This is a consultation challenge.

But eLearning doesn't always need to represent the whole picture. I think we might miss opportunities for component-level challenge and measurement if we don't break performance down and highlight those components that the digital environment can handle well -- in some cases far better than the physical environment.  It seems like eLearning has gotten a rep as the "knowledge channel", which translates to the "information channel".  In my opinion, this perspective short-changes the potential of the medium to build mental models and sharpen decision-making skills that contribute to big-picture performance.

My organization (military / gov't) has been performance-focused for a few decades. Most of our training has a vocational / technical focus. Naturally, this focus makes sense, and it comes together pretty easily. In the military tradition, it tends to swing heavily in the direction of behaviorist-type measurements. It doesn't often consider a constructivist or cognitive viewpoint; in many cases the military performance focus intentionally excludes these considerations. While I think the behaviorist view is important, I think it's folly to ignore the totality of stuff that contributes to successful performance. When an organization values only overt behaviors, it becomes difficult to resource the engineering of sub-skill support. And these sub-skills are where an eLearning product can really shine. Offering practice of the stuff that happens from the neck up can, and in many cases does, improve the performance of the task aggregate.

We've started to look at building objectives for eLearning a little bit differently. Breaking down the performance into elements, we intentionally try to avoid the "content trap". It doesn't always work out but I think it's a good discipline to apply. This makes the path to objective identification longer than it might traditionally appear. The breakdown sequence looks something like this:

  • Performance / Business Requirements > Tasks > Cognitive / Covert Sub-tasks

And this leads to the identification of:

  • Practice Opportunities > Measurement Opportunities > Objectives

Ideally, content comes far later in the engineering process.

Small-bit practice is no substitute for aggregate demonstration of capacity. But mastery of small bits can be a tremendous force multiplier for the development and improvement of the "skill alloy". If it can be practiced in the digital environment, it can be measured. And if it can be measured, why shouldn't it be? Measurement can be recorded without resorting to a multiple-choice test bank.

Steve Flowers

A couple of caveats / expansions. My last paragraph ended on the dreaded "we should because we can" line. That's not really what I meant; I'd qualify it with "only collect data you intend to use". I think this is a good rule of thumb for data collection: if nobody is going to use or even look at the data, maybe you don't need to be spending resources collecting it.

Here's my line of thinking:

  • If you know what skill or component skill you're building, and your solution targets building these skills through practice, you have built a natural pathway for measuring the learner's skill. This measurement can be useful for the learner, and in some cases for the organization and/or the design and development staff.
  • If a course is content-centric, a test of short-term memory will only measure short-term recall. Capturing this measurement won't have much value and will likely result in mildly annoyed learners. There are better ways to see if folks are paying attention.
  • Skills fade fast. Frequency of performance should figure into the decision-making process for measurement capture. Capturing sub-task performance won't mean much if the skill is rarely required.

Measurement can take many forms. For example, completion measurement is binary.  What contributes to the binary flip varies from solution to solution. While the technology enables collection of very granular data points, it's not always necessary. This binary completion represents a (hopefully intentional) threshold.
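A quick sketch of that binary flip, with assumed criteria: several granular signals can feed in, but only a single completed / not-completed flag needs to come out.

```python
# The "binary flip": completion is reported as one boolean, driven by an
# intentional threshold. The criteria here are illustrative assumptions.

def is_complete(practice_score: float, scenarios_finished: int, required_scenarios: int) -> bool:
    """Flip to complete only when the intended threshold is met."""
    return practice_score >= 75 and scenarios_finished >= required_scenarios

# Granular data can still be collected if someone will actually use it,
# but only the boolean has to be recorded.
print(is_complete(practice_score=82, scenarios_finished=3, required_scenarios=3))  # True
print(is_complete(practice_score=82, scenarios_finished=2, required_scenarios=3))  # False
```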

David Anderson

First of all, thank you. This is far more info than I could have shared on my own.

Having worked mostly on the vendor/service provider side, I’ve learned to appreciate both sides of this issue. I’ve seen designers bristle after clients offered to “drop the activities” just so they could get their courses built faster. On the other side, I've seen designers help clients rethink their project goals to produce something far more meaningful than they imagined.

Know the rules… but be prepared to break them.

I always get a laugh out of Cathy's post on multiple-choice questions: Can you answer these 6 questions about multiple-choice questions?

Sam Currie

Great question and answers.

In short, I would say that unless a quiz is meaningful it should not be added to a course. Many eLearning courses present quizzes that test memory rather than understanding. I really like to include "test then tell" quizzes in my eLearning courses rather than just peddling facts to the student, but this too needs to be handled sensibly.

All too often, badly thought-out quizzes can be patronising and ineffective.

Bob S

Great discussion.

Many folks here have talked around it, but I think it's worth calling out directly the best reason for creating a test...

... it allows us to teach to the test.

Seriously. Whether or not you actually deploy or score it, starting with the assumption that you are creating a test to measure competence has a profound impact on your design.

Benefits include:

  • Keeps SMEs focused on what matters
  • Ensures designers call out key content
  • Informs "weighting" of topics and even course structure (in some cases)

So I suggest starting with the question, "If you had to prove someone knew this or could do this, what would you test on?"  Then decide later whether you actually deploy or score the test.

Bob

David Glow

Bob-

I see where your thinking is going with this, and I can say that "teach to the test" has good and bad design impacts (handled correctly, mostly good).

But I'll raise you the following strategy: everyone is buzzing about flipping the classroom today (thank you, Mr. Khan), but I think "flipping assessment" is a great way to assess users straight away, show them where they are and what they need to do, and then repeat with frequency.

No different than when they pull the bus up to the beach on The Biggest Loser and say "run the mile", and watch folks realize exactly where their fitness stands.  Then they repeat with more formal measures via physicians.  The contestants haven't been taught to eat or train, and haven't practiced yet; this exposes every issue.  Then, focused on specific, personal issues, they can tailor a program to fit the person's needs.  We may not get as tailored as a custom fitness program, but flipping assessment, and repeating it, doesn't just "teach to the test" -- it defines for the person what needs to be taught and orients the program around that.

Like my friends with their Internet Security Awareness Training(TM) did: phish up front to really know what the issues are, and then train to those risk exposures. I have used that model several times successfully in highly regulated financial-services training areas to optimize training.  And the business intelligence it brings the organization -- awesomesauce.

Meg Bertapelle

I would add to @David's comment: if you assess frequently, don't use the same test/questions!  People will figure out that they shouldn't answer the same way if they got it wrong last time, and rather than an assessment of whether they've learned/internalized the content, you'll have an assessment of whether they can remember what they said last time...
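One illustrative way to handle that, sketched with a hypothetical item bank: draw each attempt from a larger pool and prefer items the learner hasn't seen yet.

```python
# Build each re-assessment attempt from an item bank, avoiding repeats where possible.
import random

def build_attempt(bank: list, seen: set, size: int) -> list:
    """Prefer unseen items; fall back to seen ones only if the bank runs short."""
    unseen = [q for q in bank if q not in seen]
    random.shuffle(unseen)
    attempt = unseen[:size]
    if len(attempt) < size:  # bank exhausted: top up with previously seen items
        leftovers = [q for q in bank if q not in attempt]
        random.shuffle(leftovers)
        attempt += leftovers[: size - len(attempt)]
    return attempt

bank = [f"Q{i}" for i in range(1, 21)]  # 20-item bank (placeholder question IDs)
seen = set()
for attempt_no in range(1, 4):
    items = build_attempt(bank, seen, size=5)
    seen.update(items)
    print(f"attempt {attempt_no}: {items}")
```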
