Hi - one of the areas we're trying to focus on is creating quizzes and assessments. I was wondering if anyone had some resources or best practices on writing better quiz questions. We're using Quizmaker '09 and sometimes build non-scored quizzes in Presenter. Thanks!
Well, one thing that I like to do is build (or at least start building) my quiz questions first. Often I do it at the same time that I nail the learning objectives, even before I start building the course. Kind of starting with the end in mind, just to make sure that the quiz questions really reflect the specific things that you want the learner to be able to do or know when they've mastered the content.
Yeah... when I first started writing questions, I used to think about all the content I had and then framed questions based on the information presented in the course. Over time, though, I've realised that some of those questions may not meet the learner's needs after taking the course. They need to implement what they learnt.
My seniors always insisted that questions be based on your learning objectives only. You can have inline questions just to test what was learnt from the previous slides, but the final assessment must be based on the objectives. You can then decide what question types you're going to use: multiple choice, multiple response, true/false, etc.
First of all, decide what it is you are trying to "test", and why.
Most end-of-course "quizzes" that I have ever seen (and many, I admit, that I have produced) are merely tests of short-term memory.
If I am going to do this sort of thing, I will always recommend that it takes place as an activity some time after the course has been completed; at least that way you can test that the "things" have actually stuck in memory!
There is then the question of WHAT you ask. Many better than I (e.g. Cathy Moore) have written on this. A great way to "test" is to provide scenarios (perhaps using the Tab interaction) that give the learner choices based on their understanding of what they need to do on the job. Let them make decisions, and then teach them the implications of those decisions.
If you are doing a standard test, you need to have a really good idea of what happens if people "fail".

Bruce
I'm reminded of something an ISD person who once reported to me said: if you look at a quiz and the longest answer in each question is usually the correct one, the quiz was not well crafted. Beginners, especially SMEs, tend to make the correct answer the longest.
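That longest-answer tell is easy to check mechanically. Here's a minimal Python sketch that flags question banks where the correct option tends to be the longest; the question format (a list of dicts with `options` and `correct` keys) is an assumption for illustration, not a Quizmaker '09 export format.

```python
# Hypothetical check for "longest answer is correct" bias in a quiz.
# Assumes each question is {"options": [...], "correct": "..."}.

def longest_answer_bias(questions):
    """Return the fraction of questions whose correct option is the longest."""
    if not questions:
        return 0.0
    biased = 0
    for q in questions:
        longest = max(q["options"], key=len)
        if longest == q["correct"]:
            biased += 1
    return biased / len(questions)

quiz = [
    {"options": ["Paris", "London", "A city in northern France"],
     "correct": "A city in northern France"},
    {"options": ["Yes", "No"], "correct": "No"},
]
print(longest_answer_bias(quiz))  # 0.5: one of two questions shows the bias
```

If the fraction comes back near 1.0 across a whole bank, it's a hint to go back and pad the distractors (or trim the correct answers) so length gives nothing away.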
I agree with Bruce: most corporate eLearning can be scenario driven, and so can its assessments. The idea is not to test knowledge retention directly but to test application of knowledge. You are primarily testing whether they have learned to apply the knowledge, but you are also testing retention at the same time.
I generally use knowledge-retention quizzes as formative assessment peppered through the learning, to help learners self-test their understanding and to provide rich feedback on progress. Then I use a scored summative assessment at the end to test whether they can apply the knowledge, synthesise it, generalise it to broader frameworks and situations, etc.
I've found that using Bloom's Taxonomy question starters helps me get a mixture of easy, intermediate and advanced questions, and lets us write formative evaluations at the knowledge and comprehension levels.
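One way to keep that mixture honest is to keep the starters organised by level and pull from them when drafting. This is a small Python sketch; the stems and the level names are illustrative examples of Bloom-style question starters, not a canonical list.

```python
# Hypothetical Bloom's Taxonomy question stems, grouped by level.
# The stems here are examples only; substitute your own list.
BLOOM_STEMS = {
    "knowledge": ["Define...", "List...", "Identify..."],
    "comprehension": ["Explain...", "Summarise...", "Give an example of..."],
    "application": ["How would you use...", "What would happen if..."],
}

def stems_for(levels):
    """Collect question starters for the requested Bloom levels, in order."""
    return [stem for level in levels for stem in BLOOM_STEMS.get(level, [])]

# Drafting a formative quiz at the knowledge and comprehension levels:
print(stems_for(["knowledge", "comprehension"]))
```

Keeping the stems in one place also makes it easy to see at a glance whether a draft quiz leans too heavily on the knowledge level.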
I agree that it's helpful to
This guide from the National Board of Medical Examiners has good ideas and examples for writing challenging multiple-choice questions.
Some of my blog posts that might be helpful: