Alternative Question if Failed

Jan 23, 2013

Hello everyone. I have a test with a 100% passing score. My ISD wants to give the learner a chance to pass it, so she has created an alternative set of questions. The students are not supposed to know whether they answered a question correctly. If they answer it correctly, they will be redirected to the second set of questions. If they don't answer correctly, the alternative question will be loaded. Is there an easy solution to this (other than faking it)? Thank you.

21 Replies
Irina Flowers

Hi Donna, thank you for your reply. OK. I have 12 questions (6 sets). The student only needs to answer 6 questions correctly to receive a 100% score. The ISD wants to give the student a chance to answer an alternative question if they fail a question.

Something like this:

Question 1

If Correct - Go to set 2

If Incorrect - Load an alternative question from set 1

Does this make sense?

Irina Flowers

Hi Donna,

If Question A (from Set 1) is answered incorrectly, it should load an alternative Question B (from Set 1), so, yes, a specific question.

Set 1: Question A and Question B

If Correct - go to Set 2

If Incorrect - Go to Set 1 Question B

Set 2: Question 2A and Question 2B

If Correct - go to Set 3

If Incorrect - Go to Set 2 Question 2B

and so on. 
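Sketched as pseudologic (the slide names are placeholders, and in Storyline this would be triggers rather than code), the flow I'm after is roughly:

```typescript
// Hypothetical model of the flow above. Each set holds a primary
// question (A) and one alternate (B); slide names are placeholders.
type QuestionSet = { primary: string; alternate: string };

const sets: QuestionSet[] = [
  { primary: "Q1A", alternate: "Q1B" },
  { primary: "Q2A", alternate: "Q2B" },
  // ...sets 3 through 6 follow the same pattern
];

function nextSlide(setIndex: number, onAlternate: boolean, correct: boolean): string {
  const nextSet = setIndex + 1 < sets.length ? sets[setIndex + 1].primary : "Results";
  if (correct) {
    return nextSet; // correct on A or B: move on to the next set
  }
  if (!onAlternate) {
    return sets[setIndex].alternate; // first miss: load this set's B question
  }
  return nextSet; // missed both A and B: not specified above, assumed to move on
}
```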

Justin Wilcox

Hi Irina.

Maybe someone has a creative solution for you, but I really see this as a case where drawing 6 questions from a 12-question quiz bank is the best solution. If you are not allowing them to answer those first 6 "required" questions again when they get one wrong, then they aren't really required, are they? If you give them 6 alternate questions when they don't answer the first 6, the person taking the quiz has not proven that they can answer the first six required questions.

So rather than worrying about that, you could simply draw 6 questions from one bank. You could also lock those six questions so they have to take them and then pull a random number of other questions and require 100%.
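Conceptually, the draw is nothing more than a random sample from the bank. Something like this sketch (an illustration, not Storyline's actual internals):

```typescript
// Illustrative only: randomly drawing 6 questions from a 12-question
// bank, which is conceptually what a draw-from-bank slide does.
function drawQuestions<T>(bank: T[], count: number): T[] {
  const pool = [...bank];
  // Fisher-Yates shuffle, then take the first `count` items.
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, count);
}

const bank = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6",
              "Q7", "Q8", "Q9", "Q10", "Q11", "Q12"];
const quiz = drawQuestions(bank, 6); // e.g. ["Q7", "Q2", "Q11", ...]
```

Locking questions in the bank just means those slides are always included in the draw rather than being part of the random pool.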

Steve Flowers

Here's the way I'm reading this one. You have multiple objectives and each objective offers a single (or multiple) question or activity to validate the objective. However, you want to give the participant a 2nd chance if they bomb an objective. Is that right?

This is a bad news, good news, bad news situation.

  • The bad news is there aren't any tools on the market that I know of that make this case easy to create. That includes Storyline.
  • The good news is that you can (sort of) create this effect in Storyline with some serious bending and stacks of triggers.
  • The bad news is that it's complicated, and things you'd expect from built-in features, like review, might not work after you've created such a custom/complex assembly. Maintenance could also be complicated by such an engineered build.

It might be better to create a simpler mechanism. For example, you might have a single test with multiple banks (see an example described here for how that might work). Then offer single bank "retakes" for the sections that the learner didn't do well on to capture the second try.

The change to the example linked above would take out the seamless re-pull of questions for the objective to offer a single flow through the stitched quiz.

Alexandros Anoyatis

Hi Irina,

You could try this. I think it might work (it depends on how you want your "reporting" to function):

1. Create a text variable (e.g. "qid").
2. Create 1 trigger in each question slide that sets the value accordingly (1a, 1b, 2a, 2b, etc.)
3. On the feedback-correct master, set triggers as follows:
"Go to Slide 2a when timeline starts/ends (depending on your setup) if value of qid is 1a or 1b"
"Go to Slide 3a when timeline starts/ends (depending on your setup) if value of qid is 2a or 2b"   etc.

...........................................................
"Go to Slide "Results" when timeline starts/ends (depending on your setup) if value of slide is 6a or 6b"

4. On the feedback-incorrect master, set triggers as follows:
"Go to Slide 1b when timeline starts/ends (depending on your setup) if value of qid is 1a"

"Go to Slide 2b when timeline starts/ends (depending on your setup) if value of qid is 2a"

...........................................................
"Go to Slide "Results" when timeline starts/ends (depending on your setup) if value of slide is 6b"

There is a more elegant way (using a numeric variable), but I think this one makes more sense right now.
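For clarity, here is the same routing written out as lookup tables (the slide IDs are the same placeholders as above; in Storyline these live as conditions on the feedback-master triggers, not as code):

```typescript
// Sketch of the qid-based routing described in the steps above.
// Correct answers always advance to the next set's A question.
const correctRoute: Record<string, string> = {
  "1a": "2a", "1b": "2a",
  "2a": "3a", "2b": "3a",
  "3a": "4a", "3b": "4a",
  "4a": "5a", "4b": "5a",
  "5a": "6a", "5b": "6a",
  "6a": "Results", "6b": "Results",
};

// A missed A question falls back to its B alternate; a missed 6b ends
// the quiz. (The steps above don't spell out where a missed 1b-5b goes;
// sending it on to the next set is an assumption.)
const incorrectRoute: Record<string, string> = {
  "1a": "1b", "1b": "2a",
  "2a": "2b", "2b": "3a",
  "3a": "3b", "3b": "4a",
  "4a": "4b", "4b": "5a",
  "5a": "5b", "5b": "6a",
  "6a": "6b", "6b": "Results",
};

// Which slide to jump to when a feedback layer's timeline starts/ends.
function nextSlide(qid: string, wasCorrect: boolean): string {
  return wasCorrect ? correctRoute[qid] : incorrectRoute[qid];
}
```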

I really hope this works for you.

Alex

Donna Morvan

Hello Irina,

Here's what I came up with.

First off, it's going to be a maze if you really want to stick to this style of questioning. I was able to deduce pretty much the same logical picture in my head as illustrated by Steve, and I made use of triggers to get the effect you wanted; the triggers were set to snake around the questions.

As a tip: consider Steve's advice. Coming from the perspective of someone who wants to make sure learners actually did learn something, it makes more sense to do it their way. Feedback is crucial. I don't understand why we would want to "fool" learners (for lack of a better term) into thinking they got 100% when they didn't. :(

Hope this helps.

Donna Morvan

Irina Flowers said:

Hey Donna, I think this is what the ISD wants. Her logic is this: if the student is required to pass the test with 100% and they know that they made an error, they will just quit the test.


Hello Irina,

There are other ways you can have a learner achieve 100%. You can limit the number of attempts per question and automatically direct them to a slide that tells them what they need to review so that when they retake it, they are more prepared and the chances of acing it are better.

You also have to consider setting more realistic and progressive targets. While in my fantasy world all of them get 100% on the first try, we all know that this does not happen all the time unless we pretty much spoon-feed the answers.

Maybe we can push back and re-evaluate what the goal really is. Achieve 100% for the sake of it? Or do we want our learners to come out of it much more informed and equipped to do better at what they do?

Donna

Steve Flowers

Yep. I think keeping it simple is the best option. My "WTF sense" would go off if I saw a second question from the same bank while keeping count in my head and encountered 8 questions when I expected 5.

If folks need to retest for an objective, just link separately to those tests if they don't do well. There's a way you could set up a retest that only includes the objectives they didn't demonstrate mastery of... it's complicated, but possible. Probably best to just set it up simply and let the participant select the path for the things they didn't complete.
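As a rough sketch of that complicated-but-possible version (all names are hypothetical), it boils down to tracking a mastered flag per objective and assembling the retest from the misses:

```typescript
// Hypothetical sketch: track mastery per objective from the first
// attempt, then build a retest containing only the missed objectives.
interface ObjectiveResult {
  objective: string;  // e.g. "Objective 2"
  testSlide: string;  // slide or bank that retests this objective
  mastered: boolean;  // set from the first attempt's results
}

function buildRetest(results: ObjectiveResult[]): string[] {
  // Link only to the tests for objectives not yet demonstrated.
  return results.filter(r => !r.mastered).map(r => r.testSlide);
}

const firstAttempt: ObjectiveResult[] = [
  { objective: "Objective 1", testSlide: "Retest1", mastered: true },
  { objective: "Objective 2", testSlide: "Retest2", mastered: false },
  { objective: "Objective 3", testSlide: "Retest3", mastered: false },
];

buildRetest(firstAttempt); // ["Retest2", "Retest3"]
```

In Storyline you'd hold the per-objective flags in variables and gate the links with trigger conditions; letting the participant pick their own path just replaces the filter with a menu.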

Steve Flowers

The other advantage of the separate retake scenario: it gives the participant the chance to bone up on the things they didn't master. Sending them immediately to a substitute question is a second chance that makes up for "oops" but doesn't make up for "I really don't know how to do that". A single question per objective is also problematic from a statistical perspective: more challenges will yield more reliable data.

Steve Flowers

Here's a quick and dirty alternate example. In this one, rather than retaking a single question (which is statistically questionable anyway), the participant is shown a grid of their strengths and weaknesses then given a single opportunity to increase their score.

There are a few custom variables to make this work. I also employed question banks for each question, since they belong to separately tracked topics. This seems strange, but this container is a good idea in the long run. Inevitably, folks will realize that a single question isn't a valid mechanism for testing mastery. When that happens, all you need to do is add to the existing banks and reconfigure the pulls.

The blank results slides between banks process the results, shove a value in a placeholder variable for calculation later, and shuttle on to the next bank.

Each of the 2 scenes is nearly identical except for the banks and variable references within the result slide triggers.
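Conceptually, the variable plumbing amounts to this (the names are invented for the sketch; in the actual file these are Storyline variables and result-slide triggers):

```typescript
// Sketch of what the blank results slides do: each one captures its
// bank's score in a placeholder variable, and the final results slide
// combines the placeholders into one overall score.
const bankScores: Record<string, number> = {};

// Each in-between results slide effectively does this for its own bank,
// then jumps straight to the next bank.
function captureBankScore(bankName: string, scorePercent: number): void {
  bankScores[bankName] = scorePercent;
}

// The final calculation: average the per-bank placeholders.
function overallScore(): number {
  const scores = Object.values(bankScores);
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}

captureBankScore("Topic1", 100);
captureBankScore("Topic2", 50);
overallScore(); // 75
```

The actual file does all of this with triggers; the sketch is just the shape of the calculation.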
