Help, I'm stuck in the wrong percentage trap and I can't get out!

Nov 30, 2017

I'm having a design issue with scored questions in Storyline 2 and would appreciate your ideas here.

  1. The course branches into 4 tracks; users can take any track. 
  2. There are 5 question banks: 1 for each track, plus 1 that's common to all.
  3. Users should receive 3 randomized questions from the question bank for their track, then 2 randomized questions from the common question bank. And they do.

The problem is the scoring percentages.

If I put the common question bank and results slide in a merged common path at the end, the scoring does not work. The points increment correctly, but the percentages do not, because the user will never see the questions that are in the question banks they did not visit. Nor should they. But they are being judged on them.

If I duplicate the common question bank and results slide within each track, then the results slide can be set for that track alone and displays properly...but the course itself can only be set to send ONE of the four results slides to the LMS. 

I know I can't send points or any other custom variables to the LMS. I'm pretty sure there's no fancy way that I can manually increment the percentage variable to only count what users actually see. So what can I do to get the correct percentage sent when the completion triggers? Thanks! 
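For what it's worth, the mismatch is just arithmetic: the merged results slide divides earned points by the points possible across every bank it tracks, while the fair score divides by the points attached only to the questions actually drawn. A minimal sketch in plain JavaScript, with invented point values (the 50/140 split is purely hypothetical, not from the actual course):

```javascript
// Hypothetical numbers: the learner answers all 5 drawn questions
// correctly (50 points earned, 50 possible), but the merged results
// slide also counts three unvisited track banks (90 more possible
// points, for 140 total).

function percent(earned, possible) {
  return Math.round((earned / possible) * 100);
}

const earned = 50;
const possibleSeen = 50;      // points on questions the learner was shown
const possibleAllBanks = 140; // what a merged results slide tracks

percent(earned, possibleAllBanks); // 36 — a perfect run looks like a fail
percent(earned, possibleSeen);     // 100 — what should reach the LMS
```

Same points earned, wildly different percentages, which is exactly the trap described above.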



4 Replies
Michael Sheyahshe

So, in terms of the LMS, you say it uses SCORM 1.2, correct? If so, are you required to send the learner's SCORE in addition to the COMPLETED status to the LMS?

If sending the score isn't required, you can send the COMPLETE message to the LMS regardless of score by using the number of slides viewed (as you point out, above).

In this scenario, you would not allow learners to reach the end slide (which fulfills the number of slides needed for COMPLETE to fire) until they pass (reach a certain percentage on) whatever randomized questions they receive.

One way to accomplish this is to create custom variables in SL2 to track the score, rather than contributing to a centralized results slide. Does each question have feedback layers (usually Correct/Incorrect)? If so, you can use a trigger on the Correct layer to increment a variable (e.g. "currentScore"). After each branched quiz, you can evaluate the variable, then branch to a "try again" slide OR to the end slide (which would trigger the COMPLETE).
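In trigger terms, the gate at the end of each branch boils down to one comparison. Storyline triggers aren't code, so this is just a rough JavaScript sketch of that logic; the variable names (currentScore, maxScore, passPercent) are placeholders, not anything built into Storyline:

```javascript
// Sketch of the per-branch gate, not actual Storyline trigger syntax.
// "currentScore" is incremented by a trigger on each question's
// Correct feedback layer; "maxScore" and "passPercent" are set
// per track when the branch starts.

function nextSlide(currentScore, maxScore, passPercent) {
  const pct = (currentScore / maxScore) * 100;
  // Pass: jump to the shared end slide, which fires the
  // slides-viewed completion. Fail: branch to a "try again" slide.
  return pct >= passPercent ? "endSlide" : "tryAgainSlide";
}

nextSlide(4, 5, 80); // "endSlide"      (4 of 5 correct = 80%, meets threshold)
nextSlide(3, 5, 80); // "tryAgainSlide" (3 of 5 correct = 60%, below threshold)
```

Because only the questions the learner actually saw feed into currentScore and maxScore, unvisited banks never enter the denominator, which sidesteps the percentage trap entirely.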

Of course, that's just one way to accomplish this. We can talk more if you need clarification or if I'm not being clear above.

HTH. Good luck.
