Multiple weights to a single answer

Jun 29, 2019

Howdy,

I have been searching for weeks for an answer, so maybe I am asking the wrong question. I am building what I call Mock-OJTs, where the learner is presented with a prompt from actual calls and then a selection of answers. Each answer needs to carry two different weights: one for whether the answer is technically right, and one for whether they chose the appropriate way to deliver it.

The idea is to measure their knowledge of a technical issue and their understanding of appropriate customer service. The answers need different weights because the result isn't just a percentage of right or wrong: learners get a two-part result, a 1-5 Likert score for their knowledge and another for their courtesy. These feed into their KPIs, so we want the results to mimic what their post-call survey results would look like.

I have tried multiple avenues, including assigning two variables to one question, but I have not been able to work out the logic.

Thanks!

5 Replies
David Schwartz

Hi Rick,

If I understand the issue, each of the questions will allow only one answer, but each answer has different weights for knowledge and courtesy.

If that's right, what I am attaching should be a step in the right direction. I didn't use the quiz questions; instead I set up buttons in a button set (so the user can change answers, and the initial choice is undone). For each screen, I use two variables, knowledge and courtesy, and each gets set according to the button selected.

I added another button, called Submit, that adds each of the variables to its respective summed variable (sum_knowledge and sum_courtesy) and then goes on to the next screen.

It was quick and dirty, and I didn't take into account what happens if the user clicks PREV to go back. If you want users to go backward, I would be inclined to have individual variables for each screen, e.g., knowledge1 and courtesy1, and then subtract them from the totals when the user clicks PREV.
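The per-screen bookkeeping described above can be sketched in code. This is a hypothetical illustration, not anything Storyline itself runs: the `ScoreTracker` class, its method names, and the screen IDs are all invented here to show the idea of per-screen variables (knowledge1, courtesy1, and so on) that can be subtracted back out of the totals when the user clicks PREV.

```python
class ScoreTracker:
    """Running totals with undo support for back navigation.

    On Submit, a screen's knowledge and courtesy points are added to
    the totals and remembered per screen; on PREV, that screen's
    points are subtracted again so re-answering never double-counts.
    """

    def __init__(self):
        self.sum_knowledge = 0
        self.sum_courtesy = 0
        self.per_screen = {}  # screen id -> (knowledge, courtesy)

    def submit(self, screen, knowledge, courtesy):
        # Record this screen's points and fold them into the totals.
        self.sum_knowledge += knowledge
        self.sum_courtesy += courtesy
        self.per_screen[screen] = (knowledge, courtesy)

    def go_back(self, screen):
        # Undo this screen's contribution; safe even if never submitted.
        k, c = self.per_screen.pop(screen, (0, 0))
        self.sum_knowledge -= k
        self.sum_courtesy -= c
```

For example, submitting two screens and then pressing PREV on the second leaves only the first screen's points in the totals.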

Let me know what you think.

 

Rick Jacobs

Hey David et al,

Here is the engine I created based on your insights. I added some features to get the result I was looking for. Your file definitely helped me solve the one thing I was struggling with - thanks! The file is just the engine; it isn't prettied up at all.

This version:

Each question has the flexibility to assign one point to Courtesy (the courtesy variable) and one point to Knowledge (the knowledge variable).

Each answer can award any mix of 1s and 0s. As each question is answered, the result is tracked in "sum_courtesy" and "sum_knowledge." Selecting an answer automatically advances the eLearning - there is no going back.

I added an "answer" variable that is always set to 1, to account for branching scenarios based on how a user answers. It is accumulated along the way in "sum_answer."

The final score is a 1-5 Likert scale, 5 being the best, for how the user performed in negotiating their call. It is computed by dividing "sum_knowledge" by "sum_answer," multiplying by 100, then dividing by 20, which yields a value in the 1-5 range.

It cannot drop to zero, because the learner either scores something or the call is "escalated" to management and the course ends.
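The scoring formula above can be sketched as a small function. The variable names come from the post; the function itself and its zero-check are assumptions for illustration, since Storyline expresses this as triggers rather than code.

```python
def likert_score(sum_knowledge, sum_answer):
    """Convert summed points into a Likert-style score.

    Mirrors the formula described above:
    (sum_knowledge / sum_answer) * 100 / 20
    """
    if sum_answer == 0:
        raise ValueError("at least one question must be answered")
    return (sum_knowledge / sum_answer) * 100 / 20

# e.g. 4 knowledge points across 5 answered questions:
likert_score(4, 5)  # → 4.0
```

Note that dividing by sum_answer gives a fraction of points earned, multiplying by 100 turns it into a percentage, and dividing by 20 rescales that percentage onto the 5-point scale; a perfect run (sum_knowledge equal to sum_answer) scores exactly 5.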

Thanks for your help and I hope this is of benefit to some people. I designed this to test knowledge and understanding of tech support and customer service concepts.

David Schwartz

Hi Rick,

Looks great! I love one thing in there that I would never have thought of: naming the scenes to indicate which screen branched to them. So often I look at the story view of a module, and it's tough to determine what branches to what without clicking around. I will definitely use that in the future.

 

This discussion is closed. You can start a new discussion or contact Articulate Support.