We developed a JavaScript confidence-score slider for some of our assessments. Is anyone doing confidence-level checking with your assessments? If so, what instrument, method, or algorithm are you using to associate the question score with the confidence score, and how do you display that on the Results slide? Thanks.


## 6 Replies

Hi David!

Do you mean a Likert-scale type of thing? If so, we have a Likert question type, but it's non-graded. What kind of score are you looking for? Could you explain your specific use case? It may be easier for people to jump in with suggestions if they know exactly what you're trying to do and why. Thanks!

Thanks, Allison, for the clarification request. Here goes. We already have the slider and JavaScript code completed for the confidence score (we're aware of the Likert question in Storyline), and both scores appear on the Results screen. If a person scores 90% and passes but reports a confidence score of 40% (meaning they did a lot of guessing), their effective passing score might not really be 90%. We're trying to determine how to build some validity into a composite score that combines the assessment and confidence scores. If the assessment score was 90%, the true composite score might be more like 65 or 70, which could mean the person did not actually "pass" that assessment.

Our course designs will offer alternative content and modes of learning as a means of improving both content knowledge and learner confidence. K-12 schools and universities are already using confidence scoring with various models to determine that final score. Without getting too deep, we're looking for suggestions from the community on how they have done the same thing, or how they would create that valid composite score. Hope this is a little clearer. Thanks, everyone.
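One simple way to combine the two scores is a weighted average; the 60/40 weighting below is an illustrative assumption on my part, not a validated model. With those weights, the 90%/40% example above works out to a composite of 70%, in line with the 65-70 range mentioned:

```javascript
// Sketch: weighted-average composite of assessment and confidence scores.
// The 60/40 weighting is an illustrative assumption, not a validated model.
function compositeScore(assessmentPct, confidencePct, assessmentWeight = 0.6) {
  const confidenceWeight = 1 - assessmentWeight;
  return assessmentPct * assessmentWeight + confidencePct * confidenceWeight;
}

// The example from the thread: 90% assessment with 40% confidence
console.log(compositeScore(90, 40)); // 70, below a typical 80% passing threshold
```

Tuning `assessmentWeight` is where the validity question lives: the closer it is to 1, the less the confidence rating can pull a passing score down.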

I have an idea that might work for you, but I need to know two things first:

How many questions would a typical assessment have (without the additional confidence question)?

Would the confidence score carry equal weight with the question? In other words, right answer = 10 pts, and 100% confidence = 10 pts, 90% = 9 pts, 80% = 8 pts, etc.

Or do you want the confidence to be a magnifier of both good and bad options?

Right choice + high confidence = lots of points

Right choice + low confidence = fewer points

Wrong choice + low confidence = no points

Wrong choice + high confidence = negative points
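The magnifier option can be sketched in a few lines; the 10-point base value and the 50% confidence pivot for wrong answers are assumptions for illustration, not a proposal from the thread:

```javascript
// Sketch of the "magnifier" option: confidence scales each question's points.
// Base points (10) and the 50% pivot for wrong answers are assumptions.
function magnifierPoints(isCorrect, confidencePct, basePoints = 10) {
  const c = confidencePct / 100; // normalize to 0..1
  if (isCorrect) {
    return basePoints * c; // right + high = most points, right + low = fewer
  }
  // wrong + low confidence = no points; wrong + high confidence = negative
  return -basePoints * Math.max(0, c - 0.5) * 2;
}
```

With these assumptions, a correct answer at 100% confidence earns 10 points, a wrong answer at 100% confidence costs 10, and a wrong answer below 50% confidence earns (and costs) nothing.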

Hi Owen - first, let me say how pleased I am to read and use all of your prior suggestions, tips, and techniques. Thanks for your deep participation in the community.

Now, my biggest issue is validity: is there a specific algorithm we should be considering? (I came across a reference to "Bier Confidence" - perhaps the Brier score?) There are normally no more than 15 questions in an assessment. The questions are equally weighted, and the confidence rating is independent: the learner drags across the slider, and a variable reference box displays the confidence score.
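If "Bier Confidence" refers to the Brier score, a standard measure of how well stated confidence matches actual outcomes, it is simple to compute. This sketch assumes confidence is stored as 0-1 and correctness as a boolean, which may differ from how the slider variable is stored:

```javascript
// Sketch: mean Brier score over a set of responses. Lower is better
// (0 = perfectly calibrated, 1 = worst possible).
function brierScore(responses) {
  const sum = responses.reduce(
    (acc, r) => acc + Math.pow(r.confidence - (r.correct ? 1 : 0), 2),
    0
  );
  return sum / responses.length;
}

brierScore([{ confidence: 0.9, correct: true }]); // ~0.01: confidently right
brierScore([{ confidence: 0.4, correct: true }]); // ~0.36: right, but guessing
```

Note that the Brier score measures calibration rather than knowledge, so it would complement the assessment score instead of replacing it.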

The confidence score does not need to be a magnifier - just a static number from the slider that can be used to adjust the assessment score. Again, if the learner received a score of 90 but a confidence of 55, a different (probably lower) score should be generated and presented to the student. This has a ripple effect: the Print button disappears and no certificate is issued at first, and the learner may receive specialized remediation content, on a per-question basis, to boost their lower confidence. That remediation can replace the quiz retry whenever any question receives less than 50% confidence, or we can attach it to the Retry button so the learner is presented with the additional learning immediately (Will Thalheimer's point about providing immediate feedback reinforcement). Sorry for such a long response.
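The ripple effect described here (hide the Print button, withhold the certificate, route to remediation) can hang off a single gating check. The 80% passing threshold and the returned fields below are assumptions for illustration:

```javascript
// Sketch: gate Results-slide actions on the adjusted composite score.
// The 80% passing threshold is an assumed value.
function resultActions(compositePct, passingPct = 80) {
  const passed = compositePct >= passingPct;
  return {
    showPrintButton: passed,     // hide the Print button on a fail
    issueCertificate: passed,    // withhold the certificate at first
    routeToRemediation: !passed, // send the learner to remediation content
  };
}
```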

Sorry to extend this, but our future vision is to use xAPI for performance measurement, data analysis, etc. That is one more reason why the validity of the confidence score comes into play.
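For the xAPI side, one hedged sketch is to report the raw assessment score in the statement's `result.score` and carry the confidence rating in a custom result extension. The extension IRI below is a hypothetical placeholder, not a registered vocabulary entry, and the actor/object values are examples:

```javascript
// Sketch: an xAPI "answered" statement reporting the assessment score in
// result.score and the confidence rating in a custom result extension.
// The extension IRI is a hypothetical example, not a registered one.
const statement = {
  actor: { mbox: "mailto:learner@example.com", name: "Example Learner" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/answered",
    display: { "en-US": "answered" },
  },
  object: { id: "http://example.com/assessments/module-1/q1" },
  result: {
    score: { scaled: 0.9, raw: 90, min: 0, max: 100 },
    extensions: {
      "http://example.com/xapi/extensions/confidence": 0.55,
    },
  },
};
```

Keeping the confidence in an extension leaves the standard `result.score` untouched for LRS reporting, while still making the confidence data available for later analysis.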