Assessment Demonstration with Confidence Rating and Dynamic Results

So here is a (much simpler) demonstration of an assessment technique I've been working on for a while.

The key feature of this assessment is that it scores not just whether a submitted answer is correct but also how confident the learner was.

This is useful when you want to identify different groups from your assessment and tailor responses to them:

  • High Score & High Confidence - these are your stars: support them and let them go
  • High Score & Low Confidence - they need to be assured that they're doing the right thing - and we need to understand why they lack confidence
  • Low Score & Low Confidence - OK - they know what they don't know.  Training time...
  • Low Score & High Confidence - Uh oh.  These people just "know" they are right - but they are wrong.  If these people are pilots or train drivers, let's get them out of the driving seat real quick...and then let's work out what the issue is.
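As a rough sketch of the four groups above (Python purely for illustration - the thresholds and 0-to-1 scales are my assumptions, not part of the demo):

```python
def classify(score: float, confidence: float, threshold: float = 0.5) -> str:
    """Map a learner's score and self-reported confidence to one of four groups.

    Both inputs are assumed normalised to the 0..1 range; the 0.5 threshold
    is purely illustrative.
    """
    high_score = score >= threshold
    high_conf = confidence >= threshold
    if high_score and high_conf:
        return "star"                # competent and knows it
    if high_score and not high_conf:
        return "needs reassurance"   # doing the right thing, unsure why
    if not high_score and not high_conf:
        return "needs training"      # knows what they don't know
    return "confidently wrong"       # the risky group: wrong but sure
```

In practice you'd tune the thresholds per assessment rather than hard-coding 0.5.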

The assessment has some other features:

  • On each page you can answer the questions in any sequence, but the Submit button will not appear until every question is answered and (if needed) a confidence score is recorded for each one
  • You will always be prompted to record a confidence score (if required) before you can move to the next question
  • If you choose "Don't Know" you won't be prompted for a confidence score as we can assume you are 100% confident that you "Don't Know"
  • If you change an answer before you click Submit that's OK - the assessment automatically recalculates
  • The assessment "remembers" which topic(s) you have passed and will only show you questions related to topics you have not yet passed
  • Wanna check your previous score?  You can do that after your first run through...
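The gating rule for the Submit button could be sketched like this (the response structure is my assumption - in Storyline this would be driven by variables and triggers):

```python
def can_submit(responses: list) -> bool:
    """Return True once every question has an answer and, where needed,
    a confidence score.

    "Don't Know" answers are exempt from the confidence requirement,
    since we can assume the learner is 100% confident they don't know.
    """
    for r in responses:
        if r.get("answer") is None:
            return False  # unanswered question blocks submission
        if r["answer"] != "Don't Know" and r.get("confidence") is None:
            return False  # answered, but confidence not yet recorded
    return True
```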

I'm curious to know what people think and would be happy to answer any questions or hear any feedback!

10 Replies
Kevin Hayes

Thanks Daniel. Yes, I'm very aware of Amplifire and others globally who offer confidence-based testing as a product.  Obviously this lacks the benefit of a huge database to enable in-depth statistical analysis of responses, and the scoring algorithm is not particularly complex - but I am pleased (and, to be honest, a little surprised) that I was able to implement an assessment methodology this complex within the confines of Storyline 360.

Kevin Hayes

Thanks Phil.

Firebase or a "proper" database solution would definitely be the way forward rather than Google Sheets.

Google Sheets was only intended as an interim solution, and it does require daily tweaking to keep it working.

Our intent is to migrate to an xAPI LRS database in the near future.

That being said, I've also been impressed at how far we've managed to push Google Sheets to capture the data from the assessment.  In total there are about 150 data items recorded within the assessment (the real one - not the demo), which includes the answer (correct or not) for every single question submitted by every learner.

Thus far it has recorded over 2,200 assessment results from learners located across the globe.  One great feature of Google Sheets is that it lets me provide stakeholders with a "live" dashboard of results which they can monitor from a simple web page (using the Google Sheets web-publish option).
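The idea of flattening one attempt into a single spreadsheet row could be sketched like this (the column names and structure are my invention - the real course records around 150 items per attempt):

```python
def build_row(learner_id: str, responses: list) -> dict:
    """Flatten one assessment attempt into a single flat record,
    ready to append as a row in a spreadsheet.

    Hypothetical column layout: one learner id, then per-question
    answer / correctness / confidence columns.
    """
    row = {"learner": learner_id}
    for i, r in enumerate(responses, start=1):
        row[f"q{i}_answer"] = r["answer"]
        row[f"q{i}_correct"] = r.get("correct")
        row[f"q{i}_confidence"] = r.get("confidence")
    return row
```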

Kevin Hayes

You're absolutely right, Diane, and I should have been clearer. 

Questions answered as "Don't Know" are not included in the overall calculation of the correctness/confidence score.

The scoring algorithm keeps count of how many responses are recorded as "Don't Know" and ensures they do not affect the overall calculation.

As you suggest, the confidence rating recorded for a "Don't Know" response is treated as zero for calculation purposes.

In this respect, respondents are not penalised for "Don't Know" (i.e. by showing up as confidently wrong).  They will simply show as in need of training.
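The exclusion logic might look something like this (a sketch of the idea, not the actual Storyline algorithm):

```python
def score_assessment(responses: list) -> dict:
    """Average correctness and confidence over answered questions only.

    "Don't Know" responses are counted separately and excluded from both
    averages, so they can never register as "confidently wrong".
    """
    answered = [r for r in responses if r["answer"] != "Don't Know"]
    dont_know = len(responses) - len(answered)
    if not answered:
        return {"score": 0.0, "confidence": 0.0, "dont_know": dont_know}
    score = sum(r["correct"] for r in answered) / len(answered)
    confidence = sum(r["confidence"] for r in answered) / len(answered)
    return {"score": score, "confidence": confidence, "dont_know": dont_know}
```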

This is probably too much information, but in some roles you might actually prefer people to take a calculated guess rather than answer "Don't Know".

If they're a train driver and unsure what the signal means I definitely don't want them to guess. 

If they're a futures trader for a bank and need to make an investment decision in a fast moving and uncertain situation where all the facts cannot be known - well then refusing to act unless 100% certain in that role could actually be a drawback.


Kevin Hayes

Sure - I've placed the Storyline File here using WeSendIt:

The only branching I applied is that the test keeps track of which modules you have passed; so when you do the test again it will only show you the modules you have not passed and will "skip" the modules you have passed.
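The module-skipping idea reduces to something like this (a sketch - in the real course this state lives in Storyline variables and triggers, not Python):

```python
def modules_to_show(all_modules: list, passed: set) -> list:
    """On a retake, present only the modules the learner hasn't passed yet.

    `all_modules` preserves the course order; `passed` is the set of
    module names recorded as passed on earlier attempts.
    """
    return [m for m in all_modules if m not in passed]
```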

You could also use this logic to branch within a single sitting of the assessment (i.e. if an answer is correct or wrong, or if a confidence score is above or below a value) - I suppose that would mean if a person gets a question incorrect (or doesn't know) you could provide immediate feedback with links to refresher material...or whatever :)