Wrong feedback bug in QM 13
We have a recurring bug (it used to be rare, but it keeps happening; in this latest case it appeared twice in about 6 tests).
Answers are getting the wrong feedback.
In the past, neither Articulate nor we were able to replicate this, but it still happens and we still get complaints about it. This might force us to switch to another tool if we cannot count on QM to provide accurate feedback.
Have any of you encountered such a bug?
I'm afraid this might be much more prevalent than we know, since learners often do not know for sure what the right answer is, nor do they always take a screenshot, so we just assume they misremembered what they chose.