I think my quarantine fatigue and real-life fatigue are getting to me: How do you "read" these Rise matching quiz question results?
It looks like 3 and 4 are wrong (which is accurate); however, I can only deduce that from the blue check marks next to 1 and 2. The numbers in the black circles and the matchable numbers to the right are the same. As a user, that tells me they're correct (which isn't accurate).
6 Replies
Hi Heather,
Yes, the checkmarks in the blue circles indicate correct answers.
Also, the black circles with numbers indicate incorrect answers and show how the items on the left should have been matched with the items on the right.
I've created a generic example that may be easier to understand. See below.
It might just be me, but deciphering what this means takes a lot of mental power on my part; I can only imagine what our users who aren't in SL all day make of it.
I think a red X in a circle for incorrect responses would make more sense than this. And, to make it even worse, what if I had gotten all of them wrong? As I said in my OP, the only reason I was able to understand what was wrong was because I had the correct response feedback to compare it to.
I'm not feeling this at all.
Thanks for your response!
If you get them all wrong, the numbers will tell you what the correct matches were as shown below.
We use this question type, although not a lot, and we've never had a learner raise an issue about them. Sure, the feedback can be improved.
Rise consistently uses black to indicate incorrect answers. E.g. in multiple choice questions a black rectangle is placed around wrong answers.
The feedback can be more than "improved"; it should be overhauled.
Because of what I think is poor UX, this feedback puts a lot on the shoulders of learners to connect the dots and make sense of what's incorrect about their choices. The check mark is a solid signal of correctness, but a number in a black circle is ambiguous.
Also, it seems you're basing the legitimacy of this feedback communication on the fact that drag item 3 says "drag item 3" and that *obviously* doesn't match up with "drop target 4" because they're different numbers. Our users don't see any of that. I bet we don't put "drag item 3" and "drop target 4" on these draggables; there's real content there instead, right? This mechanism requires learners not only to make sense of the feedback itself - a check mark (a ubiquitous indicator of correctness) vs. a number in a circle (possibly incorrect?) - but also to compare the content to figure out whether they're correct. And if they're in an assessment, they might not know the difference, which exacerbates misunderstandings, misconceptions, etc.
As developers, we "get it" only because we built it.
Hope I don't come across as hostile. Lol. I just don't think it works at all.
Hi there, Heather. I really appreciate your thorough insight into how we can improve the matching question feedback. I can understand the confusion, and I'll be sure to share all of this with our team. Thank you for being part of this community!
I see my hostility has paid off. :)
Thanks!
This discussion is closed. You can start a new discussion or contact Articulate Support.