I've been trying to learn how to use macOS VoiceOver so I can test courses with a screen reader. That has been challenging, but I've gotten the hang of it enough to start testing. I decided to update the element of a course that seemed the least accessible: a drag-and-drop interaction. I replaced it with a "select one" option for each of the items that were previously dropped. I haven't loved how layers behave with the screen reader, so I made each object/question its own slide, though I don't know if that's entirely necessary. In my old version, the object being dropped was an image whose descriptive text only appeared on hover, which is also not accessible (or at least I couldn't figure out how to make it work well), so I changed the dropped object to a box containing the descriptive text. To keep the fun, visually interesting element of the object moving to its owner, I added a motion path that plays when the answer is clicked and changed the object's placed state back to the original image.
I also read that auto-playing music and sounds don't work well with a screen reader, so I set the default to no sound and added an initial screen where the user can turn sound on. That choice sets a variable that the triggers on each sound-enabled slide check before playing audio. I also fixed the focus order (removing a lot of objects that don't need to be read aloud) and added alt text as needed.
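For anyone curious what that opt-in sound logic looks like, here is a minimal JavaScript sketch of the idea. The variable name "soundOn" and the trigger functions are my own illustrative assumptions, not the actual names in the course, and the small player object below stands in for Storyline's runtime (which exposes variables through GetPlayer().GetVar/SetVar) so the sketch is self-contained:

```javascript
// Stand-in for Storyline's player; the real one comes from GetPlayer().
// Assumption: a true/false variable named "soundOn" controls audio.
const player = {
  vars: { soundOn: false }, // default is sound off, per the course setup
  GetVar(name) { return this.vars[name]; },
  SetVar(name, value) { this.vars[name] = value; },
};

// Trigger on the initial screen: the user chooses to turn sound on.
function onEnableSoundClicked() {
  player.SetVar("soundOn", true);
}

// Trigger on each slide that has audio: only play if sound is on.
function onSlideTimelineStart(playAudio) {
  if (player.GetVar("soundOn")) {
    playAudio();
  }
}
```

With this shape, every sound-enabled slide checks the one shared variable instead of each slide carrying its own on/off state, which is what makes the single opt-in screen work.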
I also updated a slider interaction, which involved adding a trigger so animation sounds only play when sound is turned on, fixing the focus order, and adding alt text for the buttons and the slider.
Demo of updated course slides:
https://360.articulate.com/review/content/6220eb5f-71a3-4bdc-bfd8-d62e75213237/review

File:
https://drive.google.com/file/d/1igOsFP6qZ2RUOJ4FahskL9mL_p_JVsLx/view?usp=share_link