Hi,
I'm happy to help. Learners with visual impairments typically use screen readers such as JAWS, NVDA, or VoiceOver; some also use other aids like screen magnification. Screen readers aren't perfect, and they rely on designers and developers thinking about these learners' needs.
Consider a matching question with 4 options. A learner with a visual impairment first has to keep the stem of the question in mind. Then they have to tab (or arrow) down the left-hand column to hear those options, and then tab (or arrow) down the right-hand column to hear that one. They have to hold all 8 options in their mind and try to match them up. Then they have to go back to the left-hand column (using the keyboard commands again), find the option they want to select first, and select it (usually with the return key). Now they have to move to the right-hand column and select the matching item, and repeat as necessary. On top of that, they have to keep track of which responses they have already matched. I don't actually know what they do if they want to change an answer. You might want to try NVDA (it's free) or VoiceOver on a Mac and navigate a course with your eyes closed to get a sense of the challenge.
Matching, sequencing, and similar question types add a massive cognitive load for learners with disabilities. If they have to spend most of their processing power keeping the mechanics of the question in mind, they have little left for actually answering it.
I recently wrote a lessons-learned document about accessibility for a company that had hired third-party testers who were visually impaired. It is humbling and distressing to watch someone spend 20 minutes just trying to find the start button on a course, and that was only one example. Most of the time they had better results, but it made me realize that even the best companies struggle to create learning experiences that are barrier-free.