I did manage to cobble together a solution based on various suggestions, but I'm not sure how well it will perform in a published course. Importantly, I wanted partial credit for each correct hotspot selected (another bit of functionality I'd like to see in Storyline for multiple-select questions, but that's another topic). I'm laying this out here because I found it frustrating that the feedback I got was not what I was looking for.
I have an ungraded multiple-hotspot question before the graded quiz. There are 7 correct hotspots, and I only allow the user to make 7 selections among many options. There is a tally that shows how many have been selected and an option to reset and start over. The correct count and the click count are captured in variables; call them NumCorrect and NumTotal. Once they make 7 selections, a pop-up tells them they have to submit or reset. (The reset button not only resets the variables, it also clears the shading that shows which areas they already clicked.)
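In case it helps to see the bookkeeping spelled out, here's a rough TypeScript sketch of the variable logic described above. Storyline handles this with triggers rather than script (though an Execute JavaScript trigger using GetPlayer().GetVar()/SetVar() could do the same), so treat this purely as an illustration; the hotspot IDs in correctHotspots are made-up placeholders.

```typescript
// Sketch of the selection bookkeeping, assuming 7 allowed clicks.
// NumCorrect / NumTotal mirror the Storyline variables described above;
// the hotspot IDs are hypothetical placeholders.
const correctHotspots = new Set(["A1", "B3", "C2", "D4", "E1", "F2", "G3"]);
const MAX_SELECTIONS = 7;

let NumCorrect = 0;
let NumTotal = 0;
const selected = new Set<string>();

function clickHotspot(id: string): void {
  if (NumTotal >= MAX_SELECTIONS || selected.has(id)) return; // ignore extra or repeated clicks
  selected.add(id);
  NumTotal += 1;                                  // the on-screen tally
  if (correctHotspots.has(id)) NumCorrect += 1;   // partial-credit counter
  // When NumTotal reaches 7, the course shows the "submit or reset" pop-up.
}

function reset(): void {
  // The Reset button zeroes the variables and clears the shading.
  NumCorrect = 0;
  NumTotal = 0;
  selected.clear();
}

clickHotspot("A1"); // example: NumCorrect === 1, NumTotal === 1
```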
Then the graded quiz starts. In the graded quiz, I have 7 dummy True/False questions that are blank (a background-color square covers the slide) and for which the correct answer is True (1 point). I put a graphic on those pages that looks like the spinning progress wheel Articulate uses and rotated it differently on each slide, so it gives the illusion that it's spinning. On each fake question slide, an if/then/else trigger executes: if NumCorrect >= (some number), it sets the True/False to True, which adds a point to the point total. So on FakeSlide1, if NumCorrect >= 1, it sets the answer to True and adds 1 point to the quiz total; on FakeSlide2, if NumCorrect >= 2, it sets the answer to True and adds 1 point; and so on out to 7. Each slide is set to auto-advance.
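To see why this adds up to partial credit, here's a hedged TypeScript sketch of the threshold logic. Again, this just models what the triggers do, not actual Storyline script; the function names are mine.

```typescript
// Each fake slide n checks whether NumCorrect has reached its threshold.
// If so, its hidden True/False is answered "True" and scores 1 point, so the
// summed quiz score equals NumCorrect (0 to 7), which is the partial credit.
function scoreFakeSlide(slideThreshold: number, NumCorrect: number): number {
  return NumCorrect >= slideThreshold ? 1 : 0; // thresholds run 1..7
}

function totalPoints(NumCorrect: number): number {
  let points = 0;
  for (let n = 1; n <= 7; n++) {
    points += scoreFakeSlide(n, NumCorrect);
  }
  return points; // equals NumCorrect whenever NumCorrect is between 0 and 7
}

console.log(totalPoints(5)); // 5: five of the seven dummy questions score a point
```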
I tried making the timeline on the fake slides as short as possible, but then it wouldn't add the points correctly, auto-advancing before the calculation executed, so I had to lengthen the timeline on these slides. As I'm writing this, I'm asking myself whether I can link the auto-advance to the execution of the calculation, so I will go back and investigate that. This calculate-before-advancing sequence is my concern about how well this will work once published: if the user's computer or network connection is slow, could the slide still advance before the calculation runs, even with the longer timeline?
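If linking the advance to the calculation turns out to be possible (for example, a trigger that jumps to the next slide when a variable changes), the idea would look something like the sketch below. CalcDone, advance, and runSlideCalculation are hypothetical names used only for illustration, not Storyline features.

```typescript
// Idea: advance only after the calculation has finished, instead of after a
// fixed-length timeline. CalcDone stands in for a hypothetical Storyline
// variable; in the course this would be something like a "jump to next slide
// when CalcDone changes" trigger rather than script.
let CalcDone = false;

function advance(): void {
  console.log("advancing to the next slide");
}

function runSlideCalculation(slideThreshold: number, NumCorrect: number): void {
  const answeredTrue = NumCorrect >= slideThreshold; // the if/then from the triggers
  console.log(answeredTrue ? "select True (1 point)" : "leave as False (0 points)");
  CalcDone = true; // last step of the calculation
  advance();       // runs only after the scoring work above is done
}

runSlideCalculation(1, 5); // example: threshold 1, five correct selections
```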
The other downside of this solution is that I had to disable the Review option, because review wouldn't go to that slide. Since this training is for compliance purposes, I find that unfortunate.
So, I still think this is an obvious feature to make available within the graded quiz (as is partial credit for multiple-correct-answer questions). I have a lot of beefs with the software being buggy in general, and I worry about how well this course will perform, since the preview sometimes doesn't work properly (if I exit and redo the preview, it works, so it's not a problem with my course).
Apologies in advance if there is a sleeker solution or I used the wrong terminology, etc. I taught myself this software and have only been using it for a few days because I'm not supposed to be working on this course "on the clock". I also thought I'd just be finishing up someone else's previous training and wouldn't really need to learn the software, so I didn't invest the time in all the trainings offered. If you have a more streamlined solution, please comment!!! :)