Debugging procedures for problems between the eLearning client and the LMS
Hi all. It always seems to me that the most difficult bugs to find and squash are the ones that hide between programs (as opposed to within one).
What do you do, for instance, when a course shows a high percentage of incomplete statuses for users who believe (sometimes quite adamantly) that they HAVE successfully completed the course? Some immediate tests come to mind:
- Test across browsers
- Test across OSs
- Test switching between computers mid-course
- Test on mobile devices
- Test by jumping around in the course
- Test by giving incorrect answers repeatedly
- Test by standing over the shoulder of people who say they've completed it successfully, on the computer they used and at the site where they used it
- Do all of the above with a debug window open (see the call-logging sketch after this list)
- Test in SCORM Cloud with its debug features.
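For the debug-window item above, here's a minimal TypeScript sketch of what I mean by watching the traffic: it wraps the standard SCORM 1.2 API methods (LMSGetValue, LMSSetValue, LMSCommit, LMSFinish) so every call and its return value get logged to the console. The assumption that the API object sits at window.API is a simplification; in practice the course walks up parent/opener frames to find it.

```typescript
// A sketch, not a finished tool: wraps a SCORM 1.2 API object so every
// call and return value is logged. Assumes the LMS API is reachable as
// window.API (simplified; real courses search parent/opener frames).

type ScormApi = {
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
  LMSGetLastError(): string;
};

function wrapScormApi(api: ScormApi): void {
  const methods = ["LMSGetValue", "LMSSetValue", "LMSCommit", "LMSFinish"] as const;
  for (const name of methods) {
    const original = (api[name] as (...a: string[]) => string).bind(api);
    (api as any)[name] = (...args: string[]): string => {
      const result = original(...args);
      // Log the call, its arguments, the LMS's answer, and the error code.
      console.log(`${name}(${args.join(", ")}) -> "${result}"`,
                  `lastError=${api.LMSGetLastError()}`);
      return result;
    };
  }
}

// Hypothetical usage, from the console before the course starts:
// wrapScormApi((window as any).API as ScormApi);
```

Run it in the debug window before the course initializes and you can watch exactly what the client sends and what the LMS answers, which is often where these "between programs" bugs finally show themselves.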
I just went through a two-week learning and testing period to find out the effect of the length of the suspend data string on completion, using SCORM 1.2. In doing so, I learned how to count the length of the string after every screen change, how to estimate the number of characters added per screen (and how that count varies with the type of Storyline features used within a screen), and, finally, that our particular LMS doesn't enforce the 4096-character limit that SCORM 1.2 places on the suspend data string. Note that this is covered in another discussion here in this area.
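If anyone wants to repeat that experiment, here's a rough sketch of the kind of length check I'm describing, again in TypeScript. The element name (cmi.suspend_data) and the 4096-character ceiling come from the SCORM 1.2 spec; the per-screen delta tracking is just my own convenience for estimating characters added per screen.

```typescript
// A minimal sketch of a suspend-data length check, assuming a SCORM 1.2
// API object with a standard LMSGetValue method. Call it after each
// screen change and watch the string grow toward the 4096-character
// ceiling that the SCORM 1.2 spec puts on cmi.suspend_data.

const SCORM12_SUSPEND_LIMIT = 4096; // characters, per the SCORM 1.2 spec

let previousLength = 0;

function checkSuspendData(api: { LMSGetValue(element: string): string }): void {
  const data = api.LMSGetValue("cmi.suspend_data");
  const delta = data.length - previousLength; // growth since the last check
  previousLength = data.length;

  console.log(
    `suspend_data: ${data.length}/${SCORM12_SUSPEND_LIMIT} characters ` +
    `(${delta >= 0 ? "+" : ""}${delta} since the last check)`
  );
  if (data.length > SCORM12_SUSPEND_LIMIT) {
    console.warn("Over the SCORM 1.2 limit; some LMSs truncate or reject the string here.");
  }
}

// Hypothetical usage, run from the console after each screen change:
// checkSuspendData((window as any).API);
```

Averaging the delta over a handful of representative screens gives you a per-screen estimate, which is how I projected whether a full run-through of a course would blow past the limit.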
We are all faced with this type of debugging need from time to time. What strategies and procedures do you use? If you would, share those that have been most effective. It will help us all. And thanks!