Debugging Procedures for Problems Between the eLearning Client and the LMS

Feb 17, 2015

Hi all. It always seems to me that the most difficult bugs to find and squash are the ones that hide between programs (as opposed to within one). 

What do you do, for instance, when a course is giving a high percentage of incompletions for users who believe (sometimes quite adamantly) that they HAVE successfully completed the course? Some immediate tests come to mind:

  • Test across browsers
  • Test across OSs
  • Test with changing between computers in mid-course
  • Test on mobile devices
  • Test by jumping around in the course
  • Test by giving incorrect answers repeatedly
  • Test by standing over the shoulder of people who say they've completed it successfully, on the computer they had used and at the site they had used it
  • Do all the above with a debug window open (see the console sketch after this list)
  • Test in SCORM Cloud with its debug features
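
For the "debug window" item, one option is to watch the SCORM traffic from the browser's JavaScript console. Here's a minimal sketch for SCORM 1.2; it assumes the LMS exposes the standard window.API object somewhere in the frame hierarchy (frame layouts vary by LMS, and cross-origin frames will block this), and the helper name is just illustrative:

    // Minimal sketch: find the SCORM 1.2 API object and log every write the course makes.
    function findAPI(win) {
      var tries = 0;
      while (!win.API && win.parent && win.parent !== win && tries < 10) {
        win = win.parent;   // walk up the frame hierarchy looking for window.API
        tries++;
      }
      return win.API || null;
    }

    var api = findAPI(window);
    if (api) {
      var realSetValue = api.LMSSetValue;
      api.LMSSetValue = function (name, value) {
        console.log("LMSSetValue", name, value);   // echo each write to the console
        return realSetValue.call(api, name, value);
      };
    } else {
      console.log("No SCORM 1.2 API object found from this window");
    }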

I just went through a 2-week learning and testing period to find out the effect of the length of the suspend data string on completion, using SCORM 1.2. In so doing, I learned how to count the length of the string after every screen change, how to estimate the number of characters added per screen (and how to estimate how that count will vary based on the type of Storyline features used within a screen), and, finally, that our particular LMS doesn't enforce the 4096-character limit on the SCORM 1.2 suspend data string. Note that this is covered in another discussion here in this area.
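
For anyone who wants to do the same count, this is roughly what I ran in the console after each screen change (a sketch only; it assumes the SCORM 1.2 API object has already been located, e.g. with the findAPI helper above, and your LMS may enforce a different character budget):

    // Rough sketch: report how much of the SCORM 1.2 suspend data budget is in use.
    var suspend = api.LMSGetValue("cmi.suspend_data");
    console.log("suspend_data length:", suspend.length, "of the 4096 characters SCORM 1.2 allows");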

We are all faced with this type of debugging need from time to time. What strategies and procedures do you use?  If you would, share those that have been most effective. It will help us all. And thanks!

Best,

Robert Edgar


39 Replies
Gerry Wasiluk

Great stuff, Steve (as usual). :)

Here's one many folks don't think of:  sometimes "everything technical" is working correctly but the learner complains they're still not getting a completion.

That's because the developer has designed or written the course in such a way that it leads the learner to exit the course early, thinking they have completed it.  (Yes, this even happened in courses with the side menu of slide titles.)

Many times we saw this after "check your knowledge" types of quizzes after a topic in the course.  Things were worded in such a way that some learners might possibly interpret that they were done with the course.

So, for things like this, we urged developers not to use words like "completed," "done," "finished," "successful," "passed," etc., until the actual end of the course, after the learner had passed the final quiz.

And for any quiz-like activities before the final quiz, we urged them to word things so learners knew they needed to move on or continue in the course after completing the activity.

Gerry Wasiluk

Back in the day, we could check on the former situation (i.e., verify exactly where the learner left the course early and what slides they had visited) because the resume data in Articulate software courses was readable and easily indicated learner progress. 

Now, with Storyline and the Articulate Studio '13 suite, we've "lost that," as the resume data is compressed and not readable.

Not a criticism in the least as I understand why it's done.  :)  Just something I sometimes miss when debugging problems . . .

Kim Miller

Gerry - That's a good point:

"Many times we saw this after "check your knowledge" types of quizzes after a topic in the course.  Things were worded in such a way that some learners might possibly interpret that they were done with the course."

I agree, since the default wording on the Results slide for a quiz is "Congratulations, you passed." Maybe that could be changed to "Congratulations, you answered these 4 (fill in the number) questions correctly."

Kim Miller

Hi Gerry, Thank you so much for your detailed reply.

<perhaps there's a way we can see the content work in EDU 2.0.  I'd be happy to look at this if given access.> I'd love to pass along your contact info to the EDU tech people, but I can't promise they'll care to talk with you about my problems. Seems like not many teachers use SCORM with them but maybe they'll be interested in working out bugs with you.

<Another thing is to give one of us your course and try it on our LMS's--or, better yet try it on SCORM Cloud to see if it works there. > I'd like that. Don't know how myself. Can I privately email you my zipped files somehow and you can test it? (((Are you paid to do this??? If not, forget it! I don't want to bother you.))))

<Another thing to try is to publish to another standard (like AICC or SCORM 2004 or Tin Can, if the LMS supports one of these) and see if things are better or not.> I don't think EDU supports those.

<And sometimes you really need an IT type with the appropriate access >  Very True, but I don't have that.

Your suggestions for troubleshooting are very helpful. And I didn't know about the list of supported hardware/software for playback, so that will be really nice for me to share with the students. 

Just so all you smart techie people know - one of my wonderfully outrageous 13 year old students decided to hack into the LMS's grade book for the SCORM assignment made by Storyline, and gave himself a nice grade of 400% (then later changed it to 101% so "I wouldn't notice").  He told me that the setup for the SCORM on our EDU 2.0 LMS is "ridiculous," because - here I'm going to get this wrong, I know - he said something to the effect that the SCORM software asks the client's computer to grade the quiz questions, so the client can score their own questions and send whatever score they want to the LMS. He said the much smarter way would be to have the SCORM graded on some other server, which would report the score to the LMS without asking the client's computer to nicely, honestly tell it what score its human "earned".

So I'm sure all that means something to someone.

(Don't worry, this is a good honest kid, he was just trying to explore the weaknesses to help us.) 

Kim Miller

Hi Steve, what you say makes sense here, "More often than not, intermittent problems recording completion on the LMS are directly related to connectivity or user action."

One of my students believes that the small iPhone screen, when placed in landscape mode, has inaccurate touch sensitivity, so when my students tried to press "No" on the prompt "Resume course where you left off?", the screen registered "Yes," since it was only about half a centimeter away.

I have not verified this myself but it would explain the experience of at least 4 of my students last week.

Gerry Wasiluk

Hey, Kim!

RE: "<Another thing is to give one of us your course and try it on our LMS's--or, better yet try it on SCORM Cloud to see if it works there. > I'd like that. Don't know how myself. Can I privately email you my zipped files somehow and you can test it? (((Are you paid to do this??? If not, forget it! I don't want to bother you.))))"

Sure, send me your files.  Contact me via gerrywaz@comcast.net 

I'll test first in SCORM Cloud and then in one or two other LMS's.

No charge.

Will report results back here.

If you just send me the published output, and there's a final quiz, can you also send me the correct answers?


Steve Flowers

"Just so all you smart techie people know - one of my wonderfully outrageous 13 year old students decided to hack into the LMS's grade book for the SCORM assignment made by Storyline, and gave himself a nice grade of 400% (then later changed it to 101% so "I wouldn't notice"). "

Yes. Smart kid. :) It's not tough to discover, but due to the way SCORM works, every implementation between content and LMS is insecure and exposed. Meaning, with a little bit of JavaScript know-how, it's not difficult to manually force completions and scores through the console. For this reason, in my opinion, SCORM is wholly inappropriate for anything high-stakes. High-stakes assessments and activities should be measured using another mechanism. :)
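
To make it concrete, here's roughly the kind of thing a console-savvy learner can do against a SCORM 1.2 course. Just a sketch: "api" stands for the LMS's window.API object, found the same way the content finds it, and the property names are standard SCORM 1.2 data-model elements.

    // Sketch of why client-side SCORM reporting is easy to tamper with:
    // anyone who can reach the API object can write any score and status.
    api.LMSSetValue("cmi.core.score.raw", "100");
    api.LMSSetValue("cmi.core.lesson_status", "passed");
    api.LMSCommit("");   // ask the LMS to persist the forged values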

Most folks aren't as astute or technology-savvy as your 13 year old student, and most of those who are won't apply those super powers to cheat a self-paced program. That keeps the risk of this behavior quite low for low-stakes measurement. Even so, there are ways to spot SCORM tech-cheats if you're watching:

  • Total time in the module
  • Reporting submitted interactions / questions (see the sketch after this list)
  • Using essay / free-text responses
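
On the interactions point, a course that writes its question data into the SCORM 1.2 interactions collection gives the LMS per-question evidence to compare against the final score. A rough sketch of what those writes look like (element names are standard SCORM 1.2; the index, id, and values are placeholders, and "api" is the LMS API object as above):

    // Sketch: report one answered question to the LMS via SCORM 1.2 interactions.
    api.LMSSetValue("cmi.interactions.0.id", "q01");               // placeholder question id
    api.LMSSetValue("cmi.interactions.0.type", "choice");          // question type
    api.LMSSetValue("cmi.interactions.0.student_response", "b");   // what the learner answered
    api.LMSSetValue("cmi.interactions.0.result", "correct");       // scored result
    api.LMSCommit("");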


Gerry Wasiluk

One more thing . . .  (Just thought of this while grocery shopping with the wife--gee, wonder what that means. :) )

Another thing to check is whether the course and content are set up correctly in the LMS.

In our old Saba LMS, how you set up courses and their content for LMS reporting with Passed/Failed or Completed/Failed was critical (and the setup was not the same when you used Completed/Incomplete or Passed/Incomplete).

Doing these things wrong in the LMS often gave bizarre results, like a learner passing a course with 100% on the final quiz but not being marked as successful, or a learner failing the final quiz and getting marked as successful.

Saw this many times in testing and debugging . . .

Phil Baruch

Military exercises causing e-learning not to track because of wireless radio interference: that is THE BEST reason I have ever heard. Gerry and Steve have clearly been on the front lines of supporting courseware, and I just want to say thanks for the time and effort you've put into articulating your knowledge. I support AbilityLMS, I have seen everything you have mentioned (except the military thing), and we troubleshoot course problems based on this rule set:

1. Is the course communicating? Our system logs all course communication; however, Fiddler will also validate that the course or the LMS API is posting to the LMS. If nothing is showing, check the JavaScript console. F12 / Console tab is your friend! No communication, no tracking.

2. Is the communication well formed? AICC has a defined structure, and SCORM is essentially a log of API calls, but the entire lifecycle of the session is logged in our LMS. (A minimal sketch of that lifecycle is at the end of this post.) If your LMS does not log, Fiddler can help: you can record the Fiddler session and send it to the LMS vendor if you think it is an LMS problem. Unfortunately, it is not practical to ask your users to download and run Fiddler while taking the course. When engaging the LMS vendor, if you are armed with the actual data being sent to the LMS, you can force them to defend how the LMS is or is not handling the communication. The LMS vendor can also replay the recorded postings against the LMS to validate proper behavior.

While there is a place for SCORM Cloud, it has limitations. Articulate technology is really good in standards conformance, one of the best I have seen in my working career, and SCORM Cloud is a key stakeholder in standards conformance. A course will almost always work there, because SCORM Cloud is a controlled, stable environment. It is not a production environment, where the user's actions or the operating environment introduce factors that cause instability. Consider 1,000 people in a workforce taking a course you are rolling out, all between 1 and 3 PM. SCORM Cloud cannot profile that sort of load on the server or bandwidth on the network.

3. Do the logs show a completion being sent but not recorded as complete in the LMS? If we see a complete/pass in the logs but not in the LMS, the problem is in the LMS, and the LMS may be overriding completion based on how the course is defined there.

My experience with Articulate / Storyline is that when a legitimate failure comes up it is always repeatable, and you have to really push the design patterns in building content to get there. We rely more on LMS logging in troubleshooting, as the ultimate repository of the data is the LMS. This means the LMS really has to have defensive technology to verify that it is getting good communication. Good logging is also much better than SCORM Cloud in that it gives you actionable data in the production environment. If the logging is done before the LMS processes the data, you eliminate finger pointing, because what you have is the raw data the LMS received.
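
For reference, here is a rough sketch of the minimal SCORM 1.2 call lifecycle a well-formed session should show in a log. "api" stands for the LMS's window.API object; the specific data-model writes in the middle vary by course, so treat the values below as placeholders:

    api.LMSInitialize("");                                     // open the session
    api.LMSSetValue("cmi.core.lesson_status", "incomplete");   // progress writes during the session
    api.LMSSetValue("cmi.suspend_data", "resume-state-here");  // resume/bookmark state
    api.LMSCommit("");                                         // flush buffered writes to the LMS
    api.LMSSetValue("cmi.core.lesson_status", "completed");    // final status at the end
    api.LMSFinish("");                                         // close the session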

