Debugging Procedures for problems between the eLearning client and the LMS

Feb 17, 2015

Hi all. It always seems to me that the most difficult bugs to find and squash are the ones that hide between programs (as opposed to within one). 

What do you do, for instance, when a course is giving a high percentage of incompletions for users who believe (sometimes quite adamantly) that they HAVE successfully completed the course? Some immediate tests come to mind:

  • Test across browsers
  • Test across OSs
  • Test with changing between computers in mid-course
  • Test on mobile devices
  • Test by jumping around in the course
  • Test by giving incorrect answers repeatedly
  • Test by standing over the shoulder of people who say they've completed it successfully, on the computer they had used and at the site they had used it
  • Do all the above with a debug window open
  • Test in SCORM Cloud with its debug features.

I just went through a 2-week learning and testing period to find out the effect of the length of the suspend data string on completion, using SCORM 1.2.  In so doing, I learned how to count the length of the string after every screen change, how to estimate the number of characters added per screen (and how to estimate how that count will vary based on the type of Storyline features used within a screen), and, finally, that our particular LMS doesn't enforce the 4096-character limit on the SCORM 1.2 suspend data string. Note that this is covered in another discussion here in this area.
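Here's a rough sketch of the kind of check you can run from the browser's JavaScript console after each screen change, assuming the course was published for SCORM 1.2 and that the LMS's API object can be found by walking up the frames from whichever frame you have selected in the console:

// Walk up the frame hierarchy to find the SCORM 1.2 API object the LMS exposes.
function findAPI(win) {
  while (win && !win.API && win.parent && win.parent !== win) {
    win = win.parent;
  }
  return win ? win.API : null;
}

var api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);
if (api) {
  var suspend = api.LMSGetValue("cmi.suspend_data");
  console.log("suspend_data is using " + suspend.length + " of the 4096 characters SCORM 1.2 allows");
} else {
  console.log("SCORM 1.2 API not found from this frame; try selecting a different frame in the console.");
}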

We are all faced with this type of debugging need from time to time. What strategies and procedures do you use?  If you would, share those that have been most effective. It will help us all. And thanks!

Best,

Robert Edgar

 

39 Replies
Steve Flowers

You mentioned SCORM Cloud's debugging feature. This is handy (as you have likely found); it allows you to retain previous sessions. I love this and really appreciate that it's free.

The other thing I think is useful is the JavaScript console in Chrome and Firefox. IE11 developer tools are also handy. Much of the time, I'm able to troubleshoot problems with the communication protocol or custom script execution with Chrome. One neat thing about the console: you can force method execution by typing in the method you want to execute.
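For example (a rough sketch; the frame the SCORM API object lives in varies by LMS and player, so you may need parent.API or top.API instead of API), with SCORM 1.2 content you can read and force values right from the console:

// Ask the LMS what it currently thinks the status and score are.
API.LMSGetValue("cmi.core.lesson_status");   // e.g., "incomplete"
API.LMSGetValue("cmi.core.score.raw");

// See the last error the API reported.
API.LMSGetErrorString(API.LMSGetLastError());

// Force a value in and commit it, just to see how the LMS reacts.
API.LMSSetValue("cmi.core.lesson_status", "completed");
API.LMSCommit("");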

Gerry Wasiluk

Turning on the debug mode in the content ( http://www.articulate.com/support/storyline-2/how-to-enable-lms-debug-mode ) can sometimes be very helpful.

It can be a little daunting at first if you are not familiar with some of the SCORM or AICC commands but, over time, you will pick some things up.

It's good to look for any error messages in the debug conversation between the Storyline content and the LMS.

Steve Flowers

Sometimes, when you absolutely have to, setting up a Fiddler session is useful for tracking down where something has gone awry. I'd only go this way for issues that show up consistently across platforms. Combined with other data sources, this can give you a useful picture.

  • Client side (JS Console + Content debugging turned on)
  • Server side (SCORM Cloud / LMS server-side debugging)
  • Between client and server (Fiddler)
Gerry Wasiluk

Good point.  I admire you, Steve (as always), for using Fiddler.  I've never cared for that tool.  I'm more of an HTTPWatch or HTTP Professional fan, especially as they display POST information from the content to the LMS.

Gerry Wasiluk

Another good source (SOMETIMES!) is the LMS vendor.  Check (and press if needed) whether your vendor has any documentation, KBs, case histories, forums, etc., covering e-learning needs and issues.

Sometimes you have to press them for this info--do it.  Or, sometimes, only IT types have access to things like this.  Find a way to get this info.

Also, check the LMS documentation to see if there are any leads.

For example, your LMS may have some characters that are reserved, or that it doesn't like to see in things like quiz answers.  We saw this with old Quizmaker quizzes in our old Saba LMS.  The LMS didn't like answers sent to it with a "%" sign, so we had to spell out "percent" instead of using the symbol so the LMS would "like it."

Steve Flowers

I hate combing Fiddler logs. I'm not usually the one capturing the log, so I don't get the opportunity to set it up. I usually get everything I need from Chrome's network log or the equivalent in IE11 Dev tools, Firefox, or Safari. There are some neat things you can do by connecting an iPad to your Mac to monitor the same traffic through Safari. All of these track POST requests as well. One limitation of using these versus a traffic sniffer: you'll need to manually jump between frames if there's some activity in a hidden frame.

I use the Dev Tools all of the time when building scripts for custom JavaScript triggers. You can edit the JS in user.js on the client side and execute it either from the story or from the console without needing to re-upload a new publish. Handy.

[Screenshot: Chrome's developer tools, Network tab]
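Going back to the user.js triggers, here's a rough illustration of what a trigger body can look like. The function name is whatever Storyline generated in user.js, and the variable names here are hypothetical; the calls themselves (GetPlayer, GetVar, SetVar) are Storyline's documented player API:

// Example body of a Storyline JavaScript trigger in user.js.
function Script1()
{
  var player = GetPlayer();                      // Storyline's player object
  var attempts = player.GetVar("QuizAttempts");  // "QuizAttempts" is a hypothetical Storyline variable
  console.log("Attempts so far: " + attempts);
  player.SetVar("DebugMessage", "Attempts: " + attempts);  // hypothetical variable shown on a debug layer
}

// From the console you can edit and re-run Script1() directly (in the frame
// where the story is running) to test a change without republishing.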

Gerry Wasiluk

LMS's record a lot of data sometimes, data that average users and admins never see.

So sometimes, the critical data that you need for debugging an issue is not available through the LMS or one of its canned reports.  And sometimes even if the LMS has a custom report building function, the data you might need to debug an issue isn't available there.

Or the data is spread out between various reports and user screens and is difficult to collate together for effective troubleshooting.

In cases like this, being able to (1) understand the LMS's data models (especially tables and relationships) and then (2) query them for exactly what you need (like with SQL) can be invaluable.

Having someone techie who can safely query the LMS tables (as in read-only) with something like SQL can often bring leads.  And maybe it can help lead to some custom reports that admin users can run to help figure out issues.

Robert Edgar

Does anyone know of a function for Storyline that will tell the user if s/he has lost communication with the LMS?  If you start a course, then turn off your wifi (when you're not also plugged into a wired connection), you can coast for quite a ways before your screen freezes. Users will blame the course if they've lost communication with the server, and not discover it for ten minutes. It would be better to have an immediate pop-up that announces that communication has been lost. Anyone?

Steve Flowers

Here are a couple of posts describing ways you could do that:

Detect whether the course is connected to the LMS. You could loop this (for example, using an animation path and a trigger when the animation finishes) to check whether you're still connected to the LMS. Or you could use the second one.

This one checks for an internet connection (but not the LMS connection) using a JS library. A script loads the library dependencies through the JS trigger, and these are carried along in the publish using a Web Object. No post-publish surgery required.
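If you want to roll your own instead, here's a rough sketch of the looped-check idea. The ping URL and the Storyline variable name are placeholders you'd replace, and a layer in the story would watch the variable and show a "connection lost" warning:

// Called from a Storyline JavaScript trigger on a timed loop
// (e.g., each time a looping animation finishes).
function checkConnection()
{
  var player = GetPlayer();
  var xhr = new XMLHttpRequest();
  // "ping.txt" is a placeholder for a small file hosted alongside the course on the LMS server.
  xhr.open("GET", "ping.txt?nocache=" + new Date().getTime(), true);
  xhr.timeout = 5000;
  xhr.onload = function () {
    player.SetVar("ConnectionLost", false);   // hypothetical Storyline variable
  };
  xhr.onerror = xhr.ontimeout = function () {
    player.SetVar("ConnectionLost", true);    // trigger a warning layer when this becomes true
  };
  xhr.send();
}
checkConnection();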

Gerry Wasiluk

Turning to discovering issues before learners see them . . .

I'm often surprised when folks have only one version of an LMS running.  Not good.  At the very least--even when using a cloud or SaaS LMS--you should have a "QA" or testing version of your LMS.

This version of the LMS is an exact dupe of your production LMS (in terms of code).  Use this LMS for training admins and for testing things--especially e-learning content.

Back in my day of managing an LMS, we started treating e-learning content like it was an application subject to some governance--and not just "glorified online PowerPoint."  Before we did that, we expected admins and developers to "do the right thing" on their own.

Bad assumption.  Work demands and expediency often trump best practices.  People will often "cut corners" at times.

Now governance gets a bad name sometimes, especially when it seems to become a thing unto itself and becomes too onerous.  Your LMS admins and developers, especially in organizations where learning is highly de-centralized, usually hate it as it may slow them down--the "oh, no--more hoops to jump through to get things done" syndrome.

Yet someone has to speak for your "silent partner" in your LMS--the learners.  Learners deserve a product that, at the very least, has been tested to work in the LMS.

We tried to frame governance in a "velvet glove," just enough to not slow admins and developers down too much yet enough to standardize things that need regulating for the learner.

Tying the testing LMS and governance together, we expected admins and developers to test ALL their e-learning courses in the testing LMS first (we even gave them guidance on how and what to test and no limits on what they could do with courses in QA).  Our two mantras were "NEVER ASSUME ANY NEW COURSE WILL WORK CORRECTLY" and "YOUR LEARNERS ARE NOT YOUR BETA TESTERS."

(We even standardized on the e-learning tools that we would support--back then it was the Articulate Suite and Lectora--today that'd be Studio and Storyline.  We quickly realized there was no way we could become experts on every authoring tool.  Didn't have the resources.  Those not using the approved tools had to have their courses approved by IT and were responsible for all support issues.)

Admins and developers were then expected to sign off on QA testing for us.  Then, in the production LMS, they could only set so much up before we did the last bit to finally make the learning available for the learner (the LMS we were using only gave one way of doing this).  We had a simple online form for requesting this work from us, and it included the sign-off that the content was tested.  We then tried to guarantee a one-business-day turnaround.  Most often, the work happened within an hour or two of the request.

Yeah, it was extra work for us and for admins/developers but, IMVHO, it was worth it.  We caught a lot of problems with content during testing, even finding a few bugs in earlier versions of Articulate Presenter, which Articulate, to their great credit, quickly addressed.  Most issues were not software bugs but developer mistakes, LMS issues, network issues, learner PC issues, etc.

Before we did this it was chaos.  Like a plant would roll out a series of compliance courses due in a short while and learners were having immediate issues--like courses not completing, or people who failed a course getting a passing grade.  We'd get a frantic call from the plant for immediate help.  When you're a small staff that's already overworked, such calls are not fun.  Most times we were able to help in quick order--but it was not fun.

After implementing a little governance, things were a whole lot better.  Problems got discovered in QA.  There were fewer emergency calls for help, and those that came up were often because the course was not tested properly even though the admin had signed off that it was.

After I left, a new LMS, as part of a new HR suite, was put in place.  Governance also went out the door as upper management didn't want the LMS team to do this "menial work."  Admins could do what they wanted in production.

Can you guess what the result was?  Yup, increased learner problems with courses and a bit of chaos.

And just last week I helped another client with an important course that was having issues with external learners.  Was it tested before rolling it out?  Nope . . .

Steve Flowers

We have a UAT (test environment) that is identical in core code, but page structure, accounts, and content don't get replicated, so we only use that environment to test new features or structures. For content, we'll test in the production environment on our platform. Luckily, it's fairly easy to set up limited visibility and distribution.

I actually upload a SCO into the prod environment as soon as I get a publish for review, even if it's full of placeholders. So every review run (through a content link, not launched with the LMS API) runs from the LMS. We ask the stakeholder and the stakeholder's staff to take the course using their preferred browser before we make one more round of testing ourselves.

Narrowing the field of dev tools reduces risk. We run into a few problems post-deployment (a small handful for every roll-out) with most new stuff. Some of it is folks running the wrong course (not following instructions). Some is connection or LMS problems. A very small number of the issues we run into are with content things we actually have control over (like the suspend data debacle).

Gerry Wasiluk

First, a caveat . . .  the following will not work with every LMS--and should be thoroughly tested.

Background:  Sometimes an LMS will "take over" determining completion for a SCORM 1.2 e-learning module.  Many times you may want JUST the content to determine completion--or you do not want a course to be marked as "failed" if the learner does not pass the final quiz.

For the Saba LMS, and Articulate courses with a final quiz, we had to do the following:

1) In the published Articulate output, open the imsmanifest.xml file with something like Notepad or Notepad ++.

2) Look for a line like this example: 

<adlcp:masteryscore>100</adlcp:masteryscore>

The line above means the passing score for the final quiz is 100%.

3) Carefully remove JUST the number (in the example above, that is "100").

So, after editing, the line should look like this:

<adlcp:masteryscore></adlcp:masteryscore>

4) Make no other edits.  Save the file.

5) Zip up all the files for importing into the LMS.

6) TEST, TEST, TEST.

Again, this may work with just a few LMS's.  Besides Saba, I had to do this for a client running Totara (Moodle).

 

For Saba, there was one other edit we had to make for SCORM 1.2.  Saba expects suspend data notifications differently.  If Saba does not get the suspend notification it expects, the SCORM 1.2 course may not complete.  Caveat:  This may just be a Saba thing.

1) In the Storyline published output, look for the SCORMFunctions.js file in the LMS folder.

2) Open the file with something like Notepad or Notepad ++.

3) Look for this line:

var SCORM_SUSPEND = "suspend";

Remove JUST the word suspend.

So, after editing, the line should look like this:

var SCORM_SUSPEND = "";

4) Make no other edits.  Save the file.

5) Zip up all the files for importing into the LMS.

6) TEST, TEST, TEST.

Again, this last edit MAY JUST be a Saba thing but I'll throw it out just in case . . .

Gerry Wasiluk

Here's a rare one . . .  sometimes there is something on the learner's PC that causes issues.

Not sure how common this is anymore, but we had a couple of cases a few years back where the learner had downloaded something off the Internet and it loaded some object in their default browser that interfered with communications between the content and the LMS.

Once we got lucky, located the object, and removed it, the Articulate courses were able to communicate with the LMS successfully.

With modern anti-virus/malware prevention and tools, I'm not sure how often this occurs, but it's worth mentioning.

 

Gerry Wasiluk

When problems are not happening for everyone . . .

If you do not do this now, you should: record and track information on learner problems with the LMS and e-learning content.  If your organization has learners contact an IT helpdesk, this information may already be recorded for you.

If not, among other things, you want to know: the learner's location, the date/time the problem happened, and basic PC/software information.

You want to look for patterns.  Like if problems are only happening at a certain location or at a certain time of day or with certain software on a PC.

Some organizations keep track of what is on everyone's PC.  If you can access this information, and, again, find some pattern among learners having issues, this may be helpful.

If not, paid tools like Browserhawk can be helpful for getting information on a learner's PC.  Or use a basic freebie like http://supportdetails.com

Kim  Miller

This is a really good question. Being a humble junior high teacher simply trying to make science SCORM packages for our EDU 2.0 LMS, I do not understand any of your answers, but I have experienced multiple and various problems with trustworthy students saying they did complete my SCORM assignment but whose results don't show up in the LMS.  These are students using all kinds of devices and browsers. Then other students have no problem at all. VERY FRUSTRATING!

After multiple communications with the IT group at the LMS, the EDU 2.0 tech team say it's a problem on Storyline's end. I personally don't have the time to troubleshoot this.

Is there a forum somewhere here on the Storyline boards where people like you (techie, knowledgeable types) help resolve known issues of Storyline's products working with EDU 2.0? Then I could point the EDU tech team to that thread and say "here it all is, in your language" :-)

I would really appreciate anyone's suggestions on this. It's almost become a deal breaker on whether or not I can continue to use Storyline to make content for our school... we need it to reliably work with EDU 2.0 across all devices.

Thanks,

Kim

Robert Edgar

Hi Kim,

I don't know about EDU 2.0. If someone else here knows of an EDU 2.0 forum, please speak up!

I will say that "all kinds of devices and browsers" is a hint, though. The communication between an e-learning course and the LMS is both more active and more verbose than that between a standard webpage and your browser. If you can find a browser and OS combination that seems to work, you could start by asking students to use it.

I don't know of any authoring systems that work all the time for all systems. Perhaps your IT group could work with you on one OS and one browser, or perhaps one browser across two OSs. 

Also: keep your courses short--I'd recommend under 15 minutes each. Break them up into separately launching modules if you have to cover more of the curriculum. Get the minimum working, then move outward from there.

I hope that's helpful. Good luck.

Gerry Wasiluk

To add on to Robert's reply . . .

I'm also not familiar or experienced with EDU 2.0.  If no one else is, perhaps there's a way we can see the content work in EDU 2.0.  I'd be happy to look at this if given access.

Another thing is to give one of us your course to try on our LMS's--or, better yet, try it on SCORM Cloud to see if it works there.  If it does, that would be a data point suggesting that it's not the content having the issue.

Another thing to try is to publish to another standard (like AICC or SCORM 2004 or Tin Can, if the LMS supports one of these) and see if things are better or not.

And sometimes you really need an IT type with the appropriate access who can get their hands dirty and "gather evidence."

To add on to my previous posting, you or some IT type needs to collect data and observations. And keep an open mind to what the issue may be as you do that.  Comments like "it must be the Storyline content" or "it must be the LMS" are too easy and counterproductive without hard data or evidence.

In my experience with any LMS-e-learning content issue, here are just some of the main areas of possibility:

  • The content itself, either as a bug or maybe something like the SCORM 1.2 limit on suspend data being exceeded
  • The LMS
  • The user's device
  • Software running on the user's device, including the OS
  • The network

(I've edited my previous post to include making sure you know the device, OS, and browser involved.  Again, the goal is to identify patterns from user issues.)

Content "talks to" the LMS at various points via posts from the browser (e.g., bookmarks, scores, time spent, etc.) to the LMS and the LMS usually responds back with an acknowledgement or error message.

And if someone can actually observe such posts from the content to the LMS (using something like Fiddler or HTTPWatch), and see whether the LMS processes and acknowledges them or throws up an error message "under the hood," you have more data points to help you solve the problem.

I've seen issues with some LMS's when the posts from the content come too fast for the LMS to process, or the LMS is too busy or not robust enough, and posts from the content get lost or don't get queued up for processing.

And things like network latency can cause posts to arrive out of sequence.

But I've learned to not assume any of these things are happening without first researching the issue with an open mind.

And, in the case of Storyline, be sure you know what is supported for playback, which is currently this:

[Image: table of supported browsers and operating systems for Storyline playback]

Make sure your learners are using the supported hardware/software for playback.

Gerry Wasiluk

It helps to have access to the IT types who manage your network.  Sometimes they know something that may help.

One example--years ago we convinced a developer to switch from Lectora to the Articulate Suite because she was better suited to that.

She produced her first Articulate course.  It was designed for certain folks in Engineering--all internal English speakers around the world at this global company.

When the course started failing for learners in eastern Poland and western Russia, she and her manager reacted by blaming the Articulate content.  My staff and I got a lot of heat.

I talked to my IT friends in networking and we did some investigating.  Turns out IT had a long history of transactional web problems in that area.  Part of the network to that area was actually radio waves, and transmissions got messed up during severe weather or during intense military maneuvers (think ordnance exploding in the air).  So sometimes the network was just "fouled up."

Turns out all the learners' problems coincided with bad weather or military maneuvers in the area.  We presented the developer and her manager with the resolution and they were pacified.  They notified affected learners appropriately.

P.S. We never got an apology for their "overreacting" and the heat they gave us . . .

Gerry Wasiluk

Lastly, one of the quickest things you can do is have the learner with a problem try the course on another PC or device.  (If we could verify they really completed the course, we manually completed them.)

Some learners are eager to help you out and will do anything to help you try things or collect data to solve content problems. 

Other learners are so upset they just want to get their completion.  For these last folks we often suggested they try the course on another PC or device (one supported by the authoring tool) and see if the course worked properly for them.

Many times this worked, which would possibly suggest some learner PC hardware/software/OS issue.

We once had one learner in Italy who could not get a mandatory compliance course to complete or work properly for him.  And the course sponsor would not accept us manually completing the course for the learner.

Among other things, his local IT folks examined his PC with a fine-tooth comb and everything looked right.  They couldn't find anything wrong with it.

We asked the learner to try the course on his workmate's PC.  Lo and behold, the course worked perfectly. 

And the local IT folks told us that except for user data and documents, the two PCs were identical in terms of hardware, OS, and software.

Go figure . . .

Steve Flowers

Yes, LMS admin folks tend to jump to the authoring tool as the problem. Lots of factors. Without evidence, I think it's premature to point fingers in any particular direction. More often than not, intermittent problems recording completion on the LMS are directly related to connectivity or user action. 

JavaScript works the same way, for the most part, across browsers. So if it works in one, the mechanisms used to send completion and suspend data *should* work in all configurations and platforms.

Things can get a little silly if someone has a bad internet connection or if the server has a hiccup (yes, this does happen more often than admins will admit).

Steve Flowers

We've entertained recording parallel data and using some error indication in Storyline itself with an internet connection detector.  The detector is pretty easy to put in:

https://community.articulate.com/discussions/articulate-storyline/internet-connection-detection-plugin#reply-189588

For my next development, I'll add a function to send the username and browser agent details on launch and also record the completion to a Google Spreadsheet. It won't help if the user loses their internet connection, but it'll give us additional data points when stuff does go wrong.
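Something like this rough sketch, run from a JavaScript trigger at launch--the web app URL is a placeholder for a Google Apps Script endpoint you'd set up yourself, and "LearnerName" is a hypothetical Storyline variable:

// Fire-and-forget logging of launch/completion events to a spreadsheet.
function logToSheet(eventName)
{
  var player = GetPlayer();
  var data = "event=" + encodeURIComponent(eventName) +
             "&user=" + encodeURIComponent(player.GetVar("LearnerName")) +
             "&agent=" + encodeURIComponent(navigator.userAgent) +
             "&time=" + encodeURIComponent(new Date().toISOString());

  var xhr = new XMLHttpRequest();
  // Placeholder URL for a Google Apps Script web app that appends a row to a spreadsheet.
  xhr.open("POST", "https://script.google.com/macros/s/YOUR_SCRIPT_ID/exec", true);
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(data);
}

logToSheet("launch");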
