Storyline Suspend Data Compression

Nov 09, 2018

Good day!

As some of us know, SCORM 1.2 limits its suspend data to 4096 characters. Storyline (360) compresses its data (e.g. SL variables and such) in order to fit within that limit, so there must be an underlying decompression algorithm, or Storyline has its own unique reader for the suspend data.

My question is: when this compressed suspend data is decompressed, is there a possibility of it hitting the 4096-character limit?

60 Replies
David Hansen

Yep, Dario is spot on.  For this very reason, I like to use a command-line zip tool.  Then I can be sure that I'm in the right directory (where the imsmanifest.xml file is located) and getting all the files.  For the tool I use, the command looks like this: zip -r ../course.zip *   In this example, the '-r' signals to recurse into subdirectories, '../course.zip' is the zip archive name (located in the directory above the current directory), and the '*' specifies to package all files/directories from the current working directory.

Victor Madison

David, counting the bytes in the suspend data using the SCORM Cloud registration info is great.  This is the closest method I have seen for measuring the suspend data for a course.  It may not be exact, but it at least gives you an idea of where you are.  Thank you very much.

Andrea Briganti

I read this long post, and this is something that perhaps Articulate can consider to improve the software. I am not a developer, but I wonder if Articulate can do something to bypass this limitation. I don't know.... I found this script inside the scormdriver.js file of a course published in SCORM 1.2. What does this mean? Is Articulate chunking the suspend data as soon as it reaches 4096 characters? And is it possible to ignore that limit while keeping the SCORM 1.2 format?

function SCORM_SetDataChunk(strData) {  // enclosing function name inferred from the debug message below
    if (USE_STRICT_SUSPEND_DATA_LIMITS == true) {
        // Strict mode: refuse to write suspend data over the SCORM 1.2 limit
        if (strData.length > 4096) {
            WriteToDebug("SCORM_SetDataChunk - suspend_data too large (4096 character limit for SCORM 1.2)");
            return false;
        } else {
            return SCORM_CallLMSSetValue("cmi.suspend_data", strData);
        }
    } else {
        // Otherwise, pass the data through and let the LMS decide what to do
        return SCORM_CallLMSSetValue("cmi.suspend_data", strData);
    }
}

David Hansen

"I wonder if articulate can do something to bypass this limitation." - unfortunately, the limitation of 4096 bytes comes from the SCORM 1.2 specification itself.  Depending on the LMS system, some will adhere to the specification very strictly and others will make accommodations where it makes sense.  Eg, many LMS systems will allow the administrator to configure the maximum size of suspend_data regardless of the spec (all variants of the Rustici driver support this).   However, Articulate is bound by whatever implementation of the specification the LMS's has and can not just bypass that.

Note: The really simple solution is actually just to use any variant of SCORM 2004, where it was quickly recognized that 4096 bytes was insufficient and the specification maximum was increased to 32768 bytes (along with many other improvements to the specification).  SCORM 1.2 is from 2000-2001, almost 20 years ago!  It's unbelievable to me that we are still struggling to get out from under a specification that old!  Unfortunately, they did not build in a plan for obsolescence to force progression to newer standards (many standards today do this).  About all I can say is that there must be a clear desire for simplicity, and that is one of the driving reasons why so many people continue to use SCORM 1.2 - it is "simpler".

In regard to your question about that piece of source code: it is designed for the few LMS systems that will potentially crash your course if you attempt to send more than 4096 bytes.  If you happen to have one of those rare LMS systems, you can simply set the USE_STRICT_SUSPEND_DATA_LIMITS variable in the Configuration.js file.  Most LMS systems that strictly enforce 4096 bytes will merely drop the portion that is over the limit, which will often result in your Articulate course resuming someplace prior to where you actually left off.  With the setting above, if you get to a point where suspend_data exceeds 4096, all suspend_data will be dropped and you will be resumed to the beginning of the course.
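
In practice that's a one-line edit in the published output. A minimal sketch (where exactly the flag lives inside Configuration.js can vary by Storyline version, so treat this as illustrative):

// In the published course's Configuration.js (exact location varies by version):
// refuse writes over 4096 characters instead of letting the LMS truncate them
var USE_STRICT_SUSPEND_DATA_LIMITS = true;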

Andrea Briganti

Hi David,

thanks a lot for your reply. I am fully aware that SCORM 1.2 is not really a modern format. My company sells compliance courses for the financial industry, so we have to deal with a wide range of LMSs (some accepting only SCORM 1.2), and preparing our catalogue in various formats is challenging: this is the reason we decided to simplify our production process by using SCORM 1.2 only.

Our catalogue counts more than 200 courses, some in 4 languages, and the Authority requires at least one update per year.

Besides that, it is also true that our courses are big not only in terms of slides but also in terms of variables and interactions. Of course it is possible to simplify them, but this would take time, and I am not sure it would lead to a good result without jeopardizing the quality of the courses.

Last, we are also using Moodle for a few clients but, as per my understanding, it doesn't properly support SCORM 2004 or better: the course works, but the resume function doesn't.

In the end, I decided to write to the community and share my thoughts on this topic: users who quit the course without completing it can't restart from where they left off, and this is perceived as a bug. On the other hand, users who take the course in one shot without interruptions are super happy....

Phil Mayor

You can also hack the code to increase the limit further; Dan Marsden should be able to point you in the right direction. However, I would look into this further and see how much data is being sent in your packages. The override gives you 64K characters, so it sounds to me like something is off.

David Hansen

FWIW, my company's largest course was producing about 60kb of suspend data by the time a learner reached the end - and we were consciously trying to keep the course simple (we broke out coverage for specific states into separate versions of the course, which makes maintenance and upkeep a royal PITA, especially due to other issues Articulate doesn't seem to want to address).  Andrea mentioned "it is also true that our courses are big not only in terms of slides but also in terms of variables and interactions".  He also mentioned multiple languages, and I wasn't sure whether he meant that each course supports 4 different languages within it or whether they are separate courses.  Having multiple languages within a course would make it "huge".  We've contemplated doing it that way and experimented with it for one customer.  It is definitely ideal for the learner to be able to choose the language they want during the course, and most LMS systems have issues assigning a course based on language to particular learners (what language should a course assignment be - only the learner really knows that answer).  So, it would not surprise me at all if he was exceeding 64kb.

Note: as I mentioned in my original reply on how to add compression, using zlib compression achieves around a 10:1 compression ratio on Articulate's suspend data.   I can't imagine very many instances where the suspend data would exceed say 500kb.  That probably would put the course into the gargantuan category (IMO)...
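
For anyone who wants the general shape of it, here is a minimal sketch of that compression approach, assuming the pako library (pako.min.js) is loaded alongside the published output; the helper names CompressSuspendData/DecompressSuspendData are illustrative, not Storyline's own:

// Compress a suspend-data string with zlib (deflate) via pako, then
// base64-encode the bytes so the result is plain text safe for cmi.suspend_data.
function CompressSuspendData(strData) {
  var bytes = pako.deflate(new TextEncoder().encode(strData));
  var binary = "";
  for (var i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}

// Reverse the steps: base64-decode back to bytes, then inflate to the original text.
function DecompressSuspendData(strData) {
  var binary = atob(strData);
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return pako.inflate(bytes, { to: "string" });
}

Note that the base64 step gives back roughly a third of the savings (it encodes 3 bytes as 4 characters), but it is necessary because suspend_data must be a character string, and the net result is still far smaller than the raw data.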

Michael Puckett

I wanted to share this update on our SCORM 1.2 bookmarking fix for an older LMS that only uses SCORM 1.2. Our course was 80 slides in 11 scenes with one final assessment, and fairly text-heavy, as it deals with inclusive recruitment practices.

I was able to get the course bookmarking to work 100% throughout the course by changing it to the classic player and making a few other minor modifications. I could not get the zlib compression procedure to work because the fix no longer works with SCORMDriver.js for some odd reason - at least for me, and I tested it about 5 times.

Even when using the classic player with HTML5/Flash as suggested by Articulate, the index_lms_html5.html still points to SCORMDriver.js and not the api.js file. When I made the changes according to the procedure above, the course would always start back at the main slide. For expediency, I decided to just try changing a few player settings and other things recommended in the community to see if that would work. And it did!

The idea of reducing the suspend data was the key to my success. To do that, I changed the course player back to the old version (Classic Player) and set all the slides to ‘Reset to Initial State.’ Not sure why, but the modern player seems to increase the suspend data size by 50%. The reason I can say that is that when testing with the modern player, I could only bookmark to slide 40. After changing to the classic player, I could go all the way to slide 80, and since I only have that many slides in the course, that effectively doubled how far my suspend data could reach. And that is with a 10-question final assessment with correct/incorrect feedback of at least a paragraph each.

From my point of view, the classic player does not affect the appearance of our course that much because we are using custom navigation icons. The slight cosmetic changes are worth it, and we gained full bookmarking from the change. I would be curious to know why the new modern player affects the suspend data size so much.

I performed the following steps in my final test (Test 8):

1. First, set slide properties for all slides to:
   a. Slide advances: By User
   b. When revisiting: Reset to Initial State
2. Next, change the course player to the classic version.
3. Then disable (uncheck) all player tab options: resources, menu, glossary, notes, title, volume, seekbar, accessibility, logo, captions.
4. On the Other tab (gear icon), set Player size to: Scale player to fill browser window.
5. For Resume on restart, set to: Always Resume.
6. Click OK to save the player.
7. Click Save to save the player options to the course file.
8. On the Triggers panel, confirm there are no extra project variables in the course that are not being used. If the use count is 0, delete them.
9. On the Slide Properties panel, uncheck Slide Navigation and gestures for all slides. (You will need to click through each slide to be sure the Prev/Next buttons are not enabled; otherwise it will show duplicate navigation if you are using custom navigation.)
10. Save the file.
11. Publish settings:
   a. Formats: HTML5 Only
   b. Player: Classic – Storyline Player
   c. Quality: Optimized for standard delivery
   d. Publish: Entire project
   e. Tracking: Results slide (11.12 Quiz Results)
   f. LMS: SCORM 1.2
12. Reporting and Tracking:
   a. LMS Reporting: Passed/Failed
   b. Tracking Options: Check "When the learner completes a quiz"
   c. 11.12 Quiz Results – Final Assessment
13. Click Publish.
14. Save to ZIP.
15. Save the project file.

I tested it twice using SCORM Cloud with the same results. I hope this procedure helps you and I look forward to learning of new compression procedures for Storyline 360.

Please let me know if you have any questions.

Michael

Michael Puckett

I am updating my earlier post with new findings: SCORM suspend data is not affected by the modern course player.

After working with the Articulate team, I went back to retest my Test5_ResettoInit, and my findings aligned with Articulate's: the modern course player does not impact the suspend data as my initial tests suggested. It is possible that I uploaded the wrong SCORM package during that test, so I stand corrected. It appears that the real solution is found in the Reset to Initial State setting, and the course player does not affect the suspend data.

I hope this new information helps!

Michael

Cleo's testing update from Articulate:

I ran a test using my Storyline 360 (Build 3.47.23871.0): I created a new project file with 10 slides (no question slides), set every slide's Slide Properties > When revisiting to "Reset to initial state," and tested with both the Classic and Modern Players, with most of the player features turned off except for the navigation. I published it for SCORM 1.2, uploaded it to SCORM Cloud LMS, and tested the course using the Google Chrome browser.

Based on the test results, the player (Modern or Classic) doesn't contribute to the amount of suspend_data, unless you have enabled or disabled player features in the Slide Properties.

What contributes to the suspend_data includes (but is not limited to) slide and timeline information, object states, question slide object states, etc.

I hope this information helps.

Cleo Sinues

Storyline Support Engineer

Peter Brown

Thanks @David Hansen for generously sharing (over an extended period of time) your expertise and tips on this especially troublesome problem. I think I may have to dive in and try your compression method for a particular project/client combination.

One question, however. Have you, or anyone who has implemented the compression JS, noticed any pauses/latency between page transitions, etc., due to the compression/decompression of the suspend_data?

I'm guessing (hoping!) not, as presumably the time it takes to compress/decompress will be offset to a large degree by the greatly reduced amount of data being transmitted.

Thanks again for sharing this deep expertise.

David Hansen

No, we have not experienced any sort of slide-to-slide delay due to compression.  More often, there will be a delay due to fetching the assets for the next slide (you will experience the three loading dots when this occurs).

A couple of notes on compression:

  • The compression algorithm being used is called deflate, which is a hybrid of Lempel-Ziv 1977 (LZ77) and Huffman coding.  https://en.wikipedia.org/wiki/Zlib
  • The pako package implements and optimizes this algorithm directly in pure JavaScript.  Information on this implementation, the source code, and some performance metrics can be found on their GitHub page: https://github.com/nodeca/pako
  • Today's processors (even in cell phones) are so ridiculously advanced compared to the processors available when these algorithms were first developed and optimized that it's not even funny.
  • The amount of data we are talking about in this scenario, even on the larger side of say 64kb, is still ridiculously small compared to what these algorithms are typically used for nowadays.
  • The pako page mentions benchmarks of deflate-pako running at around ~10mb/sec, and inflate-pako running at ~130mb/sec.  Based on that benchmark, compressing (deflating) a mere 64kb would take roughly 6.4 milliseconds (64kb ÷ 10mb/sec)!  I highly doubt anyone on this planet would actually notice 6 milliseconds. 🤔
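
If you want to sanity-check that on your own machine, something like the following in the browser console will do; the payload here is a made-up stand-in (real suspend data compresses less well than a run of identical characters, but the timing point stands), and it assumes pako is loaded on the page:

// Rough timing check for deflate on a suspend-data-sized string (~64kb)
var sample = new Array(64 * 1024).join("x"); // hypothetical stand-in payload
console.time("deflate");
var compressed = pako.deflate(new TextEncoder().encode(sample));
console.timeEnd("deflate"); // prints elapsed time, typically a few milliseconds
console.log("compressed to", compressed.length, "bytes");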

So, I really don't think you will see any perceived delay.  We certainly have not ourselves, nor have we received any reports attributable to the compression.

Janagiraman D

Hello all,

Our client wants the module to be published in "Storyline 360 v3.48 or 49". When we try the above compression using "pako.min.js", it's not working for us.

I have added the "try" in "function GetDataChunk()" (line 32263 of "scormdriver.js") and the corresponding "try" in "function SetDataChunk()" (line 32286). Please find the attached screenshot for reference.
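
For clarity, the general shape of that change is roughly the following; this is a simplified sketch, not the actual scormdriver.js source (the real function bodies differ between Storyline versions, which is likely why the line numbers moved), and SCORM_CallLMSGetValue plus the DecompressSuspendData helper are assumed names:

// Sketch: wrap the value read from the LMS in a try so that uncompressed
// bookmarks (or a missing pako) fall back to the raw value instead of crashing.
function GetDataChunk() {
  var raw = SCORM_CallLMSGetValue("cmi.suspend_data"); // assumed driver call
  try {
    return DecompressSuspendData(raw); // pako-based helper, assumed name
  } catch (err) {
    return raw; // not compressed (or decompression failed) - use as-is
  }
}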

I have already used the above method with Storyline version 3.35; it worked fine then and is still working fine now.

Could anyone help me find a solution for this?