Forum Discussion
Storyline Suspend Data Compression
Good day!
As some of us know, SCORM 1.2 limits suspend data to 4096 characters. Storyline 360 compresses its data (e.g., SL variables and the like) in order to fit within that limitation, so there must be an underlying decompression algorithm, or a reader unique to SL, for reading the suspend data back.
My question: when this compressed suspend data is decompressed, is there any possibility of it hitting the 4096-character limit?
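To make the question concrete, here is roughly where the limit bites (an illustrative snippet, not Storyline's actual code; the limit applies to whatever string is handed to the LMS through the SCORM 1.2 API):

// Illustrative only: 'API' is the standard SCORM 1.2 API object and
// 'packedState' stands in for Storyline's compressed state string.
var packedState = "...";
if (packedState.length > 4096) {
  console.warn("suspend_data exceeds the SCORM 1.2 minimum of 4096 characters");
}
API.LMSSetValue("cmi.suspend_data", packedState);
API.LMSCommit("");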
- AndreaBrigan523 (Community Member)
Hi David,
Thanks a lot for your reply. I am fully aware that SCORM 1.2 is not a modern format. My company sells compliance courses for the financial industry, so we have to deal with a wide range of LMSs (some accepting only SCORM 1.2), and preparing our catalogue in various formats is challenging: this is why we decided to simplify our production process by using SCORM 1.2 only.
Our catalogue counts more than 200 courses, some of them in four languages, and the Authority requires at least one update per year.
Besides that, it is also true that our courses are big, not only in terms of slides but also in terms of variables and interactions. Of course it is possible to simplify them, but that work takes time, and I am not sure it would lead to a good result without jeopardizing the quality of the courses.
Last, we are also using Moodle for a few clients but, as far as I understand, it doesn't support SCORM 2004; or, better said, the course works but the resume function doesn't.
In the end I decided to write to the community and share my thoughts on this topic: users who quit the course without completing it can't restart from where they left off, and this is perceived as a bug. On the other hand, users who take the course in one shot, without interruptions, are super happy...
- PhilMayor (Super Hero)
Moodle is one of the LMSs where you can override the strict SCORM limit.
- AndreaBrigan523 (Community Member)
Hi Phil,
How can I do that? I already checked moodle.org to find a way, but I had difficulty finding a good procedure. Can you advise, or suggest a website where I can find the procedure?
- DavidHansen-b20 (Community Member)
Welcome to my world! Except you apparently have not yet run into the customer that has an LMS that does NOT support SCORM 1.2 (only 2004 or later)...
- NathanHartwick (Community Member)
You can override the SCORM limit by going to Site administration > Plugins > SCORM package, scrolling to the Admin Settings section at the bottom of the page, and unchecking the box for SCORM standards mode.
- AndreaBrigan523 (Community Member)
Hi Nathan,
Thanks a lot, but standards mode was already unchecked.
- PhilMayor (Super Hero)
You can also hack the code to increase the limit further; Dan Marsden should be able to point you in the right direction. However, I would look into this first and see how much data is actually being sent in your packages. The override gives you 64K characters, so it sounds to me like something is off.
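One quick way to check from the browser console, assuming the LMS exposes the standard SCORM 1.2 API object:

// Illustrative console check of how much suspend data the course is storing.
// 'API' is the standard SCORM 1.2 API object located by the content.
var data = API.LMSGetValue("cmi.suspend_data");
console.log("suspend_data length:", data.length);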
- DavidHansen-b20 (Community Member)
FWIW, my company's largest course was producing about 60kb of suspend data by the time a learner reached the end, and we were consciously trying to keep the course simple (we broke out coverage for specific states into separate versions of the course, which makes maintenance and upkeep a royal PITA, especially due to other issues Articulate doesn't seem to want to address).
Andrea mentioned that "it is also true that our courses are big not only in terms of slides but also in terms of variables and interactions." He also mentioned multiple languages, and I wasn't sure whether he meant that each course supports four different languages within it or whether they are separate courses. Having multiple languages within one course would make it huge. We've contemplated doing it that way and experimented with it for one customer. It is definitely ideal for the learner to be able to choose the language they want during the course, and most LMS systems have trouble assigning a course to particular learners based on language (what language should a course assignment be? only the learner really knows that answer). So, it would not surprise me at all if he is exceeding 64kb.
Note: as I mentioned in my original reply on how to add compression, zlib compression achieves around a 10:1 ratio on Articulate's suspend data. I can't imagine very many instances where the suspend data would exceed, say, 500kb; that would probably put the course into the gargantuan category (IMO)...
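For anyone who hasn't seen the original reply, the general shape of the patch is to deflate the suspend data with pako before it goes to the LMS and inflate it on the way back. Here is a minimal sketch of the idea; the helper names are illustrative, not the exact code (it assumes pako.min.js has been added to the published package):

// Compress on the way out: deflate to bytes, then base64-encode so the
// result stays a plain character string the LMS can store.
function compressSuspendData(str) {
  var bytes = pako.deflate(str);
  var bin = "";
  for (var i = 0; i < bytes.length; i++) {
    bin += String.fromCharCode(bytes[i]);
  }
  return btoa(bin);
}

// Decompress on the way back in: base64-decode to bytes, then inflate
// back to the original suspend data string.
function decompressSuspendData(b64) {
  var bin = atob(b64);
  var bytes = new Uint8Array(bin.length);
  for (var i = 0; i < bin.length; i++) {
    bytes[i] = bin.charCodeAt(i);
  }
  return pako.inflate(bytes, { to: "string" });
}

Base64 encoding adds about a third back in size, but with deflate's roughly 10:1 ratio on Storyline's repetitive state data there is still plenty of headroom.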
- DavidHansen-b20 (Community Member)
Okay, thanks for the update. We haven't upgraded from the Spring update yet; verifying what's changed and doing QA on all our courses is on our to-do list. I will update this thread with the necessary changes once I have taken care of that.
- MichaelPucke288 (Community Member)
I wanted to share this update on our SCORM 1.2 bookmarking fix for an older LMS that only supports SCORM 1.2. Our course was 80 slides in 11 scenes with one final assessment, and fairly text-heavy, as it deals with inclusive recruitment practices.
I was able to get the course bookmarking to work 100% throughout the course by changing it to the classic player and making a few other minor modifications. I could not get the zlib compression procedure to work, because the fix no longer works with SCORMDriver.js for some odd reason; at least for me, and I tested it about five times.
Even when using the classic player with HTML5/Flash as suggested by Articulate, the index_lms_html5.html still points to SCORMDriver.js and not the api.js file. When I made the changes according to the procedure above, the course would always start back at the main slide. For expediency, I decided to just try changing a few player settings and other things recommended in the community to see if that would work. And it did!
The idea of reducing the suspend data was the key to my success. To do that, I changed the course player back to the old version (Classic Player) and set all the slides to 'Reset to Initial State.' Not sure why, but the modern player seemed to roughly double the suspend data size. The reason I can say that is that when testing with the modern player, I could only bookmark up to slide 40; after changing to the classic player, I could go all the way to slide 80, and since that is every slide in the course, the change effectively doubled my usable suspend data. And that is with a 10-question final assessment with correct/incorrect feedback of at least a paragraph each.
From my point of view, the classic player does not affect the appearance of our course much, because we are using custom navigation icons. The slight cosmetic change was worth the full bookmarking we gained. I would be curious to know why the new modern player affects the suspend data size so much.
I performed the following steps in my final test (Test 8):
1. First, set slide properties for all slides to:
a. Slide advances: By User
b. When revisiting: Reset to Initial State
2. Next, change the course player to the classic version
3. Then disable (uncheck) all player tab options: resources, menu, glossary, notes, title, volume, seekbar, accessibility, logo, captions
4. On the Other tab (gear icon), set Player size to: Scale player to fill browser window
5. For Resume on restart, set to: Always Resume.
6. Click OK to save player.
7. Click Save to save player options to course file.
8. On the Triggers panel, confirm there are no unused project variables in the course; if the use count is 0, delete them.
9. On the Slide Properties panel, uncheck Slide Navigation and gestures for all slides. (You will need to click through each slide to be sure the Prev and Next buttons are not enabled; otherwise, it will show duplicate navigation if you are using custom navigation.)
10. Save File
11. Publish Settings:
a. Formats: HTML5 Only
b. Player: Classic – Storyline Player
c. Quality: Optimized for standard delivery
d. Publish: Entire project
e. Tracking: Results slide (11.12 Quiz Results)
f. LMS: SCORM 1.2
12. Reporting and Tracking
a. LMS Reporting: Passed/Failed
b. Tracking Options: Check when the learner completes a quiz
c. 11.12 Quiz Results - Final Assessment
13. Click Publish
14. Save to ZIP
15. Save Project file.
I tested it twice using SCORM Cloud with the same results. I hope this procedure helps you, and I look forward to learning about new compression procedures for Storyline 360.
Please let me know if you have any questions.
Michael
- MichaelPucke288 (Community Member)
I am updating my earlier post with new findings: SCORM suspend data is not affected by the modern course player.
After working with the Articulate team, I went back and retested my Test5_ResettoInit, and my findings aligned with Articulate's: the modern course player does not impact the suspend data as my initial tests had suggested. It is possible that I uploaded the wrong SCORM package during that earlier test, so I stand corrected. It appears that the real solution lies in the 'Reset to Initial State' setting; the course player does not affect the suspend data.
I hope this new information helps!
Michael
Cleo's testing update from Articulate:
I ran a test using Storyline 360 (Build 3.47.23871.0): I created a new project file with 10 slides (no question slides), set every slide's Slide Properties > When revisiting to "Reset to initial state," and used both the Classic and Modern players, with most of the player features turned off except navigation. I published for SCORM 1.2, uploaded it to SCORM Cloud, and tested the course in the Google Chrome browser.
Based on the test results, the player (Modern or Classic) doesn't contribute to the amount of suspend_data, unless you have enabled or disabled player features in the Slide Properties.
What contributes to the suspend_data includes (but is not limited to) slide and timeline information, object states, question slide object states, etc.
I hope this information helps.
Cleo Sinues
Storyline Support Engineer
- PeteBrown1 (Community Member)
Thanks @David Hansen for generously sharing (over an extended period of time) your expertise and tips on this especially troublesome problem. I think I may have to dive in and try your compression method for a particular project/client combination.
One question, however. Have you, or anyone who has implemented the compression JS, noticed any pauses or latency between page transitions, etc., due to the compression/decompression of the suspend_data?
I'm guessing (hoping!) not, as presumably the time it takes to compress/decompress will be offset to a large degree by the greatly reduced amount of data being transmitted.
Thanks again for sharing this deep expertise.
- DavidHansen-b20 (Community Member)
No, we have not experienced any sort of slide-to-slide delay due to compression. More often, there will be a delay due to fetching the assets for the next slide (you will experience the three loading dots when this occurs).
A couple of notes on compression:
- The compression algorithm being used is called deflate, which combines Lempel-Ziv 1977 (LZ77) with Huffman coding. https://en.wikipedia.org/wiki/Zlib
- The pako package implements and optimizes this algorithm in pure JavaScript. Information on the implementation, the source code, and some performance metrics can be found on its GitHub page: https://github.com/nodeca/pako
- Today's processors (even in cell phones) are so ridiculously advanced compared to the processors available when these algorithms were first developed and highly optimized that it's not even funny.
- The amount of data we are talking about in this scenario, even at the larger end of say 64kb, is still ridiculously small compared to what these algorithms are typically used for nowadays.
- The pako page mentions benchmarks of deflate-pako running at around ~10mb/sec and inflate-pako at ~130mb/sec. At that rate, compressing (deflating) a mere 64kb would take roughly 6 milliseconds! I highly doubt anyone on this planet would actually notice 6 milliseconds. 🤔
So, I really don't think you will see any perceived delay. We certainly have not seen one ourselves, nor received any reports attributable to the compression.
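If you want to sanity-check that on your own hardware, something like the following in the browser console (with pako loaded) will give ballpark numbers; the sample data here is illustrative and compresses far better than real suspend data:

// Illustrative micro-benchmark: time a deflate/inflate round trip on ~64kb.
var sample = new Array(64 * 1024 + 1).join("x");
var t0 = performance.now();
var packed = pako.deflate(sample);
var t1 = performance.now();
var restored = pako.inflate(packed, { to: "string" });
var t2 = performance.now();
console.log("deflate:", (t1 - t0).toFixed(2), "ms;",
            "inflate:", (t2 - t1).toFixed(2), "ms;",
            "round trip ok:", restored === sample);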
- PeteBrown1 (Community Member)
Thanks again for introducing, following and supporting this really useful technique.
- JanagiramanD (Community Member)
Hello all,
Our client wants the module published with Storyline 360 v3.48 or v3.49. When we try the above compression using pako.min.js, it's not working for us.
I added the "try" in function GetDataChunk() (line 32263 of SCORMDriver.js) and the corresponding "try" in function SetDataChunk() (line 32286). Please find the attached screenshot for reference.
I have already used the above method with Storyline version 3.35; it worked fine then and is still working fine now.
Could anyone help me find a solution for this?
- DavidHansen-b20 (Community Member)
What is the error you are getting (should be visible from the browser developer console)?
Or are you saying it just isn't working? Are you configured for SCORM, or for cmi5 or Tin Can?
- JanagiramanD (Community Member)
We have configured the course with SCORM 1.2.
The course is configured to "Resume" when the user relaunches it.
But when I relaunch the course after viewing some slides, all the previously recorded suspend data has been lost.
I can't find any error in the developer console.
Please find the attached "Debug_Log" for your reference.
- DavidHansen-b20 (Community Member)
Well, I definitely see an inflate error getting thrown in your debug output. Hard to say what the error is though.
Perhaps modify this line:
SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");
to be:
SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);
That might help to give an indication of why pako.inflate is throwing an error.
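For context, that line sits inside the try/catch wrapped around the inflate call on the resume path; the patched shape is roughly as follows (identifiers other than SetErrorInfo, ERROR_INVALID_RESPONSE, and pako.inflate are illustrative, not the exact driver code):

// Illustrative shape of the patched resume path.
try {
    strData = pako.inflate(compressedBytes, { to: "string" });
} catch (err) {
    SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);
    return "";   // bail out with empty data rather than crash the driver
}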
Also, I might recommend opening the debug window in the first session, grabbing a copy of the suspend data (both compressed and uncompressed) at the end of that session, and then comparing it with the registration data on cloud.scorm and with the data logged at the startup/resume of the second session. That will help identify whether the data is getting corrupted somewhere between the two sessions.
Seeing what the actual error message pako.inflate is throwing will help to further troubleshoot.
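If it helps, one way to grab the stored value for that comparison, assuming the SCORM 1.2 API object is reachable from the console:

// Illustrative: capture the stored suspend data for diffing between sessions.
// copy() is a Chrome DevTools console utility that puts the value on the clipboard.
var stored = API.LMSGetValue("cmi.suspend_data");
console.log("stored length:", stored.length);
copy(stored);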
- JanagiramanD (Community Member)
Thanks for providing more information David.
Please find attached the debug logs of Sessions 1 and 2, plus the suspend_data comparison between the Session 1 debug log and the cloud.scorm data.