Forum Discussion
Storyline Suspend Data Compression
Good day!
As some of us know, SCORM 1.2 limits suspend data to 4096 characters. Storyline (360) compresses its data (e.g., SL variables and such) in order to fit within that limit. There must be an underlying decompression algorithm, or Storyline's own unique reader for the suspend data.
My question is: when this compressed suspend data is decompressed, is there a possibility of it hitting the 4096 limit?
- DavidHansen-b20Community Member
If you are willing to open and manipulate files in the SCORM package, you can apply an easy patch that adds data compression to the suspend data sent to the LMS. Note: Articulate suspend data is NOT compressed. It is, however, not human readable. But it is VERY compressible, typically achieving a 10:1 ratio with a standard zlib compression method.
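You can check the ratio yourself in the browser console. A minimal sketch, assuming pako is already loaded on the page and you paste in a real suspend_data string from your LMS:
// Rough ratio check - paste an actual suspend_data string in place of the sample.
var suspendData = "...paste suspend_data here...";
var compressed = pako.deflate(suspendData); // Uint8Array of deflated bytes
console.log("original:", suspendData.length,
  "compressed:", compressed.length,
  "ratio:", (suspendData.length / compressed.length).toFixed(1) + ":1");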
Note: the following suggestion is only for people comfortable with unzipping/zipping their SCORM package and doing basic edits to text-based XML, HTML, and JavaScript files. If this is not you, then you should not consider doing this, or you should find someone who can help you with it. A good XML/HTML/JS-savvy editor is also beneficial; the Brackets editor is a good example.
What you need to do:
1) Obtain the pako.min.js package. This is an open-source, well-reviewed, time-tested zlib compression library written purely in JavaScript. You can google it and download just that file, or download the latest version right from the repository using this link: pako.min.js. You are now going to add this file to your SCORM package (zip archive).
2) Unzip your SCORM course into a directory and change your working directory there.
3) Put a copy of the pako.min.js file into the
lms/
subdirectory.
4) Next, edit
index_lms_html5.html
and search for "lms/API.js". You should find something that looks like this:
<script src="lms/API.js" charset="utf-8"></script>
Then add this new line after that line:
<script src="lms/pako.min.js"></script>
Save the changes.
5) Next, edit
imsmanifest.xml
and go to the end of the file. Just before the line
</resource>
add a new line with:
<file href="lms/pako.min.js" />
Save the changes.
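For reference, the tail end of the manifest should then look roughly like this (the identifier and the other file entries will vary by course; this is just a sketch of the shape):
<resource identifier="..." type="webcontent" adlcp:scormtype="sco" href="index_lms_html5.html">
...
<file href="lms/API.js" />
<file href="lms/pako.min.js" />
</resource>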
You have now successfully added the zlib compression library into your SCORM package. All you need to do now is modify the routines that are used to send and receive the suspend data with the LMS. To do that:
6) Edit the file
lms/API.js
Search for "function GetDataChunk". Replace the line containing
return objLMS.GetDataChunk();
with the following lines:
try {
var strDataC=objLMS.GetDataChunk();
WriteToDebug("GetDataChunk strDataCompressed="+strDataC);
var strData=window.pako.inflate(atob(strDataC), {to: 'string'});
WriteToDebug("GetDataChunk strData="+strData);
return strData;
} catch (err) {
SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");
return "";
}
Then scroll down a bit to the next function, which should be "function SetDataChunk". Replace the line containing
return objLMS.SetDataChunk(strData);
with the following lines:
try {
var strDataC=btoa(window.pako.deflate(strData, {to: 'string'}));
WriteToDebug("SetDataChunk strDataCompressed="+strDataC);
return objLMS.SetDataChunk(strDataC);
} catch (err) {
SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error");
return "";
}
Save your work.
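Before uploading, it's worth a quick sanity check in the browser console. This assumes the pre-2.0 pako string API used in the patch above:
// Round-trip test: deflate+encode, then decode+inflate, and compare.
var original = "test suspend data ^ ~ 123";
var packed = btoa(window.pako.deflate(original, {to: 'string'}));
var unpacked = window.pako.inflate(atob(packed), {to: 'string'});
console.log(unpacked === original); // should log true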
At this point you are done with the modification to add compression to your suspend data, so you can now:
7) Zip the contents of your SCORM package back up.
Some caveats about this modification: the format of the Articulate suspend data is a sequence of information that matches the sequence of slides in your course. If the data happens to get truncated, Articulate can still process the data up to the point of the truncation and resume at the last slide represented in it. This means resuming will still "kind of" work, just not exactly. However, if the compressed data gets truncated, the compression algorithm will fail completely and ALL the resume data will be thrown out (and you'll resume at the first slide). For me, this is a more than worthwhile trade-off, especially since in my experience I typically see a 10:1 reduction in suspend data size. That means 30KB of suspend_data becomes just 3KB, well under the default SCORM 1.2 size limit for suspend_data (4096 bytes).
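You can demonstrate that failure mode for yourself. A minimal sketch, again assuming the pre-2.0 pako string API; chopping the end off the compressed payload makes the round trip throw rather than partially succeed:
var packed = btoa(window.pako.deflate("a long suspend_data string...", {to: 'string'}));
var truncated = packed.substring(0, packed.length - 10);
try {
window.pako.inflate(atob(truncated), {to: 'string'});
} catch (err) {
console.log("resume data lost:", err); // either atob or inflate throws here
}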
Good Luck.
As to why Articulate development hasn't just added something like this to their course exporter? Anybody's guess, I suppose...
- samerCommunity Member
Thanks David for the solution!
Can I just copy and paste the lms/API.js into a different course, so I don't have to keep editing?
- GerryWasilukCommunity Member
Interesting. THANKS for sharing. :)
- DavidHansen-b20Community Member
No, not exactly... However, I have definitely encountered problems with some content authoring tools and certain LMSs where certain characters in the suspend_data cause trouble.
I first started working with the lower-level SCORM JavaScript interface code (ADLnet's open-source version) when I needed to address a similar issue with a large client of ours. The client wasn't going to change or upgrade their LMS, and we couldn't change the content authoring tool since we were just a distributor. What I discovered in that particular situation was that the specific LMS implementation of the SCORM interface was doing a JavaScript eval() to save the suspend_data, so any JavaScript escape characters were causing problems. To address this, I simply wrapped the suspend_data with a btoa()/atob() pair as it was being sent to and received from the LMS. (Those are native JavaScript functions that convert binary to base64 and vice versa.) This addressed that problem, and the client was ecstatic to have the course working with their LMS.
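In sketch form, that earlier escape-character fix amounts to nothing more than this (the names are illustrative, not the actual client code):
// Encode before sending so the LMS eval() never sees escape characters.
function SetDataChunkSafe(strData) {
  return objLMS.SetDataChunk(btoa(strData));
}
// Decode on the way back out.
function GetDataChunkSafe() {
  return atob(objLMS.GetDataChunk());
}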
The story then progresses to another large client with an older LMS. That LMS only supported SCORM 1.2 and could not override the 4096-byte suspend_data size specified in SCORM 1.2. We had authored this particular course in Articulate, and the suspend_data size towards the end of the course was averaging 8-12 kbytes (the course has since grown and now averages around 24 kbytes). The behavior an Articulate course typically exhibits when this limit is reached is to just resume at whatever previous location is represented by that first 4 kbytes. Since we were invested in Articulate for this course, and the client couldn't just upgrade their LMS, we needed to figure something out. When I looked at the LMS API within Articulate and recognized that it was pretty much based on the ADLnet open source, I knew exactly how to handle it.
Now, I don't have any experience with Moodle. We either support customers' LMS systems or our own, which is based on cloud.scorm.com, and I haven't had to deal with any customers using Moodle. I don't know if that's just because we haven't had any customers on Moodle or whether Moodle generally works fine and thus we haven't had any reported issues. What I do know is that the SCORM 1.2 specification states suspend_data must be "a set of ASCII characters with a maximum length of 4096 characters". Articulate does use some uncommon characters like ^ and ~. Those particular characters should not cause any JavaScript problems, but then again it's hard to say how Moodle is implemented and what its interpretation of "ASCII characters" really is.
Lastly, I will say that this patch encodes the compressed data in base64, so it should be very safe with any LMS. It might even address your issue with Moodle. Base64 is a very common standard for encoding to a minimum set of "safe" characters (that is, characters that are common to most encodings and printable). You can read more about that here: https://wikipedia.org/wiki/Base64
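As a tiny illustration in plain JavaScript (nothing LMS-specific), even the ^ and ~ characters disappear into the base64 alphabet:
btoa("2^1~3^0"); // "Ml4xfjNeMA==" - only A-Z, a-z, 0-9, +, / and = ever appear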
- MichaelBauerCommunity Member
Thanks for the insight here :)
I'm not willing to get into the SCORM coding, but at least doing a "Reset to initial state" should help!
- DavidHansen-b20Community Member
Yes, that should be fine. The lms/API.js file rarely would change unless there is some update to handle a new API or fix. That seems to rarely happen these days, and many people haven't even adopted TinCan/xAPI yet.
- samerCommunity Member
Hello David,
It appears that since a recent Articulate update to 360, this code no longer works.
Would you have any ideas or suggestions as to why?
Help much appreciated!
Kind regards
Sharon Amer
- DarioDabbiccoCommunity Member
Hi Sharon, this is an error I usually see when you zip the root folder of the Storyline output (e.g., "project - storyline output") rather than the actual files inside the folder (index_lms.html, imsmanifest.xml, etc.). This creates an "extra" outer folder that some LMSs, such as Moodle, do not accept.
- DavidHansen-b20Community Member
Yep, Dario is spot on. For this very reason, I like to use a command line zip tool. Then I can be sure that I'm in the right directory (where the imsmanifest.xml file is located) and getting all the files. For the tool I use, the command looks like this:
zip -r ../course.zip *
In this example, the '-r' signals to recurse into subdirectories, '../course.zip' is the zip archive name (placed in the directory above the current one), and the '*' specifies packaging all files/directories from the current working directory.
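You can then confirm nothing grew an extra outer folder by listing the archive (the unzip tool's -l flag just lists contents):
unzip -l ../course.zip
imsmanifest.xml should appear in the listing with no directory prefix in front of it.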
- DarioDabbiccoCommunity Member
Hi Sharon, there's been a change in the names and organization of files in the output, but this method is still valid. You have to look for the
lms/scormdriver.js
file, and you will find the JavaScript functions you're looking for, with very slight but inconsequential variations.
- samerCommunity Member
Thank you for your reply. I have located the JS functions, and so far they are working successfully.
- DavidHansen-b20Community Member
Well, hrmph! It does look like pako changed their API in v2.0.0:
## [2.0.0] - 2020-11-17
### Changed
- Removed binary strings and `Array` support.
So, that does mean GetDataChunk() and SetDataChunk() need some updates to deal with deflate now returning a Uint8Array and inflate requiring a Uint8Array as input.
Though your suggested changes do work, I have tuned them just a bit. Note: I chose String.prototype.split to turn a string into an array because it has the widest and oldest browser support and can still be done in one line. The only real pitfall with split is not a problem here, since nothing coming out of deflate or going into inflate is an actual multi-unit UTF-16 character; both sides deal strictly with 8-bit integers (i.e., an 8-bit value can never produce a UTF-16 surrogate pair that would break split).
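If you prefer named helpers over the inline one-liners in the patch below, the two conversions factor out like this (a sketch; same split/map and reduce/fromCharCode calls, just given names):
// Binary string (atob output) -> Uint8Array for pako.inflate().
function binaryStringToBytes(s) {
  return new Uint8Array(s.split('').map(function (c) { return c.charCodeAt(0); }));
}
// Uint8Array (pako.deflate output) -> binary string for btoa().
function bytesToBinaryString(bytes) {
  return bytes.reduce(function (s, b) { return s + String.fromCharCode(b); }, '');
}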
So, here is my updated pako patch file that I am now using with pako > v2.0.0:
--- index_lms.html.orig 2020-09-16 11:15:29.634371759 -0700
+++ index_lms.html 2020-09-16 11:15:06.980645108 -0700
@@ -13,5 +13,6 @@
#app { height: 100%; width: 100%; }^M
</style>^M
<script src="lms/scormdriver.js" charset="utf-8"></script>^M
+ <script src="lms/pako.min.js"></script>^M
<script>window.THREE = { };</script>^M
</head>^M
--- lms/scormdriver.js.orig 2021-07-03 14:55:52.000000000 -0700
+++ lms/scormdriver.js 2022-05-02 13:08:13.100525415 -0800
@@ -32257,7 +32257,15 @@
return "";
}
- return objLMS.GetDataChunk();
+ try {
+ var strDataC=objLMS.GetDataChunk();
+ var strData=window.pako.inflate(atob(strDataC).split('').map(function(c){return c.charCodeAt(0)}), {to: 'string'});
+ WriteToDebug("GetDataChunk strData="+strData);
+ return strData;
+ } catch (err) {
+ SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");
+ return "";
+ }
}
//public
@@ -32270,7 +32278,14 @@
return false;
}
- return objLMS.SetDataChunk(strData);
+ try {
+ WriteToDebug("SetDataChunk strData="+strData);
+ var strDataC=btoa(window.pako.deflate(strData).reduce(function(s,i){return s+String.fromCharCode(i)},''));
+ return objLMS.SetDataChunk(strDataC);
+ } catch (err) {
+ SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error");
+ return "";
+ }
}
//public

Hi Christian,
The suspend data is compressed and not human readable, but it's still something that your Learning Management System (LMS) would be able to read and decipher. I haven't seen anyone crack the algorithm or determine a way around it though.
If you can share a bit more about what you're hoping to accomplish or any trouble you've run into - I or others in the ELH community may be able to point you in the right direction.
- ChristianOmpadCommunity Member
Hello Ashley,
I am trying to figure out a way to resolve an unwanted behavior in a course I'm working on. Even when it's completed, it always resumes at an exam question. I would like it to resume on the last page the user was on when they completed the course. So far, every topic in ELH and all the help from support have led me to the conclusion that I am facing a suspend_data problem.
These are the suggested solutions that I have drawn from the discussion:
1. Publish to SCORM 2004 3rd/4th ed.
2. Minimize/delete slides.
3. Set only the important slides to "Resume saved state" and set the others to "Reset to initial state" to minimize the data sent to suspend_data.
1 and 2 are not an option, since the client's LMS only supports SCORM 1.2 and everything in the course is based on their specs. That leaves me with option 3, but I have made little progress on making this random behavior less random.
- ChristopherPCommunity Member
Thanks for the solution for SCORM 1.2 suspend_data
Hi Christian,
If you have a large course that exceeds suspend data limits, here are some suggestions for correcting it:
- Disable the resume feature in Storyline.
- Reduce the number of slides until the resume feature works as expected. The limit will vary, depending on a variety of factors. You'll need to test your content in your LMS to verify.
- Republish your course for SCORM 2004 3rd Edition or 4th Edition, both of which support much longer suspend data.
There are some community ideas shared here that may be helpful as you research what works best for your situation, and hopefully others in the community will chime in to help you out.