If you are willing to open and manipulate files in the SCORM package, you can apply an easy patch that adds data compression to the suspend data sent to the LMS. Note: Articulate suspend data is NOT compressed. It is not human readable, but it is VERY compressible - typically achieving a 10:1 ratio with zlib compression.
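If you want to see why this kind of data compresses so well, here is a quick sketch you can run in a browser console with pako loaded (the repetitive sample string is made up, not real Articulate data):
// Illustrative only - assumes pako 1.x, where {to: 'string'} returns a binary string
var fake = new Array(500).join("2a3b1c0d;");   // ~4.5KB of repetitive "state"
var packed = btoa(window.pako.deflate(fake, {to: 'string'}));
console.log(fake.length + " -> " + packed.length);   // expect a large reduction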
Note: the following suggestion is only for people comfortable with unzipping/zipping their SCORM package and doing basic edits to text-based XML, HTML and JavaScript files. If this is not you, you should either not attempt this or find someone who can help you with it. A good XML/HTML/JS-savvy editor also helps; Brackets is a good example.
What you need to do:
1) Obtain the pako.min.js file. This is an open-source, well-developed and reviewed, time-tested zlib compression library written purely in JavaScript. You can google it and download just that file, or download the latest version right from the repository using this link: pako.min.js. You are now going to add this file to your SCORM package (zip archive).
2) Unzip your SCORM course into a directory and make that your working directory.
3) Put a copy of the pako.min.js file into the lms/ subdirectory.
4) Next edit index_lms_html5.html and search for "lms/API.js". You should find something that looks like this:
<script src="lms/API.js" charset="utf-8"></script>
Then add this new line after that line:
<script src="lms/pako.min.js"></script>
Save the changes.
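(Optional) Once the course is being served, you can confirm the library actually loaded by running this in the browser's developer console:
console.log(typeof window.pako);   // should print "object", not "undefined"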
5) Next edit imsmanifest.xml and go to the end of the file. Just before the line </resource>, add a new line with:
<file href="lms/pako.min.js" />
Save the changes.
You have now successfully added the zlib compression library to your SCORM package. All that's left is to modify the routines used to send and receive the suspend data with the LMS. To do that:
6) Edit the file lms/API.js
Search for "function GetDataChunk". Replace the line containing return objLMS.GetDataChunk();
with the following lines:
try {
  // Fetch the stored suspend data (base64-encoded, deflated) from the LMS
  var strDataC = objLMS.GetDataChunk();
  WriteToDebug("GetDataChunk strDataCompressed=" + strDataC);
  // base64-decode, then inflate back to the original suspend data string
  var strData = window.pako.inflate(atob(strDataC), {to: 'string'});
  WriteToDebug("GetDataChunk strData=" + strData);
  return strData;
} catch (err) {
  // Truncated or non-compressed data will fail to inflate and land here
  SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");
  return "";
}
Then scroll down a bit to the next function, which should be "function SetDataChunk". Replace the line containing return objLMS.SetDataChunk(strData); with the following lines:
try {
  // Deflate the suspend data, then base64-encode it so it stores safely as text
  var strDataC = btoa(window.pako.deflate(strData, {to: 'string'}));
  WriteToDebug("SetDataChunk strDataCompressed=" + strDataC);
  return objLMS.SetDataChunk(strDataC);
} catch (err) {
  // If compression fails for any reason, report an error instead of saving bad data
  SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error");
  return "";
}
Save your work.
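Before you repackage, it is worth sanity-checking the round trip. Here is a small sketch you can paste into the browser console while the patched course is running (again assuming the pako 1.x {to: 'string'} API used above):
var sample = "fake suspend data, just for testing";   // made-up stand-in
var packed = btoa(window.pako.deflate(sample, {to: 'string'}));
var unpacked = window.pako.inflate(atob(packed), {to: 'string'});
console.log("roundtrip ok: " + (unpacked === sample));   // should print true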
At this point the modification to add compression to your suspend data is complete, so you can now:
7) Zip the contents of your SCORM package back up.
Some caveats about this modification: the format of the Articulate suspend data is a sequence of information that matches the sequence of slides in your course. If the uncompressed data happens to get truncated, Articulate can still process the data up to the point of the truncation and resume at the last slide covered by the surviving data - resuming will still "kind of" work, just not exactly. However, if the compressed data gets truncated, the decompression will fail completely and ALL the resume data will be thrown out (and you'll resume at the first slide). For me, this is a more than worthwhile trade-off, especially since in my experience I typically see a 10:1 reduction in suspend data size - 30KB of suspend_data becomes just 3KB, under the default SCORM 1.2 size limit for suspend_data (4096 bytes).
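You can demonstrate that failure mode for yourself with a hypothetical example: chop the end off a compressed string and inflate throws, which is exactly what the try/catch added in step 6 turns into an empty (start-over) resume:
// Hypothetical demonstration of the truncation caveat (pako 1.x API)
var data = new Array(100).join("slide=1;viewed;");
var packed = btoa(window.pako.deflate(data, {to: 'string'}));
var truncated = packed.substring(0, packed.length - 10);
try {
  window.pako.inflate(atob(truncated), {to: 'string'});
} catch (err) {
  console.log("inflate failed: " + err);   // ALL resume data is lost
}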
Good Luck.
As to why Articulate development hasn't just added something like this to their course exporter? Anybody's guess, I suppose...