Storyline Suspend Data Compression

Nov 09, 2018

Good day!

As some of us know, SCORM 1.2 limits suspend data to 4096 characters. Storyline (360) compresses its data (e.g. SL variables and such) in order to fit within that limit. There must be an underlying decompression algorithm, or a reader unique to SL, that reads the suspend data back.

My question is: when this compressed suspend data is decompressed, is there a possibility of it hitting the 4096 limit?
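For context, here is a rough illustration of the general approach discussed later in this thread (zlib via the pako library, then base64) - not Storyline's actual algorithm, and it assumes a pako build is loaded as window.pako. The point is that the 4096-character limit applies to what gets written to cmi.suspend_data, i.e. the compressed and encoded value; the decompressed data only exists inside the course at runtime.

    // Rough illustration only - not Storyline's real scheme. Assumes window.pako is available.
    var suspendData = new Array(300).join('var_Score=87,var_Slide=6.12,');  // roughly 8,400 characters
    var stored = btoa(window.pako.deflate(suspendData).reduce(function (s, i) { return s + String.fromCharCode(i); }, ''));

    console.log(suspendData.length);  // well over 4096
    console.log(stored.length);       // the compressed + base64 value actually sent to the LMS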

60 Replies
Janagiraman D

We have configured the course with SCORM 1.2.

The course is configured to "Resume" when the user relaunches it.

But when I relaunch the course after viewing some slides, all of the previously recorded suspend data has been lost.

I can't find any error in the developer console.

Please find the attached "Debug_Log" for your reference.

David Hansen

Well, I definitely see an inflate error getting thrown in your debug output.  Hard to say what the error is though.

Perhaps modify this line:

SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");

to be:

SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);

That might help to give an indication of why pako.inflate is throwing an error.

Also, I might recommend opening the debug window in the first session, grabbing a copy of the suspend data (both compressed and uncompressed) at the end of that session, and then comparing it with what is in the registration data on cloud.scorm and with the data logged at the startup/resume of the second session. That will help identify whether the data is getting corrupted somewhere between the two sessions.

Seeing what the actual error message pako.inflate is throwing will help to further troubleshoot.
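For illustration, here is a small sketch of that kind of check (it just assumes a pako build is loaded as window.pako; the chunk below is a stand-in for a real value copied out of the debug log). It shows both a healthy round trip and the kind of message pako.inflate throws when the chunk has been damaged:

    // Stand-in for a compressed chunk copied from the debug log.
    var chunk = btoa(window.pako.deflate('stand-in for real suspend data').reduce(function (s, i) { return s + String.fromCharCode(i); }, ''));
    var bytes = atob(chunk).split('').map(function (c) { return c.charCodeAt(0); });

    function tryInflate(byteValues) {
        try {
            return "OK: " + window.pako.inflate(new Uint8Array(byteValues), {to: 'string'});
        } catch (err) {
            return "Inflate error: " + err;   // this is the message the "+ err" change above surfaces
        }
    }

    console.log(tryInflate(bytes));   // a healthy chunk inflates cleanly
    bytes[0] = 0;                     // clobber the zlib header on purpose
    console.log(tryInflate(bytes));   // e.g. something like "incorrect header check"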

David Hansen

Uh, first, those were suggestions for you to use to debug, not just to collect the info for someone else to analyze.  If you get the error exception message from pako.inflate, then I might be able to help decode what it could mean.

Second, you need to actually include that output.  It would be near the beginning of the debug log (just after the call to GetDataChunk).  The file you attached as blank_Session_2.html does not start at the beginning, nor does it include the first GetDataChunk call that would contain the key error message.

Jan Vilbrandt

Hi David,

First of all: a big thank you for your idea and for sharing it with us.

My company is using the LMS "LSO" from SAP.

I ran into an error when using your code. I have included pako V. 2.0.4. Maybe there is a bug in that package. 

Second, there is a problem with your base64-encoded data. Your code does not compress the binary data (the zipped output from pako) but a text string, so the result isn't really compressed.

This is my solution, based on your idea (follow the instructions on page 1 of this conversation):

function getDataChunk() {

    .....

    try {
        var strDataC = objLMS.GetDataChunk();
        WriteToDebug("GetDataChunk strDataCompressed=" + strDataC);

        // base64-decode the stored chunk into a binary string
        var blob = atob(strDataC);

        // turn the binary string into an array of byte values for pako
        var decarray = [];
        Object.values(blob).forEach(function (item) { decarray.push(item.charCodeAt(0)); });

        // inflate back to the original, uncompressed suspend data
        var strData = window.pako.inflate(decarray, {to: 'string'});
        WriteToDebug("GetDataChunk strData=" + strData);
        return strData;
    } catch (err) {
        SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);
        return "";
    }
}

function setDataChunk() {

    ...

    try {
        var strDataC = "";
        // deflate the uncompressed suspend data (strData, as passed to the driver)
        var compressed = window.pako.deflate(strData);

        // turn the resulting byte array into a binary string so btoa can encode it
        var blob = "";
        Object.values(compressed).forEach(function (item) { blob += String.fromCharCode(item); });

        strDataC = btoa(blob);
        WriteToDebug("SetDataChunk strDataCompressed=" + strDataC);
        return objLMS.SetDataChunk(strDataC);
    } catch (err) {
        SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error: " + err);
        return "";
    }
}

The result works fine with Articulate Storyline 3 Version 3.15 and should work with Articulate Storyline 360 as well.

I tested with a training course that creates about 7,300 bytes of suspend data.

The compressed data is only about 1,500 bytes.

The compression is great: the result is only about 20% of the uncompressed data.

 

Some additional notes on the instructions from page 1:

<script src="lms/API.js" charset="utf-8"></script>

is now 

 <script src="lms/scormdriver.js" charset="utf-8"></script>

The name of the file is now index_lms.html (not index_lms_html5.html).

Thanks again, David, for coming up with that idea.

Best wishes,

Jan

David Hansen

Well, hrmph!   It does look like pako changed their API in v2.0.0:

## [2.0.0] - 2020-11-17
### Changed
- Removed binary strings and `Array` support.

So, that does mean the getDataChunk() and setDataChunk() functions do need some updates to deal with deflate now returning a Uint8Array and inflate requiring a Uint8Array as input.

Though your suggested changes do work, I have tuned them just a bit.  Note: I chose to use String.prototype.split to turn the binary string into an array because it has the widest and oldest browser support and can still be done in one line. The only real issue with split - breaking up surrogate pairs in real UTF-16 text - is not a problem here, since the binary strings built from deflate's output and decoded by atob only ever contain 8-bit character codes, which can never form a surrogate pair that split would break.
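For anyone who wants to sanity-check those two conversions before patching, here is a minimal standalone round trip using the same one-liners as the patch below (it assumes pako >= 2.0 is loaded as window.pako and can be run in the browser console):

    var strData = 'example Storyline suspend data, usually several thousand characters long';

    // deflate: string -> Uint8Array -> binary string (reduce/fromCharCode) -> base64
    var strDataC = btoa(window.pako.deflate(strData).reduce(function (s, i) { return s + String.fromCharCode(i); }, ''));

    // inflate: base64 -> binary string (atob) -> byte array (split/map) -> original string
    var roundTrip = window.pako.inflate(atob(strDataC).split('').map(function (c) { return c.charCodeAt(0); }), {to: 'string'});

    console.log(roundTrip === strData);             // should log true
    console.log(strData.length, strDataC.length);   // the second number is what the LMS actually stores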

So, here is my updated pako patch file that I am now using with pako > v2.0.0:

--- index_lms.html.orig 2020-09-16 11:15:29.634371759 -0700
+++ index_lms.html 2020-09-16 11:15:06.980645108 -0700
@@ -13,5 +13,6 @@
#app { height: 100%; width: 100%; }^M
</style>^M
<script src="lms/scormdriver.js" charset="utf-8"></script>^M
+ <script src="lms/pako.min.js"></script>^M
<script>window.THREE = { };</script>^M
</head>^M
--- lms/scormdriver.js.orig 2021-07-03 14:55:52.000000000 -0700
+++ lms/scormdriver.js 2022-05-02 13:08:13.100525415 -0800
@@ -32257,7 +32257,15 @@
return "";
}

- return objLMS.GetDataChunk();
+ try {
+ var strDataC=objLMS.GetDataChunk();
+ var strData=window.pako.inflate(atob(strDataC).split('').map(function(c){return c.charCodeAt(0)}), {to: 'string'});
+ WriteToDebug("GetDataChunk strData="+strData);
+ return strData;
+ } catch (err) {
+ SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");
+ return "";
+ }
}

//public
@@ -32270,7 +32278,14 @@
return false;
}

- return objLMS.SetDataChunk(strData);
+ try {
+ WriteToDebug("SetDataChunk strData="+strData);
+ var strDataC=btoa(window.pako.deflate(strData).reduce(function(s,i){return s+String.fromCharCode(i)},''));
+ return objLMS.SetDataChunk(strDataC);
+ } catch (err) {
+ SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error");
+ return "";
+ }
}

//public

 

Nick Morrison

What's crazy to me is that this issue is STILL a thorn in everyone's side after all this time.
If a compression fix like this is available via a "freeware" JavaScript library - why can't Articulate just amend their programming to include this (or a similar fix) in their SCORM 1.2 export tool in the first place?

We all want/need it.

This isn't something we should have to do by getting into the guts of the SCORM package to play with, adjust, and "hack" just to make our courses work the way we (and our clients) want them to.

After all, it's not as though courses - or tracking and logging demands - are getting smaller.