Forum Discussion

ChristianOmpad
Community Member
6 years ago

Storyline Suspend Data Compression

Good day!

As some of us know, SCORM 1.2 limits suspend data to 4096 characters. Storyline (360) compresses its data (e.g. SL variables and such) so that it fits within that limit, which means there must be an underlying decompression algorithm, or its own unique reader, in SL to read the suspend data back.

My question is: when this compressed suspend data is decompressed, is there any possibility of it hitting the 4096-character limit?
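
For anyone who wants to see the numbers, here is a quick browser-console sketch (assuming a zlib-style library such as pako is loaded; Storyline's internal scheme may differ, and the sample string is made up). The key point is that the 4096-character limit applies to whatever string is actually written to cmi.core.suspend_data, so it is the compressed, encoded size that matters:

// Build a repetitive, suspend-data-like sample string of roughly 4500 characters
var original = new Array(300).join("2.51,0.12,1.1;");

// Deflate to bytes, then base64-encode so the result is a plain string an LMS can store
var bytes = window.pako.deflate(original);
var bin = "";
for (var i = 0; i < bytes.length; i++) { bin += String.fromCharCode(bytes[i]); }
var stored = btoa(bin);

// SCORM 1.2 limits cmi.core.suspend_data to 4096 characters; "stored" is what counts
console.log(original.length, stored.length, stored.length <= 4096);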

  • Uh, first, those were suggestions for you to use to debug, not just to collect the info for someone else to analyze. If you get the exception message from pako.inflate, then I might be able to help decode what it could mean.

    Second, you need to actually include that output. It would be near the beginning of the debug log (just after the call to GetDataChunk). The file you attached as blank_Session_2.html does not start at the beginning, nor does it include the first GetDataChunk call that would contain the key error message.

  • JanVilbrandt
    Community Member

    Hi David,

    First of all: a big thank you for your idea and for sharing it with us.

    My company is using the LMS "LSO" from SAP.

    I ran into an error when using your code. I have included pako v2.0.4; maybe there is a bug in that package.

    Second, there is a problem with your base64-encoded data. Your code does not encode the binary data (the deflated output from pako) but a text string, so the result isn't really compressed.
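
    To illustrate the pitfall with a rough sketch (this is not necessarily what the page-1 code does, just a common way it goes wrong; strSuspendData stands in for the suspend data string): base64-encoding the deflated bytes' default text form stores a list of numbers instead of the binary data, which is usually larger than the original.

    var bytes = window.pako.deflate(strSuspendData);   // Uint8Array of compressed bytes

    // Wrong: toString() turns the bytes into the text "120,156,37,..." which base64
    // then makes even longer, so nothing is actually saved
    var notReallyCompressed = btoa(bytes.toString());

    // Right: build a binary string from the byte values first, then base64-encode that
    var bin = "";
    for (var i = 0; i < bytes.length; i++) { bin += String.fromCharCode(bytes[i]); }
    var reallyCompressed = btoa(bin);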

    This is my solution, based on your idea (follow the instructions on page 1 of this conversation):

    function getDataChunk() {

    .....

        try {
            // Read the compressed, base64-encoded suspend data from the LMS
            var strDataC = objLMS.GetDataChunk();
            WriteToDebug("GetDataChunk strDataCompressed=" + strDataC);

            // Decode the base64 string into a binary string, then into a byte array
            var blob = atob(strDataC);
            var decarray = new Uint8Array(blob.length);
            for (var i = 0; i < blob.length; i++) {
                decarray[i] = blob.charCodeAt(i);
            }

            // Inflate the bytes back into the original suspend data string
            var strData = window.pako.inflate(decarray, {to: 'string'});
            WriteToDebug("GetDataChunk strData=" + strData);
            return strData;
        } catch (err) {
            SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);
            return "";
        }
    }

    function setDataChunk(strData) {

    ...

        try {
            // Deflate the suspend data string into a Uint8Array of compressed bytes
            var compressed = window.pako.deflate(strData);

            // Convert the bytes into a binary string so btoa() can base64-encode them
            var blob = "";
            for (var i = 0; i < compressed.length; i++) {
                blob += String.fromCharCode(compressed[i]);
            }
            var strDataC = btoa(blob);

            WriteToDebug("SetDataChunk strDataCompressed=" + strDataC);
            return objLMS.SetDataChunk(strDataC);
        } catch (err) {
            SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error: " + err);
            return "";
        }
    }
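
    A small refactoring note (a sketch with my own helper names, not part of the scormdriver API): the byte/string conversions in the two functions above can be pulled out into a shared pair of helpers, which keeps both try blocks short.

    function bytesToBase64(bytes) {
        // Build a binary string from the byte values, then base64-encode it
        var bin = "";
        for (var i = 0; i < bytes.length; i++) { bin += String.fromCharCode(bytes[i]); }
        return btoa(bin);
    }

    function base64ToBytes(b64) {
        // Decode the base64 string and turn each character back into a byte value
        var bin = atob(b64);
        var bytes = new Uint8Array(bin.length);
        for (var i = 0; i < bin.length; i++) { bytes[i] = bin.charCodeAt(i); }
        return bytes;
    }

    // With these, the bodies above reduce to roughly:
    //   setDataChunk: return objLMS.SetDataChunk(bytesToBase64(window.pako.deflate(strData)));
    //   getDataChunk: return window.pako.inflate(base64ToBytes(objLMS.GetDataChunk()), {to: 'string'});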

    The result works fine with Articulate Storyline 3 Version 3.15 and should work with Articulate Storyline 360 as well.

    I tested with a training which creates about 7,300 bytes of suspend data; the compressed data is only about 1,500 bytes.

    The compression is great: the result is only about 20% of the uncompressed size.

     

    Some additional notes on the instructions from page 1:

    <script src="lms/API.js" charset="utf-8"></script>

    is now 

     <script src="lms/scormdriver.js" charset="utf-8"></script>

    The name of the file is now index_lms.html (not index_lms_html5.html).
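
    With the pako include from page 1, the head of index_lms.html then looks roughly like this (the pako path is an assumption; keep whatever location the page-1 instructions use):

    <script src="lms/pako.min.js" charset="utf-8"></script>
    <script src="lms/scormdriver.js" charset="utf-8"></script>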

    Thanks again, David, for coming up with that idea.

    Best wishes,

    Jan

  • NickMorrison
    Community Member

    What's crazy to me is that this issue is STILL a thorn in everyone's side after all this time.
    If a free JavaScript compression library is available, why can't Articulate just amend their SCORM 1.2 export tool to include this (or a similar fix) in the first place?

    We all want/need it.

    This isn't something we should have to hack by getting into the guts of the SCORM package just to make our courses work the way we (and our clients) want them to.

    After all, it's not as though course, tracking, and logging demands are getting smaller.