As some of us know, SCORM 1.2 limits suspend data to 4096 characters. Storyline 360 compresses its data (e.g. SL variables and the like) to fit within that limit, so there must be an underlying decompression algorithm, or SL's own reader, for interpreting the suspend data.
My question: when this compressed suspend data is decompressed, is there any possibility of it hitting the 4096 limit?
60 Replies
What is the error you are getting? (It should be visible in the browser developer console.)
Or are you saying it just isn't working? Are you configured for SCORM, or for cmi5 or Tin Can?
We have configured the course with SCORM 1.2.
The course is configured to "Resume" when the user relaunches it.
But when I relaunch the course after viewing some slides, all of the previously recorded suspend data has been lost.
I can't find any error in the developer console.
Please find the attached "Debug_Log" for your reference.
Well, I definitely see an inflate error being thrown in your debug output, but it's hard to say what the error is.
Perhaps modify this line:
SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");
to be:
SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);
That might help to give an indication of why pako.inflate is throwing an error.
Also, I'd recommend opening the debug window during the first session, grabbing a copy of the suspend data (both compressed and uncompressed) at the end of that session, and then comparing it with the registration data on cloud.scorm and with the data logged at the startup/resume of the second session. That will help identify whether the data is getting corrupted between the two sessions.
Seeing what the actual error message pako.inflate is throwing will help to further troubleshoot.
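If the debug window isn't handy, the raw suspend data can also be pulled by hand from the browser console. Below is a sketch (not from this thread) of the classic SCORM 1.2 API discovery walk; the frame layout and the presence of a `window.API` object are assumptions about your particular LMS:

```javascript
// Walk up the frame hierarchy looking for the SCORM 1.2 API object.
// `win` is whatever window-like object we start from; in a real course
// this would be the content frame's `window`.
function findAPI(win) {
  var attempts = 0;
  while (!win.API && win.parent && win.parent !== win && attempts < 10) {
    win = win.parent;
    attempts++;
  }
  return win.API || null;
}

// In the console, something like this would then dump the raw
// (compressed, Base64) suspend data for comparing sessions:
//   var api = findAPI(window);
//   if (api) console.log(api.LMSGetValue("cmi.suspend_data"));
```

Copying that value at the end of session 1 and at the start of session 2 makes the comparison David describes straightforward.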
Thanks for providing more information, David.
Please find attached the debug logs of Sessions 1 & 2, plus the suspend_data comparison between the Session 1 debug log and the cloud.scorm data.
Uh, first, those were suggestions for you to use to debug, not just to collect the info for someone else to analyze. If you get the error exception message from pako.inflate, then I may be able to help decode what it means.
Second, you need to actually include that output. It would be near the beginning of the debug log (just after the call to GetDataChunk). The file you attached as blank_Session_2.html does not start at the beginning, nor does it include the first GetDataChunk call that would contain the key error message.
This post was removed by the author
Hi David,
First of all: a big thank-you for your idea and for sharing it with us.
My company is using the LMS "LSO" from SAP.
I ran into an error when using your code. I had included pako v2.0.4; maybe there is a bug in that package.
Second, there is a problem with your Base64-encoded data: your program code does not compress binary data (the deflated output from pako) but a text string, so the result isn't really compressed.
This is my solution based on your idea (follow the instructions on page 1 of this conversation):
function getDataChunk() {
  .....
  try {
    var strDataC = objLMS.GetDataChunk();
    WriteToDebug("GetDataChunk strDataCompressed=" + strDataC);
    // Decode the Base64 string back into a binary string.
    var blob = atob(strDataC);
    // Convert the binary string into an array of 8-bit integers for pako.
    var decarray = [];
    Object.values(blob).forEach(function (item) { decarray.push(item.charCodeAt(0)); });
    var strData = window.pako.inflate(decarray, { to: 'string' });
    WriteToDebug("GetDataChunk strData=" + strData);
    return strData;
  } catch (err) {
    SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);
    return "";
  }
}
function setDataChunk() {
  ...
  try {
    var strDataC = "";
    // Deflate the raw suspend data; pako v2 returns a typed array of bytes.
    var compressed = window.pako.deflate(strData);
    // Convert the byte array into a binary string so btoa can encode it.
    var blob = "";
    Object.values(compressed).forEach(function (item) { blob += String.fromCharCode(item); });
    strDataC = btoa(blob);
    WriteToDebug("SetDataChunk strDataCompressed=" + strDataC);
    return objLMS.SetDataChunk(strDataC);
  } catch (err) {
    SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error: " + err);
    return "";
  }
}
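As a quick sanity check on the plumbing above, here is a sketch (not from the thread) that isolates the Base64/byte-array conversions so the round trip can be verified without pako or an LMS; the function names are mine:

```javascript
// Mirrors the conversion inside setDataChunk(): byte values -> Base64.
// Each element is an 8-bit integer, so fromCharCode maps it to a single
// Latin-1 character that btoa can safely encode.
function bytesToBase64(bytes) {
  var binary = "";
  bytes.forEach(function (b) { binary += String.fromCharCode(b); });
  return btoa(binary);
}

// Mirrors the conversion inside getDataChunk(): Base64 -> byte values.
function base64ToBytes(b64) {
  var binary = atob(b64);
  var bytes = [];
  for (var i = 0; i < binary.length; i++) {
    bytes.push(binary.charCodeAt(i));
  }
  return bytes;
}
```

`base64ToBytes(bytesToBase64(bytes))` should return the original byte values unchanged; if it doesn't, the corruption is happening in this layer rather than in pako.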
The result works fine with Articulate Storyline 3 (version 3.15) and should work with Articulate Storyline 360 as well.
I tested with a training that creates about 7,300 bytes of suspend data.
The compressed data is only about 1,500 bytes.
The compression is great: the result is only about 20% of the uncompressed size.
Some additional notes on the instructions from page 1:
The line
<script src="lms/API.js" charset="utf-8"></script>
now goes into index_lms.html; the file is no longer named index_lms_html5.html.
Thanks again, David, for coming up with that idea.
Best wishes,
Jan
Well, hrmph! It does look like pako changed their API in v2.0.0:
So, that does mean getDataChunk() and setDataChunk() do need some updates to deal with deflate now returning a Uint8Array and inflate requiring a byte array as input.
Though your suggested changes do work, I have tuned them a bit. Note: I chose String.prototype.split to turn a string into an array because it has the widest and oldest browser support and can still be done in one line. The one real caveat with split (UTF-16 surrogate pairs getting broken apart) is not a problem here, since deflate's output and inflate's input are 8-bit integers, and an 8-bit value never produces a character that would break split.
So, here is my updated pako patch file that I am now using with pako > v2.0.0:
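The patch file itself was posted as an attachment and isn't reproduced here; the following is my reconstruction of what the split-based conversions described above might look like (function names are mine, not from the actual patch):

```javascript
// Base64 string -> array of byte values for pako.inflate (v2+).
// split("") is safe here because the decoded string only contains
// code points 0-255, never surrogate pairs.
function base64ToByteArray(b64) {
  return atob(b64).split("").map(function (c) { return c.charCodeAt(0); });
}

// Typed array from pako.deflate (v2+) -> Base64 string for the LMS.
// Array.prototype.map.call lets us map over a Uint8Array in old browsers.
function byteArrayToBase64(bytes) {
  return btoa(Array.prototype.map.call(bytes, function (b) {
    return String.fromCharCode(b);
  }).join(""));
}
```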
Thanks for the solution for SCORM 1.2 suspend_data.
Is anyone able to post a solution that works with Tin Can / xAPI, please?
What's crazy to me is that this issue is STILL a thorn in everyone's side after all this time.
If a free JavaScript compression library is available, why can't Articulate just amend their programming to include this (or a similar fix) in their SCORM 1.2 export tool in the first place?
We all want and need it.
This isn't something we should have to dig into the guts of the SCORM package to hack, just to make our courses work the way we (and our clients) want them to.
After all, it's not as though course, tracking, and logging demands are getting smaller.