Forum Discussion
Storyline Suspend Data Compression
Good day!
As some of us know, SCORM 1.2 limits its suspend data to 4,096 characters. Storyline (360) compresses its data (e.g. SL variables and such) in order to fit within that limit, so there must be an underlying decompression algorithm, or its own unique reader, in SL to read the suspend data back.
My question is: when this compressed suspend data is decompressed, is there a possibility of it hitting the 4,096 limit?
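A minimal sketch of the general pattern the replies below work out, assuming the pako library is loaded and that lmsApi is a handle to the SCORM 1.2 API; saveState and loadState are made-up names for illustration, not Storyline internals. The point is that the 4,096-character limit applies to the base64 string the LMS stores, while decompression happens in the browser after that string is read back:
function saveState(lmsApi, stateString) {
    var bytes = window.pako.deflate(stateString);        // Uint8Array of compressed bytes
    var binary = '';
    for (var i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]);         // bytes -> binary string for btoa
    }
    var encoded = btoa(binary);                          // binary string -> base64 text
    if (encoded.length > 4096) {                         // the limit applies to this string only
        throw new Error('suspend data would exceed the SCORM 1.2 limit');
    }
    return lmsApi.LMSSetValue('cmi.suspend_data', encoded);
}

function loadState(lmsApi) {
    var encoded = lmsApi.LMSGetValue('cmi.suspend_data');
    var binary = atob(encoded);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    // Decompression happens here, in the browser; the LMS never sees the expanded text.
    return window.pako.inflate(bytes, {to: 'string'});
}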
63 Replies
- JanVilbrandt (Community Member)
Hi David,
first of all: A big thank you for your idea and sharing it with us.
My company is using the LMS "LSO" from SAP.
I ran into an error when using your code. I have included pako v2.0.4; maybe there is a bug in that package.
Second, there is a problem with your base64-encoded data: your program code does not compress the binary data (the zipped data from pako) but a text string, so the result isn't really compressed.
This is my solution based on your idea (follow the instructions on page 1 of this conversation):
function getDataChunk() {
    .....
    try {
        var strDataC = objLMS.GetDataChunk();                  // compressed, base64-encoded chunk from the LMS
        WriteToDebug("GetDataChunk strDataCompressed=" + strDataC);
        var blob = atob(strDataC);                              // base64 -> binary string
        var decarray = [];
        Object.values(blob).forEach(function (item) {           // binary string -> array of byte values
            decarray.push(item.charCodeAt(0));
        });
        var strData = window.pako.inflate(decarray, {to: 'string'});   // decompress back to the original text
        WriteToDebug("GetDataChunk strData=" + strData);
        return strData;
    } catch (err) {
        SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error: " + err);
        return "";
    }
}
function setDataChunk() {
    ...
    try {
        var strDataC = "";
        var compressed = window.pako.deflate(strData);           // strData comes from the elided original code above
        var blob = "";
        Object.values(compressed).forEach(function (item) {      // byte array -> binary string
            blob += String.fromCharCode(item);
        });
        strDataC = btoa(blob);                                    // binary string -> base64 for the LMS
        WriteToDebug("SetDataChunk strDataCompressed=" + strDataC);
        return objLMS.SetDataChunk(strDataC);
    } catch (err) {
        SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error: " + err);
        return "";
    }
}
The result works fine with Articulate Storyline 3 (version 3.15) and should work with Articulate Storyline 360 as well.
I tested with a training course that creates about 7,300 bytes of suspend data.
The compressed data is only about 1,500 bytes.
The compression is great: the result is only about 20% of the uncompressed data.
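As a rough cross-check of these numbers: btoa adds about a third on top of the deflated size (4 output characters for every 3 input bytes), so roughly 1,500 compressed bytes would be stored as about 2,000 characters, still well under 4,096. A sketch for measuring this in the browser console, assuming pako is loaded and using a repetitive placeholder string in place of real suspend data (which compresses less well):
// Placeholder only - real Storyline suspend data is less repetitive than this.
var suspendData = new Array(600).join('2~7A1B0C1D1E');             // ~7,200 characters
var deflated = window.pako.deflate(suspendData);                    // Uint8Array of compressed bytes
var stored = btoa(Array.prototype.map.call(deflated, function (b) {
    return String.fromCharCode(b);
}).join(''));                                                       // what would go into cmi.suspend_data

console.log('uncompressed characters:', suspendData.length);
console.log('deflated bytes         :', deflated.length);
console.log('base64 characters      :', stored.length);             // roughly 4/3 of the deflated size
console.log('within SCORM 1.2 limit :', stored.length <= 4096);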
Some additional notes on the instructions from page 1:
<script src="lms/API.js" charset="utf-8"></script>
is now
<script src="lms/scormdriver.js" charset="utf-8"></script>
The name of the file is now index_lms.html (not index_lms_html5.html).
Thanks again, David, for coming up with that idea.
Best wishes,
Jan
- DavidHansen-b20 (Community Member)
Well, hrmph! It does look like pako changed their API in v2.0.0:
## [2.0.0] - 2020-11-17
### Changed
- Removed binary strings and `Array` support.
So that does mean getDataChunk() and setDataChunk() do need some updates to deal with deflate now returning a Uint8Array and inflate requiring a Uint8Array as input.
Though your suggested changes do work, I have tuned them just a bit. Note: I chose to use String.prototype.split to turn the string into an array because it has the widest and oldest browser support and can still be done in one line. The only real issue with split (it breaks surrogate pairs) is not a problem here: nothing coming out of deflate or going into inflate is an actual multi-code-unit UTF-16 character, since both deal strictly in 8-bit values, and an 8-bit value can never produce a surrogate pair that split would tear apart.
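To make that split caveat concrete, here is a small self-contained check (not part of the patch): split('') is lossless for binary strings whose code units stay at or below 255, which is exactly what atob produces, while a character outside the Basic Multilingual Plane would be torn into two surrogate halves:
// Binary string (all code units <= 255, as produced by atob): split is safe.
var binary = String.fromCharCode(0, 31, 127, 200, 255);
var codes = binary.split('').map(function (c) { return c.charCodeAt(0); });
console.log(codes);                        // [0, 31, 127, 200, 255] - round-trips exactly

// Astral character (outside the BMP): split breaks it into two surrogate code units.
var emoji = '\uD83D\uDE00';                // one visible character, two UTF-16 code units
console.log(emoji.split('').length);       // 2 - this is the case that never occurs here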
So, here is my updated pako patch file that I am now using with pako > v2.0.0:
--- index_lms.html.orig 2020-09-16 11:15:29.634371759 -0700
+++ index_lms.html 2020-09-16 11:15:06.980645108 -0700
@@ -13,5 +13,6 @@
#app { height: 100%; width: 100%; }^M
</style>^M
<script src="lms/scormdriver.js" charset="utf-8"></script>^M
+ <script src="lms/pako.min.js"></script>^M
<script>window.THREE = { };</script>^M
</head>^M
--- lms/scormdriver.js.orig 2021-07-03 14:55:52.000000000 -0700
+++ lms/scormdriver.js 2022-05-02 13:08:13.100525415 -0800
@@ -32257,7 +32257,15 @@
return "";
}
- return objLMS.GetDataChunk();
+ try {
+ var strDataC=objLMS.GetDataChunk();
+ var strData=window.pako.inflate(atob(strDataC).split('').map(function(c){return c.charCodeAt(0)}), {to: 'string'});
+ WriteToDebug("GetDataChunk strData="+strData);
+ return strData;
+ } catch (err) {
+ SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Inflate error");
+ return "";
+ }
}
//public
@@ -32270,7 +32278,14 @@
return false;
}
- return objLMS.SetDataChunk(strData);
+ try {
+ WriteToDebug("SetDataChunk strData="+strData);
+ var strDataC=btoa(window.pako.deflate(strData).reduce(function(s,i){return s+String.fromCharCode(i)},''));
+ return objLMS.SetDataChunk(strDataC);
+ } catch (err) {
+ SetErrorInfo(ERROR_INVALID_RESPONSE, "DataChunk Deflate error");
+ return "";
+ }
}
//public
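A quick way to exercise the patched round trip without an LMS is to stub out objLMS in the browser console; stubLMS, fakeStore and the sample string below are made up for this check, and the two conversions mirror the one-liners in the patch above (new Uint8Array(...) could be substituted for the plain array if pako ever stops accepting it):
// Stand-in for the LMS: stores whatever SetDataChunk writes.
var fakeStore = '';
var stubLMS = {
    SetDataChunk: function (s) { fakeStore = s; return true; },
    GetDataChunk: function () { return fakeStore; }
};

var original = '2~7A1B0C1D1E';   // placeholder suspend data

// Same conversion the patch performs before objLMS.SetDataChunk: deflate -> binary string -> base64.
stubLMS.SetDataChunk(btoa(window.pako.deflate(original).reduce(function (s, i) {
    return s + String.fromCharCode(i);
}, '')));

// Same conversion the patch performs after objLMS.GetDataChunk: base64 -> byte array -> inflate.
var roundTrip = window.pako.inflate(
    atob(stubLMS.GetDataChunk()).split('').map(function (c) { return c.charCodeAt(0); }),
    {to: 'string'}
);

console.log('stored length:', fakeStore.length);    // what the LMS would actually keep
console.log('round trip ok:', roundTrip === original);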
- JeremyTrott-098 (Community Member)
Is anyone able to post a solution that works with Tin Can/xAPI, please?
- NickMorrison (Community Member)
What's crazy to me is that this issue is STILL a thorn in everyone's side after all this time.
If a compression fix via freely available JavaScript exists, why can't Articulate just amend their SCORM 1.2 export tool to include this (or a similar fix) in the first place? We all want/need it.
This isn't something we should have to get into the guts of the SCORM package to hack just to make our courses work the way we (and our clients) want them to.
After all, it's not as though courses, tracking and logging demands are getting smaller.
- BugnaitBugnait (Community Member)
Hi David, I want to use the pako.min.js approach and I have followed all the instructions as you suggested, but when I test the Articulate Storyline 360 SCORM 1.2 package on SCORM Cloud it does not work and my module gets stuck on starting. If possible, please share an example Storyline package with the pako.min.js implementation.
- JanVilbrandt (Community Member)
The big question is: have you read all the posts in this discussion?
If not: there has been a major update of pako.js.
The old code from the first posts isn't working anymore because of major changes in pako.js.
++++
I have been using my code for over a year now and it works fine. You will find it in the later posts.
- DavidHansen-b20 (Community Member)
Yep, I concur.
This is the most relevant entry: Pako v2 update