Forum Discussion
SCORM 1.2 Suspend Data--Seems Wonky to My Pea Brain
For the last couple of weeks I've been working on a SCORM 1.2 resume issue for a Storyline 2 course that I'm trying to finish up for a client. I posted on this previously, but that post appears not to be available now in the new forums. So here's an update . . .
First of all, I have to say the Articulate front-line support person that I've been working with has been very helpful and very responsive. Couldn't ask for a better experience there.
Anyway, back to the problem. Basically, I have a SCORM 1.2 course where you launch it from the LMS, fail the final quiz, and then exit from the quiz results page. When you re-launch the course later to retry the final quiz and bookmarking kicks in, you are not returned to the quiz results page but to a question somewhere in the middle of the quiz that needs answering.
So bookmarking appears not to work right.
After that happened I filed a case and, working with Articulate support, we confirmed it was an issue with the amount of resume (or suspend) data. It was exceeding the SCORM 1.2 limit of 4096 characters, as nicely described here in the KB.
So, when re-launching the course, it turns out I was being returned to the point just before the SCORM 1.2 4096-character limit had been reached in the previous session, which makes sense.
So, working from a couple of my own ideas and more suggestions from my fellow Super Heroes (thank you, Steve F. and Phil M.), I tried to reduce the suspend data in the course by trying some of these things:
- Publish to SCORM 2004 3rd or 4th Edition or Tin Can (Experience API) if your LMS supports it. (Not an option here--the LMS only supports AICC and SCORM 1.2, and SCORM 2004, for a variety of reasons, is not an option.)
- Reduce the number of slides in a course.
- Reduce the number of layers in a course.
- Consider using lightbox slides for layers that repeat over many slides or come from masters.
- Set as many slides as you can to "reset to initial state."
- Reduce the number of questions in a quiz or cut any freeform interactions that you can.
- Use freeform questions instead of text-based questions, as less data may be sent.
- Reduce the use of longform variable strings.
The course is fairly short, just 50-55 slides. Most slides have a pause layer and a transcription layer for displaying narration text. Three of the slides are separate multiple-response quiz questions at the end of the topics, and 10 slides are simple multiple-select questions for the final quiz.
The course also uses a mandated Storyline template that the client had developed by another firm. I had to use that template with few, if any, variations.
The master had four layers on the layouts, which I was able to cut down to just two by moving two of the layout layers over to two lightbox slides. The other two layers I could not reduce due to the way lightbox slides currently work in Storyline (they needed more customization than is currently permitted).
To verify the suspend data overage, Articulate Support had me turn on LMS debugging in the course and send them the log files. After providing that for them, I decided to watch the log files myself and record the amount of suspend data after each move to a new slide.
If you've not set up an LMS debug file for a SCORM course, this is how you do it.
Actually, it's kind of fun to watch the debug file generate and see how it builds up with each slide in a course. Neat tool.
And this is the line you want to look for in the LMS debug file to see the suspend data.
This command will appear many times as you deal with a slide (so I was always looking at the latest occurrence of the string in the debug file for my character counts).
For that line, delete "strValue=" and everything before it, copy the remaining data (for that line only) into something like Notepad++, and then select all the text to get a character count for the resume data.
And when you exceed the SCORM 1.2 data limit, you'll see a line like this with "intSCORMError=405" in the LMS debug file.
Now my method for recording the suspend data was not exactly perfect as recording the suspend data after moving to a new slide also includes data for visiting that new slide. But I was mainly looking for general trends in going from slide to slide.
Going through simple, click-to-read slides with simple narration seemed to add only 7-20 characters per slide (and usually just 7 characters) when moving to a new slide. Seems like a reasonable number to moi.
However, when looking at the "check your knowledge" questions and the final quiz questions, what I found initially was jaw-dropping. I was seeing well over 400 characters generated for each. So, just with my 10-question final quiz, I was going over the 4096-character limit for SCORM 1.2 suspend data.
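A rough back-of-envelope sketch using the numbers I observed (7-20 characters per ordinary slide, 400+ per question, 3 knowledge checks plus the 10-question final quiz) shows how quickly the budget disappears:

```python
# Back-of-envelope check against the SCORM 1.2 suspend_data limit,
# using the per-slide and per-question figures observed above.
LIMIT = 4096          # SCORM 1.2 cmi.suspend_data maximum characters

slides = 52           # ordinary slides (the course is "50-55 slides")
per_slide = 20        # worst case observed for a simple slide
questions = 13        # 3 knowledge checks + 10 final-quiz questions
per_question = 400    # initially observed per quiz question

total = slides * per_slide + questions * per_question
# 52*20 + 13*400 = 1040 + 5200 = 6240 -> well over the 4096 limit
print(total, ">" if total > LIMIT else "<=", LIMIT)
```

Even before any layer or variable data, the quiz questions alone (13 x 400 = 5200) blow past the limit.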
After trying some of the things I listed above to reduce resume data, I was able to get the quiz questions down to just over 200 characters per question.
But still too much data . . .
This past weekend, I re-created the quiz from scratch using the default plain template in Storyline and nothing else. Quiz questions now seemed to add just over 100 characters of resume data--pretty consistently 114-115 characters, regardless of the amount of text in the question answers.
Much better but still too much data to my (limited :) ) way of thinking. And I'm left to ask what in the real course itself is causing that number to be even higher?
If there is indeed something in the template causing things to balloon up, it'd be nice if we as developers could know that to prevent that in the future.
Now here's the kicker. I took an earlier version of the course that was done in Storyline 1 Update 5, published it out to SCORM 1.2, and set it up in the LMS. I went through the course, touching each slide and quiz question, and failed the final quiz. I then exited the course from the final quiz results page and looked at the last suspend data string sent to the LMS.
Then I opened the same course up in Storyline 2 Update 1, published it out to SCORM 1.2, and set up a new course in the LMS. I then went through the course exactly as I had before, answering every quiz question the same. Again, I exited from the quiz results page and looked at the last suspend data string sent to the LMS.
I then compared character counts from both trials: the Storyline 2 Update 1 version had produced 32% MORE suspend data.
Now I'm not a programmer and my old pea brain may not understand all of what is going on, but I'm left with at least a couple of possibilities . . .
1) Storyline's suspend data generation MIGHT POSSIBLY BE inefficient and could benefit from some tightening. Again, I'm not a programmer, but I scratch my head when it takes over 100 characters (in a plain-vanilla version of the course), and over 200 or 400 characters each in the designed version of the course with the mandated template, to record my answers to a simple multiple-choice quiz question. Something doesn't seem right to me there . . .
2) Something in the course is causing the resume data to balloon to over 200 or 400 characters when the plain-vanilla version only needs just over 100. But what is that? I'm using Storyline as it is meant to be used, and the supplied template isn't doing anything special or highly customized. So why the difference in suspend character counts? Is Storyline 2 keeping track of a lot of new things as we proceed through a course?
One more kicker--publishing to AICC produced nearly the same amount of resume data as its SCORM 1.2 counterpart (which is good). However, the AICC course didn't have the bookmarking problem at re-launch.
AICC has the same suspend data character limit of 4096 characters. But it appears that this LMS was not enforcing the AICC character limit the way it was with SCORM 1.2. An unexpected, but very lucky, result--and a temporary solution.
Unfortunately, using AICC is just an interim solution at best. The client the course is for wants SCORM 1.2, as they will be migrating to a new LMS soon and it may not support AICC. And the new LMS does not support Tin Can yet, and they're currently having trouble getting SCORM 2004 3rd or 4th edition to work.
Sorry for the long post but I thought I'd share my recent experience in case it might help someone else in some way. Starting to feel like I'm engaged in one of the labors of Hercules . . . ;)
Hoping I hear something more definitive from Articulate support soon. The QA team is looking at it.
With fingers crossed . . .
- Gerry Wasiluk (Community Member)
Hi, Justin! :) Thanks for the detailed reply. Much appreciated. Also, please pass on my "thanks" to Engineering for considering my issue and taking the time doing so.
You've always been a great and knowledgeable professional to work with. And so is Steve. I really respect his expertise and knowledge, much as I do yours. So when Steve makes a suggestion like he does above, I give it a lot of weight.
I can't speak to his suggestion as he's far more knowledgeable on these technical things than I am, but it seems to have a lot of merit to me. So does Steve's suggestion have weight and possibility?
Also, Steve doesn't overstate the severity of this issue or the sheer hell it can put us folks in. Support requests for things like this can drain an organization and infuriate learners, learners' management, and course sponsors.
Or when you discover an issue like this at the 11th hour before releasing a course, it's a terrible and very stressful place to be as a developer: you have to scurry to find a solution, and you need more detailed help.
I know that publishing to SCORM 2004 and Tin Can are probably the best solutions. No argument there. However, some folks can't use them yet--either their LMS does not support Tin Can or SCORM 2004 natively (like Moodle), or their LMS may not support SCORM 2004 "right" (like Saba) or Tin Can is not available yet for their LMS.
That said, I hope you folks, at the very least, can help us out more here. The KB article that you folks have here is a great start: http://www.articulate.com/support/storyline-2/exceeding-scorm-suspend-data-limits-sl2
But, IMVHO, we need more.
I'm not interested in how the proprietary algorithm works and even what all is stored. But, I'd like to know, as a developer, ALL the things that I could do in Storyline to try and reduce resume data.
Do too many layers on slides impact things? Layers on masters? Buttons or objects with many states? Does something like taking the answers in multiple-choice or multiple-select questions or interactions and making them just A, B, C, D, etc., adding separate text boxes for the rest of the answer text, and not randomizing answers reduce resume data? Etc., etc.
You folks obviously know how resume and the algorithm works better than we do, so I'd like to strongly urge that you folks consider augmenting that KB article and provide detailed and comprehensive suggestions on how to possibly reduce resume data.
You could probably do that relatively quickly, and it wouldn't involve possible software changes. How to design courses efficiently for AICC or SCORM 1.2 would be a fantastic resource for developers to have in hand before starting a new course or in troubleshooting an existing one.
My initial post above lists some things to try. But I don't know if they are all valid or not. Or maybe if there are more things we can try.
We need you folks to do that. Would that be a possibility? :)
One could make a cogent argument that, as you folks have increased the amount of data being sent with Storyline 2, you may have a responsibility to help us out more here. At the very least, again, document in detail all the things that we as developers can do to potentially reduce resume data when SCORM 1.2 (or AICC) must be used for a course and the LMS strictly enforces data limits per the respective specs.
Also, having things in more detail from you folks would help us when discussing things with clients. Telling a client that I have to change or remove or do something a certain way because "I think" it may increase resume data and cause bookmarking/completion issues just does not carry as much weight as having something in black and white from you folks that I can refer to and show the client.
For the course I was working on, I saw the additional resume data for answering a quiz question or interaction vary wildly--from well over 400 characters per slide initially to over 200 characters per slide as I changed some things to over 100 characters when I created a new, vanilla version of the quiz.
That says to me it's clear we as developers can do things that influence the amount of resume data. We need you folks to help us out here. (And maybe strongly consider Steve's suggestion.)
Sorry for the long post. But after living with this issue for a long time, I'm a little bit passionate on this . . .
- Steve Flowers (Community Member)
Justin -
We've reported this before. Part of the issue is the order of storage. With the SCORM 1.2 suspend data cap, we've seen longer modules lose progress on restore. This might not seem like a big deal, but it's fairly serious when thousands of support requests come in for the same issue, or if you're the person that experiences the issue in a long slog. A ton of wasted time that, in my opinion, is completely unnecessary.
Seems like a relatively simple fix to me. Prioritize certain elements to the front of the string in the "proprietary" algorithm.
Disappointed that this issue doesn't seem to matter...
- Robert Edgar (Community Member)
Thanks for the update, Gerry.
We're using PeopleSoft's ELM 9.1, and it does enforce the 4096-character limit. So that option isn't open. It does support SCORM 2004 v3, but I haven't tested how robust it is yet, so for the short term I'm looking to manage the resume data string.
It seems like a listing of stock Storyline interactions ranked by size of their additions to the resume data string would be most helpful. If this varies with some aspect of its use (size or number of movable graphics etc.) then an added note that calls an author's attention to that fact would be useful. It's like a calorie count on the menu of a fast-food restaurant. Let us choose our own poison based on some known data.
- Steve Flowers (Community Member)
Agree. We've seen a lot of issues with resume data in SCORM 1.2. One moderately sized module ended up dropping the back half of the slide visits. The inflation of the suspend data string seems inconsistent, as you illustrate above. Optimizing these would help a little bit. It's a mystery to me how some of the suspend strings apply so many characters to stuff like quiz interactions. Maybe it's holding all of the interaction data? If so, it's not needed in suspend.
Would love to see suspend data stored in an ordered priority. Seems to me scores, completion status, and slide visits would take priority, followed by variables, slide and layer states. In that order.
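To make the idea concrete, here's a toy sketch of priority-ordered suspend data. The field names and the "key=value;" format are entirely hypothetical (Storyline's real algorithm is proprietary and compressed); the point is only that, when an LMS truncates the string at the limit, ordering determines which data survives:

```python
# Toy sketch: serialize suspend data in priority order so that, if an
# LMS truncates the string at its limit, the high-priority fields at
# the front survive. Field names and format are hypothetical.
PRIORITY = ["score", "completion", "visits", "variables", "states"]

def serialize(data, limit=4096):
    parts = [f"{key}={data[key]}" for key in PRIORITY if key in data]
    return ";".join(parts)[:limit]  # low-priority data is what gets cut

def restore(suspend):
    """Rebuild whatever fields survived; tolerates a truncated tail."""
    fields = {}
    for part in suspend.split(";"):
        key, sep, value = part.partition("=")
        if sep:
            fields[key] = value
    return fields
```

With this ordering, a truncated string still restores the score and completion status; only the tail-end slide/layer states are lost (the last field may come back with a partially truncated value).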
- Justin (Staff)
Good Afternoon, Gerry.
Engineering has weighed in on your question, and I'd like to summarize what I know for you.
- There is a lot more to our suspend_data than just whether or not the learner got the question right and what they answered. We store the location on the timeline, the navigation history (to support the Previous button), the current state of objects, variable values, interaction results, and much more.
- There are a lot of factors that go into what we store in our suspend_data, based on the features a particular object uses. Our algorithm is fairly long and complex, so it would be difficult for us to document the specifics. Also, it's proprietary. :)
- We don't see anything to indicate that we have a defect here. Everything we see tells us that we are being as efficient as we can be with the storage of suspend_data.
- It is expected that the suspend_data for Storyline 2 is larger than the suspend_data for Storyline 1. Due to added features, Storyline 2 requires additional suspend_data.
Please let us know if you need anything else, and have a great day!
- Stephen Cone (Community Member)
Hi Justin,
I recently stumbled across this post when I was researching an unrelated issue. Unfortunately, I have not had the opportunity to experience Storyline 2, so I will limit my comments to my experience with Storyline 1.
I understand that the suspend data Storyline is sending back to an LMS stores much more than the standard interaction details for a given course. I also understand that Articulate has made the calculated decision to encode and compress the suspend data so you can fit more data within the character limit.
I would argue that you're not being efficient at all in terms of the data you include in the suspend data string because you're relying too much on the compression. One example that comes to mind is that the Interaction IDs that Storyline generates are overly verbose and contain redundant text. I've typed in an example of an Interaction ID from one of my courses below:
Scene2_QuestionDraw91_Slide1_MatchingDropDown_0_0
That's approximately 46 characters devoted to a single ID. Seventeen characters of that string could have easily been removed by eliminating the question type from the string, especially since you already use SCORM to send the question type to the LMS as a separate value. The string could have been truncated further by using initials or eliminating the descriptive text (e.g. 2_QD91_1_0_0).
In comparison, Articulate Studio '09 just used a simple "QuestionN_N" approach for the Interaction ID.
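As an illustration of how much could be trimmed, here's a sketch that abbreviates an ID along the lines Stephen suggests. The token table is hypothetical, just enough to reproduce his example; a real scheme would cover all of Storyline's question-type names:

```python
import re

# Hypothetical abbreviation table: known tokens are shortened to
# initials or dropped entirely (the question type is already sent to
# the LMS separately, so it is redundant in the ID).
ABBREV = {"Scene": "", "QuestionDraw": "QD", "Slide": "", "MatchingDropDown": ""}

def shorten(interaction_id):
    """Abbreviate a verbose Storyline-style interaction ID."""
    out = []
    for part in interaction_id.split("_"):
        m = re.match(r"([A-Za-z]+)(\d*)$", part)
        if not m:
            out.append(part)  # purely numeric token: keep as-is
            continue
        word, num = m.groups()
        token = ABBREV.get(word, word) + num
        if token:          # drop tokens that abbreviate to nothing
            out.append(token)
    return "_".join(out)

# shorten("Scene2_QuestionDraw91_Slide1_MatchingDropDown_0_0")
# -> "2_QD91_1_0_0" (12 characters instead of 49)
```

Even this crude mapping cuts the example ID to a quarter of its length before any compression is applied.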
It may seem silly to be concerned with how many characters are in a string, but ultimately they all add up, usually to the dismay of an Instructional Designer or client who has to change how a course functions because the suspend data exceeds 4096 characters.
And Gerry? We have our developers ensure that all the review questions are set to reset to initial state. That seems to reduce the size of the suspend data. I suspect, based on Justin's comments, that this may apply to non-quiz slides as well.
Thank you
- Gerry Wasiluk (Community Member)
Thanks, again, Justin, for taking this so seriously, and for your thoughtful response. :) Yes, for the near term, I'm more concerned with #1 as that, if done, could be implemented quickly without software changes, which, if they were to happen, would probably take a while.
For the long term, #2 sounds appealing as an avenue of research, along with Steve's conundrum of how question data is stored.
Again, appreciate your response and Articulate considering this. THANK YOU! I know supporting the "past" (AICC or SCORM 1.2) is often not as exciting as the "now and future technologies" (like Tin Can) but, unfortunately, some of us are forced to live in the past due to client requirements.
- Gerry Wasiluk (Community Member)
An update . . .
First, the list of "several ways" to "reduce the accumulating length of the resume data string" in my post was really the product of some of us Heroes, particularly Phil Mayor, Steve Flowers, and myself. It wasn't just me. Have to give credit where credit is due. :)
Steve and I got a nice note from Articulate concerning the issue. We gave them some ideas and thoughts. Have not heard much since.
In my case, the issue was affecting two projects for two different clients. One client was using Saba and the other Totara (aka Moodle 2.6). The solution for both was to publish to AICC. AICC is supposed to have the same character limit as SCORM 1.2. However, for both LMS's, this limit was not enforced.
New version(s?) of Moodle (2.7, if I remember right) have a config switch to ignore the character limit for SCORM 1.2, so for one of my clients that might be their workaround for now. Other LMS's may possibly have this option also.
I still believe that, for those LMS's that have the strict character limit for SCORM 1.2, some further help or documentation from Articulate would be nice.
RE: "Are freeform interactions and questions something we should try to avoid, or to use? I'd think the former, but I may be missing a distinction here. "
I'd say the answer is "it depends." So much, at least to my pea brain, is dependent on what else you are doing in the course (e.g., how many other interactions, what kind, how many quiz questions, layers on master slides, etc.).
- Gerry Wasiluk (Community Member)
Well, hopefully, you can test SCORM 2004 v3 and it can work well, especially as its character limit is 64,000.
For one of my clients, the one using Saba, even though that LMS was supposed to support SCORM 2004 v3, the way Saba implemented SCORM 2004 was so #@#@!#@!#@!#@!# that it was not usable. Got some great help from Articulate, SCORM.COM, and Saba itself but working together we couldn't work around the issue.
Also, back in the days when Articulate 09 first came out we ran into issues when the Saba LMS divided the resume data into multiple database records because the field for the resume data could only hold so many characters.
Every time the suspend data required more than one data record, a Presenter 09 course would act wonky on re-launch for a learner. Among other things, bookmarking never worked.
We never found out what the problem was, but we strongly suspected the LMS was doing something with the resume data during the initial LMS save, or returning it wrong to the Articulate content when it reassembled it into one string on content re-launch. Most often, the quick fix there was to turn off resume for any embedded Engage interactions or Quizmaker quizzes/questions so the resume data could get down to taking only one LMS data record.
Fun problems to debug! :)
- Gerry Wasiluk (Community Member)
Thanks, Steve!
One thing I forgot to mention in my ponderous tome of a first post.
I met yesterday with a new client for a new project. Like the other client above, they're a major company here in the Twin Cities, and both are in the Forbes Top 150. Both companies use Articulate as their e-learning product of choice. Between the two companies, quite a few copies of Articulate software have been purchased.
For this new project, I've been engaged to help this new client with issues they are having with their LMS (Totara, aka Moodle with window dressing :) ) for external learners and to document best practices for learners. And Totara, I believe (I still need to get up to speed with it), only supports AICC and SCORM 1.2.
One of the issues they are experiencing may just be the same one I described above. And there's a possibility, if we can't get all the issues cleared up, that they might give up on Storyline and go back to Captivate.
Not making this up . . .
- Justin (Staff)
Hello again, Gerry and Steve.
Thanks for being so persistent, guys. We appreciate you keeping us in check, and I apologize if I missed the crux of this thread. I may have had tunnel vision on the question, "Does Storyline's suspend_data really need to be so large, or do we have a potential bug here?" It seems that there is more to your concerns than that, and here is what I am hearing:
- You'd like to have better documentation on precisely which slide objects and properties impact the size of our suspend_data. For example, is it worthwhile to reduce the number of slide layers, or is it not?
- You'd like to see us restructure our suspend_data to prioritize the most important data elements at the front of the string. For example, we might decide that scores are more important than interaction results, so scores should therefore go first.
I'm happy to try and advocate for these requests, but I think I need Steve to clarify one thing first... Given that we compress our suspend_data, is it really helpful to prioritize the data within? Here's my train of thought, which may or may not be accurate:
- If I want to email a large text file to a friend, I might decide to zip it first.
- Let's say that the zip file is truncated in transit for some reason.
- Upon receipt, I wouldn't expect my friend to be able to read the portion of the file that didn't get truncated. I'd expect my friend to not be able to unzip the text file at all.
Is this an accurate analogy, or is there something unique about the way suspend_data is compressed that would allow the intact portion of a truncated, compressed string to be readable?
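For what it's worth, the analogy can be tested with a stock stream compressor. This says nothing about Storyline's actual proprietary scheme, but it does show that a truncated compressed string isn't automatically unreadable: with zlib (DEFLATE), a streaming decompressor can often recover the readable prefix of a truncated stream.

```python
import zlib

# Experiment: compress some stand-in suspend-style data, cut the
# compressed stream in half (simulating an LMS truncating the string),
# and see what a streaming decompressor can still recover.
original = ("slide_visits:" + ",".join(str(n) for n in range(500))).encode()
compressed = zlib.compress(original)
truncated = compressed[: len(compressed) // 2]

d = zlib.decompressobj()
partial = d.decompress(truncated)  # decodes what it can; no exception

# The recovered output is an exact prefix of the original data.
assert original.startswith(partial)
print(f"recovered {len(partial)} of {len(original)} bytes from half the stream")
```

So with this particular scheme, front-loading important fields would pay off even under truncation; whether Storyline's compression behaves the same way, only Engineering can say.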
Thanks again, guys!