Capture Tin Can API statements generated by Storyline?

I’d like to use Storyline for some e-learning courses. I am not a content designer per se; I’m a software engineer who needs to figure out how to capture the Tin Can API statements generated by Storyline as I go through a course. The first thing I need to do is just capture the statements; then I’ll create a little standalone program to write them to the console. I have a little four-slide project I developed, but I also see there is the Solar System project, and any number of others I can test with. I’ve published the little project for LMS, selecting Tin Can. The output is great, and I see the tincan.xml metadata file.

But I haven’t seen anything about how to capture statements at runtime – I’ve just seen that it is possible.  I've heard that the only option Storyline supports for this is to interface with an LMS.   The backend (LMS) is irrelevant for now, and I don't really have access to one.   Since I don't have access, is there a way I can just create some kind of a small database that would "fake an LMS"?  Or is there some "toy LMS" I can use for this testing?  Once I can see these statements, then I can apply them to a real LMS when the time comes.

So basically, my question is:

What are the steps I need to do to run a story published for Tin Can, and capture the Tin Can API statements generated by Storyline as I progress through the course?

I’m just running on a local machine for now, as this is all test stuff.

Thanks a lot

18 Replies
Megan Bowe

Hi Frank! 

To get the Tin Can statements from a Storyline course, the course needs a connection to an LRS. When there is no connection, statements are queued to send once a connection is made. The LRS receives the statements and gives you access to them; you don't need an LMS to do this. If you just need something for testing, sign up for a free SCORM Cloud account. You can add the courses and see them write statements to the LRS in SCORM Cloud.

Megan

fbs 419

Hi Megan -- that worked great. I set up the proper tracking in Storyline, added my content to SCORM Cloud, launched a course, and viewed the statements (with either the Tin Can Statement Viewer link on the Apps page or the LRS Viewer on the LRS page). In order to see them, I always have to switch the viewer to View Statements In SCORM Cloud (sandbox), rather than SCORM Cloud, or I can't see them. But that's OK -- as long as I can see them. What I really need to do is capture these statements programmatically as they are generated, either using an API that I write, or some existing one. Do you know if there is a way to do that? For now, I would like to write a small standalone program (with Visual Studio or some tool like that) to capture each statement as it is generated, and write it to the console. Hopefully, this is possible? -- Thanks

fbs 419

Hi Megan.  If you're running an Articulate course (that was published for TinCan)  in a browser on a desktop, you said the statements will be queued until a connection to an LRS is made.  As you go through the course, are these statements stored somewhere in some form on the PC?  For mobile devices, from http://tincanapi.com/a-mobile-devices-story/, I've read:

"All Tin Can needs is an occasional internet connection. Tin Can statements are stored on the mobile device as activities are experienced. When there is a network connection, the collected statements are sent to an LRS (or several LRSs)."

I see that HTTP PUT statements are sent to the endpoint when a connection is made.  Where is the information stored on the PC so the software can know what information to use for the PUT request for a given statement?

Thanks

Megan Bowe

Hi Frank, 

Sorry, I didn't subscribe. Thanks for nudging me! 

Running the course in a browser on a PC, there needs to be something around the course capturing the statements and queueing them for when an internet connection is restored. Often the LMS or mobile application handles this (you previously used SCORM Cloud for this purpose). The course, by itself, is not capable of telling the PC where to store the statements or what to do with them when the connection is restored. The specification for the API doesn't say exactly how offline content should work, because there are many ways to deal with it, so you're free to build as you please for queueing and syncing offline statements, as long as your solution properly uses the statement API when the connection is restored.

Your plan to build a lightweight program to capture and communicate the statements is right on. Here is a list of open source software that may help you get further down that road: http://tincanapi.com/2013/07/11/the-open-source-landscape/
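For the "write each statement to the console" piece, here's a minimal sketch (in Python, since the thread hasn't settled on a stack; the port and path are made up) of a toy endpoint that just prints whatever statement the course POSTs or PUTs to it. A real LRS must also implement GET /statements, version headers, and so on -- this is only the capture-and-log part:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatementHandler(BaseHTTPRequestHandler):
    """Toy 'LRS' endpoint: logs any incoming xAPI/Tin Can statement."""

    def _capture(self):
        length = int(self.headers.get("Content-Length", 0))
        statement = json.loads(self.rfile.read(length))
        print(json.dumps(statement, indent=2))  # write the statement to the console
        self.send_response(204)                 # 204 No Content, as for a statement PUT
        self.end_headers()

    do_POST = _capture
    do_PUT = _capture

if __name__ == "__main__":
    # Point the course's ?endpoint= parameter at http://localhost:8031/
    HTTPServer(("localhost", 8031), StatementHandler).serve_forever()
```

You would then launch the published story with its endpoint query parameter pointing at this server, and watch the statements appear in the console.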

Cheers,

Megan

fbs 419

Hi Megan. Yes -- I've seen some of the open source stuff. Currently, I use SCORM Cloud just as the LRS, so it does the capturing for me. I host the courses myself. I have my own program to retrieve the statements, parse the JSON, and then process them in whatever way I want. I also know how to use the statement API to generate statements -- that's not the problem. The issue for me is the "capturing from the course" part. I'm doing this in C#, and I still haven't seen how to do it from the PHP or Python examples that are out there (of course, I don't know those languages either). I basically want to write some kind of web service that will do the capturing -- as part of our own LRS.

You said that "the course, by itself is not capable of telling the PC where to store the statements and what to do with them."  I can deal with that -- but the one piece that is missing for me is how to know what to store.  If someone answers the question:

"What is 6 + 7"   and gives the answer 13, 

how does SCORM Cloud (or whatever the "something around the course capturing the statements")  know that the user answered that particular question, and that they gave the answer 13? There has to be some communication from the course about that, right? Somewhere, that info needs to be available, and I don't know how to get it.  Once I have that, I can deal with everything else.

Thanks again

Megan Bowe

Hi Frank, 

In terms of the content knowing that a statement should be made, I think you may be looking for this: http://www.articulate.com/tincanapi/ Each authoring tool has decided to implement the places that make statements differently, but so far they generally match pretty well what would have been a data point in SCORM. That link describes exactly how Storyline will make statements, from what places, and what to expect within the statements.
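To make that concrete: an "answered" statement for the "6 + 7" example would look roughly like the following (sketched here as a Python dict; the activity id, interaction type, and exact fields are illustrative only -- the Articulate link describes what Storyline actually emits):

```python
import json

# Illustrative xAPI "answered" statement -- the activity id and description
# are made-up values, not what Storyline would generate.
statement = {
    "actor": {"name": "Joe Blow", "mbox": "mailto:joe@blow.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "http://example.com/course/question-1",  # hypothetical activity id
        "definition": {
            "type": "http://adlnet.gov/expapi/activities/cmi.interaction",
            "interactionType": "numeric",
            "description": {"en-US": "What is 6 + 7"},
        },
    },
    "result": {"response": "13", "success": True},
}
print(json.dumps(statement, indent=2))
```

So the "who answered what, and with which response" that you asked about travels inside the statement itself -- the surrounding LMS/LRS only has to receive and store it.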

For the mechanism that is programmed into a course to tell it how to handle the statements it's making, you might be looking for this document, which describes how courses packaged for Tin Can should be handled in launch and rollup: https://github.com/RusticiSoftware/launch/blob/master/lms_lrs.md (this is an interim specification; when cmi5 is complete, that spec will supersede this document on packaging and launch)

Megan

Trip Levine

Hey Frank,

I've been puzzling over this for months as well. I'm trying to write something that will capture Articulate's information. It seems that you can launch the Storyline course with a URL that has a query parameter at the end specifying your endpoint.

So if the link to access the Storyline course is story.html, then you would use story.html?endpoint=http://www.mywebsite/statements/

But unfortunately, this is as far as I've gotten, because Articulate delivers malformed JSON to this endpoint. So right now, I'm trying to figure out either how Articulate produces these malformed JSON statements and change them, or find some strange way to usurp the calls and hack them into correctly formed ones.. ha..
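For what it's worth, one thing that can read as "malformed" is the older Tin Can 0.9-style actor format, where name and mbox are arrays rather than plain strings (the launch-URL actor shown elsewhere in this thread uses that form). If that turns out to be the cause, a small normalizer on your endpoint could rewrite statements on the way through -- a hypothetical sketch:

```python
def normalize_actor(actor):
    """Convert a 0.9-style actor (array-valued name/mbox) to 1.0-style strings.

    This is a guess at the mismatch, not a documented Articulate fix.
    """
    fixed = dict(actor)
    for key in ("name", "mbox"):
        value = fixed.get(key)
        if isinstance(value, list) and value:  # 0.9 style: take the first element
            fixed[key] = value[0]
    return fixed

print(normalize_actor({"name": ["Joe Blow"], "mbox": ["mailto:joe@blow.com"]}))
```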

In any case, Articulate doesn't really seem to be set up for this kind of situation. I think the whole staff there is 100% SCORM, with use by other services just an afterthought.

The biggest question I have at this point is: how does SCORM Cloud read this malformed JSON? I can upload the same tests to SCORM Cloud with no issues whatsoever.

Deeply vexed,

Trip

yuna B

hi all

I am nowhere near as sophisticated with my Tin Can experience as the rest of the posters here, but I will submit a question. I have published a Storyline project for Tin Can and placed it in my Tin Can-compatible LMS (not LRS). When I look at the SCORM report, it appears that the assigned names of the interactions and their responses have been replaced with much longer strings of characters, so I don't know which response was selected.

What do I do? 

thanks

Yuna

fbs 419

To answer Trip's post:

I wrote WCF services to GET and POST TinCan statements, so I had an endpoint like this to run an Articulate story (names changed to protect the innocent):

http://localhost:81/&lt;directory&gt;/story.html?endpoint=http://localhost/myservice.svc//&auth=Basic cXFxcXFxOnJycnJycg==&actor={ "name" : ["Joe Blow"], "mbox" : ["mailto:joe@blow.com"] }

Then the service that I called would POST statements. To get the statements, my URL was:

"http://localhost/myservice.svc//statements"

I first set up apps in SCORM Cloud and used Fiddler to see what was going over the wire. Then I made WCF service contracts that represented the TinCan statements.
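As an aside, the query-string values (especially the actor JSON) generally need to be URL-encoded, or servers may mangle them. A sketch of building the launch URL in Python, with placeholder endpoint, auth, and actor values like the example above:

```python
import json
from urllib.parse import urlencode

# Placeholder values -- substitute your own endpoint, credentials, and actor.
params = {
    "endpoint": "http://localhost/myservice.svc/",
    "auth": "Basic cXFxcXFxOnJycnJycg==",
    "actor": json.dumps({"name": ["Joe Blow"], "mbox": ["mailto:joe@blow.com"]}),
}

# urlencode escapes the quotes, braces, and spaces in the actor JSON.
launch_url = "http://localhost:81/story.html?" + urlencode(params)
print(launch_url)
```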

Andrew Downes

Hi Hamdy, 

In order for Storyline's tracking to work, the course will need to be launched from somewhere, as it expects LRS and learner details to be passed in the query string.

Can you give me some more background on how you want learners to come at the course? Are they launching from an email, an LMS or what? Does the thing that's launching the course know who the learner is? 

Andrew