Tin Can API in practice...

Sep 06, 2016

Hi all,

I've done a lot of reading and research into the Tin Can API recently, and understand it in theory/conceptually, but how are people actually using this in practice within Storyline?

I thought the whole point was that it can gather data about a range of learning experiences beyond those you have pre-defined, but am I correct in thinking that you still have to define what the possible learning experiences are up-front; it's just that they're not confined solely to the SCORM package this time?

What I mean by this is that you have to set up all of the statements in the LRS prior to a learning activity being dispatched, and assign statements to actions such as "X read this" or "X experienced that", describing the learning opportunities for learners beforehand... It's not actually a case of the learner/user creating the statements themselves to demonstrate what they have read/experienced, etc.?

Any tutorial videos / articles / help / advice on how people are actually implementing Tin Can within Storyline would be greatly appreciated. There's a lot of documentation on the web, but I'm finding a bit of a gap: there is a lot of technical content, but very little on how an instructional designer can actually utilise this feature.

Thanks,

Emma

14 Replies
Ryan Parish

Using Tin Can is a lot different than using SCORM, because there really isn't a set standard of what kinds of data should be captured by the Learning Record Store where you send your statements. You have to identify the statements yourself.
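
To make that concrete, a statement is just a small JSON document built around actor / verb / object. Roughly like this (the name, email address, and IRIs below are placeholders I've made up for illustration):

```typescript
// A minimal xAPI statement: who (actor) did what (verb) to what (object).
// Every value here is a placeholder; you choose your own verbs and activity IRIs.
const statement = {
  actor: {
    name: "Example Learner",
    mbox: "mailto:learner@example.com",
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/experienced",
    display: { "en-US": "experienced" },
  },
  object: {
    id: "http://example.com/activities/cpr-overview-video",
    definition: {
      name: { "en-US": "CPR overview video" },
    },
  },
};
```

The LRS just stores whatever well-formed statements it receives; the meaning comes from the verbs and activity IDs you decide to standardize on.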

Since most LMS platforms don't have a native LRS that captures these statements along with other activities performed outside the system, you will need to figure out a solution to that before you can really experience the full potential of Tin Can.

As for use cases, and how you might incorporate statements into Storyline production, there are nearly unlimited options.

For example, let's say that you wanted to build a learning curriculum to teach nurses how to perform CPR. The curriculum might include an eLearning program built in Storyline, an ILT taught by a certified instructor, and a practice component on a human patient simulator, or "smart dummy" (pardon the oxymoron). To measure the success of the program, you might want to track when the nurse performs CPR in the real world (when a CPT code is logged into a patient chart), and measure the outcome via an electronic health record, or through the patient billing office. Was the procedure a success? Is the patient still healthy six months later?

Tin Can can help answer all of these questions. In Storyline, you could send a statement to the LRS each time a learner completed an activity, viewed a slide, answered a scored or survey question, or completed a course. You may want to individually track the efficacy of one video vs. another in an A/B test with your users, or track whether a certain ILT instructor is more effective than another in delivering the material.
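
Here's a rough sketch of what that can look like from a Storyline "Execute JavaScript" trigger. The endpoint, credentials, learner details, and IDs are placeholders; in a real Tin Can launch the LMS typically hands the endpoint and auth to the course.

```typescript
// Hedged sketch of an "Execute JavaScript" trigger that reports a slide view as an
// xAPI statement. All values below are placeholders, not anything Storyline provides by default.
const LRS_ENDPOINT = "https://lrs.example.com/xapi";   // placeholder
const LRS_AUTH = "Basic " + btoa("key:secret");        // placeholder credentials

function reportSlideViewed(slideId: string, slideName: string): Promise<Response> {
  const statement = {
    actor: { name: "Example Learner", mbox: "mailto:learner@example.com" },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/experienced",
      display: { "en-US": "experienced" },
    },
    object: {
      id: "http://example.com/courses/cpr/slides/" + slideId,
      definition: { name: { "en-US": slideName } },
    },
  };

  return fetch(LRS_ENDPOINT + "/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",   // required header on every xAPI call
      Authorization: LRS_AUTH,
    },
    body: JSON.stringify(statement),
  });
}

// Wired to a slide trigger, e.g.:
reportSlideViewed("1.3", "Chest compression technique");
```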

Basically, rather than looking at an educational initiative as a monolith, you can run scientific-style experiments through your LMS, through web behavior captured outside the system, through manual learning interventions like instructor-led training and mentoring, and through data pulled from other connected systems (via REST APIs sent to your LRS).
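
The statements coming from those other systems look exactly the same. Purely as a hypothetical (the verb IRI, account details, extension key, and code value are all invented), a job on the billing or EHR side could post something like this to the same /statements endpoint whenever a CPR procedure code is logged:

```typescript
// Hypothetical statement from a non-LMS system (e.g. a nightly billing-system job).
// Everything here is invented for illustration; it would be POSTed to
// {endpoint}/statements exactly like the Storyline sketch above.
const realWorldStatement = {
  actor: {
    name: "Example Nurse",
    account: { homePage: "https://hr.example.com", name: "nurse-1234" },
  },
  verb: {
    id: "http://example.com/xapi/verbs/performed",
    display: { "en-US": "performed" },
  },
  object: {
    id: "http://example.com/activities/cpr-real-world",
    definition: { name: { "en-US": "CPR performed on a patient" } },
  },
  result: { success: true }, // e.g. outcome as recorded in the patient record
  context: {
    extensions: { "http://example.com/xapi/ext/cpt-code": "92950" }, // logged procedure code (illustrative)
  },
};
```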

The easiest way to set something meaningful up is to follow the scientific method. Start with a question like, "Will adding an interactive eLearning activity to my CPR program help improve patient outcomes?" Then you can construct a hypothesis, test that hypothesis by tracking all of your existing training components with and without your new activity, analyze your results, and draw conclusions about what they mean. Determine whether those results align with your original hypothesis and, if not, tweak the program until you get your desired result.
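
The "analyze your results" step is mostly a matter of querying statements back out of the LRS. A rough sketch (endpoint, credentials, and activity IRIs are placeholders) that compares completions for two versions of the program:

```typescript
// Hedged sketch: pull "completed" statements for an activity from the LRS and count them,
// so two cohorts (with and without the new eLearning activity) can be compared.
const LRS_ENDPOINT = "https://lrs.example.com/xapi";
const LRS_AUTH = "Basic " + btoa("key:secret");

async function completionCount(activityId: string, since: string): Promise<number> {
  const params = new URLSearchParams({
    verb: "http://adlnet.gov/expapi/verbs/completed",
    activity: activityId,
    since, // ISO 8601 timestamp, e.g. "2016-01-01T00:00:00Z"
  });
  const res = await fetch(LRS_ENDPOINT + "/statements?" + params, {
    headers: { "X-Experience-API-Version": "1.0.3", Authorization: LRS_AUTH },
  });
  const body = await res.json(); // { statements: [...], more: "..." }
  return body.statements.length; // a real report would follow the "more" link for paging
}

// Compare the two versions of the curriculum:
const withNewActivity = await completionCount("http://example.com/activities/cpr-v2", "2016-01-01T00:00:00Z");
const withoutNewActivity = await completionCount("http://example.com/activities/cpr-v1", "2016-01-01T00:00:00Z");
```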

For lots of folks just dipping their toes in the water here, sending the big 4 from SCORM to an LRS is enough to get you started. The key is that if you really want to make the most of Tin Can, you will want to track behavior that happens outside of the system. If you just want to track learner behavior inside your LMS, you may want to stick with SCORM 2004, 4th Edition.

I could talk about this all day, and I'd love to help you with this! Let me know if I am talking gibberish, or if this is making sense!

Emma Herbert

Hi Ryan,

Thanks for taking the time to provide such an in-depth response! That's definitely helped, and it's interesting to consider an experiment-style approach.

We are interested in Tin Can to provide more in-depth data on how our learners use the SCORM packages: for example, how they are interacting with each slide, what materials they are viewing most, specific answers they are providing to questions, etc. We also want to gather data on what materials, videos, articles, etc. they are interacting with outside of the SCORM packages, which is where I fall down, as I don't understand how this works or is set up logistically.

You mentioned the big 4 - could you please identify which 4 you mean here? 

Also, would you mind elaborating on your reference to SCORM 2004, 4th Edition? I have usually opted for SCORM 1.2.

Many thanks in advance! I'm pleased you could talk about this all day as I have a lot of questions!!

Emma 

Steve Flowers

SCORM 2004r4 might not be well supported by your LMS. Worth a try to see if it works. Depending on what you want to do, you might not see any difference between the functionality offered in SCORM 1.2 and 2004r3 or r4. Many still use 1.2 because it meets their needs and has (largely) less complex intent. 2004 was a big jump, but largely for multi-SCO packages and sequencing; if you stick with a single SCO, the advantages tend to melt away. 2004 does offer a significant increase in the size of the suspend data field, so testing for support might be worth that alone.
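
To show why that field matters, here's roughly how content stashes its resume state in suspend_data through the SCORM JavaScript API. The API objects come from the LMS, and the size limits below are the commonly cited ones, worth verifying against your own LMS.

```typescript
// Hedged sketch of writing resume state to cmi.suspend_data.
// SCORM 1.2 caps the field at 4,096 characters; SCORM 2004 3rd/4th Edition allows 64,000,
// which is often the practical reason to publish to 2004 even for a single-SCO course.
declare const API: any;          // SCORM 1.2 API object exposed by the LMS
declare const API_1484_11: any;  // SCORM 2004 API object exposed by the LMS

const state = JSON.stringify({ visitedSlides: ["1.1", "1.2", "1.7"], quizScore: 80 });

// SCORM 1.2: a large state object can overflow the 4,096-character limit.
API.LMSSetValue("cmi.suspend_data", state);
API.LMSCommit("");

// SCORM 2004: same idea, much bigger ceiling.
API_1484_11.SetValue("cmi.suspend_data", state);
API_1484_11.Commit("");
```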

I just finished a project that mixed SCORM and xAPI. I used SCORM for tracking initial completion and xAPI to send actions performed within a simulation out to the LRS. An instructor then accessed a report card dashboard (custom built) to review the simulation choices and other calculated items and provide a final grade back into the LRS. This was something we absolutely couldn't do within the LMS alone. 

By the way, this is tracking 90+ different simulations within the LMS. All of which can be taken either by assignment or as a follow-up. The dashboard tracks how many have been taken and how many have been marked as passed by the instructor. 
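
For anyone curious, the dashboard side is mostly just querying statements back out of the LRS and tallying them. A rough sketch of the idea (endpoint, credentials, and verb are placeholders, not what I actually built):

```typescript
// Hedged sketch: count how many simulations have been marked "passed" by pulling the
// instructor's statements from the LRS and tallying them per simulation activity.
const LRS_ENDPOINT = "https://lrs.example.com/xapi";
const LRS_HEADERS = {
  "X-Experience-API-Version": "1.0.3",
  Authorization: "Basic " + btoa("key:secret"),
};

async function passedPerSimulation(): Promise<Map<string, number>> {
  const counts = new Map<string, number>();
  let url: string | null =
    LRS_ENDPOINT + "/statements?" +
    new URLSearchParams({ verb: "http://adlnet.gov/expapi/verbs/passed", limit: "100" });

  // The LRS pages results; follow the "more" link until it comes back empty.
  while (url) {
    const res = await fetch(url, { headers: LRS_HEADERS });
    const page = await res.json(); // { statements: [...], more: "" | "/xapi/statements?..." }
    for (const s of page.statements) {
      const simId: string = s.object.id;
      counts.set(simId, (counts.get(simId) ?? 0) + 1);
    }
    url = page.more ? new URL(page.more, LRS_ENDPOINT).toString() : null;
  }
  return counts;
}
```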

This mixes the world of the LMS with the world of the LRS. It's a neat way to tackle challenges that the LMS doesn't handle well on its own.

Last year, I completed another similar project setup. iPad-based simulation stations for groups. The groups competed as they went through the exercises. The "LRS" collected the responses and displayed a dashboard of scores for the entire room of teams. The instructor could drill in and see how each question was answered to provide additional feedback. In real time. This one wasn't added to the LMS. I say "LRS" because I faked it with a Google Apps Spreadsheet combo. It would have been just as easy to do with an LRS.

Think of the LMS like a fortress. Once you cross the moat (log in) you can enter, find a room (search the catalog), and open a book (open a course), but you can't take the book out of the room, or out of the castle. You also can't open more than one book at once. The LRS is more like a cell phone tower. You can connect to it with multiple devices and can send and receive messages. You can move about freely and you can use more than one device at a time. And the devices can talk to each other. It's that big a difference.

The LMS is a simple event dispenser. xAPI (Tin Can) enables a whole lot more. Wrote this about 4 years ago. Still applies.

Ashley Terwilliger-Pollard

 Think of the LMS like a fortress. Once you cross the moat (log in) you can enter, find a room (search the catalog), and open a book (open a course), but you can't take the book out of the room, or out of the castle. You also can't open more than one book at once. The LRS is more like a cell phone tower. You can connect to it with multiple devices and can send and receive messages. You can move about freely and you can use more than one device at a time. And the devices can talk to each other. It's that big a difference.

Best analogy in a long time! Thanks, Steve, for coming in to share.
