Tips for gauging and meeting learning objectives?

Jun 20, 2014

Hello!  Here's a quick question for those who have spent some time with instructional design.

This question is mainly about interactive E-learning created with tools like Storyline, though I'm sure it could apply to many courses in general.

During the instructional design process, what are some ways to gauge that the learning objectives are being met, and that the E-learning course is going to be effective for the learners?

Just wondering if anyone has any advice or tips to share about methods you use during the ID phase (to ensure the course stays "on track" from the very beginning, into storyboarding, and also through the development phase)? Meeting the learning objectives stems from the initial instructional design.

Now, much of the time you'll be given some parameters for what the learners need to learn. The question is how to plan and build on that, and measure success!

So does anyone have anything additional they could share as far as methods or advice? Or know of any website articles that discuss this? Thank you!

11 Replies
Helen Gordon

Hi Jay,

A few things I use are action mapping, formative assessments, and piloting:

Cathy Moore's Action Mapping: If you've not come across it before, essentially you start with the measurable business goal (so you'll be able to see whether your training has worked on an organisational level). Then you identify what the learners would need to do in the real world to achieve that goal, and you design your practice activities to let learners practice those behaviours (scenarios in Storyline are great for this). Finally, you identify the really-need-to-know information. If the learner doesn't need a piece of information in order to pass the practice activities, then either your practice activity needs work or you don't really need that information.

Formative and Summative Assessments: Give learners challenges during the training as well as a final assessment at the end. Make sure the challenges/questions match up to the learning objectives, that every learning objective is covered by an assessment, and, correspondingly, that you aren't assessing anything that doesn't tie back to an objective. Worst-case examples are asking questions on content you've not even covered; more common but still bad examples are asking irrelevant questions about the exact percentage for something when it isn't something they need to know.

Piloting and evaluation might be beyond the instructional design phase you asked about, but the more feedback you can get from the target learners the better; otherwise you might miss the obstacle that's preventing the learners from performing the required behaviours. For instance, your solution might provide all the information they could need, but the actual problem might be cultural, motivational, or a matter of conflicting requirements, which your training won't address.

I hope those ideas help,

Helen

Cathy Moore

Helen, thanks for your great summary of action mapping.

Jay, I'd like to add that during the analysis phase, it can be helpful to use this quick flowchart to make sure that training is the right solution and your objectives are on track. As Helen mentioned, sometimes the problem comes from a lack of tools or an issue with the company culture. If that's the case, no matter how you phrase or test your learning objectives, your training might not have an effect.

Also, if you've had conventional instructional design training, you might have been told to write objectives using verbs like "define" and "identify" or even "understand." You might want to aim higher than this and write performance objectives that describe what people need to do and not just what's happening in their heads. 

For example, a performance objective would be "Remove spoiled fruit from the batch before packing it." That's your ultimate goal. If a client insists on seeing "understand"-style objectives, then you could also write enabling objectives for that action, such as "Identify spoiled fruit."

Your activities and assessments could also be aimed at the performance objective rather than the "understand" style objectives. For example, you could show the learner a photo of a batch of fruit and have them decide whether that batch can be packed or not. If the learner correctly says the batch shouldn't be packed, then they need to click on the spoiled fruit (assuming we're talking about fruit that you can assess on sight). This is a painfully basic example but my point is that you can design activities that try to simulate the decisions that people take on the job rather than writing questions that mostly test their knowledge. This post might be helpful for that.

Jay Yearley

Thank you very much, Cathy.  It's great to also get a response from you.  Your blog seems to be recommended frequently as a resource in this particular area.

I will definitely be looking into and using the flowchart ideas (action mapping, etc.), as they make quite a bit of sense and the process is good for pinpointing the areas that need training. Also, the tips for creating activities for "using" rather than just "knowing" knowledge are useful. Thanks again for sending the links along.

I also liked this post on your blog. It describes a way to create a focused, measurable training goal, which is along the lines of what I was looking for.

Jay Yearley

Hello again!

Here's another question extending from the original one. Anyone who has some insight on this, please feel free to answer.

After reading through Cathy's blog post on creating a training goal in two steps (and others), I better understand the importance of creating measurable goals through the formula described there: [measurable business result] will [change by X%] by [date] as [the learners do something differently].

Now, are there any additional tips on how to arrive at those numerical percentages?

After identifying something at the company that's already measurable (I'm assuming this happens during the initial needs analysis), how would one then decide how much, in actual percentages, the training would change or improve it, and by when?

Is this just a very educated estimate on your part of how much the training will change it (depending on what you already know about the company's current, measurable performance and strategies)?


As an example, say a company wants to use E-learning to train customers online on how to use a website's service to sell products. The training is intended to streamline sales of the product and to reduce the number of calls to the call center, thus reducing the time staff spend explaining the service to customers over the phone and freeing them up for other customer service issues.

So, for example, a measurable goal might be: "Sales will increase 5% in half a year as customers are trained online to sell, and calls to the call center decrease." That percentage and timeframe would still be a very educated estimate, though.

Is that the general idea of using this formula/method? I realize that specific answers will vary and depend on each company and their strategies.

john faulkes

Jay,

I think that's the general idea.

Some theory about success measures in general:

The reason we should define them might seem obvious: they tell us whether we have achieved a goal or not, and/or they tell us when we need to stop activity. However, there are two very good additional reasons to set them:

1. If they are kept visible they provide a reminder of the quality level we must work toward.

2. If, in the process of defining them, we have great difficulty doing so, we may realise that the original goal we have set is muddy and flawed (which will of course scupper the project).

Success measures need to be Meaningful (related closely to the purpose of the project) and Measurable (capable of some form of checking).

Success measures can be 'internal', i.e. related to development deadlines, budget limits, etc., or 'external', related to the business purpose, the customer reactions, and so on. In practice, with a significant project one would have a selection of measures covering all of these aspects.

Setting good, numerically based measures for uncertain outcomes is tricky, as you have outlined. The best way to get close to a reasonable measure (e.g. 'Sales will increase by 5%') is to sit with the professionals and debate 'what-ifs' with them. For instance, sit with a marketing manager and ask 'if our people changed (whatever) behaviour, what impact do you think it would have on sales?' Also a good follow up: 'How confident are you in that? What is the % likelihood of it happening?'

(Marketing people are paid to compare the quality of products against the competition and end-customers' needs, factor in promotional activities, and predict likely revenues.)

Finally, though, it's a learning process for the long term. Setting targets is a good idea, but wherever you can, stress the motivational aspects of these rather than the bean-counting ones.

Cathy Moore

Jay, in addition to following John Faulkes' good tips on getting a number out of your stakeholders, you might consider focusing on just one measure. The "sales will increase" goal is a good end goal (for almost every business!) but decreasing calls to the support center might be more easily measured and more directly attributed to the training. So one approach could be: "Calls to the call center about using the site to sell products will decrease X% by July 1, 2015 as ..."

Whether you focus on sales or support requests, I'd suggest taking out the mention of training and focusing more on what the learners will be doing. That might give you, "Calls to the call center about using the site to sell products will decrease X% by July 1, 2015 as customers use the site correctly and confidently."

It's a very common temptation to include training in the goal, like "Sales of megawidgets will increase 5% by Q3 as sales staff are trained in upselling." It can be more effective for needs analysis and designing activities if the goal focuses more on what the sales staff will do: "Sales of megawidgets will increase 5% by Q3 as sales staff apply the 3-step Sell-it-UP model" (nice and concrete, assuming the model is clear) or "...as sales staff consistently use effective upselling techniques" if you don't have a specific model.

It's great that you're doing this step. No matter what your final goal ends up being, it will help focus your analysis and should help keep your stakeholders from trying to add irrelevant information.

Jay Yearley

Thank you, John and Cathy.

This is all good input.

I better see the benefit of focusing more on what the learners will be doing (rather than on the training). I wasn't sure which of those the formula method was aiming at, so thanks for clarifying.

Also, I feel that using the formula method would be a good way to create a measurable goal that will help set the project on a more focused path, and help keep it on track through its development.

And as John mentioned, "If they are kept visible they provide a reminder of the quality level we must work toward." Very true.
