What are good targets to set for small-scale (in-house) eLearning teams?

Jun 01, 2016

Dear friends and esteemed superheroes at Articulate,

May I learn from you what would be good targets to set for small-scale (in-house) eLearning teams?

I work for a state-run institution, so there is no profit-margin-driven bottom line. However, I hope to implement some Specific, Measurable, Achievable, Relevant, and Time-Bound (SMART) targets for my colleagues (and myself!).

Would love to hear from all the experienced eLearning designers/developers out there.

Thanks

5 Replies
Bob S

Benedict,

If you are using SMART goals, then can we assume you are also following best practice and cascading goals from the top levels downward?

If so, then getting started is a matter of looking at what your supervisor's goals are, figuring out which pieces your e-learning team can support, and creating SMART goals that roll up into those. Then cascade your goals down to what each member of your team will do to support them. Make sense?

Not sure if this is what you are asking about, so please let me know if you were hoping for something else and I will be glad to reframe as needed.

Hope this helps.

Benedict Chia

Hi Bob! Thanks for sharing. What you say makes sense, and I totally agree with the part about cascading down what our supervisor wants. In this case, that means measuring learning outcomes and impact, ideally Kirkpatrick Level 3 (and beyond). But that's easier said than done. Correlating the targets to the outcomes is another challenge, as the change in behavior may be influenced by many other factors. Which brings me back to my original question... what would be good targets? :) Hope I'm not confusing things further. Quite stumped at this point, and I appreciate all the advice I can get from this awesome community!

Bob S

Stumped is fine; it's why we all come here. :-) Let's see if the community can un-stump you a bit... It sounds like you are trying to tie SMART goals for your team to a larger initiative around measuring the impact of your training efforts. Does that about sum it up?

So the first piece of advice I can give is: don't put all of your eggs in one basket, especially if you are just starting out down this measurement path. Rather, think about a variety of quality measures that paint a broader picture. This is important for lots of reasons, not the least of which is that culture change takes time and you may not like what you find if you have only 1-2 measures.

On a related note... don't discount measuring below Level 3 too (in addition to it, not instead of it). A well-designed, well-implemented Level 1 eval can often yield a much larger data set (better participation) and be more revealing than you think. For example, try just 3-5 Likert questions focused mostly on content relevance and clarity, plus 1 free-form question on what could be better. Questions such as "How well did the course content relate to your current role?" and/or ones around projected impact on their performance can be important quality measures, especially when combined with other metrics. And because they are Likert-based, you can easily create some SMART goals for your team around them: "Achieve an average learner score of 3.5 for the first 6 months in the areas of...."
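If your survey tool or LMS can export the raw responses, rolling them up against that kind of target takes only a few lines. Here's a minimal sketch (the CSV layout, column names, and the 3.5 threshold are illustrative assumptions, not any particular tool's export format):

```python
# Minimal sketch: average Level 1 Likert responses per question and
# check them against a SMART target. Column names ("question", "score")
# and the 3.5 threshold are assumptions for illustration.
import csv
from collections import defaultdict

TARGET = 3.5  # the SMART goal threshold from the example above

def average_scores(path):
    """Average each Likert question (1-5 scale) across all responses."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            question = row["question"]        # e.g. "content_relevance"
            totals[question] += float(row["score"])
            counts[question] += 1
    return {q: totals[q] / counts[q] for q in totals}

if __name__ == "__main__":
    for question, avg in average_scores("level1_responses.csv").items():
        status = "met" if avg >= TARGET else "below target"
        print(f"{question}: {avg:.2f} ({status})")
```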

Depending on where you are currently, even what we used to call "Level 0" metrics around completion/compliance can be a great measure to start out with. If you have the best course in the world and only 12% of your learners took it, did you achieve success? So you may wish to include participation/compliance metrics for your team for at least years 1-2.
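The math here is simple, but it's worth pinning down so everyone computes it the same way. A minimal sketch (the record format is a made-up assumption; a real LMS export will likely need some massaging first):

```python
# Minimal sketch: "Level 0" participation metric. Assumes two plain
# lists of learner IDs exported from your LMS; the IDs are hypothetical.
def completion_rate(enrolled_ids, completed_ids):
    """Percentage of enrolled learners who completed the course."""
    enrolled = set(enrolled_ids)
    completed = set(completed_ids) & enrolled  # ignore stray records
    return 100.0 * len(completed) / len(enrolled) if enrolled else 0.0

# Example: the 12% scenario from above
enrolled = range(1, 101)    # 100 learners assigned the course
completed = range(1, 13)    # only 12 finished it
print(f"Completion rate: {completion_rate(enrolled, completed):.0f}%")  # 12%
```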

Which brings me to the next point... don't plan on going from 0 to 100 mph right away. That's a really, really common trap L&D managers fall into when they first head down this path. So know that some of your metrics will change and mature as your organization moves from crawl to walk to run. That may mean starting off with some "lower" level metrics for now and growing into the higher-level stuff. Be prepared to explain to your stakeholders/leaders why that's a good idea.

Finally... Level 3 and above can be challenging, but it's doable. One of the easy ways to get started is creating a survey for each and every e-learning course that goes out automatically to the learner's supervisor 30-60 days after completion. It should restate the 1-2 core learning objectives of the course their employee took, and then ask for a rating of how much behavior change they've observed in each of those areas, from worse than before (1) through no change to measurable improvement (5). Note: you may not get many of these back at first, so again, have a variety of measures you are looking at.
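The "goes out automatically" part usually boils down to a scheduled job that finds completions inside the follow-up window. Here's a minimal sketch of that selection step (the record layout, names, and sample data are all hypothetical; the actual send might be an emailed SurveyMonkey link):

```python
# Minimal sketch: pick out course completions that fall in the 30-60
# day follow-up window so a Level 3 survey can go to the supervisor.
# Record layout and sample data are assumptions for illustration.
from datetime import date, timedelta

completions = [
    # (learner, supervisor_email, course, completion_date)
    ("a.lee", "sup1@agency.gov", "Records Mgmt 101", date(2016, 4, 20)),
    ("b.tan", "sup2@agency.gov", "Records Mgmt 101", date(2016, 5, 25)),
]

def due_for_followup(records, today=None, window=(30, 60)):
    """Yield records whose completion date is 30-60 days old."""
    today = today or date.today()
    lo, hi = (timedelta(days=d) for d in window)
    for rec in records:
        if lo <= today - rec[3] <= hi:
            yield rec

for learner, supervisor, course, done in due_for_followup(
        completions, today=date(2016, 6, 1)):
    # Real code would email a survey restating the course's 1-2 core
    # objectives and asking for the 1-5 behavior-change rating.
    print(f"Send Level 3 survey to {supervisor} re: {learner} / {course}")
```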

This is a big topic with lots more to talk about, but hopefully the above will start some ideas flowing for all of us!


Benedict Chia

Wow. Bob, that's some insightful stuff you have shared.

First of all, I like your take on getting at L3 via a survey 30-60 days after completion. SurveyMonkey can probably do a good job of aggregating the data. Getting the supervisors to complete the survey is another issue, but I think that can be resolved through other methods.

Second, on measuring compliance... even if you have data to show that 100% of staff used your eLearning (be it through incentives or pressure from the top), how does that translate into actual impact or outcomes? Where is the value-add in measuring that? I can imagine some stakeholders asking... Hence, I think your suggestion of measuring L3 is gold. Fortunately, we have an LMS that allows us to track this compliance rate, so I will take note to track that data as a supplementary data set.

What's your Twitter handle? With advice like this, your tweets are definitely worth following!

PS: Rest of the Articulate heroes... keen to hear your views too :)

Bob S

Don't want to discourage others from posting, but I did want to answer your specific question about the value of compliance metrics...

First, you are right: on their own, "Level 0" metrics do not relate directly to outcomes. However, there are two things to keep in mind...

1) Remember you are just starting out, and having a wider variety of measures to gauge your success against is always better. You can always drop this measure as things mature, but it's good to be able to point to the level of penetration your team's work has reached in the organization. It shows your value... or your opportunity.

2) Remember, data alone means little; it's how you analyze and use it that counts. So in the case of penetration/compliance, just imagine this scenario... A business unit/team has unfortunately not enjoyed the success they hoped for. In an effort to explain why, they point to ineffective training. Your team responds by pointing out that only 12% of the employees took the training, and of those, it achieved an average rating of 4.3 out of 5 for relevant/impactful content. So how can the training itself be the cause? Conversely, a unit has achieved good success in an area. You don't want to take partial credit for that unless you can show that a reasonable percentage of colleagues took the training. So... it's part of the larger picture that gives insight into impact. Make sense?
