Skills Gap vs. Training Needs Assessment

May 21, 2019

Friends, I've picked up a lot of cursory information from my reading (here and elsewhere) and internet searching on Training Needs Assessments and Skills Gap Analyses. I'm still confused, though, and would like to pose a real-life question.

Question 1: Is there a tangible difference between the two? Or does one dovetail with the other?

Question 2 (by way of story): I am currently creating a -- I think -- TNA for a continually developing software system. As I understand it, the "expected outcomes" of a particular process are the expected outcomes of its user story -- i.e., "As a ____, I am able to ____, so that I can ____." Since I have no real contact with the users (it's an international operation), my plan is to release a survey asking how competent they feel executing a particular process. Is this a TNA or a Skills Gap Analysis?

Thanks all! Would love your feedback.

10 Replies
Andrea Mandal

It seems like your organization is trying to do a bit of both - you're trying to find out whether there's a need for training by surveying users to see if they feel competent in the task? Is the intended outcome that a majority answering "I don't feel competent" = train, and "I feel competent" = no need for training?

I would be cautious with this approach; I don't know your organization, but you might have a lot of people respond that they feel competent when they actually do not, and therefore miss an opportunity. Other factors need to come into play when doing a Training Needs Analysis (which is where you determine whether you actually need training or not). Are there performance problems identified with the task? Safety incidents? Are you trying to change metrics (increased sales, decreased average handle time on a call, etc.)? A lot of the time, "throw training at it" is the answer to any business problem - a training needs analysis identifies whether training, rather than process changes or communication or firing a problematic manager (ha ha but not really), is what will fix that problem.

A skills gap analysis can be part of a TNA or a separate exercise once training is identified - you're answering the question "what can people do now, and what do you want them to be able to do?" Measuring competency is a key part of that, which is what you're doing - but you're measuring perceived competency, not actual competency. I've used lots of sources for this, including performance reviews and incident reports.

You have to walk a fine line at this stage because the learners may feel like they're being targeted or tested (and that if they fail, bad things might happen). Management buy-in is a must - but so is bypassing management when you can and working as closely as you can with the learners, so they don't feel like this is tied to their performance with their boss looking over their shoulder.

Please assess the climate before you jump into this - are people already demotivated and suspicious? Or are THEY the ones clamoring for training and ready to learn more? How you interact with your end users - and how you can convince them to interact with you - during the analysis phase will set the stage for the rest of your development process. Even if it's just surveys, you're more likely to get honest responses if you understand why people are asking for training and you can build some sort of trust with the survey takers.

Best of luck!

 

Andrew Ratner

Andrea, what an amazing set of questions. OK, let me see if I can give some more context and then see how you'd follow up.

This is a unique and interesting project. What our team is doing is creating a new solution to replace a legacy system that has had very little training for a long time. To answer your questions about the culture and whether folks are clamoring for training: I would say they aren't starved for it, but they're very interested in learning more about how this new system will work, what that'll mean for their jobs, etc. So a mixture of curiosity, suspicion, and an intense desire to learn. With that said, training really is necessary simply because this is so new.

And because this new system is still in development (agile development at that), all of the functionality is being built incrementally, and the only time our user base sees the software is through User Acceptance Testing. So my thought was: since this system provides training embedded within the software (sort of - it's a little janky), I'll survey how competent folks feel about the functionality after interacting with the system during UAT, to gauge which points really need the built-in training and which parts don't.

I modeled the survey after this survey here, but instead of roles, I split it by functionality, with "I can perform X process" outcomes. I'm also supplementing the surveys with focus groups conducted by some change management folks in the field, to hopefully create a friendlier and more personal feedback session and add qualitative data as well.
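To make the structure concrete, here's roughly how I'm organizing the items - just a sketch in Python, purely illustrative; the functional areas, statements, and 1-5 scale below are placeholders, not our actual survey:

```python
# Illustrative sketch of how the survey items are organized: one block per
# functional area, each with "I can ..." statements rated on a 1-5 scale.
# The areas and statements below are placeholders, not the real survey.

survey = {
    "Case management": [
        "I can create a new case.",
        "I can attach supporting documents to a case.",
    ],
    "Navigation": [
        "I can find the module I need from the home screen.",
    ],
    "Terminology": [
        "I can explain what the new status labels mean.",
    ],
}

SCALE = "1 = not at all confident ... 5 = fully confident"

for area, statements in survey.items():
    print(f"\n{area} ({SCALE})")
    for statement in statements:
        print(f"  [ ] {statement}")
```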

Really, in talking this out, I realize that the need for training has always been there, so it's really been a question of what to train. Is that still a TNA? A skills gap analysis?

 

Andrew Ratner

Another thing I wanted to add is your comment here: "I don't know your organization but you might have a lot of people respond that they feel competent when they actually do not, and therefore miss an opportunity."

I entirely agree, and that's exactly what I'm trying to figure out! Since this has been my approach, how can I best avoid that pitfall, if the approach is ill-advised? Thanks!!!

Andrea Mandal

I saw on your other post that you work for a government contractor. I'm not sure how rigorous your process and documentation have to be, or whether you have to follow the SAT (Systems Approach to Training) process, but I wouldn't worry about "is this a TNA or a skills gap analysis" - it's all analysis, and it's fantastic that it's happening at all (I just came from industry, and you'd be surprised...). Just make sure you document according to your orders and processes.

You have identified a need - "Users need the training because it's a new system and they've never seen it before." Congratulations, your TNA is done.

I've created training for systems still in UAT before. It's not an ideal time to build training, but unfortunately it's often what you get when training has to be ready for the final rollout. It forces you to get creative and flexible.

But my question is: if it's a new system, wouldn't everyone be starting from zero? You'll always have those software geniuses who mess around in it for 30 minutes and have it mastered, but if it's something that hasn't been done before, everyone's got to start at point A (besides those nerds). Maybe the questions you need to ask first are different. How much overlap is there with the old system (if there is an old system)? Is it a question of the look and feel being different, or are there completely different processes to do the same things? Or are they learning to do NEW tasks? For instance, if you used Captivate and switched to Storyline, you're going to intuitively know some things, like how to open, save, and generally how to create a screencast (although you might need help finding where those items are in the menus). But if you're teaching an e-learning designer how to use InDesign and lay out a print publication, that's a whole different kettle of fish.

Have you done an audience analysis yet? Number of users, ages/generation, comfort with technology, comfort with change?

Andrew Ratner

Andrea, you're asking so many amazing questions here. I just want to say that it is amazing to have such thoughtful, experienced folks like yourself help me out with this stuff. 

So, that said, I haven't done an audience analysis per se, but we have a lot of former employees and TDY stakeholders in the office who interact with the current system and know it and the audience very well. Additionally, we have regular contact with the Change Management teams for each of our sites, so we get a lot of user feedback, which gets at some of the audience analysis. What we do know is that we have a lot of ESOL users, so language needs to be simple and direct. Definitely not comfortable with change, haha. So we're making a concerted effort to have more consistent and direct communication about all of our incremental changes as we develop.

With regard to your question about new processes: some of them are generally the same, just with a much more accessible interface, though some processes have changed slightly. That's partly why I wanted to ask how competent they felt executing the processes. But I also have questions about high-level stuff - like whether they can navigate the site, how familiar they are with the new terminology, etc. - so that I get that understanding first. Thoughts on all this?

Andrea Mandal

So let me get this straight - the people you are surveying are those who HAVE used the new software in UAT? I keep coming back to thinking that a skills gap analysis may not be the most effective analysis tool for a brand new software system, because everyone is going to have a skills gap: the current state is "we've never seen this tool" and the future state is "we are proficient in this tool." You may just want to get a short writeup of your audience characteristics and use that to help guide your design.

Moving into task analysis might be your best bet from a productivity perspective. The UAT documentation will help with your task list - the things the user should be able to accomplish. Speaking with developers and UAT participants can help you rank what needs to be trained by difficulty, importance, and frequency of use.

Andrew Ratner

Yeah, they're usually new users to the system who've been selected for UAT.

What you're saying is interesting to me because a UAT scenario is, essentially, a task analysis. It's a test script. What I've done for my surveys previously is ask for ratings of the higher-level tasks (using this explanation by Nicole Legault here as a jumping-off point).

Recently I've been thinking about including a branching question on the survey: if users report a score of 1-3, ask what specific task posed a challenge for them. But I was also thinking about doing this as a facilitated discussion with UAT users to get more substantial feedback. Would the survey still be justified in that case? I would like, at the least, to include some form of analysis before I move forward with designing the next set of training materials.
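Roughly, the branching rule I have in mind looks like this - a Python sketch for illustration only; the task names, the 1-5 scale, and the cutoff are assumptions, not how our actual survey tool works:

```python
# Illustrative sketch only: a "branch to follow-up" rule for a competency survey.
# The 1-5 scale, threshold, and task names are assumptions, not the actual tool.

FOLLOW_UP_THRESHOLD = 3  # ratings of 1-3 trigger the follow-up question

def needs_follow_up(rating: int) -> bool:
    """Return True when a self-rated competency score warrants a follow-up question."""
    return rating <= FOLLOW_UP_THRESHOLD

responses = {
    "Create a case": 2,
    "Navigate the site": 4,
    "Use the new terminology": 3,
}

for task, rating in responses.items():
    if needs_follow_up(rating):
        print(f"{task}: rated {rating} -> ask 'Which specific step tripped you up?'")
```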

Andrea Mandal

Right, the major work of the task analysis has been done for you by QA (thanks, QA!), but you still have to do the fun work of figuring out which tasks need to be trained first, most urgently, and in what order. You can start figuring that out with your survey. Make it clear to respondents that you are creating the training for the new system and that their responses will HELP make the training as effective as possible. That way you are likely to get more accurate responses (everyone likes to help, as long as it doesn't take much time).

We just did a similar thing: we sent a survey to a target group of people who will be affected by new training, then held a group session where we went into more detail about which tasks needed training, resources, or both. The survey results informed the group discussion. Start with the survey, and then you can determine difficulty/importance/frequency with this group as well as the developers.
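If it helps, the "rank by difficulty, importance, and frequency" step can be as simple as a weighted score per task. Here's a rough sketch in Python, purely illustrative - the task names, ratings, and weights are made up, not from any real project:

```python
# Illustrative sketch: rank candidate training tasks by difficulty, importance,
# and frequency of use. All ratings (1-5) and weights here are made-up examples.

tasks = [
    {"name": "Create a case",        "difficulty": 4, "importance": 5, "frequency": 5},
    {"name": "Navigate the site",    "difficulty": 2, "importance": 4, "frequency": 5},
    {"name": "Run a summary report", "difficulty": 5, "importance": 3, "frequency": 2},
]

WEIGHTS = {"difficulty": 0.4, "importance": 0.4, "frequency": 0.2}  # assumed weights

def priority(task: dict) -> float:
    """Weighted score: higher means train it sooner / more thoroughly."""
    return sum(task[k] * w for k, w in WEIGHTS.items())

for task in sorted(tasks, key=priority, reverse=True):
    print(f"{task['name']}: priority {priority(task):.1f}")
```

How you weight the three factors is a judgment call you'd make with the UAT participants and developers; the point is just to turn the group discussion into an ordered training backlog.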

Andrew Ratner

That's interesting. So the idea would be:

1) Do the initial survey rating competency in those high-level tasks (e.g., "create a case" or "navigate the site"), then

2) Conduct a feedback session discussing which of the more granular tasks tripped them up or deserve more attention due to difficulty/importance/frequency, etc.?
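As a sanity check on step 1, I'm picturing something like this rollup to decide which tasks make the step-2 agenda - again just a sketch; the sample responses and the cutoff are placeholders, not real data:

```python
# Illustrative rollup of step-1 survey ratings to pick the agenda for the
# step-2 feedback session. Sample data and the 3.5 cutoff are placeholders.

from statistics import mean

ratings = {  # task -> list of 1-5 self-rated competency scores
    "Create a case":       [2, 3, 2, 4, 1],
    "Navigate the site":   [4, 5, 4, 3, 5],
    "Use new terminology": [3, 2, 3, 3, 4],
}

CUTOFF = 3.5  # tasks averaging below this go on the discussion agenda (assumption)

agenda = sorted(
    (task for task, scores in ratings.items() if mean(scores) < CUTOFF),
    key=lambda task: mean(ratings[task]),
)

for task in agenda:
    print(f"Discuss '{task}' (avg self-rating {mean(ratings[task]):.1f})")
```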

You're an amazing help, Andrea. Thank you so much!
