Costing metrics

Oct 29, 2012

I would be interested to know if anyone has developed any costing metrics for developing materials in Articulate.

Obviously there is no standard product and we have to plug in a range of variables including complexity and a variety of quality parameters.

However, I imagine there might be some high level metrics for how many hours it takes to develop, for example, 1 hour of face to face training. Has anyone come across these?

5 Replies
Bruce Graham

Hi Christine,

This topic comes up now and again - here's one discussion I just found.

It's a bit like a weather forecast - there are predictions, but they are seldom right!

One current project I am working on started back in April - the target was 8 modules completed by end-August. We have one almost completed now, and we have another module where we are on build 0.19! The budget, time and scope have all changed through the project.

I will charge a different per-hour price than someone else, and others will have a "per project" price.

Just not sure you can do much more than develop metrics for YOU.

You mention face-to-face training, which is completely different from online training in many respects.



Jeff Nauman


I've found the Chapman Alliance to be a fairly valuable reference for this kind of thing. Their website has a ton of information gathered from studies and surveys across the e-learning community. Maybe this will give you a point of view from which you can determine some of your own metrics.

Steve Flowers

Estimation isn't an exact science but I've found it a necessary evil. Lots of factors involved in getting the estimation closer and narrowing your field to make things less arbitrary. 

Here's a bit of info. The Excel-based estimator is targeted at government markets and is pretty arbitrary. Levels of interactivity tend to be a primary determinant in many estimation models. I'm not a huge fan of this as a primary driver, as it tends to broad-brush the estimation. There are some other recommendations here: 

You may be interested in a model / method we're putting together to break down the features and data elements of a solution at the task level and by specific purpose, rather than as one large package. I've started to break this out here:

Until you're able to identify hard data elements associated with the task, the estimate will always be floppy. A task (procedural / process) / topic (concept) profile can contain a wide array of data points including difficulty, ambiguity, complexity, likelihood of change, safety considerations, etc. These data points can inform your category, intervention, delivery medium, and communication mode selections, and can help to narrow down your estimation within a reasonable range.

We've used another estimation model for EPSS in the past that gets pretty close. This uses the same profile concept described in the content profile / lens article above but adds a multiplier based on another calculation of team capability and schedule constraints. Again, it's not an exact science but an estimate needs to start somewhere.

Here's what the equation looks like for the EPSS estimator:

RS = TC (TA + IS) / LPL 

RS: Resource Score is the pre-multiplied factor to which we apply our calculation adjustment. For us the base multiplier is 0.67 (based on evaluation of past projects).

TC: Task Complexity, ranging from 1 (simple) to 3 (very complex).

TA: Task Ambiguity, ranging from 1 (clear) to 3 (very unclear); indicates how well the task is defined and matured.

IS: Information Scope, ranging from 1 (limited) to 3 (high), with an associated page-count guide to narrow the score.

LPL: Lowest Performer Level, ranging from 1 (novice / new performer) to 3 (journeyman).
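As a rough sketch, the equation above is simple enough to compute directly. This is my own illustration, not Steve's spreadsheet: the function name, input validation, and the way the 0.67 base multiplier is folded in are assumptions layered on top of the formula as described.

```python
def resource_score(tc, ta, is_scope, lpl, base_multiplier=0.67):
    """EPSS estimator sketch: RS = TC * (TA + IS) / LPL, then adjusted.

    tc       - Task Complexity: 1 (simple) to 3 (very complex)
    ta       - Task Ambiguity: 1 (clear) to 3 (very unclear)
    is_scope - Information Scope: 1 (limited) to 3 (high)
    lpl      - Lowest Performer Level: 1 (novice) to 3 (journeyman)
    """
    for factor in (tc, ta, is_scope, lpl):
        if not 1 <= factor <= 3:
            raise ValueError("each factor must be between 1 and 3")
    rs = tc * (ta + is_scope) / lpl
    # Apply the project-history-derived adjustment multiplier.
    return rs * base_multiplier

# Worst case: very complex, very unclear, broad scope, novice performers.
print(round(resource_score(3, 3, 3, 1), 2))  # 3 * (3 + 3) / 1 * 0.67 = 12.06
```

A best-case task (1, 1, 1 with journeyman performers) scores roughly 0.45 after the multiplier, so the model spreads estimates across about a 27x range before any team-capability or schedule adjustments are applied.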

The more you can apply hard data factors, even if these factors are subjective based on the input of your stakeholders, the more accurate your estimate will be. Breaking it down to the task / topic level will also help you narrow the estimate.
