Training is a key part of any organisation. It doesn't just enable employees to do their jobs; it also plays a large part in determining how productive and efficient an organisation is. For this reason, it's important both to be clear on the goals of any training before it's implemented and to evaluate the training once it's complete to determine how effective it has been.
Although at face value evaluating training effectiveness might seem pretty straightforward, it begins to throw up tricky questions once you start looking at how to approach it. Should you base evaluation on how well learners remember what they’ve been taught, for example? Or should you base it on how well they put the learned knowledge into practice? If it’s the latter, how would you go about doing that? And would it be easier to simply look at the return on investment that any training has delivered for an organisation?
Fortunately, we don't have to wrestle with these conundrums, because someone has done it for us. Professor Donald Kirkpatrick's four levels of training evaluation provide a framework that reconciles these sorts of questions. It states that evaluation should cover four things:
1) Reaction – what learners think and feel about the training once they've undertaken it.
2) Learning – how well learners have retained the knowledge delivered by the training.
3) Behaviour – how well learners have put the knowledge into practice.
4) Results – what overall impact the training has had.
In a work environment, these might translate to how well an employee has received the training, how well they've retained the knowledge it delivered, how well they've applied that knowledge in their role, and how the training has affected the company's productivity or bottom line.
The main area of evaluation in which e-learning can help is knowledge retention. Historically, it may have been straightforward enough to gather employee reactions to training, monitor how well they put it into practice and see how that was reflected in company performance. Evaluating knowledge retention, though, would have required repeated follow-up testing, which might have been too time-consuming to feel worthwhile.
E-learning platforms like Wranx, however, build the evaluation of knowledge retention into the learning process. To reinforce the knowledge being taught, learners are given short drills; their answers provide instant feedback on both their learning performance and the effectiveness of the training. Longer, end-of-unit assessments are also common and provide the same information.
The business benefit of this is insight into an area of training evaluation that may previously have been overlooked. That additional insight, combined with evaluations of reaction, behaviour and results, helps to paint a fuller picture of training effectiveness and shows where, if anywhere, training needs to be improved.