Thursday, October 04, 2007

Kirkpatrick - misunderstood again

Acres of screen and print pages have been given over to the relevance or otherwise of Kirkpatrick's model of evaluation. Indeed, so much time seems to be spent critiquing it in the training press that it is easy to assume that everyone knows it, and from the number of voices lined up for and against, that everyone understands it.

Today a colleague received a request to make sure the course he was working on featured Kirkpatrick analysis.

'Well, we can do a "happy sheet" and intra- and post-course testing,' was the group reply, in a crude attempt to match the good doctor's schema.

But no! The client wanted a survey at the end of the course that addressed all four levels of Kirkpatrick in one go:
  • what do you think of this training course? (Level 1: Reaction)
  • have you learnt what you need to know? (Level 2: Learning)
  • will you change your behaviour as a result of this course? (Level 3: Behaviour)
  • do you think your performance will improve? (Level 4: Results)
Now, I'm not sure of the background of this particular client, and I have no idea whether they are in a training department or not, but someone, somewhere has taken only the most cursory glance at the literature here. I think we'll be working with them a little more to straighten this out.

Perhaps, sadly, what it really made me think about was just how out of the loop I am when it comes to the whole training cycle. For our clients we are simply a means to an end - nothing more than the design phase of the training - so I never get to learn how the training went down; I never get any learner feedback or statistics.

I used to enjoy the thrill of sorting through the post-course sheets (back when I was a classroom trainer), looking for comments (instead of a straight row of satisfactory ticks) and trying to implement improvements for the next run. Or the nervous feeling of awaiting the six-month peer review. Still, being an elearning guy now does have its benefits - no more bloody business hotels...

1 comment:

Karyn Romeis said...

Coming a little late... sorry!

Yes, part of the downside of being a learning designer is the remove. I totally understand that. In my role, I am supposed to design blended solutions, which means synchronous stuff, too, but often the client feels competent to do that, so we wind up working in silos - they do the bit they feel confident about, leaving me to do the e bit.

That frustrates me on two levels. Firstly, it makes it well-nigh impossible to wind up with a solution that hangs together. And secondly, I don't get to interact with the learners.

I hear your frustration, too, on the expectation that all four levels of Kirkpatrick's model can be met with a single happy sheet.

I am of the view that changes in behaviour can only really be measured within the internal performance management programme, and separating the impact of training from that of all the other aspects of effective performance management is a massive ask.

I also harbour serious doubts that ROI can ever accurately be measured - there are just too many what-ifs, variables and environmental factors involved. I reckon you either believe investing in staff development is worth it or you don't - don't try to attach finite numbers to it.