Thursday, October 04, 2007

Kirkpatrick - misunderstood again

Acres of screen and print pages have been given over to the relevance or otherwise of Kirkpatrick's model of evaluation. Indeed, so much time seems to be spent critiquing it in the training press that it is easy to assume that everyone knows it, and from the number of voices lined up for and against, that everyone understands it.

Today a colleague received a request to make sure the course he was working on featured Kirkpatrick analysis.

'Well, we can do a "happy sheet" and intra- and post-course testing,' was the group reply, in an attempt to crudely match the good doctor's schema.

But no! The client wanted a survey at the end of the course that addressed all four levels of Kirkpatrick in one go:
  • what do you think of this training course?
  • have you learnt what you need to know?
  • will you change your behaviour as a result of this course?
  • do you think your performance will improve?
Now, I'm not sure of the background of this particular client, and I have no idea if they are in a training department or not, but someone, somewhere has taken only the most cursory glance at the literature here. I think we'll be working with them a little more to straighten this out.

Perhaps, sadly, what it made me think about was just how out of the loop I am when it comes to the whole training cycle. For our clients we are simply a means to an end - nothing more than the design phase of the training - so I never get to learn how the training went down; I never get any learner feedback or statistics.

I used to enjoy the thrill of sorting through the post-course sheets (back when I was a classroom trainer), looking for comments (instead of a straight row of satisfactory ticks) and trying to make improvements for the next run. Or the nervous feeling of awaiting the six-month peer review. Still, being an elearning guy now does have its benefits - no more bloody business hotels...