I received training today in the development tool provided by one of the market leaders to its customers for the purpose of customizing its off-the-shelf product.
Before we travelled to the distant office for our day's classroom training, we sat through about three hours of fairly dull online click-through "learning" and read a couple of PDFs we had been sent.
The courseware made a great deal of the instructional design principles that are this company's backbone and the reason for its market position. The material we read listed pretty much every theory you might care to mention, though Bloom seemed to be top of the pile.
The tool is built quite heavily around applying theory to the design of a course, but the crucial problem, as we saw it, was that for all this theory you ended up with a course that looked basic, relied on questionable ideas and was essentially dull. For all the theory, it failed to interest the learner.
Don't get me wrong - the tool could be very useful in some ways, especially its robust debugging tools, but the application of the theory did nothing for learner engagement.
The other issue I had with the whole approach was that it presented all these theories as essential, but only when they were appropriate - whenever that might be. Applying them all would be nigh on impossible: there were so many ideas swilling around (in fairness, the document was an excellent précis of the field) that you could never pander to more than a fraction. Equally, you wouldn't need much common sense, devoid of any theory, to come up with something that could be interpreted as obeying some theory anyway - a kind of scattergun effect.
In the final analysis, the heavy theory part of the tool was optional, so it was possible not to bother with it at all. Instead you were left with the tool's natural inclination towards testing - a perfectly sound approach, but one that hardly got a look-in in the literature. So much for the theory.