Josh Bersin recently reported some frightening numbers on the state of learning. They reminded me of a principle we made central to management and leadership training in my last position: if you don't measure it, it doesn't matter. We encouraged managers to examine their own behavior, and they soon discovered that what they measured and recognized drove the behavior of their team members.
The same is true of learning organizations. The courses and programs that get the most attention or are favored by the leadership team get grandfathered in (to use Bersin's phrase) or overlooked when programs are being cut. The real value lies in the data. As my old boss used to emphasize, "Make data-driven decisions." Proper analysis goes a long way toward making the case to keep or cut programs and will help win the war at the budget table. More than simple level one or two analysis, quality program evaluation generates level three and four feedback (Kirkpatrick's model that we all know and love) and secures a place for the training department at the table in driving organizational performance.
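To make that concrete, here is a minimal sketch of what such an analysis could look like: a roll-up of evaluation responses by course and Kirkpatrick level to flag keep-or-cut candidates. The course names, scores, and the 1-5 scale are all hypothetical, invented purely for illustration.

```python
# Minimal sketch: roll up evaluation responses by course and Kirkpatrick level
# to inform keep-or-cut decisions. Courses and scores are hypothetical.
from collections import defaultdict
from statistics import mean

# Each response: (course, kirkpatrick_level, score on a 1-5 scale)
responses = [
    ("Food Safety Basics",   1, 4.6), ("Food Safety Basics",   3, 4.1),
    ("Guest Recovery",       1, 4.8), ("Guest Recovery",       3, 2.9),
    ("Legacy POS Refresher", 1, 3.2), ("Legacy POS Refresher", 3, 2.1),
]

by_course = defaultdict(lambda: defaultdict(list))
for course, level, score in responses:
    by_course[course][level].append(score)

for course, levels in by_course.items():
    summary = {f"L{lvl}": round(mean(scores), 2) for lvl, scores in sorted(levels.items())}
    print(course, summary)
    # A course with a strong reaction score (level 1) but weak behavior change
    # (level 3) is a candidate for redesign or culling, however well liked it is.
```

Even a toy roll-up like this makes the budget conversation different: the argument is no longer "people love this course" but "this course is or isn't changing behavior."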
Unfortunately, few organizations, under ten percent, are performing that level of measurement. Today's training department barely has the bandwidth to produce training fast enough to keep pace with demand. Detailed evaluation strains the available resources. Doing the analysis needed to make data-driven decisions about the existing curriculum feels too backward-looking, and no one wants to look back. I would also bet that few organizations have systems that facilitate the gathering, analysis, and storage of the information required.
Measurement doesn't have to be a high tech solution, although there are some very good high tech solutions out there. Class surveys can be very effective when well written, and they can be delivered electronically or via pen and paper. Data entry is cheap in the short term. After the training, follow up and gather data on job performance related to training impact, not just from participants but also from their peers, managers, and direct reports. Gathering 360 data 60 to 90 days after training is invaluable in determining training effectiveness. Keep it up and make it a habit. Continue to pay for data entry labor if necessary, or take the local Excel expert to lunch in exchange for some impromptu analysis. Eventually, the number of responses and the insights gained from the data will demand a high tech solution for data gathering and data mining.
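As a minimal sketch of that follow-up step, assuming a simple CSV export of 360 responses with column names I have invented for illustration, the tally below compares rater groups before any high tech solution is in place.

```python
# Minimal sketch: summarize 360-degree follow-up responses gathered 60-90 days
# after training. The CSV file name and its columns are hypothetical.
import csv
from collections import defaultdict
from statistics import mean

# Expected columns: participant_id, rater_role (self/peer/manager/direct_report),
# behavior_observed (1-5), days_since_training
scores = defaultdict(list)
with open("followup_360.csv", newline="") as f:
    for row in csv.DictReader(f):
        if 60 <= int(row["days_since_training"]) <= 90:
            scores[row["rater_role"]].append(float(row["behavior_observed"]))

for role, values in sorted(scores.items()):
    print(f"{role}: n={len(values)}, mean={mean(values):.2f}")
# A gap between self-ratings and manager or peer ratings is often the first
# signal that the behavior taught in class is not showing up on the job.
```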
When we coached managers to be careful about what they measured, we often used the example of inconsistency among managers in the restaurant. Team members quickly learn a manager's pet peeves, the things he or she measures, and they work very hard to avoid those things. That doesn't mean the restaurant is achieving the organizational goals of creating great guest experiences, increasing sales, staying safe, or controlling costs. It is just doing a great job of avoiding punishment.
Be careful what you measure. When you measure learning and practice some effective culling, you end up with a stronger learning library and a highly productive set of tools. When you measure only participant reaction or, worse, measure the wrong behavior as part of your level three analysis, you diminish the reputation of your training department. Don't just try to avoid punishment. Design the training with the end in mind, i.e., impact on performance. Then look for ways to show how training is helping the organization achieve its goals.