A Shift in Perspective
In training, we often hear claims of success, promises of improvement, and the belief that our efforts are making a real difference. But when it comes to evaluating the impact, one question lingers: can we prove that training works?
Donald Kirkpatrick, known for his influential work in evaluating training effectiveness, often referenced the case of O.J. Simpson to illustrate the difference between evidence and proof. Simpson was acquitted in the criminal court, where the standard was “beyond a reasonable doubt.” However, in the civil suit, the burden of proof was lower, and he was found liable based on a “preponderance of evidence.” This highlights the distinction: proof is absolute, while evidence represents the level of certainty we can substantiate.
In training, we may never be able to prove beyond a reasonable doubt that training caused a specific outcome. Too many variables are involved. Stakeholders may disagree on results, and some will always hold alternative facts or perspectives. And that is legitimate: the value of training often lies in its perceived impact, and this perception varies depending on the observer.
While we cannot prove impact with certainty, we can shift the conversation from belief to evidence. Instead of claiming something as a proven fact, we focus on the indicators suggesting that training is causing changes or delivering results. The goal is to provide credible, concrete evidence aligned with the training objectives.
Levels of Evidence: From Hearsay to Direct Evidence
Much like a lawsuit, where different types of evidence are presented to prove or disprove a case, training evaluation follows a hierarchy of evidence. Here’s how evidence in training compares to legal cases:
- Hearsay
In court, hearsay is second-hand information, unreliable and subjective. It’s often dismissed because it’s not directly witnessed. In training, it’s the anecdotal stories or second-hand reports. For example, “I heard from a colleague that the new training improved productivity,” without solid evidence to back it up.
- Witness Testimony
Witness testimony is first-hand but subjective and can be biased. In training, a participant shares their personal experience, such as, “After taking this course, I felt more confident in my work.” It’s valuable but doesn’t prove causality.
- Character Evidence
Character evidence speaks to a person’s behavior or reputation. In training, this might include feedback about the long-term impact of a training program, like improved team collaboration. It’s useful context but not definitive proof.
- Documentary Evidence
Documentary evidence, like written records, is concrete and more reliable than testimony. In training, it could be reports, surveys, or post-training assessments showing satisfaction or knowledge gained.
- Circumstantial Evidence
Circumstantial evidence suggests a relationship but doesn’t directly prove causality. In training, observing trends like increased performance following a program, without clear proof that training caused the change, is circumstantial.
- Documented History or Timeline
A documented timeline helps establish a case. In training, tracking performance before and after training over time gives a broader view of long-term impact.
- Direct Evidence
Direct evidence is clear and indisputable. In training, it’s observable, measurable changes directly linked to the training. For example, clear increases in sales or performance metrics tied to the intervention.
The Value of Evidence, Not Proof
Evidence isn’t about certainty in the absolute sense; it’s about the level of certainty we can support. The more evidence we gather, the more we can substantiate our belief that training is having the desired impact. This requires collaboration and agreement on the type and amount of evidence needed. Do we trust the process blindly, or do we need evidence beyond a reasonable doubt? What’s important is that we all agree on the level of evidence we require.
Bottom Line
Every piece of evidence counts. Each bit adds to our understanding and helps validate our assumptions. The intention to train is based on a belief, and that’s great. If we articulate that belief, we can turn it into an assumption. And that assumption is something we can test. By collecting evidence and comparing it, we begin to evaluate the effectiveness of our training and make better-informed decisions.
However, the key foundation of this process lies in having a clear plan or assumption. If we’re not clear on our intentions and the desired effects or impact, there’s nothing specific to test. Without a clear understanding of what we expect from training, the whole process of evaluation becomes uncertain. This is where the true value of training lies: not in vague beliefs, but in articulating clear, measurable goals. When we do this, we can test our assumptions and measure the results. Otherwise, the training itself risks being misunderstood or undervalued, and that’s not what it deserves.
In the end, while we may never fully prove the impact of training, we can make clear what we are aiming for and how we think training adds to it. The only thing left to do is collect evidence. And in doing so, we provide the pieces of the puzzle that build trust. This way, we learn together in the ongoing process of learning and growth.