If you’ve ever been a manager, you know how frustrating the Dunning-Kruger effect can be.
Let’s say you work at a software company, and you need to give Karen, your newest software developer, a performance review. Karen is a strong developer in most respects, but she lacks a few critical programming skills. That’s okay -- you recognized this skill gap before hiring her and set up training sessions to address it.
But when you mention Karen’s programming skill gap to her, her reaction baffles you: “What are you talking about? I’m exceptionally skilled at programming. I don’t need training -- in fact, I’m one of the best programmers on your team.”
You’re surprised. Not only is Karen unable to recognize her weaknesses, but she also overestimates her skill relative to others, believing herself to be better than some of your best developers. Her lack of knowledge in the area leaves her unable to see her own errors -- this is known as the Dunning-Kruger effect.
Dunning-Kruger Effect
The Dunning-Kruger effect is a psychological phenomenon in which people with the lowest ability in a subject rate themselves as among the most competent compared to others. Ironically, the people who know the least about a topic also lack the ability to recognize their own mistakes, which makes them exceptionally confident -- and heavily biased -- self-evaluators. They are also unable to judge other people’s performance fairly.
The Dunning-Kruger effect, first described by David Dunning and Justin Kruger in 1999, is a cognitive bias that influences everyone’s perception of their own abilities. Simply put, people are unreliable judges of their own skills and shortcomings.
However, the Dunning-Kruger effect is slightly more complex than that. People like Karen, with the lowest levels of competence in a subject, often rate themselves highest in terms of expertise. Dunning and Kruger describe this as a double curse -- Karen makes mistakes because she’s not competent in a skill, but that same incompetence blinds her to the errors in her work. In short, she’s not skilled enough in the area to see that she’s not the best at it. She also misjudges other people’s abilities, assuming she knows more than most of her colleagues.
Meanwhile, true experts often underrate themselves -- they’re so knowledgeable about the subject that they can see how much they still don’t know.
Here, we’ll dive into a few key research findings that show this effect in action. We’ll also explore a few potential solutions, so you can help any Karens -- or any other colleagues on your team who suffer from the Dunning-Kruger effect -- judge their own performance fairly moving forward.
Dunning-Kruger Effect Examples
Numerous research studies over the years support the notion that people misjudge their own competence -- and that the poorest performers are the least accurate about their own skills. Let’s look at three examples.
Example One: Debate Skills
Ehrlinger et al.’s 2008 study examined students in a collegiate debate tournament. As you might’ve guessed, students performing in the lowest 25% grossly overestimated their skills -- they estimated they’d won almost 60% of their matches. In fact, they’d won about 22% of them.
The lowest performers weren’t simply overcompensating for a lack of skill, or boosting their confidence to hide their insecurities. Instead, they were genuinely unaware of their incompetence -- the debaters performing in the lowest 25% had the least knowledge of debating, so they were unable to accurately judge their own performance. They weren’t biased judges. They were simply uninformed ones.
The results from this study translate to plenty of real-world examples. If you’ve extensively studied marketing, you might be shocked to hear how your colleague misreads the results of your company’s new marketing campaign. He might take a look at the staggeringly low numbers and say, “Looks good to me,” simply because he doesn’t have the skill set to interpret SEO analytics -- and, lacking that skill set entirely, he believes himself to be above average.
Example Two: Logical Reasoning Skills
In 1999, Dunning and Kruger published their initial research on the Dunning-Kruger effect, titled “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments.” To conduct their research, they looked at people’s self-perceptions of their humor, logical reasoning, and grammar skills.
In particular, we’ll look at study two, which focused on logical reasoning. In the study, 45 Cornell University undergraduates were asked to complete a 20-item logical reasoning test. They were then asked to evaluate their ability and test performance -- first by estimating their “general logical reasoning ability” as a percentile ranking relative to classmates, and second by estimating how their test score ranked relative to classmates. They were also asked to guess how many test questions they’d answered correctly.
As theorized, the students in the bottom quartile -- whose actual scores put them around the 12th percentile -- estimated their “general logical reasoning ability” at roughly the 68th percentile of the class, and believed their test scores fell at the 62nd percentile. They also thought they’d answered 14.2 problems correctly on average, when in reality their mean score was 9.6.
Ultimately, the lowest scorers believed themselves to be above-average performers. On the flip side, the top scorers -- whose actual performance put them around the 86th percentile -- drastically underestimated themselves, placing their general ability almost 20 points lower, around the 68th percentile.
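To put those gaps side by side, here’s a quick, purely illustrative sketch in Python -- the numbers are simply the figures quoted above from study two, and the variable names and structure are our own, not anything from the original paper:

    # Self-assessment gaps, using the percentile figures quoted above.
    # Purely illustrative -- this just computes the differences.
    groups = {
        "bottom quartile": {"actual_percentile": 12, "estimated_percentile": 68},
        "top quartile":    {"actual_percentile": 86, "estimated_percentile": 68},
    }

    for name, g in groups.items():
        gap = g["estimated_percentile"] - g["actual_percentile"]
        direction = "overestimated" if gap > 0 else "underestimated"
        print(f"The {name} {direction} their ability by {abs(gap)} percentile points.")

    # Prints:
    # The bottom quartile overestimated their ability by 56 percentile points.
    # The top quartile underestimated their ability by 18 percentile points.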
Have you ever heard someone really good at public speaking sigh and say, “That went terribly”? They probably aren’t just acting humble -- if they’re a true expert, they likely underestimate their performance in comparison to those around them.
Example Three: Emotional Intelligence
In our prior examples, we’ve seen how the Dunning-Kruger effect influences a person’s perception of their logical skills -- but what about other aspects of a person’s personality, like emotional intelligence?
Sheldon, Ames, and Dunning explored emotional intelligence (EI) in relation to the Dunning-Kruger effect in their 2010 study. While they conducted three separate studies, we’ll focus on the first one, which required 157 master’s students to complete the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Participants were given an extensive description of EI and then asked to estimate their percentile ranking on a scale from zero to 100. They were also asked to estimate their score on the MSCEIT.
As you might’ve guessed, participants who scored lowest -- at the 10th percentile for emotional intelligence -- overestimated their EI by 63 to 69 percentile points, and believed their MSCEIT performance to be 62 to 63 points higher than it actually was.
By contrast, top performers, in the 90th percentile for EI, underestimated their EI score by five to 20 points.
This example is critical for recognizing that the Dunning-Kruger effect is not simply a matter of raw, logic-based skill. It also biases other aspects of our lives, including our social interactions. Emotional intelligence is key to becoming a better co-worker and leader, and incorrectly assuming you have high EI could be detrimental to your long-term career growth.
Consider taking an emotional intelligence test, or another personality test, to fairly judge your strengths and weaknesses.
Dunning-Kruger Effect Potential Solutions
Solution One: Offer Resources to Rectify Your Colleague’s Self-Perceptions
If low performers overestimate their ability in a skill because they don’t have the knowledge to evaluate their performance fairly, one solution is to provide those low performers with the resources and knowledge they need to judge their own work accurately.
During their original 1999 research, Dunning and Kruger tested this hypothesis by asking participants in study four to complete a number of Wason selection tasks (a type of puzzle designed to evaluate logical reasoning). Afterwards, they provided roughly half the participants with a training session on how to solve Wason tasks, and then asked them to re-evaluate how well they’d done.
Overall, bottom-quartile performers who received the training session became markedly more accurate judges of their own abilities. Before the training, they’d ranked their ability around the 55th percentile, estimated their test performance around the 51st percentile, and reported answering 5.3 problems correctly.
After the training, these same bottom performers re-ranked their ability around the 44th percentile, estimated their test performance around the 32nd percentile, and reported answering only one problem correctly.
If you’re dealing with a colleague who can’t see how badly she’s performing, perhaps it’s because she doesn’t have the tools or training necessary to see her mistakes. Rather than explaining her mistakes and hoping she’ll get it, maybe you need to go further and offer training resources to re-calibrate how she critiques her skills.
For instance, perhaps your colleague Karen’s programming skill gap is due to a lack of knowledge of JavaScript -- her general coding background leads her to believe she intuitively understands JavaScript, so she can’t see what she’s missing. If this is the case, offering a free training session on JavaScript could show Karen how much she still needs to learn, and how to evaluate her performance more accurately.
Solution Two: Provide Feedback Sessions
When a new colleague joins your team, she carries with her a set of preconceived notions about her strengths and weaknesses -- but studies have shown that people’s perceptions of their skills are only weakly correlated with their actual performance (Mabe and West, 1982). This makes it difficult for people to evaluate their own abilities fairly -- if I believe I’m an exceptionally strong writer and I’m given a test labelled “Writing Rules,” I’m going to speed through the test, assume all my answers are correct, and find it harder to accept objective feedback to the contrary.
On the other hand, if I’m given a test labelled “Math Rules,” and I’m not personally invested in the subject, I’m a better self-evaluator and will more fairly judge my own performance.
However, I could be wrong about my own skills -- maybe I’m better at math than I am at writing, in which case, my evaluations are biased from the get-go.
An Ehrlinger and Dunning study from 2003 tested this same concept -- they gave participants a 10-item test and described it either as an “abstract reasoning” test or as a “computer programming skills” test. The participants had already described themselves as exceptionally strong in abstract reasoning, but admitted to having no knowledge of computer programming. As expected, participants who believed they’d taken an “abstract reasoning” test scored themselves 12% more favorably than when the test was labelled “computer programming.”
While there are no easy solutions here, it’s important to keep this bias in mind. Arming your colleagues with the knowledge that they are biased judges of their own performance could be a strong first step. Perhaps you want to conduct a feedback session in which you teach colleagues how to openly accept hard-to-hear feedback by acknowledging they aren’t always fair self-evaluators.
You could also offer workplace learning courses. Ultimately, the more your coworkers learn, the less likely they are to think they’re experts in a subject -- which, ironically, makes them more likely to become one.