By now, most people understand the ways in which biases can creep into important decisions. Most are much less aware, however, of how much noise can affect their decision making, according to psychology and strategy experts Daniel Kahneman and Olivier Sibony. In this case, noise refers not to the clatter in the room but to the high variability in inputs and cognitive processing that people must contend with when making singular and collective judgments.
The concept is less well-known, in part because there has been much more research on bias than on noise—something Kahneman and Sibony are seeking to change with their recent book, Noise: A Flaw in Human Judgment (Hachette Book Group, May 2021), coauthored with Harvard professor Cass R. Sunstein. In this edited conversation with McKinsey’s Julia Sperling-Magro and Roberta Fusaro, Kahneman and Sibony explain what noise is, how it relates to bias, and what people can do about it.
McKinsey: You both have researched and written so much about decision making and cognitive biases. What brought you to the topic of noise? And why now?
Daniel Kahneman: I’ve been working on errors of judgment for most of my career, more than 50 years, and most of that time, I’ve been studying biases and how they lead to errors in judgment. But about seven years ago, I encountered another type of error, which is noise. It’s something I hadn’t thought about earlier—neither had a lot of other people. So that became a topic of thinking and, ultimately, of the book. As to the question, “Why now?” I would say the book is, in a way, premature. The normal sequence would be, you have an idea, you spend 15 years researching, teaching, and living the topic. Given my age [87], we didn’t have 15 years to wait [laughs], so the book came out a bit early—it’s still green, not ripe, but that’s the best that we could do [see sidebar, “Making Noise: A closer look at the creative process”].
Olivier Sibony: In some of the work I was doing with companies to address the problem of bias, it struck me quite often that the effect of bias is not actually predictable, as we would normally assume. It is often something much more random, and when Danny started talking about noise, I realized that we were talking about the same thing.
Identifying unwanted variability
McKinsey: How do you both define “noise”?
Olivier Sibony: Noise is the unwanted variability in professional judgments. The inclusion of “unwanted” in the definition is very important, because sometimes variability in judgments is not a problem; sometimes it’s even desirable. But not when it involves a professional judgment. The obvious example would be a doctor’s diagnosis. If two doctors give you two different diagnoses, at least one of them must be wrong. That is a judgment where variability is not desirable. There is a correct answer, and you would want these two people to have the same answer. When you don’t have the same answer to something where you’d want the same answer, that’s noise.
McKinsey: What differentiates noise from bias?
Daniel Kahneman: Put simply, bias is the average error in judgments. If you look at many judgments, and the errors in those judgments all fall in the same direction, that is bias. By contrast, noise is the variability of error. If you look at many judgments, and the errors in those judgments fall in many different directions, that is noise.
Olivier Sibony: Here’s a forecasting example to make it more concrete. Say we are planning how long it will take to redecorate our kitchen. We can expect that all of us will be too optimistic; all of us will underestimate the time it will take to finish the renovation. But even though we’re all talking about the same kitchen, none of us will have the exact same estimate of how long the project will take. The average error, whereby we underestimate the time, will be the bias in our forecast. The variability in those forecasts is the noise.
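To put the distinction in numbers, here is a minimal Python sketch of the kitchen example; the forecasts and the true duration are invented purely for illustration.

```python
# Minimal sketch of the bias/noise distinction, using invented numbers.
# Five people forecast how long a kitchen renovation will take (in weeks);
# the true duration turns out to be 10 weeks.
from statistics import mean, pstdev

true_weeks = 10
forecasts = [6, 7, 8, 7, 9]   # everyone is too optimistic, but by different amounts

errors = [f - true_weeks for f in forecasts]   # negative = underestimate

bias = mean(errors)      # average error: the shared optimism
noise = pstdev(errors)   # variability of error: the disagreement among forecasters

print(f"bias  = {bias:+.2f} weeks")   # -2.60 -> systematic underestimation
print(f"noise = {noise:.2f} weeks")   # about 1.02 -> scatter around that average
```

Even if the shared optimism were corrected and the bias removed, the scatter among the forecasters (the noise) would remain.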
McKinsey: Where is noise commonly found?
Daniel Kahneman: The noise we’re talking about in the book is “system noise,” or unwanted variability within a system of judgments. A good example is the judicial system. Judges should be interchangeable. They should give the identical sentence in the identical case. When they don’t, that is system noise. We found the same dynamics in medicine, with underwriters in insurance, and in many other functions.
There can also be noise within an individual. It happens, for instance, when people are presented with the same problem twice and they don’t recognize that it’s the same problem, so they give different answers. Or it happens when people see the same problem under different conditions—the conditions shouldn’t matter, but they do. The sentences that judges hand down, for instance, can vary with the outside temperature. So it’s worse to be a defendant on hot days.
Underestimating noise levels
McKinsey: Did one system, function, or industry surprise you as being particularly noisy?
Olivier Sibony: The most striking examples, to me, are in performance reviews. The research shows that when you evaluate someone’s performance, only about one-quarter of the rating is related to actual performance. The other three-quarters are related to noise. It can be “level noise,” meaning that some raters are, on average, more generous than others. It can be “occasion noise,” reflecting the fact that the evaluator may be in a better mood today than on other days. And it can be the idiosyncratic response of each person to another person, of a rater to a ratee. When you take all those things together, about three-quarters of a performance rating is based, in fact, on pure noise.
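As a rough illustration of that decomposition, the sketch below simulates each rating as true performance plus level, occasion, and pattern noise. The equal variances are an assumption made here for illustration, chosen so that performance explains roughly one-quarter of the rating variance, echoing the figure above; the actual research estimates are more nuanced.

```python
# Illustrative simulation (invented parameters) of how little of a rating can
# reflect true performance once the three noise components are added in.
import random
from statistics import pvariance

random.seed(0)
N = 10_000
# Equal variances are an illustrative assumption, not an empirical estimate.
true_var = level_var = occasion_var = pattern_var = 1.0

ratings, signal = [], []
for _ in range(N):
    performance = random.gauss(0, true_var ** 0.5)    # what the rating should capture
    level = random.gauss(0, level_var ** 0.5)         # this rater's overall generosity
    occasion = random.gauss(0, occasion_var ** 0.5)   # the rater's mood on this occasion
    pattern = random.gauss(0, pattern_var ** 0.5)     # this rater's idiosyncratic reaction to this ratee
    ratings.append(performance + level + occasion + pattern)
    signal.append(performance)

share = pvariance(signal) / pvariance(ratings)
print(f"share of rating variance explained by performance: {share:.0%}")  # roughly 25%
```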
Outside the business world, the example that struck me the most is in the world of forensics and fingerprinting. We have been taught that identification by fingerprints is infallible so long as you follow the right procedures for capturing and analyzing them. In fact, it’s not. It’s a judgment. And, like all judgments, it is subject to noise. There is less noise in fingerprinting than in performance ratings, of course, but where we would expect zero noise, there actually is some. Where we expect some noise, as in a performance rating, there is a lot. The bottom line, as we’ve put it in the book, is wherever there is judgment, there is noise, and probably more of it than you think.
McKinsey: How can you measure the level of noise in an organization?
Olivier Sibony: You do what we’ve come to call a “noise audit.” You give the same problem to a lot of different people, and you measure the differences in their responses. For example, we presented some of our findings to an investment firm. Senior leaders there were interested in finding out whether there was noise among their analysts. They designed a case. They said, “Here is a company, here is its P&L, here is its cash-flow statement.” They gave all that information to their analysts, who are supposed to be applying the same methods and the same techniques to value all the companies that they are looking at. They gave the same set of data to all their analysts, and found that, on average, [between any two analysts] you would get a 44 percent difference in their evaluations. Leaders at the investment firm had no idea that the level of variability would be this large—and this is a common response. The degree of variability in judgment between people is always much greater than you expect.
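For readers who want to see what such an audit quantifies, here is a hypothetical Python sketch. The valuations are invented, and the metric shown, the average relative difference between every pair of analysts, is one plausible way to arrive at a figure like the 44 percent above rather than the firm’s exact calculation.

```python
# Sketch of how a noise audit might score disagreement among analysts who all
# valued the same company from the same data. Valuations are invented.
from itertools import combinations
from statistics import mean

valuations = [95, 120, 150, 180, 210]   # hypothetical valuations, in $ millions

pair_diffs = [
    abs(a - b) / mean([a, b])           # relative difference for each pair of analysts
    for a, b in combinations(valuations, 2)
]

print(f"average pairwise difference: {mean(pair_diffs):.0%}")   # about 39% for these numbers
```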
McKinsey: How do people react when those results are played back to them?
Daniel Kahneman: They tend to be surprised, and they think, “We really should do something about it.” But unless there is a lot of energy behind this, nothing happens. In fact, many organizations have information that there is a noise problem, but they don’t want to look at it—because it makes people look bad and because they don’t quite know how to intervene.
Olivier Sibony: The way that most companies produce judgments actually suppresses their ability to recognize noise and does not necessarily improve the quality of their judgments. If the three of us write down what we think is going to be the rate of inflation next year, we’re going to have very different answers. But if Julia speaks first and says, “Here is my assessment, and here’s why,” and Julia is our boss—well, we’re going to gradually converge to Julia’s answer. Maybe not completely, but we’re going to hesitate to voice the full extent of the disagreement that we have. This is not far removed from the way most companies produce judgments.
Daniel Kahneman: In principle, when companies hold case conferences in which people gather to discuss a particular problem, this would be an ideal setup for a noise audit. But for it to work, the case material would need to be viewed independently by all the participants, and each would need to make a judgment on the material independently of all the others. The case conference could then be a forum for comparing those judgments. As it typically happens, the case is prepared by one person who provides a focus for agreement for the others. There is a wasted opportunity here to discover the amount of noise and constructively find ways to reduce it.
Noise reduction through decision hygiene
McKinsey: In the book, you talk about “decision hygiene” as a way to reduce noise. What is it?
Olivier Sibony: Whenever you talk to people in organizations about reducing errors, they immediately jump to the idea of identifying their biases and how to fight them. If, for instance, your projects are always behind schedule because you are always too optimistic about deadlines, that’s something you should address. But in most decisions, there probably isn’t such an obvious directional error. It’s likely that you are going to find all kinds of different errors pulling and pushing in different directions. That’s why you need to look at it as noise.
Decision hygiene is a set of specific procedures for reducing noise. We call it hygiene because it is a form of prevention, not a remedy to an identified problem. As with other forms of hygiene, it can be a little bit thankless. You never get a pat on the back saying, “Well done washing your hands today, the disease you did not catch is the flu.” Likewise, you will never know which bias or error you averted by applying decision hygiene. It just needs to become second nature.
McKinsey: What are some examples of decision hygiene?
Olivier Sibony: One principle that Danny mentioned, when he talked about case conferences, is to aggregate multiple independent judgments. Whenever you have different people making judgments, rather than assigning the judgment to one person or gathering three people to talk about it around the table, get them to make their judgments independently and take the average of that. Or use some other variation on that theme. But essentially preserve the independence of people’s judgments before you aggregate them. That’s a big tool.
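A brief Python sketch makes the logic of aggregation concrete, assuming individual judgments scatter independently around the true value; the numbers are invented. Averaging five independent judges cuts the noise by roughly the square root of five.

```python
# Why averaging independent judgments reduces noise: a simulation with
# invented numbers, assuming judgments scatter independently around the truth.
import random
from statistics import mean, pstdev

random.seed(1)
true_value = 100
noise_sd = 20    # assumed spread of a single judgment around the truth

def judgment():
    return random.gauss(true_value, noise_sd)

singles = [judgment() for _ in range(10_000)]                            # one judge per case
averages = [mean(judgment() for _ in range(5)) for _ in range(10_000)]   # five independent judges, averaged

print(f"noise of one judge:           {pstdev(singles):.1f}")    # about 20
print(f"noise of the average of five: {pstdev(averages):.1f}")   # about 20 / sqrt(5), roughly 9
```

This is only the statistical mechanics; the independence is the part organizations usually lose, as the inflation example above illustrates.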
Another thing is to remember that competence matters. Some people are going to be better than others at any judgment. In medicine, for instance, some diagnosticians are better than others. If you can pick the better people, that helps. The better people are going to be more accurate; they are going to be less biased but they’re also going to be less noisy. There is going to be less random error in their judgments.
Then you get to slightly less obvious principles. One thing that struck us, for instance, is that how you define the scale on which a judgment is made makes a huge difference in the amount of noise you see. If you replace an absolute scale with a relative scale, you can eliminate a very big chunk of the noise. Think of performance evaluations again. Saying that someone is a “two” or a “four” on a performance-rating grid—even when you have the definition of what those ratings mean—remains fairly subjective, because what “an outstanding performer” or “a great relationship skill” means to you is not necessarily the same thing that it means to me. But if you ask, “Are Julia’s relationship skills better than those of Claudia?” that’s a question I can answer if I know both Julia and Claudia. And my answers are probably going to be very similar to yours. Relative judgments tend to be less noisy than absolute ones.
And there is another approach to noise reduction: using algorithms or rules of some kind, or artificial intelligence, to replace human judgment. Wherever there is judgment there is noise, but the corollary of that is wherever you want to get rid of noise, you need to take away the human element of the judgment. The beauty of algorithms is that they will do that. They will eliminate the noise. There will be no mood, no temperature, no difference between your judgments and my judgments. The machine will churn out the same judgments so long as the algorithm doesn’t change.
The question is, could you inadvertently introduce some systematic bias into your decision making by using and training algorithms in a way that may not be perfect and where the algorithms themselves may be the product of biased human judgments? It’s a very serious issue. But we don’t think it’s an issue that should prompt people to throw the algorithm baby out with the biased bathwater. The answer is not to reject all algorithms; it is to make sure algorithms are not biased.
Reducing room for biases
McKinsey: Will applying decision hygiene help an organization reduce bias as well as noise?
Daniel Kahneman: Almost certainly, yes, you will reduce bias if you reduce noise. One of the origins of bias is that people tend to jump to conclusions, and they reach those conclusions early, based on very little information. They find information that confirms their existing opinions, and they look for information in a selective way. If you implement a decision-hygiene procedure, you break that pattern and prompt people to view the problem as separate subproblems that can be looked at factually, without intuition, or with a minimum of intuitive input. If decision-hygiene procedures are followed, you have less room for biases in the independent judgments.
McKinsey: As an organizational leader, how would I go about thinking about investing in or implementing decision hygiene? Are these procedures relevant for every decision, or only for some?
Olivier Sibony: There are instances in which the value of noise reduction may not be worth the cost. But if you are deciding at what price you are going to make an offer to acquire a company, it’s probably worth trying to reduce the noise in that judgment.
McKinsey: What kinds of conversations and results do you hope to see from the book?
Olivier Sibony: There is an almost philosophical thing that I would like leaders to think about, which is this: If you expect others to agree with you, why aren’t you trying more often to agree with them? Why do so many people, especially in leadership positions, seem to believe that their role is to express a unique, distinct, even original point of view on what needs to be done and, at the same time, find it troubling when others don’t agree with them?
Daniel Kahneman: The goal is for the intellectual impact of the book to become embedded in the language so that it can become embedded in practice. If, in a few years, people understand the words “noise,” “noise audit,” and “decision hygiene” in the same way that they understand the word “nudge,” that’s what we would hope for. That and the awareness that system noise is something all organizations should worry about.
Comments and opinions expressed by interviewees are their own and do not represent or reflect the opinions, policies, or positions of McKinsey & Company or have its endorsement.