
Good facts, bad facts, and the supposed wisdom of crowds

I’ve long been a skeptic about medical advice on food, because the experts change their minds so often. A New York Times article last week by John Tierney, “Diet and Fat: A Severe Case of Mistaken Consensus,” about a new book, included some thoughts relevant to the information-quality debate stimulated by user-generated content and wiki reference sources:

We like to think that people improve their judgment by putting their minds together, and sometimes they do. The studio audience at “Who Wants to Be a Millionaire” usually votes for the right answer. But suppose, instead of the audience members voting silently in unison, they voted out loud one after another. And suppose the first person gets it wrong.

If the second person isn’t sure of the answer, he’s liable to go along with the first person’s guess. By then, even if the third person suspects another answer is right, she’s more liable to go along just because she assumes the first two together know more than she does. Thus begins an “informational cascade” as one person after another assumes that the rest can’t all be wrong.

Because of this effect, groups are surprisingly prone to reach mistaken conclusions even when most of the people started out knowing better, according to the economists Sushil Bikhchandani, David Hirshleifer and Ivo Welch. If, say, 60 percent of a group’s members have been given information pointing them to the right answer (while the rest have information pointing to the wrong answer), there is still about a one-in-three chance that the group will cascade to a mistaken consensus.
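The cascade described above is easy to see in a toy simulation. The sketch below is my own simplification, not the economists’ actual model: each voter has a private signal that points to the right answer 60 percent of the time, votes in sequence, and follows the crowd instead of the signal whenever the earlier public votes already lead by two or more.

```python
import random

def cascade_vote(n_voters=50, p_correct=0.6, rng=random):
    """One round of sequential public voting with a naive cascade rule:
    follow your own private signal unless earlier public votes lead by
    a margin of 2 or more, in which case follow the crowd."""
    margin = 0  # (votes for the right answer) minus (votes for the wrong one)
    for _ in range(n_voters):
        if margin >= 2:
            vote_right = True       # cascade toward the right answer
        elif margin <= -2:
            vote_right = False      # cascade toward the wrong answer
        else:
            vote_right = rng.random() < p_correct  # rely on private signal
        margin += 1 if vote_right else -1
    return margin > 0  # did the group settle on the right answer?

def wrong_consensus_rate(trials=20000, seed=0):
    """Fraction of simulated groups that cascade to the wrong answer."""
    rng = random.Random(seed)
    wrong = sum(not cascade_vote(rng=rng) for _ in range(trials))
    return wrong / trials
```

Under this simplified rule the wrong-consensus rate comes out close to one in three, even though each individual signal is right 60 percent of the time, which is consistent with the figure quoted above.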

I looked again at the opening story of “The Wisdom of Crowds,” the bestselling book by James Surowiecki. Here are a couple of paragraphs. You’ll see that the story was chosen because a scientist was involved, and, as you might imagine, he found that this particular crowd, when he averaged its guesses, did produce an extremely accurate estimate of the weight of an ox.

Eight hundred people tried their luck. They were a diverse lot. Many of them were butchers and farmers, who were presumably expert at judging the weight of livestock, but there were also quite a few people who had, as it were, no insider knowledge of cattle. ‘Many non-experts competed,’ Galton wrote later in the scientific journal Nature, ‘like those clerks and others who have no expert knowledge of horses, but who bet on races, guided by newspapers, friends, and their own families.’

The analogy to a democracy, in which people of radically different abilities and interests each get one vote, had suggested itself to Galton immediately. ‘The average competitor was probably as well fitted for making a just estimate of the dressed weight of the ox, as an average voter is of judging the merits of most political issues on which he votes,’ he wrote.
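The statistical effect behind Galton’s result is that averaging many independent, noisy guesses cancels out individual errors. A minimal sketch, using an invented noise model (uniform multiplicative error, not Galton’s actual data) and the 1,198-pound dressed weight usually reported for his ox:

```python
import random
import statistics

def simulate_guesses(true_weight=1198, n_guessers=800, spread=0.15, seed=1):
    """Each guesser's estimate is the true weight times a random error
    of up to +/- 15 percent (a hypothetical noise model)."""
    rng = random.Random(seed)
    return [true_weight * rng.uniform(1 - spread, 1 + spread)
            for _ in range(n_guessers)]

guesses = simulate_guesses()
crowd_estimate = statistics.mean(guesses)                       # crowd average
typical_error = statistics.mean(abs(g - 1198) for g in guesses) # average individual miss
```

With 800 guessers, the crowd average lands within a few pounds of the true weight, while the typical individual guess misses by dozens of pounds. (Galton himself reported the middlemost, i.e. median, estimate; the mean behaves similarly here.)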

But any crowd has a better chance of coming up with an accurate answer to a question about the weight of an object than about any of the myriad kinds of information we need that are neither material nor measurable. A crowd can’t simply come up with accurate historical dates, let alone with answers to the questions we depend on experts for, because answering them requires knowledge, analytical skill, and insight. At Berkshire we use the shorthand of How and Why for these more difficult questions, and that’s what we ask our authors to focus on when writing their articles. How did agriculture develop almost simultaneously in different parts of the world? Why does football (soccer) have such broad appeal around the world?
