Are crowds smart? Since the publication of James Surowiecki's The Wisdom of Crowds in 2004, many in business would answer yes. The book made waves by arguing that random groups of strangers can, when their answers and ideas are pooled, make better decisions than a single expert. Since then, business has tried to put this insight to use with crowdsourcing and prediction markets. But is the original premise -- that average group responses beat individual expertise -- actually correct?
This week a statistics-heavy argument has broken out online over this question. It was sparked by a column in the WSJ by Jonah Lehrer that used new research out of Switzerland on the wisdom of crowds to argue that, while crowds' purported intelligence is real, it requires the group to have diverse backgrounds and be independent -- if members of the group are allowed to see each other's answers, groupthink sets in and the final solution tends to be far worse.
But did Lehrer misunderstand the underlying science he was writing about? On his blog, author Nicholas Carr recently offered a fascinating round-up of the reactions to Lehrer's column, including experts who argue that the study actually failed to show what Lehrer said it did. Among them is Peter Freed, a neuroscientist at Columbia, who calls the collective answers of the crowd in the Swiss study "horrrrrrrrrrrrrendous," explaining,
the crowd was hundreds of percents -- yes, hundreds of percents -- off the mark. They were less than 100 percent off in response to only one out of the six questions! At their worst... the 144 Swiss students, as a true crowd... guessed that there had been 135,051 assaults in 2006 in Switzerland -- in fact there had been 9,272 -- an error of 1,356 percent.

But wait, responds physics professor Chad Orzel: the mean isn't what you should use to judge the crowd's answers; you should use the median instead. By that measure they did a lot better. For anyone familiar with statistics or academia, this sort of intense, endless and minute debate will be familiar (and quite possibly exhausting). So what's the takeaway of this debate for those less interested in the finer points of statistical practice? In typically incisive style, Carr offers a pithy summation:
The wisdom-of-crowds effect seems to be exaggerated. In many cases, including the ones covered by the Swiss researchers, it's only by using a statistical trick that you can nudge a crowd's responses toward accuracy.... As soon as you start massaging the answers of a crowd in a way that gives more weight to some answers and less weight to other answers, you're no longer dealing with a true crowd, a real writhing mass of humanity. You're dealing with a statistical fiction. You're dealing, in other words, not with the wisdom of crowds, but with the wisdom of statisticians.

Carr's warning that claims for the wisdom of crowds should be taken with a grain of salt seems sensible (after all, that's the point Lehrer was trying to make when he started this whole discussion -- only some crowds are wise, and only under certain conditions). Bubbles, panics, stampedes and mass delusions plague large collections of people, and when it comes to describing their own needs and desires, focus groups are often terrible. As IDEO's Tom Hulme put it in a recent interview, "if you ask people about something, they give you a clear opinion. But then if you watch them, you'll find that the way that they behave actually contradicts the opinion they gave." But don't throw the baby out with the bath water: other forms of crowdsourcing used by business, such as prediction markets, have a pretty good (though far from flawless) record of success.
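To see why the mean-versus-median distinction Orzel raises matters so much, here's a quick sketch using made-up guesses (not the actual Swiss study data): a few wild overestimates drag the mean of a crowd's guesses hundreds of percent off the mark, while the median stays close to the true value.

```python
import statistics

# Hypothetical crowd guesses for a quantity whose true value is 9,272
# (the real assault figure cited in the debate). Most guesses are in the
# right ballpark, but two huge outliers skew the distribution.
true_value = 9272
guesses = [8000, 9500, 10000, 7500, 12000, 11000, 9000, 500000, 250000]

mean = statistics.mean(guesses)      # pulled far upward by the outliers
median = statistics.median(guesses)  # robust to the outliers

mean_error = (mean - true_value) / true_value * 100
median_error = (median - true_value) / true_value * 100

print(f"mean:   {mean:,.0f}  (error: {mean_error:+.0f}%)")
print(f"median: {median:,.0f}  (error: {median_error:+.0f}%)")
```

With these numbers the mean lands around 90,778 -- roughly 879 percent off -- while the median is 10,000, only about 8 percent off. Which summary statistic you pick largely determines whether the crowd looks wise or "horrendous."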
When it comes to crowds and their cleverness, healthy skepticism is in order. Do you agree?
Read More on BNET:
- Nicholas Carr: What Water Wheels Can Teach Us About Cloud Computing
- Crowdsourcing: How to Make a Crowd Smarter
- Can Facebook Predict Election Results?