The Bradley effect, which refers to the propensity of white poll respondents to overstate their support for a black candidate, isn’t the only issue that pollsters, statisticians and academics will discuss and dispute. But it may be one of the most consequential, since it stands to significantly skew pre-election poll results in an election in which it seems increasingly likely that Barack Obama will emerge as the Democratic Party’s presidential nominee.
“We know that biracial elections have been difficult for pollsters,” says Andrew Kohut, president of the Pew Research Center. “Race continues to be an issue for Obama, and to the extent race is an issue, race will be an issue in polling.”
The phenomenon, an instance of what pollsters call “social desirability bias”, draws its name from Tom Bradley, the black former mayor of Los Angeles who lost the 1982 California gubernatorial election despite leading in final pre-election polls. It resurfaced in Virginia’s 1989 governor’s race, when L. Douglas Wilder, an African American, barely squeaked past his Republican opponent despite polling that showed him with a commanding double-digit lead heading into Election Day.
Some observers saw evidence of the Bradley effect right out of the gate this year in New Hampshire. While surveys were close to the mark on the Republican side, polls for the Democratic primary showed Obama with a steady lead over Hillary Clinton in a contest he eventually lost, 39 percent to 36 percent. Polls taken up to a day before the election projected, on average, an eight-point Obama lead.
Exit polls showed that Clinton only narrowly edged Obama among voters who made their decision on Election Day, suggesting that the discrepancy between pre-election polls and the actual vote couldn’t be explained away by a last-minute flood of Clinton support.
Others saw the Bradley effect at work in the Rhode Island primary in early March, where Clinton’s 58 percent to 40 percent victory was also notably wider than expected—the RealClearPolitics polling average had given her a lead of just 9.7 points.
In states with larger black populations, such as North Carolina and South Carolina—where polls had Obama leading by nine points when he actually won by 28—there has even been talk of a reverse Bradley effect, in which Obama’s support is under-reported in pre-election polls.
Yet for all the worry surrounding the Bradley effect, there is still considerable debate over whether it is a real cause of polling error—or even a significant one at that.
Skeptics point to a number of other elections featuring black candidates where the phenomenon hasn’t surfaced, the most recent of which was the 2006 Tennessee Senate race between Democrat Harold Ford and Republican Bob Corker. In that election, polls showed Corker with a 12-point lead over Ford, who is African American, just three days out. Ultimately, though, the election margin proved razor thin.
“I expected no more questions about the Bradley effect after the Ford race,” says Tom Lee, a Nashville attorney who served as a top adviser to Ford. Lee says the campaign’s internal poll numbers were accurate the whole way through, but that other polls failed to screen out unlikely voters, who in Tennessee tend to be more conservative.
“What our race showed is you can poll accurately if you got the right screens,” he says, “but you also can be misled. It starts at the beginning. If the sample is poorly drawn you’ll never be able to trust what you see.”
Indeed, in a year marked by record-breaking turnout, many pollsters are less concerned about the Bradley effect than about the models used to project the composition of the actual voter turnout. In the wake of New Hampshire’s polling glitch, the American Association for Public Opinion Research (AAPOR) convened a task force to study what had gone awry. Though its report has not been finalized, the task force’s head, University of Michigan professor Michael Traugott, says the evidence has yet to point to any “smoking gun.”
Frank Newport, the editor in chief of the Gallup Poll, says his team has likewise been unable to find any major error that explains the discrepancies in its polling. He’s unwilling to write off New Hampshire as a one-time fluke.
AAPOR had hoped to publish the task force’s findings before this weekend’s convention, but Traugott says the work has been stalled by pollsters’ hesitancy to submit their methods and practices for peer review. He expects to report by mid-to-late summer, leaving enough time before the general election for pollsters to tweak their methodology or improve their voter-screening questions.
“It does make for a challenge when you have public pollsters who won’t share their methods appropriately with others,” says Rob Daves, a Minnesota-based AAPOR past president. “Science is based on transparency and I’m not just talking about social science.”
The two most recent Democratic primaries did little to put those questions to rest. In an article published last week on the Pew Research Center’s website, two University of Washington researchers found that pre-election polling in Indiana deviated from the pattern seen in earlier primaries.
“Clinton’s margin of victory should have been significantly larger than the two percentage points actually recorded and larger even than the 5-point margin that pre-election polls predicted on average,” wrote professors Anthony Greenwald and Bethany Albertson.
Pollsters, of course, point out that the lion’s share of the primary pre-election polling has been good. But the confluence of factors unique to the upcoming general election—a historic Democratic nominee who will be either an African American or a woman; an influx of new voters; the likelihood of unusually high turnout—nevertheless gives them reason for concern.
“My sense is that people are pretty nervous,” says AAPOR member Michael Hagen, the director of Temple University’s Institute for Public Affairs. “We’re in uncharted territory here, that’s for sure.”