Seeing (Sometimes False) Safety In Numbers

This column was written by CBS News director of surveys Kathy Frankovic.

Exit polls date back to the 1960s and are now a cornerstone of American elections and election coverage. We expect them to be accurate and precise, even though those of us who work on them frequently have to warn people about trusting too much in early and incomplete results.

In 2004, some bloggers and political junkies (as well as the stock market) put too much faith in the early exit poll results. Even today, many people claim those exit polls "proved" there was fraud in that year's vote count. Consequently, starting in 2006, news organizations have restricted access to early results: now, only three members of each participating organization are allowed to view the early exit polls, and they can do so only in a room without internet access - no Blackberries, no wireless cards and no cell phones. Everyone else - even those of us in the companies that pay for the polls - waits until 5 p.m. Eastern Time to see the results.

In 2006, and again in the 2008 primaries, the embargo held: there were no leaks of early exit poll data.

It's natural to want to put confidence in numbers - numbers about anything. Numbers appear to give precision to those subjects that can be expressed in numbers. We read a thermometer with a belief in its ability to measure temperature precisely, and we also want to believe in the precision of poll numbers. And especially, we seem to want to believe in exit polls - not only here in the U.S., but any exit poll anywhere in the world. We want to assume that anything expressed in numbers must be correct, no matter how little we know about how those data were collected.

In many Western democracies, exit polls have been conducted for nearly as many years as they have existed in the U.S. - and they have a comparably well-developed and public methodology. But those exit polls are often quite different from American ones, because each country sets somewhat different goals for its exit polls, and hence different questionnaire styles.

For example, the National Election Pool (NEP), which is responsible for the U.S. exit polls, asks about 25 questions on its questionnaires. But in Germany, the exit poll conducted for the ARD network asks fewer than a dozen questions. And in Great Britain, the exit poll questionnaires that are used to project elections have only one question: "Which candidate have you just voted for?" At the moment the polls close, an estimate of the outcome is produced, and it isn't (it can't be) simply "who won." Not in a parliamentary system. At poll closing time, the British media produce an estimate of the number of seats each party will have in Parliament when all the votes are counted. So the most important finding is not why the Labour Party (or the Conservative Party) has won, but the "swing" - the gain and/or loss of seats. British exit pollsters focus only on those districts where the outcome might be in doubt, ignoring those constituencies where there is no question who will win. That projection can be more important than any analysis. In the United States, however, both projections and analyses are important in exit poll reporting.

These polls in North America and Europe are all journalistic enterprises. But there is another kind of exit poll, too, with different goals. Exit polls sometimes are conducted - or paid for - by a political party or even by a government. And we tend to believe them, too. But there can be problems in these politically-sponsored exit polls: often, they are conducted (or claim to have been conducted) to prove a political point or to give one side a political advantage. In the Venezuelan referendum held last December, some news reports credited "government exit polls" that showed President Hugo Chavez victorious in a referendum that would have allowed him to remain in power indefinitely. But, when all the votes were counted (and they were counted - and released - very slowly) Chavez had, in fact, lost his hoped-for mandate.

The news media that reported those polls must have believed them at the time; otherwise, why report them? But clearly, the Chavez government had a political point to make. The data was flawed (and may not even have been collected), but journalists believed the reports from the "government officials."

Exit polls - even the threat of conducting exit polls - by an unbiased third party may not be able to prove fraud, but they sometimes can help keep a government honest. The early justification for exit polls in the Philippines (first conducted in 1992) was the fact that it takes two weeks or more for the Philippine election commission to count the votes. Reporting exit polls there within 24 hours of the election helped keep government vote-counters honest. Incidentally, because of concerns about political and military activity at polling places (and worry about the security of a mostly female interviewing force), exit polls in the Philippines are conducted at people's homes; after people there vote, their fingers are marked with indelible ink, which helps exit pollsters recognize them.

In the United States, the public can get a lot of information about how exit polls are conducted, and so we understand, and mostly trust, the results. But having all that information also helps us to know exit polling's strengths and weaknesses. Ideally, we should know the same things about exit polls in other countries - and until we do, we should not simply, blindly believe all of them to be true.