In early September, SurveyUSA showed John McCain leading the polls 58 percent to Barack Obama's 38 percent among North Carolina voters. A week later, a CNN/Time poll found that McCain's North Carolina lead had slipped by 10 points, barely edging out Obama's 47 percent in the state.
But the discrepancy doesn't necessarily indicate a shift in public opinion. Polling biases -- in how pollsters frame questions; in whom they poll, as well as when and how; and in sample size -- can distort how a population's opinions are represented, according to Steven Greene, a North Carolina State University associate professor of political science.
The SurveyUSA poll started calling North Carolinians' houses at random two days after the Republican National Convention concluded, and Greene said it is normal for a candidate's support to experience a "convention bounce." In this case, he said, it was a "Palin bounce."
"Public opinion was especially volatile right then," Greene said. "That's a great example of how one 600-person sample can vary a lot from another. It forces you to question the whole thing. Can you really have that much opinion change? Should two polls conducted across three days have that much of a difference?"
It's an effect that Greene said reduces polls' efficacy.
"Everything has to be put into context," he said. "Any single poll by itself is almost meaningless."
This type of occurrence is natural to polls, he said, because although "good polls try and be as unbiased as possible, polling questions are asked of humans and created by humans. For the most part, they're not biased, but they're riddled with error. It's the nature of the business."
Although error is natural and expected -- polls that survey 1,100 people have a margin of error of plus or minus 3 percent because answers from that select group of people cannot perfectly measure the population's opinion -- Greene said poll results "feed our hunger" for instantaneous snapshots of who is winning the race.
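The plus-or-minus 3 percent figure follows from the standard margin-of-error formula for a sample proportion at roughly 95 percent confidence. A minimal sketch (the function name and the worst-case assumption of a 50-50 split are illustrative, not from the article):

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case margin of error for a sample proportion
    (assumes a 50-50 split, roughly 95% confidence)."""
    return z * math.sqrt(0.5 * 0.5 / n)

# A 1,100-person sample gives about +/- 3 percentage points.
print(round(margin_of_error(1100) * 100, 1))  # → 3.0
```

The same formula explains why a 600-person sample, like the ones Greene mentions, carries a wider margin of roughly 4 points.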
"Polling is giving us the score," Greene said. "It fits into how we think of politics as a game. That's how most people are trained to think about politics. In that sense, we crave those numbers. You don't watch a football game and never know what the score is."
And now, he said, it's even easier to provide daily scores because polling is cheap, quick and easy.
The process's simplicity is also part of its downfall, according to Kenneth Pollock, a professor of statistics. Pollock said that although polling agencies have "thought about" elements that could skew poll results, they weigh other factors, like deadlines, against them.
"Polling agencies have very tight deadlines to get polls done because they want to get them in the newspaper on a certain day. They have to get polls out quickly," Pollock said. "They try and establish certain methodology that they use, but it's quite difficult."
Some of these methodologies, like calling landline phone numbers at certain times of the day or using only Americans with a voting history, can create more bias than is natural to the trade, he said.
"How are they identifying the people who are likely to vote? Especially this year, because it's expected there will be a lot of new voters," Pollock said. "Cell phones are a big problem. A lot of polls don't include cell phones in their samples, which biases the polls away from potentially younger, newer voters who might only have cell phones."
Polls tend to undercount new voters, who, for this election, are voting primarily for Obama, Greene said.
"Young voters are more pro-Obama than the rest of the population," Greene said. "Young voters with only cell phones are even more pro-Obama, so pollsters are underestimating Obama's support."
To counteract this underestimation, Greene said pollsters typically give more weight to the answers from young voters who do answer landline phones. Before this election, he said, the tactic "hadn't been shown to be that big a problem. It hasn't seemed that young cell phone-only voters are that different from landline voters."
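The weighting Greene describes amounts to scaling each group's sampled support rate by its share of the electorate rather than its share of the sample. A sketch with entirely hypothetical numbers (the group names, counts, and population shares are illustrative assumptions):

```python
def weighted_support(responses, population_share):
    """Reweight sampled support so each group counts according to
    its share of the electorate, not its share of the sample.
    `responses` maps group -> (respondents, supporters)."""
    total = 0.0
    for group, (n, supporters) in responses.items():
        total += population_share[group] * supporters / n
    return total

# Hypothetical: young voters are 20% of the electorate but only
# 50 of the 650 people a landline poll happened to reach.
responses = {"young": (50, 30), "older": (600, 270)}
shares = {"young": 0.20, "older": 0.80}
print(round(weighted_support(responses, shares), 3))  # → 0.48
```

In this made-up example the raw sample shows about 46 percent support, while the reweighted estimate is 48 percent, because the underrepresented young group is more supportive.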
Polling agencies might have to reconsider their methodology if they find results from this election skewed disproportionately away from younger, first-time voters.
"They'll have to see whether that is different here," Greene said. "If this continues, then pollsters are just going to have to adjust."
Another bias readers have to keep in mind, Pollock said, is basing expectations of final election results on opinions that are subject to change.
"Opinions are changing over time," he said. "Taking the poll won't be the same as the election results."
Greene said forcing people into opinions they don't have is the clearest type of polling error.
"Pollsters are trying to claim people have a clear opinion, or an opinion at all, when they really don't," he said. "Even when you're asking the very best question -- a nice, very simple, straight-forward question like 'Do you think you will vote for John McCain or Barack Obama' -- it can get really messy."
Although Greene said everyone except those who had been living in a cave "can't avoid Obama and McCain," they still might not be able to answer the question. If they answer, "I'm not sure," Greene said pollsters will sometimes press for an answer.
"They'll say, 'Well, if you had to pick now,' or, 'If the election were today,'" he said. "But the election's not today."
But there is a more reliable poll -- or rather, a poll of polls -- that puts Americans' need for tangible numbers in perspective. Sites like Pollster and FiveThirtyEight aggregate results from many polling agencies to form a more realistic picture of how Americans will vote.
"People put too much faith in polls," Greene said. "But Pollster -- what they do that's smart is they have an average across polls. Even one poll can be more than 3 points out of whack. By averaging across polls, whatever systematic error is introduced by any particular organization will, presumably, even out. Random error should even out when you've got 20 different polls."
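Greene's point about random error evening out can be illustrated with a tiny simulation (the 50 percent true support, 600-person samples, and 20-poll average are hypothetical numbers, not from any cited poll):

```python
import random

random.seed(1)
TRUE_SUPPORT = 0.50
SAMPLE_SIZE = 600

def one_poll():
    """Simulate one 600-person poll of a candidate
    whose true support is exactly 50%."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

polls = [one_poll() for _ in range(20)]
average = sum(polls) / len(polls)

# Individual polls wander a couple of points either way;
# the 20-poll average sits much closer to the true 50%.
print(round(min(polls), 3), round(max(polls), 3), round(average, 3))
```

Running this shows individual simulated polls spread several points apart while their average lands near the true value, which is the intuition behind poll-of-polls sites.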