And more hard truths about the use and abuse of modern opinion research.
By Tom Barrett
Election polls are fun. They can help you understand why politicians do and say the things they do. They can help you decide how to vote. And as long as the parties have access to polling, you should too.
But, as campaign polls proliferate like dandelions in April, they also become the source of a vast amount of the hooey that gets spewed by pundits.
Pollster Bob Penner has a long history of working for election campaigns. In a recent interview, he said the “literacy around polling” is pretty low.
Polling numbers naturally bounce around within their margin of error. “If you do the same method day after day, each day [the result] will be different,” said Penner, president and CEO of Stratcom. “That’s called sampling error.”
But if a pollster goes on TV and says the bouncing numbers are just sampling error, “he wouldn’t be on TV,” Penner said.
“So he’s got to construct a reason for why the numbers moved other than the probable real reason, which is just a natural variation in the polling method. So he says it’s because of the ads they ran today. Or it’s because of the media story that was on last night. Or it’s because this guy endorsed him. And that’s almost never true. It’s almost never the reason.
“But they’re out there saying it and people are at home consuming it and saying, ‘well, those ads really moved the numbers.’ ”
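Penner's point about natural variation is easy to demonstrate. Here's a minimal sketch (a hypothetical simulation with made-up numbers, not data from any real tracking poll): it runs a nightly poll of 800 simulated respondents whose true support for a party never changes, yet the reported number still drifts from night to night.

```python
import random

random.seed(1)

TRUE_SUPPORT = 0.40   # the party's real support never changes all week
SAMPLE_SIZE = 800     # respondents contacted each night

def nightly_poll():
    """Simulate one poll: each respondent backs the party with probability TRUE_SUPPORT."""
    votes = sum(1 for _ in range(SAMPLE_SIZE) if random.random() < TRUE_SUPPORT)
    return 100 * votes / SAMPLE_SIZE

results = [nightly_poll() for _ in range(7)]
print([f"{r:.1f}%" for r in results])
# The reported support moves a point or two between nights even though
# nothing happened in the campaign: that movement is pure sampling error.
```

Nothing in the simulated campaign changed, so any pundit explaining the nightly movement would be explaining noise.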
Compounding the nonsense are the sometimes spectacular failures of pollsters in some recent elections. Pollsters get things right most of the time, but their reputations aren’t helped by results like last year’s Alberta election, where Premier Alison Redford’s Progressive Conservatives were re-elected with a 10-point margin despite a slew of polls predicting a majority for the Wildrose party.
So what’s a voter to do? How do you make sense out of all the numbers that are going to be thrown around between now and May 14?
In self-defence, interested voters might want to learn a bit about how polls work, how pollsters can get it wrong and what they do to try to get it right.
Let’s start with what Penner was saying about sampling error. That’s the margin of error that pollsters quote — plus or minus, say, 3.5 percentage points, 19 times out of 20.
That means that if you repeated the poll over and over and over again at the same time, contacting respondents at random in a way that makes sure every voter has the same chance of being reached, 95 per cent of your results, or 19 out of 20, will fall within that plus-or-minus 3.5 per cent range.
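Those quoted margins come from a standard formula. As a sketch, assuming the textbook case of a simple random sample: for a proportion p and sample size n, the 95-per-cent margin is roughly 1.96 times the square root of p(1 − p)/n, and pollsters usually quote the worst case, p = 0.5.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random sample.

    p = 0.5 is the conservative worst-case assumption pollsters usually quote;
    z = 1.96 corresponds to 95% confidence, i.e. '19 times out of 20'.
    """
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (400, 800, 1000):
    print(f"n = {n}: +/- {margin_of_error(n):.1f} points, 19 times out of 20")
```

A sample of about 800 gives the plus-or-minus 3.5 points used in the example above; note that halving the margin requires roughly quadrupling the sample, which is why bigger polls get expensive fast.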
The key phrase there is the part about every voter having the same chance of being reached. If that’s not true, your results could be skewed. If for some reason you’re not reaching, say, men over 50, you have a problem, especially if men over 50 tend to vote in a certain way.
For many years, the telephone allowed pollsters to come close to the ideal: pretty much everyone had one landline phone in their home, so pretty much everyone had an equal chance of being interviewed.
But technology and social changes have put an end to that. Cell phones, and the growing number of cell-phone-only households, have made it difficult to contact a lot of people.
That wouldn’t matter too much if we assume that those who use only cell phones are the same as those who have landlines, said Jason Roy, assistant professor of political science at Wilfrid Laurier University. Unfortunately, that’s not the case, he said.
We know that people in cell-only households tend to be, among other things, younger with lower incomes than landline households. Those factors can help predict a voter’s preferences or whether they are likely to vote at all.
People are also much harder to reach by phone these days. Voice mail lets them screen their calls and avoid talking to pollsters. And even when a pollster gets a live person on the line, people are more likely than they once were to refuse to answer questions.
It’s gotten so bad that traditional telephone polling has a very low response rate. Again, Roy said, the people who refuse to talk to pollsters may be different from those who do. (In fact, he said, it appears that the people who talk to pollsters may be those who are most likely to vote, which would make declining response rates less of a problem. Still, said Roy, “these are unknowns we can’t really estimate.”)
Pollsters have come up with ways to get around the problems of phone polls, but the new techniques still come with some controversy.
One new method is known as Interactive Voice Response (IVR), in which a recorded voice asks a series of questions. Respondents answer by pushing buttons on their phones.
Roy said the technology allows for quicker and less expensive polls. But the pollster has no way of knowing if the person answering the call is an eligible voter or a seven-year-old kid.
IVR polls have been off the mark in some recent elections, which raises questions about the technology’s validity, Roy said.
Richard Johnston, acting head of the University of B.C. Political Science Department, said IVR has some advantages.
The recorded voices are often easier to understand than the call centre workers who conduct traditional telephone polls, he said. As well, IVR might get a more honest response: people are conditioned to be polite when talking to other people, and so are more willing to reveal prejudices to a recorded voice.
But IVR can’t ask an unwilling respondent if there would be a better time to do the interview, he said. And if a pollster doesn’t make rigorous attempts to interview reluctant people, the sample could be biased towards the views of those who are available to talk.
Another way of getting around the problems of telephone polling is to abandon the phone altogether. In online polling, pollsters recruit large panels of people who agree to answer their surveys.
Each time he or she does a poll, the pollster will draw a certain number of people from the panel and invite them to answer questions by email.
Again, such surveys are quicker and less expensive to conduct, said Roy.
If you’ve been paying attention, you might be thinking that online polls have a built-in problem: not all voters use the Internet. One in five Canadians over 16 is not online, and Internet usage rates vary widely between provinces and between urban and rural voters. That means not everybody has an equal chance of being included in the panels that pollsters draw their interview subjects from.
Online polls, which have a pretty good recent track record, use various techniques to get around this drawback. One thing they do, like all pollsters, is weight their samples to make them look like the general population. That is, if men over 55 are underrepresented in a sample drawn from an online panel, a pollster will take the answers they got from men over 55 and make them worth more in the final tally.
That’s based on the assumption that people in that group will tend to have similar opinions. Weighting is a controversial topic in academic circles; it’s a process that, as Roy said, is “part scientific and part — I don’t know what you want to call it — part magic maybe.”
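Mechanically, the weighting step is simple arithmetic; the art, and the "magic" Roy alludes to, is in choosing the targets. A minimal sketch with made-up numbers: suppose men over 55 are 15 per cent of the electorate but only 10 per cent of the sample, and they favour a party more than everyone else does.

```python
# Hypothetical figures for illustration only.
population_share = {"men_over_55": 0.15, "everyone_else": 0.85}
sample_share     = {"men_over_55": 0.10, "everyone_else": 0.90}

# Raw support for the party within each group of respondents.
raw_support = {"men_over_55": 0.50, "everyone_else": 0.30}

# Each group's answers are scaled by (population share / sample share),
# so an underrepresented group counts for more in the final tally.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

unweighted = sum(raw_support[g] * sample_share[g] for g in sample_share)
weighted = sum(raw_support[g] * sample_share[g] * weights[g] for g in sample_share)

print(f"unweighted: {100 * unweighted:.1f}%, weighted: {100 * weighted:.1f}%")
```

In this toy example the correction moves the topline number by a point. The arithmetic is uncontroversial; the assumption that the men over 55 who did answer speak for the ones who didn’t is the part that can go wrong.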
UBC’s Johnston noted that, like IVR, online subjects are likely to be more candid because there is no human interviewer.
Despite some methodological issues, he said, “the web, frankly, is the mode of the future; until some other technology comes along, at least.”
Timing is everything
There are other issues that can confound pollsters. Beyond the sampling error we’ve talked about, there are a number of other ways errors can creep in. The way a question is worded will affect the answers.
The order in which questions are asked can influence the answer, as well. If you ask someone a series of questions about government scandals, then ask how they’re likely to vote, you would likely get a different answer than if you just asked for voting intention.
Pollsters generally provide at least some information online about how their poll was conducted. That information should state the exact questions asked and the order in which they were asked.
There are a number of other things an interested voter can ask about a poll. Roy said the information required under the Canada Elections Act makes a good guideline.
The act requires that accounts of polls published during a federal election campaign must include a number of details, including who sponsored the survey, who conducted it, the date it was conducted and the margin of error.
“Any story that doesn’t report that information immediately in my view is suspect because they’ve missed some of the key information,” Roy said.
Even when pollsters are careful about their methodology, things can go wrong. People can change their minds. A poll that was valid last week won’t necessarily reflect what people are thinking when they go into the voting booth.
Party pollsters in last year’s Alberta election say the numbers shifted by as much as 20 percentage points in the last weekend of the campaign. It appears that the polls that showed Wildrose about to form a majority government were valid — at the time they were taken.
An Angus Reid poll conducted after that election suggests that almost 40 per cent of Alberta voters made their final decision on whom to vote for on election day or a day or two before. Most pollsters had stopped interviewing by then.
Which is a good reason to remember that polls “don’t predict the future,” in the words of Wilfrid Laurier University political scientist Barry Kay. “They’re snapshots of the past, hopefully the recent past.”
Please rate your pollster’s ego
If some new polling techniques are controversial, it may be in part because pollsters tend to be controversial characters themselves.
“Polling firms are like political parties,” said University of B.C. political scientist Richard Johnston. “They have a convergent interest in the sense that they want the general reputation of the industry to be positive.
“But they’re in competition with each other and for the most part they systematically ignore each other. They talk as if no other poll has ever been taken and if they do any comparisons it’s only with their own polls — except when they think that they see that somebody has kind of fallen behind the herd, in which case they encourage the wolves to go after that particular one.”