Opinion

Barry Kiefl is president of the audience analysis company Canadian Media Research and was the director of research for CBC/Radio Canada from 1983 to 2001.

Every journalist who focuses on politics today relies on polling data to frame and give direction to their stories.

Polls don’t seem to require much technical knowledge, and the experts – the pollsters – are very willing to share their data and “free” analysis. Polling is now interwoven into political journalism, as many pollsters have become de facto journalists and political influencers, participating on media panels and hosting podcasts. Unfortunately, we often hear their opinions rather than the public’s opinion they claim to represent.

A number of pollsters are so willing to work with the media that they’ve taken to paying for the polls themselves. This should be a red flag.

When CBC and other media began commissioning polls, circa 1970, they paid for the polls and had exclusive rights. They often had polling experts in-house who understood the methodology of public opinion polling. Polls dealing with federal elections, and particularly the 1980 Quebec referendum, were treated as though the country’s future was at stake. I remember, as a young data analyst, being sworn to secrecy while generating the CBC poll results for the 1980 referendum.

CBC at the time developed an elaborate set of journalistic standards for polls. When I complained in 2014 that the broadcaster wasn’t abiding by its standards, the CBC’s then-ombudsman Esther Enkin wrote, “The whole public opinion research field is in flux and has had some hard knocks recently. Practitioners are faced with challenges in obtaining representative samples at a time when the way we communicate is profoundly changing.” A decade later, the CBC mostly ignores those old standards, which can still be found on its website. The world and journalism have moved on. Polls are often not subjected to any more scrutiny than man-on-the-street interviews.

Surveys today have an increasingly difficult time getting people to participate. People are busy, bombarded by junk calls and e-mails. Every store you shop at wants you to complete a survey. Thus, some pollsters have forsaken random sampling and instead allow people to “opt in” and join their club. This is another bright red flag. These non-probability, unscientific samples are representative only of the people who opt in. The cost of an opt-in poll can be as little as 1/100th the cost of a random sample survey.

The pollsters who still use random, representative samples are fortunate to have 1 in 10 people agree to respond. Response rates plummeted starting at the beginning of the century, raising questions about representativeness. The key question: are those who respond typical? Few Canadian pollsters provide information on survey response rates, but many polls could have response rates as tiny as 1 or 2 per cent, or even less. Online opt-in surveys cannot calculate a response rate at all and have no valid margin of error.
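The margin of error the article refers to is defined only for probability samples. As a minimal sketch (assuming simple random sampling, a 95-per-cent confidence level, and the standard large-sample formula; the figures are illustrative, not drawn from any poll cited here), this is how it is typically computed:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p measured
    from a simple random sample of size n.

    This applies only to probability samples; an opt-in panel has no
    defined sampling error, so no margin of error can be quoted for it.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a hypothetical national poll of 1,000 respondents:
moe = margin_of_error(0.5, 1000)
print(f"{moe * 100:.1f} percentage points")  # about 3.1 percentage points
```

Note that the formula depends on the sample size, not the response rate: a poll can legitimately quote a plus-or-minus-3-point margin of error while saying nothing about whether the 1 or 2 per cent who answered are typical of everyone who was asked.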

In the U.S., pollsters have addressed the issue. For example, the New York Times revealed that it made more than 2.8 million calls for polls in the 2018 midterm election and admitted to response rates lower than 2 per cent. The industry watchdog organization here that could have studied this problem shut down in 2018 and was replaced by the Canadian Research Insights Council. When questioned about the practices of some leading pollsters, CRIC basically said it encourages pollsters to follow its standards, which are prominently displayed on its website. When presented with clear violations of these standards, such as the lack of timely release of poll details and methodology, the response has been indifferent.

The respected Pew Research Center revealed recently that response rates to its own telephone surveys had declined from well over 30 per cent to less than 10 per cent in the past 20 years.

Subsequently, Pew and some Canadian researchers have recognized that with near-universal internet usage, online panels of respondents may be the best way forward to measure public opinion. Nielsen and Numeris recognized this decades ago for the measurement of TV audiences. But the panel can’t be opt-in; it should be a random sample of online users. The incremental cost of random sampling is amortized by using panel respondents over months or years.

Polling has evolved since Claire Hoy wrote his classic put-down, Margin of Error, in 1989. To be successful today, pollsters must not only have a grasp of polling methodology but also be consummate media performers. Journalists will continue to use pollsters, especially those who don’t charge for their services, but given the absence of an industry watchdog and with media organizations relaxing their polling standards, it is incumbent on individual journalists and the public to understand the basics of polling methodology.

Editor’s note: A previous version of this article noted an incorrect publication date for Claire Hoy's book Margin of Error. This version has been updated.
