How accurate are polls supposed to be? The answer may depend on which candidate or party you support.
Last week, pollsters in the US took a considerable amount of criticism for, in statistical terms, getting it right. Polls predicted Hillary Clinton would win the popular vote and she did. When all the votes are counted, the margin is expected to be between 1 and 2 points in favour of Clinton. Granted, the gap was predicted to be wider (between 3 and 4 points according to Nate Silver's FiveThirtyEight), but a modest overshoot does not justify the opprobrium that has been heaped on polling organisations.
The final Ipsos/Reuters poll showed Clinton on 42 per cent and Trump on 39 per cent.
According to RealClearPolitics, pollsters were more accurate in 2016 than in 2012.
Recent misses in the UK general election and Brexit referendum have heightened sensitivities to polling accuracy, so a win in polling terms last week was badly needed. A statistical victory, however, was never going to be enough.
Once it became apparent that winning the popular vote would not deliver the Oval Office for the Democrats, the scapegoating began. Polls quickly became lightning rods for the anger felt by so many in the media and across the world who had lulled themselves or had been lulled into a false sense of security. Had Clinton won the electoral college, polling would not be in the dock.
What was the likelihood of Trump taking the White House if Clinton achieved a margin of between 3 and 4 points, as indicated by national polling? Not that likely at all. But national polls paint only part of the picture.
FiveThirtyEight gave Trump about a three-in-10 chance of becoming president, based on an analysis of all polls, including state polls. So the possibility that Trump could win the electoral college was very real, even if, to many, that reality was inconceivable.
The polls were not perfect, and some polls in rust-belt states fell well outside the margin of error.
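By way of context, the margin of error quoted with a poll typically assumes a simple random sample and 95 per cent confidence. A minimal sketch of the standard calculation (the poll size and support figures below are illustrative, not drawn from any poll cited here):

    import math

    def margin_of_error(p, n, z=1.96):
        # 95 per cent margin of error for a proportion p
        # estimated from a simple random sample of size n.
        return z * math.sqrt(p * (1 - p) / n)

    # A poll of 1,000 respondents with support near 50 per cent:
    print(round(margin_of_error(0.5, 1000) * 100, 1))  # about 3.1 points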
Hard look
Polling companies around the world are taking a hard look at their methodologies and models following a number of disruptive elections.
A universal truth is that it is getting more difficult to poll. No one methodology gives researchers full coverage of the population.
A fully random sample was only ever theoretically possible, but changes in how we live and the communications technologies we use have made the perfect sample nothing more than an aspiration, especially in developed economies such as the UK and the US.
Researchers use models and algorithms to make their samples behave like fully random samples, but this engineering brings with it the potential for systematic error.
We have seen this in recent elections where polls tended to herd in one direction because pollsters employed similar adjustments.
Probably the trickiest aspect of polling is estimating turnout: identifying who will actually vote and who will not. In this latest US election, only half of registered voters actually voted.
Ipsos/Reuters use a model based on four components: stated past behaviour, future intentions, interest in the election and actual behaviour from voting registers. Predicting turnout is, by necessity, more art than science.
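As an illustration only, a likely-voter score might combine those four components along these lines; the weights, scales and cut-off below are hypothetical assumptions, not the actual Ipsos/Reuters model:

    # Hypothetical likely-voter score. Weights and scales are
    # illustrative assumptions, not the Ipsos/Reuters model.
    def likely_voter_score(voted_before, intends_to_vote, interest, on_register):
        # intends_to_vote and interest are answered on 1-5 scales.
        score = 0.0
        score += 0.30 if voted_before else 0.0       # stated past behaviour
        score += 0.30 * (intends_to_vote - 1) / 4.0  # stated future intention
        score += 0.20 * (interest - 1) / 4.0         # interest in the election
        score += 0.20 if on_register else 0.0        # behaviour from registers
        return score

    # Respondents scoring above a chosen cut-off (say 0.6) would be
    # treated as likely voters when weighting the sample.
    print(likely_voter_score(True, 5, 4, True))  # 0.95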
Turnout is also highly variable, both in absolute terms and within specific populations. If a particular demographic decides to vote in greater numbers than previously (such as African Americans in 2008 or, potentially, rural, working-class whites in 2016), polling models may not be calibrated for this scenario.
No analysis of polling would be complete without mention of the changing relationship between polling and the media. We live in an age of always-on news. The appetite for polls is insatiable. This must have consequences for accuracy.
IBD/TIPP pollster Raghavan Mayur, who called the popular vote for Trump (yes, incorrectly, but this point has already been made), has expressed concern over the pressure polling organisations are under to produce tracking polls, questioning whether sufficient investment is being made to support such high-intensity polling.
Reliable prediction
Lately, relying on a poll of polls to deliver a more reliable prediction has come into vogue. Recent high-profile elections call this practice into question.
A poll of polls works if the error you are trying to smooth is sampling error, but it is becoming clear that polling error is more likely to be systematic. If the majority of polls are heading in the wrong direction, one good poll is much better than many bad ones.
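A quick simulation illustrates the distinction; the error sizes here are assumptions chosen for illustration, with each poll carrying independent sampling noise plus a bias shared by every pollster:

    import random

    random.seed(1)
    TRUE_MARGIN = 2.0   # the candidate's true lead, in points
    SHARED_BIAS = -3.0  # systematic error common to all polls (assumed)

    def run_poll():
        # Independent sampling noise of roughly 3 points, plus the shared bias.
        return TRUE_MARGIN + SHARED_BIAS + random.gauss(0, 3)

    polls = [run_poll() for _ in range(20)]
    average = sum(polls) / len(polls)
    # Averaging shrinks the random noise towards zero, but the average
    # still sits about 3 points from the truth: the shared bias survives.
    print(round(average, 1), "vs true margin of", TRUE_MARGIN)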
There are good and bad polls because there are good and bad pollsters. You do not need a licence to poll. All interested parties need to be more discerning about the source of polling; otherwise, poor polling will bring reputational damage and discourage established polling organisations from putting their heads above the parapet.
On this point, it is interesting to note that Pew, one of the most respected research organisations in the US, did not poll in this election.
We are fortunate in Ireland that many of these challenges have yet to mature, with the exception of voter turnout. In the most recent general election, Ipsos MRBI introduced a likelihood-to-vote adjustment, which was needed to improve the accuracy of our polls.
Ireland, however, has its own unique challenges, chief among them being timing. Nowadays we have more floating voters who typically leave it very late in the campaign to make up their minds, sometimes only deciding in the final days or hours and too late for polls to capture.
Polling in the US goes to the wire, allowing pollsters to measure at the optimal point on the curve.
Polling can be complex, and the temptation is to reduce the findings to a single number that is easy to digest. For the 2016 US election, commentators and polling organisations turned bookmaker, distilling a candidate's probability of success into a simple percentage. It could be said that this is just semantics, but arguably it helped create the illusion of certainty.
We now know the truth is always more nuanced, and a return to reporting poll findings in terms of percentage support for each candidate should be considered.