As late as Friday night, most major polls had Labor ahead 51-49 on the two-party preferred vote. This result mirrors a number of polling “upsets” globally over the past few years, from Trump’s election in the US to Brexit.
It’s clear polls are missing a cohort of conservative voters, which the Australian Liberal Party has now named “the quiet Australians”. This conservative vote was also under-represented in the 2015 UK general election, the 2016 Brexit referendum and the 2016 US election. This group is getting harder to quantify using traditional polling methods.
There are several reasons why polls can paint an inaccurate picture of elections, as they did over the past weekend in Australia.
In research, we get much stronger accuracy and participation when people feel engaged in the issues at hand. The current level of disengagement with political issues may be driving some of this under-representation. In research we often hear most strongly from those with the loudest, most vested voices. In this case, those who wanted change may have been more visible and audible in the polls than those content with the status quo.
Political opinion polling is a science that depends on extraordinarily accurate, representative samples, at a time when representative sampling is harder than ever. There is no single way of reaching every Australian cohort in an opinion poll. This contrasts with several years ago, when almost everyone had a landline and would answer the phone to complete a quick survey. Today, increasing numbers of Australians have internet-only landline connections, if they have a landline at all, and rely exclusively on mobile. Analysis of ACMA data predicts only 50 per cent of Australian homes will still have a landline in 2021, down from 83 per cent in 2011.
In the 2015 UK general election, the main cause of polling inaccuracy was found to be unrepresentative samples: the people who took part in polls did not accurately reflect the population as a whole.
Analysis of the 2016 UK Brexit polling indicated that online polls performed better than phone polls: 63 per cent of online polls correctly predicted a Leave victory, while 78 per cent of phone polls wrongly predicted that Remain would win. Some of the discrepancy may have been due to younger voters supporting Remain but not turning out to vote. A key difference in Australia, of course, is that because voting is mandatory, the entire electorate is represented.
Asking the right questions
In the 2016 US election, the vast majority of polls predicted that Hillary Clinton would beat Donald Trump. One poll that did correctly put Trump in the lead was the USC/Los Angeles Times Daybreak tracking poll. It allowed people to assign themselves a probability of voting for either candidate, rather than having to declare a single preference with 100 per cent certainty. Different types of questioning – rather than the current, standard voting-intention question – may end up providing greater insight. And we are now seeing discussion about whether social media data is a valuable predictor of voting intention.
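The difference between the Daybreak-style question and a standard voting-intention question can be sketched in a few lines. This is an illustrative toy only – the respondent figures below are invented, not drawn from any actual poll – but it shows how forcing leaners into a binary choice can produce a different headline number than averaging their self-assigned probabilities.

```python
# Illustrative sketch: each value is a (hypothetical) respondent's
# self-assigned probability (%) of voting for Candidate A; the
# remainder is their chance of voting for Candidate B.
responses = [90, 80, 60, 55, 45, 40, 35, 30, 20, 65]

# Standard voting-intention question: force each respondent into a
# single declared preference (whichever candidate they lean towards).
declared_a = sum(1 for p in responses if p > 50) / len(responses) * 100

# Daybreak-style question: average the self-assigned probabilities,
# preserving how certain (or uncertain) each respondent actually is.
probabilistic_a = sum(responses) / len(responses)

print(f"Declared-intention share for A: {declared_a:.1f}%")      # 50.0%
print(f"Probability-weighted share for A: {probabilistic_a:.1f}%")  # 52.0%
```

With these invented figures, the binary question reports a dead heat while the probability-weighted average gives Candidate A a two-point lead – the kind of gap that can separate a correct call from a wrong one.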
So when it comes to research, there is increasing inaccuracy built into the methodologies, and it takes time and money to overcome.
Political opinion polling, such as that we see in our media, is an unusual form of research in the degree of accuracy required and the level of public scrutiny placed on it throughout an election cycle. Commercial organisations recognise that insight and strategy are based on investing time and money in seeking viewpoints from a variety of data sources, to ensure all views are accurately represented.
Amanda Hicks, partner, KPMG Acuity
This article first appeared on the KPMG NewsRoom.