Mike Addelman, in collaboration with the BES team, blogs about work to uncover just why the opinion polls before the 2015 General Election were so wrong.
One of the world’s longest-running investigations into political attitudes and voting behaviour, the British Election Study (BES), has been amassing huge quantities of data on every General Election since it launched more than 50 years ago. And because the country is going through rapid changes in its party system and electoral behaviour, it’s an important time to be running the study, putting the BES at the heart of crucial debates such as Britain’s EU membership, Scottish independence, and the decline of the two-party system.
In 2012, the Economic and Social Research Council, which funds the BES, awarded the 2015 study to a consortium led by The University of Manchester in collaboration with the Universities of Oxford and Nottingham. The Manchester members of the leadership team are based at the home of quantitative analysis in the social sciences: the Cathie Marsh Institute for Social Research.
Since taking over the study in 2013, the new BES team has used a variety of methods to measure public attitudes and voting behaviour. An internet panel of 30,000 people is being followed over time, covering the 2014 European elections, the Scottish independence referendum, the 2015 UK General Election (including a daily sample of 1,000 panellists during the ‘short’ campaign), and next year’s devolved and local elections and the EU referendum. This ongoing inter-election panel asks an online sample of respondents many of the same questions at key points in the electoral cycle, so the study can track the evolution of voter attitudes over time and identify the factors influencing their development.
We have also carried out an in-person survey of nearly 3,000 electors following the General Election, which forms part of the 50-year time series of post-election cross-section surveys. Because it is a nationally representative probability sample, it provides a benchmark for measuring changes in political attitudes, participation and voting behaviour. It is uniquely placed to address questions about why the opinion polls got it wrong in the run-up to the General Election. And as part of a partnership with the Electoral Commission, the BES team is validating voter turnout for thousands of respondents.
The BES in-person survey data, released in November 2015, have led to a new understanding of how sampling affected the General Election polling. The data show a closer match to the Conservative lead over Labour at the 2015 General Election than the pre-election polls achieved, which failed to forecast the scale of the Conservative victory. Analysis of the new data by the BES team suggests that this closer match was made possible by probability-based sampling methods and successful targeting of hard-to-reach groups, including non-voters. The results of the in-person survey have been keenly awaited, because they have implications for understanding what went wrong with the polls in 2015.
We have been studying why the polls predicted the 2015 General Election so badly, and have come up with some theories. In the BES in-person survey, 40.63% of voters said they voted Conservative, 32.65% Labour, 7.08% Liberal Democrat, 10.71% UKIP and 4.54% SNP. While it is not a perfect sample of the whole population, the data nevertheless match the turnout well and have a representative distribution of age groups. It is these two factors that helped the BES get much closer to the election result, although further research is still needed.
Though the survey was conducted just after the General Election, the team are reasonably sure that the results are not down to respondents knowing the outcome. Were that the case, we would expect to find more Conservative voters among respondents interviewed longer after the election, and the analysis suggests that was not so. Other popular explanations for why the polls were wrong before the election, including ‘shy Tories’, late swing, and systematically different preferences among ‘don’t knows’, have been more or less discounted by the team, who found no evidence for them.
Opinion pollsters are very good at making their samples reflect the general population, using a system of weighting. But the general population and the electorate are very different things, because around 40% of adults don’t vote, and those non-voters are also less likely to answer political polls. That means polls contain too many voters from low-turnout groups such as young people.
In the BES in-person survey, 43% of under-30s said that they didn’t vote, yet polls will often find fewer than 15% of them saying so. That means there are far too many young voters in polling samples and not enough young non-voters. And because young people tend to lean Labour, we end up with too many Labour voters in the polls, as happened before the election. That could at least partly explain why there was a bias in favour of the Labour party in May.
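To see how that turnout mismatch pulls an estimate towards Labour, here is a minimal back-of-the-envelope sketch (this is not BES code or methodology). The two turnout figures for under-30s come from the paragraph above: a true turnout of 57% (43% non-voters, per the BES in-person survey) versus an implied turnout of 85% (fewer than 15% admitting they didn’t vote, as in typical polls). Every other number — the population’s age split, turnout among older adults, and Labour support within each age group — is invented purely for illustration.

```python
def labour_estimate(young_pop_share, young_turnout, old_turnout,
                    young_labour, old_labour):
    """Labour's share of the vote implied by age-group turnout and support.

    Each age group's weight in the electorate is its population share
    times its turnout; Labour's overall share is the turnout-weighted
    average of its support in each group.
    """
    young_voters = young_pop_share * young_turnout
    old_voters = (1 - young_pop_share) * old_turnout
    total_voters = young_voters + old_voters
    return (young_voters * young_labour + old_voters * old_labour) / total_voters

# Turnout for under-30s: 0.57 from the BES survey (43% non-voters),
# 0.85 as implied by typical polls (<15% admit not voting).
# The remaining inputs (25% of adults under 30, 70% turnout among the
# rest, Labour support of 45% young vs 30% older) are illustrative only.
true_share = labour_estimate(0.25, 0.57, 0.70, 0.45, 0.30)
poll_share = labour_estimate(0.25, 0.85, 0.70, 0.45, 0.30)

print(f"Labour share with BES-measured young turnout:  {true_share:.1%}")
print(f"Labour share with poll-implied young turnout:  {poll_share:.1%}")
```

Even with these made-up support figures, giving young voters the weight the polls implicitly gave them nudges Labour’s estimated share upwards by about a point — the same direction as the bias seen in May.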
Without the BES, it would be so much harder for social scientists, journalists and politicians to understand how long-term changes in the country’s social, political and economic circumstances affect voter attitudes and electoral outcomes. The fact that the BES has provided data on every general election since 1964 makes it one of the most important resources for understanding elections and changes in voting behaviour.