On 21 July, Parliament’s Intelligence and Security Committee published a report which found that the UK Government had failed to counter Russian interference in the 2016 Brexit referendum, despite a mounting body of evidence of global efforts to use and abuse digital platforms to influence democratic outcomes. As a result, how can we be sure that what we are being told is the ‘truth’? In this blog, originally written for our On Digital Trust publication, Professor Rachel Gibson discusses what policymakers can do to mitigate the most harmful repercussions of misinformation in our democracy.
- Governments, social media providers and tech businesses are facing increasing pressure to be more accountable and transparent
- However, tracing and stemming the growth of informational ‘ills’ is an increasingly difficult task
- Policymakers must develop and invest in the capacity to deter misinformation through detection
- Here, engaging with interdisciplinary research is vital
The impact of online activity designed to disrupt democracy and sway elections is a matter of growing concern worldwide. From cyber-attacks and the deployment of malware, to data leaks and the spreading of ‘fake news’, subversive activity to influence political outcomes is becoming more sophisticated and widespread. Much of it is taking place on social media platforms. But what can be done to protect citizens, society and democracy itself?
Growing evidence, mounting pressure
Official investigations into the misuse of voters’ personal data during political campaigns have increased following the Cambridge Analytica scandal, a watershed moment which uncovered the harvesting of millions of people’s Facebook profiles. There is also growing evidence of concerted efforts by anonymous ‘hostile’ actors to use AI to automate the spread of misinformation during elections in a bid to deceive the electorate and disrupt outcomes. Meanwhile, reports of the deliberate hacking of political parties’ and candidates’ emails, and of malicious ransomware attacks on commercial and public agencies’ operations, are on the rise.
Efforts by governments to address these problems are mounting, as is pressure on social media providers and other tech businesses to be more accountable and transparent in their practices. Such interventions are clearly important. However, in order to effectively deter these threats to the political process we first need to define the range and nature of the problems we face more clearly, determine which ones we can tackle now – given the resources available – and outline the range of mechanisms that exist, or are within reach, to deal with the most serious of these.
The scale of the problem
Debate about the impact of new communication technology on democracy preceded the arrival of the internet. The invention of the printing press, the telegraph, radio and television all fuelled hopes and fears about the diffusion of new ideas and the empowerment of ordinary citizens. The emergence of the World Wide Web in the early 1990s was no different. For Howard Rheingold, one of the early ‘gurus’ of the online community, the internet provided the opportunity to transform society and ‘revitalise citizen-based democracy’. Decades on, however, the narrative has shifted quite profoundly. The talk now is of ‘dark web’ activity, where voters are profiled without their consent, and where ‘deep fakes’, ‘bots’ and ‘troll factories’ lurk, seeking to confuse and manipulate an unsuspecting electorate.
While there is no doubt such techniques are being deployed in elections, there is surprisingly little systematic evidence or consensus on how widespread or how effective they are. In 2017, survey research from the US reported that the average American adult saw at least one fake news story during the Presidential campaign of 2016, that most such stories favoured Donald Trump, and that over half of those exposed actually believed what they read. However, analysis of voters’ Twitter feeds found that fake news accounted for only 6% of all news consumed on the platform during the campaign, and that it was heavily concentrated among certain users, with just 1% of users accounting for around 80% of exposure to fake news stories.
In 2017, the public release of tweets and Facebook posts from accounts linked to the Russian Internet Research Agency (IRA), by the US Senate intelligence committee, prompted a flurry of investigations into efforts to interfere with the 2016 presidential election. While there was universal agreement that the IRA had embarked on a coordinated effort to confuse and demobilise American voters, particularly those likely to support the Democratic candidate Hillary Clinton, there were mixed verdicts on its success in doing so. Some research, tracking the timing of tweet releases against subsequent changes in public opinion polls, suggests a concerning pattern of linkage. However, other research argues trolls played only a minimal role in the Twitter election debate compared with ‘authentic’ accounts, and that despite having an extensive reach, the IRA’s automated messages had limited power to persuade, given their crude expression and syntax.
Misinformation, disinformation and mal-information
Given the difficulties associated with measuring and tracing the impact of these new rogue actors and algorithms, where should policymakers be targeting their efforts? We might start by dissecting the problem of election misinformation according to two criteria – importance and tractability. What is of most concern, and what is most amenable to governmental intervention? For example, playing fast and loose with facts in order to promote oneself and discredit one’s opponents is hardly a new campaign strategy. Tasking bodies such as the Electoral Commission with the job of deciding whether an advertisement crosses the line from truth to lie risks becoming a time-consuming exercise that ends up enmeshed in court proceedings. Even where false information is shared or posted during an election, if the person(s) responsible do so in ignorance, how far should their actions be penalised? Again, the blurred lines of accountability and proportionality threaten to stymie any attempt at an effective regulatory clampdown.
Leaving to one side concerns about the flow of ‘standard’ propaganda and the accidental diffusion of misinformation that digital channels encourage, there is a range of more malicious and coordinated misuses of information to which social media is particularly prone. These include attempts by foreign and domestic actors to actively misinform voters, or to engage in what we might label disinformation campaigns. The goal here is to deliberately decrease the amount of accurate information in society by increasing the supply of false and extremist information in circulation. While the result of such activity may simply be increased confusion and distrust among the public, it may also have a more specific end of encouraging support for a preferred candidate while discouraging votes for their rivals. One step beyond this type of social ‘hacking’ are more targeted and illegal uses of the internet designed to spread ‘true’ information in order to disrupt and damage. This type of mal-information includes the leaking of confidential data and information designed to discredit opponents, or the promotion of hate speech online towards an individual based on personal characteristics such as race or religious identity. The authors of such attacks will of course take steps to cover their tracks. However, this type of strategic and coordinated misuse of technology often leaves some kind of digital breadcrumb trail that is susceptible to detection and investigation.
Where next?
Given the wide range of informational ‘ills’ that digital technology can now release into the political ecosystem, the question arises: what can be done to stem their flow? Numerous reports, such as the Digital, Culture, Media and Sport (DCMS) committee publication Disinformation and ‘fake news’, seek to map this terrain. Distilling their contents and applying my proposed combined importance and tractability ‘test’, I have identified four proposals for positive progress in this area:
- Mandate providers of social media platforms to maintain, and make available to government agencies, accurate records of all political advertising purchased on their platforms (with no minimum threshold applied). All paid advertising should carry an imprint that identifies who funded it. In addition, the companies should actively deploy ‘fake news’ teams to identify the categories of information misuse highlighted above, i.e. attempts at mal-information, disinformation and also, where possible, misinformation. These teams would feed into my next recommendation.
- A fact-checking consortium should be established for elections, as a joint initiative between government, media, and platform-providing companies. This would carry out impartial checks on social media accounts suspected of spreading dis- or mal-information and provide corrections. The consortium would be promoted as a ‘trusted’ go-to source for citizens to report suspect stories and to fact-check campaign claims.
- New government-funded Democratic Digital Defence Teams should be set up to work across key departments and agencies such as the Electoral Commission and Information Commissioner’s Office. These units would recruit highly skilled data and social scientists to develop AI early warning systems, using techniques of machine learning and network analysis to spot bots and other malign actors seeking to spread false news during elections (a simple illustrative sketch of this kind of detection follows this list).
- Taking a longer view, there needs to be a more concerted and compelling effort to educate the next generation of voters about the need for vigilance when consuming news and information online. Delivered as a variant of citizenship classes, these lessons would focus on instilling the digital security skills required for voting, and particularly on ways to distinguish real from fake news stories. This could be linked to the teaching of a wider set of skills necessary for staying safe online in more general day-to-day activities such as finance and banking, purchasing goods, curating social media profile content and email etiquette.
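To make the third recommendation more concrete, the sketch below illustrates, in Python, the kind of signal an early warning system might compute: scoring accounts for automation-like behaviour (highly regular posting intervals, heavy duplication of identical messages) and then using a simple co-posting network to surface clusters of accounts pushing the same content. This is a toy example under stated assumptions, not a description of any real monitoring system; the account names, posts, scoring weights and threshold are all invented for illustration.

```python
# Illustrative sketch only: flag accounts whose posting behaviour looks automated,
# then link accounts that post identical texts so coordinated clusters stand out.
from collections import defaultdict
from statistics import pstdev
import networkx as nx

def automation_score(posts):
    """Score one account from its list of (timestamp_in_seconds, text) posts."""
    if len(posts) < 3:
        return 0.0
    times = sorted(t for t, _ in posts)
    gaps = [b - a for a, b in zip(times, times[1:])]
    regularity = 1.0 / (1.0 + pstdev(gaps))           # near-constant gaps -> close to 1
    texts = [text for _, text in posts]
    duplication = 1.0 - len(set(texts)) / len(texts)  # share of exactly repeated messages
    return 0.5 * regularity + 0.5 * duplication       # illustrative weighting only

# Hypothetical monitoring snapshot: account -> list of (timestamp, text) posts.
accounts = {
    "acct_1": [(0, "Polling stations are closed today"), (60, "Polling stations are closed today"),
               (120, "Polling stations are closed today"), (180, "Polling stations are closed today")],
    "acct_2": [(5, "Polling stations are closed today"), (65, "Polling stations are closed today"),
               (125, "Polling stations are closed today")],
    "acct_3": [(10, "Looking forward to voting tomorrow"), (5000, "Great turnout at my local station"),
               (9000, "Counting has started")],
}

scores = {name: automation_score(posts) for name, posts in accounts.items()}

# Network analysis step: connect accounts that share identical texts, so that
# coordination shows up even when each account looks benign on its own.
text_to_accounts = defaultdict(set)
for name, posts in accounts.items():
    for _, text in posts:
        text_to_accounts[text].add(name)

graph = nx.Graph()
graph.add_nodes_from(accounts)
for shared in text_to_accounts.values():
    shared = sorted(shared)
    for i, a in enumerate(shared):
        for b in shared[i + 1:]:
            graph.add_edge(a, b)

FLAG_THRESHOLD = 0.5  # illustrative cut-off only
for cluster in nx.connected_components(graph):
    flagged = [a for a in cluster if scores[a] >= FLAG_THRESHOLD]
    if flagged:
        print("Review cluster:", sorted(cluster), "flagged accounts:", flagged)
```

In practice, any such system would draw on far richer behavioural and network features, and flagged clusters would be passed to human analysts for review rather than acted on automatically.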
This article was originally published in On Digital Trust, a collection of essays providing analysis and ideas on the use of data in healthcare, crime prevention, and democracy in the current political climate. You can read the full publication here.
Policy@Manchester aims to impact lives globally, nationally and locally through influencing and challenging policymakers with robust research-informed evidence and ideas. Visit our website to find out more, and sign up to our newsletter to keep up to date with our latest news.