Hate speech or harmful speech is any expression (speech, text, images) that demeans, threatens, or harms members of groups with protected characteristics. It includes slurs, name-calling, discriminatory and exclusionary speech, incitement to hatred and violence, and harassment. Online communities allow hate to spread particularly quickly. In this article, Dr Mihaela Popa-Wyatt explores the main questions regulators and policymakers must address, including the rights and protections to be balanced, and questions of practical enforcement.
- As the Online Safety Bill comes into force, policymakers must regularly review whether to modify the law in order to better hold platforms responsible for user content.
- Regulating online content poses challenges, such as identifying the legally responsible actors for online hate speech and balancing the right to free speech against the desire to limit harmful content.
- Tax deterrents that remove the profits from publishing hate and disinformation could encourage media entities to reduce the amount of hate speech they disseminate.
Defining hate speech
There is no universally accepted definition of ‘hate speech’ in law or common parlance. Rather, it is a broad concept, capturing a variety of forms of speech. Various treaties that tackle discrimination use different definitions to designate the type of speech that ought to be subject to criminal or civil law, or other regulatory bodies. Hate speech is a criminal offence under EU law, whilst the UK has specific laws prohibiting the expression of hatred and threats towards individuals based on protected group membership. In the USA, however, First Amendment rights mean that much harmful speech counts as protected speech. There are considerable difficulties in agreeing on uniform terminology for what counts as ‘hate speech’, the severity of harms it causes, and legal measures to combat it.
Harmful speech in all its forms is amplified online, given the speed and reach of dissemination. The Online Safety Bill in the UK focuses on “harmful, false and threatening communications offences”, primarily on individually targeted harms such as threats and physical injury – missing important dimensions of the specific nature of harms and their implications for vulnerable individuals and groups.
In addition to protected group categories (race, gender, ethnicity, nationality, religious affiliation, sexual orientation, gender identity, disability, age, indigenous origin, etc.), an encompassing definition of harmful speech should make provision for new characteristics in need of legal protection (e.g. obesity, immigration status) and remain open to amendment as further groups require protection. Currently, the Online Safety Bill does not even mention ‘gender’ as a protected characteristic; hence, misogyny and the language of gender-based violence are not covered.
Current regulation
The Online Safety Bill will establish a regulatory framework for online services by placing duties of care on providers such as user-to-user services (e.g. social media platforms) and search services. The 2017 internet safety strategy green paper identifies three principles that underpin the approach to online safety:
- What is unacceptable offline should be unacceptable online.
- All users should be empowered to manage online risks and stay safe.
- Technology companies have a responsibility to their users.
It will give Ofcom the power to regulate and levy fines against non-compliant providers. The government’s stated aim in introducing the Bill is “to make Britain the best place in the world to set up and run a digital business, while simultaneously ensuring that Britain is the safest place in the world to be online”. However, the Bill, if enacted, continues a method of regulating individual speech through the criminal courts. It is therefore applicable only to a small number of the most serious cases, and not practically applicable at scale. Regulation at scale would require a new regulatory framework – new legislation drawn up to replace or work alongside the Bill.
The Challenges of Regulating Online Speech
Bringing the Responsible Actor to Account
The first question is: who is the legally responsible actor for online hate speech? Is a hateful utterance the responsibility of the individual user who made it? Or does the platform have the responsibilities of a publisher? Today, social media companies have legal duties to remove harmful content from their platforms. Enforcement by companies and regulators is patchy – leading to concerns that filtering all online speech is too burdensome to be practical. Various countries have laws holding the individual responsible. However, enforcement at scale is challenging, particularly if a criminal offence has to be prosecuted.
There is simply too much speech to regulate by prohibition – and regulation of individual speech through the criminal courts is an unwieldy instrument. One alternative is to treat clear but less serious breaches of the law in the way we deal with minor driving offences such as speeding. In that domain, enforcement is largely automated, using a system of fixed penalty notices and/or penalty points. Under such a framework, those who persistently breach the law would accumulate penalty points that ultimately result in a loss of service access, as sketched below. This would provide a practical way of incentivising people to be good users of a public space.
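To make the analogy concrete, here is a minimal sketch in Python of how an automated fixed-penalty ledger might work. The point values, the suspension threshold, and the ledger itself are purely illustrative assumptions, not provisions of the Bill or of this proposal.

```python
from collections import defaultdict

# Hypothetical values -- illustrative only, not taken from any legislation.
POINTS_PER_BREACH = 3       # points issued with each fixed penalty notice
SUSPENSION_THRESHOLD = 12   # points at which service access is withdrawn

ledger = defaultdict(int)   # user id -> accumulated penalty points

def record_breach(user_id: str) -> str:
    """Record a confirmed minor breach and return the resulting sanction."""
    ledger[user_id] += POINTS_PER_BREACH
    if ledger[user_id] >= SUSPENSION_THRESHOLD:
        return "suspend_access"    # persistent offenders lose access
    return "fixed_penalty_notice"  # one-off breaches get an automated notice

# Example: a fourth confirmed breach tips the user over the threshold.
for _ in range(4):
    outcome = record_breach("user-42")
print(outcome)  # -> "suspend_access"
```

The design point, as with speeding penalties, is that individual sanctions stay small and automated, while persistence is what triggers the serious consequence.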
Impinging on Rights to Speech
The second question is how to balance the right to free speech with the desire to limit hate speech. Speech regarded as inoffensive by some may be regarded as hate speech by others. There is a concern that prohibiting speech that some find harmful will prevent legitimate political expression, or could later be used to do so, including by possible future bad actors – powerful actors with the resources to deploy a law to silence others with legitimate views. UK libel laws, for example, are known to be used in this way, including to suppress investigative journalism.
Tax on harmful content – a solution?
Tax deterrents could provide an alternative framework to the criminal law – one that could ameliorate, rather than eliminate, harmful speech at scale. The rationale is that service providers who allow and accelerate the dissemination of harmful speech should pay a tax to compensate for the harm, thus reducing the financial incentive to propagate it.
This is similar to taxes on tobacco and alcohol. In particular, a tax proportional to the aggregate quantity of harmful speech disseminated by providers (online platforms, internet service providers) could be levied on revenue or profit (as per pillar one of the Digital Services Tax, introduced in April 2020).
The principal implementation issue is how to assess the tax due. Tax authorities could authorise third-party providers to assess, on a sample basis, the quantity of harmful speech published according to agreed definitions (deliberated by regulators and policymakers in new legislation), with the tax authority as the customer and the publisher as the billed entity for the assessment service. This would be a simple and scalable mechanism, illustrated in the sketch below. It could incentivise social media entities to reduce the amount of harmful speech they disseminate – without the complexity of criminal law or the restriction of individual rights to speech – and position the UK as a global pioneer in confronting harmful content.
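As a rough illustration of how a sample-based assessment could feed a proportional levy, the Python sketch below estimates the share of harmful content from a random sample and scales a charge by it. The sampling approach, the stand-in is_harmful classifier, and the 2% rate are assumptions made for illustration, not figures from the article or from the Digital Services Tax.

```python
import random

def estimate_harmful_share(items, is_harmful, sample_size=1000):
    """Estimate the proportion of harmful items from a random sample of published content."""
    sample = random.sample(items, min(sample_size, len(items)))
    return sum(1 for item in sample if is_harmful(item)) / len(sample)

def harmful_content_tax(revenue, harmful_share, rate=0.02):
    """Levy a charge proportional to the estimated quantity of harmful speech disseminated."""
    return revenue * harmful_share * rate

# Toy corpus: roughly 1 in 20 items flagged by a stand-in classifier.
corpus = ["harmful" if i % 20 == 0 else "benign" for i in range(10_000)]
share = estimate_harmful_share(corpus, lambda item: item == "harmful")

# £100m of revenue at a purely illustrative 2% marginal rate, scaled by the
# sampled share of harmful content (~5%), yields a charge of roughly £100,000.
print(round(harmful_content_tax(100_000_000, share)))
```

Because only a sample needs to be assessed, the cost of the assessment grows with the desired statistical confidence rather than with the total volume of content – which is what makes the mechanism scalable.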