
The Ethics of Regulating Extremism Online: Five Elements for Content Moderation Frameworks

11th March 2021 Dr. Alastair Reed

In the wake of the storming of the US Capitol on 6 January, we saw unprecedented responses by tech companies to regulate and control the extremist content on their platforms. This reached its peak with the suspension of Donald Trump, who was still the sitting US president, from numerous social media platforms. Trump supporters, angered by his deplatforming, flocked to alternative social media sites, including Parler, given its claims to respect “free speech”. Shortly thereafter, citing the app’s violations of terms of service, Apple and Google removed the Parler app from their stores and Amazon terminated its web hosting contract, taking Parler fully offline. While many praised the actions of the tech companies in countering support for extremism online, others, including the tech companies themselves, raised questions about the implications of these actions for free speech and the global power and potential overreach of tech companies.

Challenges associated with deplatforming and content moderation are nothing new, and tech companies have grappled with these issues since their inception. However, following the rapid rise of Islamic State (IS) early in the last decade, and the group’s expert exploitation of social media to disseminate its propaganda, tech companies’ roles in regulating extremist content have come under greater scrutiny. Governments and tech companies have struggled to find a balance between removing extremist content and protecting freedom of speech. Whilst decisions to remove IS-branded content were relatively straightforward, given the group’s designation as a terrorist organisation, determinations around extremist content moderation vis-à-vis intent and freedoms of religion and speech among non-designated groups and individuals are more complicated. Today, as focus shifts to regulating far-right extremist content online, these challenges persist and are compounded by the extent to which far-right extremist ideologies and groups have become intertwined with mainstream right-wing political parties and rhetoric.

Removing or restricting content online, by definition, requires limiting individual rights, which speaks to the importance of clearly articulating the ethical arguments behind these decisions. An ethical framework can help conceptualise and determine the regulation of speech, and help avoid ad hoc decision making that may be biased, unfair, unjustified, or simply inexplicable. Recognising this, social media companies have taken steps to clarify content restrictions on their platforms and have even established independent entities to adjudicate their content removal decisions. However, more remains to be done. This post summarises a recent article in which we argue that five elements are important in considering ethical frameworks for regulating extremist content online: Free Speech, Moral Authority, Transparency, Privacy, and Efficacy.

Free speech

Free speech should be the starting point for any ethical approach to content regulation. While not protected everywhere, free speech is typically seen as a fundamental right in liberal democracies. Still, we do accept constraints on free speech in certain circumstances: because of the risk to people’s safety, one cannot falsely yell ‘fire’ in a crowded theatre. Any ethical framework must clarify the occasions when free speech can be constrained and the ethical justifications for doing so, so that it can be consistently and fairly applied. Drawing on aspects of rights-based approaches, utilitarianism, and virtue ethics, we argue for a pluralistic approach in which free speech can be justifiably disrupted if the speech acts display significant disrespect for individuals, are likely to cause significant harm, and/or form part of a significant pattern of extremist behaviour. Simply holding extreme beliefs is not a justification for the restriction of free speech: the potential impacts of the actions must be taken into consideration. However, given the protected status of free speech, such disrespect, harms, or patterns of behaviour must be significant.

Moral Authority

Free speech is important within liberal democracies, but we must recognise that social media companies are private institutions. Just as we cannot demand a private news company air our opinions, it is questionable whether these companies have a moral obligation to support the public communication of any and all individual beliefs and speech. Free speech, therefore, is not just about what gets said but who decides what gets said. This raises questions about the legitimacy and moral authority of the decision maker: in this case, those hosting the content.

We suggest here that determinations of free speech online rely in part on what type of institutions social media companies are. Are social media providers private companies, or are the services they provide akin to public infrastructure? If they are the former, then it would seem that they have an unfettered right to determine who uses their platforms and how. If they are the latter, however, they may have a wider responsibility to uphold the rights and safety of their users. Alternatively, should we consider social media companies to be more like traditional media organisations, with all the editorial responsibilities over content this entails? The nature of an institution determines the nature of its responsibilities and the extent of its moral decision-making authority. Governments, or a given society more generally, are likely responsible for making these decisions; however, this is a complex and contested discussion.

Transparency

With regard to processes around how extremist content is removed, justice does not just need to be done, it needs to be seen to be done. At the heart of this is the principle of transparency. It is not sufficient that decisions taken to remove or restrict content online are ethically justified; it is also important that the process of how and why these decisions were taken is clear and transparent. This is essential not just to maintain trust in the system, but also to allow scrutiny of the process. No process is perfect, and there will invariably be mistakes. Transparency entails that any constraints on free speech must be accompanied by an open and effective appeal process to deal with such mistakes. A particular area of concern for transparency is the application of artificial intelligence (AI) and machine learning (ML) to identify extremist content online. The use of AI/ML faces the additional challenge of ensuring that the process has not incorporated any unintentional biases into its identification of content. This makes the transparency and scrutability of the AI/ML decision-making process essential.

Privacy

Closely intertwined with the challenges of transparency are questions of privacy. We have a reasonable expectation of privacy in our daily lives, and questions of online privacy have become increasingly complicated. This becomes relevant when we consider what expectation of privacy applies to those who share extremist material or views online. For example, what personal information should social media companies share with law enforcement agencies? What levels of privacy and protection should a user expect from a social media company? And, if personal information is to be shared with law enforcement agencies, should the users be notified? Addressing these issues is beyond the authors’ current scope; however, as deplatforming of extremist content on public social media channels becomes increasingly common, questions about deplatforming and reporting on extremist content in otherwise private social media channels are becoming even more difficult and important.

Efficacy

Fundamentally, countering violent extremism online involves balancing rights with the need to diminish the threat posed by violent extremism. However, if we are going to argue for the restriction of such fundamental rights as free speech, we should be clear that such restrictions will actually work, or at least have a realistic expectation of achieving success. Any ethical justification will ring hollow if rights are restricted for an end that is unachieved or unachievable. Hence the fifth key element is efficacy. Any restriction of rights in the name of countering violent extremism must be based on a reasonable expectation that the restrictions will work. This highlights the importance of addressing the continued lack of effective evaluation of programmes that counter violent extremism, both online and offline. Unless we have reasonable levels of certainty that these approaches work, it is difficult to present an ethical justification for interfering with individuals’ rights.

Concluding Thoughts

The overall point of these five key elements is not to offer detailed solutions to the ethical issues raised by the regulation of extremist content online. Rather, these elements draw attention to the type of problems that arise when attempting to limit the spread of ideas on social media and the ethical considerations that should be applied to restrictions on content sharing.

Alastair Reed is Associate Professor at the Cyber Threats Research Centre at Swansea University, and Executive Director of the RESOLVE Network.

Adam Henschke is a Senior Lecturer at the Crawford School of Public Policy, at the Australian National University, Canberra, Australia.

Kateira Aryaeinejad is Research and Project Manager at the RESOLVE Network.