Social Control, Terrorist Content and the Online Safety Bill
1st September 2023 Sam Williams

*Please note: The provisions of the Online Safety Bill are correct as of 27 July 2023, and are subject to change*

Social control is a highly contested concept. For the purposes of this Insight, in simplified terms, social control denotes the methods by which society regulates and controls behaviour. The ways in which social control is implemented have changed considerably in recent times. A particularly pertinent trend is the ‘responsibilisation’ of social control, whereby governmental actors increasingly task non-state entities with regulating and controlling certain societal behaviours.

The United Kingdom’s Online Safety Bill (hereafter, The Bill), currently at its third reading in the House of Lords, encapsulates this trend. Part 3 of The Bill imposes “duties of care on providers of regulated user-to-user services [including social media platforms such as Facebook and Twitter] and regulated search services [such as Google]” to regulate various categories of content on their services. These categories include terrorist content, Child Sexual Exploitation and Abuse (CSEA) content, and mis/disinformation, amongst others. The Bill outlines specific obligations for internet services, such as removing content that falls under sections of the Terrorism Acts (2000 and 2006) and producing “transparency reports” explaining their regulation processes.

To enforce these duties, Ofcom has the power to impose fines of up to £18 million or 10% of the service’s global revenue, whichever is higher; to apply to the courts to impose business disruption measures; or to bring criminal sanctions against senior managers of ‘offending’ services. In addition to enforcing these duties, Ofcom is responsible for preparing codes of practice for services to abide by, which are subject to modification by the Secretary of State.

Whilst The Bill still includes ‘traditional’ or ‘formal’ forms of legal regulation, it represents a shift towards ‘informal’ measures of social control, giving internet services considerable autonomy over how they regulate content on their sites and how they define which specific content must be removed. The remainder of this Insight posits that the presence of these trends of social control forecasts an increased risk of ‘over-regulation’, magnifying several issues with The Bill.

Ambiguity and a Broadening Scope

The impact of these trends of social control is exacerbated by the ambiguity and lack of definitional clarity in The Bill, echoing long-standing challenges in defining terrorism. The Bill does offer some guidance as to what terrorist content services must regulate, by prohibiting content deemed illegal under several existing pieces of terrorism legislation. However, definitional issues persist when applying the definitions provided by the Terrorism Acts, as illustrated by a series of court cases prosecuted under those Acts.

A discussion of the various definitions of terrorism is beyond the scope of this Insight. What is relevant, however, is the limits this lack of clarity places on the potential impact of The Bill. Firstly, this uncertainty threatens The Bill’s ability to create uniformity in the regulation of terrorist content, not only making it more difficult for services to establish protocols but also clouding users’ understanding of what they can and cannot share online. Additionally, research has long shown the perils of removing terrorist content from certain platforms, only for it to reappear on alternative, less-regulated sites. Such ambiguous legislation increases the likelihood of wide variation across the policies of different platforms, and in turn the likelihood of terrorists migrating from platform to platform.

Current provisions predominantly leave the task of defining which content should be deemed ‘terrorist’ to internet services – a task they are not necessarily qualified to undertake. Two potential scenarios flow from this. Firstly, services might adopt overly narrow definitions, allowing some terrorist content to remain on their sites and exposing them to significant fines. Secondly, the lack of clarity could foster a culture of over-regulation. Internet services do not exist with the sole intention of reducing the presence of terrorist content on their sites: their central goal is to maximise profits. The risk of substantial fines, combined with the lack of clarity on which specific content should be removed, could prompt them to adopt an overly broad definition of terrorism, resulting in non-terrorist content being mistakenly removed. A scenario in which the fear of incurring fines outweighs the protection of human rights creates widespread concerns for freedom of expression.

The ambiguity evident in The Bill does not end with the definitions, or lack thereof, of terrorism. Section 18(2) obliges internet services to have “regard to the importance of protecting users’ rights to freedom of expression within the law”, without defining the extent of this duty. “Have regard to” implies a far ‘softer’ duty than the harder duties imposed on services to remove content. This only increases the likelihood that services will prioritise their ‘harder’ duties to remove content over the ‘softer’ duty to protect freedom of expression.

These concerns are intensified by The Bill’s provisions instructing services to “use accredited technology to identify…and prevent individuals from encountering terrorist content” (S.111). While employing such technology offers the advantage of swiftly regulating a vast volume of content, it comes at the cost of nuanced analysis of removal decisions. Consequently, this increased emphasis on regulatory technology, coupled with the definitional concerns above, risks removing valuable content simply because it contains terrorist-related material. For example, there have been widespread concerns surrounding the removal of content from Syrian human rights activists documenting atrocities, and the censorship of journalistic content, particularly in autocratic regimes.
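To make this concern concrete, the simplified sketch below illustrates how a purely automated ‘score and remove’ filter of the broad kind S.111 envisages might behave. This is a hypothetical illustration rather than a description of any real system: the classifier, threshold, and example posts are all invented for the purpose of the example.

```python
# Hypothetical sketch only: a minimal 'score and remove' moderation pipeline.
# The classifier, threshold, and example posts are invented for illustration;
# the Bill does not specify what 'accredited technology' looks like in practice.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    context: str  # e.g. "propaganda", "journalism", "human rights documentation"


def terrorist_content_score(post: Post) -> float:
    """Stand-in for an automated classifier, returning a risk score in [0, 1].

    It scores surface features of the text; it has no reliable access to the
    intent or context that a human reviewer would weigh.
    """
    risky_terms = {"attack footage", "martyrdom", "bomb-making"}
    hits = sum(term in post.text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms) + (0.4 if hits else 0.0))


# A low threshold is the commercially 'safe' choice when large fines loom.
REMOVAL_THRESHOLD = 0.5


def moderate(posts: list[Post]) -> list[tuple[Post, str]]:
    """Remove anything scoring above the threshold, regardless of context."""
    return [
        (post, "remove" if terrorist_content_score(post) >= REMOVAL_THRESHOLD else "keep")
        for post in posts
    ]


if __name__ == "__main__":
    posts = [
        Post("Attack footage glorifying the perpetrators", "propaganda"),
        Post("Attack footage documenting atrocities for a war-crimes archive",
             "human rights documentation"),
    ]
    for post, action in moderate(posts):
        # Both posts are removed: the score reflects the same surface terms,
        # not the difference in purpose.
        print(f"{action:6s} | {post.context:28s} | {post.text}")
```

The point of the sketch is that a threshold-based pipeline has no channel for the contextual judgement – documentation, journalism, counter-speech – that distinguishes the two example posts, and that the rational commercial response to the threat of fines is to lower the threshold, which is precisely the over-removal dynamic described above.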

This situation, therefore, leaves governments caught between a rock and a hard place. Regulating terrorist content online is seen as essential, both for security reasons and because of public pressure and media attention. However, legislation which aims to do this invariably lacks clarity, fostering a culture of over-regulation and undermining efforts to control the problem effectively and uniformly.

Bypassing Legal Processes

The current trends of social control raise the potential issue of regulation and control being imposed without due legal processes being followed. Unlike government entities, internet services are not bound by the same legal restrictions when limiting certain rights on their platforms. This issue has consistently been raised in relation to previous attempts to increase cooperation between governments and internet services. Experts worry that this can lead to a dereliction of duty on the state’s part to respect requirements of necessity and proportionality when restricting such rights. 

It is important to note that increased cooperation between governments, online services, and academics can be valuable in the effort to control online content. However, such cooperation must be balanced against the concerns and definitional issues outlined above. The combination of a bypass of legal processes and a wide array of interpretations of what might constitute removable terrorist content provides services with an opportunity to remove content based on their own ideological values, using The Bill as a pretext.

Previously, these concerns were heightened by the provisions in The Bill which required the removal of ‘legal but harmful’ content. Requiring internet services to remove content that is perfectly legal under UK law could have had exceptionally dangerous consequences for freedom of expression. What might have constituted ‘harmful’, inevitably, also suffered from a lack of definitional clarity, further broadening the scope of its application. However, these provisions have now been removed, illustrating the crucial and challenging task of striking a balance between asserting control and upholding individuals’ rights.

Establishing Norms

Discussions of legislation such as The Online Safety Bill require an analysis that goes beyond its stated aims, such as reducing the presence of terrorist content online, or its legal obligations, such as protecting freedom of expression. The law also has an expressive function, encompassing the establishment and affirmation of social norms and patterns of behaviour. The enactment of such a Bill can produce a set of ‘cyber norms’, helping to establish acceptable online behaviours. The Online Harms White Paper presented this as an aim of The Bill: using ‘informal’ measures of control to regulate behaviour. In this instance, the legal provisions may not directly reduce the presence of terrorist content online. Instead, The Bill intends to establish these ‘norms’, in the hope that users will acknowledge them and subsequently refrain from posting such content themselves.

However, concerns have been raised that these provisions will have a ‘chilling effect’. The increase in content removal might discourage people from disseminating content in the first place. This, coupled with the ambiguity surrounding the definition of terrorist content, could lead users and services alike to err on the side of caution with regard to both posting and removing such content, contributing to this ‘chilling effect’. This could be particularly problematic if invaluable journalistic and activist content is withheld due to fear of regulation. 

Returning to the first point made in this Insight, a further consequence of The Bill is the confirmation of trends of social control. By entrusting internet services with controlling and regulating a series of problems – terrorist content, mis/disinformation, CSEA – and enforcing those duties, The Bill illustrates the ongoing trend of ‘responsibilisation’.

Conclusion

This Insight does not intend to discredit the importance of controlling terrorist content online, nor does it aim to outline new problems with such efforts. Instead, it aims to highlight how a key trend of social control is exacerbating existing issues within the regulatory framework. By expanding the social control apparatus and burdening internet services with responsibilities for which they may not be sufficiently prepared, The Online Safety Bill intensifies the risk of ‘over-regulation’.

Sam Williams is an ESRC-funded PhD student at Cardiff University, studying within Cardiff’s Security, Crime, and Intelligence Innovation Institute. His research is a mixed-methods study of Russian influence operations. He previously completed an undergraduate law degree and an MA in Cyber Crime and Terrorism at Swansea University, and an MSc in Social Science Research Methods at Cardiff University.