

The Ecosystem: Cross-Platform Responses to the Online Safety Act
30th September 2024 Erin Stoner

Introduction

The evolution of terrorism and extremism has pushed content moderation to the forefront for academics, technology companies, and legislators alike, and has called into question the ability of tech platforms to interpret and adhere to new legislation. Recently, the Independent Reviewer of Terrorism Legislation, Jonathan Hall, argued that some elements of the Online Safety Act are unsuitable for an online space in which actors operate across multiple platforms, each with its own operational features and regulations.

Hall’s analysis noted that the Online Safety Act tends towards a service-by-service approach and a binary distinction between public and private online spaces. However, this Insight argues that technology companies should push for a cross-stakeholder ‘ecosystems’ approach, as set out in Ebner’s work, to ensure that platforms’ responses to extremist content are fully effective.

Access From All Angles: Service-by-Service Approaches

In the Online Safety Act 2023, criterion (a) for distinguishing public from private content is ‘the number of individuals in the UK who are able to access the content by means of the service’, and criterion (c) is ‘the ease with which the content may be forwarded or shared with users of the service other than those who originally encounter it’.

Hall argued that the phrase ‘by means of the service’ gives an unrealistic picture of content/platform dynamics. In reality, extremist content is rarely disseminated on a single platform. A service-by-service approach may account for the public availability of content within a specific platform, and for the accessibility of certain pieces of content through a single mode of access, but it ignores how commonly extremist or terrorist content is disseminated across platforms. Determining the accessibility of content solely by reference to one platform overlooks how often that content can be reached via external platforms and how often hateful material spans several services at once.

Assessing the status of extremist content on a service-by-service basis can result in content which is accessible externally slipping through legislative cracks. As a result, the Online Safety Act’s focus on a service-by-service approach can pose problems for countering terrorist and extremist content, as it fails to encompass the cross-platform nature of content. 

The Blurred Boundaries of Public and Private 

The report also explored issues posed by the continued distinction between public and private content; public content is typically subject to stricter moderation than private content. Proving that certain content is publicly accessible is often essential to showing that terrorist intent lay behind its dissemination as propaganda. There is a tendency within the legislation to assume that ‘private’ and ‘public’ are fixed concepts, homogeneous across all platforms. This is not always the case.

Hall’s focus on Telegram’s channel and ‘joinlink’ system (prominent in the dissemination of Islamic State propaganda) illustrates that there is not always a clear distinction between public and private content. Access to a private channel on Telegram requires a joinlink (a type of invite link) or being added by an admin. The platform draws a sharp distinction between public and private: users accept terms and conditions in which they agree not to ‘Promote violence on publicly viewable telegram channels, bots, etc.’, but Telegram does not moderate private channels, the exception being that any private channel with a widely available joinlink will be treated as a public one and moderated accordingly.

However, Hall questions where exactly the border between private and public lies, and argues that this strict separation of the two categories ignores the key role that small, private channels often play in propaganda dissemination. New propaganda is often initially shared in smaller, private channels (the links to which are available if one knows where to look), from which users can swap and produce content to be spread in public channels.

As a result, the legislation’s assumption that public and private channels and domains online are clearly distinguishable is undermined by the fact that the two often interact in the production of terrorist content, and that such content frequently does not fit neatly into either category.

Faceless, Decentralised and Transnational Parties 

Hall’s critique extends to the fact that under the Online Safety Act, the commission of an offence specifically requires ‘conduct by a person or people’. The Act clarifies this through the addition of a clause specifying the ‘use, possession, viewing, accessing, publication or dissemination’ of terrorist content as constituting an offence, but again, this misrepresents the multi-dimensional nature of the online space in two ways.

First, while it has historically been easier to prosecute an IS member, for example, on the basis of their membership of a proscribed terrorist group, recent discourse on online extremism has explored a ‘salad-bar’ approach, in which ideology is decentralised and made up of fluid ideas that different individuals value to differing degrees. It is increasingly common for an individual to act outside the bounds of an official group, drawing on a range of ideologies from many extremist categories (such as anti-elitism, conspiracy theories, misogyny, and white supremacy). It is therefore increasingly difficult to say that a proscribed group conducts the use, possession or viewing of certain content when extremist actors, especially on the far right, tend to be involved in more general, decentralised online ideologies rather than formally belonging to an identified organisation.

Furthermore, the transnational nature of the internet presents a complex problem for identifying a single person or group. Is location determined by the settings a user selects upon joining the platform? Or by the company’s geolocation of the individual, which many platforms are not capable of performing? If a piece of propaganda is created in the US but is then detected on a Russian-based forum, subject to different laws on terrorism and hate speech, it is difficult to establish which jurisdiction the Act applies to. As a result, charging the ‘person or people’ becomes difficult, as content rarely exists within easily identifiable jurisdictional borders.

Ultimately, legislation that operates under the assumption that online terrorist content is spread on a single platform, by a single person, belonging to a sole ideology misrepresents the reality of moderating such content. This, in turn, may hinder the legislation’s practical application by practitioners and technology companies to the diverse types and sources of terrorist content in the online space. In reality, online extremist content rarely exists in such a specific or bordered manner. Users operate across platforms, and the emergence of more decentralised forms of terrorism and extremism makes it difficult to link individuals to a particular identifiable group in a way that would enable prosecution. Online spaces rarely exhibit distinct public/private boundaries, and platforms function in tandem with one another in hosting terrorist content rather than content existing on a ‘service-by-service’ basis.

Responding – An Ecosystems Approach

Of course, technology companies do not have direct influence over the creation of such legislation; although many governments share new or updated legislation with stakeholders for comment, platforms are more commonly subject to such legislation than authors of it. However, there are still constructive ways in which tech companies can respond to the online space. To remedy a single-platform approach, technology companies may be encouraged to engage in cross-stakeholder discussions in which platforms coordinate within a wider network. This can be summarised through what Ebner describes as an ‘ecosystems’ approach, which includes “tracking and tracing how content is moved and co-ordinated between different platforms and how it can be manipulated and massaged for different purposes.”

It is critical for both policymakers and the tech companies implementing these policies to understand that content is spread between multiple platforms at various stages of dissemination, and that platforms typically have distinct but interconnected roles within a wider ecosystem of radicalisation. It is rare for an individual to be radicalised and moved to violent action via a single platform. Ebner argues that recruitment and mobilisation can take place on more public platforms like YouTube or Reddit, while radicalisation and intimidation tend to take place on messaging apps such as Discord or Telegram. Furthermore, the mainstreaming of extremist messages tends to occur on social media platforms, where ideology can be spread through actors posting content or fostering discourse in comment sections that promotes extremist ideologies. This cross-platform nature of many online extremist and terrorist networks requires collaboration between technology companies to map content as a web of information across multiple stakeholders.
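
To make this concrete, the short sketch below offers a minimal, hypothetical illustration (in Python, with invented platform names and content, and not based on any real platform API or hash-sharing programme) of the basic logic behind ecosystem-level tracing: cooperating services contribute fingerprints of flagged items, and any fingerprint that surfaces on more than one service is evidence of cross-platform spread.

```python
# A minimal, illustrative sketch of cross-platform "ecosystem" tracing.
# All platform names and content here are hypothetical.
import hashlib
from collections import defaultdict

def fingerprint(content: bytes) -> str:
    """Hash flagged content so platforms can compare items without sharing raw media."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical flagged items reported by three cooperating platforms.
flagged_items = [
    ("PlatformA", b"propaganda-video-v1"),
    ("PlatformB", b"propaganda-video-v1"),  # same file reposted on another service
    ("PlatformC", b"recruitment-poster"),
    ("PlatformA", b"recruitment-poster"),
]

# Shared index: fingerprint -> set of platforms where the item appeared.
shared_index = defaultdict(set)
for platform, content in flagged_items:
    shared_index[fingerprint(content)].add(platform)

# Any fingerprint seen on more than one service signals cross-platform spread.
for digest, platforms in shared_index.items():
    if len(platforms) > 1:
        print(f"{digest[:12]}... seen on: {', '.join(sorted(platforms))}")
```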

Cross-network De-platforming

The ‘ecosystems’ approach can be applied via certain forms of de-platforming. Historically, content has been removed individually, judged post by post or group by group, on a service-by-service basis and conducted solely by the host platform. Content is thus removed according to the specific site’s regulations; in some cases, the user receives a temporary or permanent ban. One of the most notable examples of mass de-platforming was the removal of terrorist content from X (Twitter at the time) from 2015 to 2018; between mid-2015 and August 2016, 360,000 accounts were suspended for being associated with terrorism.

De-platforming in this manner can be considered what Allchorn and Kondor refer to as a ‘whack-a-mole’ process: when an individual post is taken down, it can simply reappear on an alternative platform or through a new account created by the same actor on the original platform. Service-by-service de-platforming thus becomes more complicated, and while there is often no option but to de-platform, the platform provider may be powerless against the alternative accounts and platforms where content can be re-posted. Instead, by understanding the interconnected ‘ecosystem’ of the online space, the process of de-platforming may be improved through strategic network takedowns and disruptions, in which entire networks of content and actors are mapped across platforms and removed as a collective, rather than targeting individual posts.
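
As an illustration only, the following sketch (again in Python, with entirely invented account and content identifiers) contrasts post-by-post removal with a network takedown: once flagged accounts and content items across several services are connected in a single graph, flagging one node surfaces the whole cluster it belongs to, which can then be actioned together.

```python
# A hedged illustration of a "strategic network takedown" using hypothetical
# cross-platform identifiers rather than any real platform data.
from collections import deque

# Edges link actors to the content they posted or reshared, across services.
edges = [
    ("telegram:chan_alpha", "video_123"),
    ("x:acct_beta", "video_123"),       # same video mirrored on another platform
    ("x:acct_beta", "manifesto_pdf"),
    ("gab:acct_gamma", "manifesto_pdf"),
    ("reddit:sub_delta", "meme_456"),   # unrelated cluster
]

# Build an undirected adjacency list.
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def cluster_of(seed):
    """Breadth-first search from one flagged node to recover its whole network."""
    seen, queue = {seed}, deque([seed])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Flagging a single channel surfaces the entire cross-platform cluster
# (the mirrored video, the linked accounts and the shared manifesto),
# which can then be removed as a collective.
print(sorted(cluster_of("telegram:chan_alpha")))
```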

De-platforming is not a fix-all solution for extremist content, but it has greater potential for success when conducted across a cross-platform network. While de-platforming can muffle the front-facing public voice of extremist ideology, thus reducing its ability to reach a wide audience, there are concerns over its efficacy in removing content from all spaces in which it can cause harm. There is a risk that de-platforming drives actors to less regulated platforms, such as Telegram or Gab.ai, making the content more difficult for moderators to detect while it remains accessible to users. There is also concern that de-platforming can harden the identity of groups whose ideology already centres on claims of being silenced; being de-platformed can be seen as a badge of honour, as proof of a greater conspiracy against the group, thereby consolidating radicalisation rather than defusing it.

However, on the whole, when de-platforming occurs simultaneously through cross-platform collaboration, it has a dampening effect on a group’s ability to gain support and spread content. Primarily, de-platforming can push extremist actors to smaller platforms, meaning that even if they find other ways to disseminate content elsewhere, that content is typically less accessible than before. Pushing material into a smaller, less accessible space means it is no longer available to a mass audience; even if it remains online, it is a step towards removing such content from the public eye. Furthermore, studies surrounding the banning of the far-right group Britain First show that when groups migrate from larger, more public platforms such as Facebook or Twitter to smaller ones like Gab or Telegram, there is a significant drop in the number of followers who move over from the original platform to the smaller one. The tendency for followers not to maintain interest in content produced by Britain First once it was removed from Facebook suggests that de-platforming in this manner may be a productive way to decrease public support for extremist groups.

Conclusion

Current legislation in the United Kingdom’s Online Safety Act tends to regulate on a single-service basis, with binary definitions of private and public content. To ensure platforms can still deal with content that may slip through these legislative cracks, an ‘ecosystems’ approach of inter-platform collaboration through strategic network takedowns allows content to be removed more efficiently across platforms at the network level. While moderation in this form is not a definitive solution to extremist content, any persisting content is more likely to appear on back-facing platforms rather than front-facing ones. Even if this content becomes more difficult to detect by appearing on smaller-scale platforms, it creates meaningful barriers that stop extremist and terrorist groups from reaching broader audiences through more popular services. As a result, a cross-platform and cross-stakeholder approach that acknowledges the tendency for extremist content to function as an ‘ecosystem’ can mitigate the weaknesses outlined by the Independent Reviewer.