
Towards a Policy Framework for Addressing Violent Conspiracy Theory Movements
7th September 2023 Hannah Rose

In this GNET Insights miniseries, ISD experts explore the complex intersection of conspiracy theories, violence and extremism. In the fourth article in the series, ISD Analyst Hannah Rose outlines the shifting platform and government responses to these issues, and the complex policy landscape around this emergent harm area.

You can also read the other articles in the series, on the complex conceptual relationship between these phenomena, the varied manifestations of violence associated with conspiracy movements, and the key conspiratorial narratives accompanying violent radicalisation.

Introduction

With the emergence of the anti-lockdown movement and its capacity for mass mobilisation, awareness of the real-world impact of conspiracy theory movements surged among both the public and policymakers. While the January 6 Capitol insurrection demonstrated the propensity of conspiracy theory movements to catalyse violence, Instagram’s introduction of labels for COVID-related content showed for the first time that mainstream social media platforms recognised the offline harms which could originate within their ecosystems.

Violent conspiracy theories pose a threat by targeting marginalised communities, serving as conduits to more extreme ideologies and propagating diverse forms of violence. As such, they represent a meshing of often disparate harm types and constitute a new phenomenon which platforms and governments alike have struggled to situate within their existing policy frameworks. 

Although conspiracy theories can motivate their proponents towards violence, believing in a conspiracy theory is not inherently illegal, nor is it against a platform’s terms of service, even if it carries the potential for real-world harm. Policies are further complicated by the nebulous structure of conspiracy movements, which lack clear group affiliations, and by the frequent use of coded language or ‘dog whistles’, which hinders detection. Additionally, conspiracy theories tend to revolve around an anonymous or undefined villain whose identity is heavily inferred but never explicitly revealed, leaving such content in a grey area when it comes to policies against incitement. For example, conspiracy theories may refer to shadowy figures such as ‘globalists’ or ‘cultural Marxists’ – terms that may seem innocuous to the unseasoned reader but are often deployed to propagate myths about Jewish people’s relationship with power, money and control.

A combination of these various issues has led to an unsystematised approach to defining the problem and its solutions. This Insight will consider some of the commonalities and gaps in policy approaches to combatting violent conspiracy theory networks, first on social media platforms and subsequently in governments.

Platform Responses

Conspiracy theories pose three core challenges for platforms’ Trust and Safety policies, which often centre on the removal of content produced by proscribed terrorist groups and other overtly illegal behaviour. Firstly, conspiracy theories gain momentum through a loosely connected, decentralised and often anonymous online landscape, with no clear leader or formal structure, and therefore no obvious page to take down. A second issue is the ambiguous nature of the content: platforms are accustomed to removing material that has been designated as illegal by a state but lack the same impetus when it comes to removing non-proscribed conspiracy theory content. The final challenge is the heterogeneous nature of conspiracy theories, marked by differing interpretations of events and narratives and diverging attitudes towards violence across conspiratorial networks. Combined, these challenges fall outside the scope of the clear-cut, mass content removal around which platforms’ Trust and Safety systems were initially designed.

Trust and Safety systems across mainstream social media platforms typically exhibit fractured policies towards conspiracy theories. For example, policies which may be used to address conspiratorial content on TikTok are dispersed across the policy areas of harassment and bullying; violent and hateful organisations and individuals; hate speech and hateful behaviours; and violent behaviours and criminal activities. YouTube specifically identifies conspiracy theory content under its hate speech policy; however, it defines such content in a manner that only considers it a violation when it specifically targets an individual or group with protected characteristics.

In May 2022, X (formerly known as Twitter) published a crisis misinformation policy covering the advancement of demonstrably false information with the potential to “cause serious harm”. Combined with its abuse and harassment policy and violent speech policy, this could cover the nexus of conspiracy theories and violence. However, with moves since Elon Musk took over the company to reinstate actors, such as Ye (formerly known as Kanye West), who have promoted conspiracy theories, the future of the comprehensive enforcement of such policies is unclear.

Meta is the only company to maintain a dedicated policy area focused on the intersection of conspiracy theories and violence, as part of its ‘dangerous organisations and individuals’ community standards. Placed in the third tier of Facebook and Instagram’s three-tier content removal policy, “violence-inducing conspiracy networks” (VICNs) “may not have a presence, or coordinate” on their platforms. Meta defines VICNs as non-state actors who 1) organise under a name, sign, mission statement or symbol, 2) promote theories that deliberately and falsely attribute violent or dehumanising behaviours to people or organisations and 3) have inspired real-world violence. This definition specifically distinguishes conspiracy theories promoting violence against individuals from those with little hateful impact, such as flat earth theories.
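
To make the shape of this designation concrete, the sketch below models Meta’s three published criteria as an explicit conjunction. This is a minimal illustration under stated assumptions, not Meta’s actual tooling: the NetworkSignals fields and the meets_vicn_definition function are hypothetical names invented here to mirror the three criteria.

```python
from dataclasses import dataclass


@dataclass
class NetworkSignals:
    """Illustrative signals a reviewer might record about a network.

    These field names are hypothetical; they mirror the three criteria
    in Meta's published VICN definition, not any internal schema.
    """
    organised_under_identity: bool      # criterion 1: name, sign, mission statement or symbol
    attributes_violence_falsely: bool   # criterion 2: deliberately and falsely attributes
                                        # violent or dehumanising behaviour to people or orgs
    inspired_real_world_violence: bool  # criterion 3: linked to offline violence


def meets_vicn_definition(signals: NetworkSignals) -> bool:
    """A network qualifies only when all three criteria hold at once,
    which is what excludes low-harm theories such as flat earth claims."""
    return (
        signals.organised_under_identity
        and signals.attributes_violence_falsely
        and signals.inspired_real_world_violence
    )


# Example: a flat earth community may be organised under a name but fails
# criteria 2 and 3, so it would not be designated under this definition.
flat_earth = NetworkSignals(True, False, False)
assert not meets_vicn_definition(flat_earth)
```

The conjunction is the point of the design: any one criterion alone (a named movement, a false narrative, or an incident of violence) is insufficient, which is how the policy attempts to separate harmful networks from merely eccentric ones.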

The policy raises questions around the definition of ‘violence-inducing’ and platforms’ decisions to address only content specifically related to violence. For example, a 9/11 ‘truther’ who believes that Jewish people orchestrated the terrorist attack may not specifically advocate for antisemitic violence and would therefore not qualify under Meta’s definition of a VICN. However, by spreading falsehoods about Jewish people, they would nevertheless contribute to a hostile and hateful online environment. Similarly, survivors of terror attacks or family members of victims targeted by ‘disaster trolls’, as identified by researchers at King’s College London, may experience adverse impacts beyond the ‘real-world violence’ defined in Meta’s policy. In practice, enforcement of the VICN policy is narrowly focused on QAnon, to the extent that Meta’s policy updates in this area refer in shorthand to “our policy against QAnon”.

A thin line exists between ensuring freedom of speech and protecting communities from harassment or abuse, and different parts of conspiracy movements may sit on different sides of this line. In these scenarios, catch-all content removal or approval may fail to account for the heterogeneity of these movements. For example, the anti-COVID lockdown movement features significant diversity in its attitudes towards violence, meaning that blanket content removal policies designed to reduce violence may inadvertently take down more than the policy intends.

Even when the threat is well-defined, platforms may still encounter challenges in identifying conspiratorial content and assessing coded or covert language against their policies. Defining what violence looks like in the context of intersecting harm types, and subsequently recognising the associated actors and linguistic cues, will necessitate the adaptation of content moderation software and ongoing refinement of human moderator expertise. Platforms must also consider the different policy levers available to them beyond content removal, including content warning notices, redirection strategies, counter-narratives and targeted interventions, as sketched below.
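
As a minimal sketch of how such graduated levers might be wired into a moderation pipeline, the example below maps two hypothetical classifier scores to a response. Everything here is an assumption made for illustration: the Lever names, the choose_lever function and the numeric thresholds are invented, and a real system would combine many more signals with human review.

```python
from enum import Enum, auto


class Lever(Enum):
    # The non-removal levers named above, plus removal as a last resort.
    REMOVE = auto()
    WARNING_LABEL = auto()      # content warning notice
    REDIRECT = auto()           # redirect searches/clicks to authoritative resources
    COUNTER_NARRATIVE = auto()  # surface counter-speech alongside the content
    NO_ACTION = auto()


def choose_lever(violence_score: float, conspiracy_score: float) -> Lever:
    """Map two hypothetical classifier scores (0.0-1.0) to a graduated response.

    The thresholds are illustrative: explicit incitement is removed, while
    lower-confidence or non-violent conspiratorial content receives softer
    interventions that preserve speech but reduce harm.
    """
    if violence_score > 0.9:
        return Lever.REMOVE
    if violence_score > 0.6:
        return Lever.WARNING_LABEL
    if conspiracy_score > 0.7:
        return Lever.REDIRECT
    if conspiracy_score > 0.4:
        return Lever.COUNTER_NARRATIVE
    return Lever.NO_ACTION


# Example: coded, non-explicit content scores low on violence but high on
# conspiratorial framing, so it is redirected rather than removed.
print(choose_lever(violence_score=0.2, conspiracy_score=0.8))  # Lever.REDIRECT
```

The design point is that removal sits at one end of a spectrum: softer levers such as labels and redirection can mitigate the harm of ambiguous conspiratorial content without the free-speech costs of blanket takedowns.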

Government Responses

Government policies addressing the proliferation of conspiracy theories are similarly fractured, with no clear international consistency regarding their policy domain, which ranges from digital governance to counter-extremism to matters of security. Across governments, the effort to tackle violent conspiracy networks currently sits in three overarching portfolios: counter-terrorism, threats to democracy and incitement to violence.

The UK Government’s recent update to its counter-terrorism strategy, CONTEST, recognises that “conspiracy theories can act as gateways to radicalised thinking and sometimes violence”, and therefore pose a threat both in themselves and through their linkage into other harm types. In the UK, violent conspiracy theories may be considered under the catch-all category of ‘mixed, unstable or unclear’ terrorist threats, which include ideologically motivated violence beyond the traditional far right/Islamist dichotomy.

In a similar recognition of the changing framework of violent threats, the Canadian government moved towards defining conspiratorial actors under ‘ideologically motivated violent extremism’ (IMVE), distinct from religiously or politically motivated violent extremism (RMVE and PMVE). In both cases, violent conspiracy networks are understood as a terrorist threat and addressed with counter-terrorism tactics. Given the complex relationship between violent conspiracy theories and ideology, as well as their decentralised and grassroots structures, post-9/11 countering violent extremism frameworks, which often focus on ideological deradicalisation and group disengagement, may struggle to adequately grapple with and counter violent conspiracy networks.

Conspiracy theories are often united by anti-establishment and anti-government threads, leading some governments to define them as threats to democracy or the constitution. In Germany, the Federal Office for the Protection of the Constitution monitors conspiracy networks such as the anti-lockdown Querdenken movement, which organised a protest to storm the Bundestag in 2020, and the Reichsbürger movement, whose adherents were involved in a recently foiled coup plot.

David Icke, who has long spread conspiracy theories that the world is ruled by reptiles, was banned from entering the Netherlands and the Schengen Zone in November 2022 for posing a “threat to public order and peace”. The Dutch government’s letter to him states that conspiracy theories “can harm the democratic rule of law” by undermining the legitimacy of democratic institutions. Due to differing national contexts, not all governments have the same offices or legal instruments with which to define violent conspiracy theories as threats to democracy.

Icke’s ban was also linked to the illegality of incitement to hatred, in relation to targeted antisemitism in his live shows and writings. When conspiracy theories lead to incitement to violence or hatred against a protected minority, relevant legislation can be used regardless of the perpetrator’s motivation. One such example is Piers Corbyn’s 2021 video calling for the public to engage in violent acts against MPs’ offices for their role in implementing lockdown measures, which led to his arrest on suspicion of encouragement to commit arson. Such outcome-focused measures can prevent violence, yet they overlook the conspiratorial motives of the alleged perpetrator and the harms that extend beyond targeted violence, including the spread of propaganda and the abuse and harassment of targeted communities.

Given the diversity of conspiracy theories and their violent manifestations, the current approach pulls together disparate policies and legal instruments, attempting to shoehorn this unique threat into existing counter-terrorism (CT) or countering violent extremism (CVE) frameworks. As concerns regarding the nexus of violence and conspiracy theories deepen, a comprehensive cross-government policy programme may become necessary to coordinate the various areas in which the threat sits. Establishing consistency in policy programmes across governments will also be crucial, working towards a uniform framework that facilitates a fluid international response to a transnational threat.

Conclusion 

Common across the approaches of both social media platforms and governments is a lack of synergy between the various policy areas in which conspiracy theories sit. Given the complicated relationship between conspiracy theories and ideological motivations, traditional CVE frameworks and securitised approaches which centre ideology in radicalisation processes may be insufficient to comprehend the motivations and pipelines of violent conspiracy actors. It is equally vital for policymakers to avoid securitising non-violent communities, requiring a careful balance in understanding where conspiracy networks cross the line into inciting hatred or violence.

To systematise and bridge the gaps in policies addressing conspiracy theories online, both cross-platform and intra- and inter-governmental communication is imperative. Furthermore, with the UK’s forthcoming Online Safety Bill and the implementation of the European Digital Regulation Framework in 2024, collaboration between governments, regulators and social media platforms to ensure legal consistency across online and offline spaces is paramount. As the threat posed by radicalised online communities continues to morph and innovate, policies must be suitably flexible and agile to adjust to evolving landscapes of violence.