Introduction
The study of the intersection between gaming and extremism has focused largely on a few key areas: the presence of gamers and gaming culture in extremist communities, with a particular focus on the gamification of extremism; the co-option of gaming aesthetics and culture by extremists; the creation of games (both mods and bespoke titles) by extremists; and the potential vulnerabilities that might make gamers more susceptible to radicalisation than the general population.
However, while there is a growing body of literature on the topic, one area that requires more thorough investigation is gaming-adjacent platforms and the policing of extremist (borderline) content on them. These digital platforms were originally created to support the broader gaming community online, either by facilitating community building and conversation between gamers or by allowing gamers to livestream their activity. On the one hand, they can be seen as the “digital infrastructure that surrounds gaming, a cornerstone of global gaming communities, and essential to the transmission of gaming culture.” On the other, gaming-adjacent platforms like Steam, Twitch, and Discord have gained notoriety, in part for being exploited by extremists who see them as fertile ground for spreading their ideologies.
This Insight discusses the latter with reference to a study that we carried out in the summer of 2024, interviewing 13 content moderators, P/CVE practitioners and industry researchers with direct experience of expressions of extremism, content moderation and P/CVE efforts on gaming-adjacent platforms. In our investigation, we focused mainly on Twitch, Steam, and Discord as a way of spotlighting platform affordances around content that is extremist in nature but not unlawful. As this Insight shows, that distinction proved hard to sustain in practice: unlawful content co-mingled with extremist content, largely blurring the division between legal and illegal material.
Overview: The Role of Extremism on Gaming-Adjacent Platforms
There is a small but growing corpus of literature on gaming-adjacent platforms. This includes:
- Work led by the Institute for Strategic Dialogue (ISD) tracking extremist use of DLive, Twitch, Steam, and Discord;
- Work from the Anti-Defamation League (ADL) exploring harassment, extremist usage of gaming-adjacent platforms (namely Steam) and the challenges faced by trust and safety employees in the games industry when it comes to moderating hate and harassment;
- And helpful summaries that provide broader overviews of extremist activity on these platforms, from the Radicalisation Awareness Network and the United Nations Office of Counter-Terrorism.
However, analysis of this topic is far from comprehensive, and further research is needed on this important set of platforms, including on the prevalence and nature of extremist content and the state of efforts to police it.
Our Research
Between May and July 2024, we interviewed 13 content moderators, P/CVE practitioners and industry researchers with direct experience of expressions of extremism, content moderation and P/CVE efforts on gaming-adjacent platforms. This was complemented by a qualitative document analysis of platform terms of service and a systematic review of the extremism studies literature on gaming-adjacent platforms.
Findings
What we found was striking:
Expressions of Extremism
1 – Narratives
Extremism manifests on gaming-adjacent platforms through a spectrum of ideologies, each with its own distinct themes and messages. The most prevalent, right-wing extremism, found a range of expressions on these platforms revolving around white supremacy, Nazism, and antisemitism, often marked by explicit racism, misogyny, homophobia, and conspiracy theories such as QAnon. Islamist extremism was less frequently remarked upon, but interviewees observed the promotion of ideals such as the establishment of a Caliphate, the call for Jihad, and loyalty to the Ummah, or global Muslim community. Additionally, and quite pertinently, interviewees noted narratives and online content on gaming-adjacent platforms reflecting ‘extremist-adjacent’ fixations, as well as egregious content that clearly broke platform terms of service, including child sexual abuse material (CSAM), glorification of school shootings, and graphic depictions of sexual content and violence. This latter point both echoes and goes beyond recent studies by Moonshot and by Winkler, Wiegold, Schlegel and Jaskowski, based on user surveys and content analysis, on the prevalence of extremist sentiments in gaming-adjacent spaces.
2 – Radicalisation
These different extremist narratives, while ideologically distinct, reflect new forms of radicalisation that are fuelled by specific tactics on gaming-adjacent platforms. In particular, interviewees found that radicalisation occurred on such platforms through organic or strategic means. For example, some individuals are purposefully directed into these spaces through pre-existing extremist channels, while others actively seek them out on their own.
Adding to these strategic and interactive processes, radicalisation was also found to be self-directed and post-organisational, with individuals on gaming-adjacent platforms adopting symbols and rhetoric to identify with broader ideological trends rather than joining a specific group. Another key component of radicalisation on gaming-adjacent platforms, often used by extremist groups enculturating new recruits, was content funnelling. For instance, initial interactions might take place in online gaming communities where individuals bond over shared interests; these interactions can then migrate to less regulated gaming-adjacent platforms, allowing extremist rhetoric to proliferate unchecked. Both of these tactics and trends were observed by researchers in a digital ethnographic study of right-wing extremist communities on Discord and Steam, showing how gaming-adjacent platforms can act as a bridge between more popular platforms and more explicit and egregious closed spaces.
3 – Recruitment
This radicalisation pipeline was linked to softer pathways into extremist subcultures on gaming-adjacent platforms, driven by a pattern of idolisation, machismo, and community building. Respondents remarked on high-profile figures such as Anders Breivik and the Christchurch attacker, who are considered “idols” for aspiring extremists, providing both models of action and a sense of belonging. Additionally, reflecting the hybrid nature of extremism, respondents mentioned the lawless, hyper-masculine environments of games with toxic player cultures, such as GTA V and Call of Duty, which can appeal to individuals drawn to extremist spaces. This often includes a pseudo-military team-building approach, where groups (on games like Rust and Arma 3) practise survivalist and combat skills to unite against and kill a shared adversary – usually the opponent of a particular extremist group. Sustained engagement with narratives propagated on gaming-adjacent applications could, potentially, increase “susceptibility to radicalisation.” It may also legitimise further inroads into real-world offline violence through identity fusion and small-group identity bonding rooted in pseudo-military team-building approaches.
Content Moderation
1 – Healthy platforms – what they look like (ideally)
When it came to policing extremist content on gaming-adjacent platforms, respondents were first asked what was missing from current practice and what role healthy online platforms play in counteracting extremist content. Respondents suggested that effective platforms would have swift content removal mechanisms, vetted content channels, and stricter bans and removals for the most persistent violators of terms of service. Examples of ideal responses included Twitch’s removal of the Buffalo shooting livestream within two minutes; specific Facebook and Reddit sub-channels where content is vetted before going live; and Discord’s efforts to unlist and remove servers exploited or set up by formally identified extremist organisations.
2 – Issues
Yet, these platforms face significant challenges from extremist groups trying to evade such measures. Away from formal groups, the anonymity and obfuscation techniques of individual users who deploy hidden extremist symbols complicate moderation efforts. Moreover, the moderators whom we interviewed often expressed frustration at inconsistent enforcement policies and the burden of deciding whether content or users should be reported to law enforcement. One recurring issue raised by interviewees was younger users’ exposure to extremist individuals and influencers, some of whom openly stream live gameplay alongside extremist narratives on mainstream social media, highlighting the ecosystem-like nature of extremist exploitation of the internet.
3 – What practitioners can do
The industry researchers, content moderators and P/CVE practitioners we interviewed expressed hope that certain tactics and techniques have proven effective in the gaming-adjacent space. One key technique, at the preventative level, was improving content moderation by raising awareness of harmful online content and behaviours. Another set of platform-led efforts was the creation of precise threat escalation protocols that automatically identify direct threats to life, or other specific threats tied to a timeline or location (a minimal sketch of such a protocol follows below). A related facet of discussion was fostering better partnerships with law enforcement to make platforms safer; in particular, it was noted that law enforcement agents need to better understand how the platforms and their subcultures operate in order to interdict them effectively. Finally, to sustain content moderation on such platforms, interviewees noted that providing mental health support and relief for moderators is essential, as continual exposure to harmful content has a demonstrably detrimental impact on their long-term well-being.
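To make the idea of a threat escalation protocol concrete, the sketch below shows one way such triage might work in code. It is a minimal, hypothetical illustration under our own assumptions: the patterns, escalation levels, and thresholds are placeholders, not drawn from any platform’s actual system, and a production pipeline would rely on trained classifiers and far richer signals than keyword matching.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only (hypothetical) -- real protocols would use
# trained classifiers, account history, and multimedia signals.
THREAT = re.compile(r"\b(kill|shoot|attack|bomb)\b", re.IGNORECASE)
TIMELINE = re.compile(r"\b(today|tonight|tomorrow|\d{1,2}(:\d{2})?\s?(am|pm))\b", re.IGNORECASE)
LOCATION = re.compile(r"\b(school|mall|church|mosque|synagogue|station)\b", re.IGNORECASE)

@dataclass
class Escalation:
    level: str          # "none", "human_review", or "law_enforcement"
    reasons: list[str]

def triage(message: str) -> Escalation:
    """Escalate threat language; prioritise threats tied to a time or place."""
    reasons = []
    if THREAT.search(message):
        reasons.append("threat language")
        if TIMELINE.search(message):
            reasons.append("specific timeline")
        if LOCATION.search(message):
            reasons.append("specific location")
    if len(reasons) >= 2:              # threat plus a time and/or place
        return Escalation("law_enforcement", reasons)
    if reasons:                        # bare threat language
        return Escalation("human_review", reasons)
    return Escalation("none", reasons)

# A timed, located threat escalates; ordinary gaming talk does not.
print(triage("I will shoot up the school tomorrow"))
print(triage("that boss fight destroyed me again"))
```

Even this toy example shows why respondents stressed precision: the same verbs appear constantly in ordinary gameplay chat, so escalation rules must weigh context rather than keywords alone.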
P/CVE Efforts
1 – Effectiveness of Narrative-Based Interventions
More specifically, interviewees talked candidly about efforts to prevent and counter violent extremism (P/CVE) on gaming-adjacent platforms. Counter-narratives aimed at educating users about the dangers of extremist content were particularly championed, using video games and adjacent platforms as an online outreach tool to reach individuals at risk of radicalisation. However, interviewees disagreed about how impactful these narratives truly are and whether they would be sufficiently proactive to turn the tide on extremism – which, notably, is not uniformly prohibited across the terms of service reviewed as part of this project. Other narrative-based interventions may therefore be needed, tailored to the subcultures and affordances of this space.
2 – Role of AI
Another aspect of P/CVE efforts touched upon by respondents was harnessing artificial intelligence (AI) for content moderation and policing. AI has proven useful in identifying and flagging problematic textual content, but limitations remain, particularly in nuanced online spaces like gaming, where video and audio content predominate. Bots were raised as potentially useful, capable of issuing automatic bans for posts containing harmful language and providing some (light) relief for human content moderators. However, key pitfalls were also identified: bots fall short in detecting sarcasm or masked language, allowing contextually dependent content to slip through, so human content moderation remains necessary for the most sensitive and trickiest cases.
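The sketch below illustrates the kind of keyword-matching bot respondents described and, just as importantly, the evasions it misses. It is a hypothetical toy example: the blocklist tokens and function name are placeholders, not any platform’s real moderation logic.

```python
# Hypothetical blocklist; real deployments use curated lexicons and ML models.
BLOCKLIST = {"slurexample", "threatexample"}

def should_auto_flag(message: str) -> bool:
    """Flag a message only if it contains an exact blocklisted token."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

# Exact matches are caught cheaply, relieving human moderators...
print(should_auto_flag("get out, slurexample!"))   # True
# ...but leetspeak, spacing tricks, and sarcasm slip straight through,
# which is why interviewees insisted on a human review layer.
print(should_auto_flag("get out, slur3xample!"))   # False (masked spelling)
print(should_auto_flag("s l u r e x a m p l e"))   # False (spaced out)
```

This is precisely the trade-off interviewees described: cheap automation handles the obvious cases, while adversarial users route around any fixed rule set.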
3 – Adopt specific protocols
Finally, interviewees argued that gaming-adjacent platforms could improve and strengthen these efforts through greater transparency around content moderation guidelines and accountability for how they are enforced. To foster greater confidence, content moderation guidelines should be made more transparent, and platforms should hold both users and themselves accountable under their terms of service, applying sanctions consistently to greater effect. Moreover, stressing the legal ramifications for users who breach these terms of service is essential to reinforce responsible online behaviour and deter potentially malicious use in the future.
Conclusions
Extremists have been exploiting gaming-adjacent platforms for about a decade (according to recent datasets), using them for radicalisation, recruitment, and the spread of their narratives. Gaming-adjacent platforms sit within a broader system of new and emerging tech, with extremist actors engaged in ‘opportunistic pragmatism’: circumventing codes of practice and using such platforms to funnel individuals into more closed online spaces. Like mainstream social media ten years ago, gaming-adjacent platforms are in the early stages of interdicting such content, and currently do so only in the most harmful circumstances. More effort needs to be exerted on proactive enforcement and precise threat escalation procedures, which are complicated by the dynamic, video-based nature of these environments. A more bottom-up approach to P/CVE is therefore needed, one that addresses and mitigates the harmful and toxic cultures that allow extremists to thrive on such platforms in the first place.
Recommendations
Efforts to address problematic content and potential radicalisation in gaming (adjacent) environments need to evolve across several key sectors:
Platforms and Regulators should boost their efforts to monitor and manage content that may be harmful yet remains technically lawful, as outlined by the Global Internet Forum to Counter Terrorism (GIFCT) in 2023. If such efforts are not forthcoming, a shift towards a bottom-up strategy is needed – one that empowers gamers to act as proactive bystanders through platform-led educational initiatives to curb harmful content.
Researchers should focus on the complex interactions between gaming content, radicalisation, and recruitment tactics, favouring more active and engaged research methodologies over passive data collection (such as surveys). A holistic, ecosystem-oriented perspective that moves beyond individual platforms is also essential to understanding these dynamics.
Practitioners and Policy-Makers need to recognise the potential of gaming and related platforms as tools for online outreach to individuals at risk of radicalisation. They need to treat “extremist-adjacent” behaviours and content as a risk factor for radicalisation when designing interventions. In doing so, they should advocate for a proactive approach through safety-by-design, the creation of inclusive spaces, and the reversal of toxic cultures within gaming environments.