
Online Subcultures and the Challenges of Moderation

Florence Keen | 1st October 2020

Introduction

To those who grew up on the Internet, chan culture is nothing new – nor is it immediately synonymous with hate speech, violence and extremism. Indeed, for many, chans are simply places to discuss shared passions with people from around the world – be it video games, anime or music. However, features including user anonymity and a lax approach to content moderation have contributed to a culture of trolling and bigotry on a number of prominent chan boards. This, alongside the string of extreme right-wing attacks in 2019 that had connections to several chans, and the glorification of violence exhibited by users within the culture itself, has meant that governments and law enforcement agencies have been forced to consider the ways in which these spaces may be influencing extremist behaviour.

That said, the prevalence of memes, in-jokes and ironic counterculture across the chans makes it near-impossible to determine the sincerity behind a user’s post, and we can never say with total authority that time spent on a chan board is the ‘smoking gun’ behind an individual’s radicalisation and/or act of violence. Given the overwhelmingly cynical and nihilistic attitudes found within these subcultures, in which anything and everything can be reduced to a joke, what exactly can be achieved by engaging with, and attempting to moderate, chan culture?

A Brief History of the Chans

Chan boards began life in 1999 with the inception of 2channel, an anonymous text-based message board created by a Japanese student studying at the University of Central Arkansas, which remains one of the most prolific online communities in Japan. The anonymous imageboard 4chan arrived on the scene in 2003, and became the birthplace of all manner of phenomena now deemed mainstream, including the Rickrolling prank, Gamergate, the Anonymous hacking collective and, more recently, the QAnon conspiracy theory. In comparison to mainstream social media, which is predicated on public profiles and individual identities, chan culture is designed to be impenetrable and exclusive, operating via what Vyshali Manivannan calls a “zero-identity” approach, in stark contrast to high-visibility online economies that prioritise followers, ratings and usernames.

Whether posted ironically or with genuine intent, hate speech on 4chan’s /pol/ (politically incorrect) board was shown to have increased by 40% between 2015 and 2019. Then, in March 2019, the extreme right-wing attack on mosques in Christchurch, New Zealand, which left 51 dead, propelled chan culture firmly into public and policymaker consciousness: the attacker posted a link to a Facebook livestream of the attack, as well as his manifesto, on 8chan, writing on its /pol/ board that it was “time to stop shitposting and time to make a real life effort.” A number of subsequent extreme right-wing attacks followed a similar pattern, including shootings in Poway and El Paso in the United States, culminating in the online infrastructure provider Cloudflare revoking its relationship with 8chan in August 2019, effectively shutting down its operation. The action was welcomed by 8chan’s founder Fredrick Brennan, who stated “it’s not doing the world any good,” adding weight to a growing perception that, amongst the hundreds of largely innocuous chans, a small number were encouraging a more toxic worldview, and in some cases playing a role in their users’ radicalisation journeys.

Moderating Chans Versus Mainstream Social Media

Unlike mainstream social media companies, which have tangible corporate structures and regulatory best practices, the chan sphere is a much trickier beast to comprehend and curtail. Therefore, while initiatives like the Global Internet Forum to Counter Terrorism (GIFCT) have succeeded in getting a number of social media and technology companies to work together to prevent violent extremists from exploiting their platforms, fringe chans do not participate – it is not within their DNA. As Tech Against Terrorism noted in the aftermath of the extreme right-wing attack in Halle, Germany, in October 2019, the attacker’s livestream and manifesto remained available on 4chan and on other fringe chans on the Darknet, echoing the similarly inadequate response to the Christchurch attack just months before. It is worth noting, however, that the chan ‘Meguca’, where the attacker originally posted material, has since been taken down.

One potential response to curtailing the fringe chans that host the most dangerous content is to encourage private sector companies, such as domain registrars and web infrastructure providers, to terminate their relationships with them. However, as seen with 8chan’s successor 8kun, the removal of one chan from the Clearnet is sure to be followed by the emergence of another, not to mention the option of moving to the Darknet altogether, as some have already done. Therefore, whilst a deplatforming or removal-of-service approach has proven effective in limiting extremist content across more mainstream platforms, the logic of this action cannot easily be replicated in chan culture. In a further divergence from mainstream social media, some fringe chans are acting pre-emptively to preserve themselves in case of removal from the Clearnet by forming alliances with one another. In some instances, this has included establishing ‘bunker’ versions of each other’s most popular boards as a means of maintaining the communities built prior to any removal.

Another fundamental difference between the chans and mainstream social media is their lack of desire for a mass audience, as chans are predicated on attracting more niche and self-referential communities and are built to foster a sense of belonging and ‘in-group’ status. By making themselves largely impenetrable to outsiders in their design, usability and language, they set limits on who can understand and interact on them, weeding out the less committed participant whilst baiting those who fail to understand their codes. It is therefore particularly important that policymakers and practitioners take the time to understand each chan’s unique culture, so as not to risk interpreting all chan content at face value.

Conclusion

In general, chans should not automatically be characterised as hubs of extreme right-wing radicalisation and violence; of the hundreds of popular sites, most are relatively benign. That said, it is concerning that a number of high-profile attackers in recent years have used certain chan boards to post final messages, manifestos and links to livestreams of attacks. Additionally, the glorification of violence and the deification of past attackers within chan culture is a trend that is likely to continue, and it is therefore understandable that policymakers and practitioners are asking whether there is anything more that can be done to curtail these environments.

As part of a wider project investigating online subcultures, memes and the promotion of violence, the authors of this article have spent the past six months observing a number of chans in order to gain a deeper understanding of the way in which visual culture influences violent discourse online, as well as interviewing key experts in the field to consider what policymaker and practitioner responses should be. Overwhelmingly, our findings point to the limited effectiveness of intervention in chan culture, in large part because chans function in a wholly different way from mainstream social media and technology companies. This is due to their entrenched anonymity, lax approach to moderation and hostile attitude towards ‘outsiders’ and ‘normies’: their objective is not to engage with large audiences but to foster niche, self-selecting communities.

Those wishing to engage with the chan sphere must therefore be aware of the impracticalities of moderating this culture, and should not merely replicate practices that have been used in other spaces. Chan alliances, and the ability to move to the Darknet if required, mean that even if one chan is taken down, its contents are likely to resurface elsewhere. Governments should first prioritise greater institutional digital literacy so that practitioners better understand the nuances of chan culture. Without this, they risk misinterpreting chan content, whilst further cementing the ‘in-group’ status of the most fringe and potentially violent chans by attempting to engage and intervene in a misguided way.

This Insight was funded by the Centre for Research and Evidence on Security Threats (CREST) – an independent Centre commissioned by the Economic and Social Research Council (ESRC), and is part of a wider project looking at Memetic Irony and the Promotion of Violence Within Chan Cultures.