
The Rise of the Far-Right Web

10th November 2021, Simon Copland

In October 2021 the former President of the United States, Donald Trump, announced the establishment of a new social media platform, ‘Truth’. Trump has been an ongoing critic of major social media platforms, particularly since his ban from Twitter in January 2021 following the riot at the US Capitol, so the announcement had long been expected. It is assumed to be a central element of a potential attempt to return to office in 2024.

Yet ‘Truth’ is not the first platform to establish itself as a direct competitor to large social media platforms such as Facebook, Twitter, YouTube or Reddit. Over the past few years a number of alternative platforms used by the far-right have been established with this goal in mind, including Parler, Gab, Gettr and more. While some have stumbled and failed, others are thriving, with Gab in particular now hosting millions of visitors. These are no longer fringe spaces, and they need to be properly grappled with.

How have we ended up here? While this trend seems new, it is not. These platforms are attempting to emulate much of the ethos of the early social web, albeit through a more overtly political lens. This offers serious lessons for how to address hatred and violence on social media.

A Brief History of Moderation on Social Media

The early development of Web 2.0 social media platforms such as Facebook, Twitter, YouTube and Reddit was based on libertarian politics that emphasised free speech.

This was motivated by optimism about the potential of social media to drive mass democratisation. Platforms were developed under an ethos of handing control over to the user (or ‘prosumer’), with large platforms initially resistant to engaging in mass regulation of content. Discussing the development of the platform, for example, Reddit co-founder Alexis Ohanian argued:

“The Internet is a democratic network where all links are created equal. And when such networks get hierarchies forced upon them, they break. They start looking a lot more like the gatekeepers and bureaucracies that stifle great ideas and people in the physical world. That’s why we fight so hard to keep them the way they are – open – so any idea that’s good enough can flourish without having to ask anyone’s permission.” (Ohanian, 2013: 10-11)

Yet, facing the challenge of attracting the ‘wrong’ sort of attention, platforms have increasingly had to balance this ethos against other pressures. While selling themselves as ‘neutral’ spaces in which users can create and share content, digital platforms have always engaged in some form of moderation. Initially this focused primarily on illegal content, with sites removing child sexual abuse material or material linked to organised crime. In recent years, however, platforms have faced increased pressure over other types of content they host, with a particular focus on hateful material such as misogyny, racism, homophobia and far-right ideas.

While signals of this pressure began earlier, the 2016 US election was a major turning point, with digital media platforms facing significant scrutiny over the role they played in the spread of misinformation and hateful content during the campaign. The controversies centred primarily on the use of platforms by foreign agents (in particular Russia) to spread disinformation, and on the Cambridge Analytica data-harvesting scandal. These incidents shifted the relationship between the public and lawmakers on one side and large platforms on the other, with the campaign fomenting significant distrust in the previous models of these spaces.

Large social media platforms have faced a growing trial of legitimacy. Latzko-Toth argues that before a technology “gets integrated – or rejected – in a given social milieu, the technology is subjected to a ‘trial of legitimacy’, where its relevance, meaning, and compatibility with the group’s norms and values are examined and debated.” Since around 2016, large digital media platforms have faced an intensified version of such a trial, with the pressure having serious potential to damage their political and social capital, and in turn their bottom line.

In turn, platforms have implemented dramatic changes. Facebook, Twitter and YouTube, amongst others, have become more active in regulating content in their spaces, including bans of high-profile figures such as Milo Yiannopoulos, Alex Jones, and Donald Trump. Facebook has also developed a fact-checking team, and Twitter infamously fact-checked several tweets from Donald Trump in the lead-up to the 2020 election. Even Reddit, one of the most libertarian spaces on the web, banned over 2,000 subreddits in 2020, including the largest dedicated to supporting Trump.

Returning to the Initial Ethos of the Web

It is these changes that have been a driving force behind the rise of the far-right web. Over recent years, high-profile conservative figures have heavily criticised these shifting moderation policies. For example, in 2019, immediately following the bans of Infowars founder Alex Jones from multiple platforms, Donald Trump tweeted his anger. He said:

“I am continuing to monitor the censorship of AMERICAN CITIZENS on social media platforms. This is the United States of America — and we have what’s known as FREEDOM OF SPEECH! We are monitoring and watching, closely!!”

These concerns have fed growing critiques of mainstream social media platforms. Following his ban from Twitter, for example, Milo Yiannopoulos said it was no longer a place for conservatives. He said:

“With the cowardly suspension of my account, Twitter has confirmed itself as a safe space for Muslim terrorists and Black Lives Matter extremists, but a no-go zone for conservatives.

This is the end for Twitter. Anyone who cares about free speech has been sent a clear message: you’re not welcome on Twitter.”

Conservative commentators and politicians alike have accused large digital media companies of engaging in campaigns of censorship against right-wing figures, positioning shifting rules as a specific attack on the right. Believing they have no space left on the mainstream web (which has often been true), these figures have turned elsewhere.

I have found this in my own research on bans of misogynistic material on Reddit. In 2018 Reddit applied a ‘quarantine’ to two high-profile men’s subreddits, r/Braincels and r/TheRedPill. A quarantine does not ban a subreddit but severely limits access to it and its functionality. My research found that while the quarantine reduced engagement in both subreddits, in turn reducing access to the misogynistic material on the site, it also resulted in a concerted campaign for members to leave Reddit for less regulated sites. Moderators of r/TheRedPill in particular pushed users towards a much less regulated self-run forum. This matches other research in the area, which has consistently found that bans push users elsewhere, often to unregulated far-right spaces.

This could be considered a good thing, as it protects other social media users from the hatred of these participants. However, it comes at a cost. The sites users migrate to are far less regulated and less diverse, allowing hateful material to go unchecked. These sites also deepen the divide between the mainstream and the far-right online, with bans and other hardline moderation practices further isolating individuals from mainstream institutions. In effect, these bans make the issue “someone else’s problem.” The hatred still exists, just somewhere else on the web.

Lessons to be Learnt

We can begin to understand the rise of platforms used by the far-right, such as Parler and Gab, as an attempt to return to the libertarian politics of early social media. These platforms have been marketed as spaces of free speech, emerging in reaction to the more stringent moderation policies of the large digital media platforms.

In turn, platforms like Gab, Parler, Gettr and Truth are becoming problems that we need to comprehend. Their rise suggests that bans alone cannot serve as a solution to hate online: the web is too unregulated, and individuals and groups can simply move elsewhere. As Bharath Ganesh argues, “without a consensus across all web hosts, it is almost impossible to prevent the migration of digital hate culture to other, less-regulated web hosts.”

If we want to address hate online, therefore, we need to be more creative, tackling the causes of the problem and not just its outcomes. Proper engagement, particularly with those on the margins, is needed to turn people down a different path. We cannot simply regulate ourselves out of the problem.