
Banning Nazis or “Burning Books”? How Big Tech is Responding to Hate Speech, and the Implications

2nd July 2020 Florence Keen

This week, YouTube took the decision to kick a number of influential far-right accounts off its platform, including those of the American white supremacist Richard Spencer, who is credited with coining the term “alt-right”; the Canadian white supremacist Stefan Molyneux, whose videos have seen him polemicize on scientific racism and eugenics; and the former Ku Klux Klan leader David Duke, a leading figure on the American far-right with particular notoriety for anti-Semitism and Holocaust denialism. Following the bans, a YouTube spokesperson stated: “We have strict policies prohibiting hate speech on YouTube, and terminate any channel that repeatedly or egregiously violates these policies.” The bans come just over a year after YouTube updated its hate-speech guidelines to prohibit videos that assert a group’s superiority in order to justify discrimination on the basis of a person’s gender, race, religion or sexual orientation, and to remove videos that glorify Nazism or promote damaging conspiracy theories, including Holocaust denialism.

Pressure has been building on social media companies to do more to clamp down on hateful content, and YouTube is not alone in taking action in recent days. Facebook announced the designation of the violent US-based anti-government “Boogaloo”-linked network under its Dangerous Individuals and Organizations policy, less than a week after Discord shut down the largest Boogaloo-affiliated server in operation on its platform. Reddit has also banned several popular subreddits, including r/The_Donald, its largest pro-Trump community, as well as r/ChapoTrapHouse and r/GenderCritical, for similarly “promoting hate.” Finally, the Amazon-owned streaming platform Twitch temporarily suspended Donald Trump’s channel for violating its policy against “hateful content”, a rebuke of the US president that is becoming all too familiar after Twitter hid one of his tweets behind a warning that it glorified violence in May of this year.

For many, these actions are welcome – if long overdue – particularly as extreme right-wing violence is on the rise globally, meaning that the connection between online and offline hate is once again under scrutiny. Deplatforming raises important questions about the extent to which digital realms influence real-world violence, and about its efficacy as a tool for disrupting these chains of hate. Moreover, it is important to recognise that the actors in question are flexible and dynamic, and will likely continue to make use of less moderated spaces to organise and push their narratives – limiting our ability to eradicate hate speech in its entirety. Underscoring all of this is the highly politicised role that social media companies are being forced to take, in an age of polarisation and disinformation, where hateful ideas and conspiracy theories are being mainstreamed, and advertisers are turning their backs on platforms that are not seen to be “doing enough” to combat them.

Online hate, offline violence?

The reaction of those banned by YouTube was predictable, with Stefan Molyneux uploading a statement to BitChute in which he denied fomenting violence and hatred, unironically equating YouTube’s decision to “book-burning” in what he described as a “highly coordinated effort” to “remove the middle.” Within the far-right universe, Molyneux is a big hitter: his main YouTube channel boasted 928,000 subscribers before its takedown, having achieved more than 300 million lifetime views. His deplatforming is therefore significant, particularly as he is seen as a gateway into the “alt-right rabbit hole” for many young men. What we don’t know for certain is the extent to which the “rabbit hole” influences violence, given that radicalisation is a deeply complicated process, driven by any number of factors including political, economic, social and psychological conditions.

This connection is hotly discussed within academic and policy circles, with studies indicating how enabling environments and support structures provided by social media “advance” the radicalisation process of susceptible individuals, and how online “filter bubble[s]” ensure that individuals are exposed to mutually reinforcing extremist content. Amongst its social media peers, YouTube has garnered a particularly bad reputation for these practices, with evidence showing that if an individual engages with extremist content on YouTube, they are significantly more likely to be recommended extreme and fringe content via YouTube’s algorithms. Rebecca Lewis has aptly referred to this as the “alternative influence network”, in which audiences are able to move from mainstream to extreme content through exposure to political influencers and their guests, who adopt the tactics of brand influencers in order to “sell” audiences on far-right ideologies. This, she argues, “facilitates radicalisation.”

In the current political climate, it is right that we scrutinise how social media platforms and the far-right actors they host shape the worldviews of extreme and potentially violent actors. Whilst we can never say with absolute authority that exposure to a YouTube video or Facebook group is the driving force behind an act of violence, the reach of the virtual far-right is manifest. What Bharath Ganesh has described as a “digital hate culture” is amorphous in nature, its strength lying in its ability to defy categorisation and spread to all corners of the Internet, from mainstream social media platforms to imageboards such as 4chan, encrypted messaging apps such as Telegram, and the Dark Web. Although content moderation and deplatforming will never eliminate online hate in its entirety, the logic goes that by removing the most visible nodes from the system, the likelihood of a “normie” being exposed to and radicalised by hateful and violent discourse is mitigated.

To ban or not to ban?

Support for deplatforming is not universal, with some arguing that censorship creates the opposite effect – prohibited philosophies become “forbidden fruit”, unwittingly drawing attention to the very ideas you are trying to quash. There have also been concerns about the migration of users from one platform to another when an account that they follow is suspended. Platforms such as Gab have even positioned themselves as alternatives to mainstream social media in the wake of crackdowns on far-right activity, with Gab’s Chief Operating Officer stating in 2018 that he was “worried about people’s rights” and that “a lot of political speech is being labelled as hate speech and is simply being wiped off the map.”

However, platform migration has not manifested in any significant way, as Hope Not Hate demonstrated with the case of far-right provocateur Milo Yiannopoulos. After his mainstream social media bans between 2017 and 2018, he moved to Gab and was immediately confronted with a drastically diminished audience, lamenting: “I lost 4 million fans in the last round of bans…I spent years growing and developing and investing in my fan base and they just took it away in a flash.” There is thus compelling evidence to suggest that deplatforming and hate speech moderation work, in that alternative platforms simply have a smaller and more niche audience – meaning that even if an individual or organisation is forced to jump ship, they have a limited recruitment pool from which to draw, and the potency of their message is decreased. Milo Yiannopoulos and his peers still have the right to speak (despite what free speech lobbyists will claim), but whilst they were once headline acts, they have been relegated to the 3am slot, oftentimes simply shouting into the void.

Other critics of deplatforming have argued that this kind of practice amounts to nothing more than liberal virtue signalling, in which big tech’s political and moral centre “skews consistently left”. Depending on your ideological stance, the notion that Silicon Valley is inherently left-wing may be laughable, although tech companies are not impervious to political pressures, as Mark Zuckerberg must surely be reckoning with; Facebook’s stock fell 8% in just two days at the end of June, after advertisers including Starbucks, PepsiCo and Coca-Cola announced an advertising boycott over its failure to tackle hate speech effectively. The recent actions of YouTube, Facebook, Reddit and Twitch are therefore better explained by efforts to avoid reputational damage and subsequent economic sanctions than they are evidence of big tech’s “radical left-wing” bias, as Donald Trump himself has argued.

Conclusion

Whether or not you agree with deplatforming philosophically, the removal of key players from the mainstream digital hate scene has a demonstrable impact on their ability to transmit their messages. In the midst of a global pandemic, economic uncertainty, heightened racial tensions and upcoming US elections, white supremacist narratives and the conspiracy theories that the likes of Richard Spencer, Stefan Molyneux and David Duke are known to promote are thriving – and it is the responsibility of tech companies to take action where necessary.

When it comes to the most dangerous ideas, this is not simply a debate about free speech – it is about public safety and trying to prevent genuine harm. Hate speech has always existed and always will, but where we can limit its oxygen and push it into increasingly small spaces, we should.