
Weighing the Value and Risks of Deplatforming

11th May 2020 Ryan Greer
In Insights

Last month, the video platform TikTok banned far-right extremists Britain First and Tommy Robinson, the latest action taken by a tech platform to address hateful and extreme content by sanctioning abusers. Platforms' embrace of deplatforming as the default response to repeated or severe violations of terms of service shows progress in prioritising the issue of online extremism, but as a tool it is a blunt instrument that may not be equally valuable in all circumstances. Not all platforms can or will address all content equally efficiently, and whether they should do so requires an assessment of unintended consequences. Whether platforms are weighing those factors carefully, or simply reaching for the most straightforward tool at their disposal, remains to be seen.

Addressing harmful content that could lead to hate, extremism, and terrorism is critical for tech platforms, sometimes for legal compliance and other times simply because it is imperative to protect their users and our communities. For a sense of scale, recent transparency reports show that between January and June of 2019, Twitter took action against almost 600,000 accounts for violating policies related to hate, while Facebook took action against 17.8 million pieces of content over terrorist propaganda concerns and 15.5 million for hate speech between January and September of 2019. The Global Internet Forum to Counter Terrorism asserts that its joint hashing database – the shared mechanism through which large tech companies such as Facebook, Microsoft, Twitter, YouTube, and others post or find terrorism-related content – contains hashes of over 200,000 unique pieces of content. When these actions manifest as banning a user, the result can be severe: an oft-cited example of the success of deplatforming is that of far-right provocateur Milo Yiannopoulos, who may be as much as $2 million in debt following bans that removed his ability to profit from his notoriety. Alex Jones' media outlet InfoWars drew about 1.4 million daily views of its site and videos before being banned from YouTube and Facebook, and 715,000 afterward, according to the New York Times' analysis.

On the other hand, these results raise questions about whether platforms are efficient in carrying out bans. Jones, for example, launched an "Infowars is Back" page on Facebook an hour after it banned InfoWars. Proxy channels emerged on YouTube, sharing Jones' videos with over 1.6 million viewers (including 550,000 views in a single thirty-day period) and amassing 10,000 subscribers. Lesser-known antisemitic and white supremacist channels have managed to circumvent attempted bans. If the strategy for addressing online extremism must be "whack-a-mole," there is considerable room to improve efficiency in finding users and content to ban, implementing bans, and finding and removing proxies.

Beyond efficiency is effectiveness: banning an individual or group may feel cathartic, but whether it achieves the desired result of degrading and helping defeat extremists and their movements is a far more central question. The verdict on that is, unfortunately, unclear.

Researchers at the Georgia Institute of Technology looked at bans on Reddit, concluding that many users sanctioned by Reddit for hate speech left the platform entirely, that those who stayed reduced their hate speech on Reddit by 80-90 percent, and that many migrated to new Reddit threads. Audrey Alexander's study for the George Washington University Program on Extremism shows that mass bans of Islamic State (IS) followers on Twitter "deteriorates [IS'] followers' ability to gain traction on the platform, likely hindering their reach to potential recruits" and acknowledges that the "decay" on Twitter corresponded with IS' strategic shift to Telegram as its platform of choice.

Strategic success for mass bans has often been interpreted (1) as "digital decay" on the individual platform in question, rather than across the integrated online ecosystem, and (2) in terms of the volume of users and their hateful content, rather than the escalation or de-escalation of extremism itself.

Telegram, for example, became the platform of choice for jihadists as mainstream platforms began to use bans, removing IS sympathizers' ability to recruit followers from a mainstream audience but driving their online communications underground to a less visible and less regulated platform. Now it is also becoming a destination for the global white supremacist movement.

Similar platform migration has led to extremist use of VK, the Russian Facebook equivalent; Gab, far-right extremists' Twitter equivalent; and lesser-known sites to which users would likely move if those platforms began regulating them – sites that, as ADL analysis suggests, could include WrongThink, minds.com, toko.tech, MeWe, or freezoxee. The evolution of the "chans" is illustrative: bringing attention to 4chan and 8chan may have led to particular actions to limit extremist content on them, but it also led 8chan to go dark and return several times, and gave rise to Endchan, 7chan, and myriad other copycat sites that aim to circumvent attempts at regulation.

According to an analysis by ADL and the Network Contagion Research Institute, a month in which "a Twitter mass ban took place corresponded to more than double the percent of new members on Gab than a typical month." The frequency with which these users referenced the ban, and the corresponding spiteful references to censorship (e.g. "fascistbook" and "goolag"), suggest that new users are joining Gab because of mass bans on another platform, and that being banned fueled their anger – not self-reflective anger over the behavior that got them banned, but anger toward the authorities that banned them. Another study, looking at Facebook and VK, reached similar conclusions. This analysis suggests that the grievances that fuel far-right extremism may be heightened in users who are banned from mainstream platforms, and that those grievances are then expressed in fora with less oversight and a higher proportion of like-minded members. In other words, there is a distinct possibility that deplatforming trades high exposure to a broad population for more extreme exposure to other extremists. And no amount of whack-a-mole will prevent extremists from finding the next forum on which to post their hate and recruit new followers, with authorities potentially unaware of the platform migration.

Removing users and content also hinders investigation and research into the threat. Imagine an individual who poses a security concern and whose primary means of being discovered by law enforcement is online behavior – for example, Conor Climo, whose online conversations and support for the Feuerkrieg Division led law enforcement to search his home, where they found bomb-making materials and evidence of violent plots. If such a suspect were removed from all platforms that can be accessed by law enforcement and informants, plots may continue, but out of sight. Further, researchers who study such behavior to inform policymakers and the public lose visibility into it once it is removed, which could distort public opinion and decision-making by leaving an inaccurate picture of the threat.

Deplatforming may limit the breadth of hate and extremism on mainstream platforms but increase extremists' motivation to plot, and to do so in secret. On the other hand, allowing hate unfettered access to the world's most powerful megaphones to recruit more to their cause is similarly risky. Neither, of course, is an acceptable outcome, which is why comprehensive approaches – and comprehensive research into what works – are needed. Whether that means giving law enforcement more opportunities to track extremism, giving tech platforms better ways to enforce their terms of service, or promoting "good speech" to overwhelm hate and extremism online, comprehensive, integrated approaches are necessary.