Male Supremacism, Borderline Content, and Gaps in Existing Moderation Efforts

6th April 2021 Ye Bin Won

Leaders of Google, Facebook and Twitter testified once again before the US House of Representatives in a hearing titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation.” In addition to pertinent inquiries into social media’s role in the insurrection at the US Capitol on 6 January 2021, the CEOs were asked pointed questions on what has become a far too familiar topic: the efficacy of content moderation practices against the misinformation and extremist content rampant on their platforms.

The past year has seen renewed efforts targeting networks of anti-government extremists, Europol action against “the dissemination of online terrorist content” on Telegram, as well as belated attempts by major platforms to respond appropriately to the threat posed by the QAnon conspiracy theory. However, while some of these movements have migrated to more fringe platforms, reporting suggests that major platforms continue to be exploited by malicious actors spreading misinformation and hate speech. Previous research has detailed the prevalence of what some platforms describe as “borderline content” – the intentional use of ambiguous language, humour, memes, and similar communication tools that allow actors to bypass censorship. These tactics pose a unique challenge to increasingly automated moderation efforts, particularly when dealing with posts assessed to come close to violating community standards but ultimately judged not to cross the line.

Material espousing male supremacism – an extremist political belief that men are biologically superior to women – often represents a challenging form of borderline content on mainstream platforms. Male supremacists express desires to dominate women’s bodies and spaces and espouse anti-feminist views that malign women as dishonest, manipulative, unintelligent, and morally bankrupt. Despite their vitriolic rhetoric, male supremacists such as involuntary celibates (incels) and Men’s Rights Activists continue to operate freely on mainstream social media platforms. Crucially, numerous acts of targeted violence offline continue to show links to male supremacism – including the Toronto van attack and the Tallahassee yoga studio shooting in 2018, the 2020 Crown Spa stabbing, and the 2020 shooting that targeted US District Judge Esther Salas at her home in New Jersey.

So why is this supremacist ideology allowed to remain on so many platforms, despite being cited by so many violent perpetrators? One key reason may be that embedded misogyny not only leads us to underestimate the violent potential of male supremacism as an ideology, but also precludes us from acknowledging violent misogyny’s ubiquity. Consequently, male supremacist violence is often treated as a secondary concern to other supremacist ideologies. Another reason may be the preference of those who espouse these beliefs to produce borderline content on mainstream platforms. While most male supremacists online stop just short of calling for femicide, many express support for gender-based violence such as rape, beatings, and harassment. On mainstream platforms, they further “soften” their rhetoric. Take, for instance, gendered slurs, which most platforms explicitly ban: on non-mainstream websites such as Incels.co and 4chan, incel terms such as “noodlewhores” (a derogatory slur for Asian women) and “femoids” flourish. On major platforms, by contrast, incels often choose less explicit terms such as “meme gender,” “Stacys,” and “virtue signallers” to degrade women. In doing so, the violent, discriminatory, and harmful nature of their hateful beliefs not only remains potent, but is made all the more digestible, accessible, and repeatable to a wide audience.

Many social media companies rightfully recognise the slippery slope of moderating borderline content. However, they also often fail to adequately and consistently deplatform or moderate content creators who, often intentionally, operate at the edge of violating community guidelines. While most platforms prohibit gender-based hate speech, the continued presence of male supremacist content on major platforms calls into question the effectiveness of existing content moderation efforts. For instance, Twitter prohibits speech that dehumanises people on the basis of protected characteristics such as race, religion, ability, and gender, yet continues to host prominent male supremacist accounts. One example is the Twitter “arm” of Incels.co, an incel-specific website. Rather than repeating the dehumanising and misogynist rhetoric that proliferates on Incels.co, the account pushes a more “palatable” message of an oppressed and misunderstood community. This, in effect, mainstreams the account’s hate and allows it to evade censorship despite its direct connection to a virulently male supremacist website.

In the case of Facebook, Tier 2 of its hate speech policy prohibits denigrating speech against protected characteristics and specifically mentions male supremacy as an example. Despite this, a recent ruling by Facebook’s Oversight Board calls into question how consistently – and faithfully – these standards may hold. The Board overturned Facebook’s removal of a post that “disparaged Muslims as psychologically inferior” because it deemed that the post “did not advocate hatred or intentionally incite any form of imminent harm.” Given that most male supremacist content on mainstream platforms falls into this category (e.g., suggesting that women are morally bankrupt, understating violence against women), this precedent may set the tone for how male supremacist content is moderated in the future.

In 2019, YouTube announced that it was banning supremacist content and putting more effort into moderating borderline content and misinformation. While YouTube boasted that it had reduced watch time of borderline content by 70%, experts have questioned its success in diverting people away from harmful content. Furthermore, the continued existence of “alternative influence networks” that toe the line of YouTube’s hate speech policy casts doubt on the platform’s ability to counter borderline content. As for male supremacists, though some outspoken figures have been deplatformed, most carry on freely with growing viewership.

US Government efforts to tackle this issue also warrant introspection. Despite the growing misinformation threat, the focus of the Congressional Committees with jurisdiction to legislate in the online space seems to remain squarely on pursuing sweeping antitrust reform amidst continued efforts to push through changes to Section 230 of the Communications Decency Act. After early bipartisan efforts in the previous Congress failed to gain sufficient traction, the initial steps of the 117th Congress have done little to assuage long-held concerns over the effectiveness of congressional oversight of platforms’ content moderation efforts.

Balance, however, is needed. As experts in both content moderation and extremism have long argued, placing sole responsibility for decision-making in the hands of major social media companies is problematic. Similarly, there are clear risks associated with excessive government regulation of speech. The messaging of online actors rooted in hate and extremism is not always easily distinguishable from legitimate speech. The task of marginalising this rhetoric – rooted in the same harmful narratives that have inspired individuals to commit acts of violence – is herculean.

To date, threats undoubtedly remain from a range of actors – from jihadists who continue to seek to exploit social media platforms to Russian Government attempts to spread pandemic disinformation and undermine democracy itself. However, this growing and diffuse milieu of extremist actors presents a far more ambiguous and pervasive threat if left to fester online. More diligence is required to mitigate the use of veiled or coded language by bad actors in furtherance of their respective extremist ideologies. It is crucial for future policy responses and platform action to take into consideration the complexities of borderline content. While borderline content does not explicitly violate policies, its unmitigated perpetuation allows these communities to flourish. Technology companies will have to get the balance right, as ideologies such as male supremacism continue to inspire violent attacks, fuel dangerous rhetoric, and sometimes serve as gateways to other extremist communities.

Ye Bin Won is a Research Assistant with the Program on Extremism and a junior at Georgetown University pursuing a major in International Politics and a certificate in Jewish Civilization.

Jonathan Lewis is a Research Fellow with the Program on Extremism, where he studies extremist movements in the United States.