This Insight is part of GNET’s Gender and Online Violent Extremism series in partnership with Monash Gender, Peace and Security Centre. This series aligns with the UN’s 16 Days of Activism Against Gendered Violence (25 November-10 December).
Cross-burnings and public lynchings were among the symbolic acts white supremacists such as the Ku Klux Klan used to send a ‘message’ to marginalised communities that they were not safe. In the past, the threat was solely in-person and varied across geographical areas; the internet has extended the reach of that danger, created new environments for it, and supplied new victims to target. Social media has become a vital part of our lives as we use platforms to advertise opportunities, highlight research, and connect with our peers. Victims face threats such as doxxing, hacking, and swatting from online actors simply because of the work they do in their communities, and vulnerable communities and change-makers contend with an entirely different set of dangers. According to Psychology Today, identity is “the memories, experiences, relationships, and values that create one’s sense of self. This amalgamation creates a steady sense of who one is over time, even as new facets are developed and incorporated into one’s identity.” While many individuals would choose to avoid danger, many minorities, women, LGBTQ+ people, and members of non-Christian religions risk their lives to carry out work on violent far-right extremism. A person who does this work requires strength, resilience, and a strong sense of self to continue manoeuvring through daily life. Unfortunately, social media platforms inadequately protect vulnerable populations victimised by far-right actors espousing exclusionary and dehumanising sentiments.
The harassment faced by vulnerable populations is typically designed to inflict psychological distress and fear of the unknown. Because many platforms permit anonymity, and many users operate behind phoney online personas, it is difficult to apprehend perpetrators or pursue legal remedies for the distress they cause when their identity is unknown. Some may tell victims to “just close your computer” or log off, but that is easier said than done: the online and offline environments have become intertwined, and the unknown online extremist can be anyone. While some researchers choose to use pseudonyms and fake profiles, it is important to acknowledge that you cannot expect a white gay man to convincingly pose as a straight cis white male online, or a woman of colour to invent a ‘white name’ and perform a ‘white experience’ she has never lived. Many individuals are targeted by extremists precisely because their work delegitimises those extremists and prevents hate from spreading.
For example, in 2017, Neo-Nazi and Daily Stormer founder Andrew Anglin instructed his online followers to harass, bully, and dox Taylor Dumpson and myself. Taylor was the first black female student government president at American University; we lived together at the time and served as president and vice-president of the Lambda Zeta chapter of Alpha Kappa Alpha Sorority, Inc. I was targeted alongside Taylor because we were roommates and held leadership positions within our Sorority. Anglin’s followers evaded content moderation using memes, links, and masked rhetoric, and many kept their accounts despite the damage they inflicted across social media platforms. The situation was frightening at the time, and the psychological impact has been long-lasting; the incident occurred online, perpetrated by someone neither of us had ever met, with the intent to produce real-world harm. Still, Taylor is now a public speaker and a civil rights attorney at the Lawyers’ Committee for Civil Rights Under Law.
Many bad actors online utilise ‘coded’ language that does not trigger content moderation mechanisms as a tactic to troll and harass. The language is aimed at the in-group: individuals who spend enough time online to understand it. The rhetoric primarily consists of racist, sexist, antisemitic, or anti-LGBTQ+ verbiage or memes deployed against individuals who identify with those communities, with the true intent usually hidden behind humour so that targets do not immediately recognise it. Additionally, artificial intelligence still has limitations that make it difficult for platforms to moderate memes and photographs, a gap extremists exploit as another tactic.
In 2016, Lizzy Waithe, a trans woman in South Dakota, died by suicide after a coordinated harassment campaign planned on a toxic internet forum; her suicide note went viral. The harassment included memes and anti-LGBTQ+ slurs targeting Waithe. Even after her death, the hate continued as trolls mocked her suicide letter with vulgar hashtags and modified their profiles to satirise her passing. In 2022, another coordinated harassment campaign was launched on the same forum against trans Twitch streamer Keffals because she sought to remove the forum from the internet. The harassment began online but produced real-world dangers when she was doxxed, swatted, and hacked, ultimately forcing Keffals to flee her residence for her safety. Last year was one of the deadliest years for the transgender and gender non-conforming community, with 57 individuals killed or injured because of their identity. When the world saw the damage the forum had inflicted and Keffals’s dedication, Cloudflare dropped the forum from its internet services.
In one of the worst atrocities in human history, six million European Jews were killed during the Holocaust as a result of the antisemitic, eugenicist, and religious supremacist beliefs of an elected government. Since 2016, the US has seen six deadly attacks against Jews, and the Jewish community is currently the most targeted religious group in the United States. Antisemitic violence persists in part because once-‘coded’ concepts such as the ‘great replacement’ and ‘white genocide’ have been mainstreamed and incorporated into the vocabulary of influential figures and television personalities. Moreover, extremists cite these concepts in their manifestos as justification for their attacks.
Identities make people unique and allow them to live their truth. Individuals should not fear for their lives because of who they are. When public figures lend legitimacy to hate, conspiracy theories, and racism, those views become easier to normalise. Social media platforms must therefore provide safety to users targeted because of their identity, as online harassment can turn into offline danger. In addition, media outlets must stop giving public figures a platform to continue spewing their problematic claims, since such coverage broadcasts their message to audiences beyond a public figure’s traditional following. It may be challenging to comprehend rhetoric masked by humour or the true meaning of a meme, but doing so may be the difference between life and death. The harassment can come as public posts, private direct messages, or in-person encounters, but that fear should not hinder you from being yourself.