
Transmisogyny, Colonialism and Online Anti‐Trans Activism Following Violent Extremist Attacks in the US and EU

24th May 2024 | Anne Craanen

This report investigates the rise of online anti-trans activism following two prominent attacks involving LGBTQ+ communities: the October 2022 attack on a gay bar in Bratislava, Slovakia, and the March 2023 shooting at a school in Nashville, Tennessee, perpetrated by a trans man.

We use a postcolonial approach, through which we find that the transphobia espoused online following the attacks was predominantly transmisogynistic, a consequence of colonial logics around gender that assign the monopoly on violence to white cisgender men. The main themes identified were the erasure of trans identities, particularly transmasculinity; the overlap between transmisogyny and other forms of discrimination; and the demonisation of trans people.

The most important conclusion from our research is that everyone – technology companies, policymakers and other stakeholders – must take transphobia and transmisogyny seriously. Too often, transmisogyny is treated as a side problem, or as an adjunct to a set of supposedly more radical ideas, including but not limited to white nationalism or anti-government sentiment. Transphobia, alongside misogyny, hate speech and other forms of discrimination, is often deemed “harmful but lawful” or described as “borderline content”, and therefore considered not to require online moderation. While simply removing such material from platforms may be neither appropriate nor advisable in all cases, there are other forms of content moderation that platforms can consider, depending on how online transphobia manifests itself.

In the conclusion of our work, we provide practical recommendations for technology companies of all sizes to tackle transphobia more effectively. Key among these are knowledge-sharing between platforms and subject-matter experts, defining transphobia and transmisogyny in platforms’ terms of service, and employing content moderation practices such as disinformation tags and algorithmic deprioritisation.

Recommendations for technology companies:

  1. Increase online monitoring following attacks that are directly relevant to the LGBTQ+ community, as transphobic content is likely to increase, including material that violates terms of service, incites violence, or is otherwise illegal.
  2. Collaborate with experts to comprehend and classify transphobic rhetoric, and produce a taxonomy alongside subject-matter specialists, technology representatives, civil society, and government partners.
  3. Consider diverse moderation methods, removing illegal content and also using alternatives to removal such as fact-checking and algorithmic adjustments to mitigate exposure to transphobic channels and content.
  4. Define transphobia in terms of service to make clear to users what is allowed on the platform and to enable user reporting.
  5. Design clear reporting and appeal mechanisms for moderated content, including online transphobia, to protect digital and human rights.
