
The Cissexist Assemblages of Content Moderation 

12th December 2022 Rae Jereza

This Insight is part of GNET’s Gender and Online Violent Extremism series in partnership with Monash Gender, Peace and Security Centre. This series aligns with the UN’s 16 Days of Activism Against Gendered Violence (25 November-10 December).

Introduction

In recent weeks, there has been a mass re-platforming of anti-trans accounts on Twitter. The direct actions of Elon Musk – a white cis man and self-proclaimed ‘free speech’ absolutist whose wealth stems from colonialism – have already shifted content moderation towards permitting anti-trans rhetoric. Such developments are dangerous and life-threatening, as they encourage violent attacks against LGBTQ communities. The shooting at Club Q in Colorado Springs, which left five dead and nineteen injured, and the threats received by children’s hospitals that provide gender-affirming care are only a few recent examples of anti-LGBTQ ideologies mobilised as violent action. Such attacks are part of a two-year campaign by far-right influencers to demonise LGBTQ people, casting us as a threat to children and, ultimately, to the white nuclear family [1]. 

Yet, gendered moderation policies are not limited to more explicitly violent assertions of the cisnormative heteropatriarchy. In an article examining how content moderation processes reinforce normative gender roles, Ysabel Gerrard and Helen Thornham introduce the concept of “sexist assemblages”: the shifting elements of content moderation that reproduce gender norms on social media. Echoing – and explicitly referencing – the work of Safiya Umoja Noble on representations of Black women and girls on Google, Gerrard and Thornham show how white femininities are “stabilised” through hashtags, recommender systems, and community guidelines, among many other elements. They conclude by calling for research that pinpoints “other elements of the assemblage” to get at the links between content moderation and the social world. This Insight builds on the notion of sexist assemblages to think through and situate this moment of increased violence against trans people online and offline in the US.

Cissexist Assemblages Online

Instances of anti-trans violence are alarming and worthy of our attention and intervention. However, there is a tendency to talk about anti-trans acts in presentist ways and frame them as extreme exceptions in an otherwise progressive society [2]. I argue that addressing this violence – indeed, preventing it from recurring – rests on revealing and destabilising mundane cisnormativity, which provides what Cynthia Miller-Idriss calls “the fertile ground” for supremacist ideologies and affective investments to “thrive.” Doing so entails examining content moderation as a set of strategies that go beyond content removal: as an assemblage of competing practices, interests, and concerns – what I have called “the ecology of content moderation” – that reproduces cisnormativity on platforms. These cissexist assemblages may not always entail anti-LGBTQ slurs or narratives, and they often do not involve explicit calls to violence against LGBTQ communities. Nevertheless, they contribute to “cisgendering reality”:  

…erasing, othering, and punishing non-cisgender existence and experience throughout mainstream social institutions, interactional patterns, and structural arrangements in ways that allow people to accept a world without non-cisgender people (emphasis mine).  

Cissexist assemblages are therefore the practices within mainstream content moderation that reinforce a view of the world devoid of trans and non-binary people.   

LGBTQ users, organisations, and their advocates have long documented elements of mainstream platforms’ cissexist assemblages. In 2019, the Salty Algorithmic Bias Research Collective published a report describing how Facebook’s (now Meta) content moderation practices affect LGBTQ Instagram users. Based on a pool of 118 respondents who “identified as LGBTQIA+, people of color, plus sized, and sex workers or educators,” the research collective found that users from these marginalised communities had their profiles deleted or disabled with little to no explanation. For instance, users reported that posts advertising products for women and non-binary people had been removed, while products for “erectile dysfunction” remained on the platform. Kendra Albert and Oliver Haimson have also drawn attention to the practice of removing trans and non-binary crowdfunding campaigns from Instagram on the grounds that they constitute violations of Meta’s nudity and sexual solicitation policies. In a letter to Meta’s Oversight Board published in 2022, Albert and Haimson discuss the various ways in which trans and non-binary people’s posts are moderated through a “cisgender gaze” that shapes algorithmic moderation. These practices prevent trans and non-binary people from “participating in the public sphere” and make it difficult for them to access funds for gender-affirming care in a context where healthcare is increasingly inaccessible. Furthermore, they reinforce the notion that trans and non-binary people’s bodies are inherently lascivious and their very existence hypersexual, in ways that easily feed into – and enable – abhorrent far-right narratives that cast trans people as “groomers” and “pedophiles.” 

How might we challenge content moderation’s cissexist assemblages, “stabilised” and made apparent through the removal of content posted by trans and non-binary people? Grindr’s whitepaper, titled “Best Practices for Gender-Inclusive Content Moderation,” presents some helpful strategies to counter cissexist assemblages. For instance, the authors’ suggestion to have “gender-free photo rules” dovetails with Albert and Haimson’s suggestion that Meta rework its nudity and sexual solicitation policies “to eliminate the engines of disproportionate harm, rather than attempting to create exceptions for transgender users and their content.” Doing so will not eliminate the broader societal tendency to hypersexualise LGBTQ people’s bodies, but it is a step towards humanising LGBTQ people, at least online. The authors’ further suggestions – that platforms provide “inclusive gender options,” “include pronouns,” and be cautious with open text fields, among other measures – are also welcome steps in the right direction. In terms of moderation policies, the authors recommend including statements that explicitly welcome trans and non-binary people and disallow discrimination on platforms. They also suggest that companies clearly state rationales for their moderation decisions, spell out what they do allow on the platform, and train their moderation teams in gender-inclusive ways of moderating.  

These recommendations are a good start, but they address social media’s cissexist assemblages only narrowly. For example, the range of algorithmic harms trans and non-binary people experience goes beyond how images are moderated. As Thiago Dias Oliva, Dennys Marcelo Antonialli, and Alessandra Gomes have noted, algorithmic moderation can result in LGBTQ speech genres being flagged as harmful. There is thus a need to consider a broader range of semiotic objects produced by users. Moreover, companies should consider how cisnormativity shapes the way these objects are interpreted by both AI and human content moderators.  
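To make the over-flagging problem concrete, below is a minimal, deliberately naive sketch of a context-blind, keyword-based filter of the kind Oliva, Antonialli, and Gomes critique. It is not any platform’s actual system; the term list and example posts are hypothetical and chosen only to show that context-blind matching treats reclaimed in-group speech the same as harassment.

```python
# Illustrative sketch only: a context-blind, keyword-based "toxicity" filter.
# The term list and example posts are hypothetical; real moderation systems
# are far more complex, but context-blind matching produces this failure mode.

FLAGGED_TERMS = {"queer"}  # a term widely reclaimed by LGBTQ communities


def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed term, regardless of who is
    speaking or how the term is being used."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)


posts = [
    "Proud to march with my queer friends this weekend!",  # in-group, reclaimed use
    "Queer people shouldn't be allowed near children.",    # anti-LGBTQ attack
]

for post in posts:
    print(naive_flag(post), "-", post)

# Both posts are flagged identically: the filter cannot distinguish
# reclaimed self-description from abuse directed at the community.
```

The point of the sketch is that a rule keyed only to surface forms penalises the communities it is meant to protect, which is why the interpretive frame (who is speaking, to whom, and in what genre) matters as much as the content itself.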

Additionally, the whitepaper does not address the fact that major social media companies like Twitter and Meta generate revenue by hosting advertisers. While trust and safety professionals might genuinely be working towards – and believe in – platforms where marginalised peoples are included, content policies and moderation practices are constrained by what kinds of content advertisers are comfortable with. This structure impacts not only users but also the underpaid, overworked third-party content moderators on whom social media companies rely to keep their platforms friendly for advertisers. The pressure to remove ‘objectionable’ content falls on third-party vendors, who subject workers to extreme conditions in the name of pleasing clients. As I have written elsewhere, this means that content moderators essentially perform unpaid affective labour to maintain the profitability of large social media platforms. I have also shown that even when Meta’s content moderators believe in social justice and can identify coded speech against people of colour, productivity metrics discipline them into aligning with policies they disagree with. Although the study I am referencing does not address cissexism or anti-trans rhetoric specifically, it is reasonable to assume that all the knowledge in the world is useless when one is essentially coerced into mobilising policies one finds politically problematic or harmful.  

Conclusion

Making sense of violent supremacist acts should entail examining the ways that seemingly mundane social interactions and structures enable supremacist violence. The US is a country that runs on the violent oppression, killing, and everyday exploitation of people of colour, LGBTQ people, workers, and disabled people, both domestically and overseas. People with power, disproportionately abled, wealthy, white, cis, Christian men, have always been invested in maintaining white cis Christian capitalist hegemony. Social media companies, like all large organisations driven by profit, are ultimately committed to this status quo regardless of how progressive their policies look on paper and how dedicated their employees are to social justice. This is evident in social media’s cissexist (as well as racist and Western-centric) assemblages, which comprise mundane moderation practices that reproduce the very gendered tropes on which more explicit anti-trans rhetoric and violence are predicated. As is the case with many forms of far-right activity, we must attend to the ways in which the so-called mainstream enables and fuels the same oppressive logics.  

 

My gratitude to Maureen Kosse, who gave me feedback on the key ideas of this piece. I would also like to thank the Anthro Writing Group, and Rine Vieth in particular, for their company during the writing process.

[1] And, of course, such tropes are not new: the right has long used children as a foil to cast its enemies as dangerous, and has long perpetuated the myth that trans women are “predators.”

[2] Scholars such as Jemima Pierre, Antonia Vaughan, Maureen Kosse, Meredith Pruden, Hanah Stiverson, Catherine Tebaldi, Jennifer Delfino, Britt Halvorson, Joshua Reno, Aaron Winter, Aurelien Mondon, and many more have made similar arguments about race, racism, and gender. Their insights have allowed me to develop this orientation towards far-right actors and the mainstream.