
Political Outrage Machines: Exploring the Algorithms Structuring Conspiracy TikTok

25th January 2023 Justin Grandinetti

The short-form streaming audio-video app TikTok has become near-synonymous with controversy—in no small part because of the spread of conspiratorial content on the platform. The international version of the Chinese-owned app Douyin, TikTok has risen meteorically to become one of the most downloaded and used platforms in the world, particularly among younger people. Those older than 25 might have first heard about the app through debates surrounding its data collection and safety practices. Countries across the globe have deliberated and even implemented bans on the app due to a mix of genuine concern for data security and geopolitical proxy battles with the Chinese Communist Party. And, as with more established social media platforms, the familiar hotly debated arguments follow: is TikTok facilitating an informed and engaged populace, or is the app a hotbed of crass consumerism and dangerous mis(dis)information?

There exists a popular societal, and unfortunately often academic, compulsion to either praise or blame technology for the triumphs and sorrows of the world—and social media is no exception. Narratives during the early years of social media predicted that these platforms would usher in a halcyon age characterised by a free sharing of knowledge, increased momentum for democracy across the world, and a greater voice for regular people. This narrative was galvanised by major events including the Arab Spring protests; Barack Obama’s use of social media to connect with voters; and novel forms of citizen journalism. Now, more than a decade later, the pendulum has swung and saccharine characterisations of social media have become bitter: vaccine scepticism; election denialism; radical populism and hatred. Social media platforms—and in particular the boogeyman referred to as ‘the algorithm’—are blamed for the rapid proliferation of conspiratorial thought.

These bifurcated narratives about social media and the underlying data-driven processes of algorithmic sorting, recommendation, and personalisation show the stark limitations of technological determinism, otherwise known as the position that technology is the driver of societal change. Such myopia renders TikTok a kind of Schrödinger’s platform, at once the facilitator of utopian world-building and the harbinger of the apocalypse. Avoiding these kinds of contradictory causal narratives when it comes to the spread of mis(dis)information on social media requires a different strategy from the familiar beaten path—a strategy that considers how these platforms work toward a particular logic that prioritises monetisation through engagement. In turn, one way to think about social media platforms is to try to assess the difficult-to-quantify element of affect. This means becoming attentive to how the complex relationships between platforms and individuals create new intensities that can have a myriad of impacts ranging from affirmative to harmless to outright dangerous—as in the cases of the recently attempted far-right insurrections in the United States and Brazil.

In a recent article, my co-author and I set out to understand the gravity and pull of conspiratorial content on TikTok—in an academic sense, what’s known as affect. Affect refers to an intensity that’s created between bodies, a force of things, a flow, and a capacity to act. What’s more, affect is experienced preconsciously—it’s a change we often feel but cannot always neatly articulate. By extension, affect is difficult to quantify and can seem quite ephemeral. Chris Ingraham’s Gestures of Concern offers an accessible example of affect through the unfortunate accident of placing one’s hand on a hot stove. In this instance, the pain experienced from the heat is personal, while the socially perceivable physical and verbal reaction to the burn is emotion. Affect, though, is the precognitive gap that occurs between the hand touching the heat and the registration of pain. It’s the moment when we pull away before we even have time to consider what occurred. In turn, attention to affect means consideration of relationships and, by extension, the force of encounters that alter a body’s ability to act. 

Applied to social media like TikTok, affect is experienced through a complex mix of humans and technology that includes users, data flows, smartphones, apps, and infrastructures. We watch videos and something changes, though we might not always be aware of what. The accretion of checking social media might eventually culminate in a good or bad mood, or maybe plant the seeds of scepticism or paranoia. Perhaps even the feeling of missing out on something big, or the desire to ‘do something’ after browsing content. The list goes on. Affect comprises the new intensities that form through our connection with these platforms.

Appraisal of the affective pull of social media platforms is more important than ever in a world of polarised extremes, conspiracy theories, and circulating mis(dis)information. As difficult as it may be, it is critical to consider what kinds of affective intensities can be channelled by the embedded algorithms of social media. What follows are observations from a recent examination of what’s known as ‘Conspiracy TikTok’ done by my co-researcher and me over the course of several months at the end of 2021.  

An Ethnography of Affective Algorithms 

Taking a deep dive into conspiracy TikTok meant being aware of how the platform’s algorithms shaped the content we viewed. Broadly speaking, an algorithm is simply a mathematical or logical term for a set of instructions. We encounter algorithms all the time, even in a non-digital sense: few would call following a recipe a ‘food algorithm’, but the idea is the same. On TikTok and other social media platforms, users encounter algorithms largely through processes that recommend and personalise content and advertisements. Troublingly, these complex layers of machine learning algorithms are often characterised as a ‘black box’, in that they function without a clear explanation, meaning that even those who design such systems are unable to fully articulate the ‘why’ behind algorithmic output.
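
To make the ‘set of instructions’ idea concrete, the minimal sketch below ranks a handful of hypothetical videos by a weighted engagement score, surfacing whatever holds attention best. It is a toy illustration only; the signal names, weights, and example videos are assumptions invented for this post and bear no relation to TikTok’s far more complex recommendation system.

```python
# A toy illustration, not TikTok's actual system: a recommendation
# "algorithm" is, at bottom, a set of instructions for ranking content.
# The signal names, weights, and example videos below are invented.

def rank_videos(videos, weights=None):
    """Order candidate videos by a simple weighted engagement score."""
    weights = weights or {"watch_completion": 0.5, "likes": 0.3, "shares": 0.2}

    def score(video):
        return sum(weights[signal] * video.get(signal, 0.0) for signal in weights)

    return sorted(videos, key=score, reverse=True)


candidates = [
    {"id": "cooking_tips", "watch_completion": 0.40, "likes": 0.10, "shares": 0.02},
    {"id": "conspiracy_clip", "watch_completion": 0.85, "likes": 0.30, "shares": 0.15},
]

for video in rank_videos(candidates):
    print(video["id"])  # content that holds attention rises to the top
```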

The opacity of algorithms can feel disempowering; however, there is a growing body of research aimed at demystifying everyday algorithmic encounters. Here, scholars attempt to unpack the complexity of algorithms not as mere computational techniques, but instead as an ongoing and ever-changing relationship between the social and technical. By extension, algorithms can be better understood by embracing the various practices, engagements, and contexts in which algorithms function. This means observing, describing, and mapping the operational logic of algorithms in order to try to ‘reverse engineer’ how these embedded processes function. Put more simply, we can open the black box of algorithms on social media platforms like TikTok through an assessment of what we observe and experience as users. 

Inspired by what has been called an ethnography of algorithms, my co-author and I used TikTok over the course of several months at the end of 2021. I was new to TikTok, so I set up my profile by selecting some of the suggested interest categories, including pets, food and drink, music, sports, funny, pranks, technology and science, and pop culture. By extension, my initial TikTok feed was a malleable mix of content – a strategy by the recommendation algorithm to begin to coalesce content based on whether I liked, followed, or even paused to watch certain videos longer than others. Of course, this also meant diving head first into conspiracy content via what’s often tagged as ‘Conspiracy TikTok’ or ‘ConspiracyTok’.
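
One way to picture that coalescing process is sketched below: each like, follow, or long pause on a topic nudges an interest weight upward, so that subsequent sessions draw proportionally more from whatever the user lingered on. The topic names, signal weights, and update rule are assumptions made for illustration, not a description of TikTok’s implementation.

```python
# A minimal sketch, assuming simple additive weights, of how a feed might
# "coalesce" around certain topics: each like, follow, or long pause nudges
# a topic's weight upward, so later sessions draw more heavily from it.
# Topic names and weight values are invented, not drawn from TikTok.
from collections import defaultdict

interest = defaultdict(lambda: 1.0)  # a flat, malleable starting profile

def register_interaction(topic, liked=False, followed=False, watch_seconds=0.0):
    """Increase a topic's weight based on hypothetical engagement signals."""
    interest[topic] += 0.5 * liked + 1.0 * followed + 0.05 * watch_seconds

# A search and a few long pauses on conspiracy clips...
register_interaction("pets", watch_seconds=5)
register_interaction("conspiracy", liked=True, watch_seconds=45)
register_interaction("conspiracy", watch_seconds=60)

total = sum(interest.values())
for topic, weight in sorted(interest.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: {weight / total:.0%} of the next feed")
```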

Political Outrage Machines 

I found that my feed was quickly altered by searching for and viewing ‘conspiracy’ content. Initially, this revolved around the supernatural and the silly, including short clips about historical mysteries like the Dyatlov Pass incident. What soon followed, however, was far more political in nature. In only a few days my feed became a pendulum of polarised political content; the supernatural conspiracies were replaced by anti-vaccination rhetoric, as well as some videos trying to debunk those spreading vaccine mis(dis)information. The range of conspiracies interspersed with other content continued over time. Some, like videos claiming that ‘Rome never existed’, are rather banal (but certainly not desirable). Others, like the promotion of US election scepticism and the widespread influence of the so-called Deep State, are potentially far more damaging. Many of the videos circulating at the time presented clips from the thoroughly discredited pseudo-documentary Plandemic: The Hidden Agenda Behind Covid-19 (2020), which promotes false narratives about the safety and monetisation of vaccines. While some users situated videos as simply presenting ‘both sides’ of vaccination debates, others engaged in back-and-forth arguments by ‘stitching’ previously existing TikTok videos side-by-side with their own commentary.

In addition to just how quickly our feeds transformed into political outrage machines, my co-author and I noticed what seemed to be a lack of action by TikTok to slow or remove such content. Some, but not all, of the vaccine misinformation was tagged by the platform with an overlay directing viewers to reputable sources of information. TikTok’s own transparency report covering April to June 2021 notes that roughly 1% of the videos uploaded to the platform during this period, some 81 million, were removed for violating community guidelines. However, only 27,518 of these were taken down for spreading COVID-19 mis(dis)information. That doesn’t begin to touch the host of other conspiracy videos circulating on the app.
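
For a sense of scale, a back-of-the-envelope calculation with those rounded figures shows just how small a slice of the removals this represents:

```python
# Back-of-the-envelope scale check using the rounded figures cited above.
videos_removed = 81_000_000   # total removals, roughly 1% of uploads in the period
covid_removals = 27_518       # removals attributed to COVID-19 misinformation

print(f"{covid_removals / videos_removed:.3%} of removed videos")  # ~0.034%
```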

Moderation is, admittedly, one of the most difficult questions of our contemporary digital era. All platforms moderate—even the self-described ‘free speech’ alternatives. However, whether or not platforms can or should be the arbiters of truth is an unsettled topic. What matters specifically to an examination of affective algorithms on TikTok is that the platform has little incentive to completely slow the spread of conspiratorial content, even if the company wanted to. Politically-charged conspiracy content has become normative on social media platforms in large part because it drives engagement. Moreover, mis(dis)information has echoes. We witnessed videos calling the November 2021 crowd crush tragedy at the Astroworld concert a demonic and satanic ritual—language that dovetails neatly with the QAnon conspiracy’s fears of a global satanic cult of paedophiles. Even as Q waned in followers, it’s clear the roots of the conspiracy were able to grow, recombine, and intermix when left unchecked.

Platforms like TikTok operate with an underlying logic: eyeballs are currency. In turn, content that grips with an affective intensity keeps us watching, coming back for more, and even opening the app reflexively in moments of boredom or to fill some time.  There’s a kind of gravity that brings users back. While it might not always be driven by conspiracy theories or political engagement, the gravity of platforms is how we experience affect. 

A Trajectory of Data-Driven Personalisation 

Readers who grew up in the 80s and 90s might remember the tabloid Weekly World News. This publication often revolved around the fantastical and supernatural, with cover stories ‘unearthing’ that then-President Bill Clinton rode in a UFO, or that five US Senators were actually space aliens. The conspiracies of yesteryear seem quaint by comparison to what circulates on social media now. This shift is underscored by a larger question: in an era where we are bombarded with an unending amount of conflicting information, how can one even begin to make sense of the world? Media theorist Mark Andrejevic argued in 2013 that tech companies can utilise big data to unearth new insights, while the average person is left with the affective response of ‘gut instinct’. That is, without analytical tools to help wade through the conflicting morass of information, we are left with our own internal compass, driven more than we’d like to admit by information that strikes an emotional chord. Conspiracies get to the heart of our desire to make sense of an incorrigibly plural world, to look ‘behind the curtain’, and to see what’s really going on. What is troubling, however, is that once users start down a path of viewing conspiracy content on platforms like TikTok, they are likely to continue seeing more and more of it through underlying algorithms of affective capture.

Whether it’s leaks about data security concerns, troubling changes to algorithmic transparency and ethical practices, or just how easily platform mechanisms can be exploited by bad actors, we might take some comfort in believing that the age of social media is ending. Yet it’s important to recognise that technology isn’t the inevitable driver of social change, nor do our interactions with social media lead to one and only one outcome. That outcome isn’t always negative; however, studies and reports show that TikTok’s algorithms do seem to quickly train toward showing problematic and dangerous content. On TikTok, the amplification of extremism appears to be a feature, not a bug. By consequence, it’s critical to understand how platforms like TikTok are driven by the underlying logic of keeping users engaged. The attendant challenge comes from recognising how this occurs. Understanding how the underlying affective algorithms of social media function cannot be accomplished by only one account—this is an ongoing project that requires continual evaluation. In a landscape where some reports have found that TikTok is the fastest-growing source of news for young adults, the stakes of spreading conspiracies and mis(dis)information are higher than ever.

Unfortunately, there may be no sure-fire panacea for one of the most pressing issues of our time. Relying on platforms for transparency and moderation puts perhaps too much trust in private industry. Expecting even-keeled and fair regulation by governments also seems overly optimistic and potentially hazardous. What’s left is the fraught solution of individual accountability. It’s easy to feel that we are privy to the ‘truth’ when we view content that’s algorithmically selected to appeal to us. And, in a polarised political landscape, it can be difficult to separate ideology from fact, the lens of team politics from actual events, and accurate information from what feels right. Perhaps the biggest danger comes when the spread of baseless and nonsensical conspiratorial thought detracts from legitimate critiques of institutions, systems, and structures of power. As difficult as it may be to achieve, awareness of the mechanisms by which algorithms embedded in social media platforms select information for us might be the first step in breaking the chain of mis(dis)information and conspiracy.

Justin Grandinetti is an Assistant Professor in the Department of Communication Studies and affiliate faculty in the School of Data Science at the University of North Carolina at Charlotte. His research interests are at the intersection of mobile media, streaming media, big data, and artificial intelligence. Justin’s work has appeared in AI & Society, Information, Communication & Society, Critical Studies in Media Communication, and Surveillance and Society. For information or to contact Justin, please visit www.justingrandinetti.com.