
A Deadly Trifecta: Disinformation Networks, AI Memetic Warfare, and Deepfakes

15th February 2024 Achi Mishra

Content Warning: The following article includes disturbing and offensive AI-generated images.  

Introduction

Generative AI technologies are rapidly advancing and transforming the future of work, education, and national security. The release of ChatGPT in 2022 and the AI tools that followed irreversibly altered the fabric of the global digital ecosystem. While companies race to develop and deploy their own AI systems, technical AI safety guardrails lag behind, as do AI policy solutions. This gap raises serious concerns about algorithmic bias, data security, and the digital divide.

The 2024 presidential election in the United States is the first since the release of widely accessible generative AI tools capable of creating realistic synthetic text, images, audio, and video. Experts fear these tools will accelerate the dissemination of disinformation and misinformation while further polarising and radicalising Americans.

This Insight analyses the use of generative AI to create difficult-to-detect disinformation networks, participate in memetic warfare, and disseminate realistic synthetic media. The combination of these three elements and the feedback loops they generate creates a tumultuous political environment where the very fabric of reality is questioned. We studied these elements by conducting an extensive literature review of existing academic papers, reports, and articles. We also searched for political memes, generative AI artwork, and deepfakes on both Reddit and 4chan. 

Disinformation Networks

Social media disinformation networks pose a significant risk this election season. It is becoming increasingly easy for individual actors to build entire social media networks to spread disinformation. Of the Facebook and X influence operation takedowns in 2020, 76% were specific actors spreading disinformation, compared with 62% in 2019 and 47% in 2018. One such network identified in 2019, the Kullberg Network, consisted of 24 pages with roughly 1.4 million followers. The network disseminated far-right disinformation and Islamophobic content through pages targeting Christian, Jewish, and Black Americans. Despite the network's attempts to mimic diversity, Snopes identified radical American evangelical Kelly Monroe Kullberg as the person responsible for it. Even more troubling, these pages received funding from political donors to boost divisive posts as targeted political advertisements.

Mis/disinformation networks like the Kullberg Network will become easier to create and more adept at evading detection as generative AI advances. Previously, reverse image searches could identify fake profiles because their stolen or stock photos appeared elsewhere online. Now, AI-generated profile pictures can be created within seconds, successfully evading reverse image search identification. Although Facebook has policies for reporting and removing fake profiles, AI-generated profile pictures pose a unique enforcement challenge because of the complexity of detecting them.
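To make that failure mode concrete, below is a minimal sketch of the perceptual-hash comparison that underpins reverse-image-search-style duplicate checks. It is an illustrative example rather than any platform's actual pipeline; the function name, file paths, and distance threshold are our own assumptions, and it relies on the open-source Pillow and imagehash Python libraries.

```python
# Illustrative sketch only: hash-based near-duplicate detection of the
# kind reverse image search relies on. Paths and threshold are hypothetical.
from PIL import Image
import imagehash

def looks_like_reused_photo(candidate_path, known_paths, threshold=8):
    """Return True if the candidate profile picture is perceptually
    close to any known photo (e.g., a stock image or a stolen one)."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    for known_path in known_paths:
        known_hash = imagehash.phash(Image.open(known_path))
        # Subtracting two hashes gives the Hamming distance;
        # a small distance means the images are near-duplicates.
        if candidate_hash - known_hash <= threshold:
            return True
    return False

# A freshly generated face matches nothing previously indexed, so a
# check like this returns False and the fake profile slips through.
```

The point of the sketch is the failure mode: because every AI-generated face is statistically unique, duplicate-based checks have nothing to match against.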

AI-Generated Memetic Warfare

Memetic warfare is used to spread disinformation, radicalise individuals, and push extremist agendas while building a sense of community. The online platform 4chan hosts multiple threads relating to memetic warfare, where anonymous users post explicit instructions on how to use AI image generators like DALL·E 3 and Midjourney to create racist, far-right memes. These threads also include guidelines for creating propaganda memes (Fig. 1).

Fig. 1: Right-leaning propaganda generation guide found on 4chan.

The creators of and contributors to these threads believe strongly in the persuasiveness of their AI-generated content. All the threads we studied included the following passage:

“We used to have a /mwg/ on this board years ago and it’s probably one of the biggest reasons the 2016 Election got won. If we can direct our autism and creativity towards our common causes, victory is assured.” 

These memetic warfare threads include antisemitic, anti-Black, and anti-Indian AI-generated images (Figs. 2-5). 4chan users are encouraged to modify one another's images if they find them inspirational. When offensive content is removed from the site, ‘new’ versions of the offensive imagery are generated and reposted – a developing phenomenon in generative AI propaganda creation.

 

Figs. 2-5: AI-generated images from multiple far-right memetic warfare pages on 4chan. Fig. 2 (top left) shows Jewish men dancing as the Twin Towers burn. Fig. 3 (top right) depicts a crowd of Black people burning and stealing goods. Fig. 4 (bottom left) portrays the Hindu god Shiva defecating as Indian men wait to collect and eat the excrement. Fig. 5 (bottom right) dehumanises a Jewish man by giving him bug-like features, including arachnid legs, bloody mandibles, fly eyes, and sharp, pointed teeth.

Deepfakes

The prevalence and proliferation of deepfakes – realistic AI-generated images, audio, and videos – is also deeply concerning. Researchers at University College London found that listeners could identify audio deepfakes only 73% of the time. Deepfakes are not only difficult for humans to detect; they are also multiplying: a survey by Sumsub found that the number of deepfakes in North America doubled from the end of 2022 to March 2023. The rising number of deepfakes, combined with the human inability to reliably differentiate between authentic and synthetically generated content, creates the perfect opportunity for disinformation and misinformation to thrive. False information spreads six times faster than the truth, giving convincing deepfakes the potential to cause far-reaching damage.

Experts fear deepfakes can be used for nefarious purposes, including but not limited to sowing confusion among the public and armed forces regarding public safety announcements, military commands, and the opinions of political leaders. Deepfakes of political leaders have surfaced with increasing frequency. In March 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy was released, in which he called for his people to surrender to the Russian government. Although Ukrainian officials had warned of such an attack and President Zelenskyy himself debunked the video, it still managed to briefly confuse people worldwide. 

A convincing audio deepfake of British Labour Party Leader Keir Starmer verbally abusing his staff was released in October 2023 and viewed over 1 million times. Despite the clip being debunked as a deepfake, some constituents claimed that Starmer was using the artificially-generated label as a “get out of jail card.” Others repeatedly asked how anyone had determined the audio was fake, as it was far more convincing than the Zelenskyy video.

In the United States, AI-generated phone calls impersonating President Biden targeted New Hampshire voters the weekend before the nation's first 2024 primary election. The automated call stated, “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.” The source of this blatant attempt at voter suppression has proved difficult to identify, though officials are determined to prosecute the perpetrators to the fullest extent of the law.

Although this specific example targeted Democrats, other deepfakes convincingly target Republicans, including former President Donald Trump. A recent AI-generated image of Trump and Jeffrey Epstein on Epstein's private jet, the infamous Lolita Express, made the rounds on X (Fig. 6). In early January 2024, actor Mark Ruffalo shared AI-generated images of Trump with teenage girls. Though Ruffalo apologised when these images were identified as deepfakes, that did not change the fact that they had been widely circulated and accepted as real.

Fig. 6: Deepfake of former President Trump and Jeffrey Epstein on the Lolita Express.

Female politicians and public figures face the threat of deepfake pornography to a much higher degree than their male counterparts. A 2019 report found that 96% of deepfake videos online were pornographic, exclusively targeting women. Rana Ayyub, an Indian investigative reporter, was the victim of such an attack in 2018, after which she “self-censored quite a bit out of necessity,” which impacted her career. Some reports claim that new deepfake pornography videos of United States Congresswoman Alexandria Ocasio-Cortez (AOC) are released weekly on platforms like Mr Deepfakes.

These deepfake attacks on women are an intimidation tactic designed to silence their voices. Not only are these videos capable of ruining reputations, they are also becoming faster and easier to create. A user on 4chan stated:

 “I saw that female politician deepfake porn clip shitposted on here so for the fuck of it I go to telegram and search for a bot that makes those things and found one instantly. Within 5 mins I had a clip of a hot girl I know face swapped in a POV video and it was free.” 

While women are currently the main target of deepfake pornography videos, such videos could be used in the future to blackmail government officials regardless of gender.         

A Deadly Trifecta

The combination of disinformation networks, AI-generated memetic warfare, and deepfakes is a deadly trifecta in the upcoming United States presidential election. Disinformation networks garnered millions of followers even before generative AI tools became widely accessible to the general public. Now, techniques like using AI-generated profile pictures allow these networks to increase their reach while evading detection for even longer.

As disinformation networks grow, extremist content becomes increasingly easy to create and challenging to detect. AI-generated memetic warfare enables extremists to create racist, sexist, and political images often rooted in dehumanising their opponents. When users develop and share such content, they frequently exchange feedback and praise, resulting in community building and shared camaraderie. It is easy to see how this could be a gateway to lone actors committing attacks in the name of extremist movements like QAnon, especially since researchers found that 90% of extremists in 2016 were radicalised through social media. The same researchers also identified that far-right extremists are more likely to generate content than far-left or Islamist extremists, implying that far-right extremists are likely to digitally radicalise individuals at a greater rate in the United States.

When it comes to deepfakes, however, it is clear that no political party or public figure is immune. Americans have already seen and heard convincing deepfakes of President Biden, former President Trump, and other elected officials. These deepfakes have been used to attempt voter suppression, heighten polarisation, sow confusion among constituents, and intimidate politicians. Most importantly, deepfakes are convincing enough to distort reality and fabricate evidence. Even after content is identified as a deepfake, some viewers remain sceptical. Moreover, there is a growing inclination to question the legitimacy of authentic content. When facts can be fabricated, individuals are more susceptible to their biases. This leads to greater political polarisation and the rapid ‘othering’ of those with different world views. If deepfakes are allowed to widen this trust divide, countries, including the United States, will find themselves fractured into factions, each with its own version of reality rooted in its own version of fact. When an entire nation cannot agree on a fixed reality with established and provable facts, the very foundation of its democracy is threatened.

Potential Solutions

Combating this trifecta in the upcoming US presidential election and beyond will require a nuanced, multi-pronged approach. Lawmakers and government officials must create policies against the creation and dissemination of deepfakes and other AI-generated content. Tech companies should develop technical tools to identify AI-generated content, and educational organisations must increase AI literacy efforts for the general population, starting at a young age.

Without policies and legislation to protect individuals and society from deepfakes and other AI-generated harms, corporations and individuals are less likely to prioritise the responsible development and deployment of these technologies. AI safety organisations believe that self-regulation of AI by big technology companies is equivalent to no regulation. Although this stance might seem rigid, there have been multiple self-regulatory failures in AI and in other areas like social media and the food industry. Because harmful AI systems can be replicated and deployed cheaply and at scale, self-regulated AI poses an accelerated risk on par with that of nuclear warheads, which are far costlier and more time-intensive to create. To thwart such extreme levels of risk, it is critical to have government regulations that hold companies legally and financially accountable. Doing so will further incentivise companies to create ethical AI technologies.

In the first three weeks of 2024 alone, Democratic and Republican lawmakers from 13 states introduced legislation to either require disclosure of or ban AI-generated content. However, no amount of legislation will make a tangible difference if it is impossible to enforce. For example, even if disclosure or an outright ban is required, it will still be difficult to accurately identify deepfakes and determine their source.

This is why it is critical to create technologies that not only accurately detect whether content is AI-generated but also identify the tools used to create it and its source. Such tools can be used by social media platforms, government entities, and individuals. With the multi-level incorporation of these tools, content consumers can confidently discern whether the media they engage with is genuine or fabricated. This can prevent AI-facilitated radicalisation and political polarisation while ensuring authentic media consumption, irrespective of political or social leanings.
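As a rough illustration of what multi-level checking could look like, the sketch below layers a provenance-metadata check on top of a detector score rather than trusting either signal alone. This is a hypothetical outline under stated assumptions, not a production detector: the detector function, the marker names, and the 0.9 threshold are all our own placeholders.

```python
# Hypothetical sketch: layering provenance metadata with a detector score.
# `detector` stands in for any AI-content classifier returning a
# probability in [0, 1]; the marker names below are illustrative only.
from PIL import Image

PROVENANCE_MARKERS = ("c2pa", "contentcredentials")  # assumed marker names

def assess_image(path, detector):
    """Combine independent signals instead of trusting a single test."""
    img = Image.open(path)
    # 1. Provenance: does the file's metadata carry a signed-credentials marker?
    metadata_text = str(img.info).lower()
    has_provenance = any(marker in metadata_text for marker in PROVENANCE_MARKERS)
    # 2. Detection: classifier's estimate that the image is AI-generated.
    ai_score = detector(img)
    if has_provenance:
        verdict = "carries content credentials; verify the signature"
    elif ai_score > 0.9:
        verdict = "likely synthetic; flag for review"
    else:
        verdict = "inconclusive; needs human review"
    return {"has_provenance": has_provenance,
            "ai_score": ai_score,
            "verdict": verdict}
```

The design point is that no single signal is decisive: provenance metadata can be stripped, and classifier scores are probabilistic, so combining them and escalating uncertain cases to human review is the safer default.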

Finally, without AI literacy, individuals will remain unaware of the many risks AI poses to society, including the far-reaching consequences of disinformation and misinformation. In 2021, researchers found that 84% of American adults were AI illiterate, indicating the need for urgent and widespread AI literacy efforts. In politics, voters need to be informed about the authenticity of the media content they are exposed to. This will involve education about how deepfakes and other AI-generated content are created and how social media algorithms can effectively distribute such content, often resulting in echo chambers.

Without combining legislative, technical, and educational efforts to combat the negative sociopolitical impacts of AI, governments leave their constituents and national security interests vulnerable to the actions of domestic extremists and foreign bad actors. With this multi-faceted approach to combating extremist disinformation networks, AI memetic warfare, and deepfakes, nations can proactively outpace malicious entities on multiple fronts, including protecting the integrity of electoral cycles.

Achi Mishra is an AI Ethics Engineer at Polygraf Inc., focusing on identifying and mitigating ethical concerns regarding AI governance and detection. Previously, she was a researcher for the ASSIST Lab at UC Santa Cruz, where she completed her M.S. in Computational Media.

Vignesh Karumbaya is COO and co-founder at Polygraf Inc., a company that provides AI-powered data integrity, monitoring, and governance solutions. He is a business strategist and former management consultant with global experience in the hi-tech, industrial, and retail sectors. Vignesh earned an MBA in Corporate Strategy from the Ross School of Business at the University of Michigan, where he was also a Fellow at the Tauber Institute for Global Operations. He received an M.A. in Economics from Loyola College and a B.A. (Hons.) in Economics from the University of Delhi, India.