
Weapons of Mass Disruption: Artificial Intelligence and the Production of Extremist Propaganda

17th February 2023 | Daniel Siegel

Introduction

In late 2022, OpenAI publicly released some of the most sophisticated deep-learning models to date – DALL-E and ChatGPT. These models use machine learning to generate vast amounts of unique textual and visual content for users anywhere in the world. OpenAI may have been the first company to release such products to the public, but it is not alone in developing them; companies like NVIDIA, Google, and smaller artificial intelligence startups are building similar engines. These generative AI models allow users to input commands to create essays, music lyrics, simple code and more. In January 2023, OpenAI, the Stanford Internet Observatory, and Georgetown University’s Center for Security and Emerging Technology released a study that explored the possibility of these models being used in influence campaigns by both state and non-state actors through the production of disinformation. The disruptive potential posed by these generative AI technologies has led some to consider them “weapon[s] of mass disruption.”
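To make the mechanics concrete, the sketch below shows roughly how a user requests generated text from one of these models programmatically, using the completion-style interface of OpenAI’s Python library available at the time of writing. The prompt, parameters, and placeholder API key are illustrative assumptions, not a description of any actor’s actual workflow.

```python
# Illustrative sketch only: requesting generated text from OpenAI's API
# via the pre-1.0 `openai` Python library. Prompt and parameters are
# arbitrary examples chosen for demonstration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3.5-era completion model
    prompt="Write a four-line poem about the sea.",
    max_tokens=100,
    temperature=0.7,            # higher values yield more varied output
)

print(response["choices"][0]["text"].strip())
```

The same few lines, with a different prompt, return an essay, song lyrics, or simple code, which is what makes the technology so cheap to scale.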

Over the past decade, extremist groups have been adapting their propaganda to be more interactive. Extremist video games, social media content, and music have found their way onto a variety of internet platforms. Unlike the extremist propaganda of the past, these new digital media products allow extremist groups to interact with audiences in unprecedented ways. This Insight will focus on the emergence of AI-generated extremist propaganda. Having simulated a variety of extremist content using generative AI models, the authors predict that this emerging technology may enable non-state extremist actors to produce digital propaganda in greater quantity and of higher quality.

The Evolution of Digital Media in Extremist Propaganda

In recent years, extremist organisations have experimented with new forms of propaganda, moving beyond simple videos and magazines to video games, social media content, and even music. Technological innovation over the last twenty years has given extremist organisations access to tools such as sophisticated editing software, microphones, and cameras with which to produce Hollywood-style propaganda. More recently, terrorist organisations have adopted newer forms of digital media that allow target audiences to interact directly with the content. For example, games recently developed by neo-Nazi groups encourage players to engage in violent behaviour towards minorities from a first-person shooter (FPS) perspective (Figure 1). Other extremist groups are creating downloadable modifications (‘mods’) that alter existing games. By changing the geography, characters, and aesthetics of popular games such as Super Mario and Doom, extremist groups can export their ideologies and foster player identification with violent avatars inside familiar products. Unlike the propaganda of the past, where audiences engaged with extremist actors by witnessing violence in video recordings or images, these newer forms allow audiences to participate in simulated acts of terror in a far more ‘interactive’ way.

Figure 1: Neo-Nazi video game produced by a member of the National Socialist Movement

Video games are just one example of the new variety of extremist digital media appearing in online spaces. Extremist groups on platforms like Twitter, Tumblr, Facebook, and especially TikTok create viral posts to attract followers, promote fear, and rally their existing and potential recruits. In 2019, a variety of ISIS TikTok accounts were found posting violence-inciting content blended with catchy, professionally recorded battle hymns (nasheeds) and emojis, in an effort to target the platform’s users and show off the group’s strength (Figure 2).

Jihadist groups are not alone in producing this content. A NewsGuard article found hundreds of videos that “allude to, show, or glorify acts of violence carried out by the Wagner Group,” a notoriously violent Russian mercenary group. Many of these videos spread Russian misinformation about the Ukrainian government (Figure 2). While social media propaganda cannot be manipulated the way a video game can, it is interactive in its own right: by liking, reposting, commenting on, and recreating trending media, consumers of extremist material engage with content in more complex ways than older forms of propaganda allowed.

Figure 2: ISIS and Russian Mercenary Propaganda on TikTok

Music is another medium extremist groups have been using to create propaganda for target audiences. The Anti-Defamation League published a 2022 report identifying 40 white supremacist artists on Spotify. These bands worked across a range of musical genres, gained thousands of followers, and pushed various xenophobic conspiracy theories, including the ‘Great Replacement’ theory. Some of these songs directly promote Nazi ideology, with titles such as ‘Aryan Fury’ and ‘Reich Machine’. Bands also appeal to audiences by creating meme-like album covers featuring far-right symbols like the Red Pill, Pepe the Frog, and the Black Sun (Figure 3).


Figure 3: ADL’s screenshot of far-right imagery on album covers and music found on Spotify

This issue extends beyond Spotify. In collaboration with the European Counter Terrorism Centre’s EU Internet Referral Unit, SoundCloud flagged more than one thousand pieces of extremist content on its platform. By packaging their narratives in song, right-wing extremists can convey hateful messages in a new way, potentially increasing their appeal to a wider audience. The Islamic State has also experimented with nasheeds: by repackaging familiar Islamic songs, the group can convey its violent, apocalyptic narratives and worldview through catchy tunes. Al-Qaeda has engaged in similar activities, featuring hip-hop music in the background of its online content. Through music, both ISIS and al-Qaeda are leveraging tactics similar to those of right-wing extremist groups. Music may allow these groups to transcend cultural and national differences and export their violent ideologies to new and wider audiences.

AI Generative Models and Extremist Threat Ideation  

While conventional forms of extremist propaganda like those described above require trained personnel to plan, create, and disseminate materials, the emergence of generative AI models may allow a variety of extremist non-state actors to produce far more propaganda, of far greater sophistication, with far less effort. Future generative AI models will transform the landscape of both mainstream and extremist media production – including video games, digital content creation, music, and other audio – meaning new audiences can be reached and potentially radicalised through new propaganda techniques.

AI for the Gamers

With these new technologies, gamers will be able to develop their desired storylines, characters, and environments simply by inputting text, which can be transformed into a fully developed video game product. Some commentators are already discussing the possibility of AI-generated, entirely personalised games. By giving gamers the freedom to develop their own products, these technologies also give extremists significant power to develop video games that bolster their radicalisation and recruitment efforts.

Extremist organisations will be able to rely on generative AI models to supplement their creative and technical skills. Even the comparatively unsophisticated generative models available today hint at the power of the propaganda products extremists could leverage and exploit. Already, through ChatGPT, extremists can turn conspiratorial and violent narratives into complete video game storylines almost instantly (Figure 4). The ability to customise plotlines to resemble games like Call of Duty makes these technologies particularly dangerous; if such games are developed, they can be tailored to specific audiences by ensuring they resemble popular games already on the market.

Figure 4: ChatGPT produces video game storylines that contain themes found in Islamic State (left) and far-right propaganda (right)

Extremists can also exploit these technologies by producing simple code (Figure 5) to alter the graphics of existing games – useful for groups that do not wish to create entirely new products. Newer generative models like Scenario and Midjourney, used in combination with ChatGPT, can greatly aid the creation of three-dimensional assets for use in video games, as well as violent promotional artwork for original, extremist-made games (Figure 5). The ability to rapidly produce extremist video games may allow a variety of groups to more easily desensitise individuals to political violence and to weave real-world conspiracies into gaming narratives, making extremist viewpoints more believable.

Figure 5: ChatGPT modifying graphics in video games (left), Scenario AI creating 3D items (top right), and a 6 January US Capitol insurrection video game generated by Midjourney (bottom right)


AI for the Content Creators

Additionally, these technologies will greatly enhance the ability of extremist groups to flood applications like TikTok and Instagram with customisable content that is both visually appealing and persuasive. ChatGPT can produce scripts, short-form content ideas, and organisational techniques tailored to specific audiences in seconds (Figure 6). While the current technology cannot generate entire videos, it is easy to imagine how combining ChatGPT with deepfake technology could enable the production of vast quantities of artificially generated extremist content (Figure 6). By exploiting the faces of popular influencers and viral trends, generative AI tools may allow more propaganda products to penetrate mainstream audiences easily, facilitating the purposeful misinformation of news events in the interests of extremist actors.

Figure 6: ChatGPT generating scripts for an imaginary Syrian militia (left) and a deepfake-generated Lionel Messi speaking fluent Arabic (right)

AI for the Audio-Makers

Finally, artificial intelligence may revolutionise the production of audio content. Extremist groups could, in future, weaponise these systems by outsourcing musical production to them, creating violent, narrative-driven songs about the core issues of their respective ideologies. This can be seen in a ChatGPT-generated heavy metal punk song intended to resonate with members of skinhead hate groups. Google has recently previewed MusicLM, which translates text directly into music, with instruments that evoke specific moods (Figure 7). Attractive album artwork for a song can be generated instantly through Midjourney. The melodies and lyrics these technologies develop closely mimic human vocal and instrumental performance. While the creation of extremist music may not seem like a threat, music allows extremist actors to disguise violent messages and radical beliefs, which may have a radicalising effect over time.

Figure 7: ChatGPT-generated heavy metal punk song (left), Midjourney-generated punk skinhead album cover (upper right), and Google’s generative music model MusicLM (bottom right)

The Status of AI Safeguards

OpenAI, among other tech companies, has taken significant steps to implement greater restrictions on the generation of extreme content. For example, when content related to Nazis is requested through Midjourney or ChatGPT, users are now met with an error message (Figure 8).

Figure 8: ChatGPT and Midjourney safeguards working effectively to combat dangerous content
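One common building block behind such safeguards is a moderation check that screens a prompt before it ever reaches the generative model. The sketch below illustrates the idea using OpenAI’s public moderation endpoint; the routing logic and refusal message are illustrative assumptions, not OpenAI’s actual implementation, which is far more elaborate.

```python
# Minimal sketch of prompt screening, assuming the pre-1.0 `openai` Python
# library. The moderation endpoint is real; the surrounding routing logic
# and the refusal message are illustrative assumptions only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

def is_safe(prompt: str) -> bool:
    """Return True if the moderation endpoint does not flag the prompt."""
    result = openai.Moderation.create(input=prompt)
    return not result["results"][0]["flagged"]

def handle_request(prompt: str) -> str:
    if not is_safe(prompt):
        # A flagged prompt is refused before any generation occurs.
        return "Error: this request violates the platform's content policy."
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
    )
    return response["choices"][0]["text"]
```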

Thus, action is being taken to improve content moderation on these platforms, and each tool has terms of service that users must abide by. However, malign actors find workarounds: some are already altering the datasets behind generative AI models to curate harmful content that would otherwise be banned, producing so-called ‘mischief models’. As generative AI models advance, so will the mischief; in the wrong hands, these technologies could further fuel distrust in online information sources and hinder global efforts to combat violent extremism.

Conclusion

New generative AI technologies have transformative capabilities and the potential to bring about a new wave of creativity and innovation. However, their capacity to accelerate violent extremist rhetoric and messaging should not be underestimated. Compared with the propaganda products of the past, AI-generated materials can be far more effective for radicalisation, recruitment, and retention; sophisticated AI-generated extremist propaganda can now be adapted to appeal to previously out-of-reach audiences in new and engaging ways. Tech companies have recognised this threat as legitimate and have expanded efforts to safeguard against extremist exploitation of AI tools. However, as long as these programs exist, malevolent actors will adapt them into ‘mischief models’ capable of generating harmful content for a variety of extremist groups.

Some AI startups are attempting to prevent the misuse of their products by developing programs that identify patterns indicating whether images, text, video, or sound were produced with generative models. These programs could one day be integrated into social media platforms, music streaming services, and even gaming platforms, helping to stop maliciously generated content from spreading beyond the dark corners of the web. Furthermore, it is critically important that the AI companies discussed in this Insight expand their Trust and Safety teams, which can anticipate the worst possible uses of these programs’ features and put rules and contingency plans in place for their misuse. This Insight posits that while important steps are being taken to prevent the misuse of generative AI models, clear vulnerabilities remain that extremist actors can easily exploit.
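One pattern such detection programs can exploit is statistical: text sampled from a language model tends to have lower perplexity under a comparable model than human-written text does. The sketch below is a toy version of that idea, assuming the Hugging Face transformers library; the choice of GPT-2 and the cutoff value are illustrative assumptions, and production detectors combine many stronger signals.

```python
# Toy sketch of one detection signal: model-generated text tends to score
# lower perplexity under a comparable language model than human text does.
# The GPT-2 model choice and the threshold are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more model-like)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

LIKELY_GENERATED_BELOW = 25.0  # hypothetical cutoff, for illustration only

def looks_generated(text: str) -> bool:
    return perplexity(text) < LIKELY_GENERATED_BELOW
```

Single-signal checks like this are easy to evade with paraphrasing, which is why integrating multi-signal detectors into platforms, as the startups above propose, matters.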

Daniel Siegel is a corps member at Teach for America. He is also a counterterrorism researcher at UNC Chapel Hill’s program on digital propaganda. His research focuses on extremist non-state and state actors’ exploitation of cyberspace and emerging technologies. 

Mary Bennett is an analyst at More in Common, a nonpartisan research nonprofit, and a researcher with the Prosecution Project. Her research primarily focuses on violent misogyny and extremist visual propaganda content.