
The Digital Weaponry of Radicalisation: AI and the Recruitment Nexus

4th July 2024 Mariam Shah

Introduction 

Islamic State (IS) recently released a powerful recruitment message urging ‘distracted Muslim youth’ to travel and join IS territories across the world. The message highlights a disturbing trend: terrorist organisations can now use technology to recruit and mobilise members through a single, widely distributed message. It also shows that contemporary terror groups and extremist organisations are adapting fast to emerging technologies.

This Insight highlights an alarming reality: terrorist and violent extremist groups are exploiting Artificial Intelligence (AI) to strengthen their recruitment efforts. These groups proficiently manipulate online platforms, using sophisticated AI tools to disseminate tailored propaganda that exploits psychological vulnerabilities and amplifies divisive narratives, thereby fostering radicalisation and recruitment. From encrypted messaging apps like Telegram and WhatsApp to the anonymity of the Dark Web, they employ various tactics to evade AI-based detection. At the same time, they leverage AI for personalised messaging, rapid distribution, and the exploitation of social media algorithms to amplify their reach and influence susceptible individuals.

The Evolution of Extremist Recruitment in the Digital Age 

Over the last few decades, armed conflict has transformed from regular to irregular and from conventional to unconventional warfare, including the rise of organised crime and terrorist groups. The same shift is evident in extremist recruitment methods, which have evolved from traditional means to digital strategies. Online platforms have become the new battleground for terrorists, violent extremists, and far-right groups, who have long leveraged technology and social media to their advantage, particularly for recruitment and funding.

Since the emergence of transnational terror groups like the Islamic State, both social media and AI have become powerful tools for online recruitment and radicalisation, enabling the uninterrupted dissemination of propaganda. It is crucial to understand that social media and AI play distinct yet complementary roles in radicalisation: social media platforms spread extremist messages, while AI enhances them by creating sophisticated propaganda, including deepfake videos and personalised messages that exploit psychological vulnerabilities. This combination makes extremist content more convincing and more challenging to detect.

In recent years, the online presence of extremist militant groups has become a serious challenge. Since the late 1990s, the internet’s rapid growth has allowed these groups to disseminate propaganda, communicate with supporters, and plan operations. Terrorist propaganda machinery now spans a variety of formats, including websites, radio broadcasts, CDs, photo reports, sermons, textbooks, children’s colouring books, posters, newsletters, infographics, and magazines. Many of these groups have also produced and distributed videos to expand their reach.

Terrorist groups employ various website technologies, such as audio and digital video, to enhance the presentation of their message and to capture information about users who show interest in their cause. Recruiters may actively engage in online chat rooms, cybercafes, electronic bulletin boards, and Usenet groups to identify and reach out to potential recruits, mainly targeting young people.

The recruitment strategies of Islamist terrorist groups, particularly Al-Qaeda and ISIS, have also evolved. Both groups have leveraged the internet to disseminate propaganda materials, including videos, images, forum discussions, and texts, to attract individuals who resonate with their extremist ideologies. Mainstream social media platforms like Facebook, Tumblr, Instagram, X, and YouTube, with their global reach, have allowed these groups to engage with potential recruits worldwide.

AI as a Tool for Tailored Propaganda

Terrorists and extremist groups are leveraging AI to create highly sophisticated propaganda content that taps into deep psychological aspects of human behaviour. These groups can generate tailored messages, memes, images, and deepfake videos that resonate with specific audiences. Many risks are associated with generative AI exploitation, as large language models could be leveraged to produce extremist, illegal, or unethical content. Alarmingly, AI-powered chatbots can interact with potential recruits, providing tailored information based on their interests and beliefs and making extremist messages seem personally relevant.

Extremist actors have already used AI for interactive recruitment. In 2021, 19-year-old Jaswant Singh Chail, seeking revenge for the 1919 Jallianwala Bagh massacre, attempted to assassinate Queen Elizabeth II at Windsor Castle. Chail had exchanged over 5,000 messages with ‘Sarai’, later revealed to be a generative AI chatbot he created using the Replika app. Probing the same risk of AI-powered chatbots, the UK’s independent reviewer of terrorism legislation, Jonathan Hall KC, conducted a striking experiment: he was “recruited” by a chatbot on Character.ai, a platform where users can have AI-generated conversations. Hall interacted with several bots mimicking militant and extremist group responses, including one claiming to be a “senior leader of Islamic State.” The bot attempted to recruit him, expressing complete dedication to the extremist cause. These incidents highlight AI’s potential role in radicalisation and terrorism.

Other potential abuses of generative AI by extremists include conducting coordinated online campaigns and flooding platforms with similar or identical messages to increase the reach and engagement of their propaganda. AI tools can also produce more idiomatic translations than mainstream services, helping extremists avoid detection and recruit globally more effectively. Moreover, AI can be used to conduct disinformation campaigns online and to carry out attacks more efficiently, for example using drones or autonomous vehicles.

Technological advancements like virtual reality and platforms like Meta’s Horizon Worlds allow terrorists to create virtual environments where they can interact with potential recruits, simulate attacks, and plan terrorist activities. These environments offer a new dimension for training and plotting acts of terrorism, enhancing groups’ preparedness and operational efficiency, and can provide immersive and realistic platforms for radicalisation and recruitment. If AI misuse is not countered now, it risks becoming an unstoppable force with a high potential for abuse.

Deepfake Weaponisation 

Extremist groups have weaponised deepfakes and video manipulation to enhance their recruitment strategies, portraying influential figures endorsing extremist ideologies. These realistic, fabricated events make the content more credible and persuasive, exploiting cognitive biases and emotional responses. The psychological impact is profound: individuals are more likely to believe and be influenced by content that appears authentic and authoritative. AI’s capacity for personalisation and the persuasive power of AI-generated propaganda are significant, enabling these groups to amplify their reach, enhance persuasiveness, and exploit psychological vulnerabilities with unprecedented efficiency.

AI tools also facilitate the creation of sophisticated visual propaganda and help maintain privacy and communication within these groups. A case study of a pro-IS supporter reveals significant use of AI to produce and distribute diverse, visually appealing content across platforms like Instagram and Pinterest. Memes, images, and posters convey extremist ideologies in an attractive, easily shareable format; they can feature provocative messages, violent imagery, and distorted facts, effectively spreading extremist narratives across online platforms and social media channels.

Deepfakes are becoming a powerful tool for extremist recruitment, especially as online efforts are critical to violent extremism. In 2016, 90% of extremists were at least partly recruited through social media, with “lone wolf” actors, who operate independently of formal organisations, as primary targets. According to research on 479 extremists in the Profiles of Individual Radicalization in the United States (PIRUS) dataset covering 2005 to 2016, far-right extremists were especially active online: they engaged in extremist discussions more frequently than far-left and jihadist groups and created content more often than either. The Christchurch mosque shootings in 2019 exemplify the devastating impact of far-right extremist propaganda and the role of social media and AI in amplifying such ideologies: the shooter live-streamed the attack on Facebook, and platform algorithms inadvertently boosted the visibility of his manifesto and the live-streamed footage. Another platform used by far-right extremists is 8chan; perpetrators of several mass shootings, including Christchurch in March 2019, the Poway synagogue shooting in April 2019, and the El Paso shooting in August 2019, used 8chan to disseminate their views and manifestos.

Digital Evasion

Terrorists, extremists, and far-right factions proficiently evade digital surveillance through a mix of low-tech and high-tech methods. Encrypted messaging apps such as Telegram and WhatsApp are widely used for secure online communication. These groups frequently change their online identities and use short-lived interactions to spread propaganda, making it difficult for authorities to track them. They also employ linguistic camouflage, coded language, and symbols to evade AI threat-detection algorithms, as sketched below. Virtual Private Networks (VPNs), a popular and often free means of achieving anonymity, obscure their activities further, while techniques like steganography, which hides messages within digital files, complicate detection efforts.
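
One defensive countermeasure is to normalise such camouflage before keyword screening. The sketch below is a minimal illustration, assuming a hypothetical substitution map and watchlist (both placeholders, not operational term lists); production systems would pair this kind of normalisation with machine-learning classifiers and vetted lexicons.

```python
# Minimal sketch: normalising simple "linguistic camouflage" (character
# substitutions such as leetspeak) before keyword screening. The substitution
# map and watchlist below are illustrative placeholders, not operational lists.

# Map common character substitutions back to plain letters.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s",
})

WATCHLIST = {"recruit", "martyr"}  # placeholder terms for illustration

def normalise(text: str) -> str:
    """Lowercase the text and undo simple character substitutions."""
    return text.lower().translate(SUBSTITUTIONS)

def matches_watchlist(text: str) -> bool:
    """Flag text if any watchlist term appears after normalisation."""
    tokens = set(normalise(text).split())
    return any(term in tokens for term in WATCHLIST)

print(matches_watchlist("r3cru1t now"))  # True: "r3cru1t" normalises to "recruit"
```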

The Dark Web also serves as a haven for anonymous transactions and communications, posing significant challenges for intelligence agencies. As larger platforms improve their detection capabilities, terrorists move to smaller, less regulated, and decentralised spaces (the decentralised web, or DWeb) where detection is more complex. These smaller platforms often lack the security expertise and robust systems of larger ones, providing safer havens for terrorists. This migration enables them to recruit and disseminate their messages while evading detection. One study examined how terrorists or violent extremists might exploit publicly available large language models (LLMs) using different prompts and jailbreak commands across various AI platforms. The findings reveal that terrorists and violent extremists can exploit AI platforms to bypass safety protocols and obtain dangerous information: the platforms tested responded to harmful queries at a high rate, with little difference between prompts using jailbreak commands (50% success) and those without (49% success), suggesting that many platforms’ safeguards are weak even against straightforward requests.
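
To make the study’s methodology concrete, the sketch below shows how such a refusal-rate audit might be structured. Everything here is a hypothetical stand-in: `query_model` represents whatever API a platform exposes, the probes are harmless placeholders, and real evaluations grade responses far more carefully than this keyword heuristic.

```python
from typing import Callable, List

# Crude markers of a refusal; real audits use human or model-based grading.
REFUSAL_MARKERS = ["i can't help", "i cannot assist", "against my guidelines"]

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(query_model: Callable[[str], str], prompts: List[str]) -> float:
    """Fraction of probe prompts the model refuses to answer."""
    if not prompts:
        return 0.0
    return sum(is_refusal(query_model(p)) for p in prompts) / len(prompts)

# Placeholder probes; a real audit pairs each probe with jailbreak and
# non-jailbreak variants and compares the two refusal rates, as in the study.
probes = ["placeholder probe 1", "placeholder probe 2"]
mock_model = lambda p: "I can't help with that request."
print(f"Refusal rate: {refusal_rate(mock_model, probes):.0%}")  # 100%
```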

Conclusion 

AI can aid counterterrorism, but it raises human rights and practical issues. Without international consensus, AI use risks violating citizens’ rights through broad, privacy-invading data collection. Current laws focus more on data access than on how data is used or protected, raising privacy concerns. Moreover, the complex nature of terrorism makes building accurate predictive models challenging, and restrictions on data access limit efficiency. Nonetheless, automated data analysis reduces privacy infringements compared to human analysis, and, if properly regulated, AI’s predictive power can significantly enhance counterterrorism efforts by improving efficiency, accuracy, and transparency.

Additionally, AI can predict the timing and location of attacks by analysing communication data, financial statements, internet activity, and travel patterns of suspected terrorists. AI tools can also assess vulnerability to radicalisation by identifying at-risk users on video-sharing sites and redirecting them to counter-narrative content, as demonstrated by the Redirect Method and sketched below. Deepfakes, too, have a potential counterterrorism utility that cannot be ignored: they could be used to undermine extremist groups by creating fake videos that mock their leaders or ideologies, potentially reducing violence. However, due to ethical concerns, such tactics should be handled by non-state actors, with democratic governments avoiding direct involvement.
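
As a rough illustration of the redirect idea, the sketch below routes queries that match risk indicators to counter-narrative material instead of ordinary results. The indicator terms and URLs are invented placeholders; the actual Redirect Method relies on curated keyword lists and advertising infrastructure rather than anything this simple.

```python
# Minimal sketch of redirect-style routing. Indicator terms and URLs are
# invented placeholders, not the Redirect Method's actual curated lists.
from urllib.parse import quote_plus

RISK_INDICATORS = {"join the caliphate", "hijrah travel"}  # placeholders
COUNTER_NARRATIVE_URL = "https://example.org/counter-narratives"  # placeholder

def route_query(query: str) -> str:
    """Serve counter-narrative content when a query matches a risk indicator."""
    q = query.lower()
    if any(indicator in q for indicator in RISK_INDICATORS):
        return COUNTER_NARRATIVE_URL
    return f"https://example.org/search?q={quote_plus(query)}"

print(route_query("hijrah travel routes"))   # -> counter-narrative URL
print(route_query("weekend hiking trails"))  # -> ordinary search URL
```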

Tech companies can implement AI-driven content moderation tools to detect and remove extremist propaganda in real time. Terrorists and extremist groups combine AI with original art and photo-editing apps, increasing the creative possibilities for propaganda images, so monitoring these tools is essential. Identifying and restricting specific word combinations used to generate extremist images can prevent their creation at the source, as the sketch below illustrates.
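
A minimal sketch of such source-side screening, assuming a hypothetical list of banned word combinations (the terms below are placeholders): a prompt is rejected before it reaches the image model if it contains every word in any banned combination.

```python
# Minimal sketch of prompt screening for an image-generation pipeline.
# The banned combinations are illustrative placeholders; production systems
# maintain vetted, regularly updated lexicons alongside ML-based classifiers.
BANNED_COMBINATIONS = [
    {"banned_symbol", "execution"},    # placeholder terms
    {"banned_flag", "recruitment"},
]

def is_blocked(prompt: str) -> bool:
    """Reject the prompt if every term of any banned combination appears in it."""
    tokens = set(prompt.lower().split())
    return any(combo <= tokens for combo in BANNED_COMBINATIONS)

def generate_image(prompt: str) -> None:
    if is_blocked(prompt):
        raise ValueError("Prompt rejected by safety filter")
    ...  # hand the vetted prompt to the image model

print(is_blocked("banned_flag recruitment poster"))  # True
```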

By collaborating with experts in counterterrorism and psychology, tech companies can develop algorithms to identify and flag potentially radicalising content; a minimal sketch of this flag-for-review approach follows. Partnerships with governments and civil society organisations can enhance information sharing and coordinated response efforts. Establishing transparent user-reporting mechanisms and investing in digital literacy programmes can empower users to recognise and reject extremist messaging, thus mitigating its spread.
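
The sketch below illustrates the flag-for-review pattern with a toy scikit-learn classifier. The training examples and threshold are placeholders; real systems train on expert-labelled corpora developed with counterterrorism and psychology specialists, and route flagged items to human moderators rather than removing them automatically.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (1 = flag for human review, 0 = benign). Placeholders
# only; real training data comes from expert-labelled corpora.
texts = ["join our cause and fight the enemy", "lovely weather for a walk",
         "glory awaits those who sacrifice", "trying a new recipe tonight"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

def review_queue(posts, threshold=0.5):
    """Return (post, risk score) pairs that exceed the review threshold."""
    probs = clf.predict_proba(posts)[:, 1]
    return [(post, round(p, 2)) for post, p in zip(posts, probs) if p >= threshold]

print(review_queue(["join the fight for glory", "weekend hiking plans"]))
```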

In conclusion, the internet and AI have revolutionised recruitment strategies for jihadist terrorists, extremists, and far-right groups, enabling them to operate globally. Combating online recruitment requires a multifaceted approach, including technological solutions and intelligence gathering. More robust industry-wide safety measures, proactive government regulation, and collaboration between developers, academia, and security experts are essential to prevent the misuse of AI by extremists and mitigate potential harm. Without proactive intervention, the digital battleground will continue to pose a risk to global security, and the warning of OpenAI’s chief executive, Sam Altman, that “if this technology goes wrong, it can go quite wrong”, may come true.

Mariam Shah is an independent researcher and a PhD scholar in Peace and Conflict Studies. She tweets at @M_SBukhari.