
From Confusion to Extremism: How Deepfakes Facilitate Radicalisation

11th February 2026 Georgia Lala

On 14 December 2025, two gunmen opened fire on a public Hanukkah celebration attended by 1,000 people on Bondi Beach in Sydney, Australia. The attack killed 15 people and injured a further 38. In a press conference at Parliament House, Canberra, Australian Prime Minister Anthony Albanese labelled the attack as ISIS-inspired, further characterising the tragedy as an antisemitic terrorist attack.

Immediately following the tragedy, deepfake content circulated rapidly online. News articles highlighted how deepfakes spread on social media sites, including Reddit, WhatsApp group chats and X. The term deepfake, in this context, describes synthetic media created using artificial intelligence (AI) that manipulates or generates text, documents, visuals or audio content with the intent to deceive or demean. Deepfake content following the Bondi tragedy was wide-ranging, from AI-generated images depicting the attack as a staged false flag to AI videos of officials spreading false information.

Recent studies highlight how terrorist organisations leverage deepfakes as both a propaganda and recruitment tool. In 2023, ISIS put out a guide for its followers on how to use generative AI securely. Since 2024, neo-Nazi networks have exploited AI voice-cloning tools to produce English-language versions of Adolf Hitler’s speeches (with some videos collecting over fifty million views).

However, the power of deepfakes extends beyond propaganda and recruitment. The growing realism of deepfakes, combined with their suitability for algorithmic amplification, gives them the capacity to sow confusion and uncertainty. That risk is particularly acute during moments of crisis, when information, true or false, spreads rapidly amongst an already anxious audience.

This Insight explores how the confusion and ensuing “crisis of knowing” caused by deepfakes can act as a radicalisation vector towards extremism. To interrogate this premise, it analyses three areas. First, it examines how deepfakes create what Dr Nadia Naffi calls a “crisis of knowing,” in which seeing and hearing no longer constitute belief. Second, it leverages theories of uncertainty to understand how a “crisis of knowing” might push someone towards violent extremism. Finally, it explores how social media companies, which now play a pivotal role in crisis communication, can respond to a growing epistemological crisis by building on the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.

How Deepfakes Create a “Crisis of Knowing”

Deepfake content has grown in both volume and quality since it became mainstream around 2018. According to a 2025 Deloitte study, deepfake content on social media platforms grew by 550% between 2019 and 2023. A 2025 study by iProov found that only 0.1% of participants exposed to both real and deepfake content could discern the difference. Writing for UNESCO, Dr Nadia Naffi argues that we are approaching a threshold of synthetic reality, beyond which humans can no longer distinguish between real and fictitious media without the help of digital technology.

The saturation of deepfake content online naturally breeds confusion. Exposure to multiple pieces of contradictory information leads to uncertainty about what is true. This confusion is intensified when our usual litmus test for verification, seeing something with our own eyes, fails in the face of photo-realistic deepfakes. The confusion deepfakes generate is particularly potent during periods of crisis, when initial reporting may contain competing or incomplete information.

The capacity of deepfakes to sow confusion was evident after the Bondi tragedy. Immediately following the attack, an AI-generated image circulated depicting one of the victims with red makeup applied to his face, which was used to imply the tragedy had been staged. Another AI video showed Australian Federal Police Commissioner Krissy Barrett claiming four Indian nationals had been arrested, while another showed the New South Wales Premier announcing the cancellation of Pakistani visas after the attack.

Notably, each of these deepfakes was quickly debunked through one method or another. The image of the victim ‘staging’ the Bondi tragedy was exposed by Google’s SynthID, a tool used to watermark and identify AI-generated content. The video of the Australian Federal Police Commissioner was debunked by a statement from the Australian Federal Police. Unlike the other deepfake posts, the video of the New South Wales Premier could be debunked on sight and sound alone because of the American accent used to dub his lines.

Yet despite relatively fast debunking by either AI detection tools or news outlets, the damage may already have been done. Emerging evidence from preprint studies suggests that AI-generated misleading posts are disproportionately likely to go viral online, with AI-generated misinformation receiving 8.19% more impressions, 20.54% more reposts and 49.42% more likes on X than non-AI-generated misinformation. Deepfake content may also be particularly effective at leveraging algorithmic amplification, whereby content that provokes a reaction is served to an ever-wider audience. In these cases, even deepfake content that is obviously false may spread further because it provokes a reaction and triggers an algorithmic feedback loop.
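To make that amplification dynamic concrete, the sketch below is a minimal, purely illustrative model of an engagement-weighted feed. It is not any platform’s actual ranking system, and every name and number in it is a hypothetical assumption; it simply shows how content that provokes reactions, whether approval or outrage, is scored higher, shown to more users, and therefore gathers still more reactions.

```python
# Illustrative sketch of an engagement-driven feedback loop.
# All names, rates and weights are hypothetical; real ranking systems are far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    impressions: int = 0
    reactions: int = 0  # likes, reposts, angry replies -- all count as "engagement"

def engagement_score(post: Post) -> float:
    """Score rises with the reaction rate, regardless of whether reactions signal trust or outrage."""
    if post.impressions == 0:
        return 0.0
    return post.reactions / post.impressions

def rank_feed(posts: list[Post], slots: int) -> list[Post]:
    """Fill the limited feed slots with the highest-engagement posts."""
    return sorted(posts, key=engagement_score, reverse=True)[:slots]

def simulate_round(posts: list[Post], audience: int, reaction_rates: dict[str, float]) -> None:
    """One ranking round: whatever wins the slot gets more impressions, and so compounds."""
    for post in rank_feed(posts, slots=1):
        post.impressions += audience
        post.reactions += int(audience * reaction_rates[post.id])

# A provocative (e.g. obviously false but outrage-inducing) post draws reactions at a higher rate,
# so it keeps winning the feed slot even though many reactions express disbelief rather than trust.
posts = [Post("calm_report"), Post("outrage_deepfake")]
rates = {"calm_report": 0.02, "outrage_deepfake": 0.10}
for post in posts:
    post.impressions, post.reactions = 100, int(100 * rates[post.id])
for _ in range(5):
    simulate_round(posts, audience=1_000, reaction_rates=rates)
print({p.id: p.impressions for p in posts})
```

After a few rounds of this toy loop, the provocative post has accumulated roughly fifty times more impressions than the calm report, which is the feedback dynamic described above.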

However, the harm caused by deepfakes is more than a fleeting period of confusion and uncertainty. According to Dr Naffi, deepfakes are contributing to a wider “crisis of knowing.” Because seeing is no longer believing, all information must be treated with a level of suspicion. At the same time, Dr Naffi identifies that deepfakes create a liar’s dividend, whereby genuine information can be falsely dismissed as a deepfake. A “crisis of knowing” thus marks a transition from short periods of confusion towards an ongoing and unstable reality.

“Crisis of Knowing” and Violent Extremism

A “crisis of knowing” is existential in its own right. A world in which we no longer know what is true hinders our ability to organise collectively, act democratically and maintain social trust. Viewed through the lens of radicalisation, however, a “crisis of knowing” manufactured by deepfakes raises an additional concern: it can open pathways to violent extremism.

Since the early 2010s, a growing body of research has explored the relationship between uncertainty and violent extremism. Humans are hardwired to be intolerant of the sort of uncertainty deepfakes create. Research suggests that violent extremist narratives offer a pathway to minimise this uncertainty. In reflecting on the appeal and efficacy of violent extremist narratives, radicalisation expert Julia Ebner identifies five compelling elements. Two of them, simplicity and consistency, specifically speak to how violent extremist narratives temper the confusion of competing and inconsistent information. 

Regarding simplicity, Marzena Oliveira Ribas highlights in Vortex that violent extremist narratives often rely on black-and-white framing that reduces complex social and political realities to clear, binary interpretations. Such narratives can obscure the complexities of everyday life, as well as the confusion that arises from a deepfake-driven “crisis of knowing.” In information environments that are increasingly marked by deepfakes, such simplified frameworks may further appeal to individuals seeking clarity and stability. 

The consistency of some violent extremist narratives also provides a solution to uncertainty. Narrative inconsistency, common in the moments following a crisis, creates distrust if not quickly addressed. During the Bondi tragedy, deepfakes disrupted the narrative consistency of news outlets and authority figures by casting doubt on the reality of events. Again writing for Vortex, Oliveira Ribas highlights that violent extremist groups that can maintain narrative consistency may be viewed as more trustworthy sources of information, thereby attracting those disillusioned with traditional authority figures. 

Our resistance to uncertainty suggests that deepfake content does not need to explicitly advance a violent extremist ideology to act as a radicalisation vector. The confusion, uncertainty and manufactured “crisis of knowing” created by deepfakes may themselves propel radicalisation, because violent extremist narratives provide simple, coherent accounts in an otherwise confusing environment, offering psychological relief from the anxiety produced by ongoing uncertainty.

How Can Social Media Companies Respond to a “Crisis of Knowing”?

Social media plays a crucial role in crisis communication. Governments across the globe have increasingly integrated social media into their crisis responses. This has brought many improvements to crisis communication, enabling governments to rapidly communicate important information to a mass audience. For instance, the New South Wales Police used Facebook on 14 December to communicate with the public immediately following the Bondi attack. This communication included attempts to halt false rumours as information (both true and false) circulated rapidly.

Figure 1: A screenshot of a NSW Police Force Facebook announcement in the wake of the Bondi attack.

Social media is unlikely to disappear from crisis communication. As its role within the crisis communication ecosystem has become both central and unavoidable, platforms bear increasing responsibility for facilitating the timely and reliable flow of information. This task is made more challenging in an environment shaped by deepfakes, which undermine users’ capacity to discern truth from fiction.

However, there are signs that social media companies are willing to rise to the challenge. In 2024, 27 tech companies, including several social media companies, signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. The Accord includes a set of commitments to detect and deter harmful AI-generated content meant to deceive voters.

One could view the Accord as a starting point for a larger body of work. Since 2024, it has become increasingly clear that the threat posed by harmful AI and deepfakes extends beyond elections to all forms of communication. The confusion created by events such as the Bondi Beach attack, and the growing “crisis of knowing”, speaks to the need for action to reduce uncertainty and build trust. Such efforts are necessary to prevent individuals from succumbing to radicalising, extremist narratives that seem easier to digest in a moment of crisis.

In this regard, there are three things social media companies can do that build on the foundations of the 2024 Accord while responding to some of its limitations: 

  1. Establish a new accord that goes beyond the narrow focus of elections to encompass all deepfake content. Deepfakes cause harm well beyond election interference, necessitating an ongoing commitment from social media companies. 
  2. Establish benchmarking techniques that can measure progress against the new accord. A key critique of the 2024 Accord was its overly broad guidelines, which made it difficult to determine whether social media platforms were complying and whether their actions were effective. 
  3. Finally, social media companies could consider additional steps to ensure information pathways during a crisis are protected from deepfake content. At a time when people are least likely to tolerate uncertainty, platforms could consider temporary algorithmic changes that elevate information from institutional sources and reduce the visibility of unverified content (a minimal sketch of such a crisis-mode adjustment follows this list). Any such measure would require careful consideration of free speech and of the degree to which trust can be placed in institutional sources (for example, in contexts of corruption or authoritarianism). 
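To illustrate the third recommendation, the following is a minimal, purely hypothetical sketch of a temporary “crisis mode” re-ranking. The function names, labels and weights are invented for illustration and are not drawn from any platform’s systems; the point is only to show the shape of such an intervention: boosting posts from verified institutional accounts and demoting unverified or suspected synthetic content while a crisis flag is active.

```python
# Hypothetical sketch of a temporary crisis-mode ranking adjustment.
# All names, labels and weights are illustrative assumptions, not a real platform API.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    base_score: float            # the platform's normal relevance/engagement score
    institutional_source: bool   # e.g. verified police, government or newsroom account
    provenance_verified: bool    # e.g. passes a content-credential/watermark check
    suspected_synthetic: bool    # flagged by a deepfake classifier

def crisis_adjusted_score(c: Candidate, crisis_mode: bool) -> float:
    """During a declared crisis, weight provenance and source type more heavily."""
    score = c.base_score
    if not crisis_mode:
        return score
    if c.institutional_source:
        score *= 1.5    # elevate official crisis communication
    if c.provenance_verified:
        score *= 1.2
    if c.suspected_synthetic:
        score *= 0.3    # demote rather than remove, to limit over-blocking of lawful speech
    return score

def rank(candidates: list[Candidate], crisis_mode: bool) -> list[Candidate]:
    return sorted(candidates, key=lambda c: crisis_adjusted_score(c, crisis_mode), reverse=True)

# Example: an unverified viral clip outranks the police statement under normal ranking,
# but the ordering flips once crisis mode is switched on.
feed = [
    Candidate("viral_clip", base_score=0.9, institutional_source=False,
              provenance_verified=False, suspected_synthetic=True),
    Candidate("police_statement", base_score=0.7, institutional_source=True,
              provenance_verified=True, suspected_synthetic=False),
]
print([c.post_id for c in rank(feed, crisis_mode=False)])  # ['viral_clip', 'police_statement']
print([c.post_id for c in rank(feed, crisis_mode=True)])   # ['police_statement', 'viral_clip']
```

The design choices sketched here, demoting rather than removing content and confining the adjustment to a declared crisis window, are one way of partially addressing the free-speech concerns noted in the list above.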

However, the burden does not rest solely on social media companies. There are, of course, other actors involved in the crisis and deepfake ecosystem. Governments have a role to play in establishing strong regulations for deepfake content. AI companies have a role to play in creating guardrails against the creation of AI content designed to mislead. Civil society has a role to play in strengthening critical thinking and media literacy. While these actors and actions are outside the scope of this Insight, further work is needed by all parties to retool crisis communication for an uncertain world that pushes individuals towards simplistic violent extremist narratives.

In facilitating crisis communication, social media platforms have been entrusted with a core societal function. In an age of growing uncertainty driven by deepfakes, that role is more critical than ever: information pathways must remain robust so that they do not drive individuals towards easier-to-digest but more radical narratives.

Georgia Lala is a Research Fellow at the New Zealand policy think tank Koi Tū: Centre for Informed Futures. Her professional interests lie at the intersection of societal resilience, digital information systems, epistemic security and violent extremism (notably incel and white identity movements). She holds a BA from Duke University, where she attended as a Robertson Scholar, and a Master’s degree in Diplomatic Studies from the University of Oxford, where she attended as a Chevening Scholar. 
