Artificial intelligence played a key role in the riots that followed the stabbing attack in Southport, UK, which claimed the lives of three young girls on 29 July 2024. By facilitating the circulation of fake news and racist conspiracy theories, algorithmic recommendation systems mobilised hundreds of people to target asylum seeker facilities and harass perceived racialised migrants, particularly Muslims. This process of mass radicalisation, accelerated and amplified by AI, resulted in the “worst wave of far-right violence in the UK post-war.” Although much has been said about recommendation systems, the influence of Generative AI remains largely overlooked. This Insight seeks to address this gap by examining how both algorithmic recommendation systems and GenAI tools were weaponised through the visual storytelling of the great replacement conspiracy theory.
An “All-Out Crusade”: Violence and the Great Replacement Conspiracy Theory
The close links between the great replacement conspiracy theory, violent extremism and terrorism are well-documented. Believing that “White European populations are being deliberately replaced at an ethnic and cultural level through migration and the growth of minority communities,” many individuals have carried out extremist and terrorist attacks across Europe. Its appeal to violence seems to lie in its ability to activate a crusading mentality. The great replacement conspiracy theory portrays racialised migrant communities, particularly Muslims, as evil insofar as they are perceived to be attempting to replace White European populations and destroy Western civilisation. The conspiracy theory conveys an exaggerated sense of persecution – a characteristic of a specific type of conspiracism that has, historically, fuelled mass movements and incited violent acts. Conspiracy theories adhering to the style described by the historian Richard Hofstadter tend to function as a call to action because they prompt individuals to perform as ‘militant leaders.’ In the face of a “super powerful evil force,” there is no time for dialogue. The perceived enemy “must be eliminated.”
The mobilising power of this conspiracy theory seems to be boosted by its grounding in White supremacy. By establishing an antagonism between White Europeans depicted as “pure good” and racialised migrants, particularly Muslims, as “pure evil”, the great replacement animates the fantasy of the White heroic warrior whose acts of violence have historically been legitimised as “ethical, regenerative, culturally legitimate, socially redemptive, and politically necessary.” In the European context, this call to violence is further reinforced by a religious dimension. As the great replacement associates Whiteness with Christianity and Blackness with Islam, the constructed antagonism between “pure good” and “pure evil” is transformed into a spiritual battle that evokes the imagery and rhetoric of the Crusades. By conveying the idea that Islam is incompatible with the West, this conspiracy theory serves as an invitation to White Christian Europeans (particularly men) to take part in a sort of modern crusade to “save” an idealised Europe, whose civilisation is perceived to have been built exclusively by White individuals. Following this line of thought, for some adherents, violence is morally and religiously justified.
The violent appeal of racist conspiracy theories like the great replacement runs deeper than conscious hatred – it taps into racist and Islamophobic fantasies deeply entrenched in the Western collective unconscious. By equating Whiteness/Westernness with civilisation and education and portraying White (Christian) Europeans as entitled to fully occupy European territories, these theories have operated as a mental refuge that has prompted White (Christian) Europeans to perform as rightful defenders of European nations. The alleged increasing presence of racialised migrants, particularly Muslims, in European territories is perceived as an existential threat because these spaces have been historically fantasised as exclusively inhabited by White (Christian) Europeans.
The racist and Islamophobic fantasies animated by the great replacement conspiracy theory have been systematically reinforced through centuries of novels, films, and visual media that consistently portray White (Christian) Europeans as civilised heroes while depicting Black, Arab, and Muslim communities as threats. Scholars like Edward Said and Frantz Fanon have documented how European and American storytelling has persistently characterised Black people, Arabs, and Muslims as “dirty,” “immoral,” “savage,” and “dangerous.” Far from harmless, visual storytelling holds significant pedagogical power and has consequently helped shape how Europeans view Black, Arab, and Muslim social groups. The more audiences consume films and media depicting Black men, Arabs, and Muslims as “deceitful” or “violent,” the more their unconscious minds may register harmful associations between these communities and danger or immorality. Over time, this harmful type of visual storytelling – full of racist and Islamophobic tropes – became banal, and this banality proved crucial during the UK riots.
From Banal Racist and Islamophobic Tropes to AI-Weaponised Visual Storytelling
Visual storytelling based on stereotypical representations of Black people, Arabs, and Muslims is particularly harmful in online spaces because, in addition to facilitating the circulation of racist conspiracy theories and bypassing content moderation, such representations have been incorporated into Large Language Models (LLMs), thus feeding Generative AI tools. While some digital platforms prohibit harmful stereotypes in their community guidelines, they seem to have struggled to enforce these policies consistently. The combination of inadequate content moderation and the normalised nature of racist and Islamophobic stereotypes has enabled the weaponisation of both algorithmic recommendation systems and GenAI tools. This proves particularly dangerous when these stereotypes are embedded in visual representations of racist conspiracy theories like the great replacement. The resulting amplification can activate a crusading mentality on a massive scale.
Funded by the LSE Urgency Grant Scheme, I examined this phenomenon, drawing particular attention to the role played by AI in the creation and amplification of a visual storytelling of the great replacement conspiracy theory. My object of investigation was one of the main X accounts “engaged in a coordinated influence operation” that spread multiple posts with xenophobic and Islamophobic rhetoric in the aftermath of the Southport killings, obtaining millions of impressions. Even though the owner of the account remains unknown, it holds the blue verified badge and has consequently benefited from greater algorithmic amplification. In line with GNET’s mission not to amplify the creators of extremist content, the account’s username has been omitted from this Insight.
At the time of data collection (October 2024), the account had 532.8K followers. With the support of a research assistant, I manually collected all the posts made between 4 July and 4 August 2024, and their corresponding number of views, likes, comments, and shares. After independently coding the dataset, we identified posts featuring audiovisual representations of the great replacement conspiracy theory as well as two overlapping conspiracy narratives: the White genocide (which claims that White populations are being deliberately exterminated) and Eurabia (which asserts that Islam is gradually overtaking Europe). I identified 152 such posts out of 388; my research assistant identified 153. We both reached the same number of GenAI-created images representing these racist conspiracy theories: 39. Although these visuals made up only 10% of the sample, they attracted disproportionately high engagement, receiving, on average, 1.14 million views per post. This finding suggests that GenAI-created visuals can significantly increase the virality potential of content shared on X.
Some GenAI-created images constructed a visual storytelling narrative of the great replacement conspiracy theory, as illustrated by Figure 1. Before proceeding with the analysis, I invite the reader to examine these images. How many stereotypes can you identify?

Figure 1: Sequence of GenAI-created images used by the account in question to construct a visual storytelling narrative of the great replacement conspiracy theory.
The narrative begins with the alleged replacement of White Europeans represented in the first image. By depicting White European women as an object of desire for racialised Muslim migrants who are “inherently violent,” the image draws on established stereotypical representations of White femininity and of Black and Muslim masculinity, thus animating several racist and Islamophobic fantasies. Besides representing a threat to the future of the White race, the picture also evokes an apocalyptic future for Western civilisation by equating racialised Muslim migrants with dirtiness and social disorder.
The following two pictures represent an attempt to restrain the alleged replacement through partisan politics, featuring Nigel Farage in the UK and Marine Le Pen in France pursuing racialised migrants. A few days later, another sequence of GenAI-created images conveys the message that engaging in dialogue was not sufficient. Fantasies associating Muslims with savagery are animated by stereotypical representations of Muslims as terrorists allegedly capable of killing children, women, and men, of setting the British Parliament on fire, and of destroying Notre-Dame Cathedral. Once again, a White blonde girl is represented as oppressed by racialised Muslims. This time, however, the image includes a Black man dressed as a police officer, suggesting that law enforcement itself has been complicit in the perceived replacement of White Europeans.
As racialised Muslim migrants have supposedly failed to integrate into European and British societies, and considering the alleged threat posed by them to White individuals, particularly children and women, only a “heroic warrior” would be capable of “saving” Europe and the UK. This part of the story is illustrated by a gigantic White blonde British man pursuing racialised Muslim migrants. The depiction of the White British man as a monstrous figure seems to draw on the imagery of the Hulk from the Avengers comics: his anger was so intense that his monstrous side emerged. The appearance of this White heroic warrior is not the climax of the story, though. Interestingly, the figure anticipating a final battle between White Europeans and racialised Muslims was shared by the X account during the riots in the UK and, not by coincidence, is set in the UK. This image especially reinforces the crusading mentality activated by the great replacement conspiracy theory because it symbolically invites the internet user to conclude the story. It suggests that the fate of the United Kingdom rests in the hands of White Britons, thus legitimising the use of violence against racialised Muslim migrants.
Combined, the quantitative and qualitative analysis reveals how both algorithmic recommendation systems and GenAI have been weaponised by the examined X account to incite violence towards racialised migrants, particularly Muslims. While the specific GenAI tools used by the account owner remain unknown, the abundance of racist and Islamophobic stereotypes embedded in the images demonstrates how easily current recommendation systems and large language models can be exploited to produce content that fuels racism and Islamophobia.
This occurs despite platform policies. X’s rules prohibit content “targeting others with repeated slurs, tropes or other content that intends to degrade or reinforce negative or harmful stereotypes about a protected category.” Yet as this analysis demonstrates, GenAI-created images based on racist and Islamophobic stereotypes have not only bypassed content moderation, but the platform’s own systems have disproportionately amplified them.
Addressing the Harmful Cycle Perpetuated by AI
By facilitating the creation of visual storytelling narratives of the great replacement conspiracy theory and increasing their virality potential, AI has reinforced racist and Islamophobic fantasies deeply entrenched in the Western collective unconscious, thereby inspiring verbal and physical abuse against racialised Muslim migrants. Whether marching toward mosques while chanting “we want our country back” or shouting “Muslims off our streets” while waving St. George’s Cross flags, rioters echoed the same racist fantasies animated by the visual storytelling narrative of the great replacement conspiracy theory. They took to the streets believing they were rightfully defending a nation that should be predominantly inhabited by White Christians.
The far-right weaponisation of recommendation algorithms and GenAI marks a dangerous new chapter in the intersection between extremism and digital technologies, creating a feedback loop that further normalises racial and religious hatred. These systems amplify harmful stereotypes that have become so commonplace they operate below the radar of content moderation, enabling hate to spread under the guise of ordinary content.
The first step in addressing this problem is acknowledging its existence. Tech companies must recognise the harms caused by stereotypical representations of Black people, Arabs, and Muslims, particularly when created by GenAI tools and amplified by algorithmic recommendation systems. Besides making it more difficult to produce images and videos featuring harmful stereotypes, they should limit the spread of such content by disabling sharing functions. If content moderation increasingly relies on community notes, platforms should encourage users to report harmful content by adding a ‘report’ button alongside the ‘like’ and ‘comment’ options. However, meaningful solutions require broader collaboration. Journalists, politicians, researchers, educators, and civil society must work together to tackle the social, cultural, and political roots of AI-generated and amplified racial and religious hatred. In other words, they must collectively expose and confront the banality of racist and Islamophobic stereotypes. The road ahead is challenging, but this piece represents an initial attempt to break the cycle perpetuated by AI – a cycle that is reinforcing harmful stereotypes and fuelling violence.
—
Beatriz Lopes Buarque is a Politics scholar specialising in the far-right politics of conspiratorial truth. She currently serves as a Fellow on LSE’s flagship interdisciplinary course, LSE100, teaching the AI theme. Her research explores the intersection between digital capitalism and issues of race, ethnicity, gender, and truth.