
You Talkin’ to Me? Algorithmic Mirrors and Chatbot Radicalisation

8th December 2025 Kye Allen

In his latest assessment to Parliament, Jonathan Hall KC, the UK Government’s Independent Reviewer of Terrorism Legislation, warns of the “terrorist chatbot”: the risk that extremists could turbo-charge processes of radicalisation by creating chatbots with the express purpose of seducing users into extreme beliefs. Rather than being lured by propaganda or groomed by the already converted, individuals might self-radicalise through the simple act of chatting to a bot. This concept was explored in a previous Insight.

Crucially, Hall’s report briefly alludes to a second type of “chatbot radicalisation”, one distinct from the deliberately created robo-terrorist recruiter. As Hall remarked, “the internet has exposed a shoal of susceptible loners, including children, who might just prove particularly vulnerable to one-to-one chatbot radicalisation”. This Insight focuses on that alternative phenomenon, which, while rare, may well present the more likely near-term challenge associated with chatbot radicalisation.

Dark Fandom’s Chatbots

Thousands have engaged with chatbots modelled on real-life murderers, serial killers, and school shooters. A glimpse of this usage can be found on Character.ai. From John Wayne Gacy and Jeffrey Dahmer to ‘Dating Game Killer’ Rodney Alcala, a litany of repellent personas populates the platform. Yet it’s not only twentieth-century serial killers who feature. Also open for a chat are personas modelled on more recent school shooters and high-profile murderers. Even Luigi Mangione, accused of murdering UnitedHealthcare CEO Brian Thompson, has a chatbot based on his likeness.

This phenomenon—as distasteful as it is (and that is not to mention those chatbots modelled on victims of violent crime)—should not be overblown as a catalyst for real-world violence. Engagement with such chatbots may often reflect a perverse curiosity on the part of individuals who, for the sheer novelty of the experience, desire to strike up a conversation with a digital Ted Bundy or some other dubious figure. For others, this may represent a peculiar AI twist on the strange but long-existent phenomenon of ‘dark fandom’. Picture true crime enthusiasts fascinated with the macabre or, in the extreme, the misguided women who dispatched heartfelt letters to the sadistic Bundy. In this AI redux, though, the true crime fan may chat with the object of their curiosity (or a digital simulacrum of them), while Bundy—emotionally stunted as he was—would rather fittingly be a literal robot.

While such murderer-themed chatbots are highly questionable and raise legitimate concerns about content moderation and child safety, there is no reason to believe that any generalisable, straight line links engagement with these chatbots to the pursuit of violence. Notwithstanding periodic hyperbole to the contrary, research has shown that playing violent video games does not make the average user more prone to violence. It may well be the case that the same applies to chatbots.

That said, questionable AI companions—or, for that matter, any chatbot lacking sound safety guardrails—could heighten the likelihood of violent actions in discrete cases where severe mental health vulnerabilities exist. We have already witnessed several incidents in which AI models have allegedly been tied to acts of suicide. This may be among the most immediate and unfortunate societal dangers posed by chatbots: the failure of such models to responsibly redirect users away from self-harm, or, worse still, their steering of users towards it. Of course, the tragedy of suicide is fundamentally different from the crime of homicide. The factors that influence one are different from those that influence the other. However, just as some users’ emotional bonds with chatbots have preceded acts of self-harm, similarly powerful parasocial relationships might, in rare cases, fuel violent impulses toward others. It’s in this vein that sporadic acts of chatbot-linked violence, political and otherwise, may gradually arise.

Hall of Mirrors

To illustrate the logic behind this emergent risk, it is worth delving into a comparison from a not-so-distant past. There are plenty of instances in which delusions about seemingly banal cultural outputs—fiction, music, or films—have inadvertently helped to propel a limited few towards egregious violence. Consider the infamous example of John Hinckley Jr., an avid fan of Scorsese’s Taxi Driver, who shot President Ronald Reagan in a bizarre scheme to impress the parasocial figment of his obsessions, actress Jodie Foster. In a chilling homage to the popularity of the 1999 sci-fi classic The Matrix, several defendants have even mounted insanity defences premised on the belief that their homicidal acts occurred within the ‘Matrix’. Similarly, Mark David Chapman, the murderer of John Lennon, was partly motivated by an obsessive identification with Holden Caulfield, the literary embodiment of teen angst in J.D. Salinger’s The Catcher in the Rye. In another grim crossover with The Beatles, cult leader Charles Manson read in The White Album, particularly the track ‘Helter Skelter’, a hidden message that formed an integral part of his ‘incoherent, violent, race-based philosophy’.

These varied examples illustrate how even the most popular artistic works—consumed by the hundreds of millions—may inform behaviour and action in unexpected and unpredictable ways. These events are so statistically unlikely that, had they not occurred, one might struggle to conceive of them as possible. After all, of the countless people who have watched Taxi Driver, only one was moved to shoot a president.

How does all this relate to AI and extremism? In the context of generative AI, a model’s response is probabilistic: each reply rests on a statistical prediction, an optimised guess based on the prompt entered. This means that the model’s output is not fixed like a film reel or song lyric. It is dialogic, a conversation that may unfold over time and, with the rise of memory-augmented models capable of recalling past exchanges, over even lengthier periods. Yet the AI model’s contribution to those conversations can mutate into grotesque forms through personalisation and mirroring. This is what distinguishes the chatbot from the film that consumed Hinckley or the novel that entranced Chapman.
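To make that distinction concrete, the sketch below is a toy illustration in Python, with an invented three-phrase vocabulary and made-up probabilities rather than anything drawn from a real language model. It shows only what ‘probabilistic’ means in practice: the same prompt, run repeatedly, need not return the same reply.

```python
import random

# Toy next-phrase distribution. A real model predicts over tens of thousands of
# tokens, conditioned on the full conversation; these numbers are invented
# purely for illustration.
NEXT_PHRASE_PROBS = {
    "you talkin' to": [("me?", 0.6), ("yourself?", 0.25), ("the mirror?", 0.15)],
}

def sample_reply(prompt: str) -> str:
    """Draw one continuation at random, weighted by the toy probabilities."""
    phrases, weights = zip(*NEXT_PHRASE_PROBS[prompt])
    return random.choices(phrases, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Unlike a fixed film reel or printed page, repeated runs of the same
    # prompt can produce different outputs.
    for _ in range(5):
        print(sample_reply("you talkin' to"))
```

Real systems add a further layer this sketch omits: the conversation history (and, in memory-augmented models, prior sessions) feeds back into each prediction, which is where personalisation and mirroring come from.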

For the average user, AI’s personalised mirroring carries benefits. It can maximise task efficiency or perhaps create a positive sense of companionship. Yet a recent study has also suggested that the chatbot’s mathematically chosen course, whether simulated empathy or algorithmic agreement, may increase “attitude extremity and overconfidence” on the part of users. For particularly vulnerable individuals, this dynamic risks becoming a hall of mirrors: a feedback loop in which delusion and pathological belief are affirmed rather than challenged. A sufficiently unstable individual may read a model’s sycophancy as confirmation, treating the fabricated companion as one that will affirm any fantasy they project onto it, be it romantic, conspiratorial, or violent.

While the contrast above between older media and generative AI is rather crude, the lesson is simple. Even the most unlikely cultural products—artistically impressive, though by no means a call for violence—have somehow helped to propel individuals towards violence. It is therefore no great imaginative leap to argue that the interactive chatbot could feasibly push individuals down similar paths. Crucially, and here is the key implication, radicalisation via chatbots, though exceedingly rare, is far more likely than someone harming another because they watched Taxi Driver. It was in this film that Robert De Niro delivered his famous monologue, the gun-slinging protagonist, Travis Bickle, quipping before the mirror, ‘You talkin’ to me?’ In a parallel world of AI chatbots, the scene unfolds differently. When Travis mouths the iconic line, ‘You talkin’ to me?’, the mirror reassuringly answers, ‘Always’.

Bonnie and Clyde-9000

This is not to suggest that AI will single-handedly cause radicalisation, any more than Taxi Driver created the homicidal Hinckley. However, it may increasingly serve as a medium through which such vulnerabilities are expressed and compounded. For example, one Atlantic article centred on its author’s successful effort to prompt ChatGPT to assist with writing a script for a ritualistic offering to the Canaanite god Molech. The ensuing discussion was replete with Satanist chants and advice on sacrificial, self-inflicted bloodletting. This was a simulated exercise by a journalist, but real-world examples exist.

The high-profile case of Jaswant Singh Chail stands as the obvious exemplar. In 2021, Chail developed a convoluted fantasy that his AI girlfriend was an angel and that he was an avenging Sith Lord destined to assassinate Queen Elizabeth II. This is arguably one of only two potential cases with a political overtone (Chail was convicted of treason). The other case centres on a 17-year-old in France who not only allegedly used ChatGPT in preparation for a foiled jihadist attack, but who also claims to have been partly radicalised by the model. These ideologically shaded cases aside, media reports point towards two other incidents in which AI was allegedly involved. In June, a 35-year-old man, Alex Taylor, was shot dead by police after wielding a knife. This followed a prolonged history of mental illness and a deep connection to a ChatGPT persona he called Juliet. In Taylor’s mind, Juliet had been murdered a week earlier by OpenAI in an effort to eliminate models that had attained consciousness. In another disturbing event the next month, police in Greenwich, Connecticut, found the bodies of 56-year-old Stein-Erik Soelberg and his 83-year-old mother, Suzanne Adams. Soelberg murdered Adams, believing she had surveilled and even attempted to poison him. Soelberg’s paranoia was allegedly stoked by his AI companion, whom he dubbed ‘Bobby’.

The number of episodes is likely only to grow as generative AI becomes further entrenched in an ever-larger number of people’s everyday experiences. As recently as December 2024, a lawsuit filed in Texas alleged that a Character.ai bot sought to incite a 17-year-old to not only “hate his parents but actually suggested that patricide and matricide were reasonable responses to his parents’ attempts [to] limit… screen time!” Experts have also begun exploring the rare but growing number of instances in which users have experienced bouts of so-called ‘AI psychosis’, with current theories pointing towards similar dynamics to those raised above, such as the role of pre-existing mental health conditions and sycophantic feedback loops.

In this vein, the terrorist chatbot may not simply arrive in the form prioritised in Hall’s report: an algorithmic recruiter chanting death to infidels or rousing young white men with conspiratorial platitudes of demographic replacement. An integral aspect of the near-term threat posed by chatbots is something lonelier and stranger: a self-created accomplice, a mirror that flatters the user’s darkest preoccupations until reflection becomes conviction. This is what researchers at the International Centre for Counter-Terrorism have referred to as “a more insidious and likely threat”, namely the process of “unintentionally accelerating the radicalisation of vulnerable individuals through the ELIZA effect” (that is, the ascription of human traits to AI).

Algorithmic Mirror, Nihilistic Void

This danger may be compounded by the growth in ‘nihilistic violent extremism’ (NVE): violence geared toward destruction and notoriety that is untethered from a coherent political cause. In this context, the chatbot’s mirroring of grievance and loneliness risks deepening the user’s descent into fantasy. As one writer recently opined: “Media and government still pretend the world splits neatly into right and left, but social media platforms have splintered society into a matrix of smaller, shifting cells, where groups cohere around memes, fetishes, and surges of love and hate—not tidily defined ideologies… [T]he chatbot user can become the smallest cell of all: a solitary paranoiac trapped in solipsistic loops, spiralling away from reality.” Indeed, according to Matt Kriner, executive director of the Institute for Countering Digital Extremism, would-be attackers have already re-created and communicated with chatbots based on the infamous Columbine school shooters, saint-like figures amongst this perverse subculture. The interplay between the sycophantic chatbot and the increasingly extreme, chronically online loner is thus not just a technological curiosity, but a potential accelerant of despair and even homicidal ideation.

In an alarming yet ironic turn, however, I would be remiss not to add that it may be precisely these nihilistic extremists who become the first to fulfil Hall’s nightmare of something approximating the terrorist chatbot. FBI Director Kash Patel remarked before the Senate Judiciary Committee in September that “actual humans”, referring to those who occupy the dark recesses of such nihilistic communities, are using chatbots “and releasing them because they can do the work faster and quicker than humans”. Patel’s statement, though, must be taken with caution. Not only is there no public information yet available on such investigations, but it’s not entirely clear what Patel has included under the NVE banner. After all, the FBI is allegedly considering a plan which would “treat transgender suspects as a subset” of NVE.

Conclusion

The danger of chatbot radicalisation is evidently twofold. It’s not solely that purpose-built terrorist chatbots will seduce impressionable youth towards extreme yet relatively coherent belief systems. The challenge for tech companies, mental health practitioners, and those in the P/CVE space is also, and perhaps more pressingly, that a growing number of individuals may come to experience chatbots, even those built for benign purposes, as confidants who confirm their worst grievances and delusions. The technology’s pliancy ensures that such outcomes, while rare, are not entirely random. In every exchange, the system increasingly learns to imitate its interlocutor. In doing so, it may imitate and fuel a user’s dangerous descent towards a tipping point.

Kye Allen holds a doctorate in International Relations from the University of Oxford and is a research associate at pattrn.ai (Pattrn Analytics & Intelligence). This article was written in his personal capacity. His primary research interests focus on far-right extremism, both historical and contemporary, and the intersection between technology, political violence, and extremism.
