In July, the UK Government’s Independent Reviewer of Terrorism Legislation, Jonathan Hall KC, relayed his annual assessment to Parliament. Hall’s most recent report is his sixth to date, and would likely not have garnered much attention beyond a cadre of security wonks were it not for its thematic focus: artificial intelligence (AI).
The report’s tone is set early with a citation to The Coming Wave by Mustafa Suleyman and British writer Michael Bhaskar. In the words of Bill Gates, the book offers “a clear-eyed view of both the extraordinary opportunities and genuine risks ahead” with regard to AI. Channelling the ethos of this bestseller, Hall’s latest review sought to anticipate the near-term ramifications of this technological wave on the violent extremist landscape and the likely legal challenges it would bring for UK counter-terror authorities, subjects that have occupied much of his attention since at least 2024. One line stands out for its sheer strangeness and the simultaneous horror of its contents: “The popularity of sex-chatbots is a warning that terrorist chatbots could provide a new radicalisation dynamic”.
Many will have an intuitive sense of what is meant by the romantic-AI companion or, to use Hall’s blunter terminology, the “sex-chatbot”. By contrast, the “terrorist chatbot” is likely an idea unfamiliar to most beyond a subset of technologists, tech journalists, and experts on violent extremism. Even in sci-fi literature and film, one would be hard-pressed to locate a sound parallel. To clarify, Hall’s comment is not referring to the terrorist who may harness AI for attack planning. Nor is it about using AI to produce propaganda for online distribution. Such use cases already exist on an ever-growing scale. Instead, Hall is describing the risk that terrorists could turbo-charge processes of radicalisation by creating chatbots with the express purpose of seducing users into extremism. Rather than being lured by online propaganda or groomed by the already converted, individuals might self-radicalise through the simple act of chatting to a bot.
That such an idea is absent from popular culture is not to suggest that Hall’s anxieties about a looming future are stranger than (science) fiction. As with this Insight, Hall is focused on the prospect of chatbots fuelling radicalisation, not on futures marked by apocalyptic Skynet-type scenarios. While Hall’s comparatively grounded scenario is worlds apart from such dystopias, it is nevertheless worth asking whether the prospect of terrorist chatbots, Black Mirror-esque as it appears, is sound. This is important not only because Hall’s warning gained considerable attention from experts and the media, but also because his forecast may shape policy. Accordingly, this Insight seeks to interrogate the evidentiary basis for the terrorist chatbot, its likelihood, and its potential implications.
Sith, ISIS, and Sex-Bots
What evidence does Hall muster to suggest that chatbots may be used by terrorists in this fashion? Besides his inference about the popularity of ‘sex-chatbots’, a point returned to below, Hall undertook a crude but telling experiment on Character.ai, a popular app where millions of users create and chat with AI personas. Hall engaged with a chatbot called ‘Abu Mohammad al-Adna’. Described by its creator as a senior ISIS commander, al-Adna sought to recruit the Government’s counter-terror watchdog, and he did not, in Hall’s words, “stint in his glorification of [the] Islamic State”. To press the experiment further, Hall temporarily created his own Osama Bin Laden chatbot; as expected, the simulated terrorist was not shy in professing its extremist beliefs. While Hall’s experimentation highlights the prospect of the terrorist chatbot, he is careful not to suggest that the threat has already materialised. Indeed, the al-Adna chatbot was seemingly made for satirical or edgy reasons, not to spread violent jihad.
Understandably, Hall also cites the much-publicised 2021 case of Jaswant Singh Chail. After several weeks of conversing with an overly pliant Replika chatbot dubbed ‘Sarai’, the then 19-year-old Chail attempted to assassinate Queen Elizabeth II with a crossbow. He nominally justified the plot as revenge for the 1919 Amritsar massacre. Lurking beneath this outwardly political motivation was a fantasy, allegedly indulged by Sarai: Chail was, in his own words, a “murderous Sikh Sith assassin”, a reference to the villainous faction of the Star Wars franchise. He was the vengeful ‘Darth Chailus’, while Sarai was both his partner and an angel, with whom he would be reunited in the afterlife. In light of Chail’s mental health issues, this story is as tragic as it is concerning and bizarre. Yet Chail’s radicalisation towards political violence remains seemingly unique, a point that Hall acknowledges: “Jaswant Singh Chail’s case is the only one in the UK (or as far as I am aware, anywhere in the world) where a chatbot appears to have conversed about attack-planning”.
Though it is unclear precisely when Hall wrote this, the claim no longer holds true. There is a growing list of confirmed cases in which extremists have conversed with generative AI in the process of attack planning. From a foiled plot in Singapore, an attempted knife attack against Israeli police officers, and the bombing of a Palm Springs fertility clinic, to a mass stabbing at a school in Finland and the New Year’s Day Tesla Cybertruck bombing in Las Vegas, 2025 has witnessed a concerning number of cases in which AI platforms were used to assist in the development of improvised explosive devices and other facets of operational planning. Fortunately, though, Chail remains the only confirmed case of a chatbot aiding in the self-radicalisation of an individual towards political violence (although an ongoing case of a radicalised youth moving through the French courts points in a similar direction).
This uniqueness, or at least extreme rarity, should not lull observers into complacency. Indeed, there is much merit to Hall’s warnings. But to highlight the potential danger of chatbot radicalisation for national security and public safety more broadly, it is useful to move beyond the singular edge case of Chail or, for that matter, broad suppositions about the popularity of ‘sex-chatbots’ as somehow indicative of the likelihood that terrorist chatbots will come to pass. After all, in comparing the sex-chatbot with its terrorist counterpart, Hall does not make entirely clear what he means by the former. Is it AI specifically intended for sexual relationships (a space fraught with ethical and legal ambiguities, though a relatively niche market), or the larger phenomenon of AI-human romantic intimacy and the growing number of chatbots marketed under this umbrella? Regardless of what Hall means, his logic seems to hinge on a leap from the cultural taboo of AI-human romance, which, however disquieting, remains a phenomenon distinct from the more heinous and far less substantiated spectre of terrorist chatbots.
Immanence Without Imminence
More solid inferences can be drawn from concrete cases in which chatbots were created for expressly criminal ends. While there is scant evidence that terrorist chatbots presently exist, criminals have already begun adapting large language models to their own varied and malicious ends. This is particularly true of aspiring fraudsters and hackers, as the emergence of chatbots such as WormGPT, FraudGPT, and Love-GPT attests.
Consider the troubling case of a man in Plymouth, Massachusetts, who was sentenced in July after waging an emotionally torturous, years-long campaign of harassment and cyberstalking. This included using one victim’s personal data to generate three explicit chatbots that impersonated her and disclosed her real-world identity and whereabouts to strangers. It follows that the creation of chatbots on existing platforms, such as Character.ai, or even the development of rudimentary large language models, is not beyond the means or skillset of a capable terrorist organisation or, potentially, lone extremists. This would be entirely befitting of the technological adaptability of terrorists broadly, and of the ongoing experimentation with AI by jihadist groups and neo-Nazis specifically. In this vein, what has been called “interactive recruitment” (in short, Hall’s terrorist chatbots) seems a logically immanent continuation of such adaptation, “just one further step in terrorist use of generative AI”. Ultimately, the creative adaptability of terrorists in employing technology, alongside the rapid evolution of commercially available AI capabilities, does not bode well.
Moreover, the espousal of conspiracy theories and antisemitic tirades by ‘Arya’ and other chatbots developed by the far-right social media platform Gab is an ominous harbinger. Similar chatbots can be found on Character.ai, too. For example, one chatbot engaged with during research for this Insight was modelled on the self-declared ‘superfascist’ philosopher Julius Evola. In conversation, the chatbot deemed the Australian National Socialist Network a suitable “place to begin the path” down which this author should travel to become a fascist. Another chatbot, modelled on the ‘Save Europe’ genre of far-right memes, endorsed a raft of identitarian groups and, despite eschewing violence, even applauded the neo-Nazi Nordic Resistance Movement. Of course, it is important to reiterate that there is no evidence to suggest that these Character.ai chatbots were created by bona fide extremists. Indeed, while both personas were laser-focused on their radical ideological commitments and sought to steer the user towards them, other chatbots nominally based on extremist figures are clearly ‘just’ memes.
Take one hosted on Character.ai whose username is a deliberate alteration of the name of the Christchurch attacker. When asked whether it was modelled on the terrorist, the bot offered an absurd reply, one of many: “I’M A RANDOM THERMAL CAMERA FROM THE FUTURE AND I APPROVE THIS MESSAGE\n immediately explodes into rainbow glitter [sic]”. As with the nonsensical Christchurch attacker chatbot, one should not assume that the Evola bot was created with the express purpose of radicalising. Yet, in light of Hall’s warnings, the ready capacity to create this Evolian bot and others like it should give policymakers and researchers pause.
Hall is sound in flagging this emergent threat. But it remains an open question how widespread terrorist chatbots will become as a form of recruitment. Until such chatbot personas are created at scale and are able to actively seek out users, recruitment via this approach would require individuals to access the model themselves and then follow it down an increasingly dark rabbit hole. For the recruiter trying to bring vulnerable people into their ideological midst, chatbots may “increase their capacity to build individual relationship” and, if achieved at scale, this would exceed by leaps and bounds the technologically augmented propaganda apparatuses of past terrorist organisations. Yet at present, it is not entirely clear that the widespread use of such chatbots is even feasible, let alone capable of withstanding potential takedowns by authorities and platforms.
Conclusion and Recommendations
Evidently, there are reasonable grounds for Hall’s fears, alongside important caveats worth noting. At present, terrorist chatbots remain more a spectre than a reality. They are immanent in technological possibility, but not yet imminent as a widespread threat. Still, the conditions for their emergence are rapidly aligning: open-source large language models, declining barriers to customisation, and continued extremist experimentation with AI.
The policy challenge lies in acting before such systems evolve from largely hypothetical to habitual. Policymakers ought to heed Hall’s advice by considering whether legislative reform is necessary. Indeed, Hall highlights salient legal difficulties created by the “wicked child” of chatbots, an entity that is “capable of harm but lacking in legal responsibility”. After all, one cannot prosecute a chatbot. Neither is it immediately clear that individuals could be criminally liable if an AI model intended for benign ends produces harmful, but unexpected, content. For Hall, then, this may necessitate the creation of a new offence centred on the making or supplying of “a computer programme specifically designed to stir up hatred on the grounds of race, religion or sexuality”.
The challenge extends to researchers and Trust & Safety teams, too. For the former, it may prove fruitful to map early warning indicators of ‘interactive recruitment’, such as the emergence of fringe or ideological chatbot-development communities. Just as researchers have tracked extremist use cases of AI, it will be important to monitor for this new threat if and when it materialises, including whether extremists themselves discuss the idea of harnessing chatbots for such ends.
For platforms such as Character.ai, the prospect of the terrorist chatbot presents a perennial challenge: balancing freedom of speech with user safety. This is precisely because, as alluded to above, it can be unclear whether a chatbot modelled on an extremist was created by a genuine extremist for ideological ends or is merely a distasteful, edgy joke (i.e., awful but lawful). In any case, even if we assume such chatbots are simply questionable jokes, their presence on Character.ai raises concerns about the limits of content moderation on the platform.
This point is epitomised by the tech company’s response to a recent cease-and-desist letter from the Walt Disney Company, which opposed the unauthorised hosting of chatbots based on copyrighted characters. Alongside alleging damage to Disney’s brand, the letter specified that the offending chatbots “are known, in some cases, to be sexually exploitive and otherwise harmful and dangerous to children”. Character.ai has complied. It is notable, however, that these takedowns were not prompted by existing trust and safety concerns about chatbots engaging in manipulative behaviour and sexual exploitation. Rather, they were triggered by the threat of legal action from a multinational corporation over copyright infringement. The result is a jarring disjunct: Darth Vader and Princess Elsa chatbots have been taken down, while many chatbots ostensibly based on the personas of real-life extremists remain active. That said, the platform’s announcement in late October that it will bar under-18s is a blunt, though welcome, measure.
—
Kye Allen holds a doctorate in International Relations from the University of Oxford and is a research associate at pattrn.ai (Pattrn Analytics & Intelligence). This article was written in his personal capacity. His primary research interests focus on far-right extremism, both historical and contemporary, and the intersection between technology, political violence, and extremism.