Over the past three years, regional, national, and international governments have repeatedly raised concerns that the misuse of Generative AI (Gen AI) could have a significant impact on the capabilities of terrorists and violent extremists (TVEs). These warnings have not been without foundation. Multiple TVE groups and actors have experimented with Gen AI technologies, primarily to optimise or enhance existing activities and processes.
However, despite these warnings and extensive coverage of TVE experiments with Gen AI, adoption of the technology to date has remained largely ad hoc and experimental. As a result, while Gen AI’s overall impact on the terrorist landscape remains hard to define, it is difficult to argue that it has been transformative.
This Insight will outline the slow, piecemeal adoption of Gen AI by TVE actors and try to characterise the relatively limited impact of this adoption. It will then assess some of the factors behind the rate of adoption, including by contrasting it with the adoption of Gen AI by Serious and Organised Crime (SOC) actors, and by exploring factors that shape terrorist adoption of new technologies more broadly. In doing so, it aims to help chart a course towards a better understanding of any potential future expansion in TVE Gen AI use, before concluding with brief recommendations for government and private sector actors.
Cases of TVE Gen AI Use
There have been several clear-cut examples of Gen AI use by TVE actors. These include a wide range of propaganda-related activities, with the Islamic State Khorasan Province’s use of AI-generated news bulletins to claim high-profile attacks in March and May 2024 being particularly noteworthy. Other TVE actors have used Gen AI to create imagery and videos, including the re-packaging or translation of materials from ISIL’s weekly newspaper al-Naba. However, most of this activity has originated from ISIL supporters, not official ISIL accounts. Similarly, although pro-Al Qa’ida accounts have discussed or promoted the use of Gen AI, there have only been isolated examples of AI-generated Al Qa’ida activity.
In contrast, AI-generated propaganda has been more widespread among the far-right violent extremist milieu, with AI-generated imagery and videos being used to support anti-immigration narratives in Italy, Germany, France and Ireland. However, a recent analysis highlighting Grok’s use by the UK far-right showed that between January and May 2025, Gen AI content only featured in around 2% of posts on X by a range of UK far-right groups and personalities.
Finally, there are emerging examples of how Gen AI has been exploited to support offline violence. These include its role in the planning of a January 2025 incident in Las Vegas in which an individual died by suicide in an explosion; the use of Gen AI by the 16-year-old responsible for a school stabbing in Pirkkala, Finland, in May 2025 to draft his manifesto and prepare for the attack; and a June 2025 incident in New York, where an individual arrested for throwing an IED onto subway tracks claimed that he had “used AI” to assist in its creation.
Determining Gen AI’s Impact to Date
It is difficult to determine Gen AI’s specific impact in each of the examples above, or its broader impact on the terrorist landscape through its use by TVE actors to support propaganda generation, operational research, and planning.
Researchers have emphasised the potential for Gen AI to significantly increase the volume of TVE propaganda production and to enable more targeted terrorist recruitment (including by translating material into multiple languages). However, there is little evidence that TVE material online has surged significantly, that new demographics are being targeted more effectively, or that AI-generated or enhanced propaganda material has had greater resonance with its audience.
Indeed, perhaps the clearest indication of AI’s limited impact to date is the non-linear rate of Gen AI adoption by TVE actors for propaganda production. High-profile use cases such as ISKP’s AI-generated news anchor in early and mid-2024 have not become established methodologies, with existing, well-established tools and techniques – such as traditional audio, video and image creation – remaining the primary means of producing and sharing TVE content.
Similarly, despite concerns that Gen AI will make hostile groups more efficient in their planning and operations, none of the currently publicly available cases appears to demonstrate that Gen AI provided capabilities or techniques that would have been difficult to obtain via other means, or that it enabled any significant shift in capability. It is also important to note that the extent to which at least two of the three aforementioned cases could be classified as terrorism remains unclear.
Factors Inhibiting Gen AI Adoption by TVE Actors
There are multiple explanations for Gen AI’s relatively slow rate of adoption by TVE actors. One reason, as outlined above, is that Gen AI has delivered relatively limited positive impacts for TVE actors to date. Further, researchers have highlighted the ideological factors that might inhibit the use of Gen AI by both Islamist and far-right TVEs. Internal and external factors that have previously impacted the extent and rate of TVE adoption of new technologies also apply in the case of Gen AI, including TVE group structure, their strategic priorities, resources and relationships (including opportunities for indirect or direct learning), and the impacts of counter-terrorism.
Specifically, in the context of Gen AI, potential inhibitors preventing a quicker, more extensive adoption could include 1) skills or resources issues, 2) the success of counter-measures and/or 3) the failure of hierarchical or ageing groups to appreciate or grasp the opportunities offered by AI.
To understand to what extent these factors might apply, it is instructive to compare TVE adoption of Gen AI with adoption by SOC actors – the group to which terrorists are most often compared, and with whom, in some instances, they have developed relationships.
At first glance, the speed of Gen AI adoption by SOC actors appears much quicker, with its impacts clearer and easier to define. For example, Gen AI has made it significantly easier for SOC actors to conduct multiple types of fraud, with AI-generated deepfakes and AI-supported social engineering techniques allowing them to scam individuals and companies more effectively.
Gen AI has also enabled SOC actors to quickly generate vast amounts of Child Sexual Abuse Material (CSAM) material and create deepfake pornography, including through a wide range of ‘nudify’ apps. Cybercrime-as-a-service has also expanded to include a market for “dark” LLMs, enabling SOC actors to buy Gen AI tools without guardrails or with illicit use cases built-in, including the creation of AI-generated fake IDs and inauthentic biometric information.
In addition to this optimisation, Gen AI has also lowered the technical bar for a range of cybercrime offences, including by enabling ‘vibe-hacking’ attacks. Research has further highlighted significant vulnerabilities within widely-used Gen AI tools that can allow SOC actors to conduct supply chain attacks or deliver prompt injections to users’ browsers, both with wide-ranging cyber-security implications.
While a full assessment of Gen AI’s impact on SOC activities is beyond the remit of this piece, it is clear that it has already enabled new capabilities and opportunities for SOC actors that have caused real harms, and that new vectors for misuse continue to emerge on a regular basis.
Clearly, the comparison between TVE and SOC actors is an inexact one. For example, SOC actors are typically pragmatists with few (if any) ideological constraints limiting the technologies they use and how they use them, a clear difference from TVE actors. If this is one possible contributory factor towards the difference in rate of adoption, it is less clear to what extent the other potential inhibitors – 1) resource issues, 2) successful counter-measures, and 3) age or hierarchical barriers – apply.
Many Gen AI tools are free and extremely intuitive, typically designed to reduce the know-how and resources required to conduct a range of specialist or technical activities. It also appears unlikely that content moderation measures are successfully preventing significant volumes of AI-generated TVE content from appearing online while simultaneously struggling to prevent the proliferation of non-AI-generated TVE content.
Finally, if demographics or hierarchical structures can inhibit innovation in certain TVE contexts – particularly for groups like Al Qa’ida, who have tended to prioritise long-term survival over short-term innovation – hierarchical, longstanding SOC groups have successfully integrated AI into their activities.
Conclusion
The comparison between TVE and SOC adoption of AI, while imperfect, yields important insights. Namely, despite their reputation as early adopters of new technologies, TVE groups have adopted AI more slowly than many anticipated, and more slowly than a broadly equivalent hostile actor like SOC.
Despite this finding, Gen AI misuse by TVE remains a significant concern. AI technology continues to progress and the user-base for Gen AI tools continues to grow. Importantly, much of this growth is driven by young people (a critical TVE demographic), attracted both by education-related use cases and increasingly, a search for companionship and emotional support.
This increasing and dedicated userbase is critical to any analysis of the trajectory of future adoption of Gen AI by TVE. One key factor in previous examples where TVE actors adopted new technologies more slowly than anticipated – notably cryptocurrencies and virtual assets – was the relatively slow growth in global usage rates. In contrast, some Gen AI companies are already reporting hundreds of millions of users.
As more lay people become regular Gen AI users, TVE actors are also likely to become more familiar with its capabilities and use cases. In parallel, new or enhanced technological capabilities may shift the relative benefits of using Gen AI in comparison to existing processes, or create new possibilities for misuse. For example, the potential misuse of chatbots by TVE remains a significant risk, while Tech against Terrorism have warned that agentic AI capabilities could enable bot-driven TVE online activity at scale.
Even in the absence of major shifts in TVE usage trends, Gen AI’s large user base means that we should anticipate more frequent examples of TVE using Gen AI for operational research – particularly with many Gen AI users using LLMs instead of traditional search – and that, as a result, TVE Gen AI usage will feature more frequently in counter-terrorism investigations and prosecutions.
Careful analysis will be required to determine whether this increased prevalence represents genuine TVE innovation designed to increase their capabilities or is a reflection of broader societal trends and Gen AI use by TVE actors in their everyday life.
Indeed, given the sporadic adoption to date and the seeming reluctance of TVE actors to repeat or build on these isolated use cases, it is possible that they have drawn similar conclusions to a recent MIT study, which found that 95% of Gen AI pilot programmes had failed. Or more concretely, that TVE actors – particularly those operating in the parts of the world most affected by terrorism – find the current landscape sufficiently conducive to their aims for them to maintain their technological status quo.
In this uncertain context, practitioners and policymakers should avoid overreacting to isolated examples of Gen AI use by TVE actors. Instead, a more comprehensive, ongoing analysis is required that is grounded on the latest intelligence, research, and civil society insights and seeks to track both current trends and anticipate future ones.
Simultaneously, tech platforms and governments should ensure that they are prepared for potential future shifts in the volume and diversity of AI-generated activity online, including by working closely with the source of this content – Gen AI companies and platforms – to quickly apply lessons learned from years of countering terrorism online to this new risk vector.
Finally, governments should continue to explore the need for regulation of the AI sector, driven not just by these potential TVE-related harms but also by the growing evidence that Gen AI tools are causing significant harm to their user base.
—
David Wells is a global security consultant currently working with a range of international and regional organisations – including UNOCT, the Council of Europe and OSCE – to better understand and respond to the PCVE challenges and opportunities offered by new technologies (particularly AI). He is an Honorary Research Associate at Swansea University’s Cyber Threats Research Centre and an Affiliate at the Middle East Institute, and was previously Head of Research and Analysis at UNCTED in New York.