Terrorist and Violent Extremist (TVE) actors are known as early adopters of emerging technologies. There is growing evidence that TVEs have used AI, for instance, to translate manifestos into various languages, create propaganda images, and mask TVE content to avoid detection. TVE actors are also exploring the use of AI chatbots as synthetic actors that foster radicalisation in human-AI interactions. While the majority of these uses remain exploratory for now, the capacity of AI to cause significant harm when adopted by malicious actors is difficult to deny.
However, despite extensive warnings about the risk of malicious use of AI, policy and legal frameworks continue to lag behind the rapid adoption of the technology by TVEs. This lag is not just a matter of slow regulation, but of the way policy, technology, and organisational practices entangle, making progress and straightforward fixes difficult. Jackson, Gillespie, and Payette have described this as a “policy knot,” where institutional silos, conflicting incentives, and fragmented governance slow coordinated action (p. 589). In practice, this means that regulation and decisive action often arrive only after the harms have already materialised. Policy is designed “after the fact” rather than in anticipation (p. 529). For the prevention and countering of violent extremism (PCVE), this is a problem, as it leaves gaps that TVEs can exploit.
This Insight examines the tension between emerging technologies, TVEs’ capacity to innovate, and policy, based on a multi-stakeholder workshop conducted on 30 June 2025 at the European Conference on Computer-Supported Cooperative Work (ECSCW) in Newcastle upon Tyne, UK. The workshop participants included regulators (such as Digital Services Coordinators [DSCs]), law enforcement, UN-affiliated advisors, individuals with ties to major tech companies, and representatives from civil society. We used a Value Sensitive Design (VSD) approach, an established framework for systematically identifying and integrating human values into the development and governance of technology, while also revealing tensions that necessitate difficult trade-offs. This Insight analyses what the workshop discussions reveal about current barriers to effective governance of AI misuse by TVE actors. We argue that three issues stand in the way. The first is the erosion of trust among institutions, civil society, and the public, including declining trust in the effectiveness of responses. The second concerns persistent tensions between competing values, for instance between security and surveillance on one side and privacy and civil rights on the other; such tensions hinder stakeholder consensus and decisive action. The third is the structural “policy knot” of fragmented governance. By unpacking these challenges, we propose recommendations for tech companies, regulators, and law enforcement to move from recognising the risks to taking action.
Trust: The Core Value at Risk
Malicious use of AI by TVE actors threatens to erode trust in institutions, information, and communities. The loss of trust reaches far beyond incidental extremist uses of AI; it creates broader societal vulnerabilities that hostile actors can exploit. For instance, an AI-generated manifesto or news broadcast could spread widely before fact-checkers intervene, leaving lasting doubt even after its removal. Thus, even limited or incidental cases of AI misuse can have a systemic impact because they undermine the basis of trust in our information environments.
Workshop participants repeatedly returned to the issue of trust, identifying it as the value most at risk. Across sectors, they agreed that public distrust stems from the absence of a credible, well-regulated information ecosystem, a view supported by research on trust in AI. This mistrust may be further exacerbated by ostensibly benign uses of AI. When such tools are deployed without sufficient transparency or accountability, for instance in counterterrorism, they can undermine institutional (and regulatory) legitimacy and corrode trust within communities and broader society.
Thus, while the malicious use of AI by TVE actors remains largely exploratory, the societal vulnerabilities, particularly the erosion of trust in information and institutions, are already evident. Malicious actors, such as organised crime groups, have recently weaponised AI in unprecedented ways, including sophisticated cyberattacks and complex schemes that integrate AI at every stage of their operations. While ideological constraints may so far have limited the wider use of AI by TVE actors, the growing fluidity of extremist ideologies and the often opportunistic nature of extremist behaviour (p. 1544) make a more pragmatic adoption of AI by TVEs increasingly plausible – much as their strategic use of blockchain technology emerged only after initial experimentation.
The issue lies not in AI alone but in the fragility of our information environment, which predates the wider use of AI. Systemic weaknesses in content moderation on social media platforms, together with ineffective and retrospective fact-checking infrastructure, are well documented and still fall short of preventing the amplification of harm. In this context, initiatives like the EU’s trusted flaggers are promising but late, and have not yet addressed terrorist and extremist online content. This asymmetry between the rapid generation of new content and content formats and the delayed verification by trusted sources accelerates the ongoing erosion of trust and points to a problem larger than the exploratory or incidental malicious use of AI.
Value Tensions that Stall Decisive Action
Workshop discussions made clear that slow governance responses often arise from persistent value tensions that cannot be fully reconciled. These tensions help explain why governments regulate technology only after harms have materialised: balancing them, and justifying the necessity of proactive measures, is difficult. One example is the tension between comprehensive counter-extremism measures, such as surveillance, intended to create safety, and civil rights, such as privacy. Another is “borderline content” at the edge of policy or regulation that can be exploited by TVEs. These spaces, where governance is stalled or paralysed, create opportunities, or attack vectors, for exploitation.
Transparency from governments and technology companies is essential to sustain public trust and the legitimacy of PCVE measures. Yet transparent communication in PCVE is a double-edged sword: it can foster accountability, but it risks exposing sensitive security methods. For example, tech platforms may resist calls for algorithmic transparency, fearing TVEs may exploit it to evade moderation (similar to how AI models have been jailbroken [p. 19]) or expose weaknesses where technologies intersect, such as AI with drones or 3D printing. While these concerns are valid, workshop participants emphasised that, for policy to be effective, it must account for these trade-offs by ensuring that businesses and technology actors remain auditable and can be held accountable.
On the other hand, rapid responses (by regulators or law enforcement) often collide with the necessity for due process, leaving room for exploitation. For instance, strict removal deadlines for terrorist content risk over-removal, especially in borderline cases (pp. 10-11), as illustrated by France’s Loi Avia, which was struck down for overreach. Each of these unresolved tensions generates hesitation or overreach that translates directly into exploitable gaps.
The Policy Knot: Challenges in Governance Due to Structural Fragmentation
At the heart of these challenges are structural entanglements, which sociotechnical scholars describe as a “policy knot” that resists linear fixes. These entanglements stem from the interdependent ways in which policy, practice, and the design of technology operate, making simple solutions impossible. In countering the malicious use of AI, regulators, platforms, and law enforcement typically operate under different mandates. While there are touchpoints, they generally work in silos, which makes joint responses slow and inconsistent. Thus, governance challenges may stem less from TVE adoption itself than from the regulatory, technological, and practical interdependencies that create exploitable gaps.
The policy knot becomes evident in the existence of silos (p. 320), overclassification and secrecy between stakeholders, and the lack of safe spaces for open exchange. The result is that governments, tech platforms, and civil society adapt and learn more slowly than many transnational TVE actors, such as far-right groups (p. 16) or Jihadist groups (p. 632), which often share tactics more freely and across borders. For instance, policy on online terrorist content, with a few exceptions (such as the EU’s Digital Services Act), remains fragmented across national borders and among stakeholders. Government agencies, tech companies, and NGOs often act independently, and coordination tends to happen only after major incidents. This reactive, ad hoc cooperation leaves large gaps in coverage and slows the ability to adapt to new extremist tactics. Moreover, regulatory requirements that sometimes conflict across jurisdictions make information-sharing challenging. Smaller platforms in particular are frequently caught between competing legal risks, which discourages open collaboration with peers or regulators and hampers innovation. The result is a governance system that is not only reactive but also one that disproportionately affects already marginalised communities (such as minority populations, vulnerable online communities, or regions in the Global South with weaker regulation or less investment).
Inequitable Impacts and Vulnerable Populations
Finally, a recurring theme in the multi-stakeholder workshop was that the malicious use of AI, and its regulation, do not affect all populations equally. This undermines both fairness and the strategic effectiveness of countermeasures. It is now well established that young people are particularly vulnerable to TVE harms online and are often targeted in the parts of the online world where they feel most comfortable, frequently gaming communities. This is not a particularly new insight, but responses remain fragmented internationally, and some argue that they are overblown or create value tensions. Thus, without coordinated responses, governments risk adopting powerful policy instruments that may further erode trust and rights while leaving the core problem of uneven or fragmented safeguards unresolved.
Pathways Forward
Building on discussions from our workshop, we propose several paths forward to transition from recognising risks and harms to taking action.
Focus on incremental change: The discussions highlighted the need for incremental, cumulative mechanisms that provide a scaffolding, or infrastructure, for effective cross-boundary governance. For instance, AI watermarking and media literacy initiatives each have limitations in isolation, but together they can mutually reinforce trust in information and in users’ capacity to evaluate content critically. Institutionally, this can take the form of multi-stakeholder innovation labs, for example partnerships between universities, law enforcement agencies, and civil society organisations, where AI tools are tested under controlled conditions, independently evaluated, and reviewed by joint oversight boards before reauthorisation or decommissioning. One example is the Police AI Lab in Amsterdam, which brings together academic researchers and the Dutch police under shared ethical guidelines. By being (partially) open to the public, such initiatives can also serve a trust-building function.
Shared infrastructure: Shared infrastructures and observatories can help mitigate resource constraints, build trust through transparency, and create space for pooling cross-stakeholder expertise. Examples of such initiatives are numerous, yet they are often not explicitly targeted at TVE content. Expanding them to tackle malicious AI use could create benchmarks for moderation tools and enable greater civil society participation in countering TVE.
Translation mechanisms: Shared infrastructures, observatories, and similar initiatives can also serve as additional translation mechanisms, facilitating conversation across the regulatory, tech, and civil society divides. A common issue in cross-disciplinary work is that stakeholders often operate with different conceptual models or divergent language, even within the same context. Red-team libraries, safe-space briefings, and funded “policy-tech translators” can make trade-offs explicit and ensure decisions are comprehensible and actionable. Developing shared standards (like safety by design) can further align practices across silos and create a shared language.
Conclusion
TVE actors will continue to experiment with AI, following familiar patterns of technology diffusion in which malicious actors probe the boundaries of novel technologies before mainstream adoption and governance catch up. This adoption is unfolding within a fragile information ecosystem in which key societal values, above all trust, are under threat. Discussions during our ECSCW workshop revealed a strong consensus on the urgency of the issue, but also highlighted how regulation often becomes entangled in competing value tensions. Approaches that embed safeguards and policy into the development of new technology (such as safety by design) can contribute to a better shared understanding and a common language for addressing these issues.
–
Kevin Marc Blasiak is a Postdoctoral Researcher at TU Vienna’s Faculty of Informatics, where he leads the Responsible Computing Circle at the Center for Technology & Society (CTS). He is also a member of the Trust & Safety Teaching Consortium. His research focuses on technology governance, trust and safety, and persuasive technologies, with particular attention to ethical chatbots designed for countering violent extremism across online–offline prevention pipelines, including correctional and rehabilitation contexts. He collaborates with violence prevention NGOs and international organizations on responsible technology use, including initiatives on AI and law enforcement. He holds a PhD in Information Systems from the University of Queensland.
Website: https://www.kevinmarc.org/
Daniel E. Levenson, MA, MLA, is a PhD Candidate at Swansea University. He received an MLA in English and American Literature and Language from Harvard University, as well as an MA in Security Studies from the University of Massachusetts at Lowell. He is a member of the board of the Society for Terrorism Research and, since 2019, has served as a member of the FBI Boston Mass Bay Threat Assessment Team.
Website: www.danielericlevenson.com
–