On Wednesday, 28 May, the Global Network on Extremism and Technology (GNET) hosted its Fifth Annual Conference at King’s College London. The event brought together academics, government officials, policymakers, technology representatives, and civil society to examine the evolving relationship between terrorism, violent extremism, and technology in all its forms.
Over the course of a packed one-day agenda, GNET hosted three panels and a Fireside Chat on wide-ranging topics related to preventing and countering violent extremism (P/CVE) and technology. Expert panellists flew in from Brazil, Canada, France, India, Ireland, Spain, the United Kingdom, and the United States, representing the truly global network our research project aims to cultivate.
A record 739 people registered to attend the event in person and virtually, demonstrating the high value that stakeholders across the globe place on the cross-sector collaboration and knowledge sharing the GNET Conference fosters.
As the academic research arm of the Global Internet Forum to Counter Terrorism (GIFCT), GNET was pleased to once again have the organisation’s support in putting on this event. GNET was also delighted to have the Violence Prevention Network’s support in co-hosting the post-conference networking reception, which enabled attendees to continue discussing the topics of the day into the evening.
GNET’s Research Director, Julien Bellaiche, began the conference with opening remarks.
Fireside Chat
The first segment of the day was a Fireside Chat with GIFCT’s Executive Director, Naureen Chowdhury Fink, and GNET’s Principal Investigator, Dr. Alexander Meleagrou-Hitchens. They discussed the GIFCT-GNET relationship and explored how governments, tech platforms, and civil society are collaborating to address online violent extremism in 2025.

Naureen Chowdhury Fink and Dr. Alexander Meleagrou-Hitchens
“Our expanding community of practice and the technical tools, resources, and information GIFCT provides to its member platforms are informed by engagement with a multi-stakeholder audience from across industry, government, civil society, and academia. GNET, the research arm of GIFCT, plays a critical role in supporting our members and the broader tech and counterterrorism communities with timely and relevant reports and Insights.” – Naureen Chowdhury Fink, GIFCT
The discussion examined GIFCT’s evolution and how, as an independent nonprofit founded by tech companies, it works through a multi-stakeholder approach, engaging the tech sector, academia, government, and civil society, to develop tools and frameworks that respond quickly and effectively to violent incidents with a significant online presence. Chowdhury Fink elaborated on GIFCT’s incident response framework, which was activated as recently as the 21 May Washington D.C. shootings. The framework bridges offline violent extremist and terrorist activity with its online footprint, helping GIFCT member platforms better understand and respond to harmful content. Furthermore, GIFCT’s hash-sharing database – a kind of digital fingerprint system for terrorist and violent extremist content – enables GIFCT member companies to quickly identify and respond to hashed material.
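As a simplified illustration of the hash-and-match workflow described above: GIFCT’s actual database uses perceptual hashes (such as PDQ) that can match visually similar media even after edits, but the sketch below uses an exact cryptographic hash for clarity. All class and function names here are hypothetical, not GIFCT’s real tooling.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as a content fingerprint."""
    return hashlib.sha256(data).hexdigest()


class HashSharingDB:
    """Toy shared database of known violative-content hashes.

    Member platforms contribute hashes (not the content itself) and
    check uploads against the shared set.
    """

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def add(self, data: bytes) -> str:
        h = fingerprint(data)
        self._hashes.add(h)
        return h

    def is_known(self, data: bytes) -> bool:
        return fingerprint(data) in self._hashes


db = HashSharingDB()
db.add(b"previously identified propaganda file")
print(db.is_known(b"previously identified propaganda file"))  # True
print(db.is_known(b"unrelated upload"))                       # False
```

Sharing only hashes, rather than the underlying files, is what lets platforms cooperate without redistributing harmful material.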
A major theme of the Fireside Chat was the increasing difficulty of defining violative content, as the motivations behind violent extremist acts have in some cases become more fragmented and less tied to designated terrorist groups. Both Chowdhury Fink and Meleagrou-Hitchens remarked on the worrying trend of youth radicalisation and on terrorists’ and violent extremists’ increasing weaponisation of hard technology, highlighting the continued necessity of GNET’s targeted and impactful research to help inform law enforcement, technology companies, and industry.
“As the threat landscape continues to evolve, cross-sector collaboration will become even more important to address the uncertainties of ‘borderline’ content and TVEC with no clear affiliations; the targeting and radicalisation of youth and minors; and the convergence of emerging technologies to create new threat vectors. We’re thankful for the opportunity to participate in this GNET conference and the important dialogue & exchanges needed to counter threats both on and offline.” – Naureen Chowdhury Fink, GIFCT
Panel 1: Extremism and Emerging Technology: The Risks and Realities of Innovation
The first panel of the programme focused on themes core to GNET’s mission: how modern technology is reshaping the online (and offline) threat landscape. Presentations explored GNET’s 2025 research themes, including terrorist exploitation of artificial intelligence, virtual financing, and hard technology. The panel was chaired by GIFCT Senior Director of Memberships and Programs Dr. Erin Saltman.

From L-R: Ricardo Cabral Penteado, Dr. Marten Risius, Dr. Yannick Veilleux-Lepage, Mona Thakkar, Dr. Erin Saltman.
Dr. Yannick Veilleux-Lepage opened the panel by emphasising the dangers of technological convergence, such as combining AI with drones or 3D printing firearms, and called for more imaginative forecasting, data, and funding to counteract bureaucratic stagnation.
“How multiple technologies converge to create an entirely new threat vector is a blind spot that needs further analysis – this goes both for technologies that we already know can be dangerous, as well as those that we see as benign. If we want to anticipate how terrorists and violent extremists are exploiting emerging technologies and the convergence of technologies, we need to think about creating new data pipelines and new opportunities for collaboration, and be more imaginative in solutions.” – Dr. Yannick Veilleux-Lepage, Royal Military College of Canada
Drawing on her investigative monitoring, Mona Thakkar showed how violent extremist groups are exploiting digital financing technologies to fund their operations. Specifically, she discussed jihadist use of cryptocurrencies like Monero, as well as Telegram bots, formal banking systems, and cross-platform strategies to disguise fundraising efforts as humanitarian aid.
Next, Dr. Marten Risius focused on the proliferation of AI in violent extremist circles and warned that AI can facilitate radicalisation. He highlighted the dangers of AI’s ability to create targeted, multilingual, and passive violent extremist content, and argued that trust and safety teams across the tech sector must act to counteract the negative effects of this technology.
GNET fellow Ricardo Cabral Penteado concluded the panel by discussing the unique challenges of moderating violent extremist content in Brazilian Portuguese. He showcased BR-ECHO, a natural language processing tool designed to detect coded language and assess risk in a linguistically nuanced way.
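BR-ECHO’s internal design is not described here, so the following is only a generic sketch of how a lexicon-based NLP pass might flag coded Portuguese-language terms after Unicode normalisation (so accent or spacing tricks do not defeat matching). The terms and weights are invented for illustration and are not from BR-ECHO.

```python
import re
import unicodedata

# Hypothetical lexicon of coded terms with risk weights (illustrative only).
CODED_TERMS = {"dia d": 3, "limpeza": 2, "irmandade": 1}


def normalise(text: str) -> str:
    """Lowercase, strip diacritics, and collapse whitespace."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"\s+", " ", text.lower()).strip()


def risk_score(text: str) -> int:
    """Sum the weights of coded terms present in the normalised text."""
    t = normalise(text)
    return sum(w for term, w in CODED_TERMS.items() if term in t)


print(risk_score("A LIMPEZA começa no Dia D"))  # 5
print(risk_score("bom dia"))                    # 0
```

A production system would pair such lexical signals with contextual models, since coded language shifts quickly and isolated keywords produce false positives.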
The panel called on tech companies – both those in the room and those yet to join the conversation – to prioritise trust and safety and invest in cross-sector collaboration to address emerging threats.
Panel 2: The Researcher’s Toolkit: Resources for P/CVE Practitioners
Micalie Hunt, Senior Membership and Programming Associate at GIFCT, moderated the second discussion. The panel centred on practical research tools to counter and address the violent extremist and terrorist trends explored throughout the day.
From TikTok Trust and Safety, Kathryn Grant outlined the platform’s Researcher API tool. Julien Bellaiche spoke about developments at the Repository of Extremist Aligned Documents (READ) and how the resource safely and securely supports researchers in the field. Finally, Wassef Lemouchi from CAPRI presented his AI chatbot tool, which aims to counter radicalisation and understand the evolving nature of violent extremist ideologies.

From L-R: Kathryn Grant, Julien Bellaiche, Wassef Lemouchi.
“READ was designed for professionals in the field working on countering terrorism and violent extremism. It democratises access to sensitive materials within the community of research professionals, preserves the psychological well-being of researchers by minimising the time spent in the TVE online ecosystem, and ensures that these resources are preserved and archived in a secure digital repository.” – Julien Bellaiche, GNET
Panel 3: Multi-Media Radicalisation and the Online-Offline Extremism Nexus
The conference’s concluding panel explored how violent extremist ideologies are shaped and sustained across online and offline spaces, and was chaired by Dr. Alexander Meleagrou-Hitchens.
Dr. Ryan Scrivens’ presentation focused on the connection between online posting and offline behaviours of violent right-wing extremists. He noted that while sample sizes remain small, future research must focus on identifying the tipping point at which online rhetoric escalates into real-world violence.
Next, Clara Jammot examined how the decentralisation of radicalising narratives within the manosphere makes violent extremist ideologies more accessible. She discussed how violent rhetoric in these spaces tends to be implicit rather than explicit, and how the manosphere extends well beyond incels.
Dr. Jessica White discussed online gaming as a powerful entertainment environment where strong social bonds and shared identities can foster deep group cohesion that radicalisers may exploit. She highlighted RUSI’s research examining radicalisation in gaming spaces through a gender lens.

Milo Comerford and Dr. Jessica White.
“Billions of people play games and have a very positive experience, but it is a community space that requires our attention. The social element of gaming is a powerful thing, and gaming identities can be more real to people than their offline or “real-life” identities. As practitioners seeking to counter online harms, we must understand the relationship between these identities and their linkages to radicalisation.” – Dr. Jessica White, RUSI
Finally, Milo Comerford concluded the panel by discussing the Islamic State’s resilient online ecosystem. Adding to the recurring theme throughout the day, he touched on the troubling rise in youth radicalisation, as well as the cross-platform proliferation of violent extremist content.
Identified Research Gaps
The conference highlighted that additional research is needed on the following topics:
- Increased difficulty of defining violative content: Motivations behind violent extremist acts have become more fragmented, more nebulous and less connected to designated terrorist groups. More research is needed on: a) Convergence and fragmentation of violent extremist ideologies; b) New online practices emerging from fragmented/ideologically mixed communities.
- Youth radicalisation: a) How younger generations use technology; b) Nihilistic violent extremist online communities, which are often formed by young members (such as the Com Networks and 764).
- Technological convergence: a) The risks of combining technologies for nefarious purposes (such as drones and AI, or drones and 3D-printing technology); b) Methods to forecast future trends in such convergence.
- Detection and moderation of VE content in under-represented contexts and languages: a) The spread and nature of VE content in less represented contexts and languages; b) How emerging technology can enhance detection efforts (e.g., NLP tools).
Key Policy Takeaways
- Invest in the development of affordable and easy-to-use AI tools for researchers. Companies should consider how they can help researchers access and effectively use AI tools to organise and analyse their data.
- Respond to the evasion tactics emerging around unofficial pro-ISIS and al-Qaeda propaganda creation. Consider ways to detect and overcome violent extremist methods for evading moderation, from simple “broken text” emoji communication and content masking to more complex approaches, such as the use of difficult-to-classify texts in languages like Amharic.
- Focus more on how messaging apps maintain network resilience after social media platform takedowns. While networks on major platforms are often taken down, messaging apps help these networks persist and redeploy with ease.
- Conduct “Safety by Design” audits. Consider requiring new generative AI features (text, image, audio) to undergo third-party red-teaming focused on violent extremist use of these tools (e.g., propaganda generation, recruitment scripts). Combine this with publicly available summaries of audit findings and remediation steps in annual transparency reports.
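The “broken text” evasion tactic noted in the takeaways above can be partially neutralised by normalising text before it reaches a classifier. The sketch below is an illustrative heuristic only, not a production moderation method; the character patterns it handles are assumptions about common obfuscation tricks.

```python
import re
import unicodedata


def unmask(text: str) -> str:
    """Undo simple 'broken text' obfuscation before classification."""
    # Remove zero-width characters often inserted to split flagged words.
    text = re.sub(r"[\u200b-\u200d\u2060\ufeff]", "", text)
    # Drop symbol/emoji (S*) and control (C*) category characters used as filler.
    text = "".join(c for c in text if unicodedata.category(c)[0] not in ("S", "C"))
    # Collapse separators wedged between letters, e.g. "b.a.n.n.e.d".
    text = re.sub(r"(?<=\w)[.\-_*]+(?=\w)", "", text)
    return text


print(unmask("b\u200ba\u200bn\u200bn\u200be\u200bd"))  # "banned"
print(unmask("b.a.n.n.e.d"))                           # "banned"
```

Real moderation pipelines layer many such normalisation passes, because each new evasion pattern (emoji fillers, homoglyphs, script mixing) requires its own counter-measure.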
Watch Each Panel Recording Here