Click here to read our latest report “30 Years of Trends in Terrorist and Extremist Games”

AI at the Centre: Violent Extremist Exploitation in Pirkkala

AI at the Centre: Violent Extremist Exploitation in Pirkkala
14th July 2025 Luke Baumgartner
In Insights

On 20 May 2025, a 16-year-old boy stabbed three of his female classmates at the Vähäjärvi school in Pirkkala, Finland. Just prior to carrying out his attack, the suspect emailed his manifesto to Iltalehti, a local newspaper. Not only did the manifesto outline his plans, but it also revealed several key details that reinforced recent developments in extremist activity. Among the revelations was his deliberate targeting of girls, a clear indication of his misogynist motivations. Like many extremists before him, he also filmed the assault as it occurred, and the footage was subsequently distributed on the internet. But perhaps the most compelling aspect of the incident was the assailant’s reported use of artificial intelligence (AI) tools, particularly ChatGPT, in planning his attack.

With the rapid development of AI tools, extremists have been quick to adapt to this emerging technology, leveraging its capabilities for tasks such as creating propaganda, spreading conspiracy theories, and social engineering. As such, the Pirkkala stabbers’ use of ChatGPT to assist in planning their attack highlights the risks that unregulated AI platforms pose to mitigating future threats of this kind. This Insight will analyse the ways in which the Pirkkala attack reflects a dangerous synthesis of misogynist violent extremism (MVE) and how emerging technological enablers like Generative AI (GenAI) can be exploited by violent extremists. It concludes with robust policy recommendations for stakeholders across AI use and regulation.

Incels and Misogynist Violent Extremism

In May 2014, 22-year-old Elliot Rodger went on a violent rampage in Isla Vista, United States, where he killed six and injured fourteen before taking his own life. In the years that followed his death, Rodger became the centrepiece in larger conversations surrounding online radicalisation, violent misogyny, and an online subculture known as “incels,” a portmanteau for involuntary celibacy. Incels are (primarily heterosexual) men who identify themselves as a disregarded portion of society, unable to find romantic or sexual partners despite their desire to have one. Although incels are just one part of the “loose confederacy” of online misogynist subcultures that comprise the broader manosphere, such as pick-up artists (PUAs), Men Going Their Own Way (MGTOW), and men’s rights activists (MRAs), they have become the most outwardly extremist and violent. 

Since Rodger’s attack in 2014, robust online communities revolving around incel ideology have flourished on popular sites such as 4chan and Reddit. The most notable of Rodger’s disciples was Alek Minassian, who, in April 2018, killed 11 in a vehicle ramming attack in Toronto, Canada. Before his mass killing, Minassian posted on Facebook, “The Incel Rebellion has already begun! We will overthrow all the Chads and Stacys! All hail the Supreme Gentleman Elliot Rodger!” A few months later, in November 2018, Scott Beierle killed three in a shooting at a Tallahassee, United States, yoga studio. Leading up to his attack, Beierle posted several misogynistic songs to SoundCloud and mentioned Rodger in a YouTube video. Both Minassian’s and Beierle’s accounts were removed from the platforms after their attacks.

In an attempt to grapple with the increasing frequency of incel-motivated violence, debates over whether incels should be considered legitimate terrorist threats inevitably appeared. Bruce Hoffman and Jacob Ware contend that while incels do not constitute the same level of threat as Salafi-jihadists or far-right extremists, they argue that incel violence “conforms to an emergent trend in terrorism…and shows similarities to and has nascent connections with other terrorist movements.”

While the Pirkkala attacker is not an avowed incel, his deliberate targeting of female students because, according to the purported manifesto, he considered them “easier targets and because the idea of hurting them feels pleasant to me,” is a direct reflection of the violent misogyny within the incel community. However, in line with Hoffman and Ware’s findings, the Pirkkala perpetrator’s targets and, more importantly, his use of ChatGPT to assist with planning his attack speak to the concerning trend of violent extremists’ use of artificial intelligence (AI).

Terrorist Exploitation of AI

Within the last few years, the adoption of AI in sectors such as medicine, economics, education, and transportation has received extensive attention. As such, terms describing various subfields of AI, like “machine learning,” “large language models (LLMs),” and others, have been used interchangeably. Generally speaking, AI is a field of computer science that studies the theory and development of computers and machines capable of performing tasks that typically require human intelligence, such as decision-making and problem-solving. 

Within AI are several subfields, notably machine learning and deep learning. Machine learning trains algorithms to make predictions or decisions based on supervised data inputs, while deep learning mimics the neural networks in the human brain that work in tandem to process information. Yet even further within deep learning lies Generative AI (GenAI, or GIA) — models that can create original content in response to a prompt or request. GenAI tools range in capabilities, from simple customer service chatbots to models such as ChatGPT and Copilot, which are powered by transformers and designed to generate more complex sequences of information, including images and code. Even so, as AI tools continue to proliferate in areas of everyday life, their appropriation by violent extremists poses significant threats and challenges.

A 2018 report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” penned by a consortium of authors from academia, industry, and civil society, examined the current (at the time) and potential threats that the misappropriation of AI holds in the coming years, identifying threats to cybersecurity, physical security, and political security as the most pressing. Similarly, a joint 2021 report from the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the Office of Counter-Terrorism (UNCCT) notes that the expansion of technological capabilities can result in counterterrorism professionals being caught off-guard by terrorists identifying and using new and innovative technologies for malicious purposes, including AI.

In the years since both reports’ publishing, terrorist use of AI for nefarious purposes has increased, with notable examples spanning the ideological spectrum. For example, In July 2023, Tech Against Terrorism identified a neo-Nazi Telegram channel dedicated to sharing antisemitic and racist memes and images created with GenAI. They also identified several posts on another social media platform that included guides for using GenAI tools to create propaganda featuring classic antisemitic tropes, such as the “happy merchant.” AI-generated video content is also popular among the far-right, leveraging GenAI capabilities to translate speeches by Adolf Hitler and Benito Mussolini that advocate for genocide. 

But far-right extremists are not the only ones partaking in the latest technological revolution. In May 2025, ActiveFence, an online trust and safety technology company, identified the Islamic State (IS) as another extremist user of AI tools. Notably, one of IS’s most prominent media organisations, the Qimam Electronic Foundation (QEF), published an English and Arabic language guide for using AI to further its spread of propaganda and providing technical guidance for inspired believers to carry out violent attacks on their behalf. Additionally, in February 2024, an Al Qaeda-linked jihadist group announced it would begin hosting workshops to train others in using AI for propaganda creation. 

Finally, outside of content creation and technical guidance, algorithmic content recommendation models on social media platforms present an especially difficult challenge to combating the spread of violent extremist ideologies. Many social media algorithms use several AI models, including Collaborative Filtering (CF) and predictive modelling, to present users with content based not only on their preferences but also on those of other users. Therefore, if extremist content is uploaded to social media—regardless of whether it was created with AI tools—those who have previously interacted with such content are more likely to engage with it, facilitating a negative feedback loop. 

Policy Implications and Recommendations

Tech companies and legislators across various levels of government have several regulatory tools at their disposal to help mitigate the risks that AI tools pose in countering extremist and terrorist activity. First, firms responsible for creating AI tools used by extremists can implement tiered access and use restrictions. Tiered access enables companies to limit exposure to more powerful AI models, particularly those capable of generating instructions for weapons or evading content filters. Additionally, limiting usage rates for sensitive queries related to extremist or adjacent content can throttle or deny access to suspicious actors, allowing companies to maintain a clear audit trail in the event of a misuse investigation.

Second, AI companies or those creating AI models with the potential for misuse can implement robust red teaming exercises before going to market. Even with access controls, determined malicious actors can “jailbreak” or manipulate AI models to work around content restrictions. Proactive safety engineering and continuous testing against terrorist and extremist prompts can help with the real-time detection of suspicious or harmful outputs, thereby assisting AI developers in staying ahead of contemporary extremist threats. 

Finally, legislative regulation can create legal guardrails that, if violated, hold individuals and corporations accountable. Governments can create legal deterrents to misuse by implementing minimum safety requirements for AI developers and ensuring public accountability and oversight. Key components of this approach would require not only proscribing extremist applications of AI but also mandating risk audits for further model development and requirements for reporting on transparency. 

Conclusion

The Pirkkala school stabbing serves as a grim reminder of the evolving landscape of violent extremism in an age of continuous technological innovation. While this particular incident is indicative of violent misogynist attitudes present in incel online subculture, it also reinforces a troubling development—the use of GenAI in propagating violent extremism. The Pirkkala attacker’s leveraging of ChatGPT–a fact that OpenAI has yet to publicly acknowledge– to assist in planning his attack marks a critical inflection point, demonstrating how widely accessible AI tools can not only empower traditional, organised terrorist groups but also act as a force multiplier for lone actors with a motive and an internet connection. 

Moreover, traditional indicators of extremist activity, such as affiliation with known terrorist entities, may no longer suffice when increasingly younger assailants can access ideological justification, propaganda, and technical guidance with minimal limitations. As AI tools grow increasingly powerful and ubiquitous, their use by violent extremists poses increased risks and presents unique challenges to parents, governments, and tech platforms alike.

Luke Baumgartner is a Research Fellow at the Program on Extremism at George Washington University, focusing on domestic violent extremism, white supremacist movements, and the role of military veterans in political violence. A former U.S. Army Field Artillery Officer, he holds a B.A. in Political Science from Southern Illinois University and an M.A. in Security Studies from Georgetown University. His work has appeared in outlets such as CNN, The Washington Post, WIRED, USAToday, ABC News, Military.com, Lawfare, the Irregular Warfare Initiative, and the Georgetown Security Studies Review.

Are you a tech company interested in strengthening your capacity to counter terrorist and violent extremist activity online? Apply for GIFCT membership to join over 30 other tech platforms working together to prevent terrorists and violent extremists from exploiting online platforms by leveraging technology, expertise, and cross-sector partnerships.