
Prompted to Harm: Analysing the Pirkkala School Stabbing and Its Digital Manifesto

12th June 2025 Anda Solea

On 20 May 2025, a stabbing incident occurred at a school in the town of Pirkkala, southern Finland, in which three female pupils, all under the age of 15, were injured. Prior to the attack, the perpetrator – a 16-year-old male student – sent a manifesto allegedly written with the aid of ChatGPT to a Finnish newspaper.

The attack bears a disturbing resemblance to the all-too-frequent school shootings in the United States, but also to a pattern of violent misogynist incidents seen in the UK, such as the 2024 Southport stabbings and the 2025 Bournemouth stabbings. It also echoes the pattern of extremist violence perpetrated by misogynistic incels (involuntary celibates), who have become a security concern over the last decade.

This Insight will provide an overview of the Pirkkala school stabbing, analyse the perpetrator’s alleged manifesto, and discuss its links to misogynistic incel ideology and the broader rise in violence against women and girls (VAWG). It will then examine the role of generative AI (GenAI) as a facilitator in planning violent extremist attacks, and the use of such tools in the perpetration of technology-facilitated gender-based violence (TFGBV). The Insight will conclude with a discussion of the intersection between technology misuse and misogynistic violence.

The Manifesto – Motivation, Planning and ChatGPT

Before the attack, the perpetrator allegedly sent the manifesto to a newspaper, and a recording of the attack was reportedly uploaded online before his arrest. Following the attack, the police confirmed that the victims did not sustain life-threatening injuries, and the attacker was quickly apprehended.

The confirmed one-page manifesto is atypical in both form and content. It is notably brief, structured as a series of bullet points, and lacks the extensive justification, personal background, or clear articulation of grievances and targets often found in similar documents. The manifesto, originally written in Finnish and translated for this analysis, is divided into four sections. The first outlines the intended attack date, location and targets – female students. The stated motivation is disturbingly casual: to do something “significant” and “exciting,” with the perpetrator anticipating a prison sentence of two to four years.

In the second section, the attacker briefly self-identifies as an “atheist existentialist” with a deterministic worldview, noting that he neither has nor desires friends, before listing his favourite games, films, and TV series, along with links to playlists.

The final two sections detail the planning and execution of the attack. According to the manifesto, the perpetrator spent six months preparing, reportedly using ChatGPT as part of the process. His stated aim was to kill one person and injure at least two others. The choice of female victims was explained both as a matter of convenience, as he perceived them as easier targets, and as a source of disturbing personal gratification, with the act of stabbing women described as more “pleasant.” He intended to film the attack and publish both the video and the manifesto on an unspecified website.

The document concludes with a reflection that this was not his first planned attack, and although he had experienced doubts, these were ultimately overridden by the fear of future regret. A final ten-point checklist outlines the intended actions on the day, including the clothing to wear and the use of a belt to carry the knife and camera. The plan was extremely detailed, outlining where and how to stab the victims, exit the school, upload the filmed material online, contact the police, and surrender.

Misogynistic Violence – Incels and Beyond  

Given the premeditated targeting of female students, parallels can be drawn between the Pirkkala attack and misogynistic incel violence such as the 2014 Isla Vista shootings and the 2021 Plymouth shooting. Incel ideology centres on a rigid social hierarchy based on physical attractiveness, with attractive men and women at the top, and incels, typically heterosexual men who feel rejected by women, at the bottom. Their worldview is rooted in male supremacy, resentment towards women and feminism, and a belief that societal progress has unjustly privileged women, which is often seen as justification for violence against women and girls.

While there is no evidence that the Pirkkala attacker identified with the incel community, and the manifesto does not clearly articulate a broader hatred of women, elements of it echo core incel beliefs. Firstly, he targeted girls because he found the act of stabbing them more pleasurable. His identification with an existentialist and deterministic worldview further overlaps with incel narratives that portray life as meaningless or dictated by immutable traits and exclusionary social castes. His self-declared social isolation also mirrors the loneliness, general desolation and frustration with life frequently reported within incel communities and observed in related academic research.

That said, there are important differences. At the time of writing, the Pirkkala perpetrator has not been linked to any known incel forums or online communities, and no prior incidents or affiliations with extremist ideologies, violence against women, or criminal behaviour have been identified. Despite this, on the main forum dedicated to the incel community, users have claimed the attack as part of the broader violent incel rebellion and identified the perpetrator as a “Finncel” (a Finnish incel).

This highlights an important point: incel-related violence represents only one facet of a broader spectrum of misogynistic harm affecting both online and offline spaces. In recent years, misogynist incel violence has drawn significant attention across media, politics, and academia, including through high-profile portrayals like the acclaimed Netflix series Adolescence. While this attention is warranted, a narrow focus on incel extremism risks obscuring the wider rise in online misogyny, much of which is driven by individuals beyond the incelosphere and manosphere. It also risks diverting attention from more pervasive forms of everyday gender-based violence, which continue to manifest in less sensational but equally damaging ways across society.

For instance, the recent EU gender-based violence survey reports that 1 in 3 women in the EU have experienced violence at home, at work or in public, including physical and sexual violence and sexual harassment in the workplace. The widespread presence of misogynistic and male supremacist rhetoric on social media, as well as in political narratives and traditional media, further normalises such beliefs and reinforces a vicious cycle of violence against women and girls, in both online and offline spaces.

Generative AI and Technology-Facilitated Terror 

Another key aspect of the Pirkkala case is the role that technology, specifically generative AI (GenAI), played in facilitating the attack. According to reports and the attacker’s manifesto, ChatGPT was used to assist in writing the document. Notably, half of the manifesto was dedicated to the planning and execution of the attack, suggesting that ChatGPT might have played a role in shaping the attacker’s approach and strategy.

The rapid adoption of GenAI tools like ChatGPT has raised growing concerns among researchers, policymakers and industry leaders about their potential misuse by terrorist and violent extremist groups and actors. GenAI refers to a category of artificial intelligence capable of generating content (including text, images, and video) based on patterns learned from existing datasets. ChatGPT, developed by OpenAI and launched in November 2022, is one of the most prominent examples, reportedly reaching 400 million weekly active users, and it is just one among many such generative chatbots readily available online.

Research has highlighted the potential for generative language models to be exploited for extremist content, radicalisation, recruitment, and cybercrime. Weimann et al.’s 2024 English-language study specifically investigated how generative language models can be prompted to assist with extremist activities. The researchers explored the effectiveness of various prompts, including standard and “jailbreak” prompts (designed to bypass AI safety restrictions), in eliciting guidance on planning violent attacks. Interestingly, the study found that non-jailbreak prompts were often more successful in generating detailed responses related to attack planning. This suggests that, despite safety features, current generative AI models may still offer relatively easy avenues for misuse by individuals with violent intentions.

It is therefore plausible, though not confirmed, that ChatGPT was used to structure the ten-point list outlining the sequence of the attack, and potentially to formulate or rephrase other parts of the manifesto. Given that large language models (LLMs) are prone to “hallucinations”, the generation of plausible but inaccurate or fabricated information, it remains uncertain whether all the content in the manifesto genuinely reflects the perpetrator’s own beliefs and intentions. For example, references to existentialism and determinism may have been shaped or elaborated by the AI rather than originating directly from the attacker. However, part of the danger associated with LLMs is precisely the absence of reliable tools for detecting AI-generated text, making it impossible to definitively identify which parts of the manifesto were written or influenced by the chatbot.

The rapid development and widespread accessibility of GenAI also provides an additional toolset for the perpetration of technology-facilitated gender-based violence (TFGBV), defined as “the use of technology to enact or mediate violence against an individual who identifies as a woman”.

While the specific use of AI chatbots in the perpetration of violent misogynistic acts necessitates further study, the use of generative AI in TFGBV has been an emerging topic of research. Chowdhury and Lakshmi’s 2023 work suggests GenAI “brings with it new harms, including the creation of more realistic fake media”, such as fabricated image-based sexual abuse, but can also facilitate the perpetuation of existing harms. For instance, GenAI can assist in the perpetration of TFGBV by automating and “optimising” cyber harassment campaigns, hate speech, misinformation, and impersonation, increasing the reach and damage of such attacks. Equally, GenAI technology can be used in the planning and strategising of violent attacks against women, though this, of course, extends to any extremist or terrorist violence, from recruitment to tactical learning and attack planning, regardless of the intended victims.

Conclusion 

The 2025 Pirkkala school stabbing highlights important concerns at the intersection of youth violence, gender-based harm, and emerging technologies. Although the attack has not been formally linked to incel ideology, it reflects a broader trend in which misogynistic attitudes increasingly underpin violent acts. The case illustrates how deeply embedded such ideologies have become, influencing and radicalising young individuals, often without immediate detection. Crucially, while incel-related violence has attracted media and policy attention, it represents only one dimension of a much broader and more insidious spectrum of misogynistic harm. A narrow focus on incels or high-profile influencers risks obscuring the widespread, everyday forms of gender-based violence that continue to shape the lives of women and girls across society, both online and offline.

The case also raises serious concerns about the potential misuse of generative AI technologies, such as ChatGPT, in the planning of violent and extremist acts. As these tools become more accessible and powerful, they may inadvertently serve as enablers for those intent on causing harm, including harm motivated by misogyny. According to OpenAI, ChatGPT’s content moderation policy restricts the model from answering questions classified as harmful, illegal or against its terms and policies. Yet these safety mechanisms can be circumvented with the right prompts, as demonstrated by Weimann and colleagues and reported by Europol. Much like the broader debates regarding the moderation of terrorist and violent extremist content online, there is a need for proactive monitoring of existing AI tools and products, including the assessment of prompts that violent actors could exploit. Such efforts also need to reflect the wide range of languages supported by AI chatbots. Stricter regulation and evaluation of risk levels and safety features are necessary to ensure such measures are followed. The European Union has been the first to make progress on this front, enacting the EU AI Act in 2024, the first-ever legal framework on AI.

Ultimately, these intersecting challenges require multi-layered responses, including tackling the misogynistic narratives and attitudes increasingly embraced by young boys and men, regulating and monitoring GenAI and emerging technologies, and introducing preventive legislation to address the nexus between online harms and offline violence.

Anda Solea is a Lecturer in Cybercrime and a doctoral researcher at the University of Portsmouth, UK. She investigates the perpetuation and mainstreaming of extreme misogyny online, with a focus on the incel subculture. Her research primarily covers TikTok and YouTube Shorts.
