Digital Pathways to Violence: The Tech Ecosystem Behind the Antioch Shooting

18th March 2025 | Ricardo Cabral Penteado

Introduction: Technology, Radicalisation, and Security Failures

On 22 January 2025, a 17-year-old student carried out a shooting at Antioch High School in Nashville, Tennessee, killing one student and injuring another before taking his own life. While the shooting was not officially classified as terrorism by authorities, the attack underscores how digital platforms can facilitate radicalisation and exposes critical failures in AI-based security systems.

The shooter left a significant digital footprint: he posted images of previous attackers, and his manifesto was distributed via Google Drive across X, Substack, Instagram, Telegram, and other platforms. His writings contained antisemitic and white supremacist rhetoric even though he was a young Black man, illustrating the contradictions inherent in online radicalisation. An examination of his manifesto and diary reveals references to attacks in Brazil, Slovakia, Turkey, Russia, and Canada, highlighting the transnational spread of extremist narratives across linguistic and cultural boundaries.

Additionally, the AI-driven weapon detection system in the school, called Omnilert, failed to identify the shooter’s firearm, despite correctly flagging weapons carried by responding police officers. This incident raises concerns about the reliability of automated security technologies and their effectiveness in preventing acts of mass violence.

This Insight examines the shooter’s online presence, the transnational nature of his ideological influences, and the technological limitations that contributed to the attack. It highlights the urgent need for more robust content moderation, cross-platform cooperation, and adaptive security mechanisms.

Radicalisation in the Digital Age: Memes, Manifestos, and Multiplatform Engagement

The shooter’s manifesto illustrates the inherently transnational nature of online extremist content. References to attacks in Brazil, Slovakia, Turkey, Russia, and Canada demonstrate how narratives of violence transcend national borders, facilitated by digital platforms that foster ideological diffusion across linguistic and cultural boundaries. This analysis is based on a direct examination of the manifesto and the shooter’s diary. Brazilian perpetrators such as Gabriel Rodrigues Castiglioni, involved in the 2022 Aracruz school attack, and Guilherme Taucci Monteiro, responsible for the 2019 Suzano school shooting, are cited alongside well-known Western figures. 

Figure 1. The shooter (left) replicates the visual aesthetics and gestures of Guilherme Taucci Monteiro (right), the perpetrator of the 2019 Suzano school shooting.

The document and his diary reveal the role of memetic language — coded symbols, phrases, and images that spread ideologies through online communities — and ideological patterns in extremist digital ecosystems, illustrating how these narratives are replicated and disseminated across platforms. Structured in a manner reminiscent of previous perpetrators, the 51-page manifesto follows a well-established pattern of ideological dissemination, expressing antisemitic, racist, and white supremacist ideologies, along with self-directed hatred toward his own racial background. Additionally, his 288-page diary is filled with thoughts about killing, a fear of getting caught, and praise for mass murderers, further demonstrating the psychological and ideological trajectory that shaped his radicalisation.

The manifesto contains coded rhetoric and extremist symbolism sourced from online forums like 4chan, as well as direct references to notorious attacks such as the Christchurch mosque shootings. The shooter incorporated materials from the Terrorgram network, widely disseminated in accelerationist communities. This reliance on pre-existing content, coupled with the manifesto’s instructional tone, emphasises the cyclical nature of digital radicalisation and the copycat phenomenon observed in modern extremist violence.

Figure 2. Illustration cited in the manifesto. The image reflects the shooter’s engagement with extremist ideology and online radical narratives.

The perpetrator maintained an extensive digital presence across a diverse array of platforms, spanning both mainstream and fringe spaces, to disseminate extremist content, engage with like-minded individuals, and plan the attack. He was active on widely accessible platforms such as Instagram, TikTok, and Pinterest. On Pinterest, for example, a profile linked to the shooter contained images of past mass shooters, including those from the Parkland and Uvalde attacks, alongside violent memes and extremist symbols. However, Pinterest’s Trust and Safety teams removed the account within a few hours of the shooting in accordance with the platform’s Community Guidelines. Studies on social media recommendation algorithms have demonstrated how similar content exposure mechanisms contribute to radicalisation, reinforcing engagement with extremist materials.
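To make the exposure mechanism these studies describe concrete, the following minimal sketch (with entirely hypothetical categories and weights, not any platform's actual ranking code) shows how an engagement-weighted recommender can progressively amplify whatever content a user has already interacted with:

```python
# Minimal sketch of an engagement-weighted recommender feedback loop.
# Categories, items, and weights are hypothetical; this is not any
# platform's actual ranking logic.
from collections import defaultdict

# Per-user affinity for each content category, updated on every interaction.
user_affinity = defaultdict(float)

catalogue = [
    {"id": 1, "category": "sports"},
    {"id": 2, "category": "gaming"},
    {"id": 3, "category": "violent_extremist_meme"},  # illustrative label only
]

def rank(items, affinity):
    """Order items by the user's accumulated affinity for their category."""
    return sorted(items, key=lambda it: affinity[it["category"]], reverse=True)

def record_engagement(item, affinity, weight=1.0):
    """Each click or like raises the affinity that boosts similar items next time."""
    affinity[item["category"]] += weight

# A single engagement with borderline content is enough to push similar items
# to the top of subsequent recommendations: the feedback loop in miniature.
record_engagement(catalogue[2], user_affinity)
print([it["id"] for it in rank(catalogue, user_affinity)])  # -> [3, 1, 2]
```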

The attacker’s activity on these platforms demonstrates the ongoing challenge of moderating extremist content in mainstream digital spaces, particularly as such content is often concealed through coded language, memes, and seemingly innocuous visual content. He also used Google Docs to draft and store his diary, as evidenced by a shareable link included in the manifesto itself, exploiting the platform’s accessibility to distribute materials across different communities with relative ease. While similar content could have been written in a notes app or using Word, the choice of Google Docs allowed for seamless cross-device access, real-time editing, and easier dissemination via direct links. Additionally, Straw.page served as another organisational tool for maintaining his plans and extremist content.

Beyond these mainstream platforms, the shooter was an active participant in more niche and insular online communities that explicitly promote extremist ideologies, highlighting critical moderation failures and the persistence of these environments despite existing countermeasures. He frequented sites which function as digital hubs for users who glorify mass violence, share manifestos, and venerate previous attackers. On Bluesky and X, he engaged directly with individuals associated with past school shootings, exchanging tactics and promoting ideological content. He also leveraged Kick, attempting to livestream the attack on the platform before the stream was cut off. His activity extended even to gaming platforms like Steam, where he accessed VR training applications to simulate his planned attack. This cross-platform engagement underscores how extremist subcultures operate across seemingly disparate digital environments, revealing persistent challenges in detection and intervention.

The attacker’s digital interactions observed in these spaces reflect a broader trend of ideological fusion, where individuals synthesise elements from various extremist traditions. In this case, the shooter’s embrace of antisemitic and white supremacist rhetoric, despite his being a young Black man and therefore a member of a group historically targeted by white supremacist ideology, illustrates the complex and often contradictory nature of online radicalisation. Such phenomena highlight how extremist ideologies can be detached from traditional identity markers when mediated through digital platforms, leading to unexpected alignments with historically antagonistic belief systems.

Figure 3. Manipulated image from the manifesto, depicting George Floyd with religious and militaristic symbolism. The image appears to satirise George Floyd through the use of exaggerated iconography, resembling the visual style often seen in extremist manifestos, but with an apparent intent to mock rather than glorify.

The shooter’s digital engagement mirrors patterns observed in recent school attacks in Brazil, where non-white perpetrators were similarly drawn into extremist online communities that glorify mass violence while promoting antisemitic, misogynistic, and white supremacist narratives. Research indicates that these individuals often engage with subcultures such as the “True Crime Community” and the “incelosphere,” which use forums, social media, and multimedia content to normalise violence and idolise past attackers. As in the Antioch case, Brazilian perpetrators have also exhibited a contradictory adherence to extremist rhetoric that transcends traditional identity markers. This highlights how malicious actors exploit the full range of platform functionalities to decontextualise ideologies and appeal to diverse audiences.

The shooter’s engagement with online extremist content reveals a growing shift toward the use of non-English languages and lesser-monitored platforms, highlighting critical gaps in current AI moderation systems. His reference to a Turkish-language manifesto and the citation of Brazilian attackers illustrate how violent narratives circulate transnationally across linguistic and cultural boundaries. 

AI-driven moderation systems often struggle with detecting extremist content across multiple languages, particularly when coded language, slang, or regional dialects are used. Extremist communities exploit these linguistic blind spots by developing alternative phrasing and context-dependent symbols to evade detection. Platforms that prioritise English-language moderation frequently fail to recognise threats in languages such as Portuguese, Turkish, and Russian, allowing extremist narratives to spread unchecked. This linguistic gap not only delays the removal of harmful content but also enables cross-border radicalisation by fostering transnational networks that remain largely undetected by automated moderation tools.
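A minimal sketch of this blind spot, using neutral placeholder phrases rather than real coded terminology, shows how an English-only keyword filter by construction misses an equivalent post written in Portuguese or Turkish:

```python
# Minimal sketch of the English-only moderation blind spot described above.
# The "coded phrases" are neutral placeholders, not real extremist terminology.

ENGLISH_BLOCKLIST = {
    "placeholder coded phrase a",
    "placeholder coded phrase b",
}

def english_keyword_filter(post: str) -> bool:
    """Flag a post only if it contains a known English-language phrase."""
    text = post.lower()
    return any(term in text for term in ENGLISH_BLOCKLIST)

posts = [
    "some text with placeholder coded phrase a",   # flagged
    "o mesmo conteúdo, mas escrito em português",  # missed: same idea, Portuguese
    "aynı içerik, ama Türkçe yazılmış",            # missed: Turkish paraphrase
]

for post in posts:
    print(english_keyword_filter(post), "-", post)
```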

By allowing such content to circulate undetected, these moderation failures facilitate the formation of transnational extremist networks, where individuals not only absorb violent rhetoric but also exchange strategies for attack execution. The circulation of manifestos linked to mass school shootings exemplifies this intersection between school-based violence and broader extremist movements. Addressing these issues requires moderation strategies that extend beyond English, incorporating multilingual threat detection systems capable of adapting to evolving linguistic patterns. Initiatives such as Tech Against Terrorism’s Knowledge-Sharing Platform (KSP) should consider expanding their monitoring efforts to encompass these multilingual dynamics, ensuring that extremist content does not escape detection due to language barriers.
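One illustration of what such multilingual detection could look like is embedding-based similarity matching. The sketch below assumes the sentence-transformers library and a multilingual checkpoint such as paraphrase-multilingual-MiniLM-L12-v2; its reference phrases and threshold are illustrative placeholders, not an operational system.

```python
# Hedged sketch of multilingual, embedding-based threat matching.
# Assumes the sentence-transformers library and a multilingual checkpoint;
# the reference phrase and the 0.7 threshold are placeholders only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Known English-language reference signals (placeholders).
reference_phrases = [
    "placeholder reference phrase about planned violence",
]
reference_emb = model.encode(reference_phrases, convert_to_tensor=True)

def flag_for_review(post: str, threshold: float = 0.7) -> bool:
    """Route a post to human review if it is semantically close to any
    reference phrase, regardless of the language it is written in."""
    post_emb = model.encode(post, convert_to_tensor=True)
    similarity = util.cos_sim(post_emb, reference_emb).max().item()
    return similarity >= threshold

# A Portuguese or Turkish paraphrase of the reference material can score high
# on semantic similarity even though it shares no English keywords.
print(flag_for_review("frase de exemplo em português sobre violência planejada"))
```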

The shooter’s extensive engagement with extremist digital ecosystems was not confined to ideological reinforcement; it also played a direct role in shaping his real-world attack planning. Through these online spaces, he accessed tactical guidance, studied past perpetrators, and absorbed practical frameworks for executing mass violence. 

Operational and Psychological Preparation

A direct reading of the manifesto shows that the Antioch attack was meticulously planned: the shooter described using tools to bypass school barriers and emphasised the importance of physical preparation, inspired by past perpetrators who advocated for fitness and endurance as part of their attack strategies. This focus reflects the tactical knowledge shared in extremist communities, where operational readiness is often as prominent as ideological indoctrination.

Psychological preparation was equally significant. The shooter described repeatedly watching the Christchurch mosque attack livestream to desensitise himself to violence, using it as both inspiration and tactical guidance. The manifesto included instructions for documenting the event, recommending body-mounted cameras to livestream the violence — a tactic seen in other high-profile attacks. This performative dimension reflects broader propaganda strategies in extremist circles, where the visual spectacle of violence is designed to spread fear, attract media attention, and inspire future perpetrators.

The Antioch case draws attention to the persistent adaptability of extremist digital networks despite moderation efforts. The shooter’s multiplatform engagement, transnational ideological influences, and detailed operational planning exemplify the systemic challenge posed by these ecosystems. Addressing this issue requires enhanced content detection tools, cross-platform cooperation, and international efforts to disrupt the digital pathways that facilitate such acts of violence.

The same flaws that enable extremist content to bypass online moderation also weaken AI-based security systems in physical spaces. Both rely on detection models that struggle with contextual interpretation and can be manipulated through adaptive tactics, exposing broader challenges in algorithmic reliability across digital and physical threat detection.

The Blind Spot in School Security: How AI Failed at Antioch

Antioch High School adopted Omnilert, an AI-driven weapon detection system, to enhance campus security by identifying firearms in real-time and alerting authorities before potential attacks. Marketed as an advanced preventive tool against school shootings, the system promised quicker response times and the detection of concealed weapons. However, its real-world performance fell short during the Antioch incident, revealing significant vulnerabilities in AI-based security technologies.

Despite being operational at the time of the attack, the system failed to detect the shooter’s firearm, allowing him to enter the premises undetected. Paradoxically, it flagged the weapons carried by responding police officers as potential threats. Investigations later identified contributing factors such as poor camera placement, inadequate lighting, and obstructions in the field of view. 

The Antioch case raises concerns about the reliability of AI-driven security systems in preventing mass violence, highlighting issues like false negatives, where genuine threats are missed, and false positives, which cause unnecessary panic. Environmental factors, such as poor camera placement and inconsistent lighting, further impede detection accuracy, while attackers can exploit algorithmic weaknesses by using concealed or modified weapons. The incident also illustrates the risks of overreliance on automated systems without sufficient human oversight, as delayed responses can compromise safety. 
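In most vision-based systems, the tension between false negatives and false positives comes down to where a confidence threshold is set. The generic sketch below (hypothetical scores, not Omnilert's actual pipeline) illustrates why a single automatic threshold is brittle and where human review of borderline detections fits in:

```python
# Generic sketch of the confidence-threshold trade-off in a detection system.
# Scores and thresholds are hypothetical; this is not Omnilert's pipeline.

# Hypothetical detector outputs: (description, confidence that a firearm is present)
detections = [
    ("partially concealed firearm, poor lighting", 0.42),  # real threat, low score
    ("police officer's holstered sidearm", 0.91),          # real weapon, responding officer
    ("umbrella held at waist height", 0.58),               # benign object
]

ALERT_THRESHOLD = 0.80   # high threshold: fewer false alarms, more missed threats
REVIEW_THRESHOLD = 0.40  # borderline scores routed to a human operator

for label, score in detections:
    if score >= ALERT_THRESHOLD:
        action = "automatic alert"
    elif score >= REVIEW_THRESHOLD:
        action = "send to human review"  # catches the concealed firearm above
    else:
        action = "ignore"
    print(f"{score:.2f}  {action:22s} {label}")
```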

Conclusion

This analysis has traced the digital pathways and technological failures that enabled the Antioch High School attack. The perpetrator’s radicalisation journey reveals the persistent adaptability of extremist communities, which leverage both mainstream platforms and obscure online spaces to disseminate violent ideologies. His engagement with international narratives, including references to attacks in Brazil and Turkey, demonstrates the transnational nature of digital radicalisation, where extremist content circulates across linguistic and national boundaries. This case highlights the need for more robust, multilingual moderation strategies capable of identifying and disrupting the spread of violent propaganda before it leads to real-world harm.

Furthermore, the shooter’s extensive presence on platforms ranging from popular social media sites to niche forums illustrates the cross-platform dynamics of contemporary radicalisation. Mainstream platforms’ recommendation algorithms inadvertently facilitated his exposure to extremist narratives, while less-regulated spaces provided environments for planning and ideological reinforcement. The blend of multimedia content — manifestos, music, memes, and videos — acted as both propaganda and instructional material, fostering a culture of imitation and competition around mass violence. 

The technological shortcomings observed during the attack, particularly the failure of the AI-based Omnilert weapon detection system, further emphasise the need for caution when relying on automated security measures. Environmental factors, technical limitations, and algorithmic weaknesses combined to allow the perpetrator to enter the premises undetected. This incident illustrates the critical importance of human oversight, continuous system recalibration, and the development of more adaptive, context-aware algorithms.

The Antioch case also provides valuable insights for the Global South. In Brazil, similar attacks show how extremist subcultures both adopt global narratives and adapt them to local contexts. Existing moderation strategies tend to prioritise English-language content, overlooking Portuguese, Spanish, and other languages commonly exploited in extremist communities. Initiatives such as the Global Internet Forum to Counter Terrorism (GIFCT) and Tech Against Terrorism should therefore adopt more regionally informed, multilingual approaches to enhance detection and prevention efforts.

Ultimately, mitigating the risks posed by extremist content and enhancing security infrastructure demands a multifaceted approach — one that integrates technological advancements with comprehensive educational initiatives and sustained international cooperation, especially through the lens of the Global South experience.

Ricardo Cabral Penteado is a Ph.D. candidate in Computational Linguistics at the University of São Paulo (USP). He specialises in deep learning and natural language processing (NLP), focusing on the intersection of violent extremism and technology within the Brazilian and Latin American context. Ricardo is a 2024-2025 GNET Fellow.