This Insight contributes to GNET’s PhD Researcher Series, highlighting emerging academic voices in the field of countering violent extremism and terrorism online.
While antisemitism has long been a core ideological tenet within far-right extremist milieus across the West, recent years have seen a sharp rise in antisemitic incidents across virtually all major Western countries. Online, antisemitic content is no longer confined to fringe sites and platforms; amplified content and harmful voices now frequent mainstream digital spaces. Offline, the vandalism and destruction of synagogues, Jewish homes and Jewish businesses are now a seemingly weekly occurrence in the West.
Antisemitic online rhetoric has inspired, and continues to inspire, offline violent extremism across an array of otherwise divergent extremist ideologies, notably far-right and Islamist extremism. Though this confluence pre-dates the 7 October 2023 attacks, extremist groups have since attempted to leverage the subsequent currents of antisemitism flowing through the West to recruit, inspire and legitimise acts of gratuitous violence, with concerning effectiveness. Data from the United States shows the country has averaged eight antisemitic terrorist plots and attacks per month since 7 October 2023. An even more alarming finding is the presence of 26 school-based plots inspired by antisemitism between January 2023 and May 2025, with an average perpetrator age of just 16. These recent attacks, which feature antisemitic rhetoric or calls for violence, mirror trends found in extremist manifestos, where antisemitism is weaponised to justify violence and sow fear. This Insight therefore explores the role of antisemitic tropes in manifestos written by far-right terrorists, including their function as ideological contagions and their links to broader digital ecosystems.
The Potency of Antisemitic Extremist Violence
Sobering reminders of the tangible effects of antisemitic extremist violence can be found in the shootings that left 10 dead in Buffalo, New York; two dead in Halle, Germany; nine dead in Charleston, South Carolina; and one dead in Poway, California. Though these attacks were carried out against different targets at different times, they possess some striking commonalities. All four perpetrators were young men (aged between 18 and 27 at the time of their attacks), all were far-right extremists, all wrote manifestos outlining their ideological motivations, and all articulated an array of antisemitic and racist grievances within those manifestos. Through swathes of text and imagery, the perpetrators reproduced and re-articulated a myriad of antisemitic narratives and conspiracy theories commonly seen within the far-right violent extremist ideological architecture. While care must be taken when dissecting and reproducing extremist manifestos, they offer an invaluable window into the worldviews espoused by those directly responsible for the most potent manifestations of antisemitic violence.
As the evolution of technology continues to outpace institutional capabilities, it is critical to examine how antisemitic ideological violence connects to the broader informational ecosystem in far-right extremist spaces. This Insight uses the Repository of Extremist Aligned Documents (READ) database to analyse key far-right extremist manifestos, particularly those identified as promoting antisemitic violence or terrorist attacks. With a focus on antisemitic ‘tropes’ and their role within both manifestos and broader digital landscapes, this Insight examines how these ideologically symbolic texts are carefully constructed to justify violence, and how they function as tools of propaganda and vehicles for articulating and advancing grievances.
The Role and Relevance of Tropes
A full analysis of antisemitic tropes and their linguistic nuances lies outside the scope of this Insight; however, they are worth touching on for important context. Tropes are a narrative tool that, through their inherent ambivalence, deploy a subjunctive voice, requiring the audience to draw on their own experience and knowledge to contextualise and interpret the message. This involves the audience in the meaning-making process: regardless of whether they are sympathetic or opposed to the sentiment itself, they have effectively engaged with it. In the words of Colvin and Pisoiu, tropes become effective tools for the transmission of extremist ideology, as
“Pointers like “tick,” “comrade,” or “immigrant” stand in for fuller narratives, or untold backstories, and there is no requirement that the narrator who engages such tropes is willing, or indeed able, to articulate that ideological backstory… In pointing to a moral system that never has to be articulated, tropes enable the pleasure or “thrill” of violence without full narrative accountability.” (p. 504)
Tropes become even more salient when considering key aspects of the violent extremist far-right’s evolution over recent years, particularly its leveraging of technology. Ongoing challenges, including the livestreaming of violence, the recruitment and radicalisation of individuals through gaming or gaming-adjacent platforms, the glorification and ‘memeification’ of attacks, networks such as ‘Terrorgram’ hosted on encrypted apps, and various platforms hosting closed extremist communities, all synergise with the ability to communicate ideological messaging through short-form expressions or memes. Through sustained use, tropes such as the ‘14 words’ or ‘88’ become established components of the far-right extremist ideological architecture. Their use in the broader discourse of the far-right digital milieu, as well as in the manifestos of those responsible for extreme acts of antisemitic violence, highlights their role as ideologically effective contagions.
Tropes and Manifestos: What Stands Out?
While the perpetrators discussed here held an array of ideological views, a common theme identified across the manifestos was the perceived moral and societal decay of the West. This is a common ideological position within the violent extremist far-right, seen across forms of ethnonationalism, white supremacism, neo-Nazism, racism and xenophobia. Evocations of pan-Europeanism underpinned this position. Perpetrators expressed a desire to return to a perceived era of European domination and purity, evidenced by tropes such as “European race” and “European ancestry”, and viewed non-sympathetic white Westerners as “blood traitors” unwilling to stand up to this “assault on the European people”.
Here, manifestations of antisemitism differ. Most of the perpetrators view Jewish people as ethnically non-white, and thus impure and implicitly barbaric. This form of antisemitism manifests through tropes that were explicit within the manifestos analysed, referring to Jewish populations as “vile, disgusting filth” and a “squalid and parasitic race” responsible for “blood libel”. While ‘blood libel’ invokes a centuries-old antisemitic conspiracy, the tropes of ‘filth’ and ‘parasitic’ may appear less ideologically driven, but they mirror rhetoric common to the Nazi lexicon, which depicted Jews as unclean, disease-ridden and subhuman. This had the twofold effect of dehumanising Jews while discouraging non-Jews from mixing with them.
However, both the Buffalo and Charleston shooters concede that they viewed Jewish people as ethnically white, noting the issue is “not their blood but their identity.” For them, antisemitism manifests in relation to the cultural and institutional. Common conspiracy theories alleging Jewish control of Western institutions come to the fore, through tropes such as “Cultural Marxism”, “race mixing”, “forced ethnic integration”, and “control [of] the mainstream media…and global banking”. In this framing, Jews are viewed as engineering the downfall and subjugation of Western society. The convergence of these antisemitic conspiracy theories serves to legitimise the use of violence in the perpetrators’ eyes. They view their actions not as irrational or unjustified but as a response to a perceived Jewish destruction of the West, situating their attacks as a form of collective self-defence they hope will unify the far-right.
The Problem of Cumulative Momentum: Lower Thresholds to Violence and Inspiring New Attackers
There is a clear and troubling pattern of ideological succession among far-right violent extremists, where perpetrators of mass violence inspire subsequent attackers. This chain of influence underscores the persistent relevance of violent individuals within the broader extremist milieu. Their manifestos and digital footprints not only glorify previous acts but also serve as templates for future violence and offer vulnerable individuals around the world a point of connection.
The influence of this becomes clearer when considering the words of the Buffalo shooter, who concedes he derived his beliefs “mostly from the internet”. He notes that he “started browsing 4chan in May 2020 after extreme boredom”, explicitly pointing to the Christchurch shooting as a key catalyst for his own radicalisation and conceding that the Christchurch shooter’s “livestream started everything you see here”.
This illustrates what has been labelled “the self-referential nature of extreme right terrorism” (p. 46), as the visibility generated by disseminating footage of an attack lowers the threshold of participation for sympathetic others. As a result, “a manifesto issued by a Norwegian neo-Nazi inspired an Australian anti-immigrant fanatic, whose screed in turn inspired a Texan white supremacist” (p. 10).
This continuity highlights the urgent need for technology companies to restrict and regulate the dissemination of extremist content online, including by evolving generalised content moderation efforts into support for prevention initiatives. While some efforts have been made in this area, the sustained presence and circulation of extremist right-wing narratives, particularly those perpetuating antisemitic perspectives, suggest that more robust and proactive measures remain critically important.
Steps Forward for Technology and Opportunities for Further Research
Based on this preliminary research, technology companies should proactively assess and reform both their policy and programmatic approaches to addressing antisemitic narratives. A key challenge lies in the tension between how antisemitism is treated under hate crime frameworks and broader law enforcement considerations, which often vary across jurisdictions. Content moderation strategies also differ significantly: some governments opt for internet shutdowns as a form of containment during national crises, while others emphasise the protection of free speech. These contrasting approaches show the need for context-sensitive moderation policies that respond to national and regional dynamics. Companies must develop flexible yet principled content moderation policies that reflect not only legal compliance but also ethical responsibility, contextual sensitivity, and global human rights standards.
On the programmatic front, international organisations have often treated antisemitic narratives as politically sensitive or a secondary concern, and have thus far largely sidestepped the issue. However, the rise in online antisemitic content, especially surrounding geopolitical events, demands a more concerted response. The European Union has taken some steps in this direction through targeted initiatives and regulatory measures, such as the Digital Services Act, and seeks to provide guidance to Member States for effective implementation; these measures can serve as a model for others. Such efforts illustrate how regional policy innovation can inform platform governance and shape responsible digital ecosystems.
Future-proofing legislation is also essential to ensure adaptability to evolving forms of online hate and extremism. As extremist actors adapt quickly to evade detection, often using coded language, encrypted platforms, and cross-border networks, static policies will become obsolete. In many jurisdictions, the legal definition of antisemitism remains inconsistent, contested, or underdeveloped, leading to uneven enforcement and reporting by law enforcement bodies. This legal ambiguity can hinder content moderation efforts and contribute to the under-detection of antisemitic extremism online. Technology companies should consider collaborating with initiatives that address these definitional gaps, building internal resilience among content moderators and analysts and ensuring both ethical and operational sustainability.
Addressing antisemitism online requires technology companies to take specific steps that go beyond content moderation, such as developing more effective user-reporting tools and smarter detection systems for identifying harmful content. Options could include pre-filled reporting categories, multilingual context-aware tools, or systems able to identify antisemitic content conveyed through memes, coded language (for example, the symbolic use of emojis), and historical references deployed in particular contexts. Technology companies may also consider more proactive intervention strategies, such as automated alerts (warning users about community guidelines or offering alternative language) or redirection strategies that steer users towards educational resources. While these may appear to be straightforward technological solutions, challenges such as balancing free expression, evolving hate speech tactics, data privacy concerns, disruption to the user experience, and inconsistent platform enforcement make effective implementation increasingly complex.
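To make the detection challenge concrete, the following is a purely illustrative sketch of the simplest possible lexicon-based pass over coded numeric tropes such as ‘88’ or the ‘14 words’. The lexicon, function name and descriptions below are hypothetical and not drawn from any platform’s actual tooling; real moderation systems layer lexicons like this under contextual machine-learning models and human review, precisely because bare keyword matching cannot capture context.

```python
# Illustrative sketch only: a toy rule-based flagger for coded extremist
# tokens. The lexicon below is a minimal, hypothetical example; it is not
# any platform's real list.
import re
from dataclasses import dataclass

# Hypothetical lexicon mapping coded tokens to a short description.
CODED_TOKENS = {
    "1488": "combined white-supremacist numeric code",
    "14 words": "reference to a white-supremacist slogan",
    "88": "numeric code standing in for 'HH'",
}

@dataclass
class Flag:
    token: str
    reason: str

def flag_coded_language(text: str) -> list[Flag]:
    """Return a flag for each known coded token found in the text.

    Word-boundary lookarounds reduce obvious false positives, e.g.
    the bare token '88' will not match inside the year '1988'.
    """
    flags = []
    lowered = text.lower()
    for token, reason in CODED_TOKENS.items():
        if re.search(rf"(?<!\w){re.escape(token)}(?!\w)", lowered):
            flags.append(Flag(token=token, reason=reason))
    return flags
```

Even this toy version shows why the balance discussed above is hard: the lookarounds prevent one class of false positives, but distinguishing an ideological use of ‘88’ from an innocuous one still requires the surrounding context that pure pattern matching cannot see.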
Supporting initiatives like the READ primary-source repository can also play a critical role in safeguarding researchers’ mental health while enhancing understanding of extremist content. This ultimately contributes to more effective and sustainable content moderation and counter-narrative strategies, ensuring that preventing and countering violent extremism (P/CVE) efforts are implemented meaningfully while protecting practitioners’ wellbeing.
As technology evolves, so too do the challenges posed by extremists who seek to exploit artificial intelligence (AI). Hostile actors are already producing guides and using large language models to advance their tactics, including the spread of antisemitism. To counter this, technology companies and policymakers must prioritise a foundational understanding of AI and its potential for misuse in radicalisation. Doing so will bolster security infrastructure, future-proof regulation, and enable more effective, tailored interventions to disrupt extremist networks.
Antisemitism remains a potent ideological threat and mobiliser. However, an integrated approach, sensitive to the nuances of antisemitism and the fluidity of digital environments, offers a solid foundation for combating its virulence online.
–
Michaela Rana is an early career counter-terrorism researcher and program and policy specialist. Michaela’s expertise focuses on countering violent extremism in fragile states, law enforcement, rehabilitation, and deradicalisation efforts. She brings extensive experience from the Australian Federal Police, the United Nations Office on Drugs and Crime, and other international organisations.
Josh Lindsay is an early career researcher and doctoral candidate at Deakin University. His research focuses on the role of meso-level identity and subcultures in impacting violent extremism and radicalisation. He has experience working for the Australian Department of Home Affairs, the Department of Justice and Community Safety Victoria, and the University of Queensland.
—