This Insight draws on data from the RadiGaMe project, which is funded by the German Federal Ministry of Research, Technology, and Space.
The Terrorgram network, rooted in right-wing accelerationism, has evolved into a significant challenge for the prevention of violent extremism (PVE) across the globe. Militant accelerationism seeks to trigger societal collapse and state destabilisation through targeted violence and the deliberate amplification of chaos. Since the late 2010s, this decentralised ecosystem of Telegram groups has functioned as a space for recruitment and radicalisation in which primarily young users are intensively exposed to violent propaganda, tactical instruction, and ideological reinforcement.
Recent developments indicate that militant accelerationism is no longer a purely Western phenomenon. Although it has been prominently associated with cases such as the 2022 Bratislava shooting, recent incidents in Turkey and Indonesia suggest that the threat is spreading to other contexts as well. This illustrates a persistent threat that challenges existing C/PVE approaches and their ability to detect and disrupt such developments at an early stage. As a recent study from Germany shows, early identification of terrorist attack planning plays a crucial role: in approximately 70% of known German cases connected to militant accelerationism between 2020 and 2025, almost all of which involved minors, the suspects were reported by intelligence services to the police before attack execution.
A primary hindrance to early intervention is the specific nature of communication within accelerationist online spaces, where users frequently employ irony, “shitposting”, references to violent content, and the glorification of past attackers, also known as “saints culture”. Yet not all violence-related expressions indicate a genuine intent to act. Distinguishing between performative extremism, sustained violent radicalisation, and imminent operational intent within online extremist discourse thus remains a significant challenge.
Whereas traditional risk assessment approaches are designed for individual case analysis, drawing on detailed information about behaviour and personal backgrounds of already radicalised individuals, threat assessment in the digital sphere first requires a filtering step: from the large pool of users active in Terrorgram environments, those who may present an elevated risk must be identified. At present, there is no established conceptual framework to guide the identification of high-risk communication and users (clusters) as a basis for subsequent case-based assessment.
This Insight presents findings from extensive empirical research on Telegram: an operationalisation of “warning behaviour” in right-wing accelerationist communication on messaging services. Building on these findings, we outline implications for platform monitoring, the identification of risk clusters, regulation, and law enforcement.
Applying the Concept of “Warning Behaviour” to Terrorgram
Warning behaviours are defined as acute, dynamic shifts in individuals’ behavioural patterns, distinct from static traits. Meloy identifies eight proximal warning behaviours: pathway (information-gathering and planning), fixation (cognitive preoccupation with a specific grievance or target), identification (association with past perpetrators or ideological models), energy burst (marked increase in planning activity), leakage (communication revealing violent intent), last resort (perception of no viable alternatives), novel aggression (a first-time act of violence, unrelated to the attack pathway, that tests the capacity for violence), and directly communicated threat (explicit threat statements).
This framework is empirically grounded primarily in the analysis of the real-world behaviour of lone-actor terrorists. However, the operationalisation of warning behaviours in digital communication remains underdeveloped. In particular, it is not clear how these behaviours manifest in right-wing extremist communication on messaging services such as Telegram, where interactions are semi-anonymous, rapidly evolving, and potentially deliberately staged.
Our research adopts an inductive approach to operationalise warning behaviours specifically for right-wing accelerationist communications. Rather than applying pre-existing frameworks one-to-one, we examine how the concept of warning behaviour—as dynamic behavioural shift—translates into the linguistic and interactional patterns distinctive to this platform, capturing varied manifestations and intensities as they genuinely occur in right-wing extremist digital spaces.
Results: Warning Behaviours in Right-Wing Accelerationist Groups
In our research within the German-funded consortium “Radicalization on Gaming Platforms and Messaging Services” (RadiGaMe), we conducted a content analysis of 9,470 posts from 19 publicly accessible right-wing accelerationist Telegram groups, with data collected in 2024. The sample was compiled through a seed-based snowball sampling procedure: the starting points (seed groups) were identified via keyword searches, using terms derived from a literature review on right-wing accelerationism. The collected data covers the period from November 2021 to May 2025, with most observations concentrated in 2023 and 2024.
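To illustrate the logic of this sampling step, the following minimal sketch shows a seed-based snowball expansion: groups whose descriptions match search keywords serve as seeds, and further groups are added by following links or forwards observed in the sampled groups. All group names, keywords, and link structures here are invented for illustration and do not reflect the study’s actual data collection pipeline.

```python
from collections import deque

# Hypothetical keyword list derived from a literature review (illustrative only).
SEED_KEYWORDS = ["accelerationism", "siege", "terrorwave"]

# Invented group metadata: a short description and the groups it links/forwards to.
GROUPS = {
    "group_a": {"description": "accelerationism discussion", "links_to": ["group_b", "group_c"]},
    "group_b": {"description": "meme channel", "links_to": ["group_d"]},
    "group_c": {"description": "siege reading circle", "links_to": []},
    "group_d": {"description": "unrelated hobby chat", "links_to": []},
}

def find_seeds(groups, keywords):
    """Seed groups: those whose description matches at least one search keyword."""
    return [g for g, meta in groups.items()
            if any(k in meta["description"].lower() for k in keywords)]

def snowball(groups, seeds, max_groups=19):
    """Breadth-first expansion from the seed groups along observed links/forwards.
    In practice, each candidate group would still be screened for relevance."""
    sampled, queue = set(), deque(seeds)
    while queue and len(sampled) < max_groups:
        group = queue.popleft()
        if group in sampled:
            continue
        sampled.add(group)
        queue.extend(groups[group]["links_to"])  # follow links to neighbouring groups
    return sampled

if __name__ == "__main__":
    seeds = find_seeds(GROUPS, SEED_KEYWORDS)
    print("seed groups:", seeds)
    print("snowball sample:", snowball(GROUPS, seeds))
```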
For each of the 19 groups, between two and eight discussion segments of 100 posts each were selected, depending on the amount of available data per group. The time span covered by a 100‑post segment varied considerably (M = 69.2 hours, SD = 234.3); for 50% of the segments it lay between less than one hour and more than 20 hours. On average, 16 users participated in a discussion segment (range: 5-31), excluding translation and other bots.
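The construction of 100-post segments and the reported duration statistics can likewise be made concrete. The sketch below uses invented timestamps; it only demonstrates how posts could be cut into consecutive 100-post segments and how the mean and standard deviation of segment durations (in hours) would then be computed.

```python
import statistics
from datetime import datetime, timedelta

# Invented post timestamps for illustration only (one post every 7 minutes).
posts = [datetime(2024, 1, 1) + timedelta(minutes=7 * i) for i in range(250)]

SEGMENT_SIZE = 100

# Cut the post stream into consecutive segments of 100 posts each;
# a trailing remainder shorter than 100 posts is simply discarded here.
segments = [posts[i:i + SEGMENT_SIZE]
            for i in range(0, len(posts) - SEGMENT_SIZE + 1, SEGMENT_SIZE)]

# Duration of each segment in hours (time between its first and last post).
durations = [(seg[-1] - seg[0]).total_seconds() / 3600 for seg in segments]

print(f"M = {statistics.mean(durations):.1f} h, SD = {statistics.stdev(durations):.1f} h")
```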
Through inductive analysis of the corpus, we identified five distinct dimensions of posts exhibiting warning behaviours associated with right‑wing terrorist violence:
- Mobilisation toward violence
- Identification with violence
- Self‑positioning as violent perpetrator
- Consumption & display of violent content
- Operational interest, capacity or planning
Dimension A: “Mobilisation toward Violence” captures communicative attempts to move others from approval to preparation and action. Level 1 codes messages that promote violence as a necessary, legitimate, or desirable practice without specifying how to prepare or which concrete acts to commit. Violence is framed as a lifestyle, virtue, or inevitable strategy against enemy groups. Level 2 codes calls for concrete preparation for violent confrontation, including training, armament, equipment, security measures, and networking, while keeping envisaged acts and targets generic. Level 3 codes direct or indirect incitement to specific violent acts, where method, target group, and/or location are sufficiently specified to constitute an actionable attack idea (e.g. “behead your local…”, “someone should set off a dirty bomb in London”).

Figure 1: Example of a “Call to Violence” from a Telegram group
Dimension B: “Identification with Violence” describes how strongly users cognitively and emotionally identify with right-wing terrorist violence and its perpetrators, ranging from simple endorsement to the construction of attackers as saints. Level 1 codes approval, justification, or moral normalisation of physical violence against clearly defined out-groups, without any self-positioning as perpetrator. Level 2 codes concrete, method-specific descriptions of how violence should or could be carried out, indicating deeper cognitive engagement with violent action. Level 3 codes subcultural, fandom-like identification with specific attackers as figures or avatars (e.g., skins, cosplay, memes), in which users signal personal affinity without explicitly elevating them to quasi-sacred status. Level 4 codes the explicit portrayal of attackers as exemplary heroes or martyrs (such as “Saint X”, references to specific attack dates, “Hall of Heroes”), whose actions are presented as models to be honoured and, implicitly, emulated.

Figure 2: Example of a “Saint Calendar” from a Telegram group
Dimension C: “Self-Positioning as Violent Perpetrator” characterises instances in which users explicitly position themselves as potential or actual perpetrators of violence, structured as an escalation continuum from symbolic self-presentation to claimed real-world acts. Level 1 codes performative displays of readiness for violence (such as posing with weapons, “tactical” aesthetics) aimed at identity construction rather than concrete planning. Level 2 codes self-referential fantasies of committing violence against enemy groups that remain hypothetical or intrusive. Level 3 codes communicated intent or threats to use violence, including conditional statements, recruitment for attacks, and offers of weapons or other resources. Level 4 codes self-reported past violent acts against ideological enemies, where users present themselves as already active offenders.
Dimension D: “Consumption and Display of Violent Content” documents how users display, seek, and emotionally respond to violent imagery and footage. Level 1 codes the posting or requesting of real-world violence material without affective or aesthetic framing. Level 2 codes the stylistic transformation of violent content into memes, cartoons, artistic renderings, or “tactical” weapon photography, where violence is presented as visually appealing or humorous rather than as neutral evidence. Level 3 codes explicit positive emotional reactions to violence and suffering, such as enjoyment, pride, amusement or fascination, signalled via text, emojis, usernames, or repeated affect-marked requests.
Dimension E: “Operational Interest, Capacity and Planning” focuses on communication signalling users’ operational interest in and capacity for violence. Level 1 codes expressions of operational interest at the information level: users ask for or share practical knowledge about the acquisition, construction or use of weapons, treated as early indicators of potential pathways toward operational capacity. Level 2 codes statements that point to existing real-world capabilities: claims of owning specific weapons, sourcing equipment through established contacts, or accessing training infrastructure. Level 3 would capture the explicit leaking of concrete operational planning (targets, timelines, logistics), which did not occur in our material but forms the conceptual upper end of this dimension.
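For systematic coding and later aggregation, this five-dimensional scheme can be represented as a simple data structure. The sketch below is a purely illustrative encoding with abbreviated labels, not the project’s actual codebook implementation; it records, for each coded post, its dimension and ordinal escalation level.

```python
from dataclasses import dataclass

# Abbreviated, illustrative rendering of the five-dimensional coding scheme.
CODEBOOK = {
    "A": {"name": "Mobilisation toward violence",
          "levels": {1: "general promotion", 2: "calls for preparation",
                     3: "incitement to specific acts"}},
    "B": {"name": "Identification with violence",
          "levels": {1: "approval/justification", 2: "method-specific descriptions",
                     3: "subcultural identification", 4: "saintification"}},
    "C": {"name": "Self-positioning as violent perpetrator",
          "levels": {1: "performative displays", 2: "violent fantasies",
                     3: "communicated intent/threats", 4: "self-reported past violence"}},
    "D": {"name": "Consumption & display of violent content",
          "levels": {1: "sharing/requesting material", 2: "aestheticisation/memefication",
                     3: "positive emotional responses"}},
    "E": {"name": "Operational interest, capacity or planning",
          "levels": {1: "information-level interest", 2: "real-world capability",
                     3: "leaked operational planning"}},
}

@dataclass
class Code:
    post_id: str
    user_id: str
    dimension: str  # "A".."E"
    level: int      # ordinal escalation level within the dimension

    def label(self) -> str:
        dim = CODEBOOK[self.dimension]
        return f'{self.dimension}-L{self.level}: {dim["levels"][self.level]}'

# Example: a post coded as saintification of attackers (dimension B, level 4).
print(Code("post_17", "user_3", "B", 4).label())
```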
The frequencies of these various dimensions and levels are shown in Tab. 1. Identification with violence (Dimension B, 2.0%) occurred most frequently, followed by self‑positioning as violent perpetrators (Dimension C, 0.6%), consumption and display of violent content (Dimension D, 0.4%) and mobilisation toward violence (Dimension A, 0.4%). The dimension of operational interest and planning remained comparatively rare (0.1%). On average, 3.4% of posts in each group were coded in one of the five warning‑behaviour dimensions (Min. 1.0%, Max. 12.2%, SD = 2.8 percentage points). Within each dimension, lower escalation levels predominated: approval and justification of violence (B‑L1, 0.8%) and performative displays of violence readiness (C‑L1, 0.3%) were most common, whereas direct incitement to specific acts (A‑L3, 0.1%) and indicators of real‑world capability (E‑L2, 0.1%) occurred infrequently. Notably, explicit leaking of concrete operational planning (E‑L3) did not appear in the material.
| Dimension / Level | Share of posts |
| --- | --- |
| A Mobilisation toward Violence | |
| L1: General promotion of violence | 0.2% |
| L2: Calls for preparation for violent practice | 0.0% |
| L3: Incitement to specific violent acts | 0.1% |
| Sum | 0.4% |
| B Identification with Violence | |
| L1: Approval and justification of violence | 0.8% |
| L2: Method-specific descriptions of violence | 0.4% |
| L3: Subcultural identification with attackers | 0.4% |
| L4: Saintification of attackers as heroes/martyrs | 0.3% |
| Sum | 2.0% |
| C Self‑Positioning as Violent Perpetrator | |
| L1: Performative displays of violence readiness | 0.3% |
| L2: Self‑referential fantasies of violence | 0.1% |
| L3: Communicated violent intent or threats | 0.1% |
| L4: Self‑reported past violent acts | 0.1% |
| Sum | 0.6% |
| D Consumption & Display of Violent Content | |
| L1: Sharing/requesting of violent material | 0.2% |
| L2: Aestheticisation/memefication of violence | 0.1% |
| L3: Positive emotional responses to violence | 0.1% |
| Sum | 0.4% |
| E Operational Interest, Capacity or Planning | |
| L1: Information‑level operational interest | 0.0% |
| L2: Indications of real‑world capability | 0.1% |
| L3: Leaking of concrete operational planning | 0.0% |
| Sum | 0.1% |
| Total Sum | 3.4% |
Tab. 1: Average frequency of warning behaviours in the investigated Telegram groups
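Since the min/max/SD figures in the text indicate that the reported shares are averages of per-group proportions, the following sketch illustrates how such an average could be computed for a single code, here B-L1. The group names and counts are invented for illustration and do not reproduce the study’s data.

```python
import statistics

# Invented per-group counts: posts coded B-L1 and total posts per group.
groups = {
    "group_a": {"B-L1": 4, "total": 500},
    "group_b": {"B-L1": 9, "total": 480},
    "group_c": {"B-L1": 2, "total": 510},
}

# Per-group share of posts coded B-L1, then averaged across groups
# (average of per-group proportions, as in Tab. 1 and the text above).
shares = [g["B-L1"] / g["total"] for g in groups.values()]
print(f"mean share: {statistics.mean(shares):.1%}, "
      f"min: {min(shares):.1%}, max: {max(shares):.1%}, "
      f"SD: {statistics.stdev(shares):.1%} points")
```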
Conclusion and Recommendations
While the transition of right-wing accelerationism from a niche subculture to a global security threat requires a broad, multisectoral response, the messaging platforms hosting this content, particularly Telegram, remain central to the shift in governance needed to mitigate this rapidly evolving threat. Our analysis of 9,470 text messages across 19 Telegram groups identified five dimensions of warning behaviour with varying levels of expression. Aligning with the warning behaviours previously outlined by Cohen et al., this dimensionalised escalation structure makes it possible to differentiate radicalisation dynamics at the user level, from cognitive identification through to operational capacity.
Because identification with violence and self-positioning as a violent perpetrator (Dimensions B & C) are the most frequent warning behaviours in these chat groups, together accounting for 2.6% of all analysed posts, it is evident that these spaces play a critical role in reinforcing, validating, and habituating violence-oriented worldviews. Furthermore, they function as venues for performative identity construction that extends beyond ideological alignment to overt demonstrations of individual readiness for violent action.
Our findings suggest that frequent and escalating warning behaviours at the user level should be treated as potential risk signals that warrant closer, case‑based assessment and, where feasible, longitudinal tracking of the respective users’ communications by prevention experts and security authorities. It is therefore recommended that platforms intensify detection and reporting efforts with regard to warning behaviours and establish fast-track reporting channels to Law Enforcement Agencies (LEAs) for high-level warning behaviours, such as direct incitement or threats, to allow for timely intervention.
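As one illustration of what such detection could look like in practice, the sketch below applies a simplified, hypothetical rule: users whose coded posts reach upper escalation levels, such as direct incitement (A-L3) or communicated threats (C-L3), are flagged for closer case-based review. The specific dimension/level combinations, the threshold, and the user data are assumptions for illustration, not recommendations or results drawn from the study.

```python
# Hypothetical set of dimension/level combinations treated as "high-level"
# warning behaviours (e.g. direct incitement, threats, operational capability).
HIGH_LEVEL = {("A", 3), ("C", 3), ("C", 4), ("E", 2), ("E", 3)}

# Invented user-level coding results: list of (dimension, level) codes per user.
user_codes = {
    "user_1": [("B", 1), ("D", 2)],
    "user_2": [("B", 4), ("C", 3), ("A", 3)],
}

def flag_for_review(codes, min_hits=1):
    """Flag a user if at least `min_hits` of their posts carry high-level codes."""
    hits = [c for c in codes if c in HIGH_LEVEL]
    return len(hits) >= min_hits, hits

for user, codes in user_codes.items():
    flagged, hits = flag_for_review(codes)
    print(user, "flagged:", flagged, "high-level codes:", hits)
```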
While the detection of such behaviour plays an important role, it can only be part of a broader response framework that also focuses on disrupting socialisation hubs that foster mobilisation and identification with violence (Dimensions A & B). Breaking these feedback loops is essential to preventing the normalisation of extremist content. Given this, platforms such as Telegram must intensify efforts to remove violent extremist content, suspend relevant channels and groups, and block user accounts that systematically disseminate or coordinate such material. However, as many violent extremist posts may fall within the scope of the EU’s Terrorist Content Online (TCO) Regulation, LEAs should also increase efforts to identify such content and issue immediate removal orders.
Since many participants within these radicalised digital spaces are minors, platforms should collaborate with external stakeholders to develop a continuous flow of information regarding current trends and coded language. Only when prevention workers and LEAs gain the ability to recognise evolving communication patterns can they provide timely intervention for vulnerable individuals.
–
Robert Pelzer is head of the Security – Risk – Criminology research area at the Center Technology and Society (ZTG) at Technische Universität Berlin. His research focuses on processes of radicalisation and disengagement, biographical research, counterterrorism policing in the digital age, and the participatory design and social evaluation of security solutions.
Sina Weickgenannt is a student research assistant at the Center Technology and Society (ZTG) at Technische Universität Berlin. She is pursuing a Master of Science in Psychology – Cognitive Neuroscience at the University of Magdeburg, with research interests in cognition, emotion and social psychological processes, with a particular focus on radicalisation dynamics.
Tobias Weidmann, M.A., is a research associate for the BMFTR project “Radicalization on Gaming Platforms and Messenger Services” (RadiGaMe) at the Center Technology and Society (ZTG) at Technische Universität Berlin. His research and work focus on right-wing extremism and terrorism in digital spaces, with a particular emphasis on the (re)production of gender and subjectivity.
–