
The Feed That Shapes Us: Extremism and Adolescence in the Age of Algorithms

12th December 2025 | Cecilia Polizzi

This Insight was published as part of GIFCT’s Working Group on Addressing Youth Radicalization and Mobilization (AYRM). GIFCT Working Groups bring together experts from diverse stakeholder groups, geographies and disciplines to offer advice in specific thematic areas and deliver on targeted, substantive projects. The following Insight is published as part of Next Wave, The International Center for Children and Global Security’s contribution to AYRM.     

In today’s digital ecosystem, radicalisation no longer takes root only in ideological echo chambers; it permeates the speech, humour, and emotional syntax of youth culture. Online spaces once reserved for socialisation and self-expression now double as arenas where identity, belonging, and ideology collide.

In the attention economy, algorithms reward engagement over nuance, and outrage may travel faster than empathy. In this online ecosystem, effective prevention requires helping young people understand the very architectures that shape what they see, how they feel, and who they connect with. By reframing prevention around participatory design, this Insight proposes an alternative paradigm: treating youth not only as passive consumers, but also as co-creators of ethical norms in their digital coming-of-age.

Facing a fragmented, algorithm-driven digital environment, we may need to shift from a traditional reliance on content moderation and takedowns toward an infrastructure that promotes transparency in recommender systems. Algorithmic literacy may be the key missing element in devising more effective responses to the online logics of extremism and radicalisation.

The New Topography of Youth Radicalisation 

An analogy by Oren Segal of the Anti-Defamation League (ADL) captures the accessibility of ideologically-oriented content and its normalisation within everyday digital routines: “Accessing a world of hate online today is as easy as it was tuning into Saturday morning cartoons on television”. The irony, of course, is that for many children and young people, the algorithmic equivalent of “Saturday morning cartoons” now comes scripted by extremists. 

Terrorist and violent extremist (TVE) digital engagement has evolved from one-way communication, valued primarily for its educational and propaganda functions, into heavy reliance on social media as an interactive, user-friendly tool. These interactions no longer demand pre-existing ideological orientations, interests, or technical know-how, and they enable extreme ideologies to be broadcast to friends, families, and wider audiences.

Within the current threat landscape, social media not only plays a central role at some stage of an individual’s radicalisation process but also sustains the infiltration and projection of influence over virtually every aspect of their life. As extremist discourse has been repeatedly pushed out of mainstream social media, concerns over privacy and anonymity have become increasingly important, driving both violent discourse and extremist or extremist-adjacent online cultures onto encrypted or loosely moderated platforms such as Telegram and Gab.

The necessity of withdrawing from the public mainstream, driven partly by moderation and other containment efforts, led to what some scholars have called a ‘visual turn’. Seemingly innocent memes, humour, and fandoms now add a layer of versatility and inherent light-heartedness to violent narratives, increasing the appeal of adopting specific ideological orientations. For example, the far-right accelerationist ‘Terrorgram’ network has operated as a neo-Nazi subculture on Telegram, promoting violence through a distinctive visual style and aesthetic that both defines the Terrorgram brand and reinforces in-group identification.

Irony and satire assume a central role in entrenching youth in extremist communities while desensitising them to the racism, sexism, and radical rhetoric that permeate this online discourse. Extremists develop their own language and frequently unite by antagonising authority and ‘political correctness’. Behaviours specific to the modern digital subculture, including provocative posting practices and meme-making, are important pull factors for engagement while simultaneously complicating identification by counter-terrorism and extremism practitioners. Shitposting and trolling, the use of deliberately offensive and provocative content, simultaneously mask extremists and solidify intra-group bonds.

Blame the Algorithm? 

Multiple incidents across the United States and Europe have ignited a debate over the surfacing of hate speech, misinformation, and hoaxes, and over the role of algorithmic recommendations in amplifying extremist material.

Hoaxes and misinformation can serve as entry points into extremist ecosystems. In 2016, Hoaxmap documented how, during the refugee influx in Germany, a wave of false claims about crimes allegedly committed by refugees circulated widely, fuelling xenophobia and far-right mobilisation. Similarly, following the Buffalo shooting, researchers found that the perpetrator had transitioned from mainstream video clips of firearms to white supremacist manifestos via recommendation engines and related-content links.

The concern is that automated content suggestions on social media and video platforms can draw youth, whether deliberately seeking such material or not, into radical digital rabbit holes where exposure to extreme or antagonistic content escalates and reinforces itself.

Algorithms designed to maximise engagement shape what reaches users based on their digital footprints, framing personal online activity by controlling what is seen and when. By prioritising metrics such as likes and shares, they promote controversial or sensational material, creating a feedback loop that can amplify fear, anger, outrage, and polarising narratives. This can leave users vulnerable to radical content or susceptible to extremism, while strengthening a sense of community and belonging within digital subcultures.
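
To make this feedback loop concrete, the toy simulation below ranks items purely by engagement counts and assumes that provocative content converts views into likes and shares at a higher rate. It is an illustrative sketch only: the item names, weights, and conversion rates are invented, not any platform’s real parameters.

```python
# Toy engagement-maximising ranker. Illustrative only: the weights and
# conversion rates below are invented, not any platform's real parameters.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    outrage: float          # how provocative the item is, 0..1 (toy proxy)
    likes: int = 0
    shares: int = 0

def score(item: Item) -> float:
    # The ranker sees only engagement counts, never content quality.
    return item.likes + 3 * item.shares

def run_feed(items: list[Item], rounds: int = 10) -> None:
    for _ in range(rounds):
        feed = sorted(items, key=score, reverse=True)
        for rank, item in enumerate(feed):
            views = 100 // (rank + 1)   # top slots are seen far more often
            # Assumption: provocative items convert views into engagement
            # at a higher rate, so the loop feeds itself.
            item.likes += int(views * 0.10 * (1 + 3 * item.outrage))
            item.shares += int(views * 0.02 * (1 + 4 * item.outrage))

items = [Item("measured explainer", outrage=0.1), Item("outrage bait", outrage=0.9)]
run_feed(items)
for item in sorted(items, key=score, reverse=True):
    print(item.title, score(item))
```

Within a few rounds, the provocative item overtakes the measured one and monopolises the top slot, even though the ranker never inspects content at all.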

Theories of direct media effects on radicalisation were subsequently developed, alongside a preliminary definition of algorithmic radicalisation as “changes in human attitudes, beliefs, or behaviour as individuals are directed to extremist content, networks, groups, or other individuals as a result of guided searches, filtered news feeds, recommended videos, and connections from extremist adjacent sites”. However, while it is undeniable that automated content suggestions remain powerful tools for malign actors, blaming the algorithm alone may fail to address a more complex reality. 

Exposure to fringe ideas alone is unlikely to suddenly change a person’s perspectives or to override their critical thinking. Extremist content encountered on one platform often surfaces via referrals from other sites, and young users are not simply passive consumers but interpreters, curators, and even co-producers of their own ‘algorithmic reality’ within digital systems. Youth interpret and negotiate meaning through their social, emotional, and cultural lenses; decide what to engage with, remix, or reject; shape feeds through their interactions; and, even when doing so ‘ironically’, train the system on their preferences. Ultimately, they may act in ways that resist or subvert violent content.

Beyond Top-Down Approaches: Participatory Design and Algorithmic Awareness

Moderation, deplatforming, intelligence-gathering and censorship have been widely adopted by governments and tech companies alike to counter the spread of online radicalisation. However, the extent to which these top-down approaches offer robust solutions remains unclear. Besides attracting frequent criticism, they tend to achieve only temporary disruption, push users to less-regulated platforms, and raise concerns about potential infringements of fundamental freedoms.

A more constructive direction is emerging through participatory design, an iterative and flexible process that closely involves young users in shaping healthier digital environments. Research on children’s and youth’s participation in different roles in the design of technologies, including those driven by Artificial Intelligence (AI), shows that while harms to young people have been increasingly recognised, young demographics remain underexplored as potential contributors to the future of responsible AI. Young people’s involvement in participatory design can fall along a continuum ranging from “users to testers to informants to design partners.” Having evaluated each of these roles in terms of its objectives, processes, and outcomes, Iversen et al. went on to propose the role of protagonists, in which children are the primary agents of the design process. It is also recognised that young end users, when given the opportunity, can make meaningful contributions to the design of algorithmic systems.

A study by Noh et al., exploring algorithm auditing as a potential entry point for youth to assess generative AI, demonstrates that adolescents can detect harmful behaviours in technologies they are familiar with that would otherwise go unnoticed. The capacity of minors to confront AI harms is equally visible in real-life settings. For example, a youth-led protest in the United Kingdom led the government to abandon an AI grading algorithm because of the inequities it caused for working-class students. It is therefore clear that young users are not only capable of understanding, assessing, and even manipulating the logics of algorithmic systems, but can also articulate what fairness, accountability, and effectiveness should look like in practice.
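
To illustrate what such an audit can look like, the hypothetical sketch below runs a simple ‘sock puppet’ probe of the kind a youth auditor might design: start from a neutral seed, repeatedly follow recommendations, and measure how often flagged content surfaces. The recommend() function is an invented stand-in that deliberately models rabbit-hole behaviour; it is not a real platform API.

```python
# Hypothetical 'sock puppet' audit of a recommender. recommend() is a
# toy stand-in that models rabbit-hole drift, not a real platform API.
import random

FLAGGED = {"conspiracy clip", "extremist manifesto"}
CATALOGUE = ["cooking video", "gaming stream", "news clip",
             "conspiracy clip", "extremist manifesto"]

def recommend(history: list[str]) -> str:
    # The more flagged items already watched, the more likely the next
    # suggestion is flagged: the drift the audit tries to expose.
    bias = sum(item in FLAGGED for item in history)
    weights = [3 + 2 * bias if item in FLAGGED else 3 for item in CATALOGUE]
    return random.choices(CATALOGUE, weights=weights)[0]

def audit(seed: str, hops: int = 20) -> float:
    # Follow recommendations from a neutral seed and measure exposure.
    history = [seed]
    for _ in range(hops):
        history.append(recommend(history))
    return sum(item in FLAGGED for item in history[1:]) / hops

random.seed(0)
print(f"flagged share after 20 hops: {audit('gaming stream'):.0%}")
```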

Overcoming barriers to youth agency while simultaneously safeguarding teenagers from radicalisation and algorithmic influence requires increased transparency into social media mechanisms as part of sustained pedagogical practice. Disruptions caused by digitalisation “span from news to culture, from formal knowledge systems to everyday sense-making”, and concern not only young people’s access to accurate information but their ability to exercise control over how knowledge is acquired, that is, how beliefs are formed and revised. Algorithms, built on predictive modelling, big data, and the optimisation of attention, intervene in the production, circulation, and legitimation of meaning by structuring knowledge hierarchies, ranking content, and determining visibility. The term ‘attention ecology’ provides a conceptual framework for understanding the role of algorithms in shaping the flow, peaks, and decay of visibility across online ecosystems. Users’ micro-level activity is aggregated into visibility that can tip content past virality thresholds; yet attention itself, more than the content of any trend, appears to be the driving force behind hyper-circulation and network saturation. The societal impacts of recommender systems, described by some scholars as the foundation of an ‘epistemic crisis’, call for algorithmic awareness: the ability to understand how automated systems work, identify their operative logics, acknowledge the biases they embed, and analyse the symbolic, social, and cultural effects they generate in individuals and collectives.
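
The flow-peak-decay pattern that ‘attention ecology’ describes can be sketched with a toy model. In the sketch below, all parameters are illustrative assumptions: content above a virality threshold is amplified until its novelty wears off, while content below the threshold simply fades.

```python
# Toy model of attention dynamics: amplification above a virality
# threshold, novelty wearing off over time, decay below the threshold.
# All parameters are invented for illustration.

def attention_curve(initial: float, steps: int = 12, threshold: float = 50.0,
                    boost: float = 1.6, decay: float = 0.7,
                    freshness: float = 0.9) -> list[float]:
    visibility, curve = initial, []
    for step in range(steps):
        gain = boost * freshness ** step   # the novelty bonus fades each step
        visibility *= gain if visibility >= threshold else decay
        curve.append(round(visibility, 1))
    return curve

print(attention_curve(60.0))  # crosses the threshold: flows, peaks, then decays
print(attention_curve(40.0))  # never trends: steady decay from the start
```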

“Trending” Isn’t Harmless: What Follows

AI is now omnipresent across commercial technology and is a keystone of content distribution and engagement on social media platforms. Recommendation tools are designed to optimise users’ experiences and facilitate information access by presenting them with the most “relevant” content according to a series of pre-set criteria. The tagline “this is trending” quantifies attention and repackages it as a statement of value. It is so common, and so misleadingly innocent, that it frequently passes from speaker to listener without its validity ever being questioned. For many forms of communication and broadcasting, getting something to “trend”, that is, to accumulate millions of visits, views, likes, and shares, is itself the ultimate goal of producing it.
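
A back-of-the-envelope calculation shows how crudely “trending” quantifies attention. In the hypothetical scoring below (the weights and the recency rule are invented for illustration), a two-hour-old hoax outranks a two-day-old accurate report because the score rewards bursts of recent attention, never validity.

```python
# Hypothetical 'trending' score: pure attention arithmetic with no
# notion of accuracy or harm. Weights and the recency rule are invented.
from datetime import datetime, timedelta, timezone

def trending_score(views: int, likes: int, shares: int,
                   posted: datetime, now: datetime) -> float:
    age_hours = max((now - posted).total_seconds() / 3600, 1.0)
    attention = views + 5 * likes + 10 * shares
    return attention / age_hours        # recent bursts outrank slow burns

now = datetime.now(timezone.utc)
fresh_hoax = trending_score(50_000, 4_000, 2_000, now - timedelta(hours=2), now)
old_report = trending_score(200_000, 9_000, 1_000, now - timedelta(hours=48), now)
print(f"hoax: {fresh_hoax:.0f}  accurate report: {old_report:.0f}")
```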

Teens are especially susceptible to algorithmic influence because they inhabit a new socio-technical environment: the centrality of cultural transmission has shifted from schools, families, and mass media to platforms whose algorithmic architectures automate identity exploration and community formation. Youth cultures on the Internet are intrinsically ambiguous, layered, and aestheticised. Their fusion with extremist ideology stylises terrorism through art, music, and manifestos; trivialises and gamifies violence through scoreboards; and makes radicalisation more appealing and harder to detect. In the end, “It is just a joke,” until it isn’t.

The impact of data-driven technology on young people, at both the individual and social level, is shaped by a complex interplay between users’ browsing interactions and the intelligent components of the platforms. The opacity surrounding recommendation systems complicates efforts to size and scope extremism threats: it creates uncertainty about the rules governing content amplification, about the pathways through which users are funnelled into digital rabbit holes and fed increasingly radical material, and about whether violent or extremist language was coded or simply went undetected. Algorithmic transparency, that is, making the built-in systems that decide what you see, which videos appear next, and which accounts or hashtags are suggested understandable, auditable, and accountable, is hampered by trade secrets and ‘black box’ complexity.

Thus, youth participation in, and understanding of, the dynamics that compete for their attention play a decisive role. Platforms, and the technology behind them, are moving targets; preventing youth radicalisation should prioritise strategies that inform young people as they navigate the digital environments they already inhabit and equip them to become authorities on their own online safety.

Cecilia Polizzi is an international security strategist and leading expert on child recruitment and radicalization. She has shaped policy and strategy for national governments and multilateral organizations, including NATO, OSCE, the Council of Europe, the European Commission, and UN agencies. Polizzi has spoken before the United States Institute of Peace, the Italian Ministry of Defense, and other high-level institutions, and has published extensively in academic journals and other outlets. She is the Founding CEO of the Next Wave Center, a leading organization in the counter-terrorism and extremism community, focused on addressing the recruitment and radicalization of minors. Recognized for her international impact, she was awarded the 2025 McCain Global Leaders fellowship. X: https://x.com/_CeciliaPolizzi
