Content disclaimer: All identifiers referencing real-life cases will be anonymised with pseudonyms, noting that in Australia, names are publicly withheld during the reporting of any arrest, indictment and sentencing. Cases that have been finalised reference the real names of those convicted, where available.
Over the past decade, Australia has seen the rise of a generation of ‘digital natives.’ These are young people who have spent their entire formative years online and who are entering a vulnerable age where they are more prone to radicalisation due to how they interact with online ecosystems. Their sense of normality, identity, and community is often shaped more by the digital world than by real-life experiences.
The Australian Security Intelligence Organisation’s (ASIO) 2025 Annual Threat Assessment echoes these concerns about youth being increasingly susceptible to radicalisation. This is in addition to Australia’s general national terrorism threat level remaining at ‘probable’. In February, ASIO’s Director-General of Security, Mike Burgess, presented a unique threat assessment, declassifying certain elements of ASIO’s Security Outlook to 2030. Burgess candidly assessed the findings as “uncomfortable.”
This threat assessment restates concerns from previous ASIO threat assessments and is also reflected in Australia’s new Counter-Terrorism and Violent Extremism Strategy (2025): specifically, the continued growth of youth radicalisation online, which increases exposure to, or engagement with, terrorist and violent extremist (TVE) material. Youth are appearing more often within ASIO’s and the Australian Federal Police’s (AFP) counter-terrorism priority caseloads, and in growing numbers within Australia’s violent extremist intervention programs.
Accordingly, this Insight examines what can be deduced from primary and open-source data on case studies involving youth (aged between 12 and 25) who have been investigated, charged, or convicted of terrorism-related offences within Australia from 2019 to date. It explores the specific nuances of how technology facilitated their targeted recruitment, radicalisation, and engagement with violent extremism or terrorism.
To further narrow the scope of this Insight, the cases are focused on those involving right-wing violent extremist ideologies, including those promoting ethno-nationalism and the idea of a ‘White Australia’. This Insight acknowledges that Australia’s threat environment is far broader, with youth increasingly demonstrating fluid or unstable ideological shifts, merging components from different ideologies into mixed or hybrid beliefs, and/or adopting vague, incoherent or ‘unclear’ ideological mixes to justify their violence.
Recent Cases
The recent public release of the open-access ‘Profiles of Individual Radicalisation in Australia’ (PIRA) database captures records of individuals radicalised to extremism in Australia (between 1985 and 2023). The database’s objective is to enable the empirical study of radicalisation pathways, risk factors, and behavioural patterns within the Australian context.
Cherney, Belton et al. (2022) undertook substantial work in identifying 33 cases of Australians, aged 19 and below, who radicalised to violent extremism, drawn from the initial PIRA dataset and covering a range of ideological spectrums.

Figure 1: Cherney, A., Belton, E., Narham, S.A.B., & Milts, J. (2022) ‘Understanding youth radicalisation: An analysis of Australian data’, Behavioral Sciences of Terrorism and Political Aggression, 14 (2), 97-119.
Belton noted limitations in the dataset’s conceptualisation of online radicalisation measures, stating that, despite being drawn from public sources, it cannot capture the nuances of how radicalisation occurred. Instead, the authors detailed the influence of social media platforms, whether major or minor, based on specific parameters, supplemented by access to court transcripts, some terrorist post-sentencing orders, and the presence or publicity of the individual’s case. Figure 1 examines how young people differ from adult extremists: the prevalence of mental health problems among youth extremists is quite high, and their levels of social media engagement are higher still. This is demonstrated further by some of the cases below.
A recent example includes a 14-year-old (case study #2, pg 4), who adhered to a nationalist and racist violent extremist ideology. They had been posting extreme right-wing content on their Snapchat account; it was later confirmed by other government departments that the individual was also known for holding racist sentiments towards certain groups. The young person came to police attention through a community reporting hotline, where it was confirmed they planned to conduct a school shooting and had the means to do so. The youth was sentenced on charges relating to advocating terrorism, in addition to the use of a carriage service to make threats. Since sentencing, the young person has also engaged with countering violent extremism rehabilitation efforts to de-escalate concerning behaviour and ideology.
A particularly noteworthy case was that of Tyler Jakovac, the majority of whose offending occurred when he was around 16 years of age (he was arrested at 18). Jakovac predominantly used Snapchat and Telegram to encourage others to kill “non-whites, Jews and Muslims”, to share Nazi symbolism, and to share bomb-making instructions. He posted a picture of military-style paraphernalia to Snapchat with the caption: “The shooting will take place on 3 of July, make sure to DM me for my Facebook for the livestream guys, I’m sick of [racial epithets], [derogatory term about homosexuality] and race-mixers”; he also called for “an upcoming race war to defend the white race”. Jakovac was arrested and charged with advocating terrorism and related offences after posting on Snapchat: “I’m actually going to commit a massive national tragedy”.
The case of brothers Josh and Ben Lucas is also relevant to this discussion. Both brothers idolised the Christchurch attacker and had made several online posts reflecting extreme far-right, white supremacist and anti-government ideology. The brothers expressed accelerationist enthusiasm for a race war, including producing videos on improvised explosive devices. The two were charged with acts in preparation for a terrorist attack.
Similar case studies demonstrate a high level of engagement with social media, as seen in the case of Jordan Patten. Patten, at age 19, allegedly plotted to kill a local politician and was formally linked to ‘Terrorgram’, which is said to have played a critical role in his radicalisation. After his plot was foiled, Patten shared a 200+ page manifesto in encrypted extremist messaging groups, citing the Christchurch attacker as his inspiration and including threats against the Australian Prime Minister’s family; meanwhile, Terrorgram members posted additional instructions advising prospective attackers on how to avoid the failures of Patten’s attempt. Patten was charged with conducting acts in preparation for a terrorist act.

Figure 2: An image of Jordan Patten in his military-style outfit taken from the livestream of his alleged incident. Source: https://www.theage.com.au/national/nsw/teen-charged-with-terrorism-offence-after-sharing-manifesto-carrying-knives-20240627-p5jp65.html
Emerging Trends in Australia: What Do We Know?
Key emerging trends show that youths involved in violent extremist or terrorism-related offences are getting younger, are typically Australian-born, and are “overwhelmingly male” (approximately 85%). ASIO reports the median age at investigation as 15 years old, with the AFP reporting the youngest child involved in its counter-terrorism investigations as merely 12 years old. Relatedly, Australia’s eSafety Commissioner found that online influencers are instrumental in shaping young men’s identities, with an Australian survey of over 1,300 young males finding that 25% viewed so-called ‘manfluencer’ and self-proclaimed misogynist Andrew Tate as a role model. This is particularly concerning given the rise of other extremist elements, such as online misogyny and self-proclaimed ‘incels’ in Australia, and extremists using narratives that specifically target anger and the associated shame, humiliation and resentment as a recruitment mechanism.
Shared factors of individual youth identified across the cases above, and those held within the PIRA database, frequently include a neurodiverse or mental health diagnosis, growing up in a disruptive or harmful environment, and facing ongoing social challenges throughout school. While these elements may not directly cause engagement in extremism, their combination—particularly when mixed with unstable or conflicting ideologies, a fixation on certain ideas, and increased exposure to harmful content online—can make one more susceptible to radicalisation. This is especially true for young people, increasing the likelihood of severe outcomes, such as engaging with harmful narratives, ideological violent extremism or coming into contact with the criminal justice system.
How extremists use technology, and how they amplify content, remains a concern with the rise of user-generated content, subliminal messaging, memes, and the increasing use of artificial intelligence (media spawning, fully synthetic propaganda, etc.). These tools, including small and large language models, enable the rapid spread of violent extremist content with greater ease and access, streamlining messaging directly to youth. In parallel, other online communities, particularly within gaming platforms, allow for global connections that quickly expand the reach of harmful ideologies and targeted recruitment, and can reinforce negative self-beliefs.
Widely available social media and other platforms serve as polarising spaces on both the surface web and dark web, offering extremist groups new opportunities to recruit and disseminate propaganda; extremists often use encrypted messaging apps and/or right-wing splinter groups to evade detection. Many youth continue to glorify Tarrant (the Christchurch attacker) as a role model, in addition to promoting other right-wing extremist ideals. Moreover, social media remains a key gateway, linking young Australians to global far-right influences and fostering the rapid spread of disinformation.
Online echo chambers and filter bubbles, across mainstream and closed platforms, gaming-related social media, and in-game messaging, can further contribute to the normalisation of harmful and violent narratives. Collectively, these trends create a complex, dynamic and evolving digital landscape, making it increasingly difficult to protect and safeguard vulnerable youth from radicalisation.
The Response and Responsibility of Tech Platforms
The 2019 Christchurch terrorist attack in New Zealand was felt deeply across Australia and served as a wake-up call, prompting both the Government and the tech sector to adapt their thinking on safety and security.
The attack highlighted significant changes in extremist methodologies, such as the severity and live-streamed nature of the attack, the rise of gamification and the glorification of extremist individuals, and flow-on concerns about youths’ digital literacy skills. This has resulted in numerous multi-stakeholder platforms focused on preventive work, such as the Christchurch Call.
Infamous far-right terrorist attacks have shown that when youth are in the audience, whether observing via livestream, generally exposed to extremist rhetoric, or encountering its glorification online, public expectations for safety increase. In stark contrast, technology companies often prioritise profit, and only by adjusting this focus can those safety expectations be met. Sometimes the tech sector responds slowly, or fails to respond at all.
Tech companies that engage with children and youth must reassess their approach to product development, particularly considering growing expectations around accountability and child safety. Companies like Roblox, for example, have sought to balance profit maximisation with broad audience appeal, but this has at times resulted in risky outcomes, such as the Australian Federal Police’s 2023 investigation into user-generated scenes depicting Nazi Germany. Roblox is, however, continually evolving its safety measures, recently announcing an “ambitious plan to expand age estimation to all users” in an effort to make the platform safer for younger users, alongside “100 safety initiatives” integrated since the beginning of 2025. On the government side, the Albanese government pledged a record $106.2 million (AUD) over four years for initiatives to counter violent extremism.
Trust and safety teams are tasked with managing massive volumes of online activity while striving to maintain a secure and age-appropriate environment—an increasingly challenging task in high-traffic digital spaces involving youth.
In light of the specific cases of alleged extremist incidents involving youth, the technology sector is taking significant steps within the Australian landscape, including further cross-sectoral collaboration and numerous initiatives.
For example, the Government is collaborating with the Online Harms Foundation, the Australian branch of Tech Against Terrorism, to launch a new 24/7 crisis response capability aimed at combating TVE. The Government is also working with other industry partners, such as Microsoft, to create Gaming Safety Toolkits and other measures.
Reflexive Reactions and the Role of Regulation in the Broader Australian Response
Australia has continued expanding its suite of legislative options, both to incorporate youth perspectives in online policy approaches to countering extremism and to improve the regulation of the online (and offline) threat environment. This has included, but is not limited to: the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth), in response to the Christchurch attack; the Surveillance Legislation Amendment (Identify and Disrupt) (SLAID) Act 2021 (Cth), to combat serious online crime (including radicalisation); and amendments to the Crimes Act 1914 (Cth) and Criminal Code Act 1995 (Cth) via the Counter-Terrorism Legislation Amendment (Prohibited Hate Symbols and Other Measures) Bill 2023 (Cth), designed to establish criminal offences for the public display or trading of prohibited Nazi-related and Islamic State symbols, and for using a carriage service to possess or control violent extremist material.
By extension, the Australian Government has also turned its attention to passing new legislation involving youth following the increase in such incidents.
Notable within this suite of legislative efforts is the Online Safety Act 2021 (Cth), which provides Australia’s eSafety Commissioner with powers to protect Australians from online harm, including by requiring digital platforms, search engines, app stores and internet service providers to remove, cease linking to, cease hosting or cease providing access to certain material that depicts, promotes, incites or instructs in terrorism.
More controversially, the continued consideration of raising the minimum age of criminal responsibility (noting numerous Australian cases involving alleged offenders aged between 12 and 16) raises concerns that increased restrictions may not reflect a public health approach. Relatedly, in an effort to curb exposure to harmful content, the Australian Labor government has passed new legislation, set to commence in December 2025, banning those under 16 years of age from social media, including platforms such as TikTok, Snapchat, and Instagram, with penalties for non-compliant technology platforms. Experts have presented mixed perspectives on the ban thus far.
As ASIO Director-General Burgess noted, we ‘cannot regulate our way to fewer grievances’; future-proofing legislation for the online environment, and bringing tech companies to the table more, can act both as a preventive measure and as a further safeguard against youth exposure to, and engagement with, extremism. However, the role of algorithmic amplification and the accountability of technology companies remain highly contested, even though there is proof of concept that online user journeys can be redirected away from pathways towards harmful content.
It is evident that friction between governments and the tech sector will remain for some time; the onus of navigating a system that balances technology companies’ ambitions with user safety has been left to the user, a burden which, unfortunately, often falls to youth.
—
Michaela Rana is an early-career counter-terrorism researcher and program and policy specialist. Michaela’s expertise focuses on countering violent extremism in fragile states, law enforcement, rehabilitation, and deradicalisation efforts. She brings extensive experience from the Australian Government, the United Nations Office on Drugs and Crime, and other international organisations.
—
To report a security concern within Australia:
National Security Hotline Australia
Phone Within Australia: 1800 123 400
Phone outside of Australia: +61 1300 123 401
SMS: +61 (0)429 771 822
Email: [email protected]
Australia’s eSafety Commissioner:
To report or refer online material to Australia’s eSafety Commissioner for review, please use this website: esafety.gov.au/report/forms
Engage with Kids Helpline
Phone: 1800 55 1800 (available 24/7)
Provides counselling for young Australians aged 5 to 25, no matter who they are, where they live or what they want to talk about.