Earlier this week, the outgoing director general of the UK’s domestic intelligence agency, Sir Andrew Parker, suggested that technology is one of the biggest challenges facing the UK’s Security Services. Sir Andrew said he was particularly interested in artificial intelligence “because of our need to be able to make sense of the data lives of thousands of people in as near to real time as we can get to.” More pertinently, when outlining key security threats to the UK, Parker noted – among others – the rise of radical right extremism as a key trend to watch, one that accounted for half of the six terror plots foiled by the UK intelligence agencies in 2019.
Based on my own research into violent radical right extremist groups online, the past ten years have certainly seen a sea change in how the far-right mobilises, uses, and weaponises the online space for its activism and campaigns. First highlighted by Julia Ebner and Jacob Davey in a 2017 study of cross-national campaigns, this shift has seen radical right groups collaborating online and adopting a swarm-like structure in order to influence elections, attract worldwide media attention, and smear political opponents (Ganesh 2018). More worryingly, these studies have found that radical right networks have used (previously classified) psychological warfare tactics to amplify and manipulate online conversations in order to influence electoral outcomes in Germany, Italy and Sweden (Ebner and Davey 2017, 2019; Colliver et al. 2018). Such swarm-like activities therefore present a new challenge to intelligence communities and tech companies as they seek to combat the amplification of radical right propaganda on the Internet.
More foundational to these online influence campaigns, however, is a deeper shift in how the far-right uses the Internet. No longer are radical right groups content simply to talk amongst themselves, as was the case on the early Internet’s bulletin boards, chat forums, and closed online spaces. Increasingly, these actors have taken advantage of ‘likes’, ‘retweets’ and ‘pins’ to disseminate (usually sanitised) versions of their messages to a wider audience. What makes this content problematic is its often banal and coded nature, invoking notions of tradition, heritage, and support for the military to boost followership and widen the pool of those exposed to more nativist narratives and messaging (Brindle & MacMillan 2017; Copsey 2017). Added to this, radical right actors have exploited the voyeuristic nature of the ‘live-broadcast’ function on sites like Facebook and Twitch, using it to video attacks on journalists and politicians as well as minorities – with the effects of the latter only too apparent in the wake of the tragic March 2019 Christchurch shooting.
The far-right’s use of technology and the Internet is therefore at a fundamental crossroads that policymakers must grasp. This dialogic turn – the far-right reaching out beyond its own cadres – has fostered the dissemination of ethnic and cultural threat frames on a transnational basis, making them part of the toxic identity politics that has permeated the online space and spurred individuals on to further violent action (à la El Paso, Poway and Halle) (Koehler 2018). This can only be stopped by a concerted effort between governments and tech companies to interdict permissive online environments and identify individuals at risk of socialisation into violent extremism on the Internet. The million-dollar question here, of course, is not ‘if’ but ‘when’ violent words turn into violent deeds (Macklin 2019).