
Part 2: Algorithmic Agency in Online Extremism: The Bigger Picture
21st September 2020 | Dr. Yvonne Jazz Rowa

The rapid pace of technological evolution, together with the tactics of adversarial attacks, confers epistemic authority on data scientists. These experts play both visible and invisible roles that transcend their field in their engagement with the broader public, with political implications. One consequence is that experts prescribe techno-utopian solutions to counter radical threats. The technification of cybersecurity promotes epistemic authority and political legitimacy. Technification is not exclusive to the cyber sector; it also inhabits other fields, such as the environmental and military spaces. Technifications are "speech acts that 'do something'" rather than merely describe. They construct an issue as reliant upon technical, expert knowledge, while simultaneously presupposing a politically and normatively neutral agenda that technology serves. Technification casts the technical as a field requiring expertise that most politicians and the public lack. As a result, experts habitually differentiate themselves from the politicking of political actors. In reality, technification may obscure its political character behind rational and technical discourse. The technical is therefore closely intertwined with the language of 'neutral' technologies and invisible power.

While social media networks have been instrumental in connecting the world, they have also afforded a platform to actors with nefarious goals. Beyond manifest online extremist content, online social dynamics expose the (non)discursive power of algorithms and their unforeseen and unintended consequences. Some technology industry defectors have spoken out about the premeditated design of some online platforms, geared towards the exploitation of human vulnerabilities with debilitating effects. There is also a growing body of evidence on inter-state cyberwarfare and cyberthreats from extremist movements. The resultant technopolitical tensions have raised concerns over social media misinformation and threats to democracy and autocracy alike. Online extremism should ideally be viewed within this broader context.

The use of encrypted applications complicates government surveillance of illegal activities by non-state actors. To illustrate, there have been reports of online radicalisation on encrypted applications such as Skype and Telegram that facilitated the travel of foreign fighters to Syria. Interestingly, previously shelved internal documents show that a prominent technology company invested years of research into its own role in polarisation. The data shows that while some online communities engage in constructive discussions, others inspire impulses for conflict, spread misinformation and vilify the 'other'. The study further reveals that the company's recommendation tools accounted for 64% of the membership of extremist groups during the study period. The report additionally acknowledges that "Our algorithms exploit the human brain's attraction to divisiveness." Despite these concerns, technology companies have made reasonable strides towards reform in recent years, but more needs to be done.

What is also taking shape in the margins of mainstream technology is a borderless online state with its own subculture, one that poses a threat to the nation-state and to those constructed as 'others'. The dark web, for example, accessed through downloadable software, supports encrypted communication and is associated with both licit and illicit activity. Evidently, the challenge for social media companies has been taming technological platforms that have become ungovernable, command excessive powers and mediate digital battlegrounds. Within this schema, algorithmic agency compounds the problem of entrapment in ideological silos. Moreover, algorithms exert considerable influence on ideological constructions and on the simulation of an ideal world with real-world impact, as observed in incidents of extremist violence. The proliferation of conspiracy theories on social media is similarly demonstrative of the mercurial influence of the algorithm. Online 'truth seekers' typically commune around popular subjects ranging from Reptilian colonisation and red-pill antifeminism (the manosphere) to 'the great replacement'. While media technology has played a crucial role in malignant online subcultures, it has similarly enhanced digital freedoms that advance the articulation of grievances likely to be repressed in the offline world. Antagonistic online interactions expose latent societal schisms and present opportunities for intervention. Besides, social media companies have demonstrated stewardship in monitoring and taking down extremist content that in some instances intersects with conspiratorial thinking. Continued improvements in technology are expected to gradually resolve emergent, unforeseen teething problems in design. Some budding proactive measures currently taking root are likely to mitigate part of the problem.
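To make the silo dynamic concrete, consider a minimal, purely illustrative sketch of an engagement-optimised recommender. Nothing here reflects any real platform's system; the item pool, the toy user model and every number are assumptions for demonstration. The point is that narrowing exposure can emerge from a neutral-seeming objective (predicted engagement) without ever being encoded as a goal.

```python
import random

# Hypothetical item pool: each item carries an ideological "leaning" in [-1, 1].
ITEMS = [{"id": i, "leaning": random.uniform(-1, 1)} for i in range(500)]

def engagement_probability(user_leaning, item_leaning):
    """Toy assumption: users engage more with items near their own leaning."""
    return max(0.0, 1.0 - abs(user_leaning - item_leaning))

def recommend(history, n=10):
    """Rank items by predicted engagement, inferred from past clicks."""
    if not history:
        return random.sample(ITEMS, n)  # cold start: random exposure
    est = sum(item["leaning"] for item in history) / len(history)
    ranked = sorted(ITEMS, key=lambda it: -engagement_probability(est, it["leaning"]))
    return ranked[:n]

# Simulate one user (true leaning assumed to be 0.3) over repeated sessions.
history = []
for _ in range(20):
    for item in recommend(history):
        if random.random() < engagement_probability(0.3, item["leaning"]):
            history.append(item)

# The leanings the user ends up engaging with typically collapse around 0.3:
# an emergent "silo", even though diversity was never penalised explicitly.
recent = [it["leaning"] for it in history[-50:]]
if recent:
    print(f"recent engagement leaning range: {min(recent):.2f} to {max(recent):.2f}")
```

Real recommender systems are vastly more complex, but the feedback loop sketched here, inferring a preference, serving more of it, then re-inferring from the result, is the mechanism critics point to when they speak of algorithmic entrapment.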

With the capacity to model language, algorithms also have the capability to command and influence processes of governance. Notably, algorithms continue to solve problems related to security and service delivery that have long plagued governments. While initiatives such as predictive policing have limitations, these capabilities are increasingly inimical to the raison d'être of the nation-state. In some cases, government inefficiency has bred public discontent, and the timely intervention of AI where the state has failed may thus threaten state legitimacy. Twitter and Facebook, for example, were instrumental in galvanising the masses during the Arab Spring, popularly known as the 'Facebook Revolution'. In some countries, these protests consolidated authoritarianism and oxygenated Islamist movements. The utility and limitations of social media are further demonstrated by Cambridge Analytica's interference in the Kenyan and U.S. elections, which fuelled political polarisation. Social media platforms have thus not only threatened democracy but also influenced politics in dictatorships. The algorithms that power these platforms are therefore significantly beneficial and cooperative; yet they are also pervasively adversarial in their capacity to influence relationships, occasionally skewing them towards disruption.

Algorithms also present a significant risk of computational propaganda, which may in turn induce computational radicalisation. Computational propaganda is 'the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks'. The character of social media networks makes algorithmic outcomes hard to predict for designers and publics alike; unintended effects should therefore be anticipated. Typically, political actors use autonomous programmes to spread propaganda and influence public opinion. For example, bots, and the obscure algorithms that propel them, mimic humans on social media platforms and produce substantial volumes of content on topical issues. The deployment of political bots produces fake reviews, intensifies attacks on opponents, overwhelms activist discourse and inflates follower numbers, retweets and likes. Consequently, interactions on digital platforms have moulded political identity and bolstered the rise of both progressive and radical social movements. This dynamic is increasingly palpable in the explosion of COVID-19 conspiracy theories, which have at times translated into actual attacks. The theory that 5G causes COVID-19, for instance, prompted a spate of attacks on phone mast installations in the UK. Some of these theories, reinforced by right-wing political elites, may prove consequential post-COVID-19. Algorithms, in a similar manner to speech, are therefore performative and have the capacity for perlocutionary effects.
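Because automated amplification leaves statistical fingerprints, researchers often screen for it with simple heuristics before applying heavier machine-learned classifiers. The sketch below is a deliberately naive illustration of two such signals, posting cadence and near-duplicate content; the thresholds, weights and example data are assumptions for illustration, not any platform's actual detection logic.

```python
from datetime import datetime
from difflib import SequenceMatcher

def cadence_score(timestamps, max_daily=72):
    """Fraction by which posting frequency exceeds a plausible human rate.

    `max_daily` (72 posts/day, roughly 3 per waking hour) is an assumed
    threshold used purely for illustration.
    """
    if len(timestamps) < 2:
        return 0.0
    days = max((max(timestamps) - min(timestamps)).total_seconds() / 86400, 1e-9)
    rate = len(timestamps) / days
    return min(rate / max_daily, 1.0)

def duplication_score(posts):
    """Mean pairwise text similarity; copy-paste amplification pushes this
    towards 1.0."""
    if len(posts) < 2:
        return 0.0
    pairs = [(a, b) for i, a in enumerate(posts) for b in posts[i + 1:]]
    sims = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(sims) / len(sims)

def bot_likeness(timestamps, posts):
    """Naive combined score in [0, 1]; equal weighting is an assumption."""
    return 0.5 * cadence_score(timestamps) + 0.5 * duplication_score(posts)

# Hypothetical account: six near-identical posts within one hour.
ts = [datetime(2020, 9, 21, 12, m) for m in range(0, 60, 10)]
posts = ["Share this everywhere! #truth"] * 5 + ["Share this now!! #truth"]
print(f"bot-likeness: {bot_likeness(ts, posts):.2f}")  # close to 1.0
```

Production-grade detection relies on far richer features, such as account age, follower graphs and coordinated timing across accounts, but the underlying idea is the same: automation behaves unlike humans in measurable ways.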

An emerging question is how media technology can further be harnessed for good, since the problems it presents offer opportunities to interrogate and improve established systems that have induced popular disaffection. The argument that social media is toxic, while attractive and valid to some extent, masks individual agency and responsibility in the consumption of media technology. It also conceals the structural problems underlying the adversarial interactions conveyed via social media. Bureaucratic posturing on social media algorithms is for the most part reductive, suppressing the root causes of problems that merely manifest through social media. Algorithms play a dual role as exhibitors and concealers, at once conveying and buffering deep-seated systemic problems. As such, a holistic approach that addresses systemic inequalities in social, political, economic, environmental, technological and other societal structures is imperative.