
Technochauvinism and Online Extremism

24th February 2023 Robert Zipp

Introduction

In recent years, we’ve witnessed the continued rise of technochauvinism and its sibling phenomenon, ‘techno-solutionism’, a term that gained notoriety around 2013 after the publication of Evgeny Morozov’s book, To Save Everything, Click Here: The Folly of Technological Solutionism. The two related ideas both describe a belief that technology is always the answer, and that it is superior to more traditional methods of solving problems. At its core, technochauvinism relies on the assumption that ‘tech’ solutions are distinct from, and superior to, more traditional solutions that don’t involve a design thinking workshop or agile development teams. This is a false distinction. Technochauvinism assumes that tech can solve any problem effectively and optimally, provided the problem is defined well enough. While this mentality has led to transformations in sectors like public transportation and food delivery, it has also facilitated the rise of online extremism by encouraging social media firms to incentivise content engagement metrics that overlap with the engagement strategy of terrorist and violent extremist content (TVEC).

In response to online extremism, there is a tendency to hold tech companies solely responsible for the spread of TVEC on their platforms, without also engaging with offline approaches. In this Insight, I will examine the relationship between the technochauvinist mindset and its impact on the proliferation of TVEC online. I argue that comprehensively addressing online extremism will require governments and civil society actors as essential partners, filling the gaps left by the largest tech companies as they address the issues caused by the problem-solving lingua franca of Silicon Valley.

Limitations of Tech Company Solutions 

Interventions that have already been developed by large tech companies show signs of strong success. In Q1 of 2021, for example, Facebook is reported to have removed 99.6% of suspected terrorist content proactively – meaning TVEC was removed before it was shown to users on the platform. YouTube, Microsoft, and TikTok had all reached above 95% proactive removal rates in the same time period. 
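
As a rough illustration of what a ‘proactive removal rate’ measures, the sketch below computes the share of removed items that were actioned before any user saw them. The counts are invented for the example; they are not any platform’s actual figures.

```python
# Hypothetical moderation counts for one quarter (illustrative only,
# not a platform's real figures).
removed_before_any_views = 24_900   # TVEC items actioned before any user saw them
removed_after_user_reports = 100    # TVEC items actioned only after being reported

total_removed = removed_before_any_views + removed_after_user_reports
proactive_rate = removed_before_any_views / total_removed

print(f"Proactive removal rate: {proactive_rate:.1%}")  # -> 99.6%
```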

The success of these solutions by the largest tech companies may give the appearance that technological solutions are indeed the optimal way to solve the problem of TVEC spreading online. However, it would be an oversight to assume that because the largest platforms have been successful, all other tech platforms can and will follow suit. The technochauvinist mindset takes for granted that the issue will resolve itself as the sector adopts the gold standard set by ‘FAANG’ – “an acronym for the five best-performing American tech stocks in the market: Meta (formerly Facebook), Apple, Amazon, Netflix, and Alphabet”.

In reality, these giant firms rely on their economies of scale to justify the large overheads associated with developing robust response mechanisms to online TVEC. Many such tools rely on costly and complicated language-detection machine learning models developed by teams of data scientists – a space in which Facebook, Google, and Amazon are already pioneers. Smaller firms whose core product offerings sit outside data science cannot justify such an investment in their business models and are thus left at greater risk of TVEC spreading on their platforms.
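
To make concrete what a language-detection model involves at its simplest, here is a toy text-classification sketch using scikit-learn. The example strings, labels, and review threshold are invented for illustration; production systems at large platforms rely on far larger multilingual models, richer signals, and human review, which is precisely the overhead smaller firms struggle to justify.

```python
# Toy sketch of a text classifier for flagging potentially violative content.
# The training examples and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join our movement and take up arms against them",      # 1 = violates policy
    "watch my new cooking video for weeknight dinners",      # 0 = benign
    "we must drive the outsiders from the city by force",    # 1
    "highlights from last night's football match",           # 0
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; anything above a chosen threshold would be queued for review.
score = model.predict_proba(["take up arms and join the movement"])[0][1]
print(f"Policy-violation score: {score:.2f}")
```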

Further, some tech companies have implemented features that exacerbate the dissemination of TVEC. In October 2021, the Washington Post published a piece revealing that, upon their release in 2015, Facebook’s data teams weighted the newly introduced emoji reactions – ‘wow’, ‘sad’, ‘anger’, and ‘love’ – at five times the value of a traditional ‘like’. This shifted attention and views on the site towards content that sparked intense emotions. It is well established in the research that TVEC relies on emotional responses for credibility and engagement. At the intersection of TVEC and the optimisation of content algorithms, it has been reported that “[these] reactions allowed members of far-right extremist groups … to emotionally immerse themselves in extreme sub-cultures and politics, and signal their ideological affiliation to the broader in-group.” While Facebook has updated its recommendation algorithm since the publication of the Washington Post article, the general assumption remains that recommending content similar to that which a user has previously reacted to emotionally results in a more personalised experience, and this assumption backfires when one analyses how TVEC spreads online.
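
A minimal sketch helps show why the reported weighting matters: if emoji reactions count for five times a ‘like’, a post that provokes strong reactions can outrank a post with far more likes. The post data and scoring function below are hypothetical and not Facebook’s actual ranking code; only the five-times multiplier comes from the Washington Post reporting.

```python
# Illustrative ranking sketch: emoji reactions weighted five times a 'like',
# as reported by the Washington Post. Post data is invented for the example.
LIKE_WEIGHT = 1
EMOJI_REACTION_WEIGHT = 5  # applied to 'wow', 'sad', 'anger', 'love'

posts = [
    {"id": "calm_news_update",  "likes": 900, "emoji_reactions": 20},
    {"id": "outrage_bait_post", "likes": 150, "emoji_reactions": 300},
]

def engagement_score(post):
    return post["likes"] * LIKE_WEIGHT + post["emoji_reactions"] * EMOJI_REACTION_WEIGHT

for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
# The provocative post scores 150 + 300*5 = 1650 and outranks the calmer post
# at 900 + 20*5 = 1000, despite receiving far fewer likes.
```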

While tech companies’ content moderation strategies play an important role in tackling TVEC, it cannot be the remit of tech companies alone to solve this prolific issue. Beyond the technical challenges and the reluctance of some platforms to remove TVEC, content removal does not tackle the more complex problem of how and why TVEC is produced. Government and civil society action is therefore essential to fully addressing TVEC.

The Complexity of TVEC 

In identifying the unique challenge that combating TVEC presents, I introduce two distinct types of problems: complicated problems and complex problems. Complicated problems are those whose solutions are replicable. They are challenging “not only [because of] the scale of the problem, but also [due] to their increased requirements around coordination or specialised expertise.” However, once a solution has been found, it can be replicated for similar problems. For example, if a Facebook video goes viral because it garners positive engagement from users, repeating the same procedure for every video on the platform should, in theory, generate ever more satisfied users. When building a software solution, engineers are encouraged to reduce complications for better product outcomes. But this business model does not account for what happens when a video that provokes negative reactions goes viral and users report negative experiences. This side effect was ‘assumed out’ of the problem when it was solved with a technochauvinist solution.

Complex problems, however, are those that have multiple ambiguous inputs and outputs that don’t lend themselves to being reliably mathematically modelled. As Rick Nason writes, “Complex problems involve too many unknowns and too many interrelated factors to reduce to rules and processes.” Importantly, running the same process or procedure over and over will not produce the same result in a complex system. 

Here, I propose that we should conceive of TVEC as a complex problem. Strategies that succeed in identifying and removing TVEC on one platform or within one group may not be replicable across the board. TVEC may appear on its surface to be a complicated problem within the remit of engineering solutions by tech companies. However, research also shows that participation in extremist activities is psychological in nature and inherently tied to culture, emotion, and politics. The domain of emotional psychology is notably complex, and its application to extremism no less so. Complex challenges are traditionally the domain of governments and social services organisations. Governments and the nonprofit sector are not bound by a mandate to maximise profits as corporations are, and can measure their success against other, more human-centred objectives.

How and why radicalisation happens depends on a number of internal and external factors that are difficult to measure in totality. Because tech solutions assume that all problems are complicated, not complex, the challenge of combating online extremism will not be solved by software engineers on their own. For these reasons, tackling TVEC requires a coordinated partnership between tech companies and governments. One example of this in action is Australia’s eSafety Commissioner – a global leader in establishing these partnerships. The passage of the country’s Online Safety Act 2021 gave “eSafety new powers to require internet service providers to block access to material showing abhorrent violent conduct such as terrorist acts.”

Current and Future Initiatives

To ensure robust protections from TVEC across the sector, governments can and should take steps to equalise the online safety playing field. The Global Internet Forum to Counter Terrorism (GIFCT)’s 2021 report encourages larger companies to open-source their counter-extremist tools. It is feasible and reasonable to imagine moving towards mandating this kind of software as a universal digital public good – a space currently being pioneered by organisations such as the Digital Public Goods Alliance. In the same way that governments oversee building codes, fire safety, and public health, there is a unique opportunity for governments to ensure the digital safety and well-being of their citizens: by engaging more with the open-sourcing of tools that identify TVEC online, and by simultaneously addressing the real-world harms connected to the TVEC being produced and consumed in their countries.
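
To give a sense of what open-sourced counter-extremism tooling shared as a digital public good could look like in practice, the sketch below shows a small platform checking uploads against a shared hash list. Real deployments, such as GIFCT’s hash-sharing arrangements, use perceptual hashes so that near-duplicates still match; the exact-match SHA-256 check and the placeholder hash here are simplifications for illustration.

```python
# Simplified sketch: block uploads whose hash appears in a shared industry list.
# Real systems use perceptual hashing so edited copies still match; the hash
# below is a placeholder, not a real entry from any shared database.
import hashlib

SHARED_TVEC_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def should_block(upload_bytes: bytes) -> bool:
    digest = hashlib.sha256(upload_bytes).hexdigest()
    return digest in SHARED_TVEC_HASHES

print(should_block(b"example uploaded file contents"))  # False for this sample input
```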

More recently, the rapidly changing tech labour market has opened new possibilities for governments to work alongside the private sector in combating TVEC. As layoffs continue to spread across Silicon Valley, civic technology advocates are seizing the opportunity to offer stability to recently unemployed software engineers and to put their development skills to work on better public technology products. Programs like the U.S. Digital Corps and the Civic Innovation Corps are building pipelines into state, local, and federal agencies in the United States and can serve as a model for engaging tech workers in other country contexts. With more technical practitioners in the public sector, agencies will be better able to engage in deeper, more thoughtful dialogue with the private sector on issues like the harm caused by online TVEC.

Robert “Bobby” Zipp is a civic technologist and algorithmic accountability enthusiast working in public sector product management in New York. Prior to this, Bobby worked as an analyst in the United Nations Population Fund’s Policy & Strategy Division and as a community engagement volunteer with Code for America, focusing on connecting with local nonprofits and small businesses to promote the organisation’s GetCTC.org initiative. He is a regular contributor to public interest tech publications, such as All Tech is Human’s forthcoming Tech and Democracy working group report. He received a BA from Swarthmore College in English Literature, Political Science & Educational Studies in 2018 and an MS in Computer Science from Northeastern University in 2022.