In the run-up to the 2020 US presidential election, Facebook prepared for the potential spread of disinformation and election-related violence using tools it had developed for so-called ‘at risk’ countries such as Sri Lanka and Myanmar. This toolkit emerged as a consequence of the sustained persecution of the Rohingya population in Myanmar in 2018, when Buddhist nationalists incited hate speech and violence against the Muslim minority that quickly went viral on the platform and resulted in deadly offline spillover effects.
While toolkits developed to counter disinformation and algorithmic manipulation may be ‘tested’ (but not fully applied) in laboratories of the Global South, policies tackling far-right extremist and terrorist content in these regions are remarkably absent in comparison. CVE/CT policies targeting Islamist content are already well integrated into the system designs of content moderation, but only in the past couple of years, following the Christchurch Call, has the far-right come to be considered an equally important global security threat. The main blind spot is that the far-right is broadly understood by tech companies—as well as governments and the public at large—to be Western-based and centred primarily around white identity. This creates a major gap: tech policies need to recognise the far-right as a global problem.
Take, for instance, the recent arrest of a 16-year-old Singaporean teenager of Indian Christian descent who attempted to carry out an attack at a local mosque. It was revealed that he had been inspired by the Christchurch shooter after becoming radicalised online through exposure to extreme Islamophobic and violent content. Although a confounding case, the underlying assumption among Singaporean experts was that far-right ideology had been ‘imported’ to Singapore as a Western product. This reveals problematic assumptions about what is considered to be far-right in the first place. If the experts and academics who advise governments and tech companies only consider the far-right to be of Western origin, the result will be policies that fall short of addressing the complete picture.
The far-right is not just an expression of white supremacist extremism or neo-Nazism, as it is commonly manifested in North America and Europe. It is much broader in scope: spanning from anti-Korean ultranationalist activists in Japan, to paramilitary Hindu nationalist organisations in India, to right-wing populist leaders in Latin America. All of these far-right movements share the ideological features of ethnic nationalism, nativism, and a desire for authoritarianism achieved through law and order. And like their Western counterparts, far-right extremist actors in these regions also recruit, radicalise, and mobilise using online platforms.
Tech company policies addressing far-right extremist and terrorist content on platforms need to treat the far-right as the global phenomenon it is, not cater only to Europe and North America—and certainly not relegate it to a separate category of so-called ‘ethnic unrest’. This reflects a much larger issue: the Western-centric focus on European and North American users and audiences in the design and implementation of tech policies to counter far-right extremism and terrorism.
Work commissioned by the Global Internet Forum to Counter Terrorism (GIFCT) (GNET’s parent organisation) in 2019 reveals the research priorities set forth for implementing policy recommendations to counter terrorist and extremist content by its member companies. One project commissioned by GIFCT comprises counter-violent extremist online campaign toolkits developed in collaboration with the London-based Institute for Strategic Dialogue. The toolkits included in the category of ‘far-right extremism’ are all targeted towards audiences in the UK, Europe, or North America. While these efforts are important to undertake, civil society organisations across the world, which are often under-funded and under-resourced, could also benefit from this kind of relationship with tech companies to develop toolkits countering far-right extremism in diverse contexts. One potential partner could be the Indian organisation Karwan-e-Mohabbat (Caravan of Love), which runs ongoing national campaigns in solidarity with victims of far-right mob lynchings and those affected by communal violence. With an already active online presence, Karwan-e-Mohabbat aims to spread awareness of the consequences of hate, and would greatly benefit from tech company support in developing and implementing an online campaign toolkit with a diverse network of civil society actors.
Meanwhile, a second project commissioned by GIFCT features a series of reports in partnership with the Royal United Services Institute (RUSI) with the aim “to better understand radicalization, recruitment, and the myriad of ways terrorist entities use the digital space.” These reports focus on either jihadism or the far-right. However, while the geographical coverage of jihadism includes case studies in South Asia (Pakistan, Bangladesh, Sri Lanka), Southeast Asia (Indonesia and the Philippines), and the Middle East (Islamic State), the only country case study of the far-right concerns the UK. This reflects a disproportionate focus on jihadist content compared to far-right content. Efforts brought forth by GNET since then have included more geographical diversity, with Insight blogs covering case studies of the far-right in Latin America and India, and reports on the Asian context.
A related challenge concerns tech companies’ reliance on international and national government lists of proscribed or designated terrorist groups to assess the appropriate removal or moderation of content. While such lists make it easier to cross-verify, terrorist group designations carry limitations and biases. An initial restriction is the fluidity of proscription: groups may be banned at one point and delisted at a later time (or disbanded members may simply form a new group). GIFCT relies upon the UN Security Council’s list of sanctioned extremist groups, entities, and individuals to determine its member companies’ policies on terrorism. Yet the list includes no far-right extremist group or individual, thus creating no obligation to moderate far-right content as there is for jihadist content.
Perhaps more importantly, evidence shows that the online threat landscape of the far-right is no longer dominated by groups, but increasingly by unaffiliated users in scattered networks. These users are adept at navigating social media regulation by modifying their behaviour on platforms: using coded language, exploiting weaknesses in the infrastructure, and identifying discrepancies between industry and government frameworks.
A final complication is that far-right extremist or terrorist content originating from the Global South can often be state-sponsored. Definitions of terrorism predicated on the primary involvement of non-state actors thus create the possibility for government actors to circumvent responsibility for far-right extremist activity. This poses a serious dilemma if initiatives such as GIFCT and the Christchurch Call depend on government cooperation to effectively combat far-right online extremism and terrorism. As a result, more collaboration with civil society organisations that operate with a human rights focus—inclusive and democratically oriented stakeholders—should be prioritised.