

Online Terrorist Content: Is it Time for an Independent Regulator?
16th November 2020 Dr. Patrick Bishop

In April 2019, the U.K. Government published its Online Harms white paper. This proposed the creation of a new independent regulatory body, whose task would be to enforce a new statutory duty of care on relevant companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services. Other countries have also proposed new regulatory approaches to online terrorist content, as has the European Commission.

In this short piece – which is based on our previous work on this topic – our starting point is not whether a new independent regulator should be established, but what form a new regulatory regime might take. The reason for this is that we believe it is necessary to answer the ‘what form’ question in order to be able to discuss the ‘whether’ question meaningfully. In our opinion, to be effective a new regulatory regime must possess three key features.

First, there is no one-size-fits-all regulatory intervention, and so a diverse regulatory toolkit is essential. This toolkit should include advice and guidance, removal orders, fines, disruption of business activities and ISP blocking.

A number of platforms, such as Facebook, already use AI to enforce their prohibitions on content that promotes terrorism. There are also cross-platform initiatives, such as the hash-sharing database of the Global Internet Forum to Counter Terrorism (GIFCT). These efforts have been criticised, including by the European Commission and the Intelligence and Security Committee of the U.K. Parliament, as insufficient. Some companies lack the capacity and resources to regulate the content on their platforms effectively. For these platforms, the central role of a regulator would be to enable self-regulation by the provision of advice and guidance. Indeed, one of the objectives of the GIFCT is to build the capacity of smaller platforms by knowledge-sharing. To this end, it has collaborated with Tech Against Terrorism to launch a knowledge-sharing platform. At the same time, many smaller platforms also lack the capacity required to fulfil the GIFCT membership criteria. These criteria include: terms of service that set out content standards; regular, public transparency reports; a public commitment to human rights; and support for civil society organisations challenging violent extremism.
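To illustrate the underlying mechanism, the sketch below shows the basic principle of a shared hash database: participating platforms contribute digital fingerprints (hashes) of content already identified as terrorist material, and new uploads are checked against those fingerprints. This is a minimal, hypothetical Python sketch; the names, placeholder value and matching logic are illustrative only. In practice, systems such as the GIFCT database rely on perceptual hashes designed to catch near-duplicate images and videos, whereas the example below uses a simple cryptographic hash purely to demonstrate the lookup step.

```python
import hashlib

# Hypothetical shared database of hashes contributed by participating
# platforms. The single entry below is the SHA-256 digest of b"test",
# included purely so the example produces a match when run.
SHARED_HASH_DATABASE = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(content: bytes) -> str:
    """Return a hex digest that acts as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

def matches_shared_database(content: bytes) -> bool:
    """Check an upload against the shared database of known material."""
    return fingerprint(content) in SHARED_HASH_DATABASE

if __name__ == "__main__":
    upload = b"test"
    if matches_shared_database(upload):
        print("Match: flag for review or block the upload.")
    else:
        print("No match: continue with normal moderation.")
```

Because an exact cryptographic hash changes completely if even a single byte of a file changes, real deployments favour perceptual hashing to catch modified re-uploads; building and operating that kind of tooling is precisely the sort of capacity that smaller platforms may lack without outside support.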

Moreover, as the Christchurch attacks illustrated, some platforms lack the willingness to self-regulate, meaning other forms of intervention are required. None of these is the proverbial silver bullet. Removal orders, with fines in the event of non-compliance, are a feature of many regulatory regimes, including Germany’s NetzDG law. As well as an economic impact, the imposition of a fine could affect the platform’s attractiveness as an advertising space. On the other hand, there may be difficulties enforcing fines against companies that are registered outside the jurisdiction, and the biggest companies may simply absorb fines as an additional cost of doing business. Fining individuals within a company’s senior management has also been touted as a possibility, including by the U.K.’s Online Harms white paper, but this too faces a number of difficulties. These include identifying the roles to which liability might attach and proving individual responsibility within the context of a complex management structure.

A more draconian enforcement option would be the disruption of business activities. This could include requiring third parties to withdraw services from the transgressing company, including removal from search results and app stores, and the cancellation of a range of ancillary services such as domain name registration and payment processing. For example, following the October 2018 Pittsburgh Synagogue shooting, GoDaddy refused to be further associated with Gab, forcing it to find another domain provider. Similarly, following the August 2019 shooting in El Paso, Texas, Cloudflare announced that it would no longer offer the message board 8chan protection from distributed denial of service attacks. Actions such as these might cause the impugned company to change its behaviour or force it to seek another service provider. For a smaller platform, this could impose a significant cost and/or restrict its future growth. However, the impact is likely to be smaller for the biggest companies, which already have very large existing user bases.

The most draconian option – ISP blocking – raises significant issues. Blocking access to the biggest platforms would cause public outcry, have a significant socio-economic impact and constitute a disproportionate interference with the right to freedom of expression. There are also technological challenges, not least the fact that many users would have sufficient know-how to circumvent such measures. Nonetheless, in cases involving smaller, uncooperative platforms, the threat of ISP blocking may prove a useful way of incentivising engagement with less severe interventions.

The second key feature of any regulatory regime is that it must be responsive to a range of factors. Different platforms within the online ecosystem offer a variety of different services. These platforms have widely differing numbers of users, as well as different levels of resources and capacity. They are also likely to display differing levels of cooperation and commitment to the overarching objectives of any regulatory regime. For a regulatory regime to be effective, it must be responsive to these various factors. The benefits of a responsive approach to regulation, not least its effectiveness, are well-established in academic literature.

This leads to the third key feature, which is that, to realise the benefits of a responsive approach, the different interventions outlined above should be organised into a pyramid structure, with each successive layer consisting of more severe interventions. At the bottom of the pyramid would be advice and guidance, aimed at supporting companies’ efforts to self-regulate. The next layers would be removal orders and fines, then disruption of business activities, with ISP blocking as a last resort at the top. The rationale of such an enforcement pyramid is that companies are more likely to engage with the more persuasive, dialogic interventions at the base of the pyramid when faced with the prospect of escalation and increasingly severe penalties. In short, regulators are able to speak more softly when they carry a big stick.

Of course, even the most carefully designed regulatory framework will not lead to perfect compliance. Efforts to tackle online harms face acute jurisdictional challenges. Terrorist groups and sympathisers may circumvent regulatory regimes that are limited to a single jurisdiction, whether by using technology such as the Tor browser or by moving to other platforms or jurisdictions that are relatively unregulated. At the other extreme, securing international agreement on a global approach is beset with difficulties. The prospects for a regional approach may be more favourable. Ultimately, however, the fact that a regulatory regime will not achieve perfect compliance is not a reason not to enact the regime in the first place. As Berger and Morgan have stated, “The consequences of neglecting to weed a garden are obvious, even though weeds will always return.”