
Paved With Good Intentions: How the EU Legislative Process has Placed Internet Voluntary Counter Extremism Near the Edge of the Legal Wilderness
10th December 2020 | Dr Victoria McCloud

The modest progress towards the modernisation and replacement of the current EU ePrivacy Directive has led to an uncomfortable legislative position which has brought with it headlines such as the New York Times’ “E.U. Privacy Rule Would Rein In the Hunt for Online Child Sexual Abuse” as recently as 4 December. In this paper I consider how this state of affairs has arisen, what it may say about EU ‘rule and enforce’ style regulation, and why it matters, both for global counterextremism/counterterrorism and for the debate around where we place the fulcrum in the balance between the privacy of end users’ communications on the one hand and the ability of non-State service providers to detect, store and report suspected extremist or terrorism-related material on the other.

Since the early 2000s, when the EU addressed the subject of privacy rights in relation to the interception of electronic communications via the 2002 ePrivacy Directive, the variety of ways in which end users can engage in user-to-user communication has grown substantially, and with it the potential for extremists to exploit such services.

We now see a range of independent ‘over the top’ (OTT) interpersonal communications services, such as VoIP, WhatsApp and FaceTime, which bypass conventional forms of direct communication (such as SMS) over public communications networks in favour of functionally equivalent services superimposed on the existing internet architecture. The 2002 Directive was not drafted with such services in mind as part of its concept of electronic communications, and the privacy rights of end users of such services have been taken to be governed by the General Data Protection Regulation (GDPR), as part of general personal data protection. Discussion of a replacement for the ePrivacy Directive is ongoing and may produce legislation in 2021; at present it takes the form of a draft proposed Regulation, the latest version of which was circulated in November by the Presidency of the EU, currently held in rotation by Germany.

Service providers who operate OTT communication services have been taken to have a lawful basis, under the GDPR, for implementing technical means to monitor, collect, report (and delete) extremist, terrorist or child abuse-related messaging without being treated as in breach of the relevant end users’ privacy rights. That instrument does not expressly specify the lawfulness of such voluntary activity, but a logical basis is Art. 6 GDPR, under which the processing of personal data is lawful (among other grounds) where necessary for the purposes of the legitimate interests pursued by the controller or by a third party. Recital 50 of the GDPR says in terms that “Indicating possible criminal acts or threats to public security by the controller and transmitting the relevant personal data in individual cases … to a competent authority should be regarded as being in the legitimate interest pursued by the controller.” An opinion given by the Article 29 Working Party in 2014 concurred that the ‘legitimate interests’ of the controller “may include situations where a controller goes beyond its specific legal obligations set in laws and regulations to assist law enforcement or private stakeholders in their efforts to combat illegal activities […].”

Service providers have been strongly encouraged by governments to be on the lookout for terrorist material and to deal with it (see for example a speech by David Cameron, then Prime Minister of the UK: “… we need our internet companies to go further in helping us identify potential terrorists online. Many of their commercial models are built around monitoring platforms for personal data, packaging it up and selling it on to third parties. … But when it comes to doing what’s right in the fight against terrorism, we too often hear that it’s all too difficult. Well I’m sorry – I just don’t buy that.”).

On 21 December 2020, with the entry into application of the European Electronic Communications Code (“EECC”) Directive, OTT services will fall within the scope of “interpersonal communication services” as a subset of “electronic communications” (see recital 15 of the EECC), and hence will be covered by the 2002 ePrivacy Directive. Yet the ‘legitimate interests’ exception found in the GDPR is not contained in that Directive; instead, the lawfulness of such voluntary counterterrorism activity by service providers will arguably depend on whether, and if so what, national-level measures a given member state has enacted to allow for exceptions to privacy rights (as Art. 15 of the 2002 Directive permits). Not all member states have felt the need to do so, let alone in relation to newer OTT services. The net result: a risk that after 21 December 2020 service providers may take the position that they are on shaky ground in terms of voluntary communications monitoring, retention and transfer relating to OTT communications, even where the objective is to detect online extremism and terrorism.

In an effort to resolve one aspect of this, pending the stalled new ePrivacy Regulation, a swiftly constructed interim Regulation was proposed in September which expressly re-applies the GDPR to child abuse-related material, but not to extremist or terrorist material. Even so, it has become apparent that there is little prospect of the interim Regulation being agreed in time for the 21 December date (see, for example, reported comments of MEP Birgit Sippel). In November the European Data Protection Supervisor (EDPS) issued an opinion on the proposed interim Regulation, relating solely to child abuse, which strongly opposed adoption unless far more specific provisions (‘a comprehensive legal framework’) were included:

“… the issues at stake are not specific to the fight against child abuse but to any initiative aiming at collaboration of the private sector for law enforcement purposes. If adopted, the Proposal will inevitably serve as a precedent for future legislation in this field. The EDPS therefore considers it essential that the Proposal is not adopted, even in the form of a temporary derogation, until all the necessary safeguards set out in this Opinion are integrated.”

All of the above is rather ‘of a piece’ with the position taken in October 2020 by the CJEU in the ‘Privacy International’ cases concerning bulk data surveillance by states in the hope of detecting new extremist or terror threats. The court held that EU member states may not deploy legislation which requires electronic communications providers to perform general (ie ‘bulk’) transmission or retention of communications data, even where it is for the purpose of safeguarding national security or preventing crime in general. This poses problems where, as in the UK, “a fundamental feature of the [security and intelligence agencies’] use of [bulk communications data] is to discover previously unknown threats to national security by means of non-targeted bulk techniques which are reliant upon the aggregation of [those data] in one place”, in other words, the aggregation of traffic data to try to detect new threats rather than threats already known or foreseeable.

The position approaching on 21 December matters because it will not be lost on those who use OTT communications for terrorist purposes that such a lacuna in EU law may be seen as creating a relaxed space in which their messaging can remain undetected. Quite apart from that risk, the mere fact of a confused and patchy legal position may, from 21 December, start to exert a chilling effect on service providers if they become concerned at being seen to infringe EU law.

Perhaps this also says something about the approach to EU regulation in this space. The advocacy within the European Data Protection Supervisor’s opinion referred to above, and the tenor of the Privacy International cases, point to a heavy weighting of privacy rights and to very specific rules governing exceptions, backed by the kind of significant fines available, for example, within the current draft of the proposed new ePrivacy Regulation.

I suggest that it is important to take a step back and consider what one is dealing with: there are respectable arguments that the voluntary detection and reporting of terrorist and extremist threats is a public good and deserves encouragement if done ethically. Where responsible service providers have a record of acting in ways shown by evidence to be proportionate, secure, and genuinely focussed on the detection of such serious threats, a system which focusses enforcement more closely on providers lacking such an evidence-based track record might lower the risk that uncertainty and an arguably punitive outlook will damage the willingness of responsible providers to engage in voluntary detection activity. It is notable that Florence Keen’s 2019 report for GRNTT indicated that “public–private collaboration on CTF [counter-terrorist financing] suggests that voluntary collaboration is the most likely to lead to positive outcomes, as all parties should participate in good faith. A voluntary model is likely to be less formal, which is important to encourage participation from smaller platforms.”

Dr Victoria McCloud is a lawyer and psychologist who is pursuing research at the Centre for Socio-Legal Studies, Oxford University. Her interests include technology and internet use for political and extremist ends, and ethical approaches to regulation. She is also a judge in the High Court in London. The analysis here is personal and not given in a judicial capacity.