
Hate Speech in the Context of Mass Atrocity Crimes

11th September 2020 Anton Peez

The May 2020 arrest of Félicien Kabuga brought an end to a manhunt spanning 26 years and two continents. Among other charges, the elusive alleged financier of the 1994 Rwandan genocide is accused of inciting genocide as the chairman of the infamous RTLM hate speech radio station (Radio-Télévision Libre des Mille Collines; Free Radio and Television of the Thousand Hills).

The Kabuga case shows that fugitives from International Criminal Law (ICL) can evade capture for decades, and it illustrates the importance of documenting hate speech for court proceedings if and when suspects are eventually arrested.

The Rwandan genocide has been called the “last genocide with a paper trail”, the last to rely heavily on physical paperwork. The proceedings of the UN’s International Criminal Tribunal for Rwanda (ICTR) also produced hundreds of pages of radio transcripts as evidence of incitement to genocide and hate speech on a massive scale, leading to numerous convictions.

Today, extremist hate and atrocity speech in the context of genocide and war crimes takes place and is spread online. However, social media platforms have been slow to respond to and document it, and to cooperate with international authorities in doing so. On 10 September 2020, Human Rights Watch published a 100-page report detailing international authorities’ difficulties in securing the cooperation of social media platforms – Twitter, Facebook, and Google – in gaining access to potential evidence. Perhaps most prominently, Facebook’s initially hesitant reaction to rampant hate speech in the Myanmar context has been roundly criticized, as has the company’s subsequent reluctance to support UN investigative bodies.

This piece highlights two aspects of this debate. First, it discusses hate speech on social media and platforms’ responses and responsibilities in the context of ongoing atrocities, summarising current ICL assessments. Second, it examines the emerging importance of social media hate speech as evidence in international criminal proceedings, and platforms’ troubling track records of removing activist evidence of war crimes. Such evidence is crucial for making the case against alleged perpetrators of genocide and crimes against humanity, even decades down the road, as the Kabuga case shows.

Hate speech in the context of genocide: media and social media

Parallels have been drawn between the Rwandan radio station RTLM’s role in fomenting racial hatred against the Tutsi minority over the airwaves on the one hand, and the spread of hate speech via Facebook during ongoing violence against the Rohingya in the Myanmar–Bangladesh border region on the other.

Since October 2016, the Myanmar military and police have destroyed Muslim Rohingya villages across Rakhine State, conducting “clearance operations” and allegedly perpetrating mass killings, widespread sexual violence, and gang rapes. Between 600,000 and one million Rohingya have been forcibly displaced. The events are the subject of ongoing proceedings regarding the applicability of the 1948 Genocide Convention at the International Court of Justice (ICJ), as well as the ongoing investigation of the matter by the International Criminal Court (ICC), both at The Hague.

In 1994, with clear evidence of mass murder in Rwanda, US policymakers decided against jamming RTLM, partially for reasons of free speech. In 2018, Facebook in particular was slow to react to calls for increased content moderation regarding Myanmar. In September 2018, a UN fact-finding mission quoted numerous Facebook posts by Myanmar officials as indications of hate speech and incitement. The mission’s report concluded that “Facebook has been a useful instrument for those seeking to spread hate” and that the company’s “response (…) has been slow and ineffective.” The company itself reported in August 2018 that it had only 60 content moderators for a country of nearly 55 million people and 20 million Facebook users. It later commissioned an impact assessment of its activities in Myanmar, published on the day of the 2018 US midterm elections.

Some scholars interpret the company’s slow reaction as complicity in atrocity crimes, while more optimistic assessments point to self-regulation as an opportunity to reduce hate speech spread via social media in the context of genocide. When a government fails to fulfil its obligation to protect its own population, proactive measures by social media platforms can certainly mitigate harm.

Social media posts as legal evidence of hate speech and incitement

However, Facebook’s poor record extends not only to content moderation in the face of ongoing alleged war crimes, but also to the collection and provision of evidence in their wake. The 2018 fact-finding commission “regret[ted] that Facebook [was] unable to provide country-specific data about the spread of hate speech on its platform.”

In August 2020, the head of a UN investigative body on the situation again explicitly criticised Facebook for withholding evidence relevant to the ongoing inquiry. The company responded by sharing some of the data in question, with the UN investigators confirming the receipt of a “first data set which partially complies with our previous requests.”

Unfortunately, this is not the only instance of inadequate cooperation by social media platforms in the aftermath of alleged atrocities and war crimes. Beyond Myanmar, both Facebook and YouTube have been criticised for removing potential evidence and documentation of war crimes from Syria and Nigeria, gathered and shared by activists.

In its September 2020 report, Human Rights Watch recommended setting up a restricted archive to preserve potential evidence and make it available to investigators, as well as civil society organisations and journalists. The report named the ICTR’s archives as a best practice for doing so.

While Facebook has made significant improvements in removing hate and toxic speech from public profiles, the importance of archiving, retaining, and providing evidence for eventual (international) criminal proceedings is evident. Social media platforms are the primary custodians of these data – but they have so far not been particularly cooperative or proactive regarding international enquiries.

Platforms must be cooperative and proactive in helping international investigations

Misinformation, extremism, and hate speech in the US and European contexts have seemingly been a higher priority for Facebook than elsewhere in the world. All this has led critics to conclude that Facebook applies a different standard of care to Asian users than to US and European ones in general, and in the Myanmar case in particular. The vast majority of the platform’s users are located outside the US and Europe, often in countries with precarious human rights situations regarding religious and racial hate speech, including India and Brazil.

The Kabuga case illustrates that well-connected and sheltered suspected genocidaires can take decades to catch. This makes the permanent documentation of often fleeting social media data particularly important. Here, timely and stringent content moderation and the longer-term gathering and provision of evidence need not be at odds. As extremist hate speech and incitement in the context of genocide and mass atrocities increasingly spreads via social media, international criminal justice depends on these platforms’ cooperation to bring alleged perpetrators to account.