Could better monitoring have prevented a terror attack?
If we recap the spiral of events that led to the beheading of the French schoolteacher Samuel Paty on 16 October, after he showed cartoons of the Prophet Muhammad to his students, we can observe two parallel dynamics. On the one hand, an online campaign, which went viral and ultimately reached a large audience, designated and singled out a name, Samuel Paty, and a place, the school. On the other hand, the assailant’s social media activity pointed to a radicalisation process that had deepened over the preceding year, with several illicit posts that were reported and deleted. The attack was in effect the consequence of the “encounter” between these two phenomena, a kind of “supply and demand” for a target: on one side, the spotlight placed on Samuel Paty, which some, including the assailant, interpreted as a “supply” of a target; on the other, the “demand” for a target on the part of the assailant, an 18-year-old man of Chechen descent.
What elements, at either end, could have raised the alarm before the online threat spilled over into real life? What RetEx (retour d’expérience, or lessons-learned review) can we draw from this event, and what challenges are encountered along the way?
Samuel Paty Was Singled Out in a Virulent Online Campaign
An online campaign, initiated by the father of one of Paty’s students, started on 7 October, two days after his class on freedom of speech. In a first video released on Facebook, Brahim C. expressed his anger at the teacher for showing cartoons of the Prophet Muhammad to his class. He also accused him of discrimination for allegedly asking Muslim students to leave the room (a claim that has since been disproved: the teacher in fact gave his students the choice of looking at the cartoons or not, if doing so might offend them). On 8 October, he published a second video, later deleted, displaying Paty’s name and giving his audience his own phone number so that they could contact him in private. The campaign was then amplified by a second man, Abdelhakim Sefrioui, radical leader of the “Sheikh Yassin” collective, a pro-Palestinian association named after a founder of Hamas and known to the French intelligence services. In a first video, which greatly extended the reach of the online campaign, Sefrioui demanded that Paty be suspended. Then, on 12 October, Sefrioui released a second video, featuring Brahim C.’s daughter and accusing French president Emmanuel Macron of exacerbating hatred against the Muslim community.
The campaign’s videos did not call for violence against Samuel Paty. Nevertheless, the two men tried to use social media as a ‘people’s court’, with the schoolteacher in the role of the defendant and the more than 20,000 people who could watch the videos in that of the jury. Paty, whose name and place of work were by then known to some, was virulently and repeatedly singled out as someone who had committed a hostile act. The school received threats and Paty himself admitted that he feared physical reprisals.
The Assailant’s Social Media Account Could Have Drawn Attention
At the other end, retrospective analysis of the assailant’s Twitter account shows that he was in the process of being radicalised. Abdullakh Anzorov, who arrived in France as a refugee at the age of 6, used a Twitter account under the handle “al-Ansar” (the one who champions the cause) to post radical Islamist content. His tweets were reported several times both to Twitter itself and to Pharos, the French State platform for reporting illicit content, but the threat he posed was not assessed as a priority by the French authorities. 25 September, the date of a terrorist attack in front of the former Charlie Hebdo offices in protest at the republication of the caricatures, seems to have been a turning point in his decision to act, with an effect of mimetic inspiration. From that date on, he tried to identify and locate several targets via social media, among people who had, in his mind, been disrespectful towards Islam. He caught sight of the campaign against Paty and decided to contact Brahim C. via WhatsApp. It is now clear that the assailant was searching for a target, and the singling out of Paty handed him one on a plate. Of note, too: the assailant deleted several hundred of his tweets before carrying out the attack.
RetEx From the Online Activity
The convergence of these two dynamics, and their transition from virtual space to real life, raises a number of questions. Regarding the online campaign against Paty, where is the border between militancy and illegality, and was it crossed? Regarding the assailant’s social media activity, could he have been spotted by the authorities? What lessons can be drawn? This RetEx, conducted with the benefit (and the bias) of hindsight, helps to identify the problematic facts that led to the attack.
The online campaign against Paty, had it been reported to the platforms or to the authorities, would probably not have been deleted: the two men did not call for violence. Nevertheless, several elements could have raised an alert. First, the publication, even as a one-off, of the teacher’s name and the school’s name is problematic. The investigation will determine whether this constituted a disclosure of personal information, but if the two men were genuinely pursuing a legal route (they intended to press charges against Paty for circulating pornographic images to children, since one of the caricatures he showed to his class featured the Prophet naked), why disclose these details, if not to invite someone to take justice into their own hands? Second, the publication of a contact number could have been cause for concern: why move the exchange into private channels when everything had started publicly? Funnelling public anger into private conversation could have been a red flag. Third, the reach of the campaign and its relentlessness: the larger the audience and the more sensitive the topic (since the republication of the caricatures at the beginning of September, protests had gone global), the greater the chance that an ill-intentioned person interprets it as a call for action. Screening the violence of the comments posted under the videos, for instance, could have served as an additional red flag. Fourth, the fact that Sefrioui was known to the authorities might have heightened concern about a potentially violent audience, even if the assailant was ultimately not in contact with him. All these points contributed to the arrest of both Brahim C. and Abdelhakim Sefrioui, with moral responsibility for the attack as a potential charge.
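To make these criteria concrete, here is a minimal, purely hypothetical sketch in Python of how the campaign-side elements listed above (disclosure of a name and workplace, funnelling to private contact, reach, amplification by a known radical actor, violent comments) could be encoded as a simple red-flag checklist. The field names and thresholds are invented for the example and do not describe any existing moderation or reporting tool.

```python
# Hypothetical sketch: field names and thresholds are invented for illustration,
# not drawn from any real moderation or reporting tool.
from dataclasses import dataclass
from typing import List

@dataclass
class CampaignPost:
    names_individual: bool            # discloses a private individual's name
    names_workplace: bool             # discloses a workplace or address
    funnels_to_private_contact: bool  # publishes a phone number / invites private messages
    audience_size: int                # cumulative views or shares
    amplified_by_known_radical: bool  # boosted by an actor known to the authorities
    violent_comment_ratio: float      # share of comments flagged as violent (0.0-1.0)

def campaign_red_flags(post: CampaignPost) -> List[str]:
    """Return the red flags raised by a single campaign post."""
    flags = []
    if post.names_individual and post.names_workplace:
        flags.append("targets an identifiable person at an identifiable place")
    if post.funnels_to_private_contact:
        flags.append("funnels public anger into private channels")
    if post.audience_size > 20_000:
        flags.append("reach is large enough to find an ill-intentioned viewer")
    if post.amplified_by_known_radical:
        flags.append("amplified by an actor already known to the authorities")
    if post.violent_comment_ratio > 0.05:
        flags.append("violent replies suggest the audience reads it as a call to act")
    return flags
```

None of these flags is illegal in itself; the point of such a checklist would simply be to surface for human review a campaign that accumulates several of them.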
At the other end, the fact that the assailant’s account was not assessed as a priority threat highlights how difficult it is for the authorities to identify weak signals online within the huge volume of social media posts.
Pharos, the State platform for reporting illegal content, is staffed by a team of 28 and received 228,000 reports in 2019, or approximately 624 per day, spanning graphic content, child pornography, death threats and hate content. The team is therefore materially obliged to sort, prioritise and deal with the most urgent threats first.
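The triage constraint can be illustrated with a small sketch: assuming a daily intake of categorised reports and a fixed human capacity, a priority queue keeps only the most urgent cases for review. The categories and severity ranking below are invented for the example and do not reflect Pharos’ actual classification or workflow.

```python
# Illustrative sketch of report triage under a fixed capacity;
# the categories and severity ranking are hypothetical.
import heapq
from typing import List, Tuple

# Lower value = higher urgency (heapq pops the smallest item first).
SEVERITY = {
    "death threat": 0,
    "child pornography": 0,
    "incitement to violence": 1,
    "hate content": 2,
    "graphic content": 3,
}

def triage(reports: List[Tuple[str, str]], capacity: int) -> List[Tuple[str, str]]:
    """Return the `capacity` most urgent reports from one day's intake.

    `reports` is a list of (category, report_id) pairs; `capacity` models the
    number of cases a small team can realistically examine in a day.
    """
    heap = [(SEVERITY.get(category, 4), category, report_id)
            for category, report_id in reports]
    heapq.heapify(heap)
    selected = []
    for _ in range(min(capacity, len(heap))):
        _, category, report_id = heapq.heappop(heap)
        selected.append((category, report_id))
    return selected
```

Whatever does not fit within the day’s capacity is, in practice, deferred, which is precisely how an account posting radical but not overtly illegal content can slip down the queue.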
What could have drawn attention here, once again with the benefit of hindsight, is the body of evidence taken as a whole. First, the account (or at least some of its tweets) was reported several times; recidivism points to repeated, persistent illegal online activity. Second, the “background music” of his Twitter account shows that he was steeped in radical ideology (not illegal, but at odds with democratic values). Third, one could have looked at his online environment: was al-Ansar evolving within a radical ecosystem? The answer is yes, all the more so as his Instagram account shows he was in contact with jihadists in Syria, located in the Idlib region. Fourth, a sudden change in the physiognomy of an account or a shift in online behaviour, such as a new profile picture or, as here, the deletion of a series of tweets, may signal a readiness to act. Last but not least, a brief review of his Twitter conversations and replies would have revealed that he had several times searched for the address of a target, suggesting he was on the verge of turning his virtual threats into real-life action.
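As a rough illustration of how such individually inconclusive indicators might be combined, here is a hypothetical Python sketch that turns the five signals listed above into a single priority score. The weights and the implied threshold are invented for the example and are not an actual assessment methodology.

```python
# Hypothetical sketch: the weights below are invented for illustration,
# not drawn from any real threat-assessment methodology.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    times_reported: int                 # how often the account has been reported
    radical_content_share: float        # share of posts with radical (not illegal) content, 0.0-1.0
    radical_contacts: bool              # contact with known jihadist accounts
    sudden_behaviour_change: bool       # mass tweet deletion, new profile picture, etc.
    searches_for_target_address: bool   # replies asking where a person can be found

def weak_signal_score(s: AccountSignals) -> float:
    """Combine individually weak indicators into one priority score."""
    score = 0.0
    score += min(s.times_reported, 5) * 1.0                  # recidivism
    score += s.radical_content_share * 2.0                   # ideological "background music"
    score += 2.0 if s.radical_contacts else 0.0              # radical ecosystem
    score += 1.5 if s.sudden_behaviour_change else 0.0       # shift in account physiognomy
    score += 4.0 if s.searches_for_target_address else 0.0   # trying to locate a target
    return score

# A score above a calibrated threshold would flag the account for human review;
# no single indicator reaches that threshold on its own.
```

The design point is that no single field decides the outcome: it is the accumulation of signals, each harmless or borderline on its own, that pushes an account above the review threshold.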
Legislating on online hate is a complex issue that generates long-standing debates on multiple fronts, as developed by the authors. A substantial reinforcement of staff is badly needed, and this is one of the first steps taken by the French authorities regarding Pharos: the recruitment of 100 people has been announced, along with the creation of a special unit within the Paris Prosecutor’s office to centralise investigations into online activity and facilitate immediate summonses, and the reactivation of a permanent contact group between social media platforms and the French authorities. But for now, and regardless of the need for new legislation, one of the most important avenues for reflection is to identify a corpus of criteria for spotting weak signals, a tool to implement it and an efficient process to enforce it. Taken individually, these weak signals do not necessarily amount to a credible threat; combined, they may, and in this case they did.