
Predictive Technologies in Preventive Counterterrorism

4th May 2021 | Shiri Krebs
  • On 3 May 2017, an Australian Super Hornet conducting an airstrike in Iraq dropped a GPS-guided bomb on a house in West Mosul, killing two civilians and injuring two children. The attacking forces were supported by visuals from an unmanned drone, as well as from the Australian aircraft. An Australian Defence Force investigation acknowledged the civilian deaths and injuries but did not identify any faulty actions or processes.
  • On 3 October 2015, at 2:08 a.m., a United States Special Operations AC-130 gunship attacked a Doctors Without Borders hospital in Kunduz, Afghanistan, with heavy fire. Forty-two people were killed, mostly patients and hospital staff. A US military investigation concluded that the attack resulted from several factors, including significant failures of the electronic communications equipment that prevented critical targeting data from being updated in flight.
  • On 5 January 2009, Israeli forces fired several projectiles at the Al-Samouni family house south of Gaza City, killing 21 civilians. The attack was approved based on a drone image depicting five men holding RPG rockets at that location. An Israeli military investigation later found that the attack resulted from an erroneous reading of the drone image, which in fact depicted the five men holding firewood.

These three examples demonstrate the mounting reliance on technology and machine-generated data in counterterrorism operations, as well as its potential impact on the effective protection of civilians and human life more broadly. In March 2021, Australia’s federal government invested an additional $115 million, on top of its previous investment of $40 million, in the development of an Australian-designed combat drone with an artificial intelligence targeting system. Similarly, the US National Security Commission on Artificial Intelligence concluded in its latest report that “the United States must act now to field AI systems and invest substantially more resources in AI innovation to protect its security, promote its prosperity, and safeguard the future of democracy.” It is therefore evident that drone imaging, together with AI capabilities, is considered essential to efforts to counter violent extremism effectively while protecting human lives. However, evidence from concrete counterterrorism operations – the above examples included – suggests that while adding valuable information, predictive technologies also place additional burdens on decision-makers that may hinder – rather than improve – safety and security. In particular, three distinct problems emanate from this human-machine interaction:

First, predictive and visualisation technologies have technical and human-technical limitations, including insufficient or corrupted data inputs, blind spots, and time and space constraints. The missing details or corrupted information remain invisible, while the visible (yet limited or partial) outputs capture decision-makers’ attention. Indeed, emerging empirical evidence suggests that real-time imaging outputs may reduce the situational awareness of decision-makers, who tend to place an inappropriately high level of trust in visual data. Additionally, technology systems may fail or malfunction. When military practices rely profoundly on technology systems, decision-makers’ own judgment erodes, as does their ability to evaluate evolving situations without the technology. The misidentification of the Doctors Without Borders hospital in Kunduz as a legitimate target was partly attributed to the aircrew’s reliance on infrared visualisation technology, which was incapable of showing coloured symbols, including the red cross symbol that could have alerted the aircrew that the intended target was a medical facility. Additionally, the AC-130’s targeting systems should have alerted the aircrew that the coordinates selected for engagement were included in the “no strike list” database. In this case, however, because the aircraft departed urgently, this crucial data was not properly uploaded to the aircraft’s systems. Ultimately, the CENTCOM investigation concluded that the misidentification of the hospital and its subsequent engagement resulted, among other factors, from “malfunctions of technical equipment which restricted the situational awareness” of the forces.

Second, these technical (and human-technical) limitations create gaps in the available data, which are then filled with subjective human judgments influenced by several cognitive biases, such as availability or anchoring. Availability biases occur when people overstate the likelihood of a certain event because it is easily recalled, making decision-makers less sensitive to information that runs contrary to their expectations. This means that, at least under some circumstances, people depicted in technology-generated visuals may be more likely to be interpreted as insurgents rather than civilians. Anchoring biases occur when the estimation of a condition is based on an initial value – an anchor – that might result from intuition, a guess, or other easily recalled information. The problem is that decision-makers do not adjust sufficiently from this initial anchoring point. As a result, the outputs of predictive technologies may be interpreted consistently with decision-makers’ expectations, and this interpretation may then serve as an (inaccurate) anchor for casualty estimates or target identification. Erroneous subjective judgments – likely affected by availability bias – were found to be the cause of the IDF attack on the Al-Samouni house. In particular, a visual depicting men holding firewood was misinterpreted as showing men holding RPG rockets. The technical limitations of the image left room for human judgment, which inserted subjectivity into a seemingly objective visual.

Third, technology-generated visuals create an accountability gap, as predictive and visualisation technologies are sometimes blamed for human errors. The result is the creation of avatars that replace the real persons – or the actual conditions on the ground – with no effective way to refute these virtual representations. The military investigation into the attack on the Al-Samouni house identified the drone images and their limitations as the main cause of the erroneous lethal attacks on civilians. As a result, no criminal (or disciplinary) proceedings were initiated in this case. Moreover, after being informed about the erroneous interpretation of the drone image, Justice Richard Goldstone, whose damning UNHRC report concluded that this incident may rise to the level of war crimes and crimes against humanity, retracted his original conclusions and vindicated the IDF. For Goldstone, too, the suggestion of a technology-related error was enough to find the IDF decision-makers blameless. Similarly, because the attack on the Doctors Without Borders hospital was mainly attributed to technical malfunctions, no individual was held criminally accountable (though disciplinary proceedings were initiated against 16 individuals). Finally, in the Mosul incident, the ADF declared that its forces had acted in full compliance with the law of armed conflict and did not make available information concerning the causes of the lethal error. As the attack was supported by various real-time visuals, meaningful accountability in this case (and others) could have identified the gaps in the available data, or the weaknesses in the human-machine interaction, that led to civilian deaths and injuries.

Based on this analysis, improving human-machine interaction in counterterrorism decision-making requires enhanced transparency concerning the fact-finding methodologies hidden behind the outputs of predictive technologies. In particular, it is essential to identify how visuals and predictions affect human risk assessments and to add tailored protections against these challenges. These may include making internal disagreements about the interpretation of technology outputs visible; an obligation to highlight and clarify the limitations of the particular technology used; and enhanced accountability for technology failures. This last recommendation can be achieved through the identification and improvement of suboptimal human-machine interactions, including the development of an ‘organisational responsibility’ framework for erroneous decisions. While technology-generated data holds much promise for counterterrorism decision-making, it also has the propensity to jeopardise safety and security by masking evidential uncertainties and numbing the exercise of human judgment. As governments around the world intensify their investments in sophisticated combat drones equipped with AI-enabled targeting systems, it is essential to develop effective ways to better integrate these technologies into human decision-making processes. Examining past war room failures is one place to start.