
The Robots Will Not Save Us: The Limits of Machine Learning for Counterterrorism

4th November 2021 Christopher Wall

At the dawn of the Global War on Terror (GWOT), Secretary of Defense Donald Rumsfeld argued it would trigger a technological revolution in military affairs (RMA) centred on fast, mobile attacks by shock troops supported by air platforms and satellites. In Secretary Rumsfeld’s vision, this RMA would obviate the need for large-scale troop deployments for regime change, as technological superiority would awe rivals into submission. The wars in Afghanistan and Iraq invalidated most of his thesis, but he was right in predicting a new RMA. As the US sifted through gluts of data produced by sources such as satellites, mobile networks, and human informants to locate Osama bin Laden, the country invested heavily in machine learning (ML) platforms that could parse information into actionable intelligence for soldiers and airstrikes alike. By 2006, American forces could collate all-source data, analyse it at machine speed, and order a strike against Abu Musab al-Zarqawi with apodictic certainty within hours. As the United States draws down the GWOT, its domestic law enforcement and intelligence services aim to import and domesticate these warfighting capabilities to assist in the struggle against terrorism at home.

For all its evident success in overseas wars, the current state of the art blunts ML’s application in domestic scenarios, and this is unlikely to change unless terrorism scholars start considering novel ways of applying their research in practical settings. Much of this stems from information asymmetries that persist among developers, consumers, and observers, who have not collaborated in designing appropriate tools for peacetime scenarios. At its most basic level, ML refers to algorithms trained on data to discern patterns. These techniques emerged from the marriage of computer science and statistics and from the democratisation of data and processing power in the 21st century. When taken to scale, ML flattens information, highlights patterns imperceptible to humans, automates laborious human tasks, and reduces biases, allowing humans to prioritise higher-order tasks like inference and decision-making. Rarely does an ML platform achieve this platonic ideal. Even when given tens of thousands of observations in benign scenarios, such as helping doctors improve healthcare outcomes, ML often fails to anticipate the complexity of the real world. The way to mitigate this is through proper research design, but this rarely happens when designing tools for countering political violence.
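To make the basic idea concrete, the sketch below (in Python, using the scikit-learn library, with placeholder data invented purely for illustration) shows what ‘algorithms trained on data to discern patterns’ means in practice: a model is fitted to labelled examples and then produces probabilities for new inputs, which humans must still interpret and act upon.

```python
# A minimal sketch of "algorithms trained on data to discern patterns".
# The texts and labels are invented placeholders, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["example post one", "example post two", "example post three", "example post four"]
train_labels = [0, 1, 0, 1]  # hypothetical labels supplied by human analysts

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)                # "training": fitting the model to labelled data
print(model.predict_proba(["a new, unseen post"]))  # output is a probability, not a verdict
```

The point of the sketch is the division of labour described above: the machine surfaces a pattern at scale, while inference and decision-making remain human tasks.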

Research design refers to strategies for studying phenomena, such as hypothesis testing, understanding the data-generating process, and defining the problem. This is particularly important for terrorism studies, where terrorism’s definition requires constant refinement to account for mutations. Unfortunately, in industry, this substantive knowledge is often missing. Many of the leading technology companies building tools for government are staffed by people with backgrounds in the hard sciences. In contrast, consumers of ML outputs in government may have substantive backgrounds but lack training in understanding what data communicate. They must rely on developers to explain what they are seeing, and those developers in turn have a perverse profit motive to guarantee their tools conform to marketing brochures. On the outside are observers like terrorism scholars, who have remained reactive to ML’s role in counterterrorism. The extant literature has either prioritised ethical discussions about the harms of ML or taken ML as a given, presuming these tools accomplish what developers claim. Terrorism scholarship rarely dives into the practical application of the field’s knowledge to building tools that identify extremists without incurring harms to the public.

The consequence is that many of the technologies touted as the future of countering domestic extremism are not optimal for the current threat environment, nor do they reflect innovations in detecting threats. At best, they represent automations of policing methods developed in the late 20th century, such as profiling. Furthermore, wartime scenarios are not adequate analogies for domestic security, given the higher legal burdens for acquiring data, monitoring targets, or acting upon non-dispositive information. The results so far have been muddled, and if correctives are not applied, public backlash might stymie the use of ML tools. This is an undesirable outcome, because ML can transform counterterrorism in a salubrious way.

Consider the United States’ response to January 6. In the aftermath, the Department of Homeland Security (DHS) and other bodies accelerated their acquisition of ML tools, having previously experimented with biometrics and facial recognition software to identify risks more efficiently. The desired tools are meant to prevent a repeat of January 6 by giving the US government the capacity to scrape social media to identify plotters.

But these efforts have both legal and practical limits. On the legal front, most intelligence and law enforcement bodies are statutorily barred from looking at US persons’ social media without warrants or special dispensations. The actors best positioned to act upon these data are the social media platforms themselves, something they are loath to do for fear of undercutting profits. They also refuse to allow outside experts to examine their data for fear of incurring reputational costs, limiting tools from scraping and analysing it. From the practical side, the ontological question of terrorism emerges, and most ML tools have not been developed to suss out these nuances, like surfacing intent. This is particularly true when it comes to separating ardent Trump supporters from the would-be violent extremists who stormed the Capitol. The country’s strong freedom of speech protections shield the former unless there is direct incitement of violence, and no existing tool can provide such clean answers. The status quo would be workable if law enforcement received sufficient training in reading ML outputs and understanding confidence scores, but officers are not normally trained in this manner, which creates a heightened risk of acting upon false positives.
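The risk of acting on false positives is easiest to see with a worked example. The figures below are simulated, but they illustrate, under the assumption that genuinely threatening posts are rare, how the threshold an analyst applies to a confidence score changes both how many people get flagged and what share of those flags are real.

```python
# A hypothetical illustration of why reading confidence scores matters.
# All scores and "ground truth" labels below are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
truth = rng.random(n) < 0.001                       # assume genuine threats are very rare (0.1%)
# assume an imperfect model: real threats tend to score higher, but overlap with everyone else
scores = np.where(truth, rng.beta(8, 2, n), rng.beta(2, 8, n))

for threshold in (0.5, 0.8, 0.95):
    flagged = scores >= threshold
    tp = int(np.sum(flagged & truth))               # correctly flagged
    fp = int(np.sum(flagged & ~truth))              # innocuous posts flagged anyway
    precision = tp / max(int(flagged.sum()), 1)
    print(f"threshold={threshold}: flagged={int(flagged.sum())}, "
          f"false positives={fp}, share of flags that are real={precision:.1%}")
```

Even in this generous simulation, a permissive threshold produces far more false positives than true ones, while a strict threshold misses most genuine cases, which is why officers need training in reading these scores rather than treating every flag as actionable.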

Indeed, given the limitations social media platforms themselves place on governments accessing their data, this is a misuse of resources, even in areas like combating radicalisation. This matters especially because a significant portion of radicalisation occurs offline, with the most important vectors often being ties of kith and kin. Yet the privileging of social media data has tied up resources for combating a problem that social media platforms themselves created through their algorithms and could eliminate if they chose the public good over profit.

The paramount role played by social media platforms in facilitating the plotting of January 6 makes this the most visible domain for applying ML to domestic counterterrorism. But the harms associated with social media apply in equal, if not greater, measure to machine vision. Machine vision refers to technologies that ingest images to find patterns, such as facial recognition or augmented reality. After January 6, the FBI successfully used facial recognition software to track some of the participants. These successes belie the data biases inherent in developing machine vision tools, as developers struggle to capture a representative swath of the country’s demographics. A 2019 study by the National Institute of Standards and Technology (NIST) found that leading facial recognition software produced false-positive rates that varied widely by ethnicity. As studies have shown, these false-positive rates make such tools easy to defeat. Despite this, various law enforcement agencies across the United States purchased Clearview AI, a machine vision platform that violated terms of service to scrape images from Facebook. These shortcomings should weigh heavily as technology companies start producing headsets with augmented reality, which, without the proper training, could create more harm than good.
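The NIST finding points to a check any purchasing agency could run before deployment: disaggregating error rates by group. A toy version is sketched below; the audit records are invented, and a real evaluation would use far larger, properly sampled datasets.

```python
# A toy disaggregated evaluation in the spirit of the NIST study cited above:
# compute false-positive rates per demographic group. Records are invented.
from collections import defaultdict

records = [
    # (group, predicted_match, true_match) -- hypothetical audit-log entries
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_positives = defaultdict(int)
non_matches = defaultdict(int)   # all records that should NOT have matched

for group, predicted, actual in records:
    if not actual:
        non_matches[group] += 1
        if predicted:
            false_positives[group] += 1

for group, total in non_matches.items():
    rate = false_positives[group] / total
    print(f"{group}: false-positive rate = {rate:.0%}")  # large gaps between groups signal bias
```

If the gap between groups is large, the tool is effectively imposing different error burdens on different communities, which is precisely the harm the NIST study documented.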

For the litany of ML critics, the above examples represent reasons to limit the use of ML for counterterrorism. But these are examples of ML platforms built with little input from terrorism scholars and with the wrong research designs or types of training data. As mentioned initially, ML’s strength comes from its ability to flatten information to discern patterns, especially non-obvious trends that are difficult for humans to discover on their own. With this mindset, terrorism scholars can play a more proactive role, bringing their research to bear in assisting the counterterrorism enterprise.

ML developers and former government officials have proffered areas where little work has been done and that are ripe for using ML in a domestic context. The first deals with misinformation. Natural Language Processing, the domain of ML that looks at text, has a difficult time ascertaining authorship, but it can provide indicators of coordinated information campaigns. As rival nation-states engage in information warfare to radicalise domestic audiences or sow discontent, ML tools can detect these coordinated campaigns and help identify and neutralise the networks behind them. This is a form of defensive ML that can be used en masse to protect against COVID-19 misinformation or QAnon propaganda. It also represents a rare instance where agencies tasked with foreign missions can collaborate with domestic bodies, as this type of information warfare distorts the lines between domestic and international terrorism.
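One crude but illustrative coordination signal is many distinct accounts posting near-identical text within a short time window. The sketch below shows the idea; the posts, accounts, and thresholds are all invented for illustration, and production systems would rely on much richer behavioural features.

```python
# A hedged sketch of one simple coordination signal: near-duplicate posts from
# several accounts within a narrow time window. All posts here are invented.
from collections import defaultdict
from difflib import SequenceMatcher

posts = [
    # (account, minutes_since_start, text) -- hypothetical
    ("acct1", 0, "the election was stolen share before they delete this"),
    ("acct2", 3, "The election was stolen! Share before they delete this"),
    ("acct3", 5, "election was stolen - share before they delete"),
    ("acct4", 240, "lovely weather today"),
]

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

clusters = defaultdict(set)
for i, (acct_i, t_i, text_i) in enumerate(posts):
    for acct_j, t_j, text_j in posts[i + 1:]:
        if abs(t_i - t_j) <= 30 and similar(text_i, text_j):
            clusters[text_i].update({acct_i, acct_j})

for text, accounts in clusters.items():
    if len(accounts) >= 3:   # several accounts pushing the same message close in time
        print(f"possible coordinated push by {sorted(accounts)}: {text!r}")
```

Signals like this say nothing about authorship or intent on their own; they only flag network-level patterns for human analysts to investigate, which is exactly the defensive role described above.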

Another area where ML thrives is when it serves as a neurocognitive enhancer for individuals. This applies to the task of screening individuals for pre-existing ties to extremist organisations. Before January 6, the military struggled to identify extremists seeking to enlist, and it still struggles with this task. As recently as May 2021, the Army allowed a January 6 insurrectionist to enter active service. The Internet contains immense amounts of publicly available information (PAI) that governments have greater authorities to examine. ML platforms can collate these data and vet enlistees in bulk against these datasets, looking for indicators of extremist ties, rather than relying on the current approach of laborious individual Google searches. Likewise, this type of PAI collection would allow governments to keep abreast of new pages where extremists gather online, determine early on whether these websites have a foreign provenance, and look for changes in behaviour regarding new forms of propaganda.
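A minimal version of that bulk-vetting idea is easy to sketch: normalise names and fuzzy-match them against collated public records, producing leads for a human adjudicator rather than determinations. Everything below, names included, is invented for illustration.

```python
# A minimal sketch of bulk screening against publicly available information (PAI).
# All names and the collated PAI list are invented placeholders.
from difflib import get_close_matches

enlistees = ["John A. Smith", "Maria Lopez", "Jon Smyth"]
collated_pai = ["jon smyth", "peter doe"]   # e.g. handles or membership rolls gathered from PAI

def normalise(name: str) -> str:
    return " ".join(name.lower().replace(".", "").split())

for person in enlistees:
    hits = get_close_matches(normalise(person), collated_pai, n=1, cutoff=0.85)
    if hits:
        # a lead for a human adjudicator, not a determination of extremist ties
        print(f"{person}: possible PAI match -> {hits[0]}")
```

The gain over individual Google searches is simply scale and consistency; the judgment about what a match means remains with the human reviewer.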

Machine vision also has a role to play as autonomous vehicles proliferate in major cities. As noted previously, while the false-positive rate of these platforms is high, their ability to gather data from numerous sources without suffering from human foibles such as anxiety and information overload allows them to cut through the fog of war. In high-risk scenarios, such as bomb threats or hostage-rescue operations, drones and robots can serve as sensors and provide insights without risking human personnel. In calibrated scenarios, these types of sensors can provide more accurate mapping of a threat environment, delivering valuable real-time information and leads to security professionals under heavy stress.

These use cases are hypothetical and only scratch the surface of what machine learning can do for counterterrorism. But innovation remains lacking, owing to the inertia of current approaches and the observer role terrorism studies performs, ensuring that ML is not the massive game changer for domestic counterterrorism that it was in the Afghan and Iraq wars. Government officials themselves believe this needs to change, arguing that, with the input of terrorism scholars, civil society groups best primed for detecting and combating radicalisation could develop non-intrusive methods for intervening in a person’s path to radicalisation. This would require terrorism scholars to consider data sources other than what is readily available on social media platforms. It would also allow law enforcement and domestic intelligence to focus on the more kinetic aspects of national security and not meddle in areas where they lack expertise.

So, what is the path forward? Technology companies are unlikely to deviate from a system in which they are the subject matter experts for governments. But terrorism scholars, though reluctant to engage in large-N studies, have the substantive knowledge necessary to design better tools. The current moment is ripe for terrorism scholars, and the broader social scientific community, to bring their expertise into practical domains. To do this, terrorism studies will need to consider ML more holistically, think clearly about research design, and find ways to push innovation and change that both support counterterrorism and reduce harms to society.

Christopher Wall is a War Studies PhD student at King’s College London, Adjunct Professor at Georgetown University, and Social Scientist at Giant Oak.