According to popular belief, artificial intelligence (AI) will revolutionise everything, including national security. The extent to which the internet facilitates radicalisation remains an open question, but the attacks in Halle in eastern Germany, in Christchurch, New Zealand, and at the Poway synagogue in California are three recent examples of the online sphere playing a significant role in radicalisation today.
How can AI help to counter radicalisation online? Expertise on the matter is spread across disciplines: researchers and experts from security and counterterrorism backgrounds, policymakers and technology specialists increasingly come together to investigate this domain. The fragmented information landscape, however, makes it difficult for decision‑makers to separate reliable information from noise. This report aims to shed light on the latest developments in AI and to place them in the context of counter‑radicalisation efforts in liberal democracies.
This publication contributes to the topic by highlighting some of the limits and possibilities of AI in online counter‑radicalisation. The second chapter briefly explains the key concepts and ideas behind AI; a ‘Deep Dive’ at the end of the chapter gives special attention to data quality and to bias and manipulation in datasets. The third chapter discusses the potential and limitations of AI‑based technological innovations for a ‘healthy’ online space, free from terrorist content, propaganda material and fake engagement; the underlying assumption is that such an environment contributes to the prevention of radicalisation. The chapter assesses a number of popular AI‑based concepts, from deepfakes to bot armies spreading fake news, and explains why search engines, recommendation systems and, in particular, natural language processing (NLP) have the potential to contribute to this objective in one way or another. The fourth chapter looks at a hypothetical ‘general AI’: an omniscient system that identifies individuals undergoing radicalisation and can thereby help law enforcement to prevent crime before it happens. The chapter argues, however, that such technology will remain in the realm of science fiction for the foreseeable future, and discusses the reasons for this position: debates about big data, especially in traditional security contexts, cannot take place in liberal democracies without safeguarding and prioritising privacy. A second ‘Deep Dive’ in chapter four provides more information for the interested reader. The fifth chapter concludes the report.
The report is based on semi‑structured interviews with researchers, policymakers and advisers, as well as private sector representatives. Findings from desk research and media monitoring also informed its positions. I spoke to a range of stakeholders to gain a multi‑disciplinary perspective that takes account of the fragmented information landscape. There are, however, clear limitations to the research: information on the use of machine learning is either restricted by intelligence services or withheld by private sector companies.
An earlier version of this report incorrectly identified Alexa as a Google product, and Inspire as an Islamic State magazine. This has now been corrected.