Introduction
The European Union’s AI Act, adopted in June 2024, represents the first multinational framework regulating Artificial Intelligence (AI) development and deployment. To prioritise users’ safety and fundamental rights, the Act categorises AI systems into four risk levels: prohibited, high-risk, limited-risk, and minimal-risk. The higher the risk posed by a tool, the stricter the regulations applied to protect users. This safety-first model has been widely promoted and even described as supporting a “human-centric” approach to AI regulation.
In the context of P/CVE, AI presents itself as a complex and multifaceted tool. Terrorist groups have used AI to recruit members and to produce and disseminate propaganda, and could even use it to carry out attacks. Simultaneously, government agencies across the EU have embraced AI’s potential for enhancing security measures through predictive policing, crime forecasting, automated risk assessment, and forensic analysis. AI technologies are transforming how criminal justice systems operate: they can optimise security, improve the efficiency and speed of justice, and reduce costs. AI is even seen as a way to limit infringements on rights and freedoms.
AI’s potential in criminal justice offers governments comprehensive and powerful tools for P/CVE purposes. In this security effort, EU countries have adopted unique legal and procedural frameworks that permit certain human rights infringements for national security reasons, such as counter-terrorism. These exceptions are difficult to contain, as AI tools are rarely limited to a single field given the extensive data they require. This raises complex regulatory challenges that could once again facilitate human rights violations by governments.
However, the broad use of AI has not gone undebated, particularly regarding the ethical and climate implications of its widespread deployment. Even the Act, which imposes stringent requirements on high-risk AI applications, allows certain exceptional infringements of human rights for counter-terrorism purposes.
So far, it remains unexplored how exactly the Act regulates AI’s use in P/CVE. This Insight therefore examines the AI Act with regard to the latitude it grants for the prevention and prediction of terrorism and for AI’s use in terrorism trials.
AI in Counter-Terrorism: Risk Management and Potential for Misuse
Real-Time Remote Biometric Identification Systems and Prevention of Terrorism
Art. 5(1)(h) of the Act (p. 52) prohibits the use of real-time remote biometric identification systems in publicly accessible spaces due to human rights concerns. However, the article carves out two significant exceptions of particular relevance to counter-terrorism. First, the use of such tools is authorised to prevent a “genuine and present or genuine and foreseeable threat of a terrorist attack”.
These highly debated systems allow law enforcement agencies to identify individuals by scanning and analysing facial features, gait, or other biometric markers and comparing the resulting data against a reference database. Notably, their use can occur without the consent of the individuals being monitored. The implementation of such systems during the Tokyo Olympics demonstrated their practical effectiveness in large-scale security operations. However, the UN Special Rapporteur on counter-terrorism and human rights has issued substantial warnings about the potential for such unrestricted biometric surveillance to fundamentally alter privacy and erode trust in democratic processes, underscoring the importance of transparent safeguards.
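To make the mechanism concrete, the minimal sketch below illustrates the one-to-many matching at the core of such systems: a probe embedding extracted from camera footage is compared against a reference database, and a match is declared only above a similarity threshold. All names and data here (extract pipeline, WATCHLIST, random embeddings) are hypothetical placeholders, not a depiction of any deployed system.

```python
# Illustrative sketch of one-to-many biometric matching: a probe
# embedding is compared against stored reference embeddings and a
# match is declared only above a fixed similarity threshold.
# WATCHLIST and the random vectors are hypothetical placeholders.

import numpy as np

# Hypothetical reference database: identity -> stored biometric embedding.
WATCHLIST: dict[str, np.ndarray] = {
    "person_a": np.random.rand(128),
    "person_b": np.random.rand(128),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.75) -> str | None:
    """Return the best-matching identity, or None below the threshold."""
    best_id, best_score = None, threshold
    for identity, reference in WATCHLIST.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# In practice a camera frame would yield the probe embedding; here it
# is simulated. The threshold choice directly trades false matches
# (wrongly flagging bystanders) against missed detections.
probe_embedding = np.random.rand(128)
print(identify(probe_embedding))
```

Even this toy version makes the policy stakes visible: the threshold parameter alone determines how many innocent passers-by are falsely flagged, which is precisely the kind of design choice the Act’s safeguards are meant to scrutinise.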
The Act requires member states to implement several crucial safeguards for the deployment of these systems, including obtaining prior authorisation from an independent administrative authority created for this purpose and conducting a fundamental rights impact assessment.
Nevertheless, these safeguards can still be overridden in cases of emergency, reflecting the enduring tension between security and privacy that lies at the heart of many P/CVE strategies. While the potential benefits of these systems are clear – they can be deployed at large public events or in crowded areas to identify and apprehend suspects before an attack occurs – the risks of human rights violations are substantial. Previous research has shown that excessive surveillance for counter-terrorism purposes can make citizens reluctant to express their opinions or to protest for fear of being labelled a potential threat. This could undermine democracy by enabling constant surveillance, infringing on the fundamental right to privacy, and creating a chilling effect on public behaviour – ultimately aligning with what terrorists seek to achieve. The lack of safeguards in a counter-terrorism context, especially during a security emergency, therefore poses risks of overreach, discrimination, and erosion of democratic values that may outweigh the benefits. Authorities should implement robust safeguards to prevent misuse and ensure proportionality, regardless of the situation.
Predictive Policing and Potential Bias
The Act devotes significant attention to predictive policing tools, which represent an increasingly sophisticated approach to crime prevention through the analysis of historical data patterns. These tools, classified as high-risk under the Act, fall into two categories: those targeting ‘risky’ individuals and those focusing on ‘risky’ locations. Examples of the former are systems like HART, KeyCrime, or Precobs, which identify near-repeat crimes and potential offenders based on past behaviours or affiliations. Similarly, profiling systems like the MOTRA tool can detect changes in attitudes and thereby potentially serve as an early indicator of criminal activity. However, none of these tools has yet been used for terrorism offences. In the latter category, systems like PredPol, CompStat, or X-law predict where crimes are likely to occur, enabling law enforcement agencies to allocate resources more effectively, as sketched below.
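For illustration, the sketch below shows the location-based approach in its simplest possible form: historical incidents are aggregated per grid cell and the highest-count cells are flagged as hotspots for patrol allocation. Deployed systems such as PredPol use far more sophisticated spatio-temporal models; the data here is entirely synthetic.

```python
# Simplified sketch of location-based predictive policing: past
# incidents are counted per grid cell and the highest-count cells are
# flagged as "hotspots". Synthetic data; real systems are far richer.

from collections import Counter
import random

random.seed(0)

# Synthetic history: each incident is recorded as a (row, col) grid cell.
history = [(random.randint(0, 4), random.randint(0, 4)) for _ in range(200)]

def predict_hotspots(incidents: list[tuple[int, int]],
                     top_k: int = 3) -> list[tuple[int, int]]:
    """Rank grid cells by historical incident count and return the top k."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

print(predict_hotspots(history))
```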
Article 5(1)(d) of the Act (p. 51) establishes crucial restrictions on these technologies, prohibiting systems that rely solely on personality profiling for criminal behaviour prediction. In counter-terrorism, predictive policing would involve assessing the risk of an individual becoming the victim of criminal offences. The Act classifies these tools as high-risk systems, subjecting them to Article 9’s (p. 56) comprehensive requirements for risk management protocols, data governance standards, testing and monitoring procedures, documentation requirements, and security measures.
Despite these safeguards, the use of AI in predictive policing within counter-terrorism could have adverse effects on human rights. The issue of bias perpetuation is particularly concerning, as these systems rely heavily on historical crime data and may inadvertently reinforce existing patterns of discriminatory law enforcement practices, leading to the over-policing of marginalised communities, as seen with PredPol. The implementation of threat scoring systems raises substantial privacy and equal treatment concerns, as individuals may face increased surveillance and monitoring based on algorithmic assessments.
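The feedback loop behind this bias can be made explicit with a toy simulation, entirely synthetic and deliberately simplified: two areas with identical underlying crime rates, where crime is only recorded where patrols are present and patrols follow recorded crime.

```python
# Toy simulation of the bias feedback loop: the patrol is sent to
# whichever of two areas has more *recorded* crime, but recording
# depends on being observed, so an initial disparity in records (not
# in actual crime) compounds over time. Synthetic; illustration only.

import random

random.seed(1)

true_crime_rate = {"A": 0.10, "B": 0.10}   # identical underlying rates
recorded = {"A": 10, "B": 5}               # historical recording bias

for day in range(1000):
    # Allocate the single patrol to the area with more recorded crime.
    patrolled = max(recorded, key=recorded.get)
    # Crime occurs at the same rate everywhere, but is only recorded
    # where the patrol is present.
    if random.random() < true_crime_rate[patrolled]:
        recorded[patrolled] += 1

print(recorded)  # area A accumulates records; area B stays frozen at 5
```

Because the initial gap in records, rather than any difference in actual offending, determines where the patrol goes, the disparity entrenches itself indefinitely – the over-policing dynamic described above.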
Furthermore, the use of AI to monitor social media for signs of radicalisation or other indicators of potential terrorist activity could lead to individuals being unfairly targeted based on their online activity. This can create a chilling effect, where individuals are reluctant to express their views. This is particularly problematic in the context of counter-terrorism, where the line between legitimate expression and incitement to violence can be difficult to draw.
The framework established by the Act is essential but appears insufficient to ensure compliance with human rights. While it classifies predictive policing tools as high-risk and imposes safeguards, these measures do not fully address the underlying risks of bias, discrimination, and excessive surveillance. For instance, they do not eliminate the risk of algorithmic outputs unfairly influencing the actions of law enforcement. For these systems to be both effective and equitable, member states should go beyond the Act, adopting strict accountability measures, prioritising fairness, and actively avoiding the exacerbation of societal inequalities.
The Role of AI in Evidence Evaluation and Fair Trials
Real-Time Remote Biometric Identification Systems and Suspect Identification
The second exception concerns suspect identification: ‘real-time’ remote biometric identification systems can also be used to localise or identify a person suspected of having committed certain criminal offences. A list of these offences is set out in Annex II of the Act, and terrorism is one of them. As previously mentioned, each member state already has specific legal and procedural frameworks for combating terrorism. For example, participating in a terrorist organisation is a criminal offence in every EU member state under Directive 2017/541 on combating terrorism. In this context, these systems could be employed to identify known members of terrorist groups before they commit any attacks. Art. 5(1)(h) of the Act (p. 52) thus expands the scope of methods available for countering terrorism, potentially leading to greater infringements on human rights for national security purposes.
Consequently, the balance between security and privacy once again tips towards security when it comes to counter-terrorism, affecting the human rights of suspects and citizens alike, namely the right to privacy and freedom of speech. The Special Rapporteur cautions that unregulated biometric data collection and analysis could seriously encroach on individuals’ rights to privacy and due process. The Act’s provisions could still fall short if not rigorously enforced, as the potential for overreach and misuse remains significant. Ensuring that these safeguards are consistently applied across all member states will therefore be crucial to maintaining public trust and protecting democratic freedoms.
Evaluation of Evidence
Evidence evaluation represents another area where AI is increasingly used in criminal justice and P/CVE. AI systems can process large volumes of material, such as witness statements, digital records, and other evidence, to assess its reliability and relevance for trial. Under the AI Act, these tools are classified as high-risk systems and are subject to stricter requirements to ensure user protection.
Among these tools are AI-driven polygraphs and systems that analyse individuals’ expressions or behaviours. For example, in China’s Yancheng prison, the Smart Jail system tracks every high-profile inmate around the clock and alerts guards if anything seems “out of place”. While such systems are prohibited under Article 5(1)(f) (p. 51) in workplace or educational settings, they are not expressly forbidden in other contexts, such as offender management. These systems offer substantial benefits for counter-terrorism efforts, potentially reducing in-prison radicalisation risks and enhancing the effectiveness of rehabilitation programmes through comprehensive behavioural monitoring. However, their implementation requires careful consideration of privacy implications and inmate rights.
AI systems are also employed to evaluate the reliability of evidence in criminal investigations. This is particularly useful in complex terrorism cases, where large volumes of data must be analysed. For instance, in the Netherlands, the Hansken tool enables investigators to process digital evidence from seized devices efficiently. However, the accuracy and fairness of such systems depend heavily on the quality of the data they are trained on: flawed or biased data could result in unfair treatment, such as misidentifying suspects or prioritising evidence on erroneous grounds. With the safeguards set out in the AI Act and careful implementation, such tools could prove highly useful to authorities.
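As a generic illustration of evidence triage – not a depiction of how Hansken actually works – the sketch below ranks hypothetical documents from a seized device by TF-IDF relevance to an investigator’s query, leaving the assessment of reliability to human reviewers. All documents and the query are invented for the example.

```python
# Generic sketch of triaging seized digital evidence: documents are
# ranked by TF-IDF relevance to an investigator's query so that human
# review can be prioritised. Hypothetical data; this illustrates the
# general technique only, not any specific forensic platform.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # hypothetical extracted texts from a seized device
    "meeting planned at the warehouse next friday",
    "grocery list milk eggs bread",
    "transfer the funds before the meeting",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

query_vector = vectorizer.transform(["planned meeting funds"])
scores = cosine_similarity(query_vector, doc_matrix)[0]

# Present documents in order of estimated relevance for human review;
# the final judgement on reliability must remain with investigators.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The design choice matters legally as much as technically: a ranking tool of this kind narrows what investigators look at first, so flawed scoring can quietly shape which evidence ever reaches a courtroom.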
Conclusion
While AI tools are not yet widespread in counter-terrorism, their expansion appears imminent. The new regulations established at the European level create a framework for the development of new tools, primarily for predictive and investigative purposes rather than for trial proceedings. Some provisions are notably permissive, particularly in the context of counter-terrorism, posing a risk to human rights, above all the right to privacy and fair trial rights. Despite oversight mechanisms, the potential for misuse and the broad application of these exceptions may undermine the human-centric approach the Act aims to promote. In particular, the potential for these technologies to be abused beyond their original intent, such as for political surveillance or the suppression of dissent, remains a major concern. Hence, as AI tools continue to evolve and expand in the counter-terrorism domain, the effectiveness of these regulations and the protection of human rights will require ongoing scrutiny by governments.
Jade Briend is a junior researcher at the International League Against Arbitrary Detention. She worked at the International Centre for Counter-Terrorism, where she conducted research on the nexus between Artificial Intelligence (AI), terrorism, and criminal justice, among other topics. She also worked at a law firm specialising in asylum law, and subsequently, she interned at the International Criminal Court for the Director of the Judicial Services of the Registry and as a support for the Victims Participation and Reparation Section. Finally, she holds an MSc in International Public Law from the University Paris-Nanterre.