Artificial intelligence (AI) is fundamentally transforming societies by driving innovation across sectors, including the security sector. However, its dual-use nature, as a means of promoting innovation in security as well as a source of new vulnerabilities and threats, poses challenges, especially for preventing and countering violent extremism (P/CVE). We have already seen terrorist and violent extremist groups exploit AI for malign purposes, including “enhancing cyber capabilities, enabling physical attacks, facilitating the financing of terrorism, spreading propaganda and disinformation, and other operational tactics.” In Africa, concerns arise from the growing accessibility of AI tools, which terrorist groups are using. While there is limited data on AI usage by these groups in the region, recent cases such as the Islamic State West Africa Province (ISWAP) using AI in “video editing and the editing of written electronic communications” show how this technology is being exploited.
At the same time, this evolving threat landscape offers an opportunity to use AI tools to counter terrorism. However, this potential is limited by existing counter-terrorism frameworks, which are largely ill-equipped to address the risks posed by terrorist use of AI. Against this backdrop, recently adopted policy frameworks such as the 2024 African Union Continental Artificial Intelligence Strategy—endorsed by the AU Executive Council, comprising representatives from all 55 AU Member States—acknowledge the risks that AI poses to peace and security. While the Strategy predominantly focuses on innovation, AI governance, and maximising the benefits of AI on the continent, it also identifies security as a priority area. However, there is a notable gap in academic literature on how the AU Continental AI Strategy can guide AI-based counter-terrorism efforts in the region. This Insight therefore explores how terrorist groups in Africa use AI, examines the relevance of the Continental AI Strategy in guiding counter-terrorism efforts, and identifies opportunities and gaps in the Strategy.
AI’s Dual Use
In Africa, terrorist groups’ use of AI is emerging and remains relatively unsophisticated. According to an article published in April 2025, Boko Haram and ISWAP are expanding their physical and online activities, with ISWAP using AI for “video editing and the editing of written electronic communications.” Recent reports also suggest that al-Qaeda has improved its use of AI in several ways, notably in enhancing content for lone-wolf terror attacks and spreading propaganda through AI-generated news content. These groups are also alleged to have attempted to impersonate government officials using AI-generated content, as well as to bypass language barriers, target vulnerable audiences, and evade content moderation. As recently as 2023, terrorist use of AI was more experimental than sophisticated, but given AI’s ease of access and low entry barriers, these groups may now leverage AI tools for operational use, significantly influencing their ‘tactics and modus operandi.’ By 2024, reports indicated that “a group linked to al Qaeda organised a workshop seeking to enhance its skills in using AI and related software.” This ongoing development highlights the need for African countries to develop measures and tools to counter these risks.
Researchers and policymakers require more data and information on the use of AI tools in African counter-terrorism operations. However, the growing adoption of AI across sectors throughout the region suggests potential for its application in counter-terrorism efforts. Speaking at the 1267th Peace and Security Council (PSC) meeting in March, Mahmoud Ali Youssouf, the African Union Commission Chairperson, highlighted AI’s potential to revolutionise counter-terrorism. He noted that “AI-powered surveillance tools can track terrorist movements, analyse suspicious financial transactions, and detect radicalisation patterns on social media platforms.” Youssouf also highlighted the potential of AI-driven forensic analysis to disrupt terrorists’ financial operations by identifying illicit cash flows and abnormal financial behaviour.
AI’s potential in counter-terrorism on the African continent can go even further. In a joint report, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the United Nations Counter-Terrorism Centre (UNCCT) identify six ways in which AI-enabled technology is used in counter-terrorism: (i) predictive analytics for terrorist activities; (ii) identifying red flags of radicalisation; (iii) detecting misinformation and disinformation spread by terrorists for strategic purposes; (iv) automated content moderation and takedown; (v) countering terrorist and violent extremist narratives; and (vi) managing heavy data analysis demands. Given the enormous potential identified by UNICRI and UNCCT, African countries have an opportunity to harness AI to revolutionise their counter-terrorism efforts.
Leveraging the AU Continental Artificial Intelligence Strategy
In 2024, the African Union Commission developed the Continental Artificial Intelligence Strategy to guide the governance of AI across Africa. It focuses on maximising the benefits of AI, addressing and mitigating its risks, building capacities for AI, and fostering regional and international cooperation. The Strategy identifies key policy interventions on the societal, ethical, security and legal challenges associated with AI, including, for instance, proposals for the “adoption and implementation of technical standards to ensure the safety and security of AI systems across the Continent.” It also calls upon AU Member States to develop regulatory frameworks to address the safety and security challenges of “advanced and complex AI Systems.”
Among the fifteen key areas of action outlined in the Strategy, the safety and security of AI systems is particularly relevant to counter-terrorism. This focus can help address the risks posed by terrorists using AI-driven tools and guide states in developing counter-terrorism strategies. First, the Strategy stresses the need for safe and secure AI systems development and use within Africa, ensuring that “non-authorised and maleficent actors cannot access them.” Although not mentioned in the Strategy, such actors may include terrorist organisations that are adopting AI technology to exploit AI-enabled systems. Second, the Strategy highlights the risk of AI being used to disseminate misinformation, fake news, hate speech and disinformation. It is well known that terrorist groups employ these tactics. For instance, Boko Haram and ISWAP “leverage false information to manipulate public perception, radicalise individuals, and recruit new members.” In some cases, misinformation is used by such groups to “incite unrest,” while disinformation is used to “spread extremist ideology.” While terrorist use of AI for such purposes is not well documented, AI is known to amplify the effects of disinformation and misinformation.
Third, the Strategy emphasises the need to assess the safety and security risks of machine learning and deep learning on the continent. This is critical, as terrorists can “decipher how to exploit weaknesses in both physical and cyber security” using these tools. Fourth, the Strategy emphasises the need to mitigate risks associated with generative AI and Large Language Models (LLMs) through transparent AI systems and well-informed regulations and guidelines. While the importance of transparent AI systems is widely recognised, the Strategy lacks specificity regarding what such systems entail, and it does not clearly establish how they mitigate AI risks. Nonetheless, well-informed guidelines and regulations could help mitigate AI risks, including the use of AI by terrorist groups.
It is also important to highlight that the Strategy recognises the need for capacity building and cooperation between AU Member States, partners, international organisations and the private sector, including building regional and national capabilities to “assess and identify, protect, detect, respond and recover from AI threats.” Indeed, capacity building and cooperation are crucial for effective security measures, including the use of AI to counter terrorism. A collaborative approach leveraging both domestic and international expertise could help identify and respond to threats, while also addressing the challenges of using AI in counter-terrorism.
Key Policy Gaps and Challenges
Despite the opportunities to leverage the Strategy for P/CVE, three notable gaps and challenges undermine its efficacy for counter-terrorism efforts. First, the Strategy is a guiding framework and imposes no binding obligations on AU Member States, meaning its policy guidelines are adopted voluntarily. Second, counter-terrorism is not a priority area in the Strategy; consequently, there are no guidelines on using AI in counter-terrorism, safeguards for its use, or guidance on integrating AI into regional or national counter-terrorism frameworks. Third, implementation poses a challenge: many African countries face capacity and regulatory constraints in developing AI tools for counter-terrorism, and the framework does not provide detailed guidance for addressing security risks.
Conclusion
While the AU Continental AI Strategy holds promise for innovation and identifies priority areas for policy action on the continent, it needs detailed security-related provisions on how to address AI risks, including adequate safeguards on the use of AI. In addition to assessing the security risks posed by AI, attention should be paid to emerging trends in terrorist exploitation of AI. Furthermore, it is crucial to develop comprehensive frameworks that address the complexities of terrorism and counter-terrorism, particularly within the rapidly evolving landscape of AI development and use. As a starting point, the African Union Peace and Security Council recently highlighted the importance of raising awareness among African states about the risks associated with the growing use of AI by terrorist organisations, which could “destabilize security and increase conflicts in Africa.”
—
Brenda Mwale is a lawyer and legal researcher with a PhD in cyber terrorism law. Her research interests focus on tech policy, counter terrorism and international law.