
Leveraging AI-Driven Tools for Capacity Building in Crisis Response: Enhancing Moderation and Crisis Management on Smaller Digital Platforms in Africa

23rd January 2025
Abraham Ename Minko

Introduction

In recent years, smaller digital platforms across Africa have emerged as vital spaces for social engagement, entrepreneurship, and information sharing. However, these platforms, often overlooked in global discussions on online safety, are increasingly targeted by extremist groups. Unlike major social media giants with robust AI-driven moderation systems, smaller platforms frequently lack the resources to monitor and combat malicious activities effectively. This vulnerability has made them fertile ground for extremist organisations such as Boko Haram in Nigeria and Al-Shabaab in Somalia, which exploit these digital spaces to disseminate propaganda, recruit followers, and coordinate attacks.

The challenge is compounded by the linguistic and cultural diversity of the African continent, with extremist groups leveraging local languages such as Hausa, Swahili, and Amharic to communicate in ways that evade detection. The lack of scalable, context-aware content moderation tools on these platforms exacerbates the threat, leaving millions exposed to harmful narratives. This underscores the urgent need for tailored, AI-driven solutions that can bridge the technological gap and enhance the crisis response capabilities of smaller platforms.

This Insight investigates how AI can be leveraged to empower these platforms to counter extremist activity effectively. By focusing on localised solutions and capacity-building initiatives, it proposes a sustainable framework for enhancing online safety and mitigating the misuse of digital platforms for malign purposes.

Understanding the Threat Landscape on Smaller Digital Platforms

Small digital platforms in Africa provide vital communication channels for local communities. However, they are often abused by extremist groups to disseminate propaganda, recruit members, and coordinate activities, exploiting the platforms’ limited moderation capacities and oversight.

The rise of smaller digital platforms in Africa has created new vulnerabilities that extremist groups exploit to further their agendas. Unlike major platforms such as Facebook and X, these smaller platforms, including chat applications such as Nimbuzz, Eskimi, and local forums unique to African countries, often operate with limited moderation capabilities and minimal regulatory oversight. This makes them appealing tools for extremist groups seeking to bypass the scrutiny of larger, well-resourced platforms. For example, Boko Haram in Nigeria has been documented using localised messaging apps and forums to disseminate propaganda and recruit young individuals, leveraging these platforms’ ability to reach specific, often rural, communities with targeted narratives.

In Somalia, Al-Shabaab has similarly exploited smaller platforms and regional forums to spread its ideologies and organise operations. By communicating in Somali and embedding cultural references, the group avoids detection by global moderation systems that often struggle to handle non-mainstream languages and dialects. Platforms with limited technical capacity, lacking robust algorithms for natural language processing, become safe havens for such actors. These groups use coded language, multimedia content, and interactive messaging to evade rudimentary moderation tools, ensuring their messages persist and propagate.

The convergence of digital connectivity and localised grievances further complicates the threat landscape. In Kenya, for instance, extremist recruiters use WhatsApp clones and niche social platforms to exploit local tensions and economic disparities, targeting vulnerable youth with promises of financial support or purpose. These platforms’ limited data-sharing agreements with law enforcement and civil society organisations also create an accountability gap, hindering effective intervention.

The issue extends beyond direct extremist communication to include the indirect consequences of platform exploitation. Extremist content on smaller platforms often fuels polarisation and distrust within communities, as seen in regions like northern Nigeria. Here, inflammatory messages spread unchecked, exacerbating tensions between ethnic and religious groups, and creating fertile ground for recruitment into violent movements.

Challenges Facing Smaller Platforms in Moderating Extremist Content

Smaller digital platforms across Africa face a range of interconnected challenges in moderating extremist content, leaving them particularly vulnerable to exploitation by extremist groups. A significant issue lies in the disparity of resources between these platforms and larger, well-established ones. Platforms like Facebook and YouTube have invested heavily in artificial intelligence, machine learning models, and teams of moderators capable of analysing complex and context-specific content. By contrast, smaller platforms such as Eskimi and localised chat apps lack the funding and infrastructure to build comparable systems, making it easier for extremists to use them as unregulated spaces for disseminating propaganda and coordinating activities.

A further challenge is Africa’s linguistic diversity. Extremist groups like Boko Haram in Nigeria use Hausa, while Al-Shabaab in Somalia relies on Somali. Smaller platforms are often ill-equipped to handle content in these languages, as they lack advanced natural language processing (NLP) capabilities tailored to regional dialects. In 2020, for instance, studies found that extremist groups used lesser-known platforms to share videos and messages in local languages, successfully evading detection. The inability of these platforms to understand coded language or cultural references further exacerbates the problem, allowing extremist content to remain accessible for longer periods.

Another hurdle is the limited availability of skilled moderators who understand the socio-cultural context of extremist narratives. Smaller platforms rarely have the resources to hire or train moderators fluent in local languages or familiar with extremist groups’ tactics. In regions like northern Kenya, where extremist messaging often incorporates grievances about political and economic marginalisation, moderators lacking local knowledge struggle to differentiate between legitimate grievances and extremist rhetoric, leading to either over-censorship or inadequate action.

Technological limitations also hinder the development of advanced content detection mechanisms. Smaller platforms often rely on simplistic algorithms or manual reporting systems, which are insufficient against the sophisticated strategies employed by extremist groups. For instance, extremist organisations use memes, encrypted images, or manipulated videos that slip through basic moderation filters. In Cameroon, armed groups have exploited local platforms to distribute coded propaganda and recruitment materials, knowing that the platforms cannot decipher these strategies.

Moreover, smaller platforms often operate in jurisdictions where regulatory frameworks for digital content are weak or nonexistent. In countries like Somalia or South Sudan, governments often lack the capacity to enforce digital content laws or to partner with platforms on effective moderation. This absence of collaboration leaves smaller platforms to navigate the issue alone, often with little to no guidance. The resulting vacuum of oversight not only enables the spread of extremist content but also undermines trust in these platforms as safe digital spaces.

Addressing these challenges requires a multifaceted approach, including investments in AI tailored to regional contexts, capacity-building programs for moderators, and stronger partnerships between smaller platforms, governments, and civil society organisations. Without targeted interventions, these platforms’ vulnerabilities will continue to be exploited, with profound consequences for peace and security in the region.

AI-Driven Solutions for Moderation and Crisis Response

AI-driven solutions offer transformative potential for moderating extremist content and managing crises on smaller digital platforms in Africa. However, implementing these technologies requires addressing the unique challenges posed by the linguistic, cultural, and operational contexts of these platforms. Unlike larger platforms with extensive AI resources, smaller platforms often need solutions that are affordable, adaptable, and effective in detecting nuanced extremist strategies.

One critical application of AI lies in natural language processing (NLP) tools tailored to Africa’s linguistic diversity. Current moderation systems are often inadequate for recognising extremist content in regional languages such as Hausa, Somali, or Amharic. For example, Boko Haram’s use of Hausa to disseminate propaganda on localised platforms often goes undetected due to the absence of language-specific AI models. AI-driven solutions like machine learning algorithms trained on regional dialects and local cultural contexts can bridge this gap. By recognising subtleties in language, such tools can detect coded messages, slang, or phrases that standard moderation systems might overlook.
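To make this concrete, the sketch below shows a minimal language-aware classifier of the kind such partnerships might produce. It is illustrative only: the character n-gram approach, the placeholder training examples, and the review threshold are assumptions, not a description of any deployed system, and a real classifier would require a locally curated, expert-labelled dataset.

```python
# Minimal sketch: a lightweight text classifier for flagging suspect posts
# in an under-resourced language. Character n-grams avoid the need for a
# language-specific tokeniser, which rarely exists for regional dialects.
# The training examples below are placeholders; a real deployment would
# need a locally curated, expert-labelled dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labelled data: 1 = extremist content, 0 = benign.
texts = [
    "placeholder extremist phrase in Hausa",
    "placeholder benign market announcement in Hausa",
    # ...thousands of locally labelled examples in practice
]
labels = [1, 0]

# Character n-grams (3-5) capture spelling variants, slang, and
# code-switching better than word tokens when no dialect-aware
# tokeniser is available.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

def flag_post(text: str, threshold: float = 0.8) -> bool:
    """Return True if the post should be queued for human review."""
    return model.predict_proba([text])[0][1] >= threshold
```

A character-level model is deliberately chosen here because it degrades gracefully on dialect variation and transliteration, which word-level pipelines built for English tend to handle poorly.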

AI can also enhance multimedia analysis, a critical need given extremists’ frequent use of images and videos to evade text-based filters. Al-Shabaab, for instance, has leveraged video content to glorify violence and attract recruits. Smaller platforms could employ AI models capable of analysing video and image metadata, identifying violent or extremist content, and flagging it for review. Open-source AI technologies, such as TensorFlow, can be customised for these purposes, reducing the cost barrier for smaller platforms.
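One affordable pattern, sketched below under stated assumptions, is to reuse a pre-trained network as a feature extractor and compare new uploads against imagery that human moderators have already confirmed as extremist, rather than training a bespoke violence classifier from scratch. The file names and the similarity threshold are illustrative.

```python
# Minimal sketch: match uploaded images against previously flagged content
# using embeddings from a pre-trained network. File paths and the 0.9
# threshold are illustrative assumptions, not production values.
import numpy as np
import tensorflow as tf

# MobileNetV2 without its classification head acts as a cheap, general
# feature extractor suitable for resource-constrained platforms.
extractor = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg"
)

def embed(path: str) -> np.ndarray:
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    arr = tf.keras.utils.img_to_array(img)
    arr = tf.keras.applications.mobilenet_v2.preprocess_input(arr[np.newaxis])
    vec = extractor.predict(arr, verbose=0)[0]
    return vec / np.linalg.norm(vec)  # unit-normalise for cosine similarity

# Embeddings of images human moderators have already confirmed as extremist.
known_flagged = [embed("flagged_example_1.jpg"), embed("flagged_example_2.jpg")]

def is_near_duplicate(path: str, threshold: float = 0.9) -> bool:
    """Flag an upload whose embedding is close to known extremist imagery."""
    vec = embed(path)
    return any(float(np.dot(vec, ref)) >= threshold for ref in known_flagged)
```

This near-duplicate approach catches the common case of recirculated propaganda at low cost; novel imagery would still require human review or a purpose-trained classifier.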

Beyond content detection, AI can support real-time crisis response. Extremist groups often use digital platforms to coordinate attacks or incite violence, as seen during the 2021 escalation of violence in northern Mozambique. AI tools equipped with anomaly detection algorithms can monitor unusual patterns of communication or spikes in activity, providing early warnings to moderators or law enforcement. For example, sudden increases in specific keywords related to extremist narratives could trigger alerts, enabling preemptive action to mitigate harm.
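A simple version of such an anomaly detector can be built with a rolling z-score over keyword counts. The sketch below is a minimal illustration with invented counts and an arbitrary alert threshold; a real system would draw counts from the platform’s message stream and tune the window and threshold locally.

```python
# Minimal sketch: rolling z-score alert on keyword frequency.
from collections import deque

class KeywordSpikeDetector:
    def __init__(self, window: int = 24, z_threshold: float = 3.0):
        self.counts = deque(maxlen=window)  # e.g. one count per hour
        self.z_threshold = z_threshold

    def observe(self, hourly_count: int) -> bool:
        """Record the latest count; return True if it is an anomalous spike."""
        alert = False
        if len(self.counts) >= 8:  # need some history before alerting
            mean = sum(self.counts) / len(self.counts)
            var = sum((c - mean) ** 2 for c in self.counts) / len(self.counts)
            std = var ** 0.5 or 1.0  # avoid division by zero on flat history
            alert = (hourly_count - mean) / std >= self.z_threshold
        self.counts.append(hourly_count)
        return alert

# Usage: one detector per watch-listed term or phrase.
detector = KeywordSpikeDetector()
for count in [3, 2, 4, 3, 2, 3, 4, 2, 3, 41]:  # sudden surge in the last hour
    if detector.observe(count):
        print("Early-warning alert: unusual spike for this keyword")
```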

The scalability of AI also allows smaller platforms to moderate content more efficiently without relying solely on human moderators. Given the resource constraints of platforms like Eskimi, deploying AI to handle initial content filtering can significantly reduce the workload for human moderators, who can then focus on more nuanced cases. This hybrid model—combining AI’s speed and accuracy with human judgment—ensures that moderation efforts remain both efficient and contextually sensitive.
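The routing logic of this hybrid model can be stated in a few lines, as sketched below. The thresholds are hypothetical and would need tuning against each platform’s tolerance for false positives.

```python
# Minimal sketch of the hybrid triage described above: the model score
# decides whether content is auto-actioned, queued for a human moderator,
# or left alone. Thresholds are illustrative, not recommended values.
from dataclasses import dataclass

@dataclass
class TriageDecision:
    action: str   # "auto_remove" | "human_review" | "allow"
    score: float

def triage(score: float, auto_threshold: float = 0.95,
           review_threshold: float = 0.60) -> TriageDecision:
    """Route a post based on the classifier's extremist-content score."""
    if score >= auto_threshold:
        return TriageDecision("auto_remove", score)   # near-certain matches
    if score >= review_threshold:
        return TriageDecision("human_review", score)  # nuanced cases to humans
    return TriageDecision("allow", score)

# Usage with a classifier like the one sketched earlier:
# decision = triage(model.predict_proba([post_text])[0][1])
```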

However, the deployment of AI-driven solutions must address inherent biases in existing models, which often reflect the priorities of developers from non-African contexts. A failure to adapt these tools to local realities can lead to false positives or negatives, undermining trust in moderation systems. For instance, overly broad filtering might censor legitimate discussions about political grievances, further alienating vulnerable populations. To counteract this, partnerships with African tech developers and linguists are essential to create culturally informed AI models.
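One practical safeguard is to audit the moderation model’s error rates across languages before and during deployment. The sketch below illustrates the idea with placeholder evaluation records; a real audit would use a human-reviewed evaluation set of meaningful size.

```python
# Minimal sketch: audit false-positive rates per language on a labelled
# evaluation set. A large disparity (e.g. far more benign Hausa posts
# flagged than benign English ones) signals the bias discussed above.
from collections import defaultdict

# Each record: (language, model_flagged, truly_extremist) from human review.
eval_records = [
    ("hausa", True, False), ("hausa", False, False), ("hausa", True, True),
    ("english", False, False), ("english", True, True), ("english", False, False),
]

fp = defaultdict(int)      # benign posts wrongly flagged, per language
benign = defaultdict(int)  # total benign posts, per language

for lang, flagged, extremist in eval_records:
    if not extremist:
        benign[lang] += 1
        if flagged:
            fp[lang] += 1

for lang in benign:
    rate = fp[lang] / benign[lang]
    print(f"{lang}: false-positive rate {rate:.0%} on {benign[lang]} benign posts")
```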

Ultimately, while AI-driven solutions are not a panacea, they offer a critical pathway to empowering smaller platforms in Africa to combat extremist exploitation effectively. By prioritising local contexts, fostering partnerships, and ensuring affordability, these tools can enhance digital safety and resilience, contributing to broader efforts to address extremism on the continent.

Capacity Building and Partnerships for Sustainable Crisis Management

Capacity building and partnerships are vital for enabling smaller digital platforms in Africa to address extremist exploitation effectively and sustainably. These efforts must prioritise equipping platforms with the tools, knowledge, and collaborative networks needed to mitigate the challenges posed by resource constraints and limited technical expertise. A strong focus on regional cooperation and local engagement is crucial for crafting solutions tailored to Africa’s diverse digital and socio-political ecosystems.

One critical area for capacity building is training platform administrators and moderators in advanced content moderation practices. Smaller platforms often lack personnel with the expertise to identify extremist narratives, particularly those embedded in local languages or cultural nuances. For example, in northern Nigeria, Boko Haram recruiters use subtle religious and historical references to frame their messaging, which can be misinterpreted by untrained moderators. Workshops and training programs supported by international tech organisations or regional bodies such as the African Union (AU) can empower moderators with the skills to recognise and respond to such content effectively.

Partnerships between platforms, local governments, and international organisations are equally essential for fostering sustainable crisis management. Smaller platforms cannot tackle extremist threats alone, as their reach often overlaps with broader societal vulnerabilities. Moreover, collaboration with African tech startups and academia can play a pivotal role in addressing the technological gaps faced by smaller platforms. Initiatives like the iHub in Kenya demonstrate the potential of local innovation in addressing regional challenges. By involving local developers in creating affordable AI tools, platforms can access context-sensitive solutions that align with their operational capacities. For instance, localised natural language processing algorithms developed through such partnerships could enhance moderation capabilities for platforms operating in Ethiopia or the Sahel.

Financial and technical support from global tech companies and donor organisations is another critical component of capacity building. Smaller platforms often lack the resources to develop or deploy advanced AI-driven moderation tools independently. Programs similar to Meta’s efforts to fund misinformation detection projects in Africa could be adapted to focus on extremist content. Offering grants, software, and cloud resources to smaller platforms would provide them with the infrastructure needed to scale their crisis response efforts effectively.

At the same time, a focus on community-driven approaches is essential for sustainability. In Cameroon, where extremist narratives have deep roots in local grievances, engaging civil society organisations (CSOs) can help platforms develop trust and accountability. CSOs can act as intermediaries between platforms and their user base, offering cultural insights that improve moderation outcomes while ensuring that platform policies respect local sensitivities.

Finally, an overarching framework for regional cooperation could enhance the collective capacity of smaller platforms to tackle extremism. The AU’s African Cybersecurity Collaboration and Coordination Committee offers a potential starting point for such an initiative. By creating a centralised repository of best practices, shared AI tools, and linguistic datasets, smaller platforms could pool resources and knowledge to address extremist exploitation more effectively. Such partnerships would also facilitate information sharing during crises, enabling quicker and more coordinated responses.

In conclusion, capacity building and partnerships represent a multi-dimensional strategy to address the exploitation of smaller digital platforms by extremists in Africa. Through targeted training, regional collaboration, and financial support, these efforts can empower platforms to act decisively while fostering resilience in the face of evolving threats.

Abraham Ename Minko is a senior researcher and policy analyst in Peace, Security, and Conflict Resolution. He is completing a Ph.D. in Political Science and International Relations at Istanbul University in Türkiye. His research interests are UN Peace Operations, Terrorism and Counter Violent Extremism, Peace and Conflict Resolution, Mediation and Negotiation, International Humanitarian Law and Armed Conflicts, Peacekeeping, and Peacebuilding. 

LinkedIn: www.linkedin.com/in/abraham-e-minko-4719b375