
The Role and Potential of Artificial Intelligence in Extremist Fuelled Election Misinformation in Africa

8th March 2024 | Jake Okechukwu Effoduh

Background

The advent of artificial intelligence (AI) has ushered in a new era of technological possibilities, transforming industries and societies worldwide. However, its impact on African democracies reveals a complex interplay between technological advancement and the persistent threats of extremism and voter disenfranchisement. In Africa, where political landscapes are often marked by instability and ethno-tribal and religious divisions, the potential for technology, particularly AI, to be exploited by extremist groups to cause political instability poses a significant risk. These actors can harness AI to deepen societal cleavages, manipulate public opinion, and ultimately undermine the democratic fabric through disenfranchisement and misinformation.

On the global stage, deepfakes and other synthetic media technologies have raised concerns about the fragility of democracy and the new reality of true-to-life AI-generated content peddling fake news. Deepfakes not only pass off fake content as real but also sow psychological mistrust towards genuine content, which can be problematic for political observers and electorates. They can also enable what some scholars have described as the ‘liar’s dividend’, in which politicians fuel cognitive dissonance by claiming that real but demeaning content has been digitally manipulated, thereby evading political and legal accountability. Such activities, which leverage AI platforms with disabled ethical safeguards, are particularly troubling in Africa, where sophisticated deep-learning detection models are largely absent and extremist groups are well placed to exploit the gap.

Africa’s Pathway to AI Adoption 

Historically, Africa’s adoption of AI has proceeded at a snail’s pace as the continent’s technological development struggles to catch up with the rest of the world. Experts have noted that the growing centrality of AI and other technologies in institutions worldwide could create substantial inequality between countries with high AI adoption and those with low adoption. On this view, there is cause for concern at the political level regarding the increasing vulnerability of political institutions. This is illustrated by the constant cost-benefit calculations many African countries make as they weigh the potential rewards of adopting emerging technologies against the risk of destabilising existing institutions and political structures. For instance, while drone technology offers promise for humanitarian efforts, it also presents risks if leveraged by terrorist groups, underscoring the continent’s technological vulnerabilities.

However, recent elections in countries like Nigeria and Kenya have challenged any ‘slow-growth’ narrative, showcasing AI’s burgeoning role in a critical aspect of democracy: the electoral process. With nearly 20 African presidential and legislative elections on the horizon for 2024, apprehensions about AI’s influence on public opinion and the integrity of electoral decision-making are mounting. The lack of robust regulatory frameworks, governance structures, and technological capabilities exposes the continent to external and internal algorithmic political manipulation, disenfranchisement, extremism, and voter suppression.

The already-fraught political atmosphere in many African countries has recently been made even more tense by the rise of AI-powered technologies. Algorithms, deepfakes, bots, and data manipulation have become primary factors in elections on the continent in recent years. The story is not always negative, as exemplified by Ghana’s major opposition party’s responsible use of data analytics in 2016 to filter out fake data and predict election outcomes. Deepfakes, by contrast, can create or fan the flames of existing political tensions and tribal acrimony. For instance, in December 2018, a strange-looking video of Gabon’s President Ali Bongo fuelled widespread suspicions that the footage was a deepfake meant to hide the truth about the president’s health. This deepened unrest and uncertainty over the country’s leadership, contributing to a coup attempt the following month.

AI-enabled Propaganda and Misinformation in Kenya

Fears of foreign influence, political violence, and misinformation dominated the lead-up to the August 2022 general elections in Kenya. While misinformation has always been a significant challenge in a political environment defined by propaganda, these elections were unique because they had to contend with an even more sinister dimension of the problem. In 2021, several reports showed that the unsuspecting Kenyan public was being assaulted with a barrage of AI-generated deepfakes portraying politicians making incendiary statements, videos that spread like wildfire across social media. As a result, politicians were able to deny that genuine clips of them making controversial statements were real, blaming them on deepfakes instead. This created a political atmosphere of distrust and uncertainty over what reality means in today’s digital world. In February 2023, an Israel-based firm known as Team Jorge was exposed for its involvement in the Kenyan elections through hacking and the deployment of social media bots to turn the narrative against the eventual winner, William Ruto.

In 2017, Cambridge Analytica, a now-defunct British consulting firm that used controversial AI algorithms, found itself in hot water after it was revealed that the firm had played a dominant role in President Uhuru Kenyatta’s election campaigns by harvesting citizens’ data and propagating fake news. Nonetheless, the story of AI in Kenya is not entirely negative: AI did not just push fake news but also helped fight it. The 2022 electoral process saw the emergence of MAPEMA, an AI consortium using machine learning to detect and counter toxic and manipulative content in online political warfare. Using information from over 3,900 citizens, one member organisation, Shujaaz Inc., neutralised disinformation with ‘messages of peace’ that reached over 3.9 million citizens. Another organisation in the consortium, AIfluence, used its research insights to disseminate peaceful messages to 8.5 million citizens. This points to the significant impact of AI and social media in shaping public opinion and influencing political decisions, a power that, in a broader sense, could feed an already volatile environment ripe for exploitation by extremist groups known to operate close to Kenya’s borders.
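To make the detection side of this concrete, the sketch below shows the general shape of supervised toxic-content classification: text is converted into TF-IDF features and scored by a simple classifier. This is a minimal illustration of the technique, not MAPEMA’s actual system; the training examples, labels, and threshold are all invented for demonstration.

```python
# Minimal sketch of machine-learning toxic-content detection (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus: 1 = toxic/manipulative, 0 = benign (invented examples).
texts = [
    "Members of that tribe are traitors and must be driven out",
    "Do not let those people anywhere near a polling station",
    "Polling stations open at 8am; bring your voter card",
    "The electoral commission has published the candidate list",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def flag_if_toxic(post: str, threshold: float = 0.5) -> bool:
    """Return True when the model scores a post above the toxicity threshold."""
    return model.predict_proba([post])[0][1] >= threshold

print(flag_if_toxic("That tribe must be driven out of our state"))
```

In practice, systems like this depend on large, carefully labelled corpora in local languages and on human review of borderline cases; a toy classifier only illustrates the pipeline’s structure.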

AI-enabled Disinformation Campaigns in Nigeria

The extent to which Nigerian elections have grappled with AI-facilitated manipulation was largely unknown until recently. As in Kenya, reports emerged about the covert roles of Cambridge Analytica and Team Jorge in the lead-up to the 2015 Nigerian presidential campaigns and elections. These groups aimed to discredit opposition candidates through misinformation and to disrupt party leaders’ lines of communication on election day. The 2023 Nigerian presidential elections, an even more precarious season, were marked by division on all levels: tribalism, political partisanship, religious bigotry, and social media disinformation aimed at deepening the chasm among citizens. A major tool of this disinformation was AI. On social media, partisan bots dedicated posts either to promoting a specific candidate or to tarnishing the reputation of another. Even in the aftermath of the elections, accounts on Twitter used AI-generated images to give false impressions of public support for their various stances.

Some Nigerians tried to use AI to predict the elections or generate ideas for electoral strategy; reports showed that ChatGPT was inundated with questions about who would win the 2023 presidential election in Nigeria and how certain presidential candidates could be defeated. In February 2023, experts revealed that AI software had been used to clone the voices of a presidential candidate and his running mate; the fake voice recording featured the candidates conspiring to subvert the impending elections. During the lead-up to the 2023 elections, Full Fact, a London-based independent fact-checking organisation, used AI tools to tackle electoral disinformation. Those tools included a search function to spot significant statements that needed fact-checking; an ‘alerts tool’ that notified fact-checkers whenever an already-flagged piece of fake news was recirculating online; and real-time support offering live transcription of interviews and electoral debates so that claims made by political figures could be fact-checked on the spot.
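The core idea behind an alerts tool of this kind is matching incoming content against a store of already-debunked claims. The sketch below is a hypothetical, heavily simplified illustration of that idea using fuzzy string matching; the claim store, similarity cutoff, and function names are assumptions for demonstration and bear no relation to Full Fact’s actual implementation.

```python
# Hypothetical sketch: flag new posts that resemble previously debunked claims.
from difflib import SequenceMatcher

# Invented examples standing in for a fact-checker's database of flagged claims.
FLAGGED_CLAIMS = [
    "the candidate's voice note proves a plot to subvert the election",
    "ballot papers for the opposition have already been destroyed",
]

def resurfaced_claims(post: str, cutoff: float = 0.6) -> list[str]:
    """Return flagged claims that a new post closely resembles."""
    post = post.lower()
    return [
        claim for claim in FLAGGED_CLAIMS
        if SequenceMatcher(None, post, claim).ratio() >= cutoff
    ]

alerts = resurfaced_claims("Ballot papers for the opposition were destroyed!")
if alerts:
    print("Alert: previously debunked claim circulating again:", alerts)
```

A production system would use semantic embeddings rather than raw string similarity, since misinformation mutates as it spreads, but the matching-against-a-flagged-store structure is the same.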

As with the Kenyan example above, AI’s use to undermine the electoral process through disinformation also presents significant opportunities for promoting extremist narratives. By deepening existing societal fractures, disinformation campaigns provide fertile ground for extremist ideologies to take root, making it difficult for people to separate fact from fiction and undermining social cohesion. The strategic dissemination of polarising content can escalate tensions, potentially leading to violence and destabilisation. This trend is particularly disconcerting given the observed uptick in extremist activities around election periods, when the intersection of AI-facilitated disinformation and heightened political tensions creates a powder keg, priming conditions for conflict and upheaval.

Quo Vadis: Harnessing AI for Democratic Transparency and Countering Extremism

While AI is one of the most impressive inventions of our time, the need to regulate its role in extremist-fuelled election misinformation campaigns cannot be overstated. As AI-generated text, images, and videos become increasingly realistic, it is difficult to rely on individuals to discern the fake from the real on a case-by-case basis. Africa already has a fake news problem (Nigerians and Kenyans, for example, still have particularly low levels of digital literacy), and the adverse use of AI threatens to exacerbate it in the coming years. Political instability in many African countries also poses a frontier risk should AI gain an unchecked foothold. As in the case of Gabon, an atmosphere of uncertainty created by media saturated with AI content catalysed political tensions that may have lasting effects on the country’s history. AI systems rely on consuming large datasets, including the private information of both ordinary and high-ranking citizens, possibly leaving them open to security risks. The kind of environment that has allowed entities such as Team Jorge to thrive on collecting sensitive data from government officials needs to be discouraged or, at least, contained.

Embracing the AI surge with a human-centric approach can help map out a strategy to mitigate technology’s potential to hinder democratic participation through extremist-fuelled disinformation campaigns. Regulating AI in this narrow electoral context is challenging; some African states resort to drastic measures like internet shutdowns during elections to curb misinformation and political instability. These actions are ad hoc, disregard citizens’ rights to freedom of expression and of the press, and rest on the erroneous assumption that the only information circulating during elections is fake content. Indeed, there is no one-size-fits-all approach to regulating AI-induced electoral misinformation in Africa; it will require multifaceted interventions: legislative, technical, and educational.

The African Union (AU) has developed a comprehensive mandate encompassing the establishment, maintenance, and advancement of democracy across the continent. The AU’s development roadmap, as well as the African Charter on Democracy, Elections and Governance, actively advocates for transparency and trust in pursuit of the continent’s shared ambition of democracy. Similarly, the AU Declaration on Terrorism and Unconstitutional Changes of Government in Africa recognises the nexus between technology and extremism, advocating concerted efforts in cybersecurity and the responsible use of social media to counteract terrorism. These strategic documents identify informed citizen participation in electoral processes as pivotal to sustainable human development. This commitment is further evident in frameworks such as the AU’s Digital Transformation Strategy for Africa (2020–2030), which prescribes principles of solidarity and cooperation to ensure that Africa’s forthcoming AI and digital infrastructure is cooperative, inclusive, transparent, and safe while accommodating Member States’ diverse levels of technological adoption.

An African AI observatory could monitor the ethical and social implications of AI development on the continent, provide data and insights to policymakers, and help ensure that citizens do not face discrimination, privacy breaches, appropriation of their likeness, or intellectual property theft in the use of their data by AI platforms. Insights could be gleaned from a US approach that requires designated national intelligence agencies to provide comprehensive reports on the foreign weaponisation of targeted deepfakes and misinformation. In addition, domestic regulations should reinforce platform holders’ obligations to proactively detect and report such content and their commitment to collaborating with domestic stakeholders. Such regulations must adhere to disclosure and watermarking standards for AI-generated content. Additionally, criminal laws should apply when AI is used illicitly to generate materials such as art, videos, and text that mislead, misinform, or abuse civil and political rights.
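At its simplest, a disclosure standard means machine-readable labels travelling with AI-generated content. The sketch below shows a minimal metadata-based labelling scheme for PNG images using the Pillow library; the ‘ai-generated’ key is an invented convention, not an established standard, and plain metadata is far weaker than cryptographic provenance schemes such as C2PA content credentials, since it can be stripped or forged.

```python
# Minimal sketch of metadata-based disclosure labelling for AI-generated images.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(image: Image.Image, path: str, generator: str) -> None:
    """Embed a disclosure tag in a PNG's text metadata before saving."""
    meta = PngInfo()
    meta.add_text("ai-generated", "true")   # invented key, for illustration
    meta.add_text("generator", generator)
    image.save(path, pnginfo=meta)

def is_disclosed_ai(path: str) -> bool:
    """Check whether a PNG carries the disclosure tag (absence proves nothing)."""
    return Image.open(path).text.get("ai-generated") == "true"

img = Image.new("RGB", (64, 64), "white")            # stand-in for generated content
save_with_disclosure(img, "synthetic.png", "demo-model")
print(is_disclosed_ai("synthetic.png"))              # True
```

This is why regulatory proposals increasingly pair disclosure labels with robust watermarks embedded in the content itself: labels that live only in metadata vanish the moment an image is screenshotted or re-encoded.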

Beyond legal and technical remedies, raising digital literacy is essential to equip the public to recognise false AI-generated content. A consolidated effort, domestic and regional, is needed to educate the public to spot the predictable hallmarks of fake news. Without such education, the risk of political institutions being manipulated by foreign and domestic actors with ill intent remains high, potentially leading to a form of digital colonialism by foreign manipulators of the electoral process, as well as capture by internal actors who may use advanced technologies to hijack the electoral system and promote extremism by passing off falsehoods as truths. As AI continues to become more mainstream, African states must be proactive in regulating these systems while also engaging with international regulatory efforts.

Jake Okechukwu Effoduh is an Assistant Professor at the Lincoln Alexander School of Law, Toronto Metropolitan University