
Weapon or Tool?: How the Tech Community Can Shape Robust Standards and Norms for AI, Gender, and Peacebuilding

6th December 2023 Nick Zuroski

Introduction

In October, following earlier remarks that artificial intelligence (AI) holds “enormous potential and enormous danger,” President Biden issued a sprawling executive order (EO) on the technology. The EO establishes a White House AI Council and tasks agencies with forming guidelines for the safe use of AI, from disinformation to cybersecurity. While the Gender Policy Council—established by President Biden in 2021 to advance gender equity and equality in US domestic and foreign policy—is included in the AI Council, the EO fails to mention gender, violence, or conflict in relation to AI best practices. Ironically, this oversight came just before the UN’s 16 Days of Activism Against Gender-Based Violence.

The international community lacks standards and norms to ensure that AI contributes to gender equality and builds sustainable peace. Evidence shows that digital technologies can drive gendered violence and violent conflict. Generative AI can, for instance, multiply disinformation that targets women and girls, while gender bias in AI training data and companies can perpetuate harmful gendered stereotypes. But, if used properly, digital technologies like AI can amplify gender equality and peacebuilding interventions. AI-powered digital dialogue tools can bring more women into official peace processes—making them more representative and successful—and AI-automated data analysis can “enable real-time identification of gendered dimensions of conflict.” Maximising the benefits and minimising the risks of AI is vital, given that global violent conflict is at a 30-year high and women bear the brunt of it. This Insight will explore how AI can both worsen and help eliminate gendered violence, and how technology companies can adopt AI standards and norms that integrate gendered perspectives to prevent and reduce violent conflict and build sustainable peace.

The Bad: AI, Gender Inequality, and the Risks to Peace 

While no single definition of AI exists, the Organization for Economic Cooperation and Development (OECD) defines it as “the ability of machines and systems to acquire and apply knowledge, and to carry out intelligent behavior.” AI’s applications include speech recognition, image and text analysis and generation, data summarisation, and chatbots. The AI craze has flooded public discussion over the past year, especially debate over the technology’s capacity to drive and prevent violence.

The principles of the Women, Peace, and Security (WPS) Agenda make clear the need for a gendered understanding of and approach to AI. Global violent conflict is at a 30-year high, currently affecting more than two billion people and disproportionately impacting women, girls, and gender-diverse individuals. Women’s inclusion in political processes is therefore critical to building sustainable peace; the security of women determines the security of states. Put simply, peace is unachievable without gender equality.

AI can significantly increase the capacity of bad actors to create disinformation at scale through AI-generated content like deepfakes. Freedom House’s Freedom on the Net 2023, which assesses data from 70 countries covering 88% of the world’s internet users, found that malicious actors in 16 countries used AI to “generate images, text, or audio” to “distort information on political or social issues.” In Venezuela, state media outlets used fake, AI-generated commentators—created with software from the London-based company Synthesia—to spew false narratives favouring the authoritarian regime and discrediting political opponents. Around the same time, AI-generated videos supporting the military coup in Burkina Faso, also made with Synthesia’s technology, began circulating online.

Moreover, the tactics and consequences of disinformation heavily intersect with gender. #ShePersisted, a leading organisation working on this intersection, defines gendered disinformation as deceptive narratives framing women as “inherently untrustworthy, unintelligent, unlikable, or uncontrollable.” According to the Economist Intelligence Unit, these tactics account for 67% of reported instances of technology-facilitated gender-based violence (TFGBV). In practice, TFGBV can occur online via defamation, hacking, gendered and sexual harassment, or the non-consensual sharing of explicit images. It can then spill over into offline violence through doxing (the leaking of personal information such as one’s address), stalking, or the grooming of human trafficking victims. Estimates suggest a staggering 85% of women have experienced or witnessed online harassment or violence.

AI has the potential to compound these connections between gendered disinformation, gender inequality, and violence. According to simulations carried out by UNESCO, AI can amplify gendered disinformation and TFGBV by increasing the convincingness and volume of inaccurate online content, such as fake histories and manipulated images targeting women. A 2019 study found that 96% of deepfakes online were pornographic, and that all of them featured women who had not consented. The headlines show the prominence of this trend, with deepfake attacks on American disinformation expert Nina Jankowicz, Indian journalist Rana Ayyub, and female Twitch streamers.

AI can also contain gender biases that perpetuate—and exacerbate—the subjugation of women, as AI models train on enormous amounts of online data containing inherent bias against women, girls, and gender-diverse individuals. A Bloomberg study found that Stable Diffusion, a text-to-image generator that creates photo-realistic images from text prompts, mainly produced images of men when asked to depict people with high-paying jobs. Three leading AI voice assistants—Amazon’s Alexa, Microsoft’s Cortana, and Google Assistant—default to female voices, portraying women as submissive and subservient. In a more harrowing example, Microsoft’s short-lived AI chatbot Tay, designed in 2016 to hold conversations on Twitter, called feminism “cancer” within hours of its launch. The corporate cultures of AI companies reveal similar issues around bias; the World Economic Forum notes that women account for just 30% of people working in AI.

Gendered disinformation, gender-based violence, and gender inequality are crucial factors in determining global peace and security. #ShePersisted notes that gendered disinformation is central to autocratic and illiberal strategies. In Italy, the social media accounts identified by #ShePersisted as most active in hate campaigns against women politicians also spread Russian disinformation at the onset of the war in Ukraine. Groundbreaking research by the International Center for Journalists found that in the Philippines, gendered disinformation campaigns targeted esteemed journalist Maria Ressa to vilify journalism and weaken public trust in factual reporting.

Furthermore, online attacks against women and girls are an early warning sign of instability, violence, and atrocities. Research published in the Journal of Advanced Military Studies concluded that support for online misogyny in ISIS and the incel movement increased the chance of violent attacks by these groups by reinforcing traditional norms such as male dominance. Misogynistic propaganda, most often distributed through social media, enables intolerance, exclusion, and violence. The violence that began in Myanmar in August 2017, in which an estimated 24,000 predominantly Muslim Rohingya people were murdered, was preceded by online narratives accusing Rohingya women of threatening the Buddhist population with high birth rates. Should AI worsen gendered disinformation and violence, it will do the same to peace and security writ large.

The Good: How AI Can Innovate Gender Equality and Peacebuilding

At the same time, AI can spur innovations that promote gender equality and sustainable peace. AI-powered dialogue tools like the Remesh platform can facilitate real-time conversation among large populations. The UN has integrated the platform into consultative elements of peace processes in Yemen, Libya, and Iraq. This technology could help meaningfully include women peacebuilders and women-led organisations in peace processes, making negotiations more inclusive and effective.

Through automation, AI can also increase the volume, speed, and accuracy of data analysis in conflict-affected and fragile states. In Ukraine, the Small Arms Survey used AI to design a system that analyses images on social media to reliably distinguish and count weapons systems and the groups using them. The UN notes the system allows real-time crowdsourcing of information vital to protecting citizens, such as the use of banned weapons and attacks on civilian infrastructure. Such AI systems could rapidly synthesise analyses of complex and constantly shifting conflict dynamics, including risks to civilians, propaganda narratives, environmental factors, and natural disasters.
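
The Small Arms Survey has not published its implementation, but a minimal sketch of the underlying pattern—detecting and counting objects in scraped images with a pretrained model—might look like the following. The model choice, score threshold, and label set are assumptions for illustration; a production system would be fine-tuned on weapons imagery.

```python
# Minimal sketch: counting objects of interest in social media imagery
# with a pretrained object detector. A real weapons-monitoring system
# would use a model fine-tuned on weapons imagery and custom labels;
# the COCO-pretrained model here is only a stand-in.
from collections import Counter

import torch
from PIL import Image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def count_detections(image_path: str, min_score: float = 0.8) -> Counter:
    """Return per-class counts of confident detections in one image."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        (output,) = model([preprocess(image)])
    keep = output["scores"] >= min_score
    return Counter(categories[i] for i in output["labels"][keep].tolist())

# Aggregating across a scraped batch gives a running tally of the kind
# a monitoring system could report:
# totals = sum((count_detections(p) for p in image_paths), Counter())
```

In practice, the difficult parts are the fine-tuned label set and human review of flagged images, not the counting loop itself.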

This increased capacity can advance gender equality and the security of women and girls globally. Through the AI-powered eMonitor+ tool, UNDP partnered with the High National Election Commission in Libya to identify geographic trends around online violence against women in elections and deploy automated fact-checking. By integrating gender-sensitive indicators, such as restrictions on women-led NGOs, AI-powered early warning and early response (EWER) mechanisms can more accurately and quickly pinpoint warning signs of emerging violence in conflict-affected and fragile contexts. For example, researchers at UN Women used AI to find that online extremist narratives in Southeast Asia label women’s submission as a powerful act and encourage women to be ‘strong’ mothers by raising children involved in jihad. The research notes how this refined understanding of extremist messaging can inform peaceful counter-messaging and alternative opportunities for gender-equal empowerment, like economic livelihoods. 
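
To make the EWER idea concrete, here is a toy sketch of how gender-sensitive indicators could be combined into a single warning score. The indicator names, weights, and alert threshold are invented for illustration and do not come from any deployed system.

```python
# Toy sketch: combining gender-sensitive indicators into one
# early-warning score. All names, weights, and the threshold are
# illustrative assumptions only.
INDICATOR_WEIGHTS = {
    "online_harassment_of_women_politicians": 0.30,
    "restrictions_on_women_led_ngos": 0.30,
    "misogynist_propaganda_volume": 0.25,
    "reported_tfgbv_incidents": 0.15,
}

def warning_score(indicators: dict[str, float]) -> float:
    """Weighted sum of normalised (0-1) indicator readings."""
    return sum(INDICATOR_WEIGHTS[name] * value
               for name, value in indicators.items()
               if name in INDICATOR_WEIGHTS)

readings = {
    "online_harassment_of_women_politicians": 0.8,
    "restrictions_on_women_led_ngos": 0.6,
    "misogynist_propaganda_volume": 0.7,
    "reported_tfgbv_incidents": 0.4,
}
score = warning_score(readings)
if score >= 0.6:  # illustrative alert threshold
    print(f"Early-warning alert: score {score:.2f}")
```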

AI can also elevate gender equality through more operational, practical means. AI-bolstered data analysis can enhance GBV response and prevention by helping to collect and analyse data on GBV incidents more effectively and efficiently, a historically slow and uncoordinated process. Tools such as chatbots can provide crucial resources that support the capacity of women-led movements and organisations tackling GBV. Take, for example, the Sara chatbot, which offers 24/7 free information and guidance to victims or women at risk of violence in Central America. Or Kwanele South Africa, which uses its own ChatGBV bot to provide rapid legal services for women and children experiencing abuse. AI can also automate the organisational processes of underfunded women-led organisations to increase their efficiency. SisterLove, a small but mighty organisation that addresses the health needs of women globally, massively scaled its capacity to create communications content using AI tools from Zapier and OpenAI.
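
Neither Sara’s nor Kwanele’s internals are public; as a purely illustrative sketch, a minimal resource chatbot can start from simple keyword-based intent routing. All intents, keywords, and responses below are placeholders.

```python
# Illustrative sketch of keyword-based intent routing for a GBV
# resource chatbot. Intents, keywords, and responses are placeholders,
# not the design of any real service.
INTENTS = [
    # Checked in order, so the highest-stakes intent comes first.
    ("emergency", {"danger", "emergency", "right now", "help me"},
     "If you are in immediate danger, contact local emergency services."),
    ("legal_help", {"lawyer", "legal", "court", "protection order"},
     "Here is how to request a protection order and find free legal aid: ..."),
    ("counselling", {"talk", "scared", "support", "counsellor"},
     "Free, confidential counselling is available 24/7 at: ..."),
]

FALLBACK = "I can help with emergencies, legal aid, or counselling support."

def respond(message: str) -> str:
    """Return the reply for the first intent whose keywords match."""
    text = message.lower()
    for _name, keywords, reply in INTENTS:
        if any(keyword in text for keyword in keywords):
            return reply
    return FALLBACK

print(respond("I need a lawyer"))  # -> legal_help reply
```

A production service would sit on top of far richer language understanding, escalation paths to human caseworkers, and strict privacy safeguards.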

How to Form Effective Norms and Standards on AI, Gender, and Peacebuilding

The international community is trying to understand the best approach to managing and deploying AI; tech companies can be either part of the problem or the solution. Now, they must partner with peacebuilders to formulate and operationalise AI standards that prevent and reduce the harmful gendered and violent impacts of the technology and ensure the promotion of peace and gender equality globally. 

First, AI companies must consult with women-led communities and organisations in conflict-affected and fragile contexts to understand how their tools can drive or prevent gendered violence in specific local contexts. Many AI language models, for instance, lack understanding of local languages across regions like Southeast Asia. This inability to contextualise online discussions can lead to AI tools overlooking context-specific misogynist language that could provide early warnings for violent extremism, violent conflict, and atrocities. By learning from diverse women peacebuilders, AI companies can train natural-language processing tools in understudied languages to better understand how gendered discussions can lead to online and offline violence globally. AI companies should therefore centre conflict sensitivity—which recognises any actor within a conflict system can have intentional or unintentional impacts that either worsen violence or contribute to peace—in their technologies. 
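
A hedged sketch of the baseline such training could start from appears below, using character n-grams so the model does not depend on a tokeniser for the local language. The tiny inline dataset is an English-language placeholder; real work requires large, community-annotated corpora and, in practice, multilingual transformer models.

```python
# Baseline sketch: a supervised classifier for harmful gendered
# language, trained on community-annotated examples. The four inline
# examples are placeholders standing in for a real annotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "women should not be allowed to speak in public",  # placeholder
    "the market reopens tomorrow morning",             # placeholder
    "she deserves whatever happens to her",            # placeholder
    "the council meets again on friday",               # placeholder
]
labels = [1, 0, 1, 0]  # 1 = harmful gendered language, 0 = benign

# Character n-grams sidestep the lack of reliable tokenisers for many
# understudied languages.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

flagged = model.predict(["new post to screen"])
print(flagged)  # array of 0/1 labels
```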

AI companies can also partner with women peacebuilders to demystify AI, making the technology actionable and practical for conflict prevention. By providing resources and funding, AI companies can also design bespoke tools to enhance the impact of women peacebuilders across different conflict-affected contexts.

After meeting with President Biden, seven leading tech companies announced voluntary commitments to the safe use of AI, such as risk evaluations and marking AI-generated content. These commitments must be heeded. In addition, tech companies should partner with governments and multilateral institutions to help policymakers use AI to prevent conflict and mitigate its negative impacts. AI engineers can help diplomacy, development, and defence agencies promote gender-equal conflict analysis, EWER mechanisms, mediation and dialogue processes, and capacity building for peacebuilding organisations. To address gendered disinformation, generative AI tools should include watermarks or embedded metadata that identify AI-generated content and assist in debunking AI-produced disinformation. Companies should also avoid training AI models on personally identifiable information, such as facial images, to protect women from targeted online abuse.
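
One concrete form such identification can take is signed provenance metadata attached to generated content. The sketch below uses a generic HMAC scheme for illustration; it is not any company’s actual watermarking method, and the key and model name are placeholders.

```python
# Sketch: attaching verifiable "AI-generated" provenance metadata to
# content using an HMAC signature. This is a generic illustration,
# not any vendor's deployed watermarking scheme.
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held-signing-key"  # placeholder key

def tag_as_ai_generated(text: str, model_id: str) -> dict:
    """Bundle content with signed provenance metadata."""
    payload = {"content": text, "model": model_id, "ai_generated": True}
    serialised = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, serialised, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_tag(bundle: dict) -> bool:
    """Check the signature so the label can be trusted downstream."""
    serialised = json.dumps(bundle["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialised, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])

bundle = tag_as_ai_generated("Generated news summary...", "demo-model-1")
assert verify_tag(bundle)
```

A deployed scheme would use public-key signatures, or a standard such as C2PA content credentials, so third parties could verify labels without the provider’s secret key; and because metadata can be stripped, statistical watermarks embedded in the content itself are also an active area of research.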

Finally, AI companies will need to address gender biases in their organisational cultures through concerted efforts to recruit women of all identities at all levels. By involving women, girls, and gender-diverse individuals across the design, testing, and deployment of AI technologies, technology companies can ensure gender sensitivity within their internal corporate processes. 

AI can be a weapon or a tool. It can either multiply the strength and effectiveness of gender-sensitive peacebuilding or create an entirely new generation of gendered drivers of violence and conflict. The tech community must seize this inflexion point to put the future of AI on the right track.

Nick Zuroski is Manager for Policy & Advocacy at the Alliance for Peacebuilding. He has extensive experience at the intersection of peace and stability, gender equality, and grassroots-oriented understandings of human security. In 2021, Nick received his M.A. in International Affairs — concentrating in Global Gender Policy and International Law & Organizations — from The George Washington University’s Elliott School of International Affairs, and his B.A. in Mandarin Chinese and World Politics from Hamilton College in 2017.