
The Challenges and Potential of Harnessing Technology for the Prevention of Gender-Based Violence and Violent Extremism 

25th November 2022 Jacqui True

This Insight is part of GNET’s Gender and Online Violent Extremism series in partnership with Monash Gender, Peace and Security Centre. This series aligns with the UN’s 16 Days of Activism Against Gendered Violence (25 November-10 December).

Introduction

Technology-facilitated gender-based violence (TFGBV) is a wide-ranging umbrella term. It encompasses many subtypes of violence, harassment and abuse directed at someone, or arising, because of their gender identity. The violence is enabled or assisted by digital technologies and includes abuse that takes place both online and offline. Like gender-based violence (GBV), TFGBV targets cis women, trans women, non-binary and gender-diverse individuals because of their gender identity, and can cause significant harm and suffering. While TFGBV occurs throughout society, the UN has noted a growing number of extremist groups engaging in online harassment of women, and there is some emerging evidence that TFGBV precedes real-world violence against women and girls. Evidence also suggests that some groups are at heightened risk of TFGBV, including young women, human rights defenders, women in politics, journalists, and women from marginalised groups. 

While GBV is an everyday occurrence, TFGBV is exacerbating this violence in new ways. TFGBV among extremist groups is particularly difficult to address because effective means of exposing or eliminating it have not yet been found. There are also considerable challenges in determining when and why harmful online behaviours may escalate into violent extremism, and validated online early warning indicators have yet to be developed in this field. Could content promoting traditional ideals of womanhood for the reproduction of a racial or national group, or spreading explicit images of sexual violence against minority women, for example, be signs of potential radicalisation to violence? We know that perpetrators consume sexually violent and violent extremist videos and images, which may normalise the use of violence, but establishing the connection between these types of online violence in order to prevent and counter acts of offline violence still eludes us. As a result, some digital platforms do not perceive it as a pressing issue. This Insight explores these issues, outlining what TFGBV is, how it is exploited by extremist groups, and the potential for technology to counter both TFGBV and extremism. 

What is TFGBV? 

Currently, there is no internationally agreed definition of TFGBV. However, the United Nations Special Rapporteur on Violence Against Women and Girls defines online GBV against women as encompassing “any act of gender-based violence against women that is committed, assisted or aggravated in part or fully by the use of ICT … against a woman because she is a woman, or affects women disproportionately”. This violence takes many forms, including sexual harassment, stalking, Zoom-bombing, image-based abuse, trolling, doxing, and hate speech. While many of these forms are not unique to the cyber landscape, there is emerging evidence that technology has made it easier to commit traditional forms of GBV, to incite political violence through the targeting of women and girls, and to commit GBV in new ways, such as using Artificial Intelligence (AI) to create deepfake pornographic videos.

In 2020, the Economist Intelligence Unit found that 85% of women worldwide had experienced or witnessed this kind of violence, with 38% of women and 58% of young women and adolescent girls reporting personal experiences of victimisation online. For the Asia Pacific region, the prevalence is even higher, with the same report finding that 88% of women have experienced or witnessed online violence. The latest research from the Pew Research Center shows that women experience higher levels of specific forms of technology-facilitated violence than men, including sexual harassment and stalking. Our 2022 study, which included a nationally representative survey of technology-facilitated violence in Australia, found that 51% of Australian women have experienced this type of violence in their lifetime. This study also found that women are more likely to experience technology-facilitated violence from an intimate partner, and report experiencing greater adverse impacts than men.

Current research overwhelmingly illustrates that victim-survivors of TFGBV experience marginalisation and inequality, especially due to intersections of gender, race, indigeneity, disability, sexuality, and sexual and gender-minority status. These intersecting identity and inequality factors compound the experiences of TFGBV, fuelling misogynistic, sexualised, racist, ableist and homophobic abuse. For example, Indigenous and First Nations women and girls have been found to experience high levels of racist and misogynistic harassment online. In Canada, digital technologies have been used in facilitating the trafficking of Indigenous women and girls. Studies have also found that migrant and refugee women are at particular risk of TFGBV. In the Middle East, Asia and Africa, some forms of TFGBV have also been connected with “honour-based violence”. Other research has found that LGBTQIA+ women and girls are disproportionately targeted by specific forms of TFGBV and that LGBTQIA+ women and girls of colour experience more online harassment than their white peers.

TFGBV as a Tool of Extremist Groups 

TFGBV has also become a key feature of extremist groups’ online conduct. Research, including our own, has found illicit online networks mobilising hate against women. Violent extremist groups, such as Islamist groups in Indonesia, explicitly target women online with gendered messaging, encouraging women in conservative societies to join a community of belonging via websites that, at first glance, are not explicitly connected to violent extremism. Images of flowers, kittens, and discussions of domestic, intimate or family-related topics draw women into such communities, which seek to radicalise them to support or enact violence. Online coverage of GBV is used to create sympathy for the suffering of Muslim women overseas. In the USA and other western countries, women are also participating in online groups that encourage them to embrace their traditional roles as wives and mothers and to contribute to the reproduction of the white race and far-right nationalist causes. Such online groups echo the ‘Great Replacement’ conspiracy theory that inspired Brenton Tarrant’s manifesto and terrorist acts. Cynthia Miller-Idriss has shown how YouTube cooking channels create a soft online entry point for women into extremist ideologies and into playing supportive roles to offline violent groups, while TikTok videos by celebrities such as the British-American former kickboxer Andrew Tate bombard young men with misogynistic messages that promote hitting and choking young women to keep them in their place as men’s property.

Harnessing Technology to Prevent and Counter GBV and Extremism

In some contexts, social media platforms and digital technologies have been held responsible for facilitating and increasing GBV, including amongst extremist groups. This is not surprising. It is easy to blame technology for the violence, and even easier to blame the digital platforms on which GBV occurs. In the last 12 months alone, we have witnessed data collection scandals involving social media platforms and the mobilising of online hatred based on race, religion and gender. Linkages have been documented between certain platforms’ algorithms and radicalisation or extremist messages and networks. Prioritising user rights over gender equality, and even over national security, may further the profits and expansion of digital platforms. But when we solely blame the technology or the platforms on which GBV occurs, we remove accountability from perpetrators, from support services and from states. We also create an environment where people’s civil rights may be limited and abused by states as a response to perceived civil disobedience. The recent shutdown of internet access and of digital platforms such as WhatsApp and Instagram in response to Iranian women protesting the death of Mahsa Amini and the policing of women’s bodies and freedoms provides one example. Addressing extremist use of TFGBV needs to be connected with broader strategies and policies to combat GBV, and understood as harmful to communities and society as a whole, not just to individual victim-survivors.

Such approaches also miss the potential of technology to respond to, prevent and disrupt GBV. The #BeOurVoice and #MeToo campaigns are examples of collective feminist-inspired movements that raise awareness of GBV and provide support for victim-survivors, sharing the messages and lived experiences of millions of women and girls. These movements have gained worldwide attention, support and strength precisely through the use of digital technologies. 

There have been various initiatives enacted to prevent and respond to TFGBV. At a basic level, most digital platforms have community standards users must adhere to and reporting mechanisms for abusive and harassing behaviours. Laws also capture a range of TFGBV behaviours, including harassment, stalking, hate crimes and image-based abuse, meaning perpetrators who are found guilty face state-imposed punishments; the offences and punishments vary across jurisdictions (see e.g., Pacific Region, Australia, the US, Asia, the UK).

One clear way we are beginning to see technology being harnessed to prevent GBV is through a Safety By Design approach, in which technology companies and digital platforms minimise online threats by anticipating and eliminating harms before they occur. This proactive approach enables user safety and risk mitigation to be embedded into content and platform design at the outset, fostering more positive online experiences. 

AI technologies are also being used proactively to prevent and detect misogynistic and potentially abusive content online. This includes safety features like user photo verification, ‘see fewer posts like this’, and the blurring of nude images, which requires the user (receiver) to decide whether to view, block and/or report the image. Some platforms have also begun trialling a range of innovative safety measures to actively prevent TFGBV, such as automatically detecting comments that may be misogynistic, alerting the user to their potentially offensive nature, and asking whether they are sure they want to post them. Other innovations include the automated detection of harmful language in intimate messages on digital dating platforms, where a real-time warning prompts users to think twice about what they are sending before they send it. This is combined with an automated check-in with the receiver of that message, who has the option to report the sender to the platform if they find the message offensive. Many platforms have also introduced safety centres that provide tools, support and resources on TFGBV. Most recently, one major dating platform released its lewd-image detector as open source to encourage all digital platforms and technology companies to adopt such tools and eliminate cyber-flashing and its harms.
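To illustrate the interaction pattern behind these pre-send warnings, the sketch below shows a deliberately simplified version of the flow: a draft message is screened before posting, and the sender is prompted to reconsider if it is flagged. This is a toy illustration only; real platforms use trained machine-learning classifiers, not the small keyword lexicon assumed here, and the function names are hypothetical.

```python
# Toy sketch of a pre-send "think twice" check, as described above.
# The keyword lexicon is a stand-in for a trained abuse classifier;
# it only demonstrates the screen-then-prompt interaction flow.

ABUSIVE_TERMS = {"worthless", "ugly", "stupid"}  # illustrative placeholder lexicon


def flag_message(text: str) -> bool:
    """Return True if the draft message contains potentially abusive language."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & ABUSIVE_TERMS)


def pre_send_check(text: str) -> str:
    """Simulate the real-time warning a sender might see before posting."""
    if flag_message(text):
        return "This message may be hurtful. Are you sure you want to send it?"
    return "OK to send."
```

A production system would replace `flag_message` with a classifier scored on context, not isolated words, and would pair the sender-side prompt with the receiver-side check-in and reporting option described above.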

Challenges and the Future of Preventing and Responding to TFGBV

There remains a wide range of challenges in responding to and preventing TFGBV and violent extremism. It is evident that both men and women are consuming violent extremist and terrorist content online in South-East Asia; however, few online disengagement and redirection programmes are targeted at women and girls, despite their increasing perpetration of violent extremist crimes.

It is clear that extremists utilise TFGBV within their groups, but countering it is not straightforward because GBV continues to be a part of our society. Countering TFGBV requires not only addressing extremist content, but everyday GBV committed by those who we would not consider extremists.

Much of the research aimed at countering and preventing TFGBV has, to date, focused on cis women in western countries. Indeed, some scholars have begun to criticise the often over-simplified approach to GBV research that limits demographic data collection to those who identify as ‘male’ or ‘female’, restricting the development of effective intersectional approaches to preventing GBV. This approach overlooks the specific forms and drivers of GBV for gender-diverse individuals. In this regard, future research into TFGBV would benefit from engagement with a more transnational and intersectional gendered lens that extends beyond western countries. 

In the absence of effective or trusted mechanisms, women undertake extensive ‘safety work’, investing time and resources into efforts and strategies to minimise and avoid violence, and often, to disengage from online spaces. Key challenges exist for preventing, disrupting and responding to TFGBV in countries where the Internet is not freely accessible, and in repressive and conflict-affected contexts where state agents enact shutdown regimes. These areas must be the focus in exploring the harms, impacts, responses and prevention of future TFGBV, and importantly, in harnessing technology to help interrupt GBV and violent extremism. Rather than being a space of subjugation, violence and silencing, digital technology should act as a mechanism for the advancement and amplification of female empowerment and gender equality.