For the AI Generation, We Need Education as Much as Regulation
30th August 2023 Jon Deedman

Introduction

In the field of responsible tech, contemporary digital technologies such as large language models (LLMs) are rapidly becoming a central consideration, as these AI-powered tools can be operationalised both by extremists and by those working to counter them. This has produced a digital ecosystem characterised by a complex dichotomy. On one side are those leveraging the emerging power of AI text and image generators to advance extremist ideologies and political goals. On the other are those who wish to use AI tools – such as natural language processing (NLP) or machine learning-based content moderation – to combat these activities.
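To make the second side of that dichotomy concrete, the sketch below shows what machine learning-based content moderation can look like in practice: a pre-trained text classifier scores posts and flags the highest-scoring ones for human review. It is a minimal sketch, not any platform's actual system; it assumes the Hugging Face transformers library, and the model name ("unitary/toxic-bert") is used purely as an illustration.

```python
# A minimal sketch of machine learning-based content moderation: a pre-trained
# text classifier scores posts and flags the highest-scoring ones for review.
from transformers import pipeline

# "unitary/toxic-bert" is an off-the-shelf toxicity model used here purely as
# an example; any comparable hate-speech or toxicity classifier would do.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def flag_for_review(posts, threshold=0.8):
    """Return (post, label, score) tuples for posts scoring above the threshold."""
    flagged = []
    for post in posts:
        result = classifier(post)[0]  # e.g. {"label": "toxic", "score": 0.97}
        if result["score"] >= threshold:
            flagged.append((post, result["label"], result["score"]))
    return flagged

if __name__ == "__main__":
    sample = ["Lovely weather in London today.",
              "An example post that a moderator might need to review."]
    for post, label, score in flag_for_review(sample):
        print(f"Flagged ({label}, {score:.2f}): {post}")
```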

In this Insight, I argue that the current pathways to restrain the use of AI by malicious actors are too reactive; they respond to threats only after they arise and seek to regulate technologies only once newly advanced tools are already in circulation. I suggest that beyond regulation alone, there is a need for extensive education in critical thinking and digital literacy. Such education can inoculate tech users against the manipulative and misleading effects of extremist AI content. While regulation remains an important tool – whether in a self-regulatory or (inter)governmental legislative model – I propose that a digital education agenda should be pursued alongside it. This bottom-up approach would fortify defences against extremist AI co-optation and complement top-down regulatory efforts to ensure an effective and responsible AI agenda.

State of Play

Contemporary developments in AI have upended an uneasy status quo within the tech sector, with the mass dissemination of LLMs and other neural networks for image, sound, and video generation. These breakthroughs have provoked significant ethical and practical concerns. Trained by scraping and collecting vast amounts of data, generative AI tools can respond to prompts or produce original material autonomously. These tools enhance efficiency, create textual and visual outputs without human intervention, identify faults and issues in inputted data, and help organise information into structured systems. As neural networks develop, these capacities will only expand further. As with many digital technologies, generative AI is neutral – neither inherently good nor bad; it is merely a tool. Contention stems from those who seek to exploit AI's potential for violent or radical purposes, mirroring similar challenges experienced in the development of Web 3.0.

While these technological developments have benefited writers, practitioners, and small businesses, they have also become a powerful tool for online extremists. Generative AI gives malicious actors the means to produce propaganda and harmful content, including deepfake images and videos, hate literature, tactical summaries, and logistical recommendations. In addition, the capacity to develop derivative neural networks has empowered extremist actors to create or co-opt their own LLM chatbots to amplify their hateful biases and recruitment efforts.

As it stands, there is a concerning lack of legal scrutiny or accountability for the misuse of AI tools, and this regulatory vacuum allows their continued exploitation for violent or hateful ends. Despite regulatory measures currently under development, such as proposals to require licences for building advanced AI systems, this dynamic has created an operational opening for hateful actors and extremists online. As generative AI technologies continue to progress at pace, legislative solutions increasingly fall behind, and the effectiveness of regulatory measures becomes ever more uncertain as they lag further behind the technologies they seek to regulate.

The Need for Greater Action

Regulatory or legislative solutions must fully recognise the dual nature of AI technology, where its capacity for good is matched by its capacity for harm. Every policy author using ChatGPT to summarise new legislation may be mirrored by an extremist leveraging Bard to improve the readability of their manifesto, for example. The generative or ‘creative’ capacity of AI tools poses a particular threat, enabling hate groups to create harmful content such as disinformation and visual propaganda. While the use of AI tools is unlikely to be evenly divided between ‘safe’ users and bad actors, even a ‘small’ proportion of AI co-optation cannot be ignored. This is particularly pertinent given the ever-increasing potential for extremist and terrorist use of AI tools.

While some measures do exist to prevent the appropriation of LLMs, like OpenAI’s Moderation API for ChatGPT, these can often be circumvented by rewording or rephrasing queries to avoid detection. The existence of multiple distinct LLMs also allows bad actors to switch between chatbots if one is updated to combat extremist activity. This highlights the need for a more centralised and cooperative model of AI regulation and legislation, to ensure that extremists’ use of AI is tackled equally across distinct neural networks. Governmental collaboration with international organisations, to design and implement legislation that enforces minimum security standards, does have potential. However, at present, (inter)governmental systems of regulation and monitoring are slow and ineffective compared to the speed of technological development. Lacking the backing of major AI developers, these regulatory measures alone are not enough to effectively combat this issue.
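As a concrete illustration of that first line of defence, the sketch below screens a user prompt with OpenAI’s moderation endpoint before it reaches a chatbot. This is a minimal sketch using the OpenAI Python SDK, not a description of any platform’s actual pipeline; the exact policy categories returned depend on the current API version, and, as noted above, such filters can still be evaded through careful rewording.

```python
# A minimal sketch of pre-screening prompts with OpenAI's moderation endpoint
# before they are passed to a chatbot. Illustrative only; production moderation
# pipelines layer many more signals on top of a single API call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt is flagged by the moderation endpoint."""
    response = client.moderations.create(input=prompt)
    result = response.results[0]
    if result.flagged:
        # Surface the triggered policy categories for human auditing.
        print("Blocked prompt; categories:", result.categories)
    return result.flagged

if __name__ == "__main__":
    if not screen_prompt("An example user query to check before answering."):
        print("Prompt passed moderation; safe to forward to the chatbot.")
```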

Self-Regulatory Models

Tech-led options offer alternative regulatory pathways. In the first instance, AI companies could be more transparent about how their neural networks are constructed, helping to identify vulnerabilities. However, this level of transparency may not be practical, given both the desire of private entities to maintain a competitive edge and the risk of misuse. Nevertheless, shining light on these powerful systems would give governments and intergovernmental bodies the capacity to legislate on and regulate AI technology more effectively and in a more timely fashion, as they would better understand what they are attempting to regulate.

A self-regulatory model with moderate governmental oversight – such as the voluntary framework agreement between the White House and seven large AI-developing companies – could limit bad actors’ access to generative AI tools. If such a model were also to engage multiple stakeholders, as members of the newly announced Frontier Model Forum have suggested they intend to do, it would help ensure that those involved are fully informed of the technological and ethical limitations and ramifications of AI regulation across all relevant sectors.

Even with a more effective, self-regulatory, multi-stakeholder framework for the regulation of generative AI tools, the issue of extremist co-optation of AI cannot be considered completely resolved. Extremist actors may still find ways to corrupt existing neural networks, if not create their own, as code interpretation becomes more democratised and malicious alternatives are sold on the dark web by hackers. Open-source LLMs, like Meta and Microsoft’s Llama 2, even allow the underlying models and code to be used freely, for good or ill. As such, effective solutions should acknowledge the inevitability of extremist operational use of neural networks and the spread of radical, AI-generated content.

Digital Education

Instead of focusing entirely on regulatory solutions, a more effective response would be to inoculate the population against the potentially detrimental effects of AI proliferation through targeted education. By equipping populations with critical thinking and digital literacy tools, propaganda and hate speech can be met with scepticism, allowing individuals to better understand the origin of such content. Educational programmes could help people question what they encounter online, enabling them to discern deepfakes and approach them with a critical mindset. Instilling a healthy incredulity towards online content may help ensure that AI-produced (or even human-produced) hate or recruitment material is met with scrutiny rather than acceptance.

Greater digital literacy among users can also reinforce technical measures designed to limit bad actors’ co-optation of AI at the global level. For example, teaching users to read and use digital watermarks, as agreed within the aforementioned voluntary framework between the White House and the seven AI companies, could reduce reliance on potentially biased sources of AI instruction and improve accountability. In the long term, fostering knowledge and critical thinking skills around generative AI could also help cultivate a culture of responsible AI usage.
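To show the kind of hands-on check a digital-literacy curriculum could teach, the sketch below inspects an image’s embedded metadata for a provenance tag before the user trusts or shares it. This is a simplified illustration: the metadata key "ai_generator" and the file path are assumptions, and real watermarking and provenance schemes (such as C2PA-style content credentials or vendor-specific watermarks) use their own formats and dedicated verification tools.

```python
# An illustrative provenance check of the kind a digital-literacy lesson could
# teach: look for an embedded tag declaring that an image was AI-generated.
# The "ai_generator" metadata key is an assumption for illustration; real
# watermarking schemes use their own formats and verification tooling.
from PIL import Image

def provenance_note(path: str) -> str:
    """Return a short note on any AI-provenance metadata embedded in a PNG."""
    with Image.open(path) as img:
        metadata = getattr(img, "text", {}) or {}  # PNG text chunks, if present
    generator = metadata.get("ai_generator")
    if generator:
        return f"Image declares it was generated by: {generator}"
    return "No provenance metadata found – treat the image with extra scepticism."

if __name__ == "__main__":
    print(provenance_note("example_image.png"))  # hypothetical file path
```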

Conclusion

In summary, regulation alone is not a solution to extremist co-optation of generative AI tools. Any future AI regulatory framework must be complemented by bottom-up, user-level preventative measures. Such measures can help ensure that AI-generated propaganda and extremist-run chatbots are ineffective recruitment tools, and that average users understand how to use AI safety tools appropriately. While we cannot prevent the widespread operationalisation of generative AI, nor the use of these tools by bad actors, we can strive to limit the effectiveness of such misuse.

Jon Deedman is a recent postgraduate specialising in far-right terrorism and extremism, with current work focusing on the nexus between online and offline manifestations of extremist ideology.