
The Deepfake Threat to the 2024 US Presidential Election

27th March 2024 | Ella Busch and Jacob Ware

Introduction

As the 2024 US election campaign ramps up, artificial intelligence (AI) and deepfakes are already having a corrosive effect on the democratic process. Ahead of the New Hampshire Democratic primary, an AI-generated robocall impersonating President Biden told voters not to participate in the primary. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the faux Biden declared. “Your vote makes a difference in November, not this Tuesday.”

The call was linked to an operative working for Biden’s rival candidate, Minnesota Rep. Dean Phillips. The operative claimed to have created the call not for political purposes but to raise awareness of the dangers of AI in politics. “This is a way for me to make a difference, and I have,” he declared. On this last score, at least, he is right: artificial intelligence has arrived, and its impact on politics and elections is already being felt. Perhaps more concerning still, these technologies will also be eagerly adopted by extremists across the ideological spectrum, who will use them to advance their radicalisation and recruitment efforts at greater speed and scale. This GNET Insight examines the rising role of AI in both mainstream politics and its violent extremes before outlining possible countermeasures to this new threat.

A Growing Catalogue of AI in Politics

Of course, the New Hampshire robocall was far from the first time political actors or extremists have used deepfakes and artificial intelligence to advance their ideologies and recruit newcomers. The Islamic State, for instance, has repeatedly experimented with AI-generated propaganda. Far-right extremists have likewise employed AI in their visual propaganda efforts, paradoxically using this new technology to recreate content celebrating Nazi Germany during World War II, overlaid with modern aesthetics. Nor is the incident in New Hampshire the first time these technologies have been used to disrupt elections. Campaigns in Indonesia and Slovakia have already witnessed the widespread use of AI-generated and deepfake content designed to sway election outcomes, largely by portraying opposition candidates as nefarious and corrupt.

Given the increasingly normalised use of these technologies, terrorists and violent extremists are quickly adjusting and incorporating them into their regular propaganda techniques. In an article published for the International Centre for Counter-Terrorism in The Hague, we outlined the three core areas where far-right extremists have benefited from the technology: the incitement of violence, the spread of new ideological ammunition, and the recruitment of new radical individuals. Each of these end-goals was sought by far-right actors who circulated a video purporting to show London mayor Sadiq Khan announcing the cancellation of annual Armistice Day commemorations in favour of a pro-Palestine march after October 7. On 4chan, meanwhile, far-right networks spread guides on using AI to generate new anti-Semitic propaganda. There are also possible tactical benefits, especially as the technology improves: extremists could shape perceptions of the success or failure of violent incidents, or accelerate the course of violent action as it unfolds. Generative AI represents a new frontier in communications technology, one which extremists will surely and eagerly exploit.

However, perhaps the greatest looming danger, particularly in this divided and hostile political climate, is the so-called Liar’s Dividend: as scepticism about content and information becomes more widespread, politicians and other actors can dismiss real content as fake. This dynamic would be particularly damaging around elections, where politicians could deflect legitimate scandals by claiming the evidence against them is fabricated. The impact of deepfakes and AI-generated content thus extends far beyond the fakes themselves.

Next Steps – Policy Recommendations and CVE Strategies 

Safeguarding the integrity of the 2024 elections will require protection against deceptive deepfakes, especially those deployed by the far right. The most important antidote to deepfakes in this context will be education across all ages, communities, and political affiliations. Public-private partnerships may be leveraged to raise public awareness of deceptive election-related media, with government incentives offered to technology companies and media outlets to improve public understanding of what deepfakes are, how to detect them, the contexts in which they may be used, and how to report suspicious material.

Such an endeavour may follow the framework presented by Project Origin, a collaboration between Microsoft, the New York Times, CBC/Radio-Canada, the BBC, and other organisations to enhance content provenance, or the Content Authenticity Initiative, which promotes industry best practices. Another educational avenue, as suggested by a September 2023 NSA-FBI-CISA joint paper on Deepfake Threats to Organisations, would be to encourage employers and universities to provide deepfake training to students and personnel through public resources such as those available from Northwestern University and Microsoft. Such programmes should direct attendees to report suspicious content, particularly content threatening national security, to entities such as the FBI or the NSA Cybersecurity Collaboration Center. Broader deepfake-education initiatives may be adapted from these frameworks and implemented at age-appropriate levels in K-12 institutions to protect the next generation of internet users from malign synthetic media.

Lawmakers should move quickly and diligently to enact legislation banning synthetic media that appropriates the likeness or voice of a political candidate, or that otherwise uses deception to undermine election security, with penalties scaled to the severity of the content and its effects. Such measures should be in place well ahead of the November elections and supplemented by broader efforts against hateful or otherwise deceptive content targeting any US citizen. To avoid First Amendment free speech challenges and facilitate bipartisan support, legislation should focus on the underlying technology and intent rather than on the nature of the speech itself.

Such efforts have already begun where election security is concerned. In response to the deepfake Biden robocall, the Federal Communications Commission (FCC) voted in February 2024 to ban such robocalls, a first step that must now be extended to other forms of synthetic media. Once enacted, national-level laws should be promoted internationally under the auspices of bodies such as the United Nations, the European Union, and NATO, as well as tech-focused consortiums such as the Christchurch Call, to establish norms of international behaviour regarding the permissible use of deepfake technologies.

Concerning technology, efforts should be made to accelerate the standard currently being developed to enhance content provenance. C2PA, developed by the Coalition for Content Provenance and Authenticity, serves as a ‘nutrition label’ of sorts for digital content: an open technical standard that uses cryptography to bind tamper-evident details about a piece of content’s origin to the content itself. Google, Meta, and OpenAI have all recently signalled interest in the standard, suggesting that the technology community would widely accept such a measure. Broad adoption would make it difficult for malign actors to deploy deepfakes without detection, undermining their cause in the process. More controversially, counterterrorism practitioners could also weaponise the technology themselves, effectively turning deepfakes against those attempting to exploit them.
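To make the provenance mechanism concrete, the sketch below shows the core idea in miniature: a publisher binds origin metadata to a cryptographic hash of the content and signs the result, so that any later tampering with either the content or its provenance record is detectable. This is a simplified illustration only, not the C2PA specification itself, which defines a much richer manifest format embedded directly in media files; the helper names, manifest fields, and the choice of Ed25519 signatures via Python’s third-party cryptography library are assumptions made for the example.

```python
# Minimal sketch of content provenance signing, loosely inspired by the
# C2PA idea of binding origin metadata to content. This is NOT the C2PA
# specification; the manifest fields and helper names are illustrative.
# Requires the third-party 'cryptography' package (pip install cryptography).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(content: bytes, publisher: str, tool: str) -> dict:
    """Build a provenance record: who produced the content, with what tool,
    and a SHA-256 digest tying the record to these exact bytes."""
    return {
        "publisher": publisher,
        "generator_tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    """Sign the canonical JSON form of the manifest so any edit to the
    record invalidates the signature."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)


def verify(content: bytes, manifest: dict, signature: bytes,
           public_key: Ed25519PublicKey) -> bool:
    """Check the signature over the manifest, then check that the content
    still matches the hash recorded inside it."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    video = b"...raw media bytes..."
    manifest = make_manifest(video, publisher="Example Newsroom",
                             tool="CameraApp 2.1")
    signature = sign_manifest(manifest, key)

    print(verify(video, manifest, signature, key.public_key()))        # True
    print(verify(b"tampered", manifest, signature, key.public_key()))  # False: hash mismatch
```

In the standard itself, signed manifests travel embedded within the media file and chain back to trusted certificates, so a platform or browser can verify a piece of content’s provenance without contacting the original publisher.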

Violent extremists have always been eager and skilled adopters of new communications technologies and are already identifying innovative new applications of deepfakes and other generative AI. Counterterrorism practitioners can no longer afford to ignore these technologies, particularly as a fractious election year ramps up. Generative AI has arrived on the political stage. It’s time to respond.

Ella Busch is a researcher at Georgetown University studying Government and Psychology. She has a particular interest in domestic terrorism and hopes to specialise in security in the future.

Jacob Ware is a research fellow at the Council on Foreign Relations and an adjunct professor at Georgetown University’s Walsh School of Foreign Service and at DeSales University. With Bruce Hoffman, he is the co-author of God, Guns, and Sedition: Far-Right Terrorism in America. He serves on the editorial boards for the academic journal Studies in Conflict & Terrorism and the Irregular Warfare Initiative at the Modern War Institute at West Point.