Introduction
On 16 December 2024, the UK's government-backed communications regulator, OFCOM, published the Illegal Harms Codes, a collection of codes of practice for combatting illegal harms in the online space. This followed the introduction of the UK Online Safety Act 2023, which set out new legal duties surrounding the exploitation of online platforms by terrorist actors and the steps expected of platform providers to combat it. OFCOM has responded to the Online Safety Act by implementing new regulations and expectations for platforms, signalling a changing landscape for countering extremist and terrorist content online.
This Insight aims to provide a summary of the features of the new OFCOM guidance that are most significant to the P/CVE community. Specifically, it examines the guidance in the Illegal Content Codes of Practice for user-to-user services. This Insight begins with an outline of the new provider requirements, listing what is expected of tech companies in the coming months to adhere to the new UK regulation. This is followed by an analysis of the ongoing issue of defining illegal content online in light of new legislation, with a final note on OFCOM’s admirable interest in hashing technology as an emerging tool to moderate terrorist content.
Expectations of Providers
Before the Illegal Harms Codes can be properly implemented in the summer of 2025, OFCOM requires platforms to meet the following conditions:
Providers have been given three months to complete a new ‘illegal harms risk assessment.’ Through this framework, they are then categorised as small or large services that are low-, single- or multi-risk, with different trust and safety expectations for each. Platforms are liable for fines of up to £18 million or 10% of the company’s worldwide revenue (whichever is greater) if they fail to properly assess risk and take appropriate steps in response, or if they are found to have neglected their responsibility to protect their users from online harms.
While each platform is expected to adhere to specific criteria depending on its risk assessment categorisation, there are some regulations which apply across all providers:
Likely the most significant aspect of the guidance in the context of P/CVE is that, for the first time, providers now have a legal responsibility to protect their users from illegal online harms. This is a strong move towards holding technology companies accountable for the content they host and the potential harms that may arise from it.
In another move towards platform responsibility, the legislation emphasises the culpability of the individuals leading trust and safety processes, as opposed to the company as an overall body, placing duty directly at the feet of those in power. Interestingly, though, the responsibility does not lie with the head of the company but with the senior individual made directly responsible for illegal harms within the organisation. This may change the nature of trust and safety employment, given the weight of new responsibilities placed on the heads of tech platforms’ Trust and Safety teams.
Content moderation teams have become an increasingly significant point of contention within both large-scale and small-scale tech platforms. Some of the largest providers have recently taken steps to downscale their trust and safety teams, as well as other infrastructure and policies designed to assist the moderation process. For example, Meta recently announced the removal of fact-checking on its platforms, a move met with a barrage of discontent from trust and safety experts engaged in a long-term battle to limit mis- and disinformation online. New pressure from the OFCOM guidance has the potential to challenge platforms that wish to similarly scale down their trust and safety teams.
Terrorism, like child abuse online, is considered a significant enough threat to warrant special regulations. New rules state that, alongside their general plans to prevent online harms, platforms must provide specific plans detailing how they aim to limit terrorist activity on their services. Terrorism is also listed as one of 130 ‘priority offences’ in the Online Safety Act, which OFCOM groups into 17 categories, including fraud and financial offences, human trafficking and extreme pornography, to name a few (Overview of Illegal Harms, Table 1.1).
Overall, OFCOM’s guidance clearly outlines the need for tech companies to take responsibility for the harms their platforms may host and for the well-being of their users. This is welcome progress, providing encouraging foundations for much-needed frameworks on tech companies’ role in preventing the spread of terrorist content online.
New Guidelines, Old Issues
However, while the expectation that tech companies assume responsibility for user harms is a welcome move towards greater accountability, tech companies may still struggle to comply with the legislation because definitional issues have not been fully addressed. Macdonald and Staniforth (2023) outlined this issue in an investigation into the relationship between technology companies and law enforcement in regulating terrorist content, establishing that a key aim should be to “determine a shared language” (p. 27) for the legal parameters of terrorist content. Compliance can be markedly difficult where policy is unclear about what exactly qualifies as terrorist content legally and, thus, what warrants removal; research from GIFCT supports the view that the clearer the definitional boundaries of harmful online content, the easier it is for legal frameworks to be developed and for tech companies to take action. Where tech companies are left to decide what content is considered extremist, or inappropriate for the public forum, they acquire an enhanced power to determine the parameters of important ideals, which is arguably too heavy a responsibility for a tech company.
There are two major documents within the Illegal Harms Codes which relate to defining terrorist content. The first is the aforementioned Illegal Content Codes of Practice for user-to-user services, which states on page 75 that ‘Schedule 5 [of the Online Safety Act 2023] lists the relevant offences for determining when content is terrorist content’. Schedule 5 of the Online Safety Act maintains a definition of terrorist content rooted in specific sections of the Terrorism Acts 2000 and 2006 (for the full list of specific sections, please refer to Schedule 5). The other major document is the Illegal Content Judgements Guidance (ICJG), Section 2 of which details the definition of terrorism in the context of the online space. There is a clear range of guidance here, more than could be fully covered in this Insight alone, and an obvious attempt to understand and provide guidance on terrorist content within the online space. However, it again refers broadly to the Terrorism Acts 2000 and 2006 for its guidelines.
Therefore, in terms of defining what content qualifies as ‘terrorist’, while the ICJG makes some promising moves into the online space, the concept is still very much steeped in content which fulfils definitions of terrorism from the early 2000s. There is thus limited evidence of progress in truly clarifying these early definitions for a new internet age. As a result, vague or outdated legal definitions of terrorist content may continue to hinder tech companies’ ability to moderate their platforms. Where tech companies are left to interpret potentially outdated definitions of terrorism on their own, there is considerable room for error. Upholding freedom of speech while denying harmful content a platform on which to thrive requires ongoing intervention.
That being said, it is important to acknowledge that defining terrorist content is not easy, and the lack of progress in producing an updated legal concept of ‘terrorist content’ is certainly not for lack of trying. It is a deeply complex issue that academics and practitioners alike have long attempted to tackle. Overall, in the same way that we cannot expect technology companies to simply ‘be more responsible’, it is difficult to expect legislators or academics to simply ‘define terrorist content’. The two pursuits exist in mutual balance: collaboration between platforms, practitioners and policymakers is required to produce both a workable concept of terrorist content and a willingness to regulate it.
New Perspectives on Moderation: Hashing
While definitional issues remain, there is one significant change which warrants praise for the OFCOM guidance: the consideration of hashing technology. In Protecting people from illegal harms online, Volume 4, which outlines how to mitigate risks, OFCOM explores recommendations for hashing technology to detect terrorist content (14.120). A hash, as explained by GIFCT, is a numerical representation of a piece of video or image content. When known terrorist content is hashed, labelled and stored in a database, tech companies can compare new material against those hash values to identify visually similar content on their platforms, which can then be regulated or removed in accordance with the platform’s trust and safety policy. Ultimately, hashing is about building a ‘library’ of visual content against which terrorist imagery and video can be compared and identified.
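To make the mechanics concrete, the short sketch below illustrates the general idea of perceptual hash matching in Python. It assumes the open-source Pillow and ImageHash libraries; the database entry, threshold and file name are hypothetical placeholders for illustration, not a representation of OFCOM’s or GIFCT’s actual systems.

```python
# A minimal sketch of perceptual hash matching, for illustration only.
# Assumes the open-source Pillow and ImageHash libraries:
#     pip install Pillow ImageHash
from PIL import Image
import imagehash

# Hypothetical 'library' of labelled hashes, standing in for the entries a
# shared hash database might hold. The hex value below is a placeholder,
# not a real hash of any known content.
KNOWN_HASHES = [
    (imagehash.hex_to_hash("d1c4d1c4e1c0f0e0"), "labelled-item-001"),
]

# Maximum Hamming distance at which two hashes are treated as a match.
# The threshold is a policy choice: lower values reduce false positives,
# higher values reduce false negatives.
MATCH_THRESHOLD = 8

def check_upload(image_path: str):
    """Hash an image and compare it against the known-hash library."""
    upload_hash = imagehash.phash(Image.open(image_path))  # 64-bit perceptual hash
    for known_hash, label in KNOWN_HASHES:
        distance = upload_hash - known_hash  # Hamming distance between the hashes
        if distance <= MATCH_THRESHOLD:
            return label, distance  # candidate match: route to human review
    return None, None  # no match found

# Example usage (with a hypothetical file name):
# label, distance = check_upload("user_upload.jpg")
# if label is not None:
#     print(f"Possible match with {label} at distance {distance}")
```

The matching threshold in a sketch like this is where the trade-offs discussed below come into play: a looser threshold catches more near-duplicates and edited copies, but increases the risk of flagging unrelated content.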
While hash-matching technology is not officially recommended in the first Codes of Practice, OFCOM is gathering evidence from stakeholders to investigate its use to ‘disrupt the pace and scale of dissemination of terrorism content’ (14.134). There is debate about whether to create an ‘in-scope’ hashing service, which operates internally within services, or whether to use a third-party hash-matching database (14.134). In the same section, OFCOM names GIFCT’s Hash-Sharing Database as the most prominent example of a third-party hash-sharing database which could be used.
Hashing technology is by no means a fix-all solution to regulating terrorist content online, but the discussions featured in the OFCOM guidelines show an encouraging step towards cross-stakeholder collaboration with NGOs like GIFCT to better protect users from online harms. OFCOM voices some understandable concerns regarding false positives and false negatives, the financial and human resource costs involved, and the long-standing worry that content moderation may infringe on freedom of speech (14.136). It calls specifically for evidence from stakeholders to address these concerns. However, the guidelines also acknowledge that on platforms already using hashing technology, it has shown the potential to facilitate ‘quick identification, review and removal of content’ (14.137). Hashing has proven effective particularly in the aftermath of terrorist incidents, when content and footage spread rapidly and efficient detection of visual media is paramount.
Conclusion
The new OFCOM guidelines indicate a policy move towards tech companies taking accountability for the spread of terrorist content on their platforms, increasing pressure on companies to invest both attention and resources in trust and safety. That being said, there is still work to be done by academics and legislators to create a more pragmatic concept of terrorist content, so that companies are provided with clear, updated definitions that exist in the context of the online space. OFCOM’s discussions surrounding hashing content are promising; while hash matching is not a perfect solution, it certainly provides a firm apparatus for tech-based moderation, encouraging cross-stakeholder engagement and addressing a central issue of identifying terrorist visual content.