
The Internet Consortium for Online Safety: How Collaborative Tech, Not Legislation, Could Prevent Harmful Content Proliferation

3rd May 2023 | Jon Deedman

Introduction

In a digital context, countering violent extremism (CVE) requires practitioners to grapple with the task of removing terrorist and violent extremist content (TVEC), with the ultimate goal of denying extremist actors access to online spaces. While this task appears straightforward on the surface, deciding what content is ‘extremist enough’ to prohibit – and what content is innocuous enough to allow – is a constant debate among representatives of tech companies, states, and supranational organisations.

The debate is characterised by a familiar and ever-present dichotomy: freedom of speech versus public safety. In the digital CVE space, one is weighed against the other to find a balance in which freedom of speech does not sanction the proliferation of extremism, while ensuring that the removal of content does not slide into authoritarianism. Nevertheless, existing moderation efforts are falling short: regulators, users and tech company executives remain unsatisfied, and damaging content and disinformation are still widely accessible. In this Insight, I suggest that a collaborative tech consortium may be the most effective way forward for removing nuanced or ‘borderline’ TVEC, and could in future extend to removing other damaging content such as child sexual abuse material (CSAM) or suicide-related content.

Lawful but Awful Content

‘Lawful but awful’ content (also known as borderline content), a term regularly used by CVE practitioners and others in digital moderation policy, refers to content that is legal but widely considered morally reprehensible or offensive. This includes racist comments, sexually explicit content, offensive memes, and harmful disinformation. One reason such content proliferates is that tech companies often do not have sufficiently clear or stringent moderation policies to address borderline content, or they lack the resources to engage in effective, large-scale ‘general monitoring’ content moderation. Additionally, because content moderation is closely tied to content legality, the problem is exacerbated by the fact that legality varies internationally. Content that may fall foul of hate crime laws, or be considered to support a designated terrorist organisation in one nation, could be protected political speech in another. This lack of uniformity in content acceptability is mirrored in tech policy: content can be acceptable on one site but not on another. The minimum expected standard of content removal is fairly low; existing legislation – such as Section 230 in the US – typically regards illegality as the minimum criterion for removal. Companies can also subscribe to external frameworks, such as the Global Internet Forum to Counter Terrorism’s (GIFCT) membership criteria, to showcase their willingness to take these threats seriously. Harmful content falling outside this minimum standard can therefore be freely available on one site but not another, just as it can be accessible in one country but not another. Where this inconsistency exists, and content can spread from one platform to another, ‘lawful but awful’ content thrives and disinformation germinates.

Much like the inherent definitional ambiguity and inconsistency of ‘terrorism’ as a concept, the collective understanding of ‘borderline content’ is convoluted. The prevailing issue with more nuanced and definitionally ambiguous content stems from how easily it shades into true extremist content that is both immoral and illegal. For example, hateful content can become blurred when shrouded in ‘edgy’ humour, as illegality is typically predicated on intentionality. Extremist actors weaponise edgy humour to convey ideologically extreme talking points, priming vulnerable audiences for more extreme messaging. Disinformation can likewise be deliberately operationalised by extremist groups and movements for the same ends. Having been exposed to such detached borderline content, a user can become subconsciously aware of, and even receptive to, extremist talking points. As borderline content, disinformation and edgy humour pass through the proscriptive mesh of current CVE policies and procedures. When this kind of ‘lawful but awful’ content is allowed to proliferate, social media users continue to interact with it, and seemingly innocuous online spaces become another medium for terrorist propaganda. This process persists because it operates within the grey space of definitional obscurity.

Existing Legislative Landscape

Various state and supranational entities have striven to engage with borderline content and reduce its negative impact. Over the last several years, the UK has attempted to implement the Online Safety Bill, which seeks to place a duty of care on tech companies to take responsibility for the harms their users might face. Initially, the bill included a specific provision demanding the removal of ‘harmful but legal’ content, but this has since been dropped from the latest draft. Instead, recent changes to the bill aim to hold tech executives personally accountable if they fail to prevent children from viewing ‘harmful content’. This issue is compounded when we acknowledge the impact such personal liability will likely have on public interest sites and personal servers that do not prevent the publishing of subjectively ‘harmful’ content. Similarly, the EU recently implemented its Digital Services Act (DSA), which seeks to hold tech companies responsible for the production and dissemination of illegal and harmful content. But national or supranational implementation of regulatory rules does not end the discussion over nuanced content; it further complicates the debate. For example, while the DSA may drive tech companies to better moderate content, it drastically impacts free speech and demands that tech companies operate as privatised censors on behalf of governments. Additionally, companies that do not actively comply with the legislation face possible bans. When legal strategies are used to prevent the proliferation of harmful content, the balance tips too far toward the authoritarian: tech companies are forced to overcompensate and over-moderate for fear of being held liable, creating potential legal peril for those whose job it is to ensure the removal of harmful content. To make matters worse, the slow-moving nature of governmental bureaucracy means that legal avenues for addressing borderline content struggle to keep pace with ever-evolving online trends.

Legal strategies to counter the spread of harmful content are further beleaguered by the geographical inconsistency of their application. While the UK and EU have pursued legislation to force tech companies to remove borderline content, the US is currently weighing Texas’ House Bill 20, which stipulates that sites with more than 50 million monthly users in the US must produce regular reports of removed content, create a complaint system, and disclose their content regulation procedures. The bill is intended to prevent social media companies from moderating debatably contentious content. In reality, this equally vague (but distinctly lax) regulation may encourage the proliferation of hateful content on social media sites, as platforms will be pushed to under-moderate their users’ content for fear of being sued under House Bill 20. This quandary could be further exacerbated by a possible narrowing of the longstanding Section 230 protections currently being considered by the Supreme Court. Such divergent regulation of harmful content will only further complicate the already vastly complex content moderation landscape. This confrontation between state-based legislation and tech-based self-regulation undermines the rights of users and fails to address the underlying issues such measures seek to rectify. Overall, state- and supranational organisation-led responses to borderline content serve to complicate the issue and are therefore not the optimal solution for ensuring the universal rights of users.

What Next?

If not led by states, then, how should such nuanced content be moderated? I suggest that the answer lies in a self-regulatory framework whereby tech firms agree on standard practices, definitions, and minimum criteria for the removal and reinstatement of borderline content. Such a framework is not only supported by academics and practitioners but has also received backing from tech firms themselves. For example, TikTok – when attempting to address rampant borderline content – introduced a rating system called Content Levels, and has since updated this ‘borderline suggestive model’ to improve its accuracy in detecting such content. There is also considerable evidence of the capacity for self-regulation by a cooperative tech sector. The formation of the Digital Trust and Safety Partnership in 2021, for example, sought to provide a forum for tech platforms to establish best practices and collectively act on trust and safety principles where such collaboration is possible. Overall, the shift toward self-regulation (with adequate guide rails to prevent cartelisation) is something tech firms have increasingly demonstrated both their support for and their capacity to deliver, unilaterally and collectively.

By implementing their own moderation agenda, tech companies relieve legislative bodies of this responsibility. Given its relative youth, the tech sector has demonstrated a remarkable degree of goodwill in attempting to prevent terrorist use of the internet, and its capacity for ethical problem-solving should not be underestimated. Nevertheless, the introduction of vast quantities of moderation policy does not necessarily settle the matter. As suggested earlier, the fact that tech companies have individual and occasionally divergent moderation policies can catalyse the spread of harmful content by allowing it to proliferate on some sites and not others. Additionally, the moral complexity of allowing unelected tech executives to act as gatekeepers of acceptable content must be taken into account.

As such, I join existing calls for tech companies to proactively cooperate in the fight against borderline content. The formation of tech-driven collective policy architecture is not new; GIFCT is one example. Openness to cooperating on preventing the dissemination of borderline content can also be inferred from the signing of the Aotearoa New Zealand Code of Practice for Online Safety and Harms by several large tech firms in New Zealand. This framework seeks to reduce users’ exposure to borderline content through collective commitments to prevent its proliferation. The formation of a tech firm consortium – in which extensive discussion and diplomatic negotiation could build a broad and uniform consensus on acceptable content – could significantly ease the ethical and practical difficulty of the content moderation decisions tech companies must constantly make. As tech companies take on a greater diplomatic role today, it would not be extraordinary to see them form such collective frameworks in the future.

Whilst reducing the overreach of legislative bodies, such a tech consortium would also form a basic framework with even greater capacity for further cooperation. For example, if member companies pooled designated resources, in a manner akin to the EU budgetary system, funds could be allocated to smaller tech companies that lack the resources for large-scale content moderation. Given that content moderation is already a huge cost for tech firms, a unified moderation landscape could even reduce individual firms’ costs, as there would be less need for each company to maintain its own individual and regional moderation practices. By cooperating to create minimum standards applied across all dominant platforms (with the capacity to later incorporate smaller platforms), tech companies can act as diplomatic agents and proactively take the lead in countering the spread of harmful and borderline content.
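To make the idea of shared moderation infrastructure more concrete, below is a minimal, hypothetical Python sketch of how a member platform might check uploads against a consortium-maintained list of content identifiers, loosely modelled on existing hash-sharing initiatives such as GIFCT’s. The function names, the shared list, and the use of exact SHA-256 matching are illustrative assumptions rather than a description of any existing consortium system; a real deployment would rely on perceptual hashing and policy labels agreed by members.

```python
import hashlib

# Hypothetical consortium-maintained list of content identifiers.
# In practice this would be a shared database of perceptual hashes and
# agreed policy labels, jointly governed by member platforms.
SHARED_CONSORTIUM_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example entry
}

def matches_consortium_list(content: bytes) -> bool:
    """Return True if the content's hash appears on the shared list.

    Exact SHA-256 matching is used here only for illustration; shared
    perceptual hashes would also catch near-duplicates and re-uploads.
    """
    return hashlib.sha256(content).hexdigest() in SHARED_CONSORTIUM_HASHES

if __name__ == "__main__":
    upload = b"test"  # stand-in for an uploaded file's bytes
    if matches_consortium_list(upload):
        print("Match on shared list: route to agreed review/removal process")
    else:
        print("No match: apply the platform's own moderation policy")
```

The point of the sketch is simply that a common, jointly governed reference list would let each platform apply the consortium’s minimum standard while retaining its own, potentially stricter, policies.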

Conclusion

Exposure to ‘lawful but awful’ content can prime vulnerable audiences for extremist content. Legislative responses to borderline content consistently incur an array of logistical and ethical costs, and are likely to fail to prevent the most nuanced harmful content from proliferating in a timely manner. Instead, I have argued that tech companies should take the lead and use their resources to drive positive change themselves. As newly acknowledged diplomatic agents, tech companies have enormous political cachet. Now they must seize that opportunity and demonstrate the good that such collaborative, tech-driven policy architecture could deliver.

Jon Deedman is a postgraduate student in far-right terrorism and extremism, focusing on the nexus between online and offline manifestations of extreme-right ideology.