Introduction
Work aiming to counter violent extremism (CVE) must rest on respect for the rule of law, the protection of human rights, and the accountability of tech platforms to society at large, governance institutions, and civil society. Accountability within the field of Responsible Tech is predicated on familiarity with the types of content on platforms; awareness of the actions a platform takes (and has taken) to monitor and police such content; knowledge of the relationships between platform content and the state; and cognisance of the impact of appeals. Such knowledge is pivotal to effective CVE and platform accountability, and it underscores the key role of transparency reporting by tech firms. We can realistically hold platforms to account, and run effective counter-operations on them, only when we, as a field, have a clear and holistic picture of what these platforms look like (i.e. the types of users and content present on them) and how they operate.
Transparency is functionally understood as the provision of information that clarifies how information is created, processed, managed, transmitted, stored, and shared. Typically, this information is published in transparency reports, whatever sector or field they relate to, and the clarity it provides is the foundation of sectoral accountability. When public confidence in the tech sector was higher, fewer and more limited transparency reports sufficed. Following Edward Snowden’s revelations about NSA surveillance in 2013, however, a crisis in consumer confidence around companies’ handling of private data catalysed a sectoral expansion in the use of transparency reports. Now that social media platforms are omnipresent in the lives of many, and consumer confidence in companies’ handling of private data remains low, transparency reporting must go further to ensure that the information provided is in fact insightful and actionable.
In the realms of CVE and Responsible Tech, the term ‘meaningful transparency’ has emerged as a descriptor for the desired level of effective accountability within sectoral transparency reporting. It is important to note, however, that definitions of the term vary significantly among stakeholders and practitioners. In a broad sense, meaningful transparency can be defined as maximised aggregate transparency, serving both as an intrinsic goal and as a means to facilitate more substantial outcomes from published transparency reports. While meaningful transparency as a tangible end goal comprises distinct policies and reporting decisions, its overarching values reflect a shared consensus on the need for increased, and more detailed, collaborative transparency reporting from within the tech sector.
The Current Transparency Landscape
In the states where existing legislation prescribes the publication of transparency reports, it often lacks rigour in its demands for the information the reports must contain. Given the relative infancy of the tech sector, it is not surprising that many of these laws merely mandate the publication of reports without specifying their content. Moreover, even as some legislatures now prescribe the publication of transparency reports, much of the world continues to rely upon a voluntary publication model in which tech firms are not legally required to produce any reports. Although many firms still produce reports voluntarily, or to meet the membership requirements of multi-stakeholder initiatives such as the Global Internet Forum to Counter Terrorism (GIFCT), the lack of legal prescription perpetuates the absence of contextual or consequential information in these reports.
Transparency reports have traditionally served two purposes: first, as a public relations tool through which platforms demonstrate a willingness to share information about their inner workings in order to foster trust and accountability; and second, as evidence of compliance with governmental or regulatory minimum standards. A legal mandate for transparency reporting does not, however, guarantee that the resulting reports will be useful or, indeed, ‘meaningful’, especially given that much of their content can rest on taxonomic inconsistencies and varying cultural understandings and value systems. Efforts to improve transparency reporting, and the collective definitional and value frameworks that underpin such reports, are being made by organisations, multi-stakeholder initiatives, and frameworks such as the OECD, Tech Against Terrorism, GIFCT, the Action Coalition on Meaningful Transparency, the Christchurch Call, and the Santa Clara Principles. Despite this work, and despite tech firms’ endorsements of such existing frameworks, tech transparency reporting is still far from legitimately ‘meaningful’.
The current transparency reporting landscape is a patchwork: some platforms lack reports altogether, some produce in-depth reports and interactive transparency centres, and others meet only the minimum mandated criteria. The fragmentation also runs across sub-sectors; while most social media platforms now realise the benefits and the moral imperative of transparency reporting, only some gaming platforms are moving to act, and most large US-based game companies still do nothing to assess the prevalence of harmful content on their platforms. This reporting landscape lacks the cohesion needed for effective CVE programmes, because online harms are multi-platform in nature. As such, achieving meaningful transparency requires a multi-platform approach.
Meaningful Transparency as a ‘Solution’
Despite varying definitions of ‘meaningful transparency’, those within the fields of CVE and Responsible Tech can agree on certain elements, such as the need for more contextual information and greater algorithmic transparency. Fundamentally, meaningful transparency is a process that aims to produce informative, interoperable, and comparable transparency reports that offer a clear sector-wide view of online extremist users, communities, and content.
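To make interoperability and comparability concrete, the sketch below imagines what a shared, machine-readable report structure could look like. It is purely illustrative: the field names, categories, and metrics are assumptions chosen for the sake of example, not any existing standard or any of the frameworks named in this piece.

```python
from dataclasses import dataclass, field

@dataclass
class CategoryFigures:
    """Moderation figures for one harm category in one reporting period."""
    harm_category: str       # hypothetical shared taxonomy label, e.g. "terrorist_content"
    items_actioned: int      # items removed or restricted
    proactive_share: float   # fraction detected before any user report (0.0 to 1.0)
    appeals_received: int
    appeals_upheld: int      # actions reversed on appeal

@dataclass
class TransparencyReport:
    """A hypothetical common schema. If every platform published to a shared
    structure like this, tagged with the taxonomy version its categories follow,
    figures could be meaningfully compared across the sector."""
    platform: str
    period_start: str        # ISO 8601 date, e.g. "2024-01-01"
    period_end: str
    taxonomy_version: str    # which shared set of definitions the labels use
    figures: list[CategoryFigures] = field(default_factory=list)

def reversal_rate(report: TransparencyReport, category: str) -> float:
    """Share of appealed actions overturned for one category; comparable
    across platforms only if they count appeals under the same definitions."""
    for f in report.figures:
        if f.harm_category == category and f.appeals_received:
            return f.appeals_upheld / f.appeals_received
    return 0.0
```

The point is less the particular fields than the shared taxonomy version: cross-platform comparison is only meaningful when platforms count the same things under the same definitions, which is precisely the taxonomic inconsistency noted above.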
Meaningful transparency is not a policy that can be implemented at a stroke; it requires a shift in values, processes, and collaborative efforts within tech platforms. This process of transfiguration, in which transparency becomes meaningful, would look different at different platforms, and as such meaningful transparency cannot be considered a policy in its own right. It is instead a process that serves to enhance tech accountability through improved transparency reporting. Realising it demands a sectoral shift towards medium- and long-term goals, greater comparability and interoperability of transparency reports and frameworks, and a shared acceptance of risk among all stakeholders. Indeed, the entire transparency reporting process must be multi-stakeholder: civil society organisations (CSOs) should not only design new transparency frameworks but also remain present at the point of implementation.
Looking to Future Frameworks
Meaningful transparency is not without risk; it must therefore rest on a sectoral understanding of, or agreement on, the acceptable level of risk. While this will differ among stakeholders, states, and legislative jurisdictions, consensus could be built through the recommendations laid out in GIFCT’s Pathways to Meaningful Transparency, which seek to comprehensively increase sector-wide interoperability. This more holistic, macro-level approach would grant states and researchers a clearer picture of how and where harmful online content and actors operate, thus enhancing the effectiveness of current and future CVE methodologies.
Achieving meaningful transparency starts with effective stakeholder collaboration. While greater legislative or regulatory oversight, or collaborative tech consortia, could serve as catalysts, the central requirement is a shared commitment, across the entire tech sector, to more comprehensive, contextual, and outcome-focused information sharing among those who combat online harms.