
‘Fogging’ and ‘Flooding’: Countering Extremist Mis/Disinformation After Terror Attacks

8th November 2021, by Martin Innes

The executive summary is also available in French, German, Arabic, Indonesian and Japanese.

Please read on for the Introduction.

This report explores how and why mis/disinformation develops in the wake of terror attacks and the ways it is used by extremist groups to attempt to shape public understanding and political responses. These uses include extremist sympathisers engaging in information manipulation and obfuscation as part of their attempts to explain or justify the violence, as well as distorting and deceptive messaging designed to marginalise or stigmatise other social groups. Having presented evidence and insight about the construction of these messages, the discussion also looks at the policy and practice options, in terms of ‘what works’, for managing and mitigating any such messaging and the harms it seeks to induce.

It is now largely taken for granted that social media and the wider changes to the media ecosystem with which it is associated have had profoundly disruptive and transformative impacts upon the institutional and interactional ordering of society. But while very few social and political commentators would contest the general tenor of this assertion, it is increasingly clear that the effects of social media upon patterns of communication and knowledge are complex, especially with respect to specific policy and practice domains. One such domain is political violence and the countermeasures intended to limit its effects.

Terrorist violence is fundamentally a form of communicative action. The violent act is designed to deliver a message in pursuit of a political objective or in response to some grievance. The proliferation and diversification of social media has altered the processes of social communication associated with terror attacks. First, it has changed the conduct of the violent act itself, with increasing numbers of assailants designing the delivery of attacks in ways that encourage social media dissemination, for example by promoting their manifestos and/or livestreaming the attacks themselves. Second, it has impacted the processes of social reaction to terror attacks, as supporters and ideological opponents of the perpetrator engage in ‘framing contests’ to try to establish a public definition of the situation in terms of how the violence is interpreted and understood. A third strand of influence relates to the social control responses of governments, police and intelligence agencies, who increasingly have to think about the effects of social media messaging on their control strategies, in terms of not only the practicalities of any investigation but also public reassurance.

Finally, social media has afforded new ways of studying patterns of social reaction in the aftermath of terror attacks (this is especially significant from the perspective of this paper). In particular, the streaming quality of many social media feeds has opened up new ways of capturing what happens in the minutes, hours, days and weeks following an event. This is significant in that, until fairly recently, relatively little research attention had been directed towards analysing processes of social reaction in the aftermath of terror attacks. Compared with the amount of attention that has been directed to the upstream issue of violent extremist radicalisation, little work has focused upon the immediate post‑attack situation and how this shapes public perceptions and understanding. In part, this reflects the challenges of tracking and tracing the dynamics of public opinion, especially in terms of thinking about how distinct audience segments may display substantively different response patterns. However, social media provides a source that is both rich in detail and available at scale.

Reflecting this trajectory of development, over the past five years or so there has been a growing literature using social media data to illuminate different facets of what occurs in the post‑violence moment. For example, Randall Collins initially used his theoretical work on the time dynamics of conflict to argue that there will be moments of ‘collective emotion’, and thus heightened risks of further hate crime and violence, in the period following a major attack. Using empirical data collected following the murder of Lee Rigby by Islamist extremists in London in 2013, Roberts et al. (2018) found key elements of Collins’ theory to be supported. Developing this insight that there may be ‘reaction patterns’ in the organisation of public responses to terrorism, further work suggested that there were ‘ten Rs’ of reaction. Of particular salience to this report, these included what was labelled ‘rumouring’: the transmission of speculative and unverified information of uncertain origin.

Cast in this light, social media functions as what the sociologist Donald MacKenzie (2008) dubbed both ‘an engine’ and ‘a camera’. This is because it drives changes in the causes, conduct and consequences of terrorism while simultaneously functioning as a source of data available to researchers to capture the details and intricacies of what happens in terms of the evolution of public perceptions and sentiments, and how these vary across particular audience groups and segments.

The focus of this report is upon one specific facet of this public interpretation and sense‑making process, whereby groups author and/or amplify mis/disinformation to obfuscate and obscure particular definitions of the situation. Often this occurs as they try to subvert and contest interpretations that they do not like and that run counter to their group’s ideological values and objectives. The mis/disinforming messages can originate ‘organically’ out of the chaos and confusion that arises immediately post‑attack, or be more deliberately and purposively ‘manufactured’. They are authored and amplified to encourage and motivate supporters and sympathisers, while simultaneously triggering reactions from ideological adversaries.

Disinformation involves the intentional communication of false or misleading information; misinformation, by contrast, involves the unwitting transmission of such material. Because divining intent behind such communicative actions with any degree of analytic confidence is increasingly difficult, especially in post‑attack situations that are so uncertain and riven with imperfect information, throughout this discussion we will simply refer to mis/disinformation. As a concept, this formulation rather neatly captures some of the ambiguities and contingencies that pertain to the messaging that various actors transmit in their attempts to influence how others perceive and understand the causes and consequences of the violence that has occurred. It also allows the discussion to avoid getting hung up on whether intent is present or not, so that the analysis can attend more to the impacts and consequences that flow from the stream of distortions and deceptions identified.

Framed in this way, the two core techniques centred by the analysis can be defined as follows:

• ‘Fogging’ involves constructing and communicating multiple explanations and interpretations of the events in question. These can be more or less plausible. The purpose of transmitting these alternative versions of reality is not necessarily that they be widely believed, but that they induce a sense of doubt and complexity about the underlying causes. This creates a miasma of competing and contrasting accounts and explanations in the information space, such that public audiences do not quite know what to believe about what happened or why.

• ‘Flooding’ involves ensuring that the information space is dominated by a particular mis/disinforming message. Reposting a message in high volume and at high frequency across platforms makes it highly visible and likely to be encountered repeatedly by audience members engaging with the event or issue of concern. Under a general condition of fogging, then, ‘flooding the zone’ with a particular distorting or deceptive message constitutes a specific influencing tactic that reinforces and reproduces the wider condition of which it is a part.

In the process of elaborating how fogging and flooding work as part of attempts to influence public perceptions and understanding, a third concept of ‘surfacing’ is also introduced:

• ‘Surfacing’ refers to some specific techniques of persuasion used to lend a patina of plausibility to the alternative narratives being constructed and used to fog and flood the zone of influence.

These concepts are developed through analysis of empirical data originally collected as part of a long‑term international research programme focused on understanding public reactions to terror attacks.

The central innovation and contribution to knowledge of developing these three constructs is to connect recent developments in the study of social reactions to terrorism with the rapidly growing literature on the construction and communication of mis/disinformation. It is an approach that helps to illuminate some of the complexity of public interpretation and sense‑making with regard to fabricating the political and societal meanings and significance that are ascribed to particular incidents. Ultimately, it provides a deeper, more nuanced understanding of terrorist and extremist propaganda, and the communicative actions that the authors of such material perform as they seek to garner social support for their cause while simultaneously degrading and discrediting opposing ideas and values.

The next section provides a brief overview of the procedures via which the empirical materials underpinning this analysis were collected. Extracts from several different case studies are then introduced to map out the contours of the first key concept, fogging. As part of this, the specific role of surfacing techniques is described, before a similar empirically guided account of flooding the zone is provided. This is then followed by an analysis of some strategic and tactical options for disrupting and degrading the impacts of such techniques of disinformation, including the different stakeholders who might potentially leverage them.
