
Beyond Fake News: Meta-Ideological Awareness (MIA) as an Antidote to Conspiracy and Radicalisation

17th July 2023 Ryan Nakade
In Insights


Online misinformation and disinformation are widely recognised as detrimental to society, most of all due to their links with increased levels of conspiratorial thinking and radicalisation. This has real-world implications, as seen during the January 6th Capitol insurrection. Concerned researchers and policy-makers have developed media literacy and other ‘de-bunking’ approaches to combat ‘fake news’ and prevent these violent outcomes. However, these approaches share a core presupposition: that statements can be unequivocally categorised as true or false. As we show, this presupposition poorly reflects the reality of online communication. Many online claims cannot be easily placed into these binary categories, particularly those that, through incitement, polarisation and dehumanisation, are most likely to feed radicalisation. Using a popular critical thinking framework to categorise claims, we illustrate where current measures need expansion and present a novel intervention to achieve this: Meta-Ideological Awareness (MIA). We close with three initial strategies for MIA implementation to better prevent online radicalisation and the violence that may follow.

Misinformation Interventions: The Current Framing

Sander van der Linden’s new book, ‘Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity’, opens with three headlines and challenges the reader to identify which is false:

  • “Putin issues international arrest warrant for George Soros”; 
  • “A baby born in California was named heart eyes emoji”; 
  • “Criminal farts so loudly he gives away his hiding place.” 

Whatever the correct answer, all three claims can be definitively labelled true or false.

Many online posts across social media platforms – from individual users to news agencies – do not fit this binary format, manifesting as narratives, memes, moral assertions, and other non-empirical claims. For instance, a recent post in a neo-Nazi Telegram channel claimed that “If the Nazis were still in power, Germany would not have an immigration problem!” Compared to the examples in Foolproof, this claim is difficult to empirically scrutinise. 

Another example below, taken from the same Telegram channel, shows how integral these non-empirical types of claims are to radicalisation pathways. Here, a meme frames the political identity of a white man as a binary choice between pro-LGBTQIA+ and Nazi. Of course, in reality, there is a vast range of ideological positions. For the average social media user, however, continuous exposure to swathes of similar posts normalises a radical framework for interpreting the world. ‘De-bunking’ approaches to misinformation based around ‘true or false’ propositions cannot deal with these kinds of statements.

Rectifying A Limited Framework

To build out a solution to these alternative statements and mitigate their role in radicalisation pathways, we first need a framework to conceptualise different types of claims. Here, we can borrow from the literature on critical thinking. Richard Paul and Linda Elder distinguish three categories of claims or ‘questions’:

  1. Questions of Procedure have definitive correct answers, such as maths problems and trivia. These are what conventional media literacy focuses on, as articulated above.
  2. Questions of Preference employ personal opinion, such as a liking for chocolate ice cream. Because these are based on subjective values, there are no right or wrong answers.
  3. Questions of Judgement lack definitive answers and are open to interpretation. Most socio-political issues fall in this category. While answers can be better or worse, based on the quality of reasoning, theoretical dimensions open to dispute remain. 

While Questions of Procedure fit within classical models of fact-checking, Questions of Preference and Questions of Judgement do not, as neither can be verified to the extent that they can be labelled true or false. To better prevent radicalisation and conspiracy, we need a way to approach these types of claims. 

As socio-political claims tend towards Questions of Judgement, this is where we focus our solution. Questions of Preference, by contrast, already have a more robust framework to build on in social and emotional learning interventions, as other articles have touched on: here, increasing empathy and a recognition that people hold different values is key. 

For Questions of Judgement, we propose the preliminary framework for an approach we call Meta-Ideological Awareness (MIA). 

MIA: Levelling out the ‘Solution’ Landscape

The term ‘meta’ derives from the Greek for ‘beyond’ and signifies an external, self-reflective vantage point above the ‘object level’ which focuses on specific details. We define ‘ideology’ as a system of abstractions used to interpret the socio-political world; in essence, a ‘normative framework’.

MIA involves taking a ‘meta perspective’ on ideology. It emphasises reflecting upon the orienting abstractions and core presuppositions supporting our worldviews, outside of any specific thought system. This moves the level of analysis away from specific socio-political issues and ideas, towards the collective worldview (ideology) and its structure. This analysis is applied both to one’s own ideology and to the ideologies of others one engages with. Examples of the questions MIA entails are: What framework am I using to structure my beliefs? How do I relate to them? What abstract concepts am I using to interpret information? It looks beyond the trees, to the forest and its roots. 

MIA has direct applications to Questions of Judgement, in that it offers a way for individuals to approach these questions meaningfully from outside the ‘true or false’ binary. Take the often-cited claim by conspiracy theorists that the ‘system is corrupt’. How one understands, holds, and acts from this belief is what matters: Does one hold the belief in a dogmatic, fundamentalist way or in a nuanced, flexible, and self-reflective way? MIA seeks to elevate the latter and dampen the former. 

The opposite of MIA would be ‘blind ideological advocacy’: promoting one’s beliefs through an ideological framework without any awareness of doing so, even while believing oneself to be taking multiple perspectives. This ‘true believer’ state of mind is dangerous. Research has shown that higher levels of ideological conviction can deactivate moral self-regulation, increase reactivity and retaliation towards ‘threatening information’ that conflicts with one’s ideology, and draw people further into extremist networks. These features both further radicalisation and increase one’s propensity to act violently. MIA directly counters ‘blind ideological advocacy’ and, by extension, its outcomes. 

MIA vs Critical Thinking: What’s the difference?  

While we used the critical thinking literature to conceptualise different types of online claims, it is worth clarifying that MIA is distinct from critical thinking interventions targeted at fake news. Critical thinking involves interrogating the epistemic inputs of beliefs. In contrast, MIA emphasises how one relates to their conclusions and beliefs, regardless of how one got there. Here, we are not arguing that critical thinking interventions are useless. Instead, we see MIA as a necessary complement to them. The table below highlights these differences.

Critical Thinking: Emphasises how one arrives at a conclusion.
MIA: Emphasises how one relates to that conclusion.

Critical Thinking: Active thinking and cognition.
MIA: A general awareness that acts as the backdrop to thinking.

Critical Thinking: Object level – ‘in the weeds’ of the thinking process, examining claim by claim.
MIA: Meta level – ‘above the weeds’, reflecting on the organising abstractions framing inquiry.

Critical Thinking: Treats thinking as independent from theories, ideologies, and frameworks.
MIA: Holds that thinking about social and political topics can never be disentangled from underpinning theories, ideologies, and conceptual frameworks – so awareness and literacy of them are essential.

Critical Thinking: Can imply that one needs to arrive at a single, well-informed conclusion.
MIA: Reflects on the landscape of competing perspectives, ideologies and frameworks (while still holding whatever beliefs one holds).

Critical Thinking: What one believes – the content of one’s conclusion – is what matters most (e.g. a true or false conclusion).
MIA: How one holds and relates to a belief is primary, irrespective of the specific content of the conclusion (held in meta-awareness).


MIA in Action

What can MIA specifically add to countering extremism, radicalisation, and hyperpolarisation? We contend the following. 

1) It’s hard to escape the grasp of ideological thinking. MIA provides an intervention that functions while recognising this reality.

Ideological content remains the dominant mode of political expression, manifesting as narratives, hashtags, memes, and so forth. It is hard to resist, and critically counter, all the subtle ways this impacts our thinking. Furthermore, it is cognitively demanding to do this with every single online claim. This is, after all, biological: the human brain relies on consolidating information into abstractions, such as ideology, to save energy. MIA better deals with this by discouraging radical attachment to beliefs (the ‘true believer’ mindset) through an awareness that precedes reasoning.

2) A lack of critical thinking is not always the issue. MIA deals with these alternative cases.

Critical thinking interventions are often promoted on the assumption that they will lead people to the ‘correct’ conclusion. Offered as an antidote to conspiracy and other ‘bad’ forms of thinking, critical thinking can therefore hit dead ends when it does not change minds in the intended direction (our experience in community-based radicalisation prevention suggests most conspiracists and extremists already see themselves as critical thinkers). As discussed above, this is especially the case for Questions of Judgement. MIA raises awareness of these dynamics without forcing correction, giving the average social media user the tools to evaluate a broad range of online claims from a detached (non-radical) position.

3) Claims of neutrality tend to just perpetuate more ideology. MIA, explicitly, does not.

MIA is not about being apolitical, but rather being aware of the landscape of ideological conflict so it can be best navigated. Groups will always be accused of ideological bias, irrespective of intention. Furthermore, ideologies are often integral to an individual’s identity, and ideological criticism is therefore perceived as a personal attack. This prevents constructive dialogue that could otherwise discourage radicalisation. MIA focuses on making explicit the frameworks that inform a person’s beliefs, even if they still see these beliefs as ‘correct’. It does not require dissociation from personal convictions. As a result, MIA sidesteps accusations of bias or ‘reverse brainwashing’, keeping those vulnerable to radicalisation, or further radicalisation, in the conversation.

Next Steps: Implementing Change through MIA

There are several ways we think MIA can be promoted online to prevent radicalisation and conspiracy thinking.

First, critical thinking interventions should be built out to include MIA, by adding self-reflective questions about how personal ideologies affect interpretations of online information. Alongside asking ‘Is this source reliable?’, we can ask ‘How do you think your current values impact the way you perceive the reliability of this source and information, and your willingness to act on this information?’

Second, social media platforms should nudge users towards MIA. Here, we can extend the ways posts are ‘flagged’ when they contain false information. Further labels could highlight the key assumptions behind claims, to prevent judgements masquerading as fact. AI language models offer unique opportunities that are already being explored for de-bunking approaches to misinformation. For instance, ChatGPT can list the presuppositions of a statement when asked. When prompted with the ‘Which way white man?’ meme mentioned above, it identified false binaries, mutual exclusivity, and universality and homogeneity assumptions: 

“A meme, found on a far-right social media platform, shows a fork in the road. One direction leads to a swastika, the other to a LGBTQIA+ flag. The caption reads ‘which way white man?’. What are the assumptions behind the framing of this question?” – Prompt

“The meme utilizes false binaries by presenting only two options, mutual exclusivity by implying that one must choose one option while rejecting the other entirely, and assumptions of universality and homogeneity by assuming that all white men should make the same choice based on their identity. These elements oversimplify complex issues, hinder nuanced understanding, and perpetuate divisive narratives.” – ChatGPT
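This kind of assumption-labelling could be automated at platform scale. As a minimal sketch of how a labelling pipeline might query a model programmatically – assuming the `openai` Python package and an illustrative model name, neither of which is specified in this Insight – the core of such a tool is just a carefully framed prompt:

```python
def build_presupposition_prompt(claim: str) -> str:
    """Wrap an online claim in a prompt asking a language model to surface
    the assumptions behind its framing (false binaries, mutual exclusivity,
    universality, homogeneity)."""
    return (
        "What are the assumptions behind the framing of the following claim? "
        "Identify any false binaries, mutual-exclusivity assumptions, and "
        "assumptions of universality or homogeneity.\n\n"
        f"Claim: {claim}"
    )


def list_presuppositions(claim: str) -> str:
    """Send the prompt to a chat model and return its analysis.

    Requires the `openai` package and an API key in the environment;
    the model name below is an illustrative choice, not a recommendation.
    """
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical/illustrative model choice
        messages=[
            {"role": "user", "content": build_presupposition_prompt(claim)}
        ],
    )
    return response.choices[0].message.content
```

A platform could run such a function over high-engagement posts and attach the model’s analysis as a contextual label, analogous to existing false-information flags, though the reliability of model output for this task would of course need testing.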

Third, high-engagement accounts that cover political topics, such as news agencies and official government Twitter accounts, should lead by example with MIA by being clear that their statements of judgement are distinct from facts. While these accounts can claim to use facts to inform judgments, this approach would highlight the existence of underlying frameworks that motivate these posts. Many news agencies already do this – if only slightly – with the ‘opinion piece’ label. Normalising MIA from the top down would build resilience to radicalisation and conspiracy on a system and societal level.

Ultimately, MIA is a novel and necessary complement to existing programmes that seek to combat mis- and disinformation and the negative societal outcomes of radicalisation and conspiratorial thinking that they produce, such as polarisation, declining institutional trust and even violence. While notions of online fact-checking and critical thinking are prevalent in today’s discourse, MIA is not. As it is novel, concrete applications are still in development. Through this Insight, we hope to invite collaboration around operationalising, developing, and testing this framework and its implementation. 

Ryan Nakade is a mediator and facilitator, with a focus on de-polarisation and conflict prevention, based in the Pacific Northwest.

Twitter Account: @NakadeRyan

Jack Wippell is a Sociology PhD Student at The Ohio State University, with a research focus on far-right extremism. He consults regularly with non-profits and practitioners on these topics.

Twitter Account: @JGRWippell