Introduction
Understanding the mechanisms driving the diffusion of problematic information – information that is “inaccurate, misleading, inappropriately attributed, or altogether fabricated” – in online networks is essential for developing effective strategies to counter it. Social media platforms have transformed how information is disseminated and consumed, facilitating the rapid spread of problematic content, including conspiracy theories, misinformation, and extremist narratives. Understanding the dynamics of information diffusion in online networks is, therefore, crucial to mitigating the harmful effects of problematic information and extremism, particularly in cases where the information is generated in the Global North and spread indiscriminately across vulnerable communities in the Global South.
Through a social networks perspective, this Insight identifies key factors influencing the formation of online networks and discusses their implications for social media platforms’ policy design efforts to counter the spread of problematic information and extremism online. The findings underscore the importance of understanding the interplay between cultural, linguistic, geographic, and thematic commonalities in shaping online information diffusion. They also highlight the need for targeted, network-informed interventions to address the spread of problematic content and to mitigate the influence of dangerous and extremist ideas and groups on vulnerable communities around the world, which face a potential for disproportionate harm. Belief in dubious ideas has been linked to extremist, far-right political views, especially when such beliefs become politicised. In recent years, few topics have become as politicised as the coronavirus pandemic, especially among politicians with extremist predispositions, such as Donald Trump or Jair Bolsonaro. This link motivates this piece, which argues that politicised scientific misinformation and conspiratorial beliefs at local and global scales are ripe areas for political extremism to fester.
Methodology and Hypotheses
Two prominent case studies are used to analyse the diffusion of problematic information related to the COVID-19 pandemic from the Global North to the Global South, focusing on Facebook groups that shared false and misleading content about the virus and its treatments. The focus was placed exclusively on audiovisual content, since video was identified as the main vehicle for the ‘misinfodemic’ to spread. The first case study deals with the spread of COVID-19 misinformation emanating from the pseudo-scientific European organisation Doctors for Truth, which is particularly active in Spain. This group was notorious for spreading false information about the virus. For instance, they denied the existence of the pandemic, arguing that it was planned and referring to it as a ‘plandemic’. They also denied the effectiveness of and the need for masks and vaccines to curb the spread of the virus, and they were even linked to election fraud conspirators in Spain. By organising summits and rallies in Spain, their message spread to Latin America, where local chapters were formed and further rallies were organised. This dataset consisted of 15,336 posts in public Facebook groups from 1 February 2020 to 1 October 2021, collected using Facebook’s CrowdTangle.
To compare these insights to a similar phenomenon, we analysed a second case, which focused on the false promotion of hydroxychloroquine as a treatment for the virus by French microbiologist Didier Raoult. In a paper published early in the pandemic that ended up being retracted for methodological flaws, the microbiologist proposed that the anti-malarial medicine was the cure for COVID-19. These claims travelled globally, notoriously being proclaimed by President Donald Trump as a miracle cure, endangering not only those who needed the drug due to possible shortages but also those who were most vulnerable to these claims in times of heightened risk and uncertainty. The second dataset comprised 11,122 posts over the same time period as the first. The objective remained the same: to uncover the underlying mechanisms driving the spread of problematic information within the digital realm. To do this, we adopted a social networks perspective that allowed us to test theories of connective action, linking the spread of problematic information to on-the-ground social movements, such as rallies. Finally, a networked approach was employed, with Facebook groups representing nodes, linked to each other if they shared the same video.
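As a minimal illustration of this networked approach, the group-to-group network can be built by projecting a bipartite group-video structure onto the groups. The sketch below uses hypothetical post records and group names, not the study’s actual data:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical post records: (facebook_group, shared_video_id)
posts = [
    ("Group A", "video_1"),
    ("Group B", "video_1"),
    ("Group B", "video_2"),
    ("Group C", "video_2"),
    ("Group C", "video_1"),
]

# Map each video to the set of groups that shared it
groups_by_video = defaultdict(set)
for group, video in posts:
    groups_by_video[video].add(group)

# Project onto a group-to-group network: two groups (nodes) are
# linked by a tie if they shared at least one video in common
edges = set()
for groups in groups_by_video.values():
    for pair in combinations(sorted(groups), 2):
        edges.add(pair)
```

Here `edges` ends up holding the undirected ties among the three toy groups, which is the edge list the dyadic analysis in the Findings section would then operate on.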
We identified four explanatory variables that could account for the spread of problematic information. For cultural similarity, we employed Facebook’s Social Connectedness Index – the ratio of actual connections between individuals in two territories to all possible connections between those territories. This index helped capture a sort of post-colonial affordance, explaining why problematic information flowed from Europe to its former colonies: in Spanish, from Spain to Latin America and Hispanic communities in the United States; and, in French, from France to Francophone countries, particularly, but not limited to, Northern Africa.
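The intuition behind that ratio can be sketched in a few lines. This is a toy illustration of the idea, not Facebook’s actual index, which applies its own scaling to the raw ratio; the user and connection counts are invented:

```python
def social_connectedness(connections: int, users_a: int, users_b: int) -> float:
    """Share of all possible cross-territory ties that actually exist:
    actual connections divided by every possible user pairing."""
    possible = users_a * users_b
    return connections / possible

# Hypothetical figures: 1,000 ties between territories with
# 2,000 and 5,000 users respectively
ratio = social_connectedness(1_000, 2_000, 5_000)  # 1,000 / 10,000,000
```

A higher ratio between two territories then stands in for stronger cultural ties in the dyadic models described below.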
Moreover, since the theoretical framework revolved around connective action, we included linguistic similarity, thematic similarity and geographic co-location as plausible explanatory variables. In general, theories of connective action posit that otherwise distant and disconnected actors can engage with each other through the perception of narrow, overlapping interests by means of some sort of technological communication tool. As such, both linguistic and thematic similarities offer two distinct ways connective action can occur. Thematic similarity represents the general theme that Facebook groups deal with (for instance, politics, media, community, religion, conspiracies, etc.) and captures the element of connection of otherwise unrelated and distant actors through overlapping interests. On the other hand, we hypothesise that linguistic similarity acted as the second element of technological communication, namely connective affordances, as represented by public Facebook groups worldwide. Finally, connective action recognises the possibility of hybridity with traditional collective action approaches, meaning that connective action can be boosted through geographic co-location or proximity. Therefore, we also hypothesise that Facebook groups located in the same country can be expected to drive the diffusion of problematic content.
Findings
Our analysis revealed several key findings regarding the diffusion of problematic information, with major implications for countering extremism in online networks. In the case of Doctors for Truth, geographical co-location does not have a statistically significant effect on the likelihood of ties forming within the network, and therefore does not help explain its emergence. The same cannot be said of cultural similarity, which shows a positive and significant effect, indicating that cultural similarity between groups’ nationalities increases the likelihood of tie formation in the network, holding everything else constant. Regarding language, our findings indicate that linguistic similarity alone actually decreases the likelihood of tie formation, holding everything else constant.
Based on thematic similarity, we estimated the likelihood of tie formation among all possible Facebook group pairs in the network according to their thematic nature. The estimation answered, for example, what the likelihood is of a tie existing between groups that deal with politics and those dedicated to religious or spiritual themes. The results show the expected increase in tie formation between thematically adjacent pairs, as in the case of Conspiracy Theory groups forming ties with other Conspiracy Theory groups in the network, or between Politics-Politics thematic pairs. Beyond this expected pattern, however, our estimates indicated that other thematic pairs that included Conspiracy Theory groups also see an increase in the likelihood of tie formation, such as Conspiracy Theory groups paired with Community, Media, Politics or Religion & Spirituality groups. This seems to indicate a spillover effect in the diffusion of problematic information from dangerous dedicated Facebook groups, which often promote politicised misinformation and conspiratorial ideas, to genuine Facebook groups, creating fertile ground for extremism to emerge, as explained above.
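The raw quantity behind such estimates can be illustrated by tabulating observed tie rates per thematic pair. The sketch below is a simplified, descriptive analogue of the study’s statistical estimation, using invented groups, themes, and ties:

```python
from collections import Counter
from itertools import combinations

# Hypothetical group themes and observed ties between groups
theme = {
    "G1": "Conspiracy Theory", "G2": "Conspiracy Theory",
    "G3": "Politics", "G4": "Community",
}
ties = {("G1", "G2"), ("G1", "G3"), ("G2", "G4")}

# Count observed ties and possible dyads per (sorted) thematic pair
observed = Counter()
possible = Counter()
for a, b in combinations(sorted(theme), 2):
    pair = tuple(sorted((theme[a], theme[b])))
    possible[pair] += 1
    if (a, b) in ties:
        observed[pair] += 1

# Tie rate per thematic pair: the descriptive counterpart of the
# estimated likelihood of tie formation between themes
rates = {p: observed[p] / possible[p] for p in possible}
```

In this toy network every Conspiracy Theory-Conspiracy Theory dyad is tied, while half the Conspiracy Theory-Politics and Community-Conspiracy Theory dyads are, mirroring the homophily-plus-spillover pattern described above.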
For the case study of Didier Raoult, geographic co-location and cultural similarity both had a positive effect in explaining the emergence of the network. In this regard, the estimates differ from those for the Spanish-language network, which could be attributed to the higher variance in culture, religion, language and geography among Francophone countries. For language similarity, the estimates are not statistically significant. Finally, regarding thematic similarity, the same trend holds across all possible thematic pairs: we see the same increase in likelihood among Conspiracy Theory-Conspiracy Theory pairs and Politics-Politics pairs. More interestingly, our estimates for this network also indicate that other thematic pairs in which Conspiracy Theory groups are active see an increase in the likelihood of tie formation. In other words, thematic pairs in which Conspiracy Theory groups are paired with Community, Media or Politics-related groups also help explain the network’s emergence. This finding demonstrates that the spillover effect in the diffusion of problematic information from dedicated groups to unassuming and vulnerable groups operates more generally, regardless of language or region.
Discussion and Implications
These findings highlight the complex interplay between linguistic, thematic, and cultural factors in shaping online problematic information diffusion. They underscore the need for targeted network-informed interventions to address its spread from the Global North to the Global South. By identifying key factors influencing tie formation, policymakers and platform moderators can implement targeted interventions to mitigate the spread of extremist content. Overall, the study offers valuable insights and methodologies that can help online platforms develop more effective strategies for preventing the proliferation of online misinformation and extremism. By leveraging network analysis techniques and accounting for geographic, cultural, linguistic, and thematic similarities, platforms can enhance their ability to detect and mitigate extremist content, ultimately creating safer and more inclusive online environments.
Identification of High-Risk Nodes
Platforms can prioritise monitoring and intervention strategies by analysing the structure of online networks and identifying high-risk nodes, such as Conspiracy Theory groups, which often serve as brokers in disseminating problematic information, including extremist content. By targeting these brokering agents within the networks, platforms can effectively disrupt the spread of extremism.
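One simple proxy for spotting such brokers, assuming a platform already has an edge list and thematic labels like those above, is to score each group by how many distinct themes its neighbours span. This is an illustrative heuristic, not the study’s method; formal brokerage measures such as betweenness centrality would be the more rigorous choice. All names and ties below are hypothetical:

```python
from collections import defaultdict

# Hypothetical edge list and group themes
edges = [("CT1", "Media1"), ("CT1", "Politics1"), ("CT1", "Religion1"),
         ("Media1", "Media2"), ("Politics1", "Politics2")]
theme = {"CT1": "Conspiracy Theory", "Media1": "Media", "Media2": "Media",
         "Politics1": "Politics", "Politics2": "Politics",
         "Religion1": "Religion & Spirituality"}

# Build an adjacency map from the undirected edge list
neighbours = defaultdict(set)
for a, b in edges:
    neighbours[a].add(b)
    neighbours[b].add(a)

# Score each group by the number of distinct themes among its
# neighbours: groups bridging many themes are candidate brokers
broker_score = {g: len({theme[n] for n in ns}) for g, ns in neighbours.items()}
top_broker = max(broker_score, key=broker_score.get)
```

In this toy network the Conspiracy Theory group bridges three themes and tops the ranking, which is exactly the cross-thematic position a moderation team would want to flag first.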
Understanding Information Flow
The study’s network analysis techniques allow a deeper understanding of how information flows within online ecosystems. Platforms can leverage this knowledge to track the propagation of extremist narratives and identify key pathways through which they spread. By mapping out these pathways, platforms can implement targeted interventions to prevent the rapid dissemination of extremist content.
Assessing Cultural and Linguistic Factors
The study highlights the importance of cultural and linguistic factors in shaping online interactions for information sharing. Platforms can use this insight to tailor their moderation efforts to specific linguistic and cultural contexts. By understanding the unique dynamics of different communities, platforms can develop more effective strategies for combating extremism and misinformation.
Monitoring Thematic Similarity
The emphasis on thematic similarity as a significant factor in forming ties in online networks underscores the importance of monitoring specific topics and themes associated with extremism. By tracking the spread of extremist narratives across thematic boundaries, platforms can detect emerging trends and preemptively intervene to prevent the escalation of extremist activity.
Integrating Geographical Proximity
In the case study focusing on French-speaking groups, geographical co-location emerged as a significant factor in tie formation. Platforms can incorporate this insight into their moderation efforts by considering the geographical distribution of users and the potential impact of local context on extremist activity. By accounting for geographical proximity, platforms can develop more nuanced strategies for addressing extremism at the regional level.
Applying Social Network Analysis Techniques
Using computational and statistical methods demonstrates the value of social network analysis techniques in understanding and predicting online behaviour. Platforms can leverage these techniques to identify patterns of extremist activity, predict future trends, and assess the effectiveness of intervention strategies. By adopting a data-driven and network-informed approach, platforms can better allocate resources and prioritise efforts to combat extremism.