It is becoming increasingly difficult – nearly impossible, really – to manually search for violent extremists, potentially violent extremists, or even users who post radical content online, because the Internet contains an overwhelming amount of information. These conditions have necessitated guided data filtering methods that can side-step – and perhaps one day even replace – the taxing manual methods traditionally used to identify relevant information online. As a result of this changing landscape, a number of governments around the globe have engaged researchers to develop advanced machine learning algorithms to identify and counter extremism through the collection and analysis of large-scale data made available online. Whether this work involves finding radical users of interest, measuring digital pathways of radicalisation to violence, or detecting virtual indicators that may help prevent future terrorist attacks, the urgent need to pinpoint extremist content online is one of the most significant challenges faced by law enforcement agencies and security officials worldwide.
It should come as little surprise, then, that scholars in recent years have shown a vested interest in developing large-scale ways to identify and analyse radical content online. To illustrate, researchers have used machine learning algorithms to detect extreme language, websites, and users online, to measure levels of online propaganda and cyberhate following a terrorism incident, and to evaluate how radical discourse evolves over time online. Machine learning has also been used to detect violent extremist language and violent users online, as well as to measure levels of – or propensity towards – violent radicalisation online. Yet despite these important contributions, what constitutes radical posting behaviour in general has not been examined systematically. Taking a step back from measuring levels of online radicalisation to investigate the radical posting behaviours found in an already radical online community may provide law enforcement and intelligence agencies with new insight into which online behaviours are worthy of future investigation. It may also provide a useful baseline from which key stakeholders can identify credible threats (i.e., those who engage in violence offline) or inform future risk factor frameworks.
This study sought to address this gap via a sentiment analysis-based algorithm that adapts criminal career measures – and is guided by communication research on social influence – to identify and describe radical posting behaviours on one of the most visited right-wing extremist (RWE) discussion forums: Stormfront.
To develop the algorithm that would serve as the study’s base, online discussions relating to RWEs’ three primary adversary groups – members of Jewish, Black and LGBTQ communities – were identified. For each of these groups, a set of keywords, including slur words and racial epithets, was used to identify content of interest. Next, the context surrounding each keyword was systematically evaluated using sentiment analysis software. Traditional criminal career measures were then incorporated into the algorithm to identify radical users and, by extension, three RWE posting behaviours among those who posted messages over an extensive period of time. The following is a summary of the macro- and micro-level patterns uncovered during the analysis.
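The pipeline described above can be sketched roughly as follows. Everything in this sketch is illustrative: the keyword set, the toy lexicon scorer (a stand-in for the dedicated sentiment analysis software the study used), and the measure names are assumptions, not the study’s actual implementation.

```python
from dataclasses import dataclass
from collections import defaultdict
from datetime import date

# Hypothetical keyword list standing in for the study's slur/epithet
# lexicons (the real lists targeted anti-Jewish, anti-Black, and
# anti-LGBTQ content and are not reproduced here).
KEYWORDS = {"keyword_a", "keyword_b"}

# Toy lexicon scorer standing in for the study's sentiment analysis
# software; scores run negative for hostile, alarming language.
NEG_WORDS = {"evil": -2, "threat": -2, "kill": -3}

def sentiment_score(text: str) -> int:
    """Sum lexicon weights over the words surrounding a keyword hit."""
    return sum(NEG_WORDS.get(w, 0) for w in text.lower().split())

@dataclass
class Post:
    author: str
    day: date
    text: str

def career_measures(posts):
    """Per-author criminal-career-style measures over keyword-matched
    posts: frequency (post count), intensity (mean sentiment score),
    and duration (days between first and last matched post)."""
    by_author = defaultdict(list)
    for p in posts:
        if KEYWORDS & set(p.text.lower().split()):
            by_author[p.author].append(p)
    out = {}
    for author, ps in by_author.items():
        scores = [sentiment_score(p.text) for p in ps]
        days = [p.day for p in ps]
        out[author] = {
            "frequency": len(ps),
            "intensity": sum(scores) / len(scores),
            "duration_days": (max(days) - min(days)).days,
        }
    return out
```

Separating frequency, intensity, and duration in this way is what lets the three posting profiles described below be distinguished at all: two authors with identical total negativity can still differ sharply on how concentrated or prolonged that negativity is.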
High-intensity radical posting behaviours
High-intensity radical posters tend to be those who post few messages overall, and over a relatively short period online. Within that short window, however, these authors post a high volume of negative and very negative messages (see Table 1).
Table 1. Descriptive comparisons of radical posting behaviours.
| Radical posting groups | HIR | HFR | HDR |
|---|---|---|---|
| n (%) | 100 (33.33) | 100 (33.33) | 100 (33.33) |
| Mean number of very negative posts | 2.27 | 1.71 | 1.14 |
| Mean posting score, very negative posts | -16.33 | -16.35 | -17.13 |
| Mean posting duration (days), very negative posts | 154.79 | 170.80 | 156.64 |
Note: HIR = high-intensity radical, HFR = high-frequency radical, HDR = high-duration radical.
Nonetheless, worth noting is the powerful, clearly written and detailed nature of most of the very negative messages posted by the high-intensity users. Communication experts would describe their messages as vocabulary-rich and stylistic – qualities that may make them influential to readers. These messages are also assertive in tone and feature an array of alarming and emotional language (with words such as ‘bomb’, ‘kill’, ‘evil’, and ‘threat’) about their adversary groups, oftentimes advocating violence against Jews specifically – a form of leakage, a warning behaviour that researchers believe can assist in identifying those at risk of violence online. Authors who post these intense messages appear to be fixated on revealing the “truth” about Jewish control over the white race, and such fixation is a linguistic marker that has been recognised as a key warning behaviour in online settings. In addition, many of these high-intensity posters actively try to reinforce a sense of community by generating discussions about the so-called white struggle against “Jewish domination”, a communication tactic that Joyce and Kraut posit is highly influential, provided that the recipient perceives the source of the message as credible. Such a community-building tactic is commonly used by the extreme right to unite their movement online.
High-frequency radical posting behaviours
High-frequency radical posters post a high volume of content in general and negative messages in particular; relative to the other two posting groups, that content is moderately negative and spans a moderate period of time. Although these high-frequency users tend to post few very negative messages in the online community compared to the high-intensity authors, their radical discourse nonetheless spans a long period of time, which may suggest a level of dedication to the community, according to previous research on social influence. High-frequency users’ negative messages also include emotional and alarming language intended to degrade Jewish, Black, and LGBTQ people as well as a wider group of adversaries, rather than centring on verbal attacks against Jews alone. Such messages, however, tend to consist of descriptive accounts of their adversary groups rather than calls to action or incitements to violence against them. Nonetheless, high-frequency users may be influential in the RWE forum: their high volume of engagement may increase their potential to influence others, as previous research on social influence has found.
High-duration radical posting behaviours
Lastly are the high-duration radical users, who post few negative and very negative messages over an extensive period of time, most of which target the Jewish community and raise concerns about “Jewish corruption.” Importantly, these authors are not as active in the sub-forum as the other two user groups, but when they do post very negative content, their messages are amongst the most negative in the sample. This may indicate a long-term commitment to the online community: previous studies suggest that the time an individual spends communicating in a specific place may affect their ability to gain social influence, especially if they communicate with emotionally rich, vocabulary-rich sentiment. This is particularly the case when individuals attempt to motivate others to participate in the discussions or, as noted earlier, when they attempt to build a sense of identity for a particular group. High-duration posters in the sample did just that: over time they posted radical messages in an attempt to bond with others to overcome the “evil Jews” – a community-building tactic that is commonly used by the extreme right online.
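As a rough sketch of how the three profiles above differ, one could label each author by the career dimension on which they stand out most among their peers, assuming per-author measures of frequency, mean sentiment intensity, and active duration. This is a simplification for illustration only; the study’s actual grouping procedure is more involved and is not reproduced here.

```python
def classify(measures):
    """Label each author by the dimension on which they rank highest
    among peers: intensity (most negative mean sentiment), frequency
    (most posts), or duration (longest active span). A simplification,
    not the study's actual grouping procedure."""
    authors = list(measures)

    def ranks(key, reverse):
        order = sorted(authors, key=lambda a: measures[a][key], reverse=reverse)
        return {a: i for i, a in enumerate(order)}

    per_dim = {
        "high-intensity": ranks("intensity", reverse=False),  # most negative first
        "high-frequency": ranks("frequency", reverse=True),
        "high-duration": ranks("duration_days", reverse=True),
    }
    # Assign each author the label where their rank (0 = top) is best.
    return {a: min(per_dim, key=lambda label: per_dim[label][a]) for a in authors}
```

The point of the sketch is that the labels are relative, not absolute: an author is “high-duration” only in comparison with the other radical users in the sample, mirroring how the three groups in Table 1 are contrasted against one another.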
Together, these findings show that RWE posting behaviours are multi-dimensional, comprising an array of posting patterns that law enforcement officials and intelligence agencies may deem worthy of future investigation. It remains unclear, however, which of these behaviours signals the most credible threat. Future work is needed to connect the on- and offline worlds of radical users. Researchers, for example, could combine the online data with offline data from a sample of known violent extremists in an effort to triangulate their offline experiences with their online behaviour. This, amongst other research strategies, would provide researchers, practitioners, and policymakers with new insight into the online discussions, behaviours and actions that can spill over into the offline realm.
Ryan Scrivens is an Assistant Professor in the School of Criminal Justice at Michigan State University. He is also a Research Fellow at the VOX-Pol Network of Excellence and a Research Associate at the International CyberCrime Research Center at Simon Fraser University.
This article summarises a recent study published in Deviant Behavior.