Facebook’s Disruption of the Boogaloo Network
5th August 2020
Jonathan Lewis

On 30 June 2020, Facebook announced it was designating “a violent US-based anti-government network as a dangerous organization” and subsequently banning it from its platform. While this fringe network is identified by Facebook as one that uses the term boogaloo, it is described as distinct from the “broader and loosely-affiliated boogaloo movement because it seeks to commit violence.”

The ‘boogaloo’ movement, while not a cohesive or unified organisation, has coalesced around a catch-all rallying cry popularised by extremist actors who see violence as a means to bring about societal collapse. While the movement must be understood within the broader contours of far-right and militia-style movements, Facebook’s recent targeted action is emblematic of the difficulty of countering an increasingly meme-based set of narratives advanced by a range of accelerationist actors online.

While Facebook has previously taken steps to crack down on what it describes as “violence within the boogaloo movement,” this represents a noteworthy application of its updated Dangerous Individuals and Organizations policy and its first significant, targeted action against this specific network. The strategic network disruption reportedly removed 220 Facebook accounts, 95 Instagram accounts, 106 groups, and 28 pages; a further 400 groups and 100 pages that operated outside this targeted network were also removed for similar policy violations.

This disruption represents an important, albeit belated, first step in denying such extremists a platform online. On the day Facebook’s announcement was made, three Senators wrote to Facebook requesting information on “125 Facebook groups devoted to the ‘boogaloo’” and the prevalence of white supremacist actors and “right-wing extremist groups…using Facebook to plan a militant uprising in the United States.” Concerningly, these ‘boogaloo’-centric groups had enjoyed relative freedom on a range of platforms to network and attract others in the months before the strategic network disruption. And while platforms such as Discord have also engaged in content moderation to remove ‘boogaloo’-aligned groups and servers that incite violence, the ability of individuals within these networks to adapt their coded language at 4chan-esque speed illustrates the challenge of countering such a network: when any common word, from luau to cookout, can be easily weaponised as a blend of irony and satire that clouds violent intent, moderation requires not only intimate knowledge of the subject matter but also the ability to read between the lines.

The core anti-government ideas behind the boogaloo meme are not new. However, there have been notable recent mobilisations in response to perceived government overreach related to COVID-19 stay-at-home orders, as well as nationwide demonstrations in response to the murder of George Floyd. These flashpoints have served as catalysts for extremists to exploit real tensions and move from an online meme to unorganised, violent offline action, and have allowed adherents to develop a set of shared collective narratives, grievances, and even martyrs. The boogaloo ‘movement’ itself is not only amorphous and flexible; its barrier to entry is practically nonexistent. As JJ MacNab detailed in her recent testimony before the U.S. House Committee on Homeland Security, “when a movement is no deeper than a special look or a shared set of memes, anyone can join.” This ‘movement’, and the violent anti-government ideologies that underpin it, are a manifestation of modern domestic terrorism. The decentralised, leaderless violent movements that have become synonymous with far-right extremism in the United States represent a unique challenge to both technology companies and the government, and require a reevaluation of how these entities perceive and counter emerging threats that do not adhere to traditional terrorist organisations’ structure or modus operandi.

Counterterrorism is, at its core, a public function. As both governments and technology companies grapple with the range of far-right actors and movements that seek to recruit and radicalise others online, partnerships between the two will be increasingly important. All the tools at their disposal should be examined and utilised responsibly, with proper oversight; the government should not ask technology companies to engage in more robust counterterrorism action than it is willing to take itself. As technology companies are berated for what some stakeholders and policymakers have deemed insufficient accountability and transparency reporting concerning their counterterrorism efforts, governments must also take responsibility by setting and clarifying legal expectations, supporting the development of industry standards, and considering opportunities for responsible partnerships and collaboration. Furthermore, long-term strategies that address the role of online spaces in furthering extremist narratives set forth by bad actors at both the state and non-state level must answer the question: ‘Are we comfortable with technology companies acting as the ultimate arbiter of what is and is not terrorism?’ In the interim, however, inaction is simply not an option.

Given this, it is important that we engage in a robust and frank conversation regarding the existing authorities the government possesses to support online efforts to combat extremist activity. Not only is the designation of white supremacist, neo-Nazi, and accelerationist groups an important targeted tool for countering organisations that fit such legal criteria, it can also provide added value to platforms attempting to engage in crucial moderation and enforcement activities against bad actors. As Tech Against Terrorism’s Adam Hadley recently noted, “designating far-right organisations as terrorist groups would help the smaller platforms that are most vulnerable to extreme far-right exploitation by giving them the legal protection they need to remove content unchallenged.” While the unique constitutional protections of the United States likely preclude such designations domestically (and in some cases designation would fundamentally mischaracterise the efficacy of terrorist group designation as a legal mechanism), coordinated action remains possible in line with that taken by U.S. allies such as the U.K. Home Office, which has proactively proscribed numerous white supremacist terror groups. While not a panacea, coordinated designation and proscription, when used in conjunction with a range of other tools, can be an effective measure against white supremacist and neo-Nazi organisations, as seen in efforts to counter the online presence of the Islamic State, and can empower technology companies to enforce their policies across their platforms.

While further action is needed from the government, social media giants are not off the hook. Coordinated action from a wide range of companies, perhaps through established industry organisations (including buy-in and participation from smaller companies that lack the resources or the will), is integral to meaningfully stymieing online movements like the violent anti-government network identified by Facebook. Last month’s designation and strategic network disruption by Facebook should be built upon, reproduced, and executed in a transparent and collaborative manner across the Global Internet Forum to Counter Terrorism’s membership base. This model can and should be replicated in future efforts to pair proactive measures with existing reactive coordination protocols.

The linkages between online incitement and offline violence remain a potent threat that requires renewed attention and a whole-of-society approach, inclusive of policymakers, practitioners, and researchers. While questions remain over whose responsibility it is to confront terrorism online, the actions taken in June by Facebook are a step in the right direction. However, more action is needed to counter the underlying white supremacist and anti-government extremist networks that fuel both the overt violence witnessed offline and the incitement of such violence within the network targeted last month. As long as efforts to counter terrorist content online remain scattershot, one-off operations by a single social media company against a network that represents only a symptom of the overall disease, extremists will continue to enjoy relatively free rein online to recruit, radicalise, and incite violence.

Jon Lewis is a Research Fellow at the Program on Extremism, where he studies violent extremist organisations and actors in the United States as well as the activities of the Islamic State and its supporters in the United States and Europe.