Violent extremists are continually adapting the ways in which they exploit digital platforms for coordination and recruitment. Open-source intelligence (OSINT), meaning information openly available through social media, satellite imagery, public records, and other online sources, is increasingly empowering nonstate actors and violent extremists to conduct surveillance and targeting once reserved for national militaries. Recent cases from across the ideological spectrum, from Wagner Group mercenaries recruiting arsonists in London to Hamas militants compiling “kill lists”, show how the malicious collection of freely available information can translate into real-world violence. Extremist groups are leveraging commercial satellites, data breaches, and AI-powered facial recognition applications alongside encrypted messaging platforms to generate targeting dossiers within minutes rather than weeks. This enables these groups to publish doxxing lists, collections of personal information leaked with the intent to harass or incite violence against the subject, in pursuit of their goals. However, the same technological TTPs (tactics, techniques, and procedures) that amplify militants’ capacity to harm can be repurposed by tech companies and policymakers to disrupt them. This Insight explores the evolving phenomenon of OSINT-driven extremism, its strategic effects on the threat landscape, and a set of action-oriented recommendations, ranging from enhanced content moderation to cross-platform intelligence sharing, to raise the costs for extremists who weaponise open data.
From Screens to Sabotage: OSINT as an Extremist Tool
On 20 March 2024, three British men set fire to a warehouse in East London that stored Starlink satellite terminals bound for Ukraine. UK prosecutors later revealed that the arsonists had been guided by Russia’s Wagner Group, which crowdsourced reconnaissance via a Telegram bot. Five days before the attack, a Wagner-linked handler on Telegram provided the men with satellite images and other OSINT data points of the target site. The subsequent arson caused roughly £1 million in damage. Investigators discovered that the same handler immediately set new tasks, including an attempted kidnapping and further arson attacks, demonstrating how quickly online direction can generate offline violence. This case marked the first convictions under the UK’s National Security Act for an attack orchestrated via digital intelligence cues.
OSINT-driven targeting is not confined to Russian proxy groups. In the Middle East, Hamas operatives have systematically harvested open-source information on Israeli security personnel. In mid-2024, Hamas released dossiers on over 2,000 Israeli Air Force officers compiled from LinkedIn profiles, Facebook posts, Google Earth, and leaked databases. These detailed files included names, addresses, and family photos, and in some cases even vehicle licence plates and bank details, ostensibly “for revenge” against those the terror organisation identified as responsible for casualties in Gaza. Shin Bet (Israel’s internal security agency) noted that this is an ongoing effort dating back as early as 2007, throughout which Hamas has used OSINT techniques to map out attacks on Israeli targets. The pairing of personal data with online maps allowed Hamas to substitute mass-harvested OSINT data for traditional espionage. An Israeli probe into these practices found that Hamas had spent years collecting information on border villages, mapping security cameras and residents’ routines and downloading satellite imagery, to aid the eventual surprise attacks of 7 October 2023. Together, these examples illustrate a maturing extremist modus operandi: (1) mass collection of exposed personal or geospatial data, (2) rapid fusion of that data into actionable target packets, (3) broadcast of target packets via encrypted or other closed channels, and (4) real-world harassment or violence triggered by the online event, sometimes within days.
Strategic Effects on the Extremist Landscape
Access to OSINT-derived information is shifting extremist focus toward “softer” targets. Beyond attacking uniformed military or security personnel and more traditional soft civilian targets such as hotels or mass gatherings (as seen in the U.S. Boston Marathon bombing and the Bamako Radisson Blu hotel attack), militant networks now find it effective to individually target election workers, journalists, infrastructure engineers, school officials, tech executives, and by extension their families, for violence and intimidation using information found as openly as on social media. By publishing these individuals’ home addresses, family photos, and/or workplace details, extremists hope to intimidate them into inaction or inspire others to commit acts of terror on their behalf. The 2025 Homeland Threat Assessment by the U.S. Department of Homeland Security warns that domestic extremists will likely persist in targeting government and critical infrastructure personnel, often using doxxing or harassment to terrorise them at home. By expanding the target set to include these “softer” civilians and low-profile officials, extremists gain leverage: silencing a local official can disrupt democratic and civic processes in ways that serve extremist narratives. The Wagner Group case in London is a stark demonstration that doxxing and internet-based targeting are not just psychological warfare TTPs, as seen with Hamas, but can also function as preparatory strikes for real-world sabotage. The availability of real-time intelligence on Telegram or 4chan effectively lowers the cost of violence; if an extremist knows from Google Earth images and Facebook check-ins exactly where a targeted individual will be and what their routine is, the effort and risk required to strike them drops dramatically.
Cross-Ideological Diffusion
Techniques for OSINT exploitation are spreading rapidly across different extremist movements, blurring traditional ideological boundaries. Online platforms, especially Telegram with its lax moderation standards, serve as hubs for militant accelerationists, Salafi jihadists, and others to effectively swap tactics. A report by the Southern Poverty Law Center (SPLC) in late 2024 found that Telegram’s channel recommendation algorithm was creating pathways between disparate extremist communities. While the SPLC references diffusion among groups with similar ideological backgrounds, a fusion of tactics and strategy can be seen in organisations that are ideologically opposed to one another. Channels and lines of communication ostensibly focused on one ideology have been found to cross-promote content from another; for instance, white nationalist channels have been found to forward instructional material originally posted by pro-Jihadist groups. This cross-pollination means that once a doxxing effort or OSINT TTP proves effective for one extremist faction, others adopt it almost immediately.
Investigators have also noted how content moderation evasions are copied: when Telegram “hid” some extremist channels from search results under pressure, users simply created mirror channels and repost bots to keep the content circulating. Furthermore, a WIRED investigation revealed in November 2023 that Telegram’s supposed bans on certain Hamas and far-right channels were purely cosmetic, as the channels remained active for those who sought them out via direct links, and their content (videos, addresses, etc.) was copied and pasted into unrestricted public channels visible to everyone. In effect, extremist OSINT and propaganda have proven to be platform-agnostic and highly resilient, bouncing across channels and services. This interconnected ecosystem greatly complicates attribution and response, as contributors from multiple countries and ideologies are united by technique rather than by a single command structure.
Existing Mitigation: Progress and Gaps
Major technology platforms have started to respond to the rise of doxxing and OSINT-enabled terror, but measures so far are uneven and often too slow to keep pace with the material circulating online. Privacy rules have, however, tightened on mainstream social networks. In 2022, Meta, via its independent Oversight Board, removed the loophole that previously allowed users to post someone’s private home address if it was publicly available elsewhere. Meta’s policy now treats any residential address or GPS coordinates as private information that cannot be shared without consent, regardless of whether it appears in public records. X, Reddit, and others have implemented similar rules against doxxing. However, enforcement remains a challenge. Extremist doxxers often drop cryptic hints or partially obfuscated information to skirt the platforms’ rules, making such posts difficult for content moderators to catch before they spread. In fast-moving extremist channels outside the major platforms, doxxing content can go viral long before any removal occurs. Meta’s transparency reports acknowledge that detecting posts containing private personal information relies heavily on user reporting and is hard to automate. These policy changes exist on paper, but real-time enforcement often fails to keep up with the velocity of personal information dissemination.
Encrypted messaging platforms present additional issues. Telegram, the platform of choice in many recent cases, has been inconsistent in its countermeasures. After public pressure in late 2023, particularly during the Israel–Hamas war, Telegram began to restrict some extremist channels. However, as the aforementioned WIRED investigation found, this is far from a full ban: the channels remain functional, and subscribers can still see and forward content. Telegram’s actions did reduce organic virality, yet archives of doxxing information and violent content persisted. Mirrored channels popped up to evade restrictions, and users on desktop or direct-download versions of Telegram continued sharing location coordinates of targets with little interference. Telegram’s semi-encrypted nature also means that once data is out in a channel, it can proliferate freely even if the original post is deleted. Other encrypted apps like WhatsApp or Signal pose an additional dilemma: strong encryption protects privacy, but also means extremists can broadcast doxxing lists to closed groups with no visibility for moderators. Some services have tried creative solutions, with WhatsApp now letting group members report highly forwarded messages so viral content can be reviewed, but doxxing detection in encrypted channels remains largely an unresolved problem.
Recommendations
In light of these gaps, a layered defence is needed: one that combines smarter platform design, cross-platform cooperation, and supportive public policy. Platforms and data providers should reduce the exposure of high-value personal data through thoughtful design changes, for example by placing sensitive databases and maps behind friction tools like login requirements, CAPTCHAs, or rate limits that hinder scraping. If extremists find it easy to pull hundreds of addresses from a public site in seconds, that site might introduce a verification step or throttle bulk queries. Mapping services could implement dynamic watermarks or a slight coordinate jitter/offset for certain views, making automated extraction of precise coordinates harder. Social platforms can offer blur features, for instance an option for users to blur their own or others’ faces, house numbers, or car licence plates in uploaded images. This would mirror what Google Street View already does, but apply it to user content.
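To illustrate the throttling idea above, the following is a minimal sketch of a sliding-window rate limiter that a public-records site might apply to bulk lookups before escalating to a CAPTCHA or login wall. The thresholds, function name, and client identifier are hypothetical assumptions, not a description of any platform’s actual implementation.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds only; a real deployment would tune these.
WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 10  # beyond this, require a CAPTCHA or login

_history = defaultdict(deque)  # client_id -> timestamps of recent queries


def allow_query(client_id, now=None):
    """Return True if the lookup proceeds, False if friction should kick in."""
    now = time.monotonic() if now is None else now
    window = _history[client_id]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_QUERIES_PER_WINDOW:
        return False  # trigger a verification step instead of serving data
    window.append(now)
    return True
```

The design choice here is deliberate: legitimate individual users rarely notice such a limit, while bulk scraping of hundreds of addresses becomes slow and conspicuous.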
Campaigns by extremist networks often use burst patterns when employing OSINT TTPs, and algorithms can catch these. Platforms should deploy graph analytics to flag coordinated bursts of personal data sharing. For example, if 20 newly created accounts post an identical phone number or address within a 10-minute span, this is likely a doxxing effort. Similarly, a sudden flood of messages tagging the same individual with slurs or threatening language could indicate an ongoing harassment swarm. By tuning machine learning models to these patterns, social media platforms can raise an automated alert to their Trust & Safety teams before the doxxing effort spreads. Quick response is key; ideally, the first few posts can be removed and the offending accounts locked, stopping the campaign mid-effort. Rate-limiting features can be implemented in messaging apps as well; for instance, if a piece of content is detected in an encrypted channel, the system could temporarily disable the forward or invite function for that content to prevent it from cascading further. Even without reading encrypted messages, a service could recognise a checksum or filename (e.g., “address_phonenumber.png”) associated with known doxxing content and apply a circuit-breaker effect, mitigating in real time once a doxxing incident begins.
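The burst pattern described above can be sketched as a simple sliding-window detector. The thresholds (20 distinct accounts within 10 minutes) follow the example in the text; the function name, input shape, and the idea of pre-extracted PII strings are assumptions for illustration, not a production design.

```python
from collections import defaultdict

BURST_WINDOW = 600          # seconds (10 minutes)
BURST_ACCOUNT_THRESHOLD = 20  # distinct accounts posting the same data


def find_doxxing_bursts(posts):
    """posts: iterable of (timestamp, account_id, extracted_pii_string).
    Returns the set of PII strings posted by BURST_ACCOUNT_THRESHOLD or more
    distinct accounts inside any BURST_WINDOW-second window."""
    seen = defaultdict(list)  # pii string -> time-ordered (timestamp, account)
    for ts, account, pii in sorted(posts):
        seen[pii].append((ts, account))

    flagged = set()
    for pii, events in seen.items():
        # Slide a window over the events and count distinct accounts.
        start = 0
        for end in range(len(events)):
            while events[end][0] - events[start][0] > BURST_WINDOW:
                start += 1
            accounts = {acct for _, acct in events[start:end + 1]}
            if len(accounts) >= BURST_ACCOUNT_THRESHOLD:
                flagged.add(pii)
                break
    return flagged
```

A detector like this would sit upstream of a Trust & Safety queue: flagged strings become alerts for human review rather than automatic removals, limiting false positives on benignly popular content.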
Conclusion
The rise of OSINT-enabled terror has effectively collapsed the barrier between online collection and offline attack. What we are witnessing is extremists constructing a military-style ISR (intelligence, surveillance, reconnaissance) and strike chain organically out of everyday digital tools and platforms, using them to intimidate targets and launch real-world violence. This trend is reshaping the extremism landscape by broadening the range of victims, amplifying psychological impact, and rapidly diffusing tactics across ideologies. Yet the very same technological toolsets that empower militants also offer new defences for those willing to innovate and collaborate. By shrinking the exposed attack surface of personal data, institutionalising shared doxxing detection assets, deploying real-time analytics, and coordinating across platforms on threats, the tech and social media communities can significantly inhibit OSINT-focused extremist campaigns. The recommendations outlined here seek to preserve the benefits of a connected world while denying violent extremists the easy victories they currently enjoy in the OSINT arena. By implementing these steps, we can help ensure that the next Wagner Group-esque plot or doxxing spree is detected early, disrupted effectively, and ultimately deterred.
—
Timothy Kappler is a violent extremism researcher and criminal justice and homeland security doctoral student at Liberty University. His research predominantly focuses on militant accelerationism, nihilistic violent extremism, and the role of online platforms in the radicalization of pre-radicalized subjects. He has fifteen years of combined experience in the public sector, having worked with the Department of Justice, Department of Defense, and US Army Special Operations Command while enlisted. Additionally, he contributes to a privately held risk intelligence firm as an intelligence consultant and teaches homeland security at a local state college.
—
Are you a tech company interested in strengthening your capacity to counter terrorist and violent extremist activity online? Apply for GIFCT membership to join over 30 other tech platforms working together to prevent terrorists and violent extremists from exploiting online platforms by leveraging technology, expertise, and cross-sector partnerships.