
Digital Aftershocks: Deepfakes in the Wake of the Pahalgam Attack in Kashmir

18th August 2025 Tarun Agarwal

The terrorist attack in Pahalgam, India, on 22 April 2025 marked a tragic moment in the country’s history. The attack killed 26 people, and responsibility was claimed by The Resistance Front (TRF), an offshoot of Lashkar-e-Toiba operating in Kashmir. As official ground-level investigations unfolded, a parallel conflict was already underway online. Messaging and social media platforms, particularly Telegram, WhatsApp, and X, saw an onslaught of AI-generated photos, deepfake videos, and fake military transmissions about the attack within hours. The aftermath was not an eruption of grief or disinformation alone, but also the rapid creation of polarising storylines propagated by loosely networked ideological actors.

The use of synthetic media was carefully calibrated: often ideology-driven, visually credible, and linguistically tailored to evade detection. Rather than direct incitement to violence, much of the content operated in a grey zone: emotionally manipulative, narratively polarising, and legally ambiguous.

These materials capitalised on platform vulnerabilities and weak content policies, such as the absence of mandatory labelling for AI-generated media, over-reliance on keyword-based moderation instead of context-aware detection, and the lack of crisis-specific protocols during terror events, as well as the emotionally charged nature of digital publics. While some content was shared in regional languages like Hindi, Urdu, and Kashmiri, a significant portion of the deepfakes and fabricated narratives appeared in English, emotionally charged and platform-native in both form and tone. This multilingual strategy, coupled with the speed of circulation, meant the disinformation spread faster than it could be verified. Much of this content was crafted to appeal to, and reinforce, violent extremist narratives spreading across both domestic and cross-border digital networks.

This Insight examines the nature and structure of the synthetic information flows that followed the Pahalgam attack. It analyses how AI-generated content was leveraged to deepen communal and sectarian divides, amplify fringe ideological movements from Hindutva hardliners to pan-Islamist actors, and erode public trust in official institutions. In doing so, it situates the Pahalgam episode within a larger pattern: the convergence of emerging technologies and extremist propaganda, where disinformation serves simultaneously as a recruitment tool and a contested battleground.

Synthetic Narratives and Ideological Amplification

Rather than gradually dissipating after the initial shock, the online reaction to the Pahalgam attack quickly solidified into something more organised and nefarious. In a matter of hours, Telegram and X saw not only an inflow of user responses but the deliberate seeding of synthetic narratives: AI-generated visuals, deepfakes, and fabricated documents crafted with ideological intent. These were not isolated instances of misinformation but part of a larger process of digital manipulation meant to capitalise on the emotional response to the violence and deepen existing fault lines.

Among the earliest viral artefacts were AI-generated images of dead bodies and militant figures, circulated by Pakistani as well as pro-Khalistani handles on X and Telegram. These images were misrepresented as documentary proof of the attack. Subsequent forensic reviews revealed clear markers of digital forgery, such as distorted body parts and watermark traces of generative tools. One such image showed an alleged “martyr” from TRF, but image analysis confirmed it had been synthetically constructed using generative adversarial networks (GANs).

Equally disturbing was the emergence of stylised, sentimental visuals. Ghibli-style AI-generated illustrations depicting mourning widows or divine retribution circulated as part of tribute posts, including one widely shared image showing the Hindu deity Krishna fighting the attackers. While this visual content presented itself as a gesture of solidarity, its aestheticised violence and communal iconography contributed to a heavily polarised emotional climate. These images were not mere illustrations; they served as ideological artefacts, ostensibly sympathising with victims while implicitly assigning communal responsibility for their deaths.

At its core, Pakistan-linked messaging consisted of false flag narratives. Troll networks and digital influencers aligned with Pakistani military-linked accounts pushed the hashtags #IndianFalseFlag, #PahalgamDramaExposed, and #ModiExposed to trending status within hours of the attack. Many of these posts claimed that India had orchestrated the attack on its own people to defame Muslims and win elections. However, OSINT data analysed by several fact-checkers showed that 75% of these posts originated from accounts that had previously promoted pro-government or pro-military narratives in Pakistan. Framing the attack as staged served two purposes: first, to weaken global sympathy, and second, to reframe culpability, presenting India as the aggressor instead of the victim.

 

Figure 1: Screenshot from a Reddit post spreading a “false flag” conspiracy around the Pahalgam attack.

What stood out in the disinformation wave following the Pahalgam incident was not just its volume or repetition, but how carefully crafted the content was. It was not sloppy or obviously fake. Instead, it used emotional appeal and religious symbolism to anchor its narrative in the audience’s emotions. At first glance, the content seemed rooted in humanitarian concern and moral commentary, but underneath, it carried an undertone that blurred the boundary between freedom of expression and incitement, making it harder to distinguish where ordinary opinion ended and extremism began. These videos attempted to delegitimise the media and present those fomenting the narratives as the only ‘truth-tellers,’ a recruitment tactic used to build in-group loyalty and out-group hostility. By evoking trauma, honour, and religious valorisation, this content did not merely circulate; it recruited. It shaped identity-based worldviews that could push users further into extremist pipelines, especially when repeated in closed echo chambers.

At its core, this campaign was designed to inject doubt, distort perception, and destabilise confidence in institutions. Whether by mocking victims, inventing communal fractures, or claiming cyber breaches of Indian defence systems, the goal was not merely to deceive but to fragment. In doing so, extremist narratives did not operate in isolation; they coalesced through coordinated amplification, tailored language targeting (English, Hindi, Urdu), and platform-native formatting that made these messages stick.

The Pahalgam attack became more than an incident of physical terrorist violence. It became a template for how digitally enhanced ideological warfare can be deployed in real time, where the first casualty is not truth, but trust.

Platform Gaps, Encrypted Networks, and Moderation Failures

The synthetic content generated after the Pahalgam attack carried a clear ideological thrust, framed through violent extremist discourses rooted in both pan-Islamist and radical Hindutva narratives. These narratives amplified sectarian grievances, religious symbolism, and communal blame, fuelling a climate that spilled over into real-world violence and persecution. Following the attack, Kashmiri Muslims, including students in Uttarakhand, Haryana, and Punjab, faced targeted harassment, physical assaults, and eviction threats driven by online rumours and hate-driven songs that labelled them as “traitors.” Al Jazeera reported the proliferation of Hindutva pop tracks calling for retribution and an economic boycott of Indian Muslims, contributing to public fear and mistrust. Such emotionally resonant material blurred the boundary between expression and incitement, reinforcing recruitment and mobilisation dynamics within echo chambers and ideologically aligned digital spaces. Despite the severity of these narratives, much of this content evaded timely moderation due to technical limitations, policy blind spots, and the structural incentives of platform architecture.

First, major platforms, including Meta and X, operate moderation systems focused on identifying explicit violations: violent appeals, glorification of terrorist content, hate speech, and impersonation. Much of the content that followed Pahalgam existed in a deliberate grey zone: for instance, AI-generated victim tributes that appeared realistic at first glance and, while framed as memorials, used religious iconography or caste references to suggest communal targeting, subtly reinforcing a Hindu-versus-Muslim framing and institutional conspiracies. These did not always breach the platforms’ terms of service, but their cumulative effect was polarising and manipulative.

Second, the platforms’ reliance on user reports and keyword-based detection limits their responsiveness to such content, especially when it is visually sophisticated and emotionally resonant. Videos falsely depicting a soldier’s grieving widow or portraying internal dissent within the Army, cloaked in sombre music and visual cues of martyrdom, were consumed not as propaganda but as ‘truths too sensitive to be on the news.’ This false authenticity delayed intervention.

In this case, the majority of false information spread through Telegram and WhatsApp because these platforms either have limited moderation systems in place or protect content through end-to-end encryption. These closed-loop systems enabled viral narratives to spread unchecked, often being reposted across multiple public channels. By the time such content spilled over into more visible platforms like Instagram Reels or Facebook Watch, its traction was already too high for containment.

This raises a core dilemma: disinformation or ideologically charged content often neither breaks the law nor directly violates platform rules, yet it frequently overlaps with extremist propaganda in structure and impact. Platforms must move past simple true/false content evaluation to establish contextual moderation strategies. The solution includes three moderation approaches: event-triggered protocols that activate during terror attacks, soft interventions through friction layers or downranking, and transparency tools that show content origins. In closed platforms like Telegram, collaboration with multilingual fact-checkers and civil society OSINT actors can supplement detection where moderation is technically limited.
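To make the layered approach concrete, the sketch below shows, in simplified Python, how the three approaches could be combined into one graduated decision rather than a binary keep/remove choice. The signal names (synthetic_score, verified_source, crisis_mode) and the thresholds are illustrative assumptions for this example, not an existing platform API.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    synthetic_score: float   # 0-1 output of an assumed synthetic-media classifier
    verified_source: bool    # whether provenance metadata (e.g. a C2PA manifest) was found
    report_count: int        # user reports received so far

def moderation_action(post: Post, crisis_mode: bool) -> str:
    """Return a graduated intervention instead of a binary keep/remove decision.

    crisis_mode models an event-triggered protocol switched on during a terror
    attack or election; all thresholds here are illustrative only.
    """
    # Transparency layer: unlabelled, likely-synthetic media gets a provenance notice.
    if post.synthetic_score > 0.8 and not post.verified_source:
        return "label_ai_generated_and_downrank"
    # Soft intervention: during a crisis, ambiguous content gets friction, not removal.
    if crisis_mode and (post.synthetic_score > 0.5 or post.report_count >= 3):
        return "add_friction_and_queue_for_regional_review"
    # Default: leave visible but keep collecting signals.
    return "no_action"

if __name__ == "__main__":
    example = Post(text="Leaked army transmission...", synthetic_score=0.9,
                   verified_source=False, report_count=1)
    print(moderation_action(example, crisis_mode=True))
```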

Language and aesthetics also played critical roles. Many deepfakes were in English, polished, persuasive, and seemingly credible to both Indian and global audiences. Others used Hindi or Urdu captions layered over visual content, allowing the same clip to circulate across ideological communities. This multilingual tailoring made moderation even more difficult. Platforms ill-equipped for nuance in regional languages failed to catch incendiary edits or contextually loaded imagery. Similarly, AI-generated content presented as artistic renderings, such as the Ghibli-style animations of Hindu deities or martyrdom sequences, was treated as expression rather than distortion.

Many platform architectures reward provocation. Algorithmic amplification of controversial content ensured that posts with inflammatory hashtags, emotionally charged visuals, or extremist framings gained disproportionate visibility. In effect, the structure of engagement economies made polarisation not a byproduct but a design feature.

Conclusion and Recommendations

The post-Pahalgam ecosystem revealed that platform policies have not evolved fast enough to counteract synthetic disinformation campaigns crafted for plausible deniability. The result is a digital terrain where ambiguity becomes armour, and ideological propaganda cloaks itself in the language of mourning, nationalism, or outrage.

The solution requires platforms to set up regional moderation systems, develop proactive detection tools for synthetic media, and improve their understanding of how disinformation capitalises on emotional cues. An essential first step is creating multilingual AI detection models that can identify deepfakes, lip-sync inconsistencies, and synthetically generated voices in Hindi, Urdu, Kashmiri, and other regional languages. Current detection tools are overly English-centric and lack cultural specificity. Platforms should collaborate with local AI labs and regional linguists to fine-tune detection algorithms using datasets grounded in South Asian visual, audio, and linguistic patterns. This will improve the identification of culturally embedded synthetic content that often evades standard filters.
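As a rough illustration of what such a screening step might look like, the sketch below uses the Hugging Face transformers pipeline API to score an image and an audio clip with regionally fine-tuned detectors. The model identifiers are placeholders for hypothetical fine-tuned checkpoints (they do not refer to published models), the label name "synthetic" is an assumption, and the 0.7 review threshold is illustrative.

```python
from transformers import pipeline

# Hypothetical checkpoints fine-tuned on South Asian visual and audio data (placeholders).
IMAGE_DETECTOR = "example-org/deepfake-image-detector-southasia"
AUDIO_DETECTOR = "example-org/synthetic-voice-detector-hi-ur-ks"

image_clf = pipeline("image-classification", model=IMAGE_DETECTOR)
audio_clf = pipeline("audio-classification", model=AUDIO_DETECTOR)

def screen_media(image_path: str, audio_path: str) -> dict:
    """Return per-modality scores for an assumed 'synthetic' label and a review flag."""
    image_scores = {r["label"]: r["score"] for r in image_clf(image_path)}
    audio_scores = {r["label"]: r["score"] for r in audio_clf(audio_path)}
    image_synthetic = image_scores.get("synthetic", 0.0)
    audio_synthetic = audio_scores.get("synthetic", 0.0)
    return {
        "image_synthetic": image_synthetic,
        "audio_synthetic": audio_synthetic,
        "flag_for_review": max(image_synthetic, audio_synthetic) > 0.7,
    }
```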

Beyond detection, regionalising content moderation infrastructure is essential. Platforms should invest in real-time moderation panels that include local language experts and civil society actors, especially during sensitive events such as terror attacks or elections. Partnerships with fact-checking organisations that specialise in particular regions, such as Alt News, BoomLive, or Factly, will improve the speed and credibility of content verification. Platforms should also implement metadata transparency tools, including the Content Authenticity Initiative (CAI) and C2PA standards, to preserve essential information about the origins of media content. User behaviour can be guided through minimal provenance indicators, such as badges reading “AI-generated” or “source unverifiable”, without compromising privacy.
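A minimal sketch of the badge logic described above follows. The provenance dictionary stands in for whatever a C2PA/CAI manifest parser would return; its field names ("has_manifest", "generator") and the generator keywords are assumptions for the example, not the actual C2PA schema.

```python
from typing import Optional

def provenance_badge(provenance: Optional[dict]) -> str:
    """Map a parsed provenance record to a user-facing badge string."""
    if not provenance or not provenance.get("has_manifest"):
        # No verifiable manifest found: flag rather than block.
        return "source unverifiable"
    generator = (provenance.get("generator") or "").lower()
    if any(tag in generator for tag in ("ai", "gan", "diffusion")):
        return "AI-generated"
    return "verified capture"

# Example usage with assumed manifest fields.
print(provenance_badge({"has_manifest": True, "generator": "diffusion-model-x"}))
print(provenance_badge(None))
```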

Platform architectures must attempt to interrupt the virality of emotionally manipulative content. Algorithms that detect high emotional valence, recent trauma references, and synthetic visuals should trigger content throttling or friction layers before mass dissemination. Platforms should activate event-specific moderation protocols during high-risk events, including downranking unverified media, issuing public advisories, and directing users toward verified updates through curated truth hubs. Without such targeted interventions, synthetic propaganda will continue to exploit platform vulnerabilities, deepen ideological fault lines, and erode public trust.
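The toy function below illustrates how such an event-specific throttling trigger might combine these signals. The keyword list, weights, and 0.6 threshold are assumptions made for the example; a production system would replace the keyword match with trained multilingual affect and trauma-reference classifiers.

```python
# Illustrative trauma-reference terms; a real system would use multilingual classifiers.
TRAUMA_TERMS = {"martyr", "massacre", "revenge", "traitor"}

def should_throttle(text: str, emotional_valence: float,
                    synthetic_score: float, event_active: bool) -> bool:
    """Decide whether to slow distribution of a post during a high-risk event.

    emotional_valence and synthetic_score are assumed 0-1 outputs of upstream
    classifiers (an affect model and a synthetic-media detector).
    """
    if not event_active:
        return False
    trauma_hits = sum(term in text.lower() for term in TRAUMA_TERMS)
    risk = (0.4 * emotional_valence
            + 0.4 * synthetic_score
            + 0.2 * min(trauma_hits, 3) / 3)
    return risk > 0.6

print(should_throttle("They will avenge our martyr", 0.9, 0.8, event_active=True))
```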

Tarun Agarwal is an Associate Fellow at the Centre of Policy Research and Governance (CPRG), New Delhi, where he leads the Artificial Intelligence and Society division. He completed his PhD at the Centre for International Politics, Organisation and Disarmament (CIPOD), School of International Studies, Jawaharlal Nehru University, New Delhi, focusing on the intersection of technology and conflict in Kashmir. Previously, he served at the Indian Association of International Studies (IAIS), New Delhi, where he led the Conflict Division. His research interests include artificial intelligence, terrorism, and climate change. He holds an M.Phil. in Criminology from the University of Delhi and a Master’s degree in Criminology and Justice from the Tata Institute of Social Sciences, Mumbai. He has been a recipient of the Junior Research Fellowship awarded by the Government of India.
