Extremism Online, Evidence Offline: Dealing with the “Do a Screenshot, or it Never Happened” Phenomenon
- Venus Torabi

Early reporting on the Tumbler Ridge tragedy quickly shifted from the scene to the attacker’s digital presence. As platforms removed accounts, posts, and traces, researchers faced a familiar problem: the moment violence occurs is often the moment the empirical record disappears, leaving only fragments and making it harder to understand how extremism develops and spreads.

The news update was brief yet grave: “What we know about BC Mass shooting” (CBC.ca, 11 February). Early coverage focused on the basic facts: what happened, where, and who was involved. However, the reporting quickly moved beyond the scene itself. Headlines began examining “the apparent online footprint of the Canada school shooter […]”, which should be “tell[ing] us something” (CNN.com). The focus shifted from the events on the ground to the perpetrator’s digital presence. The 18-year-old perpetrator was no longer described only through an act of terror, but also through their online life. They became the “Canada school shooter” who had a life online (i.e., posts, profiles, and platforms).
The Tumbler Ridge tragedy is horrific, confronting Canadians with a form of violence we rarely see on our own soil and rightly causing national distress. As researchers, we choose to confront this and other forms of violence (online harassment, doxxing, exposure to triggering or extremist material, etc.) in the service of building a better world. We collect and analyze evidence of that violence in different ways, from interviews to immersion. In this blog, I want to highlight the methodological challenge of documenting digital traces (what we leave of ourselves online) in the aftermath of extremist events. I do this for two overlapping reasons: (1) digital traces are often crucial to understanding how violence develops, spreads, and might be prevented, and (2) the way we as researchers handle these traces has direct implications for prevention, accountability, and evidence-based policy.
The Tumbler Ridge case is of particular importance to my current studies of online extremism. As a researcher of online gaming and gaming-adjacent platforms exploited by extremists and terrorists, my attention was caught by a particular part of the Tumbler Ridge attacker’s digital footprint: a Roblox game created by the shooter. I felt drawn to study the game. I should study it. I should and I would, I told myself. This is precisely the kind of material researchers like me are expected to access and examine. Why should a simulation of a mass shooting in a mall go unstudied? Is Roblox becoming a “hotbed of extremism” like some other gaming and gaming-adjacent platforms (RAN, 2020)? I set out to find the game and the answers to my questions.
I opened a search engine, surfed and searched, but found almost nothing beyond secondary screenshots. Every trace had been removed after the tragedy. Before February 13th, 2026, the game had been available and was playable by many users. Now? It is gone. Removed. Pulled down by platforms.

In the immediate aftermath of an extremist attack, the digital clean-up begins. Accounts disappear, posts are removed, and networks dissolve. Platforms act quickly and, from a harm-reduction perspective, this is difficult to object to. Few would argue for leaving violent propaganda publicly accessible online. The issue nonetheless requires closer scrutiny, as extremists and terrorists are at times constructed and portrayed as “cult hero” figures. As a result, young people of a similar age may imitate their behaviours and extremist styles. A similar pattern of imitation can be found in online gaming users “honour[ing]” mass shootings: some gaming profiles paid tribute to the Christchurch attacker (2019) by adopting his name. Note that the Christchurch attacker had discussed his attack on fringe online forums such as 4chan and 8chan, and ultimately livestreamed his attack on Facebook.
That said, for researchers of extremism and terrorism, this reflexive, strategic, and security-oriented erasure produces an increasingly familiar problem: the moment violence occurs is often the moment the empirical record vanishes. The fast-paced takedown process often forces researchers into a race to document content while it remains briefly accessible.
Analytically, radicalization and extremism are not simply an offline process with an online echo (Valentini et al., 2020; Ebner, 2021; Conway et al., 2019). They are often mediated through platform infrastructures (Ribeiro et al., 2021), recommendation algorithms (Tufekci, 2018), niche subcultures, symbolic online practices (Nagle, 2017), and networked escalation. These environments matter analytically. They shape pathways of mobilization, influence ideological framing, and, in some cases, provide observable warning signals that are often most visible to trained researchers and easily missed by traditional investigative frameworks.
Yet the empirical record of these processes is frequently wiped within hours. After an attack, platforms, under public and political pressure, tend to prioritize rapid content removal to protect their reputation and limit liability. This “delete first” response may be understandable, but it also destroys valuable evidence and leaves researchers and investigators trying to reconstruct events from incomplete traces.

This creates a clear imbalance. Researchers try to study extremism and propose preventive strategies against extremist violence, but they are denied access to some of the digital spaces and materials needed to understand how it develops and spreads. What remains are fragments: screenshots, media summaries, second-hand reconstructions, or vague archives; hardly a robust foundation for serious research. This scarcity can also push researchers toward unsafe or compromising corners of the web, landing them on troll-dominated sites or disinformation outlets where they risk doxxing and where no reliable or officially verified sources are accessible.

To be clear, preservation is not amplification. Even when researchers attempt to collect data through formal disclosure channels, the process is rarely timely. Ethical review requirements, institutional safeguards, and platform cooperation take time, often longer than the lifespan of the content itself. By the time approval is secured, the empirical record has already vanished. The barriers are even higher for researchers outside large institutional settings. The result is a structural mismatch between the speed of digital erasure and the timelines of responsible research. The unresolved question is what forms of secure documentation should exist alongside removal, and who should be responsible for governing access.
Even when extremist violence happens offline, it bears the traces of perpetrators’ online lives. The Tumbler Ridge shooter, according to Steven Rai, a senior analyst at ISD, had an account on WatchPeopleDie, “a forum where users can post and view real images and videos of violence […]” (Anti-Defamation League). Such crimes never occur in a vacuum. In the most recent cases, they have been committed by perpetrators who were chronically online. And yet this online life, which should be brimming with raw research material, codes, patterns, and hints for unravelling and studying the phenomenon, is erased (for good) almost immediately after the fact.
The default remains blunt: deletion replaces understanding. And for a researcher concerned with prevention and contribution to the field rather than post hoc commentary, that is not a sustainable model.
Extremism thrives in the shadows, and studying it in a timely manner with a clear research agenda places a significant burden on researchers. Still, research cannot be expected to proceed in the dark. If prevention is to be credible and evidence-based, platforms and policymakers must move beyond reflexive deletion and create clear, accountable systems for preserving and sharing data for public-interest research. This means establishing secure data “safe havens” where flagged extremist content is preserved for a limited period and made accessible to both affiliated (vetted) and independent researchers under strict ethical and privacy safeguards.
Venus Torabi, PhD.
Extremism Online, Evidence Offline was written by Venus Torabi, a CIFRS Fellow whose Digital Humanities research examines video games, radicalization, propaganda, and the semiotics and politics of representation.