(PatriotNews.net) – A tech giant reportedly spotted violent warning signs months before a family was murdered—then chose silence over contacting police.
Quick Take
- OpenAI reportedly flagged and banned a ChatGPT account tied to Tumbler Ridge shooting suspect Jesse Van Rootselaar in summer 2025 after violent queries triggered internal review.
- Reports say more than a dozen employees debated alerting Canada’s RCMP, but OpenAI concluded the material did not meet its reporting criteria and did not contact authorities pre-attack.
- The February 10, 2026, shooting in Tumbler Ridge, B.C., killed Van Rootselaar’s mother and half-brother; OpenAI later contacted police after the incident.
- Canadian officials and experts are now pressing for clearer, stronger rules for how AI companies share credible threats with law enforcement.
What OpenAI reportedly saw—and what it didn’t do
OpenAI banned a ChatGPT account linked to Jesse Van Rootselaar in summer 2025 after the account’s violent queries were flagged for internal review, according to multiple reports. Those reports also describe an internal debate involving more than a dozen employees over whether to notify the Royal Canadian Mounted Police. OpenAI ultimately decided the content did not meet its reporting threshold and did not contact authorities at that time.
The distinction matters because “banning” is not the same as “warning.” Removing access can shut down one channel, but it does not automatically trigger a welfare check, a knock-and-talk, or the preservation of evidence by investigators. Reporting indicates OpenAI contacted police only after the shooting, at which point it also removed the account. What remains unclear from the available public reporting is the exact wording of the violent prompts and the specific standard OpenAI used to decide they were not reportable.
A digital trail across platforms, and a community left reeling
The Tumbler Ridge case sits at the intersection of online radicalization and real-world violence. Reporting describes activity beyond AI chats, including participation on a gore-focused forum and the creation of a mall shooting simulator inside Roblox before the February 10, 2026, attack. The available reporting also describes the community impact afterward, including RCMP investigations into threats that reportedly disrupted funeral-related plans for victims’ family members.
British Columbia Premier David Eby and federal AI Minister Evan Solomon are reported to be pressing for answers and stronger information-sharing around AI-driven threats. Authorities have also sought digital evidence preservation orders tied to the broader investigation. For everyday citizens, the hard reality is that law enforcement cannot act on what it never receives. When companies keep these decisions inside corporate channels, public safety outcomes can hinge on private policies that voters never approved.
The reporting gap: voluntary “criteria” vs. public safety expectations
Reporting frames OpenAI’s decision as rooted in a judgment that there was no “immediate threat,” a standard echoed in analyst commentary. Former Ontario provincial police commissioner Chris Lewis is cited as arguing that AI firms should collaborate with police on keyword triggers and escalation routes rather than leaving consequential calls to internal boardroom debates. Sociology professor Laura Huey is cited as saying she was not surprised, pointing to incentives that can favor commercial protection over rapid escalation.
That gap—between what a company can see and what it must share—has obvious constitutional and liberty implications for Americans watching from across the border. Conservatives generally reject sweeping surveillance states, but they also expect basic competence and accountability when a firm detects credible signals of violence. The strongest takeaway from the reporting is not that AI should become a national snitch network, but that clearer, narrowly tailored, transparent standards are needed so that companies do not default to silence when the stakes are life and death.
What’s confirmed, what’s still unknown, and what comes next
Across the available sources, the core timeline aligns: the account was flagged and banned in 2025; the RCMP was not contacted before the February 2026 shooting; OpenAI reached out to police only after the tragedy. Discrepancies remain on precise timing, and the public still lacks a full picture of the exact prompts, how moderators interpreted them, and whether any other platforms shared information earlier. Ongoing investigations may clarify those details, but reporting notes that the RCMP has withheld specifics.
Canadian officials appear poised to use this case as fuel for mandatory reporting discussions in the AI sector. Any policy response will need to balance civil liberties with a real-world problem: platforms can see patterns that families, schools, and police never get to see until it’s too late. For conservatives, the lesson is straightforward—when unaccountable institutions set private “criteria” behind closed doors, the public pays the price. Transparent rules, due process protections, and clearly defined escalation channels are the minimum baseline.
Sources:
tumbler-ridge-rcmp-investigate-threats
Copyright 2026, PatriotNews.net