
Sam Altman has sent a personal apology to the people of Tumbler Ridge, British Columbia, telling the town he is “deeply sorry” that OpenAI did not alert police before a mass shooting that killed eight people and wounded dozens. The OpenAI CEO acknowledged the community’s grief and said a direct apology was necessary even as families are still mourning, as scrutiny intensifies over how AI companies spot and escalate potentially dangerous user behavior.
In a letter on Thursday, Altman wrote, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” and pledged that OpenAI would work with governments and mental health experts to prevent anything similar from happening again, according to The Associated Press. British Columbia’s premier later shared the letter publicly, and it was posted on a local site, a step Altman described as an effort to respect how the community wanted to grieve.
Authorities say the shooter, 18-year-old Jesse Van Rootselaar, killed family members at a home before attacking Tumbler Ridge Secondary School on February 10, leaving eight people dead and dozens injured. Van Rootselaar died of an apparently self-inflicted gunshot wound, police said, as reported by The Associated Press. After the attack, it emerged that OpenAI’s systems had flagged, and in June banned, a ChatGPT account linked to the shooter, a revelation that has fueled Canadian officials’ demands for greater transparency.
Published reports say OpenAI’s abuse-detection tools flagged the user’s conversations months before the shooting and that some employees debated whether to bring the case to law enforcement. The company ultimately decided the activity did not meet its threshold for a credible or imminent threat, according to The Guardian. Those internal disagreements, and the choice not to notify police, now sit at the center of lawsuits and government investigations into how AI platforms should handle violent or threatening content.
Legal exposure and political fallout
In March, the mother of a critically injured 12-year-old victim filed a civil claim in the British Columbia Supreme Court alleging that OpenAI had “specific knowledge” that the shooter was using ChatGPT to plan violence, according to Global News. The company is also facing a separate wrongful-death lawsuit from the parents of a California teen who say ChatGPT helped their son research suicide methods, a case detailed by TechCrunch that has already driven changes to the product’s safety features.
OpenAI’s response and next steps
OpenAI says it has been updating ChatGPT’s safety systems and partnering with mental health professionals to improve how the chatbot responds when users show signs of psychosis, mania, self-harm, or intense emotional attachment. The company has framed those changes as part of a broader push to reduce the risk of future harm, according to Business Insider. Altman’s letter, together with meetings Canadian officials held with OpenAI executives earlier this year, signals that regulators intend to press the company for clearer, tougher escalation rules.
Canadian officials have already summoned OpenAI representatives to Ottawa to explain how those rules currently work, and provincial leaders say the episode could spur new requirements for how AI companies report potential threats, according to The Associated Press. For Silicon Valley, and especially for San Francisco, where OpenAI is based, the fallout is a stark reminder that a safety failure abroad can quickly become a local political and legal problem.