Sam Altman apologized after OpenAI flagged and suspended the ChatGPT account of Jesse Van Rootselaar in June 2025 for misuse “in furtherance of violent activities” but did not alert law enforcement before the teen killed eight people in a February 2026 shooting spree in Tumbler Ridge, British Columbia.
The apology came in a letter dated Thursday and shared Friday on British Columbia Premier David Eby’s social media and the local news site Tumbler RidgeLines, following sustained pressure from officials who said OpenAI had a chance to intervene. Altman wrote that while the company determined the account activity did not meet the threshold for a credible or imminent threat, he now believes it should have alerted authorities.
Van Rootselaar, 18, who was born male but identified as female, killed her mother Jennifer Jacobs, 39, and stepbrother Emmett Jacobs, 11, at their home before opening fire at Tumbler Ridge Secondary School, where she killed five students and an educator before turning the gun on herself. Twenty-five others were injured in the attack, which ranks among the deadliest mass shootings in Canadian history.
OpenAI said it detected the account through abuse monitoring systems last June and suspended it for violating usage policies, but concluded at the time that referral to the Royal Canadian Mounted Police was unwarranted. Eby had previously stated it “looks like” OpenAI had the opportunity to prevent the tragedy, a sentiment he reiterated after the apology, calling it “necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge.”
In his letter, Altman said he had spoken with Eby and Tumbler Ridge Mayor Darryl Krakowka, who conveyed “the anger, sadness, and concern” felt in the remote northern community. He acknowledged that while words cannot undo the harm, a public apology was necessary to recognize the irreversible loss suffered by victims’ families.
“I want to express my deepest condolences to the entire community,” Altman wrote. “No one should ever have to endure a tragedy like this. I can’t imagine anything worse in this world than losing a child.” He reaffirmed his commitment to work with all levels of government to prevent similar failures, adding that OpenAI’s focus will continue to be on finding ways to ensure something like this never happens again.
The incident has reignited debate over the responsibilities of AI companies when users exhibit warning signs of violence, particularly as generative tools become more accessible and harder to monitor in real time. Critics argue that reliance on internal thresholds for referral creates a dangerous gap between detection and intervention, especially in cases involving isolated individuals who may not exhibit overt, imminent threats but whose cumulative behavior raises concern.
Supporters of OpenAI’s initial caution warn against overreach, noting that flagging accounts for concerning content without clear intent could lead to privacy violations and misuse of surveillance-like powers. Yet the Tumbler Ridge case highlights a central tension: how to balance user privacy and free expression with a duty to prevent foreseeable harm when algorithms detect patterns associated with mass violence.
Going forward, Altman said OpenAI will continue collaborating with government agencies to refine protocols for handling high-risk accounts, though he offered no specifics on what changes might be implemented. The company has not disclosed whether it has revised its internal threshold for law enforcement referral since the shooting.
Why didn’t OpenAI report the account to police earlier?
OpenAI said that although the account was flagged for misuse “in furtherance of violent activities,” its activity did not meet the company’s threshold for a credible or imminent threat of harm to others at the time of the suspension in June 2025.
What has been the community’s response to the apology?
While officials acknowledged the apology as necessary, Premier David Eby called it “grossly insufficient” given the scale of the loss, reflecting broader sentiment that accountability must extend beyond words to concrete preventive action.
