ChatGPT banned the shooter's account 8 months ago but did not report it to the police; he later killed 8 people, and Altman apologized.
動區 BlockTempo2026-04-29 07:11:50


Original title: ChatGPT 8 個月前就封禁槍手帳號但沒報警,後來他殺了 8 人、Altman 致歉
In February of this year, a shooting in Tumbler Ridge, British Columbia, Canada, left 8 people dead. It was later revealed that OpenAI had banned the shooter, Jesse Van Rootselaar, from ChatGPT 8 months earlier over "gun violence-related scenarios," but had not notified the police, on the grounds that his activity "did not meet the threshold of an imminent threat." OpenAI CEO Sam Altman sent a letter of apology last week. (Previous coverage: Musk's lawsuit against OpenAI begins: demanding the reversal of the for-profit transition, the removal of Altman, and $134 billion in damages.) (Background: OpenAI releases five AGI constitutions: AI cannot be monopolized by a few, and sacrifice can lead to greater resilience.)

On February 10, 2026, a shooting broke out in the remote town of Tumbler Ridge, British Columbia. 18-year-old Jesse Van Rootselaar first killed his mother and 11-year-old brother at home, then headed to the local middle school; 8 people died in total before he died by suicide. Subsequent investigations revealed that 8 months earlier, OpenAI had banned Van Rootselaar's ChatGPT account because he "described gun violence-related scenarios" in his conversations, yet had failed to notify any law enforcement agency.

In June 2025, ChatGPT's automated abuse-detection system and human reviewers flagged Van Rootselaar's account, leading to its suspension. According to OpenAI's later account, reviewers held an internal discussion at the time about whether to notify the police. The result: do not notify. OpenAI stated that the review determined the account's behavior "did not meet the threshold of posing an imminent and credible threat of serious physical harm to others," and therefore the law-enforcement reporting protocol was not triggered. After the ban, the matter stayed internal: the account was closed, but the signal was never passed on.
It was not until the shooting occurred in February 2026 that OpenAI proactively contacted Canadian authorities.

"Imminent and credible threat" is a reporting threshold OpenAI set for itself, with the company's internal safety, legal, and policy teams deciding which situations warrant police notification. The standard is not subject to external regulatory review, and there is no public explanation of how it is calibrated. Van Rootselaar's account met the standard for "needing to be banned" but not the standard for "needing to be reported"; in OpenAI's internal logic, the two thresholds are separate.

On April 23, Sam Altman wrote a letter of apology, first published in the local Tumbler Ridge newspaper, TumblerRidgeLines. In it, Altman wrote: "The pain your community has endured is unimaginable. I have been thinking about you for the past few months. I am deeply sorry that we did not notify law enforcement when the account was banned in June." Before the letter was made public, Altman had personally spoken with Tumbler Ridge Mayor Darryl Krakowka and British Columbia Premier David Eby. Eby shared the letter on social media on April 24, adding: "An apology is necessary, but for the devastation suffered by those families in Tumbler Ridge, it is far from enough."

OpenAI announced that it would relax the criteria for reporting to law enforcement, allowing more account scenarios to trigger the reporting process. Canada's Minister of Artificial Intelligence, Evan Solomon, said Altman agreed to establish a direct communication channel with the Royal Canadian Mounted Police (RCMP) and to add a mechanism directing users who show crisis signals in their conversations to local support services. The Canadian government is currently evaluating whether further legislation is needed to regulate the reporting obligations of AI platforms.
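The two-threshold logic described above can be sketched as a simple decision flow. Everything here is illustrative: the type names, fields, and actions are assumptions made for the sketch, not OpenAI's actual moderation system; the only thing taken from the article is that banning and reporting are separate judgments, with reporting gated on the stricter "imminent and credible threat" finding.

```python
from dataclasses import dataclass

@dataclass
class ReviewResult:
    # A reviewer found a policy violation (e.g. "described gun
    # violence-related scenarios") -- enough to ban the account.
    policy_violation: bool
    # A separate, stricter internal judgment by safety/legal/policy
    # teams -- only this triggers law-enforcement notification.
    imminent_credible_threat: bool

def moderate(review: ReviewResult) -> list[str]:
    """Return the actions taken for a flagged account.

    Hypothetical flow: the ban and the report are decided
    independently, against two different thresholds.
    """
    actions = []
    if review.policy_violation:
        actions.append("ban_account")
    if review.imminent_credible_threat:
        actions.append("notify_law_enforcement")
    return actions

# The case in the article: the ban threshold was met,
# the reporting threshold was not.
print(moderate(ReviewResult(policy_violation=True,
                            imminent_credible_threat=False)))
```

The gap the article criticizes is exactly the region where the first condition is true and the second is false: the account disappears, but no signal leaves the company.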
The contradiction remains, however: will many people who are merely venting passing emotional distress (most of us have said "I want to die" offhandedly to blow off steam) also be reported, leaving people afraid to confide in AI (though perhaps that is not a bad thing)? For now, the issue is not one of OpenAI's capability: ChatGPT detected it, and human reviewers saw it. The problem is the threshold itself: who decides it, who interprets it, and how the balance is struck.