OpenAI launches ChatGPT for Clinicians, assisting human physicians with 99.6% accuracy (free access in the US)
動區 BlockTempo · 2026-04-23 02:52:51


Original title: OpenAI 推出醫療版 ChatGPT for Clinicians,99.6% 準確率協助人類醫師(美國免費開放)
OpenAI has made ChatGPT for Clinicians free for certified physicians in the U.S., with the company claiming that 99.6% of responses are safe and accurate, outperforming human physicians on multiple medical benchmarks.

(Previous coverage: ChatGPT gone wild! OpenAI was found helping chat clients write arbitrage programs, cryptocurrency papers, and even compose songs.) (Background: Is OpenAI implementing KYC? Users must upload ID documents and personal headshots to access the gpt-image-1 image generation model API.)

OpenAI announced that ChatGPT for Clinicians is now free for all certified physicians, nurse practitioners, physician assistants, and pharmacists in the U.S. The company simultaneously released official data: of 6,924 test conversations evaluated by physician consultants, 99.6% of responses were rated safe and accurate, with the model surpassing human physicians on several benchmarks.

OpenAI noted today (the 23rd) that millions of clinicians worldwide already use ChatGPT to support clinical work each week, with usage more than doubling over the past year. According to the latest 2026 survey by the American Medical Association (AMA), 72% of U.S. physicians report using AI tools in their clinical work, a sharp increase from 48% last year. This growth reflects long-standing structural pressures within the healthcare system: time-consuming administrative paperwork, the rapid pace of medical knowledge updates, and increasingly complex diagnostic decisions are all pushing physicians toward new tools.

Features provided by ChatGPT for Clinicians include:

- access to the latest models
- reusable clinical workflow skills
- trusted medical search
- in-depth medical journal research
- Continuing Medical Education (CME) credits
- HIPAA-compliant options (the data-protection mechanisms required by U.S. medical privacy regulations)
OpenAI states that these features are not merely for information retrieval; they are designed to embed AI into three core scenarios (nursing consultation, documentation, and medical research), transforming the physician's traditional "search for data → make a decision → write the record" workflow.

OpenAI claims that the GPT-5.4 model, specifically tuned for medical scenarios, outperforms the base GPT-5.4 model and competing models within the ChatGPT for Clinicians workspace, and even surpasses human physicians in some tests. The simultaneously launched HealthBench Professional open benchmark platform lets external researchers verify the model's performance in real-world medical scenarios, and GPT-5.4 also ranks at the top of the authoritative Stanford MedHELM and MedMarks medical AI leaderboards.

However, a 99.6% safety-and-accuracy rate means that of the 6,924 conversations, roughly 28 responses still contained issues. In medicine, a 0.4% error rate could mean incorrect medication dosages, misdirected diagnostic paths, or inappropriate treatment recommendations. OpenAI emphasizes that the model includes an "uncertainty expression mechanism" that explicitly tells physicians when the system lacks confidence and suggests seeking a human expert's opinion. Nevertheless, the mechanism's trigger thresholds, its false-positive and false-negative rates, and whether physicians might over-rely on AI suggestions under time pressure remain open questions.

Will ChatGPT for Clinicians replace doctors? The company states that the tool's focus is not whether AI can be smarter than a physician, but how it redefines the boundaries of clinical labor. When AI can organize medical histories, compare the latest literature, and generate preliminary diagnostic suggestions in seconds, the physician's core value shifts from "knowledge memorizer" to "ultimate decision-maker."
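The figure of roughly 28 problematic responses follows directly from the reported numbers; as a back-of-the-envelope sanity check (a minimal sketch, not code from OpenAI or the article):

```python
# Sanity check: how many of the 6,924 evaluated conversations fall
# outside the reported 99.6% "safe and accurate" rate?
total_conversations = 6_924   # conversations evaluated by physician consultants
safe_accurate_rate = 0.996    # 99.6% rated safe and accurate

flagged = total_conversations * (1 - safe_accurate_rate)
print(round(flagged))  # → 28
```

The exact product is 27.696, which rounds to the "approximately 28 responses" cited above.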
The CME credit feature hints at another ambition: turning the process of AI training physicians into a source of data for physicians to train AI. Every interaction in which a physician uses, corrects, or confirms an AI suggestion provides real-world labeled data for the next generation of models.

Concerns remain, however. As medical decision-making grows more dependent on algorithms, how is accountability defined? When an AI suggestion conflicts with a physician's judgment, which should yield? When system failures or model updates change recommendations, how are patients' rights protected? These questions cannot be answered in technical white papers; they will be settled only through real-world clinical practice.