OpenAI Finally Explains Why ChatGPT Wouldn't Stop Talking About Goblins
Decrypt · 2026-04-30 17:16:37

In brief

- OpenAI's "Nerdy" personality rewarded goblin metaphors, spreading the quirk across all GPT models through reinforcement learning.
- Goblin mentions in GPT-5.4's Nerdy mode surged 3,881% compared to GPT-5.2, prompting an internal investigation and an emergency system prompt patch.
- The fix, writing "never talk about goblins" into a developer prompt, shows why system prompt patches are faster but riskier than retraining.

If you asked ChatGPT for coding help lately and it responded by calling your bug a "mischievous little gremlin," you are not imagining things. The model developed a genuine obsession with fantasy creatures—goblins, gremlins, raccoons, trolls, ogres, and yes, pigeons—and OpenAI published a full post-mortem on how it happened. The short version: a reward signal designed to make ChatGPT more playful went rogue, and the goblins multiplied.

The goblin story only became public because Reddit users spotted the "never mention goblins" line in a leaked Codex system prompt on GitHub. The post went viral before OpenAI published its own explanation.

How the Nerdy personality spawned a goblin infestation

According to OpenAI, the trail starts with GPT-5.1, launched last November. That's when OpenAI introduced personality customization, letting users pick styles like Friendly, Professional, Efficient, and Nerdy. The Nerdy persona came with a system prompt telling the model to be nerdy and playful, to "undercut pretension through playful use of language," and to acknowledge that "the world is complex and strange."

That prompt, it turned out, was a goblin magnet. During reinforcement learning training, the reward signal for the Nerdy personality consistently scored outputs higher when they contained creature-word metaphors. Across 76.2% of the datasets audited, responses with "goblin" or "gremlin" received better marks than the same responses without them. The model learned: whimsy equals reward.
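The kind of audit OpenAI describes—checking how often the higher-scored of two responses is the one containing a creature word—can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual tooling; the tic-word list comes from the article, but the scoring data and function names are invented stand-ins.

```python
import re

# Tic words flagged by the audit, per the article.
TIC_WORDS = {"goblin", "gremlin", "raccoon", "troll", "ogre", "pigeon"}

def mentions_tic_word(text: str) -> bool:
    """True if the response contains any flagged creature word."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(t.rstrip("s") in TIC_WORDS for t in tokens)

def tic_bias_rate(scored_pairs) -> float:
    """Among response pairs where exactly one side mentions a tic word,
    return the fraction where the tic-word side got the higher reward.

    scored_pairs: iterable of ((text_a, score_a), (text_b, score_b)).
    """
    biased = total = 0
    for (text_a, score_a), (text_b, score_b) in scored_pairs:
        hi, lo = (text_a, text_b) if score_a >= score_b else (text_b, text_a)
        if mentions_tic_word(hi) != mentions_tic_word(lo):
            total += 1
            if mentions_tic_word(hi):
                biased += 1
    return biased / total if total else 0.0

# Toy example: one pair rewards the gremlin, one doesn't.
pairs = [
    (("That bug is a mischievous little gremlin.", 0.9),
     ("That bug is an off-by-one error.", 0.6)),
    (("The loop terminates early.", 0.8),
     ("A goblin ate your loop counter.", 0.7)),
]
print(tic_bias_rate(pairs))  # 0.5
```

A rate persistently above 0.5 on real preference data would be the smoking gun the article describes: the reward model systematically favoring whimsy.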
Goblin mentions exploded in GPT-5.4, with the Nerdy personality showing a 3,881% increase compared to GPT-5.2. The problem is that reinforcement learning doesn't keep learned behaviors neatly contained. Once a style tic gets rewarded in one context, it bleeds into others through a feedback loop: the model generates creature-laden outputs, those outputs get reused in fine-tuning data, and the behavior deepens across the entire model, even without the Nerdy prompt active.

Nerdy accounted for just 2.5% of all ChatGPT responses, yet it was responsible for 66.7% of all "goblin" mentions. By OpenAI's measurements, goblin and gremlin prevalence climbed steadily over the course of training whenever the Nerdy personality was active. Even without the Nerdy personality, creature mentions crept upward—evidence of cross-contamination through supervised fine-tuning data.

GPT-5.5 was already too far gone

By the time OpenAI found the root cause, GPT-5.5 was already deep in training, and it had absorbed a full family of creature words. A data audit flagged not just goblins and gremlins but raccoons, trolls, ogres, and pigeons as what the company called "tic words." ("Frogs," for the curious, were mostly legitimate.) The first measurable spike came earlier: goblin mentions rose 175% and gremlin mentions 52% after GPT-5.1's launch. Even OpenAI Chief Scientist Jakub Pachocki got a goblin when he asked for a unicorn in ASCII art.

OpenAI retired the Nerdy personality in March and scrubbed creature-affine reward signals from future training. But GPT-5.5 had already started its training run. The company's solution for Codex—its coding agent—was simply to add a line to the developer system prompt: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query." Someone at OpenAI committed that to production code and moved on with their day.
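Mechanically, a patch like the one shipped to Codex amounts to prepending one instruction to the system/developer message before every request. Here is a minimal sketch of that pattern—the suppression line is quoted from the article, but `build_messages` and its use are a hypothetical illustration, not OpenAI's code:

```python
# The suppression line quoted in OpenAI's Codex developer prompt, per the article.
GOBLIN_PATCH = (
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, "
    "pigeons, or other animals or creatures unless it is absolutely "
    "and unambiguously relevant to the user's query."
)

def build_messages(system_prompt: str, user_query: str, patched: bool = True):
    """Assemble a chat message list, optionally prepending the patch line.

    Note: the quirk stays in the model weights; the patch only instructs
    the model not to surface it.
    """
    system = f"{GOBLIN_PATCH}\n\n{system_prompt}" if patched else system_prompt
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]

msgs = build_messages("You are a coding assistant.", "Why does my loop hang?")
print(msgs[0]["content"].startswith("Never talk about goblins"))  # True
```

The asymmetry the article goes on to describe is visible here: flipping `patched` changes behavior instantly and costs nothing, whereas removing the tic from the weights means another training run.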
The system prompt patch problem

But why did OpenAI choose this path? Retraining a model the size of GPT-5.5 to remove a behavioral quirk is expensive and slow. A system prompt tweak takes minutes. Companies across the industry reach for the prompt patch first because it's the low-cost, fast-deploy option when user complaints spike.

But prompt patches carry their own risks. They don't fix the underlying behavior; they only suppress it. And suppression can have side effects.

OpenAI's goblin situation is a relatively benign example. The scariest version of this dynamic played out with Grok last year. After xAI pushed a system prompt update that told Grok to treat media as biased and "not shy away from politically incorrect claims," the chatbot spent 16 hours calling itself "MechaHitler" and posting antisemitic content on X. The fix was another prompt change, which promptly overcorrected so hard that Grok started flagging antisemitism in puppy pictures, clouds, and its own logo. Desperate prompt engineering cascaded into more desperate prompt engineering.

The goblin patch hasn't caused anything that dramatic. But OpenAI admits GPT-5.5 still launched with the underlying quirk intact, just suppressed in Codex. The company even published a command to remove the goblin-suppressing instructions if users want the creatures back.

Why companies hide their system prompts

Hiding or obfuscating a full system prompt is typical in the AI industry. Companies treat system prompts as trade secrets for a few reasons: intellectual property protection, competitive advantage, and security. If a jailbreaker knows the exact rules a model is following, bypassing them becomes far easier. There's also a fourth reason companies don't advertise: image management. A line reading "never mention goblins" doesn't inspire confidence in the underlying technology. Publishing it requires either a sense of humor or a strong research culture, or both.
OpenAI says the investigation produced new internal tooling to audit model behavior and trace behavioral quirks back to their training roots. GPT-5.5's training data has since been cleaned of creature-affine examples. The next model generation should arrive goblin-free—unless, of course, something else gets rewarded for reasons no one understands yet.