OpenAI pursues the Michelin-star route, DeepSeek adopts a pre-made meal strategy; the battle for AI pricing power between China and the US intensifies.
動區 BlockTempo2026-04-26 02:29:24


Original title: OpenAI 走米其林路線、DeepSeek 拚預製菜戰略,中美 AI 定價權之爭升溫
As OpenAI strives to build high-priced, top-tier models, DeepSeek is attempting to turn AI into ubiquitous infrastructure, like water and electricity, through extremely low costs and edge-side technology.

(Context: OpenAI claims to have obtained evidence that DeepSeek infringed its copyright by using GPT distillation techniques to train its Chinese AI models.)

(Background: OpenAI accuses DeepSeek of infringement; creators retort that "the biggest thief cries 'stop thief'"; the US Navy has ordered a ban on DeepSeek.)

On April 24, 2026, the DeepSeek V4 preview was officially released. This Chinese-made large model ships in two versions, a 1.6-trillion-parameter Pro and a 284-billion-parameter Flash, and its core selling point is a million-token context window, now a free, standard feature of all official services. At almost the same time, across the ocean, OpenAI rolled out GPT-5.5: more massive computing power and richer Agent functions, but also a significantly higher price.

A "million-token context," in plain terms, means the AI is no longer a goldfish that remembers only your last few sentences; it has become a super brain that can swallow all three volumes of *The Three-Body Problem* in one go, understand a two-hour movie in seconds, and spot typos along the way. The most direct example: dump all of your company's contracts, emails, and financial reports from the past three years into V4 and have it find the breach-of-contract clause hidden in an attachment on page 47. That used to require a team of lawyers; now it is free.

GPT-5.5 has put a public price on this super brain: the standard version costs $5 per million input tokens and $30 per million output tokens, while GPT-5.5 Pro, aimed at high-end tasks, sells at a premium of $30 per million input tokens and $180 per million output tokens.
DeepSeek's official pricing, by contrast: V4-Flash cache-hit input costs only 0.2 RMB per million tokens and output 2 RMB; even for V4-Pro, which rivals top closed-source models, cache-hit input is 1 RMB, cache-miss input is 12 RMB, and output is only 24 RMB per million tokens.

People often frame the US-China AI competition as a race of model capabilities. In fact, it has long since diverged into two business-model paths. OpenAI, once the dragon-slaying hero that vowed to "benefit all of humanity," is now selling expensive, luxury finished housing, while DeepSeek is using near-free computing power to turn AI into water, electricity, and coal. When OpenAI has become a shrewd general contractor, why is DeepSeek sparing no expense to turn top-tier AI into free tap water? What undercurrents hide behind this shift in pricing power?

The decisive battle for large models is taking place in a server room in Inner Mongolia at -20°C. Shortly before the V4 release, a surprising position appeared in DeepSeek's job postings: Senior Data Center Delivery Manager and Senior Operations Engineer, with a monthly salary of up to 30,000 RMB, 14 months of pay per year, stationed in Ulanqab, Inner Mongolia. This from a light-asset company that once prided itself on being "minimalist, pure, and focused only on algorithms." For the past two years, its proudest label was "moving a thousand pounds with four ounces": training DeepSeek-R1, the model that sent the US AI sector plummeting, for less than $6 million.

But V4's massive computing requirements, coupled with the ever-tighter US compute blockade, have completely shattered this light-asset idyll. In 2025, the US Department of Commerce further tightened export controls on AI chips to China: NVIDIA's H100 and H800 were already cut off, and even the downgraded H20 was added to the control list.
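To make the price gap quoted above concrete, here is a rough back-of-envelope comparison for one large job (1M input tokens, 100K output tokens) at the list prices cited in this article. The RMB-to-USD rate of 7.0 is an assumption for illustration only; real bills also depend on cache-hit rates and tiers not modeled here.

```python
# Illustrative cost comparison using the per-million-token list prices quoted
# in the article. Assumption: 7.0 RMB per USD (for comparison only).
RMB_PER_USD = 7.0

def job_cost_usd(price_in_per_m, price_out_per_m, in_tokens, out_tokens):
    """Cost in USD of a job, given USD prices per million input/output tokens."""
    return price_in_per_m * in_tokens / 1e6 + price_out_per_m * out_tokens / 1e6

in_tok, out_tok = 1_000_000, 100_000  # one million-token job with a long answer

# GPT-5.5 (USD per million tokens, as quoted)
gpt_std = job_cost_usd(5, 30, in_tok, out_tok)    # standard tier
gpt_pro = job_cost_usd(30, 180, in_tok, out_tok)  # Pro tier

# DeepSeek V4 (RMB per million tokens, converted to USD)
v4_flash = job_cost_usd(0.2 / RMB_PER_USD, 2 / RMB_PER_USD, in_tok, out_tok)  # cache-hit input
v4_pro = job_cost_usd(12 / RMB_PER_USD, 24 / RMB_PER_USD, in_tok, out_tok)    # cache-miss input

print(f"GPT-5.5 standard: ${gpt_std:.2f}")  # $8.00
print(f"GPT-5.5 Pro:      ${gpt_pro:.2f}")  # $48.00
print(f"V4-Flash:         ${v4_flash:.3f}") # $0.057
print(f"V4-Pro:           ${v4_pro:.2f}")   # $2.06
```

Even in the worst case for DeepSeek modeled here (Pro tier, cache-miss input), the job costs roughly a quarter of GPT-5.5's standard tier and about 4% of GPT-5.5 Pro.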
This means that DeepSeek's future expansion of computing power must shift entirely to the Huawei Ascend ecosystem. In the V4 release notes, the company explicitly stated that the new model is "powered by Huawei Ascend" and revealed that after the batch launch of the Ascend 950 super-nodes in the second half of the year, the price of Pro will be significantly reduced. This shift cannot be completed by changing a few lines of adaptation code; it requires building a complete domestic computing infrastructure from scratch at the physical level. The trillion-parameter scale of V4 (with pre-training data reaching 33 trillion tokens), combined with the