動區 BlockTempo · 2026-04-28 00:46:24

Why does the extreme cost-effectiveness of Chinese AI make Silicon Valley collectively anxious?

Original title: 為什麼中國 AI 的極致性價比讓矽谷集體感到焦慮?
Despite being squeezed by limited computing power and chip export controls, Chinese AI companies have grown their global market share from 1% to 15%, and DeepSeek's inference costs stand at just 3% of GPT-5.2's. This low-cost, open-source strategy is eroding the pricing moats of the major US AI platforms.

(Flash: NVIDIA hits a new all-time high, breaking through $212.6, with a market cap of $5.17 trillion, reclaiming the top spot globally.)

(Background: Microsoft loosens ties with OpenAI! Both parties modified their partnership agreement: breaking the Microsoft Azure cloud monopoly and ending revenue sharing.)

The contradiction: while the US uses export controls to weaken China's training compute, China is widening the cost gap by compressing its model architectures. On one side is GPT-5.2 at $14 per million output tokens; on the other is DeepSeek V3.2-Exp at $0.42, a 33-fold difference.

Bloomberg points out that the core method Chinese firms use to break through the chip blockade is a technical design called "Mixture of Experts" (MoE). In simple terms, the model does not need to mobilize its entire "brain" to answer every question; instead, the neural network is divided into multiple specialized sub-networks, and only the most relevant ones are activated each time (a minimal illustrative sketch follows at the end of this passage). Taking DeepSeek's latest V4-Pro as an example, it uses no more than 3% of its total parameters to answer any given question. In contrast, the flagship models from OpenAI and Anthropic must activate a large proportion of their neurons to maintain greater reasoning depth and context length, resulting in several times the compute consumption.

Export controls make the chips China obtains roughly 20% slower and 30% more power-hungry. China's response follows a simple logic: if the fastest tools are out of reach, squeeze the maximum efficiency out of every bit of compute. The report notes that on the LiveBench LLM leaderboard, DeepSeek and Moonshot have already climbed into the global top 12, standing alongside OpenAI, Anthropic, and Google.

At the same time, Chinese firms are betting heavily on an "open weights" strategy. Open weights means publishing the internal parameters produced by model training, allowing external developers to download, modify, and redeploy them directly (the second sketch below shows the idea in code). US models like ChatGPT, Claude, and Gemini do not disclose their parameters; third parties must pay for API access to use them. The effectiveness of this strategy is evident: Alibaba's Qwen series reached over 1 billion cumulative downloads in January 2026, surpassing Meta's Llama to become the world's most downloaded open-source AI model family and spawning over 200,000 customized versions. After DeepSeek released the weights for R1, the developer community quickly launched vertical versions for finance, healthcare, and Chinese language processing. The report notes that this model effectively "outsources" national AI R&D to the global developer community, which voluntarily completes the "last mile" of application tuning.

The Chinese government views open models as a tool of soft power, using the "Global AI Governance Action Plan" released in July 2025 to push Chinese AI technology into low- and middle-income countries. Bloomberg Intelligence analyst Robert Lea stated: "US allies won't use DeepSeek in official capacities, but in this cost-conscious era, if DeepSeek can provide 90% of ChatGPT's functionality, consumers might make different choices than their governments." That remark gets to the heart of the problem.
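To make the MoE idea concrete, here is a minimal sketch of a mixture-of-experts layer in PyTorch. It is not DeepSeek's actual architecture: the layer sizes, the expert count of 64, and the top-2 routing are illustrative assumptions chosen only to show how a router activates a small fraction of the parameters for each token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy mixture-of-experts feed-forward layer with top-k routing (illustrative only)."""
    def __init__(self, d_model=512, d_hidden=2048, n_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)      # gating network scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):                                 # x: (num_tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique().tolist():      # run each selected expert once
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(1) * self.experts[e](x[mask])
        return out                                        # only top_k of n_experts experts ran per token

layer = MoELayer()
tokens = torch.randn(8, 512)                              # 8 tokens with 512-dim embeddings
print(layer(tokens).shape)                                # torch.Size([8, 512])
```

With 64 experts and top-2 routing, roughly 3% of the expert parameters run for any single token, which is the same order as the "no more than 3% of total parameters" figure cited above for V4-Pro.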
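The "open weights" workflow can likewise be sketched with the Hugging Face transformers library. The checkpoint named below (Qwen/Qwen2.5-7B-Instruct) is one of Alibaba's publicly released Qwen models and is used here only as an example of an open-weight release; any other open-weight checkpoint works the same way.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"          # open weights: downloadable and modifiable
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",                          # requires the accelerate package
)

prompt = "Explain mixture-of-experts in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the weights sit locally after the download, developers can fine-tune or prune them into the kinds of vertical versions described above, which is exactly what the closed, API-only models do not allow.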
The business model of the major US AI firms is built on the premise that the strongest model deserves the highest price. But when Chinese firms bundle "sufficient performance" with extremely low inference costs, that premise begins to falter. Anthropic has taken legal action, accusing DeepSeek, Moonshot, and MiniMax of launching "industrial-scale distillation attacks" on Claude using 24,000 fake accounts, in effect using others' training results to compress their own development costs (a generic sketch of the distillation technique follows at the end of this article). Anthropic, OpenAI, and Google are now joining forces to try to stop such behavior. (Further reading: What is AI model distillation? How DeepSeek spent $6 million to learn $100 million worth of skills.)

However, the shape of the problem is already set. Export controls squeezed China on hardware but forced out more efficient software architectures, and market share climbed from 1% to 15% in just one year. A competition defined by chips is being replaced by a battle redefined by algorithms.
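For readers unfamiliar with distillation, the following is a generic sketch of knowledge distillation: a student model is trained to match a teacher's output distribution. It does not describe any company's actual pipeline; the temperature value and the toy tensors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Toy usage: 4 samples over a 10-class vocabulary
teacher_logits = torch.randn(4, 10)                       # stands in for a large model's outputs
student_logits = torch.randn(4, 10, requires_grad=True)   # stands in for a small model's outputs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```

The appeal is economic: the expensive part (the teacher's training) is reused, so a student can approach the teacher's behavior at a fraction of the cost, which is what the "$6 million to learn $100 million worth of skills" framing refers to.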