GitHub Copilot starts charging, revealing the "biggest lie" in the AI industry
動區 BlockTempo · 2026-05-01 04:59:45


Original title (Chinese): GitHub Copilot改收費,揭開了AI產業「最大的謊言」
GitHub Copilot finally couldn't hold on, switching from a monthly subscription to token-based billing. This isn't a product upgrade; it's the collective bankruptcy of the AI industry's subsidy scam.

This article is sourced from AI commentator Ed Zitron's piece "AI's Economics Don't Make Sense," compiled and reorganized by Deep Tide.

I just wrote an article about how OpenAI is killing Oracle, and today's piece uses some of that material. This is one of the best articles I've ever written, and I am very proud of it. Subscribing to the paid version is a great value and allows me to write these large, deeply researched free articles every week.

Yesterday morning, GitHub Copilot users received confirmation of the news I reported a week ago: all GitHub Copilot plans will switch to usage-based billing on June 1, 2026. Instead of giving users a set number of "requests," Microsoft will charge based on the actual model costs the user incurs. Microsoft calls this "...an important step toward a sustainable, reliable Copilot business and experience for all users." How much users can do now depends on how many tokens their subscription fee can buy (e.g., a $19/month plan buys $19 worth of tokens).

Translation: we can no longer afford to subsidize GitHub Copilot users, or Amy Hood (Microsoft's CFO) will start hitting people with a baseball bat.

The announcement itself is an interesting preview of how these price changes will be packaged:

Copilot is no longer the product it was a year ago. It has evolved from an in-editor assistant into an agentic platform capable of executing long-running, multi-step coding sessions, using the latest models, and iterating across the entire codebase. Agentic usage is becoming the default mode, which brings significantly higher compute and inference demands. Today, under request-based billing, a quick chat question and a multi-hour autonomous coding session can cost the user the same amount.
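The shift from flat requests to a token budget is easy to sketch. The $19/month plan figure is from the article; the per-million-token rates and input/output split below are purely illustrative assumptions, not Microsoft's published prices:

```python
# Sketch of usage-based billing math under a "your fee buys tokens" model.
# NOTE: the per-million-token prices and the input/output mix are assumed
# for illustration; they are not Microsoft's actual Copilot rates.

def token_budget(monthly_fee_usd: float,
                 input_price_per_mtok: float,
                 output_price_per_mtok: float,
                 output_ratio: float = 0.5) -> float:
    """Roughly how many tokens a subscription fee buys per month,
    assuming a fixed split between input and output tokens."""
    # Blended cost per million tokens, weighted by the input/output mix.
    blended = ((1 - output_ratio) * input_price_per_mtok
               + output_ratio * output_price_per_mtok)
    return monthly_fee_usd / blended * 1_000_000

# A $19/month plan at assumed rates of $3 (input) / $15 (output) per
# million tokens, with a 50/50 mix:
tokens = token_budget(19.0, 3.0, 15.0)
print(f"{tokens:,.0f} tokens/month")  # → 2,111,111 tokens/month
```

The point of the sketch: a single multi-hour agentic session can easily consume millions of tokens, so a fixed monthly fee translates into a hard ceiling that heavy users will hit quickly.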
GitHub has been absorbing the escalating inference costs behind this usage, but the current request-based model is no longer sustainable at the high end. Usage-based billing solves this: it better aligns pricing with actual usage, helps us maintain long-term service reliability, and reduces the need to throttle heavy users.

See, it's not that Microsoft was subsidizing compute for nearly two million people; it's that AI has become so powerful and complex that it's basically a different product! While Copilot may "not be the product it was a year ago," the underlying economic mismatch remains almost unchanged: for three years, Microsoft has allowed users to burn through more tokens than their subscription fees cover. According to a Wall Street Journal report from October 2023:

Individual users pay $10 a month to use this AI assistant. According to a person familiar with the matter, in the first few months of this year the company lost more than $20 per user per month on average, and some users cost the company more than $80 per month.

Naturally, GitHub Copilot users are now revolting, saying the product is "dead" and "completely ruined." I predicted this day two years ago in "The Subprime AI Crisis." That day has finally arrived, because every AI service you use is subsidizing compute, and every service is losing money because of it:

When you pay for an AI startup's service (including, of course, OpenAI and Anthropic), you pay a monthly fee: $20, $100, or $200 a month for Anthropic's Claude, $20 or $200 a month for Perplexity, or $8, $20, or $200 a month for OpenAI. In some enterprise scenarios, you get "credits" for certain units of work; for example, Lovable's $25/month subscription gives users "100 monthly credits" plus $25 in cloud hosting (until the end of Q1 2026), with credits that roll over.
When you use these services, the companies involved either pay AI labs a per-million-token rate or (in the case of Anthropic and OpenAI) pay cloud providers to rent the GPUs that run the models. A token is roughly three-quarters of a word. As a user, you never see the tokens being consumed; it's just input and output. AI labs use "tokens," "messages," or five-hour rate limits expressed as percentages to mask the cost of the service, so you never really know how much any of it costs.

On the backend, AI startups are burning cash at a furious rate. Until recently, Anthropic even let you burn up to $8 in compute for every $1 of subscription fee. OpenAI allowed this too, though it's harder to measure exactly how much.
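The subsidy ratios quoted above can be made concrete with back-of-envelope arithmetic. The $10 fee and $20/$80 loss figures are the WSJ numbers cited earlier; the helper names are mine:

```python
# Back-of-envelope subsidy math from the figures quoted in the article:
# Copilot users paid $10/month while Microsoft lost $20/month on the
# average user and more than $80/month on heavy users.

def compute_cost(fee: float, loss: float) -> float:
    """Actual compute cost per user = what they pay + what the vendor loses."""
    return fee + loss

def subsidy_multiple(fee: float, loss: float) -> float:
    """Dollars of compute consumed per dollar of subscription revenue."""
    return compute_cost(fee, loss) / fee

avg = subsidy_multiple(10, 20)    # average user: $30 of compute on a $10 fee -> 3.0x
heavy = subsidy_multiple(10, 80)  # heavy user: $90 of compute on a $10 fee -> 9.0x
```

By the same measure, Anthropic's "$8 of compute per $1 of fees" is a 9x multiple once you include the dollar the user actually paid, which is why usage-based billing is the only direction these prices can move.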