Introduction to Claude Code's new feature /ultrareview: Cloud-based multi-agent deep code review, now free for a limited time for Pro and Max users
動區 BlockTempo · 2026-04-23 08:06:00


Original title: Claude Code 新功能 /ultrareview 介紹》雲端多 agent 深度審查代碼,Pro、Max 用戶限時免費
Understand the Claude Code research preview /ultrareview in 3 minutes: how it differs from /review, when to use it, how to launch it, billing, how Pro/Max users claim their 3 free runs, and pricing after May 5.

(Context: Claude Opus 4.7 in-depth review: coding capabilities upgraded, 1M context at no extra cost, what are the real-world downsides?)
(Background: Anthropic triggers a Claude Code unsubscription wave, handing OpenAI Codex a massive marketing show)

/ultrareview is positioned as a "research preview": a multi-agent review feature that runs in a cloud sandbox (a remote, isolated execution environment), designed for deep checks before merging, not for daily use. Covered below: the differences between /ultrareview and /review, when to use each, how to launch it, billing, five pitfalls for enterprise accounts, and three common FAQs.

/review provides local, real-time feedback: it runs on your machine and returns suggestions within seconds, which makes it suitable for checking direction while coding. /ultrareview is completely different. It launches a set of independent agents in Anthropic's remote sandbox, which analyze in parallel the diff between your current branch and the default branch. The entire process runs in the background, consuming no local CPU or memory, and delivers results to your session via a notification after 5 to 10 minutes.

The core difference lies in the verification mechanism. /review suggestions often include fuzzy feedback such as code style or naming conventions, which can make it hard to distinguish what actually needs fixing. Every finding in /ultrareview is reproduced and verified by an independent agent; the official stance is that it "focuses on real bugs, not style suggestions." Security, correctness, architecture, style, and test coverage are each handled by a different agent, and the results are consolidated at the end.

The official documentation positions the two tools for different stages: /review is for quick feedback during the coding process.
/ultrareview is for deep reviews of critical changes before merging. Note that the official documentation specifically highlights two scenarios: authentication logic (auth) and data migrations. These are areas where errors have a wide impact and high debugging costs, making a 5 to 10-minute deep check worthwhile. If you are just fixing a frontend style or adding a console.log, /review is sufficient.

There are two ways to launch it. The first is to enter the command directly in the Claude Code CLI:

/ultrareview

The system automatically grabs the diff between the current branch and the default branch, including uncommitted changes. The second is to specify a GitHub PR (Pull Request) number:

/ultrareview --pr 123

Once it is running, you do not need to watch the terminal; the results are pushed back to your session as a notification.

The version requirement is Claude Code v2.1.86 or higher. You must also be logged into your Claude.ai account, as /ultrareview is tied to Claude.ai authentication.

Important: the billing model is "extra usage," which does not consume the monthly quota included in your subscription plan. Pro and Max users each get 3 free runs until May 5, 2026. This is a one-time offer; once used, they are gone. After May 5, each run will cost approximately $5 to $20 USD. The actual amount depends on the scale of the diff: the more changes and the longer it runs, the higher the cost. Team and Enterprise users have no free quota and are billed from the first run.

Note: the following five scenarios are not supported, and you will see an error message if you run the command:
- Claude Code deployed on Amazon Bedrock
- Claude Code deployed on Google Cloud Vertex AI
- Claude Code deployed on Microsoft Foundry
- Organizational accounts with Zero Data Retention (ZDR) enabled
- Team and Enterprise accounts (these are supported, but there is no free quota, so be aware of immediate charges)

From my observation, the most common pitfall for enterprise accounts is the ZDR policy.
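Since the command reviews the diff between the current branch and the default branch, committed and uncommitted changes alike, you can preview locally what the cloud agents would be sent. A minimal sketch, assuming git is installed and "main" is the default branch name; the throwaway repository exists only to make the example self-contained:

```shell
# Preview the change set /ultrareview would analyze: current branch vs.
# the default branch, including uncommitted work. "main" as the default
# branch name is an assumption; substitute your repository's default.
set -eu

# Throwaway repository so the sketch runs anywhere (illustration only).
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
printf 'hello\n' > app.txt
git add app.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "add app.txt"

# Feature branch with an uncommitted edit, as in a real review scenario.
git switch -q -c feature
printf 'hello\nworld\n' > app.txt

# Roughly the change set the cloud review sees: committed plus uncommitted.
git diff --stat main
git diff main
```

If the stat line is tiny (a style tweak, a stray console.log), the article's advice is that /review alone is the better fit.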
Since enabling ZDR is a long-term setting at many companies, engineers may not even know their account carries this restriction until they try to run the command. You can verify it by checking your organization's data policy in your Claude.ai account settings, or by asking your company's IT administrator.

Detailed official documentation: docs.anthropic.com/en/docs/claude-code/code-review

Q1. Can /ultrareview and /review be used at the same time?
A: Yes, but there is no need to run both at once. The official positioning is to use them at different stages: run /review for quick checks during coding, and /ultrareview for a deep check before merging into the main branch. They are not mutually exclusive, but /ultrareview is meant for the deeper pre-merge pass.
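Given the per-run pricing after May 5, a cheap local pre-flight can help decide whether a change is worth a paid run. A hedged sketch: the $5 to $20 range and the v2.1.86 minimum come from the article above, but the linear mapping from diff size to cost and the 2,000-line saturation point are purely illustrative assumptions, not official pricing, and the installed version here is a hard-coded placeholder rather than real `claude --version` output:

```shell
# Pre-flight sketch before launching /ultrareview (illustrative only).
set -eu

# 1) Version gate: the feature requires Claude Code v2.1.86 or higher.
required="2.1.86"
installed="2.2.0"   # placeholder; in practice parse the CLI's version output
newest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | tail -n 1)
if [ "$newest" = "$installed" ]; then
  echo "version ok: $installed >= $required"
else
  echo "upgrade needed: $installed < $required"
fi

# 2) Rough cost guess from diff size. The $5-$20 bounds are from the
#    announcement; the linear ramp and 2000-line cap are assumptions.
lines_changed=400    # in practice derive from: git diff --shortstat main
awk -v n="$lines_changed" 'BEGIN {
  sat = 2000
  if (n < 0) n = 0
  if (n > sat) n = sat
  printf "rough cost guess: $%.2f (official range: $5 to $20)\n", 5 + (n / sat) * 15
}'
```

The design choice mirrors the article's advice: a small diff lands near the $5 floor, where /review is probably sufficient anyway, while auth or migration changes with large diffs justify the ceiling.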