Qwen3.6-27B released as open source, "top choice for OpenClaw and Hermes": AI performance matches Claude Opus 4.5 at 14x lower cost
動區 BlockTempo · 2026-04-23 06:04:06


Original title: Qwen3.6-27B 開源發表「Openclaw、Hermes首選」:AI 表現打平 Claude Opus 4.5 成本縮 14 倍
Alibaba's latest Qwen flagship, Qwen3.6-27B, was officially open-sourced on the evening of April 22, 2026. The 27B dense model ties Claude Opus 4.5 at 59.3 on Terminal-Bench 2.0, and with fewer than 1/14 the parameters it scores 77.2 on SWE-bench Verified, edging past the previous-generation 397B MoE flagship's 76.2. The full-precision model is 55.6 GB; Q4_K_M quantization compresses it to 16.8 GB, small enough to run on consumer-grade hardware. For local agent frameworks like OpenClaw and Hermes Agent, this is the first truly capable local brain.

(Context: after being named and blocked by Anthropic, OpenClaw suggested users switch to an API key or to alternative models such as Qwen and Kimi.)

(Background: US AI must be "audited" and locked in labs while China goes all-in on open-source models. Why?)

On the evening of April 22, 2026, the Alibaba Qwen team quietly dropped a bomb on Hugging Face: Qwen3.6-27B, released under the Apache 2.0 license and free for anyone to use commercially. The numbers look ordinary; their significance is not. A 27B dense (non-MoE) model has, for the first time, caught up with Anthropic's closed-source flagship Claude Opus 4.5 in terminal agent testing, and at 55.6 GB it outperforms the previous-generation 397B MoE monster that required 807 GB of VRAM to run at full precision. Local deployment, agent capability, and consumer-grade hardware compatibility: Qwen3.6-27B meets all three conditions.

The Qwen team reported results on 10 benchmarks that reflect real-world agent programming capability. Three conclusions are worth highlighting.

First, a Terminal-Bench 2.0 score of 59.3 ties Claude Opus 4.5. This is the first time a 27B dense model has caught up with Anthropic's closed-source flagship on terminal agent tasks; the previous Qwen3.5-27B scored only 41.6, a 17.7-point improvement in a single generation.
Second, a SWE-bench Verified score of 77.2 surpasses Qwen3.5-397B-A17B's 76.2. A 27B dense model beat the previous-generation 397B MoE flagship, while the model size shrank from 807 GB to 55.6 GB, a reduction of more than 14x.

Third, SkillsBench jumped from 27.2 to 48.2 (+77%), and on Claw-Eval Pass^3 its 60.6 edges past Claude Opus 4.5's 59.6. Multi-turn, multi-step consistency is the biggest upgrade this generation: the model is less likely to crash or drift off-track while executing long, complex agent tasks.

Knowledge and reasoning are equally strong: MMLU-Pro 86.2, MMLU-Redux 93.5, GPQA Diamond 87.8, AIME 2026 94.1, and LiveCodeBench v6 83.9, comprehensively ahead of the previous generation at the same parameter count.

Qwen3.6-27B is a pure dense architecture: the 27B figure is not an MoE active-parameter count but the true number of parameters engaged on every inference step. The native context length is 262,144 tokens, extendable to 1,010,000 tokens (roughly 1M) via YaRN, a must-have for coding agents that analyze long documents or reason across entire repositories. The full-precision model is 55.6 GB; Q4_K_M quantization compresses it to 16.8 GB, which loads directly on a Mac M-series machine with 24 GB of unified memory or on a consumer-grade GPU. The license is Apache 2.0, with no additional fees for commercial use. Deployment is recommended via SGLang ≥ 0.5.10 or vLLM ≥ 0.19.0, with KTransformers and HF Transformers also supported.
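The two file sizes are consistent with a back-of-envelope estimate: 16 bits per weight for the full-precision checkpoint, and roughly 4.85 effective bits per weight for Q4_K_M (llama.cpp's mixed-precision scheme; the exact figure varies slightly by model). A quick sketch:

```python
def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate checkpoint size in GB: parameters * bits / 8."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 27e9  # dense model: every parameter is active on each step

print(f"Full precision: ~{model_size_gb(N_PARAMS, 16):.1f} GB")   # ~54 GB vs. the reported 55.6 GB
print(f"Q4_K_M:         ~{model_size_gb(N_PARAMS, 4.85):.1f} GB") # ~16.4 GB vs. the reported 16.8 GB
```

The small gap against the reported numbers is plausibly embeddings, file metadata, and tensors kept at higher precision.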
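Both SGLang and vLLM expose an OpenAI-compatible HTTP endpoint once the model is served. A minimal stdlib-only client sketch follows; the port, the endpoint path, and the model ID `Qwen/Qwen3.6-27B` are illustrative assumptions, not details confirmed by the article:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(base_url: str, payload: dict) -> str:
    """POST the payload to the server and return the first choice's text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With a local server running (e.g. `vllm serve Qwen/Qwen3.6-27B`, hypothetical ID):
#   reply = chat("http://localhost:8000",
#                build_chat_request("Qwen/Qwen3.6-27B", "Summarize the failing tests."))
```

Because the endpoint speaks the OpenAI wire format, agent frameworks that already target that API need only a base-URL change to point at the local model.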