GLM 4.7 and MiniMax M2.1 Deliver Powerful, Cost-Effective Open-Weight Models
The AI development landscape has been energized by the unexpected release of two new open-weight models, GLM 4.7 and MiniMax M2.1, which challenge the performance and cost structures of established proprietary LLMs. Both models, emerging from competing labs, promise significant capabilities for developers, particularly in coding tasks. ZAI's GLM 4.7, an open-weight model with a substantial 717 GB footprint, shows strong gains over its predecessor in multilingual agentic coding, SWE-bench, and terminal-based tasks, often performing on par with or ahead of models like GPT-5.1 and Sonnet 4.5 on specific coding benchmarks. Concurrently, MiniMax M2.1 (with open weights anticipated around Christmas Day) distinguishes itself with exceptionally broad programming-language support, excelling in Rust, Java, Go, and TypeScript, alongside notable improvements in web and app development. MiniMax claims M2.1 benchmarks higher than Anthropic's Opus and Sonnet, further underscoring the competitive pressure these releases exert.
Evaluations in real-world coding scenarios, such as rebuilding complex features in VS Code-style environments like Open Code and Kiro, reveal distinct strengths. GLM 4.7 performs effectively on focused, smaller tasks but requires more explicit steering for open-ended, multi-path problem-solving. MiniMax M2.1, despite feeling like a smaller model, demonstrates remarkable proficiency in long-running agentic tasks, maintaining context and instruction fidelity over extended sessions, a capability previously observed primarily in high-tier models like Opus and GPT-5.

While both models exhibit occasional interaction quirks with tool harnesses, their ability to deliver functional code for complex features at a fraction of the cost is a game-changer. For instance, MiniMax M2.1 completed a substantial feature for approximately 4 to 10 cents, compared to Opus 4.5's significantly higher operational cost. The cost differential is striking: GLM 4.7 is roughly one-tenth the price of Sonnet 4.5, and MiniMax M2.1 is reported to be up to 60 times cheaper than Opus 4.1 for comparable performance, making these models highly attractive for budget-conscious development workflows. Their self-hostable nature (already the case for GLM 4.7, and expected for M2.1) further amplifies their potential to democratize advanced AI-driven development.
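The cost gap can be made concrete with a back-of-the-envelope calculation. The per-million-token prices below are illustrative placeholders chosen to be roughly consistent with the ratios reported above, not published rates; plug in the real prices for your provider.

```python
# Back-of-the-envelope per-task cost comparison.
# Prices ($ per 1M tokens) are HYPOTHETICAL placeholders, not published rates.
PRICES = {
    "glm-4.7":      {"input": 0.60,  "output": 2.20},
    "minimax-m2.1": {"input": 0.30,  "output": 1.20},
    "opus-4.5":     {"input": 15.00, "output": 75.00},
}

def task_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one task at the assumed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a feature build consuming 200k input and 30k output tokens.
for model in PRICES:
    print(f"{model}: ${task_cost(model, 200_000, 30_000):.2f}")
```

At these assumed rates, the MiniMax M2.1 run lands around a dime while the Opus run costs several dollars, a 50-60x spread in line with the claims above; the conclusion only holds to the extent the placeholder prices track real ones.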