Anthropic Cuts Off Trae AI IDE Access to Claude Models, Citing Data Distillation Fears

Anthropic has reportedly revoked access to its Claude large language models for Trae, an AI-powered VS Code fork developed by TikTok parent ByteDance. The move cripples Trae’s agentic features, particularly its solo mode, which was built heavily on Claude’s capabilities. This is the latest in a series of restrictive moves by Anthropic, which previously cut off rival AI IDE Windsurf and even OpenAI from its models. Anthropic’s updated terms of service, published on September 4th, cite concerns about companies from “restricted regions,” such as China, using its services via subsidiaries. The primary stated fear is that such entities could advance their own AI development through techniques like distillation and thereby compete with US and allied technology companies.

The underlying tension appears to be Anthropic’s fear that the “brain” of its proprietary models could be replicated or improved upon. Observers have noted that Windsurf’s recently launched SWE-1.5 model, trained on user data, exhibits characteristics reminiscent of Claude. Furthermore, Trae’s “Seed Coder” model recently scored a notable 78% on SWE-Bench, surpassing Claude 3.5 Sonnet’s 70% in mini-SWE-agent benchmarks, around the same time Anthropic revised its access policies. Seed Coder was trained on self-curated data, which has raised suspicions of effective distillation. Anthropic is also the only major lab with no open-weight models, a closed-source CLI, and only selective benchmark disclosures, all signs of a highly guarded stance on its AI technology.
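The article does not spell out what distillation involves, but in broad strokes it means training a smaller “student” model to imitate a “teacher” model’s output distributions, which is why providers worry about large-scale harvesting of API responses. A minimal, illustrative sketch of the classic soft-label distillation objective (temperature-softened KL divergence); all names and values here are for illustration only, not taken from any lab’s actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions --
    the student minimizes this to match the teacher's 'soft labels'."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
```

The loss is zero when the student exactly reproduces the teacher’s distribution and grows as the two diverge; repeated over millions of queried outputs, minimizing it transfers much of the teacher’s behavior to the student.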

The cutoff has frustrated Trae users, many of whom subscribed specifically for Claude access. While Anthropic frames its actions in terms of national security and data protection, the selective nature of the cutoffs (Cursor, for example, still retains access) suggests a strategic effort to protect its market position and intellectual property without alienating its broader developer base. The industry is watching to see how Anthropic’s stringent policies will reshape developer-tool ecosystems and the competitive landscape for large language models.