Apple's Billion-Dollar Siri AI Move with Google Sparks Ecosystem Shift Amid Anthropic's Tightening Grip on Data
Apple is reportedly nearing a $1 billion deal with Google to integrate a custom, 1.2-trillion-parameter model based on Gemini 3 into Siri. The partnership, first reported by Bloomberg’s Mark Gurman, is meant to shore up Apple’s position in the fast-moving AI sector, where its strict user-privacy stance has left it with limited internal data for training competitive models. Apple also explored options with OpenAI and Anthropic, but Google’s offer proved the most cost-effective. Under the agreement, Apple would obtain the model weights and run them on its own Private Cloud Compute infrastructure, preserving user privacy while upgrading Siri’s capabilities. Mike Rockwell, the executive behind Vision Pro, recently took over leadership of Siri’s development.
Meanwhile, Anthropic has enacted a controversial hard cutoff of Trae, the ByteDance-owned AI IDE, from access to its Claude models. The move is not unprecedented: Anthropic previously restricted Windsurf and OpenAI, citing concerns over data distillation and potential national-security risks tied to entities from “adversarial nations” such as China. Commentators noted that Anthropic is the only major AI lab without open-weight models or an open-source CLI, and that it polices competitors’ use of its model outputs for their own AI development more vigilantly than its peers. The heightened sensitivity coincides with Trae’s “Seed Coder” model, trained on self-curated data, achieving a high SWE-Bench score. Together, these events underscore a broader industry shift: programmatically generated training data and tool-calling capabilities are becoming strategically central to AI progress, a point content creator PewDiePie emphasized in recent discussions of local LLMs and AI art.
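The distillation concern at the heart of the Trae cutoff can be illustrated with a toy sketch: a small “student” model is trained not on ground-truth labels but on a stronger “teacher” model’s outputs, so the student absorbs the teacher’s behavior without access to its weights or original training data. Everything below is hypothetical and purely illustrative; it is not any lab’s actual pipeline, and the teacher here is just a stand-in function.

```python
import math
import random

random.seed(0)

def teacher(x):
    # Stand-in for a large proprietary model: labels inputs by a hidden rule
    # (decision boundary at x = 0.5) that the student never sees directly.
    return 1.0 if 2.0 * x - 1.0 > 0 else 0.0

# Build a synthetic training set by querying the teacher -- the kind of
# output harvesting that API terms of service typically prohibit.
inputs = [random.uniform(-1, 1) for _ in range(200)]
labels = [teacher(x) for x in inputs]

# Train a tiny student (one-feature logistic regression) on teacher outputs.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    for x, y in zip(inputs, labels):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# The student now approximates the teacher's decision boundary.
agree = sum(
    (1.0 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0.0) == teacher(x)
    for x in inputs
)
print(f"student agrees with teacher on {agree}/{len(inputs)} points")
```

The same dynamic scales up to language models, where a student is fine-tuned on a frontier model’s generated text or tool-call traces, which is why labs increasingly treat their model outputs as strategic assets.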