The AI Race Paradox: China Dominates Open-Weight Models While US Leads Closed AI
The global artificial intelligence race presents a nuanced picture: American companies like Google, Anthropic, and OpenAI currently lead in top-performing closed-source models. When the focus shifts to “open-weight” models (those whose underlying parameters can be downloaded and used by third parties), however, China emerges as the dominant force. Chinese labs, including Moonshot AI (maker of Kimi), DeepSeek, and MiniMax, consistently rank at the top of open-weight benchmarks, and their models are often significantly larger and more capable than American open-weight counterparts such as GPT-OSS-120B. This strategic divergence is largely driven by necessity: open weights are crucial for Chinese labs to gain mindshare and overcome trust barriers, since security concerns prevent many Western entities from using models hosted on Chinese infrastructure.
American labs, by contrast, prioritize a closed, API-driven monetization strategy, emphasizing controlled access and the mitigation of liabilities such as security risks and copyright infringement, which are difficult to manage once model weights are released. OpenAI has nonetheless carved out a distinct niche in the open-weight space, focusing on consumer-runnable models (GPT-OSS 20B and 120B) optimized for local inference on personal hardware such as high-end laptops or Mac Studios. This approach prioritizes accessibility and user control over sheer scale, aiming at the segment for on-device AI applications. Even so, the broader incentive for US labs to release competitive, large-scale open-weight models remains low. The likely outcome is that the US leads in locally runnable models while China, fueled by a collaborative research culture and strategic imperative, continues to lead in the larger, infrastructure-hosted open-weight domain for the foreseeable future.
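To make “consumer-runnable” concrete, here is a minimal sketch of local inference with the 20B checkpoint via the Hugging Face transformers text-generation pipeline. The openai/gpt-oss-20b model ID is the published Hub name; the memory figure and the exact pipeline behavior are assumptions about current transformers releases, not details from this article.

```python
# Minimal local-inference sketch for an open-weight checkpoint, assuming
# transformers and a recent PyTorch are installed and the machine has enough
# memory for the 20B variant (OpenAI cites roughly 16 GB in its quantized form).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # open-weight checkpoint on the Hugging Face Hub
    torch_dtype="auto",          # let transformers pick a suitable dtype
    device_map="auto",           # place weights on GPU / Apple Silicon if present
)

# The pipeline accepts chat-style input and applies the model's chat template.
messages = [{"role": "user", "content": "In two sentences, what is an open-weight model?"}]
result = generator(messages, max_new_tokens=128)

# For chat input the pipeline returns the whole conversation; the final turn
# is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```

This is exactly the workflow a closed API forbids: the weights live on the user's own disk, and inference never leaves the machine.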