Anthropic's Rate Limit Changes and iMessage Plugin Spark Hypocrisy Accusations Amid Developer Outcry

Anthropic recently announced adjusted 5-hour session limits for Claude Code subscribers during peak weekday hours (5 AM - 11 AM Pacific), citing growing demand. Under the change, users consume their limits “faster than before,” and roughly 7% of users are affected. The announcement sparked significant frustration, particularly because it came only after the changes had already been implemented and was communicated mainly through an individual DevRel’s Twitter account. Critics drew parallels to OpenAI’s more transparent approach, including frequent Codex usage resets and flexible pricing tiers for batch processing. Compounding the issue, an Anthropic employee was observed promoting an iMessage plugin for Claude Code, a tool that appears to violate Apple’s terms of service against commercial activity and reverse engineering of iMessage APIs. The promotion came just weeks after Anthropic reportedly sent legal requests to third-party harnesses such as opencode, demanding they stop routing Claude Code OAuth tokens through their platforms and citing ToS violations for accessing its “special endpoint.” The perceived double standard has prompted widespread accusations of hypocrisy within the developer community.

The underlying factor driving Anthropic’s rate limit adjustments is an industry-wide compute crunch driven by GPU scarcity. While OpenAI aggressively acquired compute capacity, Anthropic and Google moved comparatively slowly; Anthropic now relies on partnerships with Amazon and Google for data center access. This scarcity forces difficult internal GPU allocation decisions among research, product development, and user consumption. In this environment, the “harness” (the tools and environment in which an AI agent operates) plays a crucial role in model performance. Independent benchmarks show significant gains when models like Claude Opus run inside optimized harnesses like Cursor (improving from 77% to 93%), highlighting how much these intermediary layers matter beyond the base model. This underscores a fundamental divide: competitors such as OpenAI and GitHub encourage using their inference through various third-party harnesses, while Anthropic’s policies aim to lock users into its proprietary Claude Code application, leveraging subsidized compute to maintain ecosystem control. That approach contrasts with the broader developer ethos of open integration and user choice, further fueling discontent.