Anthropic's Capacity Crunch and the Strange xAI Deal
Published on 14.05.2026
TLDR: Anthropic spent weeks silently degrading Claude Code, banning corporate accounts, and pulling access from paying Pro users, all while CEO Dario Amodei admitted the company hit 80x revenue growth against a 10x plan. Now they're renting xAI's entire Colossus 1 data center, 220,000 NVIDIA GPUs, from a rival who called them "misanthropic and evil" a few months ago. The simplest explanation is the right one: they ran out of compute.
Summary: I've been watching the Claude Code situation unfold with a mix of sympathy and frustration. Devs paying twenty dollars a month had their Claude Code access yanked days into a subscription, and Anthropic let people believe the tool was just performing worse rather than admit it was being throttled. That kind of silent degradation is the worst possible failure mode for a developer tool, because trust is the entire product. If I can't predict what I'll get when I pay you, I have no business building workflows on top of you.
The xAI deal reframes everything. Colossus 1 is roughly 45 percent of xAI's current capacity and 20 to 25 percent of planned capacity. Handing that to a direct competitor is not a normal business move, and Elon was publicly calling Anthropic "misanthropic and evil" not long ago. The deal only makes sense if xAI has idle GPUs they can't otherwise monetize and Anthropic has buyers it can't otherwise serve. Grok never landed a serious B2B story, and the AI tooling surveys I've seen barely register it next to Claude, ChatGPT, and even open models like DeepSeek and Qwen.
There's also the political layer. Musk is suing OpenAI for $150B and wants Altman and Brockman out. Anthropic has its own public friction with OpenAI, the kind that shows up in awkward photo ops at summits. If you squint, propping up Anthropic is a way to hurt OpenAI without having to ship a better Grok. The enemy of my enemy gets your spare H100s.
For those of us building on these APIs, the lesson is unsentimental. We've been treating frontier model access like commodity infrastructure, and it isn't. Capacity is finite, contracts can shift overnight, and the provider can change the underlying behavior of the model without telling you. I would not bet a production roadmap on any single lab right now without a fallback model, a router, and a clear story for what happens when the rate limits tighten.
Key takeaways:
- Silent model degradation, what users are calling "nerfing," destroys developer trust faster than any outage
- Anthropic confirmed 80x growth against a 10x plan, which explains why everything downstream of compute broke
- xAI leasing roughly 45 percent of its current capacity to a rival signals weak Grok demand more than corporate friendship
- AI Gateway-style routing and multi-provider fallbacks are no longer optional for serious production work
- API rate limits and consumer plan limits are now product decisions, not just infrastructure ones, and they move
Why I care: As a senior frontend dev shipping AI features, this is exactly the kind of supplier risk I have to design around. If I wire a customer-facing assistant directly to one model behind one provider, I'm one quiet capacity decision away from a regression I can't reproduce locally. I want my agentic features behind a gateway with explicit model strings, deterministic fallbacks, and observability on token latency and refusal rates, so I can see degradation before users do. I also want my prompts and evals portable enough that swapping Claude for GPT or Gemini is a config change, not a refactor. The lesson from this week is that the model layer is going to keep being unstable for a while, and the architecture has to absorb that rather than assume it away.
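To make that concrete, here's a minimal sketch of the fallback pattern I mean. Everything in it is illustrative: the `Route` shape, the model strings, and the timeout behavior are my own assumptions, not any provider's SDK. The point is that model choice lives in a config array and the caller never hardcodes a single provider.

```typescript
// Hypothetical router sketch: Route, model names, and timeouts are
// illustrative assumptions, not a real provider SDK.
type ModelCall = (prompt: string) => Promise<string>;

interface Route {
  model: string; // explicit, pinned model string from config
  call: ModelCall;
}

// Try each route in order; fall back on error or timeout.
async function routeWithFallback(
  routes: Route[],
  prompt: string,
  timeoutMs = 10_000,
): Promise<{ model: string; output: string }> {
  let lastError: unknown;
  for (const route of routes) {
    try {
      const output = await new Promise<string>((resolve, reject) => {
        const timer = setTimeout(
          () => reject(new Error(`timeout: ${route.model}`)),
          timeoutMs,
        );
        route.call(prompt).then(
          (value) => { clearTimeout(timer); resolve(value); },
          (err) => { clearTimeout(timer); reject(err); },
        );
      });
      return { model: route.model, output };
    } catch (err) {
      // In production, emit latency/error metrics here so degradation
      // is visible before users notice it.
      lastError = err;
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}
```

Swapping the primary model then means reordering the `routes` array, and a provider throttling you shows up as fallback counts in your metrics instead of a silent quality regression.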
The Pulse: Did capacity shortages turn Anthropic hostile to devs?