May 2026 • 8 min read

Anthropic + Colossus One: Claude Code's Rate Limits Just Doubled

How a SpaceX-tied data center deal added 300 megawatts of capacity and what it means for developers running Claude Code at scale

The Announcement

Anthropic announced a partnership with SpaceX's Colossus One data center, picking up more than 300 megawatts of new capacity — the equivalent of over 220,000 NVIDIA-class GPUs at typical power-per-card numbers. The immediate, user-visible effect: Claude Code's rate limits doubled across all paid plans.

For developers, the relevant question isn't the megawatt figure. It's whether the agent stays responsive when you actually need it — in the last hour before a deploy, mid-debug, when twelve other people on your team are also slamming it.

What Doubled

Concurrent Sessions

The per-account concurrent session cap moved up across Pro, Team, and Enterprise tiers. The practical effect is that long-running background agents (review bots, refactor sweeps, scheduled jobs) no longer compete with your interactive sessions for headroom.

Token Throughput

Per-minute and per-hour token windows roughly doubled. For tasks that page through large repos or stream long edits, runs that used to pause for a beat now far more often just keep going.
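Even with doubled windows, bursty workloads can still hit the ceiling, so it's worth wrapping calls in retry-with-backoff. A minimal sketch of the pattern; `call_model` here is a hypothetical stand-in that simulates two rate-limit failures before succeeding, not a real API client:

```python
import random
import time

def call_model(payload: str, _state={"calls": 0}) -> str:
    # Stand-in for a real API call: simulates a 429 rate-limit
    # error on the first two attempts, then succeeds.
    _state["calls"] += 1
    if _state["calls"] < 3:
        raise RuntimeError("429: rate limited")
    return "ok"

def with_backoff(payload: str, max_retries: int = 5) -> str:
    # Retry rate-limit errors with exponential backoff plus jitter,
    # so a burst above the token window degrades to a short pause
    # instead of a hard failure.
    for attempt in range(max_retries):
        try:
            return call_model(payload)
        except RuntimeError as err:
            if "429" not in str(err) or attempt == max_retries - 1:
                raise
            delay = (2 ** attempt) * 0.1 + random.uniform(0, 0.05)
            time.sleep(delay)
    raise RuntimeError("retries exhausted")

result = with_backoff("summarize the diff")
```

The jitter term matters when many sessions back off at once; without it, they all retry on the same schedule and collide again.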

Off-Peak Headroom

Anthropic also expanded an off-peak credit pool: tasks scheduled outside business hours bill at a lower rate against your quota. Useful for nightly review sweeps, doc generation passes, or migration scripts that don't need to run in the foreground.
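If you gate batch jobs on the cheaper window yourself, the check is a few lines. A sketch assuming a hypothetical 8 pm to 6 am local off-peak window (the actual window and discount are whatever your plan defines):

```python
from datetime import datetime, time as dtime

OFF_PEAK_START = dtime(20, 0)   # 8 pm local -- illustrative, not official
OFF_PEAK_END = dtime(6, 0)      # 6 am local

def is_off_peak(now: datetime) -> bool:
    # The window wraps midnight, so a timestamp is off-peak if it
    # falls in either the evening half or the early-morning half.
    t = now.time()
    return t >= OFF_PEAK_START or t < OFF_PEAK_END

def should_run_nightly_sweep(now: datetime) -> bool:
    # Gate batch work (review sweeps, doc generation, migrations)
    # on the cheaper off-peak window.
    return is_off_peak(now)

print(should_run_nightly_sweep(datetime(2026, 5, 1, 23, 30)))  # True: late evening
print(should_run_nightly_sweep(datetime(2026, 5, 1, 14, 0)))   # False: business hours
```

In practice you'd put this behind a scheduler (cron, CI nightly jobs) rather than polling, but the wrap-around check is the part that's easy to get wrong.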

Why the Power Number Matters

The bottleneck on frontier-model deployment in 2026 isn't silicon — it's power and cooling. A 300 MW pickup is meaningful not because of the headline, but because it sidesteps the two-to-three-year permitting and grid-interconnect timelines that constrain greenfield builds.

Colossus One was already operating; Anthropic is buying time, not building it. That pattern — renting power from operators who solved the hardest physical problem first — is going to define the next two years of frontier inference.

What to Do With It

The honest answer for most teams is: nothing different yet. The rate-limit change removes a class of friction (the “Claude is busy” pause), but it doesn't change the shape of how you should design agent workflows. The discipline of small, well-scoped tasks with explicit success criteria still wins over “ask Claude to do everything in one shot.”

For teams running fleet-style agents — many sessions, many repos, scheduled jobs — the new headroom is more practical. The previous limits were the constraint forcing batched scheduling; that constraint just relaxed.

You don't feel the megawatts. You feel the lack of waiting.

Tags: Claude Code • Anthropic • Infrastructure