I built something called nanika. It runs Claude Code workers in parallel. It decomposes a mission into PHASE lines, dispatches workers, manages dependencies, and collects output. By any reasonable definition, it's a third-party harness on top of Claude.
As of today at 12pm PT, Anthropic has decided that's the wrong category to be in.
Starting April 4, 2026, Claude Pro and Max subscriptions no longer cover usage from third-party harnesses. The announcement uses the phrase "all third-party harnesses" — not "high-consumption agents," not "OpenClaw specifically." All of them. Nanika included.
The title is a little unfair to Anthropic. They didn't say "these orchestrators should not exist." But they did say: if you want to run them, you'll pay separately. And in a market where flat-rate subscription pricing was the only way most developers could justify the usage volume, that's functionally the same decision. The orchestrators that survive will be the ones that can absorb a 4-8x cost increase or move to a different model. Most won't.

Anthropic blocked "all third-party harnesses" — what that actually means
The quick facts, because the coverage has been murky:
OAuth tokens from Claude Free, Pro, and Max accounts are now prohibited in any product, tool, or service that isn't Claude.ai or Claude Code. This has technically been prohibited under ToS Section 3.7 since February 2024, but today is when the billing enforcement goes live.
Users who want to keep running third-party tools can either enable "extra usage" billing (pay-as-you-go, on top of their subscription), pre-purchase usage bundles at up to 30% discount, or switch to direct API keys. There's a one-time credit equal to your monthly subscription price as a consolation, and a refund window open until April 17 if you want out entirely.
What's covered under subscription going forward: Claude Code and Claude Cowork. Both Anthropic-owned.
The tools already caught in the enforcement wave before today: OpenClaw, OpenCode, Cline, Roo Code — all restricted or forced off Claude in January 2026. Today's announcement formalizes billing for whatever remained.
The thing that stood out to me reading the policy language is how deliberately broad it is. "All third-party harnesses" is not a definition that excludes edge cases. It's a definition designed to cover everything that isn't theirs.
The capacity argument doesn't survive the /loop test
Boris Cherny, Head of Claude Code, gave the official justification: "Our subscriptions weren't built for the usage patterns of these third-party tools."
Anthropic's spokesperson added that third-party tools create an "outsized strain on our systems."
There's also a technical argument Anthropic surfaced: Claude Code and Claude Cowork are built to maximize prompt cache hit rates — reusing previously processed text to reduce compute. Third-party harnesses, the argument goes, bypass these optimizations and consume more raw compute per token.
That's their strongest point. It's also the easiest to address without banning clients — you'd enforce cache compliance at the API layer, not prohibit third-party tools from running at all.
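To make the cache argument concrete, here's a toy sketch of why stable prompt prefixes matter. Real serving stacks cache per-token-prefix KV state; this illustrative version (my construction, not Anthropic's implementation) just counts how many leading tokens of a new request match a previously processed one — the part that doesn't need recomputing.

```python
# Toy model of prompt-prefix caching, the optimization the argument says
# third-party harnesses bypass. A real serving stack caches attention KV
# state keyed by token prefix; here we only count reusable leading tokens.

def shared_prefix_len(cached: list[str], incoming: list[str]) -> int:
    """Tokens at the start of `incoming` that match the cached request."""
    n = 0
    for a, b in zip(cached, incoming):
        if a != b:
            break
        n += 1
    return n

system = ["<sys>", "tools", "policy"]                     # stable boilerplate
turn1 = system + ["user:", "fix", "the", "bug"]
turn2 = system + ["user:", "now", "add", "tests"]         # same prefix: cheap
shuffled = ["policy", "<sys>", "tools", "user:", "now"]   # reordered: full recompute

print(shared_prefix_len(turn1, turn2))     # 4 tokens reused
print(shared_prefix_len(turn1, shuffled))  # 0 tokens reused
```

A harness that reorders system content or rewrites history between turns breaks the prefix, so every request pays full compute — which is why cache compliance is enforceable at the API layer without banning the client.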
But the deeper problem with the capacity framing came from a comment in the HN thread that I haven't seen answered anywhere:
"There's a 5 hour limit and a weekly limit. Those are hard token limits. Everyone on a plan pays for the max set of tokens in that plan. The limits manage capacity. The solution to that isn't a change of ToS, it's adjusting the limits. In other words this is about Anthropic subsidizing their own tools to keep people on their platform. OpenClaw is just a good cover story for that."
And then: "You can maximize plans just as easily w/ /loop. I do it all the time on max 20x. The agent consuming those tokens is irrelevant."
That second point is the one that sticks. Claude Code has /loop built in — a first-party scheduled task runner that burns identical tokens to any third-party harness doing equivalent work. If the concern is token consumption, /loop should be restricted too. It isn't.
The distinction Anthropic is drawing isn't about tokens. It's about who gets credit for them.

The OpenClaw timeline is a competitive story, not a capacity one
The capacity argument also doesn't explain the timing.
In January 2026, Anthropic began mass enforcement. Engineer Thariq Shihipar announced it, citing "unusual traffic patterns and lack of telemetry" — note that second reason, I'll come back to it. Thousands of OpenClaw, OpenCode, Cline, and Roo Code users got banned or restricted.
On February 15, 2026, Peter Steinberger — OpenClaw's creator — joined OpenAI. His statement: "What I want is to change the world, not build a large company. Teaming up with OpenAI is the fastest way to bring this to everyone."
Before his move, Steinberger had told Yahoo Tech: "We told Anthropic that we have many users who only signed up for their sub because of OpenClaw and that it'd be a loss if they cut them off."
Anthropic updated their legal terms to make the ban explicit on February 20, five days after the OpenAI announcement. OpenCode removed Claude support entirely the same month, citing legal requests from Anthropic.
The sequence: OpenClaw gets 200,000+ GitHub stars and becomes arguably a bigger daily driver of Claude usage than Claude Code itself → Anthropic restricts it → creator joins OpenAI → OpenAI publicly signals they explicitly support third-party harnesses → Anthropic doubles down with April billing enforcement.
This isn't a capacity timeline. It's a competitive response timeline.
The telemetry angle Shihipar mentioned is interesting for a different reason. Anthropic wants observability over how their model is being used. When 200,000 developers run sophisticated orchestration through Anthropic's infrastructure, Anthropic has no visibility into what's happening at the workflow layer. Claude Code gives them that. Third-party harnesses don't. Whether that's purely a capacity concern or partly a product intelligence concern is a question Anthropic hasn't answered.
The API isn't an escape hatch — it's a cost restructure
The official guidance is: "use the API."
I've seen this treated as a lateral move in most of the coverage. It isn't.
A Claude Max subscription runs $100-200/month, with heavy usage covered up to the plan's limits. API pricing is per-token: Claude Sonnet 4 is currently $3 per million input tokens, $15 per million output tokens. For an orchestration workload running 8 parallel workers through multi-step missions — the kind nanika was built for — you're not talking about a similar bill. You're talking about something in the $600-2000/month range depending on context size and agent loop depth.
That's not a complaint about the API pricing. API pricing is fair for the compute involved. But "just use the API" as advice to developers currently on subscription plans is equivalent to "just pay 4-8x more." Some organizations can absorb that. Most hobbyist and indie developer use cases can't.
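The back-of-envelope math is worth showing. This sketch uses the per-token prices quoted above; the workload numbers (steps per worker, tokens per step, missions per month) are illustrative assumptions, not measurements from nanika.

```python
# Rough monthly API cost for a parallel-worker orchestration load, using
# the Claude Sonnet 4 list prices from the text ($3/M input, $15/M output).
# Workload figures below are assumptions for illustration only.

INPUT_PRICE = 3.00 / 1_000_000    # USD per input token
OUTPUT_PRICE = 15.00 / 1_000_000  # USD per output token

def mission_cost(workers, steps, input_tok_per_step, output_tok_per_step):
    """Cost of one mission: each worker runs a multi-step agent loop,
    re-sending context on every step (the expensive part)."""
    total_in = workers * steps * input_tok_per_step
    total_out = workers * steps * output_tok_per_step
    return total_in * INPUT_PRICE + total_out * OUTPUT_PRICE

# 8 workers, 20 steps each, ~30k input tokens per step (context grows and
# is re-sent each loop), ~1.5k output tokens per step.
per_mission = mission_cost(8, 20, 30_000, 1_500)
monthly = per_mission * 60  # assume ~2 missions per day
print(f"per mission: ${per_mission:.2f}, monthly: ${monthly:.2f}")
```

Under these assumptions a single mission costs about $18 and a month lands around $1,080 — squarely inside the $600-2000 range, and 5-10x a $100-200 subscription.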
The practical effect is that subscription pricing was a cross-subsidy: heavy orchestration use was priced like light conversational use. If someone built a robot that used your gym membership to work out 24/7, the gym would shut it down too. Anthropic is ending that subsidy for third-party tools while keeping it for their own. That's the tax.
For anyone building orchestration tooling professionally, the architecture decision is now clear: API-key-first, not subscription-auth-first. The subscription path is contested ground. The API path costs more but is at least stable.

Nanika sits on exactly this line
I want to be specific about what the policy actually catches, because there's been a lot of confusion in the HN thread about which tools are affected.
Nanika dispatches Claude Code workers. The workers run inside Claude Code sessions. The orchestration layer manages the dependency graph and collects outputs. The question of whether that makes nanika a "harness" or a "Claude Code automation" is genuinely ambiguous under the current policy language — and I think the ambiguity is intentional.
If nanika spawns workers that each run a Claude Code session with an independent task, is that a third-party harness wrapping Claude, or is it Claude Code doing what Claude Code does? Anthropic hasn't drawn that line precisely. And that's a useful position for them to be in: keep third-party developers uncertain about where the boundary is, and they'll self-censor or migrate to the API out of caution.
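To show why the boundary is genuinely fuzzy, here's a sketch of the orchestration shape in question — a mission decomposed into PHASE lines, resolved into waves of workers that can run in parallel once their dependencies finish. This is an illustrative reconstruction, not nanika's actual code; the PHASE-line syntax and function names are my assumptions.

```python
# Hedged sketch of dependency-wave dispatch. Each wave is a set of phases
# that could each be handed to an independent Claude Code session — which
# is exactly the "harness or automation?" ambiguity in the policy.
from graphlib import TopologicalSorter

def parse_mission(text):
    """Parse lines like 'PHASE build: deps=lint,test' into a dep graph
    mapping each phase to its predecessors."""
    graph = {}
    for line in text.splitlines():
        if not line.startswith("PHASE "):
            continue
        head, _, deps = line[len("PHASE "):].partition(": deps=")
        graph[head.strip()] = {d.strip() for d in deps.split(",") if d.strip()}
    return graph

def dispatch_waves(graph):
    """Yield sets of phases whose dependencies are satisfied; each set
    can be dispatched as parallel workers."""
    ts = TopologicalSorter(graph)
    ts.prepare()
    while ts.is_active():
        ready = tuple(ts.get_ready())
        yield set(ready)
        ts.done(*ready)

mission = """PHASE lint
PHASE test
PHASE build: deps=lint,test
PHASE release: deps=build"""
print(list(dispatch_waves(parse_mission(mission))))
```

Nothing in this layer talks to a model at all — the model calls happen inside each worker's Claude Code session. That's what makes "third-party harness" hard to apply cleanly.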
The comment from the HN thread that rang most true to me: "Right now my own incentive is to stop being dependent on Claude for as much as I can as quickly as I can." That's the developer response Anthropic should be worried about. Not the ones who protest. The ones who quietly diversify.
For nanika specifically, the right architectural response is to make the model layer swappable — which I'd been putting off as unnecessary complexity. It's now necessary. Any serious orchestration tool that's locked to a single model provider is fragile. Not because providers are malicious, but because their commercial interests don't always align with the tools built on top of them. This is just how platform dynamics work.
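The swappable model layer amounts to a small interface boundary. This is a minimal sketch of the shape, not nanika's real design; the protocol, class names, and the stand-in backend are all illustrative assumptions — real backends would wrap a vendor SDK behind the same method.

```python
# Hedged sketch: workers depend on a narrow provider protocol, so swapping
# model backends is a constructor argument, not a rewrite.
from dataclasses import dataclass
from typing import Protocol

class ModelProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

@dataclass
class EchoProvider:
    """Stand-in backend for tests; a real one would wrap an API client."""
    prefix: str = "echo:"
    def complete(self, prompt: str) -> str:
        return f"{self.prefix} {prompt}"

@dataclass
class Worker:
    """Knows nothing about any vendor SDK — only the protocol."""
    provider: ModelProvider
    def run(self, task: str) -> str:
        return self.provider.complete(task)

print(Worker(EchoProvider()).run("summarize the diff"))
# echo: summarize the diff
```

The point of the boundary is that when a provider's commercial terms shift, the blast radius is one adapter class instead of the whole orchestrator.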
When the model provider owns the tooling layer, they draw the line
I don't think Anthropic made this decision carelessly. They looked at a situation where third-party tools were responsible for a significant portion of their subscription load, those tools were giving them no observability, and one of those tools had just sent its creator to a competitor. The business case for enforcement is clear.
The question worth sitting with is what it tells you about building on this layer at all.
Anthropic is not just a model company. They built Claude Code, they're building Claude Cowork, and they have direct financial interest in owning the orchestration layer. The subscription restriction isn't just a pricing decision — it's a signal about where they see the product going. First-party orchestration tools get the subsidy. Third-party orchestration tools pay the real rate.
That's not inherently unfair. It's how platform businesses work. AWS does it. Apple does it. Google does it. You build on someone's platform knowing they might compete with you at any layer. The question isn't whether it's fair — it's whether you've priced in the risk.
For nanika, I hadn't priced it in well enough. The architecture assumed subscription auth would be stable. It isn't. Multi-model support and API-key-first auth are now higher priority than I'd planned.
The honest open question is whether Anthropic's move is smart long-term. OpenAI explicitly welcoming third-party harnesses while Anthropic restricts them creates a real developer preference signal. The developers most likely to build the next breakout agent tool aren't using Anthropic's first-party tools — they're building their own. Restricting that class of user might protect subscription margins short-term while eroding developer mindshare over a longer horizon.
I don't know how that plays out. But I know the orchestration layer is now contested ground, and building on it means understanding who controls the terrain.
Nanika is an open-source multi-agent orchestrator I've been building at github.com/joeyhipolito/nanika. The orchestration patterns described here are from real production use.