Juan Torchia

Originally published at juanchi.dev

Anthropic reversed its position on Claude CLI: last week it was gray, today it's green. My workflow didn't change.

Anthropic just confirmed that using Claude via CLI — the pattern that tools like OpenClaw popularized — is allowed. The community is celebrating. I use it too. But I'm watching this from a pretty specific vantage point: my workflow didn't change a single line. What changed was Anthropic's official position. And the more I think about that, the more it bothers me.

Not because they gave the green light. But because for weeks I was operating in a gray zone that should never have existed in the first place. And because the reversal arrived with no proactive communication, no changelog, no email to the developers who were already building on top of that foundation.

I've spent 32 years watching how platforms relate to their ecosystems, and the patterns here are familiar.

Claude CLI usage policy reversal: what exactly changed and what didn't

Quick context: Claude CLI is the practice of accessing Claude — Anthropic's model — through command-line interfaces, scripts, or tools that automate interaction without necessarily going through the official API in a "conventional" way. OpenClaw was one of the tools that popularized this pattern: basically a wrapper that let you use Claude from your terminal like any other Unix tool.

For a period, Anthropic's terms of use were ambiguous about whether this was allowed. The conservative interpretation said no. The practical interpretation — the one 90% of the developers I know were using — said yes, with some reasonable limits.

Now Anthropic said explicitly: it's fine.

What changed: the official text.
What didn't change: what I was doing.

And there's the problem.

```bash
#!/bin/bash
# What I had BEFORE the announcement
# (and still have AFTER, without changing anything)
#
# Script to process code with Claude via CLI
# This lived in a gray zone. Today it lives in a green zone.
# My code doesn't know the difference.

export ANTHROPIC_API_KEY="$CLAUDE_API_KEY"

claude_review() {
  local file="$1"
  local context="$2"

  # Send the file to Claude for technical review
  cat "$file" | claude --system "You are a software architect reviewing code" \
    --message "Review this code and tell me what you'd improve: $context"
}

# Real usage in my development pipeline
claude_review "src/api/auth.ts" "focus on security and edge cases"
```

This script existed before. It exists now. The difference is whether Anthropic officially approves of what I'm doing with it. And when I write it out like that, it sounds absurd. But that's exactly what happened.

The real problem isn't the reversal — it's the architecture of the relationship

Look, I get that policies evolve. Three decades of watching this. When I started working with Linux hosting at 19, acceptable use rules from providers changed every now and then and nobody lost their mind over it. It was part of the game.

But there's a fundamental difference between 2004 and 2026: the depth of integration.

Today I'm not using Claude to send you an email. I'm building entire systems where Claude is a structural piece. I have agents that pass tests, I have review pipelines, I have code generation workflows running in production. When Anthropic rewrites its terms — in any direction — they're touching my architecture. Even if they don't know it.

I wrote about this recently in the context of the Vercel breach and the outsourced threat model: the risk of building on third-party infrastructure isn't just technical. It's also contractual, legal, and about continuity. Now we add: it's also semantic. The rules governing what you can do with a tool can change while you're asleep.

Anthropic is more transparent than most. I already saw this when I analyzed the diff between Claude Opus 4.6 and 4.7 system prompts — there are changes there that directly affect how the model behaves in production, and none of us got an email. This time the change was in favor of developers. Next time it might not be.

```typescript
// The problem of building on policies you don't control
// This isn't functional code — it's an architecture metaphor

interface ProviderPolicy {
  allowsCLI: boolean;             // Changed last week
  allowsAutomation: boolean;      // Going to change next?
  abuseDefinition: string;        // Ambiguous until it isn't
  lastUpdated: Date;              // They rarely tell you
}

// Your system assumes this is stable
// Your system is wrong
const buildSystem = (policy: ProviderPolicy) => {
  // Your entire agent architecture depends on this
  // And you have zero control over policy
  return new AgentSystem(policy);
};

// The solution isn't to stop building
// The solution is to build with abstraction layers
// that let you swap the provider without rewriting everything
interface ModelAdapter {
  complete(prompt: string): Promise<string>;
  // Doesn't care if it's Claude, GPT, Gemini, or local Llama
  // Doesn't care if it's via API, CLI, or SDK
}
```
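The same adapter idea translates to this article's bash world. Here's a minimal sketch: a `llm_complete` function that dispatches on an `LLM_PROVIDER` variable. The `claude` and `ollama` invocations below are illustrative stand-ins, not documented flags — swap in whatever your installed CLI actually accepts.

```shell
#!/bin/bash
# Minimal provider-agnostic shim (sketch).
# LLM_PROVIDER picks the backend; callers never name a vendor.
# The concrete commands are placeholders for whatever CLI you run.

llm_complete() {
  local prompt="$1"
  case "${LLM_PROVIDER:-claude}" in
    claude)
      # Hypothetical invocation: pipe the prompt in, text comes out
      printf '%s' "$prompt" | claude -p -
      ;;
    ollama)
      # Hypothetical local fallback
      ollama run llama3 "$prompt"
      ;;
    *)
      echo "Unknown provider: $LLM_PROVIDER" >&2
      return 1
      ;;
  esac
}
```

With a shim like this, the next policy change means editing one `case` branch, not every script that calls `llm_complete`.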

This connects directly to something I've been thinking about since I analyzed how Emacs solves the trust problem for tools: the tools that survive decades are the ones that give you real control over your environment. The ones that don't make you dependent on decisions you never made.

The gotchas nobody says out loud

When Anthropic says "it's allowed," there are a few things worth getting clear on before celebrating too hard:

"Allowed" isn't the same as "guaranteed". The terms can change again. Build your systems assuming they will change. Not out of paranoia — out of honest architecture.

The gray zone didn't disappear, it moved. Now the question is what counts as "reasonable" CLI usage and what starts to look like aggressive scraping or abuse. Those limits are still ambiguous.

Rate limits and costs are the new battleground. It's one thing for it to be allowed, another for it to be economically viable at scale. I had to learn this the hard way with an API bill that nearly went through the roof one Sunday night because a looping script had no backoff.

```bash
# Exponential backoff — learn it cheap or learn it expensive
# I learned it expensive

claude_with_retry() {
  local attempt=0
  local max_attempts=5
  local wait=1

  while [ "$attempt" -lt "$max_attempts" ]; do
    # Try the Claude call
    result=$(claude "$@" 2>&1)
    code=$?

    if [ $code -eq 0 ]; then
      echo "$result"
      return 0
    fi

    # If it's a rate limit, wait with exponential backoff
    if echo "$result" | grep -q "rate_limit"; then
      echo "Rate limit hit. Waiting ${wait}s..." >&2
      sleep "$wait"
      wait=$((wait * 2))  # Double the wait time
      attempt=$((attempt + 1))
    else
      # Error that isn't a rate limit — don't retry
      echo "Error: $result" >&2
      return 1
    fi
  done

  echo "Max retries reached" >&2
  return 1
}
```

Your agent's identity matters more than you think. With the CLI, the context of "who is calling" becomes more opaque. I've been thinking about this since I wrote about inverted CAPTCHAs for AI agents: the identity problem doesn't go away because Anthropic says CLI usage is fine. The identity problem is structural.

The anonymity the CLI gives you has a brutal debugging cost. When something fails on an official SDK call to Claude, you have logs, request IDs, structure. When something fails in a bash script wrapping a CLI, you have a string in stderr and good luck.
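You can claw part of that observability back yourself: wrap every call with your own request ID, capture stderr per request, and emit one structured log line per invocation. A sketch, assuming a `claude` command and a writable `LOG_FILE` — both stand-ins for your actual setup:

```shell
#!/bin/bash
# Sketch: give CLI calls the request IDs the SDK would have given you.
# LOG_FILE and the claude invocation are assumptions for illustration.

LOG_FILE="${LOG_FILE:-claude_calls.log}"

claude_logged() {
  local req_id ts code
  req_id=$(date +%s%N)                    # cheap unique-ish ID per call
  ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)

  # Capture stderr per-request instead of losing it to the terminal
  code=0
  claude "$@" 2>"/tmp/claude_err_${req_id}" || code=$?

  # One structured line per call: timestamp, id, exit code, args
  printf '%s id=%s exit=%s args=%q\n' "$ts" "$req_id" "$code" "$*" >>"$LOG_FILE"

  if [ "$code" -ne 0 ]; then
    cat "/tmp/claude_err_${req_id}" >&2
  fi
  return "$code"
}
```

It's not SDK-grade tracing, but "grep the log for a request ID" beats "a string in stderr and good luck."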

FAQ: Claude CLI usage policy and what you actually need to know

What exactly did Anthropic reverse about Claude CLI usage?
Anthropic clarified that the usage pattern popularized by tools like OpenClaw — accessing Claude through command-line interfaces, scripts, and wrappers that automate interaction — is allowed within their terms of use. Previously, the terms were ambiguous enough to generate legitimate uncertainty about whether this type of use was authorized.

Can I use any CLI tool with Claude without restrictions?
Not exactly. "Allowed" comes with implicit and explicit conditions: you can't use it to generate spam, you can't do aggressive scraping of other services via Claude, and the rate limits of whatever plan you're on still apply. The reversal opens the door, it doesn't knock it down.

What happens if Anthropic changes its position again?
That's exactly the right question. If you built your workflow so that Claude CLI is a direct, unabstracted dependency, a policy change forces you to rewrite. If you built with an abstraction layer (an adapter that can point to Claude, GPT, a local model), the policy change becomes a configuration problem, not an architecture problem.

Is it better to use Anthropic's official SDK than the CLI for production?
For production: yes, almost always. The SDK gives you typing, structured error handling, logging, and an interface that won't silently change when you update a CLI version. For experimentation and local development: the CLI is fantastic. The distinction matters.

How does this affect data I send to Claude via CLI?
Same as any other access method: Anthropic has access to the prompts you send for safety monitoring, unless you have a specific enterprise agreement that limits that. This didn't change with the policy reversal. If you're sending proprietary code or sensitive data, review the privacy terms regardless of whether you're using CLI, SDK, or the web interface. I talked about something similar when the Notion email leak scandal broke: the data surface exposed by using third-party tools is always larger than we think.

Does this mean Anthropic is "on the side" of developers?
Anthropic clearly wants an active developer ecosystem — they have too much economic incentive not to. But "on your side" implies an alignment of interests that's more complicated than that. They're building a business with investors, with regulatory considerations, with legitimate safety pressures. They can be genuinely in favor of creative API usage and at the same time make decisions that affect you without consulting you. Both things are true.

What I learned watching this since 1994

When I was 5 and my dad showed me the Amiga, the rules about what you could do with hardware were physical. Either the processor supported it or it didn't. There was no lawyer at Commodore deciding whether my use case was terms-compliant.

Today I build on language models whose terms of use are living documents, interpreted by legal teams that sometimes have zero technical context for what developers are actually doing in practice. And those documents can change faster than my deploys.

Anthropic's reversal on Claude CLI usage is good news. Genuinely. But it leaves me with a stronger conviction than before: abstraction isn't an architectural luxury, it's a survival necessity. If your system only works because Anthropic says it's fine today, your system has a design problem that no policy announcement can fix.

Build an abstraction layer. Document your policy dependencies the same way you document your code dependencies. And keep an eye on changelogs — even if Anthropic doesn't send them to you by email.
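Documenting policy dependencies can be as low-tech as a tracked text file plus a script that nags when a review goes stale. A sketch of one possible convention — the POLICY-DEPS.txt format is invented for this example, and `date -d` assumes GNU coreutils:

```shell
#!/bin/bash
# Sketch: nag yourself when a policy dependency hasn't been re-reviewed.
# Invented POLICY-DEPS.txt format, one dependency per line:
#   <provider> <policy-name> <last-reviewed YYYY-MM-DD>

check_policy_deps() {
  local file="${1:-POLICY-DEPS.txt}"
  local max_age_days="${2:-90}"
  local now stale provider policy reviewed reviewed_ts age_days
  now=$(date +%s)
  stale=0

  while read -r provider policy reviewed; do
    if [ -z "$provider" ]; then continue; fi
    # Skip lines whose date we can't parse rather than crashing
    reviewed_ts=$(date -d "$reviewed" +%s 2>/dev/null) || continue
    age_days=$(( (now - reviewed_ts) / 86400 ))
    if [ "$age_days" -gt "$max_age_days" ]; then
      echo "STALE: $provider ($policy) last reviewed $reviewed" >&2
      stale=1
    fi
  done < "$file"

  return "$stale"
}
```

Run it in CI next to your dependency audit; a nonzero exit means some provider's terms haven't been looked at in too long.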

The workflow didn't change. It still hasn't changed. But the next policy shift is going to find me a little more prepared for when it doesn't go my way.

