Claude Code v2.1.139 added /goal, which sets a completion condition and keeps the agent working across turns until the metric is met
claude agents puts every running, blocked, or finished session in one pane, replacing tab-hopping across terminals
Five solo-studio loops where /goal earns its keep: blog publishing, image pipeline, garden-walk link sweep, syndication retry, and storefront audit
Smaller v2.1.139 wins worth turning on today: per-skill token costs in /context, transcript jump shortcuts, exec-form hooks, and hot-reload MCP
v2.1.140 on May 12 fixed /goal silently hanging when disableAllHooks was set, so upgrade before you build a loop you actually depend on
Claude Code 2.1.139 shipped on May 11, and 2.1.140 followed a day later to patch the rough edges. The release adds two features that change how I run the studio: a /goal command that lets you tell Claude "do not stop until X is true," and an agent view that puts every Claude Code session in one list. Here is what each does and the five workflows where I am already getting compound time back.
What /goal Actually Changes
/goal sets a completion condition. Claude keeps working turn after turn until the metric is met, the user cancels, or the model runs out of tokens. The command works in three places: interactive mode, the headless -p flag, and Remote Control. While the loop runs, a live overlay shows elapsed time, turn count, and tokens consumed, so you can pull the plug before a runaway agent burns through your budget.
Before /goal, my standard pattern was "ask, review, ask again." If I wanted three blog articles published, I prompted, watched, corrected, prompted again. Each transition was a context switch and a chance for me to forget what I had asked the first time. /goal turns the prompt itself into the success criterion. I write the rule once: "do not stop until three new articles are live and the verify script passes for each handle." Claude takes it from there, and the overlay tells me what I am paying for in real time.
The wrong way to use /goal is to write a fuzzy completion condition like "improve the site." The model has no way to know when it is done, and the loop will only stop when you cancel. The right way is to anchor every goal to a check you can script: "until grep returns zero matches," "until the API returns 200," "until the JSON file lists ten entries." Treat the success condition like a unit-test assertion.
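That discipline can be sketched as a tiny verify script whose exit code is the stop condition. Everything here is a hypothetical example (the verify.sh name and the drafts/ layout are my assumptions, not part of the release):

```shell
# Hypothetical stop condition: the goal is met when no drafts remain.
# Point /goal at a check like this, not at a fuzzy instruction.
mkdir -p drafts
cat > verify.sh <<'EOF'
#!/bin/sh
# Exit 0 (done) only when the drafts directory is empty.
test $(ls -A drafts 2>/dev/null | wc -l) -eq 0
EOF
chmod +x verify.sh

# Headless invocation (commented out here; requires the claude CLI):
# claude -p '/goal "publish every draft; done when ./verify.sh exits 0"'
```

The agent can run the script between turns, and you can run the same script yourself to audit what the loop claims it finished.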
Agent View: One Pane for Every Session
The second big addition is claude agents, a research-preview command that lists every Claude Code session you have open: running, blocked on your input, or completed. The output looks like a process viewer, and you switch between sessions instead of hunting for the right terminal tab.
I usually have three or four sessions in flight: one publishing, one reviewing a pull request, one in a worktree on a refactor, one running a /loop on something boring. Before agent view, I tracked them by memory and tab title. Now I run claude agents, see which one is blocked on me, jump in, unblock, and bounce out. Sessions that finished without me are flagged so I can sweep them at end of day. It is a small UX change that pays back disproportionately if you are the kind of person who runs more than one agent in parallel.
The pairing with /goal matters. If you fire a long /goal loop and walk away, agent view is the dashboard that tells you whether it is still chugging, blocked on a permission prompt, or finished an hour ago. Two features, same release, designed for each other.
5 Solo-Studio Workflows That Get Sharper
Here are the loops I rewrote this week.
1. Blog publish gate. /goal "three articles in /tmp/blog-posts/ are published, OG images uploaded, and the verify script passes for each handle." The publish flow already runs auto-checks for TLDR, word count, and brand voice. /goal turns the whole pipeline into one prompt and a coffee break. Pairs well with the patterns in Claude Managed Agents Just Got Dreams, 20-Way Parallelism, and Self-Checking Loops when you want each article handled by a different specialist.
2. Image pipeline. /goal "every article published in the last 14 days has a featured image set; rerun the OG generator until none are missing." Image generation fails on a percentage of runs because of API rate limits or template parsing. /goal absorbs the retries instead of me babysitting.
3. Garden-walk link sweep. /goal "no orphan articles remain in the latest garden-walk report." The walk script lists articles with zero inbound links. The fix is mechanical: add two cluster-mates and one product link. /goal walks the report and closes orphans one by one until the list is empty.
4. Syndication retry. /goal "every blog article published this week has a published_id in the Dev.to tracker and the Hashnode tracker." The auto-syndicate cron writes those IDs on success. When it fails, the cell is empty. /goal sweeps the tracker, retries every empty cell, and stops when the column is clean.
5. Storefront audit. /goal "no Liquid section uses #fff or em dashes; replace each violation with the brand token." The brand-check hook flags these on save, but legacy sections still leak old colors. /goal does the migration in one pass and reports the file list when it is done.
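All five loops reduce to the same shape: a report or query whose output shrinks to empty. A sketch of the link-sweep check, assuming a hypothetical orphan-report.txt that the walk script writes with one handle per line:

```shell
# Sample report (in the real loop, the garden-walk script writes this).
printf 'old-post\nforgotten-note\n' > orphan-report.txt

# The stop condition /goal should check: empty report means sweep done.
if [ -s orphan-report.txt ]; then
  echo "orphans remain: $(wc -l < orphan-report.txt | tr -d ' ')"
else
  echo "sweep complete"
fi
```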
Each of these used to be a 30-to-60-minute babysit. Now they are a sentence and a glance at the overlay. For more on running multiple Claude sessions in parallel, see Claude Result Loops + Rubrics: 5 Self-Eval Patterns.
One pattern that keeps repeating: the loops that work best have a clear, machine-checkable stop condition and a small enough scope that the agent does not drift into adjacent work. "Fix every orphan article" works. "Improve the blog" does not. When in doubt, give /goal the exact shell command it should run to verify done. The overlay shows you the token cost as the loop progresses, which is the feedback you need to learn how much each kind of work actually costs. After a week of this, I have a rough number per loop type: image sweep is cheap, link sweep is moderate, audit-and-rewrite is the expensive one. That visibility used to require a tracing setup. Now it is in the box.
The doubled rate limits Anthropic announced last week are what make these loops affordable. /goal can chew through a long backlog without me pacing it manually, because the 5-hour Pro and Max windows now hold up. Two months ago I would have hit the ceiling on the second loop.
The Smaller Wins That Compound
The release also shipped quieter improvements that are easy to miss but worth turning on today.
Per-skill token costs. /context all now shows a per-skill token estimate, model-aware and rounded. If you load a lot of skills, this is the first time the harness tells you which ones are actually expensive. I trimmed two skills out of my default rotation after running it once.
Plugin inspection. claude plugin details shows the inventory of components a plugin ships and the per-session token cost. Useful when you suspect a third-party plugin is bloating your context window.
Transcript navigation. Press ? to see shortcuts, { and } to jump between prompts in a transcript, v to toggle the shortcut panel. Small change, huge for reading back long sessions.
Exec-form hooks. Hook definitions now accept an args: string[] field, which spawns the command directly without a shell. Faster, safer, no quoting traps. I migrated my brand-check hook the same afternoon.
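A sketch of what that migration might look like in a settings file. Only the args field comes from the release notes; the surrounding hook shape, event name, and the brand-check script path are assumptions for illustration, so check the current schema before copying:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/brand-check.sh",
            "args": ["--fix", "--token", "brand-primary"]
          }
        ]
      }
    ]
  }
}
```

Because args is handed to the command directly, no shell ever parses the strings, so a path with spaces or a stray $ cannot be reinterpreted.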
MCP hot reload. /mcp Reconnect picks up .mcp.json edits without restarting Claude. Combined with the new CLAUDE_PROJECT_DIR environment variable that stdio servers automatically receive, writing a project-aware MCP server is finally pleasant. Background on Claude Code release cadence and what shipped last month: Claude Code's /recap and 1-Hour Caching.
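The environment variable makes path resolution trivial in a stdio server's launcher. A minimal sketch; only CLAUDE_PROJECT_DIR comes from the release notes, while the fallback and the per-project config name are hypothetical:

```shell
# A stdio MCP server can resolve project-relative paths from the
# environment instead of hardcoding them. CLAUDE_PROJECT_DIR is set
# by the harness; fall back to the working directory elsewhere.
PROJECT_DIR="${CLAUDE_PROJECT_DIR:-$PWD}"
CONFIG="$PROJECT_DIR/.mcp-server.conf"   # hypothetical per-project config
echo "project root: $PROJECT_DIR"
```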
The /goal hook bug. v2.1.139 had a sharp edge: if you set disableAllHooks or allowManagedHooksOnly, /goal would silently hang. v2.1.140 surfaces a clear message instead. If you are on 2.1.139, upgrade before you wire /goal into anything you depend on. The full topic hub lives at the Claude Code cluster on the Lab Overview.
Bottom Line
/goal and agent view together change what I expect from a single prompt. The old shape was "do this one thing and report back." The new shape is "do not stop until the metric is met, and I will check the dashboard." For a solo studio running blog, image, syndication, and storefront loops in parallel, that is the right shape. The two features are small on paper and large in practice.
If you want to dry-run /goal on a low-risk loop today, point it at your dead-link sweep, your unused asset cleanup, or your image-alt-text backfill. Anything where the success condition is greppable and the work is mechanical. You will know within an hour whether you trust it on something bigger.