DEV Community

Akshay Kumar BM
Claude Code Just Got 4 Huge Upgrades — Here's What Each One Actually Does

I've been using Claude Code for a while now, and most updates are incremental — a quality-of-life tweak here, a small model improvement there. But the v2.1.71 drop (March 2026) genuinely changed how I think about what Claude Code is.

Four features landed at once. Together they shift Claude Code from "chat that helps me code" to something closer to an autonomous system that keeps working while I'm not. Let me walk through each one so you know exactly when and why to use it.


🧠 Feature 1: /loop — Recurring Prompts Inside Your Session

Before this, if you wanted Claude to check something repeatedly, you had to keep coming back, typing the same prompt, waiting, coming back again. Tedious.

/loop changes that. You give it a task in plain English and it sets up a cron job inside your current session that fires automatically.

# Check inbox every 10 minutes — no babysitting needed
/loop every 10 minutes check my inbox for important emails

# Daily content repurposing — runs the skill automatically
/loop every day check my YouTube for new videos and run my content repurposing skill

# One-off reminder (not just recurring)
/loop in 5 minutes remind me to update the PR description

Under the hood, Claude uses three tools: cron_create, cron_list, and cron_delete. Your plain-English schedule is converted automatically into a standard 5-field cron expression with minute-level precision.
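Claude handles this conversion internally, but if you want to sanity-check what a phrase becomes, here's an illustrative sketch. The mapping below is my own toy lookup, not Claude Code's actual converter — it just shows what a cron_create call would plausibly receive:

```python
# Illustrative only: how a few natural-language schedules map onto standard
# 5-field cron expressions (minute, hour, day-of-month, month, day-of-week).
# NOT Claude Code's internal converter -- a hypothetical mapping for clarity.

PHRASE_TO_CRON = {
    "every 10 minutes": "*/10 * * * *",   # */10 steps the minute field by 10
    "every day at 9am": "0 9 * * *",      # minute 0, hour 9, any day
    "every monday at 9am": "0 9 * * 1",   # day-of-week 1 = Monday
}

def to_cron(phrase: str) -> str:
    """Look up the cron expression for a known phrase (toy example)."""
    return PHRASE_TO_CRON[phrase.lower()]

print(to_cron("every 10 minutes"))  # */10 * * * *
```

The `*/10` step syntax is what gives you the minute-level precision mentioned above.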

sequenceDiagram
    participant Dev as Developer
    participant CC as Claude Code
    participant Cron as Cron Engine

    Dev->>CC: /loop every 10 minutes check my inbox
    CC->>Cron: cron_create("*/10 * * * *", task)
    loop Every 10 minutes
        Cron->>CC: Fire task
        CC->>Dev: Results / notification
    end

Limitations to know going in:

  • Expires after 3 days — intentional safety cap (no runaway loops)
  • Session-only — close the terminal and all loops are gone
  • No catch-up — if a loop fires while the session is closed, it just disappears

📌 Pro Tip: Think of /loop as "help me right now on this project." Sprint monitoring, inbox watching, checking build status during a deploy — perfect. Long-term automation? That's next.


🤖 Feature 2: Scheduled Tasks — Permanent Daily/Weekly Automation

/loop is great for bursts. But what about running a skill every Monday morning? Or a daily standup summary at 9 AM? That needs something more permanent.

Scheduled Tasks is that something. You define a task once — with a name, prompt, model, schedule, and folder — and it runs forever. Each run spawns a fresh Claude instance, reads your project files, executes the prompt, and shuts down cleanly.

Here's a task I'd actually set up:

Name: Repurpose Videos
Prompt: Check my YouTube for new videos and run my content repurposing skill
         to create a newsletter and a tweet thread
Frequency: Daily at 9:00 AM
Model: Claude Sonnet

Claude is smart enough to pull in the right skills automatically based on your prompt wording — it finds the YouTube tool and the content skill without you naming them explicitly.

graph TD
    A[Scheduled Task fires at 9AM] --> B[Fresh Claude instance starts]
    B --> C[Read project files]
    C --> D[Match skills from prompt context]
    D --> E[Execute task]
    E --> F[Save output]
    F --> G[Session ends]

Current limitations:

  • Desktop App only — doesn't work from the terminal or VS Code extension yet (this will change)
  • Computer must be on and app must be open
  • Does catch up — miss a run? It queues and runs when you reopen. Unlike /loop, nothing is lost.
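The catch-up behavior is worth pausing on, since it's the big contrast with /loop. A reasonable mental model (an assumption on my part, not Claude Code's actual implementation) is: on reopen, compare the last successful run against the schedule and queue every fire time that was missed.

```python
# Sketch of the catch-up idea described above -- an assumed mechanism,
# not Claude Code's real implementation.
from datetime import datetime, timedelta

def missed_runs(last_run: datetime, now: datetime, hour: int = 9) -> list[datetime]:
    """Return the daily <hour>:00 fire times that fell between last_run and now."""
    missed = []
    day = last_run.date() + timedelta(days=1)  # start the day after the last run
    while True:
        fire = datetime.combine(day, datetime.min.time()).replace(hour=hour)
        if fire > now:
            break
        missed.append(fire)
        day += timedelta(days=1)
    return missed

# Laptop closed for two days: both 9 AM runs get queued on reopen.
queued = missed_runs(datetime(2026, 3, 2, 9, 0), datetime(2026, 3, 4, 10, 0))
print(len(queued))  # 2
```

With /loop, those two fire times would simply be gone.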

/loop vs Scheduled Tasks — Quick Reference

|             | /loop                  | Scheduled Task        |
|-------------|------------------------|-----------------------|
| Scope       | Current session        | Persistent            |
| Instance    | Same throughout        | Fresh each run        |
| Missed runs | Lost forever           | Catches up            |
| Expiry      | 3 days                 | No expiry             |
| Where       | Terminal / VS Code     | Desktop App only      |
| Best for    | Right now, this sprint | Every day, every week |
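If you want the table distilled to a rule of thumb, it boils down to two questions: does the task need to outlive this session, and are you in the Desktop App? Here's that logic as a toy helper (my own restatement of the table, not any official API):

```python
# Toy decision helper distilled from the comparison table above.
# Purely illustrative -- not part of Claude Code.

def pick_scheduler(persistent: bool, needs_catch_up: bool, desktop_app: bool) -> str:
    """Scheduled Tasks need the Desktop App; everything short-lived fits /loop."""
    if (persistent or needs_catch_up) and desktop_app:
        return "Scheduled Task"
    return "/loop"

print(pick_scheduler(persistent=True, needs_catch_up=True, desktop_app=True))    # Scheduled Task
print(pick_scheduler(persistent=False, needs_catch_up=False, desktop_app=False)) # /loop
```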

🛠 Feature 3: Google Workspace CLI — Actually Talk to Your Google Stuff

This one surprised me because it's not strictly a Claude Code update — but it's a massive gap filler.

Until now, Claude Code's built-in skills could manage Gmail and Calendar. But Google Drive, Docs, Sheets, Slides? Nothing. And if you've tried building around the Google APIs directly, you know the pain: documents come out with raw markdown, you need extra API calls just to make them look right.

Google released an open-source Workspace CLI that changes the whole equation. One tool. Entire Google ecosystem. 100+ built-in recipes.

# Option A: Install in terminal directly
# (run the install command for the Google Workspace CLI)

# Option B: Just ask Claude Code
"Set up the Google Workspace CLI for me"
# It walks you through the whole thing

The key difference from API-based approaches:

API approach (n8n, etc.):
  Claude → API call → raw markdown → extra formatting calls → "okay" doc

Google CLI approach:
  Claude → bash command → Google → properly formatted doc with headers,
           images, links, tables — done in one step

📌 Pro Tip: Even though Google labels it "not officially supported," that's just because it's in beta. It works, it's open-source, and the setup is fast. Worth it immediately.


💻 Feature 4: Skills 2.0 — Stop Guessing, Start Measuring

This is the one I'm most excited about. And honestly, the one that's going to save me the most time.

Here's the old loop everyone was stuck in:

Build skill → run it → output is okay-ish → tweak something → run again → still not quite right → keep guessing forever

Skills 2.0 breaks that cycle with built-in eval and testing. Anthropic updated their own skill-creator skill to include this. You now get actual scored results against specific criteria — not just "it worked" or "it didn't."

How to run a useful eval

The key is being specific. Vague prompt = generic, useless results.

Don't do this:

Run some tests on my copywriting skill

Do this:

Run a new test optimized for checking whether my copy follows the persuasive
techniques in my persuasion_toolkit.md reference file.

Criteria:
- Does it use the reference file first?
- Does it include curiosity gaps and open loops?
- How often does it use proof or founder-led stories?

Test: Write landing page copy for my school community
Runs: 5 times

Claude launches 5 sub-agents in parallel to run the skill, then returns an interactive HTML report:

[Image: HTML eval report showing per-run scores and pass/fail breakdown]
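The fan-out pattern here — run the same skill N times concurrently, collect every result — is worth seeing in miniature. In this sketch, run_skill is a hypothetical stand-in for one sub-agent invocation; Claude Code does the real orchestration internally:

```python
# Sketch of the parallel fan-out described above. run_skill is a hypothetical
# stand-in for a sub-agent run; a real run would return output plus grades.
from concurrent.futures import ThreadPoolExecutor

def run_skill(run_id: int) -> dict:
    # Fake deterministic score so the pattern is visible without a live agent.
    return {"run": run_id, "score": 7 + run_id % 3}

# Launch 5 runs in parallel; pool.map preserves run order in the results.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(run_skill, range(5)))

print([r["score"] for r in results])  # [7, 8, 9, 7, 8]
```

The payoff of running 5 rather than 1 is variance: a skill that passes once might still fail three times out of five, and you only see that with repeated runs.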

The report shows:

  • Per-run output (flick between all 5)
  • Formal grades section with specific failure reasons
  • Benchmark tab: run time and token count per run
  • A/B comparison with and without the skill

Real example output

Curiosity gaps: 50% pass rate (6/12 checks)

Failure reason: The copy lacks genuine curiosity gaps that sustain
across multiple sentences. It teases but immediately reveals.

Action: Strengthen the curiosity gap section in persuasion_toolkit.md
        or add explicit instruction to skill.md
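The arithmetic behind that "50% pass rate (6/12 checks)" line is just a boolean aggregation over individual criterion checks. A minimal sketch of the shape (hypothetical — the real report is interactive HTML):

```python
# Illustrative aggregation matching the example report above:
# 12 boolean checks, 6 passing -> 50% pass rate.

def pass_rate(checks: list[bool]) -> str:
    """Summarize a criterion's checks as 'passed/total checks (N% pass rate)'."""
    passed = sum(checks)
    return f"{passed}/{len(checks)} checks ({100 * passed // len(checks)}% pass rate)"

curiosity_gap_checks = [True] * 6 + [False] * 6  # 6 of 12 checks passed
print(pass_rate(curiosity_gap_checks))  # 6/12 checks (50% pass rate)
```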

The A/B test workflow

Split terminal, two Claude sessions:

# Session A: with skill
Create landing page copy using the copywriting skill.
Score against persuasion toolkit adherence.

# Session B: without skill
Create landing page copy without any skill.
Score against the same criteria.

This answers: Is my skill actually helping, or just burning tokens?

The iteration loop that actually works

graph LR
    A[Build skill.md] --> B[Run eval: 1-3 criteria]
    B --> C[Review HTML report]
    C --> D[Paste specific feedback to Claude]
    D --> E[Claude edits skill.md]
    E --> B
    B --> F{Score ≥ 9/10?}
    F -->|Yes| G[Ship it]
    F -->|No| D

📌 Pro Tip: Only optimize for 1–3 criteria per eval run. More moving parts = harder to isolate what changed. Get one thing to 9/10, then move to the next.
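The whole iteration loop can be restated as a sketch. Here score_of and improve are hypothetical placeholders for "run the eval" and "paste the report's feedback back to Claude so it edits skill.md" — the structure is what matters:

```python
# The iteration loop from the diagram, as a sketch. score_of and improve are
# hypothetical placeholders for "run eval" and "apply report feedback".

def iterate(score_of, improve, target=9, max_rounds=10):
    """Keep improving the skill until an eval run hits the target score."""
    skill = "v0"
    score = score_of(skill)
    for _ in range(max_rounds):
        if score >= target:
            return skill, score  # ship it
        skill = improve(skill)
        score = score_of(skill)
    return skill, score

# Toy demo: each feedback round lifts the score by one point, starting at 6.
versions = {"v0": 6, "v1": 7, "v2": 8, "v3": 9}
nxt = {"v0": "v1", "v1": "v2", "v2": "v3"}
print(iterate(versions.get, nxt.get))  # ('v3', 9)
```

The max_rounds cap mirrors the pro tip: if you're not converging after a handful of rounds, you're probably optimizing too many criteria at once.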


✅ Why This Set of Updates Matters

These four features aren't independent tweaks. They stack:

✅ /loop handles the "watch this for me right now" use case
✅ Scheduled Tasks handles the "do this every morning" use case
✅ Google Workspace CLI removes the biggest integration gap
✅ Skills 2.0 turns skill-building from guesswork into engineering

Before these, Claude Code was a very capable assistant you had to keep talking to. Now it's infrastructure. You set it up, point it at your workflow, and it keeps running.

The shift from "tool I use" to "system that works for me" is a real one — and these four updates are what got it there.


👋 Let's Connect

If you found this helpful or want to talk about how you're using Claude Code in your workflow, I'd love to connect!

🔗 Connect with me on LinkedIn
📧 Email: akshaykumarbedre.bm@gmail.com

Let's build something amazing together. 🚀
