DEV Community

Venkata Pavan Kumar Gummadi

How to Ship Features Faster with AI Coding Assistants and Git: A 90% First‑Try Workflow

An updated, battle-tested workflow for using Claude Code to ship four features in ~30 minutes, refreshed for Opus 4.6, Sonnet 4.6, plugins, skills, and the new GitHub Actions integration.

I use Claude Code to ship four small-to-medium features (or bug fixes) in roughly 25–35 minutes, with a first-try success rate around 90–95%. The trick isn't faster typing or a better model — it's spending most of my human time on the issue, not the code, and then letting Claude Code run autonomously across multiple Git worktrees in parallel.

In this article I'll walk through:

  1. The philosophy (orchestrate, don't micromanage)
  2. The two custom slash commands that do all the work
  3. How to wire it into GitHub Actions so the same workflow runs in CI
  4. What's actually changed in 2026 and why it matters

The full repo with commands, workflows, and a CLAUDE.md template is linked at the bottom.

Why this still works (and why it works better now)

The original premise hasn't changed: AI coding assistants reach their highest success rate when the specification is good. A vague prompt produces vague code. A precise, well-scoped GitHub issue produces production-ready code on the first try, most of the time.

What has changed since 2025 is the ground underneath this workflow:

  • Claude Opus 4.6 and Sonnet 4.6 ship with a 1M-token context window in beta, which means an entire mid-sized codebase can fit in a single planning session without aggressive trimming.
  • Claude Code plugins and skills let you package custom commands, agents, and project conventions as reusable, version-controlled units instead of scattered prompt files.
  • /rewind (added in late 2025) means you can undo a bad agent run in one keystroke instead of fighting Git.
  • The official anthropics/claude-code-action runs the same Claude Code runtime inside GitHub Actions, so the local workflow described here can be triggered from a PR comment by anyone on your team.
  • Headless mode (claude -p) is now stable enough to drive long-running, scheduled jobs — the same /solve-issue command I run locally also works in CI.
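For the scheduled-job case, a headless run is roughly a one-liner. This is a sketch, not a canonical invocation: the issue number and log path are placeholders, and the flags mirror the local workflow described below.

```shell
# Headless sketch: run the /solve-issue command non-interactively, e.g. from
# cron or a CI step. Degrades cleanly when the claude CLI isn't on PATH.
if command -v claude >/dev/null 2>&1; then
  claude -p "/solve-issue 123" --dangerously-skip-permissions \
    > solve-issue-123.log 2>&1
else
  echo "claude CLI not found; nothing to do" >&2
fi
```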

The workflow below assumes Claude Code v2.x or newer and a project that already has decent documentation: a PRD, a UI spec, and a software requirements doc. If your project has none of those, fix that first. Claude Code is roughly as good as the context you give it, and architectural decisions made in a vacuum are exactly where AI-generated code falls apart.

The philosophy: orchestrate, don't micromanage

Most developers use Claude Code like a fancy autocomplete — they type a prompt, watch every token, and interrupt the moment something looks off. That's a perfectly valid way to use the tool, but it caps your output at roughly the speed you can read.

The alternative is to treat Claude Code like a junior engineer you've handed a well-written ticket to. You don't read over their shoulder. You wait for the PR. The job of the human shifts from writing code to:

  • Defining what "done" means in concrete, testable terms
  • Picking the right architectural seam
  • Reviewing the diff with a clear head, not while it's being written

The two custom slash commands below codify exactly that split.

Step 1 — /create-issue: the highest-leverage 5 minutes you'll spend all day

This is the only step where I spend significant human attention. The command tells Claude Code to read the codebase, classify the request, decompose it into atomic tasks with acceptance criteria, and produce a GitHub issue I can review before it's filed.

Save this as .claude/commands/create-issue.md in your repo:

---
description: "Create a well-scoped GitHub issue from a freeform description"
argument-hint: <description of what you want built>
allowed-tools: Bash, Read, Glob, Grep, WebSearch
---

You are creating a GitHub issue from this description:

$ARGUMENTS

Follow these steps in order. Do not skip any.

## 1. Understand the project
- Detect the Git repo and remote (`git remote -v`)
- Read README.md, CLAUDE.md, and any docs/ folder
- Identify the project's stack and conventions

## 2. Explore the codebase
- Find files related to the request
- Look for similar existing patterns
- Note any dependencies or modules that will be touched

## 3. Classify
Decide whether this is a Bug, Feature, Enhancement, or Task.

## 4. Decompose
Break the work into atomic, independently testable tasks.
Each task should be one logical commit's worth of work.
Include both implementation and test tasks.

## 5. Define acceptance criteria
For each major component, write testable success conditions.
Cover the happy path, the obvious edge cases, and at least one failure mode.

## 6. Draft the issue
Format as:

    Title: <concise, action-oriented>
    Type: <Bug | Feature | Enhancement | Task>

    ## Context
    <why this matters, in 2-3 sentences>

    ## Tasks
    - [ ] ...
    - [ ] ...

    ## Acceptance Criteria
    - [ ] ...
    - [ ] ...

    ## Technical Notes
    <anything Claude Code will need to know to implement this>

    ## Related Files
    - path/to/file1
    - path/to/file2

## 7. Show me the draft
Print the full issue and ask: "Create this issue? (yes/no)"

## 8. File it
On confirmation, use `gh issue create` to file it and print the URL.
If `gh` isn't available, print the body for manual creation.

The output format is the entire point. By the time I confirm, the issue contains everything Claude Code will need on the second pass: scope, file list, test conditions, edge cases. There is no ambiguity left to misinterpret.

For four issues this step takes me roughly 10–20 minutes total. I sometimes iterate on the draft with Opus 4.6 to add nuance — that's what I'm paying for, and it's where the success rate comes from.
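For reference, invoking the command inside a Claude Code session is a single line; the feature description below is an invented example:

```text
/create-issue Add CSV export to the reports page, mirroring the existing PDF export flow
```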

Step 2 — /solve-issue: autonomous implementation

Once the issues are filed, the implementation step is almost entirely hands-off. Save this as .claude/commands/solve-issue.md:

---
description: Implement a GitHub issue end-to-end in an isolated worktree
argument-hint: <github issue url or number>
allowed-tools: Bash, Read, Write, Edit, Glob, Grep, WebFetch
---

Implement GitHub issue: $ARGUMENTS

Follow this 5-stage process. Commit at the end of each substantive step.

## 1. PLAN
- `gh issue view $ARGUMENTS` to fetch the issue
- Search for related PRs: `gh pr list --search "..."`
- Read the relevant files
- Write the plan to a comment on the issue before starting

## 2. WORKTREE + BRANCH
- Create a new worktree off main: `git worktree add ../wt-issue-<n> -b feat/issue-<n>`
- `cd` into it
- All work happens in the worktree

## 3. IMPLEMENT
- Work the plan in small, reviewable steps
- Commit after each step with a clear message
- Update README.md and docs/ as you go

## 4. TEST
- For UI changes, drive the browser via the Playwright MCP server
- Write or extend unit + integration tests
- Run the full suite and fix anything that breaks
- Do not move on until everything is green

## 5. LINT + DEPLOY
- Run the project's linter and fix all errors and warnings
- `gh pr create` with a description that links back to the issue
- Add `Closes #<n>` to the PR body

I run this with --dangerously-skip-permissions once per issue. The name is intentionally scary, but the safety net is real: every change is committed in an isolated worktree, and /rewind can undo the whole session if something goes sideways.

Four issues, four worktrees, all running in parallel terminals. Implementation, tests, lint, and PR creation: roughly 15 minutes for the batch.
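If the worktree mechanics in stage 2 are new to you, here is the full lifecycle as a self-contained sketch. It builds a throwaway repo so the commands can run anywhere; in practice you'd run only the `git worktree` lines from your real repo root, and the issue numbers are placeholders.

```shell
#!/usr/bin/env sh
set -e

# Throwaway repo so the sketch is runnable anywhere.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"
git branch -M main

# One worktree + branch per issue, both created off main:
git worktree add -b feat/issue-101 ../wt-issue-101 main
git worktree add -b feat/issue-102 ../wt-issue-102 main
git worktree list   # sanity check: one line per checkout

# After a PR merges, tear that stream down:
git worktree remove ../wt-issue-101
git branch -d feat/issue-101
```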

A note on parallelism: four is not a benchmark; it's the number where I personally stop losing track. More worktrees mean more potential merge conflicts when the PRs land. Start with two and work up.

Step 3 — CLAUDE.md: project memory

Every project I work on has a CLAUDE.md at the root. This is the single highest-ROI file in the repo for anyone using AI coding assistants. Mine usually contains:

  • How to bring the dev environment up (Docker commands, env files, ports)
  • How to run the test suite, the linter, and the formatter
  • How to log into the local app (e.g. docker compose drush uli for Drupal, a seeded admin for Rails, etc.)
  • Project-specific conventions: directory layout, naming, where new components go
  • Anything that bit me once and I never want to explain to an agent again
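A minimal sketch of the shape, with placeholder commands and paths; swap in your project's real ones:

```markdown
# CLAUDE.md

## Dev environment
- `docker compose up -d` starts the app on http://localhost:8080 (placeholder port)
- Copy `.env.example` to `.env` before the first run

## Commands
- Tests: `docker compose exec app npm test`
- Lint: `docker compose exec app npm run lint`

## Conventions
- New UI components live under `src/components/<Feature>/`
- Never hand-edit anything under `src/generated/`
```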

A CLAUDE.md template is included in the repo at the bottom of this article.

Step 4 — Push the same workflow into CI with GitHub Actions

This is the part that didn't exist when I first wrote about this. Anthropic now ships an official GitHub Action — anthropics/claude-code-action@v1 — that runs the actual Claude Code runtime inside a GitHub Actions runner. Same binary, same tools, same CLAUDE.md, same slash commands.

Two workflows cover most of what a small team needs:

Workflow A — @claude mentions on issues and PRs

# .github/workflows/claude-mention.yml
name: Claude on mention

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]

jobs:
  claude:
    if: |
      contains(github.event.comment.body, '@claude') ||
      contains(github.event.issue.body, '@claude')
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Run Claude Code
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --model claude-opus-4-6
            --allowed-tools Bash,Read,Write,Edit,Glob,Grep

Drop @claude please implement this into any issue and the action will pick it up, plan, implement, and open a PR. It auto-detects whether it's reviewing a PR, answering a question, or implementing a feature based on context — no separate workflows needed.

Workflow B — automated PR review on every pull request

# .github/workflows/claude-review.yml
name: Claude PR review

on:
  pull_request:
    types: [opened, synchronize, reopened]
    paths-ignore:
      - "**/*.md"
      - "**/*.lock"
      - "package-lock.json"
      - "yarn.lock"
      - "pnpm-lock.yaml"

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Review with Claude
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --model claude-sonnet-4-6
          prompt: |
            Review this pull request against the project's CLAUDE.md and
            existing patterns. Look for:
            - Logic bugs and missed edge cases
            - Security issues (auth, input validation, secrets in code)
            - Test coverage gaps
            - Deviations from project conventions

            Be specific. Reference file paths and line numbers. Group by severity.
            Skip nitpicks unless they affect correctness or readability.

Sonnet 4.6 is the right choice for routine PR review — it's faster, cheaper, and accurate enough on diffs. Reserve Opus 4.6 for the heavier /solve-issue runs in Workflow A.

Setup checklist

  1. Install the Claude GitHub app on your repo: https://github.com/apps/claude (or run /install-github-app from inside Claude Code locally — it walks you through it)
  2. Add ANTHROPIC_API_KEY to your repo secrets
  3. Drop both workflows into .github/workflows/
  4. Open a test issue with @claude please add a hello-world endpoint
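Steps 2 and 4 can be done from the terminal with the GitHub CLI, if you prefer that to the web UI (assumes `gh` is authenticated for the repo and the key is already in your environment):

```shell
# Add the API key as a repo secret:
gh secret set ANTHROPIC_API_KEY --body "$ANTHROPIC_API_KEY"

# Open the smoke-test issue:
gh issue create --title "Hello-world endpoint" \
  --body "@claude please add a hello-world endpoint"
```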

Cost in 2026

The plan landscape shifted in early 2026. Quick rundown:

  • Pro ($20/month) — includes Claude Code access and is genuinely usable for hobby projects, but you'll hit rate limits during a serious 4-issue session.
  • Max ($100 or $200/month) — 5x or 20x the usage of Pro. The $100 tier is the sweet spot for a solo developer running this workflow daily.
  • Team ($30/seat/month) — now bundles Claude Code access into every standard seat.
  • API pay-per-token — what the GitHub Action consumes, billed separately. Routine PR reviews on Sonnet 4.6 typically come in well under 10 cents apiece.
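To make "well under 10 cents" concrete, here's a back-of-envelope sketch. The per-token rates and token counts are assumptions for illustration; check Anthropic's current price sheet before relying on them.

```shell
# Assume ~20k input tokens (diff + CLAUDE.md + prompt) and ~2k output tokens,
# at assumed Sonnet-class rates of $3 / $15 per million input / output tokens.
awk 'BEGIN {
  in_rate  = 3.00                              # USD per Mtok in (assumed)
  out_rate = 15.00                             # USD per Mtok out (assumed)
  cost = 20000/1e6 * in_rate + 2000/1e6 * out_rate
  printf "$%.2f per review\n", cost            # prints "$0.09 per review"
}'
```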

One important note on subscriptions and CI: as of April 2026, Anthropic explicitly blocks subscription tokens from being used by third-party CLI tools and runners. Use a real API key for anything running outside the official Claude Code client.

"90–95% first-try success" is for low-to-mid complexity work. Anything that touches three subsystems, requires a real architectural decision, or interacts with a flaky external API still needs a human in the loop.

Parallel worktrees create merge conflicts. The more streams you run, the more time you spend resolving them. Two or three is often a better steady state than four.

Reviewing AI-generated PRs is its own skill. The diffs look plausible. The temptation to rubber-stamp is real. Build a habit of running the tests yourself and reading the diff like a hostile reviewer, not a proud parent.
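In practice that habit is three commands; the PR number is a placeholder and the test command is whatever your CLAUDE.md says it is:

```shell
gh pr checkout 123     # check the branch out locally
git diff main...HEAD   # read the whole diff, not the PR page summary
npm test               # run the suite yourself (substitute your project's command)
```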
