brian austin

How I use AI subagents to parallelize my development workflow

For most of my career, I treated AI assistants as single-threaded: one prompt, one response, one task at a time.

Then I started using subagents. Now I run 3-4 AI tasks in parallel and ship features in half the time.

Here's exactly how my workflow looks.

The problem with single-threaded AI

When you have a big feature to ship — say, "add payment processing" — the naive approach is:

  1. Ask AI to write the backend
  2. Wait
  3. Ask AI to write the frontend
  4. Wait
  5. Ask AI to write the tests
  6. Wait

Total time: 3 AI sessions × 10 minutes each = 30 minutes minimum, and you're context-switching constantly.

The subagent approach

With subagents, I decompose the work first, then run tasks in parallel:

Session 1 (backend):

You are handling the backend for Stripe payment integration.
Scope: /src/routes/payments.js and /src/models/subscription.js only.
Write the route handlers and Stripe webhook processing.

Session 2 (frontend):

You are handling the frontend for payment UI.
Scope: /src/components/PaymentForm.jsx only.
Assume the backend POST /api/payments endpoint accepts { amount, currency, token }.

Session 3 (tests):

You are writing integration tests for payment flow.
Scope: /tests/payments.test.js only.
Mock Stripe. Test success path and failure path.

All three run simultaneously. I merge the results.

Why this works

The key insight is scope isolation. Each subagent:

  • Has a single file or directory in scope
  • Doesn't need to know what other agents are doing
  • Can run completely in parallel
  • Produces clean, mergeable output

When tasks are truly independent (different files, different concerns), parallelization has essentially no downsides and cuts wall-clock time roughly threefold.

The coordination pattern

Here's my exact process:

# Before spawning subagents, I write a contract:

SHARED INTERFACE:
- POST /api/payments — accepts { amount: number, currency: string, token: string }
- Returns { success: boolean, subscriptionId?: string, error?: string }

Subagent A: implement this endpoint
Subagent B: call this endpoint from the UI
Subagent C: test this endpoint with mocks

The interface definition is the only coordination needed. Once that's written, all three agents work independently.
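Once all three finish, I spot-check that the merged pieces actually honor the contract. A minimal sketch using jq, assuming a captured response saved to /tmp/response.json (the sample payload below is invented for illustration):

```shell
# Write a sample response so the check is self-contained; in practice
# this file would be a real response captured from the running endpoint.
echo '{"success": true, "subscriptionId": "sub_123"}' > /tmp/response.json

# Verify field types against the shared contract:
# { success: boolean, subscriptionId?: string, error?: string }
jq -e '
  (.success | type == "boolean") and
  ((.subscriptionId // "") | type == "string") and
  ((.error // "") | type == "string")
' /tmp/response.json > /dev/null && echo "contract OK"
```

If any field has the wrong type, jq -e exits nonzero and the OK line never prints, which makes this usable as a pre-merge gate in CI.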

Where subagents break down

Not everything parallelizes well:

Sequential dependencies: If Agent B needs Agent A's output to exist first, you can't parallelize. Classic example: schema migrations must run before the ORM code that uses them.

Shared state: If two agents both modify config.js, you'll get merge conflicts. Keep scopes strictly separate.
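One cheap guard against the shared-state problem: list each agent's declared scope and refuse to spawn if any path appears twice. A sketch (the paths are the ones from this post's example):

```shell
# Declared scopes, one path per agent
scopes_a="src/routes/payments.js"
scopes_b="src/components/PaymentForm.jsx"
scopes_c="tests/payments.test.js"

# Any path listed by two agents is a merge-conflict hazard
printf '%s\n' "$scopes_a" "$scopes_b" "$scopes_c" | sort | uniq -d | grep . \
  && echo "overlap detected: fix scopes before spawning" \
  || echo "scopes are disjoint"
```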

Discovery tasks: When you don't know what you want yet, a single exploratory session is better than multiple parallel ones.

Rule of thumb: if you can write the interface contract before starting, you can parallelize. If you're still discovering the interface, go single-threaded first.

My actual workflow in 2026

I use the SimplyLouie API ($10/month developer tier) for all my subagent sessions. It gives me direct API access with higher rate limits — critical when running 3-4 sessions simultaneously.

# Each subagent is just a curl call with a system prompt
curl https://simplylouie.com/api/chat \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet",
    "system": "You are handling backend only. Scope: src/routes/payments.js",
    "messages": [{"role": "user", "content": "Implement Stripe webhook handler"}]
  }'

I have a small shell script that spawns 3 of these in parallel and collects the outputs:

#!/bin/bash
# spawn_agents.sh: run three scoped subagent requests in parallel
# Requires: curl, jq

agent() {
  local name=$1
  local system=$2
  local task=$3
  # Build the JSON body with jq so quotes in prompts can't break it
  local body
  body=$(jq -n --arg system "$system" --arg task "$task" \
    '{model: "claude-sonnet", system: $system,
      messages: [{role: "user", content: $task}]}')
  curl -s https://simplylouie.com/api/chat \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d "$body" \
    > "/tmp/agent_${name}.json"
}

# Spawn in parallel, one background job per agent
agent "backend" "Handle backend only. Scope: src/routes/" "Implement payment route" &
agent "frontend" "Handle frontend only. Scope: src/components/" "Build payment form" &
agent "tests" "Write tests only. Scope: tests/" "Test payment flow" &

wait  # block until all three background jobs finish
echo "All agents complete"
cat /tmp/agent_*.json
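The dumps in /tmp are raw API responses, so the generated code still has to be pulled out. Assuming the provider returns the completion in a top-level content field (an assumption about the response shape; adjust the jq path for your API), extraction is one loop:

```shell
# Stand-in dump so the loop is runnable here; the real files come from
# spawn_agents.sh. Assumes a top-level "content" string field.
echo '{"content": "// payment route stub"}' > /tmp/agent_backend.json

# Pull the generated code out of each agent's JSON dump
for f in /tmp/agent_*.json; do
  name=$(basename "$f" .json)
  jq -r '.content // empty' "$f" > "/tmp/${name}.out"
  echo "wrote /tmp/${name}.out"
done
```

From there, each .out file maps one-to-one onto the file the agent was scoped to, which is what makes the merge step trivial.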

Results after 3 months

I've shipped:

  • A full authentication system (backend + frontend + tests) in one session
  • A data export feature (CSV + JSON + API endpoint) in parallel
  • A notification system (email + push + in-app) simultaneously

Typical time savings: 40-60% faster on features that decompose cleanly.

The biggest unlock wasn't the speed — it was that I stopped context-switching. Each agent is laser-focused on one thing. The quality went up.

Start simple

You don't need complex orchestration to start. Just:

  1. Define the interface contract first
  2. Assign one file per agent
  3. Run them in parallel (multiple browser tabs, multiple API calls, whatever)
  4. Merge the outputs

That's it. No frameworks needed. The parallelization is in your hands, not in the model.


I run all my subagent sessions through SimplyLouie's developer API — $10/month flat rate, no per-token billing surprises. Worth it if you're running multiple concurrent sessions.
