DEV Community

Ian Johnson
Trunk-Based Development with Short-Lived Branches

Why Long-Lived Branches Kill Velocity

You've seen it. A feature branch that started two weeks ago. It's 47 commits behind main. Three people are waiting on it. The merge conflict is 400 lines. Nobody wants to review it because reviewing 2,000 lines of diff is nobody's idea of a good time.

Long-lived branches are where productivity goes to die. And when you add an AI agent to the mix, they get even worse. The agent writes code against the branch state. Main moves on. By the time you merge, half the agent's assumptions are wrong.

Trunk-based development fixes this. The rule is simple: branches live for hours, not days. Merge to main early and often. Keep main releasable at all times.

Trunk-based development doesn't necessarily mean committing straight to main. In my view, it's about keeping everyone's work integrated often enough that CI can actually do its job. Short-lived branches give you that, plus the review safety net many developers prefer. Pushing directly to main versus branching first is a matter of preference; personally, I branch.

The Workflow

Here's what a typical feature looks like in this project:

  1. Branch — create a branch from main: feat/PROJ-431-dashboard-migration
  2. Build — write tests, implement the feature, run make lint && make test
  3. PR — open a PR. Small diff. Clear description. Conventional commit title.
  4. CI — GitHub Actions runs the full pipeline (lint, test, test-js)
  5. Merge — once CI is green, merge to main
  6. Deploy — CI triggers a Forge deployment webhook. Staging updates automatically.

The entire cycle (branch to merged) is usually same-day. Sometimes within an hour for smaller changes.
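The branch names in step 1 follow a type/TICKET-slug pattern. As a minimal sketch of enforcing it (the regex and the `check_branch_name` helper are my own, not from the project):

```shell
# Hypothetical helper: validate a branch name like feat/PROJ-431-dashboard-migration.
# The allowed types mirror the conventional-commit prefixes used later in the post.
check_branch_name() {
  echo "$1" | grep -Eq '^(feat|fix|refactor|test|docs|ci|chore)/[A-Z]+-[0-9]+(-[a-z0-9-]+)?$'
}

check_branch_name "feat/PROJ-431-dashboard-migration" && echo "ok"
check_branch_name "my-random-branch" || echo "rejected"
```

Dropped into a pre-push hook or a CI step, this catches drive-by branch names before they reach the PR list.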

145 PRs in 3 Months

This project has 258 commits across ~3 months. 145 of those went through pull requests. That's roughly 1.6 PRs per day, every day.

Most PRs are small. A refactoring extraction. A test coverage expansion. A bug fix. A single feature. The biggest PRs were the frontend migration (Tailwind, jQuery removal), and even those were broken into sequential stages.

Small PRs have compounding benefits:

  • Easier to review — you can actually read the diff
  • Easier to revert — if something breaks, git revert one PR, not a 2,000-line changeset
  • Faster CI — smaller changes mean fewer test failures to debug
  • Less merge conflict risk — you're never far from main
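The revert case is worth seeing concretely. A self-contained sketch in a throwaway repo (not the project's actual history):

```shell
# Demonstrate reverting one small, PR-sized commit in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo "stable behavior" > feature.txt
git add feature.txt
git commit -qm "feat: baseline (#1)"
echo "regression" > feature.txt
git commit -qam "feat: small change (#2)"
# One command undoes exactly one PR-sized change:
git revert --no-edit HEAD >/dev/null
cat feature.txt
```

For PRs landed as merge commits rather than squashes, `git revert -m 1 <merge-sha>` does the same job for the whole PR.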

Conventional Commits

Every commit follows the conventional commits format:

feat: add GET /api/dashboard endpoint (PROJ-430) (#130)
fix: resolve planner bugs (PROJ-432) (#131)
refactor: extract CreateOrderAction from OrdersController::store() (#80)
test: expand OrdersController test coverage (#59)
docs: document legacy Blade vs React SPA architecture (#119)
ci: add workflow_dispatch trigger for manual CI runs
chore: remove legacy frontend dependencies and dead code (#103)
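To keep both humans and the agent honest, the format can be enforced in a commit-msg hook. A minimal sketch (the hook body is mine, not the project's):

```shell
# Hypothetical .git/hooks/commit-msg sketch: accept only conventional-commit subjects.
# Supports an optional scope, e.g. "fix(planner): ...".
check_commit_subject() {
  echo "$1" | grep -Eq '^(feat|fix|refactor|test|docs|ci|chore)(\([a-z0-9-]+\))?: .+'
}

check_commit_subject "feat: add GET /api/dashboard endpoint (PROJ-430)" && echo "accepted"
check_commit_subject "updated some stuff" || echo "rejected"
```

In a real hook you'd run the check against `$(head -n1 "$1")`, the first line of the commit message file git passes in, and exit non-zero on failure.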

This isn't just aesthetics. Conventional commits create a machine-readable history. You can:

  • Generate changelogs automatically
  • See at a glance whether a commit is a feature, fix, or refactoring
  • Train an agent to follow the same convention (it will, if every existing commit uses it)

The commit message is a contract. feat: means new functionality. fix: means something was broken and now it's not. refactor: means the behavior didn't change. When the agent writes a commit message, these prefixes help me triage without reading the diff.
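That triage can be mechanical. A sketch that filters a log by type (sample subjects from above; `filter_type` is a made-up helper):

```shell
# Filter conventional-commit subjects by their type prefix.
filter_type() {
  grep -E "^$1(\([a-z0-9-]+\))?: "
}

printf '%s\n' \
  "feat: add GET /api/dashboard endpoint (PROJ-430) (#130)" \
  "fix: resolve planner bugs (PROJ-432) (#131)" \
  "refactor: extract CreateOrderAction from OrdersController::store() (#80)" \
  | filter_type fix
```

Against a real repo, you'd pipe `git log --format=%s` into the same filter to pull out, say, every fix since the last release.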

The CI Pipeline

Every push to main triggers the full pipeline:

Build → Code Quality → Tests → Deploy
        (make lint)    (make test + make test-js)

The pipeline runs in Docker containers built from the same docker-compose.yml as local development. Same PHP version. Same Node version. Same MySQL. If it passes locally, it passes in CI.
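I won't reproduce the project's workflow file here, but a GitHub Actions pipeline matching those stages might look roughly like this (job names and the compose service name `app` are assumptions; the make targets and the `workflow_dispatch` trigger come from the article):

```yaml
# Hypothetical .github/workflows/ci.yml sketch — not the project's actual file.
name: CI
on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:   # manual runs, as mentioned in the commit log

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose build
      - run: docker compose run --rm app make lint
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose build
      - run: docker compose run --rm app make test
      - run: docker compose run --rm app make test-js
```

The key property is that every `run:` step executes inside the same images you build locally, so "passes locally, passes in CI" holds by construction.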

The deploy step triggers a webhook with our cloud provider that pulls the latest code, runs migrations, rebuilds assets, and restarts workers:

cd staging.example.com
git pull origin main                             # fetch the freshly merged code
composer install --no-dev --optimize-autoloader  # production dependencies only
php artisan migrate --force                      # apply pending migrations non-interactively
npm ci && npm run build                          # rebuild frontend assets from the lockfile
php artisan queue:restart                        # signal workers to reload after their current job
php artisan config:cache                         # rebuild the cached config
php artisan route:cache                          # rebuild the cached route table
php artisan view:cache                           # precompile Blade templates

Staging updates within minutes of a merge to main. Production deploys are triggered manually (or by the same webhook on the production server) after staging verification.

Infrastructure: Queue Workers and Redis

The deployment isn't just the web app. We also manage background infrastructure:

Queue workers process async jobs: CRM sync, notification dispatch, and background calculations. The Forge server runs supervised workers:

php artisan queue:work redis --queue=default,crm --sleep=3 --tries=3

The queue:restart in the deploy script gracefully restarts workers so they pick up the new code.

Redis backs the queue and can optionally back the cache. Separate Redis databases (DB=0 for cache, DB=1 for queues) prevent queue operations from evicting cached data.
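In Laravel, that split maps onto two env vars read by the stock `config/database.php` (the queue uses the "default" Redis connection; the values here mirror the article's DB numbers, and your config may differ):

```
# .env sketch: cache on DB 0, queues on DB 1
REDIS_CACHE_DB=0
REDIS_DB=1
```

Because evictions happen per database, a cache under memory pressure can't touch queued jobs.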

The Docker Compose stack mirrors this:

redis:
  image: redis:7-alpine
  profiles: [queue]

queue-worker:
  build: .
  command: php artisan queue:work redis --queue=default,crm
  profiles: [queue]
  depends_on: [redis, mysql]

The profiles key means queue infrastructure only starts when you explicitly ask for it (docker compose --profile queue up). Local development doesn't need Redis running unless you're testing queue jobs.

The E2E Database

E2E tests (Playwright) run against a separate database: myapp_e2e. This gets its own migration and seeding:

make migrate-e2e    # Run migrations on E2E database
make seed-e2e       # Seed test users with proper roles, permissions, relationships

The E2E seeder creates users with known credentials and realistic data. It's idempotent — running it twice doesn't create duplicates.

In CI, the E2E job spins up the full Docker stack (app, nginx, mysql) and runs Playwright against it. Same app, same database engine, same infrastructure as production. The only difference is the data is seeded, not real.

Continuous Delivery (Not Continuous Deployment)

An important distinction: we practice continuous delivery, not continuous deployment.

Every merge to main is deployable. The pipeline proves it: tests pass, linting passes, the build succeeds. But deploying to production is a conscious decision, not an automatic one.

This matters because:

  • Some features are gated behind environment checks or feature flags
  • Some changes need manual verification on staging first
  • Production deploys happen when we decide, not when the CI pipeline finishes

The codebase is always releasable. Whether we release is a business decision, not a technical one.

How This Enables Agent-Assisted Development

Trunk-based development + CI + conventional commits create something crucial for working with an AI agent: a fast, reliable feedback loop.

When Claude writes code:

  1. The tests tell me if it works (seconds to minutes)
  2. The linter tells me if it's clean (seconds)
  3. CI confirms both in an environment I trust (minutes)
  4. If it passes, I merge. If it doesn't, Claude fixes it.
  5. The conventional commit tells me what changed without reading the diff.

There's no "let me review this 2,000-line PR over the weekend." It's: did it pass? Merge. Did it fail? Fix. Ship it. Move on.

Dave Farley calls this "optimizing for feedback." The faster you know whether a change worked, the faster you can iterate. Trunk-based development with CI gives you feedback in minutes, not days.

The Takeaway

  1. Branches live for hours. If your branch is older than a day, something's wrong.
  2. Small PRs, merged often. 145 PRs in 3 months. Each one small enough to review in minutes.
  3. Conventional commits are a communication protocol. Both for humans reading the log and agents writing commits.
  4. CI is the source of truth. If it passes CI, it's good. If it doesn't, fix it before merging.
  5. Continuous delivery means always releasable. Deploy when you want, not when you have to.
  6. Infrastructure is code. Docker, queue workers, Redis, deploy scripts: all versioned, all reproducible.
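The first rule is easy to automate. A self-contained sketch in a scratch repo (the 86400-second cutoff encodes "older than a day"; in a real checkout you'd skip the setup and run only the last pipeline):

```shell
# Scratch repo with one fresh branch, to demonstrate a stale-branch check.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
git commit -q --allow-empty -m "chore: init"
git branch feat/PROJ-000-example

# List local branches whose last commit is more than a day old.
cutoff=$(( $(date +%s) - 86400 ))
git for-each-ref --format='%(refname:short) %(committerdate:unix)' refs/heads |
  awk -v c="$cutoff" '$2 < c { print $1 " is older than a day" }'
```

On this fresh repo it prints nothing; run it against a real checkout and anything it lists is a candidate for merging, splitting, or deleting.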

The combination of tests, linting, CI, and trunk-based development creates a system where changes are small, verified, and frequent. That's exactly the system an AI agent thrives in.
