Trunk-Based Development for Fast, Safe AI-Assisted Shipping
If you’re using AI to move faster, trunk-based development (TBD) is the safest way to keep momentum without letting your main branch drift into chaos. The core idea is simple: keep main always releasable by integrating small changes frequently and using feature flags to separate merge from release.
Below is a practical, AI-friendly workflow, with Go examples and a Bash helper for Git worktrees.
What Trunk-Based Development Really Means
TBD does not mean "no branches at all, just commit straight to main." It means short-lived branches (or small feature branches) plus continuous integration:
- main (trunk) is always in a releasable state.
- Developers (and agents) merge small changes frequently.
- Feature flags let you merge unfinished work safely.
- CI must be fast and trusted.
This is ideal for AI-assisted development: the agent can generate changes quickly, and you can integrate them safely without long-lived branches or merge hell.
A Simple, Repeatable TBD Workflow
- Pull trunk before starting.
- Create a small branch or worktree for the task.
- Make a minimal change (prefer additive changes).
- Gate risky behavior with a feature flag.
- Run fast checks.
- Merge to trunk quickly.
- Remove the flag after the feature is stable.
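In plain git commands, one pass through that loop might look like this (a minimal sketch; the branch name, test commands, and whether you merge locally or through a small pull request are all up to you):
# start from an up-to-date trunk
git switch main
git pull --ff-only

# small branch scoped to one task
git switch -c feat/profile-flag

# ...make a minimal, flag-gated change...

# fast checks before integrating
go vet ./...
go test ./...

# merge back to trunk quickly and clean up
git switch main
git merge --no-ff feat/profile-flag
git push origin main
git branch -d feat/profile-flag
Most teams route the merge through a small pull request instead of merging locally, but the shape of the loop is the same.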
Short feedback loops make errors cheap, which is what lets you ship code fast.
Go Example: Feature Flag in a Handler
package main

import (
    "net/http"
    "os"
)

// enabled reads a feature flag from the environment. In a larger system,
// flags are usually resolved through a dedicated feature flag service.
func enabled(key string) bool {
    return os.Getenv(key) == "true"
}

func profileHandler(w http.ResponseWriter, r *http.Request) {
    if enabled("FF_NEW_PROFILE") {
        w.Write([]byte("new profile experience"))
        return
    }
    w.Write([]byte("old profile experience"))
}

func main() {
    http.HandleFunc("/profile", profileHandler)
    _ = http.ListenAndServe(":8080", nil)
}
Why this works: you can merge the new code, keep it off in production, and enable it later with an environment change. New features stay isolated behind a flag, and the same pattern works whether the flag comes from the environment or from a dedicated feature flag service.
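To see merge and release being separated in practice, you can flip the flag with nothing more than an environment variable (a quick sketch, assuming the example above is saved as server.go):
# terminal 1: run with the flag off (the default)
go run server.go

# terminal 2: the old path is served
curl http://localhost:8080/profile   # -> old profile experience

# restart with the flag on: same code, new behavior
FF_NEW_PROFILE=true go run server.go
curl http://localhost:8080/profile   # -> new profile experience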
Git Worktrees: AI-Safe Isolation
Worktrees let you check out multiple branches in parallel without stashing or messing with your main working directory. This is perfect for AI experiments:
- One worktree per task.
- If an experiment fails, delete the worktree; trunk is untouched.
- You can keep one worktree clean for main and another for AI-generated changes.
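Under the hood this is only a couple of git commands (the path and branch name here are illustrative); the helper script below just wraps them with a few conveniences:
# check out a task branch in a sibling directory, leaving your main checkout untouched
git worktree add -b feat/profile-exp ../profile-exp

# throw the experiment away when you're done
git worktree remove ../profile-exp
git worktree prune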
Bash Helper: Worktree Workflow
Create scripts/worktree.sh and use it to spin task worktrees up and down quickly:
#!/usr/bin/env bash
set -euo pipefail

# Worktrees are created as siblings of the repository directory.
ROOT_DIR=$(git rev-parse --show-toplevel)
BASE_DIR="${ROOT_DIR%/*}"

usage() {
    echo "usage: worktree.sh add <name> <branch> | rm <name> | list | cleanup"
}

case "${1:-}" in
    add)
        name=${2:-}
        branch=${3:-}
        if [[ -z "$name" || -z "$branch" ]]; then
            usage
            exit 1
        fi
        path="${BASE_DIR}/${name}"
        # Reuse a local branch, track an existing remote branch, or create a new one.
        if git show-ref --verify --quiet "refs/heads/${branch}"; then
            git worktree add "$path" "$branch"
        elif git show-ref --verify --quiet "refs/remotes/origin/${branch}"; then
            git worktree add -b "$branch" "$path" "origin/${branch}"
        else
            git worktree add -b "$branch" "$path"
        fi
        echo "created: ${path}"
        ;;
    rm)
        name=${2:-}
        if [[ -z "$name" ]]; then
            usage
            exit 1
        fi
        git worktree remove "${BASE_DIR}/${name}"
        echo "removed: ${BASE_DIR}/${name}"
        ;;
    list)
        git worktree list
        ;;
    cleanup)
        git worktree prune
        git gc --prune=now
        ;;
    *)
        usage
        exit 1
        ;;
esac
Example usage:
# create an experiment worktree
./scripts/worktree.sh add encurtador-exp feat/exp-feature
# list worktrees
./scripts/worktree.sh list
# delete when done
./scripts/worktree.sh rm encurtador-exp
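The script's cleanup subcommand prunes stale worktree metadata once experiments have been removed:
# prune stale worktree entries and garbage-collect
./scripts/worktree.sh cleanup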
Final Takeaways
Try folding these practices into your workflow to boost your productivity when working with AI. The benefits:
- Trunk-based development keeps you fast and safe.
- Feature flags let you merge continuously without shipping unstable code.
- Git worktrees make AI experiments low risk.
- Smaller changes, faster feedback, fewer surprises.