Since my last post, our team has started experimenting with Trunk-Based Development (TBD). The early results are promising, and we’ve taken the time to document our process and agreements.
First things first: before adopting a new way of working, it is essential to agree on responsibilities and set expectations. These agreements are the foundation for everything that follows.
📋 Team Agreements
| Expectation | Details |
|---|---|
| Responsibility for Quality | You own your code from commit to production impact. |
| Small, Frequent Commits | Atomic and meaningful changes only. |
| Tag Commits Properly | Task-related work is always tagged (use `AB#`); scout commits do not need a tag. |
| All New Code Must Be Tested | Preferably with unit tests. Test-Driven Development is encouraged. |
| Discretion with Breaking Changes | Plan carefully and minimise impact. |
| Reviews Are Non-Judgmental | Focus on the code, not the coder. |
| Minor Changes by Reviewer | Reviewers may commit small fixes directly (let the author know). |
| Pull Requests Are Used Selectively | For breaking changes, when the author requests explicit feedback, or when the change requires a manual test on a non-local dev environment. |
| Everyone Reviews | Regardless of seniority. |
| Feature Toggles When Needed | Hide incomplete features; do not delay merging. (Still gathering knowledge on how to feature-flag.) |
Whatever the exact wording of the agreements, it's vital that the whole team buys into the basic premise.
🚀 Our Workflow - SOP
So, what does this actually look like in practice?
**Direct Commits to Main**
- Commit directly to main.
- Use short-lived branches only when absolutely necessary, for:
  - Breaking changes
  - Explicit feedback (requested via Pull Request)
  - Manual tests (if there is no other way to test the change)
**Commit Tagging Requirement**
- Format: `AB#<TaskNumber> <Commit Message>`
- Example: `AB#4321 Add validation for payment processing form`
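The tagging format lends itself to automation. As an illustration only, a `commit-msg` Git hook could reject untagged commits. The sketch below is an assumption about how such a check might work, not part of our actual setup, and it deliberately omits the exemption logic for scout commits:

```python
#!/usr/bin/env python3
"""Illustrative commit-msg hook: require an AB#<TaskNumber> tag.

Git passes the path of the commit-message file as the first argument.
Scout commits are exempt per the team agreement; handling that exemption
is left out of this sketch.
"""
import re
import sys

# Matches e.g. "AB#4321 Add validation for payment processing form"
TAG_PATTERN = re.compile(r"^AB#\d+ \S")

def check_message(message: str) -> bool:
    """Return True if the first line of the message carries a valid AB# tag."""
    first_line = message.splitlines()[0] if message else ""
    return bool(TAG_PATTERN.match(first_line))

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], encoding="utf-8") as f:
        if not check_message(f.read()):
            sys.stderr.write("Commit message must start with 'AB#<TaskNumber> <message>'\n")
            sys.exit(1)
```

Installed as `.git/hooks/commit-msg`, this would block non-conforming commits locally before they reach main.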
**Mandatory Test Coverage for New Code**
- All new code must be covered by a test, preferably a unit test.
- Use the most appropriate automated test type if unit tests are not practical.
- When changing code in existing codebases with no associated unit tests, create a new unit test class and test that change specifically.
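To make the expectation concrete, here is a sketch of a focused unit test class for a small change. The `validate_amount` function and its rules are invented for illustration; only the testing pattern is the point:

```python
import unittest

def validate_amount(amount: float) -> bool:
    """Hypothetical validation rule: amounts must be positive and at most 10,000."""
    return 0 < amount <= 10_000

class ValidateAmountTest(unittest.TestCase):
    """A new test class covering only the changed behaviour, per the agreement above."""

    def test_accepts_typical_payment(self):
        self.assertTrue(validate_amount(49.99))

    def test_rejects_zero_and_negative(self):
        self.assertFalse(validate_amount(0))
        self.assertFalse(validate_amount(-5))

    def test_rejects_amount_over_limit(self):
        self.assertFalse(validate_amount(10_001))
```

Even in a legacy codebase with no existing tests, a small class like this pins down exactly the behaviour the commit changed.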
**Use Discretion with Breaking Changes**
- Avoid breaking changes between services.
- If unavoidable, minimize the impact and number of breaking commits.
- If a commit is breaking, consider a minimal Pull Request containing only that commit, so there is a single commit to roll back if an environment breaks.
- Communicate proactively with affected developers.
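One common pattern for minimising breaking commits between services is expand-contract (parallel change). This is an illustrative sketch, not our mandated approach, and the field names are hypothetical:

```python
# Expand-contract sketch: a consumer tolerates both the old and the new
# payload shape while producers migrate, so no single commit has to break
# the contract between services.

def get_customer_name(payload: dict) -> str:
    """Read the customer name from either the new or the legacy field."""
    # Expand phase: new producers send 'customer_name',
    # legacy producers still send 'name'.
    if "customer_name" in payload:
        return payload["customer_name"]
    # Contract phase removes this fallback once all producers have migrated.
    return payload["name"]

# Both shapes work during the migration window:
assert get_customer_name({"customer_name": "Ada"}) == "Ada"
assert get_customer_name({"name": "Ada"}) == "Ada"
```

The breaking step is deferred to a separate, small "contract" commit that is trivial to roll back.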
**Automated Testing First**
- Build checks, linting, static analysis, unit tests.
- (Optional) Deploy to Development if needed.
**Post-Merge, Non-Blocking Code Reviews**
- Author moves task to “To Review” after merging.
- Author requests explicit review feedback as comments in the Pull Request, which must be resolved by the reviewer.
- Reviewer actions:
  - Review tagged commits
  - Review Pull Requests
- Reviewers can:
  - Leave comments
  - Make minor fixes directly with a follow-up commit (and inform the author)
**Act on Review Feedback Quickly**
- Critical issues: fix ASAP.
- Non-critical issues: address in follow-up work.
**Collaborative Testing**
- Code must be well-tested.
- Encourage manual exploratory testing by another team member.
In short: commit often, keep changes small, ensure tests are in place, and use reviews as a collaborative learning tool.
💭 Room for Improvements
We consider this a first draft, and there are areas we want to refine as we iterate:
**Feature Flagging Process**
- While implementing feature flags is straightforward, the challenge lies in managing them over time.
- Open questions we need to solve:
- What is the process for introducing and removing feature flags while avoiding long-term technical debt?
- As a sub-team within a larger organization, how do we align with other teams on consistent use and governance of feature toggles?
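For reference, the simplest form of toggle we are talking about looks something like the sketch below, which reads flags from environment variables. All names are hypothetical, and a real setup would more likely use a central flag service so flags can be flipped without redeploying:

```python
import os

def is_enabled(flag: str, default: bool = False) -> bool:
    """Read a feature flag from the environment, e.g. FEATURE_NEW_CHECKOUT=1.

    Sketch only: a production setup would typically consult a flag service
    or config system instead of environment variables.
    """
    value = os.getenv(f"FEATURE_{flag.upper()}", "")
    if value == "":
        return default
    return value in ("1", "true", "on")

def legacy_checkout(cart):
    return f"legacy:{len(cart)}"   # current behaviour

def new_checkout(cart):
    return f"new:{len(cart)}"      # incomplete feature, hidden behind the flag

def checkout(cart):
    # The incomplete feature is merged to main but dark, so merging
    # is never delayed by unfinished work.
    if is_enabled("NEW_CHECKOUT"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The open questions above are exactly about what happens after this point: who removes `is_enabled("NEW_CHECKOUT")` once the feature ships, and when.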
**Continuous Delivery Maturity**

Pain points:
- Currently, we are constrained by rigid approval gates across Development and Live environments.
- Shared environments introduce dependencies and bottlenecks.
- Delivery pipelines are standardized across teams, limiting flexibility.
Next steps:
- Explore ways to streamline approvals while maintaining quality and compliance.
- Evaluate opportunities for team-specific delivery pipelines or improved coordination mechanisms across teams.
**Knowledge Sharing & Adoption Across Teams**

Our team has been able to adopt Trunk-Based Development quickly thanks to its maturity and seniority. However, other teams may not yet have the same level of experience or comfort with these practices.

Pain points:
- The company has a strong background in open-source ways of working, which creates some hesitancy and skepticism toward adopting different practices like TBD.
- Limited visibility into how our process works for teams outside our own.
- Risk that TBD practices stay siloed within our team instead of being scaled across the organization.
- Uneven skill levels across teams can slow down adoption.
Next steps:
- Build confidence internally by gathering more data and experience with TBD in our own team.
- Document learnings in lightweight, accessible formats (playbooks, demos, internal blog posts).
- Use side-by-side comparisons with familiar open-source workflows to show how TBD can complement, not replace, existing practices.
- Run knowledge-sharing sessions to explain the “why” behind our approach.
This SOP is a living document, and we’ll keep refining it as we go. If you’ve tried Trunk-Based Development in your team, I’d love to hear what worked for you — and what didn’t.