The story usually starts the same way.
You bring in an external team. You create accounts. They get access to:
- the monorepo,
- a Jira project,
- maybe a Slack channel called #vendor-xyz.
A week later, code is flowing. Branches appear, PRs open, standups are happening.
Then you get the first set of questions:
“Where should I read the requirements for this story?”
“Who signs off if this change breaks another team’s API?”
“Which staging environment mirrors production?”
And you realise you never answered those properly. You just gave them Git access and hoped for the best.
The integration problem no one owns
Most teams treat “integration” as wiring: accounts, boards, repos, VPN. Once those are done, everyone assumes things are “set up”.
But the things that make or break the next quarter are much more boring:
- where product context lives day to day,
- who answers which kind of question,
- how code moves from PR → staging → production when multiple teams are touching it,
- who owns edge cases, tests, and incident response.
When those stay fuzzy, people still ship. But the work doesn’t join up.
You get:
- more handoffs,
- more waiting,
- more “oh, I thought you were handling that part”,
- and bugs that appear late because assumptions didn’t match.
Ownership becomes a weird grey zone between “the internal team” and “the vendor”. Which usually means: nobody.
What weak integration looks like in a repo
Here are a few patterns I’ve seen repeat across different companies and vendors.
1. Duplicate work
Two tickets, slightly different wording, describe roughly the same change.
One team adds a validation rule in the frontend form.
Another team ships a different rule in the API.
QA hits the endpoint, sees inconsistent behaviour, and asks which one is correct. The “answer” is buried across:
- a Slack thread,
- a Figma comment,
- and a PR description.
Everyone did something logical from their point of view. The integration work was simply never made explicit.
2. Priority drift
Internally, you’re planning against a roadmap theme: “finish migration to new billing system this quarter”.
The external team, meanwhile, is just burning through whatever hit their backlog. Nobody ever connected their queue to your roadmap, so they’re shipping useful things… just not the things that unblock the main goal.
On status reports, both teams look busy. The initiative itself keeps slipping.
3. Review and environment friction
You see this in three places:
PR reviews:
A big change opens against main. Internal devs assume the vendor leads will review. Vendor leads expect internal maintainers to own the final check. PR sits for days.
Environments:
The “staging” the vendor tests on doesn’t match the “staging” your internal team trusts. Different configs, different feature flags, sometimes a different database snapshot.
Dependencies:
A critical integration point shows up late because one team assumed “the other side” owned that boundary. Nobody wrote it down.
None of these are huge on their own. Combined, they slow everything down and crank up the anxiety around every release.
4. Trust slowly leaking away
This part sneaks up on you.
Internal leads start reading vendor PRs with more suspicion. They leave more comments, ask for more screenshots, request extra tests.
Vendor engineers notice that questions take a while to get answered, so they stop asking as much. They guess more, ship more partial-context changes, and hope it’s right.
Feedback loops stretch. Everyone is slightly on edge around release time. The roadmap deck looks fine; the feeling on the ground does not.
And then your roadmap stops matching production
On slides:
- Q1 goal: Ship X
- Q2 goal: Extend X with Y
In production:
- Part of X is live in one region,
- Another part only exists behind a feature flag,
- Some critical glue is still on someone’s Trello board.
When you do release reviews, the same pattern appears:
- integration pieces missing,
- dependencies nobody tracked,
- QA responsibility split across “whoever had time”.
Planning meetings become stitching sessions: trying to reconcile what was planned with what emerged from two partially aligned teams.
Give that a couple of quarters and the roadmap isn’t a guide anymore. It’s a narrative you write afterwards to explain whatever happened.
Why I ended up writing a Team Integration Workbook
After watching this happen a few times, I stopped blaming “bad vendors” and started looking at the first 30–60 days.
Those first weeks are where:
- people are still open to changing habits,
- process isn’t calcified,
- everyone is being polite and optimistic.
And yet that’s exactly when teams postpone decisions like:
- who approves releases,
- who owns which repos and which parts of the system,
- where product context is kept (and updated),
- how breaking changes are proposed and rolled out,
- what “done” means when there are two orgs involved.
To save myself from re-doing this from scratch every time, I started collecting the questions and templates that helped.
Over time that turned into a Team Integration Workbook: a set of canvases, checklists, and workshops you can run with internal and external leads.
This is aimed at the people stuck between engineering and delivery: CTOs, VPs of Engineering, Heads of Product, programme managers, and tech leads who are about to be responsible for “making the vendor work”.
Here’s what’s inside, roughly.
What’s in the workbook
Nothing fancy. No frameworks with cute acronyms. Just stuff I’ve seen teams need again and again.
Integration maturity model
A one-page way to answer: “Where are we already leaking?” across:
- planning,
- reviews,
- releases,
- access,
- ownership.
The point is to make both teams describe today the same way. Even “we’re in bad shape” is useful if everyone agrees on where.
Kickoff checklist + team charter
These are the questions that sound boring but pay off in weeks:
- Who can merge to protected branches?
- Who approves releases, and for which services?
- Where do requirements live, and who keeps them up to date?
- How are questions asked (thread, issues, office hours) and who responds?
- What does escalation look like when something blocks delivery?
You run this once at the start with people who can actually make decisions.
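If you want the review and merge answers to live somewhere a new joiner can read rather than ask, one option is a CODEOWNERS file (GitHub and GitLab both support the format). A minimal sketch, with made-up org, team, and directory names:
```
# Hypothetical CODEOWNERS: makes “who reviews what” explicit per path.
# The org, team, and directory names below are examples only.

# Internal platform team owns shared infrastructure and CI config
/infra/                 @acme/platform-team
/.github/               @acme/platform-team

# The vendor squad owns the module it was brought in to build
/apps/reporting/        @acme/vendor-xyz-squad

# Changes to public API contracts need sign-off from internal API owners
/apps/api/contracts/    @acme/api-owners
```
It won’t answer every question on the checklist, but it turns at least the “who reviews which repos” part into something written down instead of assumed.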
Roles and responsibilities workshop
This is where implied ownership dies.
You explicitly assign owners (by person or role) for things like:
- API contracts and schema evolution,
- test coverage standards (unit, integration, E2E),
- incident response and on-call,
- monitoring/alerting,
- integration points between systems,
- regression checks after big changes.
The awkward part is the point: better to have that conversation at the start than in the middle of an incident.
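One way to keep the workshop’s output from evaporating is to write it down in a small, versioned file next to the code. The sketch below is purely illustrative: the file name, fields, and team names are invented, and a charter doc or wiki page works just as well, as long as there is exactly one written answer both sides point at.
```
# ownership.yaml — hypothetical record of the roles workshop.
# All team names and contacts are placeholders.

api_contracts:
  owner: internal-api-guild        # approves schema changes and deprecations
  consulted: vendor-xyz-leads

test_coverage:
  unit_and_integration: vendor-xyz-squad
  e2e_regression: internal-qa

incident_response:
  on_call: internal-platform-team
  vendor_escalation: vendor-xyz-tech-lead

monitoring_alerting:
  owner: internal-platform-team

integration_points:
  billing_webhooks: internal-payments-team
  reporting_exports: vendor-xyz-squad
```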
Shared roadmap, risks, and planning rhythm
A light shared view so the vendor’s backlog points at the same targets as your internal roadmap:
- shared dates,
- shared definitions of milestones,
- a spot to write down known risks and dependencies.
Plus a simple template for the planning cadence you’ll both use.
Health check + joint retro
A small repeatable format (takes ~30–45 minutes) for:
- “what felt slow lately?”,
- “what was unclear?”,
- “what surprised us?”,
- “what should we change in how we work together?”.
Use it while changes are still cheap, instead of waiting for a quarterly review where everybody is already frustrated.
A 30–60 day flow you can copy
Whether you use this workbook or roll your own version, the flow is basically this:
1. Get a shared baseline.
Sit down with leads from both sides and agree on where integration is already painful: planning, reviews, releases, access, ownership. Write it down.
2. Name your scenario.
Is this:
- a full external squad fully owning a domain?
- a few engineers embedding into existing teams?
- a contained project with a narrow API surface?
The answers to “who owns what” are different in each case. Make sure your setup reflects that, not some generic vendor checklist.
3. Run a real kickoff (not just intros).
In one session, decide:
- who approves releases to which environments,
- who reviews which repos,
- where requirements and specs live,
- how to propose and roll out breaking changes,
- what “done” means across both teams.
Capture it in a charter you can send to every new person who joins later.
4. Agree on a shared planning rhythm.
Pick a cadence that both sides stick to (weekly, biweekly) and tie it to the same milestones. Internal sprint reviews and vendor demos should point at the same goals, not parallel ones.
5. Do tiny health checks every couple of weeks.
Nothing huge. Just a recurring slot where you ask:
- what was painful in the last iteration?
- what was unclear?
- one thing we’ll change in how we collaborate?
Tweak early instead of after six months of accumulated friction.
Closing thoughts
If you’ve worked with external teams for a while, you know the pattern: everyone is busy, tickets are moving, and yet the output doesn’t line up with what you intended to build.
That doesn’t come from one big mistake. It comes from dozens of small, unmade decisions in the early days of the relationship.
You don’t need a massive process overhaul for this. You just need a deliberate pass over ownership, decision paths, and delivery flow while things are still fresh.
That’s why I put the Team Integration Workbook together: https://mev.com/blog/team-integration-workbook-practical-playbook-to-plug-external-teams-into-your-delivery-system
Use it (or your own equivalent) in the first weeks with a vendor: run the sessions, write things down, set a shared rhythm, and keep checking in before the collaboration drifts.
If you’re curious, grab the PDF, run a kickoff with the people who can make the calls, and see what changes in the next release or two.