I’m curious how teams are adopting AI coding tools like GitHub Copilot or Cody into their daily workflows.
We found them super fast for individual developers, but at the team level issues popped up:
- Code quality and architecture drifting
- Security risks from AI-added dependencies
- Misaligned features vs. Jira tickets
- Difficult code reviews from AI output
How are you managing these as a team? Do you review AI code differently? Do you have process checks in place?
Would love to hear your experience.
Top comments (1)
I've been one of the early trailblazers for AI on our team, and once I realized the wild disconnect between AI use and AI understanding, I started sharing just about everything I could think of. There’s definitely a growing chorus about the unreliability and drop in quality that can come with AI-first techniques - but honestly? In my experience, 90%+ of the issues tied directly to AI can be solved with actual training.
The core problem? Most people using AI have no idea where the answers are coming from. And fewer still know how to work within those limits to build reliable, consistent solutions.
For example: I didn’t get a Copilot kickoff meeting or training session. I got: an automated security email saying “you now have access.” 😂 That was the whole onboarding. Since then, I’ve overcorrected (hard) - and now spend way too much time trying to pass on what I’ve learned to anyone who’ll listen.
If your team doesn’t already have some kind of shared training, required knowledge exchange, or even a clear certification path - that’s the place to start.
Once that’s in motion, here’s what I’ve been slowly (and sometimes painfully) iterating through to improve how we actually implement AI across our workflows:
1️⃣ Set the expectation: AI is a tool - you are the human.
It’s your code. Doesn’t matter if it came from Copilot, ChatGPT, or divine intervention - you own it. That means no committing what you don’t understand, and no merging anything that hasn’t been properly tested.
Every repo should be set up with:
And most importantly, none of:
2️⃣ Track AI involvement.
I prefer conventional commits with something that flags AI-assisted changes, but honestly - use whatever sticks. The point is: once the fire is out and you're trying to figure out what exactly happened, you want to know whether a particular commit came from Copilot or another source.
A good tracking system means you don’t have to guess whether it was an innocent bug, an unlucky fluke, or a whole workflow that’s quietly turned into “in Copilot we trust” without proper review.
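To make that concrete, here's a minimal sketch of what tracking could look like. It assumes a hypothetical Git trailer convention like `AI-Assisted: copilot` at the bottom of commit messages (the trailer name and the script are my own illustration, not a standard), and it just shells out to `git log` to list which commits carried the flag:

```python
#!/usr/bin/env python3
"""List commits flagged as AI-assisted via a (hypothetical) Git trailer.

Convention assumed here:

    feat(auth): add token refresh

    AI-Assisted: copilot
"""
import subprocess

TRAILER = "AI-Assisted"  # illustrative name; use whatever your team agrees on


def ai_assisted_commits(repo_path="."):
    # %H = hash, %s = subject, %(trailers:key=...,valueonly) = trailer value only
    fmt = f"%H%x09%s%x09%(trailers:key={TRAILER},valueonly,separator=%x20)"
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--pretty=format:{fmt}"],
        capture_output=True, text=True, check=True,
    ).stdout
    flagged = []
    for line in log.splitlines():
        sha, subject, tool = (line.split("\t", 2) + ["", ""])[:3]
        if tool.strip():
            flagged.append((sha[:10], tool.strip(), subject))
    return flagged


if __name__ == "__main__":
    for sha, tool, subject in ai_assisted_commits():
        print(f"{sha}  [{tool}]  {subject}")
```

Whether the flag lives in a trailer, a PR label, or a commit scope matters less than having one place you can actually query after the fact.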
3️⃣ Define custom instructions — in every repo.
If you’re not doing this yet, you’re flying blind. AI doesn’t magically get smarter just because it has access to your codebase - you have to tell it how to help you.
Whatever form your instructions take, the point stands - without context, you can’t expect reliable results.
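If you want to keep yourself honest about this one, a tiny audit script helps. This sketch assumes GitHub Copilot's repo-level `.github/copilot-instructions.md` convention, and the list of local checkouts is purely hypothetical - swap in whatever your setup actually looks like:

```python
#!/usr/bin/env python3
"""Report which local checkouts are missing a repo-level Copilot instructions file.

Assumes GitHub Copilot's `.github/copilot-instructions.md` convention; the
repo paths below are purely illustrative.
"""
from pathlib import Path

# Hypothetical checkouts - replace with your own.
REPOS = [
    Path("~/src/payments-api").expanduser(),
    Path("~/src/web-frontend").expanduser(),
]

INSTRUCTIONS_FILE = Path(".github") / "copilot-instructions.md"


def audit(repos):
    for repo in repos:
        status = "ok" if (repo / INSTRUCTIONS_FILE).is_file() else "MISSING"
        print(f"{status:8} {repo}")


if __name__ == "__main__":
    audit(REPOS)
```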
If you're curious - I cover points 1 and 2 more fully in my RAI post, and go deep on repo instructions in this one.
Currently though? I'm still solidly in the "WHY AREN'T YOU USING THIS??" phase - most of these practices started in my own personal workflows and I'm slowly introducing them to the team like a stealth sloth on a mission 🦥💻
Hope this helps! Would love to hear what’s worked (or hasn’t) on your side too!