Three months ago I switched from PC to a Mac. Last week I needed to start six projects at once — multiple frontend apps, a backend — and doing it manually in the terminal every single time was killing my flow.
So I turned to Cursor:
Write me a shell script that starts everything with one command.
It worked. I didn't know the syntax. I didn't know the Mac conventions. I wouldn't have been able to write it alone. But in a few minutes I had a tool that saved me tens of minutes every single day — built in an environment I barely knew.
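For the curious, here's a minimal sketch of what a script like that can look like. The project paths and commands below are hypothetical placeholders, not my actual setup — adjust the `projects` list to your own repos:

```shell
#!/usr/bin/env bash
# start-all.sh — launch several dev servers with one command.
# Each entry is "directory|command"; paths and commands are examples only.
set -euo pipefail

projects=(
  "$HOME/work/frontend-app|npm run dev"
  "$HOME/work/admin-app|npm run dev"
  "$HOME/work/api|dotnet run"
)

for entry in "${projects[@]}"; do
  dir="${entry%%|*}"   # everything before the first |
  cmd="${entry##*|}"   # everything after the last |
  if [ ! -d "$dir" ]; then
    echo "skipping $dir (directory not found)"
    continue
  fi
  echo "starting '$cmd' in $dir"
  ( cd "$dir" && $cmd ) &   # each project runs in a background subshell
done

wait   # keep the script alive until all background servers exit
```

Run it once with `./start-all.sh` instead of opening six terminal tabs. Ctrl+C in that terminal stops the whole batch.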
That moment made me think about something I'd been noticing across my team.
The walls are getting thinner
A backend developer on my team needed analytics dashboards. In the past, that meant writing a ticket, waiting for a frontend developer to pick it up, reviewing it, iterating.
Instead, he spent a day with AI and shipped the dashboards himself. They weren't pixel-perfect. But they worked, they were accurate, and they unblocked the team immediately.
A frontend developer needed a new property saved to the database. New endpoint, new field, migration. Normally a backend task. He knew the codebase well enough to understand the pattern — and AI helped him fill in the parts he didn't know. He didn't need to wait for anyone.
And me, a .NET developer writing shell scripts on a Mac.
None of us crossed into the other role's territory because we suddenly became experts. We crossed because AI lowered the barrier enough that our existing context — knowing the codebase, understanding the goal, recognizing what good looks like — was sufficient to get the job done.
Genuinely exciting and a little dangerous
The exciting part is obvious. Teams move faster. Individuals are less blocked. You don't need to interrupt a specialist for every small thing outside your primary domain. The cost of "I'll just do it myself" has dropped dramatically.
But here's what worries me.
When a frontend developer adds a backend endpoint, he's working in territory where he doesn't have the full mental map. He doesn't know which invariants the data model is protecting or which edge cases the original author thought through. Just the happy path.
The specialist would have caught the other paths.
Is the specialist's job harder?
This is the part I think gets lost in the excitement about AI breaking down role barriers.
When AI enables everyone to touch everything, the specialist — the person who actually owns a domain — needs to be more vigilant, not less.
- More code is being written in your domain by people who don't fully understand it.
- More PRs are landing with logic that looks right but carries hidden assumptions.
- More edge cases are being handled confidently by someone who didn't know there was an edge case.
You're no longer only reviewing what your colleagues write. You're reviewing what their AI wrote for them in your territory.
That's a different kind of review. Before, when a colleague added a backend endpoint, I could assume they'd at least seen the pattern before.
Now I'm reviewing code written by someone who asked AI to fill in the parts they didn't know — and AI confidently filled them in.
Now I have to think about those things explicitly, and I think that's harder than it sounds.
Where this leaves us
AI is genuinely changing what individual developers can do.
I'm not going back to manually starting projects in six terminal tabs.
My backend colleague isn't going to stop building his own dashboards when he needs them.
The productivity gains are real.
But a codebase isn't just a collection of working features. It's a system with assumptions, constraints, and accumulated decisions — and someone has to hold that knowledge. AI doesn't hold it. The person who wrote the ticket doesn't hold it. The specialist holds it.
The barrier to contributing outside your role has dropped. The responsibility of owning your domain hasn't.
If anything, it's grown.
Are you seeing this in your team — people crossing role boundaries more often with AI? How are you handling code review when AI is doing the crossing? And do you think code review now means more work for the specialist?
Further reading
- Stack Overflow Developer Survey 2024 — data on AI tool adoption and developer productivity across thousands of engineers
- Quantifying GitHub Copilot's impact on developer productivity — GitHub's own research on how AI assistance changes how fast developers ship
- How AI Changed Software Engineering — Gergely Orosz on what's actually shifting in real engineering teams
Top comments (2)
I can relate. People start building things outside their domain because they can (or rather, their models can), and it's making things much more volatile IMO. Backend devs often don't know frontend architecture, and while the initial gains are a big plus, this becomes a huge problem in the long run, especially when the frontend is expected to grow and scale, both in code size and in performance, because the frontend is where new features land with much more complexity than the backend, since, well, humans are involved.
However, if ownership gets fuzzy, software becomes unreliable (we know this from the 80s and 90s software days...). Clear boundaries help to keep ownership in line. Also - knowledge needs to be exchanged - not just written down somewhere. Some codebases have a bus factor of Zero nowadays and it scares me.
The bus factor point is real, and honestly scarier than the productivity gains are exciting. I've seen it already — someone ships something with AI help, it works, but three weeks later nobody can explain why it was done that way. Not even the person who wrote it.
The "knowledge exchanged, not just written down" part I'd push back on slightly though. I don't think AI makes that worse — bad team culture does. AI just exposes it faster. If your team wasn't talking before, now the silence costs more.