There is a question that keeps coming up in conversation at the moment, one that doesn't seem to have a clear answer anywhere: how, as engineers, do we use AI tooling productively without deskilling ourselves in the process — and how do we keep supporting those earlier in their careers?
My answer has always been that it is more important than ever to understand what good looks like. Which is fine in principle. It is less useful when someone pushes back and asks what that means in reality, or how you actually apply it when an engineer has just raised a pull request with 500 lines of AI-generated code that nobody understands.
That gap is where the idea for Human-First Engineering came from. It is a lightweight manifesto and framework you can adopt today to address these very issues. Everything is published at humanfirstengineering.dev, with the source on GitHub under a Creative Commons licence so you can fork it, adapt it, and run with it.
Why is it different this time?
As engineers we have long had assistive productivity tooling. Compilers, IDEs, autocomplete, IntelliSense, static analysis — these all made our lives easier and largely automated the tedious bits. But AI tooling is different. It can automate the parts engineers actually learn from: reading code, working through a bug, sketching a design, explaining what your code does to another human. Those moments of friction are where the intuition comes from. They are how engineers build the judgement they rely on later.
Automate those at scale, without thinking about it, and you get a generation of engineers who can ship things but cannot reason about what they shipped or why.
And that risk is not evenly distributed. It falls most heavily on the engineers entering the industry now. The senior engineers of five years hence are the juniors of today. If AI quietly routes them around the struggle that creates seniors, we are not going to notice the damage until it is too late to fix.
The failure modes I see
A lot of the conversation about AI-assisted engineering gets framed as "too much AI vs too little." In practice, the real failure modes are subtler.
The most common one is using these tools without really understanding what they are generating. Engineers accept output because it looks plausible, without the reading and reasoning that would normally catch the issues. The mistakes that show up here are rarely spectacular — they are small and quiet, and they compound. Clean architecture, separation of concerns, security, performance: all become invisible in an ever-growing weave of spaghetti code, a maintainability timebomb.
Another issue that gets discussed less than it should: dependency risk at the individual level. I've heard from engineers who lean heavily on an AI tool, hit their token or quota limit mid-cycle (even generous enterprise subscriptions have limits), and then spend the rest of the period noticeably less productive than they were before the tool existed. A skill you have outsourced is a skill you don't have when the outsourcing stops.
Both of these are symptoms of the same thing: treating AI as a productivity multiplier without treating the use of AI as a skill in its own right. That framing is what this framework is built to address.
What follows is an attempt to make that framing concrete — something a team can actually pick up and use.
What is the framework?
Human-First Engineering is built on eight beliefs and five core pillars. The beliefs are the why; the pillars are the how. The pillars are where the operational content lives:
- Think first — understand the problem before you prompt. AI accelerates execution, not understanding.
- Own the output — every line has a named human owner. If you cannot explain it, you do not ship it.
- Grow through AI, not around it — use AI to reach harder problems and understand more deeply, not to avoid the discomfort of not knowing. This is the pillar that protects the pipeline.
- Use AI intelligently — model choice, context, prompting, and instruction files are skills. Using AI well is part of the modern craft.
- Trust AI, but verify everything — calibrate trust to the risk. Human reasoning leads on anything sensitive, irreversible, or security-critical.
If you need the whole thing in one line:
Think first. Own what you ship. Grow through AI, not around it. Use AI intelligently. Verify everything.
Each pillar has a small set of concrete behaviours. No process. No ceremony. The aim is to be light enough to actually remember, and concrete enough to change the questions asked in a code review.
The whole thing is deliberately small. You should be able to read the manifesto and framework in ten minutes.
The toolkit
The framework on its own is useful as a shared mental model, but adoption needs more than beliefs. So alongside it there is a toolkit — the bit that turns the manifesto into something a team can actually run.
- Implementation Guide — a ten-step rollout plan: introduce the manifesto, then the framework, then embed both into the rituals you already have.
- Practices — concrete patterns for how engineers use AI day to day.
- Slide Deck — a ready-to-present deck for a 30–45 minute team session, editable like code.
- Developer FAQ — the questions engineers actually ask. Honest answers, not corporate ones.
- For Early-Career Engineers — written for junior engineers, not about them. Practical habits for using AI to grow rather than stall.
- Templates and Prompts — drop-in instruction files for GitHub Copilot and Claude Code, plus reusable prompts for framing, reviewing, and risk-assessing AI-assisted work.
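To give a feel for the instruction files, here is an illustrative sketch of the kind of content they contain — this is a hypothetical example written for this post, not a copy of the framework's actual templates. GitHub Copilot reads repository instructions from `.github/copilot-instructions.md`, and Claude Code reads `CLAUDE.md`, so the same content can be dropped into either:

```markdown
# AI usage guidelines (Human-First Engineering)

- Think first: state the problem and the intended approach before
  generating any code.
- Own the output: do not propose changes the author cannot explain
  line by line in review.
- Prefer small, reviewable diffs over large generated ones.
- Flag anything security-sensitive, irreversible, or
  performance-critical for explicit human review before acceptance.
```

The real templates in the repository are more detailed; the point is that the pillars translate directly into instructions the tools themselves will follow.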
How to adopt it
Three sensible paths, depending on how much energy you have:
- Just the framework. Read the manifesto and framework. Share both with your team. Use the one-line summary as a shared vocabulary in code reviews and retros. That alone will shift how people talk about AI-assisted work.
- Framework plus a team session. Add the slide deck and run a 30–45 minute session. Follow up with the developer FAQ and the early-career guide as written references.
- Full adoption. Run the implementation guide end-to-end. Drop the instruction files into your repositories. Add the prompts to your team's shared prompt library. Set a quarterly review on the calendar.
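As an illustration of what a shared prompt might look like — again a hypothetical sketch for this post, not one of the framework's published prompts — a review prompt in the "trust but verify" spirit could read:

```text
Review the following diff as a sceptical senior engineer.
1. Summarise what the change does in plain language.
2. List any behaviour you cannot explain from the diff alone.
3. Rate the risk (low / medium / high) for security, data loss,
   and irreversibility, and justify each rating.
4. Name the questions a human reviewer should ask before merging.
```

Prompts like this work because they force the explanation step that accepting plausible-looking output normally skips.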
Everything is CC BY-NC-SA 4.0. Fork the repo, cut what does not fit your context, add the examples that will land with your team. The goal is for the principles to be lived, not for the artifacts to be preserved.
If it is useful to you, take it. If it sparks a better idea, tell me about it — I'd love to hear what you do with it.