I was talking with Brian (@bdougieyo) and said I thought I was already at a higher level in the "Continuous AI maturity model".
I kinda bragged about it. But then I stopped to think harder. Am I really there?
That question is what led me to write this.
What is the Continuous AI Maturity Model?
The idea of maturity models isn’t new. Frameworks like the AI Maturity Model have long been used to describe how organizations grow in their use of artificial intelligence, moving from early experiments to full integration.
The first place I came across a Continuous AI Maturity Model was in @bekahhw's article, A Developer's Guide to Continuous AI. She framed it as a way to understand how developers and teams build up their use of Continuous AI over time. It gives us a shared language for naming where we are today and spotting what the next step in adoption might be.
So what does this look like in practice? The model breaks down into levels, each with its strengths and limitations.
Level 1 — Manual AI assistance
This is where most developers are today. You copy some code or an error message into ChatGPT or another tool. You ask a question. You get back a function or a fix. It saves time.
The strength here is speed. You can solve problems faster. You can move past blockers. But the limit is clear too. You only get value when you remember to ask. The help is disconnected from your workflow. Nothing repeats itself.
Examples of level 1
- Writing a test only when you ask the AI to write one
- Asking for a regex when you can’t remember the pattern
- Copying error logs into the AI to get a fix suggestion
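To make the regex case concrete: at level 1 you might paste "give me a regex for ISO dates" into a chat and get back something like the pattern below. The pattern itself is just an illustration supplied here, not one from the article:

```python
import re

# Hypothetical level 1 exchange: you ask the AI for a regex because you
# can't remember the pattern, e.g. "match ISO 8601 dates like 2024-05-01".
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

print(bool(ISO_DATE.match("2024-05-01")))  # True
print(bool(ISO_DATE.match("2024-13-01")))  # False: month 13 is invalid
```

The point of level 1 is exactly this shape of interaction: the answer is useful, but it only arrives because you remembered to ask.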
Level 2 — Workflow automation
Here AI starts to live inside the work itself. It takes care of repeatable tasks while you still keep oversight. The workflow runs every time, not just when you think to ask.
The strength here is consistency. Everyone benefits from the same automation. The limit is trust. You still need to review and guide the AI. It can make mistakes.
Examples of level 2
- AI adds missing documentation during a pull request review
- AI suggests changes for style and small bugs directly in the PR
- AI updates a ticket when a branch is merged
- AI generates tests when new code is pushed
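Concretely, level 2 automation often lives in CI configuration. Here is a minimal sketch of what the PR-review example could look like as a GitHub Actions workflow; `your-org/ai-review-action` and the `AI_API_KEY` secret are placeholders for whatever tool your team actually uses, not real published names:

```yaml
# Hypothetical workflow: run an AI review step on every pull request.
name: ai-pr-review
on:
  pull_request:

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder step: an AI tool that suggests style fixes and
      # flags missing documentation directly on the PR.
      - uses: your-org/ai-review-action@v1
        with:
          api-key: ${{ secrets.AI_API_KEY }}
```

Because the trigger is `pull_request`, the review runs every time, not just when someone thinks to ask, which is the defining trait of level 2.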
Level 3 — Zero intervention workflows
Here AI completes a process end to end with no human input. This is only safe today for narrow and low-risk workflows.
The strength here is scale. Work happens even when no one touches it. The limit is scope. You can’t trust AI to handle complex or high-risk work on its own.
Examples of level 3
- AI merges dependency updates after tests pass
- AI keeps a changelog updated without review
- AI closes stale issues with a clear response
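The dependency-update example is already achievable with real GitHub CLI flags: `gh pr merge --auto` queues a merge that only completes once the branch's required checks pass, so "after tests pass" is enforced by branch protection rather than by the AI itself. A sketch, with a made-up PR number for illustration:

```shell
# Queue an auto-merge on a bot-opened dependency update PR.
# With branch protection requiring CI, the merge happens only after tests pass.
gh pr merge 128 --auto --squash
```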
Why it matters
Not every developer or team is in the same place. Some are still mostly level 1. Others are testing level 2. Very few are ready for level 3. This model gives us a way to name where we are. It sets the tone for what we can aim for next without overpromising.
Moving up the levels
Moving from level 1 to level 3 in one leap is not realistic. Progression is what matters. Each step builds on what you’ve already learned.
Level 1 is where you get familiar with how your AI system behaves. You see its coding style when given your project context. You notice the way it writes documentation. You start to recognize patterns when you ask it to repeat the same tasks. At this stage, the key is learning how the AI works and where it fits.
Level 2 is when you take those patterns and set them into rules. Instead of reminding the AI every single time, you define the standards. You write down how you want code to be generated. You capture your preferred style for documentation. You build recipes that can run automatically, or on demand, to cover those repeatable workflows.
A simple example from my own use: I have told my AI agent again and again to interact with GitHub through the CLI, not by trying to read the web page. When writing issues, pull requests, or comments, I ask it to put the content in a temporary markdown file, pass that file to the CLI with the --body-file flag, and delete the file afterwards. This avoids bad inline formatting and keeps the output clean. Instead of repeating this instruction every time, I can set it once as a rule in the assistant's configuration or in an AGENTS.md file, an emerging convention for exactly this kind of standing instruction.
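That rule translates to a few real GitHub CLI commands. A sketch of the pattern, assuming `gh` is installed and using a made-up PR number for illustration:

```shell
# 1. Write the comment body to a temporary markdown file.
BODY_FILE="$(mktemp)"
cat > "$BODY_FILE" <<'EOF'
Thanks for the review! I have addressed the formatting issues.
EOF

# 2. Pass the file to the CLI instead of inlining the text,
#    which avoids shell-quoting and inline formatting problems.
gh pr comment 123 --body-file "$BODY_FILE"

# 3. Clean up the temporary file.
rm "$BODY_FILE"
```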
All of this compounds in level 2. The more rules you set, the more the system adapts. If you’re using a tool like Continue that collects development data as you build, you get even more leverage. It starts to feel like the assistant is learning with you, while you also fine-tune it with your rules to match your taste.
Level 3 becomes possible only after enough usage at level 2. By then you can measure something important: the intervention rate. This is how often you still need to step in and fix the AI's output. If the rate is high, you are not ready. But if over time the rate drops, because your rules are solid and the assistant is using project data well, then you have a system that can safely run end-to-end workflows without oversight.
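"Intervention rate" isn't defined formally here, but a simple way to track it is as the fraction of automated runs you had to step in and correct. A minimal sketch; the 5% threshold is an arbitrary assumption, not a number from the article:

```python
def intervention_rate(total_runs: int, interventions: int) -> float:
    """Fraction of AI workflow runs that needed a human fix."""
    if total_runs == 0:
        raise ValueError("no runs recorded yet")
    return interventions / total_runs

# Example: over 200 automated runs, you stepped in 8 times.
rate = intervention_rate(200, 8)
print(f"{rate:.1%}")  # 4.0%

# Hypothetical readiness check: only consider level 3 once the rate
# stays below a threshold you trust (5% here is an arbitrary choice).
ready_for_level_3 = rate < 0.05
```

Tracking this number over weeks of level 2 usage gives you evidence, rather than a feeling, about whether a workflow is safe to run unattended.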
The key is that level 3 is not magic. It only works because you built the foundation through repeated use, feedback, and rules at level 2.
Reflection
When I told Brian I was at level 2, maybe even 3 haha, I wanted to believe it. But the truth is I’m still mostly at level 1 with some bits of level 2 sprinkled in. I prompt the AI, I have a few rules, but I still clean things up often.
And that’s okay. The point isn’t to climb as fast as possible. It’s to know where you are, notice your improvements, and keep building. Each level builds on the one before. The more you use AI with care, the more you prepare it — and yourself — for higher levels of maturity.
The Continuous AI Maturity Model isn’t about chasing some end state. It’s about knowing where you stand, what’s working for you, and what step makes sense next.