In the last few years, the rise of AI has completely changed how we envision the future of software development.
It started as a simple helper — a way to speed up repetitive tasks — but with the latest models, it’s clear that AI can be much more than that: a productivity booster, a code reviewer, or even a virtual pair-programming buddy.
Since it’s becoming obvious that most developers are now using AI in their daily workflow, it’s worth asking ourselves: Do our current best practices still matter in this new era?
Let’s explore that question through some well-known Clean Code principles and see whether they remain relevant when working alongside AI.
We’ll focus primarily on principles like:
- SRP (Single Responsibility Principle)
- DRY (Don’t Repeat Yourself)
- YAGNI (You Aren’t Gonna Need It)
- TDD (Test-Driven Development)
💡 Note: This article focuses mainly on the usage of agentic AI within IDEs — tools that can actively assist, refactor, and reason about your code in context.
SRP vs LOB
The Single Responsibility Principle (SRP) has long been one of the cornerstones of Clean Code. It encourages developers to keep each function or class focused on one purpose.
Sometimes, this principle has been pushed to extremes: in his book Clean Code, Uncle Bob argued that functions should be as short as possible, ideally just a few lines long.
However, modern development practices have introduced a counter-trend: Locality of Behavior (LOB).
Frameworks like React and Tailwind CSS naturally encourage colocating logic, styling, and data fetching within a single component. Instead of scattering behavior across multiple files, developers now value keeping everything needed for a feature in one place, making it easier to reason about and modify.
When you work with AI tools, the quality of their output depends heavily on the context you provide. You usually have to give the model the files relevant to your task — and every additional file increases token usage, cost, and noise.
With a strong SRP design:
- Each file or function serves a single, well-defined purpose.
- You can confidently include only the relevant files in your AI context.
- The model doesn’t waste tokens reading unrelated styling or data logic.
Let’s take the following React component, which fetches to-dos from an API and displays them with Tailwind styling:
```tsx
// TodoList.tsx
import { useEffect, useState } from "react";

type Todo = { id: number; text: string; completed: boolean };

export function TodoList() {
  const [todos, setTodos] = useState<Todo[]>([]);

  useEffect(() => {
    fetch("https://api.example.com/todos")
      .then((res) => res.json())
      .then(setTodos);
  }, []);

  return (
    <ul className="p-4 space-y-2">
      {todos.map((todo) => (
        <li
          key={todo.id}
          className={`p-2 rounded ${
            todo.completed ? "bg-green-100" : "bg-gray-100"
          }`}
          onClick={() => alert(`Clicked: ${todo.text}`)}
        >
          {todo.text}
        </li>
      ))}
    </ul>
  );
}
```
Now imagine you need to change the API endpoint to /v2/todos.
If you ask an AI model to do that, you have to provide this entire file, including UI and styling, just so it can modify one line inside the fetch call.
That’s inefficient and increases the chance the model might accidentally alter unrelated parts of your component.
Now compare the refactored version, where the fetching logic is extracted into its own file:
```ts
// useTodos.ts
export async function fetchTodos(): Promise<Todo[]> {
  const res = await fetch("https://api.example.com/todos");
  return res.json();
}
```
With this structure, the fetching logic lives in useTodos.ts.
If you want the AI to update the API endpoint, you only need to provide that one small file. That means:
- Less context
- Less noise
- Clearer intent
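To push the idea one step further, a single-purpose data file can even be verified entirely on its own. Here is a minimal sketch, assuming a hypothetical injectable `fetchImpl` parameter (not part of the original example) so the function can run without a real network or any UI context:

```typescript
// useTodos.ts (sketch): the same fetching logic, with the transport
// injectable so the file is self-contained and testable in isolation.
type Todo = { id: number; text: string; completed: boolean };

type FetchLike = (url: string) => Promise<{ json(): Promise<unknown> }>;

export async function fetchTodos(
  fetchImpl: FetchLike = fetch
): Promise<Todo[]> {
  const res = await fetchImpl("https://api.example.com/todos");
  return (await res.json()) as Todo[];
}
```

The injectable transport is purely an illustration of how far a single-purpose file can go: it can be exercised, and changed, with zero UI context attached.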
This example is intentionally small, but in a large codebase the impact compounds dramatically.
A single change — like updating an endpoint, adjusting a data format, or modifying a service call — might otherwise require passing dozens of mixed-concern files to the AI.
In other words:
SRP doesn’t just make code easier for humans to maintain — it makes it easier for AI to understand and modify accurately, because each file or module has a clear, focused purpose.
DRY Principle
Don’t Repeat Yourself (DRY) is a principle aimed at reducing repetition of information across a system.
Its core idea is simple: a single change in your logic should require a change in only one place.
Traditionally, this made code easier to maintain and less error-prone.
But with AI-assisted development, does DRY still matter if an AI can perform massive refactorings in seconds?
Actually, it matters even more.
When you ask an AI model to update or refactor your code, it only “sees” the files you include in context.
If you’ve duplicated logic across multiple files and forget to include one of them, that part simply won’t be updated — leaving you with inconsistent behavior.
You could technically index your entire codebase and feed it all to the AI, but that’s:
- time-consuming,
- expensive in tokens, and
- riskier — because the more context you give, the more the model can hallucinate or make unrelated edits.
By keeping your code DRY, you reduce the surface area the AI has to understand.
One source of truth means one place to update — whether the change comes from you or from an automated assistant.
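As a tiny, hypothetical illustration (the helper and its name are invented, not from the article): instead of repeating price-formatting logic in every component that displays money, a single function becomes the one place both you and the AI ever need to touch.

```typescript
// formatPrice.ts (hypothetical): the single source of truth for how
// prices are rendered anywhere in the app.
export function formatPrice(cents: number, currency: string = "USD"): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency,
  }).format(cents / 100);
}
```

If the display rules ever change, a prompt can point at this one small file instead of every screen that happens to show a price.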
In other words:
DRY doesn’t just prevent human mistakes anymore — it prevents AI mistakes too.
YAGNI
You Aren’t Gonna Need It (YAGNI) is the principle that has caused me the most trouble when working with AI.
Imagine you’ve developed an interface with multiple conversion methods, but you only need the first one. You might have added the others “just in case.”
From my experience, AI tools often try to use those extra methods, even though they’re irrelevant. The result? Your codebase grows in complexity with features you don’t actually need.
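A hypothetical sketch of what this looks like in code (the converter interfaces and all names here are invented for illustration):

```typescript
type Report = { title: string; rows: number[] };

// Over-engineered: toCsv and toXml were added "just in case" — exactly
// the methods an AI assistant will be tempted to wire in somewhere.
interface WideReportConverter {
  toJson(report: Report): string;
  toCsv(report: Report): string;
  toXml(report: Report): string;
}

// YAGNI: the interface declares only what is actually used today.
interface ReportConverter {
  toJson(report: Report): string;
}

export const jsonConverter: ReportConverter = {
  toJson: (report) => JSON.stringify(report),
};
```

With the lean interface, there are simply no unused methods for the model to latch onto.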
The same applies when your requirements are vague. If your prompt doesn’t clearly define the scope of what needs to be done, the AI may try to handle edge cases or scenarios that are unnecessary, adding more code than required.
That’s why it’s crucial to:
- Review AI-generated code carefully
- Ask yourself if it’s relevant
- Remove or ignore parts that aren’t needed
In other words:
YAGNI doesn’t just protect humans from over-engineering — it protects AI-assisted development from generating unnecessary complexity, keeping your codebase focused and maintainable.
The Impact of TDD
Test-Driven Development (TDD) is a methodology highly praised in the developer community, yet the majority of developers still write tests after the production code.
Nowadays, it’s common to see developers generating unit tests using AI based on existing code. This is a trap you should avoid. AI-generated tests will almost always validate the code as it exists, rather than validating the requirements.
Remember: the goal of automated testing is to ensure that your code meets the intended requirements and prevents regressions when changes are made — not just to check that the code “works as written.”
TDD and AI
When combined with AI, TDD offers a clear advantage: it defines the scope of what should be generated.
- You write tests that capture exactly what the behavior should be
- AI generates code to satisfy those tests
- This prevents unnecessary or irrelevant code from being created
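A minimal sketch of that loop, using a hypothetical `slugify` function: the assertions are written first and act as the specification; the implementation exists only to make them pass.

```typescript
// Step 1 (written first): the tests at the bottom define the exact scope.
// Step 2: slugify is implemented only until they pass — nothing more.
export function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// The specification the AI (or you) must satisfy:
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  Clean Code & AI  ") === "clean-code-ai");
```

Handing the model the assertions before any implementation leaves no room for speculative extra behavior.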
However, this only works if your tests are well-written and meaningful. Poor tests can create a false sense of security: the AI (or you) might make changes that break requirements, but because the tests still pass, you wouldn’t notice.
To mitigate this risk, consider mutation testing — a technique that intentionally introduces faults into your code to check if your tests catch them. This ensures your tests are actually capable of detecting errors, reinforcing confidence in both your code and the AI-generated output. There are great tools that automate this process, such as PIT for Java or Stryker for JavaScript, which make mutation testing accessible even in large projects.
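To make the idea concrete, here is a hand-rolled version of what a tool like Stryker does automatically (the functions are invented for illustration): it mutates an operator and checks whether any test notices.

```typescript
// Original code under test.
export function isAdult(age: number): boolean {
  return age >= 18;
}

// A "mutant" a mutation tool would generate: >= flipped to >.
export function isAdultMutant(age: number): boolean {
  return age > 18;
}

// A weak suite (no boundary case) passes against BOTH versions,
// so the mutant "survives" and exposes the missing test:
console.assert(isAdult(30) === true);
console.assert(isAdultMutant(30) === true);

// Adding the boundary case "kills" the mutant:
console.assert(isAdult(18) === true);
console.assert(isAdultMutant(18) === false);
```

A surviving mutant is the signal: it tells you precisely which behavior your tests never actually check.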
In other words:
TDD doesn’t just guide humans — it guides AI. Well-written tests define the desired behavior, prevent over-engineering, and ensure that both developers and AI work safely within the intended scope.
Set the Standard
Even though principles like SRP, DRY, and YAGNI are widely discussed in the developer community, they are far from universal in existing codebases.
This matters when using AI: LLMs are trained on vast amounts of code, much of which doesn’t follow the clean code principles you might consider acceptable.
To avoid generating “bad code,” one technique that has worked well for me is providing examples of what I consider clean.
Let's take React as an example. Many developers mix UI, data fetching, and logic in a single component. I personally prefer a structure that separates concerns:
- API client for all requests
- Custom hooks for handling operations
- Styled components in separate files
- A main component that orchestrates everything
This is likely not the default structure an LLM would propose. To guide it toward my standard, I start by prompting the AI to build the feature piece by piece, rather than asking it to generate an entire screen at once (e.g., “Create a CRUD screen for Product”).
Once the structure is in a shape I like, I can provide these components as context/examples for the LLM to follow in future generations. Over time, this effectively steers the AI toward your personal coding standards, improving both quality and consistency.
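As an illustration, a reference file handed to the model might look like this sketch (every name and path here is an assumption, not code from the article): a small API client that owns all request logic, with the transport injected so hooks and components never touch HTTP directly.

```typescript
// api/productApi.ts (hypothetical): the only layer that knows about
// endpoints. Custom hooks call this; components call the hooks.
type Product = { id: number; name: string };

type GetJson = (url: string) => Promise<unknown>;

export function createProductApi(baseUrl: string, getJson: GetJson) {
  return {
    // Deliberately narrow (YAGNI): only the operation used today.
    list: async (): Promise<Product[]> =>
      (await getJson(`${baseUrl}/products`)) as Product[],
  };
}
```

A file like this does double duty: it anchors the layering for human readers, and it is a compact, unambiguous example of "clean" for the model to imitate.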
In other words:
You can teach AI your standards the same way you teach a junior developer: by showing examples, guiding step by step, and providing clear context. This ensures the code it generates aligns with the structure and quality you expect.
Conclusion
It’s tempting to think that with AI, clean code principles might become less relevant — that the model will “figure everything out” and fix problems automatically.
While this might hold true for small, isolated codebases, the reality changes as your system grows in size and complexity.
When companies demonstrate their AI tools, they often focus on speed and simplicity, not on sustainable architecture. The examples shown — a to-do app built in minutes — don’t reflect the challenges of real-world software, where understanding the full context and maintaining the system long-term is crucial.
In practice, your job isn’t to build a toy app from scratch; it’s to build and maintain large, evolving systems — systems where only you truly understand the intent, the domain, and the trade-offs.
That’s why clean code principles and good practices are more important than ever before.
They don’t limit what AI can do — they guide it, ensuring that the code it generates remains understandable, maintainable, and aligned with your vision.