Vishnu Viswambharan
AI Can Write Code. It Still Doesn’t Know Your Company.

The gap that shows up only in real systems

You can now ask an AI to build a feature and get a reasonably clean implementation within seconds. The structure looks right, it follows familiar patterns, and in many cases it is good enough as a starting point.

What becomes visible only when you try to use that code in a real system is different: the code doesn’t fully fit.

A familiar problem in a new form

This is not very different from hiring a strong developer and asking them to start contributing without proper onboarding. They understand the language and the frameworks, and they can solve problems. But they don’t yet understand how things are done in your system.

They don’t know which internal libraries are expected, which patterns are actually followed in practice, or which earlier decisions still shape the system. So their code is technically correct, but it needs adjustment. Not because it is wrong, but because it does not align.

AI is currently in a very similar position. It has strong general knowledge of software development, but it does not have access to the context that defines how software is actually built inside a company.

The instinct to provide more data

The natural reaction to this gap is to provide more context. Teams start connecting documentation, repositories, tickets, and internal tools, assuming that better access will lead to better alignment.

In practice, this often introduces a different problem. Internal knowledge is rarely clean or consistent. Some documents describe systems that have since evolved beyond them. Some decisions were never documented. Some patterns exist in reality but not in writing. In many cases, there are multiple valid ways to do the same thing, without a clearly preferred one.

When all of this is exposed to AI, it does not automatically become clearer. It simply becomes available.

When more information reduces clarity

This is similar to what happens when a new developer is given access to all onboarding material without guidance. They now have more information, but it becomes harder to understand what actually matters.

It is not obvious what is current, what is deprecated, or what is actually followed in practice. Without that clarity, they rely on judgment and guesswork.

AI behaves in the same way. It produces answers that are informed and structured, but still slightly misaligned with the system.

Why the output still feels “off”

This is why AI-generated code often looks correct while still requiring adjustment. It follows general best practices, but may bypass internal abstractions, duplicate existing platform capabilities, or introduce patterns that are inconsistent with the rest of the codebase.

A common example is service-to-service communication. AI might suggest a standard HTTP client setup, while many companies already provide an internal client that handles retries, authentication, and observability. The generated solution works, but it bypasses important parts of the platform.
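
The contrast can be sketched in a few lines. This is a hypothetical illustration, not any real platform's API: `ServiceClient` stands in for whatever internal client a company might provide, and the cross-cutting concerns are only noted as comments.

```python
# Hypothetical sketch of the gap described above. ServiceClient is an
# invented stand-in for a company-internal platform client; it is not a
# real library.

import urllib.request


def fetch_orders_generic(base_url: str) -> bytes:
    # What a model typically generates: a bare HTTP call with no retries,
    # no auth injection, and no trace propagation.
    with urllib.request.urlopen(f"{base_url}/orders") as resp:
        return resp.read()


class ServiceClient:
    # Stand-in for an internal client. In a real codebase this would wrap
    # retry/backoff, service discovery, auth tokens, and observability.
    def __init__(self, service_name: str):
        self.service_name = service_name

    def get(self, path: str) -> dict:
        # Sketch only: a real implementation would attach an Authorization
        # header, tracing headers, and retry logic here.
        return {"service": self.service_name, "path": path}


def fetch_orders_platform(client: ServiceClient) -> dict:
    # The aligned version: same request, routed through the platform
    # abstraction so cross-cutting concerns stay centralized.
    return client.get("/orders")
```

Both versions "work", which is exactly why the misalignment is hard to spot in review: the generic one only reveals its cost when retries, auth rotation, or tracing break in production.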

Another example shows up in event-driven systems. AI may generate a clean producer or consumer using a generic library, while the company expects a specific abstraction that enforces schema validation, tracing, or error handling. Again, nothing is technically wrong, but it does not align with how the system is designed to operate.
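
A minimal sketch of that second gap, with invented names throughout (`EventPublisher` and `ORDER_SCHEMA` are illustrative, not a real internal abstraction):

```python
# Hypothetical contrast: a generic publish call versus an internal wrapper
# that enforces schema validation before anything leaves the service.

import json

# Toy schema: the set of fields every order event must carry.
ORDER_SCHEMA = {"order_id", "amount"}


def publish_generic(topic: str, event: dict) -> str:
    # What a model tends to generate: serialize and send, with no
    # validation, tracing, or standardized error handling.
    return f"{topic}:{json.dumps(event, sort_keys=True)}"


class EventPublisher:
    # Stand-in for a company abstraction that refuses malformed events.
    def publish(self, topic: str, event: dict) -> str:
        missing = ORDER_SCHEMA - event.keys()
        if missing:
            # Fail fast instead of emitting bad data downstream.
            raise ValueError(f"event missing fields: {sorted(missing)}")
        return f"{topic}:{json.dumps(event, sort_keys=True)}"
```

The generic producer happily publishes an event missing `amount`; the wrapper rejects it at the boundary, which is the kind of invariant a company expects its abstraction to enforce.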

These are not obvious errors. They appear as small inconsistencies that accumulate over time and show up during reviews and integration.

Why tooling alone does not solve it

Most companies are currently trying to address this through tooling. Wrapper CLIs, predefined workflows, skills, and integrations with internal systems are becoming common. These approaches improve usability and reduce friction when interacting with AI.

They are useful, but they do not fully address the underlying issue.

Because the problem is not just access to internal knowledge. It is the usability and clarity of that knowledge.

If a system has multiple competing patterns, AI will reflect that. If documentation is outdated, AI will use it. If conventions are implicit, AI will not infer them reliably.

Providing more data does not resolve these issues. It often amplifies them.

The shift from data to clarity

The shift that seems necessary is not about making AI better at coding. It is about making internal systems easier to understand.

In practice, AI-generated code often needs adjustment before it can be merged, not because it is incorrect, but because it does not align with internal expectations. That gap becomes smaller in systems where defaults are clear and patterns are consistent.

Organizations that are seeing better outcomes with AI tend to have clearer defaults, more consistent patterns, and a smaller gap between what is documented and what is actually practiced.

In those environments, AI outputs require less correction. Not because the model behaves differently, but because the system itself is easier to reason about.

What this means for platform and DevEx teams

This has implications for platform and DevEx teams. The focus is no longer only on enabling developers through tools and automation. It also includes reducing ambiguity, defining clear defaults, and ensuring that internal knowledge remains usable and current.

It is less about documenting everything, and more about making the important things clear.

A second consumer of internal systems

One way to think about this is that companies now have two consumers of their internal systems.

Humans, who can ask questions, resolve ambiguity, and build context over time.

And AI, which depends on the structure and clarity of the information it is given.

Most systems today are designed for the first. Very few are ready for the second.

Where this is heading

The companies that will get more value from AI are not necessarily the ones adopting more tools or building more integrations. They are the ones that make their internal knowledge easier to consume, more consistent, and closer to how things are actually done.

At that point, AI stops behaving like a capable but un-onboarded developer and starts producing output that fits more naturally into the system.

That difference is subtle, but it changes how useful AI becomes in day-to-day engineering work.


Disclaimer: The views and ideas expressed in this article are my own and do not represent those of my current or previous employers.
