Architecture-as-code is an area I think has been underserved by the LLM boom — most of the attention went to code generation, but the highest-leverage use case might actually be architecture documentation, drift detection, and validation.
What I've found interesting in practice: LLMs are genuinely useful for bridging the gap between the architecture diagram and the running system. You can feed in your IaC, your service mesh config, and your deployment manifests, then ask "does the actual system reflect the intended architecture?" — and get surprisingly useful gap analysis. The challenge is that LLMs will confidently describe an architecture that should exist based on the docs, not the one that actually exists, so grounding is critical.
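The grounding point above can be sketched in code: instead of pasting architecture docs into the prompt, assemble the context from the actual IaC and deployment files on disk, so the model is answering about what exists rather than what the docs claim. This is a minimal illustration — the glob patterns and directory layout are assumptions, not any particular tool's convention:

```python
from pathlib import Path

def grounded_context(root: str, patterns: tuple[str, ...] = ("*.tf", "*.yaml")) -> str:
    """Concatenate real IaC/manifest files, each prefixed with its path,
    to use as grounding context for a gap-analysis prompt."""
    chunks = []
    for pattern in patterns:
        for path in sorted(Path(root).rglob(pattern)):
            chunks.append(f"# file: {path}\n{path.read_text()}")
    return "\n\n".join(chunks)

# The resulting string would be prepended to a question like
# "does the actual system reflect the intended architecture?"
```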
One pattern that's worked well: treating the LLM as a "second reader" during architecture review rather than a generator. Instead of asking it to produce diagrams from scratch, you give it the existing C4 model and ask "what assumptions is this architecture making that aren't explicit?" That adversarial framing tends to surface the implicit dependencies and failure modes that normally only emerge in production.
Curious how you're handling the staleness problem — architecture docs tend to go stale faster than the code they describe. Are you regenerating the architectural view on each commit, or is this more of a point-in-time analysis workflow?
Thank you for the detailed comment.
I completely agree that the LLM boom has largely passed architecture by. Yet if we treat architecture as code, the same high level of automation that agents provide in everyday coding becomes available to us.
Specifically, to address the problem of architectural drift, we can now add a stage gate to the CI/CD pipeline that checks code changes against the C4 model of the system. This wasn't practical before, but an AI agent can now handle the task quite easily. If it finds discrepancies, it can notify the architect, or even open a merge request against the architecture model and block deployment to production.
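The deterministic core of such a stage gate can be sketched as follows. This is a hedged illustration, not a specific tool's API: it assumes relationships are written `source -> destination` in a Structurizr-style C4 DSL, and that each service's upstream dependencies can be extracted from deployment manifests into a simple mapping. An agent would sit on top of this to explain the discrepancies, but the comparison itself is plain set arithmetic:

```python
import re

def declared_edges(c4_dsl: str) -> set[tuple[str, str]]:
    """Extract 'source -> destination' relationships from a
    Structurizr-style C4 model (assumed syntax)."""
    return set(re.findall(r"(\w+)\s*->\s*(\w+)", c4_dsl))

def observed_edges(deps: dict[str, list[str]]) -> set[tuple[str, str]]:
    """Build edges from per-service dependency lists derived from
    deployment manifests (extraction step not shown)."""
    return {(svc, dep) for svc, targets in deps.items() for dep in targets}

def drift(c4_dsl: str, deps: dict[str, list[str]]) -> dict[str, set]:
    """Report edges present in the code but absent from the model,
    and vice versa. A real gate would fail the build when
    'undocumented' is non-empty."""
    want, have = declared_edges(c4_dsl), observed_edges(deps)
    return {"undocumented": have - want, "missing": want - have}

model = """
    api -> orders
    orders -> postgres
"""
deps = {"api": ["orders"], "orders": ["postgres", "redis"]}
report = drift(model, deps)
# → {'undocumented': {('orders', 'redis')}, 'missing': set()}
```

The "missing" bucket is worth keeping: an edge that exists in the model but not in the code often means the architecture diagram is aspirational, which is exactly the staleness problem raised above.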
In our department, we currently run regular architectural governance sessions where, among other things, we identify mismatches between the code and the architecture. Our next step is to introduce AI checks into the CI/CD pipeline. Architectural governance will remain, but I expect its function will shift more toward reviewing complex cases rather than hunting for discrepancies between the code and the architecture.