Mushegh Gevorgyan

AI writes code fast. But who checks the architecture?

We're in the age of AI-assisted coding. Claude Code, Cursor, Codex — these tools can build entire projects in hours. And they're getting better every month.

But there's a problem nobody's talking about enough: AI coding tools don't think about architecture.

They generate code that works. It passes tests. It compiles. But over weeks and months, your codebase quietly accumulates architectural debt that humans don't notice, and even AI code reviews miss — circular dependencies between services, data layer code calling the API layer directly, god modules with 50 exports, dead code that's no longer used.

I started noticing this in my own projects. AI agents were shipping code faster than ever, but when I reviewed the generated code closely, it was a mess. Services that should never talk to each other were tightly coupled. Layers that should stay separate were bleeding into each other. And no one was watching the system as a whole: not me, and not the AI writing the code.

So I built TrueCourse.

What it does

TrueCourse is an open-source CLI + Web UI that analyzes your JavaScript/TypeScript codebase for the structural and semantic issues that humans and AI code reviewers miss.

Architecture violations — circular dependencies, layer violations, god modules, dead modules, tight coupling between services

Code intelligence — empty catch blocks that swallow errors, race conditions from shared mutable state, functions whose names don't match their behavior, security anti-patterns like Math.random() for tokens or eval() with dynamic input

Cross-service flow tracing — automatically detects request flows across service boundaries and visualizes them as end-to-end traces

Database analysis — detects ORMs (Prisma, TypeORM, Drizzle, etc.), generates ER diagrams, checks for missing indexes and schema issues
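To make one of the security anti-patterns above concrete, here is a minimal sketch (my own illustration, not TrueCourse's code) of the `Math.random()`-for-tokens issue and its fix:

```typescript
import { randomBytes } from "node:crypto";

// Anti-pattern: Math.random() is not cryptographically secure,
// so tokens derived from it can be predicted by an attacker.
function weakToken(len: number): string {
  return Array.from({ length: len }, () =>
    Math.floor(Math.random() * 16).toString(16)
  ).join("");
}

// Fix: draw token material from the CSPRNG in node:crypto.
// Each byte becomes two hex characters.
function secureToken(bytes: number): string {
  return randomBytes(bytes).toString("hex");
}
```

The two functions look interchangeable at a glance, which is exactly why this class of bug slips past line-by-line review.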

It combines deterministic rules (AST-based static analysis) with AI-powered review for deeper semantic issues. You choose the LLM provider, or use Claude Code with no API key needed.
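To give a feel for the deterministic side, a circular-dependency rule boils down to a cycle search over the module import graph. This is an illustrative sketch of the technique only; TrueCourse's actual implementation may differ:

```typescript
// Sketch: detect circular dependencies in an import graph with DFS.
// Keys are modules, values are the modules they import.
type ImportGraph = Record<string, string[]>;

function findCycles(graph: ImportGraph): string[][] {
  const cycles: string[][] = [];
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // nodes fully explored

  function dfs(node: string, path: string[]): void {
    if (done.has(node)) return;
    if (visiting.has(node)) {
      // We re-entered a node on the current path: record the loop.
      cycles.push([...path.slice(path.indexOf(node)), node]);
      return;
    }
    visiting.add(node);
    for (const dep of graph[node] ?? []) dfs(dep, [...path, node]);
    visiting.delete(node);
    done.add(node);
  }

  for (const node of Object.keys(graph)) dfs(node, []);
  return cycles;
}
```

The value of pairing this with an LLM is that the graph walk is cheap and exact, while the LLM handles questions no graph algorithm can answer, like whether a function's name matches its behavior.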

How it works

One command:

npx truecourse analyze

On first run, it starts a local server, sets up an embedded PostgreSQL database (no Docker needed), and walks you through configuring an LLM provider. It works with Claude Code (no API key needed), or your own Anthropic/OpenAI keys.

Violations print in your terminal, and the web UI opens automatically with an interactive dependency graph. Click any node to see its connections, violations, and source code with inline markers.

Built for humans and AI agents

This was a deliberate design choice. TrueCourse has two interfaces:

The Web UI is for developers who want to explore and understand their codebase visually — dependency graphs, inline code viewer, analytics dashboard, diff mode.

The CLI is for AI coding agents, CI pipelines, and automation. It outputs structured data that agents can consume. You can run analyze from Claude Code and review results in the UI.

Both share the same analysis engine and database.

Diff mode

This is the feature I use most. Run:

npx truecourse analyze --diff

It compares your uncommitted changes against the last analysis and shows you exactly which violations your changes introduce or fix. The graph dims unaffected nodes and highlights what you touched. It's like a pre-commit architecture check.
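Since it behaves like a pre-commit architecture check, you could wire it up as an actual git hook. This is a hypothetical `.git/hooks/pre-commit` sketch, and it assumes (not verified here) that `analyze --diff` exits nonzero when new violations are found:

```shell
#!/bin/sh
# Hypothetical pre-commit hook: block the commit if the diff
# analysis reports new violations (assumes a nonzero exit code).
npx truecourse analyze --diff || {
  echo "TrueCourse found new architecture violations; commit aborted."
  exit 1
}
```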

What's next

TrueCourse is MIT licensed and very early (v0.1.x). Python support is coming. Custom rule generation is planned. If you're interested in contributing, there are open "good first issue" tickets on GitHub.

I'd love feedback — especially from anyone dealing with architecture debt in AI-generated codebases. Try it out and let me know what breaks.

GitHub: https://github.com/truecourse-ai/truecourse
npm: https://www.npmjs.com/package/truecourse
