Ben Dechrai

Originally published at bendechr.ai

How AI-First Architecture Made Me 3x Faster: The Design Decisions That Changed Everything

After three months of building with AI-first architecture, I'm shipping features 3x faster than before. But not for the reason most people think. The secret isn't better AI prompts - it's returning to engineering practices we abandoned decades ago: specifications, clear boundaries, and development rigor that used to be called "waterfall overhead."

Why This Matters Right Now

If you're using Claude, Cursor, or any AI coding assistant, you've probably hit the same frustrations I did:

  • AI generates code that almost works but breaks existing patterns
  • Context switching between frontend/backend confuses the AI
  • You spend more time fixing AI mistakes than you save

The solution isn't better prompts - it's architecture designed for AI comprehensibility.

What You'll Learn

In this two-part series, I'll show you:

  • Part 1: The 6 architectural decisions that transformed my AI development workflow
  • Part 2: The development environment setup, real productivity gains, and how to spot when AI goes wrong

"But I Don't Trust AI to Write My Code"

I get it. Even if you never let AI write a single line of code, these architectural patterns make your codebase clearer for human developers too. Clear boundaries, consistent patterns, explicit dependencies - these aren't "AI tricks," they're just good software engineering amplified.

The Core Hypothesis

I've been writing software for over twenty-five years, and I've watched countless architectural trends come and go. But AI-assisted development isn't just another trend - it fundamentally changes how we should think about code organisation. The traditional arguments for monoliths versus microservices, or tight coupling versus separation of concerns, need to be re-evaluated through a new lens: How does this architecture perform when an AI is reading, understanding, and generating code?

My hypothesis was specific: AI models work 3x more effectively with clear boundaries than with blended architectures. A frontend that only does frontend work. An API that only does API work. No magic, no clever abstractions that require context the AI doesn't have.

The Architecture: Radical Separation of Concerns

I landed on a monorepo structure with three completely independent packages. When I say independent, I mean it. Each package could theoretically be extracted into its own repository tomorrow without breaking anything.

The structure looks like this: a frontend application that runs as a static site on Netlify, a backend API that runs in a Docker container on Railway, and a shared UI component library that both consume. The frontend and backend communicate exclusively through HTTP APIs. No shared database connections, no importing backend code into the frontend, no clever webpack tricks to blur the boundaries.
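To make that HTTP-only boundary concrete, here's a rough sketch of what a frontend API client in this setup looks like. The file name, the Article shape, and the VITE_API_URL variable are illustrative rather than copied from my repo:

// api/articles.ts - the only way the frontend talks to the backend (illustrative sketch)
const API_URL = import.meta.env.VITE_API_URL // e.g. the Railway-hosted API

export interface Article {
  id: string
  title: string
  body: string
}

export async function fetchArticle(id: string): Promise<Article> {
  // Plain HTTP - no imported backend code, no shared database client
  const res = await fetch(`${API_URL}/articles/${id}`)
  if (!res.ok) throw new Error(`Failed to fetch article ${id}: ${res.status}`)
  return res.json()
}

Nothing in the frontend package ever imports from the API package - that fetch boundary is the whole contract.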

Excessive for a portfolio site? Absolutely. Next.js could handle this in one codebase. But I wasn't optimising for simplicity - I was optimising for AI comprehension.

How Frontend-Only Architecture Eliminates AI Confusion

Next.js is the obvious choice for React applications in 2025 - it has the ecosystem, tooling, and deployment optimisations. TanStack Start just hit version one with compelling type-safety. So why didn't I choose them?

Next.js is a full-stack React framework with server-side rendering, static generation, and API routes built-in. TanStack Start is a newer full-stack framework with end-to-end type safety.

Both are full-stack frameworks that blur frontend and backend boundaries. When you open a component, you might be looking at client-side code, server-side code, or static generation logic. That mental model creates cognitive overhead - for humans and AI alike.

TanStack Start's type-safety is impressive, but it explicitly blurs those lines with server functions callable from the frontend. That's magical when it works, but antithetical to radical separation.

Vite with TanStack Router gave me something different. It gave me a frontend that is purely a frontend. When a coding assistant opens a file in the app directory, there's zero ambiguity about what that code does - it runs in the browser, it consumes APIs, it renders UI. That's it. The AI doesn't need to understand server-side rendering modes, data fetching strategies, or when code runs where. It just needs to understand React components built with Radix UI primitives that fetch data and render UI.

Vite is a build tool that serves your code during development and bundles it for production. Unlike Next.js, it doesn't add server-side logic - it's purely for building frontend apps. TanStack Router handles client-side routing (changing pages without server requests).

The simplicity paid dividends immediately. When I asked the coding assistant to implement a new page, it could focus entirely on the presentation logic without worrying about data fetching strategies or server-side concerns. Those live in the API layer where they belong.
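For illustration, a page in this setup is just a React component that calls the API client and renders what comes back. The names are hypothetical, and I've used a bare fetch-on-mount here for brevity rather than showing any particular TanStack Router wiring:

// app/pages/ArticlePage.tsx - purely browser code (illustrative sketch)
import { useEffect, useState } from 'react'
import { fetchArticle, type Article } from '../api/articles'

export function ArticlePage({ id }: { id: string }) {
  const [article, setArticle] = useState<Article | null>(null)

  useEffect(() => {
    // Fetch from the API - the only data-loading concern the AI has to reason about
    fetchArticle(id).then(setArticle).catch(console.error)
  }, [id])

  if (!article) return <p>Loading…</p>
  return (
    <article>
      <h1>{article.title}</h1>
      <p>{article.body}</p>
    </article>
  )
}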

Building APIs That AI Can Navigate Blindfolded

For the backend, I chose Hono - a lightweight web framework with exactly one job: routing HTTP requests to handlers. It's fast, it's simple, and it doesn't try to do anything else.

Hono is a web framework (like Express) that routes HTTP requests to handler functions. What makes it special: it runs anywhere - Node, Deno, Bun, edge workers - with the same code.
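As a minimal sketch (the route and the Node wiring are illustrative, not from my API), a Hono app is nothing more than handlers attached to routes, and the same exported app can be handed to Node, Bun, or an edge runtime:

import { Hono } from 'hono'

const app = new Hono()

// A trivial route: Hono's only job is mapping requests to handlers
app.get('/health', (c) => c.json({ ok: true }))

// The same app object works across runtimes; for Node you'd wrap it, e.g.:
// import { serve } from '@hono/node-server'
// serve({ fetch: app.fetch, port: 3000 })
export default app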

I paired Hono with a strict service layer pattern. Every route handler is essentially a thin wrapper that does three things: validates the request, calls a service function, and returns a response. The actual business logic lives in service files. Database access happens through Prisma. File system operations happen through dedicated utility modules.

This pattern is hardly novel - it's been a best practice in backend development for decades. But it matters enormously for AI-assisted development. When a coding assistant needs to add a new API endpoint, the pattern is crystal clear: create a route that validates input, create a service function that contains the logic, write tests for the service function. The AI doesn't need to make architectural decisions. It follows the established pattern.

The service layer also acts as a natural checkpoint against duplicate implementations. When all business logic lives in clearly named service functions, a coding assistant is more likely to discover existing functionality before creating new implementations. Instead of scattered utility functions across multiple files, the service layer provides a central place where AI can find existing solutions to common problems.

// Route handler - thin wrapper (routes/articles.ts)
import { Hono } from 'hono'
import * as ArticleService from '../services/ArticleService'

const app = new Hono()

app.get('/articles/:id', async (c) => {
  const id = c.req.param('id')                      // extract and validate input
  const article = await ArticleService.getById(id)  // delegate to the service layer
  return c.json(article)                            // return a response
})

// Service function - contains the logic (services/ArticleService.ts)
import { prisma } from '../lib/prisma'          // shared Prisma client (path illustrative)
import { NotFoundError } from '../lib/errors'   // custom error type (path illustrative)

export async function getById(id: string) {
  const article = await prisma.article.findUnique({
    where: { id },
    include: { tags: true }
  })
  if (!article) throw new NotFoundError()
  return article
}

I considered using Fastify for performance, but Hono's simplicity won out. I considered skipping the service layer and putting logic directly in route handlers for less ceremony, but the separation proved invaluable when testing and when asking a coding assistant to implement features.
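To show what that buys you at test time, here's a hedged sketch of a service-level test: the business logic gets exercised without any HTTP involved. It assumes Vitest and a ../lib/prisma module exporting the Prisma client - both assumptions for the example, not a prescription:

// services/ArticleService.test.ts - testing logic without HTTP (illustrative sketch)
import { describe, expect, it, vi } from 'vitest'

// Replace the real Prisma client with a stub; vi.mock factories are hoisted above imports
vi.mock('../lib/prisma', () => ({
  prisma: { article: { findUnique: vi.fn() } }
}))

import { prisma } from '../lib/prisma'
import { getById } from './ArticleService'

describe('ArticleService.getById', () => {
  it('returns the article when it exists', async () => {
    vi.mocked(prisma.article.findUnique).mockResolvedValue({ id: '1', title: 'Hello', tags: [] } as any)
    await expect(getById('1')).resolves.toMatchObject({ id: '1' })
  })

  it('throws when the article is missing', async () => {
    vi.mocked(prisma.article.findUnique).mockResolvedValue(null)
    await expect(getById('missing')).rejects.toThrow()
  })
})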

Database and State: PostgreSQL with Prisma

For the database, I went with PostgreSQL and Prisma. This was less controversial - Prisma has become the de facto ORM for TypeScript projects, and for good reason. The type safety is excellent, migrations are straightforward, and the Prisma Client API is intuitive.

Prisma is an ORM (Object-Relational Mapper) that lets you work with databases using TypeScript instead of SQL. You define your data models, and Prisma generates type-safe database queries and handles migrations for you.
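If you haven't used Prisma, the payoff looks roughly like this - the Article model and its published and createdAt fields are hypothetical, but the point is that every query is checked against the schema's generated types:

import { PrismaClient, type Article } from '@prisma/client'

const prisma = new PrismaClient()

// Fully typed against the schema: misspell a field name and the compiler
// (and therefore the AI's feedback loop) catches it immediately
async function listPublished(): Promise<Article[]> {
  return prisma.article.findMany({
    where: { published: true },
    orderBy: { createdAt: 'desc' }
  })
}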

What mattered more was how I structured database access. I use Crystal DBA's Postgres MCP server, which gives a coding assistant direct access to the database. That means the assistant can query the database itself when it needs context about the data model or existing data. It can ask "what blog posts exist?" or "show me the schema for the users table" without me needing to copy and paste SQL results.

This turned out to be one of the most powerful decisions I made. When implementing a new feature, the coding assistant could inspect the actual database state to understand relationships and existing data patterns. This dramatically reduced the back-and-forth of "what does this data look like?" questions.

The Styling Solution: Centralized CSS with Semantic Classes

Here's where I went against the grain: I didn't use Tailwind CSS. In 2025, that's almost heretical - Tailwind has won the CSS framework wars. But it makes centralized theming harder, not easier.

Tailwind encourages you to compose utility classes directly in your JSX. This is powerful and fast, but it scatters styling decisions across your entire codebase. Want to change your primary color? You need to find every instance of bg-blue-600 and update it. Yes, you can use the Tailwind config to customize colors, padding, border-radius and more, but at what point does your Tailwind config just become a central definition of how to style elements? And what if you want all your form elements to have a rounded corner they never had before? Now you're editing multiple files.

Instead, I went with a centralized CSS approach using semantic class names. All styling lives in a single global stylesheet with clear, meaningful class names like .button-primary, .form-input, and .card-container. Components use these semantic classes in their JSX, creating a clean separation between styling definitions and component logic.

This approach gives you the best of both worlds: centralized styling control with semantic meaning. Want to change your primary button color? Update one CSS rule. Need to add rounded corners to all form elements? One change in the stylesheet affects everything. The styling is centralized, predictable, and maintainable.

For AI-assisted development, this pattern is incredibly clear: when a coding assistant needs to create a new component, it uses semantic class names that describe what elements are, not how they look. The AI doesn't need to make styling decisions - it just applies the appropriate semantic classes and the global stylesheet handles the visual presentation.

// Instead of scattered Tailwind classes:
<button
  className={
    "bg-blue-600 hover:bg-blue-700 px-4 py-2 " +
    "rounded font-medium text-white"
  }
>
  Submit
</button>

// Use centrally defined semantic classes:
<button className="button-primary">
  Submit
</button>

I considered CSS Modules, but they create the same scattering problem as Tailwind - styling decisions spread across multiple files. I also considered styled-components and other CSS-in-JS solutions. While the developer experience is excellent, CSS-in-JS adds runtime overhead, requires additional build configuration, and creates another abstraction layer that both humans and AI need to understand.

I also considered using shadcn/ui, which many people suggested. The component quality is excellent, and the copy-paste model means you own the code. But shadcn/ui is tightly coupled to Tailwind CSS. You can't use it without buying into the Tailwind ecosystem, which conflicted with my centralized styling goal.

The UI Library: Radix Primitives with Custom Styling

For the shared component library, I used Radix UI primitives as the foundation. If you're not familiar with Radix, it provides unstyled, accessible components that handle all the complex interaction patterns - dropdowns, dialogs, tooltips, that sort of thing. They handle keyboard navigation, screen reader support, focus management, and all the accessibility concerns that are easy to get wrong.

Radix UI provides headless (unstyled) component primitives with built-in accessibility. You get complex interactions like dropdowns and dialogs that work perfectly with keyboards and screen readers, then style them however you want.

I wrapped these Radix primitives in my own styled components that use the same centralized CSS approach as the main application. This gave me accessible components with my own visual design language, without the bloat of a full component library.

The key insight here is that Radix handles the hard part - accessibility and interaction patterns - while my styling layer handles the easy part - colors, spacing, and typography. This separation of concerns made it trivial for the coding assistant to generate new components. The pattern was always the same: wrap a Radix primitive, apply semantic CSS classes from the global stylesheet, export with a clear TypeScript interface.
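As a rough sketch of that pattern (the component shape and most class names are illustrative, not pulled from my actual library), a wrapped Radix dialog looks something like this:

// ui/Dialog.tsx - Radix handles accessibility, semantic classes handle the look (illustrative)
import * as RadixDialog from '@radix-ui/react-dialog'
import type { ReactNode } from 'react'

export interface DialogProps {
  trigger: ReactNode
  title: string
  children: ReactNode
}

export function Dialog({ trigger, title, children }: DialogProps) {
  return (
    <RadixDialog.Root>
      <RadixDialog.Trigger className="button-secondary">{trigger}</RadixDialog.Trigger>
      <RadixDialog.Portal>
        <RadixDialog.Overlay className="dialog-overlay" />
        <RadixDialog.Content className="dialog-content">
          <RadixDialog.Title className="dialog-title">{title}</RadixDialog.Title>
          {children}
          <RadixDialog.Close className="button-primary">Close</RadixDialog.Close>
        </RadixDialog.Content>
      </RadixDialog.Portal>
    </RadixDialog.Root>
  )
}

The AI can copy this shape for any other primitive: swap the Radix import, apply semantic classes, export a typed interface.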

I considered building everything from scratch without Radix, but accessibility is genuinely hard to get right. I also considered using a full component library like Chakra UI or Material UI, but they come with heavy styling opinions that conflict with the centralized CSS approach. Radix gave me the accessibility foundation without forcing visual decisions.


Continue to Part 2, where I dive into the development environment setup, testing strategy, deployment decisions, and the real results. How do git worktrees enable parallel AI experiments? What are the actual productivity numbers? And what do you watch for when reviewing AI-generated code?


Have a burning question or comment? Find me on LinkedIn or Bluesky. I'd love to hear from you.
