DEV Community

Thomas Jamet

Why I Built a Multi-LLM Development Framework (And Why You Might Need One Too)

The Problem Nobody Talks About

There's a lot of content about building apps with AI coding assistants. "Build a full-stack app in 10 minutes!" "AI writes your entire codebase!"

But here's what nobody discusses: maintaining that code for 6 months.

After working with AI assistants across dozens of projects, I noticed a pattern: every project eventually drifted into chaos. Inconsistent file layouts, AI agents constantly asking "where is this file?", and technical debt that stayed invisible until it wasn't.

This is what I call "vibe coding" — ad-hoc development without organizational structure. It works for demos. It fails for production.

The Core Insight

AI assistants are great at generating code but provide no organizational structure. They respond to prompts, but they don't enforce patterns. Each interaction starts from minimal context.

This means:

  • Every project has a unique (chaotic) structure
  • AI agents waste cycles re-orienting themselves
  • Knowledge doesn't transfer between projects
  • Cognitive overhead compounds with scale

The Solution: Consistent Patterns

I built the Multi-LLM Development Framework — an open-source Python tool that creates consistent workspace structures across Gemini, Claude, and Codex.

The idea is simple: if all your projects follow the same patterns, you spend less time explaining structure to AI (and yourself) and more time actually building.

What It Does

  • Tiered Workspaces: Lite/Standard/Enterprise matched to project complexity
  • Provider-Agnostic: Works with Gemini, Claude, or Codex
  • Standardized Structure: Predictable directory layouts
  • Skills + Workflows: Reusable capabilities and orchestrated sequences
  • Session Management: Makefile-based workflow control
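To make the "predictable layout" idea concrete, here is a minimal sketch of what a tiered bootstrap step could look like. All of the names below (the tier-to-directory mapping, the `bootstrap` function, the provider context file) are illustrative assumptions, not the framework's actual API:

```python
from pathlib import Path

# Hypothetical tier layouts -- illustrative only, not the framework's real structure.
TIER_DIRS = {
    1: ["src", "docs"],                                # Lite
    2: ["src", "docs", "skills", "workflows"],         # Standard
    3: ["src", "docs", "skills", "workflows", "ops"],  # Enterprise
}

def bootstrap(name: str, tier: int = 2, provider: str = "gemini") -> Path:
    """Create a workspace with the same predictable layout for every project."""
    root = Path(name)
    for d in TIER_DIRS[tier]:
        (root / d).mkdir(parents=True, exist_ok=True)
    # Record the provider so an agent knows which context file to load first.
    (root / f"{provider.upper()}.md").write_text(f"# {name} context\n")
    return root

ws = bootstrap("myproject", tier=2, provider="claude")
print(sorted(p.name for p in ws.iterdir()))
```

The point isn't this exact code, it's the invariant: given a tier, an agent (or you, six months later) can predict where everything lives without asking.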

Usage

# Create a Standard tier workspace with Claude
python bootstrap.py -t 2 -n myproject --provider claude

# Or with Gemini (default)
python bootstrap.py -t 2 -n myproject

Why This Matters

For you:

  • Same structure across all projects = lower cognitive overhead
  • Faster onboarding when returning to old projects
  • Patterns that scale from 1 project to 10

For AI agents:

  • Predictable file locations = fewer clarifying questions
  • Consistent patterns = faster execution
  • Structured context = better suggestions

The Modular Architecture

The framework itself follows these principles. The source is broken into maintainable building blocks that compile into a single distributable file (~5K lines).

You get:

  • Modular development — edit individual components
  • Single-file distribution — easy deployment
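The "modular source, single-file distribution" pattern can be sketched in a few lines: keep components as separate modules, then concatenate them into one script at build time. The function and file names here are hypothetical stand-ins, not the framework's actual build process:

```python
from pathlib import Path

# Illustrative build step -- the real framework's compile process may differ.
def compile_single_file(module_dir: str, output: str) -> int:
    """Concatenate every module in module_dir into one distributable script."""
    parts = []
    for module in sorted(Path(module_dir).glob("*.py")):
        # Keep a marker comment so the combined file stays navigable.
        parts.append(f"# --- {module.name} ---\n{module.read_text()}")
    combined = "\n\n".join(parts)
    Path(output).write_text(combined)
    return len(combined.splitlines())

# Example: build a tiny two-module project, then compile it.
src = Path("framework_src")
src.mkdir(exist_ok=True)
(src / "core.py").write_text("def core():\n    return 'core'\n")
(src / "cli.py").write_text("def main():\n    print('cli')\n")
line_count = compile_single_file("framework_src", "bootstrap_build.py")
```

You edit the small modules; users download one file. Both sides get the ergonomics they care about.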

Try It

GitHub: https://github.com/thomas-jamet/Multi-LLM-Development-Framework

MIT licensed. Feedback welcome.
