
Pavel

My LLM Was a Terrible Co-Pilot, So I Built It a Better Cockpit: Introducing AxleLLM

We’ve all been there. You’re staring at a tricky problem, so you tab over to your favorite LLM.

"Write me a JavaScript function that..."

Sometimes, what you get back is pure magic. It’s elegant, efficient, and saves you an hour of work. But other times... it's a disaster. It hallucinates a library that doesn’t exist, uses outdated syntax, or produces a buggy, convoluted mess that takes you longer to debug than it would have taken to write from scratch.

This is the cycle of hope and frustration with AI in development. LLMs are incredibly powerful, but they lack context. They are brilliant at generating text that looks like code, but they struggle with the implicit rules, structure, and constraints of a real-world application.

I realized the problem wasn't the AI. The problem was the environment I was asking it to work in. Asking an LLM to write imperative code in a large project is like putting a brilliant but distractible apprentice in a workshop full of unlabeled, razor-sharp tools. They're going to make a mess.

So, I asked myself: What if, instead of asking the LLM to write messy code, I built it a better cockpit? A playground with simple, unbreakable rules and big, obvious buttons for the important stuff. A framework where the only way to build is the right way.

That’s why I created AxleLLM.

What is AxleLLM? The "Guardrails" Your AI Needs

AxleLLM is a declarative engine for building native, cross-platform desktop apps with Node.js and Electron.

Its core idea is to change the nature of our requests to the AI. Instead of "write code," the request becomes "describe the system."

The entire application—data, UI, logic, everything—is defined as a series of plain JavaScript objects in a manifest directory. This structure acts as "guardrails" for the LLM. It can't generate buggy loops or import sketchy libraries. It can only do one thing: describe the system's architecture by filling in the blanks.
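Concretely, "describing the system" means filling in a handful of plain files. Based on the files we'll look at below, a freshly generated project is laid out roughly like this (the exact tree is my inference from this post, not documented CLI output):

my-first-app/
├── manifest/
│   ├── connectors.js    # the app's data, declared up front
│   ├── routes.js        # the app's logic, as declarative steps
│   └── bridge.js        # whitelist of native capabilities
└── app/
    └── components/
        └── pages/
            └── home-page.html   # "dumb" Mustache templates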

From Frustration to Flow in 60 Seconds

This isn't just theory. You can experience this new way of working right now. You can go from an empty folder to a running, native desktop application with three commands:

# 1. Create a new app using the AxleLLM CLI
npx axle-llm new my-first-app

# 2. Navigate and install dependencies
cd my-first-app
npm install

# 3. Launch the app in development mode
npm run dev

And that's it. A "Hello, World!" desktop app is running on your screen, with hot-reloading ready to go.

Anatomy of an LLM-Friendly App

Let's look at the "Hello, World!" app that the CLI just created. It's the perfect example of how this declarative structure makes AI collaboration easy and safe.

1. The Data (manifest/connectors.js)

First, we declare all the data our app will use. The LLM can't guess where state comes from; every piece of state must be declared here. This eliminates a whole class of undefined errors.

// manifest/connectors.js
"viewState": {
  "type": "in-memory",
  "initialState": { "message": "Hello, World!" }
}
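For context, the connectors.js snippet above is a fragment. Since the post describes manifests as plain JavaScript objects, a complete file presumably looks something like this; the module.exports wrapper is my assumption, not the framework's documented export shape:

// manifest/connectors.js (hedged sketch of the complete file;
// only the "viewState" entry comes from the post, the wrapper is assumed)
module.exports = {
  "viewState": {
    "type": "in-memory",                              // state lives in memory
    "initialState": { "message": "Hello, World!" }    // the app's starting data
  }
};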

2. The UI (app/components/pages/home-page.html)

The UI is just "dumb" HTML with Mustache tags. The LLM can't write complex, error-prone client-side JavaScript. It can only map the data we just defined to the screen.

<!-- app/components/pages/home-page.html -->
<h1>{{ data.viewState.message }}</h1>

<button atom-action="POST /action/change-message" ...>
  Change Message
</button>

3. The Logic (manifest/routes.js)

This is the magic. The business logic isn't code; it's a simple array of steps. It’s a recipe. And LLMs are fantastic at writing recipes.

// manifest/routes.js
"POST /action/change-message": {
  "type": "action",
  "writes": ["viewState"], // The engine will auto-save this state
  "update": "homePage",     // and auto-update this component
  "steps": [
    { "set": "data.viewState.message", "to": "'Hello, AxleLLM!'" }
  ]
}

We just described a fully reactive application without writing a single line of traditional, imperative code. We gave the LLM a structured language of architecture, and it delivered a perfect result.

The "Right Tools" for the Job

AxleLLM provides a few key features that make this collaboration work so well:

  • **The Super Validator:** Think of this as a senior developer who instantly code-reviews your LLM's architecture. Before the app even runs, it catches mistakes like trying to use data that wasn't loaded or linking to a component that doesn't exist.
  • **The Secure Native Bridge:** Want to save a file? The LLM doesn't need to learn the fs module and its security risks. It just needs to add one line to a whitelist (manifest/bridge.js). The framework handles the safe implementation. The AI declares the intent; the framework provides the safe execution.
  • **Declarative Steps:** The limited but powerful set of steps (like set, if, run:set) forces the LLM to build logic from reliable, tested building blocks, not from scratch every time. (A hedged sketch of the bridge and steps ideas follows this list.)
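To make those last two bullets concrete, here is a hedged sketch. The set step shape is taken directly from the route above; the bridge entry and the if step are extrapolations, so treat their exact keys as assumptions rather than documented API:

// manifest/bridge.js -- hypothetical whitelist entry;
// the post only says "one line", so this shape is assumed
module.exports = {
  "saveFile": { "allow": true }
};

// manifest/routes.js -- mixing the documented `set` step with an
// assumed `if`/`then` step shape (the condition syntax is a guess)
"POST /action/toggle-greeting": {
  "type": "action",
  "writes": ["viewState"],
  "update": "homePage",
  "steps": [
    {
      "if": "data.viewState.message === 'Hello, World!'",
      "then": [
        { "set": "data.viewState.message", "to": "'Hello again, AxleLLM!'" }
      ]
    }
  ]
}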

A New Way to Collaborate

AxleLLM isn't about replacing the developer. It's about upgrading the relationship we have with our AI assistants. It's about moving from a frustrating "boss-intern" dynamic to a productive "architect-apprentice" collaboration.

It's about spending less time debugging quirky AI code and more time on the creative, high-level design of your application.

The project is open-source and I'm actively developing it. I invite you to join this experiment.

Try it out. Create your first app in under a minute and see what it feels like to have an AI co-pilot that actually stays on course.

I'd love to hear your thoughts in the comments below.

Top comments (1)

Pavel

Tried AxleLLM. Built a simple app to scan local Git repos and show file changes.

Followed the LLM prompts, created manifests, and it just worked. No need to write traditional code — pretty neat.

The app does what it’s supposed to: lists repos, shows changes. The setup process was straightforward, though I can see how more complex apps might require some extra thinking.

Apps you build with this approach can be more powerful than typical low-code solutions. The combination of LLM guidance and declarative design allows for creating quite sophisticated applications without deep coding knowledge.

Might be useful for quick prototyping or non-technical users. Worth checking out if you’re into low-code/no-code development.