I have spent the last decade creating complexity. We all have.
We convinced ourselves that to put a button on a screen, we needed a build step, a virtual DOM, three state management libraries, and a hydration strategy. It was madness (necessary madness, perhaps, but madness nonetheless).
Yesterday, I looked at the node_modules folder of a basic Next.js project. It contained 847 packages. Eight hundred and forty-seven dependencies just to render text on a screen. We built these towers of abstraction to make JavaScript palatable for human typists. We optimized for "Developer Experience" because humans make syntax errors and struggle with raw DOM manipulation.
But humans aren't writing the code anymore.
I've written a comprehensive deep-dive into the philosophy behind this, but today I want to show you the code. I want to show you what happens when you stop building for humans and start building for the machine.
The Token Economy
The first thing you learn when you start building production AI systems is that verbosity is expensive. It costs money (tokens) and it costs time (latency).
Frameworks like React, Vue, and Angular are verbose intermediaries: they require boilerplate, imports, type definitions, and specific syntax structures.
Let's look at the math.
I asked an LLM to "create a button that logs a click" using modern React best practices.
Here is the React boilerplate:
import React, { useState } from 'react';

interface ButtonProps {
  label: string;
  onClick: () => void;
  variant?: 'primary' | 'secondary';
}

export const ActionButton: React.FC<ButtonProps> = ({
  label,
  onClick,
  variant = 'primary'
}) => {
  const [isClicked, setIsClicked] = useState(false);

  const handleClick = () => {
    setIsClicked(true);
    console.log('Button clicked');
    onClick();
    setTimeout(() => setIsClicked(false), 200);
  };

  const baseStyles = "px-4 py-2 rounded font-semibold transition-all";
  const variantStyles = variant === 'primary'
    ? "bg-blue-500 text-white hover:bg-blue-600"
    : "bg-gray-200 text-gray-800 hover:bg-gray-300";

  return (
    <button
      className={`${baseStyles} ${variantStyles} ${isClicked ? 'opacity-75' : ''}`}
      onClick={handleClick}
    >
      {label}
    </button>
  );
};
That is approximately 180 tokens. It requires the model to understand the component lifecycle, the import system, and TypeScript interfaces.
Now, consider the raw HTML/JS approach.
<button
  onclick="console.log('clicked')"
  class="px-4 py-2 rounded font-semibold bg-blue-500 text-white hover:bg-blue-600 transition-all active:opacity-75">
  Click Me
</button>
That is roughly 45 tokens.
If you are generating a dashboard with fifty interactive elements, the React approach blows up your context window and roughly quadruples generation time: fifty components at ~180 tokens each is ~9,000 output tokens, versus ~2,250 for fifty raw HTML elements at ~45 tokens each.
The AI doesn't need the component abstraction. It doesn't need the safety of the Virtual DOM. It generates perfect syntax every time. When you remove the framework, the browser becomes an incredibly fast, efficient runtime.
The New Stack: Rust + Python
We are seeing a bifurcation in the stack.
- The Brain (Python): Python dominates the control plane. It's where the models live, where the orchestration happens.
- The Brawn (Rust): If the code is generated, we want the runtime to be bulletproof and fast. Rust gives us type safety and C++-level performance without the memory-safety bugs.
The middle ground—JavaScript application logic—is collapsing.
Here is how I am building UIs now. I call it the Disposable UI Pattern.
Step 1: The Rust Server
We don't need a build step. We need a server that takes a request, asks an agent for the UI, and returns raw HTML.
I'm using Axum here because it's fast and ergonomic.
use axum::{
    response::Html,
    routing::get,
    Router,
};
use std::net::SocketAddr;

// This is where we pretend to be a complex AI agent.
// In production, this calls a Python service or an LLM API directly.
async fn generate_ui(prompt: &str) -> String {
    // Imagine a call to OpenAI/Anthropic here
    format!(
        r#"
        <div class="p-8 max-w-2xl mx-auto bg-white rounded-xl shadow-lg flex items-center space-x-4">
            <div>
                <div class="text-xl font-medium text-black">Generated Content</div>
                <p class="text-gray-500">You asked for: {}</p>
                <div class="mt-4">
                    <button class="px-4 py-2 bg-purple-600 text-white rounded hover:bg-purple-700 transition">
                        Action
                    </button>
                </div>
            </div>
        </div>
        "#,
        prompt
    )
}

async fn dashboard_handler() -> Html<String> {
    // 1. Receive user request
    // 2. Contextualise (user ID, data state)
    // 3. Generate UI based on *current* state
    let ui_component = generate_ui("A user dashboard panel").await;

    let page = format!(
        r#"
        <!DOCTYPE html>
        <html>
        <head>
            <script src="https://cdn.tailwindcss.com"></script>
        </head>
        <body class="bg-slate-100 h-screen flex items-center justify-center">
            {}
        </body>
        </html>
        "#,
        ui_component
    );

    Html(page)
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", get(dashboard_handler));

    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    println!("listening on {}", addr);

    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}
This compiles to a single binary. It starts instantly. It uses negligible memory.
Step 2: The Python Orchestrator
The Rust server handles the traffic. The Python layer handles the intelligence.
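To make the handoff concrete, here is a minimal sketch of what the real generate_ui could look like on the Rust side, assuming the Python layer exposes a hypothetical POST /generate endpoint and that reqwest (with its json feature) and serde_json are in the dependency tree:

// Hypothetical production version of generate_ui: instead of returning a
// hard-coded template, it forwards the prompt to the Python orchestrator.
async fn generate_ui_remote(prompt: &str) -> Result<String, reqwest::Error> {
    let client = reqwest::Client::new();
    client
        .post("http://localhost:8000/generate") // assumed orchestrator address
        .json(&serde_json::json!({ "prompt": prompt }))
        .send()
        .await?
        .text()
        .await
}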
I've stopped writing generic endpoints. Instead, I write "Intent Handlers."
# pseudo-code for the thinking layer
def handle_user_intent(user_input, database_context):
    """
    Decides what UI the user actually needs right now.
    """
    # Is the user confused? Generate a help modal.
    if analysis.is_confused(user_input):
        return generate_html("help_modal", context=database_context)

    # Does the user want data? Generate a table.
    if analysis.requires_data(user_input):
        sql = generate_sql(user_input)
        data = run_safe_query(sql)
        return generate_html("data_table", data=data)

    return generate_html("standard_response")
The key shift here is that the UI is not static.
In a React app, the components are defined at build time. You have a TableComponent and a ModalComponent. You toggle visibility with boolean flags.
In this architecture, the UI is ephemeral. If the user needs a chart, the agent writes the SVG. If the user needs a form, the agent writes the <form> tag. The architecture doesn't dictate the UI; the data dictates the UI.
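To make that concrete, here is a sketch of letting the shape of the data pick the generation instruction. The Datum enum and prompt strings are illustrative assumptions, not a fixed API:

// Sketch: the shape of the data, not a component tree, decides what gets drawn.
enum Datum {
    Series(Vec<(f32, f32)>),
    Record(Vec<(String, String)>),
}

fn ui_instruction(data: &Datum) -> &'static str {
    match data {
        // Numeric series? Ask the agent for an inline SVG chart.
        Datum::Series(_) => "Render this series as a standalone <svg> line chart.",
        // Key-value record? Ask for an editable <form>.
        Datum::Record(_) => "Render these fields as a <form> with labeled inputs.",
    }
}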
Why This Terrifies Frontend Developers
(and why it shouldn't)
When I show this to React developers, they recoil.
"Where is the state management?"
"What about re-renders?"
"How do I debug the component tree?"
You don't. That's the point.
The "Component Tree" is an artifact of the framework. It's a mental model we forced upon ourselves to manage complexity.
In the Disposable UI model, the state lives on the server (in the database or the AI context). The client is just a dumb terminal rendering HTML.
If the state changes? You generate new HTML.
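Here is a minimal sketch of that loop, reusing the Axum setup from Step 1. TodoInput and save_todo are hypothetical stand-ins for your real schema and database:

use axum::{response::Html, Form};
use serde::Deserialize;

#[derive(Deserialize)]
struct TodoInput {
    text: String,
}

// Stand-in for a real database write followed by a read-back of current state.
async fn save_todo(text: &str) -> Vec<String> {
    vec![text.to_string()]
}

// State changed? Re-generate the fragment. There is nothing to reconcile
// on the client, because the client holds no state.
async fn add_todo(Form(input): Form<TodoInput>) -> Html<String> {
    let todos = save_todo(&input.text).await;
    let items: String = todos
        .iter()
        .map(|t| format!("<li class='py-1'>{}</li>", t))
        .collect();
    // (Run through sanitize_output before shipping; see the security section.)
    Html(format!("<ul class='list-disc pl-6'>{}</ul>", items))
}

// Wire it up next to the dashboard route:
// let app = Router::new().route("/todos", post(add_todo));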
"But that's slow!"
Is it? Have you profiled a heavy React app lately? The hydration waterfall alone often takes longer than a modern LLM needs to spit out 2KB of HTML and the browser needs to paint it.
The Maintainability Paradox
The strongest argument against this approach is maintainability. "If the AI generates the code, how do we maintain it?"
This reveals a fundamental misunderstanding of the shift we are undergoing.
You do not maintain the output. You maintain the system.
If the generated HTML is ugly, you don't edit the HTML file. You edit the prompt. You edit the CSS variables injected into the context.
It is similar to how we treat compiled code. If your C++ compiler outputs a binary that segfaults, you don't hex-edit the binary. You fix the C++ source.
In this paradigm:
- Source: The Prompt + Context + Database Schema
- Compiler: The LLM
- Binary: The HTML/JS
We are moving up the abstraction ladder. We are becoming architects of systems that write code, rather than writers of code.
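In practice, "editing the source" looks less like touching components and more like editing a context struct. A minimal sketch; the DesignContext fields and prompt wording are illustrative assumptions, not a fixed API:

// The maintained "source": design tokens plus a prompt template.
struct DesignContext {
    primary_color: &'static str,
    border_radius: &'static str,
}

fn build_prompt(intent: &str, ctx: &DesignContext) -> String {
    format!(
        "Generate a single HTML fragment styled with Tailwind classes.\n\
         Primary color: {}. Border radius: {}.\n\
         User intent: {}",
        ctx.primary_color, ctx.border_radius, intent
    )
}

// Ugly output? You change the struct or the template above and regenerate.
// You never hand-edit the emitted HTML, just as you never hex-edit a binary.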
Handling Complexity (The "Real World" Check)
I can hear the objections. "This works for a toy blog, Edward, but not for my Enterprise SaaS."
Let's break that down.
"We need interactivity"
Standard HTML is already interactive: <input>, <details>, <dialog>. For complex state (like a drag-and-drop kanban board), the AI can generate a script block with vanilla JS.
// The AI generates this specific logic for this specific view
document.querySelectorAll('.card').forEach(card => {
  card.addEventListener('dragstart', e => {
    e.dataTransfer.setData('text/plain', card.id);
  });
});
Because the script is generated for this specific state, it doesn't need to handle every possible edge case of a generic KanbanComponent. It just needs to work for the data currently on the screen.
"We need security"
This is the real concern. If you let an AI write SQL or raw HTML, you are inviting injection attacks.
This is where Rust shines.
We don't just pipe the output to the browser. We parse it.
fn sanitize_output(raw_html: String) -> String {
    // Use a strict HTML sanitizer library:
    // strip out dangerous tags <script src="...">,
    // ensure all attributes are quoted.
    ammonia::clean(&raw_html)
}
The Rust layer acts as the gatekeeper. It enforces the contract. The AI can hallucinate whatever it wants, but the Rust compiler and runtime libraries ensure that what reaches the user is safe.
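If the default policy is too loose (or too strict) for generated UI, ammonia also exposes a builder for an explicit whitelist. A sketch, with an illustrative tag set; note that strict sanitizers strip inline event handlers like onclick by default, so the attribute policy has to be chosen deliberately:

use ammonia::Builder;
use std::collections::HashSet;

// Hypothetical tighter policy: only the tags the agent is allowed to emit.
fn sanitize_strict(raw_html: &str) -> String {
    Builder::default()
        .tags(HashSet::from(["div", "p", "span", "button", "form", "input", "ul", "li"]))
        .clean(raw_html)
        .to_string()
}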
The Death of the Toolchain
Think about your current toolchain. Webpack. Babel. ESLint. Prettier. TypeScript. Jest. Cypress.
These tools exist to catch human errors.
- Prettier: Because humans argue about semicolons.
- ESLint: Because humans forget to handle promises.
- TypeScript: Because humans pass strings to functions expecting integers.
The AI does not argue about semicolons. It formats perfectly. If you provide the correct context (Rust structs), it respects types.
When you remove the human error factor from the syntax level, the entire toolchain becomes dead weight.
I deleted my node_modules folder. I deleted my package.json. I deleted my webpack.config.js.
I replaced them with a Cargo.toml and a Python script.
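For reference, the entire manifest that replaced that toolchain fits in a handful of lines. A sketch; the crate versions are assumptions, with axum pinned to the 0.6 line that matches the Server::bind API used above:

[package]
name = "disposable-ui"
version = "0.1.0"
edition = "2021"

[dependencies]
axum = "0.6"
tokio = { version = "1", features = ["full"] }
ammonia = "3"
reqwest = { version = "0.11", features = ["json"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"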
The silence is deafening. And beautiful.
TL;DR
- Frameworks are expensive: React/Next.js add token overhead and latency that AI agents shouldn't have to pay.
- HTML is efficient: Browsers are optimized for raw HTML/CSS. AI generates this natively and correctly.
- Rust is the Runtime: Use Rust for speed, safety, and sanitization of AI outputs.
- Python is the Brain: Keep the logic in the language the models understand best.
- Maintenance shifts up: Don't debug the code. Debug the prompt and the architecture.
Let's Chat
I know this triggers the "but my component library!" reflex. I had it too. But ask yourself: are you optimizing for the user, or for your own comfort with the tools you already know?
Built something similar? Completely disagree? I'm genuinely curious.
More technical breakdowns at tyingshoelaces.com. I write about what works in production, not what looks good in demos.
