Hey #DevCommunity! 👋
I've been playing around with a wild idea this weekend. It's not about building a polished product, but more about asking a "what if?" question about how we write application logic.
Imagine changing your app's behavior... without touching a single line of code. Sounds like low-code/no-code, right? But what if the "logic engine" behind it wasn't a visual workflow, but a set of rules written in plain, natural English? And what if it ran entirely on your device, powered by a small LLM like Gemini Nano?
That's the rabbit hole I went down.
The Spark: An MIT Paper and the LLM "Brain"
I recently came across a fascinating academic paper from MIT ("What You See Is What It Does"). It talks about a new idea where "dumb," independent parts of an app are linked by simple, event-based rules.
My brain immediately went: "What if an on-device LLM could be that rules engine? What if it could understand these 'rules' in real-time?"
So, I built a little app to find out.
I call it Event-Driven AI (for now!). It's a super-simple To-Do app, but here's the core idea:
- **Dumb UI, Smart Brain:** The app's JavaScript UI is incredibly "dumb." When you click "Add Todo," it just broadcasts an event: "User clicked 'Add Todo' with text 'Buy milk'". It doesn't know what to do next.
- **English Rules Engine:** This event gets sent to a local, on-device LLM (Gemini Nano, running via the `window.ai` API).
- **Real-Time Planning:** The LLM then consults a set of "Rules" that I've written in plain English.
  - Example Rule: "When a user adds a new todo item, first call the `addTodoItem` tool with the item's text, then call the `updateCounter` tool."
- **Action!:** Based on these rules, the LLM generates a simple JSON plan (e.g., `[{"tool_to_call": "addTodoItem", ...}, {"tool_to_call": "updateCounter", ...}]`). My JavaScript then simply executes this plan, calling the specific functions (my "tools"). A sketch of this whole loop follows the list.
The "Aha!" Moment (or perhaps "Wait, that's wild!")
The coolest part is the "Update & Relaunch AI Engine" button. You can literally change the app's entire business logic: edit the English rules in the `<textarea>`, click the button, and the app instantly behaves differently. All without touching a single line of JavaScript.
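The wiring behind such a button can be tiny. A hypothetical version (the element IDs are invented, and `launchEngine` is from the sketch above):

```javascript
// Hypothetical wiring for the "Update & Relaunch AI Engine" button.
// Only data changes here, never code.
let currentRules = document.querySelector("#rules-textarea").value;

document.querySelector("#relaunch-engine").addEventListener("click", async () => {
  currentRules = document.querySelector("#rules-textarea").value;
  await launchEngine(); // fresh session; later events use the edited rules
});
```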
Imagine the possibilities:
- Adding new features by just writing a new rule.
- A non-technical product manager tweaking app behavior directly.
- Hyper-personalized app experiences driven by an AI interpreting user context and custom rules.
Let's address the elephant in the room (before the performance gurus find me! 😉)
Is this "production ready"? Absolutely NOT!
- Performance: It's significantly slower than native JavaScript logic. This is an experiment, not an optimization.
- Predictability: LLMs, by their nature, are not 100% predictable. This is the trade-off for their flexibility.
- Security: Running any logic from an LLM needs careful sandboxing. My simple demo doesn't claim to solve this for production. (One minimal guard for these last two points is sketched below.)
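One cheap mitigation, sketched here rather than taken from the demo: treat the LLM's plan as untrusted input and only execute allowlisted tools.

```javascript
// Treat the LLM's plan as untrusted input: validate shape, allowlist tools.
function executePlan(plan, tools) {
  if (!Array.isArray(plan)) {
    console.warn("Engine returned something that isn't a plan:", plan);
    return;
  }
  for (const step of plan) {
    if (step && Object.hasOwn(tools, step.tool_to_call)) {
      tools[step.tool_to_call](step.text);
    } else {
      console.warn("Skipping unknown or malformed step:", step);
    }
  }
}
```

This doesn't make the output predictable, but it bounds the blast radius: the model can only ever sequence functions you explicitly exposed.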
This is just an exploration, not a finished product. The point is to probe the boundaries of "easy to read" and "dynamic" logic when an LLM is your interpreter.
I'm genuinely curious about your thoughts on this. Is this a totally crazy idea, or could this concept evolve into something genuinely useful for specific use cases (like flexible, offline game AI, or hyper-personalized mobile apps)?
I'd love your feedback, your critiques, and especially any similar "crazy ideas" you've been playing with!
Check out the full source code and a live demo on GitHub. Fork it, break it, tell me what you think!
Top comments (3)
This is wild in the best way. Turning plain English into functional app logic with on-device LLMs feels like the first step toward making software behavior editable by anyone — not just devs. It’s like “natural language meets event architecture,” and that combo unlocks serious possibilities.
You're totally right — it’s not “production-ready” yet, but that's not the point. The idea that logic can be redefined on the fly via a local LLM flips the script on how we think about UX and app behavior. I can already imagine:
PMs shipping logic changes without needing sprint cycles
Users customizing workflows like macros, but in plain English
Offline apps that feel dynamic without cloud dependencies
Honestly, it feels like a cousin of MindsEye — we’ve been exploring agentic flows that interpret and adapt behaviors through feedback and rules, and this leans into that idea beautifully.
Exactly, that was the idea. It's also a bit provocative, but who knows what the limits of LLMs will be in the near future.
While it is great from a technology perspective, I think it is a bad idea to build mission-critical applications with it.
Instead of creating code that is optimized to be read by machines, you hand off whatever string users are going to input to a technology that is at best a highly skilled drafting tool. And you want that tool to run on the user's device.
If you've ever had to debug problems that happen at random, you know what I mean: fixing LLM bugs will give you not nightmares but nightapocalypses.
I'm happy with the productivity boost AI provides, but that is where I draw the line.