
Pietro Ghezzi

What if your app's logic was written in... plain English? A crazy experiment with on-device LLMs!

Hey #DevCommunity! πŸ‘‹

I've been playing around with a wild idea this weekend. It's not about building a polished product, but more about asking a "what if?" question about how we write application logic.

Imagine changing your app's behavior... without touching a single line of code. Sounds like low-code/no-code, right? But what if the "logic engine" behind it wasn't a visual workflow, but a set of rules written in plain, natural English? And what if it ran entirely on your device, powered by a small LLM like Gemini Nano?

That's the rabbit hole I went down.

The Spark: An MIT Paper and the LLM "Brain"

I recently came across a fascinating academic paper from MIT ("What You See Is What It Does"). It talks about a new idea where "dumb," independent parts of an app are linked by simple, event-based rules.

My brain immediately went: "What if an on-device LLM could be that rules engine? What if it could understand these 'rules' in real-time?"

So, I built a little app to find out.

I call it Event-Driven AI (for now!). It's a super-simple To-Do app, but here's the core idea:

  1. Dumb UI, Smart Brain: The app's JavaScript UI is incredibly "dumb." When you click "Add Todo," it just broadcasts an event: "User clicked 'Add Todo' with text 'Buy milk'". It doesn't know what to do next.
  2. English Rules Engine: This event gets sent to a local, on-device LLM (Gemini Nano, running via the window.ai API).
  3. Real-Time Planning: The LLM then consults a set of "Rules" that I've written in plain English.
    • Example Rule: "When a user adds a new todo item, first call the addTodoItem tool with the item's text, then call the updateCounter tool."
  4. Action!: Based on these rules, the LLM generates a simple JSON plan (e.g., [{"tool_to_call": "addTodoItem", ...}, {"tool_to_call": "updateCounter", ...}]). My JavaScript then simply executes this plan, calling the specific functions (my "tools").
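The whole loop can be sketched in a few lines. This is a self-contained, simplified version: in the real demo the plan comes from an async call to the on-device model (Gemini Nano via the experimental window.ai Prompt API), but here the planner is mocked so the flow runs anywhere. Tool names and the plan shape mirror the example above.

```javascript
// "Dumb UI, smart brain" loop, with the LLM planner mocked out.

const todos = [];
let counter = 0;

// The only functions a plan may invoke — the app's "tools".
const tools = {
  addTodoItem: (text) => todos.push(text),
  updateCounter: () => { counter = todos.length; },
};

// Mock planner: returns the JSON plan the English rules imply.
// In the real app this would be: await session.prompt(eventDescription)
function planFor(eventDescription) {
  return JSON.stringify([
    { tool_to_call: "addTodoItem", args: ["Buy milk"] },
    { tool_to_call: "updateCounter", args: [] },
  ]);
}

// The UI just broadcasts an event string; this dispatcher does the rest.
function handleEvent(eventDescription) {
  const plan = JSON.parse(planFor(eventDescription));
  for (const step of plan) {
    tools[step.tool_to_call](...step.args); // call each tool by name
  }
}

handleEvent("User clicked 'Add Todo' with text 'Buy milk'");
// todos is now ["Buy milk"], counter is 1
```

The key design point: JavaScript never decides *what* to do, only *how* to do each individual tool call.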

The "Aha!" Moment (or perhaps "Wait, that's wild!")

The coolest part is the "Update & Relaunch AI Engine" button. You can literally change the app's entire business logic: edit the English rules in the <textarea>, click the button, and the app instantly behaves differently. All without touching a single line of JavaScript.
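Conceptually, "relaunching" just means rebuilding the model's system prompt from whatever is currently in the textarea. Here's a rough sketch; the prompt wording and selector names are my illustration, not the demo's exact code, and the commented-out session call reflects Chrome's experimental Prompt API, whose exact shape varies between versions.

```javascript
// Turn the user's edited English rules into a fresh system prompt.
function buildSystemPrompt(rulesText) {
  return (
    "You are the app's rules engine. Apply these rules literally:\n" +
    rulesText +
    "\nRespond only with a JSON array of tool calls."
  );
}

// In the browser, the relaunch button would do something like:
//   session?.destroy?.();
//   session = await window.ai.languageModel.create({
//     systemPrompt: buildSystemPrompt(rulesTextarea.value),
//   });

const prompt = buildSystemPrompt(
  "When a user adds a new todo item, call addTodoItem, then updateCounter."
);
```

Because the rules live in the prompt rather than in code, "redeploying" the logic is just opening a new session.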

Imagine the possibilities:

  • Adding new features by just writing a new rule.
  • A non-technical product manager tweaking app behavior directly.
  • Hyper-personalized app experiences driven by an AI interpreting user context and custom rules.

Let's address the elephant in the room (before the performance gurus find me! 😉)

Is this "production ready"? Absolutely NOT!

  • Performance: It's significantly slower than native JavaScript logic. This is an experiment, not an exercise in optimization.
  • Predictability: LLMs, by their nature, are not 100% predictable. This is the trade-off for their flexibility.
  • Security: Running any logic from an LLM needs careful sandboxing. My simple demo doesn't claim to solve this for production.
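One cheap guardrail that helps with the last two points: validate the model's plan against a whitelist of known tool names before executing anything. This is a minimal sketch, not a real sandbox; the tool names come from the example above.

```javascript
// Reject any plan step that names a tool we didn't explicitly expose.
const ALLOWED_TOOLS = new Set(["addTodoItem", "updateCounter"]);

function validatePlan(plan) {
  if (!Array.isArray(plan)) throw new Error("Plan must be a JSON array");
  for (const step of plan) {
    if (!ALLOWED_TOOLS.has(step.tool_to_call)) {
      throw new Error(`Unknown tool: ${step.tool_to_call}`);
    }
  }
  return plan; // safe to execute
}
```

Since the LLM only ever emits tool *names*, never code, the attack surface stays limited to the functions you deliberately register.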

This is just an exploration, not a finished product. It's about probing the boundaries of "easy to read" and "dynamic" logic when an LLM is your interpreter.

I'm genuinely curious about your thoughts on this. Is this a totally crazy idea, or could this concept evolve into something genuinely useful for specific use cases (like flexible, offline game AI, or hyper-personalized mobile apps)?

I'd love your feedback, your critiques, and especially any similar "crazy ideas" you've been playing with!

Check out the full source code and a live demo on GitHub. Fork it, break it, tell me what you think!

GitHub Repo
