I've Seen the Future of UI Development. It's Insane, Written in Rust, and Rendered by an AI.

Hey dev.to community!

We’re all used to thinking about UI frameworks in the same terms: React, Vue, Svelte, Flutter, SwiftUI... We argue about the Virtual DOM, reactivity, and performance. But what if we're all looking in the wrong direction? What if the next big leap isn't about how we render, but who does the rendering?

I decided to test a crazy idea. What if, instead of writing code to draw every pixel of a button, we just described that button with words and let an AI draw it for us?

And after a debugging marathon that nearly broke my brain, I did it. Meet the Shadowin AI-Render Engine, a working prototype of a UI framework where the visuals are generated by Stable Diffusion in real time.

The Concept: A Skeleton of Rust, A Skin of AI

The entire idea behind Shadowin is built on a simple but powerful separation of concerns:

  1. The Logic is Rust. My code, written from scratch in Rust, is only responsible for the "physics" of the interface. It knows there's a button with ID=0, its dimensions are 200x60, it's located at coordinates (50, 50), and it's currently being hovered over. It knows that a click should increment a counter. But it has absolutely no idea what this button looks like.

  2. The Visuals are AI. When it's time to draw, my Rust code doesn't touch the pixel buffer. Instead, it forms a text prompt and sends it to a locally running Stable Diffusion server:

    "a crisp UI button with the text 'Submit', photorealistic, octane render, trending on artstation, dark sci-fi style, neon blue highlights, glowing, hovered state"

The neural network generates an image, and my engine simply "stamps" it onto the screen.
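To make that split concrete, here's a minimal sketch of what the Rust side could look like. The names (Button, VisualState, build_prompt) are my own illustration, not Shadowin's actual API:

```rust
// Illustrative sketch only: names and fields are mine, not Shadowin's actual API.

#[derive(Clone, Copy)]
enum VisualState {
    Normal,
    Hovered,
    Pressed,
}

#[allow(dead_code)] // a real engine would use these fields for layout and hit-testing
struct Button {
    id: u32,
    x: u32,
    y: u32,
    width: u32,
    height: u32,
    label: String,
    state: VisualState,
}

impl Button {
    /// Logic side: pure "physics", no pixels touched.
    fn on_click(&self, counter: &mut u64) {
        *counter += 1;
    }

    /// Visuals side: turn the widget's current state into a text prompt
    /// for the locally running Stable Diffusion server.
    fn build_prompt(&self) -> String {
        let state = match self.state {
            VisualState::Normal => "default state",
            VisualState::Hovered => "hovered state",
            VisualState::Pressed => "pressed state",
        };
        format!(
            "a crisp UI button with the text '{}', photorealistic, octane render, \
             trending on artstation, dark sci-fi style, neon blue highlights, glowing, {}",
            self.label, state
        )
    }
}

fn main() {
    // The button from the example above: ID 0, 200x60, at (50, 50), hovered.
    let submit = Button {
        id: 0,
        x: 50,
        y: 50,
        width: 200,
        height: 60,
        label: "Submit".to_string(),
        state: VisualState::Hovered,
    };

    let mut counter: u64 = 0;
    submit.on_click(&mut counter);
    println!("clicks: {counter}");
    println!("{}", submit.build_prompt());
}
```

Note that the widget state only changes the prompt, which is exactly why hovering swaps the entire generated texture rather than just a color.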

Here's what it looks like in action:

When you hover or click, it's not just the color that changes—the entire generated texture of the button swaps out!

How It Works Under the Hood

It's not magic; it's Rust and a bit of madness.

  • The Engine: Written from scratch in Rust, using winit for windowing and pixels for direct framebuffer access. No browsers, no Electron.
  • The AI Communicator: A module that talks to the stable-diffusion-webui API via HTTP requests using reqwest (a minimal sketch follows this list).
  • Caching: Image generation is slow. That's why the engine caches every generated asset. The first launch "warms up" the cache by generating all necessary states for all widgets. After that, the interface runs smoothly.
  • Synchronous by design: After some painful experiments with async, I settled on a simple, blocking approach. Yes, the application freezes during generation, and that's an honest trade-off: make it work first, then make it fast.
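Here's a minimal blocking sketch of how that round trip and cache could look. It assumes the AUTOMATIC1111 webui's /sdapi/v1/txt2img endpoint and reqwest's blocking client; the struct and function names are illustrative, not Shadowin's actual code:

```rust
use std::collections::HashMap;
use std::error::Error;

use base64::Engine; // base64 = "0.21"
use serde::Deserialize; // serde = { version = "1", features = ["derive"] }
use serde_json::json; // serde_json = "1"
// reqwest = { version = "0.11", features = ["blocking", "json"] }

#[derive(Deserialize)]
struct Txt2ImgResponse {
    images: Vec<String>, // base64-encoded PNGs
}

/// Blocking client with a prompt -> PNG-bytes cache.
struct AiCommunicator {
    http: reqwest::blocking::Client,
    base_url: String,
    cache: HashMap<String, Vec<u8>>,
}

impl AiCommunicator {
    fn new(base_url: &str) -> Self {
        Self {
            http: reqwest::blocking::Client::new(),
            base_url: base_url.to_string(),
            cache: HashMap::new(),
        }
    }

    /// Returns the cached image if this prompt was generated before,
    /// otherwise blocks on the webui until the new image is ready.
    fn generate(&mut self, prompt: &str, w: u32, h: u32) -> Result<Vec<u8>, Box<dyn Error>> {
        if let Some(png) = self.cache.get(prompt) {
            return Ok(png.clone());
        }

        let body = json!({
            "prompt": prompt,
            "width": w,
            "height": h,
            "steps": 20,
        });

        let resp: Txt2ImgResponse = self
            .http
            .post(format!("{}/sdapi/v1/txt2img", self.base_url))
            .json(&body)
            .send()?
            .json()?;

        let b64 = resp.images.first().ok_or("webui returned no images")?;
        let png = base64::engine::general_purpose::STANDARD.decode(b64)?;
        self.cache.insert(prompt.to_string(), png.clone());
        Ok(png)
    }
}

fn main() -> Result<(), Box<dyn Error>> {
    // Assumes the webui is running locally with --api enabled (default port 7860).
    let mut ai = AiCommunicator::new("http://127.0.0.1:7860");
    // Stable Diffusion wants dimensions in multiples of 8; the engine would
    // scale the result down to the widget's actual 200x60 rect.
    let png = ai.generate("a crisp UI button with the text 'Submit', hovered state", 256, 64)?;
    println!("got {} bytes of PNG", png.len());
    Ok(())
}
```

Under these assumptions, the first-launch cache warm-up is just a loop that calls generate() for every widget in every visual state before the event loop starts; after that, each frame is served straight from the HashMap.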

Why Is This a Glimpse into the Future?

Imagine the possibilities:

  • Natural Language Theming: A user could type in the settings, "I want an interface in the style of a Fallout terminal" or "make everything look like a watercolor painting," and the entire UI would instantly transform.
  • Next-Level Adaptive Design: A "Delete" button could become visually more "alarming" based on the importance of the data being deleted. The interface could change its "mood" depending on the time of day.
  • Infinite Uniqueness: Your instance of the application could look different from anyone else's.

Yes, this approach has huge downsides. Accessibility is a nightmare. Performance needs serious work. And as you can see from the demo, getting Stable Diffusion v1.5 to render text clearly and consistently is a challenge in itself. I chose not to compromise by rendering text on top of a generated background; I forced the neural network to draw the button entirely, text included, to prove the purity of the concept. Using more modern models specifically trained for text rendering (like DeepFloyd-IF or a fine-tuned SDXL) would solve this.

But that doesn't matter. What matters is that it works. It's proof that we can create interfaces in a completely new way.

I believe the future lies in declarative, context-aware, and personalized interfaces. And AI is the key to unlocking that future.

The entire project is up on GitHub. Check it out, try it, break it. Let's brainstorm together where this crazy idea could lead.

What do you think? Is this a breakthrough technology or just a fun toy? Let me know in the comments.
