
kashif iqbal

Let's build a full-stack LLM app with Autonomk

πŸ‘‹ Introduction

Imagine you need to build a private LLM app: you want to control where your data is stored, how your users log in and how your app is deployed. On top of that, you have to integrate it into your existing stack, which further limits your options. Fortunately, it has never been easier to build a private full-stack LLM app. In this article we'll look at a number of different solutions, including what we are doing at Autonomk.

πŸ“ What is Autonomk?

Autonomk is a collection of frontend apps: web, mobile (coming soon) and desktop (coming soon), together with an OpenAPI schema that lets you build your API in whichever technology you want and deploy it however you wish. In short, Autonomk does not provide the full-stack app. We provide the frontend and the OpenAPI schema that describes the API; you build the backend.

Autonomk is built with the core philosophy, Your data. Your AI. This means we subscribe to the following values:

  • Open source technology

  • No vendor lock-in

  • Full control over your backend stack

  • Client-server architecture

Frontend

Head over to Autonomk and download the Autonomk AI client. This is a collection of pre-built static files for the web app. This approach does not require you to run any frontend dev tools locally. It's just a collection of JavaScript, HTML and CSS that is served from your web server.

The web app is a single-page application (SPA) built with React and supports client-side routing. This means there is minimal setup required on your web server: it's as simple as configuring one endpoint on your server to return the web app. This could also easily be done with a hosting platform like Netlify or Vercel if you prefer.

Backend

From the Autonomk GitHub page, download one of the quickstarts; these are currently available in Java, C# and Python. Follow the instructions in the README. All you have to do is serve the Autonomk AI client static files from your backend. No signing up, no Docker, nothing complicated.

πŸ”₯ Autonomk AI Client

Styling

Give your Autonomk AI Client its own look and feel. Autonomk uses tweakcn for theming. Styling is simple: generate the CSS with the tweakcn generator and copy it into the public/styles/theme.css file. A more comprehensive style guide is currently in development, which will give you more control over your CSS.

Configuration

At present, the Autonomk AI client provides a few configuration options, which can be found in the public/scripts/env.js file in the Autonomk AI client.

| Variable | Description |
| --- | --- |
| HOST | The RESTful API endpoint the AI client communicates with |
| TITLE | The title of the AI client, used on the login pages and in the HTML document |
| DEFAULT_THEME | The default theme, either light or dark |
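For illustration, a hypothetical env.js might look like the following. The values are placeholders and the exact shape may differ; check the file shipped with the client:

```javascript
// public/scripts/env.js — runtime configuration for the Autonomk AI client.
// Values shown are placeholders; the object shape here is illustrative.
window.env = {
  HOST: "https://api.example.com", // RESTful API endpoint the client calls
  TITLE: "Acme AI",                // shown on the login pages and as the document title
  DEFAULT_THEME: "dark",           // "light" or "dark"
};
```

Because this is a plain script loaded at runtime, you can edit it after deployment without rebuilding anything.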

πŸ›€οΈ Roadmap

We have a number of exciting products and features coming up, including:

  1. Admin console
  2. Desktop app
  3. Mobile app
  4. Style guide
  5. SSE and WebSocket support
  6. Multimodal upload support, including images and videos

🧱 How Autonomk stacks up

Beyond Autonomk, there are three LLM-focused solutions for building a full-stack app.

  1. Open WebUI is an open-source solution with no vendor lock-in. However, it currently lacks the ability to theme the app easily, and it prescribes a specific technology stack, which can be restrictive. If you are happy with everything that Open WebUI offers, then I would recommend it over Autonomk.

  2. LM Studio is not a full-stack solution like Open WebUI, but a desktop app. It is great for working with LLMs locally on your computer.

  3. Gradio is a Python-based LLM app builder. It offers lots of benefits, including custom theming, self-hosting and no vendor lock-in. However, it differs from how Autonomk works: whereas Autonomk provides the web app, Gradio expects you to build your own. Gradio also expects you to design your own API, while Autonomk provides you with an OpenAPI schema describing how to do that. If you want to build your own LLM web app and are happy to use Python, then I would recommend it over Autonomk and Open WebUI.

| Tech | Tech agnostic | Custom theming | Real time | No vendor lock-in | OpenAPI | Admin console | Self-hosted | AI specific |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Autonomk | ✅ | ✅ | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ |
| Open WebUI | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ |
| LM Studio | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Gradio | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ |
| Streamlit | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Dash | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Shiny | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Retool | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ |

βœ… = supported
❌ = not supported
🚧 = in development

The remaining solutions in the table above, Streamlit, Dash, Shiny and Retool, are not designed specifically for LLM apps, but can still be used to build private LLM apps.

🏁 Conclusion

So which should you choose?

  1. You want a desktop app to run your LLM locally: LM Studio

  2. You want a full-stack app built with Python and Docker: Open WebUI

  3. You want to build your own frontend with Python and your own backend: Gradio

  4. You want a pre-built frontend and to build your own backend: Autonomk

If you have made it this far, thank you for reading about what Autonomk has to offer and how it differs from other solutions. If you would like any further information, drop us an email at hello@autonomk.com or follow us on GitHub.
