Sylwia Laskowska

I Put an LLM in My Browser and Now It Writes My Commit Messages. The Results Were… Unexpected😭✨

Hi folks! Not long ago I posted an article about the funniest and weirdest commit messages from my projects (If You Think YOUR Commit Messages Are Bad, Just Wait…).
The post itself was cool, but — as usual — your comments were pure gold.

A few people pointed out that…
👉 “Why write commit messages yourself when AI can do it?”

And then I thought: SAY. NO. MORE.
And since I'm a frontend developer, of course:

Let’s do it in JavaScript! 😎

A hot topic at frontend conferences lately is running LLMs directly in the browser. No server, no tokens, no payments, no sending your code anywhere.
The main player here is Transformers.js, Hugging Face’s library for running models directly in the browser.

And since I’ve been wanting to play with it for months… now I had the perfect excuse.
The result?
👉 in two evenings, I built a prototype app
👉 (I promise, tomorrow I’ll finally turn on Netflix like a normal human)


🚀 TL;DR

Repo here: https://github.com/sylwia-lask/friendly-commit-messages
Feel free to play with it, use it, learn from it… or make PRs, because this is more of a POC than production-ready 😂


🛠 How does it work?

The idea was simple:

  1. You paste a git diff / code snippet
  2. The model analyzes the changes
  3. It generates a commit message
  4. All locally in the browser
  5. No asking an API for permission to exist

Sounds beautiful, right? And honestly?
It was really fun. But… not without some adventures 🤣
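The whole loop above can be sketched in a few lines with Transformers.js. This is a sketch, not the repo's actual code: it assumes the v3 `@huggingface/transformers` package, and `buildMessages` / `lastReply` are hypothetical helper names I made up for illustration:

```javascript
// Hypothetical helper: wrap the pasted diff/snippet in a chat prompt.
function buildMessages(diff) {
  return [
    {
      role: 'system',
      content:
        'You are a git assistant. Reply with a single concise commit message ' +
        'describing the change in the input below.',
    },
    { role: 'user', content: diff },
  ];
}

// With chat-style input, the text-generation pipeline returns the whole
// conversation; the model's answer is the last message.
function lastReply(output) {
  const messages = output[0].generated_text;
  return messages[messages.length - 1].content.trim();
}

async function generateCommitMessage(diff) {
  const { pipeline } = await import('@huggingface/transformers');
  // Downloads and caches the ONNX weights on the first call.
  const generator = await pipeline(
    'text-generation',
    'onnx-community/Qwen2.5-Coder-0.5B-Instruct'
  );
  const output = await generator(buildMessages(diff), { max_new_tokens: 64 });
  return lastReply(output);
}
```

That's genuinely all the "AI plumbing" there is: no API key, no server, just a model download into the browser cache.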


🤖 Choosing the model — a.k.a. “do I even know what I’m looking for?”

Most tutorials show super simple cases — e.g., a model that completes sentences.
But I needed:

  • code understanding
  • inferring intent
  • generating commit messages
  • compatibility with Transformers.js + ONNX (otherwise the model won’t run in the browser!)

The first problem:
👉 I couldn’t find a list of models that actually work with Transformers.js.

If this ever happens to you — here’s the link right away:
The List Of Free Models

Note:
not every model runs in the browser!
You need to filter by:

  • support for transformers.js
  • ONNX format (best for browser)
  • pipeline tag text-generation / chat-completion

I eventually chose:
👉 onnx-community/Qwen2.5-Coder-0.5B-Instruct

Why?

  • it’s small → fast in the browser
  • trained on code → commit messages are basically code reasoning
  • works with Transformers.js out of the box

But remember:
this is NOT GPT-5 or Gemini-3, just a tiny model.

And you can tell 😅


🧪 Examples

✔ When I paste proper code → I get a reasonable commit message (maybe far from perfect, but well, that's the effect of just two evenings of coding 😎)


✔ When I paste broken code → I get the prompt-defined response “That's not even a code!!!”


❌ When I asked about the weather in Brussels…
The model happily responded 🤣


Small LLMs be like.

Moral of the story:
👉 In projects like this, the hardest part is the prompt + model selection, not the actual coding.
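For what it's worth, here's the shape of a prompt that defines the "not code" reply, plus a cheap heuristic pre-filter that could catch the Brussels weather question before the model even sees it. Both the prompt text and `looksLikeCode` are my own hypothetical sketch, not what's in the repo:

```javascript
// Hypothetical system prompt — the repo's actual prompt may differ.
const SYSTEM_PROMPT = [
  'You write git commit messages.',
  'If the input is a code diff or snippet, reply with one short commit message.',
  "If the input is not code at all, reply exactly: That's not even a code!!!",
  'Never answer questions unrelated to the input (weather, trivia, etc.).',
].join('\n');

// Cheap pre-filter before calling the model at all: diffs and snippets
// contain characters and keywords that plain prose rarely does.
function looksLikeCode(input) {
  const signals = [
    /^diff --git/m,                                // git diff header
    /^[+-][^+-]/m,                                 // added/removed lines
    /[{};=]/,                                      // code punctuation
    /\b(function|const|return|import|class)\b/,    // common keywords
  ];
  return signals.some((re) => re.test(input));
}
```

A tiny model will still happily ignore instructions sometimes, so a deterministic check like this in front of it is much more reliable than prompt-only guardrails.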


⚙️ Performance — or “why is my UI dying?”

This was a funny discovery.

The model loads only once.
Cool.

But:
👉 inference blocks the main thread
👉 React doesn’t have time to render “Generating…”
👉 The UI looks like nothing is happening

I could have thrown in a hack like:

```js
setTimeout(() => runModel(), 0);
```

but…
👉 Don’t do that. It just masks the real issue.

The real solution:
👉 move the model to a Web Worker

Transformers.js works beautifully in workers.
100% recommended.
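A minimal sketch of the worker side, under the same assumptions as before (`@huggingface/transformers` v3; `getGenerator` and the message shape are names I invented, not the repo's code):

```javascript
// worker.js — hypothetical sketch of the worker side.

let generatorPromise = null;

// Lazily create the pipeline so the (large) model download happens
// exactly once, on the first request, inside the worker.
function getGenerator() {
  if (!generatorPromise) {
    generatorPromise = import('@huggingface/transformers').then(({ pipeline }) =>
      pipeline('text-generation', 'onnx-community/Qwen2.5-Coder-0.5B-Instruct')
    );
  }
  return generatorPromise;
}

// Inference runs here, off the main thread, so React keeps rendering.
globalThis.onmessage = async (event) => {
  const generator = await getGenerator();
  const output = await generator(event.data.messages, { max_new_tokens: 64 });
  postMessage({ id: event.data.id, output });
};

// On the main thread you'd create it as a module worker, e.g.:
// const worker = new Worker(new URL('./worker.js', import.meta.url), { type: 'module' });
// worker.postMessage({ id: 1, messages });
// worker.onmessage = (e) => { /* show the commit message */ };
```

The main thread now just posts a message and gets a message back, so "Generating…" actually renders while the model thinks.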


🎁 The final result

What I ended up with:

  • lightweight UI in React + Tailwind
  • Transformers.js + ONNX running in the browser
  • a Web Worker hosting the model
  • a prompt that detects non-code inputs
  • a commit message generator that works offline (!)

Plus of course:
🥚 a little easter egg — I couldn’t resist adding canned commit messages like:

  • “initial commit”
  • “do the needful”
  • “it finally works I guess”

🎓 Lessons learned

  • LLMs in the browser = super fun, but:

    • the models are small
    • prompt engineering matters A LOT
    • model selection is half the battle
  • Web Workers are a MUST if you don’t want UI freezes

  • Transformers.js is genuinely well made

  • You can build full, local AI tools without any backend at all!


💬 What do you think?

  • Have you ever tried running LLMs in the browser?
  • Do you have any favorite ONNX models?
  • Or maybe you want a version of this app with:

    • answer streaming?
    • model selection?
    • multiple commit message suggestions?

Let me know 💜

Repo again:
https://github.com/sylwia-lask/friendly-commit-messages


🦄 That’s it — thanks for reading!

And remember:
commit messages don’t have to be perfect — you just need a cute little local AI to generate them.

Top comments (5)

Adam - The Developer

That's awesomeeeeee, I'm definitely giving this a try

Sylwia Laskowska

Yay!! So happy to hear that!
If you end up experimenting with it, I’d love to see what you build ✨

Laurina Ayarah

Definitely trying this. I was at Google DevFest last weekend, and someone talked about this too... I'm definitely trying this!

Sylwia Laskowska

Yesss!! That makes me so happy!
Browser LLMs are such a fun rabbit hole - let me know what you build! 🤖✨

Benjamin Nguyen

nice!