DEV Community

Alan West
TextGen vs LM Studio: Picking a Local LLM Runner in 2026

I've been running local LLMs on my workstation for about two years now. I started with raw llama.cpp on the command line, moved to LM Studio when I wanted a real GUI, then drifted back to text-generation-webui (now branded as TextGen) when I needed more control over sampling and LoRAs. So when the post announcing a native TextGen desktop app hit r/LocalLLaMA, I was curious enough to spend a weekend comparing it side-by-side with LM Studio.

If you're picking between the two — or thinking about migrating from one to the other — here's what I've actually found.

Why this matters now

LM Studio has been the default "easy mode" for local LLMs for a while. Slick installer, model browser baked in, OpenAI-compatible server with two clicks. The catch: it's closed source. For some teams that's a non-starter, and I get it — when something is running models that touch sensitive code or data, knowing what the binary is doing matters.

TextGen (the project formerly known as text-generation-webui, maintained by oobabooga on GitHub) has always been the open-source counterweight. The tradeoff was that it ran as a Gradio web UI you'd launch from a terminal — powerful, but not exactly desktop-app polish. According to the Reddit announcement, that's changed: it now reportedly ships as a packaged desktop app. I haven't dug through the official changelog in detail, so treat that framing as "the project is moving in that direction" rather than a feature audit.

Side-by-side: what each one is actually good at

Let me skip the marketing checklist and tell you how they actually feel.

LM Studio

  • Onboarding: Best in class. Download, install, click a model, start chatting. My non-developer brother got it running unprompted.
  • Model discovery: Built-in Hugging Face browser with quant filtering. Saves a lot of clicking.
  • Local server: One toggle gives you an OpenAI-compatible endpoint on localhost:1234. This is honestly the killer feature.
  • Closed source: You can't audit it, can't fork it, can't ship it inside your own product.
  • Licensing: Free for personal use; commercial use has its own terms — read them.

TextGen

  • Onboarding: Better than it used to be. The desktop packaging removes the "open a terminal and pray" step that scared off a lot of people.
  • Extensibility: Extensions for RAG, TTS, character cards, training adapters. If you want to mess with sampling parameters most apps hide, it's all there.
  • Backends: Supports llama.cpp, Transformers, ExLlamaV2, and others — useful if you're benchmarking or working with non-GGUF formats.
  • Open source: AGPL-licensed. You can read every line, patch it, and self-host.
  • Polish: Still rougher than LM Studio in places. UI is denser. That's a feature if you're a power user, a bug if you're not.

Talking to them from code

The practical question for most developers isn't "which UI is prettier," it's "how do I point my app at this thing." Both expose an OpenAI-compatible API, which means migration is mostly a base URL change.

With LM Studio's local server running:

from openai import OpenAI

# LM Studio defaults to port 1234
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="not-needed",  # local server ignores this but the SDK requires a value
)

resp = client.chat.completions.create(
    model="local-model",  # LM Studio uses whatever model is currently loaded
    messages=[{"role": "user", "content": "Summarize this commit message: 'fix off-by-one in pager'"}],
)
print(resp.choices[0].message.content)

Same app, pointed at TextGen instead:

from openai import OpenAI

# TextGen's OpenAI-compatible endpoint defaults to port 5000
client = OpenAI(
    base_url="http://localhost:5000/v1",
    api_key="not-needed",
)

resp = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Summarize this commit message: 'fix off-by-one in pager'"}],
)
print(resp.choices[0].message.content)

That's it. A one-line change at the call site: the base URL. The real work, if any, is config and model files; more on that below.

Migrating from LM Studio to TextGen

If you're moving the other direction, the steps are pretty similar. Here's the rough path I took on my own machine.

1. Inventory your models

LM Studio stores models under ~/.cache/lm-studio/models/ (macOS/Linux) by default. TextGen reads from its own models/ directory inside the install path. You don't have to redownload — symlinking works fine:

# point TextGen at LM Studio's existing GGUF cache
ln -s ~/.cache/lm-studio/models/TheBloke ~/textgen/models/TheBloke

# verify TextGen sees them
ls ~/textgen/models/

One gotcha: TextGen expects the file directly in models/<name>/, while LM Studio sometimes nests deeper. Check the layout before assuming the symlink "just worked."
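If the nesting bites you, one option is to flatten everything with a small shell function instead of linking directory by directory. This is a sketch under the same path assumptions as above; `flatten_ggufs` is my own name, not a TextGen command:

```shell
# flatten_ggufs SRC DST: symlink every GGUF found anywhere under SRC
# into DST/<name>/, the flat layout TextGen expects.
flatten_ggufs() {
  src="$1"; dst="$2"
  find "$src" -name '*.gguf' | while read -r f; do
    name=$(basename "$f" .gguf)
    mkdir -p "$dst/$name"
    ln -sf "$f" "$dst/$name/"
  done
}

# e.g.: flatten_ggufs ~/.cache/lm-studio/models ~/textgen/models
```

Symlinks (not copies) mean you keep a single copy of each multi-gigabyte file, and redownloads in LM Studio show up in TextGen automatically.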

2. Mirror your sampling settings

This is where most migrations get weird. LM Studio exposes a small set of sliders. TextGen exposes... all of them. If your app depended on specific defaults, copy them across explicitly rather than trusting either tool's defaults to match.

# example presets/my-preset.yaml in TextGen
temperature: 0.7
top_p: 0.9
top_k: 40
repetition_penalty: 1.1
# LM Studio doesn't expose these, so the defaults probably differ
min_p: 0.05
typical_p: 1.0

I ran into a case last month where output quality "regressed" after migrating, and it was 100% because the default repetition_penalty was different. Lost an hour to that.
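One way to make that class of regression impossible is to pin the sampler settings client-side and send them with every request, rather than trusting either server's defaults. A sketch assuming the openai Python SDK; `pinned_params` is my own helper, and since `repetition_penalty` and `top_k` aren't part of the OpenAI schema they ride along in `extra_body`, which TextGen-style servers generally accept (verify against your server):

```python
def pinned_params(overrides=None):
    """Sampling settings sent explicitly with every request,
    so neither runner's defaults can silently change output."""
    params = {
        "temperature": 0.7,
        "top_p": 0.9,
        # non-OpenAI fields go through extra_body
        "extra_body": {"repetition_penalty": 1.1, "top_k": 40},
    }
    if overrides:
        params.update(overrides)
    return params

# hypothetical call site:
# client.chat.completions.create(model="local-model", messages=msgs, **pinned_params())
```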

3. Update your API base URL

The one-liner from the code section above. If you're using environment variables (and you should be), this is a single config change:

# before
export OPENAI_BASE_URL="http://localhost:1234/v1"

# after
export OPENAI_BASE_URL="http://localhost:5000/v1"
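Recent versions of the openai SDK pick up OPENAI_BASE_URL from the environment on their own, so the export alone is often enough. If you'd rather not hardcode either port, a small helper (my own sketch, not part of either tool) can prefer the env var and fall back to probing the two default ports:

```python
import os
import urllib.request
import urllib.error

DEFAULTS = (
    "http://localhost:1234/v1",  # LM Studio default
    "http://localhost:5000/v1",  # TextGen default
)

def resolve_base_url(candidates=DEFAULTS):
    """Prefer OPENAI_BASE_URL; otherwise return the first candidate
    whose /models endpoint answers, or the first one as a fallback."""
    configured = os.environ.get("OPENAI_BASE_URL")
    if configured:
        return configured
    for url in candidates:
        try:
            urllib.request.urlopen(url + "/models", timeout=1)
            return url
        except (urllib.error.URLError, OSError):
            continue
    return candidates[0]
```

Both ports are configurable in their respective apps, so treat the defaults here as assumptions, not guarantees.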

Which should you pick?

Honestly, it depends on what you're optimizing for. I keep both installed.

  • Pick LM Studio if: you want the lowest-friction way to try local models, you're showing this to non-technical teammates, or you just want a server endpoint that works without thinking.
  • Pick TextGen if: you need open source for compliance or auditing, you want to tweak sampling beyond what LM Studio exposes, you're running training/LoRA workflows, or you're going to extend the tool itself.

The native desktop packaging closes a real gap that used to push people toward LM Studio. It doesn't make TextGen "the same" — the UI philosophies are different and that's fine — but the install-and-run experience is no longer the deciding factor.

If you've been on the fence because you wanted open source but didn't want to babysit a Gradio process, this is a reasonable moment to give TextGen another look. And if LM Studio is working for you and you don't care about source access, there's no urgent reason to switch. "It works, leave it alone" is a valid engineering position.
