Alex Spinov

LM Studio Has a Free API You've Never Heard Of

LM Studio is a desktop app for running LLMs locally with a polished GUI. What most people miss is that it also ships a local OpenAI-compatible API server — point any app that speaks the OpenAI API at localhost and get free, private inference.

What Makes LM Studio Special?

  • Desktop GUI — browse, download, and chat with models visually
  • OpenAI-compatible API — localhost server, same SDK
  • No coding needed — download and run models with clicks
  • GGUF support — optimized quantized models
  • Free — completely free for personal use

The Hidden API: Local OpenAI Server

from openai import OpenAI

# LM Studio runs an OpenAI-compatible server
client = OpenAI(base_url='http://localhost:1234/v1', api_key='lm-studio')

# Chat completions
response = client.chat.completions.create(
    model='local-model',
    messages=[
        {'role': 'system', 'content': 'You are a helpful coding assistant.'},
        {'role': 'user', 'content': 'Write a Python function to check if a string is a palindrome.'}
    ],
    temperature=0.7
)
print(response.choices[0].message.content)

# Streaming
stream = client.chat.completions.create(
    model='local-model',
    messages=[{'role': 'user', 'content': 'Explain recursion simply.'}],
    stream=True
)
for chunk in stream:
    print(chunk.choices[0].delta.content or '', end='')

# Embeddings (load an embedding model in LM Studio first, e.g. a nomic-embed variant)
embedding = client.embeddings.create(
    model='local-model',
    input='What is machine learning?'
)
print(f'Dimensions: {len(embedding.data[0].embedding)}')
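The embedding vectors returned above are plain lists of floats, so you can compare them for semantic similarity with no extra dependencies. A minimal cosine-similarity sketch in pure Python (with real data you'd pass `embedding.data[0].embedding` for each text):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0, orthogonal directions score 0.0
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

Embed a query and a set of documents, then rank documents by this score — that's local semantic search with zero API cost.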

Use with Any Framework

// LangChain (the base URL goes under `configuration`)
import { ChatOpenAI } from '@langchain/openai';
const model = new ChatOpenAI({
  apiKey: 'lm-studio',
  configuration: { baseURL: 'http://localhost:1234/v1' },
});

// Vercel AI SDK
import { createOpenAI } from '@ai-sdk/openai';
const lmstudio = createOpenAI({ baseURL: 'http://localhost:1234/v1', apiKey: 'lm-studio' });

Quick Start

  1. Download LM Studio from lmstudio.ai
  2. Search and download a model (e.g., Llama 3)
  3. Click "Start Server" in the Local Server tab
  4. API ready at http://localhost:1234/v1
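Once the server is up, you can sanity-check it before wiring in any SDK — the server exposes the standard OpenAI-style `GET /v1/models` listing route. A sketch using only the Python standard library (`list_local_models` is a helper name of my own, not part of LM Studio):

```python
import json
import urllib.request

def list_local_models(base_url='http://localhost:1234/v1'):
    """Return model IDs from a local OpenAI-compatible server, or None if unreachable."""
    try:
        with urllib.request.urlopen(f'{base_url}/models', timeout=5) as resp:
            payload = json.load(resp)
        return [m['id'] for m in payload.get('data', [])]
    except OSError:
        return None

models = list_local_models()
print(models if models is not None else 'LM Studio server is not running')
```

If this prints your loaded model's ID, the OpenAI SDK examples above will work unchanged.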

Why Developers Use LM Studio

A developer shared: "I prototype AI features with LM Studio locally, then switch to OpenAI for production. Same code, same SDK — I just change the base URL. Development is free and my data stays private."
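That local-to-production switch is easy to make explicit: derive the client settings from the environment and leave every call site untouched. A minimal sketch (the fallback-on-`OPENAI_API_KEY` convention here is my own, not an LM Studio or OpenAI requirement):

```python
import os

def llm_client_kwargs():
    """Kwargs for OpenAI(...): real OpenAI when OPENAI_API_KEY is set, local LM Studio otherwise."""
    api_key = os.environ.get('OPENAI_API_KEY')
    if api_key:
        return {'api_key': api_key}  # SDK default base URL: https://api.openai.com/v1
    return {'base_url': 'http://localhost:1234/v1', 'api_key': 'lm-studio'}

# client = OpenAI(**llm_client_kwargs())  # identical call sites in dev and prod
print(llm_client_kwargs())
```

Unset the variable for free local development; set it in production and the same code talks to OpenAI.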


Building AI apps? Email spinov001@gmail.com or check my AI toolkit.

Local AI or cloud AI? How do you develop with LLMs?
