Elevate Your Coding: Integrating Cursor with Local LLMs (LM Studio) and GitHub Copilot
Prerequisites
Before you begin, make sure you have:
- Cursor installed
- LM Studio installed (see lmstudio.ai)
- ngrok (optional, discussed later)
- A GitHub Copilot subscription (optional but recommended)
- One or more local models downloaded (e.g. Gemma 2, Llama 3, DeepSeek Coder)
Part 1: Setting Up the Engine – LM Studio & ngrok
The goal of this section is to run a local LLM and expose it as an API that Cursor can consume.
- Install LM Studio: Download the appropriate package for your OS, run the installer, and launch the application.
Example model selector inside LM Studio.
- Choose and Download a Model: Use the search bar to find a model. Some good starting points:
Llama 3 (8B) or Gemma 2 (9B) for general use
DeepSeek Coder for coding-heavy tasks
GLM-4 for modern language capabilities
Click Download and wait for the model to finish downloading.
- Start the Local Server: Switch to the Local Server tab, select your downloaded model from the dropdown, ensure CORS is enabled, and note the default port (1234). Click Start Server; you'll see logs confirming the service is running.
Server configuration panel with model selected.
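Before involving Cursor at all, you can confirm the server is reachable from a terminal. A quick check against LM Studio's OpenAI-compatible endpoint (1234 is the default port; adjust if you changed it):

```shell
# List the models the local server exposes; a JSON response confirms it's up.
# The "id" field of each entry is the exact model name clients should use.
curl http://localhost:1234/v1/models
```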
- Expose the API with ngrok (Optional but Recommended): While Cursor can reach http://localhost directly, ngrok provides a stable, publicly reachable URL and avoids sandbox restrictions.
```shell
# install (macOS example)
brew install ngrok

# authenticate (paste your own token from the ngrok dashboard)
ngrok config add-authtoken <YOUR_AUTHTOKEN>

# start a tunnel for LM Studio's port
ngrok http 1234
```
Copy the resulting "Forwarding" URL (e.g. https://a1b2-c3d4.ngrok-free.app); you'll need it when configuring Cursor.
ngrok running and showing a forwarding URL.
Traffic arriving at LM Studio through the ngrok tunnel.
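As a concrete example of such a call, here is a chat-completions request sent through the tunnel. The subdomain and model name below are illustrative; substitute your own forwarding URL and the model ID shown in LM Studio:

```shell
# Example chat-completions request through the ngrok tunnel
# (URL and model name are placeholders, not real endpoints).
curl https://a1b2-c3d4.ngrok-free.app/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3-8b-instruct",
        "messages": [{"role": "user", "content": "Hello from the tunnel!"}]
      }'
```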
Part 2: Connecting the Cockpit – Configuring Cursor
Now let's teach Cursor to use your local model endpoint instead of the default cloud API.
- Open Cursor AI Settings: Launch Cursor, open Settings (Ctrl+Shift+J / Cmd+Shift+J), and navigate to the Models section.
- Configure the Custom OpenAI API: In the OpenAI API area, enter a placeholder key such as lm-studio.
Set the Base URL to your endpoint:
Local: http://localhost:1234/v1
ngrok: https://<your-subdomain>.ngrok-free.app/v1
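Since a missing /v1 suffix is an easy mistake to make here, a small shell sanity check can save a debugging round trip (the URL below is a placeholder; swap in your own):

```shell
# Check that the Base URL you plan to paste into Cursor ends in /v1,
# since LM Studio serves its OpenAI-compatible API under that path.
BASE_URL="http://localhost:1234/v1"   # replace with your local or ngrok URL

case "$BASE_URL" in
  */v1) echo "OK: base URL ends in /v1" ;;
  *)    echo "WARNING: add the /v1 suffix" ;;
esac
```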
Tip: don’t forget the trailing /v1; LM Studio mimics the OpenAI API path.
Model Overrides
Cursor may not recognize your model name automatically.
Add a custom model name matching the ID shown in LM Studio (e.g. llama-3-8b-instruct).
If you get an "Invalid Model" warning, rename the model in LM Studio to something Cursor expects (like gpt-4), or use the override option.
- Verify the Connection: Open Cursor Chat (Ctrl+L / Cmd+L) and send a simple prompt such as "Are you running locally?". Watch the LM Studio logs; you should see POST requests hit the server.
GitHub: https://github.com/ketanvijayvargiya/Cursor-LMStudio