<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ketan Vijayvargiya</title>
    <description>The latest articles on DEV Community by Ketan Vijayvargiya (@ketan_vijayvargiya_be9fb0).</description>
    <link>https://dev.to/ketan_vijayvargiya_be9fb0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3799911%2F9911f105-db55-4888-82b4-1e3cdc592fc6.png</url>
      <title>DEV Community: Ketan Vijayvargiya</title>
      <link>https://dev.to/ketan_vijayvargiya_be9fb0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ketan_vijayvargiya_be9fb0"/>
    <language>en</language>
    <item>
      <title>Use Cursor with LM Studio</title>
      <dc:creator>Ketan Vijayvargiya</dc:creator>
      <pubDate>Sun, 01 Mar 2026 12:50:22 +0000</pubDate>
      <link>https://dev.to/ketan_vijayvargiya_be9fb0/use-cursor-with-lm-studio-56e1</link>
      <guid>https://dev.to/ketan_vijayvargiya_be9fb0/use-cursor-with-lm-studio-56e1</guid>
      <description>&lt;p&gt;Elevate Your Coding: Integrating Cursor with Local LLMs (LM Studio) and GitHub Copilot&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;br&gt;
Before you begin, make sure you have:&lt;/p&gt;

&lt;p&gt;Cursor installed&lt;br&gt;
LM Studio installed (see &lt;strong&gt;lmstudio.ai&lt;/strong&gt;)&lt;/p&gt;

&lt;p&gt;ngrok (optional, discussed later)&lt;/p&gt;

&lt;p&gt;A GitHub Copilot subscription (optional but recommended)&lt;/p&gt;

&lt;p&gt;One or more local models downloaded (e.g. Gemma2, Llama3, DeepSeekCoder)&lt;/p&gt;

&lt;p&gt;Part 1: Setting Up the Engine – LM Studio &amp;amp; ngrok&lt;br&gt;
The goal of this section is to run a local LLM and expose it as an API that Cursor can consume.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install LM Studio
Download the appropriate package for your OS.
Run the installer and launch the application.
LM Studio Model List&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlpk2kemz2xti8wm7q21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlpk2kemz2xti8wm7q21.png" alt=" " width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example model selector inside LM Studio.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Choose and Download a Model
Use the search bar to find a model. Some good starting points:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Llama3 (8B) or Gemma2 (9B) for general use&lt;br&gt;
DeepSeekCoder for coding-heavy tasks&lt;br&gt;
GLM4 for modern language capabilities&lt;br&gt;
Click Download and wait for the model to finish downloading.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Start the Local Server
Switch to the Local Server tab.
Select your downloaded model from the dropdown.
Ensure CORS is enabled and note the default port (1234).
Click Start Server. You'll see logs confirming the service is running.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ytfpgxwr63ify3v5x0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33ytfpgxwr63ify3v5x0.png" alt=" " width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Server configuration panel with model selected.&lt;/p&gt;
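
&lt;p&gt;Before wiring up Cursor, it helps to confirm the server answers on its own. The snippet below is a minimal sketch that uses curl against LM Studio's OpenAI-compatible endpoints on the default port 1234; the model ID llama-3-8b-instruct is an assumption and should match whatever your LM Studio instance actually lists.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# list the models LM Studio is currently serving
curl http://localhost:1234/v1/models

# ask for a single chat completion
# (replace llama-3-8b-instruct with the model ID reported by /v1/models)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3-8b-instruct",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If both commands return JSON, the local API is ready for the next steps.&lt;/p&gt;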

&lt;ol start="4"&gt;
&lt;li&gt;Expose the API with ngrok (Optional but Recommended)
While Cursor can hit &lt;a href="http://localhost" rel="noopener noreferrer"&gt;http://localhost&lt;/a&gt;, using ngrok provides a stable, publicly reachable URL and avoids sandbox restrictions.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;# install (macOS example)
brew install ngrok

# authenticate
ngrok config add-authtoken YOUR_AUTHTOKEN

# start a tunnel for LM Studio's port
ngrok http 1234
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Copy the resulting "Forwarding" URL (e.g. &lt;a href="https://a1b2-c3d4.ngrok-free.app" rel="noopener noreferrer"&gt;https://a1b2-c3d4.ngrok-free.app&lt;/a&gt;); you'll need it when configuring Cursor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ti7vdosmo1dy6hkcs6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ti7vdosmo1dy6hkcs6p.png" alt=" " width="800" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ngrok running and showing a forwarding URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1ct86f3gut4k7zo6xgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1ct86f3gut4k7zo6xgg.png" alt=" " width="800" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sample API call via ngrok&lt;/p&gt;

&lt;p&gt;Traffic arriving at LM Studio through the ngrok tunnel.&lt;/p&gt;
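
&lt;p&gt;As a quick check that the tunnel forwards correctly, you can repeat the models request through the public URL. This is a sketch that assumes the example forwarding URL from above (https://a1b2-c3d4.ngrok-free.app); substitute your own.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# same endpoint as before, now routed through the ngrok tunnel;
# the request should also appear in the LM Studio server logs
curl https://a1b2-c3d4.ngrok-free.app/v1/models
&lt;/code&gt;&lt;/pre&gt;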

&lt;p&gt;Part 2: Connecting the Cockpit – Configuring Cursor&lt;br&gt;
Now let's teach Cursor to use your local model endpoint instead of the default cloud API.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Cursor AI Settings
Launch Cursor.
Open Settings (Ctrl+Shift+J / Cmd+Shift+J).
Navigate to the Models section.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the Custom OpenAI API&lt;br&gt;
In the OpenAI API area enter a placeholder key such as lm-studio.&lt;br&gt;
Set the Base URL to your endpoint:&lt;br&gt;
Local: &lt;a href="http://localhost:1234/v1" rel="noopener noreferrer"&gt;http://localhost:1234/v1&lt;/a&gt;&lt;br&gt;
ngrok: your forwarding URL plus /v1 (e.g. https://a1b2-c3d4.ngrok-free.app/v1)&lt;br&gt;
Tip: don't forget the trailing /v1; LM Studio mimics the OpenAI API path.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Model Overrides&lt;br&gt;
Cursor may not recognize your model name automatically.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Add a custom model name matching the ID shown in LM Studio (e.g. llama-3-8b-instruct).&lt;br&gt;
If you get an "Invalid Model" warning, rename the model in LM Studio to something Cursor expects (like gpt-4), or use the override option.&lt;/p&gt;
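
&lt;p&gt;Before testing inside Cursor, you can replay the exact combination of base URL, placeholder key, and model name from the terminal. The sketch below assumes the ngrok URL from Part 1 and a model ID of llama-3-8b-instruct; adjust both to your setup.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# mirror Cursor's settings: base URL ending in /v1, placeholder key, custom model name
curl https://a1b2-c3d4.ngrok-free.app/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer lm-studio" \
  -d '{
        "model": "llama-3-8b-instruct",
        "messages": [{"role": "user", "content": "Are you running locally?"}]
      }'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If this returns a completion but Cursor still complains, the problem is most likely on the Cursor side (usually the model name), not the server.&lt;/p&gt;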

&lt;ol start="4"&gt;
&lt;li&gt;Verify the Connection
Open Cursor Chat (Ctrl+L / Cmd+L).
Send a simple prompt: "Are you running locally?".
Watch the LM Studio logs; you should see POST requests hit the server.
LM Studio Logs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxjoajwl083jp09elped.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxjoajwl083jp09elped.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/ketanvijayvargiya/Cursor-LMStudio" rel="noopener noreferrer"&gt;https://github.com/ketanvijayvargiya/Cursor-LMStudio&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>tooling</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
