Sushan
How to connect a local AI model to VS Code.

You can try out the latest Ollama models in VS Code for free.

We are using Ollama, a free, open-source application for running AI models locally on your own machine.

Installing and using Ollama

You can download Ollama from its website, ollama.com.

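For instance, on Linux the website provides a one-line install script (macOS and Windows use a regular installer download instead):

  curl -fsSL https://ollama.com/install.sh | sh
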
Once it's installed, you can use Ollama from your terminal.

  1. Open your terminal.
  2. Type ollama to verify that it's installed; with no arguments, it prints its usage help.

  3. Run a model you like (depending on your hardware) using the command:
    ollama run qwen3:4b
    This command will pull the model and then run it (a full first session is sketched after this list).

  4. Change the model name to your preferred model to install that one instead.
    To view all the available models, go to ollama.com/search

  5. If you want to run a high-end AI model, you can use Ollama Cloud for free.
    Pull a cloud model like this: ollama pull qwen3-coder:480b-cloud
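
Put together, a minimal first session could look like this (qwen3:4b is just the model from step 3; substitute any model that fits your hardware):

  ollama pull qwen3:4b   # download the model weights
  ollama list            # confirm it appears in your local library
  ollama run qwen3:4b    # start an interactive chat; type /bye to exit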

Integrating with VS Code

Make sure the Ollama server is running in the background.
Verification: Open localhost:11434 in your browser; if the server is up, it responds with "Ollama is running".
If not: Start it with the command ollama serve.
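
You can also verify from the terminal; both of these are standard Ollama endpoints:

  curl http://localhost:11434           # responds with "Ollama is running"
  curl http://localhost:11434/api/tags  # JSON list of locally installed models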

  1. Open VS Code -> Copilot chat sidebar.
  2. Open the model dropdown -> Manage Models -> select Ollama -> tick the desired models. Then open the model dropdown again and choose the one you want to use.
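
Copilot reaches these models through the same local server, so a quick end-to-end check from the terminal (using the qwen3:4b model pulled earlier) is:

  curl http://localhost:11434/api/generate -d '{
    "model": "qwen3:4b",
    "prompt": "Say hello in five words.",
    "stream": false
  }'

A JSON reply with a "response" field means the endpoint VS Code depends on is working.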

Note that the Ollama models will disappear from the dropdown once you turn the Ollama server off.
