
Llama 3.2 Running Locally in VSCode: How to Set It Up with CodeGPT and Ollama


Llama 3.2 models are now available to run locally in VSCode, providing a lightweight and secure way to access powerful AI tools directly from your development environment.

With the integration of Ollama and CodeGPT, you can download and install Llama models (1B and 3B) on your machine, making them ready to use for any coding task.

In this guide, I’ll walk you through the installation process, so you can get up and running with Llama 3.2 in VSCode quickly.

Step-by-Step Installation Guide: Llama 3.2 in VSCode

Step 1: Install Visual Studio Code (VSCode)

To start, make sure you have Visual Studio Code installed. If you don’t have it yet, download it from https://code.visualstudio.com and follow the instructions for your operating system.

Step 2: Install CodeGPT Extension

The CodeGPT extension is necessary to integrate AI models like Llama 3.2 into your VSCode environment. Here’s how to get it:

  1. Open VSCode.
  2. Click the Extensions icon in the left sidebar.
  3. Search for “CodeGPT” in the marketplace and click Install (or install it from the terminal, as shown below).
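
If you prefer the command line, the VSCode CLI can also install the extension. The extension ID used here (DanielSanMedium.dscodegpt) is assumed from the marketplace listing, so confirm it in the Extensions view if the command fails:

```bash
# Install the CodeGPT extension via the VSCode CLI
# (extension ID assumed from the marketplace listing)
code --install-extension DanielSanMedium.dscodegpt
```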


Step 3: Install Ollama

Ollama enables local deployment of language models. To install it:

  1. Visit the Ollama website (https://ollama.com).
  2. Download the appropriate installer for your operating system.
  3. Follow the installation instructions provided on the site.
  4. Once installed, verify it by typing the following in your terminal:
```bash
ollama --version
```

Expected output:

```
ollama version is 0.3.12
```
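
You can also pull the Llama 3.2 models directly from the terminal before opening VSCode. The tags below come from the Ollama model library:

```bash
# Pull the 1B parameter model
ollama pull llama3.2:1b

# Pull the 3B parameter model
ollama pull llama3.2:3b
```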

Step 4: Download Llama 3.2 Models

With CodeGPT and Ollama installed, you’re ready to download the Llama 3.2 models to your machine:

  1. Open the CodeGPT panel in VSCode.
  2. Navigate to the Model Selection section.
  3. Select Ollama as the provider and choose a Llama 3.2 model (1B or 3B).


Click “Download Model” to save the models locally.
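
To confirm the download from the terminal, list the models Ollama has stored locally; the Llama 3.2 tags you downloaded should appear in the output:

```bash
# List locally installed Ollama models
ollama list
```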

Step 5: Verify Your Setup

Once the model is downloaded, you can verify it’s ready to use:

  1. Open a code file or project in VSCode.
  2. In the CodeGPT panel, make sure Llama 3.2 is selected as your active model.
  3. Begin interacting with the model for code completions, suggestions, or any coding assistance you need (a quick smoke test is sketched below).
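
As a sanity check outside of VSCode, you can query Ollama's local HTTP API, which listens on port 11434 by default. The prompt here is only an example:

```bash
# Send a one-off, non-streaming prompt to the locally running model
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Write a one-line hello world in Python.",
  "stream": false
}'
```

A JSON response containing the generated text confirms the model is loaded and answering requests.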


Ready to Use Llama 3.2 in VSCode

That’s it! With Llama 3.2 running locally through CodeGPT, you have a secure, private, and fast AI assistant for your coding tasks, all without relying on external servers or an internet connection.

If you found this guide helpful, let us know in the comments, and feel free to reach out if you encounter any issues during the setup!
