DEV Community

Daniel San

Llama 3.2 Running Locally in VSCode: How to Set It Up with CodeGPT and Ollama


Llama 3.2 models are now available to run locally in VSCode, providing a lightweight and secure way to access powerful AI tools directly from your development environment.

With the integration of Ollama and CodeGPT, you can download and install Llama models (1B and 3B) on your machine, making them ready to use for any coding task.

In this guide, I’ll walk you through the installation process, so you can get up and running with Llama 3.2 in VSCode quickly.

Step-by-Step Installation Guide: Llama 3.2 in VSCode

Step 1: Install Visual Studio Code (VSCode)

To start, make sure you have Visual Studio Code installed. If you don’t have it yet, download it from the official VSCode website and follow the instructions for your operating system.

Step 2: Install CodeGPT Extension

The CodeGPT extension is necessary to integrate AI models like Llama 3.2 into your VSCode environment. Here’s how to get it:

  1. Open VSCode.
  2. Click on the Extensions icon in the left sidebar.
  3. Search for “CodeGPT” in the marketplace.
  4. Click Install to add the extension to VSCode.
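If you prefer the terminal, VSCode’s `code` CLI can also install extensions. The extension ID below is an assumption based on the current marketplace listing, so verify it on the extension’s page before running:

```shell
# Install the CodeGPT extension from the command line.
# Extension ID is assumed; confirm it in the VSCode marketplace listing.
code --install-extension DanielSanMedium.dscodegpt

# List installed extensions to confirm it was added.
code --list-extensions | grep -i codegpt
```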


Step 3: Install Ollama

Ollama enables local deployment of language models. To install it:

  1. Visit the Ollama website.
  2. Download the appropriate installer for your operating system.
  3. Follow the installation instructions provided on the site.
  4. Once installed, verify it by typing the following in your terminal:

```shell
ollama --version
```

Output: `ollama version is 0.3.12` (your version number may differ).

Step 4: Download Llama 3.2 Models

With CodeGPT and Ollama installed, you’re ready to download the Llama 3.2 models to your machine:

  1. Open CodeGPT in VSCode.
  2. In the CodeGPT panel, navigate to the Model Selection section.
  3. Select Ollama as the provider and choose a Llama 3.2 model (1B or 3B).
  4. Click “Download Model” to save the model locally.
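If you’d rather download the models outside VSCode, the same 1B and 3B builds can be pulled with Ollama’s CLI (model tags taken from the Ollama model library):

```shell
# Pull the lightweight 1B model and/or the larger 3B model.
ollama pull llama3.2:1b
ollama pull llama3.2:3b

# Confirm the models are available locally.
ollama list
```

CodeGPT should then detect the already-downloaded models when you select Ollama as the provider.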

Step 5: Verify Your Setup

Once the model is downloaded, you can verify it’s ready to use:

  1. Open a code file or project in VSCode.
  2. In the CodeGPT panel, make sure Llama 3.2 is selected as your active model.
  3. Begin interacting with the model for code completions, suggestions, or any coding assistance you need.
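As an optional sanity check outside the editor, you can query Ollama’s local REST API directly (it listens on port 11434 by default). The 3B tag here is just an example; use whichever model you downloaded:

```shell
# Send a one-off prompt to the locally running Ollama server.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "Say hello in one short sentence.", "stream": false}'
```

If the server and model are set up correctly, the response is a JSON object whose `response` field contains the model’s answer.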


Ready to Use Llama 3.2 in VSCode

That’s it! With Llama 3.2 running locally through CodeGPT, you’re set up to enjoy a secure, private, and fast AI assistant for your coding tasks, all without relying on external servers or an internet connection.

If you found this guide helpful, let us know in the comments, and feel free to reach out if you encounter any issues during the setup!

Top comments (1)

Vishvajeet Ramanuj • Edited

I ran into a problem with the CodeGPT extension. I downloaded the model "qwen2.5-coder:3b" successfully, but after the download it started installing, and the installation still hadn’t completed after more than an hour. I don’t know what the problem is; I restarted VSCode too, and it showed me the model download window again. The model is downloaded and I can interact with it through the CLI, but I’m facing this problem in VSCode.
