Joe Attardi

Originally published at joeattardi.dev

Steps for installing a local AI copilot in Visual Studio Code

Does your company block ChatGPT or GitHub Copilot? Do you have security or trust concerns about sending your code to a third-party AI service? You might not know this, but you can run a large language model (LLM) locally on your computer, and even integrate it with Visual Studio Code.

Using the Ollama tool, you can download and run models locally. In this post, I'll guide you through running the Code Llama model with Ollama and integrating it into Visual Studio Code.

Code Llama is an LLM from Meta that is focused on generating and talking about code. It's based on their Llama 2 model and supports many programming languages.

Installing Ollama and the Code Llama model

Your first step is to install Ollama. Go to https://ollama.com to download and install it. Once Ollama is up and running, you should have a new terminal command, ollama. To verify it's installed correctly, open a terminal and run:

```
ollama -v
```
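If the install succeeded, this prints the version. The output looks something like the following, though the version number on your machine will differ:

```
ollama version is 0.5.7
```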

If you see a version number, you're good to go! Next, download the Code Llama model by running this command:

```
ollama pull codellama
```

This may take a while, depending on your internet connection; the 7B version of Code Llama is 3.8 GB. Go get a cup of coffee, tea, or your favorite beverage while Ollama downloads the model.
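Once the download finishes, you can sanity-check the model straight from the terminal before wiring it into VS Code. This step is optional, and the prompt here is just an example:

```
ollama run codellama "Write a function that reverses a string in JavaScript"
```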

Setting up CodeGPT

CodeGPT is a Visual Studio Code extension that lets you interact with models directly in the editor. In VS Code, go to the Extensions tab and search for codegpt. You'll see several results; make sure to get the one with the blue check mark.

Once CodeGPT is installed, you should see a new CodeGPT icon in the editor's sidebar. Click it to open the CodeGPT interface, then use the dropdown menu at the top of the panel to select Ollama as the provider and codellama as the model.

Once you're up and running, you'll see a text area at the bottom of the panel where you can start chatting. Try entering a prompt such as "Generate the code for a simple React component."

Code Llama will start processing your request. Keep in mind that a locally run model is neither as powerful nor as fast as an online service like Meta AI or ChatGPT. After a few seconds, you should have a result in the chat window.
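The exact output varies from run to run, but for a prompt like the one above you might get something along these lines. This is an illustrative sketch of a typical response, not a captured one:

```jsx
import React from 'react';

// A simple functional component, typical of what Code Llama generates
function HelloWorld() {
  return (
    <div>
      <h1>Hello, world!</h1>
    </div>
  );
}

export default HelloWorld;
```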

Setting up completion

You can also use CodeGPT to suggest code completions, like GitHub Copilot and similar tools do. To set this up, click the Menu button at the top left of the CodeGPT chat panel. A menu will slide out with several options.

Select Autocomplete to set up code completion.

Code Llama has a code variant designed for code completion. It's a separate model, so you'll have to make another large download. Select the codellama:code model from the AI Model dropdown.
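If you'd rather kick off that download ahead of time, you can pull the completion model from the terminal, just like before:

```
ollama pull codellama:code
```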

Next, make sure to click the toggle switch to enable completion.

Now, as you type in your editor, Code Llama will make suggestions. For example, it can fill in the PropTypes for a Greeter component, as in the sketch below.
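Here's roughly what that looks like once the suggestion is accepted. The Greeter component and its name prop are assumed here for illustration:

```jsx
import React from 'react';
import PropTypes from 'prop-types';

// A hypothetical Greeter component with a single `name` prop
function Greeter({ name }) {
  return <p>Hello, {name}!</p>;
}

// The kind of PropTypes declaration the completion model can fill in
Greeter.propTypes = {
  name: PropTypes.string.isRequired,
};

export default Greeter;
```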

If you like a suggestion, you can press the Tab key to accept it.

Have fun!

That's really all there is to it. You now have AI chat and code completion integrated into Visual Studio Code!
