Manjush

Originally published at manjushsh.github.io

Configuring Ollama and Continue VS Code Extension for Local Coding Assistant

πŸ”— Links

GitHub | GitHub Pages

Prerequisites

  • Ollama installed on your system. You can visit Ollama and download the application for your system.
  • The AI model we will be using here is Code Llama. You can use your preferred model. Code Llama is a model for generating and discussing code, built on top of Llama 2. It supports many of the most popular programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, Bash, and more. If it is not installed yet, you can pull it with the following command:
ollama pull codellama

You can also install StarCoder2 3B for code autocompletion by running:

ollama pull starcoder2:3b

NOTE: It’s crucial to choose models that fit your system’s hardware (particularly available RAM/VRAM) to ensure smooth operation and avoid any hiccups.
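
Before moving on to Continue, it is worth sanity-checking that Ollama is running and the models are available. A minimal sketch, assuming Ollama is serving on its default port 11434:

# List the models pulled locally; codellama and starcoder2:3b should appear
ollama list

# Confirm the Ollama server is reachable over HTTP
curl http://localhost:11434/api/tags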

Installing and configuring Continue

You can install Continue from here in the VS Code Marketplace.
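
If you prefer the terminal, the extension can also be installed with the VS Code CLI; a sketch, assuming the Marketplace extension ID is Continue.continue:

# Install the Continue extension from the command line
code --install-extension Continue.continue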

After installation, you should see it in the sidebar as shown below:

Continue in VSCode

Configuring Continue to use a local model

Click on the settings icon:

Configure settings icon

Add the following entry to the models list in Continue's config.json:

{
  "apiBase": "http://localhost:11434/",
  "model": "codellama",
  "provider": "ollama",
  "title": "CodeLlama"
}
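Note that apiBase points at the local Ollama server. To confirm that this endpoint and model name respond before involving the editor, you can call Ollama's generate API directly; a minimal sketch, assuming codellama has already been pulled:

# Ask codellama for a completion via Ollama's HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "codellama",
  "prompt": "Write a Python function that reverses a string.",
  "stream": false
}'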

Also add a tabAutocompleteModel entry so that Starcoder2 handles inline completions:

"tabAutocompleteModel": {
    "apiBase": "http://localhost:11434/",
    "title": "Starcoder2 3b",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
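Putting the two snippets together, the relevant portion of Continue's config.json might look like the sketch below (any other settings already in the file are left untouched):

{
  "models": [
    {
      "apiBase": "http://localhost:11434/",
      "model": "codellama",
      "provider": "ollama",
      "title": "CodeLlama"
    }
  ],
  "tabAutocompleteModel": {
    "apiBase": "http://localhost:11434/",
    "title": "Starcoder2 3b",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}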

Update config

Select CodeLlama, which will be visible in the dropdown once you add it to the configuration:

Pick model added in dropdown

You can also chat as normal, as shown below:

Chat

You can also select a code block or a file and ask the AI about it:

Code

Thanks for the support


Top comments (2)

Vex Vendetta •

Does Continue collect your data?

Manjush •

I think it depends.

"You are not required to provide us with any personal data in order to use our open-source software. However, to the extent you choose to interact with us directly or utilize one of our non-open-source offerings, we may collect the following categories of personal data you provide in connection with those offerings."

You can read more here
