You can try out the latest Ollama models in VS Code for free.
Ollama also offers Ollama Cloud, which runs larger models on Ollama's hardware so you are not limited by your own machine.
More: Ollama Cloud
First, pull the coding model so it can be accessed from VS Code:
```sh
ollama pull qwen3-coder:480b-cloud
```
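To confirm the pull succeeded, you can list the models your local Ollama instance knows about:

```sh
# The cloud model should appear in the output with its -cloud tag.
ollama list
```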
Next, connect the model to GitHub Copilot in VS Code:

- Open the Copilot chat sidebar.
- Open the model dropdown, go to Manage models, and select Ollama as the provider.
- Select the models you want to make available.
- Back in the model dropdown, choose the model to use (a quick connectivity check follows below).
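Before relying on the Copilot integration, it can help to confirm that the local Ollama server answers for this model. Here is a minimal check, assuming a default install listening on port 11434; the prompt text is just a placeholder:

```sh
# Query the local Ollama server directly; with a -cloud model the
# request should be proxied to Ollama Cloud, but the endpoint is the same.
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3-coder:480b-cloud",
  "prompt": "Write a function that reverses a string in Python.",
  "stream": false
}'
```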

Top comments
What kind of hardware is needed to run qwen3-coder:480b comfortably at home?
Running qwen3-coder:480b locally requires at least 250 GB of RAM or unified memory, which is roughly what you would expect for 480B parameters at around 4-bit quantization (about 240 GB of weights) plus overhead.
Read more at Ollama's website: ollama.com/library/qwen3-coder