
Using GitHub Copilot CLI with Azure AI Foundry (BYOK Models) – Part 2

In Part 1, you ran GitHub Copilot CLI against a local model using LM Studio.

That setup gives you control. No external calls. No data leaving your machine. But it comes with clear limits. Small models struggle, and output quality drops fast on anything non-trivial.

This is where Azure AI Foundry comes in.

Instead of running models locally, you connect Copilot CLI to a cloud-hosted model you control. You still use the same BYOK (Bring Your Own Key) approach, but now the model runs on Azure.

That changes the trade-offs:

You gain better models and stronger reasoning
You keep control over the endpoint and deployment
You accept cost and network dependency

This is not a replacement for the local setup. It is the next step if you still want control over where your data goes but, at the same time, need more power.


Setting Up Azure AI Foundry

This is the only "Azure-heavy" part. Keep it minimal. You just need a working endpoint and a deployed model.

1. Create or Use an Existing Resource

Go to Azure AI Foundry in the Azure portal.

You need:

  • A resource already created
  • Access to it (permissions matter)

If you already use Azure OpenAI, you can reuse it.


2. Deploy a Model

This step is mandatory.

In Azure, you cannot call a model directly. You must deploy it first.

Steps:

  1. Open your resource
  2. Go to Model deployments
  3. Select a model (for example GPT-4 class)
  4. Assign a deployment name

Example:

copilot-gpt53codex

This name is what Copilot CLI will use later.

This is what the GPT 5.3 Codex deployment looks like, but you can also deploy models directly from the Hugging Face catalog.

If your Azure AI Foundry resource has the required permissions and quota, you can deploy a model directly with the Deploy button on the model details page.


3. Get Endpoint and API Key

From the resource, retrieve:

  • Endpoint URL
  • API key

Typical endpoint:

https://<your-resource>.openai.azure.com/

You will combine this with the deployment in the next step.


4. Build the Final Endpoint

Copilot CLI expects a full path, not just the base URL.

Format:

https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/v1

Example:

https://my-ai.openai.azure.com/openai/deployments/copilot-gpt53codex/v1

If this path is wrong, nothing will work.
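As a sanity check, the format above can be assembled in the shell before you touch any configuration. The resource name my-ai and deployment name copilot-gpt53codex are just the example values from this post, so substitute your own:

```shell
# Example values from this post -- substitute your own.
RESOURCE="my-ai"
DEPLOYMENT="copilot-gpt53codex"

# Copilot CLI needs the full deployment path, not just the base URL.
ENDPOINT="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/v1"

echo "$ENDPOINT"
```

The printed value is exactly what goes into the environment variables later, so building it once here avoids typos in three different places.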


5. Quick Validation

Before using Copilot CLI, validate:

  • The deployment exists
  • The API key is valid
  • The endpoint is reachable

If you skip this, debugging later becomes painful.
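One way to check all three points at once is a single chat-completions request against the deployment. The snippet below is a sketch, not the only route: it assumes the classic Azure OpenAI REST path with an api-version query parameter and an api-key header, and reuses the example names my-ai and copilot-gpt53codex from this post.

```shell
# Example values from this post -- substitute your own.
ENDPOINT="https://my-ai.openai.azure.com"
DEPLOYMENT="copilot-gpt53codex"

# Classic Azure OpenAI route (assumed here); adjust api-version if needed.
URL="${ENDPOINT}/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=2024-02-15-preview"

if [ -n "${AZURE_OPENAI_API_KEY:-}" ]; then
  # A 200 response with a JSON body confirms all three checks at once:
  # the deployment exists, the key is valid, and the endpoint is reachable.
  curl -sS -X POST "$URL" \
    -H "Content-Type: application/json" \
    -H "api-key: ${AZURE_OPENAI_API_KEY}" \
    -d '{"messages":[{"role":"user","content":"ping"}],"max_tokens":5}'
else
  echo "Set AZURE_OPENAI_API_KEY first, then re-run."
fi
```

A 404 here usually means a wrong deployment name, a 401 a bad key, and a DNS error a wrong resource name.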


Connecting Copilot CLI to Azure

This is the same pattern as Part 1. Different endpoint. More constraints.

Set the Environment Variables

You must point Copilot CLI to your Azure deployment.

PowerShell example:

$env:COPILOT_PROVIDER_BASE_URL="https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/v1"
$env:COPILOT_MODEL="<your-deployment>"
$env:AZURE_OPENAI_API_KEY="<your-api-key>"

What matters:

  • COPILOT_PROVIDER_BASE_URL → full Azure endpoint
  • COPILOT_MODEL → deployment name, not model name
  • AZURE_OPENAI_API_KEY → required for authentication

No COPILOT_OFFLINE here. Requests go to Azure.
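On macOS or Linux, the same three variables can be exported from a bash or zsh shell, with the same placeholders as in the PowerShell example:

```shell
# Same variables as the PowerShell example, bash/zsh syntax.
export COPILOT_PROVIDER_BASE_URL="https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/v1"
export COPILOT_MODEL="<your-deployment>"
export AZURE_OPENAI_API_KEY="<your-api-key>"
```

Note that these live only in the current shell session; put them in your shell profile if you want them to persist.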


Run a Simple Test

Open Copilot CLI with the following command in a project folder you want to test:

copilot --banner

Ask for a simple task like: "Give me the list of all the files larger than 2MB".

If everything is configured correctly:

  • The response comes from your Azure AI Foundry model
  • Responses arrive fast, noticeably faster than the local setup from Part 1
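For the file-size prompt above, one plausible answer the model lands on is the classic find one-liner, which is a quick way to eyeball whether the output makes sense:

```shell
# List all files under the current directory larger than 2 MB.
find . -type f -size +2M
```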

Reality Check

This is not “Copilot with Azure magic”.

It is:

  • Copilot CLI acting as a thin client
  • Azure acting as the model provider

You gain better models, not better tooling.


👀 GitHub Copilot quota visibility in VS Code

If you use GitHub Copilot and ever wondered:

  • what plan you’re on
  • whether you have limits
  • how much premium quota is left
  • when it resets

I built a small VS Code extension called Copilot Insights.

It shows Copilot plan and quota status directly inside VS Code.

No usage analytics. No productivity scoring. Just clarity.

👉 VS Code Marketplace:
https://marketplace.visualstudio.com/items?itemName=emanuelebartolesi.vscode-copilot-insights
