GitHub Copilot billing now has three cost drivers you need to track: seats, premium requests, and GitHub Actions minutes for Copilot code review. The newest change is that Copilot code review on pull requests consumes Actions minutes from the billing account that owns the repository. For API teams, that matters because API PRs often include OpenAPI specs, generated clients, handlers, and tests in the same diff.
This guide shows how to model those costs before they appear on your invoice, how to reduce unnecessary review minutes in CI, and how to keep API specification, contract testing, and AI review steps organized with Apidog.
If you are also estimating direct model API usage, see the deeper guides on GPT-5.5 pricing and DeepSeek V4 pricing.
TL;DR
- Copilot now has three meters: seat licenses, premium requests, and Actions minutes for Copilot code review.
- Copilot code review on PRs runs through GitHub Actions and consumes the org’s normal Actions allowance.
- API repositories tend to consume more because PRs often include specs, generated clients, server code, and tests.
- Premium requests apply to agentic workflows such as Workspace, agent mode, Copilot Spaces, and non-default model selection.
- Standard chat and inline completions remain unmetered for paid tiers.
- Set spending limits before the next billing cycle.
- Start by budgeting 400–800 Actions minutes per month per active API repo, then revise after 30 days of real usage.
The three Copilot billing meters
Copilot used to be simple to forecast. Now you need to track three separate meters.
Meter 1: per-seat license
This is the flat monthly user cost.
- Copilot Business: $10/user/month
- Copilot Enterprise: $19/user/month
This covers:
- Chat
- Inline completions
- Multi-line suggestions
- IDE integrations
- Access to the standard model pool
Seats are the easiest cost to forecast, but they are also easy to over-provision.
Practical action:
Once per quarter:
1. Export active Copilot users.
2. Identify users with little or no activity.
3. Reclaim unused seats.
4. Reassign seats only when needed.
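The quarterly audit above can be scripted against an export from GitHub's Copilot seats API (`GET /orgs/{org}/copilot/billing/seats`). A minimal sketch of the filtering step, assuming seat records shaped like that API's response (an `assignee.login` plus an ISO-8601 `last_activity_at` that may be null for never-used seats) — verify the field names against the current REST docs before relying on them:

```python
from datetime import datetime, timedelta, timezone

def find_stale_seats(seats, max_idle_days=30, now=None):
    """Return logins of seats with no Copilot activity in max_idle_days.

    `seats` follows the shape of GitHub's Copilot seats API response:
    each entry has an `assignee.login` and a `last_activity_at`
    timestamp (None if the seat has never been used).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    stale = []
    for seat in seats:
        last = seat.get("last_activity_at")
        if last is None:
            # Never-used seats are the first candidates for reclaiming.
            stale.append(seat["assignee"]["login"])
            continue
        # GitHub timestamps look like "2024-05-01T12:00:00Z".
        ts = datetime.fromisoformat(last.replace("Z", "+00:00"))
        if ts < cutoff:
            stale.append(seat["assignee"]["login"])
    return stale
```

Feed it the `seats` array from the API export and review the returned logins before reclaiming anything.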
Meter 2: premium requests
Premium requests are GitHub’s usage unit for more expensive Copilot features.
They apply to workflows such as:
- Agent mode
- Workspace
- Copilot Spaces
- Model selection beyond the default model
Current rates, subject to change:
| Feature | Cost in premium requests |
|---|---|
| Default model chat | Free for paid tiers |
| Inline completions | Free for paid tiers |
| Agent mode using default model | 1 per request |
| Workspace using default model | 1 per request |
| Selecting Claude Sonnet 4.5 | 1.5x multiplier |
| Selecting GPT-5.5 | 2x multiplier |
| Selecting GPT-5.5 Pro | 6x multiplier |
| Copilot Spaces query | 1 per query |
Included monthly quota:
- Copilot Business: 300 premium requests/seat
- Copilot Enterprise: 1,000 premium requests/seat
Overage:
$0.04 per premium request
For API teams, premium requests usually come from prompts like:
- "Regenerate the OpenAPI client."
- "Write contract tests for this endpoint."
- "Refactor this handler and update related tests."
- "Create a migration plan for this API version."
Those requests can become multi-step agentic tasks internally. One visible prompt may consume several premium requests.
Use this estimate:
premium_overage =
max(0, requests_used - included_requests) × $0.04
Example for Copilot Business:
included_requests = seats × 300
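The formula can be wrapped in a small helper for quick what-if checks. The defaults below mirror the Business numbers above (300 included requests/seat, $0.04 overage) and are subject to change:

```python
def premium_overage_usd(requests_used, seats, included_per_seat=300, rate=0.04):
    """Monthly premium-request overage in USD.

    included_per_seat: 300 for Copilot Business, 1,000 for Enterprise.
    rate: the $0.04/request overage price, subject to change.
    """
    included = seats * included_per_seat
    return max(0, requests_used - included) * rate
```

For example, 5 Business seats using 2,500 requests yields a $40 overage, matching the worked example later in this guide.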
Meter 3: GitHub Actions minutes for Copilot code review
This is the newest cost driver.
When Copilot reviews a pull request, that review runs through GitHub Actions infrastructure. The minutes it consumes are deducted from the same Actions pool used by your CI workflows.
Important details:
- These minutes come out of your existing GitHub plan's included Actions quota.
- They are not a separate Copilot-specific quota.
- Private repo Actions usage is metered against your minute budget.
- Public repo Actions usage is free.
Typical Copilot review usage for API PRs:
Small PR: 2–3 Actions minutes
Medium PR: 4–6 Actions minutes
Large PR: 10–15 Actions minutes
A practical starting estimate:
monthly_review_minutes = pull_requests_per_month × 4
For example:
50 PRs/month × 4 minutes = 200 Actions minutes/month
That is only Copilot review usage. Your normal CI, tests, security scanners, and deploy jobs still consume Actions minutes separately.
Why API repositories consume more
API repositories tend to hit all three meters harder than smaller application repos.
1. API PRs are larger
A typical API change may touch:
- openapi.yaml
- Generated clients
- Server handlers
- Request/response DTOs
- Contract tests
- Integration tests
- Documentation examples
Copilot review has more context to inspect, so the workflow runs longer.
2. Generated code increases review size
Many teams commit generated clients into the repository.
That means a single endpoint change can produce large diffs across multiple languages:
clients/js/**
clients/python/**
clients/go/**
If Copilot reviews those files, it spends Actions minutes on code that usually should not be manually reviewed.
3. Multiple automated reviewers run on the same PR
A common API PR may trigger:
- Copilot review
- CodeQL
- Snyk
- Custom security scanning
- Contract tests
- Integration tests
- API linting
Each job has its own cost profile. Copilot review is just the newest line item.
How to estimate your monthly Copilot cost
Use a three-step model.
Step 1: calculate seat cost
Business seat cost = active_users × $10
Enterprise seat cost = active_users × $19
Example:
10 Enterprise users × $19 = $190/month
Step 2: estimate premium request overage
Estimate usage per developer.
Typical ranges:
Chat-heavy developer: ~150 requests/month
Agent-heavy developer: ~600–800 requests/month
Business included quota: 300 requests/seat/month
Enterprise included quota: 1,000 requests/seat/month
Formula:
premium_overage =
max(0, requests_used - included_requests) × $0.04
Example:
5 Business users × 300 included = 1,500 included requests
Actual usage = 2,500 requests
Overage = (2,500 - 1,500) × $0.04
= 1,000 × $0.04
= $40
Set an org-level spending limit so runaway agent usage cannot exceed your budget.
Step 3: estimate Actions minutes for Copilot review
Formula:
review_minutes = prs_per_month × average_review_minutes
For medium API PRs:
review_minutes = prs_per_month × 4
If the review minutes exceed your remaining Actions quota:
review_overage =
max(0, review_minutes - actions_quota_remaining) × $0.008
This uses the $0.008/minute rate for Linux runners in private repos; Windows and macOS runners bill at higher rates.
Example:
200 PRs/month × 4 minutes = 800 review minutes/month
For an Enterprise org, that may fit comfortably inside the included Actions quota. For a smaller Team or Business setup with heavy CI usage, it can push you into overage.
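The review-minutes formula can be sketched the same way, using the $0.008/minute Linux rate from above (other runner types cost more, so treat this as a lower bound):

```python
def review_overage_usd(prs_per_month, avg_minutes, quota_remaining, rate=0.008):
    """Actions overage from Copilot review, in USD.

    quota_remaining: included Actions minutes left after normal CI.
    rate: $0.008/min for Linux runners in private repos; higher for
    Windows and macOS runners.
    """
    review_minutes = prs_per_month * avg_minutes
    return max(0, review_minutes - quota_remaining) * rate
```

With 200 PRs at 4 minutes each, the cost is zero while 800+ quota minutes remain, and grows at $0.008 per minute past that point.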
Example monthly estimate
For a 10-developer Enterprise team merging 200 API PRs per month:
Seats:
10 × $19 = $190
Premium request overage:
~$40
Copilot review minutes:
200 PRs × 4 minutes = 800 minutes
Actions overage:
$0 if inside quota
Estimated monthly cost:
$190 seat baseline + ~$40 usage overage = ~$230
For a smaller Business team with the same PR volume, Actions and premium-request quotas are tighter, so overage may appear sooner.
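The three steps combine into one estimator. This is a sketch under the rates quoted above; the usage inputs in the example below (11,000 requests, 800 quota minutes remaining) are illustrative numbers chosen to reproduce the ~$230 figure, not measurements:

```python
def monthly_copilot_estimate(
    seats,
    seat_price,               # $10 Business, $19 Enterprise
    requests_used,
    included_per_seat,        # 300 Business, 1,000 Enterprise
    prs_per_month,
    avg_review_minutes=4,
    actions_quota_remaining=0,
    request_rate=0.04,        # overage per premium request
    minute_rate=0.008,        # Linux private-repo minute rate
):
    """Combine seat, premium-request, and Actions meters into one USD estimate."""
    seat_cost = seats * seat_price
    premium = max(0, requests_used - seats * included_per_seat) * request_rate
    review_minutes = prs_per_month * avg_review_minutes
    actions = max(0, review_minutes - actions_quota_remaining) * minute_rate
    return {
        "seats": seat_cost,
        "premium": premium,
        "actions": actions,
        "total": seat_cost + premium + actions,
    }
```

For the 10-developer Enterprise team above: `monthly_copilot_estimate(10, 19, 11_000, 1_000, 200, actions_quota_remaining=800)` gives $190 in seats, $40 in premium overage, $0 in Actions, ~$230 total.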
CI changes to reduce Copilot review cost
You can reduce Copilot review minutes without disabling the feature entirely.
1. Skip bot PRs
Most teams do not need Copilot review on dependency bumps from Dependabot or Renovate.
```yaml
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  copilot-review:
    if: github.actor != 'dependabot[bot]' && github.actor != 'renovate[bot]'
    runs-on: ubuntu-latest
    steps:
      - uses: github/copilot-review@v1
```
This avoids spending review minutes on routine version bumps.
2. Restrict review to meaningful paths
For API repos, review the files that humans care about:
- API specs
- Server handlers
- Internal business logic
- Tests
Skip generated clients where possible.
```yaml
on:
  pull_request:
    paths:
      - 'apis/**/*.yaml'
      - 'cmd/**'
      - 'internal/**'
      - 'tests/**'
```
This keeps review focused and usually reduces runtime.
3. Exclude generated clients
If generated clients are committed, add path rules to avoid reviewing them.
Example repository layout:
apis/openapi.yaml
internal/
tests/
clients/js/
clients/python/
clients/go/
Prefer reviewing:
apis/**
internal/**
tests/**
Avoid reviewing:
clients/**
If your workflow supports path filters, keep generated code out of the review job.
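One way to express this is a `paths-ignore` filter on the trigger. Note that GitHub Actions does not allow `paths` and `paths-ignore` on the same event, so pick one style: either an include-list of human-reviewed paths, or an ignore-list for generated code, as sketched here:

```yaml
on:
  pull_request:
    paths-ignore:
      - 'clients/**'
```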
4. Use labels to trigger expensive reviews
Instead of reviewing every PR, require an explicit label.
Example policy:
Only run Copilot review when the PR has the label: review-please
This works well when:
- Small changes do not need AI review.
- Large API changes need extra review.
- Maintainers want control over cost.
A label-driven model can cut review volume substantially while preserving value on high-risk changes.
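A sketch of the label gate, reusing the `github/copilot-review@v1` action name from the earlier examples in this guide. The `labeled` trigger type matters: without it, adding the label to an existing PR will not start a run:

```yaml
on:
  pull_request:
    types: [opened, synchronize, labeled]

jobs:
  copilot-review:
    if: contains(github.event.pull_request.labels.*.name, 'review-please')
    runs-on: ubuntu-latest
    steps:
      - uses: github/copilot-review@v1
```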
5. Run cheap validation before AI review
Do not spend Copilot review minutes on PRs that fail basic contract checks.
Run these first:
- OpenAPI validation
- API linting
- Contract tests
- Mock-server checks
- Unit tests
Then run Copilot review only if those pass.
Example workflow structure:
```yaml
jobs:
  contract-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run contract validation
        run: apidog-cli validate

  copilot-review:
    needs: contract-validation
    runs-on: ubuntu-latest
    steps:
      - uses: github/copilot-review@v1
```
The goal is simple: fail fast before the expensive review step.
Governance controls every API team should set
These controls take little time and prevent most billing surprises.
1. Set an org-level spending limit
Do this at the organization level, not only per repository.
Recommended approach:
1. Estimate normal usage.
2. Add a small buffer.
3. Set the limit 10–20% below the amount that would surprise finance.
4. Review after the first billing cycle.
Do not leave overage unlimited unless someone is actively monitoring usage.
2. Enable premium request alerts
GitHub sends alerts at usage thresholds such as:
- 50%
- 75%
- 90%
Route these alerts to a shared channel:
Billing email → Slack / Teams / incident tool
Do not rely on one admin’s inbox.
3. Define a repository policy for Copilot review
Decide where Copilot review should run.
Options:
- Every PR
- Only labeled PRs
- Only protected branches
- Only selected repositories
- Only selected teams
For API teams, a good default is:
- Run on high-risk API repos.
- Use path filters.
- Skip bots.
- Skip generated code.
4. Roll out Enterprise features per team
Avoid enabling every Copilot feature across the whole org at once.
Use a staged rollout:
- Week 1: Platform/API team
- Week 2: One service team
- Week 3: Expand if usage is predictable
- Week 4: Review billing data
This gives you enough data before scaling usage.
Where Apidog fits
Apidog is not a Copilot replacement. It helps keep API design, mock testing, and contract validation in one workflow so Copilot review can focus on code that already passed cheaper checks.
A practical API workflow:
- Keep the API spec and saved request examples in Apidog.
- Commit the collection or spec alongside the repository.
- Run contract tests against the Apidog mock server.
- Fail fast if the spec or contract is invalid.
- Run Copilot review only after contract validation passes.
Recommended sequence:
OpenAPI validation
→ Apidog mock/contract tests
→ Unit tests
→ Copilot code review
→ Merge
This sequencing matters because Copilot review is the expensive step. If a PR fails because an example response does not match the spec, catch that before spending Actions minutes on AI review.
The API testing without Postman guide covers the Apidog mock workflow. The DeepSeek V4 API guide shows the same pattern applied to a model API.
What to watch during the next billing cycle
Put these checkpoints on your calendar.
Days 1–7
Premium request usage usually looks normal.
Most teams stay under the included quota during the first week.
Check:
- Active Copilot users
- Premium request usage
- Repositories with Copilot review enabled
Days 14–21
Heavy users may cross the included premium-request quota.
If you set a spending limit, some premium requests may start failing once the limit is reached.
Check:
- Users with high agent-mode usage
- Teams using Workspace heavily
- Repos with unusually frequent PR review runs
Days 28–30
Actions minutes from Copilot review compound near the end of the cycle.
Compare:
- Current month Actions usage
- Previous month Actions usage
- Copilot review workflow minutes
- Normal CI workflow minutes
Then adjust:
- Tighten path filters
- Exclude generated clients
- Skip bot PRs
- Move heavy users to Enterprise if needed
- Remove inactive seats
Common mistakes
Avoid these patterns.
1. No spending limit
A single runaway agent workflow can consume unexpected budget.
Always set a cap.
2. Copilot review enabled everywhere
Do not enable review on every repository by default.
Start with repositories where review has clear value:
- Public API services
- Security-sensitive services
- High-change API gateways
- Shared platform libraries
3. Generated clients included in review
Generated client diffs inflate runtime and rarely need AI review.
Filter them out.
4. Bot PRs reviewed
Skip:
- dependabot[bot]
- renovate[bot]
- Internal release bots
- Auto-bump bots
5. No baseline metrics
Before changing workflows, export current usage.
Track monthly:
- Copilot seats
- Premium requests
- Actions minutes
- Copilot review workflow duration
- PR count per repo
Without a baseline, you cannot prove that a workflow change saved money.
FAQ
Is the seat price still $10 per user?
Copilot Business is $10/user/month, Copilot Enterprise is $19/user/month, and Copilot Pro for individuals is $10/month.
The seat tier also determines the included premium-request quota.
Are inline completions metered now?
No.
Default model chat and inline completions are unmetered for paid tiers. Premium requests apply to more expensive features such as agent mode, Workspace, Copilot Spaces, and non-default model selection.
What happens when the premium request quota runs out?
Requests may fail with a quota error unless you allow overage.
If overage is enabled, usage bills at:
$0.04 per premium request
up to the spending limit you configure.
Are Actions minutes for Copilot review billed separately?
No.
They consume the same GitHub Actions minute pool as your normal CI jobs.
Track total Actions usage and adjust workflow triggers to avoid surprise overage.
Can I disable Copilot code review entirely?
Yes.
An organization admin can opt repositories out at the policy level. You can also control enrollment per team.
Will Copilot review work on private API specs?
Yes.
For private repositories, the review consumes Actions minutes. For public repositories, Actions usage is free.
Copilot review reads API specs, handlers, tests, and related files like other source code.
Does Copilot review also use premium requests?
Currently, Copilot review consumes Actions minutes only. The reviewer’s model usage is part of the Copilot platform and is not separately billed as premium requests.
This may change later, so monitor the GitHub changelog.
For teams running both Copilot review and direct model API calls in CI, the GPT-5.5 free Codex guide covers the per-token side. Apidog can handle the mock and contract layer so AI review runs only after cheaper checks pass.