If your CI pipeline or security tooling makes a pre-flight call to GET /rate_limit before uploading a SARIF file to GitHub, May 19, 2026 is your deadline. GitHub is removing the code_scanning_upload object from the response. Eleven days of runway from this article.
The headline change is small: one key disappears from a JSON response. The interesting part is what was actually inside that key — and the silent decision your gating logic has been making since you wrote it.
The exact shape change
Today, GET /rate_limit returns this under resources (truncated to the relevant keys):
{
  "resources": {
    "core": { "limit": 5000, "used": 1, "remaining": 4999, "reset": 1372700873 },
    "code_scanning_upload": { "limit": 5000, "used": 1, "remaining": 4999, "reset": 1372700873 }
  }
}
Starting May 19, 2026:
{
  "resources": {
    "core": { "limit": 5000, "used": 1, "remaining": 4999, "reset": 1372700873 }
  }
}
That's the whole change. The code_scanning_upload key is gone. There is no replacement key, because — as we'll get to in a second — there was never a separate quota to replace.
The four places this breaks
The pattern is the same KeyError/undefined/null pointer shape we covered with GitHub's merge_commit_sha removal last month. Different surface, same failure class.
1. Pre-flight gates on SARIF uploads.
The most common pattern in code-scanning automation:
import time

import requests

resp = requests.get("https://api.github.com/rate_limit", headers=headers).json()
csu = resp["resources"]["code_scanning_upload"]
if csu["remaining"] < 10:
    print(f"Low budget — sleeping until {csu['reset']}")
    time.sleep(csu["reset"] - time.time())

# upload the SARIF
upload_sarif(...)
After May 19, the code_scanning_upload lookup raises KeyError: 'code_scanning_upload'. The job exits non-zero, the SARIF never uploads, the security dashboard goes stale, and nobody notices because the alert was wired to the upload-success webhook, not the rate-limit-check failure.
2. PyGithub and Octokit field accessors.
If your code uses PyGithub's RateLimit object, the attribute access is rate_limit.code_scanning_upload. After May 19, that attribute will resolve to None (PyGithub builds the object lazily from the JSON response — missing keys become None rather than AttributeError), and rate_limit.code_scanning_upload.remaining will then raise AttributeError: 'NoneType' object has no attribute 'remaining'.
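If you can't bump PyGithub immediately, a small guard bridges both response shapes. This is a sketch against stand-in objects, not real PyGithub instances; sarif_headroom is a hypothetical helper, and the only assumption is that the attribute resolves to None (or is absent) after the removal:

```python
from types import SimpleNamespace

def sarif_headroom(rate_limit):
    # Fall back to core when code_scanning_upload is absent or None;
    # the two fields have always reflected the same bucket anyway.
    bucket = getattr(rate_limit, "code_scanning_upload", None) or rate_limit.core
    return bucket.remaining

# Stand-ins for the pre- and post-May-19 shapes (hypothetical objects,
# not real PyGithub RateLimit instances):
old = SimpleNamespace(core=SimpleNamespace(remaining=4999),
                      code_scanning_upload=SimpleNamespace(remaining=4999))
new = SimpleNamespace(core=SimpleNamespace(remaining=4999),
                      code_scanning_upload=None)
```

The guard returns the same number on both sides of the cutoff, which is exactly the point: the values were never different.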
Octokit's TypeScript types currently mark the field as required. A typed octokit.rest.rateLimit.get() consumer that destructures the field will fail at compile time the next time you bump @octokit/openapi-types past the cutoff. That's actually the good failure mode — the type checker catches it before deploy.
3. Dashboards graphing the field separately.
If you graph code_scanning_upload.remaining over time on a Grafana panel, the metric flatlines on May 19 and your alert thresholds (e.g., "page if rate-limit headroom < 100") fire constantly until someone notices the panel is querying a key that no longer exists. Whether this is loud or silent depends on how your collector handles missing keys — Telegraf's HTTP-JSON input plugin emits a 0 for missing fields by default, which is the worst possible outcome (silent under-reporting).
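If you control the collection code, the safer pattern is to surface "key missing" as a distinct value rather than zero. A generic Python sketch (not Telegraf config; collect_remaining is a hypothetical helper):

```python
def collect_remaining(resp: dict, bucket: str):
    """Return the remaining count for a rate-limit bucket, or None when
    the bucket key is gone, so dashboards can tell 'missing' from 'zero'."""
    entry = resp.get("resources", {}).get(bucket)
    return entry.get("remaining") if entry else None

# Pre- and post-cutoff response shapes:
before = {"resources": {"core": {"remaining": 4999},
                        "code_scanning_upload": {"remaining": 4999}}}
after = {"resources": {"core": {"remaining": 4999}}}
```

A None (or NaN) in the time series breaks the line on the graph, which a human notices; a silent 0 just moves the line to the wrong place.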
4. Schema-validating clients.
Any client that validates against an OpenAPI schema and treats code_scanning_upload as required will reject the new response as malformed until the schema is bumped. This is the most niche failure but the loudest — and it's how people who run openapi-typescript against GitHub's spec catch this kind of change automatically. Most teams don't.
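You can reproduce that failure mode locally without an OpenAPI toolchain. A toy stand-in for what a schema-validating client does, with the required-key set as the only assumption:

```python
# Keys a pinned, pre-cutoff schema would mark as required under "resources".
REQUIRED = {"core", "code_scanning_upload"}

def validate_resources(payload: dict) -> None:
    """Reject any response missing a key the pinned schema requires."""
    missing = REQUIRED - payload["resources"].keys()
    if missing:
        raise ValueError(f"response missing required keys: {sorted(missing)}")

before = {"resources": {"core": {}, "code_scanning_upload": {}}}
after = {"resources": {"core": {}}}
```

The pre-cutoff shape validates; the post-cutoff shape is rejected until REQUIRED is updated, which is the loud failure described above.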
The deeper twist: the field was always shadowing core
This is the part that makes the change interesting rather than just annoying. From the GitHub Community discussion thread that prompted the deprecation:
"Rate Limit endpoint shows core and code_scanning_upload consuming the same quota during job"
It does, because they are the same quota. The code_scanning_upload object never held its own bucket. SARIF uploads consume from the standard core rate limit (5,000/hr for authenticated users; 15,000/hr for GitHub App installations on Enterprise Cloud organizations). The duplicate object in the response was a documentation-of-intent artifact — GitHub once planned to give SARIF its own bucket, never did, and the field has been a confusing copy of core ever since.
Which means any of these patterns has been wrong for years:
# Pattern A — gating on the wrong field
if rate_limit.code_scanning_upload.remaining > 10:
    upload_sarif()
# This was always equivalent to checking core. The if/then was redundant.
# Pattern B — assuming separate budgets
core_budget = rate_limit.core.remaining
sarif_budget = rate_limit.code_scanning_upload.remaining
total_budget = core_budget + sarif_budget  # double-counted; the budget is one bucket
Pattern B is the silent-fail mode. Code that double-counts the budget thinks it has 10,000 requests of headroom when it actually has 5,000. On a busy day with concurrent CI shards, the second half of the budget evaporates faster than the calculation expects, and the job hits HTTP 403 with X-RateLimit-Remaining: 0 partway through the run — without the rate-limit pre-check ever flagging it, because the pre-check was reading the (duplicate) code_scanning_upload value while the upload calls debited from core.
The May 19 removal is GitHub making this implicit truth explicit. The code that breaks loudly (KeyError) was less wrong than the code that was silently summing the same quota twice.
The migration
For the loud-fail case (KeyError, AttributeError):
# Before
csu = resp["resources"]["code_scanning_upload"]
if csu["remaining"] < 10:
    sleep_until_reset(csu["reset"])

# After
core = resp["resources"]["core"]
if core["remaining"] < 10:
    sleep_until_reset(core["reset"])
For PyGithub:
# Before
rate = gh.get_rate_limit()
if rate.code_scanning_upload.remaining < 10:
    ...

# After
rate = gh.get_rate_limit()
if rate.core.remaining < 10:
    ...
For Octokit users on TypeScript: bump @octokit/openapi-types past the cutoff, run tsc, and let the type errors guide the rename.
For dashboards: drop the code_scanning_upload panel. The core panel was always showing the same numbers; you don't need both.
The harder check: is your code-scanning quota actually safe?
Here's the question the migration itself doesn't answer: now that code_scanning_upload and core are the same explicit bucket, does your CI fleet actually fit inside core once you stop double-counting?
For a single repo doing a few SARIF uploads per push, yes. For a security team running GitHub Advanced Security across hundreds of repos with parallel CodeQL workflows on every PR, the answer might be no. The 5,000/hr limit per token is shared across:
- All core-bucket calls (most of the REST API)
- All SARIF uploads (was already true; now visibly so)
- All Dependabot manifest fetches you do via the API
- Any other automation hitting the same auth
Run the numbers before May 19. If your aggregate core consumption was sitting comfortably below 5,000 because you assumed code_scanning_upload was a separate budget, you may be about to discover otherwise — except the discovery happens via 403s on uploads, not via your rate-limit pre-check.
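A back-of-envelope version of that check. Every number below is a hypothetical placeholder; substitute your fleet's real figures:

```python
CORE_LIMIT = 5000            # per authenticated token, per hour

repos = 120                  # repos uploading SARIF on this token
pushes_per_repo_per_hour = 2
sarif_uploads_per_push = 3   # e.g. one per CodeQL language matrix entry
other_core_calls_per_hour = 1800  # everything else sharing the same auth

sarif_calls = repos * pushes_per_repo_per_hour * sarif_uploads_per_push
total = sarif_calls + other_core_calls_per_hour

print(f"SARIF uploads/hr: {sarif_calls}")
print(f"Total core consumption/hr: {total} / {CORE_LIMIT}")
print("Over budget" if total > CORE_LIMIT else "Fits, for now")
```

If the total lands anywhere near the limit, remember that consumption is bursty: concurrent CI shards front-load the hour, so a fleet that "fits" on average can still hit 403s at peak.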
The clean fix for high-volume code scanning is a GitHub App with installation tokens (up to 12,500 requests/hr per installation on GitHub.com, 15,000/hr for installations on Enterprise Cloud organizations). That's a token-architecture change, not a one-line field rename.
The pattern across these GitHub deprecations
This is the third quiet-field removal we've covered in five weeks:
- April 27 — merge_commit_sha removed from PR responses in the 2026-03-10 API version
- April 28 — seven org security fields retired (PATCH returned 200 but applied nothing)
- May 19 — code_scanning_upload removed from /rate_limit
Each one is a one-line schema change that becomes a multi-hour incident in production because the failure surfaces aren't obviously connected to the announcement. They're not the headline breaking changes from the API version page — they're the small reshapes that nobody on the team is monitoring.
The unifying habit: pin the response shape of every GitHub endpoint your automation depends on, diff it on a schedule, and alert when a key disappears. That's what FlareCanary does — it polls the endpoints you point at, learns the response shape, and flags removed fields with severity classification so the SARIF-upload script's pre-flight check finds out about the change in your alerting channel rather than at 2 AM during a security release.
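The habit doesn't require a product to start. A minimal shape-pinning sketch (not FlareCanary itself; fetching, persistence, and scheduling are left out):

```python
def shape(obj, prefix=""):
    """Flatten a JSON object's key paths into a set, ignoring values."""
    keys = set()
    if isinstance(obj, dict):
        for k, v in obj.items():
            path = f"{prefix}.{k}" if prefix else k
            keys.add(path)
            keys |= shape(v, path)
    return keys

def removed_keys(pinned, current):
    """Key paths present in the pinned snapshot but gone from today's response."""
    return sorted(shape(pinned) - shape(current))

# Pinned last month vs. fetched today (abbreviated responses):
pinned = {"resources": {"core": {"limit": 5000},
                        "code_scanning_upload": {"limit": 5000}}}
today = {"resources": {"core": {"limit": 5000}}}
```

Run it on a schedule, alert when removed_keys is non-empty, and this entire class of deprecation surfaces in your alerting channel instead of as a KeyError in CI.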
Action items, in order, before May 19
- Today: grep your codebases for code_scanning_upload. Anywhere it appears in JSON parsing, dashboard config, or schema validation is a migration target.
- This week: rename to core in pre-flight checks and verify the gate still does what you want — most "the SARIF budget is fine" gates were already lies.
- This week: verify core-bucket headroom across your aggregate token usage. If you've been flying close to the limit while assuming you had two buckets, plan token splits or move to GitHub App auth.
- Before May 19: bump @octokit/openapi-types and PyGithub past the cutoff in CI to surface compile/runtime errors before the production endpoint changes shape.
- After May 19: keep the rename. GitHub has been quietly trimming dead fields all spring. The next one will follow the same pattern.
A 200 response on /rate_limit tells you the request was accepted. It doesn't tell you the field you're reaching for is still there.
If your code-scanning automation has been gating on code_scanning_upload and you found something interesting when you ran the numbers — drop a reply with the shape of the surprise. The shadow-quota pattern is broader than just this field.