On October 7, 2025, GitHub stripped a bunch of fields out of the Events API without changing a version number. The commits array on PushEvent. The author name and email. author_association on issue/PR/review/comment events. All gone.
No HTTP error. No deprecation warning at request time. No API version bump. The endpoint still returned 200 OK. The JSON was still valid. The shape was just different than it used to be.
If you had a CI hook, an abuse-detection pipeline, a dashboard, an internal tool — anything that read PushEvent.payload.commits — that read started coming back undefined overnight.
What Actually Changed
From GitHub's August 8 changelog:
On PushEvent:

- payload.commits[] — removed. Commit SHAs, author names, author emails, commit messages — all gone.

On IssuesEvent, PullRequestEvent, IssueCommentEvent, PullRequestReviewEvent, PullRequestReviewCommentEvent:

- author_association — removed.
GitHub ran a "brownout" test on September 8, 2025 — one day where the fields were pulled, then restored — and then made the removal permanent in October. The stated reason was abuse: scrapers were using the Events API to harvest commit metadata at scale. Fair enough. But from the consumer side, the surface looked like this:
Before (September):
{
  "type": "PushEvent",
  "payload": {
    "commits": [
      {
        "sha": "a1b2c3...",
        "author": { "name": "Jane", "email": "jane@example.com" },
        "message": "Fix tokenizer"
      }
    ]
  }
}
After (October):
{
  "type": "PushEvent",
  "payload": {}
}
Still 200 OK. Still valid JSON. Just silently missing the thing a lot of tooling was reading.
Why Nobody's Tests Caught It
This is the interesting part. Let's walk through why the usual safety nets didn't trip:
Unit tests didn't catch it because unit tests use fixtures, and fixtures are frozen in time. The test data had commits; production no longer did.
Integration tests didn't catch it unless they were running against the live GitHub API and asserting on the shape of the response. Most integration tests assert on behavior ("does our system process a push event?"), not structure.
TypeScript didn't catch it because TypeScript can't catch what it can't see. The field type is still defined in your Octokit types. The runtime object just doesn't have the field. Your code happily accesses payload.commits and gets undefined, then calls .map() on it and throws.
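Here is a minimal sketch of that gap. The interface below is a hypothetical, stripped-down stand-in for the Octokit types, not the real ones; the point is that the compiler trusts the type while only a runtime check actually looks at the data:

```typescript
// Hypothetical, simplified type -- the real Octokit PushEvent types are richer.
interface PushEventPayload {
  commits: { sha: string; message: string }[];
}

// What the type system believes: this compiles cleanly, and would throw a
// TypeError at runtime once `commits` stops arriving.
function countCommits(payload: PushEventPayload): number {
  return payload.commits.length;
}

// What actually arrives after the change: valid JSON, empty payload.
const live = JSON.parse('{"type":"PushEvent","payload":{}}');

// A runtime guard is the only layer that inspects the real object.
function countCommitsSafely(payload: unknown): number {
  const commits = (payload as { commits?: unknown } | null)?.commits;
  return Array.isArray(commits) ? commits.length : 0;
}
```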
The API version didn't change because GitHub's Events API isn't versioned the way REST APIs with dated versions are. There was no pinned version to stay on. Consumers who wanted the old shape didn't have that option.
Error monitoring didn't flag it early because for a lot of code paths, the failure mode wasn't an exception — it was empty output. Your abuse detector processed the event, saw no commits, and marked the user clean. Your dashboard showed a zero. Your pipeline ran through the "empty case" branch.
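One way to buy back that visibility is to make the missing field loud. A sketch, with names of our own invention (SchemaDriftError, requireCommits): treat a required field that vanished as an error to throw, not an empty case to fall through:

```typescript
// Sketch: a missing required field is schema drift, not legitimate emptiness.
class SchemaDriftError extends Error {}

function requireCommits(event: { type: string; payload: Record<string, unknown> }): unknown[] {
  const commits = event.payload["commits"];
  if (!Array.isArray(commits)) {
    // Surfaces in error monitoring instead of flowing through as a zero.
    throw new SchemaDriftError(`${event.type}: payload.commits missing or not an array`);
  }
  return commits;
}
```

The tradeoff is deliberate: you convert "quietly wrong data" into an exception your existing alerting already knows how to catch.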
Here's what showed up in GitHub Community Discussion #177111 after the change landed:
"This is a silent breaking change. Our abuse-detection pipeline has been running on empty events for a week and we only noticed because a nightly job alerted on throughput being anomalously low."
That's the worst version of a schema change. Not a crash. Not an alert. Just quietly wrong data flowing through your system.
The General Pattern
This isn't a GitHub problem. It's an API surface problem, and it happens constantly:
- Stripe's 2025-03-31 "Basil" release removed billing_thresholds from subscriptions and killed the Upcoming Invoice API outright. Teams that had moved their account default version without re-pinning webhooks got silently migrated.
- Plaid's May 2025 changes renamed zip to postal_code and state to region, and flipped some empty-string fields to null. Anything doing .trim() on those fields stopped working.
- OpenAI's Responses API exposed per-turn shape variance — reasoning appears and disappears depending on whether a tool was called — which static typing can't model.
The common thread: the API provider has legitimate reasons to change the shape (abuse mitigation, data correctness, new capabilities). The consumer's tests assume a frozen structure. The gap between those two realities is where production breaks.
How to Actually Catch This
There are three honest defenses, in order of how much they actually help:
1. Pin what you can pin. Stripe, OpenAI, Shopify — these APIs offer explicit version headers. Use them. Don't move forward without a deliberate upgrade. This doesn't help for GitHub's Events API (no versioning) but it helps everywhere it's available.
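In practice pinning is one line per request. Stripe-Version is Stripe's real header; the wrapper function here is illustrative:

```typescript
// Sketch: build request headers with an explicitly pinned API version,
// rather than inheriting whatever the account default drifts to.
function pinnedHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    // Upgrading this value should be a deliberate, reviewed change.
    "Stripe-Version": "2025-03-31.basil",
  };
}

// Usage (network call elided):
//   await fetch("https://api.stripe.com/v1/subscriptions", { headers: pinnedHeaders(key) });
```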
2. Assert on structure in integration tests. Not just "did we process the event" but "does the payload have the field we rely on." This catches the problem in CI instead of prod — but only if your tests actually run against live endpoints regularly.
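A structural assertion doesn't need much machinery. A sketch, where requiredPaths is whatever your code actually reads (our choice, nothing the provider defines):

```typescript
// Walk a dotted path ("payload.commits") into a parsed response.
function hasPath(obj: unknown, path: string): boolean {
  const value = path.split(".").reduce<unknown>(
    (cur, key) =>
      cur != null && typeof cur === "object" ? (cur as Record<string, unknown>)[key] : undefined,
    obj
  );
  return value !== undefined;
}

// Returns the missing paths, so the test failure names the exact field.
function missingPaths(sample: unknown, requiredPaths: string[]): string[] {
  return requiredPaths.filter((p) => !hasPath(sample, p));
}

// In the integration test (live call elided):
//   const events = await fetch("https://api.github.com/events").then((r) => r.json());
//   const push = events.find((e) => e.type === "PushEvent");
//   expect(missingPaths(push, ["payload.commits"])).toEqual([]);
```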
3. Monitor the response shape in production. This is the one most teams skip. Poll the endpoints you depend on (or sample live traffic), record the structure over time, and diff against a learned baseline. When a field disappears or changes type, you get an alert before your dashboards go empty.
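The core of shape monitoring fits in two small functions. A minimal sketch (arrays are recorded as a leaf type rather than recursed into, to keep it short): flatten a response into its field paths with types, then diff against a stored baseline:

```typescript
// Flatten an object into "dotted.path" -> runtime type.
function fieldPaths(value: unknown, prefix = ""): Map<string, string> {
  const out = new Map<string, string>();
  if (value !== null && typeof value === "object" && !Array.isArray(value)) {
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      const path = prefix ? `${prefix}.${k}` : k;
      out.set(path, Array.isArray(v) ? "array" : typeof v);
      for (const [p, t] of fieldPaths(v, path)) out.set(p, t);
    }
  }
  return out;
}

// Diff against a learned baseline; severity is a policy choice.
function diffShapes(baseline: Map<string, string>, current: Map<string, string>) {
  return {
    removed: [...baseline.keys()].filter((p) => !current.has(p)),   // alert
    retyped: [...baseline.keys()].filter(
      (p) => current.has(p) && current.get(p) !== baseline.get(p)   // alert
    ),
    added: [...current.keys()].filter((p) => !baseline.has(p)),     // informational
  };
}
```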
The third defense is what I've been building at FlareCanary. Point it at your critical endpoints — the ones whose schema changes would make your Monday morning terrible — and it polls them on a schedule, learns the expected structure, and flags drift. Removed fields, type shifts, nullability changes, new fields that might signal a migration. Severity-classified so a new optional field is informational and a removed field is an alert.
You don't strictly need a tool for this. You can cron a script that calls your top 5 endpoints, hashes the field set, and diffs. The point is that some layer needs to be watching the shape, not just the status code.
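The hash-and-diff version is about ten lines. A sketch: reduce a response to a stable fingerprint of its field set, store it between runs, and alert when the hash changes:

```typescript
import { createHash } from "node:crypto";

// Collect every object field path in the response.
function shapePaths(value: unknown, prefix = ""): string[] {
  if (value === null || typeof value !== "object" || Array.isArray(value)) return [];
  return Object.entries(value as Record<string, unknown>).flatMap(([k, v]) => {
    const path = prefix ? `${prefix}.${k}` : k;
    return [path, ...shapePaths(v, path)];
  });
}

// Sorted before hashing, so key order in the JSON doesn't matter.
function shapeHash(response: unknown): string {
  return createHash("sha256").update(shapePaths(response).sort().join("\n")).digest("hex");
}
```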
The Harder Question
The thing the GitHub Events change really surfaces is this: how many of the APIs your service depends on actually have a team watching their response shape?
Most teams I've talked to know their dependency graph at the package level. They can tell you what version of Stripe's SDK they use, what OpenAI model they call, what GitHub endpoints they hit. Almost none of them can tell you whether the response from those endpoints has changed structure in the last month.
That's the monitoring gap. HTTP status codes tell you an endpoint is up. Response times tell you it's fast. Neither tells you the data contract is still what you thought.
If any of the Events API consumers mentioned in the community threads had been diffing /events responses against a baseline, they'd have caught the September 8 brownout and had a full month's warning before the permanent cut. The capability to catch it existed. The habit to watch for it didn't.
That's the real lesson, and it applies to every API you don't control.
If you've been hit by an API schema change that slipped through your tests, I'd genuinely like to hear about it — especially the "empty output, no error" variety. Replies below, or hit me up if you want to compare notes.