YouTube's official Data API has strict quota limits (10,000 units/day). But there's another way.
The Innertube API
YouTube.com itself loads all data through an internal API called Innertube. When you scroll down to load comments, it calls:
POST https://www.youtube.com/youtubei/v1/next
With a JSON body containing the video ID and a continuation token.
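That request can be sketched with the standard library alone. The `context.client` fields are required by Innertube; the `clientName`/`clientVersion` values below are assumptions based on what the YouTube web client currently sends and may need updating over time.

```python
import json
import urllib.request

INNERTUBE_URL = "https://www.youtube.com/youtubei/v1/next"

def build_next_body(continuation: str) -> dict:
    """Build the JSON body for a /next call that loads a page of comments."""
    return {
        "context": {
            "client": {
                "clientName": "WEB",
                # Assumed version string; mimics the web client and may go stale.
                "clientVersion": "2.20250101.00.00",
            }
        },
        "continuation": continuation,
    }

def fetch_next(continuation: str) -> dict:
    """POST the body to Innertube and return the parsed JSON response."""
    req = urllib.request.Request(
        INNERTUBE_URL,
        data=json.dumps(build_next_body(continuation)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

No API key is attached anywhere in the request; only the continuation token identifies what to load.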
How Comments Are Structured (2025+ Format)
YouTube recently changed their comment data format. Comments are now stored in:
frameworkUpdates.entityBatchUpdate.mutations[].payload.commentEntityPayload
Each mutation whose payload contains a commentEntityPayload provides:
- properties.content.content — comment text
- properties.authorButtonA11y — author name
- toolbar.likeCountNotliked — like count
- properties.publishedTime — relative time
- toolbar.replyCount — number of replies
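Walking those paths defensively looks roughly like this. The key names follow the post; they come from observed payloads and YouTube can change them without notice.

```python
def extract_comments(response: dict) -> list[dict]:
    """Pull comment records out of the 2025+ entity-batch format."""
    mutations = (
        response.get("frameworkUpdates", {})
        .get("entityBatchUpdate", {})
        .get("mutations", [])
    )
    comments = []
    for mutation in mutations:
        # Mutations carry other entity types too; keep only comment payloads.
        payload = mutation.get("payload", {}).get("commentEntityPayload")
        if not payload:
            continue
        props = payload.get("properties", {})
        toolbar = payload.get("toolbar", {})
        comments.append({
            "text": props.get("content", {}).get("content"),
            "author": props.get("authorButtonA11y"),
            "likes": toolbar.get("likeCountNotliked"),
            "published": props.get("publishedTime"),
            "replies": toolbar.get("replyCount"),
        })
    return comments
```

Using `.get()` at every level keeps the parser from crashing when a field is absent, which happens often with this undocumented format.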
Getting the Continuation Token
Fetch the video page, extract ytInitialData from the HTML, then navigate:
contents.twoColumnWatchNextResults.results.results.contents
→ itemSectionRenderer.contents
→ continuationItemRenderer.continuationEndpoint.continuationCommand.token
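The steps above can be sketched as two small helpers. The `var ytInitialData = {...};` script pattern is an assumption based on the current watch-page source and may shift; the traversal path is the one described above.

```python
import json
import re
from typing import Optional

def extract_yt_initial_data(html: str) -> dict:
    """Pull the ytInitialData JSON blob out of watch-page HTML."""
    # Assumed inline-script pattern; YouTube may change how the blob is embedded.
    match = re.search(r"var ytInitialData = (\{.*?\});</script>", html, re.DOTALL)
    if not match:
        raise ValueError("ytInitialData not found in page HTML")
    return json.loads(match.group(1))

def find_comments_continuation(data: dict) -> Optional[str]:
    """Walk ytInitialData to the comments-section continuation token."""
    sections = (
        data.get("contents", {})
        .get("twoColumnWatchNextResults", {})
        .get("results", {})
        .get("results", {})
        .get("contents", [])
    )
    for section in sections:
        for item in section.get("itemSectionRenderer", {}).get("contents", []):
            token = (
                item.get("continuationItemRenderer", {})
                .get("continuationEndpoint", {})
                .get("continuationCommand", {})
                .get("token")
            )
            if token:
                return token
    return None
```

The token this returns is what goes into the `continuation` field of the `/next` request body.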
Key Benefits vs Official API
| | Official API | Innertube API |
|---|---|---|
| API key | Required | Not needed |
| Quota | 10,000 units/day | No quota |
| Rate limit | Enforced | None enforced (keep usage reasonable) |
| Comment format | Legacy | Latest |
I built a YouTube Comments Scraper using this approach — it's free on Apify Store (search knotless_cadence). I also built YouTube Channel and Search scrapers using the same technique.
Has anyone else worked with the Innertube API?