The rise, fall, and cautionary lessons of the most influential API in social media history.
There was a time when "Twitter API" was synonymous with innovation.
In 2007, Twitter opened one of the most generous APIs the tech world had ever seen. Third-party developers didn't just use it—they built Twitter. The retweet button, the @ mention, push notifications, the entire concept of a "timeline algorithm"—all invented by external developers using Twitter's API before Twitter adopted them as core features.
Fast forward to 2025. The X API now costs $42,000/month for basic enterprise access. Free-tier apps can post 17 tweets per day. Academic researchers who once had unlimited access now have... nothing.
What happened between these two points is one of the most dramatic API stories in tech history. And unlike Reddit (which had one bad pricing decision), X's API decline was a slow, methodical destruction spanning over a decade.
Act I: The Golden Age (2006–2012)
The API That Built a Platform
Twitter's early API was radically open:
GET https://api.twitter.com/1/statuses/public_timeline.json
No auth required. No rate limits worth worrying about. The entire public firehose, available to anyone.
This wasn't naive generosity—it was strategy. Twitter in 2007 was a fragile startup with a website that went down so often it spawned the famous "Fail Whale." Third-party clients weren't a threat; they were life support.
What developers built:
| App | Innovation | Twitter Later Adopted? |
|---|---|---|
| Tweetie | Pull-to-refresh UI | Yes (acquired, became official app) |
| Tweetbot | Smart timeline filters | Partially |
| TweetDeck | Multi-column dashboard | Yes (acquired) |
| Twitterrific | The word "tweet" itself | Yes (trademarked it) |
The API's design was simple and RESTful:
GET /1.1/statuses/home_timeline.json # Your timeline
GET /1.1/statuses/show/:id.json # Single tweet
POST /1.1/statuses/update.json # Post a tweet
GET /1.1/search/tweets.json # Search
GET /1.1/users/show.json # User profile
Clean. Predictable. Easy to learn. The URL structure mirrored how users thought about Twitter.
The Streaming API: Ahead of Its Time
Twitter introduced a streaming API before real-time was fashionable:
GET https://stream.twitter.com/1.1/statuses/filter.json?track=keyword
A persistent HTTP connection that pushed tweets in real-time. This powered:
- Breaking news dashboards
- Sentiment analysis tools
- Social listening platforms
- Academic research tools
In 2010, this was revolutionary. WebSockets weren't widely supported yet. Server-Sent Events were barely a spec. Twitter solved real-time data delivery with a simple, elegant streaming endpoint.
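The mechanics were simple: the connection stayed open and each tweet arrived as one line of JSON, with blank lines as keep-alives. A minimal sketch of the client-side parsing (the `parse_filter_stream` helper and the sample payloads are illustrative, not from Twitter's SDKs):

```python
import json

def parse_filter_stream(lines, track):
    """Parse a Twitter-style line-delimited JSON stream, yielding
    tweets whose text contains the tracked keyword.

    `lines` is any iterable of raw stream lines (bytes or str);
    blank lines are keep-alive heartbeats and are skipped."""
    for raw in lines:
        line = raw.decode() if isinstance(raw, bytes) else raw
        if not line.strip():
            continue  # keep-alive newline
        tweet = json.loads(line)
        if track.lower() in tweet.get("text", "").lower():
            yield tweet

# In a real client the iterable would come from a persistent HTTP
# connection, e.g. requests.get(..., stream=True).iter_lines().
sample = [
    b'{"id_str": "1", "text": "Loving this API"}',
    b"",  # heartbeat
    b'{"id_str": "2", "text": "unrelated"}',
]
matches = list(parse_filter_stream(sample, track="api"))
```

The elegance was that any HTTP client capable of reading a response incrementally could consume it; no special protocol support was required.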
Act II: The Doors Start Closing (2012–2022)
API v1.1: The First Betrayal
In 2012, Twitter released API v1.1 with a devastating blog post titled "Changes coming in Version 1.1 of the Twitter API."
The key changes:
- Authentication required for all endpoints — No more anonymous access
- User token limits — Third-party clients capped at 100,000 users
- Display Requirements — Strict rules on how tweets must be rendered
- Rate limits tightened — 15 requests per 15-minute window for many endpoints
The 100,000 user cap was the kill shot for third-party clients. Popular apps like Tweetbot and Twitterrific couldn't grow beyond that limit. Twitter was telling developers: stop building Twitter clients. That's our job now.
Rate Limits (v1.1):
├── App-level: 300 requests / 15 min (search)
├── User-level: 900 requests / 15 min (timeline)
├── Post tweet: 300 per 3 hours
└── DM: 1,000 per 24 hours
The Object Model: Actually Well-Designed
Credit where due—Twitter's data model was clean:
{
  "id": 1234567890,
  "id_str": "1234567890",
  "text": "Hello #api",
  "user": {
    "id": 987654321,
    "screen_name": "developer",
    "followers_count": 1000
  },
  "entities": {
    "hashtags": [{ "text": "api", "indices": [6, 10] }],
    "urls": [...],
    "user_mentions": [...]
  },
  "created_at": "Mon Mar 10 07:00:00 +0000 2025",
  "retweet_count": 42,
  "favorite_count": 108
}
The entities object was brilliant. Instead of forcing developers to parse tweet text with regex to find @mentions, hashtags, and URLs, Twitter pre-parsed everything and provided exact character positions (indices). This was genuinely innovative API design.
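To make this concrete, here is a sketch of rendering mentions as links using only the pre-parsed indices (the `linkify_mentions` helper is hypothetical; the payload shape mirrors the v1.1 entities format):

```python
def linkify_mentions(text, entities):
    """Rewrite @mentions as markdown links using the pre-parsed
    `entities` payload instead of regex.

    Process mentions right-to-left so earlier indices stay valid
    as the string grows."""
    mentions = sorted(entities.get("user_mentions", []),
                      key=lambda m: m["indices"][0], reverse=True)
    for m in mentions:
        start, end = m["indices"]
        link = f"[@{m['screen_name']}](https://twitter.com/{m['screen_name']})"
        text = text[:start] + link + text[end:]
    return text

tweet = {
    "text": "Thanks @alice for the tip!",
    "entities": {"user_mentions": [
        {"screen_name": "alice", "indices": [7, 13]}
    ]},
}
rendered = linkify_mentions(tweet["text"], tweet["entities"])
```

No pattern matching, no edge cases around usernames inside URLs: the API already did the hard part.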
The id_str pattern was pragmatic. JavaScript's Number type couldn't handle 64-bit tweet IDs precisely, so Twitter included string versions of every ID. A practical solution to a real problem.
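The failure mode is easy to demonstrate. Python integers are arbitrary-precision, but routing a 64-bit ID through an IEEE-754 double, which is exactly what JavaScript's `Number` does, silently corrupts it:

```python
import json

# A 64-bit tweet ID, larger than JavaScript's 2**53 - 1 safe-integer limit.
tweet_json = '{"id": 1234567890123456789, "id_str": "1234567890123456789"}'
tweet = json.loads(tweet_json)

# Simulate what a double-based JSON parser would store:
as_double = int(float(tweet["id"]))
lost_precision = as_double != tweet["id"]  # the low digits are gone

# The string field always round-trips safely.
safe_id = tweet["id_str"]
```

Any client in a double-based language simply uses `id_str` everywhere and never touches the numeric field.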
API v2: The Rewrite Nobody Asked For
In 2020, Twitter launched API v2—a ground-up rewrite with a different philosophy:
GET /2/tweets?ids=123,456&tweet.fields=created_at,public_metrics&expansions=author_id&user.fields=username
The fields system: Instead of returning everything, v2 required explicit field selection:
tweet.fields=created_at,text,public_metrics,entities
user.fields=username,profile_image_url,verified
Expansions (similar to Stripe's expand[]):
expansions=author_id,attachments.media_keys
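Assembling a v2 request meant composing these comma-separated parameter lists by hand. A minimal sketch of a URL builder (the `build_v2_tweets_url` helper is illustrative, not an official client):

```python
from urllib.parse import urlencode

def build_v2_tweets_url(ids, tweet_fields=(), expansions=(), user_fields=()):
    """Assemble a v2 tweet-lookup URL with explicit field selection.
    v2 expects each parameter as a comma-joined list."""
    params = {"ids": ",".join(ids)}
    if tweet_fields:
        params["tweet.fields"] = ",".join(tweet_fields)
    if expansions:
        params["expansions"] = ",".join(expansions)
    if user_fields:
        params["user.fields"] = ",".join(user_fields)
    # Keep commas and dots literal so the query stays readable.
    return "https://api.twitter.com/2/tweets?" + urlencode(params, safe=",.")

url = build_v2_tweets_url(
    ids=["123", "456"],
    tweet_fields=["created_at", "public_metrics"],
    expansions=["author_id"],
    user_fields=["username"],
)
```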
On paper, this was better: smaller payloads, explicit data fetching, modern design principles.
In practice, it was a nightmare:
- Two APIs to maintain: v1.1 endpoints still worked but were "deprecated" (yet never actually removed)
- Field confusion: Getting the same data required completely different parameters in v1.1 vs v2
- Incomplete coverage: Many v1.1 features weren't available in v2 for years
- Breaking the ecosystem: Libraries needed to support both versions simultaneously
The migration was never completed cleanly. To this day, some functionality requires v1.1.
Act III: The Musk Era (2022–Present)
November 2022: The API Apocalypse
Within weeks of Elon Musk's acquisition:
- The entire API team was gutted
- Documentation started decaying
- Endpoints broke without explanation
- Rate limits changed without notice
February 2023: The New Pricing
The old pricing:
Standard (v1.1): Free — 500,000 tweets/month read
Premium: $149/mo — 2.5M tweets/month
Enterprise: Custom pricing
Academic Research: Free — Full archive access
The new pricing:
Free: $0/mo — 1,500 tweets/month READ, 50 tweets/month POST
Basic: $100/mo — 10,000 tweets/month READ, 3,000 POST
Pro: $5,000/mo — 1M tweets/month READ, 300,000 POST
Enterprise: $42,000/mo — Starting price, negotiable
The math for a small bot:
Old: Free (Standard API)
New: $100/month (Basic) for far less access
The math for a research institution:
Old: Free (Academic Research API)
New: $42,000/month minimum (Enterprise, since Academic tier was eliminated)
What Broke
Academic research collapsed: Thousands of studies relied on Twitter data. Researchers couldn't justify $42,000/month.
Bots died: The vibrant ecosystem of creative bots (@everyword, @MothGenerator, @big_ben_clock) went silent. With the free tier capped at a trickle of posts and the next step up costing $100/month, hobby projects became untenable.
Monitoring tools scrambled: Social listening platforms that powered PR, marketing, and crisis management had to renegotiate or leave.
Archive access vanished: The full-archive search that academics and journalists relied on was locked behind Enterprise.
The Technical Decay
Beyond pricing, the API itself deteriorated:
Reliability dropped:
- Endpoints started returning 500 errors more frequently
- Webhook delivery became inconsistent
- The streaming API (Filtered Stream) had unexplained disconnects
Documentation rotted:
- Pages referenced features that no longer existed
- Code examples used deprecated authentication methods
- The developer portal had persistent bugs
- Support tickets went unanswered for months
Rate limits became unpredictable:
Documented: 300 requests / 15 minutes
Actual: Sometimes 50, sometimes 300, sometimes an immediate 429 for no clear reason
Developers reported being rate-limited well below documented thresholds, with no explanation and no support channel to ask.
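Under those conditions, the only defense was to trust the documented `x-rate-limit-*` response headers when present and back off conservatively when they weren't. A sketch (the `seconds_until_reset` helper is illustrative; the header names are the ones Twitter documented):

```python
import time

def seconds_until_reset(headers, now=None):
    """Compute how long to sleep from Twitter's rate-limit headers.
    `x-rate-limit-reset` is a Unix timestamp; if the headers are
    missing (as they sometimes were), fall back to a fixed wait."""
    now = time.time() if now is None else now
    remaining = int(headers.get("x-rate-limit-remaining", 0))
    if remaining > 0:
        return 0.0  # quota left, no need to wait
    reset = headers.get("x-rate-limit-reset")
    if reset is None:
        return 60.0  # defensive default when the API misbehaves
    return max(0.0, int(reset) - now)

# Example: quota exhausted, window resets 90 seconds from "now".
wait = seconds_until_reset(
    {"x-rate-limit-remaining": "0", "x-rate-limit-reset": "1090"},
    now=1000,
)
```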
The Technical Autopsy: What Was Good, What Was Bad
✅ What X/Twitter Got Right (Historically)
1. The Entities System
Pre-parsed metadata in tweets was genuinely innovative:
"entities": {
  "hashtags": [
    { "text": "API", "indices": [20, 24] }
  ],
  "urls": [
    { "url": "https://t.co/xxx", "expanded_url": "https://example.com", "indices": [25, 48] }
  ]
}
No regex needed. Exact positions. This saved developers thousands of hours of text parsing.
2. Snowflake IDs
Twitter's ID generation system (Snowflake) became an industry standard:
Snowflake ID: 1234567890123456789
├── Timestamp: 41 bits (69 years of milliseconds)
├── Datacenter: 5 bits
├── Worker: 5 bits
└── Sequence: 12 bits
Properties:
- Time-sortable: Higher ID = newer tweet (no need for created_at in queries)
- Distributed: No central ID counter needed
- Unique: Guaranteed uniqueness across data centers
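The layout above translates directly into a few bit shifts. A sketch of decoding (the `decode_snowflake` helper is illustrative; the epoch constant is Twitter's published Snowflake epoch, 2010-11-04T01:42:54.657Z in milliseconds):

```python
TWITTER_EPOCH_MS = 1288834974657  # Twitter's custom Snowflake epoch

def decode_snowflake(snowflake_id):
    """Unpack a Snowflake ID into its documented components:
    41 timestamp bits, 5 datacenter bits, 5 worker bits, and a
    12-bit per-millisecond sequence."""
    return {
        "timestamp_ms": (snowflake_id >> 22) + TWITTER_EPOCH_MS,
        "datacenter": (snowflake_id >> 17) & 0x1F,
        "worker": (snowflake_id >> 12) & 0x1F,
        "sequence": snowflake_id & 0xFFF,
    }

# Because the timestamp occupies the high bits, sorting by ID
# sorts by creation time -- no created_at needed in queries.
older = decode_snowflake(1234567890123456789)
newer = decode_snowflake(1234567999999999999)
```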
Discord, Instagram, and many others adopted Snowflake or similar schemes. This is Twitter's most lasting technical contribution to API design.
3. OAuth 1.0a Implementation
Twitter was one of the first major platforms to implement OAuth properly, and their implementation became a reference for the industry. The three-legged OAuth flow for Twitter was literally the example in OAuth tutorials for years.
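The heart of that flow is the HMAC-SHA1 request signature. A minimal sketch of the signing step as RFC 5849 specifies it (the `oauth1_signature` helper and the sample secrets are illustrative; a real request also needs `oauth_consumer_key`, `oauth_signature_method`, and `oauth_version` among the parameters):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature: percent-encode and
    sort all parameters, join them into a signature base string, and
    HMAC it with the concatenated secrets."""
    enc = lambda s: quote(str(s), safe="")
    normalized = "&".join(
        f"{enc(k)}={enc(v)}" for k, v in sorted(params.items())
    )
    base_string = "&".join([method.upper(), enc(url), enc(normalized)])
    signing_key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(signing_key.encode(), base_string.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = oauth1_signature(
    "POST",
    "https://api.twitter.com/1.1/statuses/update.json",
    {"status": "Hello world", "oauth_nonce": "abc", "oauth_timestamp": "1"},
    consumer_secret="example_consumer_secret",
)
```

Getting every percent-encoding and sort rule right was notoriously fiddly, which is exactly why Twitter's worked example became the de facto tutorial.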
❌ What X/Twitter Got Wrong
1. The v1.1 → v2 Migration Disaster
Running two API versions simultaneously for 5+ years with neither fully deprecated nor fully featured is an anti-pattern. Compare to Stripe's approach: date-based versioning with automatic backward compatibility.
2. Authentication Complexity
X currently supports:
- OAuth 1.0a (for v1.1 endpoints)
- OAuth 2.0 Authorization Code with PKCE (for v2)
- OAuth 2.0 App-Only (Bearer Token)
- API Key + Secret (Basic Auth for tokens)
Four auth methods across two API versions. Compare to Stripe: one API key.
3. The Timestamp Format
"created_at": "Mon Mar 10 07:00:00 +0000 2025"
This is Ruby's Time#to_s format. Not ISO 8601. Not Unix timestamp. A human-readable string that requires custom parsing in every language.
Stripe uses Unix timestamps. Reddit uses Unix timestamps. Most modern APIs use ISO 8601. Twitter chose... whatever Ruby printed by default in 2006.
v2 fixed this with ISO 8601, but v1.1 still returns the old format.
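The practical cost is a custom parse format in every client. In Python, for instance, v1.1's Ruby-style string needs an explicit `strptime` pattern while v2's ISO 8601 is handled natively (the helper name is illustrative):

```python
from datetime import datetime

# v1.1's Ruby-style timestamp; %z accepts the +0000 offset directly.
V1_FORMAT = "%a %b %d %H:%M:%S %z %Y"

def parse_v1_created_at(value):
    """Parse a v1.1 created_at string into an aware datetime."""
    return datetime.strptime(value, V1_FORMAT)

parsed = parse_v1_created_at("Mon Mar 10 07:00:00 +0000 2025")

# v2 returns ISO 8601, which the standard library handles without
# a custom format string:
v2_parsed = datetime.fromisoformat("2025-03-10T07:00:00+00:00")
same_instant = parsed == v2_parsed
```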
4. No Webhooks (Until Account Activity API)
For years, the only way to know about new mentions, DMs, or followers was polling. The Account Activity API (webhooks) came late, required a CRC challenge implementation, and was limited to specific use cases.
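The CRC check itself was simple once understood: Twitter periodically sent a `crc_token`, and the webhook had to answer with an HMAC-SHA256 of that token keyed by the app's consumer secret. A sketch of the response body (the `crc_response` helper and sample values are illustrative):

```python
import base64
import hashlib
import hmac
import json

def crc_response(crc_token, consumer_secret):
    """Answer an Account Activity API CRC challenge: HMAC-SHA256 the
    crc_token with the app's consumer secret and return the JSON
    body the webhook must send back."""
    digest = hmac.new(consumer_secret.encode(), crc_token.encode(),
                      hashlib.sha256).digest()
    return json.dumps(
        {"response_token": "sha256=" + base64.b64encode(digest).decode()}
    )

body = crc_response("test_crc_token", "example_consumer_secret")
```

The friction wasn't the crypto; it was that every webhook consumer had to implement this handshake correctly before receiving a single event.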
The Contrast Table
| Aspect | X/Twitter | Stripe | Reddit |
|---|---|---|---|
| ID system | Snowflake (excellent) | Prefixed (excellent) | Fullnames (good) |
| Versioning | v1.1/v2 coexistence (messy) | Date-based (clean) | /api/v1/ (basic) |
| Auth | 4 methods across 2 versions | 1 API key | OAuth 2.0 |
| Pricing change | Academic: free → $42K/mo | Stable, granular | Free → $0.24/1K calls |
| Developer notice | Days/weeks | Months/years | ~60 days |
| Documentation | Decaying | Best-in-class | Incomplete |
| Unique innovation | Entities, Snowflake IDs | Expandable objects, idempotency | Thing system |
Lessons Specific to X
1. Don't Build Two APIs When One Will Do
The v1.1/v2 split created years of confusion. If you're going to rewrite your API, commit fully: set a deprecation date, provide migration tools, and finish the new version before announcing the old one's death.
2. Pricing Must Match Value Perception
$42,000/month for an API that's less reliable than when it was free is not a value proposition. It's extortion.
If you must charge, the pricing should reflect:
- Clear value (better SLA, more features, dedicated support)
- Predictable costs (not "starting at" with hidden variables)
- Tiered access (don't force small developers into enterprise pricing)
3. Technical Debt Kills Trust
When your documented rate limits don't match actual behavior, when endpoints break without changelogs, when support tickets go unanswered—you're telling developers that your platform is unreliable.
No amount of good original design can overcome operational neglect.
4. Your API's Legacy Is Bigger Than Your Platform
Twitter's Snowflake IDs are used everywhere. The entities model influenced how other platforms structure metadata. OAuth adoption was accelerated by Twitter's implementation.
X is destroying a technical legacy that transcended the platform itself. That's not just a business loss—it's a loss for the developer community.
Conclusion: The Saddest API Story in Tech
Reddit's API story is about a bad decision. X's API story is about a slow, systematic destruction of something that was once genuinely great.
Twitter didn't just have a good API—they had an API that shaped how we think about social platforms. The streaming API. The entities model. Snowflake IDs. OAuth adoption. The concept of a developer ecosystem as a growth engine.
All of it, methodically dismantled.
The lesson isn't just "don't raise prices"—Reddit already taught us that. The lesson from X is deeper:
A great API is a public good. When you destroy it, you don't just lose developers. You lose the innovations they would have built, the research they would have conducted, and the trust that took a decade to earn.
Stripe builds trust by making changes slowly and carefully. Reddit lost trust with one bad decision. X lost trust by making it clear that developers simply don't matter.
Three platforms. Three approaches. One conclusion:
Your API is your reputation. Treat it accordingly.
Building an API you want developers to love? Apidog helps you design, test, and document APIs that stand the test of time. Start free.