Twitter's official API now costs $200/month minimum just to read tweets. The free tier is write-only — you can post, but you can't search, pull timelines, or read anything except your own profile.
Here's the alternative: use your browser cookies to call the same GraphQL API that twitter.com uses internally. No API key. No developer account. No approval process.
If you're in a hurry, this is all it takes:
```bash
pip install scweet
```

```python
from Scweet import Scweet

s = Scweet(auth_token="your_auth_token", proxy="http://user:pass@host:port")
tweets = s.search("python programming", since="2025-01-01", limit=100)
print(f"Got {len(tweets)} tweets")
print(tweets[0])
```
## Why Not the Official API?
X restructured its API pricing in February 2023, then raised prices again in October 2024. Here's what it looks like now:
| Tier | Price | What you get |
|---|---|---|
| Free | $0 | Write-only. 500 posts/month. No search. No read access. Only endpoint: `GET /2/users/me` |
| Basic | $200/month | 15,000 read requests/month. 7 days of search history |
| Pro | $5,000/month | 1 million tweets. Full archive search |
| Enterprise | $42,000+/month | Custom. Compliance streams |
Sources: TechCrunch, X Developer Community, xpoz.ai pricing breakdown
X also launched a pay-as-you-go model in February 2026, but plugging equivalent Basic-tier usage into it comes out to ~$575/month — not actually cheaper.
The approach in this guide costs $0 if you run locally, or about $0.30 per 1,000 tweets if you use the hosted cloud version.
## How This Works

When you scroll your Twitter feed in a browser, the web app makes GraphQL API calls to X's backend. These calls are authenticated with cookies set when you log in — specifically, `auth_token` and `ct0`.
Scweet replays those same calls from Python. It sends the exact same requests the browser would, using curl_cffi to match the browser's TLS fingerprint. From X's perspective, it looks like normal browser activity.
No headless browser. No Selenium. No Playwright. Just HTTP requests with the right cookies.
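To make that concrete, the browser's session boils down to two cookie values plus a couple of headers that X checks on every GraphQL call. Here's a rough sketch; the `"Bearer AAAA..."` string is a placeholder for the public token shipped in x.com's own JavaScript bundle, and the exact header set is an assumption on my part. Scweet assembles all of this for you.

```python
# The two cookies that carry your login session
cookies = {
    "auth_token": "YOUR_AUTH_TOKEN",  # long-lived login cookie
    "ct0": "YOUR_CT0",                # CSRF token X sets after login
}

# Headers X's web GraphQL endpoints expect alongside those cookies
headers = {
    "authorization": "Bearer AAAA...",  # placeholder for x.com's public bearer token
    "x-csrf-token": cookies["ct0"],     # must match the ct0 cookie exactly
}
```

The CSRF check is the important detail: the `x-csrf-token` header must echo the `ct0` cookie, which is why Scweet bootstraps `ct0` for you from the `auth_token`.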
## What You'll Need
- A Twitter/X account (free)
- Python 3.9+
- A proxy (strongly recommended — reduces rate limit hits and ban risk)
That's it.
## Step 1: Install Scweet

```bash
pip install -U scweet
```
## Step 2: Get Your `auth_token`
1. Log in to x.com in Chrome or Firefox
2. Open DevTools — press F12 (or right-click > Inspect)
3. Go to Application > Cookies > https://x.com
4. Find `auth_token` and copy the value
That's your credential. Scweet will automatically bootstrap the `ct0` CSRF token from it, so you only need this one value.
Your `auth_token` stays valid for weeks to months. When it expires, Scweet raises an `AuthError`; just repeat this step with a fresh cookie.
## Step 3: Your First Scrape
```python
from Scweet import Scweet

s = Scweet(
    auth_token="YOUR_AUTH_TOKEN",
    proxy="http://user:pass@host:port"
)

# Search for tweets about Bitcoin from 2025 onward
tweets = s.search("bitcoin", since="2025-01-01", limit=200, save=True)
print(f"Collected {len(tweets)} tweets")
```
`save=True` writes the results to a CSV file automatically. You can also pass `save_format="json"` or `save_format="both"`.

Each tweet record includes: `tweet_id`, `timestamp`, `text`, `likes`, `retweets`, `comments`, `tweet_url`, `user` (screen_name, name), `image_links`, and the full raw GraphQL payload if you need it.
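Since each record is a plain dict with those fields, post-processing is ordinary Python. A quick sketch using two fabricated records (the field names come from the list above; the values are made up):

```python
# Fabricated records in the shape described above
tweets = [
    {"tweet_id": "1", "text": "btc up", "likes": 250, "retweets": 40},
    {"tweet_id": "2", "text": "btc down", "likes": 3, "retweets": 0},
]

# Keep only tweets with meaningful engagement
popular = [t for t in tweets if t["likes"] > 100]
for t in popular:
    print(t["tweet_id"], t["likes"], t["text"])
```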
From the CLI (no Python code needed):
```bash
scweet --auth-token YOUR_AUTH_TOKEN --proxy http://user:pass@host:port \
  search "bitcoin" --since 2025-01-01 --limit 200 --save
```
## What Else Can You Scrape?
Tweet search is just one of the endpoints. Here's the full surface:
```python
# Profile timeline — a user's own tweets
timeline = s.get_profile_tweets(["elonmusk"], limit=200)

# Followers
followers = s.get_followers(["elonmusk"], limit=1000)

# Following
following = s.get_following(["elonmusk"], limit=500)

# User info — bio, follower count, verification status, account creation date
profiles = s.get_user_info(["elonmusk", "openai"])
```
All methods accept a list of usernames, so you can batch multiple targets in a single call. And every method has an async variant (`asearch()`, `aget_profile_tweets()`, `aget_followers()`, etc.) for async pipelines.
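The async variants let you fan out several scrapes concurrently with `asyncio.gather`. The sketch below uses a stand-in coroutine instead of a real `asearch()` call, purely to show the shape of such a pipeline:

```python
import asyncio

# Stand-in for s.asearch(query, ...); swap in the real call in your pipeline
async def fake_search(query):
    await asyncio.sleep(0)  # placeholder for the network round trip
    return {"query": query, "count": 0}

async def main():
    queries = ["bitcoin", "ethereum", "solana"]
    # All three searches run concurrently rather than back to back
    return await asyncio.gather(*(fake_search(q) for q in queries))

results = asyncio.run(main())
print(len(results))  # 3
```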
## Scaling: Multiple Accounts
A single account handles hundreds to a few thousand tweets per day before hitting rate limits. For larger jobs, Scweet supports multi-account pooling.
Create a `cookies.json` with multiple accounts, each with its own proxy:

```json
[
  {
    "username": "account_1",
    "cookies": { "auth_token": "..." },
    "proxy": "http://user1:pass1@host1:port1"
  },
  {
    "username": "account_2",
    "cookies": { "auth_token": "..." },
    "proxy": "http://user2:pass2@host2:port2"
  }
]
```
```python
s = Scweet(cookies_file="cookies.json")

# Scweet rotates accounts automatically — handles rate limits, cooldowns, and retries
tweets = s.search("AI startups", limit=10000, save=True)
```
Scweet manages the pool in SQLite: leases, heartbeats, daily counters, cooldowns, and automatic failover when an account hits its limit. You don't touch any of that — it just works.
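If your tokens live in environment variables or a secrets store rather than a hand-edited file, you can generate `cookies.json` at startup. A minimal sketch — the file schema is copied from the example above, but the env-var names (`ACCT1_TOKEN`, `ACCT1_PROXY`, ...) are my own invention:

```python
import json
import os

# Hypothetical env vars: ACCT1_TOKEN, ACCT1_PROXY, ACCT2_TOKEN, ACCT2_PROXY
accounts = []
for i in (1, 2):
    token = os.environ.get(f"ACCT{i}_TOKEN", f"placeholder_token_{i}")
    proxy = os.environ.get(f"ACCT{i}_PROXY", "")
    accounts.append({
        "username": f"account_{i}",
        "cookies": {"auth_token": token},
        "proxy": proxy,
    })

with open("cookies.json", "w") as f:
    json.dump(accounts, f, indent=2)
```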
If a long scrape gets interrupted, pass `resume=True` and it picks up exactly where it left off:

```python
tweets = s.search("AI startups", limit=10000, save=True, resume=True)
```
## Don't Want to Manage Cookies or Accounts?
The Scweet Apify actor does everything above — but managed for you. No cookies, no account pooling, no proxy setup. It includes a free tier (up to 1,000 tweets/day) and scales to millions at $0.25 per 1,000 tweets.
You can run it from the Apify web UI, or call it programmatically via the Apify API:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")
run = client.actor("altimis/scweet").call(run_input={
    "search_query": "bitcoin",
    "max_items": 500,
})

tweets = client.dataset(run["defaultDatasetId"]).list_items().items
print(f"Got {len(tweets)} tweets")
```
This is useful if you want to integrate Twitter data into a pipeline, a cron job, or an n8n/Zapier workflow without managing Python dependencies or browser cookies yourself.
## Links
- GitHub: github.com/Altimis/Scweet — full source, 250+ tests, MIT license
- PyPI: `pip install scweet`
- Apify actor: apify.com/altimis/scweet — hosted, no-code, free tier included
- Full documentation: DOCUMENTATION.md