The LinkedIn Marketing API is enterprise-gated. Phantombuster wants $69/mo. The cookie-paste shortcut on Stack Overflow breaks within four requests. SuperMCP solves this with Playwright and your real Chrome session. Here's the architecture plus the prompt patterns I use weekly.
TL;DR
I run a forum at webmatrices.com where indie founders post about their SaaS bottlenecks. A lot of the same complaints surface on LinkedIn, in the comments under bigger founder posts. I wanted Claude to cross-reference both. The cleanest path turned out to be a LinkedIn MCP server that reuses your Chrome login session via Playwright. No API key, no third-party cloud, nothing leaves your laptop.
Code's at github.com/Bishwas-py/supermcp. MIT-licensed. Install is pip install supermcp && supermcp setup. The LinkedIn-specific docs are at webmatrices.com/supermcp/linkedin-mcp.
If you just want the working setup, skip to the architecture section. If you want to know why the cookie-paste shortcut fails, keep reading.
The naïve approach (and why it breaks)
Before SuperMCP existed, I did the dumb thing one weekend. I opened LinkedIn in Chrome, opened devtools, copied my li_at cookie out of Application > Cookies, and pasted it into a Python script using httpx:
```python
import httpx

# li_at is LinkedIn's session cookie, copied straight out of Chrome devtools
cookies = {"li_at": "AQED...redacted..."}
r = httpx.get("https://www.linkedin.com/voyager/api/me", cookies=cookies)
```
That worked. I got a clean JSON response with my own profile data. I felt clever for about ninety seconds.
Then I tried the search endpoint. voyager/api/graphql/... returned HTTP 999, LinkedIn's non-standard "request denied" status. Tried again. 999 again. Within four requests, LinkedIn sent a 2FA email to my real address: "We noticed an unusual sign-in attempt." No sign-in had happened. The script was using my session cookie. The challenge fired anyway.
The reason: LinkedIn's anti-abuse system isn't just looking at cookies. It's looking at TLS fingerprints, JA3 signatures, the order of HTTP/2 headers, the User-Agent/Accept-Language combo, and a few dozen other things that make a real Chrome request look like a real Chrome request. httpx doesn't lie about being httpx, even with a stolen cookie. Once the fingerprint was wrong, the cookie became evidence rather than authentication.
So copying cookies into a script is the correct mental model and the wrong implementation. The fix is: don't copy the cookie. Make a real browser do the request. Which is what a headless Chromium with persistent storage state actually is.
The architecture that worked
Three layers, nothing fancy:
```
Claude / Cursor ←stdio→ MCP server (Python + FastMCP)
                            │
                            └── Playwright (headless Chromium
                                with persistent storage state)
                                    │
                                    └── linkedin.com (your real session)
```
The trick is persistent storage state. Playwright lets you point a Chromium instance at a directory containing cookies + localStorage + IndexedDB from a previous session. That session can be one Playwright created earlier (run it once, log in by hand, save state to JSON), or in some configs you can clone a directory off your real Chrome profile. Either way, the resulting browser is real. The TLS fingerprint is real. The JA3 is real. The header order is real. LinkedIn sees a logged-in user opening their search page in Chrome, which is what's actually happening.
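For the "save state to JSON" variant, Playwright's storage-state file is nothing exotic: cookies plus per-origin localStorage. Here's a trimmed illustration of what `context.storage_state(path="state.json")` writes (values redacted, and the exact field set varies by Playwright version):

```json
{
  "cookies": [
    {
      "name": "li_at",
      "value": "AQED...redacted...",
      "domain": ".linkedin.com",
      "path": "/",
      "expires": 1767225600,
      "httpOnly": true,
      "secure": true,
      "sameSite": "None"
    }
  ],
  "origins": [
    {
      "origin": "https://www.linkedin.com",
      "localStorage": [{ "name": "...", "value": "..." }]
    }
  ]
}
```

Loading it back is `browser.new_context(storage_state="state.json")`, and from then on every page in that context carries the logged-in session.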
After that, FastMCP wraps the tools in about 20 lines of glue per tool:
```python
from urllib.parse import quote

from fastmcp import FastMCP

mcp = FastMCP("supermcp")

@mcp.tool()
async def linkedin_search(query: str, limit: int = 20) -> str:
    """Search LinkedIn posts. Returns markdown with author, reactions, URL."""
    page = await browser.new_page()  # `browser` is the one long-lived Playwright instance
    await page.goto(
        f"https://www.linkedin.com/search/results/content/?keywords={quote(query)}"
    )
    # parse results from the rendered DOM
    ...
    return format_as_markdown(results)
```
A few things I learned the hard way once I had it working:
Return markdown, not JSON. Models read markdown faster (about 30% fewer tokens, in my testing on Claude 4.6) and they chain follow-up calls more reliably when IDs and URLs are surfaced as plain text rather than buried in nested objects. I have a small format helper that always emits stable IDs in **id:** abc123 form so Claude can call linkedin_post_comments(post_id="abc123") after a search.
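The format helper is roughly this shape (names and fields are mine, not the repo's; a sketch of the stable-ID convention, not SuperMCP's actual code):

```python
def format_as_markdown(results: list[dict]) -> str:
    """Render search results as compact markdown with stable IDs,
    so the model can chain calls like linkedin_post_comments(post_id="abc123")."""
    lines = []
    for r in results:
        lines.append(f"### {r['author']} ({r['reactions']} reactions)")
        lines.append(f"**id:** {r['id']}")
        lines.append(f"**url:** {r['url']}")
        lines.append(r["text"].strip())
        lines.append("")  # blank line between posts
    return "\n".join(lines)

demo = format_as_markdown([{
    "author": "Jane Doe",
    "reactions": 41,
    "id": "abc123",
    "url": "https://www.linkedin.com/posts/abc123",
    "text": "Stripe billing edge cases are eating my week.",
}])
```

The point is that `**id:** abc123` sits on its own line in plain text, where the model can't miss it.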
Be honest in tool descriptions. Claude routes tool calls based on the docstring you give the MCP server. If your description is vague, Claude will use it for the wrong intent. linkedin_search is post search by keyword, not people search, so the docstring says exactly that. People search needs a different tool, which I don't ship because LinkedIn flags it harder.
Cap the per-day request budget at the MCP layer. Not at the LinkedIn layer. The reason is that Claude will sometimes loop, burning through 40 search calls trying to refine a query that was wrong from the start. Catching that at the MCP server (one shared counter, not per-tool) is much cheaper than letting LinkedIn catch it.
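A minimal version of that shared counter (my sketch, not the repo's implementation): one budget for every tool on the server, reset daily, refusing the call before it ever reaches LinkedIn.

```python
from datetime import date

class DailyBudget:
    """One shared request budget for all tools on the MCP server.
    Raises instead of letting a looping agent hammer LinkedIn."""

    def __init__(self, limit: int = 100):
        self.limit = limit
        self.day = date.today()
        self.used = 0

    def spend(self) -> None:
        today = date.today()
        if today != self.day:  # new day: reset the counter
            self.day, self.used = today, 0
        if self.used >= self.limit:
            raise RuntimeError(f"daily budget of {self.limit} requests exhausted")
        self.used += 1

budget = DailyBudget(limit=3)
for _ in range(3):
    budget.spend()      # first three calls succeed
try:
    budget.spend()      # fourth call is refused at the MCP layer
    refused = False
except RuntimeError:
    refused = True
```

Every tool calls `budget.spend()` before touching the browser; a Claude loop hits the exception, not LinkedIn's abuse heuristics.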
The three tools, and the prompts that actually work
The whole LinkedIn surface is three tools.
linkedin_search (the workhorse)
This is what I use 90% of the time. Keyword search across all public LinkedIn posts. Pain-point mining looks like:
Search LinkedIn for posts where solo founders complain about Stripe billing edge cases. Pull the top 25, group by complaint pattern, surface the top three with example URLs.
Claude calls linkedin_search, gets back markdown, synthesizes. The synthesis step is where the value compounds. Claude is much better at clustering complaints than I am at reading 25 posts in a row.
Two prompt patterns that work consistently for me:
"Group by pattern, then surface 3." Claude is bad at "summarize this," good at "cluster then rank." Asking for clustering first gives you a much more useful synthesis.
"Quote the most concrete one." Adding "include one literal quote from the most concrete-sounding post" at the end of the prompt forces Claude to actually pick a post rather than confabulate a synthesis. Quotes are a forcing function for groundedness.
linkedin_feed (for taking the temperature)
Pulls your algorithmic home feed. Good for "what is my actual network talking about today?", useful when you want a less-curated read. I don't use it for research; I use it for context-setting before I post.
linkedin_post_comments (the underrated one)
Reads the comment thread on a specific post URL. People underrate this because they think the post is the signal. The post is usually marketing. The signal is in the comments, where the post's audience says what actually shipped vs. what was promised. If a founder posts "we hit $1M ARR with 3 people," the comments will sometimes have a former employee or a competitor adding important context. That's the data you want.
How SuperMCP keeps accounts under the radar
This is the part I cared most about figuring out before shipping. After the cookie-paste shortcut taught me how LinkedIn's anti-abuse stack works, I designed SuperMCP around six practices that have kept my main account clean over the last 6 months of daily use:
- **Real browser, not a headless flag.** Playwright with `headless=False` for the first session, then `headless=True` in production. Some bot detection looks specifically at the headless-Chrome boolean.
- **One request at a time.** No parallelism. If Claude wants 25 results, I serve them one fetch at a time, with 1.5–3s of jitter between actions. LinkedIn cares more about pace than volume.
- **Reuse the session.** Don't launch a new browser context per request. Reusing one warm session looks human; spawning fresh contexts looks scripted.
- **No people search.** Post search and feed reading are common user actions. People search at scale isn't, and LinkedIn flags it. I just don't ship the tool.
- **Daily budget cap, low default.** Free tier is 100 requests/day. That's about 3x normal human activity in a research session, well below anything LinkedIn flags as automation.
- **Bail fast on challenges.** If the page returns a captcha, a 2FA prompt, or even a soft "is this you?" check, SuperMCP stops the run, logs it, and backs off for 24 hours. Almost every flag escalation comes from tools that retry through challenges instead of standing down.
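The jitter and the bail-out can be sketched in a few lines. This is my illustration, not SuperMCP's code, and the challenge markers are guesses at what a check page might contain, not LinkedIn's actual DOM:

```python
import random

# Illustrative markers only; a real tool would maintain this list against
# whatever LinkedIn's current challenge pages look like.
CHALLENGE_MARKERS = (
    "checkpoint/challenge",
    "security verification",
    "let's do a quick security check",
)

def jitter_delay() -> float:
    """Human-ish pause between actions: 1.5-3 s, never a fixed interval."""
    return random.uniform(1.5, 3.0)

def should_bail(page_url: str, page_text: str) -> bool:
    """True if the page looks like a captcha / 2FA / 'is this you?' check.
    On True: stop the run, log it, back off for 24 hours."""
    haystack = (page_url + " " + page_text).lower()
    return any(marker in haystack for marker in CHALLENGE_MARKERS)
```

The important design choice is that `should_bail` returning True ends the run; there is no retry path past a challenge.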
For context on why this matters: LinkedIn migrated their post composer from ProseMirror to Quill earlier this year and broke practically every browser-automation tool that touched the editor. They're actively updating selectors and fingerprints. A maintained tool will weather these changes. A frozen one won't.
How this stacks up against the alternatives
| Approach | Cost | Setup | Where it runs | Weakness |
|---|---|---|---|---|
| SuperMCP (LinkedIn MCP) | Free / $9 one-time | `pip install` | Your laptop | DOM changes break things; needs maintenance |
| LinkedIn Marketing API | Free if approved | Months of partner approval | Your server | Indie founders don't get approved |
| Phantombuster | $69/mo | Cookie hand-off to their cloud | Their cloud | Third party operates on your account |
| Apify / Bright Data | Pay per result | Actor + budget | Their cloud | Costs unpredictable at scale |
| DIY Selenium + cookies | Free | Days of selector wrangling | Your laptop | The naïve cookie-paste shortcut; breaks fast |
The honest case for SuperMCP isn't that it's strictly better than all of these. The Marketing API is the right answer if you can get approved. Phantombuster is the right answer if you don't want to think about it. SuperMCP is the right answer if you (a) can't get the API, (b) don't want to send your cookies to a third-party cloud, and (c) want it talking directly to Claude or Cursor instead of a separate dashboard.
Installing it
```shell
pip install supermcp
supermcp setup   # gets your API key, auto-installs Chromium
claude mcp add supermcp -- supermcp
```
Cursor users, drop this in settings.json:
```json
{ "supermcp": { "command": "supermcp" } }
```
After that, ask Claude things like:
- Find LinkedIn posts where someone is complaining about Stripe Checkout. Group by complaint, surface 3 with quotes.
- Pull the top comments on this LinkedIn post: [URL]. Where do commenters disagree with the OP?
- Cross-reference my LinkedIn feed against Reddit's r/SaaS. What's in both?
The third one needs the Reddit MCP too, which is the same package.
Why I bundled this with Reddit, Twitter, and the rest
Halfway through the LinkedIn build, I realized the same Playwright-with-persistent-state trick worked for Reddit (where the API is now paid and rate-limited), Twitter/X (where the cheapest tier is $200/mo), Medium, Dev.to, BlackHatWorld, and Google Trends/News. By the end of the second weekend I had 26 tools across 7 sources, all using the same auth pattern. The bundle is called SuperMCP. One install, all sources. Repo: github.com/Bishwas-py/supermcp.
If you only need LinkedIn, you only call the LinkedIn tools. The other platforms don't activate unless you use them.
FAQ
Is this safe for my LinkedIn account?
Yes, with the defaults SuperMCP ships. The free-tier rate cap (100 requests/day) sits well under any threshold LinkedIn flags as automation, and the Playwright setup uses your real browser fingerprint instead of a stripped-down httpx request, so the traffic looks like a normal logged-in user. I've run this on my main account daily for 6 months without a flag. The one caveat: a brand-new LinkedIn account doing aggressive research will draw attention faster than an established one, so stick to the defaults until you have a feel for it.
Is this against LinkedIn's TOS?
LinkedIn's User Agreement prohibits "automated software" against the service. Same is true for X, Reddit, and Medium. The practical situation is that every browser-automation tool (Phantombuster, Apify, this) operates in that gray zone, as does every personal-productivity Chrome extension you've installed. SuperMCP runs locally at human-scale rates with your own session. I'm not your lawyer. The repo has a TOS notice, use accordingly.
Does the Reddit / Twitter / Medium MCP need an API key?
No. Same Chrome-session trick. Reddit's API is now paid; Twitter's cheapest tier is $200/mo; Medium has no read API. All three work via your existing logged-in session.
Which AI tools support this?
Any MCP-compatible client: Claude Desktop, Claude Code, Cursor, Windsurf, Cline, GitHub Copilot Agent, Continue. SuperMCP is a standard stdio MCP server.
Where do I get an API key?
supermcp setup after pip install supermcp. Free tier (100 requests/day) is automatic. Unlimited tier is $9 one-time at webmatrices.com/supermcp.
If you build something on top of this, I'd love to see it. Drop an issue or a PR at github.com/Bishwas-py/supermcp. The LinkedIn-specific docs are at webmatrices.com/supermcp/linkedin-mcp.