DEV Community

Tim Kulbaev

The AI that helped you build it can post about it — from the same session

Here's the thing about building in public: the best posts come from the moment something clicks. Not from a content calendar. Not from a "let me write about what I shipped last Tuesday" session two days later.

The insight is sharpest when it's fresh. But by the time you've switched apps, opened LinkedIn, and started typing, that sharpness is gone.

I built mcp-linkedin to fix this — not as a scheduling tool, but as a way to publish from the same session where something happened.


The actual unlock

When you've been working with Claude Code or Cursor for two hours, your AI assistant has complete context of everything that happened. It knows what you tried, what failed, what surprised you, and what the insight was.

You can just say:

"Write a LinkedIn post about what we just built. 
Lead with the surprising part."

And it produces something specific and sharp — because it was there. It doesn't need to be told what the project is, what the insight was, or what tone to use. It already knows.

That's not possible from a scheduled pipeline. A pipeline doesn't know you spent three hours debugging something and the lesson was unexpected. The same-session context is the value.


What it does

mcp-linkedin is an MCP server that gives your AI assistant four native tools: linkedin_publish, linkedin_comment, linkedin_react, and linkedin_delete_post. Once registered in your MCP config, your AI can publish without you opening a browser.
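To make the shape concrete, here is a minimal sketch of how those four tools might be dispatched inside such a server. The handler bodies are illustrative stubs of my own, not the project's actual implementation — the real server does the work through Unipile:

```javascript
// Sketch of a tool dispatch table for the four LinkedIn tools.
// Handler bodies are illustrative stubs, not mcp-linkedin's real code.
const handlers = {
  // dry_run defaults to true: a bare call only returns a preview
  linkedin_publish: ({ dry_run = true } = {}) =>
    ({ status: dry_run ? "preview" : "published" }),
  linkedin_comment: () => ({ status: "commented" }),
  linkedin_react: () => ({ status: "reacted" }),
  linkedin_delete_post: () => ({ status: "deleted" }),
};

// Route a tool call by name, rejecting anything unregistered.
function callTool(name, args = {}) {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```

The useful property is that the safe behavior (preview, not publish) is what you get when no one asks for anything.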

Me: "Turn what we just shipped into a LinkedIn post. Under 200 chars."

Claude: [writes post, previews it, waits for my confirmation]

Me: "Good, publish."

Claude: [publishes, auto-likes, returns the clickable URL]

Under 60 seconds. Zero context switch.


How it differs from every other MCP LinkedIn tool

There are about a dozen MCP LinkedIn servers on GitHub. They fall into two camps — and both have problems.

Browser automation tools (the most common approach) drive a headless Playwright instance to simulate clicking around LinkedIn. They break whenever LinkedIn changes its DOM. They require an active browser session. They're fragile by design.

LinkedIn API tools use the official API — which requires going through LinkedIn's partner approval process. Good luck with that.

mcp-linkedin uses Unipile as the API bridge. Unipile handles LinkedIn OAuth so you don't need API approval, and there's no browser to break. It's the only MCP LinkedIn server that takes this approach.
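Roughly, the server translates each tool call into a Unipile REST request. The sketch below builds such a request; the endpoint path, header name, and body fields are my assumptions from memory of Unipile's docs, not verified against their current API reference:

```javascript
// Sketch of the Unipile request behind a publish call.
// ASSUMPTIONS: endpoint path, X-API-KEY header, and body field names
// are illustrative and should be checked against Unipile's API docs.
function buildUnipileRequest({ dsn, apiKey, accountId, text }) {
  return {
    url: `https://${dsn}/api/v1/posts`,
    options: {
      method: "POST",
      headers: {
        "X-API-KEY": apiKey,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ account_id: accountId, text }),
    },
  };
}
```

Because Unipile owns the LinkedIn session, the server itself never touches a browser or LinkedIn's DOM.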

On top of that, two things none of the others do:

1. Dry run is on by default — publishing is a two-step confirmation

Every call to linkedin_publish defaults to dry_run: true. The AI produces a preview — final text, character count, media validation, resolved @mentions — and shows it to you before anything goes live.

{
  "status": "preview",
  "post_text": "Spent 3 hours on this config. The lesson was not what I expected.",
  "character_count": 64,
  "ready_to_publish": true
}

Only after you confirm does it publish. The AI cannot skip this step. That matters when your AI assistant is running autonomously — you don't want it deciding on its own that now is a good time to post.
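A minimal sketch of that preview step, assuming LinkedIn's 3,000-character post limit — the output shape mirrors the example above, but this is an illustration, not the server's actual code:

```javascript
// Sketch of building the dry-run preview payload.
// ASSUMPTION: 3,000 characters as LinkedIn's post limit.
function buildPreview(postText) {
  const characterCount = [...postText].length; // code points, not UTF-16 units
  return {
    status: "preview",
    post_text: postText,
    character_count: characterCount,
    ready_to_publish: characterCount > 0 && characterCount <= 3000,
  };
}
```

Nothing in this step has a side effect — the AI can iterate on the draft as many times as it likes without risk.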

2. Ships with a SKILL.md — the AI knows the correct workflow

I checked every MCP LinkedIn repo on GitHub. None of them ship a SKILL.md. They're just tool definitions — they tell the AI what the tools do, not how to use them safely.

mcp-linkedin ships with a SKILL.md that Claude Code loads automatically. It tells the AI:

  • Always preview before publishing
  • Never publish without explicit confirmation
  • Always return the clickable post URL after publishing
  • How to handle every error state

You don't have to remind it. The correct workflow is part of the tool.
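For illustration, a SKILL.md encoding those rules might look something like this — the headings and wording here are my own sketch, not the file shipped in the repo:

```markdown
# LinkedIn Publishing Skill

## Workflow
1. Draft the post, then call `linkedin_publish` with the default dry run.
2. Show the preview (text, character count, resolved mentions) to the user.
3. Publish only after the user explicitly confirms.
4. After publishing, always return the clickable post URL.

## Errors
- If the character count exceeds the limit, shorten and re-preview.
- If the LinkedIn account is disconnected, ask the user to reconnect;
  never retry silently.
```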


Setup

git clone https://github.com/timkulbaev/mcp-linkedin.git
cd mcp-linkedin
npm install

Sign up for Unipile, connect your LinkedIn account, get your API key and DSN. Free tier covers standard usage.

Add to ~/.claude/mcp.json:

{
  "mcpServers": {
    "linkedin": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-linkedin/index.js"],
      "env": {
        "UNIPILE_API_KEY": "your-api-key",
        "UNIPILE_DSN": "apiXX.unipile.com:XXXXX"
      }
    }
  }
}

Restart Claude Code. Done.


The result

I went from posting once a week to 4–5 times a week. Not because I changed my habits — because the post now happens in the same session where the insight happened.

The AI already has the context. The friction is gone. The moment doesn't pass.

MIT licensed, 28 unit tests, Claude Code + Cursor + Windsurf compatible: github.com/timkulbaev/mcp-linkedin


Tim Kulbaev is an AI automation consultant at TMC AI, helping companies build automated workflows with AI. timconsulting.co

Top comments (2)

Mihir kanzariya

The SKILL.md approach is really clever. Most MCP tools just expose raw API calls and leave it to the user to figure out safe usage patterns. Having the tool ship with its own instructions that the AI loads automatically is a much better design.

I've been thinking about this same friction with building in public. You finish a feature, the context is fresh, but by the time you switch to Twitter or wherever to write about it, half the details are gone. Anything that keeps the publishing step inside the dev flow is going to win imo.

Curious how you handle the preview step in practice. Do you always review before publishing or do you ever let it go fully autonomous?

Tim Kulbaev

Yeah, Twitter/X is actually on my roadmap and I'm planning to add it pretty soon!

And to your question — I always review before publishing. That's actually the whole point of the dry run being on by default. For now I just prefer to review and refine the text that goes out, even when the AI drafts it. The context awareness is great, but the final call should always be mine.

If you want to follow along as I add more platforms, feel free to subscribe to updates here or watch the repo on GitHub!