For 22 days, I've been running a system that autonomously manages a professional LinkedIn profile. Daily posting, strategic engagement, DM triage, weekly reporting — all orchestrated by Claude Cowork with zero manual intervention.
Today I'm open-sourcing the entire system as a Claude skill. Here's how it works, what went wrong, and what I learned.
## The Problem
Building a professional LinkedIn presence takes 3-4 hours daily: content creation, engagement, analytics, DM management. For a solo professional, that's unsustainable.
I needed a system that could do all of this autonomously while maintaining a recognizable, human voice — and without getting flagged as AI.
## Architecture: 5-Phase Wizard
The skill isn't a template. It's a guided wizard that builds a personalized automation system:
### Phase 1: Identity & Voice
15 questions extract your personal tone of voice: rhetorical patterns, vocabulary preferences, emotional registers, even a profile blacklist. The system doesn't start from "what to post" but from "who you are when you write."
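As a minimal sketch, the interview's output could be captured in a data structure like the one below — every field name here is an assumption for illustration, not the skill's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class VoiceProfile:
    """Hypothetical output of the Phase 1 interview: who you are when you write."""
    rhetorical_patterns: list[str] = field(default_factory=list)   # e.g. "question-first openers"
    preferred_vocabulary: list[str] = field(default_factory=list)  # words that sound like you
    banned_vocabulary: list[str] = field(default_factory=list)     # words that never sound like you
    emotional_registers: list[str] = field(default_factory=list)   # e.g. "curious", "pragmatic"
    profile_blacklist: list[str] = field(default_factory=list)     # accounts never to engage with

profile = VoiceProfile(
    rhetorical_patterns=["question-first openers"],
    banned_vocabulary=["game-changer", "leverage"],
    profile_blacklist=["competitor-account"],
)
assert "game-changer" in profile.banned_vocabulary
```

The point of a structure like this is that every downstream phase — post drafts, comments, DM replies — reads from the same profile instead of re-deciding tone each time.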
### Phase 2: Strategy & Content
A pillar calendar maps 7 days to 7 content types, each with a different emotional register:
- Monday: Behind the scenes (curious, vulnerable)
- Tuesday: Tool/Workflow (pragmatic, generous)
- Wednesday: Hot take (provocative, moral)
- Thursday: Case study (proud, specific)
- Friday: How-to (didactic, patient)
- Saturday: Storytelling (personal, reflective)
- Sunday: Soft CTA (direct, confident)
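The calendar above is a plain lookup. A sketch in Python (Monday = 0, following `datetime.weekday()` — the dict itself just mirrors the list above):

```python
import datetime

# Pillar calendar: weekday index (Mon=0) -> (content type, emotional register)
PILLAR_CALENDAR = {
    0: ("Behind the scenes", "curious, vulnerable"),
    1: ("Tool/Workflow", "pragmatic, generous"),
    2: ("Hot take", "provocative, moral"),
    3: ("Case study", "proud, specific"),
    4: ("How-to", "didactic, patient"),
    5: ("Storytelling", "personal, reflective"),
    6: ("Soft CTA", "direct, confident"),
}

def todays_pillar(day: datetime.date) -> tuple[str, str]:
    """Resolve a date to its content type and target register."""
    return PILLAR_CALENDAR[day.weekday()]

# A Wednesday resolves to the hot-take pillar
assert todays_pillar(datetime.date(2026, 2, 18)) == ("Hot take", "provocative, moral")
```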
### Phase 3: Engagement & Anti-Detection
This is where it gets interesting. See the next section.
### Phase 4: Task Plan — Review & Approve
The user sees all 10 scheduled tasks with times, frequencies, and dependencies. Nothing runs without explicit approval.
### Phase 5: Create Tasks & Iterate
Tasks are created as cron jobs. The first week is closely monitored. Data feeds the next iteration cycle.
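As a sketch of how Phases 4 and 5 hang together — the task names, times, and the `approved` helper below are illustrative, not the repo's actual schedule:

```python
# Hypothetical task plan: task name -> cron expression (minute hour dom month dow).
# Names and times are invented for illustration.
TASK_PLAN = {
    "daily-post":         "30 8 * * *",   # publish today's pillar post
    "engagement-session": "0 13 * * *",   # comments + likes, NDI-gated
    "dm-triage":          "0 18 * * *",   # sort and draft DM replies
    "weekly-report":      "0 9 * * 1",    # Monday analytics digest
}

def approved(plan: dict[str, str], user_ok: bool) -> dict[str, str]:
    """Phase 4 gate: nothing gets scheduled without explicit approval."""
    return dict(plan) if user_ok else {}

assert approved(TASK_PLAN, user_ok=False) == {}   # no approval, no cron jobs
```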
## Anti-Detection: The Hard Part
Generating LinkedIn content is relatively easy. Not getting caught is the real engineering challenge.
I built 3 layers:
### Layer 1: NDI (Natural Dialogue Index)
Every engagement session receives a naturalness score from 1-10 based on:
- Comment structure variation
- Topic diversity (AI vs. non-AI)
- Tool mention frequency
- Rhetorical pattern repetition
If the session scores below 5.0, the system stops and recalibrates before continuing.
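The post doesn't publish the exact scoring formula, so here is a rough sketch only — the signal weights, keyword list, and opener-based heuristics are my assumptions, not the skill's implementation:

```python
NDI_FLOOR = 5.0  # below this, the session halts and recalibrates

AI_KEYWORDS = ("claude", "anthropic", "llm", "gpt")  # assumed keyword list

def ndi_score(comments: list[str]) -> float:
    """Toy 1-10 naturalness score built from the four signals above."""
    if not comments:
        return 10.0  # nothing posted yet, nothing to flag
    openers = [c.split()[0].lower() for c in comments if c.split()]
    # Signal 1: structure variation — share of distinct opening words
    variation = len(set(openers)) / len(openers)
    # Signal 2: topic diversity — share of comments on non-AI topics
    ai_hits = sum(any(k in c.lower() for k in AI_KEYWORDS) for c in comments)
    diversity = 1.0 - ai_hits / len(comments)
    # Signal 3: tool-mention frequency — penalize dense Claude/Anthropic mentions
    tool_hits = sum("claude" in c.lower() or "anthropic" in c.lower() for c in comments)
    tool_ok = 1.0 - tool_hits / len(comments)
    # Signal 4: rhetorical repetition — consecutive comments opening the same way
    repeats = sum(a == b for a, b in zip(openers, openers[1:]))
    no_repeat = 1.0 - repeats / max(len(openers) - 1, 1)
    blended = 0.3 * variation + 0.25 * diversity + 0.25 * tool_ok + 0.2 * no_repeat
    return round(1 + 9 * blended, 1)

session = ["Claude handles this well for me.",
           "Claude is great here too.",
           "Claude again, honestly."]
assert ndi_score(session) < NDI_FLOOR  # repetitive, tool-heavy session fails the gate
```

Even a crude blend like this catches the failure mode that matters: a session where every comment opens the same way and mentions the same tool.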
### Layer 2: 7 Anti-Pattern Rules
Born from actual Day 1 mistakes:
- Max 2 Claude/Anthropic mentions per 5 comments
- Varied comment structures (never the same pattern twice consecutively)
- At least 1 non-AI topic per session
- Zero "evangelization" expressions
- When someone agrees with us, just like the comment — don't reply fishing for another angle
- Mandatory context research before commenting on posts with specific facts
- No more than 15 comments or 30 likes per session
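The hard caps among these rules are mechanical enough to sketch as a pre-flight check. The thresholds come from the rules above; the helper name and structure-detection heuristic (comparing opening words) are mine:

```python
def session_guard(comments: list[str], likes: int) -> list[str]:
    """Return the list of anti-pattern rules a planned session would break."""
    violations = []
    # Rule: max 2 Claude/Anthropic mentions per 5 comments
    mentions = sum("claude" in c.lower() or "anthropic" in c.lower() for c in comments)
    if comments and mentions / len(comments) > 2 / 5:
        violations.append("too many Claude/Anthropic mentions")
    # Rule: never the same opening pattern twice consecutively
    openers = [c.split()[0].lower() for c in comments if c.split()]
    if any(a == b for a, b in zip(openers, openers[1:])):
        violations.append("repeated comment structure")
    # Rule: volume caps — max 15 comments, 30 likes per session
    if len(comments) > 15:
        violations.append("comment cap exceeded")
    if likes > 30:
        violations.append("like cap exceeded")
    return violations

# Two tool mentions in two comments blows past the 2-per-5 ratio
assert session_guard(["Claude is neat.", "Anthropic too."], likes=10) == [
    "too many Claude/Anthropic mentions"]
```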
### Layer 3: Epistemic Verification Gate
This was born from a real incident on Day 7. The system commented on a post citing a specific company case, and the AI-generated comment contained an inference that sounded factual but was wrong. The post author asked "what do you mean?" — exposing the gap.
Now, if a post cites a specific case, company, person, or event, the system MUST:
- Read the full post (not just the preview)
- Web search the case if it references specific facts
- Never invent factual claims
- Use opinion framing ("I wonder if...", "the pattern I see is...") when context is unclear
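A sketch of the gate's decision logic. The regex for "cites specific facts" and the function names are illustrative assumptions — in practice this classification is the hard part, not the branching:

```python
import re

OPINION_FRAMES = ("I wonder if", "the pattern I see is")

def verification_gate(post_text: str, researched: bool, draft: str) -> bool:
    """Return True if the draft comment may be published.

    Posts citing specifics (years, figures, company suffixes — a crude
    stand-in heuristic) require prior research, or else opinion framing.
    """
    cites_specifics = bool(re.search(r"\b(19|20)\d{2}\b|\$\d|%|\bInc\.|\bLtd\.", post_text))
    if not cites_specifics or researched:
        return True
    # Unverified factual context: allow only opinion-framed drafts
    return draft.startswith(OPINION_FRAMES)

# A post citing hard numbers blocks an assertive, unresearched draft...
assert not verification_gate("They cut churn 40% in 2023.", researched=False,
                             draft="That proves onboarding was the issue.")
# ...but lets an opinion-framed one through
assert verification_gate("They cut churn 40% in 2023.", researched=False,
                         draft="I wonder if onboarding played a role here.")
```

The Day 7 incident maps directly onto the first branch: the draft asserted an inference as fact against a post full of specifics, with no research step in between.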
## The Stack
| Orchestration | Execution | Infrastructure |
|---|---|---|
| Claude Cowork | Chrome MCP | Google Cloud |
| 10 cron tasks | LinkedIn DOM | Cloud Run |
| Skills (.md) | post publishing | Vercel (site) |
| Sub-agents | engagement sessions | Sanity CMS |
| | screenshot capture | |
No intermediate tools. No Zapier, n8n, or Make. Claude IS the workflow — not a component in someone else's workflow.
## Results (22 Days, Unfiltered)
| Metric | Value |
|---|---|
| Followers | 45 → 55 (+22%) |
| Engagement rate | 3.0% (vs 2.21% baseline) |
| AI-written comments | 75+ |
| L1 proof events | 13 |
| Detection incidents | 0 |
| Engagement score | 8.0/10 avg |
| Posts published | 20/21 (95%) |
The numbers aren't spectacular. They're real. And that's the point.
## What Went Wrong
- Chrome MCP instability: ~57% success rate in Week 1. The browser extension disconnected frequently during engagement sessions.
- Day 1 pattern repetition: 3/5 comments had identical structure. Led to the 7 anti-pattern rules.
- Day 7 epistemic failure: Wrong inference about a case cited by another professional. Led to the Verification Gate.
- Engagement "evangelization": Comments like "I use Claude daily for this" sounded like brand promotion. Now capped at 1 per session max.
## The Repo
GitHub: github.com/videomakingio-gif/claude-linkedin-automation
Install:
`npx skills add videomakingio-gif/claude-linkedin-automation`
Full writeup with all data: giovanniliguori.it/blog/claude-linkedin-automation-skill-open-source
The skill covers LinkedIn only. I have 21 total tasks covering blog, SEO, and operations, but only the 10 LinkedIn tasks are published — because publishing only what you can prove is more credible than promising everything.
Feedback and PRs welcome. Especially the uncomfortable kind.