How I automated my own LinkedIn + Dev.to publishing in one afternoon (and what broke along the way)
I'm a backend engineer transitioning to full-stack and trying to build more public presence. The plan: every time I ship something, a single command should write a proper article and post it to LinkedIn and a dev blog — so my GitHub activity actually becomes visible to recruiters without me having to remember.
What I ended up building is a skill called publish-project that sits inside my LOS memory system. You run one command, it reads a GitHub repo plus local project notes, writes a real article, and publishes it to Dev.to and LinkedIn in the same run.
Here's the honest log of how it came together, including the three dead ends.
Why not just "write a script"?
I'm tired of writing scripts that generate LinkedIn posts with [Add your motivation here...] placeholders I then fill in manually. The whole point was to eliminate that step. If I'm still hand-editing the output, automation bought me nothing.
So the bar was: the article content has to come from real files on disk, not from a template the LLM fills in later.
Dead end 1: Medium
I started with Medium because the original Medium API docs describe a clean self-issued integration token flow. Looked straightforward.
It isn't. Medium's docs now say, verbatim:
IMPORTANT: We don't allow any new integrations with our API.
If your account pre-dates the cutoff, your existing token still works. Mine didn't. There is no way to generate a new integration token for a new Medium account as of 2026. I checked the settings page — the Integration Tokens section literally isn't rendered for new accounts.
Pivot.
Dev.to: the easy path
Dev.to's API is the opposite experience.
- Go to https://dev.to/settings/extensions
- Generate an API key (one click)
- `POST /api/articles` with your markdown and tags
- Done
```python
import requests

headers = {
    "api-key": api_key,
    "Content-Type": "application/json",
}
data = {
    "article": {
        "title": title,
        "body_markdown": content,
        "tags": tags,  # max 4, lowercase, alphanumeric
        "published": True,
    }
}
requests.post("https://dev.to/api/articles", headers=headers, json=data)
```
That's the whole integration. Worked first try. I wrapped it in a DevtoClient class with two methods (get_user, publish_article) and moved on.
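That wrapper is small enough to sketch in full. Treat this as illustrative rather than the exact class in the repo: the method names match what I described, but the details (like returning the article URL) are filled in by me.

```python
import requests

DEVTO_API = "https://dev.to/api"

class DevtoClient:
    """Thin wrapper around the two Dev.to endpoints this skill needs."""

    def __init__(self, api_key: str):
        self.headers = {"api-key": api_key, "Content-Type": "application/json"}

    def get_user(self) -> dict:
        # Sanity check: confirms the key is valid and tells you whose account it is
        resp = requests.get(f"{DEVTO_API}/users/me", headers=self.headers)
        resp.raise_for_status()
        return resp.json()

    def publish_article(self, title: str, content: str, tags: list) -> str:
        data = {
            "article": {
                "title": title,
                "body_markdown": content,
                "tags": tags[:4],  # Dev.to allows at most 4 tags
                "published": True,
            }
        }
        resp = requests.post(f"{DEVTO_API}/articles", headers=self.headers, json=data)
        resp.raise_for_status()
        return resp.json()["url"]  # needed later for the LinkedIn post
```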
Dead end 2: LinkedIn's UGC Posts API
LinkedIn was supposed to be the easy one. I followed an old tutorial, built the request, and got this back:
```
Error 422: com.linkedin.common.error.BadRequest
"com.linkedin.ugc.UGCContent" is not a member type of union [...]
```
After some digging I found the cause: LinkedIn deprecated the /v2/ugcPosts endpoint in favour of a new /rest/posts endpoint. The old payload shape (specificContent.com.linkedin.ugc.UGCContent.shareCommentary.text) is gone. The new one is dramatically simpler:
```
POST https://api.linkedin.com/rest/posts

Headers:
  Authorization: Bearer <token>
  X-Restli-Protocol-Version: 2.0.0
  Linkedin-Version: 202604
  Content-Type: application/json

Body:
{
  "author": "urn:li:person:<id>",
  "commentary": "Your post text",
  "visibility": "PUBLIC",
  "distribution": {
    "feedDistribution": "MAIN_FEED",
    "targetEntities": [],
    "thirdPartyDistributionChannels": []
  },
  "lifecycleState": "PUBLISHED",
  "isReshareDisabledByAuthor": false
}
```
Two things that tripped me up:
1. The endpoint is `https://api.linkedin.com/rest/posts`, not `https://api.linkedin.com/v2/rest/posts`. I had `LINKEDIN_API = "https://api.linkedin.com/v2"` as a constant and was string-concatenating `/rest/posts` onto it, which produced a 404 `RESOURCE_NOT_FOUND`. I split the constant into `LINKEDIN_API_V2` (for `/v2/userinfo`) and `LINKEDIN_API` (the bare base URL) to fix this.
2. A successful `POST /rest/posts` returns 201 with an empty body. The post ID comes back in the `x-restli-id` response header. My client initially tried to `.json()` the empty body and crashed with "Expecting value: line 1 column 1". Fixed with a `try/except` that falls back to reading the header.
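Both fixes fit in a few lines. This is a sketch, not the exact code in my client: `extract_post_id` is my name for the helper, but the constants and the fallback behaviour match what I described above.

```python
import requests

LINKEDIN_API = "https://api.linkedin.com"        # bare base URL, for /rest/posts
LINKEDIN_API_V2 = "https://api.linkedin.com/v2"  # only for /v2/userinfo

def extract_post_id(resp: requests.Response) -> str:
    """A successful POST /rest/posts returns 201 with an empty body;
    the new post's ID arrives in the x-restli-id response header."""
    try:
        return resp.json().get("id", "")
    except ValueError:  # "Expecting value: line 1 column 1" on an empty body
        return resp.headers.get("x-restli-id", "")
```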
LinkedIn also has content-hash deduplication. If you POST the exact same commentary twice within a short window, the second call fails with DUPLICATE_POST. This is actually a gift — it saved me from spamming my own feed while testing.
Dead end 3: generating text that looked real but wasn't
This is where I spent the most time and made the most mistakes.
Attempt 1: I wrote a template function that produced markdown with placeholders: "Here's what I learned: [Add 3-5 takeaways]". It "worked" in that the script ran. But the published article was garbage because I forgot that nobody was going to fill those placeholders in. I published it to Dev.to before I read what it looked like. I had to delete it.
Attempt 2: I hardcoded all the text directly into the Python script — a long f-string with generic "the task system was the whole point" narration. Better than placeholders, but it was the same article regardless of which repo I pointed it at. Not reusable. Still technically wrong in places.
Attempt 3: I rewrote the whole generator to read real files:

- `GET https://api.github.com/repos/{owner}/{repo}` for metadata
- `GET /repos/{owner}/{repo}/readme` (with `Accept: application/vnd.github.v3.raw`) for full README content
- Local filesystem reads from `memory/projects/<project>/tasks.md` and `decisions.md`
- A small markdown table parser that extracts rows into dicts, keyed by column header
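The two GitHub fetches are a few lines each. A sketch, assuming the `requests` library; the function names here are mine, not necessarily the skill's:

```python
import requests

GITHUB_API = "https://api.github.com"

def github_headers(token: str = "", raw: bool = False) -> dict:
    """The raw media type returns the README as plain markdown, not JSON."""
    headers = {"Accept": "application/vnd.github.v3.raw" if raw
               else "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"  # optional: raises the rate limit
    return headers

def fetch_repo(owner: str, repo: str, token: str = "") -> dict:
    resp = requests.get(f"{GITHUB_API}/repos/{owner}/{repo}",
                        headers=github_headers(token))
    resp.raise_for_status()
    return resp.json()

def fetch_readme(owner: str, repo: str, token: str = "") -> str:
    resp = requests.get(f"{GITHUB_API}/repos/{owner}/{repo}/readme",
                        headers=github_headers(token, raw=True))
    resp.raise_for_status()
    return resp.text
```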
The table parser turned out to be the crucial piece. My tasks.md and decisions.md follow a consistent shape:
| Task | Why | Priority | Status |
|------|-----|----------|--------|
| Adopt BMAD patterns (task-001) | Onboarding gate, state tracking | 1 | Done |
| Cross-repo sync (task-002) | Both repos need identical task-system | 1 | Done |
The parser walks the lines, finds a header row matching a required set of columns, and yields each data row as {"task": ..., "why": ..., "status": ...}. The generator then interpolates real task names and real decisions into the article body — no placeholders, no hardcoded claims.
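The walk-and-yield logic above can be sketched like this (illustrative; the real parser lives inside the skill and may differ in details):

```python
def parse_markdown_table(text: str, required: set) -> list:
    """Return each data row of the first table whose header contains
    every column in `required`, as a dict keyed by lowercased header."""
    rows = []
    headers = None
    for raw in text.splitlines():
        line = raw.strip()
        if not (line.startswith("|") and line.endswith("|")):
            headers = None  # walked off the end of a table
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if headers is None:
            lowered = [c.lower() for c in cells]
            if required <= set(lowered):
                headers = lowered  # found a matching header row
            continue
        if set(line) <= {"|", "-", ":", " "}:
            continue  # the |---|---| separator row
        rows.append(dict(zip(headers, cells)))
    return rows
```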
The three mistakes I made after that
Even with real data, I still shipped bad articles:
1. I claimed "5 tasks built it" based on the fact that `los-starter/tasks.md` has 5 rows. But those 5 are just the infrastructure-bootstrap tasks for one subfolder — the actual LOS project is many skills, a landing page, a memory system, an update mechanism. The number was technically correct but the framing was dishonest. Cut it.
2. I called LOS a "file-based OS for my dev work". It isn't. LOS is a memory system for Claude Code — markdown files that Claude reads at session start so every conversation picks up where the last one left off. Calling it an "OS" flattened the core idea.
3. I included 8 skills in the skills list. The LOS-starter README says 6. I had added `/update-los` and `/publish-project` to my "core skills" set because they exist on disk, but they aren't in the public skill lineup. Trimmed to 6.
Each of these was a two-line fix. Each one was also the kind of subtle wrong you can only spot by reading the output carefully, not by running tests. If you're automating publishing, read every post before it goes live, even in --confirm mode.
What the skill does now
```shell
# Preview everything without posting
python publish_project.py --repo LOS-starter

# Publish article to Dev.to + post to LinkedIn
python publish_project.py --repo LOS-starter --confirm

# Post a fresh LinkedIn post pointing at an existing Dev.to article
python publish_project.py --repo LOS-starter --linkedin-only \
  --article-url https://dev.to/you/your-article --confirm

# Just write the article to a local markdown file for review
python publish_project.py --repo LOS-starter --save out.md
```
The pipeline:
- Fetch GitHub — repo metadata and full README from the GitHub API
- Read local project — look for `memory/projects/<name>/`, parse `tasks.md` and `decisions.md` as markdown tables
- Read core skills — scan `.claude/skills/*/SKILL.md`, extract the description from frontmatter, filter to the skills actually listed in my public README
- Pick the richer README — compare section counts between the GitHub README and the local README, use whichever has more structure
- Generate article — interpolate all of the above into an article body with real sections: idea, stack, features, skills, one skill deep-dive, real tasks table, real decisions list, architecture notes, try-it block
- Build LinkedIn post — short version with both the Dev.to URL and the GitHub URL on their own lines (LinkedIn renders link previews only when a URL is on its own line)
- Publish — Dev.to first (because we need the URL for the LinkedIn post), then LinkedIn with the fresh URL included
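Two of those steps are small enough to sketch directly. Helper names here are mine, not necessarily the skill's; the behaviour matches the pipeline description above.

```python
def section_count(markdown: str) -> int:
    """Rough structure metric: number of markdown headings."""
    return sum(1 for line in markdown.splitlines() if line.lstrip().startswith("#"))

def pick_readme(github_readme: str, local_readme: str) -> str:
    """'Pick the richer README': use whichever has more sections."""
    return max(github_readme, local_readme, key=section_count)

def build_linkedin_post(summary: str, article_url: str, repo_url: str) -> str:
    """'Build LinkedIn post': each URL goes on its own line,
    because LinkedIn only renders a link preview for a URL on its own line."""
    return (
        f"{summary}\n\n"
        f"Full write-up on Dev.to:\n{article_url}\n\n"
        f"Code on GitHub:\n{repo_url}"
    )
```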
How to feed it
Three inputs:
- A GitHub repo name via `--repo <name>`. Bare repo names are resolved against my GitHub username; full `owner/repo` works too.
- A local project folder at `memory/projects/<slug>/` (optional but strongly recommended). This is where the interesting data lives: `README.md`, `tasks.md`, `decisions.md`. Without this, the article is just a README paraphrase. With it, the article has specific decisions with dates and real task names.
- A `.env` with three tokens:

```
GITHUB_TOKEN=ghp_...
DEVTO_API_KEY=...
LINKEDIN_ACCESS_TOKEN=...
```
GitHub token is optional (raises the rate limit from 60/hr unauthenticated to 5000/hr). The other two are required if you want to actually publish.
That's the whole contract. If you keep your project notes in the LOS format — one folder per project with a README, a tasks table, and a decisions table — the skill has everything it needs.
What I would do differently
Two things I'd fix if I were building this again:
Idempotent publishing. The current script calls POST /api/articles on every --confirm run, which means every iteration creates a new Dev.to article. I generated four stale drafts before I realised this. The right design: maintain a local .published.json state file mapping repo_name → article_id, and on subsequent runs hit PUT /api/articles/{id} to update the existing article in place. Dev.to supports it; I just didn't wire it up. Next iteration.
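The design I describe could look something like this. A sketch under my own assumptions: the `.published.json` location and the helper names are mine, and Dev.to's `PUT /api/articles/{id}` is the update endpoint I'd wire up.

```python
import json
from pathlib import Path

import requests

STATE_FILE = Path(".published.json")  # assumed location; maps repo name -> article id
DEVTO_API = "https://dev.to/api"

def load_state() -> dict:
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

def publish_idempotent(repo_name: str, article: dict, headers: dict) -> str:
    """Create the Dev.to article on the first run, update it in place afterwards."""
    state = load_state()
    article_id = state.get(repo_name)
    if article_id:
        # Second and later runs: PUT updates the existing article
        resp = requests.put(f"{DEVTO_API}/articles/{article_id}",
                            headers=headers, json={"article": article})
    else:
        # First run: POST creates it
        resp = requests.post(f"{DEVTO_API}/articles",
                             headers=headers, json={"article": article})
    resp.raise_for_status()
    state[repo_name] = resp.json()["id"]
    save_state(state)
    return resp.json()["url"]
```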
Preview the final LinkedIn text. My preview mode showed the LinkedIn post before the Dev.to URL was known, so the preview was missing the "Full write-up on Dev.to:" line even though the published version had it. That's a confusing UX — the preview didn't match the output. I patched it to use a placeholder URL in the preview and rebuild the post with the real URL at publish time, but the cleaner fix is to publish to Dev.to first (always), then show the final LinkedIn text, then prompt for confirmation.
Why this matters
I am trying to go from "person with a decent GitHub" to "person recruiters find". The difference isn't the code — it's whether anyone sees it. A project that ships with an accompanying write-up every time will, over a year, build far more public signal than a project that ships in silence.
This skill is not glamorous. It's a markdown table parser, two HTTP clients, and one generator function. It cost me an afternoon plus three hours of debugging the old LinkedIn API. But it means my next project ships with a real article and a real LinkedIn post, and the one after that, and the one after that — without me having to remember to write them.
Full LOS repo: https://github.com/GadOfir/LOS-starter
More on YouTube: https://www.youtube.com/@GadOfir