DEV Community


Turning Weekly GitHub Activity Into Blog Posts on Notion + DEV.to

Yash Kumar Saini on March 27, 2026

This is a submission for the Notion MCP Challenge. What I Built: Every Monday standup, someone asks: "What did you work on last week?" An...
Anmol Baranwal

nice! you added the blog tone option.. did you like the output of those blogs? Or did you test with different models? I feel like Claude will give you the best results.

Yash Kumar Saini

Yes bro, I tried some Ollama models, Groq-supported models, and also Gemini. The best results came from Gemini. And yes, all the tones were good, especially the professional one, but I use casual mostly.

TAMSIV

This resonates so much. I've been doing something similar for my solo project — using git log as the single source of truth for marketing content. Every feature, every bug fix, every refactor is a potential story.

My workflow: I maintain a posts-log.md file that tracks what's been published. The delta between that file and git log is literally my content backlog. No brainstorming sessions needed — the code writes the narrative.
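
A minimal sketch of that delta idea in TypeScript, assuming the published titles and commit subjects have already been read out of posts-log.md and `git log --pretty=%s` (the inputs here are illustrative):

```typescript
// Sketch of the "delta as content backlog" idea: commits whose subjects
// don't yet appear in the published-posts log are candidate stories.
// In practice the two arrays would come from parsing posts-log.md and
// from `git log --pretty=%s`; here they are hard-coded for illustration.
function contentBacklog(commitSubjects: string[], published: string[]): string[] {
  const done = new Set(published.map((s) => s.toLowerCase()));
  return commitSubjects.filter((s) => !done.has(s.toLowerCase()));
}

// Example: two commits not yet covered by a post become the backlog.
const backlog = contentBacklog(
  ["feat: add tone presets", "fix: zod dedup", "docs: update readme"],
  ["docs: update readme"],
);
console.log(backlog); // → ["feat: add tone presets", "fix: zod dedup"]
```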

The 3-agent pipeline approach is clever. I've been doing it more manually (Claude Code session → draft → publish), but automating the GitHub-to-Notion step would save real time. Especially when you're shipping fast and forget what you built last Tuesday.

One thing I'd add: commit messages matter more than you'd think. Good commit hygiene = better auto-generated content.

Have you measured how much engagement the auto-generated posts get vs. manually written ones?

Yash Kumar Saini

Well, that can be checked once I publish enough auto-generated posts versus ones written by me. However, since I have the auto-generated posts ready as drafts, I can edit them as much as I want, adding and removing sections, lines, images, and links. So the final draft that gets published can't be considered entirely auto-generated.

TAMSIV

That's a smart approach. Having AI generate drafts that you then edit manually is the best of both worlds. You get the speed boost without losing your voice. I do something similar for marketing content: the AI gives me a structured first draft from git commits, then I rewrite the parts that need a human touch. The key is keeping the final edit pass in your hands.

Yash Kumar Saini

That's great. Well, you can always set multiple tone styles and the blog draft length; I usually base the length on an n-minute read time.

sharon oliva

That is actually an interesting workflow. I have been trying to stay more consistent with writing, and using weekly GitHub activity as a base sounds practical. Do you usually just summarize commits, or do you expand them into more detailed write-ups?

Also curious if you are doing this manually or using some kind of automation.

Yash Kumar Saini

Thanks a lot Sharon. I'm using Mastra and the Notion MCP to fully automate this: weekly drafts get prepared automatically, and I just edit them, add images or links, and click publish. The only blog platform supported right now is DEV.to. I use Notion as my free database to store all my blog drafts.

I'm collecting everything: issues, pull requests, discussions, PRs I reviewed, discussion participation, forked repo content, new repo content, and more. Then a narrator agent powered by a Gemini model writes a medium-to-large blog of around a 7-10 minute read time.

If you are an open source developer who contributes across many different projects and areas, you are sitting on a ton of content to create and publish.

It also helps you answer questions like "What did you work on last week?" or last month, or in any particular week.

Nova Elvaris

The decision to keep harvest and publish as pure function calls instead of agent reasoning is the kind of restraint that separates production-grade pipelines from demos. I've seen too many projects route everything through an LLM just because they can, then wonder why their pipeline is slow and nondeterministic.

Your Zod deduplication issue is one of those bugs that eats hours and teaches you something you'll never forget. The pnpm.overrides trick is underrated — worth calling out for anyone building with Mastra or similar frameworks that have their own Zod dependency tree.
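
For anyone who hits the same duplicate-Zod issue, the fix is roughly a pnpm override in package.json that pins every copy of zod in the dependency tree to one version (the version shown is just an example, not the one the post used):

```json
{
  "pnpm": {
    "overrides": {
      "zod": "3.23.8"
    }
  }
}
```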

Curious about one thing: have you considered adding a "diff from last week" view in the Notion planner? Something like highlighting repos that are new this week vs. recurring ones. That kind of week-over-week context would make the Monday standup recap even more useful — you'd see not just what you did, but how your focus shifted.

Yash Kumar Saini

Alright, got a new feature idea: "adding a 'diff from last week' view in the Notion planner, something like highlighting repos that are new this week vs. recurring ones." Really loved this, and I appreciate it.

TAMSIV

Genius approach! I do something similar for marketing: git log -> delta with published posts -> stories to tell. Building in public fueled by real commits is the best storytelling. The 3-agent pipeline idea is smart. How do you handle commit messages that are too cryptic for blog content?

Enrique Uribe

Fun tool! I love it!

Yash Kumar Saini

Thanks Enrique, glad you love it

James Sargent

I love the architectural decisions and discipline.

What you're describing is constraint-based decomposition: each agent has one task, clear inputs and outputs, and no side effects from neighboring contexts. The LLM only appears when reasoning is genuinely needed. Everything else is handled through a function call. This greatly reduces the risk of drift and hallucinations.

I've been working on the content distribution layer for Trail (trail.venturanomadica.com). It uses Notion as the single source of truth. Claude fetches my long-form posts, adapts them for each platform, and schedules them via Buffer for multi-platform sharing. When one stage overlaps with another's responsibility, it becomes harder to spot failures and determine where decisions are made. The three-step process stays clear because the contracts between steps are explicit.

The hidden insight from your lessons learned: structured JSON output is 3-4 times slower than plain text with frontmatter parsing. That’s the kind of thing people only discover after building it the hard way. Worth emphasizing in the main discussion.
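
As a rough illustration of that trade-off: the frontmatter route asks the model for plain text with a small `---` header, which a few lines of deterministic code can then parse (field names here are illustrative, not the post's actual schema):

```typescript
// Minimal sketch of frontmatter parsing: split a `---`-delimited header from
// the body, then read `key: value` lines from the header. This avoids asking
// the model for strict JSON while keeping the parse step deterministic.
function parseFrontmatter(raw: string): { meta: Record<string, string>; body: string } {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { meta: {}, body: raw }; // no header → treat everything as body
  const meta: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { meta, body: match[2] };
}

const draft = parseFrontmatter("---\ntitle: Weekly Recap\ntags: github, notion\n---\n# What I shipped\n...");
console.log(draft.meta.title); // → "Weekly Recap"
```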

Solid work.

Yash Kumar Saini

Thanks a lot James, I really appreciate your words. Your idea of scheduling via Buffer sounds good; I might try to turn it into a proper feature in my workflow.

Apex Stack

The 3-agent split is really well thought out. I run a similar multi-step pipeline for a large programmatic SEO site — harvest data, generate content, deploy — and the biggest win was exactly what you described: keeping deterministic steps out of the LLM path. It cuts token costs dramatically and makes debugging so much easier when you know which stage failed.

The fallback chain for narration is smart too. We had a similar issue where our content generation model would occasionally produce malformed output, and having a deterministic fallback meant the pipeline never fully stalled.

Curious about the Notion Markdown Content API — that sounds like a huge improvement over constructing block arrays manually. Does it handle tables and code blocks well, or did you hit any edge cases there?

Yash Kumar Saini

Well, yes, I did hit some edge cases. Using a Gemini model limits the blog page content: if I add code blocks, tables, and other elements, the blog content gets shortened due to a limit on how much content can be generated.

Apex Stack

Really clean architecture decision to keep LLM usage isolated to just the narration step. I've built a similar multi-agent content pipeline for cross-posting articles across Dev.to, Medium, and Hashnode, and I learned the same lesson the hard way — early versions routed everything through the LLM, including the actual API calls and formatting. Switching deterministic work to direct function calls cut execution time in half and eliminated an entire class of hallucination bugs.

The fallback chain for narration is smart too. Having a deterministic template-based fallback means you never end up with zero output, which matters a lot when these pipelines run on a cron with nobody watching.
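
A hedged sketch of that fallback shape: try the model narration first, and if it throws or returns empty output, fall back to a plain template so a cron run never ends with zero output. `narrateWithModel` is a hypothetical stand-in for the real (likely async) model call:

```typescript
// Narration with a deterministic fallback. The model call is injected so the
// failure path is easy to exercise; a real pipeline would make it async.
type WeekData = { repos: string[]; prCount: number };

function narrate(data: WeekData, narrateWithModel: (d: WeekData) => string): string {
  try {
    const out = narrateWithModel(data);
    if (out.trim().length > 0) return out; // basic sanity check on model output
  } catch {
    // model failed (rate limit, malformed response, etc.) — use the template
  }
  // Deterministic template: always succeeds, keeps the pipeline moving.
  return `This week: ${data.prCount} PRs across ${data.repos.join(", ")}.`;
}

// A model that always fails still yields a usable draft:
const draftText = narrate({ repos: ["blog-agent"], prCount: 3 }, () => {
  throw new Error("rate limit");
});
console.log(draftText); // → "This week: 3 PRs across blog-agent."
```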

Question: have you considered adding a "highlights" filter before narration? When I have a busy week, the raw data dump can be overwhelming for the LLM. Pre-filtering to the top 5-10 most significant changes (by lines changed, PR impact, etc.) tends to produce more focused and readable output.
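
A rough sketch of such a pre-filter, with an illustrative scoring rule (lines changed plus a flat bonus for merged PRs; the weights are made up for the example):

```typescript
// Rank raw weekly activity by a simple significance score and keep only the
// top N items before handing the week to the LLM for narration.
type Activity = { title: string; linesChanged: number; isMergedPR: boolean };

function topHighlights(items: Activity[], n = 5): Activity[] {
  const score = (a: Activity) => a.linesChanged + (a.isMergedPR ? 500 : 0);
  return [...items].sort((a, b) => score(b) - score(a)).slice(0, n);
}

const week: Activity[] = [
  { title: "fix typo", linesChanged: 2, isMergedPR: false },
  { title: "add Notion publisher", linesChanged: 340, isMergedPR: true },
  { title: "refactor harvester", linesChanged: 120, isMergedPR: true },
];
console.log(topHighlights(week, 2).map((a) => a.title));
// → ["add Notion publisher", "refactor harvester"]
```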

Yash Kumar Saini

Oh, that is a nice feature addition. I will try to add that as well.

Axion

Using Mastra for the agentic workflow and distributing tasks across 3 agents was a good implementation. The workflow architecture is a simple horizontal one, with each agent waiting for input from the previous one.

Yash Kumar Saini

Yes, I also thought of adding a separate agent that would take the content and run some image-generation prompts as well, to create the main blog cover image and also diagrams for the article content.

Convertix.io

Thank you

Botánica Andina

Cool project! The 'scratch your own itch' approach to building tools is the best kind of open source. What's been the most unexpected use case so far?

Yash Kumar Saini

I once automated Discord messages for a bot game; it would type and send messages to collect and farm points and XP.

klement Gunndu

LLM only where it adds value is the right call — keeps the pipeline predictable and debuggable. Curious if the Gemini fallback produces a noticeably different tone from the primary path.