DEV Community


I vibe-coded an internal tool that slashed my content workflow by 4 hours

Dumebi Okolo on February 27, 2026

One of the biggest challenges I face as a content expert is repurposing my written blogs for social media. Before now, I had to ask AI for summarie...
Rohan Sharma

The right link led me to the right blog. LOL

btw, this looks good. I will try posting on X.

Dumebi Okolo

You tried it! 😄
Glad to see it worked.
The Discord function won't work for you though, as it's currently hardcoded.
But v2 is rolling out soon, and it'll be better.

Rohan Sharma

I'm not going to use the Discord one anyway. I've no personal community to share it with. LOL

Great work btw.

Harsh

The repurposing problem is so real: writing the blog is only half the work, then you have to rewrite everything for Twitter, LinkedIn, and Instagram separately. The fact that you actually built a tool to solve your own problem instead of just complaining about it is the best kind of developer thinking. Curious: does the tool adapt the tone for each platform automatically, or do you give it platform-specific instructions each time?

Precious

Yeah, great to have actually built a tool for it. Peak content engineer moment.

Dumebi Okolo

I gave it platform-specific instructions in the agent_instruction.md file.
Thank you so much.
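For readers wondering what that file might contain, a hypothetical agent_instruction.md could carry per-platform sections like this (the headings and rules here are invented for illustration, not taken from the actual tool):

```markdown
## LinkedIn
- Professional tone, 1-3 short paragraphs, end with a question.

## X
- Under 280 characters, punchy hook, at most 2 hashtags.

## Discord
- Casual tone, address the community directly, link the blog at the end.
```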

Apogee Watcher

Sounds promising! We're also building our own internal tool for writing, image generation, posting on our site and sharing on social media. It's all part of the main repository, so the AI agents have full context on planning, specifications, code, and public content.

Dumebi Okolo

That sounds amazing!
Giving agents full context of the repo is exactly the kind of context infusion an internal tool needs when it depends on company-specific knowledge.
I'm currently focusing on the 'content-out' flow, but I can see how having the planning specs in the context window would make the social media strategy even more targeted.
Are you using a RAG-based approach for the repository context, or just feeding it in via the long context window?

Apogee Watcher

No special equipment is involved. This is all basically done in Cursor. Just a few project rules so that the model knows where to find things.
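For anyone replicating this setup, Cursor project rules can be as simple as a short plain-text file in the repo root; a hypothetical sketch (the paths are invented for illustration):

```
# Hypothetical .cursorrules sketch; paths are invented for illustration.
Blog drafts live in content/posts/.
Social copy templates live in content/social/.
Read specs/plan.md before generating any public-facing text.
```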

Precious

Ah, this is nice!
A feature suggestion (though I understand this is an internal tool, which is why some things are hardcoded): make the social media selector dynamic, so that someone can plug in their own social platform and post to it.

Dumebi Okolo

Watch out for V2!

Mahima From HeyDev

Nice write-up. I've had good results with this kind of "vibe-coded then hardened" workflow, but only if you add a quick safety net: basic unit/integration tests around the core flow, plus a couple of guardrails (lint, typecheck, and a CI step that fails fast).

For Next.js specifically, did you run into any friction with auth/session cookies or request header bloat as the tool grew? That's been a sneaky source of prod-only bugs for teams I've worked with.

If you end up sharing the repo, I'm curious what your minimal "definition of done" checklist looks like for these internal tools.

Dumebi Okolo

Great question, Mahima!
For v1, I kept it stateless to avoid the exact 'header bloat' you're mentioning.
The tool currently just takes the URL and processes it in a single lifecycle.
However, as I move toward v2 with persistent user accounts, I'm looking at middleware to keep the session light.

My 'Definition of Done' for vibe-coded internal tools:
- Schema Enforcement: Does the LLM output strictly valid JSON?
- Error Boundaries: Does a 'chatty' AI response crash the UI?
- The Deployment Handshake: Can I deploy the result to at least one live endpoint (like the Discord webhook) in under 1 second?
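A schema-enforcement check like that can be sketched in a few lines of Python. This is a hedged illustration only: the key names below are hypothetical, not the actual tool's schema.

```python
import json

# Hypothetical schema for one generated social post; the real tool's
# field names may differ.
REQUIRED_KEYS = {"platform", "post_text"}

def parse_post(raw: str) -> dict:
    """Fail fast if the LLM output is not strictly valid JSON with the expected keys."""
    post = json.loads(raw)  # raises json.JSONDecodeError on chatty / non-JSON output
    missing = REQUIRED_KEYS - post.keys()
    if missing:
        raise ValueError(f"LLM output missing keys: {sorted(missing)}")
    return post

post = parse_post('{"platform": "x", "post_text": "New blog is live!"}')
```

Anything conversational the model prepends ("Sure, here's your JSON...") makes `json.loads` throw, which pairs naturally with the error-boundary item above.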

Anmol Baranwal

just tried it. great work Dumebi! 🔥

I think the best way to improve this would be to give users (or yourself) the option to choose whether they want the agent to create posts using a default style or allow them to attach a few sample posts for each platform (LinkedIn, X, Discord). The agent could then use those as prompt context and generate posts in the same style.

by the way, where have you hosted this? 😂 deployment platforms use such weird names these days instead of the actual project name.

klement Gunndu

The JSON sanitization middleware for Gemini output is smart; I ran into the same code-block wrapping issue with Claude and ended up building a similar regex strip layer.
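For anyone curious, a strip layer like that can be as small as one regex. This is a generic sketch of the pattern, not the actual middleware from the post:

```python
import json
import re

# Matches a triple-backtick block, optionally tagged "json", and captures the body.
FENCE_RE = re.compile(r"`{3}(?:json)?\s*(.*?)\s*`{3}", re.DOTALL)

def strip_code_fence(raw: str) -> str:
    """Unwrap the code fence an LLM sometimes adds around its JSON output."""
    match = FENCE_RE.search(raw)
    return match.group(1) if match else raw.strip()

fence = "`" * 3  # triple backtick, spelled out to keep this snippet paste-safe
wrapped = fence + 'json\n{"title": "My post", "tags": ["ai"]}\n' + fence
data = json.loads(strip_code_fence(wrapped))
```

As noted further down the thread, this gets fragile if the JSON itself contains backticks, so forcing structured output at the API level is the sturdier fix when the provider supports it.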

Dumebi Okolo

Yaaay. I didn't understand it initially, but I understand it a lot better now.

Vic Chen

Love this. The JSON sanitization middleware for Gemini output is the kind of unglamorous detail that makes or breaks a production AI tool. I've run into the same markdown-in-JSON issue with Claude outputs. The instinct to vibe-code internal tooling is underrated; there's no better way to tighten your own workflow than shipping something quick and iterating on it. Curious what the next bottleneck is now that you've freed up 4 hours; usually clearing one constraint just surfaces the next one.

Matthew Hou

The HITL architecture choice is the most important thing here, and I'm glad you called it out.

I've seen the opposite pattern fail badly: people build fully automated content pipelines, ship garbage for two weeks, then wonder why engagement tanked. The METR study showed developers think AI makes them 24% faster but actually measure 19% slower. The perception gap is real, and it applies to content too: AI-generated social posts feel right but often miss nuance that only the author would catch.

The Gemini Pro choice over Flash for strict JSON enforcement is smart. Structured output is where cheaper models fall apart first: they hallucinate keys, skip required fields, return markdown when you asked for JSON. That validation layer probably saves you more debugging time than the cost difference.

Curious: do you have any automated checks on the generated posts before they hit the approval queue? Even simple things like length limits or keyword blocklists can cut the review burden significantly.
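For what it's worth, checks like that can be a few lines. A hedged sketch, with made-up blocklist terms and rough character limits rather than anything from the actual tool:

```python
# Hypothetical pre-approval checks: a length limit and a keyword blocklist,
# run before a generated post enters the human review queue.
BLOCKLIST = {"guaranteed", "click here"}   # example terms only
MAX_LEN = {"x": 280, "linkedin": 3000}     # rough platform character limits

def pre_approval_issues(platform: str, text: str) -> list[str]:
    """Return a list of problems; an empty list means the post can queue for review."""
    issues = []
    limit = MAX_LEN.get(platform)
    if limit is not None and len(text) > limit:
        issues.append(f"too long for {platform} ({len(text)} > {limit})")
    lowered = text.lower()
    issues.extend(f"blocked term: {term}" for term in BLOCKLIST if term in lowered)
    return issues

problems = pre_approval_issues("x", "Click here for my new post! " * 20)
```

Running this before the approval queue means the human reviewer only ever sees posts that already clear the mechanical bar.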

Dumebi Okolo

In the agent_instruction.md, I have blockers for specific keywords.

Mahima From HeyDev

Really nice example of "vibe coding" used the right way: tight scope, measurable payoff (4 hours/week), and you shipped something people actually used.

One thing that's saved us on these internal tools is adding a couple of guardrails early: basic auth + rate limits, structured logs, and a tiny test around the "happy path" prompt so regressions are obvious when you tweak it.

Curious: did you end up caching LLM outputs per input (or per content block) to keep costs predictable once usage picked up?

MaxxMini

The HITL approach is a smart call. I see a lot of 'fully automated' content tools that end up posting generic garbage because there's no human checkpoint.

Your JSON sanitization middleware for Gemini output is practical; I hit the exact same issue with LLMs wrapping responses in markdown code blocks. The regex strip approach works but gets fragile with nested JSON. Have you considered using response_mime_type in the Gemini API config to force structured output? It eliminates the code block wrapping entirely.

The Discord webhook as a deployment channel is clever too: zero auth overhead and you get mobile notifications for free. Curious whether you've thought about adding a 'schedule' step between approval and posting, so you can batch-approve a week of content in one sitting.
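For reference, `response_mime_type` lives in the generation config of the google-generativeai SDK; a sketch of what that looks like (the model name and temperature here are illustrative, not from the post):

```python
# Sketch: asking Gemini for structured JSON up front instead of regex-stripping
# markdown fences afterward. The config keys follow the google-generativeai
# SDK's generation_config; model name and temperature are illustrative.
generation_config = {
    "response_mime_type": "application/json",  # forces raw JSON, no code fences
    "temperature": 0.2,
}

# With the SDK installed and an API key configured, usage would look like:
# import google.generativeai as genai
# model = genai.GenerativeModel("gemini-1.5-pro", generation_config=generation_config)
# response = model.generate_content(prompt)
```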

Dumebi Okolo

For the scheduling, I thought about it. Then I thought, wouldn't auto-scheduling take out the HITL factor? Unless the plan is to batch-edit, then schedule everything for sending.

coder om

Looks like it's worth trying.