DEV Community

Dumebi Okolo
I vibe-coded an internal tool that slashed my content workflow by 4 hours

One of the biggest challenges I face as a content expert is repurposing my written blogs for social media. Until now, I had to ask AI for summaries or write them myself, and I've recently become too busy for that.
The best solution for me was building a tool that generates social media content from my blog and posts on my behalf.
I was in a meeting of content professionals recently, and a key point that was hammered home regarding the use of AI in content creation was the need to maintain a strict Human-in-the-Loop (HITL) workflow.
That resonated with me.
I had initially planned to build an agent to automate and schedule social media posts. That, however, leaves out the HITL factor, so I restrategized.

Here is the technical breakdown of how I built an Agentic Content Engine using Next.js 15, Gemini 3.1 Pro, and Discord Webhooks.

Agentic Human-in-the-Loop (HITL) architecture

The Problem: The "Context Gap"
Most AI social media tools are just wrappers for generic prompts. They don't know my research, they don't know my voice, and they definitely don't know the technical nuances of my articles.
So, I needed a tool that:

  • Reads my actual dev.to articles.

  • Strategizes a 3-day multi-platform campaign.

  • Displays it in a way that I can audit, edit, and then—with one click—Deploy.

Even though this app was "vibe coded" (shoutout to the AI for keeping up with my pivots 😂😂), the architecture is solid.

The core philosophy of this build is Agency over Automation. The agent doesn't just act; it reasons, structures, and then waits for human approval before posting.

The AI Stack

  • Reasoning Engine: Gemini 3.1 Pro (Tier 1 Billing). I opted for Pro over Flash to handle complex instruction following and strict JSON schema enforcement.
  • Frontend: Next.js 15 (App Router) for server-side rendering and SEO efficiency.
  • Styling: Tailwind CSS with @tailwindcss/typography for professional markdown rendering.
  • Deployment: Discord Webhooks for an immediate, zero-auth execution pipeline.
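The Discord leg really is close to zero-auth. A minimal sketch of what that deployment step can look like, with my own names for the function and the webhook URL parameter; note that Discord caps a single message at 2000 characters, so longer posts get chunked first:

```typescript
// Split a post into Discord-sized chunks (Discord rejects messages over 2000 chars).
function chunkMessage(content: string, limit = 2000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < content.length; i += limit) {
    chunks.push(content.slice(i, i + limit));
  }
  return chunks;
}

// POST each chunk to the webhook in order, so long posts arrive as a sequence.
async function postToDiscord(webhookUrl: string, content: string): Promise<void> {
  for (const chunk of chunkMessage(content)) {
    const res = await fetch(webhookUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ content: chunk }),
    });
    if (!res.ok) throw new Error(`Discord webhook failed: ${res.status}`);
  }
}
```

Because the webhook URL itself carries the credential, there is no OAuth dance at all, which is what makes it such a good first execution target.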

Handling AI Hallucinations in Next.js

A common failure in vibe coding, I have found, is the LLM returning "chatty" text when the UI expects structured data.
To solve this, I implemented a Strict JSON Enforcement pattern in the API route.

Gemini often wraps its JSON output in markdown code blocks. If you pass this directly to JSON.parse(), the app crashes.

To solve this, I used Sanitization Middleware.
I built a regex-based sanitization layer to strip the noise and ensure the frontend receives a clean array.

// app/api/generate/route.ts
const rawOutput = data.output; // The raw string from Gemini

// Regex to strip markdown code fences so only the JSON content remains
const cleanJson = rawOutput.replace(/```json|```/g, "").trim();

try {
  const campaignData = JSON.parse(cleanJson);
  return NextResponse.json({ campaign: campaignData.campaign });
} catch (error) {
  console.error("JSON Parsing failed:", rawOutput);
  return NextResponse.json({ error: "Failed to parse Agent strategy" }, { status: 500 });
}
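Stripping fences covers the common case, but fences aren't the only noise an LLM can emit; chatty preambles like "Here's your campaign:" survive the regex. A slightly more defensive variant I'd sketch (function name is mine) falls back to slicing out the outermost JSON object:

```typescript
// Strip markdown code fences, then fall back to slicing the first balanced
// JSON object if chatty text surrounds the payload.
function extractJson(raw: string): unknown {
  const stripped = raw.replace(/```(?:json)?/g, "").trim();
  try {
    return JSON.parse(stripped);
  } catch {
    // Preamble or epilogue text: take everything between the first "{"
    // and the last "}" and try again.
    const start = stripped.indexOf("{");
    const end = stripped.lastIndexOf("}");
    if (start === -1 || end <= start) {
      throw new Error("No JSON object found in model output");
    }
    return JSON.parse(stripped.slice(start, end + 1));
  }
}
```

If you're calling Gemini through its official API, setting `responseMimeType: "application/json"` in the generation config also reduces how often this sanitization is needed in the first place.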


UI/UX Strategy: The Kanban "Board" Approach

The v1 of the UI was messy. The tool worked, but you'd have to dig through mountains of text to understand what was going on.
I tried formatting it into a table for some structure. Somehow, that was worse!
Finally, to optimize for a Human-in-the-Loop workflow, I moved to a columnar dashboard.
Social posts, especially threads on X, can be long, and that would have made even the boards clumsy and unkempt.
To keep the UI clean, I built a PostCard component that caps content at 250 characters with a state-managed "Read More" toggle.

const [isExpanded, setIsExpanded] = useState(false);
const isTruncated = !isExpanded && content.length > 250;
const displayContent = isTruncated ? content.substring(0, 250) + "..." : content;

This ensures the user can audit the text without scrolling for "miles."
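The truncation rule can also live in a small pure helper (names are mine), which keeps the component lean and makes the rule unit-testable, including the edge case where a short post should never get a dangling ellipsis:

```typescript
// Truncate post content for the card view. Expanded cards show everything,
// and posts at or under the limit are returned unchanged.
function truncateForCard(content: string, expanded: boolean, limit = 250): string {
  if (expanded || content.length <= limit) return content;
  return content.substring(0, limit) + "...";
}
```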


Photo dump: Agentic Content Flow in Action

  1. The Starting Point: Here's the clean, minimal dashboard before the magic happens. I wanted it to feel like a professional "Command Centre," not a messy chatbot window.

homepage

  2. The 3-Day Campaign Map: Once I paste my URL, the Agent goes to work. It returns a structured 3x3 grid. I added a 250-character truncation with a "Read More" toggle because, let's face it, nobody wants a wall of text when they're trying to strategise.

content generation

  3. The Deployment: Here is the best part. I hit "Post to Discord," and boom—success. No manual copy-pasting, no switching tabs. It’s live.

posted to discord success message

discord success

What's next

This is what I have built so far. I am calling it BloggerHelper v1.
My next updates are:

  1. Integrating the X and LinkedIn features.
  2. Putting more work into the context tank. So far, the agent's context has come from the article and some instructions in the agents_instruction.md file. I will work more on this.
  3. Adding an edit feature, so I can edit a post before it goes out.
  4. Making it take in more context than just my blog posts.

Conclusion: The Engineering of Presence

Even though this tool was designed to help me cut down on work hours, it was also meant to take me from being just a technical writer to a content engineer/architect, whose primary goal isn't just to create content but to build solutions that make for easy content flow.
Also, as I position myself as an AI influencer, I want to show myself building more with AI and evangelising its adoption.

Let's connect on LinkedIn!

What’s your take on Agentic Workflows? Are you building for full automation, or are you keeping the human in the loop?

Let’s discuss below. 👇

UPDATE!!!!

I just used my tool to get my social media caption/content for this post. See below.

am content generator

You can try it out here, but mercy on my API credit!!

Top comments (24)

Rohan Sharma

The right link made me reach to the right blog. LOL

btw, this looks good. I will try posting on X.

Dumebi Okolo

You tried it! 😁
Glad to see it worked.
The discord function can't work for you though, as it's currently hard coded.
But v2 is rolling out soon, and it'd be better.

Rohan Sharma

I'm not going to use the discord one anyway. I've no personal community to share within. LOL

Great work btw.

Harsh

The repurposing problem is so real: writing the blog is only half the work, then you have to rewrite everything for Twitter, LinkedIn, Instagram separately. The fact that you actually built a tool to solve your own problem instead of just complaining about it is the best kind of developer thinking. Curious: does the tool adapt the tone for each platform automatically, or do you give it platform-specific instructions each time?

Precious

Yeah, great to have actually built a tool for it. peak content engineer moment.

Dumebi Okolo

I gave it platform-specific instructions in the agent_instruction.md file.
Thank you so much.

Apogee Watcher

Sounds promising! We're also building our own internal tool for writing, image generation, posting on our site and sharing on social media. It's all part of the main repository, so the AI agents have full context on planning, specifications, code, and public content.

Dumebi Okolo

That sounds amazing!
Giving agents full context of the repo is exactly the kind of context infusion that any internal tool needing company-centric context requires.
I’m currently focusing on the 'content-out' flow, but I can see how having the planning specs in the context window would make the social media strategy even more targeted.
Are you using a RAG-based approach for the repository context, or just feeding it in via the long context window?

Apogee Watcher

No special equipment is involved. This is all basically done in Cursor. Just a few project rules so that the model knows where to find things.

Precious

Ah. This is nice!
A feature suggestion, although I understand that this is an internal tool and that's why some things are hardcoded: make the social media selector dynamic. Such that someone can put in their own social media and post on it.

Dumebi Okolo

Watch out for V2!

Mahima From HeyDev

Nice write-up. I’ve had good results with this kind of “vibe-coded then hardened” workflow, but only if you add a quick safety net: basic unit/integration tests around the core flow, plus a couple of guardrails (lint, typecheck, and a CI step that fails fast).

For Next.js specifically, did you run into any friction with auth/session cookies or request header bloat as the tool grew? That’s been a sneaky source of prod-only bugs for teams I’ve worked with.

If you end up sharing the repo, I’m curious what your minimal “definition of done” checklist looks like for these internal tools.

Dumebi Okolo

Great question, Mahima!
For v1, I kept it stateless to avoid the exact 'header bloat' you're mentioning.
The tool currently just takes the URL and processes it in a single lifecycle.
However, as I move toward v2 with persistent user accounts, I’m looking at middleware to keep the session light.

​My 'Definition of Done' for vibe-coded internal tools:
​Schema Enforcement: Does the LLM output strictly valid JSON?
​Error Boundaries: Does a 'chatty' AI response crash the UI?
​The Deployment Handshake: Can I deploy the result to at least one live endpoint (like the Discord Webhook) in under 1 second?

Anmol Baranwal

just tried it. great work Dumebi! 🔥

I think the best way to improve this would be to give users (or yourself) the option to choose whether they want the agent to create posts using a default style or allow them to attach a few sample posts for each platform (linkedIn, x, discord). The agent could then use those as prompt context and generate posts in the same style.

by the way, where have you hosted this? 😂 deployment platforms use such weird names these days instead of the actual project name.

klement Gunndu

The JSON sanitization middleware for Gemini output is smart — ran into the same code-block wrapping issue with Claude and ended up building a similar regex strip layer.

Dumebi Okolo

Yaaay. I didn't understand it initially, but I've understood it a lot better now.

Vic Chen

Love this — the JSON sanitization middleware for Gemini output is the kind of unglamorous detail that makes or breaks a production AI tool. I've run into the same markdown-in-JSON issue with Claude outputs. The instinct to vibe-code internal tooling is underrated; there's no better way to tighten your own workflow than shipping something quick and iterating on it. Curious what the next bottleneck is now that you've freed up 4 hours — usually clearing one constraint just surfaces the next one.

Matthew Hou

The HITL architecture choice is the most important thing here, and I'm glad you called it out.

I've seen the opposite pattern fail badly — people build fully automated content pipelines, ship garbage for two weeks, then wonder why engagement tanked. The METR study showed developers think AI makes them 24% faster but actually measure 19% slower. The perception gap is real, and it applies to content too: AI-generated social posts feel right but often miss nuance that only the author would catch.

The Gemini Pro choice over Flash for strict JSON enforcement is smart. Structured output is where cheaper models fall apart first — they hallucinate keys, skip required fields, return markdown when you asked for JSON. That validation layer probably saves you more debugging time than the cost difference.

Curious: do you have any automated checks on the generated posts before they hit the approval queue? Even simple things like length limits or keyword blocklists can cut the review burden significantly.

Dumebi Okolo

In the agent_instruction.md, I have blockers for specific keywords.

Mahima From HeyDev

Really nice example of “vibe coding” used the right way - tight scope, measurable payoff (4 hours/week), and you shipped something people actually used.

One thing that’s saved us on these internal tools is adding a couple guardrails early: basic auth + rate limits, structured logs, and a tiny test around the “happy path” prompt so regressions are obvious when you tweak it.

Curious - did you end up caching LLM outputs per input (or per content block) to keep costs predictable once usage picked up?
