**What if a blog could write itself?**
That was the question I started with. Six weeks later, I shipped GenBlog, a full-stack AI-assisted content platform that generates complete, SEO-optimized articles (with a relevant featured image, tags, and meta description) from a single topic prompt, either on demand or on a schedule with zero human input.
Live demo: gen-blog.vercel.app
**The problem I wanted to solve**
Content-driven websites have a brutal cold-start problem. You need posts to get traffic, but writing quality posts takes hours. Tools like ChatGPT help, but you're still copy-pasting into a CMS, sourcing images manually, writing your own meta descriptions, and publishing by hand.
I wanted a system that handles all of that automatically — and still produces content that actually looks good and reads well.
**What GenBlog does**
Give it a topic. It returns a full article — structured markdown, a title, a slug, tags, a meta description, and a contextually relevant high-resolution featured image. You can trigger this manually from the admin dashboard, or let a cron job fire it automatically on a schedule.
Generated drafts land in the admin portal where you can review, edit, publish, or delete. Published posts are immediately live on the reader-facing frontend with full SEO metadata and beautiful markdown rendering.
**The stack**
Frontend — Next.js 16 (App Router) with React 19, Tailwind CSS 4, React Markdown with GFM support, and Next Themes for dark/light mode. State is handled purely with React hooks and context — no Redux, no Zustand. Icons are Lucide React for consistency. API calls go through Axios with custom interceptors.
Backend — Express.js running on Node.js v22+. MongoDB + Mongoose for data persistence. JWT for secure admin authentication. Node-Cron for scheduled generation. The two external APIs doing the heavy lifting are Google Gemini Pro for content generation and Unsplash for images.
Deployment — Frontend lives on Vercel (edge network, seamless Next.js integration). Backend is a Web Service on Render so the cron jobs keep running persistently and don't get killed by serverless cold starts.
**How the AI pipeline works**
When a generation is triggered (manually or by cron), the backend service builds a detailed prompt and sends it to the Gemini Pro API. The prompt asks for structured output — a title, meta description, comma-separated tags, and a long-form markdown body. Once the content comes back, a second call goes to the Unsplash API using the generated title as the search query to find the best-matching featured image. Both pieces get assembled into a Mongoose document and saved as a draft.
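The post record then gets built out of that structured response. The exact format GenBlog asks Gemini for isn't shown in this post, so the parser below is a minimal sketch assuming labeled sections (`TITLE:`, `META:`, `TAGS:`, `BODY:`) in the model's reply, with the slug derived from the title:

```javascript
// Hypothetical parser for the structured text the model returns.
// The TITLE/META/TAGS/BODY labels are an assumed convention, not
// necessarily the format GenBlog actually uses.
function parseGeneratedPost(raw) {
  const grab = (label) => {
    const match = raw.match(new RegExp(`${label}:\\s*([^\\n]+)`));
    return match ? match[1].trim() : '';
  };
  const title = grab('TITLE');
  return {
    title,
    metaDescription: grab('META'),
    tags: grab('TAGS').split(',').map((t) => t.trim()).filter(Boolean),
    // Everything after the BODY marker is the markdown article itself
    body: raw.split(/BODY:\s*/)[1]?.trim() ?? '',
    // Slug derived from the title: lowercase, alphanumerics, hyphens
    slug: title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, ''),
  };
}
```

The parsed object maps directly onto the Mongoose schema, so saving the draft is a single `create` call once the image URL comes back from Unsplash.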
The whole thing takes about 8–12 seconds end-to-end. Not instant, but fast enough that the admin UX doesn't feel painful.
**Things I learned the hard way**
**Prompt engineering matters a lot.** Early versions returned inconsistently formatted markdown, sometimes with extra preamble, sometimes with missing sections. I eventually added strict output-formatting instructions and a handful of few-shot examples directly to the system prompt. Quality improved dramatically.
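As a rough illustration of what those instructions can look like (the labels and wording here are invented for the example, not the real GenBlog prompt), a prompt builder with a strict output template and one inline few-shot example:

```javascript
// Sketch of a strict-format prompt. The section labels and the example
// are illustrative placeholders.
function buildPrompt(topic) {
  return [
    'You are a technical blog writer. Respond with EXACTLY this structure,',
    'with no preamble and no text outside these sections:',
    '',
    'TITLE: <one line>',
    'META: <under 160 characters>',
    'TAGS: <3-5 comma-separated tags>',
    'BODY:',
    '<long-form markdown with ## subheadings>',
    '',
    '--- Example ---',
    'TITLE: Why Rust Borrow Checking Matters',
    'META: A practical look at what the borrow checker actually buys you.',
    'TAGS: rust, memory-safety, programming',
    'BODY:',
    '## The problem with manual memory management',
    '...',
    '--- End example ---',
    '',
    `Topic: ${topic}`,
  ].join('\n');
}
```

Pinning the labels like this is also what makes the response parseable with plain string matching instead of fragile heuristics.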
**Cron jobs and serverless don't mix.** I started with a serverless function on Vercel for the backend. The cron would schedule fine but the function would time out or get killed mid-generation. Moving to a persistent Render Web Service fixed this entirely.
**Unsplash rate limits are real.** The free tier is 50 requests/hour. For a blog that auto-generates posts, that ceiling can come up fast. I added basic caching on the image lookup so repeated topics don't burn unnecessary requests.
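A minimal version of that caching layer, assuming a `fetchImage` function that wraps the actual Unsplash call (injected here so the cache logic stays independent of the API client):

```javascript
// In-memory cache for image lookups, keyed by a normalized topic string.
// Repeated topics return the cached URL instead of spending a rate-limited
// Unsplash request.
function makeCachedImageLookup(fetchImage) {
  const cache = new Map();
  return async function lookup(topic) {
    const key = topic.trim().toLowerCase();
    if (cache.has(key)) return cache.get(key); // repeat topic: no API call
    const url = await fetchImage(key);
    cache.set(key, url);
    return url;
  };
}
```

An in-memory Map is enough for a single persistent Render instance; a multi-instance deployment would want the cache in MongoDB or Redis instead.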
**SEO from day one.** Next.js App Router makes dynamic metadata generation clean, but you have to plan your data model around it. I made sure every post has a slug, meta description, and OG image field from the very first schema design; retrofitting them later would have been messy.
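For reference, a sketch of the object a `generateMetadata` export might build from such a post document. The field names on `post` (`metaDescription`, `ogImage`) are assumptions about the schema; the returned shape follows the App Router Metadata API:

```javascript
// Hypothetical helper behind a Next.js generateMetadata() export.
// Builds the per-post metadata object from a post document.
function buildPostMetadata(post, siteUrl) {
  return {
    title: post.title,
    description: post.metaDescription,
    alternates: { canonical: `${siteUrl}/posts/${post.slug}` },
    openGraph: {
      title: post.title,
      description: post.metaDescription,
      images: [{ url: post.ogImage }],
      type: 'article',
    },
  };
}
```

In a real page this would be returned from `export async function generateMetadata({ params })` after fetching the post by its slug.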
**Project structure**
```
GenBlogs/
├── frontend/
│   ├── src/app/         # App Router pages & layouts
│   ├── src/components/  # UI components
│   └── src/lib/         # Axios client & utilities
└── backend/
    ├── src/models/      # Mongoose schemas
    ├── src/routes/      # API endpoints
    ├── src/services/    # Gemini & Unsplash logic
    └── src/index.js     # Entry point
```
**What's next**
A few things I'm planning to add:
- Multi-model support: let users choose between Gemini, Claude, and GPT for generation
- Content quality scoring before auto-publish (checking readability, length, keyword density)
- Newsletter integration: auto-send a digest when new posts go live
- Analytics dashboard to track which AI-generated topics actually get traffic
If you've built something similar or have opinions on the Gemini vs GPT-4 quality tradeoffs for long-form content, I'd love to hear them in the comments. And if you check out the live site, let me know what you think!
