Rafał Żbikowski
I Built an AI That Turns 1 App Screenshot Into 10 Store-Ready Marketing Formats in 30 Seconds

Every indie developer knows the pain. You've spent weeks building your app, and now you need App Store screenshots, Google Play graphics, Instagram banners, Product Hunt assets...

So you open Figma. Download templates. Adjust sizes. Write headlines. Pick colors. Export. Repeat for every format.

4 hours later, you're still not done.

I got tired of this cycle, so I built MockupGen AI, a tool that does all of it in about 30 seconds.

What It Does

You upload 1-8 screenshots from your app. The AI:

  1. Analyzes your UI using Claude's vision capabilities
  2. Extracts your brand colors automatically from the screenshot
  3. Writes marketing headlines tailored to your app
  4. Generates 10 format-specific layouts — each with proper dimensions
  5. Delivers everything as a ZIP — ready to upload

The whole process takes about 30-60 seconds.
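The pipeline above can be sketched roughly like this. This is a minimal illustration, not the product's actual code: the type names and function are hypothetical, and only a few of the ten format dimensions are shown (the App Store sizes come from the list below; the others are the commonly published platform sizes).

```typescript
// Each target format carries its own output dimensions.
type FormatSpec = { name: string; width: number; height: number };

const FORMATS: FormatSpec[] = [
  { name: "appstore-6.9", width: 1320, height: 2868 },
  { name: "appstore-6.7", width: 1290, height: 2796 },
  { name: "play-feature", width: 1024, height: 500 },
  { name: "instagram-story", width: 1080, height: 1920 },
  { name: "instagram-post", width: 1080, height: 1080 },
  // ...remaining formats omitted for brevity
];

// Result of the vision/analysis step (step 1-3 of the pipeline).
interface Analysis {
  headline: string;
  accentColor: string;
}

// Step 4: fan the single analysis out into one layout job per format,
// each with the proper dimensions for its platform.
function planLayouts(analysis: Analysis, formats: FormatSpec[]) {
  return formats.map((f) => ({
    format: f.name,
    width: f.width,
    height: f.height,
    headline: analysis.headline,
    accentColor: analysis.accentColor,
  }));
}
```

The key property is that the expensive AI analysis runs once, and the per-format work is just deterministic layout.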

The Formats

From a single upload, you get:

  • App Store iPhone 6.9" (1320×2868)
  • App Store iPhone 6.7" (1290×2796)
  • Google Play feature graphic
  • Instagram Story & Post
  • Twitter/X header
  • Product Hunt banner
  • Website hero section
  • ...and more (10 formats total)

The Tech Stack

For the developers in the audience, here's what's under the hood:

  • Frontend: Next.js 14 (App Router)
  • AI Engine: Anthropic Claude API (vision + text generation)
  • Graphics: Canvas 2D API (server-side rendering, no Figma/Puppeteer)
  • Auth: Kinde
  • Database: Supabase
  • Payments: Stripe (subscriptions with webhooks)
  • File Storage: Cloudflare R2
  • Hosting: Vercel
  • Analytics: PostHog

Why Canvas 2D Instead of Puppeteer?

This was a deliberate choice. Puppeteer spins up headless browsers — it's slow, memory-hungry, and expensive at scale. Canvas 2D runs server-side, generates pixel-perfect graphics in milliseconds, and costs almost nothing to operate.

The tradeoff is that layout logic is more manual. Every text wrap, every gradient, every device frame is hand-coded. But the result is consistent, fast, and reliable.
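To give a feel for that manual layout logic, here is a sketch of hand-rolled word wrapping, the kind of thing Canvas 2D makes you write yourself. The `measure` callback is a stand-in for `ctx.measureText(s).width` from a server-side canvas library (e.g. the npm `canvas` package); passing it in keeps the sketch self-contained.

```typescript
// Greedily wrap text into lines that fit maxWidth pixels.
// `measure` returns the rendered width of a string in the current font.
function wrapText(
  text: string,
  maxWidth: number,
  measure: (s: string) => number
): string[] {
  const words = text.split(/\s+/);
  const lines: string[] = [];
  let current = "";
  for (const word of words) {
    const candidate = current ? `${current} ${word}` : word;
    // Keep extending the line while it fits; a single overlong word
    // still gets its own line rather than an empty one.
    if (measure(candidate) <= maxWidth || !current) {
      current = candidate;
    } else {
      lines.push(current);
      current = word;
    }
  }
  if (current) lines.push(current);
  return lines;
}
```

Multiply this by gradients, shadows, and device frames and you can see where the "hundreds of iterations" go, but every piece stays deterministic and fast.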

How Claude Vision Works Here

The AI doesn't just "look" at your screenshot. It:

  • Identifies the app's purpose and target audience
  • Extracts dominant and accent colors (hex values)
  • Suggests marketing angles based on visible UI elements
  • Generates headlines that match the app's tone

All of this happens in a single API call, keeping latency low.
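Structurally, a combined vision-plus-text request to the Anthropic Messages API looks like the sketch below: one user message containing a base64 image block followed by a text prompt. The model name and prompt wording here are illustrative assumptions, not MockupGen's actual values.

```typescript
// Build the body for a single POST to the Anthropic Messages API
// (/v1/messages) that asks for analysis, colors, and headlines at once.
function buildAnalysisRequest(base64Png: string) {
  return {
    model: "claude-sonnet-4-5", // assumption: any vision-capable model
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: [
          {
            type: "image",
            source: {
              type: "base64",
              media_type: "image/png",
              data: base64Png,
            },
          },
          {
            type: "text",
            text:
              "Identify this app's purpose and target audience, extract " +
              "the dominant and accent colors as hex values, and write " +
              "three marketing headlines matching the app's tone. " +
              "Respond as JSON.",
          },
        ],
      },
    ],
  };
}
```

Bundling everything into one prompt and parsing a single structured response is what keeps the round-trip count, and therefore latency, down.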

The Build Journey

I'm a solo developer with a background in cybersecurity (20 years in corporate IT). MockupGen AI started as a tool I built for my own apps — I have several iOS/macOS apps on the App Store, and I was spending way too much time on marketing graphics.

The first version was rough. Just App Store formats with hardcoded layouts. But once I added Claude's vision API for color extraction and headline generation, the quality jumped dramatically.

The biggest challenge was getting Canvas 2D to produce graphics that look designed, not generated. Tiny details matter: proper text kerning, gradient angles that feel natural, device frames with realistic shadows. It took hundreds of iterations.

Numbers So Far

  • Generation time: ~30 seconds for all 10 formats
  • Formats per generation: 10 (22+ individual graphics)
  • Design skills required: 0

Try It

The first generation is free, no credit card required:

mockupgenai.com

I'd love feedback from the dev.to community. What formats would you add? What would make this more useful for your workflow?
