How I Built a Reusable Image-Generation Stack for Marketing Visuals

Here is the stack our team ships marketing visuals from: three GPT-driven tools and one curated prompt library, refined over six months of weekly use.

Generation: ImgGPT

ImgGPT is a GPT-driven image generator with native marketing presets - aspect-ratio guards, brand-color modifiers, copy-space anchors. The output is consistent enough to ship without a designer pass in ~80% of cases.

What makes it durable for marketing work:

  • Brand-color modifier per render, applied as a token that the model honors more reliably than free-form color words
  • Copy-space anchors (top-left, lower-third) so the text overlay always lands in a clean area
  • Aspect-ratio guards baked into the prompt - no post-crop required for 16:9, 1:1, or 4:5 (all three modifiers show up in the prompt sketch after this list)
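
A minimal sketch of how those three modifiers compose into one prompt string. The helper function, the token phrasing, and the hex color are my own illustration, not ImgGPT's actual preset syntax:

```python
# Hypothetical prompt composer - token names and phrasing are illustrative,
# not ImgGPT's actual preset format.
BRAND_COLOR = "#0F62FE"  # assumed brand primary, applied as a token


def build_prompt(subject: str, copy_space: str = "top-left", aspect: str = "16:9") -> str:
    parts = [
        subject,
        f"brand color {BRAND_COLOR}",                                 # color modifier as a token
        f"copy space {copy_space}, keep that area clean and empty",   # copy-space anchor
        f"aspect ratio {aspect}, full frame, no post-crop needed",    # aspect-ratio guard
    ]
    return ", ".join(parts)


print(build_prompt("product shot of a ceramic mug on a walnut desk"))
```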

Prompt library: GPT Images Prompt

The biggest unlock in image generation is not a new model. It is the discipline of re-testing old prompts every quarter as model behavior drifts. GPT Images Prompt is a curated index of ChatGPT image prompts filtered by style, intent, and category. The selling point: prompts get re-tested against the current model.

After three quarters of disciplined re-testing across ~800 prompts, here is how they break down (a tracking sketch follows the list):

  • ~40% still work as written (the durable ones)
  • ~35% drift slightly but produce usable output
  • ~20% drift significantly - re-tagged with the model they currently work on
  • ~5% completely broken - archived
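
A rough sketch of how that quarterly triage can be tracked. The record structure and status labels are mine and simply mirror the buckets above; they are not GPT Images Prompt's internal format:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# The four buckets from the quarterly re-test above.
STATUSES = ("works-as-written", "drifts-usable", "drifts-retagged", "archived")


@dataclass
class PromptRecord:
    text: str
    style: str
    intent: str
    status: str = "untested"
    last_tested: Optional[date] = None
    works_on_model: Optional[str] = None  # only set for re-tagged prompts


def record_retest(rec: PromptRecord, status: str, model: Optional[str] = None) -> None:
    if status not in STATUSES:
        raise ValueError(f"unknown status: {status}")
    rec.status = status
    rec.last_tested = date.today()
    rec.works_on_model = model if status == "drifts-retagged" else None


# Example: a prompt that drifted significantly and now only works on an older model.
rec = PromptRecord("isometric product hero, soft studio light", style="isometric", intent="hero")
record_retest(rec, "drifts-retagged", model="previous-gen image model")
```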

Editing and brand presets: GPT Images

GPT Images handles the edit pass. Brand presets (palette + lighting + composition modifiers) save once and apply across all renders, so a campaign of ten variants stays visually coherent. Without shared presets, you are hand-typing composition anchors into every prompt - typos and drift are guaranteed.
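
In practice a shared preset is just a bundle of modifiers appended identically to every variant. A minimal sketch, assuming a plain dict rather than GPT Images' actual preset schema:

```python
# Hypothetical brand preset - field names and phrasing are mine, not GPT Images' schema.
BRAND_PRESET = {
    "palette": "brand palette: deep navy with warm coral accents",
    "lighting": "soft diffused studio lighting, no harsh shadows",
    "composition": "copy space top-left, subject at lower-right third",
}


def apply_preset(subject: str, preset: dict) -> str:
    # The same modifiers are appended to every variant, so all renders in the
    # campaign share one palette, one lighting setup, and one composition anchor set.
    return ", ".join([subject, *preset.values()])


variants = ["running shoe on wet concrete", "running shoe mid-stride", "close-up of the sole"]
prompts = [apply_preset(v, BRAND_PRESET) for v in variants]
```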

The four composition anchors that lock down brand consistency

When a campaign needs ten variations of the same brand asset, the failure mode is composition drift: same colors, but different subject placement, focal weight, and negative space.

The fix is not more prompts. It is one prompt with four strong composition anchors:

  1. copy space {direction} - locks where the text overlay lands
  2. camera angle {type} - locks point of view
  3. focal weight {position} - locks visual hierarchy
  4. composition rule of thirds, subject at intersection - locks classic frame

When all four anchors are in the prompt, the model has very little discretion about layout. Only the subject changes between variants.
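
Assembled into a template, the four anchors look like this. The helper is a sketch of my own; the anchor phrasing matches the list above:

```python
# The four anchors from the list above, assembled into one template.
def anchored_prompt(subject: str, copy_space: str, camera_angle: str, focal_weight: str) -> str:
    anchors = [
        f"copy space {copy_space}",                              # 1. where the text overlay lands
        f"camera angle {camera_angle}",                          # 2. point of view
        f"focal weight {focal_weight}",                          # 3. visual hierarchy
        "composition rule of thirds, subject at intersection",   # 4. classic frame
    ]
    return ", ".join([subject, *anchors])


# Only the subject changes between variants; the layout stays locked.
for subject in ("ceramic mug with steam rising", "ceramic mug top-down on linen"):
    print(anchored_prompt(subject, copy_space="top-left",
                          camera_angle="eye level", focal_weight="lower-right third"))
```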

What still drifts despite anchors

  • Text overlay readability - sometimes the model fills "copy space" with subtle texture
  • Subject scale - "subject at lower-left third" might render too small. Add "subject occupies 40% of frame"
  • Edge crops on wide aspect ratios - add "full subject visible" guard (all three guards are collected in the sketch below)
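
The guards bolt onto the same prompt as extra clauses. A small sketch; the last two guard phrasings come from the list above, the first is an assumed wording for the copy-space fix:

```python
# Guard clauses appended when the four anchors alone are not enough.
DRIFT_GUARDS = [
    "copy space stays flat and empty, no texture",          # keeps the overlay area readable
    "subject occupies 40% of frame",                        # prevents the subject rendering too small
    "full subject visible, nothing cropped at the edges",   # edge-crop guard for wide ratios
]


def with_guards(prompt: str) -> str:
    return ", ".join([prompt, *DRIFT_GUARDS])


print(with_guards("ceramic mug with steam rising, copy space top-left, camera angle eye level"))
```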

Closing

The library compounds when you keep only the prompts that survive a re-test against the current model. That is the real unlock - not the model itself, but the discipline of pruning what no longer works.
