
M. AI. for Mausa AI

Posted on • Originally published at mausa.ai

From a rambling brief to a finished ad: what one chat actually looks like

Most AI tools punish messy briefs. Type a long, conversational paragraph with three changes of mind and a reference that depends on earlier context, and they answer one part and lose the rest. The thing nobody talks about is how human a real creative brief actually is.

This post walks through one Mausa AI chat that takes a rambling, agency-style brief and turns it into a finished 16-second ad — hero image, logo, demo video, composite mockup, voiceover, music mix — without anyone opening a second tool.

The product is called "A Table" — a fictional wall-mounted, fold-down USB-C desk we made up for the demo. Don't try to buy it.

The brief

Here is the entire input.

The user's full brief, asking Mausa AI to generate a hero image, logo, demo video, composite marketing image, narration, voiceover, and final assembled video for a fictional product called A Table.

Read it once. It is not a prompt. It is a creative brief — multi-objective, conversational, with internal references ("the table image we created"), a tone request ("excited female voice"), and a very human hedge at the end ("can you begin with a plan and the first steps?"). The kind of thing you would type into Slack to a junior creative, not into an AI tool.

This is the part most people do not expect to work. Single-shot AI tools usually need you to break that into ten sequential prompts, switching apps three times. Here it just becomes the input.

The plan

Mausa AI does not start generating. It plans first.

Mausa AI returning a 7-step task list — hero image, logo, demo video, composite, narration, voiceover, final assembly — with the first step active and a 0% progress bar.

Seven steps, in order. Hero image, logo, demo video, composite marketing image, narration script, voiceover, final assembly. A small but meaningful detail: it picks an aesthetic — "tech-forward dark, premium, moody, clean Scandinavian lines" — without asking. That is the job of a creative director, not a tool: read the brief and make a judgment call so you do not have to.

The progress bar starts at 0%. The first task lights up. The chat keeps moving.

Mid-flight pivots are easy

Halfway through, I changed my mind. The original brief asked for a video of a woman wrapping the table — fine, but I wanted a continuation showing the table folding back into the wall, starting from the exact last frame of the first clip.

User asking Mausa AI to take the last frame of the first video and animate a continuation where the table folds vertically. Mausa AI accepts and loads the ffmpeg playbook.

That is a tooling-heavy ask. You need to extract the last frame of the first video, feed it into the video model as a conditioning image, generate a coherent continuation that matches the lighting, color, and physics of the original, and end up with two clips that join cleanly.

In Mausa AI, you just say it. "Now let's get the last frame of this video, and create a new video starting from the last frame of this one." The agent loads the right tool playbook and goes. No format dance. No upload step. No "what tool should I use to extract a frame?"
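For the curious, the frame-extraction half of that ask is a one-liner in ffmpeg. A minimal sketch, with hypothetical filenames and a synthetic `testsrc` clip standing in for the generated video:

```shell
# Stand-in for the generated clip (testsrc is a synthetic test pattern;
# all filenames here are hypothetical).
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=320x240:rate=10 first_clip.mp4

# Grab one frame from roughly the last 0.3s of the clip. This is the
# image you would hand the video model as its conditioning frame.
ffmpeg -y -loglevel error -sseof -0.3 -i first_clip.mp4 -frames:v 1 -update 1 last_frame.png
```

The continuation itself is the video model's job; ffmpeg only supplies the seed frame that keeps lighting and framing consistent across the cut.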

Joining the clips

When I asked it to join the two clips, it noticed something I had not.

Mausa AI flagging an aspect-ratio mismatch (277:207 vs 16:9) and normalizing both clips to 16:9 before joining. The result is a clean 10-second loop showing the wall-mounted table.

The two clips had different aspect ratios — 277:207 vs 16:9. If you concat them naively, the second clip will letterbox or stretch, and your final cut looks broken. Mausa AI flagged the mismatch on its own and normalized both to 16:9 before joining. Ten seconds total: deploy the table, use it, fold it back up.

The point is not that this is hard to do — it is not, if you are an editor. The point is that it is the kind of detail that gets missed when you are juggling six tools yourself, and Mausa AI was the one paying attention.
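If you were doing that normalization yourself, the usual ffmpeg recipe is scale-to-fit, pad to 16:9, then concat. A sketch under assumed filenames and a 640x360 target, with synthetic clips standing in for the two generated ones:

```shell
# Stand-ins for the two clips, deliberately at mismatched aspect ratios
# (~277:207 vs 16:9); names and sizes are hypothetical.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=554x414:rate=10 clip1.mp4
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=640x360:rate=10 clip2.mp4

# Scale each clip to fit inside 16:9, pad the leftover area, reset the
# sample aspect ratio so the streams are concat-compatible, then join.
ffmpeg -y -loglevel error -i clip1.mp4 -i clip2.mp4 -filter_complex \
  "[0:v]scale=640:360:force_original_aspect_ratio=decrease,pad=640:360:(ow-iw)/2:(oh-ih)/2,setsar=1[v0];\
   [1:v]scale=640:360:force_original_aspect_ratio=decrease,pad=640:360:(ow-iw)/2:(oh-ih)/2,setsar=1[v1];\
   [v0][v1]concat=n=2:v=1:a=0[v]" -map "[v]" joined.mp4
```

Padding (letterboxing deliberately, centered) is the conservative choice here; cropping to fill is the other option, at the cost of losing edges of the frame.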

The composite

Next: the marketing composite. The brief had asked for the hero image with a logo top-right, headline copy, bullet features, a price, all stacked into one frame.

A finished marketing composite — dark moody mockup with the A Table logo top-right, INTRODUCING headline, three feature bullets, STARTING FROM €70 price, and a SHOP NOW button.

This is the kind of image that lives on a landing page or a paid social slot. The structure is conventional: dark moody background, product centered, brand mark top-right, lockup of selling points lower-left, a price callout, a CTA button. What is interesting is how unremarkable this is to produce — same chat, same conversation, no app switch.

The cut order, and a small disagreement

When I described how I wanted the final video assembled — logo, demo, composite, logo, with each still on screen for two seconds — Mausa AI pushed back.

User proposing the cut order with @-mentions of clips and 2-second holds. Mausa AI agreeing on logo timing but bumping the composite to 3 seconds, with a reason about info density.

It accepted the two-second logo holds. But it bumped the composite to three seconds, with a reason: "there's a lot of info to read — headline, features, price — and 2s feels rushed."

It is a small thing. But it is the only moment in the whole chat where Mausa AI tells me I am a little wrong. A pure execution tool would have done what I said. A collaborator points out when an instruction is going to make the output worse. The output gets better because of the disagreement, not in spite of it.

Final timing: logo (2s) → video (10s) → composite (3s) → logo (2s) = ~17s.
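That cut order maps directly onto an ffmpeg assembly: `-loop 1 -t N` turns each still into a timed clip, and the concat filter strings the pieces together. A hedged sketch with synthetic stand-ins for the logo card, composite, and demo clip (the real demo runs 10s; it is shortened to 2s here so the example stays quick):

```shell
# Synthetic stand-ins; every filename is hypothetical.
ffmpeg -y -loglevel error -f lavfi -i color=c=black:size=640x360 -frames:v 1 logo.png
ffmpeg -y -loglevel error -f lavfi -i color=c=gray:size=640x360 -frames:v 1 composite.png
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=640x360:rate=30 demo.mp4

# logo (2s) -> demo -> composite (3s) -> logo (2s). -loop 1 -t N holds a
# still for N seconds; fps/format/setsar make all four streams
# concat-compatible (same rate, pixel format, sample aspect).
ffmpeg -y -loglevel error \
  -loop 1 -t 2 -i logo.png -i demo.mp4 -loop 1 -t 3 -i composite.png -loop 1 -t 2 -i logo.png \
  -filter_complex \
  "[0:v]fps=30,format=yuv420p,setsar=1[a];[1:v]fps=30,format=yuv420p,setsar=1[b];\
   [2:v]fps=30,format=yuv420p,setsar=1[c];[3:v]fps=30,format=yuv420p,setsar=1[d];\
   [a][b][c][d]concat=n=4:v=1:a=0[v]" -map "[v]" final_cut.mp4
```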

The final cut

The last beat is the music mix.

The finished 16-second ad in the chat — A TABLE logo card on a black background — with a note that music sits at 7% volume and fades out from second 12 to 15 so the voiceover stays front and center.

Sixteen seconds of finished video. Music at 7% volume, fading out from second 12 to second 15 so the voiceover sits cleanly on top through the back half. That is not a setting I touched — Mausa AI defaulted to a mix that prioritizes the VO, and offered one-tap variations for the alternatives I might want ("a touch louder", "fade out sooner", "add intro fade-in").
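In ffmpeg terms, that mix is a `volume` filter plus an `afade` on the music bed before summing with `amix`. A sketch with synthetic sine tones standing in for the real voiceover and music; note that `amix` also rescales its inputs by default, so the exact levels here are illustrative, not a faithful reproduction of the agent's mix:

```shell
# Hypothetical stand-ins: 15s tones in place of the VO and music tracks.
ffmpeg -y -loglevel error -f lavfi -i sine=frequency=440:duration=15 voiceover.wav
ffmpeg -y -loglevel error -f lavfi -i sine=frequency=220:duration=15 music.wav

# Music ducked to 7% and faded out across seconds 12-15 (start=12s,
# duration=3s), then summed with the untouched voiceover.
ffmpeg -y -loglevel error -i voiceover.wav -i music.wav -filter_complex \
  "[1:a]volume=0.07,afade=t=out:st=12:d=3[m];[0:a][m]amix=inputs=2:duration=first[a]" \
  -map "[a]" mix.wav
```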

If I want a different mix, I tap one. If the cut is right, I keep it.

What was missing from this workflow

Look at what did not happen:

  • I did not open a second app.
  • I did not manage references between tools.
  • I did not extract a frame manually.
  • I did not normalize aspect ratios.
  • I did not compose the marketing image in a layout app.
  • I did not sequence clips in an editor.
  • I did not mix audio.

The chat carried the work between steps. I handled the brief, the redirects, and the final approval. That is the part that is actually mine — the editorial calls. Everything else is plumbing.

Why this matters

Most creative AI tools today are good at one step. The moment you need three steps, you are tool-hopping, and the cost of creative work is not generation — it is the round-trips between brief, tool, output, review, and fix.

Mausa AI's bet is that creative work is one continuous conversation. You brief once. You redirect mid-flight. The agent plans, walks through the work step by step, and shows you the seams — the aspect ratio it caught, the timing it bumped, the music level it picked — so you can correct or accept.

The one tool you ever open is the chat.
