A few weeks ago I posted about building a transcription tool. The responses were helpful. A few people asked what I do with the transcripts afterward.
Honest answer: for a while, not much.
I'd get this wall of text with speaker labels and timestamps, and then... stare at it. The transcription part worked. But a raw transcript is like having all the ingredients dumped on your counter. You still have to cook.
## The problem I kept running into
I do interviews for work. Marketing stuff mostly. The goal is usually to turn a 45-minute conversation into something publishable—a blog post, social clips, whatever.
So I'd paste the transcript into Claude or ChatGPT and say something like "turn this into a blog post."
The output was... fine? Generic. It would summarize instead of pulling actual quotes. It'd lose the person's voice. I'd spend an hour fixing it and think "I could've just written this myself."
Same thing with meeting notes. "Summarize this meeting" gets you a summary. But what I actually needed was: what did we decide, who's doing what, and what's the follow-up. Different problem.
## So I started building prompts
Not because I planned to. I just kept tweaking the same prompts over and over until they actually worked.
The blog post one took the longest. I needed it to:
- Keep the interviewee's actual voice (not sanitize everything into corporate speak)
- Pull real quotes, not paraphrase everything
- Structure it like a real article, not a book report
- Lead with something interesting, not "In this interview, we discussed..."
That one prompt went through probably 15 versions before it stopped annoying me.
Then I built one for meeting summaries that extracts decisions and action items separately. One for turning podcasts into social posts. One for cleaning up the speaker labels in raw transcripts.
At some point I looked up and had 90+ of them.
## What I learned about prompting
Most of my early prompts were too vague. "Summarize this" doesn't tell the model what you actually care about.
The ones that work best are almost annoyingly specific:
- What's the exact output format?
- What should it include vs. ignore?
- What tone? What length?
- What questions should it ask me before it starts?
That last one was a breakthrough. The best prompts don't just run—they clarify first. "Before I process this, tell me: how many speakers, what are their names, what's the context?"
Turns out you get way better output when the model understands what it's working with.
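To make that concrete, here's a rough sketch of what the "be annoyingly specific, and clarify first" pattern looks like as a reusable template. The field names and wording below are mine for illustration, not the actual prompts from the repo:

```python
# Sketch of a "specific + clarify-first" prompt template.
# All field names and wording here are illustrative assumptions.

def build_prompt(task: str, output_format: str, include: str,
                 ignore: str, tone: str, length: str) -> str:
    """Assemble a prompt that pins down format, scope, tone, and length,
    and asks clarifying questions before doing any processing."""
    return (
        f"Task: {task}\n"
        f"Output format: {output_format}\n"
        f"Include: {include}\n"
        f"Ignore: {ignore}\n"
        f"Tone: {tone}\n"
        f"Target length: {length}\n"
        "Before you start, ask me: how many speakers are in the "
        "transcript, what are their names, and what's the context?"
    )

prompt = build_prompt(
    task="Turn this interview transcript into a blog post",
    output_format="Markdown article with a headline and pull quotes",
    include="direct quotes, specific anecdotes",
    ignore="small talk, scheduling chatter",
    tone="conversational; keep the interviewee's voice",
    length="800-1200 words",
)
```

The point isn't the template itself; it's that every question from the list above gets answered explicitly instead of left for the model to guess.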
## Some that ended up being useful
A few I keep coming back to:
- Transcript cleaner — Takes raw output with "Speaker 0" and "Speaker 1" labels and turns it into something readable with real names and proper formatting. Sounds trivial but it's the one I use most.
- Interview → blog post — Extracts the interesting parts of a conversation and structures them into an actual article. Keeps quotes intact. Writes transitions that don't sound like AI wrote them (usually).
- Meeting action items — Pulls out decisions, tasks, and owners from a meeting transcript. Ignores the 40 minutes of small talk to find the 5 things that actually matter.
- Podcast social package — Generates a batch of social posts from an episode transcript. Quote cards, discussion questions, that kind of thing.
I also built some weirdly specific ones for legal transcripts (deposition analysis, contradiction detection) that I'm not sure anyone else needs. But they exist.
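The transcript cleaner is actually a prompt, not a script, but the core transformation it does is simple enough to sketch in code. This assumes the "Speaker N:" label format my tool emits; your transcription output may differ:

```python
import re

# Minimal sketch of the "transcript cleaner" idea: swap generic
# diarization labels for real names. The "Speaker N" label format
# and the name map are assumptions about the input, not a spec.

def clean_transcript(raw: str, names: dict[str, str]) -> str:
    """Replace 'Speaker N' labels with real names wherever they appear;
    labels with no mapping are left untouched."""
    def swap(match: re.Match) -> str:
        return names.get(match.group(0), match.group(0))
    return re.sub(r"Speaker \d+", swap, raw)

raw = "Speaker 0: Thanks for joining.\nSpeaker 1: Happy to be here."
print(clean_transcript(raw, {"Speaker 0": "Dana", "Speaker 1": "Sam"}))
# Dana: Thanks for joining.
# Sam: Happy to be here.
```

The prompt version does more than this (fixing punctuation, merging fragments), which is exactly why it's worth handing to an LLM instead of a regex.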
## Where they live now
I put them on GitHub and linked them from the transcription site:
https://brasstranscripts.com/ai-prompt-guide
They're organized by use case. Some have full write-ups explaining how to use them, others are just the prompt.
They work with Claude, ChatGPT, Gemini—whatever. The transcript format matters more than which model you use.
## Still iterating
Some of these are solid. Others I'm still not happy with. The social media ones especially—getting an LLM to write something that doesn't sound like an LLM wrote it is its own challenge.
If you've built prompts for processing transcripts (or any structured text, really), I'm curious what approaches have worked for you. The "ask clarifying questions first" pattern has been the biggest improvement for me, but I'm sure there are techniques I haven't tried.