Most freelancers know the feeling: a client sends a messy email, and you spend 40 minutes writing a proposal that feels generic the moment you hit send.
I spent a few weeks trying to fix this for myself. Along the way I learned some things about what actually makes AI proposal tools useful versus just fast. Writing them down in case someone else is working on similar problems.
## The problem with "AI proposal generators"
Most tools in this space take one of two approaches:
- Template library + fill-in-the-blanks. You pick "Web design proposal" and get back a generic doc with [CLIENT_NAME] and [SCOPE] tokens to replace. This produces something faster than a blank page, but the output reads like a template because it is one.
- ChatGPT-style one-shot generation. You paste a brief, you get a proposal. This works surprisingly well on a clean brief, but it has no memory of what you've written before, no understanding of your voice, and it'll happily invent a timeline the client never mentioned.
Neither approach actually reads the brief. They treat it as a seed, not as a source of truth.
## What reading the brief actually means
Real client briefs are messy. Here's one I tested with (paraphrased):
"Hey, we run a small dental practice in Austin. Current site is from 2015 and looks terrible on mobile. Need a redesign — maybe 5-6 pages, online booking, a blog section. Budget around $4-5k. Want to launch before January. Can you send a proposal?"
Inside this paragraph there are at least 7 facts the proposal MUST preserve:
- Location: Austin
- Vertical: dental practice
- Existing pain: 2015 site, bad on mobile
- Page count: 5-6
- Features: online booking, blog
- Budget range: $4-5k
- Deadline: before January
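To make "lock these facts" concrete, here's what an extracted context block for this brief might look like. This is an illustrative sketch — the field names and the `BriefContext` shape are mine, and in practice the schema would vary by service type:

```typescript
// A sketch of the structured context block extracted before any writing happens.
// Values are kept verbatim from the brief — never rounded or paraphrased.
interface BriefContext {
  serviceType: string;
  vertical: string;
  location: string;
  painPoints: string[];
  deliverables: string[];
  budgetRange: string; // verbatim, e.g. "$4-5k"
  deadline: string;    // verbatim, e.g. "before January"
}

const dentalBrief: BriefContext = {
  serviceType: "website redesign",
  vertical: "dental practice",
  location: "Austin",
  painPoints: ["site from 2015", "looks terrible on mobile"],
  deliverables: ["5-6 pages", "online booking", "blog section"],
  budgetRange: "$4-5k",
  deadline: "before January",
};
```

The point of the structure is that every field is checkable: you can diff the final draft against these values instead of rereading the whole output.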
If your proposal tool doesn't lock these facts and carry them through to the output, it's gaslighting the user. The freelancer then has to read the entire AI output just to verify it didn't hallucinate a different budget.
## What I built
I'm calling it ProposalFlow. It works roughly like this:
- User pastes the brief verbatim.
- The model is instructed to extract a structured context block (service type, budget, timeline, audience, platform, deliverables) before writing anything.
- Generation is constrained to use those extracted values exactly — not paraphrase them, not round the budget, not shift the deadline.
- The output is a five-section draft: diagnosis / approach / investment / timeline / next steps. Short paragraphs, no headings, email-ready.
- One-click tone rewrites (confident / shorter / more natural / add deliverables) that preserve all the locked facts.
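The simplest way I've found to enforce the "locked facts" constraint after generation — including after a tone rewrite — is a plain string check: if an extracted value doesn't appear verbatim in the draft, flag it before the user ever sees it. A minimal sketch (the `missingFacts` helper is my own, not ProposalFlow's actual code):

```typescript
// Check that every locked fact from the brief survives in the generated draft.
// Returns the facts that are missing so the draft can be regenerated or flagged.
function missingFacts(draft: string, lockedFacts: string[]): string[] {
  const normalized = draft.toLowerCase();
  return lockedFacts.filter((fact) => !normalized.includes(fact.toLowerCase()));
}

const locked = ["Austin", "$4-5k", "before January", "online booking"];
const draft =
  "Your current site hurts you most on mobile. I'd rebuild it as a 5-6 page " +
  "site with online booking and a blog, for $4-5k, launching before January.";

missingFacts(draft, locked); // flags "Austin" — the draft dropped the location
```

Exact substring matching is deliberately dumb: it produces false positives when the model legitimately rephrases (e.g. "$4,000-$5,000"), but a false positive just triggers a regenerate, while a false negative ships a hallucinated budget.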
The part that took the longest wasn't the prompt — it was collecting 20+ real briefs from freelancer friends so I had ground truth to test against. Synthetic briefs are too clean. Real briefs have typos, contradictions, and unmentioned assumptions.
## The prompt structure (simplified)
```typescript
const systemPrompt = `
You are an expert freelance proposal writer. Extract concrete facts
from the brief before writing. Never invent a budget, timeline, or
scope the brief does not contain. If information is missing, say
"to be confirmed" rather than guessing.

Structure the draft as:
- Opening (one-sentence diagnosis of the real problem)
- Approach (how you would tackle it)
- Investment (exact figures from the brief)
- Timeline (exact dates or ranges from the brief)
- Next steps (concrete, actionable)

Detect the language of the brief and respond in that same language.
`;
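Wiring this into a chat-style completion is then just message assembly. A sketch assuming an OpenAI-style `messages` array — `buildMessages` and the shortened prompt here are illustrative, not the production code:

```typescript
// The message shape follows the common OpenAI-style chat API.
type ChatMessage = { role: "system" | "user"; content: string };

// Abbreviated version of the system prompt shown above.
const system = `You are an expert freelance proposal writer. Extract concrete
facts from the brief before writing. Never invent a budget, timeline, or
scope the brief does not contain.`;

// The brief is passed verbatim: extraction should run against the source
// of truth, never a paraphrase of it.
function buildMessages(brief: string): ChatMessage[] {
  return [
    { role: "system", content: system },
    { role: "user", content: `Client brief (verbatim):\n\n${brief}` },
  ];
}
```

Keeping the brief in the user message untouched matters more than it looks: any preprocessing you do is a chance to silently drop one of the locked facts before the model ever sees it.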