The Grind (That Led Nowhere)
Six months ago, I was in full builder mode.
PropTools - A Bootstrap site comparing futures prop firms. Spent months building it. Clean UI. Good data. Launched it.
Then discovered PropTraderTools already existed. Exact same concept.
Modryn Studio - Weeks building a Next.js marketing site for my web development company. TypeScript. Tailwind. Beautiful animations.
Zero clients.
Client Portal - Built an entire customer management system. Authentication. Dashboards. The works.
Problem? No customers to manage.
I kept building. Nothing was working.
The Pivot (That Also Failed)
"Small businesses need websites. I'll build those."
I created a CLI tool with templates for contractors, restaurants, salons. Could generate a professional site in 2 hours.
Built a free salon website for a friend. She loved it.
Perfect. Time to scale.
Except cold calling isn't my thing. Couldn't sell a single site.
I was stuck:
- ✅ Fast at building
- ❌ Terrible at selling
- ❌ No idea what to build next
The Experiment That Changed Everything
Instead of building another product, I tried something different.
I asked GPT & Claude: "Give me coding tests. Small projects. I want to know what kind of developer I am."
I got specs. Real, detailed specifications:
PROJECT: Task Timer CLI
REQUIREMENTS:
- Start/stop timers with task names
- Store data in SQLite
- Export to CSV
- Show daily summaries
CONSTRAINTS:
- No GUI, CLI only
- < 500 lines of code
- Single executable
SUCCESS CRITERIA:
- User can track 3 tasks simultaneously
- Data persists across sessions
I finished it in 23 minutes.
Wait, what?
The same type of project that used to take me days.
The Pattern
I asked for more specs. More projects.
- ToDo API: 18 minutes
- Markdown blog generator: 47 minutes
- URL shortener: 31 minutes
The difference wasn't my coding ability.
It was the specs.
When I had a clear specification:
- ✅ No analysis paralysis
- ✅ No scope creep
- ✅ No "wait, what am I building?"
- ✅ Just execute
When I wrote my own specs, I'd spend 2-3 hours staring at blank pages, second-guessing decisions, ending up with vague notes.
The Real Problem
I didn't need to learn more frameworks.
I didn't need better design skills.
I didn't need more business ideas.
I needed better specs.
But writing good specs myself? That was the hard part.
So I built a tool to do it for me.
Introducing SpecifyThat
A conversational spec generator that asks the 13 critical questions every project needs answered.
How It Works
- 13 Targeted Questions - Each maps to a section of a technical spec (sketched below)
- AI Gap Filling - Click "I don't know" for expert suggestions
- 10-Minute Specs - From vague idea to executable spec
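Under the hood, each question maps to one section of the generated spec, in the same shape as the Task Timer example above. A simplified sketch of that mapping (field names are illustrative, not the production schema):

// Illustrative question-to-section mapping (not the production schema)
interface SpecQuestion {
  id: number;
  prompt: string;
  specSection: string; // which part of the final spec this answer fills
}

const QUESTIONS: SpecQuestion[] = [
  { id: 1, prompt: 'What are you building?', specSection: 'PROJECT' },
  { id: 2, prompt: 'Describe the project.', specSection: 'SUMMARY' },
  { id: 3, prompt: 'What must it do?', specSection: 'REQUIREMENTS' },
  { id: 4, prompt: 'What are the hard limits?', specSection: 'CONSTRAINTS' },
  { id: 5, prompt: 'How do you know it works?', specSection: 'SUCCESS CRITERIA' },
  // ...8 more, one per remaining spec section
];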
The Meta Part
I used Claude to generate a spec for SpecifyThat. Then I built SpecifyThat using that spec.
In other words: the tool was built from exactly the kind of spec it now generates.
The Tech Stack
Frontend: Next.js 14 + TypeScript
AI: Claude Sonnet 4 (Anthropic API)
Styling: Tailwind CSS
Deployment: Vercel
Build Time: 2 days
Key Technical Decisions
1. Claude's tool_use for Structured Outputs
Instead of parsing text responses (which fail ~15% of the time), I used Anthropic's tool definitions:
import Anthropic from '@anthropic-ai/sdk';

// Reads ANTHROPIC_API_KEY from the environment
const anthropic = new Anthropic();

// JSON Schema the model must fill in; no free-text parsing needed
const analyzeProjectTool: Anthropic.Messages.Tool = {
  name: 'analyze_project',
  description: 'Analyze a project description',
  input_schema: {
    type: 'object',
    properties: {
      type: {
        type: 'string',
        enum: ['single', 'multiple']
      },
      summary: { type: 'string' },
      units: {
        type: 'array',
        items: {
          type: 'object',
          properties: {
            id: { type: 'number' },
            name: { type: 'string' },
            description: { type: 'string' }
          }
        }
      }
    },
    required: ['type']
  }
};

// prompt comes from buildAnalysisPrompt(), shown in the next section
const message = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024, // the Messages API requires max_tokens
  tools: [analyzeProjectTool],
  // tool_choice forces the model to call this tool on every request
  tool_choice: { type: 'tool', name: 'analyze_project' },
  messages: [{ role: 'user', content: prompt }]
});
Result: 100% valid JSON responses. Zero parsing failures.
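Reading the result is a property access on the tool_use content block instead of a JSON.parse that can throw. A minimal sketch of the extraction (the AnalysisResult interface appears later in this post):

// The forced tool call comes back as a tool_use content block
const toolUse = message.content.find((block) => block.type === 'tool_use');

if (toolUse && toolUse.type === 'tool_use') {
  // input already matches input_schema; the cast is for convenience
  const analysis = toolUse.input as AnalysisResult;
  console.log(analysis.type); // 'single' or 'multiple'
}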
2. XML Wrappers Prevent Prompt Injection
User input gets wrapped in XML tags before sending to the AI:
function buildAnalysisPrompt(userInput: string): string {
  return `Analyze the following project description:

<user_project_description>
${userInput}
</user_project_description>

Determine if this is one buildable unit or multiple...`;
}
This prevents users from manipulating the AI's responses with instructions hidden in their project descriptions.
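One edge the snippet above doesn't show: a user could type the closing tag themselves to break out of the wrapper. A small guard worth layering on top (a sketch, not necessarily what's deployed) strips the delimiter before wrapping:

// Remove any user-supplied wrapper tags so input can't escape the XML block.
// Hypothetical helper; call it on userInput before buildAnalysisPrompt().
function sanitizeForWrapper(userInput: string): string {
  return userInput.replace(/<\/?user_project_description>/gi, '');
}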
3. Gibberish Detection Saves Money
Launch day: Users typed "asdfasdf" and got AI-generated responses.
Added validation:
function isGibberishInput(text: string): boolean {
  // Check vowel ratio
  const vowels = text.match(/[aeiou]/gi)?.length || 0;
  const vowelRatio = vowels / text.length;
  if (vowelRatio < 0.08) return true;

  // Check for repeated patterns
  const pattern = /(.)\1{4,}/;
  if (pattern.test(text)) return true;

  // Check for keyboard mashing
  const keyboards = ['qwertyuiop', 'asdfghjkl', 'zxcvbnm'];
  for (const kb of keyboards) {
    if (text.toLowerCase().includes(kb)) return true;
  }

  return false;
}
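The check runs server-side before any Anthropic call, so junk input never costs a token. A sketch of the gate in a Next.js route handler (the route path and analyzeProject helper are my stand-ins, not the real code):

// app/api/analyze/route.ts (illustrative path)
import { NextResponse } from 'next/server';

declare function analyzeProject(description: string): Promise<unknown>; // stand-in

export async function POST(request: Request) {
  const { description } = await request.json();

  // Reject gibberish before spending money on a model call
  if (!description || isGibberishInput(description)) {
    return NextResponse.json(
      { error: 'Please describe your project in more detail.' },
      { status: 400 }
    );
  }

  const analysis = await analyzeProject(description);
  return NextResponse.json(analysis);
}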
4. Multi-Unit Detection
Complex projects get automatically detected and broken down:
interface AnalysisResult {
  type: 'single' | 'multiple';
  summary?: string;
  units?: {
    id: number;
    name: string;
    description: string;
  }[];
}
When a user describes something like "a SaaS with auth, billing, and dashboards," the AI detects 3 buildable phases and lets them spec one at a time.
Why this matters: Scope creep kills projects. Breaking them into shippable units increases completion rates.
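Branching on that result is then trivial. A sketch of the consuming side (the two UI entry points are hypothetical names, not the real components):

// Hypothetical UI entry points (stand-ins for the real components)
declare function startSpecFlow(summary: string): void;
declare function showUnitPicker(units: NonNullable<AnalysisResult['units']>): void;

function handleAnalysis(result: AnalysisResult) {
  if (result.type === 'single') {
    // One buildable unit: go straight into the 13 questions
    startSpecFlow(result.summary ?? '');
  } else {
    // Multiple units detected: let the user pick one phase to spec first
    showUnitPicker(result.units ?? []);
  }
}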
Features I Didn't Plan (But Users Needed)
The "I Don't Know" Button
Every question includes an "I don't know" option that hands the decision to the AI, which responds with an expert suggestion.

Insight: Developers don't struggle with coding. They struggle with making spec decisions.
Ideation Mode
Some users clicked "I don't know" on Q2 (project description) because they literally didn't have a clear idea yet.
Added a 3-step discovery wizard:
- What problem frustrates you?
- Who will use this?
- What category? (9 options with icons)
The AI synthesizes these answers into a project description, which the user can edit before proceeding.
Result: Even vague ideas become actionable specs.
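The synthesis step is just one more prompt, with the three answers wrapped the same way as everything else. A sketch (wording and tag names are approximate, not copied from production):

// Turn the three wizard answers into an editable project description.
// Prompt wording and tags are approximate.
function buildIdeationPrompt(problem: string, audience: string, category: string): string {
  return `Write a short project description from these answers:

<problem>${problem}</problem>
<audience>${audience}</audience>
<category>${category}</category>

Keep it to one paragraph, in the user's own voice, so they can edit it.`;
}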
File Upload Support
Users wanted to paste existing docs. Added support for .txt and .md files (up to 10MB).
const handleFileUpload = async (file: File) => {
  // Read the uploaded .txt/.md file in the browser
  const content = await file.text();
  // userInput is the textarea value held in component state
  await analyzeProject(userInput, content);
};
The AI analyzes both the text input AND the attached document together.
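Combining the two is just a second XML-wrapped block in the same prompt, along the lines of buildAnalysisPrompt above (the tag name is my guess):

// Extend the analysis prompt with the uploaded document as a second block.
// <attached_document> is illustrative; the real tag may differ.
function buildAnalysisPromptWithDoc(userInput: string, doc: string): string {
  return `Analyze the following project description and attached document:

<user_project_description>
${userInput}
</user_project_description>

<attached_document>
${doc}
</attached_document>

Determine if this is one buildable unit or multiple...`;
}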
What I Learned
1. Good Specs Are Force Multipliers
With a clear spec, I can build in hours what used to take days.
The bottleneck isn't coding ability. It's specification clarity.
2. Users Know When They Don't Know
People struggle with decisions, not execution.
3. Build What You Need First
I built SpecifyThat because I needed it. Every feature exists because I ran into that problem myself.
That's why it works. I'm the target user.
4. Ship Fast, Iterate Later
2 days from idea to production. Bugs and all.
Bugs I fixed:
- Gibberish input validation (added after seeing "asdfasdf")
- Back button on multi-unit screen (wasn't clearing state)
- Character counter on Q1
The 6-Month Lesson
Old approach:
- Spend months building
- Launch with no audience
- Wonder why nobody cares
- Repeat
New approach:
- Identify YOUR problem
- Build the solution fast
- Ship publicly
- Let the market tell you if it matters
SpecifyThat exists because I wasted 6 months learning this lesson the hard way.
Try It Yourself
SpecifyThat is free, requires no login, and works on mobile.
Try it: https://specifythat.com
Open source: https://github.com/modryn-studio/specifythat
What's your biggest challenge when planning projects?
Do you spend more time planning or building? Let me know in the comments.