🔥 Think you can handle AI roasting your web dev portfolio? Give it a shot and get Lighthouse performance metrics plus design feedback based on your actual site design.
As an AI engineer and web developer, I wanted to build a product that would be both technically challenging and fun. So I built Roast My Portfolio: an AI-powered web app that analyzes developer portfolio websites and, depending on the mode, either gives professional, actionable UX feedback or roasts them with brutal honesty and sarcasm.
It’s deployed live here if you think you can handle the feedback!
🧠 Tech at a Glance
| Feature | Stack |
|---|---|
| Frontend | React + Next.js + TypeScript (Vercel) |
| Backend API | FastAPI (Python), deployed on Railway |
| AI Models | Multi-mode LLM via Groq (Llama 3.3 70B) |
| Automation | Lighthouse CLI for web performance |
| Screenshot Capture | Browserless |
| Auth & DB | Supabase |
| Deployment | Vercel (FE), Railway (Dockerized BE) |
🚀 How It Works
1. User enters a portfolio URL and chooses "Roast" or "Serious" mode.
2. The backend scrapes the site, gathers its content, captures a screenshot, and generates a Lighthouse performance report.
3. All of that gets bundled into a prompt (plus persona-based styling) and sent to the AI model via Groq.
4. The AI returns structured JSON:

```json
{
  "score": 7,
  "feedback": "...",
  "visual_design_feedback": "...",
  "suggestions": [...]
}
```

5. The frontend displays the results in a custom UI with animated score graphics and metric visualizations.
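On the backend side, the model's reply has to be parsed into that shape before it reaches the frontend. Here's a minimal sketch of how that parsing could look; the class and function names are my own illustration, not the app's actual code:

```python
import json
from dataclasses import dataclass, field


@dataclass
class AnalysisResult:
    """Mirrors the JSON shape the model is asked to return."""
    score: int
    feedback: str
    visual_design_feedback: str
    suggestions: list = field(default_factory=list)


def parse_model_output(raw: str) -> AnalysisResult:
    # Models sometimes wrap JSON in markdown fences despite instructions,
    # so strip them defensively before parsing.
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    data = json.loads(cleaned)
    return AnalysisResult(
        score=int(data["score"]),
        feedback=data["feedback"],
        visual_design_feedback=data["visual_design_feedback"],
        suggestions=data.get("suggestions", []),
    )
```

Validating eagerly like this means a malformed model reply fails loudly at the API boundary instead of rendering as a broken card in the UI.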
🤖 AI Prompt Strategy
I built two prompts using LangChain templates:
🔥 Roast Mode – sarcastic, witty, comedic tone using the Groq API.
💼 Serious Mode – professional UX & design critique using Groq as well.
A key step was enforcing consistent JSON responses to keep parsing stable:

> Return ONLY valid JSON, no markdown or additional commentary.
This allowed reliable extraction of feedback, score, and suggestions.
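The two-mode setup boils down to swapping the persona while keeping the JSON rule constant. A plain-string sketch of the idea (the real project uses LangChain templates, and these persona strings are hypothetical):

```python
# The JSON rule stays fixed across modes so parsing never changes.
JSON_RULE = "Return ONLY valid JSON, no markdown or additional commentary."

# Illustrative personas -- not the app's actual prompts.
PERSONAS = {
    "roast": "You are a brutally sarcastic web design critic.",
    "serious": "You are a professional UX reviewer giving actionable feedback.",
}


def build_prompt(mode: str, page_text: str, lighthouse_summary: str) -> str:
    """Assemble persona + site data + output-format rule into one prompt."""
    persona = PERSONAS[mode]
    return (
        f"{persona}\n\n"
        f"Portfolio content:\n{page_text}\n\n"
        f"Lighthouse summary:\n{lighthouse_summary}\n\n"
        f"{JSON_RULE}"
    )
```

Putting the format rule last keeps it close to where the model starts generating, which in my experience helps it actually comply.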
💡 Challenges Solved
CORS & domain routing issues between Vercel and Railway.
Getting Docker builds to work with FastAPI and managing .env variables.
Handling AI output consistency (LLMs love adding extra flourish).
Running Lighthouse reports without blocking the API response.
Avoiding token overflow with prompt truncation.
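For the token-overflow point, the fix is to cap how much scraped content makes it into the prompt. A minimal truncation sketch, assuming a rough ~4-characters-per-token heuristic for English text (the budget numbers are illustrative, not the app's actual limits):

```python
# Crude but common heuristic: ~4 characters per token for English text.
APPROX_CHARS_PER_TOKEN = 4


def truncate_for_budget(text: str, max_tokens: int = 3000) -> str:
    """Trim scraped page text so the assembled prompt fits the context window."""
    max_chars = max_tokens * APPROX_CHARS_PER_TOKEN
    if len(text) <= max_chars:
        return text
    # Cut at a word boundary and flag the truncation so the model knows
    # it is seeing a partial page rather than the whole site.
    cut = text[:max_chars].rsplit(" ", 1)[0]
    return cut + "\n[...content truncated to fit the context window...]"
```

A proper tokenizer (e.g. the model's own) would be more precise, but a character budget is cheap and good enough to stop 413-sized prompts.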
📈 What’s Next
PDF export of roast results for sharing.
“Before/After portfolio improvements” tracker.
User accounts with analysis history.
If you're exploring AI-powered tooling or want feedback on your portfolio (painful or professional), give it a try.
👉 [Live Demo Link Here]
🎯 Would love to hear your feedback and roast suggestions!
📝 Final Thoughts
This project helped me bridge my front-end experience with backend AI engineering, and taught me a lot about:
AI model prompting strategies
LLM structured outputs
Integrating custom datasets (screenshot + Lighthouse + scraped content)
If you’re trying to make the jump into AI engineering — build something weird, something useful, or something hilarious. You’ll learn way faster.