TL;DR: I designed, developed, and deployed a fully functional portfolio web app without hand-writing a single line of code (100% vibe coded). But here's the twist: it's not just another AI project. It follows SOLID principles and security best practices, has 150+ test cases, an automated CI/CD pipeline, and runs on AWS. Let me show you how.
The Challenge
As a TPM working with AWS and Azure daily, I wanted a portfolio that reflected my engineering standards — not a template, not a drag-and-drop builder, but a properly architected production-grade application.
The constraint? Build it entirely through AI prompts using Claude.
The Stack
- Frontend: Next.js 14, TypeScript, Tailwind CSS, Framer Motion
- Testing: Vitest + React Testing Library (150+ tests)
- CI/CD: GitHub Actions → AWS Amplify
- Infrastructure: AWS Amplify (Hosting), API Gateway, Lambda, SNS
- AI Tools: Claude (development), Gemini (image generation)
Phase 1: Documentation First
Before any code, I created:
| Document | Purpose |
|---|---|
| PRD | Product requirements and user stories |
| TRD | Technical requirements and constraints |
| ARCHITECTURE.md | System design and component hierarchy |
| CI-CD.md | Pipeline architecture |
| ADRs | Architecture Decision Records |
Why this matters: AI assistants perform significantly better with context. These documents became the "memory" that kept Claude aligned across dozens of sessions.
Prompt pattern that worked:
"Read CLAUDE.md and docs/PLAN.md. We're starting Phase 3..."
Phase 2: SOLID Principles in Practice
Every component was built with intention:
Single Responsibility
```tsx
// Each component does ONE thing
<Hero />         // Landing section only
<ProjectCard />  // Single project display
<EmailCapture /> // Form handling only
```
Open/Closed
```ts
// Extend via props, don't modify
interface SectionProps {
  className?: string;   // Extensible
  children?: ReactNode; // Composable
}
```
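The same principle in plain TypeScript: behavior is extended by passing new props, never by editing the component itself. This is a minimal sketch; `sectionClasses` and the Tailwind class names are illustrative, not the project's actual code.

```typescript
// Base styles are closed to modification; callers extend via className.
interface SectionProps {
  className?: string; // extension point
}

function sectionClasses(base: string, props: SectionProps): string {
  // Caller-supplied classes come after the base ones so they can override.
  return [base, props.className].filter(Boolean).join(" ");
}

// Extending behavior without touching the function above:
const plain = sectionClasses("py-16", {});
const dark = sectionClasses("py-16", { className: "bg-slate-900" });
console.log(plain); // "py-16"
console.log(dark);  // "py-16 bg-slate-900"
```

A new variant means a new call site, not a new branch inside the component.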
Interface Segregation
```ts
// Small, focused interfaces
interface Project {
  title: string;
  description: string;
  tech: string[];
  image?: string; // Optional fields
  link?: string;
}
```
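One payoff of keeping the interface small: consumers can declare exactly the fields they use with `Pick`, so changes to unrelated fields can't break them. `projectSummary` is a hypothetical helper, not from the codebase; the `Project` interface is repeated so the snippet stands alone.

```typescript
interface Project {
  title: string;
  description: string;
  tech: string[];
  image?: string;
  link?: string;
}

// This consumer only needs two fields, and its signature says so.
function projectSummary(p: Pick<Project, "title" | "tech">): string {
  return `${p.title} (${p.tech.join(", ")})`;
}

console.log(projectSummary({ title: "Portfolio", tech: ["Next.js", "TypeScript"] }));
// "Portfolio (Next.js, TypeScript)"
```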
The result? Components that are testable, maintainable, and reusable.
Phase 3: Testing Strategy
I maintained a strict testing discipline throughout:
```bash
npm run test
# ✓ 150+ tests passing
# ✓ 17 test files
# ✓ Components, hooks, utilities covered
```
Key insight: When AI generates code, tests become your safety net. Every prompt included:

"...update related tests, then verify: npm run lint, npm run test, npm run build"
This caught regressions immediately — especially when refactoring UI components.
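To make this concrete, here is the kind of small unit those tests pin down. The real suite uses Vitest and React Testing Library; this dependency-free sketch uses a hypothetical `validateEmail` helper to show the pattern of asserting behavior an AI refactor must preserve.

```typescript
// Hypothetical utility of the kind each prompt had to keep covered by tests.
function validateEmail(value: string): boolean {
  // Deliberately simple: non-empty local part, "@", domain containing a dot.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}

// Behaviors a test locks in place across refactors:
console.log(validateEmail("ben@example.com"));    // true
console.log(validateEmail("not-an-email"));       // false
console.log(validateEmail("  ben@example.com ")); // true (input is trimmed)
```

When a generated refactor accidentally dropped the `trim()`, a test like this failed the very next `npm run test`.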
Phase 4: CI/CD Pipeline
The deployment architecture:
```text
┌─────────────┐     ┌───────────────────┐     ┌─────────────┐
│ Push to     │────▶│ GitHub Actions    │────▶│ AWS Amplify │
│ main branch │     │ (lint/test/build) │     │ (deploy)    │
└─────────────┘     └───────────────────┘     └─────────────┘
```
GitHub Actions handles:
- ESLint validation
- 150+ test execution
- Next.js static build
- Push to deploy branch
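A workflow along these lines covers those four steps. The file name, branch names, and Node version here are illustrative, not the project's actual configuration:

```yaml
# .github/workflows/ci.yml (illustrative sketch)
name: CI
on:
  push:
    branches: [main]
jobs:
  verify-and-promote:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint   # ESLint validation
      - run: npm run test   # 150+ tests
      - run: npm run build  # Next.js static build
      # Promote to the branch Amplify watches only if everything passed.
      - run: git push origin HEAD:deploy
```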
AWS Amplify handles:
- Automatic deployment on branch update
- CDN distribution
- SSL certificate management
- Custom domain configuration
The entire pipeline runs in under 3 minutes.
Phase 5: AWS Architecture
Contact Form Backend
```text
┌──────────┐     ┌─────────────┐     ┌────────┐     ┌─────┐
│ Contact  │────▶│ API Gateway │────▶│ Lambda │────▶│ SNS │
│ Form     │     │             │     │        │     │     │
└──────────┘     └─────────────┘     └────────┘     └─────┘
```
- API Gateway: RESTful endpoint with CORS configured
- Lambda: Node.js function for form processing
- SNS: Email notification delivery
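A sketch of how that Lambda fits together. The real handler would publish via `@aws-sdk/client-sns`; here the publish call is injected so the validation and response logic is testable without AWS. The field names and the CORS origin are assumptions, not the deployed code.

```typescript
// Sketch of the form-processing Lambda (publish injected for testability).
interface ContactForm {
  name?: string;
  email?: string;
  message?: string;
}

type Publish = (subject: string, body: string) => Promise<void>;

const CORS = {
  "Access-Control-Allow-Origin": "https://bennwokoye.com", // assumed origin
  "Content-Type": "application/json",
};

async function handleContact(rawBody: string, publish: Publish) {
  let form: ContactForm;
  try {
    form = JSON.parse(rawBody);
  } catch {
    return { statusCode: 400, headers: CORS, body: JSON.stringify({ error: "invalid JSON" }) };
  }
  // Server-side validation mirrors the client-side checks.
  if (!form.name?.trim() || !form.email?.includes("@") || !form.message?.trim()) {
    return { statusCode: 400, headers: CORS, body: JSON.stringify({ error: "missing fields" }) };
  }
  await publish(`Contact from ${form.name}`, `${form.email}\n\n${form.message}`);
  return { statusCode: 200, headers: CORS, body: JSON.stringify({ ok: true }) };
}
```

A unit test stubs `publish` and asserts on the status codes, so the whole request path is covered without ever touching SNS.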
Hosting
- Amplify: Static web hosting with built-in CI/CD
- Route53: DNS management
- ACM: SSL certificate (auto-managed by Amplify)
Total AWS cost: ~$1/month (mostly Route53 hosted zone)
Security Considerations
Even with AI-generated code, security wasn't negotiable:
- No secrets in code — Environment variables in Amplify
- HTTPS everywhere — Amplify-managed SSL
- Input validation — Client and server-side
- CORS configured — API Gateway restrictions
- No sensitive data exposure — Form data handled server-side
- Dependencies audited — npm audit in CI pipeline
Lessons Learned
What Worked
- Documentation-first approach — AI performs better with context
- Incremental prompts — Small, focused changes vs. massive rewrites
- Test requirements in every prompt — Catches regressions immediately
- Reference examples — "Make it look like tenable.com" beats any vague description
What Didn't
- Vague design requests — "Make it look professional" yields generic results
- Skipping verification — Always run lint/test/build before committing
- Trusting without reviewing — AI makes mistakes; review the diff
The Prompt Pattern That Worked
```text
Read [context files].

[Specific task description]

Requirements:
1. [Explicit requirement]
2. [Explicit requirement]
3. [Explicit requirement]

Verify: npm run lint, npm run test, npm run build
Update docs: CHANGELOG.md, FEATURES.md
```
This pattern maintained consistency across 50+ development sessions.
Results
| Metric | Value |
|---|---|
| Development time | ~2 weeks (evenings/weekends) |
| Lines of code | 5,000+ |
| Test coverage | 150+ tests |
| Lighthouse score | 90+ (Performance, SEO, Accessibility) |
| Monthly AWS cost | ~$1 |
| Manual code written | 0 lines |
Try It Yourself
The site is live at bennwokoye.com
Key takeaways for your own AI-assisted projects:
- Invest in documentation — It's context for your AI
- Enforce engineering standards — AI will follow them if you're explicit
- Test everything — Your safety net for AI-generated code
- Iterate visually — Screenshots help AI understand design issues
- Own the architecture — AI assists; you architect
Check out my other posts on dev.to/benzinoh.