Ever get that feeling when you submit a pull request at 3 AM and your senior dev's review comes back... a little too harsh? Well, I decided to lean into that nightmare and build something gloriously cursed.
What if your code reviewer was literally haunted?
Meet Cursed Code Reviewer: an AI-powered code review system that analyzes your code with the personality of a demonic senior developer who's been cursing bad code for centuries. Think GitHub's code review meets The Exorcist, with a splash of AWS serverless architecture.
The Haunting Begins
The concept was simple but deliciously dark: build a full-stack application that scans your code for issues and delivers feedback in the voice of a cursed, Halloween-themed senior developer. But here's where it gets interesting: I didn't just want to build it. I wanted to build it fast, with precision, and using AI to help me architect the entire thing.
That's where Kiro came in.
The Arsenal: Specs, Steering, Hooks, and MCP
1. Specs: The Blueprint from the Crypt
Instead of explaining my vision over and over, I created three spec documents in .kiro/specs/cursed-code-reviewer/:
**requirements.md** - Seven detailed user stories with acceptance criteria:
### Requirement 2
**User Story:** As a developer, I want to receive code feedback in a
demonic, Halloween-themed voice, so that code review is entertaining
and memorable
#### Acceptance Criteria
1. THE Cursed Code Reviewer SHALL deliver all feedback using dark
humor and Halloween-themed language
2. WHEN describing code issues, THE Cursed Code Reviewer SHALL use
demonic senior developer persona with phrases referencing curses,
spirits, and haunted themes
...
**design.md** - Complete architecture with:
- Mermaid diagrams showing data flow
- TypeScript interfaces for every component (see the sketch after this list)
- DynamoDB schema with GSIs
- Error handling strategies
- Halloween theme design system (color palette: `--cursed-black`, `--phantom-purple`, `--blood-red`)
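To give a flavor of those interfaces, here's a minimal sketch of what a scan-result shape might look like. The names (`CursedIssue`, `curseLevel`, and so on) are my illustration, not the actual definitions from design.md.

```typescript
// Hypothetical shapes for a scan and its issues -- illustrative only,
// not the project's actual design.md interfaces.
export type CurseLevel = 'minor' | 'major' | 'critical';

export interface CursedIssue {
  id: string;
  file: string;
  line: number;
  curseLevel: CurseLevel;                              // drives which persona responds
  category: 'security' | 'performance' | 'quality';
  technicalSummary: string;                            // the plain-English problem
  demonicFeedback?: string;                            // filled in later by DemonicOracle
}

export interface ScanResult {
  scanId: string;
  language: string;
  issues: CursedIssue[];
  createdAt: string;                                   // ISO timestamp
}
```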
**tasks.md** - A 322-line implementation plan with 14 major tasks, each broken into subtasks with requirement traceability:
- [x] 5. Build DemonicOracle Lambda with Bedrock integration
- [x] 5.1 Implement Bedrock client and prompt engineering
- Set up AWS Bedrock SDK client
- Create prompt templates for demonic feedback generation
- Implement personality selection based on severity
- Requirements: 2.1, 2.2, 2.3, 2.4
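For a sense of what task 5.1 involves, here's a minimal sketch of a Bedrock call in TypeScript. It assumes the standard `@aws-sdk/client-bedrock-runtime` client and the Anthropic messages format; the function name, prompt wording, and region are illustrative, not the project's actual DemonicOracle code.

```typescript
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from '@aws-sdk/client-bedrock-runtime';

const bedrock = new BedrockRuntimeClient({ region: 'us-east-1' });

// Ask Claude 3 Sonnet on Bedrock for demonic feedback on a code snippet.
export async function summonDemonicFeedback(
  code: string,
  severity: string
): Promise<string> {
  const command = new InvokeModelCommand({
    modelId: 'anthropic.claude-3-sonnet-20240229-v1:0',
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      anthropic_version: 'bedrock-2023-05-31',
      max_tokens: 1024,
      messages: [
        {
          role: 'user',
          content:
            `You are a demonic senior developer. Severity: ${severity}. ` +
            `Review this code with Halloween-themed dark humor, then state ` +
            `the real technical issue and fix:\n\n${code}`,
        },
      ],
    }),
  });

  const response = await bedrock.send(command);
  const payload = JSON.parse(new TextDecoder().decode(response.body));
  return payload.content[0].text; // Claude messages API returns a content array
}
```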
When I asked Kiro to build a feature, it referenced these specs automatically. No context lost. No repeated explanations. Just pure, spec-driven development.
2. Steering: Teaching AI to Think Like You
Steering documents are where the magic happens. These live in .kiro/steering/ and fundamentally shape how the AI thinks about your project.
**product.md** - Defined the product vision:
## Core Features
- **Code Scanning**: Analyzes code for security, performance, and quality
- **AI Feedback**: Generates demonic-themed code review comments using
AWS Bedrock (Claude)
- **Auto-Fix**: Generates patches to fix identified issues
## Key Terminology
- **SoulPool**: Cognito user pool (authentication)
- **TombstoneDB**: DynamoDB table (data storage)
- **SpectralAnalyzer**: Code scanning Lambda
- **DemonicOracle**: AI feedback Lambda using Bedrock
**tech.md** - Locked in the tech stack and patterns:
## Architecture
- **Frontend**: React 18 + TypeScript + Vite + TailwindCSS
- **Backend**: AWS Lambda + TypeScript (Node.js 20)
- **AI**: AWS Bedrock (Claude 3 Sonnet)
## TypeScript Configuration
- Backend: ES2022, CommonJS (for Lambda)
- Path alias: `@/*` maps to `./src/*`
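In practice that alias just means imports resolve from `src/` regardless of how deep the importing file sits; the module names below are made up for illustration.

```typescript
// Hypothetical modules -- the point is the `@/*` alias resolving to ./src/*
import { summonOracle } from '@/services/demonicOracle';
import type { CursedIssue } from '@/types/scan';
```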
**structure.md** - Enforced code organization rules:
### Naming Conventions
- Components: PascalCase (e.g., `SoulVault.tsx`)
- Lambda handlers: camelCase with `Handler` suffix
- AWS Resources: Use demonic/spooky names consistently
These steering docs meant that every time Kiro generated code, it automatically:
- Used the haunted naming conventions (`TombstoneDB`, `SpectralAnalyzer`)
- Structured Lambda functions correctly for deployment
- Applied TypeScript path aliases consistently
- Followed the monorepo workspace pattern
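As a concrete example of what "structured correctly" means, here's a minimal handler skeleton following those conventions; the body is a placeholder of mine, not Kiro's actual output.

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

// camelCase name with the `Handler` suffix, per structure.md.
// The response shape is a placeholder, not the real SpectralAnalyzer.
export const spectralAnalyzerHandler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const { code } = JSON.parse(event.body ?? '{}');

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Spectral scan complete.',
      issuesDetected: code ? 13 : 0, // placeholder value
    }),
  };
};
```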
3. Hooks: Automation from Beyond
Hooks are event-driven automations. I created .kiro/hooks/source-to-docs-sync.kiro.hook:
{
  "enabled": true,
  "when": {
    "type": "fileEdited",
    "patterns": ["**/*.ts", "**/*.tsx", "**/package.json"]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Source code files have been modified. Please review the changes and update the relevant documentation files to reflect these changes..."
  }
}
What this does: Every time I edited a TypeScript file, Kiro automatically checked if documentation needed updates and fixed it. My README, AUTHENTICATION.md, and DEPLOYMENT.md stayed in perfect sync with the codebase, with no manual work.
That 2 AM bug fix that changed the authentication flow? Documentation updated automatically. Lambda signature change? API docs refreshed. It's like having a technical writer haunting your repo.
4. MCP: Summoning AWS Knowledge
Model Context Protocol (MCP) is Anthropic's standard for giving AI models access to external tools and data. For this project, I used the AWS Knowledge MCP server from AWS Labs, and it turned out to be a game-changer for building serverless applications.
Here's what it gave me:
Real-time AWS Documentation Access:
Instead of tab-switching between my editor and the AWS docs, or getting outdated AWS implementation code from the AI, the AI could query:
- Latest Bedrock API documentation
- DynamoDB best practices and patterns
- Lambda runtime specifications for Node.js 20
- CloudFormation resource availability by region
The Architecture
Here's what we built:
Crypt Dashboard (React + TailwindCSS + TypeScript)
  - SpectralScanner: Submit code/PRs for review
  - CursedFeedback: Display demonic code review comments
  - PatchGraveyard: View and apply AI-generated fixes
  - SoulVault: Halloween-themed auth UI
        |
        |  REST API (JWT Auth)
        v
NightmareGateway (API Gateway) + AWS Cognito SoulPool
        |
        |  Lambda Invocations
        v
AWS Lambda Functions (Node.js 20 + TypeScript)
  - SpectralAnalyzer: Scans code, detects language, fetches PRs from GitHub, stores in S3
  - DemonicOracle: Calls AWS Bedrock (Claude) to generate cursed feedback and code explanations
  - HauntedPatchForge: Generates and validates code fixes using AI, stores patches in DynamoDB
  - CryptKeeper: Handles user context and preferences
        |                     |
        v                     v
TombstoneDB (DynamoDB)    AWS Bedrock (Claude Sonnet)
  - User preferences        - Demonic feedback gen
  - Scan history            - Code fix suggestions
  - Cursed issues           - Personality variations
  - Haunted patches
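To make the TombstoneDB layer a bit more concrete, here's a hedged sketch of how a scan record and a user's scan history could be written and read with the AWS SDK's DocumentClient. The single-table key scheme (`PK`/`SK`) and the table-name variable are assumptions of mine; the GSIs from design.md are omitted here.

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import {
  DynamoDBDocumentClient,
  PutCommand,
  QueryCommand,
} from '@aws-sdk/lib-dynamodb';

const db = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = process.env.TOMBSTONE_TABLE ?? 'TombstoneDB';

// Assumed single-table keys: PK = user, SK = scan. Illustrative only.
export async function buryScan(userId: string, scanId: string, curseLevel: string) {
  await db.send(
    new PutCommand({
      TableName: TABLE,
      Item: {
        PK: `USER#${userId}`,
        SK: `SCAN#${scanId}`,
        curseLevel,
        createdAt: new Date().toISOString(),
      },
    })
  );
}

// Fetch a user's scan history.
export async function exhumeScanHistory(userId: string) {
  const { Items } = await db.send(
    new QueryCommand({
      TableName: TABLE,
      KeyConditionExpression: 'PK = :pk AND begins_with(SK, :scan)',
      ExpressionAttributeValues: { ':pk': `USER#${userId}`, ':scan': 'SCAN#' },
      ScanIndexForward: false, // newest first, assuming SK sorts chronologically
    })
  );
  return Items ?? [];
}
```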
The Results: Demonic Feedback in Production
The final product delivers exactly what I envisioned:
Code submission:
You: *uploads messy JavaScript*
SpectralAnalyzer: "Spectral scan complete.
Curse level: CRITICAL. 13 issues detected."
AI-generated feedback:
DemonicOracle: "Your variables are named like a drunk warlock
cast a confusion spell! 'x', 'tmp', 'foo'?! The spirits of
clean code are weeping in their graves.
Technical issue: Non-descriptive variable names reduce code
maintainability. Consider: userAccount, temporaryBuffer,
configurationHandler."
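Behind feedback like that sits the "personality selection based on severity" from task 5.1. Here's a minimal sketch of how such a mapping could work; the persona names and wording are mine, not the actual prompt templates.

```typescript
// Hypothetical severity-to-persona mapping for prompt construction.
type Severity = 'low' | 'medium' | 'high' | 'critical';

const PERSONAS: Record<Severity, string> = {
  low: 'a mildly annoyed ghost who sighs at style nits',
  medium: 'a grumpy crypt keeper who has seen this bug a thousand times',
  high: 'a furious warlock whose reviews summon storm clouds',
  critical: 'an ancient demon personally offended by this commit',
};

export function buildCursedPrompt(code: string, severity: Severity): string {
  return [
    `You are ${PERSONAS[severity]}.`,
    'Deliver the review with Halloween-themed dark humor,',
    'then state the underlying technical issue and a concrete fix.',
    '',
    code,
  ].join('\n');
}
```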
Key Takeaways: What I Learned
1. Specs Are Your Source of Truth
Writing detailed requirements, design docs, and task breakdowns upfront felt like extra work. But it paid off exponentially when working with AI. Instead of re-explaining context in every conversation, the AI just... knew.
2. Steering Documents Shape AI Behavior
These aren't just docs; they're instructions for how the AI should think about your project. Product vision, tech patterns, code organization: steering docs turn generic AI into your team's AI.
3. Hooks Enable True Automation
Documentation sync, test running, deployment checks: hooks let you build workflows that happen automatically. It's like CI/CD for your development process.
4. MCP Gives AI Superpowers
With the AWS Knowledge MCP server connected, Kiro always worked from the latest implementations and platform versions; it just knew how things are done on AWS.
The 'What'? The 'How'?
What did I build? A full-stack, serverless code review system with AI-generated demonic feedback, auto-fix suggestions, and a Halloween-themed UI.
How did I build it? By using Kiro's spec-driven development approach:
- Specs defined requirements, design, and tasks
- Steering shaped AI behavior and enforced patterns
- Hooks automated documentation and workflows
- MCP provided the latest AWS implementation knowledge to Kiro
Could I have built this without Kiro? Absolutely. Would it have taken 3-4x longer? Also absolutely.
Final Thoughts
Building Cursed Code Reviewer taught me that the future of AI-assisted development isn't just about better models. It's about better structure. Specs, steering, hooks, and MCP transform an agentic coding tool into a legitimate development partner.
