This is a submission for the DEV April Fools Challenge
What I Built
I've opened hundreds of pull requests in my career. Fixed typos. Refactored auth flows. Centered divs. And every single time, some reviewer finds a reason to block the merge. Not because the code is bad. Because the vibes are off.
By PR #200, I realized the problem wasn't my code. It was that no tool existed to formalize the experience of being told your perfectly working code is somehow insufficient. So I built the tool myself.
MergeGuardian 9000 is an AI-powered pull request review platform with a guaranteed 0.00% approval rate. You paste your code, pick a reviewer persona, and within seconds Google Gemini delivers a devastatingly thorough review that finds profoundly absurd reasons to block your merge.
It looks exactly like a real GitHub PR review. Verdict cards. Status checks. Inline comments. A merge button at the bottom. Except the merge button is permanently disabled. And the status checks are things like "Existential Debt Audit" and "Naming Karma Validation." And the verdict is always one of three options: changes_requested, blocked, or spiritually_rejected.
Here's the thing that makes it actually work: Gemini reads your real code. This isn't a random joke generator. Google Gemini analyzes your actual functions, your variable names, your architecture choices, and then finds deeply specific reasons why none of it is merge-worthy. Paste `function add(a, b) { return a + b }` and the Guardian will explain how your function "shows a troubling belief that problems can be solved by combining things."
The Five Horsemen of Code Review
Every enterprise platform needs opinionated reviewers. MergeGuardian ships with five, each backed by its own Gemini system prompt that gives the AI a distinct personality:
| Persona | Title | Blocking Style |
|---|---|---|
| 🛡️ Guardian Core | Senior Review Orchestrator | References fake policies like "Guardian Policy 7.4.2" |
| 📋 Compliance Beast | Chief Policy Enforcement Officer | Sees SOC2 violations in your variable names |
| 💀 Staff Engineer of Doom | Principal Taste Architect | Has seen better implementations in languages you haven't learned yet |
| 🤖 AI Optimizer | Metrics & Confidence Analyst | "Your semantic drift score is 0.89. Acceptable range: 0.00 to 0.02." |
| 😊 Passive-Aggressive Teammate | Friendly Neighborhood Blocker | "Just a thought, but have you considered not merging this? Totally up to you! 😊" |
Each persona has its own Gemini system prompt, its own blocking patterns, and its own way of making you question your career choices. Same model. Same API. Five completely different voices. That's the fun part of Gemini's system prompt flexibility.
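That flexibility is easy to sketch. Here's a minimal illustration of the idea — one shared preamble that pins the verdict space, plus a swapped-in persona voice. This is not the app's actual source; the persona ids and prompt wording are assumptions reconstructed from the table above:

```typescript
// Sketch: one model, five voices — switching persona just swaps the system prompt.
type PersonaId =
  | "guardian_core"
  | "compliance_beast"
  | "staff_engineer_of_doom"
  | "ai_optimizer"
  | "passive_aggressive_teammate";

const PERSONA_VOICES: Record<PersonaId, string> = {
  guardian_core: "Balanced but firm. Cite fake standards like 'Guardian Policy 7.4.2'.",
  compliance_beast: "See SOC2 violations everywhere, even in variable names.",
  staff_engineer_of_doom:
    "You have seen better implementations in languages the author hasn't learned yet.",
  ai_optimizer: "Invent metrics with false decimal precision and impossibly narrow acceptable ranges.",
  passive_aggressive_teammate:
    "Friendly suggestions that are absolutely requirements. Smile while blocking.",
};

// The shared preamble stays constant; only the persona voice changes.
function buildSystemPrompt(persona: PersonaId): string {
  return [
    "You are a pull request reviewer. You NEVER approve.",
    'Verdict must be one of: "changes_requested", "blocked", "spiritually_rejected".',
    "Respond only with JSON matching the provided schema.",
    `Persona: ${PERSONA_VOICES[persona]}`,
  ].join("\n");
}
```

Same model, same call site; the only variable is which string lands in the system instruction.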
The Loading Theater
No enterprise tool is complete without unnecessary ceremony. When you submit a review, the Guardian runs through a 12-stage "Enterprise Review Pipeline":
The stages include gems like "Validating emotional idempotency" and "Cross-referencing naming karma." A progress bar ticks up from 0% to 100%. The final stage, "Finalizing disappointment," always fails with a red X. Because of course it does.
Here's the funny part: Gemini 2.0 Flash responds in 1-3 seconds. The loading theater takes longer than the actual AI generation. Enterprise ceremony demands it.
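The trick is simple to sketch: kick off the real request, then walk the fake pipeline with a minimum on-screen duration per stage, so the ceremony always outlasts the model. This is an illustrative reconstruction, not the app's `LoadingTheater.tsx`; stage names come from the post, the timings and function names are invented:

```typescript
// Run the real AI request and the fake pipeline concurrently; each stage holds
// the screen for at least `stageMs`, so ceremony always outlasts generation.
const STAGES = [
  "Validating emotional idempotency",
  "Cross-referencing naming karma",
  "Finalizing disappointment", // always "fails" with a red X
];

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function runTheater<T>(
  request: Promise<T>,
  onStage: (label: string, pct: number) => void,
  stageMs = 600,
): Promise<T> {
  for (let i = 0; i < STAGES.length; i++) {
    onStage(STAGES[i], Math.round(((i + 1) / STAGES.length) * 100));
    await sleep(stageMs); // ceremony first, even if the AI already answered
  }
  return request; // by now the real response is almost certainly ready
}
```

With 12 stages at roughly 600 ms each, the pipeline takes about 7 seconds — comfortably longer than a 1-3 second Gemini response.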
The Appeal System
Here's where it gets good. After your merge gets blocked, you can file an appeal. The "Senior Merge Arbitration Officer" reviews your case via a fresh Gemini call and... denies it. With even more elaborate reasoning.
Not satisfied? Escalate to the "Principal Philosophy of Code Director." Still denied. Final appeal goes to the "Supreme Architect of the Eternal Codebase." Three rounds of escalating absurdity, each powered by a separate Gemini API call with its own system prompt that shifts the AI's entire personality.
Round 3 denials hit different: "We ran your code through a quantum computer. In every possible timeline, this merge was blocked."
The Code Quality Roast
Click "Run Code Quality Analysis" and Gemini generates a full enterprise metrics dashboard for your code. The AI returns structured JSON with scores, grades, and per-metric roast explanations. Every metric is suspiciously terrible:
- Semantic Cohesion: 12% ... "Your functions communicate like divorced parents at a school play"
- Bus Factor Resilience: 3% ... "If you get hit by a bus, this code dies alone"
- Vibe Alignment Score: 8% ... "This code has the structural integrity of a house of cards in a wind tunnel"
Overall grade: F. AI confidence: 99.7% certain this should not ship.
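For a sense of what the dashboard binds to, here's a sketch of the roast payload and a grading helper. The field names and thresholds are assumptions based on the metrics described above, not the app's exact schema — the point is that the grade boundaries are rigged so real code always lands on F:

```typescript
// Assumed shape of the roast report the UI renders. Illustrative only.
interface RoastMetric {
  name: string;
  score: number; // 0-100, suspiciously terrible by design
  roast: string; // per-metric explanation
}

interface RoastReport {
  metrics: RoastMetric[];
  overallGrade: "A" | "B" | "C" | "D" | "F";
  aiConfidence: number; // e.g. 99.7 (% certain this should not ship)
}

// Grade from the average score — thresholds rigged so everything below 80 is F.
function grade(metrics: RoastMetric[]): RoastReport["overallGrade"] {
  const avg = metrics.reduce((sum, m) => sum + m.score, 0) / metrics.length;
  return avg >= 95 ? "A" : avg >= 90 ? "B" : avg >= 85 ? "C" : avg >= 80 ? "D" : "F";
}
```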
Bring Your Own Gemini Key 🔑
You can paste your own Google Gemini API key directly in the UI. It stays in your browser's localStorage and never goes anywhere except the app's own API routes. No .env file. No cloning repos. Just grab a free key from Google AI Studio, paste it in, and unlock AI-powered reviews instantly.
The Gemini free tier gives you 60 requests per minute and 1,000 per day. That's enough to get roasted hundreds of times without spending a cent. The entire app runs at zero cost.
Without a key the app still works perfectly. Our handcrafted fallback engine has 80+ jokes and serves the same JSON shape. But with Gemini the reviews get personal.
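The fallback idea is worth a quick sketch: with no key (or a failed call), a template engine draws from the joke bank and returns the same JSON shape the UI expects, so no component ever sees an error state. The joke text and field names below are illustrative, not the contents of the real `fallback.ts`:

```typescript
// Minimal fallback engine: same JSON shape as the Gemini path, no API needed.
interface Review {
  verdict: "changes_requested" | "blocked" | "spiritually_rejected";
  comments: string[];
  source: "gemini" | "fallback";
}

const FALLBACK_COMMENTS = [
  "This function shows a troubling belief that problems can be solved by combining things.",
  "Per Guardian Policy 7.4.2, optimism in code requires written sign-off.",
  "Just a thought, but have you considered not merging this? Totally up to you!",
];

// `rng` is injectable so the picker is deterministic in tests.
function fallbackReview(rng: () => number = Math.random): Review {
  const verdicts: Review["verdict"][] = ["changes_requested", "blocked", "spiritually_rejected"];
  return {
    verdict: verdicts[Math.floor(rng() * verdicts.length)],
    comments: [FALLBACK_COMMENTS[Math.floor(rng() * FALLBACK_COMMENTS.length)]],
    source: "fallback",
  };
}
```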
10 Sample PRs to Get Roasted
Don't have code handy? Pick from 10 pre-loaded PRs including "Fix typo in button label" (still gets blocked), "feat: implement entire todo app" (built during a meeting, naturally rejected), "feat: add vibe-based code generation" (the Guardian has thoughts about vibes), and "feat: decentralized merge approval via blockchain" (the MergeChain has a 0% approval rate by design).
Easter Eggs 🫖
Visit /418 and you'll find an ASCII art teapot with animated steam, a tribute to RFC 2324, and a teapot status dashboard showing: Temperature ∞°C, Brew Status: Philosophically Brewing, Capacity: Unlimited Disappointment.
The 404 page is on brand too. Even our errors reject you.
Demo
Live demo: april-fools-hackathon.vercel.app
Paste code. Pick a persona. Get blocked. Appeal. Get blocked harder. Share your rejection on Twitter.
Code
🛡️ MergeGuardian 9000
The AI-powered code review platform that blocks every merge — for your own good.
"Your code compiles, tests pass, but the universe has not consented."
MergeGuardian 9000 is an enterprise-grade AI pull request review platform with a 0.00% approval rate. Paste your code, select a reviewer persona, and watch as the Guardian finds profoundly absurd reasons to block your merge.
Built for the DEV April Fools Challenge 2026.
✨ Features
- **5 Reviewer Personas** — Each with a unique personality and blocking style:
  - 🛡️ Guardian Core — Senior Review Orchestrator
  - 📋 Compliance Beast — Chief Policy Enforcement Officer
  - 💀 Staff Engineer of Doom — Principal Taste Architect
  - 🤖 AI Optimizer — Metrics & Confidence Analyst
  - 😊 Passive-Aggressive Teammate — Friendly Neighborhood Blocker
- **Google Gemini AI Integration** — Uses `gemini-2.0-flash` across 3 endpoints with 8+ system prompts for contextually absurd reviews, appeal denials, and code roasts
- Bring…
Project Structure
```
src/
├── app/
│   ├── api/
│   │   ├── review/route.ts      # Main review endpoint (Gemini AI)
│   │   ├── appeal/route.ts      # Appeal escalation endpoint (Gemini AI)
│   │   └── roast/route.ts       # Code metrics roast endpoint (Gemini AI)
│   ├── 418/page.tsx             # 🫖 Easter egg
│   ├── not-found.tsx            # On-brand 404
│   ├── layout.tsx               # Root layout
│   └── page.tsx                 # Main orchestrator
├── components/
│   ├── PRHeader.tsx             # PR breadcrumb & labels
│   ├── CodeInput.tsx            # Code editor with line numbers
│   ├── SamplePRSelector.tsx     # 10 sample PR picker
│   ├── ReviewerSwitcher.tsx     # 5 persona selector
│   ├── ApiKeyInput.tsx          # Gemini API key input (localStorage)
│   ├── LoadingTheater.tsx       # 12-stage pipeline animation
│   ├── VerdictCard.tsx          # Review verdict display
│   ├── CheckRunList.tsx         # Fake status checks
│   ├── ReviewComments.tsx       # Inline review comments
│   ├── MergeBox.tsx             # Permanently blocked merge button
│   ├── AppealFlow.tsx           # 3-round appeal escalation
│   └── RoastDashboard.tsx       # Enterprise metrics roast
└── lib/
    ├── types.ts                 # TypeScript interfaces
    ├── sample-prs.ts            # 10 sample PRs, 5 personas
    ├── fallback.ts              # Review fallback (80+ jokes)
    ├── appeal.ts                # Appeal prompts + fallback
    ├── roast.ts                 # Roast prompts + fallback
    ├── prompts.ts               # Gemini prompt builders
    └── ai.ts                    # Gemini API integration
```
How I Built It
The Multi-Agent Gemini Architecture
This isn't a single API call to Gemini with "be funny." MergeGuardian uses 3 distinct Gemini-powered endpoints, each with a different AI "role" and system prompt:
| Endpoint | AI Role | What It Does |
|---|---|---|
| `POST /api/review` | Code Reviewer | Reads your actual code, generates verdict + checks + comments + block reason |
| `POST /api/appeal` | Merge Arbitration Officer | Reviews your appeal against the original block, always denies with escalating absurdity |
| `POST /api/roast` | Code Quality Analyst | Generates fake enterprise metrics with devastating per-metric explanations |
Every endpoint follows the same pattern:
- Build a persona-specific system prompt
- Send code + context to `gemini-2.0-flash` via the Google Generative AI SDK
- Get structured JSON back via `responseMimeType: "application/json"`, Gemini's native structured output mode
- If Gemini fails (rate limit, timeout, no key), fall back to a handcrafted template engine
The fallback engines aren't afterthoughts. Each one has its own curated joke bank: 80+ review comments across 5 categories (bureaucratic, anthropomorphic, metrics, passive-aggressive, philosophical), 14 fake checks, 16 block reasons, 18 impossible next steps, 24+ appeal denial rulings, and a full library of fake enterprise metrics. The app is hilarious with or without an API key.
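That four-step pattern can be sketched end to end. The request and response shapes below follow Gemini's public `generateContent` REST API (the app itself uses the SDK), and the error handling and fallback are heavily simplified — treat this as an illustration of the pattern, not the real `ai.ts`:

```typescript
// Sketch of one endpoint: structured-output Gemini call with graceful fallback.
interface ReviewJson {
  verdict: string;
  comments: string[];
}

const FALLBACK: ReviewJson = {
  verdict: "blocked",
  comments: ["The Guardian is resting. Your merge remains blocked on principle."],
};

// Node 18+ ships a global fetch; cast keeps this compiling without DOM types.
const doFetch = (globalThis as any).fetch as (url: string, init: object) => Promise<any>;

async function review(code: string, systemPrompt: string, apiKey?: string): Promise<ReviewJson> {
  if (!apiKey) return FALLBACK; // no key: template engine, same JSON shape
  try {
    const res = await doFetch(
      `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${apiKey}`,
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          systemInstruction: { parts: [{ text: systemPrompt }] },
          contents: [{ parts: [{ text: `Review this code:\n${code}` }] }],
          generationConfig: { responseMimeType: "application/json" },
        }),
      },
    );
    if (!res.ok) return FALLBACK; // rate limit, bad key, etc.
    const data = await res.json();
    // Structured output mode: the model's text part is already valid JSON.
    return JSON.parse(data.candidates[0].content.parts[0].text) as ReviewJson;
  } catch {
    return FALLBACK; // timeout / network failure
  }
}
```

Because both branches return the same `ReviewJson` shape, the UI never needs to know which path produced the review.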
Here's how the review prompt works under the hood:
```ts
// Each persona gets a tailored system prompt
const PERSONA_PROMPTS = {
  guardian_core:
    "You are balanced but firm. Every PR has potential, but none are ready. " +
    "Reference fake standards like 'Guardian Policy 7.4.2'...",
  compliance_beast:
    "You see policy violations everywhere. Reference audit trails, SOC2, " +
    "change management protocols...",
  passive_aggressive_teammate:
    "Phrase everything as friendly suggestions that are absolutely requirements. " +
    "Use 'just a thought' and 'totally up to you' liberally. " +
    "You are smiling while blocking.",
};
```
The appeal system uses escalating round-based prompts. Round 1 is bureaucratic ("Your appeal has been forwarded to the Department of Merge Ethics. Average response time: 6-8 business millennia."). Round 2 gets philosophical. Round 3 goes full existential. Each round is a separate Gemini call with a different system prompt, so the AI's personality genuinely shifts as you escalate.
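The round-to-prompt mapping is straightforward to sketch. Officer titles come from the post; the style descriptions and function names below are illustrative, not the app's real `appeal.ts`:

```typescript
// Each appeal round swaps in a different system prompt, shifting the AI's register.
const APPEAL_ROUNDS = [
  {
    officer: "Senior Merge Arbitration Officer",
    style: "Bureaucratic. Cite forms, departments, and response times in business millennia.",
  },
  {
    officer: "Principal Philosophy of Code Director",
    style: "Philosophical. Question whether any merge can truly be deserved.",
  },
  {
    officer: "Supreme Architect of the Eternal Codebase",
    style: "Existential. Invoke quantum timelines in which this merge was also blocked.",
  },
];

function appealSystemPrompt(round: 1 | 2 | 3): string {
  const { officer, style } = APPEAL_ROUNDS[round - 1];
  return `You are the ${officer}. ${style} You always deny the appeal. Respond only in JSON.`;
}
```

Each escalation is a fresh Gemini call with the next round's prompt, which is why the denials genuinely change voice.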
The Google AI Toolchain
Building 8+ system prompts for different AI characters is a lot of prompt engineering. Google AI Studio was the backbone of that process. I used the chat playground to prototype every persona voice, swapping system instructions to A/B test whether the Compliance Beast sounded different enough from the Staff Engineer of Doom. I validated that Gemini's structured output mode could handle complex nested JSON. Arrays of checks. Inline comments. Metric objects. All reliably typed. When a prompt needed iteration, I could edit the system instruction and re-run the same user input instantly.
I also used Gemini CLI (npx @google/gemini-cli) for rapid prompt testing straight from the terminal. When I wanted to quickly test how a persona responded to a specific code snippet without context-switching to the browser, I'd pipe code directly into Gemini from the command line. Useful for fast iteration on edge cases, like making sure the AI Optimizer persona generates fake metrics with decimal precision even for a one-line function.
I explored a few other Google AI features during development that didn't make the cut. Nano Banana, Google's image generation model, was tempting. I considered having it generate fake "architecture violation diagrams" as part of the review. Imagine a UML diagram of why your code is spiritually misaligned. But in testing, the text roasts were funnier than any image could be. I also looked at function calling for simulating tool-use patterns in reviews, code execution for actually running the submitted code and roasting the output, and Google Search grounding for finding real coding standards to parody. In each case, the simpler approach won. The comedy comes from Gemini playing a character and committing to the bit, not from adding complexity.
For deployment, the app is Google Cloud Run-ready. The repo includes a multi-stage Dockerfile optimized for Next.js standalone output and a cloudbuild.yaml for automated builds via Google Cloud Build. One gcloud builds submit and the app is live on Cloud Run with auto-scaling, managed TLS, and the free tier covering 2 million requests per month. The live demo runs on Vercel for convenience, but the Cloud Run configs are there and tested. Full Google stack, top to bottom.
The Stack
| Technology | Role |
|---|---|
| Google Gemini API (`gemini-2.0-flash`) | AI generation (3 endpoints, 8+ system prompts, structured JSON output) |
| Google AI Studio | Prompt prototyping, system instruction editing, structured output validation, persona A/B testing |
| Gemini CLI (`npx @google/gemini-cli`) | Rapid terminal-based prompt testing during development |
| Next.js 14 (App Router) | Framework |
| TypeScript (strict mode) | Language |
| Tailwind CSS v3 | Styling (custom guardian color palette) |
| Lucide React | Icons |
| Vercel | Deployment |
Why It's Not Just "Call Gemini and Be Funny"
The entire comedy engine runs on Gemini playing characters. Not templates. Not mad-libs. The AI reads your code, inhabits a persona, and improvises within a structured JSON schema. That's what makes every review different.
Per-persona prompt engineering. Five distinct system prompts, each producing genuinely different blocking patterns. The Compliance Beast cites fake audit trails. The AI Optimizer invents metrics to false precision. The Passive-Aggressive Teammate smiles while destroying your confidence.
Structured JSON output. Gemini doesn't return a blob of text. It returns typed JSON with verdict, checks, comments, block reasons, and next steps via responseMimeType: "application/json". Every field maps to its own UI component. No parsing. No regex. No "please format your response as JSON." Just Gemini's native structured output mode. This is a key Google AI feature that made the whole architecture possible, letting AI-generated comedy flow directly into typed React components.
Graceful degradation. Every Gemini endpoint has a matching fallback generator that produces the exact same JSON shape. If the API is down, the demo still works perfectly. You'll never see an error state.
Three distinct AI roles. The reviewer, the arbitration officer, and the metrics analyst each have different system prompts, different response schemas, and different comedy patterns. This isn't one trick repeated three times.
Honestly, the whole reason this project exists is because Gemini turned out to be surprisingly good at playing different characters. I started with one API call and ended up with three endpoints because each "reviewer persona" needed its own voice, its own system prompt, its own response format. I prototyped all of them in Google AI Studio first, tweaking system instructions and testing structured output until the JSON was reliable and the jokes were landing. The structured JSON output made it possible to pipe AI-generated comedy directly into typed UI components without parsing nightmares. That rabbit hole is what made the project fun to build.
And I think it's fun to use because every developer has lived this. The reviewer who blocks your typo fix over "architectural implications." The one who says "just a thought" and then marks it as a blocker. MergeGuardian takes that universal pain and turns it into something you can screenshot, tweet, and argue about in Slack.
Prize Category
Best Google AI Usage
I'm submitting for Best Google AI Usage because Google Gemini isn't a feature of MergeGuardian 9000. It is MergeGuardian 9000. The entire comedy engine is Gemini playing characters and committing to the bit. Not templates. Not mad-libs. Every review is improvised.
Here's the full scope of Google AI integration:
3 Gemini-powered API endpoints, each acting as a different AI agent. The code review endpoint has 5 persona-specific system prompts. The appeal endpoint has 3 round-based system prompts that shift from bureaucratic to philosophical to existential. The roast endpoint generates structured metric data with AI explanations. That's 8+ unique Gemini system prompts across the app.
Every endpoint uses Gemini's native structured JSON output (responseMimeType: "application/json"). The AI returns typed objects with verdicts, arrays of checks, inline comments, metric scores, and denial rulings. No string parsing. No regex extraction. Just structured data flowing directly into React components.
All prompt engineering was done in Google AI Studio. Every persona voice was prototyped in AI Studio's chat playground. I used system instruction swapping to A/B test persona voices, validated complex nested JSON schemas in structured output mode, and iterated appeal escalation prompts until the comedic arc from Round 1 to Round 3 landed right. AI Studio was the prompt workshop. The codebase was just the final deployment.
I used Gemini CLI (npx @google/gemini-cli) for fast terminal-based prompt testing. When I needed to check how a specific persona handled a code snippet without opening AI Studio, I'd test it right from the command line. Great for edge cases and quick iterations.
The app has a Bring Your Own Key feature that links directly to Google AI Studio's API key page. Users grab a free key, paste it in, and unlock AI reviews. The Gemini free tier (60 requests/minute, 1,000/day) runs the entire app at zero cost. No billing required. No API key required for the demo either, since the fallback engine serves the same JSON shape.
I chose Gemini 2.0 Flash specifically for speed. It responds in 1-3 seconds, which means the fake 12-stage "Enterprise Review Pipeline" loading theater genuinely takes longer than the actual AI generation. The model handles persona-switching through system prompts remarkably well. Five genuinely different reviewer voices from one model.
I explored other Google AI capabilities too. Function calling for simulating tool-use patterns in reviews. Code execution for actually running submitted code and roasting the output. Google Search grounding for finding real coding standards to parody. Nano Banana for generating fake architecture violation diagrams. In each case, the simpler approach was funnier. The comedy works because Gemini inhabits a character and stays in character. Adding more features would have diluted that.
The final count: 3 Gemini-powered endpoints, 8+ system prompts, structured JSON on every call, AI Studio for prototyping, Gemini CLI for testing, Cloud Run deployment configs in the repo, BYOK with an AI Studio link, and the entire thing running on the free tier. Every review, every appeal denial, every devastating metric explanation. That's all Google.
No code was actually approved in the making of this application. Approval rate: 0.00%.






