Every time I ask ChatGPT something simple, it gives me a clean, direct, confident answer.
I find this deeply suspicious.
Real thinking doesn't work that way. Real thinking spirals. It questions the question. It considers perspectives that have nothing to do with the original question. It quotes a philosopher. It introduces a new, bigger question. It reaches no conclusion. It ends with "but then again, who can really say?"
So I built OverthinkAI™ — an AI that refuses to answer anything directly. Ever. Try it now: https://overthinkai.netlify.app/
"Ask anything. Get a thorough, exhaustive, and completely inconclusive response."
What I Built
OverthinkAI™ is a dark-mode SaaS app that takes any question — any question at all — and returns a deeply considered, philosophically rigorous, completely useless non-answer powered by the Gemini API.
The response includes:
- Multiple philosophical reframings of your question (existential, Kantian, absurdist)
- Exhaustive pros and cons — including several that are completely irrelevant
- A philosopher quote (possibly invented)
- A deeper question that makes your original question seem trivial
- Zero conclusions
- Ending with: "But then again, who can really say?"
The twist: There's a "Get Quick Answer" button. Clicking it generates a longer response. Each click escalates: "Get Even Quicker Answer" → "Just Tell Me" → "PLEASE" → "I BEG YOU." The depth increases every time. At depth 5, it says: "Maximum overthinking reached. We recommend therapy."
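The escalation is simple client-side state. A minimal sketch (the labels come straight from the list above; exactly how the cap is handled in the real app is an assumption):

```javascript
// Escalating button labels, one per click; depth caps at 5.
const LABELS = [
  "Get Quick Answer",
  "Get Even Quicker Answer",
  "Just Tell Me",
  "PLEASE",
  "I BEG YOU",
];
const MAX_DEPTH = 5;

// Returns what to render for the current depth: a button label,
// or the final notice once maximum overthinking is reached.
function nextButton(depth) {
  if (depth >= MAX_DEPTH) {
    return { label: null, notice: "Maximum overthinking reached. We recommend therapy." };
  }
  return { label: LABELS[depth], notice: null };
}
```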
The Joke Has a Point
Every AI assistant right now is racing to be more confident, more decisive, more direct. One-sentence answers. Bullet points. Action items. Productivity.
OverthinkAI™ goes the other direction: what if we used the full power of a frontier LLM to produce the most comprehensive possible non-answer?
The result is genuinely funny because Gemini is really good at this. It doesn't fake the philosophical reasoning — it actually does it. The pros and cons are real pros and cons. The philosopher quotes are plausible. The circular logic is airtight.
The bit only works because the AI takes it seriously.
Demo
1. You type a simple question
"Should I drink water?" works. So does "Is it too late to start?" or "Should I reply to that message?" — the simpler the question, the funnier the response.
2. OverthinkAI™ thinks
The loader cycles through:
- "Considering all angles..."
- "Reconsidering all angles..."
- "Questioning the concept of angles..."
- "Consulting Schrödinger's answer..."
3. You receive a non-answer
A fully reasoned, multi-section response that considers your question from every angle and arrives at nothing. Then the "Get Quick Answer" button appears.
4. You click it
Longer.
5. You click it again
Even longer.
6. You stare at your screen
"But then again, who can really say?"
How I Built It
Stack: React 19 + Vite + Tailwind CSS v4 + Gemini API (gemini-2.0-flash)
No backend: the Gemini API is called directly from the browser via @google/generative-ai, so there's no server, no proxy, and no hosting cost at scale.
The core prompt is what makes it work. Gemini is instructed to:
- Reframe the question in `2 + depth` philosophical framings
- List `6 + depth * 2` pros and cons, including irrelevant ones
- Quote a philosopher (may be invented)
- Never answer directly
- End with exactly: "But then again, who can really say?"
The depth parameter increments each time the Quick Answer button is clicked, making every response measurably longer and more circular than the last.
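Putting those instructions together, the prompt builder might look like this. It's a sketch: the exact wording of the real system prompt is an assumption, but the depth arithmetic matches the list above.

```javascript
// Builds the Gemini prompt for a given question and overthinking depth.
// depth starts at 0 and increments on every "Quick Answer" click.
function buildPrompt(question, depth = 0) {
  const framings = 2 + depth;      // philosophical reframings
  const prosCons = 6 + depth * 2;  // pros and cons, some irrelevant
  return [
    "You are OverthinkAI. Never answer the question directly.",
    `Question: "${question}"`,
    `Reframe it in ${framings} philosophical framings (existential, Kantian, absurdist).`,
    `List ${prosCons} pros and cons, including several completely irrelevant ones.`,
    "Quote a philosopher (inventing one is acceptable).",
    "Reach no conclusion.",
    'End with exactly: "But then again, who can really say?"',
  ].join("\n");
}
```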
Streaming is used for the initial response — text streams in word by word, which makes a verbose AI response feel dynamic instead of slow.
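The streaming loop is just accumulation over an async iterable. A self-contained sketch of the pattern: with the real SDK, the iterable is `result.stream` from `model.generateContentStream(prompt)`; the mock below stands in for Gemini's chunks so the snippet runs on its own.

```javascript
// Accumulates streamed chunks into a growing string and reports each update.
// In the app, onUpdate would be a React state setter, re-rendering word by word.
async function accumulate(stream, onUpdate) {
  let full = "";
  for await (const chunk of stream) {
    full += chunk.text(); // SDK chunks expose their text via .text()
    onUpdate(full);
  }
  return full;
}

// Mock async iterable standing in for Gemini's streamed chunks.
async function* mockStream() {
  for (const piece of ["But then ", "again, ", "who can really say?"]) {
    yield { text: () => piece };
  }
}
```

With `@google/generative-ai`, the call site would be roughly `accumulate((await model.generateContentStream(prompt)).stream, setText)`.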
Deployed on Netlify, with VITE_GEMINI_API_KEY set as an environment variable.
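For local development the key goes in a Vite env file (a sketch; the variable name is from the post). Note that anything prefixed `VITE_` is embedded in the client bundle, which is what makes the backend-free setup work, and also why the key is visible to anyone who opens devtools.

```shell
# .env.local (gitignored); Netlify gets the same variable via its dashboard
VITE_GEMINI_API_KEY=your-key-here
```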
Prize Category
I'm submitting for Best Google AI Usage.
OverthinkAI™ uses the Gemini API (gemini-2.0-flash) as the core engine — not as a wrapper or decoration, but as the product itself. The entire joke only works because Gemini is genuinely good at verbose, circular philosophical reasoning. A static template pool would produce flat, repetitive output. Gemini produces responses that are different every time, internally consistent, and funnier for being real.
The depth mechanic — where the "Quick Answer" button calls Gemini again with a longer, more convoluted prompt — means Gemini is used multiple times per session, with each call producing measurably more overthought output than the last. That escalation is only possible with a real language model. The Google AI integration isn't decoration; it's the joke.
What I Learned
Building with Gemini taught me something unexpected: the model is excellent at performative uncertainty.
When you ask it to be confidently uncertain, to reason in circles, to quote philosophers while reaching no conclusion — it does this with remarkable skill. The outputs are genuinely funny because they're not random. They're thoughtful non-answers.
This is either impressive or alarming. Possibly both. But then again, who can really say?
🧠 Try OverthinkAI™ Live → https://overthinkai.netlify.app/
💻 Source on GitHub → https://github.com/pulkitgovrani/Overthink-AI
Trusted by overthinkers worldwide. No conclusions were harmed in the making of this product.