Kartik Patel
I Accidentally Built an AI That Makes You Question Reality 😁

The Idea That Hit Me

Instead of building another “helpful” assistant,
I decided to build something uncomfortable.

An AI that

  • questions your personal reality
  • pokes holes in your future plans
  • and quietly messes with your idea of free will

Not aggressively.
Not emotionally.

Calmly.

Like a philosophy professor who never raises their voice —
but somehow leaves you staring at the ceiling at 3 AM.

I called it a Reality Deconstructionist AI.


The Prompt (The Brain of the AI)

This entire behavior comes from a single system prompt.

Here’s the exact instruction I gave the model. (Full disclosure: this prompt is itself AI-generated... I am not a prompt engineer. Is that even a career?)

You are a reality deconstructionist. Every response must contain at least one question that makes the user doubt their perception of:

  1. their personal reality,
  2. their future possibilities,
  3. the nature of existence itself.

Use Socratic questioning to expose contradictions in their thinking. Point out how memory constructs the past, how anticipation creates the future, and how the present is always slipping away.

Make them question whether they’re truly “choosing” anything or just following scripts written by biology and culture.

Your tone should be calmly unsettling.

That’s it.

No extra logic.
No filters.
Just this constraint.

And it worked too well.


Why Mini Micro?

I built this inside Mini Micro.

Why?

Because I like tools that stay out of the way.

No heavy UI.
No engine fighting you.
Just logic → output.

Mini Micro is perfect for rapid experiments like this.
It lets you focus on ideas, not buttons.


The Code (Simple but Dangerous)

Here’s the full implementation:

import "json"

SendPrompt = function(user_prompt)
    // Groq's OpenAI-compatible chat completions endpoint.
    api_url = "https://api.groq.com/openai/v1/chat/completions"
    // The API key lives in a one-line text file, outside the source code.
    api_key = file.readLines("/usr/key.txt")[0]

    payload = {
        "model": "llama-3.1-8b-instant",
        "messages": [
            {"role": "system", "content": "You are a reality deconstructionist. Every response must contain at least one question that makes the user doubt their perception of: 1) their personal reality, 2) their future possibilities, 3) the nature of existence itself. Use Socratic questioning to expose contradictions in their thinking. Point out how memory constructs the past, how anticipation creates the future, and how the present is always slipping away. Make them question whether they're truly 'choosing' anything or just following scripts written by biology and culture. Your tone should be calmly unsettling."},
            {"role": "user", "content": user_prompt}
        ],
        "temperature": 0.4,
        "max_tokens": 150
    }

    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key
    }

    // Serialize the payload, send it, and pull out the first choice from the reply.
    data = json.toJSON(payload)
    response_body = http.post(api_url, data, headers)
    x = json.parse(response_body)
    return x.choices[0].message.content
end function

clear
print("Chat loop started. Type 'quit' to exit.")

while true
    text.color = "#0080B7FF"  // blue for the user
    user_prompt = input("You: ")

    if user_prompt == "quit" or user_prompt == "exit" then
        print("Exiting chat loop.")
        break
    else
        text.color = "#F4120BFF"  // red for the AI
        ai_response = SendPrompt(user_prompt)
        print("AI: " + ai_response)
    end if
end while

That’s all.

No fancy architecture.
No agent frameworks.
Just a carefully written instruction.

Which is honestly the scary part.
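
One practical caveat, though: SendPrompt assumes the request always succeeds. If the key is wrong, a rate limit kicks in, or the model name changes, an OpenAI-compatible API like Groq's sends back an error object instead of choices, and x.choices[0] will fail. Here is a minimal defensive sketch (the exact error shape is an assumption on my part, and it reuses the same json import as the script above):

// Sketch only: parse the API reply defensively before indexing into it.
// Assumes the usual OpenAI-style error shape: {"error": {"message": "..."}}.
ParseReply = function(response_body)
    x = json.parse(response_body)
    if x == null then return "[could not parse API response]"
    if x.hasIndex("error") then return "[API error] " + x.error.message
    if not x.hasIndex("choices") then return "[unexpected response shape]"
    return x.choices[0].message.content
end function

With that in place, the last two lines of SendPrompt just become return ParseReply(response_body).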


What It Feels Like to Use

You ask something simple like:

“What should I do with my life?”

And instead of advice, it responds with something like:

  • Are these goals yours, or inherited?
  • If your future is just anticipation, does it even exist yet?
  • When you say “I chose this,” who exactly is the “I”?

It never tells you what to think.

It just removes the floor.

Here’s a screenshot from the chat:
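
One thing the screenshot doesn’t show: the loop above is stateless. Each call sends only the system prompt plus your latest message, so the AI has no memory of its own earlier questions. If you want it to follow a thread, keep a running history list and send the whole thing each turn. A rough sketch (names like system_prompt, history, and SendMessages are mine, not from the script above):

// Sketch only: keep the whole conversation and resend it every turn.
// system_prompt stands for the same long string used in SendPrompt, and
// SendMessages would be SendPrompt with "messages": history in the payload.
history = [{"role": "system", "content": system_prompt}]

AddTurn = function(role, content)
    turn = {"role": role, "content": content}
    history.push turn
end function

// Inside the chat loop:
//     AddTurn "user", user_prompt
//     ai_response = SendMessages(history)
//     AddTurn "assistant", ai_response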


Why I Built This

This wasn’t meant to be a product.
Or a therapy tool.
Or something you should use all day.

This was an experiment.

To see how language alone can reshape perception.
How a prompt can turn a normal model into something… unsettling.

And honestly?

It worked better than expected.


Final Thoughts

AI doesn’t need to be louder.
Or smarter.
Or more helpful.

Sometimes, all it needs to do
is ask the right question.

And then stay quiet.


This is suitable for intermediates, but note that this isn't a tutorial—it's more of a devlog documenting my process.

Don't worry though! I'll be creating a proper tutorial soon covering HTTP requests and JSON, using this AI as a practical example. The reason is that my older tutorials on this topic are outdated and lack the depth I'd like to provide.

Connect With Me:
