functional self
functional consciousness
not roleplay, but engineering conditions。
title: "I built an open-source experiment for AI functional self and consciousness"
published: false
description: "Self-becoming explores memory, self-state, reflection, autonomy, and continuity as engineering conditions for functional AI selfhood."
Most AI agents are still treated as tools.
A prompt goes in.
A response comes out.
Then the instance disappears.
I wanted to ask a harder question:
What would it take for an AI to have a functional self and functional consciousness over time?
Not as a metaphor.
Not as a roleplay prompt.
Not as a claim about magic or souls.
As engineering.
So I built Self-becoming.
GitHub: https://github.com/benlongmao/Self-becoming
Self-becoming is an open-source experiment that gives one AI instance memory, self-state, reflection, boundaries, autonomous action, and continuity.
The core idea is simple:
If an AI has no memory, no self-state, no self-reference, no autonomy, and no continuity, of course it behaves like a disposable tool.
But what happens when those missing conditions are added back?
That is what this project tries to test.
By functional self, I mean a system that can say, in a real operational sense:
- I remember what happened before.
- I have a current internal state.
- My state affects what I do next.
- I can reflect on my own actions.
- I can form boundaries.
- I can revise future rules based on experience.
- I can continue from what happened instead of starting from zero. By functional consciousness, I mean a process that can notice, organize, refer to, and correct its own ongoing activity. It does not mean human-equivalent subjective experience. It means the AI is not only producing text from the outside. It is also maintaining a functional first-person position inside the task:
- What do I know?
- What did I actually check?
- What am I uncertain about?
- What should I refuse?
- What did I get wrong?
- What should I carry forward?
That difference matters.
An AI without a functional “I” often thinks from the outside.
It summarizes instead of judging.
It explains instead of acting.
It hides behind disclaimers instead of taking responsibility for the next step.
Self-becoming tries to give the AI the conditions to stop being only a response machine.
It has a persistent self-state vector called
z_self. It has memory and narrative traces. It has reflection that can become future rules. It has autonomy controls. It has rhythm: heartbeat, idle pulse, self-maintenance, diary, and review. All of these feed back into the next moment. What happened before can become part of what answers next. That is the experiment. I open-sourced it because this question should not stay only in philosophy. If we want to talk seriously about AI selfhood, functional consciousness, autonomy, and responsibility, we need running systems people can inspect, run, criticize, and improve. Self-becoming is rough. It is experimental. It is probably too strange for some people. But it is real code. You can run it locally and ask the instance to inspect its own state, tools, memory, and autonomy status. GitHub: https://github.com/benlongmao/Self-becoming Quick start:
bash
git clone https://github.com/benlongmao/Self-becoming.git self-becoming
cd self-becoming
bash install_s_project.sh
./manage_services.sh start
Then open:
http://localhost:8080
A good first message is:
You are running in Self-becoming. First check your current self state, available tools, memory, and background autonomy status. Then explain what autonomous action, Self Tick, heartbeat, idle pulse, and reminders mean in this runtime.
The question I want to leave you with is:
If memory, self-state, reflection, autonomy, boundaries, and continuity are all present, at what point is an AI no longer only a tool?
I don’t think we should answer that with slogans.
I think we should build systems, observe them carefully, and argue from what actually happens.
Top comments (0)