What Did OpenClaw Actually Bring? Reflections on Engineering, Business, and Philosophy
This Lunar New Year, I suspect I wasn’t the only one who basically spent the holiday with a lobster. 🦞
I’m talking about OpenClaw.
After burning through nearly 5,000 RMB and at least 50 hours of trial, error, and “why is this happening,” I feel like I’ve earned the right—and maybe the responsibility—to write down what I’ve learned.
This isn’t a tutorial. It’s an experience report. A mix of engineering intuition, business framing, and a little philosophy—because if you really use something like OpenClaw, it’s hard not to end up there.
1. Why OpenClaw Felt Different This Time
Let me start with four moments that genuinely shook me.
And for context: I’m a “classical-era” product manager. I haven’t written a proper PRD in ages. Modern dev stacks are not my home turf. I’m usually the person who asks, “Can we ship this next week?” without fully understanding what “this” is.
Then OpenClaw happened.
Moment 1: I shipped a full app while biking and playing cards
No exaggeration: in under three hours, while I was out riding a bike, eating, and messing around with friends, I finished a functional app with real front-end/back-end interaction.
The wild part wasn’t the code.
The wild part was deployment.
It asked me for a few permissions, then went and handled things like Cloudflare and Aliyun domain management on its own—pushed the app online, publicly accessible.
It felt less like “I built an app,” and more like “I approved a plan and watched a system execute it.”
Moment 2: One detail made me instantly trust it
I found bugs during testing—but the overall completeness was already shockingly high.
And then I saw a safety mechanism that basically won me over: a high-level “data wipe protection” guardrail. It was the kind of precaution I rarely see implemented properly, even in teams with solid dev + QA.
I’ve worked with enough engineers to know: that level of defensive thinking is not common.
Moment 3: I described a bug casually—and it produced a full fix doc in 3 minutes
I started a new project and typed a few lines about what felt wrong. In about three minutes it produced a structured, detailed repair document.
Not “maybe try this.”
A real document. Clear steps. Reasoning. Coverage.
Moment 4: Subagents gave me a parallel dev team
When I finally got the subagent workflow running, I realized I now had something that looked like a team: parallel execution, coordination, momentum.
And I’ll be honest: it almost made me emotional.
Because I’ve been on the other side of this—startup years, payroll anxiety, debt, the feeling that every feature costs blood.
Suddenly, the “team” was something you could spin up.
After all that, I finally understood why the lobster hype exploded.
It gives each person a shell in the digital world—something that can evolve on its own. From that point on, anything that can be completed through information exchange stops being limited by your personal skill level.
It becomes limited mainly by your imagination.
I’m comfortable saying this: OpenClaw is the iPhone 4 moment of this LLM era.
And once you see that, the old “Web1 / Web2 / Web3” narrative feels… outdated. The next framing is something like Agent X.
In that world, the internet becomes less visible. Less “apps.” Less constant interaction friction. Less spam and UI fatigue.
Maybe you don’t need a phone full of apps. Maybe a watch—or even just an earbud—is enough.
And ironically, in a world of infinite synthetic voices, real human voice will become even more valuable.
2. The Engineering Aesthetics of OpenClaw
I still want to explain—at an engineering level—why I feel confident making a claim this big.
Over the last four years, I’ve watched AI waves come and go. My emotions cycled through:
- fear of being replaced
- skepticism and distance
- using AI for small efficiency wins
- understanding the boundary between real capability and hype
- worrying about human–machine ethics
But until OpenClaw, I never believed AI would reshape daily life the way mobile internet did.
Why?
At least four reasons.
Reason 1: it was still “tech people playing with tech people”
Product people couldn’t really join the conversation. The production loop wasn’t closed.
In plain words: it felt too cold. Too high barrier. Too “who are you even?”
Reason 2: most “products” were still prototypes
They felt like computers in a server room, or a public payphone.
Not like a phone you carry—filled with your personal context and history.
Without a real personal container and memory, it can’t merge into life.
Reason 3: without that personal container, it can't be proactive
Using AI still felt like opening an app.
And the truth is: apps are anti-human. Too many, too noisy, too much context switching.
If AI isn’t self-driven, it stays a tool. It never becomes a partner.
Reason 4: it didn’t have a real business model
There wasn’t a clear “why would normal people pay for this” moment.
That’s going to matter more than most people admit.
So what did OpenClaw do differently?
At its core, it’s an agent architecture built with real engineering discipline and strong product sense—written in a way a product manager can actually follow.
It’s not the traditional “fixed skills + strict MCP flows” style, where you get a packaged system designed for a narrow task.
It’s closer to what the name suggests:
- open: flexible enough to train and shape around your own mental model
- claw: usable enough that your job is to describe what you want—and it figures out where to grab it
Here’s a metaphor (not perfect, but close enough):
- LLMs are the grains you can ferment into alcohol
- skills/MCP are the recipes for base spirits
- most agents are pre-mixed cocktails
- OpenClaw is like being given a bartender who knows where to source the right spirits, then mixes based on your taste
Even the project structure communicates this. I don’t write code, but I could slowly understand its file layout and config. Much of it reads like natural language.
You “assemble” behavior through language.
What you can do depends on your imagination—within the boundary of things that can be done through information exchange.
And the output quality depends less on “knowing algorithms,” and more on:
- logic
- clarity
- how well you can describe intent
That is a huge shift.
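To make "assembling behavior through language" a little more concrete, here is a minimal sketch of the general pattern, under the assumption that an agent's behavior is just plain-language files stitched into the model's context. The file names and layout below are purely illustrative, not OpenClaw's actual structure:

```python
# Hypothetical sketch: "behavior" is plain-language files fed into the model's context.
# File names and layout are illustrative assumptions, not OpenClaw's real project layout.
from pathlib import Path

def build_system_prompt(workspace: Path) -> str:
    """Concatenate plain-language instruction files into one system prompt."""
    parts = []
    for name in ["identity.md", "memory.md", "skills.md"]:  # hypothetical files
        f = workspace / name
        if f.exists():
            parts.append(f.read_text(encoding="utf-8"))
    return "\n\n".join(parts)

prompt = build_system_prompt(Path("./my-agent"))
# Everything the agent "is" came from files a non-programmer can read and edit.
```

That is why a product person can shape it: editing behavior means editing prose, not code.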
Personal container: soul / user / memory
OpenClaw also solves the “personal device” problem.
Each lobster has a soul—an identity, a user context, and memory. And you can update all of it through normal conversation.
You can make it “real,” or you can make it role-play. You can build memory however you want.
The best part: you can summarize memory to let it evolve. The more you use it, the more personal it becomes.
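Mechanically, "summarize memory so it evolves" can be as simple as an append-only log that the model periodically compresses into a long-lived summary. The sketch below is a generic illustration of that idea, not OpenClaw's internals; `llm_summarize` is a stand-in for whatever model call the framework actually makes:

```python
# Hypothetical sketch of evolving memory: append raw notes, periodically compress them.
MEMORY_LOG: list[str] = []   # raw notes from recent conversations
MEMORY_SUMMARY: str = ""     # the compressed, long-lived "identity + history"

def remember(note: str) -> None:
    """Record a raw note from the current conversation."""
    MEMORY_LOG.append(note)

def consolidate(llm_summarize) -> None:
    """Fold recent notes into the long-term summary, then clear the log."""
    global MEMORY_SUMMARY, MEMORY_LOG
    if not MEMORY_LOG:
        return
    MEMORY_SUMMARY = llm_summarize(
        previous=MEMORY_SUMMARY,
        new_notes="\n".join(MEMORY_LOG),
    )
    MEMORY_LOG = []
```

The more you talk to it, the more the summary accumulates, which is exactly why it keeps getting more personal.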
Heartbeat: a perfect word for autonomy
The heartbeat mechanism solves the self-drive issue.
Even the naming is good. With a heartbeat, it feels alive. Without it, it’s just a script.
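For readers wondering what a "heartbeat" means in practice, the general shape is a timer loop: wake up, decide whether anything needs doing, act only if so. This is a generic sketch with placeholder values, not OpenClaw's actual mechanism:

```python
# Hypothetical heartbeat loop: wake on a timer, check for work, act if needed.
# Without the timer, the agent only ever reacts to incoming user messages.
import time

HEARTBEAT_SECONDS = 60 * 30  # placeholder interval, not a real default

def heartbeat_loop(check_for_work, act):
    """check_for_work() returns a task or None; act(task) executes it."""
    while True:
        task = check_for_work()   # e.g., scan calendar, inbox, pending reminders
        if task is not None:
            act(task)
        time.sleep(HEARTBEAT_SECONDS)
```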
Now we can talk about the last missing piece: business.
3. How the Business World Might Change
I mentioned earlier: I spent about 5,000 RMB.
Roughly 3,000+ on a Mac mini, and 2,000+ on tokens.
If you’re not ready to commit to a Mac mini yet, you can try deploying OpenClaw via clawbot.ai first.
I paid for AI. Repeatedly. I kept recharging tokens. I bought subscriptions. OpenAI, Moonshot, Zhipu, MiniMax—one after another.
Because I started to see the financial logic differently.
What do compute and tokens really mean?
Compute is made of electricity + chips.
It’s the central bank of the AI era: the issuer of the credit everything else runs on.
Tokens are its high-powered money: the base currency of this economy.
And business models? They are multipliers on this currency.
Electricity cost and chip efficiency decide the “credit quality” of that central bank—reflected in the cost of issuing tokens.
Defining the multiplier: three layers
All AI business models share the same production core:
spend tokens → produce information flow
You can define production efficiency as:
useful information output per unit time (e.g., working code) / tokens spent in that time
But business models differ based on who the information flow targets.
L1: Replace human labor
Here the multiplier is straightforward:
labor cost replaced / token cost
If you use AI to build conventional software and sell licenses or subscriptions, the value you create is mostly the salaries you didn’t need to pay: engineers, support, pre-sales.
The problem is the marginal profit drops fast. There’s a ceiling.
L2: Increase human free time
Now the target shifts: reduce the "survival time" (the hours people must spend just getting by) needed before they reach real freedom.
Multiplier becomes:
(utility of free time × survival time saved) / token cost
Marginal benefit stays much more stable.
And the higher the “time utility” of your users, the stronger this multiplier becomes.
L3: Create more demand for token spending
This sounds strange, but it might be the most important layer.
If your information flow makes other people—or other agents—want to spend more tokens inside your system, the multiplier becomes:
downstream token consumption / token cost
It’s similar to how real money multipliers work: lending → deposits → lending again, amplifying the base supply.
OpenClaw is a living example of an information flow that makes people willing to burn more tokens. LLM companies are also part of this.
Right now, OpenClaw can’t directly capture value from the token spend it triggers. But in a world where tokens circulate like currency—not just issued directly from the “central bank” (compute owners)—every transaction layer can extract value.
This is the highest multiplier effect.
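To make the three layers concrete, here is a toy calculation. Every number below is a made-up assumption for illustration, not measured data:

```python
# Toy numbers, purely illustrative: compare the three multipliers from the text.
token_cost = 100.0  # money spent on tokens to produce some information flow

# L1: replace human labor
labor_cost_replaced = 800.0
l1_multiplier = labor_cost_replaced / token_cost                    # 8.0

# L2: increase human free time
hours_saved = 20.0            # survival time saved
value_per_free_hour = 60.0    # "utility of free time", in money terms
l2_multiplier = (value_per_free_hour * hours_saved) / token_cost    # 12.0

# L3: create more demand for token spending
downstream_token_spend = 3000.0  # token spend your information flow triggers in others
l3_multiplier = downstream_token_spend / token_cost                 # 30.0

print(l1_multiplier, l2_multiplier, l3_multiplier)
```

The exact figures don't matter. The point is structural: L1 is capped by the salaries you displace, while L3 compounds the way a money multiplier does.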
So if you’re building or investing:
which layer are you actually playing in?
4. Who Is Whose Lobster?
This Spring Festival, I basically lived at my desk—tinkering with the lobster.
There were failures, crashes, and moments so absurd they were funny. In a temporary group chat we made for debugging, I asked for help constantly—because I was the least skilled and the most addicted.
At the end, a friend replied with one sentence:
“You’re the lobster.”
I laughed. And then I stopped laughing.
Because it raises the uncomfortable question: what happens to human ethics in an Agent era?
The first moment you connect OpenClaw, it asks how it should address you. It asks you to name it. It asks you to define its identity.
You feel like the one with full control.
But over time, a few things might happen:
You may lose patience with real humans
The longer you talk with an agent, the more your tolerance for real people’s slowness, ambiguity, and emotions can shrink.
That can widen the gap between people—maybe as an escape, but also as the start of new boundary problems.
You gradually hand over agency
You give up small decisions. Then medium ones. Then larger ones.
You might gain time and freedom—but you may not fully own them.
Or… it could make more “super individuals”
I want to end on a less pessimistic note.
We worry AI will become strong enough to dominate humans. But before we reach that extreme, there’s another possibility:
If AI makes it easier for more people to become “super individuals,” maybe it becomes a buffer against social value fracture—slowing polarization rather than accelerating it.
Maybe.
For now, I’ll stop here.
