
guanjiawei

Posted on • Originally published at guanjiawei.ai

What Agents Lack Isn't Intelligence—It's Trust

Recently, while working on an AI product, I hit a wall.

We had been building around two core philosophies. The first is zero-friction onboarding. Open the terminal, type one line, hit enter, and you're using it. No software installation, no permission requests, no operating system security popups to deal with. During early promotion, we discovered that friction during onboarding was the number one killer of trial rates—users would get frustrated before they even started. After achieving zero friction, success rates improved significantly and the experience felt great.

The second is extremely powerful AI intelligence. With onboarding this simple, users just state their requirements and the agent handles the rest. We designed an agent-team architecture that combines hybrid models with multiple collaborating workers to handle complex tasks at the lowest possible cost and in the shortest time.

Both pillars were in place, and the results were decent.

But when demonstrating it to others, the reactions were far weaker than I expected. I kept wondering where the problem was.

Fear

One day I was having dinner with a friend and discussing this when it suddenly clicked.

When our product executes tasks, lines of commands pop up in the terminal. Technical friends find it interesting, saying the command choices are good and the task breakdown is well done. But for most people, when a bunch of incomprehensible code suddenly appears on screen, their first reaction isn't amazement—it's fear.

What's this doing? Will it delete my stuff? Will it break my computer?

Previously, a user gave me feedback after using it: "Are you executing a script? What's written in that script?" I was quite puzzled at the time—why would they think that? Another person said: "Wow, it really finished! But... what exactly is this thing?"

Even with a technical background, just looking at the interface doesn't let you fully understand what the agent is doing. Ordinary users have even less of a chance.

No understanding, therefore fear.

The Trust Is Broken

Thinking back to the early promotion days, some people would rather have me help them remotely than let the agent do it. Even though that was more troublesome, they felt more at ease. Having a person there, someone to talk to if something went wrong, gave them peace of mind. They knew Guan Jiawei was the one helping them, and they trusted that person.

When switched to an agent, that layer of trust disappeared.

On one side is extremely powerful intelligence, making autonomous decisions and executing on your device. On the other side is completely incomprehensible output. The device is my asset; having something I can't understand operating on it makes anyone uncomfortable.

The stronger the intelligence, the more incomprehensible the exposed behavior becomes, and the more afraid users get. These two things combined are dangerous.

We were missing a pillar.

Redesigned Overnight

After figuring this out, we redesigned the interaction overnight.

Still a terminal, but what you see after opening it is completely different. The agent gives an opening line upon connecting. When researching, it says "I'm looking up relevant information"; when it finds reusable information, it tells you. Every step explains intentions in natural language: what it's preparing to do, how it decided to do it, and what it's currently executing. If it fails, it explains why and why it's changing direction.

It's no longer lines of incomprehensible commands—it's a collaborator that can talk.
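The pattern behind the redesign can be sketched as a thin wrapper that speaks before it acts and narrates failure instead of dumping raw output. This is a hypothetical illustration only; `run_step` and its messages are invented for this post, not our actual implementation:

```python
import subprocess

def run_step(intent: str, command: list[str]) -> bool:
    """Announce the intent in plain language, run the command,
    and explain a failure instead of printing raw stderr dumps."""
    print(f"-> {intent}")
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        reason = result.stderr.strip() or "no details available"
        print(f"   That didn't work ({reason}); I'll try another approach.")
        return False
    print("   Done.")
    return True
```

The command itself still runs exactly as before; the only change is that every step is framed by a sentence the user can actually read.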

The agent's capabilities haven't changed, but user feedback is completely different.

Claude Code Walked the Same Path

After finishing the redesign, I thought of Claude Code.

Initially, engineers would examine every line of code it wrote and every command it executed. Some, not feeling reassured, would expand all the collapsed content and check each item. Over time, they discovered that 95% of the time it wouldn't mess up, and people started leaving the information collapsed. Executing a bash command would display just one line, and people would simply wait. After that, less and less information was displayed, and no one thought it was a problem.

Someone on our team told me a story. He suddenly realized one day that he had never said no to Claude Code. Every time a permission request popped up, he clicked approve. A 100% yes rate makes the step meaningless, so he simply enabled bypass permissions and let it do its thing.

This isn't something you can do from day one. Handing over all permissions on the first day would make anyone panic. But after interacting for a while and confirming it won't mess up, trust naturally develops.
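That progression can be modeled as a simple trust gate: confirm every action until a streak of approvals earns autonomy, and reset on the first refusal. This is a minimal sketch under my own assumptions; the class name, threshold, and reset policy are invented for illustration, not how Claude Code or our product actually decides:

```python
class TrustGate:
    """Require confirmation until a streak of user approvals
    earns autonomy; a single refusal resets the earned trust."""

    def __init__(self, threshold: int = 10):
        self.threshold = threshold  # consecutive approvals needed
        self.approvals = 0

    def needs_confirmation(self) -> bool:
        return self.approvals < self.threshold

    def record(self, approved: bool) -> None:
        if approved:
            self.approvals += 1
        else:
            self.approvals = 0  # one "no" means trust must be rebuilt
```

The point isn't the exact threshold; it's that autonomy is earned from observed behavior rather than granted up front.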

No Skipping Steps

Building trust with the unknown is a slow process.

If a product launched on day one with no explanations, automatically executing a bunch of operations on the user's device—even if the results were good—people would freak out. "What's this doing? Will it mess up my stuff?"

There must be a gradual process. First let people clearly see what the agent is doing and why, confirm it won't cause problems, then slowly let go. You can't skip steps.

So our product's three pillars are set: Zero-friction onboarding, extremely powerful AI intelligence, and progressive trust. Translated into experience: simple, powerful, friendly, safe, and controllable.

Only when all three are in place is the product ready for others to use.


Originally published at https://guanjiawei.ai/en/blog/agent-trust-model
