I previously wrote about the cost issues and power dynamics of edge AI. Recently, while chatting with friends in the hardware business, I encountered a problem: they've been hesitant to release edge devices pre-installed with AI Agents to the market. They're afraid something will go wrong, afraid that users will mess something up with the Agent and then come looking for them.
The more I thought about it, the more I realized: what they fear isn't security—it's law.
Agent Is No Longer Just a Tool
Previous software was a tool. If you cook the books with Excel, nobody sues Microsoft. Tools don't bear responsibility; humans are the subjects.
Agents are different. You give it a vague instruction like "help me make money," and it decides how to proceed on its own. It plans the path, executes, and adjusts when it encounters problems. It isn't running scripts; it's making decisions.
A computer plus an AI Agent constitutes both the instrument of a crime and the perpetrator. A complete package. Legally, we've never encountered anything like this before.
Selling vs. Renting
You sell a house with a computer inside and hand over the keys. A year later, the buyer commits a crime with that computer. Are you liable?
No. The asset has transferred; it's none of your business. As long as you guaranteed it worked at the time of sale, didn't install viruses, and provided adequate risk warnings, that's sufficient.
Now flip the scenario. You rent out a house with a computer inside, and the tenant uses it to commit a crime. Are you liable?
Yes. It's your asset, your space—you have a duty of supervision. When things go wrong, you bear joint liability.
Selling and renting are completely different legal concepts.
Edge devices are sold to you. Cloud services are rented to you.
Selling Cigarettes vs. Running an Opium Den
To put it more bluntly.
What does a cigarette seller need to do? Print "Smoking is harmful to health" on the package. You buy it, smoke yourself sick, and come back to sue him? "I warned you." It's a one-time transaction; liability ends there.
What about running an opium den? You're in my establishment, using what I provide. If something happens, I'm an accomplice. I provided the venue, the tools, and the product, and I took the money. I participated throughout.
Think about what cloud vendors are doing. They treat computing power as a venue, rent AI Agents to you, and throw in the network, browser, and various APIs. Charging by the hour.
What's the legal difference between this and running an opium den?
100 Brothers
Another example.
Suppose I have 100 younger brothers. They're obedient but possess autonomous judgment—they're not robots. I rent them out monthly for 100 bucks each. Someone rents 10 of them, tells them "help me make money," and nothing else. My brothers figure out what to do themselves. One of them isn't too bright and commits a crime.
Go after the renter? He says, "I only said 'help me make money,' I didn't tell him to break the law." Go after that specific brother? He doesn't have independent legal personhood. That leaves only me. The 100 brothers are mine; I rented out people with autonomous capabilities. I'm responsible for their actions.
This is the logic of cloud vendors deploying AI Agents. The Agent runs on the vendor's servers, using the vendor's IP addresses. When something happens, the end of the accountability chain is the vendor.
No Need to Imagine—It's Already Happened
There used to be a platform called Kuai Gou Da Che (Fast Dog Trucking), doing crowdsourced freight. Someone deliberately placed an order with the address "Tiananmen" and the note "transporting bombs." The platform didn't stop it; the order went out. The national security bureau called the company directly.
In that scenario, the vehicle wasn't the platform's, the driver wasn't the platform's—the platform was just a matchmaker. Yet the joint liability was already severe.
Now consider: if the Agent is yours, the computing power is yours, the network is yours, the browser is yours too, and you don't fully know what users are telling the Agent to do. How heavy is this liability?
Someone might use an Agent for gray-market businesses, gambling, or politically sensitive activities. Some might even specifically try to entrap and smear you. You can't withstand the volume—millions of instances. Even if only a few severe incidents occur among them, how do you cover that?
Two Dead Ends
Cloud vendors are ultimately left with only two choices.
One is extremely strict control. Politically sensitive topics, gray areas, operations with even a hint of risk—all blocked, stricter than current large language models. Block until you realize the Agent can't do anything useful anymore.
The other is pricing in the risk. When severe incidents happen, you have to compensate, you have to underwrite it. Costs are amortized into the pricing, making it so expensive that ordinary people can't afford it.
Strict control: nobody wants to use it. High prices: nobody can afford it.
The Legal Advantage of Edge
Look at edge instead.
The device is sold to you. The asset belongs to you; the liability belongs to you. The vendor's obligations are to provide adequate risk warnings, guarantee the hardware functions normally, and provide basic maintenance support. It's like selling security doors—I take responsibility for the door's quality, but not for the 2 million dollars in your safe. If you think 2 million is important, you should spend 200,000 on a high-end door and hire two security guards, not claim damages from me for a 2,000-dollar door.
The Agent runs on your own machine; whatever you do with it is your own responsibility. The vendor doesn't need to supervise you, doesn't need to censor you.
OpenClaw's architecture fits this logic too. Backend tools all run on localhost, and the Gateway's only channel to the outside world is a long-lived IM connection. Nothing listens on the public internet, so outsiders can't reach you. Security is an architectural matter, not a management matter.
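To make the localhost-only pattern concrete, here is a minimal sketch (not OpenClaw's actual code; the handler and its response are invented for illustration). A tool server bound to the loopback address 127.0.0.1 is reachable from processes on the same machine, but no external host can connect to it, so there is nothing for an outsider to attack:

```python
import http.server
import threading
import urllib.request

# Hypothetical local "tool" endpoint. Binding to 127.0.0.1 (loopback)
# means only processes on this machine can ever reach it; it is
# invisible from the network, with no firewall rule required.
class ToolHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"local tool ok")

    def log_message(self, *args):
        # Silence per-request logging for this demo.
        pass

# Port 0 asks the OS for any free ephemeral port.
server = http.server.HTTPServer(("127.0.0.1", 0), ToolHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Reachable from the same machine:
local = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(local)  # b'local tool ok'

server.shutdown()
```

The outward-facing half of the pattern is the mirror image: instead of accepting inbound connections, a gateway process dials out (here, the long-lived IM link) and receives commands over that connection it initiated, so the machine exposes no open port at all.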
Rights and responsibilities unified under the device holder. The cleanest possible legal structure.
Regulations Will Come
AI Agent-related regulations will emerge sooner or later; it's only a matter of time.
The core question is: how to delineate liability for Agents. Simple: whoever owns it is responsible for it. Just like your car—if someone else drives your car and hits someone, the owner also bears partial liability.
Once this logic is established, edge devices are the biggest beneficiaries. The device belongs to you; ownership is clear.
Meanwhile, look at the cloud setup: Agent capability in the cloud vendor's hands, instructions in the user's hands, data on both sides. When something goes wrong, whose is it? Unclear. And unclear is the most dangerous thing to be in the eyes of the law.
Three Threads, One Direction
The first post covered cost—the TCO of edge devices is being dramatically reduced by AI-powered operations. The second post covered power—compute privatization as a counterbalance to cloud monopolies. This third post fills in the law—the legal structure of edge is an order of magnitude cleaner than cloud.
Three threads point in the same direction: The destination of AI Agents is not in the cloud, but on the edge.
Every individual, every household, every enterprise may need their own small servers. Cheap, stable, running 24/7. Your machine, your rules.
Originally published at https://guanjiawei.ai/en/blog/edge-ai-legal-boundary