The first real battle over AI web access has started, and it is not about scraping. It is about identity, control and the future architecture of au...
While the post focuses on the technical consequences, I want to use this sentence to point out the human consequences.
Who is at fault when an agent starts buying expensive things, or starts posting crazy content?
People are still vulnerable to phishing; what if malicious actors take over their AI browser?
People don't need to be scared of an AI apocalypse, but of all the ways AI can make their lives more difficult.
If an AI agent can execute high-impact actions without:
• signed user delegation
• scoped permissions
• transaction confirmation
• audit logs
then the problem isn’t “the agent bought something,” the problem is no security boundary existed in the first place.
The future won’t be agents running wild.
It will be the same controls we already use for API auth and financial systems:
revocable keys, consent prompts, per-action policies, and liability trails.
Once those are in place, an agent buying a laptop is no different from a password manager auto-filling checkout data: still user-initiated, still traceable, still reversible.
The threat isn’t agents acting.
The threat is agents acting without identity, policy or accountability.
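The controls listed above can be sketched as a minimal policy gate. This is a toy illustration, not any real framework's API; the `Delegation` and `execute_action` names, the scopes, and the limits are all hypothetical assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Delegation:
    """Hypothetical signed grant from a user to one agent (illustrative only)."""
    agent_id: str
    scopes: set                                  # e.g. {"purchase"}
    spend_limit: float = 100.0                   # per-transaction cap
    needs_confirmation: set = field(default_factory=lambda: {"purchase"})

audit_log = []  # liability trail: every decision is recorded

def execute_action(grant, action, amount=0.0, confirmed=False):
    """Allow an action only inside the delegated scope; log every decision."""
    if action not in grant.scopes:
        audit_log.append((grant.agent_id, action, "denied: out of scope"))
        return False
    if action in grant.needs_confirmation and not confirmed:
        audit_log.append((grant.agent_id, action, "denied: no user confirmation"))
        return False
    if amount > grant.spend_limit:
        audit_log.append((grant.agent_id, action, "denied: over spend limit"))
        return False
    audit_log.append((grant.agent_id, action, "allowed"))
    return True
```

With a structure like this, revocation reduces to deleting the grant, and a dispute reduces to replaying the audit log.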
True, the problem is not the agent as a helper.
The problem is whether the agent acts in the best interest of the user.
As far as I know there is no marketplace for agents. While as developers we like to complain about marketplaces, I think one genuinely useful function is that they provide a form of vetting and security.
It is not perfect, but for people who are not technical it is better than nothing.
You’re oversimplifying the “user delegation” argument. If an agent can impersonate a user and bypass rate limits, then it is a bot whether you call it delegation or not. Every fraud system in the world treats it that way.
Fair point. The term “delegation” won’t survive once risk engines classify the traffic as synthetic. What matters is not the intention, but the signal profile. If the traffic pattern is machine-scaled, it will get flagged regardless of the legal framing.
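That kind of signal-profile classification can be sketched with a toy heuristic: near-constant request spacing at high rates reads as machine-scaled regardless of who authorized it. The `is_synthetic` name and both thresholds are illustrative assumptions, not any real risk engine's logic:

```python
import statistics

def is_synthetic(timestamps, rate_threshold=5.0, jitter_threshold=0.05):
    """Flag a session whose request pattern looks machine-scaled.

    Heuristic sketch: humans click irregularly at low rates; agents tend
    to fire requests quickly with near-constant spacing. Thresholds are
    illustrative, not tuned values from a production system.
    """
    if len(timestamps) < 3:
        return False  # not enough signal to classify
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    rate = 1.0 / (sum(gaps) / len(gaps))   # mean requests per second
    jitter = statistics.pstdev(gaps)       # regularity of spacing
    return rate > rate_threshold or jitter < jitter_threshold
```

Note that nothing here inspects intent or legal framing; the classifier only sees the shape of the traffic, which is exactly the point.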
Let's see where this is going...
I'm betting on stricter bot detection, plus user rights extending to agents, but with more CAPTCHAs to solve.
That's the only outcome that seems logical to me personally.
The real question isn’t whether agents can solve CAPTCHAs.
It’s whether platforms will legally allow an agent to be treated as a first-class user.
The whole “web is the API” argument collapses the moment the traffic becomes industrialised. Public HTML is not a rights contract. Platforms have zero obligation to support unapproved machine access.
Correct. Public visibility is not the same as programmatic entitlement. Once autonomous agents industrialise retrieval, platforms will enforce scarcity through policy or protocol. That is the economic layer people ignore.
Platforms don’t block headless browsers because of risk. They block them because they kill monetisation and offload costs onto the host.
That’s accurate. The enforcement layer is framed as security, but the root driver is economic asymmetry. Agents externalise compute, bandwidth and UI while extracting value. No platform will allow that indefinitely.