One of the things missing here is the ongoing action of the agent. Much of what's being compared to are "one and done" deleterious actions. The problem with AIs isn't really "one and done". Indeed, the whole point of creating an AI is to create a long-running process to address a number of discrete problems.
In the case of someone using an agent for a single action, sure, the notions above apply. However, in the case of ongoing agency, you bear a responsibility for evaluating each of your agent's actions to ensure they were done in a way that reflects your values. If you use an agent over and over, it implies that you endorse the results produced by that agent.
Effectively, "once is an accident," but beyond that...
That is very true!
That's similar to a case in the Talmud where someone's ox gored another ox: the owner of the ox has to pay half the damage, but that only applies the first and second time. From the third time onwards, the owner has to pay the entire damage, because by then it's known that the ox is aggressive, so the owner should have been more careful.