I wrote a shorter technical note on why prompt injection becomes harder once we move from chatbots to agents.
The problem is not only that a model may follow a bad instruction.
The harder case is when untrusted content travels through a workflow: retrieval, summaries, memory, tool outputs, and later decisions.
That is where prompt injection starts to look like a missing trust boundary.
Full article:
https://msukhareva.substack.com/p/prompt-injection-is-not-just-one
Top comments (0)