DEV Community

Mark0

The Promptware Kill Chain

The article introduces the "promptware kill chain," a structured seven-step framework for characterizing attacks against large language models (LLMs). Moving beyond the narrow scope of "prompt injection," the authors argue that LLM-based systems are susceptible to a distinct class of malware, which they term "promptware": malicious prompts that the model executes as if they were code. The framework tracks an attack from initial access—often through indirect injection in multimodal inputs—to privilege escalation, reconnaissance, and persistence.

The kill chain highlights a fundamental architectural flaw in LLMs: the lack of separation between executable instructions and user data. By examining phases such as lateral movement across integrated applications and final actions on objectives—such as data exfiltration or fraudulent transactions—the framework provides a shared vocabulary for systematic risk management. Real-world examples like AI worms and calendar-based exploits demonstrate how these stages manifest in practice, motivating defensive strategies that focus on breaking the chain at any stage rather than merely patching individual injections.
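The "break the chain" idea can be illustrated with a minimal sketch: model the stages the article names as an ordered list, and observe that an attack only reaches its objective if every stage completes, so blocking any single stage defeats the end-to-end attack. The stage names and the example trace below are assumptions drawn from this summary, not the paper's own formalism.

```python
# Illustrative sketch (assumed stage names, not the paper's exact taxonomy):
# an attack reaches its objective only if every kill-chain stage succeeds,
# so a defense that disrupts any one stage breaks the whole chain.

STAGES = [
    "initial_access",         # e.g. indirect injection via a multimodal input
    "privilege_escalation",
    "reconnaissance",
    "persistence",
    "lateral_movement",       # pivoting across integrated applications
    "actions_on_objectives",  # data exfiltration, fraudulent transactions
]

def attack_succeeds(completed_stages):
    """Return True only if every stage of the chain was completed."""
    return all(stage in completed_stages for stage in STAGES)

# A hypothetical calendar-based exploit that completes every stage:
full_trace = set(STAGES)
print(attack_succeeds(full_trace))                        # True

# A defense that blocks just lateral movement breaks the chain:
print(attack_succeeds(full_trace - {"lateral_movement"})) # False
```

This is why chain-oriented defenses are attractive: a defender needs to reliably disrupt only one stage, while the attacker must succeed at all of them.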

