DEV Community

Cover image for AI/LLM Security Risks
John Mitchell
John Mitchell

Posted on

AI/LLM Security Risks

Everyone knows OWASP has a convenient "top 10" list of potential security risks, e.g. SQL Injection.
Turns out they have other lists too, like
Risks for LLMs!

Here's the first three LLM security risks:

  • LLM01: Prompt Injection

Manipulating LLMs via crafted inputs can lead to unauthorized access, data breaches, and compromised decision-making.

  • LLM02: Insecure Output Handling

Neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data.

  • LLM03: Training Data Poisoning

Tampered training data can impair LLM models leading to responses that may compromise security, accuracy, or ethical behavior.

They also have risks for APIs!

And, also an entire site of cheat sheets!

Top comments (0)