Most people worry about prompt injection.
They should also worry about what the model already learned to trust.
A few poisoned fragments in training data, retrieved context, or synthetic pipelines can make AI fail in ways that look calm, credible, and repeatable.
That is what makes data poisoning so dangerous.

Top comments (0)