Most security teams are operating on a 2023 threat model in a 2026 environment. The model says: protect endpoints, manage identities, monitor cloud workloads. It doesn't say anything about what happens when a developer deploys an LLM-backed application that accepts user input, processes it against internal data, and outputs results - with no jailbreak protection, no output filtering, and no audit trail.
Prompt injection is not theoretical. The OWASP Top 10 for LLM Applications published it as the primary risk category. A user crafts input that overrides the system prompt. The model executes instructions it was never supposed to receive. If the model has access to internal APIs, a database connection, or a file system - the blast radius is not a chatbot giving a weird answer. It's data exfiltration through a surface your WAF has never seen.
Lasso Security sits specifically at this layer - detecting prompt injection attempts, monitoring LLM inputs and outputs for policy violations, and providing visibility into what your AI applications are actually doing. Not what they're supposed to do. What they do.
HiddenLayer addresses a different vector: the model itself. ML models are assets with attack surfaces. Model inversion attacks can extract training data. Adversarial inputs can force misclassification at production scale. If your fraud detection model gets fooled by adversarially crafted transactions, the problem is not in your app code - it's in the model weights. HiddenLayer monitors model behavior in production and detects anomalous inference patterns before they become incidents.
Governance is where most AI programs stall. Building a model is fast. Documenting it, auditing its decision logic, and proving to a regulator that it behaved fairly and within policy is a different project entirely. Credo AI and Arthur AI both tackle this - AI governance platforms that track model performance, bias, drift, and policy compliance over time. Arthur leans more toward monitoring production model health. Credo leans toward governance documentation and risk assessment for regulated industries.
The data side of AI security is underestimated. LLMs trained or fine-tuned on internal data carry that data's exposure profile. RAG architectures connecting models to document stores or databases can surface information the model was never explicitly told about - it simply retrieved it. Microsoft Purview handles data classification and governance at the source, tracking sensitive data across cloud environments and flagging when it flows somewhere unexpected. For organizations already in the Microsoft ecosystem, this is the practical starting point.
Wiz gives you the cloud infrastructure picture - misconfigured storage buckets, over-privileged IAM roles, exposed secrets in container images. The attack paths that AI applications open are often not in the AI layer itself but in the infrastructure they connect to.
Compliance documentation for AI systems is becoming legally mandatory in certain jurisdictions. The EU AI Act is real. SOC 2's coverage of AI controls is expanding. Drata automates the evidence collection side - continuous monitoring connected to your actual systems, rather than screenshot folders assembled before an audit window.
The Security Stack Analyzer at includes AI Security as a distinct coverage category. If you're deploying LLM applications and your current stack has no tooling specifically in that row, the coverage gap number will reflect it.
Nobody secured AI applications well in 2023 because nobody had many AI applications in production in 2023. That window closed.
Focus: AI Security, LLM risks, data governance
Products: Lasso Security, HiddenLayer, Credo AI, Arthur AI, Microsoft Purview, Wiz, Drata
Top comments (0)