Over the last few years, AI has quietly moved from “interesting side project” to “core production workload.”
- Agents now call tools.
- LLMs touch internal APIs.
- Retrieval pipelines pull from sensitive data.
And all of this runs continuously, not in isolated demos.
During Launch Week 2026, we took a step back and asked a simple question:
If AI systems are becoming part of our production stack, why are we still securing them like experiments?
That question shaped everything we released during the week.
The problem isn’t AI. It’s the lack of runtime visibility.
Most teams we talk to are not struggling to build AI features.
They are struggling to understand what those features do once they’re live.
Questions like:
- Which models are being used and by whom?
- What tools or MCP servers are agents calling?
- Where does sensitive data flow during prompts and responses?
- What happens when an agent loops, retries, or behaves unexpectedly?
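Answering questions like these requires runtime telemetry, not static review. Here is a minimal, purely hypothetical sketch of recording agent tool-call events and querying them; every name (`ToolCallEvent`, the agents, tools, and models) is illustrative, not part of any real product:

```python
import json
import time
from dataclasses import dataclass

@dataclass
class ToolCallEvent:
    """One runtime observation of an agent calling a tool."""
    agent_id: str   # which agent executed the call
    user_id: str    # who the agent acted on behalf of
    tool: str       # tool or MCP endpoint invoked
    model: str      # model that drove the call
    ts: float       # unix timestamp

events: list[ToolCallEvent] = []

def record(agent_id: str, user_id: str, tool: str, model: str) -> None:
    events.append(ToolCallEvent(agent_id, user_id, tool, model, time.time()))

# Answering "which models are being used, and by whom?" from the event stream:
record("support-agent", "alice", "tickets.search", "gpt-4o")
record("support-agent", "bob", "crm.lookup", "gpt-4o")

models_by_user: dict[str, set[str]] = {}
for e in events:
    models_by_user.setdefault(e.user_id, set()).add(e.model)

print(json.dumps({u: sorted(m) for u, m in models_by_user.items()}))
```

The same event stream can answer the other questions too: grouping by `tool` inventories what agents call, and counting repeated identical calls surfaces loops and retries.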
Traditional security tools were never designed to answer these questions because they assume static systems.
AI systems are not static.
Why we focused on runtime, not theory
Instead of publishing frameworks or high-level “AI risk” checklists, Launch Week focused on runtime primitives that developers can actually work with.
Some examples from the week:
- AI Firewall for policy enforcement and inspection where AI apps actually run (see the release note for details).
- AI Gateway to bring identity, quotas, and governance to third-party LLM usage (see the release note for details).
- MCP Discovery and Testing to continuously inventory and validate the control planes that power agents (see the MCP Discover and MCP Security release notes for details).
- Agentless API discovery and integrations that reduce friction and avoid heavy instrumentation (see the release note for details).
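To make "policy enforcement and inspection where AI apps actually run" concrete, here is a hypothetical in-process check that inspects a prompt before it leaves the application. The patterns and the function name are illustrative assumptions, not the product's actual rules or API:

```python
import re

# Illustrative deny-list a runtime policy might enforce on outbound prompts.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US-SSN-shaped numbers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # embedded API keys
]

def inspect_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Runs in-line with the app, not at a distant proxy."""
    for pat in SENSITIVE_PATTERNS:
        if pat.search(prompt):
            return False, f"blocked: matched {pat.pattern}"
    return True, "ok"

allowed, _ = inspect_prompt("Summarize ticket 42")
blocked, why = inspect_prompt("my ssn is 123-45-6789")
```

Enforcing at the point of execution, rather than only at a gateway, is what lets the check see the real prompt the model receives.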
The common theme was simple:
Security should observe real behavior, not inferred intent.
MCPs and agents changed the threat model
One of the biggest shifts we’re seeing is the rise of MCP servers and autonomous agents.
These systems blur traditional boundaries:
- One agent can act on behalf of many users.
- Execution identity is often different from request identity.
- A single workflow can span models, tools, APIs, and external services.
If you only look at logs or gateways, you miss the story.
If you only look at prompts, you miss the blast radius.
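The split between request identity and execution identity can be sketched in a few lines. This is an illustrative pattern, with hypothetical names, showing why an audit record needs to carry both identities through the workflow:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CallContext:
    """Carries both identities through each step of an agent workflow."""
    request_identity: str    # the end user the work is for
    execution_identity: str  # the agent/service principal actually executing

def call_tool(ctx: CallContext, tool: str) -> dict:
    # Logging only the execution identity (the agent's service account)
    # hides the per-user blast radius; correlation needs both fields.
    return {
        "tool": tool,
        "on_behalf_of": ctx.request_identity,
        "executed_as": ctx.execution_identity,
    }

ctx = CallContext(request_identity="alice", execution_identity="svc-agent-7")
audit_record = call_tool(ctx, "crm.lookup")
```

One agent serving many users produces many distinct `on_behalf_of` values under a single `executed_as`, which is exactly the correlation problem gateway logs alone cannot solve.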
That’s why discovery and correlation mattered so much in what we launched.
Developers don’t want blockers. They want confidence.
A pattern we saw repeatedly during Launch Week conversations:
Developers are not asking for more approvals.
They are asking for confidence that what they’re shipping won’t surprise them later.
Confidence comes from:
- Knowing what exists.
- Knowing how it behaves.
- Knowing where data flows.
- Knowing when something changes.
When those are in place, speed follows naturally.
Launch Week was not a finish line
Launch Week 2026 wasn’t about declaring AI security "solved".
It was about putting real, usable building blocks into the hands of teams who are already shipping AI.
If you’re building or operating AI systems today, the real work starts after deployment.
That’s where visibility, governance, and runtime protection actually matter.
If you’re curious about what we released and why, the full Launch Week recap walks through each launch in detail and the thinking behind it.
AI systems are production systems now.
It’s time we secure them like it.