Everyone’s talking about attackers using AI vs defenders using AI.
But after working closely with teams at Periscope Technologies Inc, I’m starting to feel like that’s not where things are breaking.
What we’re seeing more often:
AI inside companies already has:
• Access to sensitive data
• Ability to trigger workflows
• Decision-making power
But very little verification around:
👉 What it’s doing
👉 Why it’s doing it
👉 Whether it should be doing it
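To make those three questions concrete, here's a minimal sketch of what an execution-verification gate could look like. Every name here (ALLOWED_ACTIONS, verify_and_execute) is hypothetical, just to illustrate the idea: record what the agent wants to do and why, then check policy before anything runs.

```python
# Hypothetical sketch: verify an AI agent's action before executing it.
# Not a real framework API — names are illustrative only.

ALLOWED_ACTIONS = {"read_report", "draft_email"}  # hypothetical policy allowlist

def verify_and_execute(action: str, reason: str, execute) -> str:
    """Answer the three questions before anything runs:
    what it's doing (action), why (reason), whether it should (policy)."""
    audit_record = {"what": action, "why": reason}   # verifiable trail, not just logs
    if action not in ALLOWED_ACTIONS:                # the "should it?" check
        audit_record["decision"] = "blocked"
        return f"blocked: {action}"
    audit_record["decision"] = "allowed"
    return execute()

# An over-permissioned agent gets stopped at the gate:
verify_and_execute("delete_customer_data", "cleanup", lambda: "done")
```

The point isn't the allowlist itself; it's that the execution path forces the agent to declare intent and passes through a check a human (or policy engine) can audit.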
Traditional security made sense when:
• Users were human
• Behavior was predictable
• Access was controlled
AI doesn’t fit that model at all.
Feels like we’re focusing heavily on external AI threats…
While ignoring a new category of risk:
Unverified AI execution inside systems
Curious if others here are seeing this?
Or is the focus still mostly on external threats?