Corellium Sold for $170M. Here's What They Couldn't Do.
I'm the creator of Drengr, an MCP server that gives AI agents eyes and hands on mobile devices. I started this blog to share the engineering behind it. No pretending to be a neutral observer writing a think piece — I built this, and I'm here to talk about it.
$170M for a Virtual Phone
Cellebrite — the company law enforcement calls when they need to crack a phone — just acquired Corellium for $170 million. Corellium virtualizes iOS and Android devices in the cloud. You get a full device image running on a remote server, with root access, JTAG debugging, and kernel introspection. Security researchers use it to hunt vulnerabilities. Governments use it for forensic analysis.
$170M. For the ability to look inside a phone.
That number tells you something: programmatic access to mobile devices is not a niche. It's infrastructure. And it's being valued like infrastructure.
What Corellium Does (and Doesn't)
Corellium gives you a virtualized device. You can boot it, inspect its memory, modify its filesystem, attach a debugger. It's a microscope.
What it can't do: use the phone like a human.
It can't tap a button. It can't type a search query. It can't swipe through a feed, navigate a checkout flow, or verify that a login screen actually works after a deploy. It wasn't built for that. It was built for reverse engineering and security research — looking at the internals of the device, not interacting with its UI.
That's a fundamentally different problem. Corellium answers: "What is this device doing internally?" Drengr answers: "Can an AI agent operate this device the way a user would?"
The Missing Layer: Actuation
The mobile device stack has three layers of programmatic access:
Layer 1: Observation — See what's on screen. Take screenshots, read the UI tree, dump logs. Every testing tool does this.
Layer 2: Virtualization — Run the device as a virtual machine. Inspect memory, modify the OS, simulate hardware. This is Corellium's $170M business.
Layer 3: Actuation — Interact with the device as a user. Tap, type, swipe, long press, launch apps, navigate flows. Not through scripts with hardcoded selectors, but through an AI agent that sees the screen and decides what to do.
Layers 1 and 2 have billion-dollar companies behind them. Layer 3 — AI-driven actuation on real mobile devices — is where the gap is. That's the layer Drengr occupies.
How Drengr Fills the Gap
Drengr is a single Rust binary that exposes mobile devices to AI agents via the Model Context Protocol (MCP). Three tools:
- `drengr_look` — The agent sees the screen, either as a compact ~300-token text description or as an annotated image with numbered elements. Text-first by default — 100x cheaper than sending screenshots.
- `drengr_do` — The agent acts. Tap, type, swipe, long press, back, home, launch, scroll — 13 actions that cover the full interaction surface. Each action returns a situation report: what changed, what appeared, what disappeared, whether the app crashed or got stuck.
- `drengr_query` — The agent asks questions. What's the current activity? Did the app crash? What HTTP calls happened? What does the UI tree look like?
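On the wire, each of these is an ordinary MCP `tools/call` request over JSON-RPC 2.0. Here's a minimal Python sketch of what a `drengr_do` invocation might look like; the argument names (`action`, `target`) are illustrative assumptions, not Drengr's documented schema:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 message for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical invocation: "action" and "target" are assumed argument
# names for illustration; consult the tool's actual input schema.
msg = make_tool_call(1, "drengr_do", {"action": "tap", "target": "Sign in button"})
print(msg)
```

The point of the shape, not the field names: every tool invocation is the same generic envelope, which is what lets any MCP client drive any MCP server.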
The AI client — Claude Desktop, Cursor, Windsurf, VS Code — is the brain. Drengr is the hands. The agent looks at a screen it has never seen before, reasons about what to do, and does it. No pre-programmed selectors. No XPath. No brittle scripts that break when the designer moves a button.
Here's the kind of task file this enables:

```yaml
app: com.example.app
tasks:
  - name: login
    task: "Log in with user@test.com and password123"
    timeout: 60s
  - name: checkout
    task: "Add headphones to cart and complete purchase"
    timeout: 90s
```
That YAML survived three redesigns. The AI adapted every time.
Why MCP Matters Here
The MCP ecosystem is exploding. MCPNest indexes over 5,000 MCP servers. MCP Shield audits them for supply chain attacks. Scoring platforms rank them by quality. The protocol is becoming the standard interface between AI agents and external tools — the same way LSP became the standard between editors and language servers.
Drengr is the MCP server for mobile devices. It connects to any MCP-compatible AI client without modification. When a better model comes out, you swap the brain. The hands stay the same. When someone builds a better orchestrator, it works with Drengr out of the box.
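Concretely, wiring an MCP server into a client is a few lines of configuration. This sketch assumes Claude Desktop's `claude_desktop_config.json` format and that the installed binary is simply called `drengr` with no required arguments — check Drengr's docs for the exact entry:

```json
{
  "mcpServers": {
    "drengr": {
      "command": "drengr"
    }
  }
}
```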
This is why the architecture matters more than the features. Corellium is a proprietary platform — you use their cloud, their API, their tools. Drengr is a protocol-native server. It plugs into the ecosystem that's forming right now, not a walled garden.
The $170M Signal
When Cellebrite pays $170M for Corellium, they're not buying a product. They're buying a position in the mobile device access market. They're saying: programmatic control of mobile devices is critical infrastructure, and we'll pay nine figures to own a piece of it.
Virtualization was the first wave. Observation was the zeroth. Actuation — letting AI agents operate devices autonomously — is the next.
The companies that figured out how to let machines look at phones built hundred-million-dollar businesses. The companies that figure out how to let machines use phones will build bigger ones.
I don't know if Drengr becomes that. But I know the layer it occupies — AI-native device actuation via an open protocol — is the layer that doesn't exist yet at scale. And $170M says the market is paying attention.
Where Drengr Stands Today
Real devices. Real interactions. Real results:
- Android: physical phones, emulators, cloud device farms (BrowserStack, SauceLabs, AWS Device Farm, LambdaTest, Perfecto, Kobiton)
- iOS: full simulator support — tap, type, swipe, pinch zoom, long press, scroll
- Multi-device: connect Android and iOS simultaneously, switch with a parameter
- Any AI client: Claude Desktop, Cursor, Windsurf, VS Code — anything that speaks MCP
One install:

```shell
npm install -g drengr
```
The binary is 5MB. Written in Rust. No runtime dependencies. It runs on your machine, talks to your devices, and gives any AI agent the ability to operate a phone.
Drengr is free to use and available on npm today.