Twenty years of shipping software. Rails since 1.2. Native mobile since the iPhone 3G. And still—nothing humbles me like a Turbo Native app that feels “sluggish” and won’t tell me why.
You know the scenario. Users leave 2-star reviews: “It’s fine, but… slow sometimes.” Your team runs Lighthouse on the web version: 95+ Performance score. Native shell is just a WKWebView, right? Should be fast. But it’s not. And you’re blind.
That’s when I learned: monitoring a Turbo Native app is not like monitoring a website. It’s not even like monitoring a regular native app. It’s a hybrid ghost—part web, part native, all lies. Sentry became my exorcist.
This is the journey of learning to see what your users feel. Senior devs who’ve debugged memory leaks in IE6 and packet loss on dial-up: you’ll feel right at home.
## The Day I Realized RUM Was Lying
We had Sentry’s JavaScript SDK in the web views. Great. We saw page load times, JS errors, API call durations. All looked healthy: median 1.2s to interactive.
But our iOS beta testers kept saying: “The back button stutters.” Not the page load. The transition. The gesture. A thing that has no JavaScript.
Because Turbo Native doesn’t just reload HTML. It manages a native navigation stack—UINavigationController pushing and popping WKWebView instances. And when that stack has five web views, each holding a full DOM, memory pressure causes the native animation to drop frames.
Your Sentry browser SDK sees nothing. No console log. No error. Just a buttery-smooth 60fps claim while the user feels a hitch.
I needed to instrument the bridge.
## Building a Two-Sided Stopwatch
The breakthrough came when I realized: performance in Turbo Native happens in two worlds, and Sentry can capture both—if you force them to talk.
We started adding spans that cross the native/JavaScript boundary:
In native code (iOS shown; the Android side is analogous):
```swift
// iOS: start a Sentry transaction when a Turbo visit begins
let transaction = SentrySDK.startTransaction(
    name: "turbo.navigation",
    operation: "ui.load"
)
// (call transaction.finish() when the visit completes,
// e.g. in the Turbo session delegate)

// Inject the native start time into the web view before load
let script = "window.__turboNativeStart = \(Date().timeIntervalSince1970);"
webView.evaluateJavaScript(script)
```
In the web view’s JavaScript (with Sentry browser SDK):
```javascript
// Wait for Turbo to finish loading, then report the bridge latency
document.addEventListener('turbo:load', () => {
  const nativeStart = window.__turboNativeStart;
  if (nativeStart) {
    // The injected timestamp is a Unix epoch in seconds, so compare
    // against Date.now() (epoch ms), not performance.now(), which is
    // relative to page start
    const nativeToJS = Date.now() - nativeStart * 1000;
    Sentry.addBreadcrumb({
      category: 'performance',
      message: `Native→JS bridge: ${nativeToJS.toFixed(0)}ms`,
      level: 'info'
    });
    // Also send it as a transaction so it shows up in Performance
    const tx = Sentry.startTransaction({
      name: 'turbo.bridge_latency',
      op: 'measure'
    });
    tx.setData('nativeToJS_ms', nativeToJS);
    tx.finish();
  }
});
```
Now, when a user complains about “slowness,” we can see: was it the native navigation? The bridge serialization? The actual HTML parsing? Or the network?
We found a 400ms gap on older iPhones just from evaluateJavaScript calls. That was the back button stutter.
## The Art of the Span: Knowing What to Measure
After six months of tuning, here’s our canonical set of Turbo-specific spans we send to Sentry:
| Span name | What it measures | Typical threshold |
|---|---|---|
| `turbo.navigation.start` | Native `visit()` called | < 5ms |
| `turbo.webview.load` | WKWebView load request to first paint | < 800ms |
| `turbo.bridge.call` | Any native→JS message (e.g., `postMessage`) | < 50ms |
| `turbo.memory.after_visit` | Memory footprint post-navigation | < 150MB on iOS |
| `turbo.back_gesture` | Native pop animation frame drop rate | < 5% dropped frames |
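To show how one of these spans gets recorded, here is a minimal sketch for `turbo.bridge.call`. The `send` and `report` callbacks are placeholders, not Turbo or Sentry API: in our app, `send` posts to the WKScriptMessageHandler and `report` forwards the duration to Sentry.

```javascript
// Hypothetical timing wrapper for a bridge call.
// `send` does the actual message posting; `report` receives the span
// name and elapsed milliseconds (e.g. to attach to a Sentry transaction).
function timedBridgeCall(send, payload, report) {
  const start = performance.now();
  send(payload);
  const elapsed = performance.now() - start;
  report('turbo.bridge.call', elapsed);
  return elapsed;
}
```

Anything `report` sees above the 50ms threshold from the table gets flagged.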
The memory one is sneaky. Turbo keeps visited web views in a cache. Great for back button speed. Terrible for memory. We added a Sentry check:
```swift
// After 3 cached views, warn
let count = navigationController.viewControllers.count
if count > 3 {
    SentrySDK.capture(message: "Turbo cache high: \(count) web views retained") { scope in
        scope.setLevel(.warning)
    }
}
```
That single metric led us to implement a custom cache eviction policy. Back buttons stayed fast. Memory stayed stable. Users stopped complaining.
## The Human Layer: Performance as a Feeling
Here’s what I’ve learned after two decades: users don’t care about milliseconds. They care about certainty.
A page that loads in 800ms every time feels faster than a page that loads in 200ms but sometimes takes 2 seconds. Variance is the enemy.
Sentry’s p75 and p95 percentiles became my north star. We stopped optimizing the median. We started hunting the tail.
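To make "hunting the tail" concrete, here is the arithmetic in miniature: a nearest-rank percentile over two hypothetical latency distributions (a plain sketch, not Sentry's implementation).

```javascript
// Nearest-rank percentile over a list of durations in milliseconds
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// A page that is mostly fast but occasionally awful:
const spiky = [200, 200, 200, 200, 200, 200, 200, 200, 2000, 2000];
// A page that is consistently mediocre:
const steady = [800, 800, 800, 800, 800, 800, 800, 800, 800, 800];

percentile(spiky, 50);  // 200 — the median looks great
percentile(spiky, 95);  // 2000 — the tail is what users remember
percentile(steady, 95); // 800 — slower median, but predictable
```

The "spiky" page wins every median comparison and still feels worse, which is exactly why we watch p95.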
One culprit: large JSON payloads from the Rails backend, serialized into the Turbo frame. On poor connections, they'd block rendering. We added data-turbo-permanent to non-critical sections and started streaming the rest. The p95 dropped from 4.2s to 1.1s.
We knew because we could see it in Sentry’s Performance view, filtered by `device.model:"iPhone X"` and `connection.effectiveType:"3g"`.
That’s the power. Not dashboards. Slicing.
## The Mistakes That Made Me Smarter
I’ll be honest: we over-instrumented at first. Every tap, every scroll, every console.log became a Sentry event. Our quota exploded and our UI became noise.
Then we learned: sample navigation transactions (1 in 20), always capture failures, and use profiles rather than traces for UI-thread analysis.
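In the browser SDK, that sampling policy looks roughly like this, a sketch using the `tracesSampler` option (the DSN is a placeholder, and the transaction name matches the one from the native snippet earlier):

```javascript
Sentry.init({
  dsn: 'https://publicKey@o0.ingest.sentry.io/0', // placeholder
  environment: 'prod',
  tracesSampler: (samplingContext) => {
    // Routine navigations: sample 1 in 20
    if (samplingContext.transactionContext.name === 'turbo.navigation') {
      return 0.05;
    }
    // Everything else (bridge latency, failures): keep all of it
    return 1.0;
  }
});
```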
Also: Sentry’s native SDK and browser SDK have different release and dist values. We wasted a week matching them before realizing they don’t need to match. What matters is environment (prod/staging) and user.id.
Oh, and one more: Turbo’s visit can be cancelled (user taps back before page loads). That was flooding our errors. Filter it out:
```ruby
# In the Rails backend, when a Turbo visit is cancelled
if visit_cancelled?
  Sentry.set_context("turbo", { cancelled: true })
  Sentry.capture_message("Navigation cancelled", level: :debug)
end
```
Now it’s a breadcrumb, not an alert.
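The same idea works on the client in a `beforeSend` hook. This is a hypothetical filter: the `turbo` context shape mirrors the Rails snippet above, but the wiring is an assumption, not our exact code.

```javascript
// Hypothetical beforeSend filter: events tagged with a cancelled Turbo
// visit are dropped before they ever become alerts.
// Wire up with: Sentry.init({ beforeSend: filterCancelledVisit })
function filterCancelledVisit(event) {
  const turbo = event.contexts && event.contexts.turbo;
  if (turbo && turbo.cancelled) {
    return null; // returning null tells the SDK to discard the event
  }
  return event; // everything else passes through untouched
}
```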
## The Masterpiece: When You Feel the Invisible
After all this, something shifted. I could close my eyes, tap through the app, and guess what Sentry would show. High memory? Probably the image gallery. Slow back gesture? Too many cached views. Bridge delay? A heavy Intl polyfill in JavaScript.
That’s the art. Not the tool. The intuition.
Sentry gave us the data. Turbo gave us the constraints. And we—the old dogs who remember fixing cross-browser CSS in 2005—turned that into an app that doesn’t just perform well. It performs predictably.
Last month, a user wrote: “This app never surprises me. It just works.”
That’s the review I frame.
Now go instrument your bridge. Send me a note when you find your first 300ms gap between native and JS. I’ll be here, watching my own p95, smiling.