DEV Community

Todd Sullivan

I Let Claude Code Do a Performance Review on My iOS App — Here's What It Found

I've been building HerdCount — an offline-first iOS app that counts livestock from a photo using YOLOv8n on CoreML. No internet, no account, just the Neural Engine doing its thing.

The app was working, but after adding a share-card feature (a branded "proof of count" image you can send to buyers or vets), I noticed some jank. Tap Save and the UI would stutter. Scroll through results and frames would drop. Nothing catastrophic, but noticeable.

Instead of diving into Instruments myself, I dropped Claude Code into the repo with a performance review prompt and watched what happened.

What it was asked to do

Simple brief: review the codebase for iOS performance issues, particularly in the Result screen and inference path. No specific files called out, no hints. Just "here's the code, find what's slow."

What it found (and actually fixed)

1. Share card rendering on every SwiftUI body rebuild

The proof-of-count card — a UIGraphicsImageRenderer render of the annotated photo with branding — was being generated inside the view's state updates. Every time SwiftUI rebuilt the body (which it does a lot), it was re-running 300–500ms of image rendering work.

Fix: cache the rendered image in the ViewModel, keyed on the things that actually change (detections, count, label, notes). Only re-render when those values change. Obvious in retrospect. Easy to miss when you're building features.
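The caching pattern looks roughly like this — a minimal sketch, not the actual HerdCount code. The generic `RenderCache` type and its names are hypothetical; the idea is just that the expensive render only re-runs when the inputs that affect the card change:

```swift
// Caches one rendered value, keyed on the inputs that produced it.
// In the app, Key would be built from detections, count, label, and notes,
// and Value would be the UIImage from UIGraphicsImageRenderer.
final class RenderCache<Key: Equatable, Value> {
    private var cachedKey: Key?
    private var cachedValue: Value?

    /// Returns the cached value when `key` matches the last render's inputs;
    /// otherwise runs `render` once and caches the result.
    func value(for key: Key, render: () -> Value) -> Value {
        if let cached = cachedValue, cachedKey == key {
            return cached
        }
        let fresh = render()
        cachedKey = key
        cachedValue = fresh
        return fresh
    }
}
```

With this held in the ViewModel, SwiftUI can rebuild the body as often as it likes — the 300–500ms render only fires when the key actually changes.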

2. Thumbnail generation blocking the main thread

Tapping Save triggered a thumbnail generation step before writing to SwiftData. That was happening synchronously on the main actor — hence the stutter on save.

Fix: Task.detached with pre-computed Data? handed off to the model layer, keeping UIKit on the main thread where it needs to be but doing the pixel work on a background thread.
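The shape of that fix, sketched with hypothetical names (`SaveCoordinator`, `makeThumbnailData` — stand-ins for the app's real downscale-and-encode step): the heavy work runs in a detached task, and only plain `Data?` crosses back to the main actor.

```swift
import Foundation

@MainActor
final class SaveCoordinator {
    private(set) var savedThumbnail: Data?

    func save(pixels: [UInt8]) async {
        // Heavy pixel work runs off the main actor via Task.detached.
        let data: Data? = await Task.detached(priority: .utility) {
            Self.makeThumbnailData(from: pixels)
        }.value
        // Back on the main actor: only the cheap model-layer write remains.
        savedThumbnail = data
    }

    nonisolated static func makeThumbnailData(from pixels: [UInt8]) -> Data? {
        // Stand-in for the real downscale + JPEG encode; here we just
        // keep every fourth byte to simulate "pre-computed Data?".
        let bytes = pixels.enumerated().compactMap { index, byte -> UInt8? in
            index.isMultiple(of: 4) ? byte : nil
        }
        return bytes.isEmpty ? nil : Data(bytes)
    }
}
```

The key property is that nothing main-actor-bound (views, SwiftData context) is touched inside the detached closure — which is exactly the line the review initially got wrong, per the follow-up commits below.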

3. Static formatters vs per-call allocation

DateFormatter and RelativeDateTimeFormatter were being instantiated per call in a few places — including inside the inference hot path. Each allocation is small, but in VisionService those run on every frame during detection.

Fix: promote to static properties. One allocation, reused forever.
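A sketch of that promotion, with a hypothetical `Formatters` namespace (the real code presumably hangs these off the types that use them):

```swift
import Foundation

enum Formatters {
    // Created once, reused for every call — avoids a fresh DateFormatter
    // allocation (plus locale/format setup) on each invocation.
    // RelativeDateTimeFormatter can be promoted the same way.
    static let timestamp: DateFormatter = {
        let f = DateFormatter()
        f.dateStyle = .medium
        f.timeStyle = .short
        return f
    }()
}

func countLabel(for date: Date) -> String {
    Formatters.timestamp.string(from: date)
}
```

One caveat worth knowing: `DateFormatter` is not safe to *mutate* from multiple threads, so configure it fully in the static initializer and only call `string(from:)` afterward.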

4. Inference path allocations

In VisionService, the observation filtering was a filter followed by a map — two passes, two intermediate arrays per inference call. Collapsed to a single compactMap. In PresetCategory, label matching used an array literal (["dog", "cat", ...]) allocated on the heap each call. Replaced with || comparisons.
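Both micro-fixes look like this in miniature — a sketch with illustrative types and values (`Detection`, the 0.5 threshold, the pet labels), not the actual VisionService or PresetCategory code:

```swift
struct Detection {
    let label: String
    let confidence: Float
}

// One pass, one output array: compactMap replaces filter-then-map,
// eliminating the intermediate array on every inference call.
func topLabels(from observations: [Detection], threshold: Float = 0.5) -> [String] {
    observations.compactMap { obs in
        obs.confidence >= threshold ? obs.label : nil
    }
}

// Direct comparisons instead of ["dog", "cat"].contains(label),
// which builds a fresh array on the heap each time it's evaluated.
func matchesPets(_ label: String) -> Bool {
    label == "dog" || label == "cat"
}
```

Individually tiny, but these sit on the per-frame detection path, so the allocations multiply by frame rate.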

The PR

Claude committed all of this as a single structured PR with clear commit messages. The diff was clean, the explanations were accurate, and — importantly — it didn't make anything up. It found real issues, estimated their cost accurately (the millisecond ranges it cited matched the actual rendering work), and fixed them without introducing regressions.

The follow-up commits were me fixing a crash it introduced by moving UIKit work off the main actor (classic async/await pitfall — it almost got it right) and a build error from curly quotes in a string literal. Two small misses out of a solid overall review.

The meta part

The app itself is an AI app — CoreML + Vision running YOLOv8n on the Neural Engine. I used an LLM to review and improve the code for an on-device ML app. There's something satisfying about that stack: AI tooling improving AI tooling.

More practically: this kind of review is exactly what Claude Code is good at. Pattern recognition across a codebase — "you're doing this expensive thing unnecessarily" — is tedious to do manually and easy to miss when you're close to the code. Having an external pass that doesn't know what you intended to write is genuinely useful.

Instruments would have surfaced the same things eventually. But this was faster, and it wrote the fixes too.


Tags: ios, swift, claudecode, ai, performance
Status: published
Source: herdcount-ios PR #1 (claude/performance-review)
