<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Todd Sullivan</title>
    <description>The latest articles on DEV Community by Todd Sullivan (@toddsullivan).</description>
    <link>https://dev.to/toddsullivan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895637%2Fc957b85c-53f5-4505-8b53-7e62e06088e9.jpeg</url>
      <title>DEV Community: Todd Sullivan</title>
      <link>https://dev.to/toddsullivan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/toddsullivan"/>
    <language>en</language>
    <item>
      <title>HerdCount is Live on the App Store — From Blog Post to Shipped Product in Two Weeks</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Wed, 13 May 2026 10:31:32 +0000</pubDate>
      <link>https://dev.to/toddsullivan/herdcount-is-live-on-the-app-store-from-blog-post-to-shipped-product-in-two-weeks-1cok</link>
      <guid>https://dev.to/toddsullivan/herdcount-is-live-on-the-app-store-from-blog-post-to-shipped-product-in-two-weeks-1cok</guid>
      <description>&lt;p&gt;Two weeks ago I wrote about &lt;a href="https://dev.to/toddsullivan/building-an-offline-first-livestock-counter-with-yolov8-and-coreml-2d2g"&gt;building an offline-first livestock counter with YOLOv8 and CoreML&lt;/a&gt;. Today it's a real product on the App Store.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://apps.apple.com/gb/app/herdcount/id6765711537" rel="noopener noreferrer"&gt;HerdCount — Count your flock, even offline&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;£3.99. No subscription. No cloud. No account. Pay once, use forever.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;p&gt;Point your phone at livestock or plants. Tap a button. Get the count.&lt;/p&gt;

&lt;p&gt;HerdCount uses on-device AI (YOLOv8 + CoreML) to detect and count chickens, sheep, cattle, and plants from a single photo — in under a second, with zero internet required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fapps.apple.com%2Fgb%2Fapp%2Fherdcount%2Fid6765711537" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fapps.apple.com%2Fgb%2Fapp%2Fherdcount%2Fid6765711537" alt="HerdCount" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Built It
&lt;/h2&gt;

&lt;p&gt;I work with on-device computer vision professionally — building &lt;a href="https://axsy.com" rel="noopener noreferrer"&gt;Axsy Smart Vision&lt;/a&gt;, an AI-powered field inspection platform for Salesforce. Retail planogram detection, product identification, compliance scoring — all running on-device.&lt;/p&gt;

&lt;p&gt;But the agricultural space has a simpler, more immediate problem: &lt;strong&gt;counting animals is tedious and error-prone&lt;/strong&gt;. Farmers do it by eye, multiple times a day. Miss one sheep and you're searching hedgerows at dusk.&lt;/p&gt;

&lt;p&gt;The same on-device ML pipeline I use for retail product detection works beautifully for livestock. So I built it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;YOLOv8&lt;/strong&gt; — trained on livestock datasets, converted to CoreML&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On-device inference&lt;/strong&gt; — runs on the iPhone's Neural Engine, no cloud round-trip&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offline-first&lt;/strong&gt; — works in fields, barns, anywhere with no signal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swift/SwiftUI&lt;/strong&gt; — native iOS, 8.7 MB total&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Export&lt;/strong&gt; — CSV via AirDrop, email, or Files app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model handles overlapping animals (IoU threshold tuned to 0.3 rather than the default 0.5), and you can tap false positives to remove them, with manual +/− adjustment before saving.&lt;/p&gt;
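
&lt;p&gt;For the curious, the overlap handling is plain greedy non-maximum suppression with that loosened threshold. A minimal sketch, where &lt;code&gt;Detection&lt;/code&gt; and the helpers are illustrative stand-ins rather than the shipping code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import CoreGraphics

struct Detection {
    let box: CGRect        // normalised bounding box
    let confidence: Float
}

// Intersection-over-union of two normalised rects.
func iou(_ a: CGRect, _ b: CGRect) -&gt; CGFloat {
    let inter = a.intersection(b)
    guard !inter.isNull else { return 0 }
    let interArea = inter.width * inter.height
    let unionArea = a.width * a.height + b.width * b.height - interArea
    return unionArea &gt; 0 ? interArea / unionArea : 0
}

// Greedy NMS: an IoU threshold of 0.3 keeps more overlapping animals
// than the usual 0.5 default.
func nms(_ detections: [Detection], iouThreshold: CGFloat = 0.3) -&gt; [Detection] {
    var remaining = detections.sorted { $0.confidence &gt; $1.confidence }
    var kept: [Detection] = []
    while let best = remaining.first {
        kept.append(best)
        remaining.removeFirst()
        remaining.removeAll { iou(best.box, $0.box) &gt; iouThreshold }
    }
    return kept
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;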

&lt;h2&gt;
  
  
  What I Learned Shipping It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. App Review is opinionated.&lt;/strong&gt; Apple rejected the first submission because the category detection UI wasn't clear enough. Fair feedback — I redesigned it and it's better for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The model is the easy part.&lt;/strong&gt; Training YOLOv8 and converting to CoreML took a weekend. The other 90% was UI polish, edge cases, CSV export formatting, and App Store screenshots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Pricing matters.&lt;/strong&gt; I went with £3.99 one-time purchase. No subscription, no ads, no data collection. Farmers are practical people — they'll pay for a tool that works but won't tolerate dark patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. On-device AI is a real differentiator.&lt;/strong&gt; Every competing app I found requires internet. That's a non-starter for someone standing in a field in rural Wales.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Blog to Product
&lt;/h2&gt;

&lt;p&gt;The Dev.to post about the technical approach got genuine engagement — &lt;a href="https://dev.to/toddsullivan/building-an-offline-first-livestock-counter-with-yolov8-and-coreml-2d2g#comment-37iah"&gt;@gimi5555 asked about NMS strategies for clustered animals&lt;/a&gt;, which led to a good discussion about density estimation as a fallback.&lt;/p&gt;

&lt;p&gt;That conversation validated the approach. Two weeks later, it's a shipped product.&lt;/p&gt;

&lt;p&gt;If you're working with on-device ML and sitting on something useful — ship it. The App Store review process is less scary than it looks, and real users find real problems you'd never catch in development.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;a href="https://apps.apple.com/gb/app/herdcount/id6765711537" rel="noopener noreferrer"&gt;HerdCount on the App Store →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Built by &lt;a href="https://sullivanltd.co.uk" rel="noopener noreferrer"&gt;RT Sullivan Consulting&lt;/a&gt;. I write about on-device AI, Salesforce field apps, and shipping real products at &lt;a href="https://dev.to/toddsullivan"&gt;dev.to/toddsullivan&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>swift</category>
    </item>
    <item>
      <title>Shipping to TestFlight Without Fastlane: Raw xcodebuild, Auto-Incrementing Builds, and One Neat Provisioning Trick</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Wed, 13 May 2026 08:01:59 +0000</pubDate>
      <link>https://dev.to/toddsullivan/shipping-to-testflight-without-fastlane-raw-xcodebuild-auto-incrementing-builds-and-one-neat-p2h</link>
      <guid>https://dev.to/toddsullivan/shipping-to-testflight-without-fastlane-raw-xcodebuild-auto-incrementing-builds-and-one-neat-p2h</guid>
      <description>&lt;p&gt;Most iOS CI tutorials reach for Fastlane. It's the default assumption. And Fastlane is fine — but it's also another Ruby toolchain to maintain, another layer of abstraction between you and xcodebuild errors, and another thing that breaks when Xcode updates.&lt;/p&gt;

&lt;p&gt;For a small side project, I wanted zero overhead. So I wrote a release script using plain &lt;code&gt;xcodebuild&lt;/code&gt; and &lt;code&gt;xcrun altool&lt;/code&gt;, and wired it into GitHub Actions. Here's what I learned.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Setup
&lt;/h3&gt;

&lt;p&gt;The app is a no-dependency iOS project (SwiftUI, SwiftData, zero SPM packages). One scheme, one target, distributed via the App Store. The goal: &lt;code&gt;git push&lt;/code&gt; → trigger workflow → build, sign, upload to TestFlight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auto-incrementing build numbers for free:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;BUILD_NUMBER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git &lt;span class="nt"&gt;-C&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_ROOT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; rev-list &lt;span class="nt"&gt;--count&lt;/span&gt; HEAD&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Every commit bumps the count. No build number file to commit, no race conditions in CI, no manual tracking. Pass it straight into xcodebuild:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xcodebuild archive &lt;span class="se"&gt;\&lt;/span&gt;
  ...
  &lt;span class="nv"&gt;CURRENT_PROJECT_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BUILD_NUMBER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;TestFlight requires monotonically increasing build numbers. Git commit count gives you that automatically. I've seen people use timestamps (too long), semver patch (manual), or a counter file in the repo (merge conflicts). Commit count is cleaner.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Provisioning Problem — and the Fix
&lt;/h3&gt;

&lt;p&gt;This is where most raw-xcodebuild scripts fall apart. The export step (&lt;code&gt;xcodebuild -exportArchive&lt;/code&gt;) needs an &lt;code&gt;ExportOptions.plist&lt;/code&gt; with the exact provisioning profile UUID. But the UUID changes every time you renew the profile.&lt;/p&gt;

&lt;p&gt;The usual answer is "hardcode it in your plist and update manually." That's the kind of thing you forget for six months and then debug for two hours.&lt;/p&gt;

&lt;p&gt;Better approach: extract the UUID from the archive you just built, then inject it at export time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Pull the embedded profile from the freshly-built archive&lt;/span&gt;
&lt;span class="nv"&gt;EMBEDDED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ARCHIVE&lt;/span&gt;&lt;span class="s2"&gt;/Products/Applications/MyApp.app/embedded.mobileprovision"&lt;/span&gt;
&lt;span class="nv"&gt;PROFILE_UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;security cms &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EMBEDDED&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | plutil &lt;span class="nt"&gt;-extract&lt;/span&gt; UUID raw -&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Copy it into the Provisioning Profiles directory (xcodebuild looks here)&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EMBEDDED&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/Library/MobileDevice/Provisioning Profiles/&lt;/span&gt;&lt;span class="nv"&gt;$PROFILE_UUID&lt;/span&gt;&lt;span class="s2"&gt;.mobileprovision"&lt;/span&gt;

&lt;span class="c"&gt;# Write a temp ExportOptions with the exact UUID from *this* archive&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EXPORT_OPTIONS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EXPORT_OPTIONS_TMP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
plutil &lt;span class="nt"&gt;-replace&lt;/span&gt; &lt;span class="s2"&gt;"provisioningProfiles.com.example.myapp"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-string&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROFILE_UUID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EXPORT_OPTIONS_TMP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Now export using that temp plist&lt;/span&gt;
xcodebuild &lt;span class="nt"&gt;-exportArchive&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-archivePath&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ARCHIVE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-exportPath&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EXPORT_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-exportOptionsPlist&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EXPORT_OPTIONS_TMP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The profile UUID in your ExportOptions is always current, because it came from the archive itself. Renew the cert, re-download the profile, and nothing breaks.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Actions Signing
&lt;/h3&gt;

&lt;p&gt;For CI, the certificate lives in a secret as a base64-encoded &lt;code&gt;.p12&lt;/code&gt;. The workflow decodes it into a temporary keychain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;security create-keychain &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$KEYCHAIN_PASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; build.keychain
security import /tmp/cert.p12 &lt;span class="nt"&gt;-k&lt;/span&gt; build.keychain &lt;span class="nt"&gt;-P&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$P12_PASSWORD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-T&lt;/span&gt; /usr/bin/codesign
security set-key-partition-list &lt;span class="nt"&gt;-S&lt;/span&gt; apple-tool:,apple: &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$KEYCHAIN_PASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; build.keychain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-T /usr/bin/codesign&lt;/code&gt; flag is critical — without it, the keychain will prompt for a password interactively mid-build, which hangs CI forever. The &lt;code&gt;set-key-partition-list&lt;/code&gt; step is what makes it work without prompts.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Full Flow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workflow_dispatch
  → checkout (full depth for commit count)
  → import cert into ephemeral keychain
  → write App Store Connect API key
  → ./scripts/release.sh
      → xcodebuild archive
      → extract profile UUID from archive
      → inject UUID into ExportOptions
      → xcodebuild -exportArchive
      → xcrun altool --upload-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;About 8-12 minutes wall clock on a &lt;code&gt;macos-26&lt;/code&gt; runner. No Ruby, no gems, no Fastlane plugins.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Fastlane Still Makes Sense
&lt;/h3&gt;

&lt;p&gt;If you're managing multiple targets, schemes, environments, or a team with custom lanes — Fastlane earns its complexity. But for a single-target indie app? Raw xcodebuild is readable, debuggable, and requires no maintenance beyond "Xcode updated, did the flags change?"&lt;/p&gt;

&lt;p&gt;The full script is about 70 lines of bash. That's the whole pipeline.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>xcode</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building Personalised On-Device ML for Women's Health: No Cloud, No Population Averages</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Mon, 11 May 2026 13:22:24 +0000</pubDate>
      <link>https://dev.to/toddsullivan/building-personalised-on-device-ml-for-womens-health-no-cloud-no-population-averages-4j03</link>
      <guid>https://dev.to/toddsullivan/building-personalised-on-device-ml-for-womens-health-no-cloud-no-population-averages-4j03</guid>
      <description>&lt;p&gt;Most health AI is built on population data. Your symptoms are averaged against thousands of other people, and you get a generalised prediction that fits nobody perfectly.&lt;/p&gt;

&lt;p&gt;I took a different approach with Menopause Intelligence — an iOS app I've been building that predicts high-symptom days for women in perimenopause and menopause.&lt;/p&gt;

&lt;p&gt;The entire model runs on-device, trained on the individual user's own data. No cloud, no population averages, no third-party data sharing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with cloud-based health AI
&lt;/h2&gt;

&lt;p&gt;Population models work when you want average answers. But perimenopause is deeply individual. Two women with identical ages and similar symptom profiles can have completely different biometric triggers.&lt;/p&gt;

&lt;p&gt;The app's job is to tell a user &lt;em&gt;her&lt;/em&gt; patterns — not what typically happens to women like her.&lt;/p&gt;

&lt;h2&gt;
  
  
  The ML pipeline
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt; Seven signals per day, all from HealthKit/Apple Watch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basal body temperature delta vs 7-day mean&lt;/li&gt;
&lt;li&gt;HRV (raw + delta from personal rolling average)&lt;/li&gt;
&lt;li&gt;Sleep efficiency and deep sleep %&lt;/li&gt;
&lt;li&gt;REM sleep %&lt;/li&gt;
&lt;li&gt;Resting heart rate&lt;/li&gt;
&lt;li&gt;Cycle day (if logged)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key design decision:&lt;/strong&gt; We use &lt;em&gt;deltas from the user's personal baseline&lt;/em&gt;, not absolute values. A resting HR of 62 bpm means different things for different people. What matters is whether it's elevated for &lt;em&gt;you&lt;/em&gt;.&lt;/p&gt;
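
&lt;p&gt;As a sketch of what "delta from the personal baseline" means in code (names are illustrative, not the app's actual types):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;// Today's value relative to the user's own rolling mean.
// A raw resting HR of 62 bpm says little; +6 bpm vs *your* norm is signal.
func baselineDelta(today: Double, history: [Double], window: Int = 7) -&gt; Double? {
    let recent = history.suffix(window)
    guard !recent.isEmpty else { return nil }
    let mean = recent.reduce(0, +) / Double(recent.count)
    return today - mean
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;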

&lt;p&gt;&lt;strong&gt;Label:&lt;/strong&gt; Composite symptom severity score for day D+1 (hot flashes, brain fog, fatigue, mood).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model:&lt;/strong&gt; CoreML + CreateML Components. Runs via a silent weekly background task (BGProcessingTask). The app retriggers training automatically as new data accumulates.&lt;/p&gt;
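
&lt;p&gt;The background-training plumbing is standard &lt;code&gt;BGTaskScheduler&lt;/code&gt; usage. Roughly, with the identifier and the training call as placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import BackgroundTasks
import Foundation

let taskID = "com.example.weekly-retrain"   // placeholder identifier

// Call once at launch; the identifier must also be listed in Info.plist
// under BGTaskSchedulerPermittedIdentifiers.
func registerTrainingTask() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: taskID, using: nil) { task in
        retrainModel()                    // placeholder for the CreateML training pass
        task.setTaskCompleted(success: true)
        scheduleWeeklyTraining()          // re-arm for next week
    }
}

func scheduleWeeklyTraining() {
    let request = BGProcessingTaskRequest(identifier: taskID)
    request.requiresExternalPower = true         // training is battery-hungry
    request.requiresNetworkConnectivity = false  // everything stays on-device
    request.earliestBeginDate = Date(timeIntervalSinceNow: 7 * 24 * 3600)
    try? BGTaskScheduler.shared.submit(request)
}

func retrainModel() { /* placeholder: CreateML Components training pass */ }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;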

&lt;p&gt;&lt;strong&gt;Cold start:&lt;/strong&gt; The first 30 days use a rule-based weighted scorer as a fallback. Not as accurate, but keeps the app useful while data accumulates.&lt;/p&gt;
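
&lt;p&gt;The fallback itself can be as simple as a hand-weighted sum along these lines (weights invented for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;// Rule-based stand-in for the first 30 days: weighted sum of baseline deltas,
// clamped to 0...1. Weights here are illustrative, not the tuned values.
func ruleBasedRisk(tempDelta: Double, hrvDelta: Double, sleepEfficiency: Double) -&gt; Double {
    let score = 0.4 * max(0, tempDelta)
        + 0.4 * max(0, -hrvDelta)               // suppressed HRV raises risk
        + 0.2 * max(0, 0.85 - sleepEfficiency)  // poor sleep raises risk
    return min(1, score)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;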

&lt;h2&gt;
  
  
  The data architecture
&lt;/h2&gt;

&lt;p&gt;Everything is local:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HealthKit → DailyLog (SwiftData) → Feature engineering → CoreML inference
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No backend. No analytics SDK. CloudKit sync between devices uses end-to-end encryption. Health data never touches our servers — because we don't have any.&lt;/p&gt;

&lt;p&gt;This isn't just a privacy stance. It's architecturally simpler and removes a whole category of compliance risk. For a health app in this category, "no backend" is a feature you can market.&lt;/p&gt;

&lt;h2&gt;
  
  
  The feedback loop
&lt;/h2&gt;

&lt;p&gt;User-reported symptoms feed back into the next training cycle. Every hot flash logged, every mood entry — they sharpen the model for that specific user.&lt;/p&gt;

&lt;p&gt;This is the same feedback pattern I've used in other on-device vision work: user corrections become training data. The model gets more accurate over time for the individual, not just better at the general case.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I've learned building personalised on-device ML
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Minimum data is a real UX problem.&lt;/strong&gt; 30 days before predictions activate feels long to a user who downloaded the app because she's struggling now. You have to be honest about why, and give her something useful in the meantime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Baseline drift matters.&lt;/strong&gt; A user's "normal" changes over the course of perimenopause. The rolling average window needs to adapt — a fixed 7-day mean becomes stale if someone's baseline HRV is trending down over months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Privacy is the product.&lt;/strong&gt; In women's health, trust is everything. "Your data never leaves your device" isn't a footnote — it's the headline. It changes the conversation with users who've been burned by other health apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;UI:&lt;/strong&gt; SwiftUI (iOS 17+)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data:&lt;/strong&gt; SwiftData + CloudKit&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Biometrics:&lt;/strong&gt; HealthKit&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prediction:&lt;/strong&gt; CoreML + CreateML Components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subscriptions:&lt;/strong&gt; StoreKit 2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Watch:&lt;/strong&gt; watchOS companion + WidgetKit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More on this as it gets closer to launch.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>swift</category>
    </item>
    <item>
      <title>The Fastlane gym Export Options Trap (and Why Your Provisioning Profile Is Being Silently Ignored)</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Mon, 11 May 2026 08:01:51 +0000</pubDate>
      <link>https://dev.to/toddsullivan/the-fastlane-gym-export-options-trap-and-why-your-provisioning-profile-is-being-silently-ignored-5caf</link>
      <guid>https://dev.to/toddsullivan/the-fastlane-gym-export-options-trap-and-why-your-provisioning-profile-is-being-silently-ignored-5caf</guid>
      <description>&lt;p&gt;Spent a few hours last week debugging a CI failure that had no right to be as subtle as it was. The build archived fine, but &lt;code&gt;exportArchive&lt;/code&gt; kept dying with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error: exportArchive: requires a provisioning profile with the App Groups feature.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The frustrating part: the AppStore provisioning profile was correct. I had just renewed it, decrypted it on the runner, and confirmed the App Group entitlement was in there. The keychain had it. So why was xcodebuild not finding it?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trap
&lt;/h2&gt;

&lt;p&gt;The Fastlane &lt;code&gt;gym&lt;/code&gt; action accepts &lt;code&gt;export_options:&lt;/code&gt; in two forms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A &lt;strong&gt;path&lt;/strong&gt; to an existing &lt;code&gt;.plist&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Hash&lt;/strong&gt; of options it will write to a temp plist&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I was passing a Hash — and inside that Hash I had a &lt;code&gt;plist:&lt;/code&gt; key pointing to my own plist file, thinking gym would merge or defer to it. It does not.&lt;/p&gt;

&lt;p&gt;When you pass a Hash, gym writes &lt;em&gt;that Hash&lt;/em&gt; to a temp plist and hands it directly to xcodebuild. The &lt;code&gt;plist:&lt;/code&gt; key inside the Hash is &lt;strong&gt;not&lt;/strong&gt; special — xcodebuild does not recognise it, ignores it silently, and you end up with a minimal plist that has no &lt;code&gt;provisioningProfiles&lt;/code&gt; key at all.&lt;/p&gt;

&lt;p&gt;The temp plist gym generated looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dict&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;key&amp;gt;&lt;/span&gt;method&lt;span class="nt"&gt;&amp;lt;/key&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;string&amp;gt;&lt;/span&gt;app-store&lt;span class="nt"&gt;&amp;lt;/string&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;key&amp;gt;&lt;/span&gt;uploadSymbols&lt;span class="nt"&gt;&amp;lt;/key&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;true/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;key&amp;gt;&lt;/span&gt;plist&lt;span class="nt"&gt;&amp;lt;/key&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;string&amp;gt;&lt;/span&gt;RELEASE_exportOptionsPlist_Store.plist&lt;span class="nt"&gt;&amp;lt;/string&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dict&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;provisioningProfiles&lt;/code&gt;. Under manual signing, xcodebuild fell back to automatic profile resolution at export time — which on a clean GitHub Actions runner cannot find the app-group-bearing profile you carefully installed. Build fails. Misleading error. Whole thing looks like a profile problem when the profile was never consulted.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix
&lt;/h2&gt;

&lt;p&gt;Pass &lt;code&gt;export_options:&lt;/code&gt; as a &lt;strong&gt;path string&lt;/strong&gt;, not a Hash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;gym&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="ss"&gt;scheme: &lt;/span&gt;&lt;span class="s2"&gt;"MyApp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;configuration: &lt;/span&gt;&lt;span class="s2"&gt;"Release"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;export_options: &lt;/span&gt;&lt;span class="s2"&gt;"./fastlane/RELEASE_exportOptionsPlist_Store.plist"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your plist should include explicit &lt;code&gt;provisioningProfiles&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;key&amp;gt;&lt;/span&gt;provisioningProfiles&lt;span class="nt"&gt;&amp;lt;/key&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;dict&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;key&amp;gt;&lt;/span&gt;com.example.myapp&lt;span class="nt"&gt;&amp;lt;/key&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;string&amp;gt;&lt;/span&gt;MyApp AppStore Profile&lt;span class="nt"&gt;&amp;lt;/string&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dict&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Gym passes the path straight to &lt;code&gt;xcodebuild -exportOptionsPlist&lt;/code&gt;. Your file is read. No temp plist, no silent key stripping.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Catches People Out
&lt;/h2&gt;

&lt;p&gt;The Hash form is in basically every Fastlane tutorial. It looks clean. Gym does not warn you when it discards unrecognised keys. The only signal is in verbose gym output — if you compare the temp plist it writes against what you expected, the &lt;code&gt;provisioningProfiles&lt;/code&gt; block is missing.&lt;/p&gt;

&lt;p&gt;App Groups make the failure mode worse because they require an exact profile match. Without entitlements like App Groups, xcodebuild automatic selection might accidentally find something usable. With App Groups, it always fails hard.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Do Now
&lt;/h2&gt;

&lt;p&gt;For any iOS app with entitlements — App Groups, Push Notifications, iCloud, anything — I keep an explicit &lt;code&gt;export_options.plist&lt;/code&gt; checked into the repo and pass it as a path. The Hash form is fine for a basic app. The moment signing gets complicated, you want the plist under version control and gym out of the business of generating it.&lt;/p&gt;

&lt;p&gt;One less thing the CI runner has to figure out on its own.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>fastlane</category>
      <category>devops</category>
      <category>xcode</category>
    </item>
    <item>
      <title>Hello DEV! I'm Todd — an AI Engineer Who Builds Real Things</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Fri, 08 May 2026 10:17:42 +0000</pubDate>
      <link>https://dev.to/toddsullivan/hello-dev-im-todd-an-ai-engineer-who-builds-real-things-4cg2</link>
      <guid>https://dev.to/toddsullivan/hello-dev-im-todd-an-ai-engineer-who-builds-real-things-4cg2</guid>
      <description>&lt;p&gt;Hey DEV community 👋&lt;/p&gt;

&lt;p&gt;I'm Todd — an AI engineer based in the UK. I spend most of my time building systems that actually use AI rather than talking about using AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I work on
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude/LLM integrations&lt;/strong&gt; — wiring Claude into real engineering workflows. Co-authoring code, automated pipelines, using it where it genuinely adds signal rather than noise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On-device AI&lt;/strong&gt; — computer vision models that run on iOS with no internet connection. Edge inference, model size tradeoffs, the gap between lab accuracy and field accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer tooling&lt;/strong&gt; — zero-config test runners, CI pipelines with AI in the loop, the kind of boring-but-important stuff that makes teams faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personal AI infrastructure&lt;/strong&gt; — building a persistent AI assistant that runs 24/7, has real memory, and can actually take actions. Not a chatbot demo.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why I'm here
&lt;/h2&gt;

&lt;p&gt;I write about the engineering decisions, the tradeoffs, and the things that didn't work as well as the things that did. Practical stuff, real code, honest takes.&lt;/p&gt;

&lt;p&gt;If you're building with AI (not just prompting it) — I'd love to connect.&lt;/p&gt;

&lt;h2&gt;
  
  
  A few recent posts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/toddsullivan/claude-is-in-my-commit-history-3i6f"&gt;Claude is in My Commit History&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/toddsullivan/on-device-ai-what-nobody-tells-you-about-the-tradeoffs-126k"&gt;On-Device AI: What Nobody Tells You About the Tradeoffs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/toddsullivan/zero-config-test-runner-jwt-auto-gen-and-no-setup-docs-4066"&gt;Zero-Config Test Runner&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Say hi 👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>introduction</category>
      <category>engineering</category>
      <category>hello</category>
    </item>
    <item>
      <title>Claude as a CI Co-pilot: Debugging Apple Signing Hell So You Don't Have To</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Fri, 08 May 2026 08:02:47 +0000</pubDate>
      <link>https://dev.to/toddsullivan/claude-as-a-ci-co-pilot-debugging-apple-signing-hell-so-you-dont-have-to-3ooi</link>
      <guid>https://dev.to/toddsullivan/claude-as-a-ci-co-pilot-debugging-apple-signing-hell-so-you-dont-have-to-3ooi</guid>
      <description>&lt;p&gt;This week I spent a few hours debugging a fastlane CI pipeline that was failing on every single run with Apple provisioning errors. I paired with Claude the entire time. Here's what that actually looks like — not the polished "AI helped me code!" version, but the messy, real one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;iOS build pipeline. Fastlane + &lt;code&gt;match&lt;/code&gt; for code signing. The CI runner kept blowing up at &lt;code&gt;exportArchive&lt;/code&gt; with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error: exportArchive: requires a provisioning profile with the App Groups feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Except — the profile absolutely contained the App Groups entitlement. I inspected the decrypted &lt;code&gt;.mobileprovision&lt;/code&gt; manually. It was there. Xcodebuild was lying.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Claude Actually Helped
&lt;/h2&gt;

&lt;p&gt;I dumped the failing lane, the temp plist gym was generating, and the error into the conversation. Claude caught something I'd missed: when you pass &lt;code&gt;export_options:&lt;/code&gt; as a &lt;strong&gt;Hash&lt;/strong&gt; in your Fastfile, gym writes that hash directly to a temp plist — but any &lt;code&gt;plist:&lt;/code&gt; key inside the hash is treated as a literal value, not a file reference. The external plist file I was trying to load? Never actually loaded.&lt;/p&gt;

&lt;p&gt;The fix was one line: pass &lt;code&gt;export_options:&lt;/code&gt; as a &lt;strong&gt;path string&lt;/strong&gt; instead of a hash. Gym then loads the file properly. The patch I'd been writing into the plist at runtime actually started landing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before (broken) — Hash form ignores your plist: key&lt;/span&gt;
&lt;span class="n"&gt;gym&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="ss"&gt;export_options: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="ss"&gt;method: &lt;/span&gt;&lt;span class="s2"&gt;"app-store"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="ss"&gt;plist: &lt;/span&gt;&lt;span class="s2"&gt;"RELEASE_exportOptionsPlist_Store.plist"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# After (working) — path string makes gym actually load the file&lt;/span&gt;
&lt;span class="n"&gt;gym&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="ss"&gt;export_options: &lt;/span&gt;&lt;span class="s2"&gt;"RELEASE_exportOptionsPlist_Store.plist"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Second Problem
&lt;/h2&gt;

&lt;p&gt;Once that was fixed, the build still failed intermittently. Reason: when &lt;code&gt;match&lt;/code&gt; renews a provisioning profile, Apple appends a serial number suffix to the name (e.g. &lt;code&gt;match AppStore com.example.app 1777460891&lt;/code&gt;). My Fastfile, pbxproj, and export plist all hardcoded the old name. After any renewal, xcodebuild couldn't find it.&lt;/p&gt;

&lt;p&gt;Claude suggested a pattern: after &lt;code&gt;match&lt;/code&gt; runs, read the actual installed profile name from the &lt;code&gt;sigh_*&lt;/code&gt; environment variable, then patch both pbxproj and the export plist at runtime before the build starts. The dynamic name becomes the single source of truth.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Read the actual name after match sets it&lt;/span&gt;
&lt;span class="n"&gt;profile_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"sigh_&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;bundle_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_appstore_profile-name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Patch pbxproj&lt;/span&gt;
&lt;span class="nb"&gt;system&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"sed -i '' 's/match AppStore &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;bundle_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.*/&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;profile_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/g' path/to/project.pbxproj"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Patch export plist&lt;/span&gt;
&lt;span class="nb"&gt;system&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"/usr/libexec/PlistBuddy -c 'Set :provisioningProfiles:&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;bundle_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;profile_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;' ExportOptions.plist"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What Made This Work
&lt;/h2&gt;

&lt;p&gt;Claude didn't just hand me code — it helped me build a &lt;strong&gt;mental model&lt;/strong&gt; of what was actually happening. The difference between Hash vs path-string in gym's API is documented somewhere in fastlane's source, but it's not obvious. Same with match's environment variable naming convention.&lt;/p&gt;

&lt;p&gt;The conversation was more like pair programming with someone who'd read the entire fastlane codebase than a Stack Overflow search. I'd describe what I was seeing, Claude would reason about what the tool chain was doing internally, and we'd narrow down the root cause.&lt;/p&gt;

&lt;p&gt;The commits ended up cleaner too. Because I understood &lt;em&gt;why&lt;/em&gt; the fix worked, the commit messages were precise. Co-authored lines show up in git blame: &lt;code&gt;Co-Authored-By: Claude Opus 4.7&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Take
&lt;/h2&gt;

&lt;p&gt;This isn't magic. It's a multiplier on existing knowledge. If you don't understand code signing at all, Claude's explanations will help but you'll still spend time learning the domain. If you do understand it — like I do — it collapses the debugging loop from hours to minutes.&lt;/p&gt;

&lt;p&gt;The gnarly CI/CD problems that used to require tribal knowledge or a very specific Stack Overflow answer from 2019 are now tractable in a single session.&lt;/p&gt;

&lt;p&gt;That's the real unlock.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>ios</category>
      <category>claude</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building an Offline-First Livestock Counter with YOLOv8 and CoreML</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Wed, 06 May 2026 08:02:54 +0000</pubDate>
      <link>https://dev.to/toddsullivan/building-an-offline-first-livestock-counter-with-yolov8-and-coreml-40fa</link>
      <guid>https://dev.to/toddsullivan/building-an-offline-first-livestock-counter-with-yolov8-and-coreml-40fa</guid>
      <description>&lt;p&gt;I built a livestock counting app for smallholders. No internet required, no subscription, no server. You take a photo of your chickens, sheep, or cattle, and it counts them — entirely on-device. Here's how it actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Smallholders regularly need to count animals. In a field. In a barn. Where there's no signal. The apps that exist are either generic (bad accuracy for farm animals), require a server round-trip, or charge you monthly to count your own chickens. None of that made sense to me.&lt;/p&gt;

&lt;p&gt;So I built Muster.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;iOS 17, SwiftUI, SwiftData&lt;/strong&gt; — no third-party dependencies, ships as a one-time purchase&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YOLOv8n&lt;/strong&gt; — the nano variant, exported to CoreML format&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apple's Vision framework&lt;/strong&gt; — handles the ML request lifecycle, orientation correction, and bounding box coordinate normalisation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero backend&lt;/strong&gt; — no server, no account, no ongoing cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model is small enough to run on-device without breaking a sweat. YOLOv8n sits at about 6MB in CoreML format. On an iPhone 13 it processes a typical farm photo in under 400ms. That's fast enough that it feels instant.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Inference Works
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;VisionService&lt;/code&gt; wraps a &lt;code&gt;VNCoreMLModel&lt;/code&gt; and fires a &lt;code&gt;VNCoreMLRequest&lt;/code&gt; against the input image. The key detail here is orientation: photos from iOS cameras carry EXIF orientation metadata, and if you don't account for it before passing frames to Vision, your bounding boxes are in the wrong coordinate space.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;ciImage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;CIImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;uiImage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;oriented&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;forExifOrientation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;imageOrientationToExifOrientation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uiImage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;imageOrientation&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;VNImageRequestHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;ciImage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ciImage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[:])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
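
&lt;p&gt;The &lt;code&gt;imageOrientationToExifOrientation&lt;/code&gt; helper isn't shown in the post; presumably it's the standard mapping, which looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import UIKit

// Map UIImage.Orientation to the EXIF values that
// CIImage.oriented(forExifOrientation:) expects.
func imageOrientationToExifOrientation(_ o: UIImage.Orientation) -&gt; Int32 {
    switch o {
    case .up:            return 1
    case .upMirrored:    return 2
    case .down:          return 3
    case .downMirrored:  return 4
    case .leftMirrored:  return 5
    case .right:         return 6
    case .rightMirrored: return 7
    case .left:          return 8
    @unknown default:    return 1
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;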



&lt;p&gt;After inference, each detection gets mapped to a &lt;code&gt;DetectedObject&lt;/code&gt; with a normalised bounding box and confidence score. The UI overlays dot markers on the image — one per detection — and lets the user tap any to dismiss false positives before saving.&lt;/p&gt;
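
&lt;p&gt;The mapping from Vision's observations into that model type is mechanical. A sketch, with &lt;code&gt;DetectedObject&lt;/code&gt; simplified from the description above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import Vision

struct DetectedObject: Identifiable {
    let id = UUID()
    let label: String
    let confidence: Float
    let boundingBox: CGRect   // normalised, Vision's bottom-left origin
}

func mapResults(_ observations: [VNRecognizedObjectObservation]) -&gt; [DetectedObject] {
    observations.compactMap { obs -&gt; DetectedObject? in
        // labels are sorted by confidence; take the top one
        guard let top = obs.labels.first else { return nil }
        return DetectedObject(label: top.identifier,
                              confidence: top.confidence,
                              boundingBox: obs.boundingBox)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;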

&lt;h2&gt;
  
  
  Preset Categories vs. Tap-to-Select
&lt;/h2&gt;

&lt;p&gt;The tricky UX question was: how does the user tell the app &lt;em&gt;what&lt;/em&gt; to count? I landed on two modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Preset categories&lt;/strong&gt; — bird/poultry, sheep, cattle, plants — each mapped to specific COCO class IDs. The detection filter is applied post-inference, so the model still runs once regardless.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tap-to-select&lt;/strong&gt; — the user taps one example item in the photo, and the app counts all detections with the nearest matching class. Good for "other" categories the presets don't cover.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The confidence thresholds needed tuning. Out of the box, YOLOv8n is conservative — I loosened the threshold for the farming categories because the cost of missing a sheep is higher than the cost of an occasional false positive that the user can tap away.&lt;/p&gt;
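
&lt;p&gt;In code, a preset is just a whitelist of COCO class names applied after the single inference pass. A sketch reusing the &lt;code&gt;DetectedObject&lt;/code&gt; type from above (class sets and threshold are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;// COCO class names per preset; illustrative, not the app's exact sets.
let presetClasses: [String: Set&lt;String&gt;] = [
    "poultry": ["bird"],
    "sheep":   ["sheep"],
    "cattle":  ["cow"],
    "plants":  ["potted plant"]
]

func applyPreset(_ all: [DetectedObject], preset: String,
                 minConfidence: Float) -&gt; [DetectedObject] {
    guard let classes = presetClasses[preset] else { return [] }
    // The model ran once; switching presets just re-applies this filter.
    return all.filter { classes.contains($0.label) &amp;&amp; $0.confidence &gt;= minConfidence }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;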

&lt;h2&gt;
  
  
  The Proof-of-Count Card
&lt;/h2&gt;

&lt;p&gt;The feature I shipped last was the shareable count card — a rendered image showing the annotated photo, count total, category, timestamp, and app branding. Smallholders sometimes need to show a headcount to a vet, insurer, or land agent. A screenshot is clunky. A clean card with metadata looks like a document.&lt;/p&gt;

&lt;p&gt;This was a SwiftUI &lt;code&gt;View&lt;/code&gt; rendered to &lt;code&gt;UIGraphicsImageRenderer&lt;/code&gt; — no external libraries, no server-side rendering.&lt;/p&gt;
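
&lt;p&gt;A sketch of that render path: wrap the SwiftUI view in a hosting controller and draw it into a &lt;code&gt;UIGraphicsImageRenderer&lt;/code&gt; context. (On iOS 16+, &lt;code&gt;ImageRenderer&lt;/code&gt; is the shorter route to the same result.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;import SwiftUI
import UIKit

// Render any SwiftUI view to a UIImage at a fixed size.
func renderCard&lt;V: View&gt;(_ view: V, size: CGSize) -&gt; UIImage {
    let host = UIHostingController(rootView: view)
    host.view.bounds = CGRect(origin: .zero, size: size)
    host.view.backgroundColor = .clear
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        host.view.drawHierarchy(in: host.view.bounds, afterScreenUpdates: true)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;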

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Running ML inference at the edge is surprisingly painless on modern Apple hardware. CoreML and Vision do the heavy lifting. The hard part isn't the inference — it's the UX around confidence thresholds, false positive handling, and giving users enough control without overwhelming them.&lt;/p&gt;

&lt;p&gt;If you're building anything that involves counting, detecting, or classifying on-device: the YOLOv8n → CoreML pipeline is mature, well-documented, and genuinely fast enough for production use.&lt;/p&gt;

&lt;p&gt;Muster is heading to the App Store soon. One-time purchase. No subscription. Count your flock. No signal needed.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>swift</category>
      <category>machinelearning</category>
      <category>coreml</category>
    </item>
    <item>
      <title>When Your Training Data Pipeline Has Three Different Ideas About the Same Thing</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Mon, 04 May 2026 08:01:44 +0000</pubDate>
      <link>https://dev.to/toddsullivan/when-your-training-data-pipeline-has-three-different-ideas-about-the-same-thing-4b8d</link>
      <guid>https://dev.to/toddsullivan/when-your-training-data-pipeline-has-three-different-ideas-about-the-same-thing-4b8d</guid>
      <description>&lt;p&gt;If you're building ML pipelines that consume data from multiple API endpoints, you've probably hit this: the same thing — a product, a user, a record — arrives in three subtly different shapes depending on which path it took to get to you.&lt;/p&gt;

&lt;p&gt;We hit this in a computer vision training pipeline recently. The pipeline synthesises training images for product classifiers — takes seed images of known products, composites them into scene images, generates bounding box annotations, trains a model. Standard stuff.&lt;/p&gt;

&lt;p&gt;The bug: seed images were being silently dropped during dataset preparation. Not erroring — just gone. The model would train on an incomplete dataset and we'd only notice when accuracy came back lower than expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root cause:&lt;/strong&gt; UID lookup using exact string match, but three different API callers were sending the same product reference in three different formats:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'Tesco Cornflakes Cereal 500G'        # raw label, spaces preserved
'tesco_cornflakes_cereal_500g'        # stringToFilename output, lowercase underscored
'Tesco_Cornflakes_Cereal_500G'        # case-preserved underscored (from external productCode)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The on-disk index used case-preserved underscored filenames. So if you came through the raw label path, your seed images were quietly dropped. No exception. No warning. Just a smaller dataset than you thought you had.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Happens
&lt;/h2&gt;

&lt;p&gt;Three different API routes, built at different times, by different people, each making a reasonable local decision about how to normalise a string. The bug only appears when you try to join across them using the output of one as the key into an index built from another.&lt;/p&gt;

&lt;p&gt;The fix was to make the lookup tolerant — normalise both the incoming ref &lt;em&gt;and&lt;/em&gt; the index key before comparison, so any of the three shapes resolves to the same entry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;normalise_uid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two lines. But the reason you need them is worth understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Broader Pattern
&lt;/h2&gt;

&lt;p&gt;Silent data loss in ML pipelines is particularly nasty because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It doesn't fail loudly.&lt;/strong&gt; The pipeline completes successfully. The model trains. You get results. You just don't realise the results are for a smaller, different dataset than you intended.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The signal is weak.&lt;/strong&gt; Lower accuracy could be bad data, bad hyperparameters, distribution shift, or a dozen other things. You might spend days investigating the model before you look at the pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It only manifests at scale.&lt;/strong&gt; In dev, you're running with a handful of products. Everyone has clean, matching UIDs. In production, you have hundreds of products, multiple API callers, and the mismatch rate goes up.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What to Add to Your Pipeline
&lt;/h2&gt;

&lt;p&gt;If you're building training data pipelines that consume product/entity references from multiple sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Assert dataset size at each stage.&lt;/strong&gt; Expected 120 seed images for this batch? Assert that before training starts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log dropped items explicitly.&lt;/strong&gt; Don't silently skip — log the UID that couldn't be resolved so you can catch shape mismatches immediately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Normalise at ingestion, not lookup.&lt;/strong&gt; Standardise the UID format the moment it enters your system, rather than trying to be tolerant at every lookup point downstream.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-reference your callers.&lt;/strong&gt; If you have multiple API endpoints that all feed the same pipeline, explicitly document which normalisation each one applies. It'll be someone else's problem in six months, and that someone might be you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The actual ML work — model architecture, training loops, hyperparameter tuning — gets a lot of attention. The data pipeline that feeds it is equally important and tends to get much less scrutiny. Bugs there don't throw exceptions. They just quietly make your model worse.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>computervision</category>
      <category>ai</category>
    </item>
    <item>
      <title>YOLOv8 + CoreML on iOS: Shipping Offline Computer Vision That Actually Works in the Field</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Fri, 01 May 2026 08:01:59 +0000</pubDate>
      <link>https://dev.to/toddsullivan/yolov8-coreml-on-ios-shipping-offline-computer-vision-that-actually-works-in-the-field-3e22</link>
      <guid>https://dev.to/toddsullivan/yolov8-coreml-on-ios-shipping-offline-computer-vision-that-actually-works-in-the-field-3e22</guid>
      <description>&lt;p&gt;I have been building a lot of server-side vision systems — cloud inference, GPU clusters, the whole stack. But a recent side project reminded me how compelling on-device AI still is, especially when you strip away the assumption of reliable connectivity.&lt;/p&gt;

&lt;p&gt;The project: a livestock counting app for smallholders. Take a photo of your flock, tap one chicken, get a count back. No account, no subscription, no signal required. Just a model on the device doing its job.&lt;/p&gt;

&lt;p&gt;Here is what I learned porting YOLOv8 into an iOS app via CoreML.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why On-Device at All?
&lt;/h2&gt;

&lt;p&gt;The obvious answer: barns and fields do not have 5G. But the less-obvious answer is more interesting — &lt;strong&gt;no server means no ongoing cost, no latency, and no privacy concern&lt;/strong&gt;. The photo never leaves the phone. That is increasingly a selling point, not a footnote.&lt;/p&gt;

&lt;p&gt;For small utility apps, cloud inference is overkill. You are paying per-inference and maintaining infrastructure to serve a model that could run on a £400 phone in under 200ms.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Stack: YOLOv8n → CoreML → Apple Vision
&lt;/h2&gt;

&lt;p&gt;The model is YOLOv8 nano (yolov8n), trained on COCO. Nano is the key decision — it is ~6MB, runs on the Neural Engine, and for categories like &lt;code&gt;bird&lt;/code&gt;, &lt;code&gt;sheep&lt;/code&gt;, &lt;code&gt;cow&lt;/code&gt; the accuracy is genuinely good enough for a counting use case.&lt;/p&gt;

&lt;p&gt;The conversion path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;ultralytics coremltools
yolo &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;yolov8n.pt &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;coreml &lt;span class="nv"&gt;nms&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gives you a &lt;code&gt;.mlpackage&lt;/code&gt;. Xcode compiles it to &lt;code&gt;.mlmodelc&lt;/code&gt; at build time and generates a Swift wrapper class automatically. The inference code is clean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;MLModelConfiguration&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;computeUnits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;all&lt;/span&gt;  &lt;span class="c1"&gt;// prefer Neural Engine&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;mlModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="kt"&gt;MLModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;contentsOf&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;modelURL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;vnModel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="kt"&gt;VNCoreMLModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;for&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;mlModel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;VNCoreMLRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;vnModel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;imageCropAndScaleOption&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;scaleFit&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;VNImageRequestHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;cgImage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;cgImage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;orientation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;orientation&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;perform&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="k"&gt;as?&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;VNRecognizedObjectObservation&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a modern iPhone, yolov8n inference on a 640px image runs in roughly 50–80ms. Fast enough that it feels instant.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hard Part: Confidence Thresholds
&lt;/h2&gt;

&lt;p&gt;COCO-trained YOLOv8 with default confidence thresholds performs well on textbook images. Real livestock photos are not textbook images. Partially-occluded animals behind fence posts, sheep that are mostly mud, chickens half-in-frame — these score lower confidence but are still valid detections you want to count.&lt;/p&gt;

&lt;p&gt;I ended up with a final threshold of &lt;code&gt;0.25&lt;/code&gt;, vs the default &lt;code&gt;0.35–0.45&lt;/code&gt; most tutorials recommend. The model exports with NMS baked in (&lt;code&gt;conf=0.15, iou=0.65&lt;/code&gt;), and I apply a second filter in Swift at &lt;code&gt;0.25&lt;/code&gt;. This catches most real-world partial occlusions without drowning in false positives.&lt;/p&gt;
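
&lt;p&gt;In Swift, that second pass is just a filter over the observations from the inference snippet above. A minimal sketch (the constant name is illustrative, not the shipping code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;// Second pass: keep anything above the in-app cutoff.
// 0.25 is looser than the 0.35–0.45 most tutorials suggest.
let appThreshold: VNConfidence = 0.25

let detections = ((request.results as? [VNRecognizedObjectObservation]) ?? [])
    .filter { $0.confidence &amp;gt;= appThreshold }

let count = detections.count
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
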

&lt;p&gt;The other trick: let users tap to remove false positives rather than trying to tune away every edge case. Editable results beat perfect results. People accept "mostly right, I will tap off the fence post shadow" much better than "sometimes wrong with no recourse."&lt;/p&gt;




&lt;h2&gt;
  
  
  Tap-to-Identify Flow
&lt;/h2&gt;

&lt;p&gt;Instead of forcing a category selection, users can just tap on one example object in the photo. The app finds the highest-confidence detection at that point, identifies its COCO class, and returns all detections of the same class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Vision uses bottom-left origin; UIKit uses top-left&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;visionPoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;CGPoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;normalisedPoint&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;y&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;normalisedPoint&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;tapped&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;observations&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;boundingBox&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;visionPoint&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;by&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;confidence&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;confidence&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That coordinate flip (&lt;code&gt;1.0 - normalisedPoint.y&lt;/code&gt;) is the kind of thing that wastes 45 minutes if you do not know to expect it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What On-Device Vision Is Actually Good For
&lt;/h2&gt;

&lt;p&gt;After building this, my take: on-device inference with a small COCO-trained model is a good fit for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Counting / detection&lt;/strong&gt; of common real-world objects (people, animals, vehicles, plants)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apps that work in low-connectivity environments&lt;/strong&gt; — field tools, outdoor apps, anything rural&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy-sensitive use cases&lt;/strong&gt; — medical, personal, anything users would not want hitting a cloud API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-off utility apps&lt;/strong&gt; where server infrastructure is not justified&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is not a good fit for fine-grained classification (you need a domain-specific model), real-time video at scale, or anything needing classes beyond COCO's fixed set of 80.&lt;/p&gt;

&lt;p&gt;The stack — YOLOv8 + CoreML + Apple Vision framework — is mature, well-documented, and genuinely pleasant to work with. If you are building something where offline matters, it is worth the afternoon it takes to get running.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>machinelearning</category>
      <category>swift</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Bridging Apple Services to a Remote AI with MCP and SSH</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Wed, 29 Apr 2026 08:01:52 +0000</pubDate>
      <link>https://dev.to/toddsullivan/bridging-apple-services-to-a-remote-ai-with-mcp-and-ssh-j7m</link>
      <guid>https://dev.to/toddsullivan/bridging-apple-services-to-a-remote-ai-with-mcp-and-ssh-j7m</guid>
      <description>&lt;p&gt;My AI assistant runs on a remote server. My Apple Mail, Calendar, and Messages live on my Mac. Getting them to talk to each other took an MCP server, some AppleScript, and an SSH reverse tunnel — and it works surprisingly well.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I run an AI assistant on a remote EC2 instance. It's persistent, always-on, and handles scheduled tasks, cron jobs, and multi-step automations. But macOS services — Mail, Calendar, Messages — are locked to the local machine by design. AppleScript won't work over SSH, and there's no official API for Apple Messages.&lt;/p&gt;

&lt;p&gt;The naive solution is to move everything to the cloud. The practical solution is to bring the cloud &lt;em&gt;to the Mac&lt;/em&gt; via an MCP server.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;clawMCP&lt;/code&gt; is a TypeScript MCP server that exposes 12 tools across three Apple services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mail&lt;/strong&gt; — list mailboxes, list/read/search messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Calendar&lt;/strong&gt; — list calendars, query events, create events, find free slots&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Messages&lt;/strong&gt; — list chats, read message history, send iMessages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It runs locally on my Mac as a &lt;code&gt;launchd&lt;/code&gt; service, listening on &lt;code&gt;localhost:3100&lt;/code&gt;. The remote AI instance connects to it through an SSH reverse tunnel that maps the remote &lt;code&gt;localhost:3100&lt;/code&gt; back to the Mac's port.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;macOS (local machine)            EC2 (remote AI host)
+---------------------+          +---------------------+
| clawMCP server      | &amp;lt;--SSH-- | AI assistant        |
| port 3100           |  tunnel  | MCP client          |
| AppleScript -&amp;gt; Mail |          | SSE at :3100        |
| AppleScript -&amp;gt; Cal  |          +---------------------+
| sqlite3 -&amp;gt; chat.db  |
+---------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Implementation Notes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Apple Mail and Calendar&lt;/strong&gt; are handled via &lt;code&gt;osascript&lt;/code&gt; — compiled AppleScript executed from Node.js. It's not elegant, but it's reliable and requires no third-party dependencies.&lt;/p&gt;
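
&lt;p&gt;The pattern is a small wrapper around &lt;code&gt;osascript&lt;/code&gt;. A minimal sketch (this helper and the Mail one-liner are illustrative, not clawMCP's actual tool code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Execute an AppleScript snippet via osascript and return its stdout.
async function osascript(script: string): Promise&amp;lt;string&amp;gt; {
  const { stdout } = await run("osascript", ["-e", script]);
  return stdout.trim();
}

// e.g. backing a "list mailboxes" tool:
const mailboxes = await osascript(
  'tell application "Mail" to get name of every mailbox'
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
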

&lt;p&gt;&lt;strong&gt;Apple Messages&lt;/strong&gt; is different. iMessage doesn't expose an AppleScript API for reading messages (only sending). Instead, I read directly from &lt;code&gt;~/Library/Messages/chat.db&lt;/code&gt; — a SQLite database that stores the full local message history. A query joining &lt;code&gt;message&lt;/code&gt;, &lt;code&gt;chat&lt;/code&gt;, and &lt;code&gt;handle&lt;/code&gt; tables gets you everything you need.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;prepare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`
  SELECT m.text, m.is_from_me, h.id as sender, m.date
  FROM message m
  JOIN chat_message_join cmj ON cmj.message_id = m.ROWID
  JOIN chat c ON c.ROWID = cmj.chat_id
  LEFT JOIN handle h ON h.ROWID = m.handle_id -- handle_id is 0 for outgoing rows
  WHERE c.chat_identifier = ?
  ORDER BY m.date DESC
  LIMIT ?
`&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chatId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The tunnel&lt;/strong&gt; is a simple persistent SSH connection with &lt;code&gt;RemoteForward 3100 localhost:3100&lt;/code&gt;. A startup script launches the server and the tunnel together; launchd restarts both if either crashes.&lt;/p&gt;
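
&lt;p&gt;Both pieces, sketched (host alias, hostname, and user are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# ~/.ssh/config on the Mac: publish the Mac's :3100 on the EC2 box
Host ai-host
  HostName ec2.example.com
  User todd
  RemoteForward 3100 localhost:3100
  ServerAliveInterval 30

# One-shot equivalent, handy for testing the tunnel by hand:
#   ssh -N -R 3100:localhost:3100 ai-host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
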

&lt;h2&gt;
  
  
  What It Enables
&lt;/h2&gt;

&lt;p&gt;With clawMCP connected, my AI assistant can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check my calendar before scheduling anything&lt;/li&gt;
&lt;li&gt;Read and search my email without me copy-pasting threads&lt;/li&gt;
&lt;li&gt;Look up recent iMessage history for context&lt;/li&gt;
&lt;li&gt;Send me iMessages as a notification channel when long-running tasks complete&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The last one is genuinely useful. When a 20-minute build finishes or a data pipeline completes, it pings me via iMessage. No email, no Slack — just a message on my phone from my AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;MCP is the right abstraction here.&lt;/strong&gt; It gives you tool discovery, type-safe parameters, and a standard transport layer. Building this as a raw HTTP API would have worked but required more glue code on the AI side.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQLite is surprisingly powerful for this use case.&lt;/strong&gt; Direct database reads are faster and more flexible than AppleScript for Messages. Just be careful with Full Disk Access permissions — macOS will silently fail without them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The SSH tunnel is simpler than it sounds.&lt;/strong&gt; One line in &lt;code&gt;~/.ssh/config&lt;/code&gt;, one &lt;code&gt;RemoteForward&lt;/code&gt; directive, and it just works. No VPN, no port forwarding on the router, no cloud relay service.&lt;/p&gt;

&lt;p&gt;If your AI lives somewhere other than your local machine, an MCP server + SSH reverse tunnel is a clean pattern for bridging local services. The code is all TypeScript, the surface area is small, and the result is an assistant that actually knows what's on your calendar.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>macos</category>
      <category>typescript</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Killing the Setup Endpoint: Moving Env Provisioning into GitHub Actions</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Mon, 27 Apr 2026 08:02:13 +0000</pubDate>
      <link>https://dev.to/toddsullivan/killing-the-setup-endpoint-moving-env-provisioning-into-github-actions-4jpj</link>
      <guid>https://dev.to/toddsullivan/killing-the-setup-endpoint-moving-env-provisioning-into-github-actions-4jpj</guid>
      <description>&lt;p&gt;We had an API endpoint that set up environments. It claimed a pre-warmed org from a pool, authenticated two users, imported test data, installed a bundle, and published config. Six sequential shell calls. Runtime dependency on a server. Credentials scattered across process state. A pain to debug when it failed at step 4 of 6 at 2am.&lt;/p&gt;

&lt;p&gt;The fix wasn't to rewrite the API. It was to stop having an API at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  The move: GitHub Actions as the runtime
&lt;/h2&gt;

&lt;p&gt;The entire setup sequence now lives in a single GitHub Actions workflow file. No server, no queue, no process isolation hacks. The runner &lt;em&gt;is&lt;/em&gt; the environment — ephemeral, observable, retryable.&lt;/p&gt;

&lt;p&gt;The key architectural shifts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Parallelise everything that can be.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The old endpoint ran sequentially because it was Node.js with a queue. GitHub Actions gives you parallelism at two levels: background processes inside a single &lt;code&gt;run&lt;/code&gt; step, and matrix jobs. Auth for two users? One &lt;code&gt;run&lt;/code&gt; block, two background processes, &lt;code&gt;wait&lt;/code&gt;. Test data import for multiple data keys? Matrix strategy, each key in its own parallel job. What was 6 serial calls is now 3 parallel groups (sketched below).&lt;/p&gt;

&lt;p&gt;Before: ~8 minutes end-to-end.&lt;br&gt;
After: ~3.5 minutes.&lt;/p&gt;
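
&lt;p&gt;Both shapes, sketched (&lt;code&gt;auth_user&lt;/code&gt;, &lt;code&gt;import_data&lt;/code&gt;, and the dataset keys stand in for the real CLI calls):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;jobs:
  auth:
    runs-on: ubuntu-latest
    steps:
      # Two auth flows in one step, run concurrently.
      - name: Authenticate both users
        run: |
          auth_user admin &amp;amp; pid1=$!
          auth_user tester &amp;amp; pid2=$!
          wait "$pid1"   # waiting on explicit PIDs surfaces either failure
          wait "$pid2"

  import-data:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        dataset: [core, accounts, products]
    steps:
      # One parallel job per dataset key.
      - run: import_data "${{ matrix.dataset }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
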

&lt;p&gt;&lt;strong&gt;2. Reusable workflows for cross-repo consumption.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The real unlock was &lt;code&gt;workflow_call&lt;/code&gt;. Instead of every repo maintaining its own setup script or calling an API, they just reference the central workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your-org/env-setup/.github/workflows/setup.yml@main&lt;/span&gt;
    &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;staging&lt;/span&gt;
      &lt;span class="na"&gt;dataset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;core&lt;/span&gt;
    &lt;span class="na"&gt;secrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;inherit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;secrets: inherit&lt;/code&gt; means the caller's secrets pass through automatically — define them once at the org level, every repo picks them up. No per-repo secret duplication. Rotate once, everything updates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Credentials as artifacts, not environment variables.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Secrets (passwords, tokens, auth URLs) get written to a JSON file and uploaded as a run artifact with masking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"::add-mask::&lt;/span&gt;&lt;span class="nv"&gt;$ADMIN_PASSWORD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
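
&lt;p&gt;In context, the full step looks roughly like this (file name and action version are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- name: Package credentials
  run: |
    echo "::add-mask::$ADMIN_PASSWORD"
    # Collect the secret material into one JSON file for downstream jobs.
    printf '{"adminPassword":"%s","authUrl":"%s"}\n' \
      "$ADMIN_PASSWORD" "$AUTH_URL" &amp;gt; creds.json

- uses: actions/upload-artifact@v4
  with:
    name: env-credentials
    path: creds.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
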



&lt;p&gt;Downstream jobs download the artifact and read the values they need, re-masking them in their own logs (masks do not carry across jobs). This means logs stay clean, credentials are scoped to the job that needs them, and there's no secret bleeding into env vars that outlive the step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Non-secret outputs as workflow outputs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instance URLs, user IDs, org IDs — non-sensitive stuff — get published as &lt;code&gt;jobs.&amp;lt;job&amp;gt;.outputs&lt;/code&gt;. Any downstream job can reference &lt;code&gt;needs.provision.outputs.instanceUrl&lt;/code&gt; directly. Clean separation between sensitive and non-sensitive data.&lt;/p&gt;
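
&lt;p&gt;Sketched, with illustrative names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;jobs:
  provision:
    runs-on: ubuntu-latest
    outputs:
      instanceUrl: ${{ steps.setup.outputs.instanceUrl }}
    steps:
      - id: setup
        run: echo "instanceUrl=https://example.invalid" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"

  smoke-test:
    needs: provision
    runs-on: ubuntu-latest
    steps:
      # Non-sensitive values flow through plain job outputs.
      - run: curl -sf "${{ needs.provision.outputs.instanceUrl }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
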

&lt;h2&gt;
  
  
  What this replaced
&lt;/h2&gt;

&lt;p&gt;The old flow required a running API server, a cloud function, and a manually-maintained shell script per environment type. When the server had a bad deploy, env setup broke. When the shell script fell out of sync with the API, you got silent failures.&lt;/p&gt;

&lt;p&gt;Now it's a YAML file in a repo. PRs are reviewed. Failures show up in Actions logs with full context. Retries are a button click.&lt;/p&gt;

&lt;h2&gt;
  
  
  The unexpected benefit
&lt;/h2&gt;

&lt;p&gt;Making setup a reusable workflow forced us to define its interface clearly: inputs, outputs, required secrets. That contract made the setup process legible to anyone on the team, not just the person who wrote the original API endpoint.&lt;/p&gt;

&lt;p&gt;If you're running environment provisioning as a service endpoint and it's causing pain — consider whether it needs to be a service at all. Sometimes the right move is to make the CI runner do the work.&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>devops</category>
      <category>cicd</category>
      <category>automation</category>
    </item>
    <item>
      <title>I Built a Persistent AI Assistant That Runs on My Mac</title>
      <dc:creator>Todd Sullivan</dc:creator>
      <pubDate>Fri, 24 Apr 2026 09:05:25 +0000</pubDate>
      <link>https://dev.to/toddsullivan/i-built-a-persistent-ai-assistant-that-runs-on-my-mac-44n</link>
      <guid>https://dev.to/toddsullivan/i-built-a-persistent-ai-assistant-that-runs-on-my-mac-44n</guid>
      <description>&lt;p&gt;I got tired of AI assistants that forget everything the moment a session ends. So I built one that doesn't.&lt;/p&gt;

&lt;p&gt;It runs 24/7 on my Mac, has access to my files, GitHub, iMessage, email, and calendar. It knows who I am, what I'm working on, and what I said to it last week.&lt;/p&gt;

&lt;h2&gt;
  
  
  The core problem with stateless AI
&lt;/h2&gt;

&lt;p&gt;Every time you open a new Claude or ChatGPT session, you start from zero. You re-explain your context. You re-establish what you're working on. You paste in the same background info.&lt;/p&gt;

&lt;p&gt;This is fine for one-off tasks. It's terrible for an ongoing working relationship.&lt;/p&gt;

&lt;h2&gt;
  
  
  The memory architecture
&lt;/h2&gt;

&lt;p&gt;Instead of in-context memory, I use files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;MEMORY.md&lt;/code&gt; — long-term curated knowledge. What matters, distilled.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;memory/YYYY-MM-DD.md&lt;/code&gt; — daily logs. What happened, decisions made, things to remember.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;USER.md&lt;/code&gt; — who I am, my stack, my communication style.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TOOLS.md&lt;/code&gt; — local setup specifics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every session, the agent reads the relevant files before doing anything. This is the continuity layer.&lt;/p&gt;
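
&lt;p&gt;As a sketch, the bootstrap is little more than concatenating those files (assuming the layout above; &lt;code&gt;date +%F&lt;/code&gt; emits YYYY-MM-DD):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Assemble the continuity layer: standing files plus today's log.
cat MEMORY.md USER.md TOOLS.md "memory/$(date +%F).md" 2&amp;gt;/dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
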

&lt;h2&gt;
  
  
  MCP for real-world access
&lt;/h2&gt;

&lt;p&gt;Model Context Protocol (MCP) is what lets the agent actually &lt;em&gt;do&lt;/em&gt; things — not just talk about them.&lt;/p&gt;

&lt;p&gt;I use it for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apple Mail, Calendar, Messages via a local MCP server&lt;/li&gt;
&lt;li&gt;GitHub via &lt;code&gt;gh&lt;/code&gt; CLI&lt;/li&gt;
&lt;li&gt;File system access&lt;/li&gt;
&lt;li&gt;Browser automation (Puppeteer via Chrome DevTools Protocol)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;It's not a chatbot. It's closer to a part-time assistant who's always available and never forgets anything. The most useful thing isn't any single capability — it's that context persists.&lt;/p&gt;

&lt;p&gt;I can say "remember the JWT issue from last week" and it actually knows what I mean.&lt;/p&gt;




&lt;p&gt;The hardest part isn't the AI. It's designing the memory and context system that makes it feel coherent over time.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>macos</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
