naoki_JPN

I shipped a paid web app in half a day using Claude Code + Codex (Garmin AI CSV converter)

Note: This article reflects information as of May 2026.

What I built

Garmin AI Export — a web app that converts the activity, sleep, and health data ZIP exported from Garmin Connect into clean CSVs that ChatGPT, Gemini, and Claude can readily analyze.

🔗 https://garmin-ai-export.vercel.app

If you've worn a Garmin watch for years, you've accumulated a massive trail of data: runs, sleep records, heart rate, stress, Body Battery — all of it. Garmin Connect lets you export everything, but what you get is a tangled bundle of nested JSON and binary FIT files. Hand that directly to ChatGPT and tell it to "analyze this," and it won't know where to start.

This app:

  1. Takes the raw ZIP you exported from Garmin Connect
  2. Unpacks and parses it entirely in your browser
  3. Returns a ZIP containing four files — activities.csv, sleep.csv, daily_health.csv, laps.csv — plus an AI prompt template

Feed that to ChatGPT, Gemini, or Claude (Code Interpreter or their equivalent data-analysis tools) and you can immediately ask things like "show me how my pace has trended" or "find correlations between sleep and next-day performance."

And here's the punchline: I built and shipped this — including the payment integration — in about half a day, by pairing Claude Code (Opus 4.7) with Codex (GPT-5 Codex). I'll get into the role-split below, but the short version is: Claude Code played PM and reviewer; Codex did the implementation.

Tech stack

  • Next.js (App Router) + Tailwind CSS
  • JSZip — ZIP unpacking and re-compression
  • @streamparser/json — streaming parser for huge JSON
  • fit-file-parser — for Garmin's binary .fit files
  • PapaParse — CSV generation
  • Web Worker — to keep the main thread responsive
  • Square Payment Links — payments (Apple Pay / Google Pay supported)
  • Vercel — hosting

The key principle is "everything in the browser." Garmin exports can be hundreds of megabytes to multiple gigabytes — there's no way you're streaming that to a server (Vercel's request body limit is 4.5MB; you're done before you start). Plus, on principle, I didn't want users' health data sitting on my server.

So upload, parse, convert, and download all happen inside the user's browser. The only server-side code is a tiny API route that creates a Square payment link.
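
To make that browser-only pipeline concrete, here is a minimal sketch of the repackaging step. The real code differs — buildOutputZip, the table names, and the row shapes are assumptions — but the JSZip + PapaParse combination is the one listed in the stack below.

import JSZip from "jszip";
import Papa from "papaparse";

// Hypothetical repackaging step: turn parsed rows into the four CSVs plus the prompt
// template, entirely in memory, and hand back a Blob the browser offers as a download.
async function buildOutputZip(
  tables: Record<string, object[]>, // e.g. { activities, sleep, daily_health, laps }
  promptTemplate: string,
): Promise<Blob> {
  const zip = new JSZip();
  for (const [name, rows] of Object.entries(tables)) {
    zip.file(`${name}.csv`, Papa.unparse(rows)); // PapaParse builds the CSV text
  }
  zip.file("ai_prompt.txt", promptTemplate); // illustrative filename for the prompt template
  return zip.generateAsync({ type: "blob" }); // re-compress with JSZip
}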

How Claude Code and Codex split the work

This project ran on two AI agents with clearly separated roles.

Role           | Agent                  | Responsibilities
---------------|------------------------|----------------------------------------------------------------------
PM / Research  | Claude Code (Opus 4.7) | Translate requests into GitHub Issues, manage labels, set priorities
Implementation | Codex (GPT-5 Codex)    | Cut branches, write code, open Draft PRs
Code review    | Claude Code (Opus 4.7) | Read PR diffs, leave sharp inline comments on GitHub
Merge / deploy | Claude Code (Opus 4.7) | Approve, squash-merge, verify production
Approval / GO  | Me (the human)         | Just say "OK," "implement #X," "ship it"

The actual flow looks like:

I describe what I want
  → Claude Code creates a GitHub Issue (label: triage)
  → I say "OK" → label changes to ready
  → I say "implement #X"
  → Codex cuts a branch, writes code, opens a Draft PR
  → I say "please review"
  → Claude Code posts a thorough review on GitHub
  → Codex addresses the comments
  → Claude Code re-reviews → LGTM, squash merge
  → Vercel auto-deploys
  → Claude Code curl-checks production

What's interesting about this setup is that using a different model for review actually works. When Claude Code reads code that Codex wrote, there's no self-confirmation bias — comments like "disabled={!result} is dead code here," "you're using a private JSZip API," "the worker isn't terminated on reset" surface naturally. You don't get this when the same model writes and reviews.

I never wrote a line of code. I only said "I want this," "OK," and "ship it."

Things that bit me

I stepped on a few mines worth documenting.

1. Parsing 135MB JSON on the main thread froze the page

Garmin's *_summarizedActivities.json is over 100MB for heavy users. A naive JSON.parse(text) blocks the main thread for ~20 seconds and the browser starts asking the user if they want to kill the page.

The fix is two-layered.

(a) Move it to a Web Worker

// src/workers/garmin-converter.worker.ts
// convertGarminExportCoreBuffer and postProgress are the app's own parsing and progress helpers
self.onmessage = async (event) => {
  const result = await convertGarminExportCoreBuffer(event.data.file, postProgress);
  // Transfer the finished ArrayBuffer back instead of copying it
  self.postMessage({ type: "done", result }, { transfer: [result.buffer] });
};

Putting the ArrayBuffer in the transfer list does a zero-copy handoff to the main thread. Even a 100MB+ buffer crosses the worker boundary instantly because nothing is copied.
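
For context, the main-thread side looks roughly like this — a sketch, not the app's actual code; the message shape matches the worker snippet above, and offerDownload is a hypothetical helper:

// Main-thread sketch: spawn the worker, hand it the file, receive the transferred result
const worker = new Worker(
  new URL("../workers/garmin-converter.worker.ts", import.meta.url),
);

worker.onmessage = (event: MessageEvent) => {
  if (event.data.type === "done") {
    // event.data.result.buffer arrived via the transfer list, so nothing was copied;
    // the worker-side reference to that buffer is now detached.
    offerDownload(event.data.result); // hypothetical helper that builds the download link
    worker.terminate();
  }
};

worker.postMessage({ file }); // `file` is the user's Garmin ZIP (a File object)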

(b) Stream-parse the JSON itself

Use @streamparser/json to iterate over the giant array element-by-element:

import { JSONParser } from "@streamparser/json";

// Match each element of the summarizedActivitiesExport array as it streams past,
// instead of materializing the whole 100MB+ document at once
const parser = new JSONParser({
  paths: ["$.*.summarizedActivitiesExport.*"],
});

parser.onValue = ({ value }) => {
  rows.push(extractActivityRow(value)); // rows: the accumulator that becomes activities.csv
};

The $.* prefix in paths is a sneaky trap. The actual Garmin JSON shape is [{"summarizedActivitiesExport": [...]}] — an array at the root. So you have to indicate "the array root" with $.*, otherwise nothing matches and your CSV ends up empty. I forgot this on the first pass and spent way too long staring at empty output.
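
Feeding the parser is then just a matter of pushing the file's bytes through parser.write(); a minimal sketch — the variable names are assumptions, the library calls are real:

// Stream the *_summarizedActivities.json entry into the parser in bounded slices
const jsonBytes = await summarizedActivitiesEntry.async("uint8array"); // a JSZip file entry located earlier
const chunkSize = 1 << 20; // 1MB at a time keeps memory pressure manageable

for (let offset = 0; offset < jsonBytes.length; offset += chunkSize) {
  parser.write(jsonBytes.subarray(offset, offset + chunkSize)); // onValue fires for each matched element
}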

2. Reaching into JSZip's private API and getting burned

The first version of the code, written by Claude, used zip.file().internalStream("uint8array"). It worked, but it's an undocumented JSZip internal API with no type definitions. Review caught it, and we rewrote it:

const buffer = await entry.async("uint8array");
const chunkSize = 1 << 20; // 1MB
for (let offset = 0; offset < buffer.length; offset += chunkSize) {
  // process 1MB at a time, yielding back to the event loop
  await yieldToEventLoop();
  processChunk(buffer.subarray(offset, offset + chunkSize));
}

yieldToEventLoop() is just new Promise(resolve => setTimeout(resolve, 0)). Without it, you're back to freezing the UI on huge files.
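
For completeness, that helper is just the one-liner the prose describes:

// Park the current task so rendering and input handlers get a turn before the next chunk
const yieldToEventLoop = () =>
  new Promise<void>((resolve) => setTimeout(resolve, 0));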

3. The iOS Safari Blob + IndexedDB landmine

For the payment gate, I needed to persist the converted ZIP somewhere while the user bounces over to Square's checkout and back. The natural choice: stash a Blob in IndexedDB. But:

The biggest risk is the case where iOS Safari can't restore the Blob after payment.

This actually happens. iOS Safari has a long-standing bug where Blobs stored in IndexedDB come back corrupted when retrieved, especially for larger Blobs. "Paid the money, can't get the file" is the worst possible UX, so:

// Blob → ArrayBuffer before storing
const buffer = await result.blob.arrayBuffer();
await store.put({ buffer, files, filename, ... });

// Reconstruct the Blob from the ArrayBuffer when retrieving
const blob = new Blob([record.buffer], { type: "application/zip" });

ArrayBuffer round-trips cleanly through IndexedDB even on iOS Safari. Catching this in review saved a lot of pain.
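
The `store` in that snippet comes from whatever IndexedDB wrapper the app uses; for illustration, the same pattern with the idb package (not necessarily what the app ships, and all names here are assumptions) looks like:

import { openDB } from "idb";

// Hypothetical persistence helper: stash the converted result as an ArrayBuffer
// so it survives the round trip to Square's checkout page.
async function savePendingDownload(blob: Blob, files: string[], filename: string) {
  const buffer = await blob.arrayBuffer(); // Blob → ArrayBuffer before storing
  const db = await openDB("garmin-ai-export", 1, {
    upgrade(db) {
      db.createObjectStore("pending");
    },
  });
  await db.put("pending", { buffer, files, filename }, "latest");
}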

4. The worker kept running after a reset

If a user clicks "Reset" while a conversion is in progress, you have to actually terminate the worker — otherwise it leaks memory and burns CPU.

// Module-level handle, set when a conversion starts
let activeWorker: Worker | null = null;

export function abortConversion() {
  activeWorker?.terminate(); // hard-stop the parse and release its memory
  activeWorker = null;
}

Easy to forget. A review comment of "is this still running in the background after reset?" is what surfaced it.

Adding payments

I gated the download behind a paywall.

Why Square?

Candidates were Stripe, PayPal, Square, Paddle. Reasons I picked Square:

  • Stripe rejected my application (and getting approved as a sole proprietor takes time)
  • Apple Pay and Google Pay are included by default (zero extra implementation)
  • Payment Links API gives you hosted checkout in seconds — no card form on my own site, so no PCI DSS scope

The implementation is shockingly small. Server-side, you just hit Square's API to mint a payment link:

// src/app/api/square/payment-link/route.ts
const squareResponse = await fetch(
  `${getSquareApiBaseUrl()}/v2/online-checkout/payment-links`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SQUARE_ACCESS_TOKEN}`,
      "Content-Type": "application/json",
      "Square-Version": "2026-01-22",
    },
    body: JSON.stringify({
      idempotency_key: crypto.randomUUID(),
      quick_pay: {
        name: "Garmin AI Export",
        price_money: { amount: PRICE, currency: "JPY" },
        location_id: process.env.SQUARE_LOCATION_ID,
      },
      checkout_options: {
        allow_tipping: false,
        redirect_url: "https://garmin-ai-export.vercel.app?paid=true",
      },
    }),
  },
);

Redirect the user to the returned payment_link.url, they pay (card or Apple Pay or Google Pay) on Square's hosted checkout, and they come back to your site with ?paid=true appended.
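
On the client, kicking off checkout is a single fetch-and-redirect; a sketch, assuming the route accepts POST and returns the payment link URL in its JSON body (the actual response shape isn't shown here):

// Hypothetical client handler: ask the API route for a payment link, then hop to Square
async function startCheckout() {
  const res = await fetch("/api/square/payment-link", { method: "POST" });
  const { url } = await res.json(); // assumed shape: { url: paymentLink.url }
  window.location.href = url; // user pays on Square, returns with ?paid=true
}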

Soft gate, by design

I do not server-side verify that the user actually paid. If ?paid=true is in the URL, the download unlocks.

This is intentional:

  • I have no database and no user accounts, so there's no real way to verify (you could log purchases via webhook, but you'd need a way to identify users)
  • For a tool meant for individual personal use, someone bypassing it by typing the URL is fine
  • Philosophy: "If you feel like paying, please pay. If not, that's also fine."
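
The gate itself is therefore just a query-parameter check; a sketch using next/navigation (the hook name is an assumption):

"use client";
import { useSearchParams } from "next/navigation";

// Hypothetical soft-gate hook: the download unlocks when Square's redirect appended ?paid=true
export function useIsPaid(): boolean {
  const params = useSearchParams();
  return params.get("paid") === "true";
}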

About currency

Square only lets you receive revenue in your account's home country currency. My account is registered in Japan, so JPY only. When an overseas user pays via Apple Pay, Apple does the FX conversion on their side, and the merchant always sees JPY in the books. So an English UI with JPY pricing is perfectly workable for international users.

Deploy and cost

Moving from Vercel Hobby to Pro immediately

The Vercel Hobby plan is strictly "personal, non-commercial use." Running a paid site on Hobby violates the ToS and risks account termination.

→ I upgraded to Pro ($20/month) right after wiring up payments.

Bandwidth is a non-issue

I worried at first: "These Garmin ZIPs can be hundreds of MB — am I going to blow through bandwidth?" But once I thought about it:

  • Uploading the Garmin ZIP → handled in-browser (never touches Vercel)
  • Downloading the converted ZIP → from browser to local disk (never touches Vercel)
  • Payment screen → on Square's domain (never touches Vercel)

Vercel only serves the initial page load (~2MB) and the Square API call (~1KB). Pro's 1TB allotment covers ~500,000 users. The browser-only architecture really pays off here.

Google Analytics and a favicon

Last touches: GA and a custom favicon.

GA4 via next/script with afterInteractive:

{SHOULD_LOAD_GOOGLE_ANALYTICS ? (
  <>
    <Script
      src={`https://www.googletagmanager.com/gtag/js?id=${GA_MEASUREMENT_ID}`}
      strategy="afterInteractive"
    />
    <Script id="google-analytics" strategy="afterInteractive">
      {`window.dataLayer = window.dataLayer || [];
        function gtag(){dataLayer.push(arguments);}
        gtag('js', new Date());
        gtag('config', '${GA_MEASUREMENT_ID}');`}
    </Script>
  </>
) : null}

Favicons in the Next.js App Router are convention-based — drop in src/app/icon.tsx and src/app/apple-icon.tsx and they're served automatically at /icon and /apple-icon. They can be generated dynamically with ImageResponse, so no external image file is required:

// src/app/app-icon-image.tsx
import { ImageResponse } from "next/og";

export function createAppIconResponse(size: { width: number; height: number }) {
  return new ImageResponse(
    (
      <div style={{ background: "#104c3f", borderRadius: "20%", /* ... */ }}>
        <svg viewBox="0 0 24 24" fill="none" stroke="white">
          {/* FileArchive icon SVG paths */}
        </svg>
      </div>
    ),
    size,
  );
}
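
The convention files then just delegate to that generator; a sketch of what src/app/icon.tsx would look like (the exact wiring is assumed):

// src/app/icon.tsx — served automatically at /icon by the App Router
import { createAppIconResponse } from "./app-icon-image";

export const size = { width: 32, height: 32 };
export const contentType = "image/png";

export default function Icon() {
  return createAppIconResponse(size);
}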

Reflections on building with Claude Code + Codex

What worked well

  • The role-split is fast. Claude Code focusing on requirement-shaping and review while Codex steadily implements is a great fit for "one human plus a fleet of AIs."
  • Mixed-model review actually catches things. When the same model writes and reviews, it misses stuff. A different model has no qualms about saying "wait, this is wrong."
  • When I got stuck, Claude Code could surface fixes like "iOS Safari has a known IndexedDB Blob bug, switch to ArrayBuffer storage" without me prompting it.
  • Codex is callable from CLI, so I can shoot off "implement this issue and open a PR" via Bash in one shot.

What I made sure to do

  • Never let "I'd kinda want this" quietly become "it's already being implemented." I wrote an explicit rule in CLAUDE.md: implementation starts only on "implement #X."
  • Reviews always go to a different model (Claude Code) so the implementer (Codex) doesn't grade its own homework.
  • I don't say "merge it" — I say "ship it to main" / "release to prod" so deployments are an explicit step.

The "I'll just whip up a thing" mode is incredibly powerful. The flip side is that without checks, it'll happily over-build, so the review-and-approval ritual is worth keeping.

Wrap-up

  • Half a day → a working web app with payments, analytics, and a custom favicon
  • Browser-only architecture wins on both server cost and privacy
  • Claude Code (PM/reviewer) + Codex (implementer) as a two-agent setup gives you both speed and quality

If you're a Garmin user curious to throw your own activity data at ChatGPT, Gemini, or Claude, give it a try.

🔗 https://garmin-ai-export.vercel.app
