DEV Community

우병수

Posted on • Originally published at techdigestor.com

I Built a Personal Glucose Dashboard with React and Firebase — Here's Everything That Tripped Me Up

TL;DR: Dexcom Clarity only exports PDFs, LibreView's CSVs are delayed and inconsistently named, and neither app supports custom alert thresholds — so I built my own real-time dashboard with React, Firebase, and Recharts. Here's everything that tripped me up along the way.

📖 Reading time: ~38 min

What's in this article

  1. The Problem: CGM Apps Are Either Locked Down or Ugly
  2. What You'll Actually Build
  3. Project Setup — From Zero to Running Locally
  4. Connecting to Glucose Data — Your Options and Their Tradeoffs
  5. Firestore Data Model — Get This Wrong and You'll Rewrite It
  6. Real-Time Updates with onSnapshot — Where It Gets Interesting
  7. Building the Glucose Chart with Recharts
  8. Firebase Auth — Keeping Your Health Data Actually Private

The Problem: CGM Apps Are Either Locked Down or Ugly

The thing that finally broke me with Dexcom Clarity was discovering that you can export your data — but only as a PDF. Not CSV, not JSON, not anything a developer could actually use. LibreView does offer CSV export, but it's batched, delayed by up to 24 hours on the free tier, and the column naming is inconsistent between firmware versions of the sensor. Neither app lets you set an alert at, say, 140 mg/dL post-meal if you're trying to catch early spikes. You get their alert thresholds or nothing.

What I actually wanted was a single view that overlays my CGM readings against a manual meal log, a sleep window pulled from my Garmin, and whatever exercise I logged that day. The correlation between a 45-minute walk and my glucose curve two hours later is genuinely useful information — and neither Dexcom nor Abbott surfaces it. They're both optimized for the clinical view: did you go hypo, did you go hyper, here's your A1C estimate. That's not useless, but it's not what a technically-minded person managing their own data wants to see.

I went with React + Firebase instead of Next.js + Supabase for a specific reason: real-time updates. My CGM pushes readings every 5 minutes via a bridge app (xDrip+ on Android writing to a Firebase Realtime Database endpoint). With Firebase's onValue listener, the dashboard re-renders the moment a new reading lands — no polling, no refresh, no stale data. Next.js Server Components and Supabase's real-time subscriptions can do this too, but the setup overhead is higher and I'd be fighting SSR concerns for a dashboard that's almost entirely client-side state anyway. Firebase's free Spark plan comfortably covers the read/write volume of one person's CGM data — we're talking a few hundred KB per day at most. To scaffold the component structure and Firestore security rules faster, I leaned on a couple of the tools listed in our guide on the Best AI Coding Tools in 2026.

The xDrip+ → Firebase bridge is the part nobody talks about in these tutorials. xDrip+ has a built-in "Upload to Firebase" plugin under Settings → Cloud Upload → Firebase Realtime Database. You give it your database URL and a secret, and it starts writing glucose entries to a path like /sgv/ in this shape:

{
  "sgv": 118,          // glucose in mg/dL
  "date": 1718200412000, // Unix ms timestamp
  "direction": "Flat",  // trend arrow as string
  "device": "dexcom"
}

That's the data contract your React app reads from. The direction field is inconsistent — sometimes it's "Flat", sometimes "→", sometimes "FortyFiveUp" depending on xDrip version — so you need a normalization function before you hand it to Recharts or whatever charting library you're using. I wasted half a day on that before hardcoding a lookup table. The other gotcha: Firebase Realtime Database (not Firestore) is the right choice for this ingestion path because xDrip's plugin was built for it. Don't try to redirect writes to Firestore at this layer — it's a different SDK and the plugin doesn't support it.
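A minimal version of that lookup table might look like this — the exact direction strings vary by xDrip+ version, so treat this as illustrative and extend it from whatever your own instance actually emits (function and map names are mine):

```javascript
// Illustrative mapping from raw xDrip+ direction strings to one
// normalized trend vocabulary. Unknown or missing values fall
// through to "unknown" instead of throwing mid-render.
const DIRECTION_MAP = {
  DoubleUp: "rising_quickly",
  SingleUp: "rising_quickly",
  FortyFiveUp: "rising",
  Flat: "flat",
  "→": "flat",
  FortyFiveDown: "falling",
  SingleDown: "falling_quickly",
  DoubleDown: "falling_quickly",
};

function normalizeDirection(raw) {
  return DIRECTION_MAP[raw] ?? "unknown";
}
```

Run every reading through this before it reaches your chart layer, and the version-to-version inconsistency stops being your chart's problem.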

What You'll Actually Build

The finished product is a single-page React app that pulls glucose readings from Firestore in real time and plots them as a continuous line chart — target range bands included, so you can see at a glance how many readings landed between 70 and 180 mg/dL, which is the ADA's standard target window. The bands are configurable per user; I store mine as Firestore document fields so the threshold values survive a browser refresh without touching code.
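The time-in-range number itself is a one-liner once the readings are in memory — a sketch, with the helper name being mine and the defaults matching the ADA window mentioned above:

```javascript
// Percentage of readings inside the target band (defaults to the
// ADA standard 70–180 mg/dL window).
function timeInRangePercent(readings, low = 70, high = 180) {
  if (readings.length === 0) return 0;
  const inRange = readings.filter((r) => r.value >= low && r.value <= high);
  return Math.round((inRange.length / readings.length) * 100);
}
```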

Alongside the chart, there's a manual log panel for meals and insulin doses. Each entry gets timestamped server-side using serverTimestamp() — not the client clock, because I learned early on that phone clocks drift and that drift makes correlation analysis useless. The log entries render as vertical markers on the same time axis as the glucose line, so you can visually trace a post-meal spike back to what you ate and when. That correlation is the entire reason I built this instead of just exporting CSVs from my CGM app.

Firebase Auth handles identity. I use the email/password provider because it's one account on two devices and I'm not adding OAuth redirect flows to a personal health tool (more on that choice in the Auth section). The critical bit is that every Firestore read and write checks request.auth.uid against the document owner's UID. Here's the actual security rule I run:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the authenticated owner can read or write their own data
    match /users/{userId}/{document=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}

That rule rejects every write attempt that doesn't come from your own UID — including your own accidental curl commands against the wrong project. There are no public endpoints. No API keys exposed in a backend you have to keep alive. Firestore's client SDK talks directly to Google's servers, authenticated via the Firebase Auth token in the browser. The whole security model lives in those four lines, not in an Express middleware you'll forget to update.

Deployment is a single vercel --prod from the project root. No Docker, no EC2, no systemd service file. Vercel treats it as a static React build — which it is, since there's no Node server in the picture. The only environment variables you need are the Firebase project config values, which you set once in the Vercel dashboard. After that, git push to main and it redeploys automatically. I've been running this for three months without touching the infrastructure side once.

A few concrete constraints to set expectations before you build: Firestore's free Spark plan gives you 50,000 reads and 20,000 writes per day. A CGM logs a reading every five minutes — 288 documents per day — so rendering a full day's chart costs 288 reads per load, well inside the limit for personal use. If you use a real-time listener (onSnapshot) instead of repeated one-shot fetches, you pay for the initial result set and then one read per changed document — not a full re-read of the window on every refresh. I switched to onSnapshot in week two and my daily read count dropped by about 60%.

Project Setup — From Zero to Running Locally

The thing that catches most people off guard with Vite is the environment variable prefix. If you're coming from Create React App, you'll spend a confused 20 minutes wondering why your Firebase config is undefined everywhere. CRA used REACT_APP_ — Vite uses VITE_, and it will silently ignore anything else. No warning, no error, just undefined at runtime.

Start with the scaffold:

npm create vite@latest glucose-dash -- --template react-ts
cd glucose-dash
npm install firebase recharts date-fns react-hook-form zustand
npm install -D @types/node

I picked this specific dependency set deliberately. Recharts because it's the least painful charting library for React — the API maps well to how you actually think about glucose data (time on x-axis, mg/dL or mmol/L on y-axis, reference bands for target range). date-fns because moment.js is 67KB gzipped and date-fns is tree-shakeable. react-hook-form because you'll have a manual entry form and uncontrolled inputs with validation are genuinely less painful here than controlled state. Zustand for app-level state — the auth status, current user, and reading filters don't need React context boilerplate.

For Firebase: create a project at console.firebase.google.com, enable Firestore in production mode (you'll write real security rules — more on that later), and enable Authentication with the Email/Password provider. Grab the config object from Project Settings → Your apps → Web app. Then your .env.local looks like this:

# .env.local — never commit this
VITE_FIREBASE_API_KEY=AIzaSy...
VITE_FIREBASE_AUTH_DOMAIN=glucose-dash.firebaseapp.com
VITE_FIREBASE_PROJECT_ID=glucose-dash
VITE_FIREBASE_STORAGE_BUCKET=glucose-dash.appspot.com
VITE_FIREBASE_MESSAGING_SENDER_ID=1234567890
VITE_FIREBASE_APP_ID=1:1234567890:web:abc123

And your src/lib/firebase.ts initialization:

import { initializeApp } from 'firebase/app'
import { getFirestore } from 'firebase/firestore'
import { getAuth } from 'firebase/auth'

const firebaseConfig = {
  apiKey: import.meta.env.VITE_FIREBASE_API_KEY,
  authDomain: import.meta.env.VITE_FIREBASE_AUTH_DOMAIN,
  projectId: import.meta.env.VITE_FIREBASE_PROJECT_ID,
  storageBucket: import.meta.env.VITE_FIREBASE_STORAGE_BUCKET,
  messagingSenderId: import.meta.env.VITE_FIREBASE_MESSAGING_SENDER_ID,
  appId: import.meta.env.VITE_FIREBASE_APP_ID,
}

const app = initializeApp(firebaseConfig)

// Export these — every hook will import from here, not re-initialize
export const db = getFirestore(app)
export const auth = getAuth(app)

Here's the folder structure I settled on after three months of daily use. I tried colocation-by-feature first, and it got messy fast because glucose readings touch nearly every screen — colocation works better when features are genuinely isolated:

src/
  components/       # Dumb UI — GlucoseChart, ReadingCard, RangeIndicator
  hooks/            # useReadings.ts, useAuth.ts, useStats.ts
  lib/              # firebase.ts, constants.ts (target ranges, colors)
  pages/            # Dashboard.tsx, Login.tsx, LogReading.tsx
  store/            # zustand slices — authStore.ts, filterStore.ts
  types/            # GlucoseReading, UserProfile — shared interfaces
  App.tsx
  main.tsx

One thing I'd do differently: put your TypeScript interfaces in types/ from day one instead of colocating them with components. The GlucoseReading interface gets imported by hooks, components, and store slices — if it lives in components/ReadingCard.tsx you end up with circular-looking imports fast. Define it once in types/reading.ts and import it everywhere:

// types/reading.ts
export interface GlucoseReading {
  id: string
  userId: string
  value: number          // always store in mg/dL, convert for display
  timestamp: Date
  mealContext: 'fasting' | 'pre-meal' | 'post-meal' | 'bedtime' | 'other'
  notes?: string
}

Store the value in mg/dL regardless of what unit the user prefers — do the mmol/L conversion (value / 18.0182) at display time in a utility function. If you store in the user's preferred unit and they change it later, you've got a data migration problem in Firestore that you really don't want.
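A sketch of that display-time conversion (the function name is mine; one decimal place is the usual convention for mmol/L displays):

```javascript
// mg/dL is canonical in storage; convert only when rendering.
const MGDL_PER_MMOLL = 18.0182;

function toDisplayValue(valueMgdl, unit) {
  if (unit === "mmol/L") {
    // round to one decimal place for mmol/L
    return Math.round((valueMgdl / MGDL_PER_MMOLL) * 10) / 10;
  }
  return valueMgdl;
}
```

If the user flips their preferred unit in settings, nothing in Firestore changes — only this function's second argument does.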

Connecting to Glucose Data — Your Options and Their Tradeoffs

The hardest part of this whole project isn't React or Firebase — it's getting glucose readings into your app in the first place. I spent more time on this than anything else, and the answer changed depending on what I was trying to do. There are three real paths here, and each one suits a different situation.

Option 1: Dexcom Developer API (OAuth2)

Dexcom has a proper developer program at developer.dexcom.com. The OAuth2 flow is standard — authorization code grant, refresh tokens, the works. The sandbox is available immediately after signup and lets you test against synthetic data. The production approval process, though, is slow. Budget 2–4 weeks and expect an email thread asking about your use case. For a personal dashboard, I was honest: "personal health monitoring app, single user, not for distribution." That got approved fine.

The sandbox endpoint you'll be hitting constantly during development:

GET https://sandbox-api.dexcom.com/v3/users/self/egvs?startDate=2024-01-01T00:00:00&endDate=2024-01-02T00:00:00
Authorization: Bearer YOUR_ACCESS_TOKEN

The response looks like this — note the unit field and trendRate, which is what drives the trend arrows:

{
  "records": [
    {
      "recordId": "abc123",
      "systemTime": "2024-01-01T08:05:14",
      "displayTime": "2024-01-01T02:05:14",
      "value": 142,
      "status": null,
      "trend": "flat",
      "trendRate": -0.1,
      "unit": "mg/dL",
      "displayDevice": "g7",
      "transmitterGeneration": "g7"
    }
  ],
  "pages": {
    "nextPageToken": null,
    "prevPageToken": null
  }
}

The thing that caught me off guard: sandbox data has a 3-hour delay baked in. This isn't a bug — it mirrors the production behavior that Dexcom enforces on third-party apps for regulatory reasons. Your dashboard will never show a reading from the last 3 hours if you're using the official API. For retrospective analysis that's fine. For a "what is my blood sugar right now" view, this kills the use case entirely. That's when you look at Option 2.
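If you do stay on the official API for retrospective views, clamp the end of your query window accordingly — a small sketch (the constant reflects the 3-hour delay described above; the helper name is mine):

```javascript
// Latest end timestamp worth requesting from the Dexcom API,
// given the 3-hour publication delay on third-party access.
const DEXCOM_DELAY_MS = 3 * 60 * 60 * 1000;

function latestAvailable(now = new Date()) {
  return new Date(now.getTime() - DEXCOM_DELAY_MS);
}
```

Requesting beyond that point just returns an empty window, so clamping up front saves you from debugging "missing" data that was never going to be there.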

Option 2: Nightscout REST API (No Approval, Real-Time)

If you're already running a Nightscout instance — and many T1Ds are — you can skip the whole approval process. Nightscout exposes a REST API at /api/v1/entries.json that returns readings with no delay. You own the server, you own the data pipeline. A simple fetch looks like:

const res = await fetch(
  `https://YOUR_NS_INSTANCE.fly.dev/api/v1/entries.json?count=288&token=YOUR_API_TOKEN`
);
const entries = await res.json();
// each entry: { sgv: 142, date: 1704096314000, direction: "Flat", device: "share2" }

The tradeoff is that you're maintaining Nightscout infrastructure. Running it on Railway or Fly.io free tiers works until it doesn't — cold starts, memory limits, the usual free-tier chaos. But for real-time data without a multi-week approval queue, this is the practical choice while you wait for Dexcom production access.
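Mapping a Nightscout entry onto the reading shape used elsewhere in this post is mechanical — a sketch (the function name and the trend table are mine; extend the table from the direction strings your instance actually returns):

```javascript
// Illustrative trend table — Nightscout direction strings vary
// by uploader, so unknown values map to "unknown".
const NS_TRENDS = {
  Flat: "flat",
  FortyFiveUp: "rising",
  FortyFiveDown: "falling",
};

function fromNightscoutEntry(entry) {
  return {
    value: entry.sgv,                          // mg/dL
    unit: "mg/dL",
    trend: NS_TRENDS[entry.direction] ?? "unknown",
    timestamp: new Date(entry.date),           // Unix ms → Date
    source: "nightscout",
  };
}
```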

Option 3: Manual CSV from Dexcom Clarity

Ugly, but it gets you building immediately. Log into Clarity, export your data as CSV, and you have a structured file with timestamp and glucose value columns. I used this for the first two weeks of prototyping and it was surprisingly useful — you get months of real historical data to test your charts and aggregations against. The CSV structure from Clarity has a few garbage header rows you need to skip, and the timestamp format is locale-dependent, so watch for that if you're in a non-US locale.

Here's how I parsed it with PapaParse and bulk-wrote to Firestore without tripping over write-rate limits. The trick is batching with writeBatch() — Firestore batches support up to 500 operations each, and you commit them sequentially with a small delay between:

import Papa from "papaparse";
import { writeBatch, doc, collection } from "firebase/firestore";
import { db } from "./firebase";

async function importClarityCSV(file) {
  const text = await file.text();

  const { data } = Papa.parse(text, {
    header: true,
    skipEmptyLines: true,
    // Clarity exports have 3 metadata rows before the actual header
    beforeFirstChunk: (chunk) => chunk.split("\n").slice(3).join("\n"),
  });

  const readings = data
    .filter((row) => row["Glucose Value (mg/dL)"] && row["Timestamp (YYYY-MM-DDThh:mm:ss)"])
    .map((row) => ({
      value: parseInt(row["Glucose Value (mg/dL)"], 10),
      timestamp: new Date(row["Timestamp (YYYY-MM-DDThh:mm:ss)"]),
      source: "clarity_csv",
    }));

  const BATCH_SIZE = 499; // leave 1 slot for safety under the 500 op limit
  const DELAY_MS = 1100; // just over 1 second between commits

  for (let i = 0; i < readings.length; i += BATCH_SIZE) {
    const batch = writeBatch(db);
    const chunk = readings.slice(i, i + BATCH_SIZE);

    chunk.forEach((reading) => {
      // use timestamp as doc ID to make imports idempotent
      const ref = doc(collection(db, "glucose_readings"), reading.timestamp.toISOString());
      batch.set(ref, reading, { merge: true });
    });

    await batch.commit();

    if (i + BATCH_SIZE < readings.length) {
      await new Promise((r) => setTimeout(r, DELAY_MS));
    }
  }
}

Using the ISO timestamp as the document ID is the move here — it makes re-imports idempotent. Run the import twice with the same file and you won't get duplicate readings, just overwrites. The merge: true on the set call means if you later enrich a document with additional fields, a re-import won't wipe them. This CSV path is worth building properly even if you plan to switch to the API later — having a bulk import mechanism is useful for backfills whenever you change your data model.

Firestore Data Model — Get This Wrong and You'll Rewrite It

The data model mistake I made the first time around was throwing everything into a flat readings collection with a userId field and planning to filter from there. That works for maybe 500 documents. Once you have months of CGM data — a Dexcom G7 writes a reading every 5 minutes, so that's 288 documents per day — every query needs a composite index pairing userId with whatever else you filter on, and your security rules have to inspect document data (resource.data.userId) instead of simply matching on the path.

The structure that actually held up for me after three months of daily use is user-scoped subcollections. Here's the path that matters:

users/{uid}/readings/{readingId}
users/{uid}/events/{eventId}

Each readings document has a consistent shape. Don't let this drift — if you allow nullable fields early on, you'll be writing defensive code everywhere in React:

// Firestore document: users/{uid}/readings/{readingId}
{
  value: 112,              // mg/dL as a number, never a string
  unit: "mg/dL",          // lock this — don't store mmol/L separately, convert on read
  trend: "flat",          // "rising" | "falling" | "rising_quickly" | "flat" | "unknown"
  timestamp: Timestamp,   // Firestore Timestamp, NOT a Unix number or ISO string
  source: "dexcom"        // "dexcom" | "manual" | "libre"
}

The events collection is separate on purpose. Meal logs, insulin doses, exercise — none of that belongs in readings. The moment you co-mingle them, every time-range query has to include a type filter just to exclude events from glucose charts, which forces a composite index on every query path. Keep them separate and your glucose chart query is just orderBy('timestamp') with a range filter. Clean, single-field index, no extra cost.

// users/{uid}/events/{eventId}
{
  type: "meal",            // "meal" | "insulin" | "exercise" | "note"
  timestamp: Timestamp,
  label: "Lunch",
  carbGrams: 45,           // only present when type === "meal"
  units: null,             // only present when type === "insulin"
  notes: "pasta, felt spike after"
}

You will hit a Firestore missing-index error. Firestore's error message actually includes a direct link to auto-create the index in the console, which is genuinely useful — but you need to know which query triggers it. The one you'll definitely need is on events when filtering by type AND a time range simultaneously. Note that equality-filtered fields (type, source) have to come before the range/orderBy field (timestamp) in a composite index. Add this to your firestore.indexes.json before you deploy to staging:

{
  "indexes": [
    {
      "collectionGroup": "events",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "type", "order": "ASCENDING" },
        { "fieldPath": "timestamp", "order": "ASCENDING" }
      ]
    },
    {
      "collectionGroup": "readings",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "source", "order": "ASCENDING" },
        { "fieldPath": "timestamp", "order": "ASCENDING" }
      ]
    }
  ],
  "fieldOverrides": []
}

Deploy indexes with firebase deploy --only firestore:indexes — they take a few minutes to build, and queries against them fail with a failed-precondition error (which reads deceptively like a permissions problem) until they're ready. Don't let that confuse you during staging.

Security rules are the thing most tutorials gloss over. Here are the exact rules I run, tested against the Firestore emulator with both authenticated and unauthenticated requests:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {

    // No access to anything outside user-scoped paths
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;

      match /readings/{readingId} {
        allow read, write: if request.auth != null
                           && request.auth.uid == userId;
      }

      match /events/{eventId} {
        allow read, write: if request.auth != null
                           && request.auth.uid == userId;
      }
    }

    // Explicitly deny everything else — don't rely on implicit denial
    match /{document=**} {
      allow read, write: if false;
    }
  }
}

The explicit allow read, write: if false catch-all at the bottom is technically redundant — Firestore denies any request that no rule explicitly allows — but I keep it as documentation of intent: anyone reading the file sees that everything outside user-scoped paths is deliberately closed, without having to remember the implicit default. Test this with firebase emulators:start and hit it with both an anonymous request and a request using a different UID than the document owner. Both should return permission-denied before you ship.

Real-Time Updates with onSnapshot — Where It Gets Interesting

The thing that genuinely surprised me about onSnapshot wasn't the real-time part — it was that Firestore sends deltas. When a new glucose reading lands in the collection, your listener receives only that document, not the entire result set re-fetched from scratch. For a dashboard that might be holding 288 readings (a full day at 5-minute CGM intervals), that matters. Polling every 30 seconds would pull the full window each time. With onSnapshot, you get a tiny diff.

Here's the custom hook I settled on after a few iterations. The key parts are the date-bounded query, proper cleanup, and surfacing loading and error states that the chart component actually needs:

// hooks/useGlucoseReadings.js
import { useEffect, useState } from 'react';
import { collection, query, where, orderBy, onSnapshot } from 'firebase/firestore';
import { db, auth } from '../lib/firebase';
import { useGlucoseStore } from '../store/glucoseStore';

export function useGlucoseReadings(startDate, endDate) {
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);
  const setReadings = useGlucoseStore((s) => s.setReadings);
  const readings = useGlucoseStore((s) => s.readings);

  useEffect(() => {
    const uid = auth.currentUser?.uid;
    if (!startDate || !endDate || !uid) return;

    setLoading(true);

    // user-scoped subcollection — matches the data model section, and
    // lets the security rules enforce ownership by path, not by field
    const q = query(
      collection(db, 'users', uid, 'readings'),
      where('timestamp', '>=', startDate),
      where('timestamp', '<=', endDate),
      orderBy('timestamp', 'asc')
    );

    const unsubscribe = onSnapshot(
      q,
      { includeMetadataChanges: true }, // needed so we can check fromCache
      (snapshot) => {
        // Don't trust this render if it came from the local cache
        const isFromCache = snapshot.metadata.fromCache;
        const hasPendingWrites = snapshot.metadata.hasPendingWrites;

        const docs = snapshot.docs.map((doc) => ({
          id: doc.id,
          ...doc.data(),
        }));

        setReadings(docs);

        // Only clear loading once we have confirmed server data
        if (!isFromCache && !hasPendingWrites) {
          setLoading(false);
        }
      },
      (err) => {
        setError(err);
        setLoading(false);
      }
    );

    // Firestore cleanup — forget this and you leak a listener on every
    // date-range change. Keep startDate/endDate referentially stable
    // (memoize them) or this effect re-subscribes on every render.
    return () => unsubscribe();
  }, [startDate, endDate]);

  return { readings, loading, error };
}

The fromCache gotcha burned me hard the first week. Firestore's offline persistence is on by default in the web SDK, which means onSnapshot fires immediately with whatever it has cached locally, then fires again with server-confirmed data. If you set loading = false on the first callback, your chart renders with yesterday's data for a split second before jumping to today's. Users with CGMs check this dashboard constantly — that flash is disorienting. Passing { includeMetadataChanges: true } and checking snapshot.metadata.fromCache lets you hold the loading state until the server round-trip completes. You can also show a subtle "syncing..." badge during that window instead of blocking the whole UI.

Zustand handles the in-memory layer. The store is simple on purpose — I didn't want it doing anything clever:

// store/glucoseStore.js
import { create } from 'zustand';

export const useGlucoseStore = create((set) => ({
  readings: [],
  setReadings: (readings) => set({ readings }),
  // Only holds the last 24h window — no pagination complexity here
  clearReadings: () => set({ readings: [] }),
}));

The reason Zustand fits here rather than just useState inside the hook is chart re-renders. My dashboard has a summary card, a trend line, and a time-in-range bar — three separate components that all need the same readings array. Without a shared store, each one either re-queries Firestore (wasteful, and it creates three separate listeners) or you end up prop-drilling through a wrapper that doesn't conceptually own the data. With Zustand, the hook writes once, all three components subscribe independently, and onSnapshot stays a single listener. Because Zustand compares each selector's result by reference, the trend line component only re-renders when the readings array is actually replaced, not on every parent render cycle. That's the part that made chart performance noticeably smoother on lower-end Android phones.

Building the Glucose Chart with Recharts

The thing that sold me on Recharts over Chart.js for this project wasn't the API — it was that I stopped fighting React's rendering model. Chart.js wants you to grab a canvas ref, call new Chart(canvasRef.current, config), then manually destroy and rebuild the instance when data changes. That's fine for a jQuery-era app but it's fighting upstream in a React component. Recharts is just JSX. Your chart state lives in props. Re-renders work exactly how you'd expect. I haven't touched a single ref in three months of building this dashboard.

The <ComposedChart> is the backbone here because you need to layer genuinely different chart types — a line for readings, reference bands for target range, vertical markers for meals. Here's the actual structure I run:

import {
  ComposedChart, Line, ReferenceArea, ReferenceLine,
  XAxis, YAxis, CartesianGrid, Tooltip, ResponsiveContainer
} from "recharts";

// readings: [{ ts: 1718000000000, value: 142, trend: "rising" }]
// meals: [{ ts: 1718003600000, label: "Lunch" }]

<ResponsiveContainer width="100%" height={300}>
  <ComposedChart data={readings} margin={{ top: 10, right: 20, left: 0, bottom: 0 }}>
    <CartesianGrid strokeDasharray="3 3" stroke="#2a2a2a" />
    <XAxis
      dataKey="ts"
      scale="time"
      type="number"
      domain={["dataMin", "dataMax"]}
      tickFormatter={(ts) => format(new Date(ts), "h:mm a")}
    />
    <YAxis domain={[40, 300]} />

    {/* Target range band */}
    <ReferenceArea y1={70} y2={180} fill="#22c55e" fillOpacity={0.06} />

    {/* Meal markers */}
    {meals.map((m) => (
      <ReferenceLine
        key={m.ts}
        x={m.ts}
        stroke="#f59e0b"
        strokeDasharray="4 2"
        label={{ value: "🍽", position: "top", fontSize: 12 }}
      />
    ))}

    <Line
      dataKey="value"
      dot={<GlucoseDot />}        // custom renderer, see below
      stroke="#94a3b8"            // fallback — overridden per dot
      strokeWidth={2}
      connectNulls={false}        // CRITICAL — see below
      isAnimationActive={false}   // kills jank on live updates
    />

    <Tooltip content={<GlucoseTooltip meals={meals} />} />
  </ComposedChart>
</ResponsiveContainer>

The connectNulls={false} line is the most important thing in that whole config. Recharts will happily draw a line straight across a 4-hour sensor gap if you let it, which makes a missed reading look like a steady 112 mg/dL all morning. The fix is to post-process your readings array before it hits the chart — for any gap wider than 6 minutes (Dexcom fires every 5), insert a { ts: gapTimestamp, value: null } entry. Then connectNulls={false} actually does something. Without the explicit nulls, there's nothing to "not connect".

function insertGapNulls(readings, gapThresholdMs = 6 * 60 * 1000) {
  const result = [];
  for (let i = 0; i < readings.length; i++) {
    result.push(readings[i]);
    if (i < readings.length - 1) {
      const delta = readings[i + 1].ts - readings[i].ts;
      if (delta > gapThresholdMs) {
        // drop a null in the middle of the gap so Recharts breaks the line
        result.push({ ts: readings[i].ts + delta / 2, value: null, trend: null });
      }
    }
  }
  return result;
}

Color coding by range is done with a custom dot renderer, not CSS classes. Recharts passes the full data point to your dot component, so you can color-code on the actual glucose value rather than trying to fight with SVG filters:

function glucoseColor(value) {
  if (value === null) return "transparent";
  if (value < 70) return "#ef4444";   // red — hypoglycemia
  if (value <= 180) return "#22c55e"; // green — target
  if (value <= 250) return "#eab308"; // yellow — elevated
  return "#ef4444";                   // red — hyperglycemia
}

function GlucoseDot({ cx, cy, payload }) {
  if (payload.value === null) return null;
  return (
    <circle
      cx={cx} cy={cy} r={3}
      fill={glucoseColor(payload.value)}
      stroke="none"
    />
  );
}

The custom tooltip is where the trend arrows actually become useful. After normalization, the trend field holds "rising", "falling", "rising_quickly", "falling_quickly", or "flat" — map those to unicode arrows and show them inline with the value. I also check within ±15 minutes of the hovered timestamp for any logged meal, which gives you the cause-and-effect relationship right in the tooltip without any extra UI:

const TREND_ARROWS = {
  rising_quickly: "⬆️",
  rising: "↗️",
  flat: "",
  falling: "↘️",
  falling_quickly: "⬇️",
};

function GlucoseTooltip({ active, payload, label, meals }) {
  if (!active || !payload?.length) return null;
  const { value, trend } = payload[0].payload;
  const arrow = TREND_ARROWS[trend] ?? "";

  // find any meal logged within 15 minutes of this reading
  const nearbyMeal = meals.find(
    (m) => Math.abs(m.ts - label) <= 15 * 60 * 1000
  );

  return (
    <div style={{ background: "#1e1e1e", padding: "8px 12px", borderRadius: 6 }}>
      <p style={{ color: glucoseColor(value), margin: 0, fontWeight: 600 }}>
        {value} mg/dL {arrow}
      </p>
      <p style={{ color: "#94a3b8", margin: "4px 0 0", fontSize: 12 }}>
        {format(new Date(label), "h:mm a")}
      </p>
      {nearbyMeal && (
        <p style={{ color: "#f59e0b", margin: "4px 0 0", fontSize: 12 }}>
          🍽 {nearbyMeal.label}
        </p>
      )}
    </div>
  );
}

One thing that bit me early: isAnimationActive={false} on the Line. When Firebase is streaming live updates every 5 minutes via onSnapshot, each new data point triggers a re-render. With animations on, you get this annoying redraw where the entire line replays its entrance animation every single time. Turning it off makes the live chart feel like a real-time feed instead of a loading spinner in a loop.

Firebase Auth — Keeping Your Health Data Actually Private

Skip Google Sign-In for a Personal Health Tool — Here's What I Used Instead

Google Sign-In looks appealing until you realize the only person logging into this dashboard is you, on two devices, occasionally at 2am after a bad reading. I went with email/password auth and set it up once in the Firebase console. No OAuth redirect flows, no "which Google account did I use again?" confusion, no additional surface area. The Firebase Auth SDK handles the session persistence and I never think about it. For a solo personal tool where the data is literally your bloodstream, the fewer moving parts the better.

The setup is about 10 lines of real code:

// src/lib/firebase.ts
import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";
import { getFirestore } from "firebase/firestore";

const app = initializeApp({
  apiKey: import.meta.env.VITE_FIREBASE_API_KEY,
  authDomain: import.meta.env.VITE_FIREBASE_AUTH_DOMAIN,
  projectId: import.meta.env.VITE_FIREBASE_PROJECT_ID,
});

export const auth = getAuth(app);
export const db = getFirestore(app);

The thing that caught me off guard early: Firebase initializes immediately when you import those modules, but your Firestore listeners don't know about auth state until onAuthStateChanged fires. If you set up your onSnapshot listener at component mount, it'll briefly fire unauthenticated and either fail silently or throw a permission error that logs to the console. The fix is an AuthGate component that blocks rendering entirely until auth state resolves:

// src/components/AuthGate.tsx
import { useEffect, useState } from "react";
import { onAuthStateChanged, User } from "firebase/auth";
import { auth } from "../lib/firebase";
import LoginPage from "../pages/LoginPage";

export default function AuthGate({ children }: { children: React.ReactNode }) {
  const [user, setUser] = useState<User | null | "loading">("loading");

  useEffect(() => {
    return onAuthStateChanged(auth, (u) => setUser(u));
  }, []);

  if (user === "loading") return <div>Checking auth...</div>;
  if (!user) return <LoginPage />;

  // Only renders children — and therefore Firestore listeners — once
  // we have a confirmed authenticated user. No race condition.
  return <>{children}</>;
}

Wrap your entire <App /> in this inside main.tsx and your Firestore hooks never run unauthenticated. No redirect logic needed anywhere else in the app.
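The gate's branching is simple enough to check in isolation. Here's a sketch of the three states as a pure function — the return labels are mine, purely illustrative, not names from the component:

```javascript
// The three render states AuthGate moves through, as a pure function.
function authGateState(user) {
  if (user === "loading") return "spinner"; // onAuthStateChanged hasn't fired yet
  if (user === null) return "login";        // fired, no session
  return "app";                             // confirmed user — safe to mount listeners
}

console.log(authGateState("loading"));      // "spinner"
console.log(authGateState(null));           // "login"
console.log(authGateState({ uid: "abc" })); // "app"
```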

The Firestore Rule That Actually Matters

Firestore security rules are the last line of defense if your frontend auth somehow breaks or gets bypassed. For glucose data, I store readings under /readings/{docId} with a uid field on every document. The rule is deliberately minimal — no helper functions, no wildcards doing something clever:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /readings/{readingId} {
      allow read, update, delete: if request.auth != null
        && request.auth.uid == resource.data.uid;

      // resource.data.uid on reads, request.resource.data.uid on writes
      allow create: if request.auth != null
        && request.auth.uid == request.resource.data.uid;
    }
  }
}

The gotcha here: resource.data.uid refers to the existing document (reads, updates, deletes), but on a create there's no existing document yet — you need request.resource.data.uid instead. If you only write one rule, creates silently fail. Test this with the Firebase Rules Playground in the console before you trust it. I also run firebase emulators:start locally and hit the emulator during development so I can verify the rules without burning real production reads.

Two Firebase Projects: Dev and Prod, No Exceptions

I created two separate Firebase projects — glucose-dash-dev and glucose-dash-prod — and I switch between them via .env.development and .env.production files. Vite picks these up automatically based on its mode: the dev server loads .env.development, and vite build loads .env.production. This is not optional for health data. The moment you start testing edge cases — malformed readings, bulk imports, schema changes — you will write bad data to Firestore. If that's your prod project, you now have garbage mixed into three months of real health history.

# .env.development
VITE_FIREBASE_API_KEY=dev-key-here
VITE_FIREBASE_AUTH_DOMAIN=glucose-dash-dev.firebaseapp.com
VITE_FIREBASE_PROJECT_ID=glucose-dash-dev

# .env.production
VITE_FIREBASE_API_KEY=prod-key-here
VITE_FIREBASE_AUTH_DOMAIN=glucose-dash-prod.firebaseapp.com
VITE_FIREBASE_PROJECT_ID=glucose-dash-prod

Firebase's free Spark plan covers both projects without issue — glucose readings are tiny documents and I'm nowhere near the 1 GiB storage or 50k daily read limits for a single-user dashboard. The prod project has no test user and no data except real readings. The dev project gets seeded with synthetic glucose data generated from a small script that randomizes values in realistic ranges (roughly 70–180 mg/dL with occasional spikes). That way my chart components render realistically without me having to log a single actual meal.
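For reference, the generator at the heart of that seed script can be this small — a sketch under my own constants (the 110 mg/dL baseline, spike probability, and clamp range are illustrative; the Firestore batch write is omitted):

```javascript
// Sketch of a synthetic-day generator: 288 readings at a 5-minute cadence,
// random-walking inside roughly 70–180 mg/dL with occasional spikes.
function generateSyntheticDay(startMs) {
  const readings = [];
  let value = 110; // start mid-range
  for (let i = 0; i < 288; i++) {
    // mild mean reversion toward 110, plus noise
    value += (110 - value) * 0.05 + (Math.random() - 0.5) * 14;
    // occasional post-meal spike
    if (Math.random() < 0.02) value += 40 + Math.random() * 40;
    value = Math.min(280, Math.max(55, value)); // clamp to plausible sensor range
    readings.push({
      timestampMs: startMs + i * 5 * 60 * 1000,
      value: Math.round(value),
    });
  }
  return readings;
}
```

Batch-write the result to the dev project's readings collection and the chart components have a full realistic day to render.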

Meal and Insulin Logging UI

The thing that tripped me up first wasn't the Firestore schema or the chart library — it was the timestamp default. I had it locked to Date.now() on mount, which sounds reasonable until you actually live with it for a week. You eat breakfast, bolus, do stuff, then come back 45 minutes later to log it. If the timestamp field is frozen at component mount time, every single one of those retroactive logs is off by minutes or hours. The datetime input has to be a controlled field that defaults to now but stays fully editable. That one change made my data actually trustworthy.

Here's the quick-log form with react-hook-form. I keep it short — if logging feels like work, I skip it, and then the dashboard is useless:

import { useForm } from 'react-hook-form';
import { format } from 'date-fns';

// Format that datetime-local inputs expect
const toLocalDatetimeString = (date: Date) =>
  format(date, "yyyy-MM-dd'T'HH:mm");

type LogEntry = {
  meal: string;
  carbs: number;
  insulin: number;
  timestamp: string; // kept as string until Firestore write
};

export function QuickLogForm() {
  const { register, handleSubmit, reset } = useForm<LogEntry>({
    defaultValues: {
      meal: '',
      carbs: 0,
      insulin: 0,
      // evaluated when the form mounts (not frozen at module load) — and editable below
      timestamp: toLocalDatetimeString(new Date()),
    },
  });

  const onSubmit = (data: LogEntry) => {
    logEntry({ ...data, timestamp: new Date(data.timestamp) });
    reset({ timestamp: toLocalDatetimeString(new Date()) });
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input placeholder="Meal (e.g. oatmeal + banana)" {...register('meal')} />
      <input type="number" step="1" placeholder="Carbs (g)" {...register('carbs', { valueAsNumber: true })} />
      <input type="number" step="0.5" placeholder="Insulin (units)" {...register('insulin', { valueAsNumber: true })} />
      {/* Editable — user can scrub back to when they actually ate */}
      <input type="datetime-local" {...register('timestamp')} />
      <button type="submit">Log</button>
    </form>
  );
}

The valueAsNumber: true on the carbs and insulin fields saves you from dealing with string-to-number coercion downstream. Skip it and you'll spend an afternoon debugging why your chart tooltips show "30" + 5 = "305".
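The failure mode is plain JavaScript coercion, easy to reproduce on its own:

```javascript
// Raw <input> values are strings; "+" concatenates instead of adding.
const carbs = "30";              // what you get without valueAsNumber
console.log(carbs + 5);          // "305"
console.log(Number(carbs) + 5);  // 35 — what valueAsNumber hands you instead
```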

For the optimistic UI, I write to Zustand immediately, fire the Firestore write in the background, and roll back if it rejects. The perceived speed difference is real — even on a fast connection, waiting for Firestore confirmation before showing the log entry creates enough lag that the app feels sluggish.

import { useLogStore } from '@/store/logStore';
import { addDoc, collection, serverTimestamp } from 'firebase/firestore';
import { db } from '@/lib/firebase';
import toast from 'react-hot-toast';
import { nanoid } from 'nanoid';

async function logEntry(entry: Omit<LogEntry, 'timestamp'> & { timestamp: Date }) {
  const tempId = nanoid();
  const optimisticEntry = { ...entry, id: tempId, synced: false };

  // 1. Slam it into Zustand — UI updates instantly
  useLogStore.getState().addEntry(optimisticEntry);

  try {
    const docRef = await addDoc(collection(db, 'logs'), {
      ...entry,
      timestamp: entry.timestamp, // Firestore handles Date objects fine
      createdAt: serverTimestamp(),
    });

    // 2. Replace temp entry with confirmed one
    useLogStore.getState().confirmEntry(tempId, docRef.id);
  } catch (err) {
    // 3. Rollback and tell the user
    useLogStore.getState().removeEntry(tempId);
    toast.error('Failed to save log. Check your connection.');
  }
}

The ReferenceLine issue from Recharts is genuinely subtle. I spent two hours thinking the lines were broken before I read the source. The x prop on <ReferenceLine> has to be the exact same type and format as the values in your data's x-axis key. If your chart data uses Unix timestamps (milliseconds as numbers), your reference line x must also be a number. If you're using ISO strings, it must be an ISO string. A mismatch silently renders nothing — no error, no warning, just absence.

// Your chart data — timestamps stored as ms since epoch
const glucoseData = logs.map(l => ({
  time: l.timestamp.getTime(), // number
  glucose: l.glucose,
}));

// Meal events — must use .getTime() here too, not a Date or string
const mealEvents = entries.map(e => e.timestamp.getTime());

<LineChart data={glucoseData}>
  <XAxis dataKey="time" type="number" domain={['auto', 'auto']} />
  <Line dataKey="glucose" />

  {mealEvents.map((t) => (
    <ReferenceLine
      key={t}
      x={t}            // ← must be a number if XAxis type="number"
      stroke="#f97316"
      strokeDasharray="4 4"
      label={{ value: '🍽', position: 'top' }}
    />
  ))}
</LineChart>

One more thing I didn't expect: the label emoji on ReferenceLine renders inconsistently across browsers when using the label shorthand. If you need reliable rendering, use label={<CustomLabel />} with an actual SVG text element instead. The shorthand is fine for personal use where you control the browser, but it's worth knowing the escape hatch exists.

Deploying to Vercel — Five Minutes If You Don't Hit the Environment Variable Gotcha

The thing that will burn you — and it burned me — is that Vercel silently shares environment variables across all deployment environments unless you explicitly scope them. So if you have a dev Firebase project with test data and a production Firebase project with real glucose readings, your preview deployments will happily write to production. Took me one confused afternoon to trace that back.

The actual deploy is fast. From your project root, after running npm run build once to confirm it compiles clean:

# First deploy wires up the project, --prod skips the preview step
vercel --prod

# Output you want to see:
# ✓ Detected framework: Vite
# ✓ Build Command: vite build
# ✓ Output Directory: dist
# Deployed to: https://your-glucose-dashboard.vercel.app

Vercel detects Vite automatically from your package.json — no vercel.json config needed unless you're doing something unusual like custom rewrites. The dist/ output directory is picked up correctly without you touching anything.

For your Firebase config, go to Settings > Environment Variables in the Vercel dashboard and add every VITE_-prefixed variable there. The full list if you're using the standard Firebase SDK setup:

VITE_FIREBASE_API_KEY=AIzaSy...
VITE_FIREBASE_AUTH_DOMAIN=your-app.firebaseapp.com
VITE_FIREBASE_PROJECT_ID=your-app
VITE_FIREBASE_STORAGE_BUCKET=your-app.appspot.com
VITE_FIREBASE_MESSAGING_SENDER_ID=123456789
VITE_FIREBASE_APP_ID=1:123456789:web:abc123

# Do NOT have these in your git repo at all
# .gitignore should include .env and .env.local

The scoping piece: by default, Vercel applies env vars to Production, Preview, and Development simultaneously. If you have a separate Firebase project for staging/test, click each variable and uncheck "Preview" and "Development" for the production credentials, then add separate values scoped to Preview only. The UI is a set of checkboxes on each variable row — not hidden, just easy to skip if you're moving fast.

Custom domain setup is where Vercel actually earns its reputation. Under Settings > Domains, add your domain, then Vercel gives you either an A record or a CNAME depending on whether it's an apex domain or subdomain. I pointed glucose.mydomain.com at it with a CNAME, DNS propagated in under 10 minutes with Cloudflare, and TLS was provisioned automatically. Accessing a dashboard you built yourself from your phone via a clean URL is genuinely satisfying — and practically useful when you want to check trends without opening a laptop.

Rough Edges I Hit That the Docs Don't Warn You About

The one that burned me first: Firestore Timestamp objects are not JavaScript Date objects, and Recharts doesn't care about either — it wants a plain number. When you pull a reading from Firestore and try to pass reading.timestamp directly into a <LineChart> x-axis, you'll get a flat line or nothing at all. The fix is to call .toMillis() at the point you deserialize, not right before you render. I convert immediately after the snapshot fires and store epoch milliseconds in state throughout the whole app. One conversion point, no surprises later.

// Inside your onSnapshot callback — convert immediately
const readings = snapshot.docs.map(doc => {
  const data = doc.data();
  return {
    ...data,
    // Convert Firestore Timestamp → epoch ms right here
    // so nothing downstream ever sees a Timestamp object
    timestampMs: data.timestamp.toMillis(),
  };
});

Dexcom's trend arrow field returns 'NOT_COMPUTABLE' more often than the API docs imply — mostly during warmup, signal loss, or rapid change events where the algorithm gives up. If your chart renderer tries to look up an arrow icon or direction label for every reading unconditionally, it will crash or render garbage for a non-trivial chunk of your data. Guard every single trend string access. I built a safeTrend() helper that maps 'NOT_COMPUTABLE' and anything else unexpected to a neutral dash glyph and moved on.

const TREND_MAP = {
  DoubleUp: '⏫', SingleUp: '⬆️', FortyFiveUp: '↗️',
  Flat: '➡️', FortyFiveDown: '↘️', SingleDown: '⬇️',
  DoubleDown: '⏬', NOT_COMPUTABLE: '—', RATE_OUT_OF_RANGE: '—',
};

// Default to '—' for any string not in the map
const safeTrend = (trend) => TREND_MAP[trend] ?? '—';

Firebase's Spark (free) plan caps you at 50,000 reads and 20,000 writes per day. A CGM produces one reading every 5 minutes — that's 288 data points per day. If you write a query that fetches each document individually (easy to do if you're iterating and calling doc.get() in a loop), you burn 288 reads just to render today's chart, and you pay it again every time that code re-runs. Always query with a range filter and pull a day's worth in one snapshot listener. The initial query returning 288 documents still costs 288 reads, but after that the listener only charges you for documents that actually change — one read per new CGM point — instead of re-reading everything.

// One query, one round trip for a full day's data
const startOfDay = new Date();
startOfDay.setHours(0, 0, 0, 0);

const q = query(
  collection(db, 'readings'),
  where('timestamp', '>=', Timestamp.fromDate(startOfDay)),
  orderBy('timestamp', 'asc')
);
// This returns up to 288 docs in a single read batch
const unsub = onSnapshot(q, (snap) => { /* ... */ });

React StrictMode — which Create React App, Next.js, and Vite's React template all enable by default in development — intentionally mounts components twice to surface side effects. This means your onSnapshot listener registers twice, and your Firebase console will show double the read count during local development. I spent an embarrassing amount of time wondering if I had a listener leak before I figured this out. It's not a bug, it doesn't happen in production builds, and the fix is just a useEffect cleanup that unsubscribes properly. Your production read count will be roughly half what you see in dev.
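You can convince yourself the cleanup is sufficient without Firebase at all. Here's a standalone simulation of what StrictMode does to an effect — mount, cleanup, mount again — using a stand-in registry, not a real Firestore API:

```javascript
// Stand-in for onSnapshot: subscribe returns an unsubscribe function.
function createRegistry() {
  const listeners = new Set();
  return {
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
    count: () => listeners.size,
  };
}

const registry = createRegistry();
const effect = () => registry.subscribe(() => {});

// StrictMode in dev: run the effect, run its cleanup, run the effect again.
let cleanup = effect();
cleanup();
cleanup = effect();

console.log(registry.count()); // 1 — reads double briefly, but nothing leaks
```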

Dexcom returns timestamps as UTC ISO 8601 strings like "2024-03-15T14:32:00Z" — note the trailing Z. new Date("2024-03-15T14:32:00Z") parses correctly. The problem shows up when you're testing with sample data someone typed manually and forgot the Z: new Date("2024-03-15T14:32:00") is treated as local time in most browsers, not UTC, which silently shifts all your chart data by your timezone offset. I switched to date-fns's parseISO for all Dexcom timestamp parsing because it parses consistently across browsers (new Date() has engine-specific quirks for nonstandard strings), and I validate that the Z suffix exists before persisting to Firestore:

import { parseISO } from 'date-fns';

// parseISO parses consistently across engines; the suffix check guards local-time drift
const parseGlucoseTimestamp = (isoString) => {
  if (!isoString.endsWith('Z') && !isoString.includes('+')) {
    console.warn('Timestamp missing timezone suffix, assuming UTC:', isoString);
    return parseISO(isoString + 'Z');
  }
  return parseISO(isoString);
};
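The size of that silent shift is exactly your UTC offset, which is easy to verify directly:

```javascript
// Same wall-clock string, with and without the Z. Without it, engines
// parse as local time, so the epoch values differ by the timezone offset.
const utcMs = new Date("2024-03-15T14:32:00Z").getTime();
const localMs = new Date("2024-03-15T14:32:00").getTime();

// getTimezoneOffset() is (UTC − local) in minutes for that date
const offsetMin = new Date("2024-03-15T14:32:00").getTimezoneOffset();
console.log(localMs - utcMs === offsetMin * 60000); // true in any timezone
```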

What I'd Do Differently (Honest Retrospective)

The thing I got most wrong was data fetching strategy. I queried Firestore by date range from day one with no pagination — just a where("timestamp", ">=", start).where("timestamp", "<=", end) and let it rip. That works fine when you have 200 readings. After three months of CGM data, a 30-day range query is pulling 8,600+ documents (288 readings a day times 30) every render cycle. Firestore charges per read, and more importantly it's slow. The right call from the start is cursor-based pagination with limit():

// First page — same modular API as the rest of the app
import {
  collection, getDocs, limit, orderBy, query, startAfter, where,
} from "firebase/firestore";

const firstPage = await getDocs(query(
  collection(db, "readings"),
  where("userId", "==", uid),
  orderBy("timestamp", "desc"),
  limit(288) // 24 hours of 5-minute CGM readings
));
const lastVisible = firstPage.docs[firstPage.docs.length - 1];

// Next page — store lastVisible in component state
const nextPage = await getDocs(query(
  collection(db, "readings"),
  where("userId", "==", uid),
  orderBy("timestamp", "desc"),
  startAfter(lastVisible),
  limit(288)
));

You're fetching exactly what you render. Chart shows 24 hours? Fetch 288 documents. The "load more" pattern feels wrong for a time-series dashboard until you build the UI around it — then it's actually better UX because the chart isn't trying to render thousands of raw data points anyway.
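Deriving the limit from the visible window keeps the two in lockstep — a one-liner at the 5-minute cadence:

```javascript
// 5-minute CGM cadence → 12 readings per hour. Pass the chart's visible
// window in hours and get the Firestore limit() value back.
const readingsFor = (hours) => hours * 12;

console.log(readingsFor(24)); // 288 — one full day
console.log(readingsFor(6));  // 72 — a tighter post-meal view
```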

PWA support was an afterthought and I paid for it. I wanted the dashboard on my phone's home screen — one tap, full-screen, no browser chrome. Adding the Vite PWA plugin (vite-plugin-pwa) after the project structure was set meant I had to redo the manifest, regenerate icons at every required size, and fix a caching conflict with my existing service worker assumptions. The manifest registration order matters, the workbox config needs to be explicit about which routes to cache, and iOS Safari has its own opinions about splash screens. If you start with PWA in mind, your vite.config.ts looks like this from day one:

import { VitePWA } from 'vite-plugin-pwa'

export default {
  plugins: [
    VitePWA({
      registerType: 'autoUpdate',
      manifest: {
        name: 'Glucose Dashboard',
        short_name: 'Glucose',
        theme_color: '#1a1a2e',
        display: 'standalone',
        icons: [
          { src: '/icon-192.png', sizes: '192x192', type: 'image/png' },
          { src: '/icon-512.png', sizes: '512x512', type: 'image/png' }
        ]
      },
      workbox: {
        globPatterns: ['**/*.{js,css,html,ico,png,svg}']
      }
    })
  ]
}

Retrofitting this took an afternoon. Starting with it takes 20 minutes.

I reached for Zustand early because I wanted clean global state for the current user, date range selection, and unit preference (mg/dL vs mmol/L). Three months later I have exactly three slices of state that almost never update simultaneously. That's not a Zustand use case — that's a useReducer + Context use case. Zustand is genuinely good when you have complex state that multiple unrelated components need to subscribe to with minimal re-renders. A personal health dashboard with one authenticated user and a handful of filters doesn't qualify. I'd save the dependency and the onboarding friction if someone else ever looks at this code.
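For comparison, the useReducer shape for those three slices is tiny — a sketch with illustrative action names and defaults:

```javascript
// The three slices the dashboard actually needs, as a plain reducer.
const initialState = { user: null, range: "24h", unit: "mg/dL" };

function dashboardReducer(state, action) {
  switch (action.type) {
    case "SET_USER":  return { ...state, user: action.user };
    case "SET_RANGE": return { ...state, range: action.range };
    case "SET_UNIT":  return { ...state, unit: action.unit };
    default:          return state;
  }
}
```

Wire it up once with useReducer(dashboardReducer, initialState), share it through a single Context provider, and the extra dependency disappears.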

The biggest architectural mistake was not evaluating Nightscout seriously before starting. If you're on a CGM — Dexcom G6, G7, Libre 3 — Nightscout already solves the data ingestion problem. It reads from your uploader, stores to MongoDB, and exposes a REST API with endpoints like GET /api/v1/entries.json?count=288 that return exactly what you need. The Dexcom API developer approval process is a real wait — weeks, not days — and approval isn't guaranteed for personal projects. Nightscout sidesteps all of that. I'd still build the React frontend myself for the custom visualization I wanted, but I'd point it at a self-hosted Nightscout instance on Railway or Fly.io instead of writing my own ingestion pipeline. The data normalization work Nightscout has already done is substantial, and reinventing it is a waste unless you have very specific requirements it can't meet.
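If you go that route, your ingestion layer shrinks to a mapper over Nightscout's entry shape — a sketch assuming the standard fields (sgv is the sensor glucose value in mg/dL, date is epoch milliseconds, direction is the trend string):

```javascript
// Map a Nightscout /api/v1/entries.json entry to the dashboard's internal
// reading shape. Assumes the standard fields; adjust if your uploader adds more.
function toReading(entry) {
  return {
    timestampMs: entry.date,          // epoch milliseconds
    value: entry.sgv,                 // sensor glucose value, mg/dL
    trend: entry.direction ?? "NONE", // e.g. "Flat", "FortyFiveUp"
  };
}

const sample = { sgv: 112, date: 1710513120000, direction: "Flat" };
console.log(toReading(sample)); // → value 112 at that epoch ms, trend "Flat"
```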




