
zintrust Zin

Migrate MySQL, Postgres, SQLite, or SQL Server to Cloudflare D1 — with checkpoints and integrity checks

Moving an existing database to Cloudflare D1 sounds simple until you hit type mismatches, partial failures halfway through a million-row import, or find out afterwards that 3,000 rows silently didn't land.

We built @zintrust/d1-migrator to solve exactly that.

What it does

  • Migrates MySQL, PostgreSQL, SQLite, or SQL Server → Cloudflare D1
  • Resumable — checkpoints every N rows so a failure doesn't mean starting from zero
  • Data integrity verified — row counts + checksums validated after every batch
  • Dry-run mode — preview the full migration without touching D1
  • Interactive mode — guided prompts for tables with D1 compatibility warnings
  • Zero downtime — runs against your live database
  • TypeScript-first — full types, works with @zintrust/core
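
The per-batch row-count and checksum verification listed above can be sketched roughly like this. This is an illustrative sketch, not the library's actual implementation — `batchChecksum` and `verifyBatch` are made-up names:

```typescript
import { createHash } from "node:crypto";

// Hash the JSON-serialized rows so any value drift changes the digest.
function batchChecksum(rows: unknown[]): string {
  const h = createHash("sha256");
  for (const row of rows) h.update(JSON.stringify(row));
  return h.digest("hex");
}

// A batch passes only if both the row count and the checksum match.
function verifyBatch(source: unknown[], target: unknown[]): boolean {
  return (
    source.length === target.length &&
    batchChecksum(source) === batchChecksum(target)
  );
}
```

Checking after every batch (rather than once at the end) is what makes silent partial losses like the "3,000 missing rows" scenario surface immediately.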

Install

npm install @zintrust/d1-migrator

Quickest possible migration (1-line CLI)

DB_CONNECTION=mysql DB_READ_HOSTS=127.0.0.1 DB_PORT=3306 DB_DATABASE=mydb DB_USERNAME=root DB_PASSWORD=secret D1_TARGET_DB=my-d1-db zin migrate-to-d1

All values can also be passed as explicit CLI flags. The order of precedence is: CLI flag → env var → default.
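
That precedence rule amounts to "first defined value wins". A minimal sketch of the lookup — the function name here is ours for illustration, not part of the package:

```typescript
// Resolve one option using the precedence: CLI flag → env var → default.
// `undefined` means "not supplied at this level".
function resolveOption(
  flag: string | undefined,
  envVar: string | undefined,
  fallback: string,
): string {
  return flag ?? envVar ?? fallback;
}

// A flag beats the env var; an env var beats the default.
resolveOption("3307", "3306", "5432"); // "3307"
resolveOption(undefined, "3306", "5432"); // "3306"
resolveOption(undefined, undefined, "5432"); // "5432"
```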

TypeScript API

import { D1Migrator } from '@zintrust/d1-migrator';

const progress = await D1Migrator.DataMigrator.migrateData({
  sourceConnection: 'mysql://user:password@localhost:3306/mydb',
  sourceDriver: 'mysql',
  targetDatabase: 'my-d1-db',
  targetType: 'd1',
  batchSize: 1000,
  checkpointInterval: 10000,
});

console.log(`Migrated: ${progress.processedRows} rows`);

Type conversions handled automatically

Source     D1 / SQLite        Notes
------     -----------        -----
DATETIME   TEXT (ISO 8601)    Auto-converted
BIGINT     TEXT               Precision preserved
DECIMAL    TEXT               Precision preserved
JSON       TEXT               Serialized JSON string
BLOB       BLOB               Binary preserved
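
The TEXT mapping for BIGINT and DECIMAL exists because JavaScript numbers lose precision past Number.MAX_SAFE_INTEGER (2^53 − 1). A quick demonstration of why the round trip goes through strings:

```typescript
// A value just past Number's safe integer range, as it would be
// stored in a TEXT column.
const fromDb = "9007199254740993"; // 2^53 + 1

// A naive numeric parse silently rounds to the nearest representable double:
const lossy = Number(fromDb); // 9007199254740992 — off by one

// Parsing the TEXT value through BigInt keeps the exact value:
const exact = BigInt(fromDb);
console.log(exact.toString()); // "9007199254740993"
```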

Resuming after a failure

zin migrate-to-d1 --resume --migration-id <id>

Checkpoints are stored in .wrangler/state/v3/migrations/ — only unprocessed rows are re-migrated.
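
Conceptually, checkpoint-based resume works like this — a simplified sketch with made-up names, not the migrator's actual code (the real tool persists checkpoints to disk under `.wrangler/state/` and keys them by migration ID):

```typescript
// In-memory stand-in for the on-disk checkpoint file.
interface Checkpoint {
  nextRow: number;
}

function migrateBatches(
  rows: string[],
  batchSize: number,
  store: { cp?: Checkpoint },
  write: (batch: string[]) => void,
): void {
  // Start from the saved checkpoint if one exists, else from row 0.
  for (let i = store.cp?.nextRow ?? 0; i < rows.length; i += batchSize) {
    const batch = rows.slice(i, i + batchSize);
    write(batch);
    // Persist progress after each batch, so a crash costs at most one batch.
    store.cp = { nextRow: i + batch.length };
  }
}
```

Rerunning with the same `store` after a crash skips everything before the checkpoint — the "only unprocessed rows are re-migrated" behaviour described above.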

Links

MIT licensed. Part of the ZinTrust backend framework ecosystem.

Would love feedback from anyone migrating production databases to D1 — especially edge cases with unusual schemas or large datasets. 👇
