jsontotable
Stop hand-writing INSERTs: JSON → SQL with dialect-aware output (Postgres/MySQL/SQLite)

When you need to import JSON into a relational database, the “easy part” (copying data) quickly turns into the hard part: writing a schema and a pile of INSERT statements that actually run on your target database.

In practice, JSON → SQL gets annoying for three reasons:

  • Type inference: JSON doesn’t carry a strict schema, so you have to decide what types to use.
  • Arrays / nesting: nested objects and arrays don’t map cleanly into a single flat table.
  • Dialect differences: “SQL” isn’t one thing—Postgres, MySQL, and SQLite have enough differences to cause real runtime errors.

Below is a practical workflow that works for the most common case (an array of objects you want to import as rows), plus a small example and a checklist of gotchas.


The minimal workflow that actually works

1) Start with a JSON array of objects

If you can shape your data to an array of objects, the mapping is straightforward:

  • each object → one row
  • object keys → column names

Example shape:

[
  {"id": 1, "name": "Alice", "active": true},
  {"id": 2, "name": "Bob", "active": false}
]

If your JSON is a single object (not an array), you can often wrap it into a one-element array for importing.
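That normalization step is easy to automate. Here is a minimal Python sketch (the function name `to_rows` is just for illustration) that parses JSON text and always returns a list of objects:

```python
import json

def to_rows(raw: str) -> list[dict]:
    """Parse JSON text and normalize it to a list of objects."""
    data = json.loads(raw)
    if isinstance(data, dict):
        return [data]  # single object -> one-element array
    if isinstance(data, list):
        return data
    raise ValueError("expected a JSON object or an array of objects")

rows = to_rows('{"id": 1, "name": "Alice"}')
# rows == [{"id": 1, "name": "Alice"}]
```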

2) Decide a stable column order

You’ll save time later if you keep column order consistent:

  • Use the key order from the first object (simple and predictable)
  • Or sort keys alphabetically (stable across runs)

Consistency matters because it keeps your INSERT columns and values aligned.
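Both ordering strategies can be captured in a few lines. This sketch (the helper name `column_order` is made up for the example) walks every row so that keys appearing only in later objects are not silently dropped:

```python
def column_order(rows: list[dict], sort: bool = False) -> list[str]:
    """Collect column names across all rows, keeping first-seen order
    (or alphabetical order when sort=True)."""
    seen: dict[str, None] = {}
    for row in rows:
        for key in row:
            seen.setdefault(key, None)  # dict preserves insertion order
    cols = list(seen)
    return sorted(cols) if sort else cols

rows = [{"id": 1, "name": "Alice"}, {"name": "Bob", "active": False}]
column_order(rows)             # ['id', 'name', 'active']
column_order(rows, sort=True)  # ['active', 'id', 'name']
```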

3) Choose a strategy for nested values

You have to decide what to do with nested objects/arrays. There are two common strategies:

  • Flatten nested keys (e.g., address.city → address_city) — best when you control the schema and need queryable columns.
  • Stringify nested values (store as text/JSON) — best for quick imports when you want deterministic output.

For a “get it into the DB quickly” workflow, stringifying nested values is usually the least painful.
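Both strategies are a few lines each. This Python sketch (helper names are illustrative) shows flattening one level at a time versus serializing nested values to JSON text:

```python
import json

def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into underscore-joined column names."""
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, f"{name}_"))  # recurse into nested objects
        else:
            out[name] = value
    return out

def stringify(obj: dict) -> dict:
    """Keep nested dicts/lists, but serialize them to JSON text."""
    return {k: json.dumps(v) if isinstance(v, (dict, list)) else v
            for k, v in obj.items()}

row = {"id": 1, "address": {"city": "Oslo"}}
flatten(row)    # {'id': 1, 'address_city': 'Oslo'}
stringify(row)  # {'id': 1, 'address': '{"city": "Oslo"}'}
```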

4) Pick the SQL dialect before generating output

This is the part people skip—and it’s the reason “valid SQL” still fails.

Pick your target:

  • Postgres
  • MySQL
  • SQLite

…and generate SQL that matches that target.


A short example (JSON → CREATE TABLE + INSERT)

Let’s use this JSON:

[
  {"id": 1, "name": "Alice", "active": true, "score": 12.5},
  {"id": 2, "name": "Bob", "active": false, "score": null}
]

A reasonable SQL output (conceptually) looks like this:

CREATE TABLE users (
  id INTEGER,
  name TEXT,
  active BOOLEAN,
  score REAL
);

INSERT INTO users (id, name, active, score) VALUES
  (1, 'Alice', TRUE, 12.5),
  (2, 'Bob', FALSE, NULL);

This is intentionally “boring SQL”:

  • It’s readable.
  • It’s easy to run.
  • It makes null-handling explicit.
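Generating that “boring SQL” mostly comes down to rendering each JSON value as a literal. A minimal sketch, assuming Postgres-style TRUE/FALSE booleans and single-quote escaping (function names are made up for illustration):

```python
def sql_literal(value) -> str:
    """Render a JSON value as a SQL literal (Postgres-style booleans assumed)."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):  # must check bool before int: bool subclasses int
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"  # escape single quotes

def insert_stmt(table: str, columns: list[str], rows: list[dict]) -> str:
    """Build one multi-row INSERT; missing keys become NULL."""
    values = ",\n  ".join(
        "(" + ", ".join(sql_literal(r.get(c)) for c in columns) + ")"
        for r in rows
    )
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES\n  {values};"
```

Note the bool-before-int check: in Python, `isinstance(True, int)` is true, so the order of the branches matters.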

Dialect notes (Postgres vs MySQL vs SQLite)

You don’t need to memorize everything, but there are a few recurring differences worth calling out.

1) Identifiers and quoting

Column names like order, group, user, etc. can collide with reserved keywords.

  • Postgres often uses double quotes for identifiers: "user"
  • MySQL commonly uses backticks: `user`
  • SQLite accepts double quotes, but behavior can vary depending on settings

If you’re generating SQL from arbitrary JSON keys, you need safe identifier quoting (or key normalization).
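A dialect-aware quoting helper is small. This sketch doubles any embedded quote character, which is the standard escape in all three databases (the function name is illustrative):

```python
def quote_ident(name: str, dialect: str) -> str:
    """Quote an identifier safely for the target dialect."""
    if dialect == "mysql":
        return "`" + name.replace("`", "``") + "`"  # backticks, doubled if embedded
    # postgres and sqlite both accept double-quoted identifiers
    return '"' + name.replace('"', '""') + '"'

quote_ident("user", "postgres")  # '"user"'
quote_ident("user", "mysql")     # '`user`'
```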

2) Data types aren’t identical

Even when names look similar, behavior differs:

  • SQLite is famously permissive (dynamic typing). Many “types” are affinity hints.
  • Postgres has strict types and richer type options.
  • MySQL types are strict-ish (depending on sql_mode) but different again.

For imports from JSON, the pragmatic path is to use a small set of common types:

  • integers → INTEGER
  • floats → REAL (or DOUBLE depending on dialect)
  • strings → TEXT
  • booleans → BOOLEAN
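That mapping is a one-function sketch in Python (the `sql_type` name is made up; the float type per dialect is one reasonable choice, not the only one):

```python
def sql_type(value, dialect: str = "postgres") -> str:
    """Map a single JSON value to a conservative SQL type."""
    if isinstance(value, bool):  # bool is a subclass of int in Python, check first
        return "BOOLEAN"
    if isinstance(value, int):
        return "INTEGER"
    if isinstance(value, float):
        return "DOUBLE PRECISION" if dialect == "postgres" else "REAL"
    return "TEXT"  # strings and anything stringified

sql_type(12.5)            # 'DOUBLE PRECISION'
sql_type(12.5, "sqlite")  # 'REAL'
```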

3) Booleans and NULL behavior

  • Postgres supports TRUE/FALSE and a real boolean type.
  • MySQL also supports boolean-ish behavior (often as TINYINT(1) under the hood).
  • SQLite doesn’t have a real boolean type; it stores 0/1 (but accepts many inputs).

For JSON imports, your generator should normalize booleans and nulls carefully.
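A tiny normalization sketch for booleans (SQLite gets 0/1; Postgres gets TRUE/FALSE, which MySQL also accepts as aliases for 1/0):

```python
def render_bool(value: bool, dialect: str) -> str:
    """Render a JSON boolean for the target dialect."""
    if dialect == "sqlite":
        return "1" if value else "0"     # SQLite stores booleans as integers
    return "TRUE" if value else "FALSE"  # Postgres; MySQL accepts these too
```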


Common pitfalls (and how to avoid them)

Pitfall #1: Mixed types across rows

You’ll see this in the wild:

[
  {"value": 1},
  {"value": "2"}
]

What type is value? If you choose INTEGER, the string row breaks. If you choose TEXT, you lose numeric semantics.

Practical fix:

  • default to TEXT when types conflict, or
  • add a “strict” mode that rejects mixed-type columns.

Pitfall #2: NULLs hide your real type

A column may be null for many rows, which makes naive type inference pick the wrong type.

Practical fix:

  • infer type from non-null values first; treat nulls as “unknown” until you see a real value.
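Both fixes (skip nulls, widen on conflict) fit in one inference pass. A minimal sketch, with an illustrative function name:

```python
def infer_column_type(values) -> str:
    """Infer one column's type: ignore nulls, widen to TEXT on conflicts."""
    inferred = None
    for v in values:
        if v is None:                    # nulls are 'unknown', not evidence
            continue
        t = ("BOOLEAN" if isinstance(v, bool)
             else "INTEGER" if isinstance(v, int)
             else "REAL" if isinstance(v, float)
             else "TEXT")
        if inferred is None:
            inferred = t
        elif inferred != t:
            return "TEXT"                # mixed types -> fall back to TEXT
    return inferred or "TEXT"            # all-null column -> TEXT by default

infer_column_type([1, "2"])        # 'TEXT'
infer_column_type([None, None, 3]) # 'INTEGER'
```

A more careful version might widen INTEGER + REAL to REAL instead of TEXT, but falling back to TEXT is the safe default.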

Pitfall #3: Reserved keywords and unsafe column names

JSON keys might be:

  • user
  • select
  • created-at
  • some.key

These can break SQL or become annoying to query.

Practical fix:

  • quote identifiers properly for your dialect, and/or
  • normalize keys (e.g., created-at → created_at).
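Key normalization is a one-liner plus edge cases. This sketch (the `normalize_key` name and the `col_` prefix are arbitrary choices) collapses anything non-alphanumeric into underscores:

```python
import re

def normalize_key(key: str) -> str:
    """Turn an arbitrary JSON key into a safe snake_case column name."""
    key = re.sub(r"[^0-9a-zA-Z]+", "_", key).strip("_").lower()
    if not key or key[0].isdigit():  # column names shouldn't start with a digit
        key = "col_" + key
    return key

normalize_key("created-at")  # 'created_at'
normalize_key("some.key")    # 'some_key'
normalize_key("2fast")       # 'col_2fast'
```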

Want a quick converter?

If you’d rather not hand-write schema + inserts, I built a free online JSON → SQL tool that:

  • generates CREATE TABLE + INSERT
  • lets you choose a dialect (Postgres/MySQL/SQLite)
  • supports practical options like column ordering and export

Tool: https://jsontotable.net/json-to-sql

If you want more examples and notes, here’s the guide:
https://jsontotable.net/blog/export-and-share/json-to-sql-create-table-insert-dialect-guide


If you found this useful

Tell me what your hardest JSON→SQL case looks like (nested data? mixed types? huge payloads?). I’m collecting real-world cases to improve the converter’s defaults.
