Rehman Afzal
How to Migrate MySQL to PostgreSQL Without Breaking Everything (With Real Examples)

So you've outgrown MySQL. Maybe you need better JSON support, real window functions, or you're moving to a managed cloud database that defaults to Postgres.

Whatever the reason — MySQL to PostgreSQL migration trips up almost everyone the first time. The two dialects look similar but behave very differently under the hood.
This guide walks through the actual syntax differences, real failure points, and how to convert a MySQL dump cleanly.

Why MySQL Dumps Don't Import Directly into PostgreSQL
Here's a typical MySQL dump:
```sql
CREATE TABLE `users` (
  `id` INT(11) NOT NULL AUTO_INCREMENT,
  `username` VARCHAR(100) NOT NULL,
  `is_active` TINYINT(1) DEFAULT 1,
  `created_at` DATETIME DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Paste this into psql and you get errors immediately. Here is every line that breaks and why:

Backtick identifiers — not standard SQL; PostgreSQL uses double quotes
AUTO_INCREMENT — MySQL-specific keyword; PostgreSQL uses SERIAL (the INT(11) display width is also invalid in PostgreSQL; it becomes plain INT)
TINYINT(1) — no such type in PostgreSQL; use BOOLEAN
DATETIME — no such type in PostgreSQL; use TIMESTAMP
ENGINE=InnoDB — MySQL storage engine declaration; remove entirely
DEFAULT CHARSET=utf8mb4 — MySQL encoding clause; remove entirely

The same table written for PostgreSQL:
```sql
CREATE TABLE "users" (
  "id" SERIAL NOT NULL,
  "username" VARCHAR(100) NOT NULL,
  "is_active" BOOLEAN DEFAULT TRUE,
  "created_at" TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY ("id")
);
```
Six things changed. Let's go through each one properly.

1. Identifiers — Backticks to Double Quotes

MySQL uses backticks. PostgreSQL uses double quotes, or no quotes at all for simple lowercase names.

```sql
-- MySQL
SELECT `id`, `user_name` FROM `orders`;

-- PostgreSQL
SELECT "id", "user_name" FROM "orders";

-- or just this, if names are simple lowercase
SELECT id, user_name FROM orders;
```

Gotcha: PostgreSQL is case-sensitive with quoted identifiers. "UserName" and "username" are two different columns. If your MySQL schema used mixed-case names, decide upfront whether to keep the quotes or normalize everything to lowercase.
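The trap in one snippet (the demo table here is illustrative):

```sql
CREATE TABLE demo ("UserName" VARCHAR(50));

SELECT "UserName" FROM demo;  -- works
SELECT UserName   FROM demo;  -- ERROR: column "username" does not exist
```

Unquoted UserName is folded to lowercase username, which does not match the quoted mixed-case column name.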

2. Auto Increment — AUTO_INCREMENT to SERIAL

```sql
-- MySQL
id INT NOT NULL AUTO_INCREMENT

-- PostgreSQL (classic, still works everywhere)
id SERIAL NOT NULL

-- PostgreSQL (modern SQL standard, preferred)
id INT GENERATED ALWAYS AS IDENTITY
```

SERIAL is shorthand that creates a sequence and wires it in as the column default. GENERATED ALWAYS AS IDENTITY is the SQL-standard approach: it rejects explicit values for the column unless you write OVERRIDING SYSTEM VALUE, which keeps stray manual inserts from pushing the column ahead of its sequence.
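A quick sketch of the difference in practice (the items table is illustrative):

```sql
CREATE TABLE items (
  id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name TEXT
);

-- Normal insert: the sequence assigns the id
INSERT INTO items (name) VALUES ('widget');

-- Explicit id is rejected:
-- ERROR: cannot insert a non-DEFAULT value into column "id"
INSERT INTO items (id, name) VALUES (99, 'gadget');

-- A deliberate override must be spelled out:
INSERT INTO items (id, name) OVERRIDING SYSTEM VALUE VALUES (99, 'gadget');
```

With SERIAL, that second insert would silently succeed and leave the sequence pointing below MAX(id).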

3. Booleans — TINYINT(1) to BOOLEAN

MySQL has no native boolean type; it uses TINYINT(1) with 0 and 1. PostgreSQL has a real BOOLEAN type.

```sql
-- MySQL
is_active TINYINT(1) DEFAULT 1

-- PostgreSQL
is_active BOOLEAN DEFAULT TRUE
```

Your INSERT statements need updating too:

```sql
-- MySQL
INSERT INTO users (is_active) VALUES (1);

-- PostgreSQL
INSERT INTO users (is_active) VALUES (TRUE);
```

4. Date and Time — DATETIME to TIMESTAMP

```sql
-- MySQL
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
updated_at DATETIME ON UPDATE CURRENT_TIMESTAMP

-- PostgreSQL
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-- ON UPDATE has no PostgreSQL equivalent — use a trigger instead
```
If you need ON UPDATE CURRENT_TIMESTAMP behavior, you need a trigger in PostgreSQL:
```sql
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_updated_at
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION update_updated_at();
```

5. Functions That Have Different Names

Some common MySQL functions either do not exist or have different names in PostgreSQL:

```sql
-- MySQL                      -- PostgreSQL
IFNULL(col, 'default')    →   COALESCE(col, 'default')
GROUP_CONCAT(col)         →   STRING_AGG(col, ',')
LIMIT 10, 20              →   LIMIT 20 OFFSET 10
RAND()                    →   RANDOM()
```

Note that MySQL's LIMIT 10, 20 means "skip 10 rows, return 20", so it maps to LIMIT 20 OFFSET 10 — not the other way around. NOW() is the same in both — one less thing to change.
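STRING_AGG also covers GROUP_CONCAT's separator and ordering options (the tags table here is illustrative):

```sql
-- MySQL
SELECT GROUP_CONCAT(name ORDER BY name SEPARATOR '; ') FROM tags;

-- PostgreSQL: the separator is a required argument, and ORDER BY goes inside the call
SELECT STRING_AGG(name, '; ' ORDER BY name) FROM tags;
```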

6. INSERT IGNORE and ON DUPLICATE KEY

```sql
-- MySQL: silently skip duplicates
INSERT IGNORE INTO users (email) VALUES ('a@b.com');

-- MySQL: upsert
INSERT INTO users (id, email)
VALUES (1, 'a@b.com')
ON DUPLICATE KEY UPDATE email = VALUES(email);
```

```sql
-- PostgreSQL: skip duplicates
INSERT INTO users (email) VALUES ('a@b.com')
ON CONFLICT DO NOTHING;

-- PostgreSQL: upsert
INSERT INTO users (id, email)
VALUES (1, 'a@b.com')
ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email;
```

The Migration Process Step by Step
Step 1 — Export cleanly from MySQL
```bash
mysqldump -u root -p \
  --no-tablespaces \
  --skip-add-locks \
  --skip-comments \
  mydb > mydb_dump.sql
```
The --no-tablespaces and --skip-add-locks flags produce a cleaner dump that is much easier to transform.
Step 2 — Convert the dump
For small schemas you can use sed for the basics:
```bash
sed -i 's/`/"/g' mydb_dump.sql
sed -i 's/TINYINT(1)/BOOLEAN/g' mydb_dump.sql
sed -i -E 's/INT\([0-9]+\) NOT NULL AUTO_INCREMENT/SERIAL NOT NULL/g' mydb_dump.sql
sed -i 's/DATETIME/TIMESTAMP/g' mydb_dump.sql
sed -i 's/ ENGINE=InnoDB[^;]*//g' mydb_dump.sql
sed -i 's/ DEFAULT CHARSET=utf8mb4//g' mydb_dump.sql
```

The AUTO_INCREMENT rule replaces the whole `INT(n) NOT NULL AUTO_INCREMENT` fragment in one pass; naively swapping AUTO_INCREMENT for SERIAL would leave invalid definitions like `INT(11) NOT NULL SERIAL`.
For anything larger or more complex, sed will miss edge cases — multiline statements, character encoding issues, nested quoting. I use DBConverter for this: upload the dump, pick PostgreSQL as the target, and download a file that is ready to import. It handles the type mapping, identifier rewriting, and engine clause removal automatically.
Step 3 — Always import into staging first
```bash
psql -U postgres -d mydb_staging -f mydb_dump_converted.sql
```
Never import directly into production. Check for errors, run your test suite, verify row counts match your source.
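A quick way to spot-check row counts is to run the same query on both databases and diff the output (the table names here are illustrative):

```sql
SELECT 'users' AS tbl, COUNT(*) AS row_count FROM users
UNION ALL
SELECT 'orders', COUNT(*) FROM orders;
```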
Step 4 — Resync your sequences
After importing data with explicit IDs, your SERIAL sequences will be out of sync. Fix each one:
```sql
SELECT setval(
  pg_get_serial_sequence('users', 'id'),
  (SELECT MAX(id) FROM users)
);
```
Run this for every table with a serial column or you will hit duplicate key errors on the first insert.
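Doing this by hand for dozens of tables is tedious. Here is a sketch that resyncs every sequence-backed column in the public schema in one pass (it assumes SERIAL-style defaults, i.e. a column default starting with nextval):

```sql
DO $$
DECLARE
  rec RECORD;
BEGIN
  -- Find every column whose default is driven by a sequence
  FOR rec IN
    SELECT table_name, column_name
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND column_default LIKE 'nextval(%'
  LOOP
    -- Point the sequence at the current MAX of the column (or 1 if empty)
    EXECUTE format(
      'SELECT setval(pg_get_serial_sequence(%L, %L), COALESCE((SELECT MAX(%I) FROM %I), 1))',
      rec.table_name, rec.column_name, rec.column_name, rec.table_name
    );
  END LOOP;
END $$;
```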
Step 5 — Audit your queries
Run EXPLAIN ANALYZE on your slowest queries. PostgreSQL's planner is different and you may need new indexes. Watch out for:

LIKE '%term%' — PostgreSQL will not use a B-tree index for a leading wildcard; use full-text search or a pg_trgm GIN index instead
GROUP BY is stricter — every non-aggregated column must appear in the GROUP BY clause (or be functionally dependent on the grouped primary key)
String comparison is case-sensitive — MySQL's default collations are case-insensitive, so WHERE name = 'Alice' matched 'alice' there; in PostgreSQL it will not. Use ILIKE, LOWER(), or the citext type where you need the old behavior
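For the leading-wildcard case, a trigram index restores index support for LIKE '%term%'. This requires the pg_trgm extension, which ships with PostgreSQL but must be enabled per database (the index name here is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX users_username_trgm_idx
  ON users USING GIN (username gin_trgm_ops);

-- EXPLAIN should now show a bitmap index scan for:
SELECT * FROM users WHERE username LIKE '%term%';
```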

Common Import Errors and What They Mean
syntax error at or near "`"
→ Backticks were not fully replaced. Find and replace all remaining backticks with double quotes.
column "is_active" is of type boolean but expression is of type integer
→ Your INSERT statements still use 1 and 0. Replace with TRUE and FALSE.
function ifnull(...) does not exist
→ Replace IFNULL with COALESCE throughout.
syntax error at or near "ENGINE"
→ The ENGINE=InnoDB tail was not stripped from your CREATE TABLE statements.
duplicate key value violates unique constraint on first insert after migration
→ Sequence is out of sync. Run the setval fix from Step 4 above.

Summary
The six things you must handle in every MySQL to PostgreSQL migration:

Backticks → double quotes
AUTO_INCREMENT → SERIAL
TINYINT(1) → BOOLEAN
DATETIME → TIMESTAMP
MySQL-specific functions → PostgreSQL equivalents
INSERT IGNORE / ON DUPLICATE KEY → ON CONFLICT

For small schemas, the sed commands above get you most of the way. For anything production-grade, let a tool like DBConverter handle the rewriting so you can focus on testing the result rather than editing SQL line by line.

Hit a migration edge case not covered here? Drop it in the comments.
