<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mickel Samuel</title>
    <description>The latest articles on DEV Community by Mickel Samuel (@mickelsamuel).</description>
    <link>https://dev.to/mickelsamuel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3789489%2F6ddd000f-3f60-4ccf-8841-919a220c6bf4.jpeg</url>
      <title>DEV Community: Mickel Samuel</title>
      <link>https://dev.to/mickelsamuel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mickelsamuel"/>
    <language>en</language>
    <item>
      <title>The 5 PostgreSQL Migration Mistakes That Cause Production Outages</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:42 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/the-5-postgresql-migration-mistakes-that-cause-production-outages-ngg</link>
      <guid>https://dev.to/mickelsamuel/the-5-postgresql-migration-mistakes-that-cause-production-outages-ngg</guid>
      <description>&lt;p&gt;Most PostgreSQL migration outages are caused by the same five patterns. Each one involves a lock that blocks queries longer than expected, a table rewrite that no one anticipated, or a cascade effect that turns a simple schema change into minutes of downtime. Here are the five mistakes, the production incidents they cause, and the safe patterns to avoid them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistake #1: CREATE INDEX Without CONCURRENTLY
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The dangerous migration
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- This looks harmless
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  What happens in production
&lt;/h3&gt;

&lt;p&gt;A regular &lt;code&gt;CREATE INDEX&lt;/code&gt; acquires a &lt;strong&gt;SHARE lock&lt;/strong&gt; on the table. This lock blocks all INSERT, UPDATE, and DELETE operations for the entire duration of the index build.&lt;/p&gt;

&lt;p&gt;On a table with 1 million rows, an index build might take 5-30 seconds. On a table with 100 million rows, it can take minutes. During that entire time, every write to the table is blocked. Every API request that tries to insert or update a row in that table will queue up, waiting.&lt;/p&gt;

&lt;p&gt;The cascading effect is what makes this dangerous: as write queries queue up, they consume connection pool slots. When the connection pool is exhausted, new requests start failing. What started as an index build on one table becomes a total service outage.&lt;/p&gt;
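
&lt;p&gt;Before running the index build, it helps to check for long-running transactions that would hold conflicting locks. A minimal pre-flight query against the standard &lt;code&gt;pg_stat_activity&lt;/code&gt; view (the one-minute threshold is an arbitrary starting point):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Sessions with transactions open longer than one minute;
-- any of these can block (or be blocked by) the index build
SELECT pid, state, age(now(), xact_start) AS xact_age, query
FROM pg_stat_activity
WHERE xact_start &amp;lt; now() - interval '1 minute'
ORDER BY xact_start;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;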
&lt;h3&gt;
  
  
  The safe pattern
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Add CONCURRENTLY — builds the index without blocking writes
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;CREATE INDEX CONCURRENTLY&lt;/code&gt; acquires only a &lt;strong&gt;SHARE UPDATE EXCLUSIVE lock&lt;/strong&gt;, which allows reads and writes to continue while the index is being built. It takes longer (two table scans instead of one), but your application stays up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important caveat:&lt;/strong&gt; CONCURRENTLY cannot run inside a transaction. If your migration framework wraps migrations in transactions by default (Flyway, Liquibase, Knex), you need to disable the transaction for this specific migration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Flyway: disable transaction for this migration
-- flyway:executeInTransaction=false
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);

-- If the concurrent build fails, clean up the invalid index:
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_customer_id;
-- Then retry the CREATE INDEX CONCURRENTLY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Mistake #2: ALTER TABLE SET NOT NULL Without the CHECK Pattern
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The dangerous migration
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Adding NOT NULL to an existing column
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  What happens in production
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;SET NOT NULL&lt;/code&gt; acquires an &lt;strong&gt;ACCESS EXCLUSIVE lock&lt;/strong&gt; — the most restrictive lock in PostgreSQL. This lock blocks &lt;em&gt;everything&lt;/em&gt;: not just writes, but plain &lt;code&gt;SELECT&lt;/code&gt; queries too.&lt;/p&gt;

&lt;p&gt;While holding this lock, PostgreSQL scans the entire table to verify that no existing rows have NULL values in the column. On a 50-million-row table, this scan can take minutes. During those minutes, nothing else can touch the table. Not even reads.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Nothing quite like the surprise you get the first time you rewrite every tuple on a table with 20M rows because you added a column with a default value." — Hacker News&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  The safe pattern (PostgreSQL 12+)
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 1: Add a CHECK constraint with NOT VALID
-- This only checks new/modified rows, no full table scan
ALTER TABLE users ADD CONSTRAINT users_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;

-- Step 2: Validate the constraint in a separate transaction
-- This scans the table but only holds a SHARE UPDATE EXCLUSIVE lock
-- (allows reads AND writes to continue)
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;

-- Step 3: Now set NOT NULL — PostgreSQL sees the valid CHECK constraint
-- and skips the full table scan entirely
ALTER TABLE users ALTER COLUMN email SET NOT NULL;

-- Step 4: Drop the CHECK constraint (redundant now)
ALTER TABLE users DROP CONSTRAINT users_email_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This four-step pattern is the standard safe approach. The key insight: once PostgreSQL sees a validated CHECK constraint that guarantees no NULLs exist, the &lt;code&gt;SET NOT NULL&lt;/code&gt; becomes a metadata-only operation that completes instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PostgreSQL 18 update:&lt;/strong&gt; As of PostgreSQL 18 (September 2025), &lt;code&gt;NOT NULL&lt;/code&gt; constraints can themselves be added as &lt;code&gt;NOT VALID&lt;/code&gt; and validated separately, which replaces the CHECK-constraint workaround on that version. On older versions, the four-step pattern above is still the safe approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistake #3: ALTER COLUMN TYPE (Silent Table Rewrite)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The dangerous migration
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Changing a column type
ALTER TABLE events ALTER COLUMN payload TYPE jsonb USING payload::jsonb;

-- Or seemingly innocent changes like:
ALTER TABLE users ALTER COLUMN name TYPE varchar(500);  -- was varchar(255)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  What happens in production
&lt;/h3&gt;

&lt;p&gt;Most column type changes trigger a &lt;strong&gt;full table rewrite&lt;/strong&gt;. PostgreSQL creates a new copy of the table with the new column type, copies every row, then swaps the tables. The entire operation runs under an ACCESS EXCLUSIVE lock.&lt;/p&gt;

&lt;p&gt;On a 10GB table, a rewrite can take 10-30 minutes. During that time, no reads or writes are possible. Your application is completely blocked on that table. If the table is central to your application (users, orders, events), this is a full outage.&lt;/p&gt;

&lt;p&gt;What makes this especially dangerous is that many type changes &lt;em&gt;look&lt;/em&gt; safe. Widening &lt;code&gt;int&lt;/code&gt; to &lt;code&gt;bigint&lt;/code&gt; seems like a trivial change, but PostgreSQL rewrites the entire table. (One change that &lt;em&gt;is&lt;/em&gt; safe: increasing a &lt;code&gt;varchar&lt;/code&gt; length limit has been metadata-only since PostgreSQL 9.2.)&lt;/p&gt;
&lt;h3&gt;
  
  
  Safe alternatives
&lt;/h3&gt;

&lt;p&gt;There is no universal safe pattern for column type changes. The approach depends on the specific change:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- varchar(255) → text: SAFE, no rewrite (just removes the length limit)
ALTER TABLE users ALTER COLUMN name TYPE text;

-- varchar(255) → varchar(500): REWRITES the table
-- Safe alternative: just use text instead
ALTER TABLE users ALTER COLUMN name TYPE text;

-- int → bigint: REWRITES the table
-- Safe alternative: expand-contract migration
-- 1. Add new column
ALTER TABLE events ADD COLUMN id_new bigint;
-- 2. Backfill in batches (application-level)
-- 3. Swap columns
-- 4. Drop old column
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The general principle: if the new type requires more storage per row or a different binary representation, PostgreSQL rewrites the table. The safe alternative is usually an expand-contract migration: add a new column, backfill, swap, drop.&lt;/p&gt;
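
&lt;p&gt;When in doubt, you can test whether a given &lt;code&gt;ALTER&lt;/code&gt; rewrites the table by comparing the relation's filenode before and after on a scratch database. &lt;code&gt;pg_relation_filenode&lt;/code&gt; returns the on-disk file identifier, which changes only when the table is rewritten:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Run on a scratch/staging copy, never production
SELECT pg_relation_filenode('users');  -- note the value
ALTER TABLE users ALTER COLUMN name TYPE text;
SELECT pg_relation_filenode('users');  -- same value = no rewrite happened
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;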

&lt;h2&gt;
  
  
  Mistake #4: Running Migrations Without lock_timeout
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The dangerous migration
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- No lock_timeout set — will wait indefinitely for the lock
ALTER TABLE orders ADD COLUMN discount_pct numeric(5,2);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  What happens in production
&lt;/h3&gt;

&lt;p&gt;Even a fast DDL operation like adding a nullable column needs to acquire a lock on the table. If there is a long-running query or transaction holding a conflicting lock on that table, your DDL statement will &lt;strong&gt;wait indefinitely&lt;/strong&gt; for the lock.&lt;/p&gt;

&lt;p&gt;While your DDL statement is waiting, it goes into the lock queue. Here's the critical part: &lt;strong&gt;every subsequent query on that table also gets queued behind your waiting DDL&lt;/strong&gt;. Even simple SELECT queries. The lock queue is FIFO, and your DDL operation is blocking everything behind it.&lt;/p&gt;

&lt;p&gt;This is the "lock cascade" pattern. One long-running analytics query + one migration without lock_timeout = total table lockout. Your migration is waiting for the analytics query, and every application query is waiting for your migration.&lt;/p&gt;
&lt;h3&gt;
  
  
  The safe pattern
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Always set lock_timeout before DDL operations
SET lock_timeout = '5s';

-- Now if the lock can't be acquired within 5 seconds,
-- the statement fails instead of waiting indefinitely
ALTER TABLE orders ADD COLUMN discount_pct numeric(5,2);

-- Reset for safety
RESET lock_timeout;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With &lt;code&gt;lock_timeout&lt;/code&gt; set, a failed lock acquisition gives you an error you can handle: retry after a short delay, kill the blocking query, or run the migration during a lower-traffic window. Without it, your only option is to manually kill the migration session while your application is down.&lt;/p&gt;

&lt;p&gt;A good deployment wrapper retries the migration 3-5 times with a short delay between attempts. If a 5-second lock_timeout fails 5 times in a row (25 seconds of retrying), something is genuinely blocking the table and you need to investigate — not wait forever.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Retry wrapper pattern (pseudocode)
for attempt in 1..5:
  BEGIN;
    SET LOCAL lock_timeout = '5s';
    ALTER TABLE orders ADD COLUMN discount_pct numeric(5,2);
  COMMIT;
  -- Success, exit loop

  -- On lock_timeout error:
  ROLLBACK;
  sleep(2 * attempt);  -- Exponential backoff

-- If all 5 attempts fail, alert and investigate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Mistake #5: Adding a Foreign Key Without NOT VALID
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The dangerous migration
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Adding a foreign key constraint
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  What happens in production
&lt;/h3&gt;

&lt;p&gt;Adding a foreign key constraint acquires locks on &lt;strong&gt;both tables&lt;/strong&gt; — the referencing table (orders) and the referenced table (customers). PostgreSQL needs to scan the entire referencing table to verify that every value in the foreign key column exists in the referenced table.&lt;/p&gt;

&lt;p&gt;GoCardless documented a 15-second API outage caused by exactly this pattern: a foreign key constraint that locked both tables simultaneously, blocking all queries on both.&lt;/p&gt;

&lt;p&gt;The lock on the referenced table (customers) is what most developers don't expect. You think you are modifying the orders table, but you are actually locking customers too. If customers is your most-queried table, this cascades quickly.&lt;/p&gt;
&lt;h3&gt;
  
  
  The safe pattern
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 1: Add the constraint with NOT VALID
-- This only checks new/modified rows, no full table scan
-- Still locks both tables briefly, but completes instantly
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id)
  NOT VALID;

-- Step 2: Validate the constraint in a separate transaction
-- This scans the table but holds a weaker lock (SHARE UPDATE EXCLUSIVE)
-- Reads and writes continue normally on both tables
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The NOT VALID + VALIDATE pattern splits the operation into two phases: a fast metadata change (milliseconds, locks both tables briefly) and a slow validation scan (seconds to minutes, but only holds a SHARE UPDATE EXCLUSIVE lock that allows reads and writes).&lt;/p&gt;
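
&lt;p&gt;One caveat: &lt;code&gt;VALIDATE CONSTRAINT&lt;/code&gt; fails if any orphaned rows exist, so it is worth checking first. A quick pre-check using the example tables above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Count orders referencing a customer that no longer exists;
-- must be zero before VALIDATE CONSTRAINT can succeed
SELECT count(*)
FROM orders o
WHERE o.customer_id IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;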

&lt;h2&gt;
  
  
  Common Thread: The Lock Queue Problem
&lt;/h2&gt;

&lt;p&gt;All five mistakes share the same underlying mechanism: they acquire a lock that's too strong for too long. The PostgreSQL lock queue makes this worse than you'd expect:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your DDL statement requests an ACCESS EXCLUSIVE lock&lt;/li&gt;
&lt;li&gt;A long-running query is holding a conflicting lock — your DDL waits&lt;/li&gt;
&lt;li&gt;While your DDL waits, all new queries on the table queue &lt;em&gt;behind&lt;/em&gt; your DDL&lt;/li&gt;
&lt;li&gt;The lock queue grows: 10 queries, 50 queries, 200 queries&lt;/li&gt;
&lt;li&gt;Connection pool fills up with blocked queries&lt;/li&gt;
&lt;li&gt;New requests can't get a connection — service returns 5xx errors&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The total outage duration is not the time your DDL takes. It's the time the blocking query was already running + the time your DDL takes + the time to drain the queued requests. A 3-second DDL operation behind a 60-second analytics query causes a 63+ second outage.&lt;/p&gt;
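
&lt;p&gt;If you suspect this is happening, &lt;code&gt;pg_blocking_pids&lt;/code&gt; (PostgreSQL 9.6+) shows who is at the head of the queue. A quick diagnostic sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Every waiting backend and the PIDs blocking it
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) &amp;gt; 0;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;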

&lt;h2&gt;
  
  
  Preventing These Mistakes
&lt;/h2&gt;

&lt;p&gt;The safe patterns above are well-documented but rarely enforced. A developer who writes migrations twice a month is unlikely to remember the NOT VALID + VALIDATE pattern or the lock_timeout requirement every time.&lt;/p&gt;

&lt;p&gt;The most effective prevention is automated linting in CI. When a migration is opened in a pull request, a linter can catch all five patterns before they reach production:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check all migration files for safety issues
npx migrationpilot check migrations/*.sql --fail-on critical

# Example output:
# migrations/V42__add_index.sql
#   MP001 [critical] CREATE INDEX without CONCURRENTLY
#     Acquires: SHARE lock (blocks writes)
#     Safe alternative:
#       CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
#
#   MP004 [warning] Missing lock_timeout
#     Safe alternative:
#       SET lock_timeout = '5s';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Free tools that catch these patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/sbdchd/squawk" rel="noopener noreferrer"&gt;Squawk&lt;/a&gt; — 32 rules, Rust, GitHub Action&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/ankane/strong_migrations" rel="noopener noreferrer"&gt;strong_migrations&lt;/a&gt; — Rails only, excellent error messages&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; — 80 rules, lock classification, auto-fix, GitHub Action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The specific tool matters less than having &lt;em&gt;any&lt;/em&gt; automated check. These five mistakes are repetitive patterns that a linter catches in milliseconds. The production incidents they cause take hours to diagnose and recover from.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Reference
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mistake&lt;/th&gt;
&lt;th&gt;Lock Acquired&lt;/th&gt;
&lt;th&gt;Safe Pattern&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CREATE INDEX (no CONCURRENTLY)&lt;/td&gt;
&lt;td&gt;SHARE&lt;/td&gt;
&lt;td&gt;CREATE INDEX CONCURRENTLY&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SET NOT NULL (direct)&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;CHECK NOT VALID + VALIDATE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ALTER COLUMN TYPE&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Expand-contract migration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No lock_timeout&lt;/td&gt;
&lt;td&gt;Any (queues indefinitely)&lt;/td&gt;
&lt;td&gt;SET lock_timeout = '5s'&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FK without NOT VALID&lt;/td&gt;
&lt;td&gt;SHARE ROW EXCLUSIVE (both tables)&lt;/td&gt;
&lt;td&gt;NOT VALID + VALIDATE&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>devops</category>
      <category>backend</category>
    </item>
    <item>
      <title>PostgreSQL Migration Best Practices for Zero-Downtime Deployments</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:38 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/postgresql-migration-best-practices-for-zero-downtime-deployments-1c4</link>
      <guid>https://dev.to/mickelsamuel/postgresql-migration-best-practices-for-zero-downtime-deployments-1c4</guid>
      <description>&lt;p&gt;Zero-downtime schema migrations are not about a single trick. They require a combination of techniques: lock management, phased rollouts, backfill strategies, and CI enforcement. This guide covers the full playbook.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Always Set lock_timeout
&lt;/h2&gt;

&lt;p&gt;The single most important practice for safe migrations. Every DDL statement that acquires a table lock should be preceded by a lock_timeout:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Set a 5-second lock timeout
-- If the lock cannot be acquired in 5s, the statement fails
-- This prevents the lock queue problem (see below)
SET lock_timeout = '5s';

ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active';

-- Reset after DDL
RESET lock_timeout;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Without lock_timeout, a DDL statement will wait indefinitely for a lock. While it waits, every subsequent query that needs a conflicting lock queues behind it. A single long-running analytics query can trigger a cascading outage:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Long query holds ACCESS SHARE on the table&lt;/li&gt;
&lt;li&gt;ALTER TABLE waits for ACCESS EXCLUSIVE lock&lt;/li&gt;
&lt;li&gt;All new queries queue behind the ALTER TABLE&lt;/li&gt;
&lt;li&gt;Connection pool exhausts within seconds&lt;/li&gt;
&lt;li&gt;Application-wide outage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With lock_timeout, the ALTER fails fast. Your deployment script can retry after a delay, and the application continues serving traffic in the meantime.&lt;/p&gt;

&lt;h3&gt;
  
  
  The retry wrapper pattern
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Retry wrapper: try up to 5 times with increasing delays
DO $$
DECLARE
  retries INT := 0;
  max_retries INT := 5;
BEGIN
  LOOP
    BEGIN
      SET lock_timeout = '5s';
      ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active';
      RESET lock_timeout;
      RETURN;  -- Success
    EXCEPTION WHEN lock_not_available THEN
      retries := retries + 1;
      IF retries &amp;gt;= max_retries THEN
        RAISE EXCEPTION 'Failed to acquire lock after % attempts', max_retries;
      END IF;
      PERFORM pg_sleep(retries * 2);  -- Backoff: 2s, 4s, 6s, 8s
    END;
  END LOOP;
END $$;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  2. Use statement_timeout for Long-Running DDL
&lt;/h2&gt;

&lt;p&gt;While lock_timeout limits how long you wait to &lt;em&gt;acquire&lt;/em&gt; a lock, statement_timeout limits how long the statement itself can &lt;em&gt;run&lt;/em&gt;. Set both:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Prevent DDL from running longer than expected
SET lock_timeout = '5s';
SET statement_timeout = '30s';

ALTER TABLE users ALTER COLUMN email SET NOT NULL;

RESET lock_timeout;
RESET statement_timeout;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  3. The Expand-Contract Pattern
&lt;/h2&gt;

&lt;p&gt;The expand-contract pattern (also called parallel change) is the safest way to make backwards-incompatible schema changes. It splits a dangerous change into three deployments:&lt;/p&gt;
&lt;h3&gt;
  
  
  Example: Renaming a column
&lt;/h3&gt;

&lt;p&gt;Directly renaming a column breaks all existing queries and application code that references the old name. The expand-contract approach:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- DEPLOY 1: Expand — add the new column alongside the old one
ALTER TABLE users ADD COLUMN full_name TEXT;

-- Create a trigger to keep both columns in sync
CREATE OR REPLACE FUNCTION sync_name_columns()
RETURNS TRIGGER AS $$
BEGIN
  IF TG_OP = 'INSERT' OR TG_OP = 'UPDATE' THEN
    IF NEW.full_name IS NULL AND NEW.name IS NOT NULL THEN
      NEW.full_name := NEW.name;
    ELSIF NEW.name IS NULL AND NEW.full_name IS NOT NULL THEN
      NEW.name := NEW.full_name;
    END IF;
  END IF;
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER trg_sync_names
BEFORE INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE FUNCTION sync_name_columns();

-- Backfill existing rows (batch this on large tables — see section 4)
UPDATE users SET full_name = name WHERE full_name IS NULL;


-- DEPLOY 2: Migrate — update application to use "full_name"
-- All reads and writes now use the new column name
-- Deploy this code change and verify in production


-- DEPLOY 3: Contract — remove the old column and trigger
DROP TRIGGER trg_sync_names ON users;
DROP FUNCTION sync_name_columns();
ALTER TABLE users DROP COLUMN name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Example: Changing a column type
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Changing INT to BIGINT directly acquires ACCESS EXCLUSIVE
-- and rewrites the entire table. Instead, use expand-contract:

-- DEPLOY 1: Add new column
ALTER TABLE orders ADD COLUMN amount_v2 BIGINT;

-- Sync trigger + backfill (same pattern as above)

-- DEPLOY 2: Switch reads/writes to amount_v2

-- DEPLOY 3: Drop old column
ALTER TABLE orders DROP COLUMN amount;
-- Optionally: ALTER TABLE orders RENAME COLUMN amount_v2 TO amount;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  4. Batched Backfills
&lt;/h2&gt;

&lt;p&gt;Never run a single UPDATE against millions of rows. A single large UPDATE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Holds ROW EXCLUSIVE locks for the entire duration&lt;/li&gt;
&lt;li&gt;Generates massive WAL (write-ahead log) volume&lt;/li&gt;
&lt;li&gt;Creates millions of dead tuples that autovacuum must clean up&lt;/li&gt;
&lt;li&gt;Can cause replication lag on replicas&lt;/li&gt;
&lt;li&gt;Might fill up your disk with WAL and dead tuples&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD: Single update that touches all 50M rows
UPDATE users SET status = 'active' WHERE status IS NULL;

-- GOOD: Batched update with pauses for autovacuum
DO $$
DECLARE
  rows_updated INT;
  total_updated INT := 0;
  batch_size INT := 10000;
BEGIN
  LOOP
    UPDATE users
    SET status = 'active'
    WHERE id IN (
      SELECT id FROM users
      WHERE status IS NULL
      ORDER BY id
      LIMIT batch_size
    );

    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    total_updated := total_updated + rows_updated;

    RAISE NOTICE 'Updated % rows (% total)', rows_updated, total_updated;

    EXIT WHEN rows_updated = 0;

    -- Give autovacuum time to clean up dead tuples
    PERFORM pg_sleep(0.5);

    -- Commit each batch independently (PG 11+, outside a transaction block)
    COMMIT;
  END LOOP;
END $$;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  5. Safe Constraint Addition
&lt;/h2&gt;
&lt;h3&gt;
  
  
  NOT NULL constraints
&lt;/h3&gt;

&lt;p&gt;Adding NOT NULL directly acquires ACCESS EXCLUSIVE and scans the entire table. Use the CHECK constraint pattern instead:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD: Scans entire table under ACCESS EXCLUSIVE
ALTER TABLE users ALTER COLUMN email SET NOT NULL;

-- GOOD: Two-step pattern
-- Step 1: Add CHECK constraint as NOT VALID (instant, no scan)
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;

-- Step 2: Validate the constraint (scans table but only holds
-- SHARE UPDATE EXCLUSIVE — does not block reads or writes)
ALTER TABLE users VALIDATE CONSTRAINT chk_email_not_null;

-- PG 12+: Once a valid CHECK (col IS NOT NULL) exists,
-- you can SET NOT NULL without a table scan:
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT chk_email_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Foreign key constraints
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD: Validates all existing rows under SHARE ROW EXCLUSIVE
ALTER TABLE orders ADD CONSTRAINT fk_user
  FOREIGN KEY (user_id) REFERENCES users (id);

-- GOOD: Add NOT VALID, then validate separately
ALTER TABLE orders ADD CONSTRAINT fk_user
  FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID;

ALTER TABLE orders VALIDATE CONSTRAINT fk_user;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Unique constraints
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD: Creates an index + validates under ACCESS EXCLUSIVE
ALTER TABLE users ADD CONSTRAINT uniq_email UNIQUE (email);

-- GOOD: Build the index concurrently, then attach it
CREATE UNIQUE INDEX CONCURRENTLY idx_users_email ON users (email);
ALTER TABLE users ADD CONSTRAINT uniq_email UNIQUE USING INDEX idx_users_email;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  6. One DDL Per Transaction
&lt;/h2&gt;

&lt;p&gt;Multiple DDL statements in a single transaction compound lock duration. Each statement acquires its own locks, and all locks are held until COMMIT:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD: Three DDL statements in one transaction
-- ACCESS EXCLUSIVE held for the entire combined duration
BEGIN;
ALTER TABLE users ADD COLUMN phone TEXT;
ALTER TABLE users ADD COLUMN address TEXT;
ALTER TABLE users ADD COLUMN city TEXT;
COMMIT;

-- GOOD: Separate transactions (or separate migration files)
ALTER TABLE users ADD COLUMN phone TEXT;
-- locks released
ALTER TABLE users ADD COLUMN address TEXT;
-- locks released
ALTER TABLE users ADD COLUMN city TEXT;
-- locks released
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  7. Test Migrations Against Production Data
&lt;/h2&gt;

&lt;p&gt;Migrations that work on a dev database with 100 rows can cause outages on a production database with 100 million rows. The locking behavior is the same, but the duration is completely different.&lt;/p&gt;

&lt;p&gt;Best practices for migration testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain a staging database with production-sized data (or at least production-like volumes)&lt;/li&gt;
&lt;li&gt;Time your migrations on staging and set alerts for migrations that take longer than expected&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;EXPLAIN&lt;/code&gt; on any data migration queries to check for sequential scans&lt;/li&gt;
&lt;li&gt;Monitor &lt;code&gt;pg_stat_activity&lt;/code&gt; and &lt;code&gt;pg_locks&lt;/code&gt; during staging runs&lt;/li&gt;
&lt;li&gt;Run static analysis on migration files in CI to catch common issues early&lt;/li&gt;
&lt;/ul&gt;
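
&lt;p&gt;Timing a staging run can be as simple as psql's built-in timer (this assumes a staging clone with production-like data volumes):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- In psql: enable per-statement wall-clock timing
\timing on

SET lock_timeout = '5s';
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
-- psql prints "Time: ..." after each statement; if this takes
-- minutes on staging, it will block production for minutes too
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;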
&lt;h2&gt;
  
  
  8. Deployment Ordering
&lt;/h2&gt;

&lt;p&gt;The order of application deployment vs migration execution matters:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Change Type&lt;/th&gt;
&lt;th&gt;Deploy Order&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Add column&lt;/td&gt;
&lt;td&gt;Migration first, then app code&lt;/td&gt;
&lt;td&gt;Old code ignores new columns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Remove column&lt;/td&gt;
&lt;td&gt;App code first, then migration&lt;/td&gt;
&lt;td&gt;Old code still references the column&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rename column&lt;/td&gt;
&lt;td&gt;Expand-contract (3 deploys)&lt;/td&gt;
&lt;td&gt;Both old and new code need to work&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Add NOT NULL&lt;/td&gt;
&lt;td&gt;App code first (ensure no NULLs), then constraint&lt;/td&gt;
&lt;td&gt;Constraint validation fails if NULLs exist&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Add index&lt;/td&gt;
&lt;td&gt;Migration (CONCURRENTLY) anytime&lt;/td&gt;
&lt;td&gt;Indexes are transparent to app code&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  9. Monitor During Migrations
&lt;/h2&gt;

&lt;p&gt;Always monitor these metrics during migration execution:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Active locks on the table being migrated
SELECT mode, granted, count(*)
FROM pg_locks
WHERE relation = 'users'::regclass
GROUP BY mode, granted;

-- Queries waiting for locks
SELECT pid, query, wait_event_type, wait_event,
       age(now(), query_start) AS duration
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
ORDER BY query_start;

-- Replication lag (if using replicas)
SELECT client_addr, state,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS bytes_behind,
       replay_lag
FROM pg_stat_replication;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  10. Enforce Best Practices in CI
&lt;/h2&gt;

&lt;p&gt;The best migration practices are worthless if they are not enforced. Add automated checks to your CI pipeline that catch common mistakes before they reach production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missing CONCURRENTLY on index operations&lt;/li&gt;
&lt;li&gt;Missing lock_timeout before DDL&lt;/li&gt;
&lt;li&gt;Volatile defaults that cause table rewrites&lt;/li&gt;
&lt;li&gt;Direct NOT NULL additions without the CHECK pattern&lt;/li&gt;
&lt;li&gt;Multiple DDL in a single transaction&lt;/li&gt;
&lt;li&gt;Unbatched backfills&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; checks all of these patterns (and 80 more) as a CLI, GitHub Action, or Node.js library. It runs in under a second, requires no database connection for static analysis, and posts results as PR comments so your team catches issues during code review.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add to .github/workflows/migration-check.yml
- uses: mickelsamuel/migrationpilot@v1
  with:
    migration-path: "migrations/*.sql"
    fail-on: critical
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>devops</category>
      <category>backend</category>
    </item>
    <item>
      <title>The Complete Guide to PostgreSQL Lock Types for Schema Changes</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:34 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/the-complete-guide-to-postgresql-lock-types-for-schema-changes-4b70</link>
      <guid>https://dev.to/mickelsamuel/the-complete-guide-to-postgresql-lock-types-for-schema-changes-4b70</guid>
      <description>&lt;p&gt;If you run DDL against a production PostgreSQL database without understanding locks, you will cause an outage. That is not a maybe. This guide covers every lock level, which statements acquire which locks, and what that means for your running application.&lt;/p&gt;

&lt;h2&gt;
  
  
  PostgreSQL Lock Levels: The Full Hierarchy
&lt;/h2&gt;

&lt;p&gt;PostgreSQL has eight table-level lock modes, ordered from least restrictive to most restrictive. Two locks conflict when one would violate the guarantees of the other. Understanding this hierarchy is the foundation of safe schema changes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Lock Level&lt;/th&gt;
&lt;th&gt;Blocks Reads?&lt;/th&gt;
&lt;th&gt;Blocks Writes?&lt;/th&gt;
&lt;th&gt;Acquired By&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ACCESS SHARE&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;SELECT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ROW SHARE&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;SELECT FOR UPDATE/SHARE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ROW EXCLUSIVE&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;INSERT, UPDATE, DELETE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SHARE UPDATE EXCLUSIVE&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;VACUUM, CREATE INDEX CONCURRENTLY, ALTER TABLE (some)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SHARE&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;CREATE INDEX (without CONCURRENTLY)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SHARE ROW EXCLUSIVE&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;CREATE TRIGGER, some ALTER TABLE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EXCLUSIVE&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;REFRESH MATERIALIZED VIEW CONCURRENTLY&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;DROP, TRUNCATE, most ALTER TABLE, VACUUM FULL, CLUSTER&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key insight: ACCESS EXCLUSIVE is the only lock that blocks reads. Everything else allows SELECT queries to continue. But anything from SHARE upward blocks writes (INSERT, UPDATE, DELETE), which is still devastating for a production application.&lt;/p&gt;

&lt;h2&gt;
  
  
  ACCESS SHARE: The Harmless Lock
&lt;/h2&gt;

&lt;p&gt;Every &lt;code&gt;SELECT&lt;/code&gt; statement acquires an ACCESS SHARE lock. It only conflicts with ACCESS EXCLUSIVE. This is why you can run queries while most DDL is happening, but a &lt;code&gt;DROP TABLE&lt;/code&gt; will block all your reads.&lt;/p&gt;

&lt;p&gt;ACCESS SHARE is released when the transaction that acquired it ends (for a standalone &lt;code&gt;SELECT&lt;/code&gt;, that is the end of the statement). The one caveat: a long-running query holds ACCESS SHARE for its entire duration, which is exactly how an analytics query ends up blocking an ACCESS EXCLUSIVE ALTER queued behind it.&lt;/p&gt;

&lt;h2&gt;
  
  
  SHARE UPDATE EXCLUSIVE: The Safe DDL Lock
&lt;/h2&gt;

&lt;p&gt;This lock is the sweet spot for online operations. It self-conflicts (you cannot run two concurrent operations that both need it) but does not block reads or writes. Operations that acquire this lock include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;VACUUM&lt;/code&gt; (without FULL)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CREATE INDEX CONCURRENTLY&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ALTER TABLE VALIDATE CONSTRAINT&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ALTER TABLE SET STATISTICS&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;REINDEX CONCURRENTLY&lt;/code&gt; (PG 12+)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why &lt;code&gt;CREATE INDEX CONCURRENTLY&lt;/code&gt; is the gold standard for adding indexes to production tables. It takes longer, but your application keeps running without any interruption.&lt;/p&gt;

&lt;h2&gt;
  
  
  SHARE: The Write-Blocking Lock
&lt;/h2&gt;

&lt;p&gt;A regular &lt;code&gt;CREATE INDEX&lt;/code&gt; (without CONCURRENTLY) acquires a SHARE lock. This blocks all INSERT, UPDATE, and DELETE operations on the table for the entire duration of the index build.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- This blocks all writes for the entire index build duration
-- On a 100M row table, this could be 10+ minutes
CREATE INDEX idx_users_email ON users (email);

-- This is the safe alternative — only blocks other DDL
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On a small table (under 100K rows), the difference is negligible. On a 50M row table, a regular CREATE INDEX might take several minutes, and every write to that table queues up behind it. Connection pools fill up, application requests time out, and your users see errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  ACCESS EXCLUSIVE: The Total Lockout
&lt;/h2&gt;

&lt;p&gt;ACCESS EXCLUSIVE is the most dangerous lock in PostgreSQL. It blocks everything: reads, writes, and all other lock types. Nothing can touch the table until the lock is released.&lt;/p&gt;

&lt;p&gt;These operations acquire ACCESS EXCLUSIVE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;DROP TABLE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TRUNCATE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ALTER TABLE ADD COLUMN ... DEFAULT (volatile)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ALTER TABLE ALTER COLUMN TYPE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ALTER TABLE SET NOT NULL&lt;/code&gt; (without CHECK pattern)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ALTER TABLE ADD CONSTRAINT ... UNIQUE&lt;/code&gt; (without USING INDEX)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;VACUUM FULL&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CLUSTER&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;LOCK TABLE&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Lock Queue Problem
&lt;/h2&gt;

&lt;p&gt;Here is the part that surprises most engineers: PostgreSQL lock acquisition is a FIFO queue. If your DDL statement is waiting for an ACCESS EXCLUSIVE lock, every subsequent query that needs a conflicting lock queues up behind it.&lt;/p&gt;

&lt;p&gt;Consider this scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A long-running analytics query holds ACCESS SHARE on the &lt;code&gt;users&lt;/code&gt; table&lt;/li&gt;
&lt;li&gt;Your migration runs &lt;code&gt;ALTER TABLE users SET NOT NULL ...&lt;/code&gt; which needs ACCESS EXCLUSIVE&lt;/li&gt;
&lt;li&gt;The ALTER waits behind the analytics query&lt;/li&gt;
&lt;li&gt;Every new SELECT on &lt;code&gt;users&lt;/code&gt; queues behind the ALTER&lt;/li&gt;
&lt;li&gt;Your connection pool fills up within seconds&lt;/li&gt;
&lt;li&gt;Application outage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is why &lt;code&gt;SET lock_timeout&lt;/code&gt; is essential. If you set a lock timeout of 5 seconds, the ALTER will fail fast instead of queuing indefinitely:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Always set a lock_timeout before DDL in production
SET lock_timeout = '5s';
ALTER TABLE users ALTER COLUMN email SET NOT NULL;

-- If the lock cannot be acquired within 5 seconds,
-- PostgreSQL raises: ERROR: canceling statement due to lock timeout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Lock Conflicts: Which Locks Block Which
&lt;/h2&gt;

&lt;p&gt;The lock conflict matrix is not obvious. Two locks conflict only when one would violate the guarantees the other provides. Here are the most important conflicts to remember:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ACCESS EXCLUSIVE&lt;/strong&gt; conflicts with everything, including ACCESS SHARE (SELECT)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SHARE&lt;/strong&gt; conflicts with ROW EXCLUSIVE (INSERT/UPDATE/DELETE), so CREATE INDEX blocks writes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SHARE UPDATE EXCLUSIVE&lt;/strong&gt; conflicts with itself, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE, but never with plain reads or writes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ROW EXCLUSIVE&lt;/strong&gt; does not conflict with itself (concurrent writes are fine) but conflicts with SHARE and above&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Common DDL Statements and Their Lock Types
&lt;/h2&gt;

&lt;p&gt;Here is a practical reference for the DDL you are most likely to run in migrations:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- ACCESS EXCLUSIVE (blocks everything)
ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active';  -- full rewrite on PG &amp;lt; 11
ALTER TABLE users ALTER COLUMN age TYPE bigint;
ALTER TABLE users ADD CONSTRAINT uniq UNIQUE (email);
DROP TABLE old_users;
TRUNCATE users;
VACUUM FULL users;
CLUSTER users USING idx_id;

-- SHARE (blocks writes, allows reads)
CREATE INDEX idx_email ON users (email);

-- SHARE UPDATE EXCLUSIVE (blocks nothing important)
CREATE INDEX CONCURRENTLY idx_email ON users (email);
ALTER TABLE users VALIDATE CONSTRAINT chk_email;
VACUUM users;

-- SHARE ROW EXCLUSIVE (blocks writes)
ALTER TABLE users ADD CONSTRAINT fk_org
  FOREIGN KEY (org_id) REFERENCES orgs(id) NOT VALID;

-- ROW EXCLUSIVE (normal DML — no DDL concern)
INSERT INTO users ...
UPDATE users SET ...
DELETE FROM users ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Practical Guidelines
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Always set lock_timeout
&lt;/h3&gt;

&lt;p&gt;Before any DDL in production, set a lock_timeout. Five seconds is a good default. If the lock cannot be acquired in that time, it is better to fail and retry than to queue up all traffic.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Use CONCURRENTLY whenever possible
&lt;/h3&gt;

&lt;p&gt;CREATE INDEX, DROP INDEX, REINDEX, and REFRESH MATERIALIZED VIEW all have CONCURRENTLY variants. They take longer but do not block reads or writes (REFRESH MATERIALIZED VIEW CONCURRENTLY additionally requires a unique index on the view). Use them in production by default, and remember that the index variants cannot run inside a transaction block.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Avoid ACCESS EXCLUSIVE unless necessary
&lt;/h3&gt;

&lt;p&gt;Most ACCESS EXCLUSIVE operations have safer alternatives. Adding NOT NULL can use the CHECK pattern. Adding a UNIQUE constraint can use CREATE UNIQUE INDEX CONCURRENTLY + ADD CONSTRAINT USING INDEX. Column type changes can use expand-contract with a new column.&lt;/p&gt;
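
&lt;p&gt;The two most common replacements can be sketched as follows (assuming a hypothetical &lt;code&gt;users&lt;/code&gt; table; the CONCURRENTLY statement must run outside a transaction block, and the PG 12+ shortcut noted below is version-dependent):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- NOT NULL via the CHECK pattern (avoids a full scan under ACCESS EXCLUSIVE)
ALTER TABLE users ADD CONSTRAINT users_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;                         -- instant
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;    -- SHARE UPDATE EXCLUSIVE only
-- On PG 12+, SET NOT NULL sees the validated CHECK and skips the scan:
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT users_email_not_null;

-- UNIQUE via an existing index (avoids building the index under lock)
CREATE UNIQUE INDEX CONCURRENTLY uniq_users_email ON users (email);
ALTER TABLE users ADD CONSTRAINT uniq_users_email
  UNIQUE USING INDEX uniq_users_email;                         -- brief lock, no build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;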
&lt;h3&gt;
  
  
  4. Keep transactions short
&lt;/h3&gt;

&lt;p&gt;Locks are held until the end of the transaction. If you wrap multiple DDL statements in a single transaction, you hold the most restrictive lock for the entire duration. Split large migrations into separate transactions when possible.&lt;/p&gt;
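
&lt;p&gt;A minimal illustration of the difference (hypothetical statements, not a real migration):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- RISKY: the ACCESS EXCLUSIVE lock from the ALTER is held
-- until COMMIT, including during the slow backfill
BEGIN;
ALTER TABLE users ADD COLUMN plan TEXT;             -- brief ACCESS EXCLUSIVE
UPDATE users SET plan = 'free' WHERE plan IS NULL;  -- long scan under that lock
COMMIT;

-- BETTER: each statement is its own transaction,
-- so the DDL lock is released in milliseconds
ALTER TABLE users ADD COLUMN plan TEXT;
UPDATE users SET plan = 'free' WHERE plan IS NULL;  -- batch this on big tables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;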
&lt;h2&gt;
  
  
  How to Check Current Locks
&lt;/h2&gt;

&lt;p&gt;You can inspect active locks with this query:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT
  l.locktype,
  l.relation::regclass AS table_name,
  l.mode,
  l.granted,
  l.pid,
  a.query,
  a.state,
  age(now(), a.query_start) AS duration
FROM pg_locks l
JOIN pg_stat_activity a ON l.pid = a.pid
WHERE l.relation IS NOT NULL
  AND a.pid &amp;lt;&amp;gt; pg_backend_pid()
ORDER BY a.query_start;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And to see blocked queries:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT
  blocked.pid AS blocked_pid,
  blocked.query AS blocked_query,
  blocking.pid AS blocking_pid,
  blocking.query AS blocking_query,
  age(now(), blocked.query_start) AS waiting_duration
FROM pg_stat_activity blocked
JOIN pg_locks bl ON blocked.pid = bl.pid AND NOT bl.granted
JOIN pg_locks l ON bl.relation = l.relation
  AND bl.locktype = l.locktype AND l.granted
JOIN pg_stat_activity blocking ON l.pid = blocking.pid
WHERE blocked.pid &amp;lt;&amp;gt; blocking.pid;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Automate Lock Analysis
&lt;/h2&gt;

&lt;p&gt;Memorizing lock types is error-prone. Tools can catch lock issues before they reach production. &lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; analyzes your migration SQL and reports the exact lock type each statement will acquire, flags dangerous patterns like missing CONCURRENTLY or lock_timeout, and suggests safe alternatives. It runs as a CLI, GitHub Action, or Node.js library with 80 safety rules.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx migrationpilot analyze migrations/002_add_index.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>sql</category>
      <category>backend</category>
    </item>
    <item>
      <title>CREATE INDEX CONCURRENTLY: The Complete PostgreSQL Guide</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:30 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/create-index-concurrently-the-complete-postgresql-guide-b7m</link>
      <guid>https://dev.to/mickelsamuel/create-index-concurrently-the-complete-postgresql-guide-b7m</guid>
      <description>&lt;p&gt;CREATE INDEX CONCURRENTLY is the single most important command for safely adding indexes to production PostgreSQL tables. This guide covers how it works, why it sometimes fails, how to handle failures, and the related REINDEX CONCURRENTLY command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Regular CREATE INDEX Is Dangerous
&lt;/h2&gt;

&lt;p&gt;A regular &lt;code&gt;CREATE INDEX&lt;/code&gt; acquires a SHARE lock on the table. This lock blocks all INSERT, UPDATE, and DELETE operations for the entire duration of the index build. On a large table, that can mean minutes or even hours of write downtime.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- DANGEROUS: Blocks all writes for the entire build duration
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- On a 50M row table, this might take 3-5 minutes
-- Every INSERT, UPDATE, DELETE queues behind this lock
-- Connection pools fill up, application errors cascade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  How CREATE INDEX CONCURRENTLY Works
&lt;/h2&gt;

&lt;p&gt;CREATE INDEX CONCURRENTLY builds the index without holding a lock that blocks writes. Instead of scanning the table once under a SHARE lock, it performs the build in three phases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Phase 1 — Catalog entry:&lt;/strong&gt; PostgreSQL creates the index entry in the system catalog with an "invalid" flag. This requires a brief SHARE UPDATE EXCLUSIVE lock (does not block reads or writes). The index is not yet usable by the query planner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 2 — First table scan:&lt;/strong&gt; PostgreSQL scans the entire table and builds the index entries. This happens under a SHARE UPDATE EXCLUSIVE lock, which allows concurrent INSERT, UPDATE, and DELETE operations to continue. Changes that happen during the scan are tracked.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 3 — Second table scan:&lt;/strong&gt; PostgreSQL does a second pass to catch any rows that were modified during Phase 2, then waits for any transactions that started before Phase 2 to complete. Once that is done, the index is marked as valid and becomes usable by the query planner.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- SAFE: Does not block reads or writes
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);

-- Takes longer than regular CREATE INDEX (roughly 2-3x)
-- but your application continues running normally
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  The Transaction Restriction
&lt;/h2&gt;

&lt;p&gt;CREATE INDEX CONCURRENTLY cannot run inside a transaction block. This is a hard PostgreSQL limitation. If you try, you get an error:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- This fails immediately
BEGIN;
CREATE INDEX CONCURRENTLY idx_email ON users (email);
COMMIT;
-- ERROR: CREATE INDEX CONCURRENTLY cannot run inside a transaction block

-- The solution: run it outside a transaction
CREATE INDEX CONCURRENTLY idx_email ON users (email);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This matters because many migration frameworks wrap each migration file in a transaction by default. If your framework does this (Flyway, Alembic, Django, Rails), you need to configure it to run specific migrations outside a transaction. Check your framework's documentation for how to do this.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;How to Disable Transaction&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Django&lt;/td&gt;
&lt;td&gt;atomic = False on the Migration class&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rails&lt;/td&gt;
&lt;td&gt;disable_ddl_transaction!&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flyway&lt;/td&gt;
&lt;td&gt;executeInTransaction=false (in SQL comment)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alembic&lt;/td&gt;
&lt;td&gt;op.execute() with connection.execution_options(isolation_level="AUTOCOMMIT")&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Knex&lt;/td&gt;
&lt;td&gt;knex.schema.raw() outside transaction&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Failure Modes and Recovery
&lt;/h2&gt;

&lt;p&gt;CREATE INDEX CONCURRENTLY can fail partway through. Unlike a regular CREATE INDEX (which rolls back cleanly on failure), a failed CONCURRENTLY operation leaves behind an &lt;strong&gt;invalid index&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Common failure causes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deadlock with another transaction&lt;/li&gt;
&lt;li&gt;Unique constraint violation (for UNIQUE indexes)&lt;/li&gt;
&lt;li&gt;Out of disk space&lt;/li&gt;
&lt;li&gt;statement_timeout or lock_timeout reached&lt;/li&gt;
&lt;li&gt;A long-running transaction that prevents Phase 3 from completing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Detecting invalid indexes
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Find all invalid indexes in the database
SELECT
  schemaname,
  tablename,
  indexname,
  indexdef
FROM pg_indexes
WHERE indexname IN (
  SELECT indexrelid::regclass::text
  FROM pg_index
  WHERE NOT indisvalid
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Recovering from a failed build
&lt;/h3&gt;

&lt;p&gt;You have two options when a concurrent index build fails:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Option 1: Drop and rebuild
DROP INDEX CONCURRENTLY idx_orders_customer_id;
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);

-- Option 2: Use REINDEX CONCURRENTLY (PG 12+)
-- This replaces the invalid index in-place
REINDEX INDEX CONCURRENTLY idx_orders_customer_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;REINDEX CONCURRENTLY (available since PostgreSQL 12) is generally preferred because it rebuilds the index in a single operation. It creates a new index with a temporary name, swaps it with the old one, and drops the old one — all without blocking writes.&lt;/p&gt;

&lt;h2&gt;
  
  
  REINDEX CONCURRENTLY (PostgreSQL 12+)
&lt;/h2&gt;

&lt;p&gt;REINDEX CONCURRENTLY was added in PostgreSQL 12. It rebuilds an existing index without blocking writes, similar to how CREATE INDEX CONCURRENTLY works. This is useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rebuilding invalid indexes left by failed CONCURRENTLY operations&lt;/li&gt;
&lt;li&gt;Rebuilding bloated indexes (index bloat can slow down queries significantly)&lt;/li&gt;
&lt;li&gt;Rebuilding corrupt indexes&lt;/li&gt;
&lt;li&gt;Applying updated index parameters (e.g., a changed fillfactor)&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Rebuild a single index without blocking writes
REINDEX INDEX CONCURRENTLY idx_orders_customer_id;

-- Rebuild all indexes on a table
REINDEX TABLE CONCURRENTLY orders;

-- Rebuild all indexes in a schema
REINDEX SCHEMA CONCURRENTLY public;

-- DANGEROUS: Without CONCURRENTLY, REINDEX holds ACCESS EXCLUSIVE
-- This blocks all reads and writes
REINDEX INDEX idx_orders_customer_id;  -- Don't do this in production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Performance Considerations
&lt;/h2&gt;

&lt;p&gt;CREATE INDEX CONCURRENTLY is slower than regular CREATE INDEX. Expect roughly 2-3x the build time because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It scans the table twice instead of once&lt;/li&gt;
&lt;li&gt;It needs to track concurrent modifications between the two scans&lt;/li&gt;
&lt;li&gt;It waits for existing transactions to complete before finalizing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can speed up concurrent index builds by tuning these settings (for the session only):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Increase maintenance_work_mem for faster index builds
-- Default is typically 64MB — increase for large tables
SET maintenance_work_mem = '1GB';

-- Increase max_parallel_maintenance_workers (PG 11+)
-- Allows parallel index builds
SET max_parallel_maintenance_workers = 4;

-- Then create the index
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);

-- Reset settings
RESET maintenance_work_mem;
RESET max_parallel_maintenance_workers;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  DROP INDEX CONCURRENTLY
&lt;/h2&gt;

&lt;p&gt;Just like creation, dropping an index normally acquires an ACCESS EXCLUSIVE lock. Use DROP INDEX CONCURRENTLY to avoid blocking reads and writes:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- DANGEROUS: Blocks all reads and writes
DROP INDEX idx_orders_old;

-- SAFE: Does not block reads or writes
DROP INDEX CONCURRENTLY idx_orders_old;

-- Note: Like CREATE, DROP INDEX CONCURRENTLY cannot run
-- inside a transaction block
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Retry Pattern for CI/CD
&lt;/h2&gt;

&lt;p&gt;Since CREATE INDEX CONCURRENTLY can fail due to transient conditions (deadlocks, long-running transactions), a robust migration strategy includes retry logic:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# retry-create-index.sh
MAX_RETRIES=3
RETRY_DELAY=10

for i in $(seq 1 $MAX_RETRIES); do
  echo "Attempt $i: Creating index..."

  # Drop invalid index if it exists from a previous failed attempt
  psql -c "DROP INDEX CONCURRENTLY IF EXISTS idx_orders_customer_id;" 2&amp;gt;/dev/null

  # Try to create the index
  if psql -c "CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);"; then
    echo "Index created successfully"
    exit 0
  fi

  echo "Failed on attempt $i, waiting $RETRY_DELAY seconds..."
  sleep $RETRY_DELAY
done

echo "Failed after $MAX_RETRIES attempts"
exit 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Unique Index Considerations
&lt;/h2&gt;

&lt;p&gt;Creating a UNIQUE index concurrently has an additional failure mode: if duplicate values exist in the column, the build fails and leaves an invalid index. Always check for duplicates first:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Check for duplicates before creating a unique index
SELECT email, count(*)
FROM users
GROUP BY email
HAVING count(*) &amp;gt; 1
LIMIT 10;

-- If duplicates exist, resolve them first
-- Then create the unique index
CREATE UNIQUE INDEX CONCURRENTLY idx_users_email ON users (email);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Summary: The Rules
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Always use CONCURRENTLY for CREATE INDEX, DROP INDEX, and REINDEX in production&lt;/li&gt;
&lt;li&gt;Never run CONCURRENTLY operations inside a transaction block&lt;/li&gt;
&lt;li&gt;Check for invalid indexes after failed CONCURRENTLY operations&lt;/li&gt;
&lt;li&gt;Use REINDEX CONCURRENTLY (PG 12+) to rebuild failed or bloated indexes&lt;/li&gt;
&lt;li&gt;Implement retry logic in your deployment scripts&lt;/li&gt;
&lt;li&gt;Increase maintenance_work_mem for large table index builds&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Automate the Check
&lt;/h2&gt;

&lt;p&gt;Forgetting CONCURRENTLY on a CREATE INDEX is one of the most common migration mistakes. &lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; catches this with rule MP001 (require-concurrent-index) and can auto-fix it — adding CONCURRENTLY and flagging if the statement is inside a transaction block. It also catches missing CONCURRENTLY on DROP INDEX (MP009) and REINDEX (MP021). Add it to your CI pipeline to enforce these patterns automatically.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx migrationpilot analyze migrations/004_add_indexes.sql --fix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>sql</category>
      <category>backend</category>
    </item>
    <item>
      <title>How to Safely Add a Column with a Default Value in PostgreSQL</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:26 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/how-to-safely-add-a-column-with-a-default-value-in-postgresql-9f2</link>
      <guid>https://dev.to/mickelsamuel/how-to-safely-add-a-column-with-a-default-value-in-postgresql-9f2</guid>
      <description>&lt;p&gt;Adding a column with a DEFAULT value is one of the most common migration operations. It is also one of the most misunderstood. Whether it takes milliseconds or causes a full table rewrite depends on your PostgreSQL version and what your DEFAULT expression looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Table Rewrites
&lt;/h2&gt;

&lt;p&gt;Before PostgreSQL 11, adding a column with a DEFAULT value always rewrote the entire table. PostgreSQL had to physically write the default value into every existing row. During this rewrite, the table was locked with an ACCESS EXCLUSIVE lock, blocking all reads and writes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- PostgreSQL 10 and earlier: FULL TABLE REWRITE
-- On a 100M row table, this could take 10+ minutes
-- The table is completely locked during the entire operation
ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On a table with 100 million rows, this could take 10 minutes or more. During that time, every query against the table — reads included — queues up. Connection pools fill up, application timeouts cascade, and users see errors. This was one of the most common causes of database outages in production PostgreSQL deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  PostgreSQL 11: The Fast-Path Default
&lt;/h2&gt;

&lt;p&gt;PostgreSQL 11 (released October 2018) introduced a significant optimization. When you add a column with a non-volatile DEFAULT, PostgreSQL stores the default value in the catalog (&lt;code&gt;pg_attribute.attmissingval&lt;/code&gt;) instead of writing it to every row. The operation completes in milliseconds regardless of table size.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- PostgreSQL 11+: INSTANT operation (no table rewrite)
-- Works because 'active' is a constant (non-volatile)
ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active';

-- Also instant — integer constant
ALTER TABLE orders ADD COLUMN version INT DEFAULT 1;

-- Also instant — boolean constant
ALTER TABLE features ADD COLUMN enabled BOOLEAN DEFAULT false;

-- Also instant — NULL default (always was instant)
ALTER TABLE users ADD COLUMN middle_name TEXT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When a query reads a row that was created before the column was added, PostgreSQL automatically returns the stored default value. Rows created after the ALTER TABLE include the column value physically. Over time, as rows are updated, the default value gets written to disk naturally through MVCC.&lt;/p&gt;
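
&lt;p&gt;You can verify that the fast path was used with a quick catalog check (the &lt;code&gt;atthasmissing&lt;/code&gt; and &lt;code&gt;attmissingval&lt;/code&gt; columns exist in PostgreSQL 11+; the table and column names here are the ones from the example above):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- atthasmissing = true means the default lives in the catalog,
-- not in the existing rows (no rewrite happened)
SELECT attname, atthasmissing, attmissingval
FROM pg_attribute
WHERE attrelid = 'users'::regclass
  AND attname = 'status';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;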

&lt;h2&gt;
  
  
  Volatile vs Non-Volatile Defaults
&lt;/h2&gt;

&lt;p&gt;The optimization only works for &lt;strong&gt;non-volatile&lt;/strong&gt; defaults (immutable or stable expressions). A non-volatile expression can be evaluated once at ALTER time and the result reused for every existing row. A volatile expression might return a different value each time it is evaluated, so it cannot be.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Expression&lt;/th&gt;
&lt;th&gt;Volatility&lt;/th&gt;
&lt;th&gt;Table Rewrite?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;'active'&lt;/td&gt;
&lt;td&gt;Immutable&lt;/td&gt;
&lt;td&gt;No (instant)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;42&lt;/td&gt;
&lt;td&gt;Immutable&lt;/td&gt;
&lt;td&gt;No (instant)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;true / false&lt;/td&gt;
&lt;td&gt;Immutable&lt;/td&gt;
&lt;td&gt;No (instant)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;'{}'::jsonb&lt;/td&gt;
&lt;td&gt;Immutable&lt;/td&gt;
&lt;td&gt;No (instant)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;now()&lt;/td&gt;
&lt;td&gt;Stable&lt;/td&gt;
&lt;td&gt;No (instant; evaluated once at ALTER time)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gen_random_uuid()&lt;/td&gt;
&lt;td&gt;Volatile&lt;/td&gt;
&lt;td&gt;Yes (full rewrite)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;random()&lt;/td&gt;
&lt;td&gt;Volatile&lt;/td&gt;
&lt;td&gt;Yes (full rewrite)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;clock_timestamp()&lt;/td&gt;
&lt;td&gt;Volatile&lt;/td&gt;
&lt;td&gt;Yes (full rewrite)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nextval()&lt;/td&gt;
&lt;td&gt;Volatile&lt;/td&gt;
&lt;td&gt;Yes (full rewrite)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The distinction matters because a volatile function may produce a different value for each row. PostgreSQL cannot store a single precomputed value in the catalog and reuse it for all existing rows; it would have to evaluate the function once per row, which requires a table rewrite.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Safe Pattern for Volatile Defaults
&lt;/h2&gt;

&lt;p&gt;When you need a volatile default (like &lt;code&gt;gen_random_uuid()&lt;/code&gt;), split the operation into three steps:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 1: Add the column without a default (instant, no rewrite)
ALTER TABLE users ADD COLUMN created_at TIMESTAMPTZ;

-- Step 2: Set the default for new rows (instant, metadata-only)
ALTER TABLE users ALTER COLUMN created_at SET DEFAULT now();

-- Step 3: Backfill existing rows in batches (no lock)
UPDATE users SET created_at = now()
WHERE id BETWEEN 1 AND 10000 AND created_at IS NULL;

UPDATE users SET created_at = now()
WHERE id BETWEEN 10001 AND 20000 AND created_at IS NULL;
-- ... continue in batches
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This approach never locks the table for more than milliseconds. The backfill runs as normal DML, which only acquires ROW EXCLUSIVE locks. You can batch the updates to avoid holding locks for too long and to give autovacuum time to clean up dead tuples between batches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Pitfall: UUID Primary Keys
&lt;/h2&gt;

&lt;p&gt;One of the most common mistakes is adding a UUID column with &lt;code&gt;gen_random_uuid()&lt;/code&gt; as the default:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- DANGEROUS: gen_random_uuid() is volatile
-- This rewrites the entire table on PG 11+
ALTER TABLE orders ADD COLUMN external_id UUID DEFAULT gen_random_uuid();

-- SAFE: Split into separate operations
ALTER TABLE orders ADD COLUMN external_id UUID;
ALTER TABLE orders ALTER COLUMN external_id SET DEFAULT gen_random_uuid();
-- Then backfill existing rows in batches
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Common Pitfall: Timestamps
&lt;/h2&gt;

&lt;p&gt;Another frequent source of confusion is adding a &lt;code&gt;created_at&lt;/code&gt; or &lt;code&gt;updated_at&lt;/code&gt; column with &lt;code&gt;now()&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- DANGEROUS: now() is stable within a transaction
-- but PostgreSQL classifies it as volatile for this purpose
ALTER TABLE events ADD COLUMN created_at TIMESTAMPTZ DEFAULT now();

-- SAFE: Use the three-step pattern
ALTER TABLE events ADD COLUMN created_at TIMESTAMPTZ;
ALTER TABLE events ALTER COLUMN created_at SET DEFAULT now();
-- Backfill in batches...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A subtle detail: &lt;code&gt;now()&lt;/code&gt; is STABLE (it returns the same value throughout a transaction), and that is enough for the fast-path optimization: PostgreSQL evaluates the default once during the ALTER TABLE and stores that single value in the catalog. The consequence is semantic rather than operational. All pre-existing rows report the migration timestamp, not their true creation time. On PG 10 and earlier, or with a genuinely volatile function like &lt;code&gt;clock_timestamp()&lt;/code&gt;, the single-statement form rewrites the table, so the three-step pattern remains the safe default.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backfill Strategies
&lt;/h2&gt;

&lt;p&gt;When backfilling existing rows, there are several approaches:&lt;/p&gt;

&lt;h3&gt;
  
  
  Batched updates by primary key range
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Process 10,000 rows at a time
DO $$
DECLARE
  batch_size INT := 10000;
  min_id BIGINT;
  max_id BIGINT;
  current_id BIGINT;
BEGIN
  SELECT min(id), max(id) INTO min_id, max_id FROM users;
  current_id := min_id;

  WHILE current_id &amp;lt;= max_id LOOP
    UPDATE users
    SET status = 'active'
    WHERE id &amp;gt;= current_id
      AND id &amp;lt; current_id + batch_size
      AND status IS NULL;

    current_id := current_id + batch_size;
    COMMIT;  -- Release locks between batches (allowed in DO blocks on PG 11+)
  END LOOP;
END $$;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Batched updates from a shell script
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- For large backfills, a loop in a script is often simpler
-- Run this from your application or a migration runner:

-- Bash / psql loop:
for offset in $(seq 0 10000 1000000); do
  psql -c "UPDATE users SET status = 'active'
           WHERE id IN (SELECT id FROM users
                        WHERE status IS NULL
                        ORDER BY id LIMIT 10000)"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  How to Check if a Default Triggers a Rewrite
&lt;/h2&gt;

&lt;p&gt;You can check the volatility of a function using:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT proname, provolatile
FROM pg_proc
WHERE proname = 'now';

-- provolatile values:
-- 'i' = immutable (safe, no rewrite)
-- 's' = stable (no rewrite on PG 11+; evaluated once at ALTER time)
-- 'v' = volatile (triggers a full table rewrite)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If &lt;code&gt;provolatile&lt;/code&gt; is &lt;code&gt;'v'&lt;/code&gt; (volatile), the default will trigger a full table rewrite when used in ADD COLUMN; use the three-step pattern instead. Immutable and stable defaults take the fast path on PG 11+.&lt;/p&gt;

&lt;h2&gt;
  
  
  Version-Specific Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;PG Version&lt;/th&gt;
&lt;th&gt;Non-Volatile Default&lt;/th&gt;
&lt;th&gt;Volatile Default&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PG 10 and earlier&lt;/td&gt;
&lt;td&gt;Full table rewrite&lt;/td&gt;
&lt;td&gt;Full table rewrite&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PG 11+&lt;/td&gt;
&lt;td&gt;Instant (catalog-only)&lt;/td&gt;
&lt;td&gt;Full table rewrite&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Catch This Automatically
&lt;/h2&gt;

&lt;p&gt;Remembering which defaults are volatile and which are safe is easy to get wrong under pressure. &lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; detects volatile defaults in ADD COLUMN statements (rule MP003) and suggests the safe three-step pattern automatically. It also auto-detects your PostgreSQL version and adjusts its advice accordingly. Run it in your CI pipeline to catch these issues before they reach production.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx migrationpilot analyze migrations/003_add_timestamps.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>sql</category>
      <category>devops</category>
    </item>
    <item>
      <title>Adding NOT NULL Constraints to Existing PostgreSQL Columns Safely</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:22 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/adding-not-null-constraints-to-existing-postgresql-columns-safely-5fk9</link>
      <guid>https://dev.to/mickelsamuel/adding-not-null-constraints-to-existing-postgresql-columns-safely-5fk9</guid>
      <description>&lt;p&gt;Adding a NOT NULL constraint to an existing column is one of the most common schema changes, and one of the most dangerous. The naive approach scans the entire table under an ACCESS EXCLUSIVE lock. On a large table, this causes an outage. Here is the safe pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with SET NOT NULL
&lt;/h2&gt;

&lt;p&gt;When you run &lt;code&gt;ALTER TABLE ... ALTER COLUMN ... SET NOT NULL&lt;/code&gt;, PostgreSQL must verify that no existing rows have NULL values in that column. To do this, it scans the entire table while holding an ACCESS EXCLUSIVE lock.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- DANGEROUS: Full table scan under ACCESS EXCLUSIVE
-- On a 50M row table, this blocks ALL queries for minutes
ALTER TABLE users ALTER COLUMN email SET NOT NULL;

-- What happens internally:
-- 1. Acquires ACCESS EXCLUSIVE lock (blocks everything)
-- 2. Scans every row to verify email IS NOT NULL
-- 3. If any NULL found: fails with error
-- 4. If all rows pass: sets the constraint
-- 5. Releases lock

-- During step 2, no query can read or write the table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;ACCESS EXCLUSIVE means nothing else can happen on this table. No SELECT, no INSERT, no UPDATE, no DELETE. And it queues — any query that tries to access the table while the scan is running gets queued, filling up your connection pool.&lt;/p&gt;
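&lt;p&gt;You can watch that queue form in real time. A monitoring sketch using &lt;code&gt;pg_stat_activity&lt;/code&gt; and &lt;code&gt;pg_blocking_pids()&lt;/code&gt; (available since PG 9.6):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Which sessions are blocked, and by whom?
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) &amp;gt; 0;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;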

&lt;h2&gt;
  
  
  The Safe Pattern: CHECK + VALIDATE
&lt;/h2&gt;

&lt;p&gt;The safe approach uses a CHECK constraint with NOT VALID to split the operation into two steps with different lock levels:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 1: Add the constraint as NOT VALID
-- Acquires ACCESS EXCLUSIVE but is INSTANT (no table scan)
-- Only new inserts/updates are checked going forward
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;

-- Step 2: Validate the constraint
-- Acquires SHARE UPDATE EXCLUSIVE (does NOT block reads or writes)
-- Scans the entire table to verify existing rows
ALTER TABLE users VALIDATE CONSTRAINT chk_email_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The key differences from the direct SET NOT NULL approach:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;SET NOT NULL&lt;/th&gt;
&lt;th&gt;CHECK + VALIDATE&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Lock type (add)&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE (instant)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lock type (validate)&lt;/td&gt;
&lt;td&gt;Same lock, same operation&lt;/td&gt;
&lt;td&gt;SHARE UPDATE EXCLUSIVE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blocks reads?&lt;/td&gt;
&lt;td&gt;Yes (during scan)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blocks writes?&lt;/td&gt;
&lt;td&gt;Yes (during scan)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;New rows checked?&lt;/td&gt;
&lt;td&gt;After constraint set&lt;/td&gt;
&lt;td&gt;Immediately after Step 1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  PostgreSQL 12+: Converting CHECK to NOT NULL
&lt;/h2&gt;

&lt;p&gt;Starting with PostgreSQL 12, if a validated CHECK constraint of the form &lt;code&gt;CHECK (column IS NOT NULL)&lt;/code&gt; exists, PostgreSQL recognizes that the column already has a not-null guarantee. In this case, &lt;code&gt;SET NOT NULL&lt;/code&gt; skips the table scan and completes instantly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Full safe pattern for PG 12+:

-- Step 1: Add NOT VALID CHECK constraint (instant)
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;

-- Step 2: Validate (scans table, but under safe lock)
ALTER TABLE users VALIDATE CONSTRAINT chk_email_not_null;

-- Step 3: Convert to real NOT NULL (instant — no scan needed!)
-- PG 12+ skips the scan because it sees the validated CHECK
ALTER TABLE users ALTER COLUMN email SET NOT NULL;

-- Step 4: Drop the now-redundant CHECK constraint
ALTER TABLE users DROP CONSTRAINT chk_email_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The result is identical to running SET NOT NULL directly, but without the dangerous full-table scan under ACCESS EXCLUSIVE. The total lock time is a few milliseconds in Steps 1, 3, and 4. Step 2 scans the table but does not block any queries.&lt;/p&gt;

&lt;h2&gt;
  
  
  PostgreSQL 11 and Earlier
&lt;/h2&gt;

&lt;p&gt;On PostgreSQL 11 and earlier, the SET NOT NULL optimization does not exist. After validating the CHECK constraint, you have two options:&lt;/p&gt;

&lt;h3&gt;
  
  
  Option A: Keep the CHECK constraint (recommended)
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Just use the CHECK constraint as your not-null guarantee
-- This provides the same data integrity as SET NOT NULL
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT chk_email_not_null;

-- Done. The CHECK constraint enforces not-null for all
-- existing and future rows. No need for SET NOT NULL.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The CHECK constraint provides the same guarantee as SET NOT NULL. The only difference is cosmetic: &lt;code&gt;\d users&lt;/code&gt; will not show "not null" next to the column, but the constraint is visible in the constraints section. Some ORMs may not recognize the CHECK constraint as a not-null guarantee, but the data integrity is identical.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option B: Accept the brief lock (small tables only)
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- If the table is small enough (&amp;lt; 100K rows), the scan
-- under ACCESS EXCLUSIVE is fast enough to be acceptable
-- Use lock_timeout to prevent the lock queue problem
SET lock_timeout = '5s';
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
RESET lock_timeout;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Handling Existing NULL Values
&lt;/h2&gt;

&lt;p&gt;Before adding a NOT NULL constraint, you need to ensure no NULL values exist. The VALIDATE step will fail if it finds any. Backfill NULLs first:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 0: Backfill NULL values (before adding the constraint)
-- Do this in batches to avoid long-running transactions

-- Check how many NULLs exist
SELECT count(*) FROM users WHERE email IS NULL;

-- Backfill in batches
UPDATE users
SET email = 'unknown@example.com'
WHERE id IN (
  SELECT id FROM users
  WHERE email IS NULL
  ORDER BY id
  LIMIT 10000
);
-- Repeat until no NULLs remain

-- Then proceed with the CHECK + VALIDATE pattern
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT chk_email_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Common Mistakes
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Forgetting NOT VALID
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD: Without NOT VALID, ADD CONSTRAINT scans the table
-- under SHARE ROW EXCLUSIVE (blocks writes)
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL);

-- GOOD: NOT VALID makes it instant
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  2. Running SET NOT NULL on PG 11 after CHECK validation
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD (on PG 11): SET NOT NULL still scans the table
-- even though the CHECK constraint is validated
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
-- This does a full scan under ACCESS EXCLUSIVE!

-- On PG 11: Just keep the CHECK constraint
-- On PG 12+: SET NOT NULL is instant (skips scan)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  3. Not setting lock_timeout
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- BAD: Both steps should have lock_timeout
-- Even though the locks are brief, they can queue

-- GOOD: Set lock_timeout for both operations
SET lock_timeout = '5s';
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;
RESET lock_timeout;

-- VALIDATE doesn't need lock_timeout (it uses a safe lock)
ALTER TABLE users VALIDATE CONSTRAINT chk_email_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  The Complete Pattern
&lt;/h2&gt;

&lt;p&gt;Here is the full, production-safe pattern for adding NOT NULL to an existing column:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Migration file: 005_add_email_not_null.sql

-- 1. Backfill any existing NULLs (if they exist)
-- Run this in a separate migration or script before this one
-- UPDATE users SET email = 'unknown@example.com' WHERE email IS NULL;

-- 2. Add NOT VALID CHECK constraint (instant, brief ACCESS EXCLUSIVE)
SET lock_timeout = '5s';
ALTER TABLE users ADD CONSTRAINT chk_email_not_null
  CHECK (email IS NOT NULL) NOT VALID;
RESET lock_timeout;

-- 3. Validate (full scan, but SHARE UPDATE EXCLUSIVE — safe)
ALTER TABLE users VALIDATE CONSTRAINT chk_email_not_null;

-- 4. Convert to formal NOT NULL (PG 12+ only — instant)
SET lock_timeout = '5s';
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
RESET lock_timeout;

-- 5. Drop redundant CHECK (PG 12+ only)
SET lock_timeout = '5s';
ALTER TABLE users DROP CONSTRAINT chk_email_not_null;
RESET lock_timeout;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Automate This Check
&lt;/h2&gt;

&lt;p&gt;Catching a direct SET NOT NULL before it reaches production is exactly the kind of thing that should be automated. &lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; flags SET NOT NULL without the CHECK pattern (rule MP002) and direct ADD CONSTRAINT without NOT VALID (rule MP030). It is version-aware, so on PG 12+ it knows that SET NOT NULL after a validated CHECK is safe. Add it to your CI pipeline to catch these patterns automatically.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx migrationpilot analyze migrations/005_add_email_not_null.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>sql</category>
      <category>backend</category>
    </item>
    <item>
      <title>Which ALTER TABLE Operations Lock Your PostgreSQL Table?</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:18 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/which-alter-table-operations-lock-your-postgresql-table-1082</link>
      <guid>https://dev.to/mickelsamuel/which-alter-table-operations-lock-your-postgresql-table-1082</guid>
      <description>&lt;p&gt;ALTER TABLE is not a single operation. PostgreSQL has dozens of ALTER TABLE sub-commands, and they acquire different lock levels. Some are instant and harmless. Others lock your entire table and block all traffic. This is the complete reference.&lt;/p&gt;

&lt;h2&gt;
  
  
  ACCESS EXCLUSIVE Operations (Block Everything)
&lt;/h2&gt;

&lt;p&gt;These ALTER TABLE operations acquire ACCESS EXCLUSIVE, which blocks all reads and writes on the table. On a busy production table, these are the operations most likely to cause outages.&lt;/p&gt;

&lt;h3&gt;
  
  
  ADD COLUMN with volatile DEFAULT
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Rewrites the entire table (all PostgreSQL versions)
ALTER TABLE users ADD COLUMN request_id UUID DEFAULT gen_random_uuid();

-- Safe alternative: three-step pattern
ALTER TABLE users ADD COLUMN request_id UUID;
ALTER TABLE users ALTER COLUMN request_id SET DEFAULT gen_random_uuid();
-- Backfill in batches...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On PostgreSQL 11+, non-volatile defaults (constants like &lt;code&gt;'active'&lt;/code&gt;, &lt;code&gt;0&lt;/code&gt;, &lt;code&gt;false&lt;/code&gt;, and stable expressions like &lt;code&gt;now()&lt;/code&gt;) are instant and do not rewrite the table. Volatile defaults (&lt;code&gt;gen_random_uuid()&lt;/code&gt;, &lt;code&gt;random()&lt;/code&gt;, &lt;code&gt;clock_timestamp()&lt;/code&gt;) still rewrite the table.&lt;/p&gt;

&lt;h3&gt;
  
  
  ALTER COLUMN TYPE
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Rewrites the entire table under ACCESS EXCLUSIVE
ALTER TABLE orders ALTER COLUMN amount TYPE BIGINT;
ALTER TABLE users ALTER COLUMN name TYPE VARCHAR(100);

-- Safe alternative: expand-contract pattern
ALTER TABLE orders ADD COLUMN amount_v2 BIGINT;
-- Sync trigger + backfill + app code switch + drop old column
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There are a few exceptions where ALTER COLUMN TYPE does not rewrite the table: changing &lt;code&gt;VARCHAR(n)&lt;/code&gt; to &lt;code&gt;VARCHAR(m)&lt;/code&gt; where m &amp;gt; n (widening), changing &lt;code&gt;VARCHAR(n)&lt;/code&gt; to &lt;code&gt;TEXT&lt;/code&gt;, and changing &lt;code&gt;NUMERIC(p,s)&lt;/code&gt; to &lt;code&gt;NUMERIC&lt;/code&gt; (removing precision constraint). These are metadata-only changes that still acquire ACCESS EXCLUSIVE but complete instantly.&lt;/p&gt;
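&lt;p&gt;The safe exceptions above can be expressed directly. These acquire ACCESS EXCLUSIVE only briefly and do not rewrite (a sketch; the table and column names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Metadata-only: no table rewrite, lock held for milliseconds
ALTER TABLE users ALTER COLUMN name TYPE VARCHAR(200);  -- widening VARCHAR(100)
ALTER TABLE users ALTER COLUMN name TYPE TEXT;          -- VARCHAR to TEXT
ALTER TABLE orders ALTER COLUMN amount TYPE NUMERIC;    -- drop precision limit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;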

&lt;h3&gt;
  
  
  SET NOT NULL
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Scans entire table under ACCESS EXCLUSIVE to verify no NULLs exist
ALTER TABLE users ALTER COLUMN email SET NOT NULL;

-- Safe alternative: CHECK + VALIDATE pattern
ALTER TABLE users ADD CONSTRAINT chk_email_nn
  CHECK (email IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT chk_email_nn;

-- PG 12+: After validation, SET NOT NULL is instant
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT chk_email_nn;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  ADD CONSTRAINT (UNIQUE, PRIMARY KEY, EXCLUDE)
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Scans table + builds index under ACCESS EXCLUSIVE
ALTER TABLE users ADD CONSTRAINT uniq_email UNIQUE (email);

-- Safe alternative: build index concurrently first
CREATE UNIQUE INDEX CONCURRENTLY idx_users_email ON users (email);
ALTER TABLE users ADD CONSTRAINT uniq_email UNIQUE USING INDEX idx_users_email;
-- The ADD CONSTRAINT USING INDEX is instant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  SET LOGGED / SET UNLOGGED
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Rewrites the entire table
ALTER TABLE events SET UNLOGGED;
ALTER TABLE events SET LOGGED;
-- No safe alternative — avoid in production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Other ACCESS EXCLUSIVE operations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;DROP COLUMN&lt;/code&gt; — instant but holds ACCESS EXCLUSIVE briefly&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RENAME COLUMN&lt;/code&gt; — instant but holds ACCESS EXCLUSIVE briefly&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RENAME TABLE&lt;/code&gt; — instant but holds ACCESS EXCLUSIVE briefly&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ADD COLUMN ... GENERATED ALWAYS AS (expr) STORED&lt;/code&gt; — rewrites table&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CLUSTER&lt;/code&gt; — rewrites table&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  SHARE ROW EXCLUSIVE Operations (Block Writes)
&lt;/h2&gt;

&lt;p&gt;These operations block INSERT, UPDATE, DELETE but allow SELECT queries to continue.&lt;/p&gt;
&lt;h3&gt;
  
  
  ADD CONSTRAINT ... FOREIGN KEY (without NOT VALID)
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Validates all existing rows — blocks writes during scan
ALTER TABLE orders ADD CONSTRAINT fk_user
  FOREIGN KEY (user_id) REFERENCES users (id);

-- Safe alternative: NOT VALID + VALIDATE
ALTER TABLE orders ADD CONSTRAINT fk_user
  FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID;
-- NOT VALID acquires SHARE ROW EXCLUSIVE but is instant

ALTER TABLE orders VALIDATE CONSTRAINT fk_user;
-- VALIDATE acquires SHARE UPDATE EXCLUSIVE (does not block writes)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that foreign keys lock &lt;strong&gt;both&lt;/strong&gt; the child table (orders) and the parent table (users). The lock on the parent table is SHARE ROW EXCLUSIVE as well. This is one of the most overlooked aspects of FK constraints — your migration might not touch the parent table, but it still locks it.&lt;/p&gt;

&lt;h3&gt;
  
  
  CREATE TRIGGER
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Acquires SHARE ROW EXCLUSIVE — blocks writes briefly
-- This is typically instant, so the lock duration is negligible
CREATE TRIGGER trg_updated_at
BEFORE UPDATE ON users
FOR EACH ROW EXECUTE FUNCTION update_modified_at();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  SHARE UPDATE EXCLUSIVE Operations (Safe)
&lt;/h2&gt;

&lt;p&gt;These operations do not block reads or writes. They only conflict with other SHARE UPDATE EXCLUSIVE operations and stronger locks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ALTER TABLE VALIDATE CONSTRAINT&lt;/code&gt; — validates CHECK or FK constraints&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ALTER TABLE SET STATISTICS&lt;/code&gt; — changes column statistics target&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ALTER TABLE SET (fillfactor = ...)&lt;/code&gt; — changes storage parameters&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ALTER TABLE SET (autovacuum_enabled = ...)&lt;/code&gt; — changes autovacuum settings&lt;/li&gt;
&lt;/ul&gt;
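&lt;p&gt;All of these can run against a live table without blocking traffic, for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Safe under SHARE UPDATE EXCLUSIVE: reads and writes continue
ALTER TABLE users VALIDATE CONSTRAINT chk_email_nn;
ALTER TABLE users ALTER COLUMN email SET STATISTICS 500;
ALTER TABLE users SET (fillfactor = 90);
ALTER TABLE users SET (autovacuum_enabled = false);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;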
&lt;h2&gt;
  
  
  Quick Reference Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Lock&lt;/th&gt;
&lt;th&gt;Duration&lt;/th&gt;
&lt;th&gt;Safe Alternative&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ADD COLUMN (no default)&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;N/A (already safe)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADD COLUMN (constant default, PG11+)&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;N/A (already safe)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADD COLUMN (volatile default)&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Table rewrite&lt;/td&gt;
&lt;td&gt;Three-step pattern&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ALTER COLUMN TYPE&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Table rewrite&lt;/td&gt;
&lt;td&gt;Expand-contract&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SET NOT NULL&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Full scan&lt;/td&gt;
&lt;td&gt;CHECK + VALIDATE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DROP NOT NULL&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADD CONSTRAINT UNIQUE&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Index build&lt;/td&gt;
&lt;td&gt;CONCURRENTLY + USING INDEX&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADD CONSTRAINT FK&lt;/td&gt;
&lt;td&gt;SHARE ROW EXCL&lt;/td&gt;
&lt;td&gt;Full scan&lt;/td&gt;
&lt;td&gt;NOT VALID + VALIDATE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADD CONSTRAINT CHECK&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Full scan&lt;/td&gt;
&lt;td&gt;NOT VALID + VALIDATE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VALIDATE CONSTRAINT&lt;/td&gt;
&lt;td&gt;SHARE UPDATE EXCL&lt;/td&gt;
&lt;td&gt;Full scan&lt;/td&gt;
&lt;td&gt;N/A (already safe)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DROP COLUMN&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RENAME COLUMN&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;Expand-contract&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SET DEFAULT&lt;/td&gt;
&lt;td&gt;ACCESS EXCLUSIVE&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SET STATISTICS&lt;/td&gt;
&lt;td&gt;SHARE UPDATE EXCL&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  The Key Insight: Lock Level vs Duration
&lt;/h2&gt;

&lt;p&gt;A common misconception is that ACCESS EXCLUSIVE always means danger. That is not quite right. The danger comes from &lt;strong&gt;ACCESS EXCLUSIVE held for a long time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Adding a column without a default acquires ACCESS EXCLUSIVE, but it completes in milliseconds. The lock is barely noticeable. ALTER COLUMN TYPE also acquires ACCESS EXCLUSIVE, but it rewrites the entire table, which can take minutes. Both are ACCESS EXCLUSIVE, but only one is dangerous.&lt;/p&gt;

&lt;p&gt;The risk formula is: &lt;strong&gt;Lock Severity x Duration x Traffic&lt;/strong&gt;. A brief ACCESS EXCLUSIVE on a low-traffic table is fine. A long-held SHARE ROW EXCLUSIVE on a high-traffic table can still bring down your application.&lt;/p&gt;
&lt;h2&gt;
  
  
  Catch Lock Issues Before Production
&lt;/h2&gt;

&lt;p&gt;Static analysis can catch most dangerous ALTER TABLE patterns before they reach production. &lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; reports the exact lock type each ALTER TABLE sub-command acquires, flags operations that cause table rewrites or full scans, and suggests the safe alternative pattern. All 80 rules run in milliseconds without any database connection.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx migrationpilot analyze migrations/005_alter_users.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>sql</category>
      <category>backend</category>
    </item>
    <item>
      <title>How to Add Database Migration Checks to Your CI/CD Pipeline</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:14 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/how-to-add-database-migration-checks-to-your-cicd-pipeline-lm9</link>
      <guid>https://dev.to/mickelsamuel/how-to-add-database-migration-checks-to-your-cicd-pipeline-lm9</guid>
      <description>&lt;p&gt;Code review catches logic bugs, linting catches style issues, and type checking catches type errors. But what catches dangerous database migrations? Most teams ship migration SQL without any automated safety checks. Here is how to fix that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why CI Checks for Migrations
&lt;/h2&gt;

&lt;p&gt;Database migrations have a unique risk profile compared to application code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Irreversible by default.&lt;/strong&gt; A deployed app rollback takes seconds. A schema change rollback can take hours or be impossible (data loss from DROP COLUMN).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production-specific risk.&lt;/strong&gt; A migration that runs in 100ms on dev can take 10 minutes on production with 50M rows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cascading failures.&lt;/strong&gt; A single bad lock can queue all queries, exhaust connection pools, and take down the entire application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low review expertise.&lt;/strong&gt; Most engineers review SQL migrations less carefully than application code because the risks are not obvious.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Automated CI checks solve these problems by catching known-dangerous patterns before the migration is merged. The check runs in seconds, requires no database connection, and provides actionable feedback directly in the pull request.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Check
&lt;/h2&gt;

&lt;p&gt;An effective migration linter should catch these categories of issues:&lt;/p&gt;

&lt;h3&gt;
  
  
  Lock safety
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CREATE INDEX without CONCURRENTLY (blocks writes)&lt;/li&gt;
&lt;li&gt;Missing lock_timeout before DDL (can queue all traffic)&lt;/li&gt;
&lt;li&gt;ADD COLUMN with volatile DEFAULT (full table rewrite)&lt;/li&gt;
&lt;li&gt;SET NOT NULL without CHECK pattern (ACCESS EXCLUSIVE scan)&lt;/li&gt;
&lt;li&gt;UNIQUE constraint without USING INDEX (builds index under ACCESS EXCLUSIVE)&lt;/li&gt;
&lt;li&gt;Foreign key without NOT VALID (scans both tables under lock)&lt;/li&gt;
&lt;/ul&gt;
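&lt;p&gt;As a sketch of what the safe versions of the first two items look like (table and index names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Cap how long DDL may wait for a lock before giving up,
-- so a blocked migration fails fast instead of queueing all traffic
SET lock_timeout = '5s';

-- Build the index without blocking writes
-- (must run outside a transaction)
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;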

&lt;h3&gt;
  
  
  Data safety
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;DROP TABLE, DROP COLUMN (irreversible data loss)&lt;/li&gt;
&lt;li&gt;TRUNCATE CASCADE (cascading data deletion)&lt;/li&gt;
&lt;li&gt;Data type narrowing (silent data truncation)&lt;/li&gt;
&lt;li&gt;DROP NOT NULL on columns that should never be null&lt;/li&gt;
&lt;/ul&gt;
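&lt;p&gt;For DROP COLUMN, one widely used safe pattern is to split the removal across two releases (column name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Release 1: stop reading and writing the column in application code
-- (no SQL needed; deploy and confirm nothing references it)

-- Release 2: only once no code path touches the column, drop it
ALTER TABLE users DROP COLUMN legacy_flag;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;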

&lt;h3&gt;
  
  
  Best practices
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prefer TEXT over VARCHAR (no performance difference in PostgreSQL)&lt;/li&gt;
&lt;li&gt;Prefer BIGINT over INT for primary keys (avoid future overflow)&lt;/li&gt;
&lt;li&gt;Prefer TIMESTAMPTZ over TIMESTAMP (timezone-aware)&lt;/li&gt;
&lt;li&gt;Prefer IDENTITY over SERIAL (SQL standard, better semantics)&lt;/li&gt;
&lt;li&gt;Always name indexes explicitly (for reliable migrations later)&lt;/li&gt;
&lt;/ul&gt;
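&lt;p&gt;Put together, a table definition following these recommendations might look like this (table and column names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE events (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- BIGINT + IDENTITY
    name        TEXT NOT NULL,                                    -- TEXT over VARCHAR
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()                -- timezone-aware
);

-- Explicitly named index, so later migrations can reference it reliably
CREATE INDEX idx_events_created_at ON events (created_at);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;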

&lt;h2&gt;
  
  
  GitHub Actions Setup
&lt;/h2&gt;

&lt;p&gt;The simplest way to add migration checks to a GitHub repository is a workflow file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .github/workflows/migration-check.yml
name: Migration Safety Check

on:
  pull_request:
    paths:
      - 'migrations/**'
      - 'db/migrate/**'
      - 'prisma/migrations/**'

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: mickelsamuel/migrationpilot@v1
        with:
          migration-path: "migrations/*.sql"
          fail-on: critical
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This workflow runs whenever a PR modifies files in the migrations directory. It analyzes every SQL file, posts a safety report as a PR comment, and fails the check if any critical violations are found.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customizing the check
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# More detailed configuration
- uses: mickelsamuel/migrationpilot@v1
  with:
    # Path to migration files (glob pattern)
    migration-path: "migrations/*.sql"

    # Fail the check on "critical" or "warning" violations
    fail-on: critical

    # Exclude specific rules (comma-separated)
    # e.g., if you intentionally want DROP TABLE in a cleanup migration
    exclude: "MP026,MP017"

    # PostgreSQL version for version-specific advice
    pg-version: "16"

    # Output SARIF for GitHub Code Scanning integration
    sarif-file: "results.sarif"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  SARIF integration with GitHub Code Scanning
&lt;/h3&gt;

&lt;p&gt;For richer integration, MigrationPilot can output SARIF (Static Analysis Results Interchange Format), which GitHub Code Scanning understands natively. Violations appear as inline annotations directly in the PR diff:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .github/workflows/migration-check.yml
name: Migration Safety Check

on:
  pull_request:
    paths: ['migrations/**']

jobs:
  check:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      security-events: write  # Required for SARIF upload

    steps:
      - uses: actions/checkout@v4

      - uses: mickelsamuel/migrationpilot@v1
        with:
          migration-path: "migrations/*.sql"
          fail-on: critical
          sarif-file: "migrationpilot.sarif"

      - uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: "migrationpilot.sarif"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  GitLab CI Setup
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .gitlab-ci.yml
migration-check:
  image: node:22-slim
  stage: test
  rules:
    - changes:
        - migrations/*.sql
  script:
    - npx migrationpilot check migrations/*.sql --fail-on critical
  allow_failure: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Bitbucket Pipelines Setup
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# bitbucket-pipelines.yml
pipelines:
  pull-requests:
    '**':
      - step:
          name: Migration Safety Check
          image: node:22-slim
          script:
            - npx migrationpilot check migrations/*.sql --fail-on critical
          condition:
            changesets:
              includePaths:
                - "migrations/**"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Generic CI (Any System)
&lt;/h2&gt;

&lt;p&gt;MigrationPilot works as a plain CLI tool, so it runs anywhere Node.js is available:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install and run
npm install -g migrationpilot

# Check migrations — exits with code 2 on critical violations
migrationpilot check migrations/*.sql --fail-on critical

# Or use npx (no install needed)
npx migrationpilot check migrations/*.sql --fail-on critical

# Output formats for different CI systems
migrationpilot check migrations/*.sql --format json       # Machine-readable
migrationpilot check migrations/*.sql --format sarif      # Code scanning
migrationpilot check migrations/*.sql --format markdown   # Wiki/docs
migrationpilot check migrations/*.sql --quiet             # gcc-style one-liner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Framework-Specific Migration Paths
&lt;/h2&gt;

&lt;p&gt;Different migration frameworks store SQL files in different locations. Here are the common paths:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Migration Path&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Flyway&lt;/td&gt;
&lt;td&gt;src/main/resources/db/migration/*.sql&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Liquibase&lt;/td&gt;
&lt;td&gt;src/main/resources/db/changelog/*.sql&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rails&lt;/td&gt;
&lt;td&gt;db/migrate/*.rb (use structure.sql)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Django&lt;/td&gt;
&lt;td&gt;*/migrations/*.py (use sqlmigrate)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alembic&lt;/td&gt;
&lt;td&gt;alembic/versions/*.py&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prisma&lt;/td&gt;
&lt;td&gt;prisma/migrations/*/migration.sql&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Knex&lt;/td&gt;
&lt;td&gt;migrations/*.js (pipe through knex)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;goose&lt;/td&gt;
&lt;td&gt;migrations/*.sql&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dbmate&lt;/td&gt;
&lt;td&gt;db/migrations/*.sql&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sqitch&lt;/td&gt;
&lt;td&gt;deploy/*.sql&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  ORM frameworks (Django, Rails, Alembic)
&lt;/h3&gt;

&lt;p&gt;For frameworks that generate SQL from ORM code (Django, Rails, Alembic), you can pipe the generated SQL through stdin:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Django: Generate SQL from migration, then analyze
python manage.py sqlmigrate myapp 0042 | npx migrationpilot analyze --stdin

# Alembic: Generate SQL from revision
alembic upgrade head --sql | npx migrationpilot analyze --stdin

# Rails: Use schema.sql format
# Set config.active_record.schema_format = :sql in application.rb
# Then analyze the generated SQL file
npx migrationpilot analyze db/structure.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Pre-Commit Hook (Local Check)
&lt;/h2&gt;

&lt;p&gt;For even earlier feedback, add a pre-commit hook that runs before code leaves the developer's machine:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the pre-commit hook
npx migrationpilot hook install

# Or add to your existing husky setup
# .husky/pre-commit
npx migrationpilot check migrations/*.sql --fail-on critical --quiet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Configuration File
&lt;/h2&gt;

&lt;p&gt;For project-wide settings, create a configuration file in your repository root:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .migrationpilotrc.yml
# Use a built-in preset as a starting point
extends: ci

# PostgreSQL version (affects version-specific advice)
pgVersion: 16

# Override severity for specific rules
rules:
  MP026:
    severity: warning  # Allow DROP TABLE with a warning
  MP037:
    severity: off      # Don't enforce TEXT over VARCHAR

# Exclude rules globally
exclude:
  - MP015  # Allow SERIAL (team preference)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  What the PR Comment Looks Like
&lt;/h2&gt;

&lt;p&gt;When running as a GitHub Action, MigrationPilot posts a detailed safety report as a PR comment. The comment includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overall risk score (RED / YELLOW / GREEN)&lt;/li&gt;
&lt;li&gt;Per-statement lock analysis (which lock each DDL acquires)&lt;/li&gt;
&lt;li&gt;Violation details with explanations of why each pattern is dangerous&lt;/li&gt;
&lt;li&gt;Safe alternative SQL you can copy-paste&lt;/li&gt;
&lt;li&gt;Auto-update on each push (no comment spam)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all violations are resolved on a later push, the comment updates in place to show a green checkmark.&lt;/p&gt;
&lt;h2&gt;
  
  
  Gradual Adoption Strategy
&lt;/h2&gt;

&lt;p&gt;You do not need to fix every existing migration to start using CI checks. A practical adoption strategy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Week 1: Warning mode.&lt;/strong&gt; Add the check with &lt;code&gt;allow_failure: true&lt;/code&gt; (GitLab) or no &lt;code&gt;fail-on&lt;/code&gt; (GitHub Action). Team sees reports but is not blocked.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 2: Block critical only.&lt;/strong&gt; Set &lt;code&gt;fail-on: critical&lt;/code&gt; to block the most dangerous patterns (missing CONCURRENTLY, table rewrites).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Month 2: Block warnings.&lt;/strong&gt; Once the team is comfortable, switch to &lt;code&gt;fail-on: warning&lt;/code&gt; to enforce best practices too.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ongoing: Customize rules.&lt;/strong&gt; Tune severity overrides and exclusions based on your team's conventions.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;Adding migration safety checks to CI takes less than a minute. &lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; is open-source (MIT), runs 80 safety rules in under a second, and requires no database connection. It works as a CLI, GitHub Action, GitLab CI step, or Node.js library.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Try it now on your existing migrations
npx migrationpilot analyze migrations/*.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>database</category>
      <category>devops</category>
      <category>cicd</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Flyway vs Liquibase for PostgreSQL: An Honest Comparison</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:10 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/flyway-vs-liquibase-for-postgresql-an-honest-comparison-36gk</link>
      <guid>https://dev.to/mickelsamuel/flyway-vs-liquibase-for-postgresql-an-honest-comparison-36gk</guid>
      <description>&lt;p&gt;Flyway and Liquibase are the two most widely used database migration tools in the Java ecosystem. Both support PostgreSQL. Both are mature. And both have significant differences in philosophy, syntax, and capabilities. This guide compares them honestly, including where each one falls short.&lt;/p&gt;

&lt;h2&gt;
  
  
  Philosophy: Convention vs Configuration
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Flyway&lt;/strong&gt; follows a convention-over-configuration approach. Migrations are plain SQL files with a naming convention (&lt;code&gt;V1__description.sql&lt;/code&gt;). There is one way to do things, and it is simple. If you know SQL, you know Flyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Liquibase&lt;/strong&gt; takes a configuration-heavy approach. Migrations (called "changesets") can be written in XML, YAML, JSON, or SQL. Liquibase provides a database-agnostic abstraction layer with its own DSL for schema changes. This gives more flexibility at the cost of complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration Syntax
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Flyway: Plain SQL
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- V2__add_users_email_index.sql
-- Flyway migration: just SQL with a naming convention

CREATE INDEX CONCURRENTLY idx_users_email ON users (email);

-- That's it. The filename determines the version and description.
-- V2 = version 2
-- add_users_email_index = description (underscores become spaces)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Liquibase: XML/YAML/SQL changesets
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- db.changelog-2.xml --&amp;gt;
&amp;lt;databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
    http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.0.xsd"&amp;gt;

  &amp;lt;changeSet id="2" author="dev"&amp;gt;
    &amp;lt;createIndex indexName="idx_users_email"
                 tableName="users"
                 unique="false"&amp;gt;
      &amp;lt;column name="email"/&amp;gt;
    &amp;lt;/createIndex&amp;gt;
  &amp;lt;/changeSet&amp;gt;
&amp;lt;/databaseChangeLog&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Or in YAML:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# db.changelog-2.yaml
databaseChangeLog:
  - changeSet:
      id: 2
      author: dev
      changes:
        - createIndex:
            indexName: idx_users_email
            tableName: users
            columns:
              - column:
                  name: email
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Or as raw SQL (Liquibase also supports this):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- db.changelog-2.sql
-- liquibase formatted sql
-- changeset dev:2
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Feature Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Flyway&lt;/th&gt;
&lt;th&gt;Liquibase&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Migration format&lt;/td&gt;
&lt;td&gt;SQL, Java&lt;/td&gt;
&lt;td&gt;XML, YAML, JSON, SQL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL support&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rollback support&lt;/td&gt;
&lt;td&gt;Paid (Teams)&lt;/td&gt;
&lt;td&gt;Free (auto + manual)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schema diff&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (diff command)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database-agnostic DSL&lt;/td&gt;
&lt;td&gt;No (SQL is DB-specific)&lt;/td&gt;
&lt;td&gt;Yes (XML/YAML changesets)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Conditional execution&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (preconditions)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transaction control&lt;/td&gt;
&lt;td&gt;Per-migration only&lt;/td&gt;
&lt;td&gt;Per-changeset&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning curve&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Medium-High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community edition&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQL safety linting&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  Rollback Strategies
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Flyway
&lt;/h3&gt;

&lt;p&gt;Flyway's free tier does not support rollbacks at all. You can only move forward. The Teams/Enterprise tier adds "undo migrations" (files named &lt;code&gt;U2__description.sql&lt;/code&gt;) that you write manually. There is no auto-generated rollback.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- V2__add_status_column.sql
ALTER TABLE users ADD COLUMN status TEXT DEFAULT 'active';

-- U2__add_status_column.sql (Flyway Teams only)
ALTER TABLE users DROP COLUMN status;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Liquibase
&lt;/h3&gt;

&lt;p&gt;Liquibase has built-in rollback support in the free tier. For XML/YAML changesets, Liquibase auto-generates rollback SQL for many operations (CREATE TABLE, ADD COLUMN, etc.). For complex or SQL-based changesets, you specify rollback manually:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;changeSet id="2" author="dev"&amp;gt;
  &amp;lt;addColumn tableName="users"&amp;gt;
    &amp;lt;column name="status" type="TEXT" defaultValue="active"/&amp;gt;
  &amp;lt;/addColumn&amp;gt;
  &amp;lt;!-- Rollback auto-generated: DROP COLUMN status --&amp;gt;
&amp;lt;/changeSet&amp;gt;

&amp;lt;!-- For SQL changesets, specify rollback explicitly --&amp;gt;
&amp;lt;changeSet id="3" author="dev"&amp;gt;
  &amp;lt;sql&amp;gt;
    CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
  &amp;lt;/sql&amp;gt;
  &amp;lt;rollback&amp;gt;
    DROP INDEX CONCURRENTLY idx_users_email;
  &amp;lt;/rollback&amp;gt;
&amp;lt;/changeSet&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  PostgreSQL-Specific Considerations
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Transaction handling
&lt;/h3&gt;

&lt;p&gt;Both Flyway and Liquibase wrap migrations in transactions by default. This is generally good for atomicity but causes problems with specific PostgreSQL operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CREATE INDEX CONCURRENTLY&lt;/code&gt; cannot run inside a transaction&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ALTER TYPE ... ADD VALUE&lt;/code&gt; cannot run inside a transaction (PG 11 and earlier)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DROP INDEX CONCURRENTLY&lt;/code&gt; cannot run inside a transaction&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Flyway: Disable transaction for a specific migration
-- Add this comment at the top of the file:
-- flyway:executeInTransaction=false
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- Liquibase: Disable transaction per changeset --&amp;gt;
&amp;lt;changeSet id="3" author="dev" runInTransaction="false"&amp;gt;
  &amp;lt;sql&amp;gt;
    CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
  &amp;lt;/sql&amp;gt;
&amp;lt;/changeSet&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Liquibase abstraction layer limitations
&lt;/h3&gt;

&lt;p&gt;Liquibase's database-agnostic DSL does not support many PostgreSQL-specific features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No CONCURRENTLY support in the XML &lt;code&gt;&amp;lt;createIndex&amp;gt;&lt;/code&gt; tag&lt;/li&gt;
&lt;li&gt;No NOT VALID support for constraints&lt;/li&gt;
&lt;li&gt;No support for partitioning (PARTITION BY)&lt;/li&gt;
&lt;li&gt;No support for RLS policies&lt;/li&gt;
&lt;li&gt;Limited enum type support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For any PostgreSQL-specific migration, you end up using raw SQL changesets anyway, which negates the benefit of the abstraction layer. If you are only targeting PostgreSQL, the XML/YAML format adds complexity without benefit.&lt;/p&gt;
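&lt;p&gt;For example, since the XML tags expose neither CONCURRENTLY nor NOT VALID, the two-step foreign key pattern has to be written as a raw SQL changeset (table and constraint names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- liquibase formatted sql
-- changeset dev:4
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;

-- changeset dev:5
-- Validation scans the table under a weaker lock, not ACCESS EXCLUSIVE
ALTER TABLE orders VALIDATE CONSTRAINT fk_orders_customer;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;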
&lt;h2&gt;
  
  
  When to Choose Flyway
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You want simplicity.&lt;/strong&gt; Flyway's SQL-first approach means there is almost nothing to learn. Write SQL, name the file correctly, done.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You only target PostgreSQL.&lt;/strong&gt; No need for a database-agnostic abstraction layer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your team knows SQL well.&lt;/strong&gt; Flyway encourages SQL fluency rather than hiding behind an abstraction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want minimal tooling overhead.&lt;/strong&gt; Fewer config files, fewer concepts, fewer surprises.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  When to Choose Liquibase
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You need multi-database support.&lt;/strong&gt; If your application targets PostgreSQL, MySQL, and Oracle, the abstraction layer helps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need rollback in the free tier.&lt;/strong&gt; Flyway's rollback is a paid feature.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need conditional execution.&lt;/strong&gt; Liquibase preconditions let you run changesets conditionally (e.g., only if a table does not exist).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need schema diff.&lt;/strong&gt; Liquibase can compare two databases and generate the diff as changesets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You have complex deployment workflows.&lt;/strong&gt; Liquibase's changeset model with contexts and labels provides more granular control.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  What Neither Tool Does: SQL Safety Analysis
&lt;/h2&gt;

&lt;p&gt;Both Flyway and Liquibase are &lt;strong&gt;migration runners&lt;/strong&gt;. They track which migrations have been applied, execute them in order, and handle versioning. What neither tool does is analyze the SQL itself for safety.&lt;/p&gt;

&lt;p&gt;Neither Flyway nor Liquibase will warn you about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CREATE INDEX without CONCURRENTLY (blocks writes)&lt;/li&gt;
&lt;li&gt;Missing lock_timeout (can queue all traffic)&lt;/li&gt;
&lt;li&gt;ALTER COLUMN TYPE causing a full table rewrite&lt;/li&gt;
&lt;li&gt;SET NOT NULL without the CHECK pattern (ACCESS EXCLUSIVE scan)&lt;/li&gt;
&lt;li&gt;Volatile defaults that rewrite the table&lt;/li&gt;
&lt;li&gt;Foreign keys that lock both tables&lt;/li&gt;
&lt;/ul&gt;
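&lt;p&gt;To make one of these concrete, the CHECK pattern for SET NOT NULL avoids the long ACCESS EXCLUSIVE scan (table and column names are illustrative; the shortcut in step 3 applies on PostgreSQL 12+):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 1: add the constraint without scanning (brief lock only)
ALTER TABLE users
  ADD CONSTRAINT users_email_not_null CHECK (email IS NOT NULL) NOT VALID;

-- Step 2: validate under a weaker lock that allows reads and writes
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;

-- Step 3: SET NOT NULL now skips the scan because the constraint proves it
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT users_email_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;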

&lt;p&gt;These tools apply your migrations. They do not tell you whether those migrations are safe to run against a production database. This is a fundamentally different concern, and it is where migration linting tools come in.&lt;/p&gt;
&lt;h2&gt;
  
  
  Combining Migration Runners with Safety Linting
&lt;/h2&gt;

&lt;p&gt;The best approach is to use both: a migration runner (Flyway or Liquibase) for execution, and a linter for safety analysis. They complement each other:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example CI pipeline combining Flyway + MigrationPilot

# Step 1: Lint the migration SQL for safety issues
npx migrationpilot check src/main/resources/db/migration/*.sql \
  --fail-on critical

# Step 2: If lint passes, run the migration with Flyway
flyway migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Or with Liquibase:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For Liquibase SQL changesets:
npx migrationpilot check src/main/resources/db/changelog/*.sql \
  --fail-on critical

# For Liquibase XML/YAML, generate SQL first:
liquibase update-sql &amp;gt; /tmp/pending.sql
npx migrationpilot analyze /tmp/pending.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://github.com/mickelsamuel/migrationpilot" rel="noopener noreferrer"&gt;MigrationPilot&lt;/a&gt; auto-detects both Flyway and Liquibase (plus 12 other frameworks) and works with any migration runner. It analyzes the SQL for lock safety, table rewrites, and 80 other patterns — the exact gap that migration runners do not fill. It runs in CI as a GitHub Action, GitLab CI step, or plain CLI command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flyway&lt;/strong&gt;: Simple, SQL-first, great for PostgreSQL-only projects. Rollback is paid.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Liquibase&lt;/strong&gt;: Flexible, multi-format, rollback included. More complex, abstraction layer has PostgreSQL gaps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Both&lt;/strong&gt;: Run migrations but do not analyze SQL for safety. Pair with a linter for production-safe migrations.&lt;/li&gt;
&lt;li&gt;For PostgreSQL-only projects, Flyway's simplicity usually wins.&lt;/li&gt;
&lt;li&gt;For multi-database environments or when you need free rollback, Liquibase is the better choice.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Atlas Paywalled Their Migration Linter — Here Are Your Free Alternatives</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:06 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/atlas-paywalled-their-migration-linter-here-are-your-free-alternatives-4god</link>
      <guid>https://dev.to/mickelsamuel/atlas-paywalled-their-migration-linter-here-are-your-free-alternatives-4god</guid>
      <description>&lt;p&gt;In October 2025, Atlas moved &lt;code&gt;atlas migrate lint&lt;/code&gt; out of their free Starter plan. If you relied on Atlas for catching dangerous PostgreSQL migrations in CI, you now need a paid plan or a new tool. This guide covers what changed, what's still available for free, and the open-source alternatives you can adopt today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Changed in Atlas v0.38
&lt;/h2&gt;

&lt;p&gt;Atlas v0.38, released October 30, 2025, restructured its pricing tiers. The key change: the &lt;code&gt;atlas migrate lint&lt;/code&gt; command is no longer available in the free Starter plan. To use migration linting, you now need a Pro plan at &lt;strong&gt;$9/developer/month plus $59/CI project/month&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For a team of 5 engineers with 2 CI projects, that's $163/month for migration linting alone. Not unreasonable for a well-funded team, but a significant cost for startups, open-source projects, and smaller teams that were getting this for free.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Atlas lint actually checked
&lt;/h3&gt;

&lt;p&gt;Atlas's PostgreSQL-specific analyzers were genuinely useful. Their Pro-only rules include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PG301&lt;/strong&gt;: Adding a non-nullable column without a default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PG302&lt;/strong&gt;: Adding a column with a volatile default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PG303&lt;/strong&gt;: Modifying a column type (table rewrite)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PG304&lt;/strong&gt;: Adding a primary key to an existing table&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PG305&lt;/strong&gt;: Adding a unique constraint (ACCESS EXCLUSIVE)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PG307&lt;/strong&gt;: Creating an index non-concurrently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PG311&lt;/strong&gt;: Adding an exclusion constraint (table rewrite)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are real production safety checks. If your team was relying on Atlas to catch these in CI, the paywall means you either pay up or lose that safety net.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Community Edition and Hacker License
&lt;/h3&gt;

&lt;p&gt;Atlas still offers two free options, but with limitations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community Edition&lt;/strong&gt; (Apache 2.0): An open-source build of Atlas with the basic migration engine. It includes &lt;code&gt;migrate lint&lt;/code&gt; with a basic set of analyzers, but the PostgreSQL-specific rules (PG301-PG311) are Pro-only. You get generic checks like destructive change detection, but not the nuanced PostgreSQL lock and rewrite analysis that made Atlas lint valuable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hacker License&lt;/strong&gt; : Free for students, open-source contributors, and hobby projects. This is generous, but it explicitly excludes commercial use. If your company is running PostgreSQL in production, you cannot use the Hacker License for your CI pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Broader Trend: Paywalls Everywhere
&lt;/h2&gt;

&lt;p&gt;Atlas is not alone in restricting free tiers. Within a 6-month window in 2025:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Liquibase&lt;/strong&gt; changed to a Functional Source License (FSL) in October 2025. FSL is not OSI-approved open source. Apache Fineract has already downgraded their Liquibase dependency, and CNCF projects cannot use FSL-licensed software.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flyway&lt;/strong&gt; discontinued their Teams tier in May 2025, leaving only Community (limited) and Enterprise (expensive, quote-based).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Atlas&lt;/strong&gt; moved lint to Pro in October 2025.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three major migration tools restricting access within six months is not a coincidence — it's a market consolidation pattern. If you want migration safety tooling that will remain free and open source, the options are now more limited than they were a year ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  Free, Open-Source Alternatives
&lt;/h2&gt;

&lt;p&gt;If you need a free PostgreSQL migration linter in CI, here are your options, assessed honestly:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Squawk
&lt;/h3&gt;

&lt;p&gt;Squawk is the most established free migration linter. Written in Rust, it's fast and focused.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rules:&lt;/strong&gt; 32&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language:&lt;/strong&gt; Rust (CLI binary + npm wrapper)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; ~1,000 stars&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Downloads:&lt;/strong&gt; ~600K/month (npm + PyPI combined)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Apache 2.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Action:&lt;/strong&gt; Yes (free)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; Fast execution (Rust native), well-tested rules for common PostgreSQL lock hazards, active development. The core rules cover CREATE INDEX without CONCURRENTLY, adding columns with volatile defaults, and other common issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; No lock type classification (doesn't tell you &lt;em&gt;which&lt;/em&gt; lock a statement acquires). No production context awareness — it can't factor in table sizes or active queries. No auto-fix suggestions. Exit code 1 for both errors and warnings (no distinction). Cannot analyze PL/pgSQL function bodies or DO blocks.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install and run Squawk&lt;br&gt;
npm install -g squawk-cli
&lt;h1&gt;
  
  
  Analyze a migration file
&lt;/h1&gt;

&lt;p&gt;squawk migrations/V1__add_users_email.sql&lt;/p&gt;
&lt;h1&gt;
  
  
  In CI (GitHub Action)
&lt;/h1&gt;
&lt;h1&gt;
  
  
  Uses: sbdchd/squawk-action@v1
&lt;/h1&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  2. strong_migrations (Rails only)
&lt;/h3&gt;


&lt;p&gt;If your team uses Ruby on Rails, strong_migrations is the gold standard. It integrates directly with ActiveRecord and catches dangerous migrations at the ORM level.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; ~4,400 stars&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language:&lt;/strong&gt; Ruby (Rails gem)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; MIT&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Used by:&lt;/strong&gt; Instacart, many Rails shops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; Excellent developer experience — when a migration is dangerous, it shows you the exact safe alternative with a clear explanation of why. This "explain the risk, show the fix" UX is what every migration linter should aspire to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Rails and ActiveRecord only. If your team uses Go, Python, Node.js, or plain SQL files, strong_migrations is not an option. This is the most common complaint in migration safety discussions:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"My conundrum is we don't use Django lol. Wish this sort of thoroughness existed in standalone pg migration tooling." — Hacker News commenter&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3. Eugene
&lt;/h3&gt;

&lt;p&gt;Eugene is a newer tool that takes a unique approach: both static lint and dynamic lock tracing. It can run your migration against a temporary PostgreSQL server and observe which locks are actually acquired.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; ~52 stars&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language:&lt;/strong&gt; Rust&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Approach:&lt;/strong&gt; Static lint + dynamic lock tracing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status:&lt;/strong&gt; Active but early-stage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; The dynamic tracing approach can catch issues that static analysis misses. Supabase's postgres-language-server team has acknowledged that Eugene is "more advanced" than their Squawk-based port for lock analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Very early-stage (52 stars). Requires a running PostgreSQL instance for dynamic tracing. No GitHub Action in the marketplace. Smaller community and less documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. MigrationPilot
&lt;/h3&gt;

&lt;p&gt;MigrationPilot is a newer entry with the deepest free rule set. Disclaimer: this article is published on the MigrationPilot blog, so take this section with appropriate skepticism and verify the claims against the open-source repository.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rules:&lt;/strong&gt; 80 (77 free, 3 paid)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language:&lt;/strong&gt; TypeScript&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; MIT&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Action:&lt;/strong&gt; Yes (free)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-fix:&lt;/strong&gt; 12 rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; Lock type classification (tells you which lock each statement acquires). Risk scoring (RED/YELLOW/GREEN). Auto-fix suggestions for 12 rules. Works with any framework — analyzes plain SQL. Free GitHub Action. VS Code extension. Every rule includes an explanation of why the operation is dangerous and the safe alternative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; New project with a small community. Fewer real-world battle-testing hours than Squawk. TypeScript, not Rust — slower on very large migration files (though still sub-second for typical migrations). The 3 paid rules require a $19/month subscription for production context analysis.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install and run MigrationPilot&lt;br&gt;
npx migrationpilot analyze migrations/*.sql
&lt;h1&gt;
  
  
  Auto-fix common issues
&lt;/h1&gt;

&lt;p&gt;npx migrationpilot analyze migrations/*.sql --fix&lt;/p&gt;
&lt;h1&gt;
  
  
  In CI (GitHub Action)
&lt;/h1&gt;
&lt;h1&gt;
  
  
  Uses: mickelsamuel/migrationpilot@v1
&lt;/h1&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Comparison Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Atlas (free)&lt;/th&gt;
&lt;th&gt;Squawk&lt;/th&gt;
&lt;th&gt;MigrationPilot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Free CI linting&lt;/td&gt;
&lt;td&gt;No (since v0.38)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostgreSQL rules&lt;/td&gt;
&lt;td&gt;Basic only (PG-specific paywalled)&lt;/td&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;77 free, 3 paid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lock classification&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto-fix&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;12 rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Risk scoring&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Action&lt;/td&gt;
&lt;td&gt;Paid only&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VS Code extension&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-database&lt;/td&gt;
&lt;td&gt;Yes (15+)&lt;/td&gt;
&lt;td&gt;PostgreSQL only&lt;/td&gt;
&lt;td&gt;PostgreSQL only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community size&lt;/td&gt;
&lt;td&gt;~8,100 stars&lt;/td&gt;
&lt;td&gt;~1,000 stars&lt;/td&gt;
&lt;td&gt;New&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0 (CE)&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Migration Guide: Switching from Atlas Lint
&lt;/h2&gt;

&lt;p&gt;If you were using &lt;code&gt;atlas migrate lint&lt;/code&gt; in your CI pipeline, here is how to switch to a free alternative. The examples below use MigrationPilot, but the same concept applies to Squawk.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Actions
&lt;/h3&gt;

&lt;p&gt;Replace your Atlas lint workflow step:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Before: Atlas (now requires paid Pro plan)

&lt;ul&gt;
&lt;li&gt;uses: ariga/atlas-action/migrate/lint@v1
with:
dir: file://migrations
dev-url: "sqlite://dev?mode=memory"&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  After: MigrationPilot (free)
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;uses: mickelsamuel/migrationpilot@v1
with:
paths: migrations/*.sql
fail-on: critical
&lt;/li&gt;
&lt;/ul&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Local development
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Before: Atlas
atlas migrate lint --dir file://migrations --dev-url "postgres://..."

&lt;h1&gt;
  
  
  After: MigrationPilot
&lt;/h1&gt;

&lt;p&gt;npx migrationpilot analyze migrations/*.sql&lt;/p&gt;
&lt;h1&gt;
  
  
  Or install globally
&lt;/h1&gt;

&lt;p&gt;npm install -g migrationpilot&lt;br&gt;
migrationpilot analyze migrations/*.sql&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Rule mapping: Atlas to MigrationPilot
&lt;/h3&gt;

&lt;p&gt;Here is how Atlas's PostgreSQL-specific rules map to MigrationPilot rules:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Atlas Rule&lt;/th&gt;
&lt;th&gt;What It Checks&lt;/th&gt;
&lt;th&gt;MigrationPilot Equivalent&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PG301&lt;/td&gt;
&lt;td&gt;Non-nullable column without default&lt;/td&gt;
&lt;td&gt;MP002&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PG302&lt;/td&gt;
&lt;td&gt;Volatile default (table rewrite)&lt;/td&gt;
&lt;td&gt;MP003&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PG303&lt;/td&gt;
&lt;td&gt;Column type change (table rewrite)&lt;/td&gt;
&lt;td&gt;MP007&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PG304&lt;/td&gt;
&lt;td&gt;Adding primary key to existing table&lt;/td&gt;
&lt;td&gt;MP027&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PG305&lt;/td&gt;
&lt;td&gt;Adding unique constraint&lt;/td&gt;
&lt;td&gt;MP027&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PG307&lt;/td&gt;
&lt;td&gt;Non-concurrent index creation&lt;/td&gt;
&lt;td&gt;MP001&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PG311&lt;/td&gt;
&lt;td&gt;Exclusion constraint (table rewrite)&lt;/td&gt;
&lt;td&gt;MP027&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;MigrationPilot covers all of Atlas's PostgreSQL-specific rules, plus 73 additional checks that Atlas does not have — including foreign key lock analysis, lock_timeout enforcement, VACUUM FULL detection, enum type safety, trigger cascade analysis, and more.&lt;/p&gt;
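&lt;p&gt;As a sketch of what a lock_timeout rule enforces: set a timeout before any DDL so the migration fails fast instead of queueing behind long-running queries and blocking every statement that arrives after it. The table and column names here are illustrative, not taken from either tool's output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Fail fast if the lock can't be acquired within 2 seconds
SET lock_timeout = '2s';
ALTER TABLE orders ADD COLUMN notes text;
-- If this times out, retry later instead of stalling production traffic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;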

&lt;h2&gt;
  
  
  When You Should Still Use Atlas
&lt;/h2&gt;

&lt;p&gt;This is not an anti-Atlas article. Atlas is a good product with capabilities that go beyond linting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-database support:&lt;/strong&gt; If you run MySQL, MariaDB, SQLite, and PostgreSQL, Atlas handles all of them. Squawk and MigrationPilot are PostgreSQL-only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema-as-code:&lt;/strong&gt; Atlas's declarative HCL schema and drift detection are features that linters don't provide.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managed migration planning:&lt;/strong&gt; Atlas can generate migration files from schema diffs, which is a migration runner feature, not a linter feature.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise features:&lt;/strong&gt; Audit trails, access control, and deployment orchestration for larger teams.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your team needs a full migration management platform and can justify the cost, Atlas Pro is still a strong choice. The alternatives listed here are specifically for teams that need free, open-source migration &lt;em&gt;linting&lt;/em&gt; in CI.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Question: Why Lint Migrations at All?
&lt;/h2&gt;

&lt;p&gt;If you are considering dropping migration linting from your CI pipeline rather than switching tools, consider the cost of not linting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GoCardless&lt;/strong&gt; had a 15-second API outage from a foreign key constraint that locked two tables simultaneously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resend&lt;/strong&gt; suffered a 12-hour outage from a production DROP operation.&lt;/li&gt;
&lt;li&gt;Every &lt;code&gt;CREATE INDEX&lt;/code&gt; without CONCURRENTLY on a table with more than a few thousand rows blocks all writes until the index build completes.&lt;/li&gt;
&lt;li&gt;Every &lt;code&gt;ALTER TABLE ... SET NOT NULL&lt;/code&gt; without the CHECK constraint pattern scans the entire table under an ACCESS EXCLUSIVE lock.&lt;/li&gt;
&lt;/ul&gt;
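&lt;p&gt;For reference, the safe SET NOT NULL pattern mentioned above splits the work so no step holds a strong lock for long. On PostgreSQL 12+, a validated CHECK constraint lets SET NOT NULL skip the full-table scan entirely (table and constraint names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Step 1: add the constraint without scanning existing rows (brief lock)
ALTER TABLE orders ADD CONSTRAINT orders_customer_id_not_null
  CHECK (customer_id IS NOT NULL) NOT VALID;

-- Step 2: validate under a weaker lock; writes continue during the scan
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_id_not_null;

-- Step 3: SET NOT NULL sees the validated constraint and skips the scan (PG 12+)
ALTER TABLE orders ALTER COLUMN customer_id SET NOT NULL;
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_not_null;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;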

&lt;p&gt;These are not theoretical risks. They are recurring patterns that cause production incidents at companies of every size. A migration linter in CI is the cheapest insurance against these issues — it catches the mistake before it reaches production, not after.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Atlas lint is no longer free&lt;/strong&gt; as of v0.38 (October 2025). The Community Edition has basic linting but not the PostgreSQL-specific rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Squawk&lt;/strong&gt; is the most established free alternative with 32 rules and ~600K downloads/month.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;strong_migrations&lt;/strong&gt; is excellent if you use Rails.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eugene&lt;/strong&gt; has a unique dynamic tracing approach but is early-stage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MigrationPilot&lt;/strong&gt; has the most rules (77 free) with lock classification and auto-fix, but is newer and less battle-tested.&lt;/li&gt;
&lt;li&gt;All four free tools can run in CI, though Eugene currently lacks a marketplace GitHub Action.&lt;/li&gt;
&lt;li&gt;Pick the one that fits your stack. If you use Rails, use strong_migrations. If you want the established option, use Squawk. If you want the most coverage, try MigrationPilot.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>Squawk vs MigrationPilot: PostgreSQL Migration Linters Compared</title>
      <dc:creator>Mickel Samuel</dc:creator>
      <pubDate>Mon, 02 Mar 2026 14:05:03 +0000</pubDate>
      <link>https://dev.to/mickelsamuel/squawk-vs-migrationpilot-postgresql-migration-linters-compared-13e4</link>
      <guid>https://dev.to/mickelsamuel/squawk-vs-migrationpilot-postgresql-migration-linters-compared-13e4</guid>
      <description>&lt;p&gt;Squawk and MigrationPilot are both open-source PostgreSQL migration linters. Both analyze SQL files for dangerous operations and both integrate with CI. But they differ significantly in approach, rule depth, and what information they give you. This comparison is published on the MigrationPilot blog — take that into account and verify claims against the source code of both tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Squawk&lt;/th&gt;
&lt;th&gt;MigrationPilot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Language&lt;/td&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;TypeScript&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Safety rules&lt;/td&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;80 (77 free, 3 paid)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lock type classification&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Risk scoring&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;RED/YELLOW/GREEN&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto-fix&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;12 rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Action&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes (with PR comments)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VS Code extension&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Config file&lt;/td&gt;
&lt;td&gt;.squawk.toml&lt;/td&gt;
&lt;td&gt;.migrationpilotrc.yml&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output formats&lt;/td&gt;
&lt;td&gt;Text, TSV, JSON&lt;/td&gt;
&lt;td&gt;CLI, JSON, SARIF, Markdown&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Production context&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (paid, 3 rules)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community&lt;/td&gt;
&lt;td&gt;~1,000 stars, 600K downloads/mo&lt;/td&gt;
&lt;td&gt;New project&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price&lt;/td&gt;
&lt;td&gt;Free (100%)&lt;/td&gt;
&lt;td&gt;Free (97% of rules), $19/mo for production context&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Where Squawk Is Better
&lt;/h2&gt;

&lt;p&gt;Let's start with where Squawk wins. Being honest about this matters more than marketing.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Speed
&lt;/h3&gt;

&lt;p&gt;Squawk is written in Rust and is fast. For typical migration files (1-50 statements), both tools complete in under a second, so this doesn't matter in practice. But if you have very large migration files (thousands of statements), Squawk's Rust parser will be noticeably faster than MigrationPilot's TypeScript + WASM parser.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Maturity and community
&lt;/h3&gt;

&lt;p&gt;Squawk has ~1,000 GitHub stars and ~600,000 downloads per month. It has been used in production by many teams for years. MigrationPilot is a new project with a small community. If you value battle-tested stability and want a tool that has already encountered and handled edge cases in the wild, Squawk has a significant advantage.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. 100% free
&lt;/h3&gt;

&lt;p&gt;Squawk is completely free with no paid tier. Every feature is available to everyone. MigrationPilot has 3 rules (out of 80) that require a $19/month subscription. While 97% of MigrationPilot's rules are free, Squawk's fully-free model is simpler and carries no risk of future paywall expansion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where MigrationPilot Is Better
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Rule depth (80 vs 32)
&lt;/h3&gt;

&lt;p&gt;MigrationPilot has 80 safety rules compared to Squawk's 32. The additional 48 rules cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data safety: TRUNCATE TABLE, DROP CASCADE, unvalidated enum changes&lt;/li&gt;
&lt;li&gt;Best practices: VARCHAR vs TEXT, TIMESTAMP vs TIMESTAMPTZ, missing IF NOT EXISTS&lt;/li&gt;
&lt;li&gt;Lock safety: VACUUM FULL, LOCK TABLE, DISABLE TRIGGER, CLUSTER&lt;/li&gt;
&lt;li&gt;Partition safety: partition key in PK, detach partition concurrently&lt;/li&gt;
&lt;li&gt;Extension safety: PostGIS spatial indexes, pgvector HNSW vs IVFFlat&lt;/li&gt;
&lt;li&gt;Transaction safety: uncommitted transactions, concurrent ops in transactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core rules overlap substantially — both tools catch CREATE INDEX without CONCURRENTLY, volatile defaults, and column type changes. The difference is in the long tail: patterns that are less common but equally dangerous when they occur.&lt;/p&gt;
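&lt;p&gt;To make the long tail concrete, here are the kinds of statements those extra rule categories target. This is illustrative SQL under assumed table names, not verified linter output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Lock safety: ACCESS EXCLUSIVE lock for the duration of a full table rewrite
VACUUM FULL orders;

-- Best practices: timestamp without time zone silently drops UTC offsets
CREATE TABLE events (occurred_at timestamp);

-- Partition safety: plain DETACH takes a strong lock; CONCURRENTLY (PG 14+) does not
ALTER TABLE measurements DETACH PARTITION measurements_2025 CONCURRENTLY;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;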

&lt;h3&gt;
  
  
  2. Lock type classification
&lt;/h3&gt;

&lt;p&gt;When MigrationPilot flags a dangerous statement, it tells you &lt;em&gt;which lock&lt;/em&gt; the statement acquires. This is important because different locks have different impacts:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# MigrationPilot output
  MP001 [critical] CREATE INDEX without CONCURRENTLY
    Acquires: SHARE lock (blocks writes, allows reads)
    Table: orders
    Safe alternative:
      CREATE INDEX CONCURRENTLY idx_orders_customer ON orders (customer_id);

# Squawk output
migrations/001.sql:1:1: warning: prefer-create-index-concurrently
  Instead of `CREATE INDEX`, use `CREATE INDEX CONCURRENTLY`.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Squawk tells you the operation is dangerous and suggests the fix. MigrationPilot also tells you the specific lock type (SHARE, ACCESS EXCLUSIVE, SHARE UPDATE EXCLUSIVE, etc.), which helps you understand the actual impact on running queries. A SHARE lock blocks writes but allows reads; an ACCESS EXCLUSIVE lock blocks everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Auto-fix
&lt;/h3&gt;

&lt;p&gt;MigrationPilot can automatically fix 12 rule violations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Preview what would change
npx migrationpilot analyze migration.sql --fix --dry-run

# Apply fixes directly
npx migrationpilot analyze migration.sql --fix

# Example: MP001 auto-fix
# Before: CREATE INDEX idx_orders_customer ON orders (customer_id);
# After:  CREATE INDEX CONCURRENTLY idx_orders_customer ON orders (customer_id);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Squawk does not have auto-fix. It reports the violation and links to documentation, but you make the change manually. For the 12 rules MigrationPilot can auto-fix, the fix is a deterministic text transformation — not a heuristic guess.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Risk scoring and prioritization
&lt;/h3&gt;

&lt;p&gt;MigrationPilot assigns a risk score to each migration: RED (critical — likely to cause outage), YELLOW (warning — may cause issues), GREEN (safe). This helps teams with many migration files prioritize which ones to fix first.&lt;/p&gt;

&lt;p&gt;Squawk reports violations with severity levels but does not provide an aggregate risk score for the migration as a whole.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Multiple output formats
&lt;/h3&gt;

&lt;p&gt;MigrationPilot outputs in four formats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CLI:&lt;/strong&gt; Colored terminal output with lock types, timing, and risk scores&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON:&lt;/strong&gt; Structured, versioned schema for programmatic consumption&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SARIF:&lt;/strong&gt; For GitHub Code Scanning and IDE integration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markdown:&lt;/strong&gt; For documentation, wikis, or PR comments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SARIF output is particularly useful — it integrates with GitHub's Code Scanning to show violations directly in the "Files changed" tab of pull requests. Squawk outputs in text, TSV, and JSON formats.&lt;/p&gt;
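&lt;p&gt;Wiring SARIF into Code Scanning is a two-step workflow: produce a SARIF file, then hand it to GitHub's standard upload action. The upload action below is real; the exact MigrationPilot flag names and output path are assumptions, so check the tool's docs before copying this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: emit SARIF, then upload it to Code Scanning
- name: Lint migrations
  run: npx migrationpilot analyze migrations/*.sql --format sarif --output results.sarif

- name: Upload results
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: results.sarif
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;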

&lt;h2&gt;
  
  
  Rule Overlap
&lt;/h2&gt;

&lt;p&gt;Most of Squawk's 32 rules have equivalents in MigrationPilot. Here are the key mappings:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Check&lt;/th&gt;
&lt;th&gt;Squawk&lt;/th&gt;
&lt;th&gt;MigrationPilot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CREATE INDEX without CONCURRENTLY&lt;/td&gt;
&lt;td&gt;prefer-create-index-concurrently&lt;/td&gt;
&lt;td&gt;MP001&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NOT NULL without default&lt;/td&gt;
&lt;td&gt;adding-not-nullable-field&lt;/td&gt;
&lt;td&gt;MP002&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Volatile default value&lt;/td&gt;
&lt;td&gt;adding-field-with-default&lt;/td&gt;
&lt;td&gt;MP003&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Column type change&lt;/td&gt;
&lt;td&gt;changing-column-type&lt;/td&gt;
&lt;td&gt;MP007&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SET NOT NULL&lt;/td&gt;
&lt;td&gt;adding-required-field&lt;/td&gt;
&lt;td&gt;MP018&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FK without NOT VALID&lt;/td&gt;
&lt;td&gt;adding-foreign-key-constraint&lt;/td&gt;
&lt;td&gt;MP005&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DROP INDEX without CONCURRENTLY&lt;/td&gt;
&lt;td&gt;prefer-drop-index-concurrently&lt;/td&gt;
&lt;td&gt;MP009&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SERIAL vs IDENTITY&lt;/td&gt;
&lt;td&gt;prefer-identity&lt;/td&gt;
&lt;td&gt;MP015&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The overlap confirms that both tools catch the most critical migration safety issues. The difference is in coverage breadth: MigrationPilot has 48 additional rules for patterns that Squawk does not check.&lt;/p&gt;
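&lt;p&gt;As one concrete case from the table, the safe foreign key pattern both tools steer you toward splits the operation in two, so the expensive row scan never blocks writes (names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Add the FK without scanning existing rows (brief lock only)
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;

-- Validate separately under a weaker lock; writes continue
ALTER TABLE orders VALIDATE CONSTRAINT fk_orders_customer;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;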

&lt;h2&gt;
  
  
  CI Integration
&lt;/h2&gt;

&lt;p&gt;Both tools provide GitHub Actions. Here is how they look in a workflow:&lt;/p&gt;

&lt;h3&gt;
  
  
  Squawk GitHub Action
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- uses: sbdchd/squawk-action@v1
  with:
    pattern: "migrations/*.sql"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  MigrationPilot GitHub Action
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- uses: mickelsamuel/migrationpilot@v1
  with:
    paths: "migrations/*.sql"
    fail-on: critical
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both actions post comments on pull requests with the violations found. MigrationPilot additionally provides inline annotations in the "Files changed" tab and a Job Summary with aggregate metrics. Squawk's action is simpler and more focused.&lt;/p&gt;

&lt;h2&gt;
  
  
  Known Limitations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Squawk's known issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cannot analyze PL/pgSQL function bodies or DO blocks (GitHub issues #411, #528)&lt;/li&gt;
&lt;li&gt;False positives in some valid contexts (#937, #973)&lt;/li&gt;
&lt;li&gt;Exit code 1 for both errors and warnings — no way to distinguish (#348)&lt;/li&gt;
&lt;li&gt;No cross-file validation or schema-level context&lt;/li&gt;
&lt;li&gt;Pre-commit integration has been broken since June 2024 (#363)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  MigrationPilot's known issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;New project — fewer real-world battle-testing hours&lt;/li&gt;
&lt;li&gt;TypeScript is slower than Rust on very large files (sub-second for typical use)&lt;/li&gt;
&lt;li&gt;Small community — fewer contributors finding and fixing edge cases&lt;/li&gt;
&lt;li&gt;3 rules behind a paywall ($19/month for production context analysis)&lt;/li&gt;
&lt;li&gt;No dynamic lock tracing (purely static analysis, like Squawk)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use Squawk
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You want a proven, battle-tested tool.&lt;/strong&gt; Squawk has years of production usage and a larger community. If stability and track record matter most, use Squawk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want 100% free with zero paid components.&lt;/strong&gt; Squawk is fully free with no paid tier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need Rust-native speed.&lt;/strong&gt; If you lint thousands of migration files in CI, Squawk's Rust parser is faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want minimal dependencies.&lt;/strong&gt; Squawk is a single binary. MigrationPilot requires Node.js.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use MigrationPilot
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You want the most comprehensive rule set.&lt;/strong&gt; 80 rules cover patterns that Squawk's 32 rules do not check.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want lock type information.&lt;/strong&gt; Knowing &lt;em&gt;which&lt;/em&gt; lock a statement acquires helps you assess real-world impact.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want auto-fix.&lt;/strong&gt; 12 rules can be automatically fixed with &lt;code&gt;--fix&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want SARIF output for GitHub Code Scanning.&lt;/strong&gt; Violations appear directly in PR diffs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need production context analysis.&lt;/strong&gt; Table size and query impact awareness (paid tier) is unique to MigrationPilot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want config presets.&lt;/strong&gt; Five built-in presets (recommended, strict, ci, startup, enterprise) for different deployment contexts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Can You Use Both?
&lt;/h2&gt;

&lt;p&gt;Yes. Both tools analyze plain SQL files and can run independently in CI. Running both gives you the union of their rule sets and two independent opinions on each migration. The overhead is minimal — both complete in under a second for typical migrations.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run both in CI

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;name: Squawk lint&lt;br&gt;
uses: sbdchd/squawk-action@v1&lt;br&gt;
with:&lt;br&gt;
pattern: "migrations/*.sql"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;name: MigrationPilot lint&lt;br&gt;
uses: mickelsamuel/migrationpilot@v1&lt;br&gt;
with:&lt;br&gt;
paths: "migrations/*.sql"&lt;br&gt;
fail-on: critical&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Squawk&lt;/strong&gt; is the established choice: proven, fast, 100% free, and sufficient for catching the most common PostgreSQL migration hazards.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MigrationPilot&lt;/strong&gt; offers more coverage: 80 vs 32 rules, lock type classification, auto-fix, risk scoring, and SARIF output. But it's newer and less battle-tested.&lt;/li&gt;
&lt;li&gt;Both tools catch the critical issues (missing CONCURRENTLY, volatile defaults, column type changes, SET NOT NULL).&lt;/li&gt;
&lt;li&gt;The core rules overlap significantly. The difference is in the long tail of less common but still dangerous patterns.&lt;/li&gt;
&lt;li&gt;If stability and simplicity matter most, use Squawk. If coverage and features matter most, try MigrationPilot. If you want maximum safety, use both.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
