Philip McClarence
PostgreSQL vs MySQL vs SQLite: Choosing the Right Database for Your Project

Every project starts with the same question. Do I need a full database server, or is SQLite enough?

The answer depends on exactly three things: how many users write concurrently, how much data you have, and whether you need network access to your database. Get these three right and the choice becomes obvious. Get them wrong and you spend months migrating away from a decision that seemed fine at the start.

I have run all three in production. SQLite powers a personal analytics tool that handles 2 million rows without breaking a sweat. MySQL runs a legacy content management system that I maintain but would not choose again. PostgreSQL runs everything else -- including the monitoring platform I built at myDBA.dev. Each one earned its place, and each one would be wrong for the other two use cases.

This is not a ranking. These are three different tools that happen to speak SQL. Understanding what makes each one different -- at the architecture level, not the marketing level -- is the fastest way to make a decision you will not regret.

What Each One Actually Is

Before comparing features, you need to understand a fundamental architectural difference that drives everything else.

SQLite is an embedded database. There is no server process. No TCP connections. No authentication layer. The entire database engine compiles into your application as a C library, and your database is a single file on disk. When your application calls SELECT, the SQLite code runs inside your process, reads from the file, and returns results. No network round-trip. No inter-process communication. Just a function call.

This is not a limitation. It is the design. SQLite was purpose-built for applications where the database and the application are the same program -- mobile apps, desktop software, embedded devices, IoT firmware. It is the most deployed database engine in the world, running on every smartphone, every web browser, and most operating systems.
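The whole model fits in a few lines. Here is a minimal sketch using Python's built-in sqlite3 module (the file name app.db is an arbitrary choice):

```python
import sqlite3

# Opening the database is opening a file -- no server process, no credentials.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
cur = conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.commit()

# This SELECT is a direct function call into the SQLite library running
# inside this process -- no network round-trip, no inter-process hop.
row = conn.execute("SELECT name FROM users WHERE id = ?", (cur.lastrowid,)).fetchone()
conn.close()
```

Deleting app.db deletes the database; that is the entire uninstall procedure.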

MySQL is a client-server database. A separate mysqld process runs continuously, listens on a network port (default 3306), and handles connections from applications. It uses a thread-per-connection model -- each client connection gets a thread within the single MySQL process. InnoDB, the default storage engine, provides transactions, row-level locking, and crash recovery.

PostgreSQL is also client-server, but with a multi-process architecture. Each client connection gets its own operating system process, forked from the main postmaster process. This provides stronger isolation -- a crashed connection cannot take down other sessions -- but comes with higher per-connection overhead, which is why connection pooling (PgBouncer, pgcat, Supavisor) is standard practice in production.

One critical clarification: SQLite is not "MySQL lite." The name refers to the lightweight architecture, not a reduced feature set. SQLite and MySQL share no code, no design philosophy, and no common ancestry. Thinking of SQLite as a stripped-down server database leads to wrong conclusions about when to use it.

Architecture: File vs Client-Server

The architectural difference between embedded and client-server cascades into every operational aspect.

Deployment. SQLite's deployment is copying a file. No installation, no configuration, no daemon management. Your database is a regular file you can back up with cp and inspect with the sqlite3 CLI. MySQL and PostgreSQL require server installation, memory configuration, authentication setup, daemon management, and backup infrastructure. That overhead is justified when you need network access and multi-user concurrency -- but it is real overhead.

Connections. SQLite has no connections -- your application opens the file directly. MySQL uses a thread-per-connection model within a single process, scaling efficiently to thousands of connections. PostgreSQL forks a new OS process per connection, providing stronger isolation but requiring a connection pooler (PgBouncer, pgcat) for production deployments beyond a few hundred connections.

Backup and recovery. SQLite backups are file copies. MySQL and PostgreSQL both support logical backups (mysqldump/pg_dump) and point-in-time recovery via WAL/binlog archiving. Litestream can add continuous replication to SQLite by streaming WAL changes to S3.
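A plain cp is safe only when no writer is active; for a live database, SQLite ships an online backup API that copies pages while the source stays usable. A sketch via Python's sqlite3 bindings (file names are illustrative):

```python
import sqlite3

src = sqlite3.connect("metrics.db")
src.execute("CREATE TABLE IF NOT EXISTS samples (ts INTEGER, value REAL)")
src.execute("INSERT INTO samples VALUES (1, 0.5)")
src.commit()

# Online backup: copies the database page by page while the source
# connection remains usable -- safer than cp against a live writer.
dst = sqlite3.connect("metrics_backup.db")
src.backup(dst)

copied = dst.execute("SELECT count(*) FROM samples").fetchone()[0]
src.close()
dst.close()
```

Litestream uses the same WAL machinery to stream those changes to object storage continuously rather than on demand.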

Concurrency: The Critical Differentiator

This is where the three databases diverge most sharply, and where most wrong decisions originate.

SQLite: One Writer at a Time

SQLite uses database-level locking. In WAL (Write-Ahead Logging) mode -- which you should always enable -- multiple readers can operate simultaneously while a single writer holds the lock. If a second writer tries to write while the first is active, it gets SQLITE_BUSY and must retry.

```sql
-- Enable WAL mode (do this once, it persists)
PRAGMA journal_mode=WAL;
```

This sounds limiting, but consider the math. A typical web request that writes a single row takes 1-5 milliseconds of actual write time. At 5ms per write, SQLite can handle 200 writes per second sequentially. For a personal blog, a small SaaS tool, or a content site with moderate traffic, 200 writes per second is more than enough.

The bottleneck appears when you have sustained concurrent writes -- multiple users editing simultaneously, real-time collaborative features, high-volume event ingestion. If your write queue grows faster than SQLite can drain it, response times spike and SQLITE_BUSY errors multiply.
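In practice you rarely handle SQLITE_BUSY by hand. You set a busy timeout, and SQLite itself waits and retries internally before surfacing the error. A sketch (the 5-second value is an arbitrary choice):

```python
import sqlite3

# timeout=5.0 tells this connection to wait up to 5 seconds for the
# write lock before surfacing SQLITE_BUSY.
conn = sqlite3.connect("app.db", timeout=5.0)
conn.execute("PRAGMA journal_mode=WAL")   # readers no longer block the writer
conn.execute("PRAGMA busy_timeout=5000")  # same wait, set at the SQLite level

conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")
# If another process holds the write lock, this blocks for up to 5 seconds
# instead of failing immediately.
conn.execute("INSERT INTO events (payload) VALUES (?)", ("signup",))
conn.commit()

timeout_ms = conn.execute("PRAGMA busy_timeout").fetchone()[0]
conn.close()
```

With a sensible timeout, occasional lock contention turns into a few milliseconds of waiting rather than an error your application has to catch.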

MySQL: Row-Level Locking with InnoDB

MySQL's InnoDB engine provides row-level locking. Multiple transactions can write to different rows simultaneously without blocking each other. Locks are held only on the specific rows being modified (plus gap locks for range queries to prevent phantom reads).

```sql
-- Two transactions can update different rows concurrently
-- Transaction A:
UPDATE orders SET status = 'shipped' WHERE order_id = 1001;

-- Transaction B (runs simultaneously, no blocking):
UPDATE orders SET status = 'shipped' WHERE order_id = 1002;
```

This scales to thousands of concurrent writers on different rows. Contention only occurs when multiple transactions touch the same rows.

PostgreSQL: MVCC with Row-Level Locking

PostgreSQL uses Multi-Version Concurrency Control (MVCC). When a row is updated, PostgreSQL writes a new version and keeps the old one visible to transactions that started before the update. Readers never block writers. Writers never block readers. Two writers updating the same row will serialize -- the second waits for the first to commit or abort -- but different rows proceed in parallel.

```sql
-- PostgreSQL MVCC: readers see consistent snapshots
-- Session A starts a long-running report:
BEGIN;
SELECT count(*) FROM orders WHERE created_at > '2026-01-01';

-- Session B (runs simultaneously, no blocking):
INSERT INTO orders (customer_id, total) VALUES (42, 99.99);
-- Session A's count is unaffected -- it sees the snapshot from when it started
```

The tradeoff: old row versions accumulate as dead tuples, and PostgreSQL needs VACUUM (usually autovacuum) to reclaim them. This is the single most important operational concern with PostgreSQL, and the one that trips up teams who are new to it.

When SQLite's Concurrency Becomes a Bottleneck

The honest answer: later than most people think, but earlier than SQLite enthusiasts claim.

  • Under 50 concurrent users, moderate writes: SQLite is fine. The write lock contention is negligible.
  • 50-200 concurrent users, regular writes: You will start seeing occasional SQLITE_BUSY errors. Retry logic and connection timeouts become important. It works, but you are managing around a limitation.
  • 200+ concurrent users, frequent writes: Migrate to PostgreSQL or MySQL. The write serialization creates a hard ceiling that no amount of optimization can remove.

Read-heavy workloads shift these thresholds significantly upward. A site with 10,000 concurrent readers and 5 writes per second runs perfectly on SQLite.

Feature Comparison: More Overlap Than You Expect

SQLite supports more SQL features than most developers realize. But the gaps between SQLite and the server databases are real and matter for certain workloads.

What SQLite Supports (More Than You Think)

CTEs (including recursive), window functions (ROW_NUMBER, RANK, LAG, LEAD), JSON (json_extract, json_each, json_group_array), full-text search (FTS5), generated columns, upsert (ON CONFLICT DO UPDATE), and partial indexes. For many applications, SQLite's SQL is sufficient.
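Three of those headline features -- upsert, window functions, and JSON -- can be exercised together from a stock modern sqlite3 build (table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (player TEXT PRIMARY KEY, points INTEGER, meta TEXT)")

# Upsert: the second insert for 'ada' updates the row instead of failing.
for pts in (10, 25):
    conn.execute(
        """INSERT INTO scores VALUES ('ada', ?, '{"team": "blue"}')
           ON CONFLICT(player) DO UPDATE SET points = excluded.points""",
        (pts,),
    )
conn.execute("""INSERT INTO scores VALUES ('bob', 15, '{"team": "red"}')""")

# Window function and JSON extraction in a single query.
rows = conn.execute(
    """SELECT player,
              RANK() OVER (ORDER BY points DESC) AS pos,
              json_extract(meta, '$.team') AS team
       FROM scores
       ORDER BY points DESC"""
).fetchall()
conn.close()
```

Upsert needs SQLite 3.24+, window functions 3.25+; both have shipped with Python's bundled SQLite for years.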

What SQLite Lacks

ALTER TABLE is limited -- you can add, rename, or drop columns, but not change a column's type or constraints. No stored procedures, no role-based access control, no native replication, no array or range types, no LISTEN/NOTIFY. Complex schema changes require creating a new table, copying the data, dropping the old table, and renaming.
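The workaround for unsupported ALTERs is the classic rebuild pattern: create, copy, swap. A sketch of changing a column's type (schema names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, age TEXT)")  # wrong type
conn.execute("INSERT INTO users VALUES (1, '42')")

with conn:  # one transaction: either the whole rebuild lands or none of it
    conn.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, age INTEGER)")
    conn.execute("INSERT INTO users_new SELECT id, CAST(age AS INTEGER) FROM users")
    conn.execute("DROP TABLE users")
    conn.execute("ALTER TABLE users_new RENAME TO users")

age = conn.execute("SELECT age FROM users WHERE id = 1").fetchone()[0]
conn.close()
```

A real migration also has to recreate indexes, triggers, and views, and manage foreign key enforcement around the swap -- which is exactly why these changes are painful compared to a single ALTER COLUMN on a server database.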

Where PostgreSQL Pulls Ahead

PostgreSQL's feature set is in a different category:

  • Extensions: pgvector (AI embeddings), PostGIS (geospatial), TimescaleDB (time-series), pg_trgm (fuzzy text search)
  • JSONB: Binary JSON with GIN indexing and containment operators -- a document database inside your relational database
  • Advanced types: Arrays, ranges, composite types, enums, domains
  • Parallel queries: Multi-core execution for scans, joins, and aggregations
  • Row-level security: Fine-grained access policies per table
  • Logical replication: Replicate specific tables to specific subscribers
  • LISTEN/NOTIFY: Built-in pub-sub for real-time events

MySQL sits between the two. It has JSON support (less ergonomic than JSONB), CTEs, window functions, and mature replication. It lacks PostgreSQL's extension ecosystem, advanced type system, and parallel query execution.

The SQLite Renaissance

Something interesting happened in the last few years. SQLite went from "that embedded database for mobile apps" to a serious contender for web application backends.

Litestream provides continuous replication of SQLite databases to S3-compatible storage. Your SQLite database gets point-in-time recovery and disaster recovery without a server process -- the replication runs as a sidecar that streams WAL changes to object storage.

LiteFS (from Fly.io) enables distributed SQLite with a primary-replica model. One node writes, replicas get near-instant copies via FUSE filesystem interception. This gives you read scaling across multiple nodes with a single-file database.

Turso (libSQL) is a fork of SQLite that adds a server mode, HTTP API, and edge replication. You get SQLite's simplicity with network access and multi-node distribution.

Cloudflare D1 runs SQLite at the edge on Cloudflare's network. Your database runs in the same data center as your user, with single-digit millisecond latency.

Rails 8 made SQLite the default database for new applications, with built-in support for Solid Cache, Solid Queue, and Solid Cable all backed by SQLite. This was a strong signal from the Rails community that SQLite is production-ready for a wide range of web applications.

The common thread: SQLite's speed and simplicity are being combined with infrastructure that addresses its traditional weaknesses (no replication, no network access, no multi-node distribution). In 2026, SQLite at the edge is a legitimate architecture pattern.

Performance: It Depends on the Workload

Benchmarks without context are meaningless. Here is what actually matters.

SQLite Wins: Simple Reads

For single-user read operations, nothing beats SQLite. There is no network round-trip, no connection overhead, no query parsing on a separate server. A simple SELECT is a function call that reads from a local file.

Typical latency for a primary key lookup:

  • SQLite: 5-20 microseconds (in-process, local file)
  • MySQL: 200-500 microseconds (network + thread dispatch + query)
  • PostgreSQL: 300-600 microseconds (network + process dispatch + query)

That 10-50x difference in read latency matters for applications that perform hundreds of database reads per page render.
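The SQLite side of that gap is easy to observe yourself. A rough timing sketch for primary-key lookups -- note that going through the Python driver adds its own overhead, so the numbers will run higher than raw C-level figures and will vary by hardware:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO kv VALUES (?, ?)", [(i, f"v{i}") for i in range(10_000)])
conn.commit()

# Time n primary-key lookups and report the per-lookup average.
n = 10_000
start = time.perf_counter()
for i in range(n):
    conn.execute("SELECT val FROM kv WHERE id = ?", (i % 10_000,)).fetchone()
per_lookup_us = (time.perf_counter() - start) / n * 1e6
print(f"~{per_lookup_us:.1f} microseconds per primary-key lookup")
conn.close()
```

Running the equivalent loop against a networked MySQL or PostgreSQL instance would add a round-trip to every iteration, which is the point of the comparison.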

MySQL Wins: Simple High-Concurrency OLTP

For workloads dominated by single-row inserts and primary key lookups with thousands of concurrent connections, MySQL's thread model has lower per-connection overhead. The clustered index in InnoDB means primary key lookups are a single B-tree traversal to the actual row data.

PostgreSQL Wins: Complex Queries

For analytical queries, multi-table joins, window functions, and complex aggregations, PostgreSQL's query planner is substantially more sophisticated. A complex reporting query that takes 45 seconds on MySQL might finish in 2 seconds on PostgreSQL -- same data, same hardware. The optimizer considers more join strategies, handles CTEs more efficiently (materializing only when beneficial), and can parallelize execution across CPU cores.

PostgreSQL Wins: Mixed Workloads

When your application needs both fast OLTP and complex analytics against the same data, PostgreSQL handles the combination better. MVCC ensures long-running reports do not block writes, and parallel query keeps analytical queries fast even under write load.

When to Use SQLite

  • Desktop applications, mobile apps, or embedded systems
  • Single-server web apps with low write concurrency (personal blogs, small SaaS)
  • Development and testing environments
  • Edge computing where data locality matters
  • Any project where zero operational overhead is a priority

Real-world production uses: Rails 8 applications, Pieter Levels' multi-million-dollar solo SaaS empire, Expensify (billions of queries per day), every iPhone and Android device.

When to Use MySQL

  • WordPress, Drupal, or PHP-ecosystem CMS platforms
  • Teams with deep MySQL expertise and straightforward CRUD workloads
  • Hosting environments that provide managed MySQL but not managed PostgreSQL

MySQL remains widely deployed, but its innovation pace has slowed relative to PostgreSQL. Many of MySQL's recent features (CTEs, window functions, CHECK constraints) are additions PostgreSQL has had for years. The ecosystem gravity is shifting: major frameworks, ORMs, and cloud platforms are increasingly PostgreSQL-first.

When to Use PostgreSQL

  • Multiple users write concurrently in production
  • Complex queries -- CTEs, window functions, lateral joins, recursive queries
  • Extensions needed -- pgvector for AI, PostGIS for geospatial, TimescaleDB for time-series
  • Data integrity is critical -- CHECK constraints, exclusion constraints, row-level security
  • JSON and relational data in the same database (JSONB with GIN indexes)
  • Anything that might grow beyond a single server

PostgreSQL is the safe default for server-side applications in 2026. If you are unsure and your application runs on a server, PostgreSQL is the answer. You will never outgrow its feature set, and the extension ecosystem means you can add capabilities without changing databases.

The Growth Path: SQLite to PostgreSQL

A pragmatic approach for new projects: start with SQLite, migrate to PostgreSQL when you need to.

This works well if you plan for it:

  1. Use an ORM or query builder that abstracts database-specific SQL. SQLAlchemy, Prisma, Drizzle, Django ORM, and ActiveRecord all support both SQLite and PostgreSQL.

  2. Avoid SQLite-specific features. Do not rely on dynamic typing, PRAGMA statements, or SQLite's permissive type coercion. Write your code as if types are enforced from day one.

  3. Test against PostgreSQL periodically. Run your test suite against both databases in CI. This catches SQL incompatibilities before they become migration blockers.

  4. Plan the migration. Tools like pgloader handle type mapping, but custom types, triggers, and application-level assumptions need manual attention.

The migration from SQLite to PostgreSQL is smoother than MySQL to PostgreSQL -- SQLite's SQL dialect is closer to standard SQL.
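Point 2 above is worth seeing concretely: by default, SQLite will happily store a string in an INTEGER column, which PostgreSQL would reject at insert time. A sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER PRIMARY KEY, qty INTEGER)")

# SQLite stores the string as-is because it cannot coerce it to a number;
# PostgreSQL would reject this INSERT with a type error.
conn.execute("INSERT INTO measurements (qty) VALUES (?)", ("not a number",))
stored = conn.execute("SELECT qty FROM measurements").fetchone()[0]
conn.close()
```

If your SQLite version is 3.37 or newer, declaring the table with STRICT makes it reject such inserts, which keeps your data PostgreSQL-compatible from day one.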

Monitoring at Scale

Once you graduate from SQLite to PostgreSQL for production, monitoring becomes essential. SQLite is self-managing -- no vacuum to tune, no connection pool to size. PostgreSQL gives you far more power, but it requires operational attention: autovacuum health, query performance, connection utilization, index efficiency, and replication lag.

myDBA.dev provides automated health checks, query performance analysis, and index recommendations to keep your PostgreSQL databases running smoothly. It surfaces problems before they affect users and recommends specific fixes -- from autovacuum tuning to missing index suggestions.

[Screenshot: myDBA.dev dashboard showing health score, active connections, query performance trends, and replication status]

[Screenshot: index advisor showing recommended indexes with estimated query impact and creation SQL]

Comparison Table

| Feature | SQLite | MySQL (InnoDB) | PostgreSQL |
| --- | --- | --- | --- |
| Architecture | Embedded (in-process) | Client-server (threaded) | Client-server (multi-process) |
| Deployment | Single file, zero config | Server installation required | Server installation required |
| Concurrency model | Database-level locking (WAL) | Row-level locking | MVCC + row-level locking |
| Max concurrent writers | 1 | Thousands | Thousands |
| Connection overhead | None (in-process) | Low (threads) | Medium (processes, use pooler) |
| ACID compliance | Full | Full (InnoDB) | Full |
| SQL standard conformance | Good | Moderate | Excellent |
| JSON support | json_extract, json_each | JSON type, functions | JSONB with GIN indexes, operators |
| CTEs / Window functions | Yes | Yes (8.0+) | Yes (advanced) |
| Full-text search | FTS5 module | Built-in | tsvector/tsquery + GIN |
| Extensions | Loadable extensions | Limited | Extensive (pgvector, PostGIS, TimescaleDB) |
| Stored procedures | No (app-level only) | Yes | Yes (PL/pgSQL, PL/Python, etc.) |
| Replication | External tools (Litestream) | Built-in (binlog) | Built-in (streaming + logical) |
| Partitioning | No | Yes | Yes (declarative) |
| Parallel queries | No | Limited | Yes (scans, joins, aggregates) |
| Row-level security | No (file-level only) | No | Yes |
| Best read latency | Microseconds (in-process) | Sub-millisecond (network) | Sub-millisecond (network) |
| Operational overhead | Near zero | Moderate | Moderate-high |
| Max practical DB size | ~1 TB (single file) | Petabytes | Petabytes |
| Cost | Free, public domain | Free (GPL) / Commercial | Free (PostgreSQL License) |
| Best for | Embedded, edge, low-write web | Simple web apps, PHP/CMS | Complex apps, extensions, analytics |

The Bottom Line

The decision framework is simpler than most articles make it:

Is the database embedded in the client application? Use SQLite. There is no reason to run a database server for a desktop app, mobile app, or edge function.

Is it a server-side web application with low write concurrency and a single server? SQLite is a legitimate choice. The operational simplicity is a real advantage, and tools like Litestream address the backup and recovery gap.

Is it a server-side application with multiple concurrent users writing data? Use PostgreSQL. The concurrency model, feature set, and extension ecosystem make it the strongest general-purpose choice for server applications in 2026.

Is it a WordPress or PHP-ecosystem project? MySQL remains the pragmatic choice due to ecosystem integration.

Are you unsure? Start with PostgreSQL. You will never outgrow it, and migrating away from PostgreSQL is a problem almost nobody has. Migrating to PostgreSQL is something thousands of teams do every year.

SQLite is not a lesser database -- it is a masterpiece of engineering that does exactly what it was designed to do. PostgreSQL is not overkill for small projects -- it is an investment that scales with you. MySQL is not dead -- it powers a significant portion of the web reliably.

The wrong choice is using any of them without understanding the tradeoffs. Now you do.
