DataFormatHub

Posted on • Originally published at dataformathub.com

Serverless PostgreSQL 2025: The Truth About Supabase, Neon, and PlanetScale

The database landscape for modern applications is in a constant state of flux, with serverless and distributed paradigms pushing the boundaries of what's possible with relational data. As practitioners, we've witnessed significant advancements in the past year, particularly within the PostgreSQL ecosystem, as platforms like Supabase, Neon, and even the traditionally MySQL-centric PlanetScale vie for supremacy in offering robust, scalable, and developer-friendly solutions. This isn't about marketing hype; it's about the tangible architectural shifts, performance gains, and operational efficiencies that are becoming critical for production deployments.

Having recently put these platforms through their paces, I've observed a clear trend: the disaggregation of compute and storage, sophisticated connection management, and the relentless pursuit of "Git-like" developer workflows are no longer aspirational features but table stakes. The numbers tell an interesting story, revealing both impressive strides and persistent challenges.

The Shifting Sands of Database Architectures: From Monolith to Microservices

The traditional PostgreSQL deployment, while robust, often struggles under the demands of modern serverless functions and globally distributed applications. The inherent tight coupling of compute and storage in a standard PostgreSQL instance creates bottlenecks for independent scaling and introduces complexity for high availability and disaster recovery. The past year has seen these platforms double down on architectures that fundamentally address these limitations.

Neon, in particular, has built its entire premise on this disaggregation. Their architecture separates the PostgreSQL compute process, which runs as a stateless microservice, from a durable, multi-tenant storage layer. This design means that a PostgreSQL instance can be spun up or down on demand, pulling only the necessary data from the storage layer to respond to queries. This is a significant departure from traditional setups, where a single instance controls local storage, and it simplifies backups, logical replication, and the provisioning of read replicas. The storage layer itself is designed as a key-value store, integrated with cloud object storage like Amazon S3 or Google Cloud Storage for durability and scalability.

Supabase, while offering a managed PostgreSQL instance, has been evolving its surrounding services to embrace more microservice-oriented patterns. Their migration to a v2 platform architecture, completed for paid plans and gradually rolled out to free plans from March 2024, unbundles services like Storage, Realtime, and the connection pooler (PgBouncer). Previously, these were single-instance services per project. The v2 architecture shifts them to a multi-tenant model, where a single instance serves many projects. This aims to free up resources for the PostgreSQL databases, enable more resource-intensive features, and pave the way for capabilities like zero-downtime scaling. This is a practical optimization, allowing Supabase to offer better resource utilization and potentially more stable performance across its user base.

Supabase's Architectural Evolution: The Edge and Real-time Capabilities

Supabase continues to refine its ecosystem around PostgreSQL, with notable advancements in its Edge Functions and Realtime capabilities. The platform's commitment to providing a comprehensive backend-as-a-service (BaaS) built on open standards remains strong.

Edge Functions: Bringing Compute Closer to Users

Supabase's Edge Functions, powered by Deno, are globally distributed TypeScript functions designed to run close to the user, minimizing latency for HTTP requests. For a deeper look at how this compares to other edge runtimes, check out Cloudflare vs. Deno: The Truth About Edge Computing in 2025. Recent updates in 2025 have made the deployment and management of these functions more streamlined, with direct deployment options from the Supabase dashboard and CLI.

A critical aspect for developers is how these Edge Functions interact with the PostgreSQL database. While Supabase provides an auto-generated RESTful API (PostgREST) and GraphQL API, Edge Functions can also connect directly to the database using any popular PostgreSQL client. This enables running raw SQL within the function, a powerful feature for custom logic that demands low latency and direct data access.

Consider a scenario where you need to perform custom validation or data transformation before writing to the database, or trigger a complex query based on an incoming webhook. An Edge Function can intercept the request, perform the logic, and then interact with PostgreSQL. For example, using the Deno Postgres driver:

// supabase/functions/my-edge-function/index.ts
import { Pool } from 'https://deno.land/x/postgres@v0.17.0/mod.ts';

// Create the pool once, at module scope, so warm invocations reuse it.
// In production, the database URL is automatically configured with SSL.
const pool = new Pool(Deno.env.get('DATABASE_URL')!, 1, true);

Deno.serve(async (req) => {
  try {
    const { some_data } = await req.json();

    // Borrow a pooled connection for this request
    const connection = await pool.connect();
    try {
      // Example: Insert data after some validation
      const result = await connection.queryObject`
        INSERT INTO public.my_table (data_field)
        VALUES (${some_data})
        RETURNING id;
      `;
      return new Response(JSON.stringify({ id: result.rows[0].id }), {
        headers: { 'Content-Type': 'application/json' },
        status: 200,
      });
    } finally {
      connection.release();
    }
  } catch (err) {
    return new Response(String(err?.message ?? err), { status: 500 });
  }
});

This direct connection contrasts with client-side interactions, where the PostgREST API is often preferred for its built-in RLS enforcement and JSON serialization. For Edge Functions, the server-side nature makes direct TCP connections feasible and often more performant for complex, stateful operations.
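For contrast, the same insert from the browser would typically go through the PostgREST API via supabase-js, with Row Level Security enforced automatically. A minimal client-side sketch, where the project URL, anon key, and table shape are placeholder assumptions:

// Client-side: PostgREST via supabase-js, with RLS enforced by the database
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  'https://your-project.supabase.co', // placeholder project URL
  'your-anon-key'                     // placeholder anon key; RLS policies gate access
);

const someData = 'hello world';
const { data, error } = await supabase
  .from('my_table')
  .insert({ data_field: someData })
  .select('id')
  .single();

if (error) throw error;
console.log('Inserted row id:', data.id);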

Real-time: PostgreSQL Logical Replication at Scale

Supabase's Realtime functionality is a key differentiator, enabling live updates for chat applications, collaborative tools, and dashboards. Their architecture leverages PostgreSQL's logical replication feature to stream database changes. Unlike physical replication, which streams raw, block-level binary WAL data, logical replication transmits data changes (inserts, updates, deletes) as logical messages, allowing for fine-grained, table-level subscriptions.

The core of this system involves Supabase creating replication slots on PostgreSQL to stream Write-Ahead Log (WAL) entries. These WAL entries are then parsed and emitted as JSON payloads to clients over WebSocket connections. This design allows for horizontal scalability by decoupling the database from the real-time delivery layer, supporting multiple subscribers with minimal overhead. Supabase uses an Elixir/Phoenix server for its Realtime infrastructure, a deliberate choice due to Elixir's strengths in handling concurrent connections and low-latency messaging. This custom infrastructure was built to overcome limitations of PostgreSQL's native NOTIFY/LISTEN mechanism, particularly its 8000-byte payload limit, which would be insufficient for enterprise-grade real-time capabilities.

The Realtime service also provides Broadcast for ephemeral messages and Presence for tracking shared state, extending beyond just database change notifications. This layered approach gives developers powerful tools for building dynamic applications without deep knowledge of PostgreSQL replication internals.
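From the client's perspective, all three primitives hang off a single channel. Here is a minimal supabase-js sketch; the channel, table, and event names are illustrative assumptions:

import { createClient } from '@supabase/supabase-js';

const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

const channel = supabase
  .channel('room-1')
  // Database changes: WAL entries surfaced as JSON payloads
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'messages' },
    (payload) => console.log('New message:', payload.new)
  )
  // Broadcast: ephemeral messages that never touch the database
  .on('broadcast', { event: 'cursor-move' }, (payload) => console.log(payload))
  .subscribe();

// Presence state is tracked per channel once subscribed, e.g.:
// channel.track({ user: 'alice', online_at: new Date().toISOString() });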

Neon's Serverless PostgreSQL: Disaggregation in Practice

Neon has consistently championed the disaggregation of compute and storage as the fundamental shift for serverless PostgreSQL. Their architecture is a direct response to the limitations of traditional PostgreSQL in cloud-native, serverless environments.

Compute and Storage Separation: The Core Innovation

Neon's architecture splits the PostgreSQL monolith into a stateless compute layer and a durable, multi-tenant storage layer. The compute nodes, which are standard PostgreSQL instances, become ephemeral. When a database is inactive for a configurable period (e.g., five minutes), the compute node is shut down, scaling to zero. Upon a new connection, a compute node is rapidly spun up in a Kubernetes container, connecting to the existing storage system without needing to restore data. This "scale to zero" capability is a significant cost-saving mechanism for intermittent workloads.

The storage layer is an append-only, layered system built for object stores, providing durability and allowing for features like time-travel and branching. Writes to the database are streamed as WAL records to WAL safekeepers, which ensure durability through a Paxos-based consensus mechanism before being processed by the pageserver and uploaded to object storage. This robust storage system allows for independent scaling of compute and storage resources.

To mitigate the "cold start" latency inherent in scaling compute to zero, Neon employs strategies like connection pooling via PgBouncer. PgBouncer allows Neon to support up to 10,000 concurrent connections by maintaining a pool of connections to PostgreSQL, reducing the overhead of establishing new database connections for each client request.

Developers can choose between direct and pooled connection strings. For serverless functions and high concurrency, the pooled connection string, identifiable by the -pooler suffix appended to the endpoint ID in the hostname (e.g., ep-neon-db-pooler.us-east-2.aws.neon.tech), is highly recommended.

// Example of connecting to Neon with pooling
import { Pool } from '@neondatabase/serverless';

const connectionString = process.env.DATABASE_URL_POOLED; // e.g., postgres://user:pass@ep-neon-db-pooler.us-east-2.aws.neon.tech/dbname

const pool = new Pool({ connectionString });

export async function handler(event) {
  const client = await pool.connect();
  try {
    const res = await client.query('SELECT NOW()');
    return {
      statusCode: 200,
      body: JSON.stringify({ time: res.rows[0].now }),
    };
  } finally {
    client.release();
  }
}

Git-like Branching and Time-Travel

One of Neon's standout features, and a significant productivity booster, is its Git-like branching functionality. Thanks to its copy-on-write storage architecture, creating a new branch is an instantaneous operation, regardless of the database size. This new branch is a full, isolated copy of the parent branch's data and schema at the point of creation, yet it only stores the delta of changes, making it extremely cost-effective.

This enables powerful developer workflows:

  • Feature Development: Developers can create a branch for each new feature, experiment without affecting production, and discard the branch easily.
  • Testing: Spin up a dedicated database branch for each CI/CD pipeline run or preview deployment. Neon's integration with Vercel, for instance, can automatically create a branch for each preview deployment.
  • Point-in-Time Recovery (PITR): Neon retains a history of changes (WAL records) for a configurable restore window (e.g., 6 hours on Free, up to 30 days on Scale plans). This allows users to create a branch from any point in the past within this window, effectively "time-traveling" to recover from mistakes or analyze historical states.

The neonctl CLI is central to managing these branches:

# Create a new branch from 'main'
neonctl branches create --name my-feature-branch --parent main

# Create a branch from a specific point in time within the restore window
# (check `neonctl branches create --help` for your version's exact syntax)
neonctl branches create --name bugfix-branch --parent production@2025-01-15T10:00:00Z

# List branches
neonctl branches list

Each branch gets its own independent autoscaling compute endpoint, preventing "noisy neighbor" issues and ensuring consistent performance. When inactive, these branches also scale down to zero, optimizing costs.
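Put together, a CI pipeline can give every pull request its own database. A rough shell sketch using neonctl, where the PR_NUMBER variable, branch naming, and the GitHub-style env file are assumptions about your pipeline:

# Hypothetical CI step: one isolated database branch per pull request
BRANCH_NAME="preview-pr-${PR_NUMBER}"

neonctl branches create --name "$BRANCH_NAME" --parent main

# Hand the branch's connection string to the preview deployment
DATABASE_URL=$(neonctl connection-string "$BRANCH_NAME")
echo "DATABASE_URL=$DATABASE_URL" >> "$GITHUB_ENV"

# When the PR closes, the branch is cheap to discard:
# neonctl branches delete "$BRANCH_NAME"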

PlanetScale's Vitess-Powered MySQL: A Comparative Lens for PostgreSQL

While PlanetScale has historically been synonymous with Vitess-powered MySQL, their recent entry into the managed PostgreSQL space with PlanetScale for Postgres and the ongoing Neki project for sharded PostgreSQL are significant. This provides an excellent opportunity to compare and contrast architectural philosophies, especially concerning horizontal scalability.

Vitess's Sharding Model and its Influence

Vitess, born at YouTube, is a database clustering system that enables horizontal scaling of MySQL through explicit sharding. It achieves this by routing queries through VTGate proxies, which understand the sharding scheme and direct queries to the appropriate shards. Vitess abstracts away the sharding complexity from the application layer, allowing applications to interact with what appears to be a single, monolithic MySQL database.
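Because VTGate speaks the MySQL wire protocol, an application connects to a sharded Vitess cluster with an ordinary client. A minimal TypeScript sketch, where the host, the 15306 port from Vitess's local examples, and the keyspace and table names are assumptions:

import mysql from 'mysql2/promise';

// Connect to VTGate exactly as you would to a single MySQL server
const conn = await mysql.createConnection({
  host: 'vtgate.internal', // hypothetical VTGate address
  port: 15306,             // VTGate's MySQL-protocol port in Vitess's local examples
  user: 'app_user',
  database: 'commerce',    // a keyspace, possibly sharded across many MySQL instances
});

// VTGate uses the keyspace's VSchema to route this to the right shard(s),
// scatter-gathering and merging results if the query spans shards
const [rows] = await conn.execute(
  'SELECT order_id, total FROM orders WHERE customer_id = ?',
  [42]
);
console.log(rows);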

Recent Vitess releases, like Vitess 21 (October 2024) and Vitess 23 (November 2025), have focused on enhancing query compatibility, improving cluster management, and expanding VReplication capabilities. Vitess 21 introduced experimental support for atomic distributed transactions and recursive CTEs, addressing long-standing limitations in distributed SQL. Vitess 23 further refined metrics for transaction routing and shard behavior, and upgraded its default MySQL version to 8.4.6, signaling a commitment to forward compatibility.

PlanetScale for Postgres and Project Neki

PlanetScale officially launched its managed PostgreSQL service, built for performance and reliability on AWS or Google Cloud, in late 2025. This offering leverages their "Metal" clusters, which are built on local NVMe drives to provide "Unlimited I/O" and significantly lower latencies compared to traditional EBS-backed instances. The M-10 cluster, starting at $50/month, makes NVMe performance more accessible.

The critical development here is Project Neki, PlanetScale's initiative to bring Vitess-style horizontal sharding to PostgreSQL. Unlike Vitess, which was open-source from the start, Neki is being architected from first principles, leveraging lessons from Vitess but not as a fork. This indicates a serious investment in solving PostgreSQL's horizontal scaling challenges at a fundamental level, rather than simply adapting an existing MySQL solution.

Meanwhile, Supabase has also made a significant move in the PostgreSQL sharding space by bringing Sugu Sougoumarane, co-creator of Vitess, onboard to lead the Multigres project. Multigres aims to bring sharding to PostgreSQL, also starting from scratch but with a focus on compatibility and usability, learning from Vitess's journey. This signals a fascinating race to deliver robust, native PostgreSQL sharding solutions.

Benchmarking the "Serverless" Promise: Latency, Throughput, and Cost

The promise of serverless databases is elasticity, low operational overhead, and cost-effectiveness. However, these benefits often come with performance characteristics that differ from traditional provisioned instances.

Cold Start Latency and Connection Management

One of the most frequently discussed trade-offs in serverless environments is cold start latency. For Neon, when a compute node scales to zero, reactivating it can take anywhere from 500ms to a few seconds, depending on factors like the database size and workload. This latency can be problematic for synchronous, user-facing requests.

Neon mitigates this with its connection pooler (PgBouncer). Using a pooled connection string, applications connect to PgBouncer, which maintains open connections to the underlying PostgreSQL instance. This significantly reduces the overhead of establishing a new TCP connection and authenticating with PostgreSQL for every client request, effectively masking some of the cold start latency from the application's perspective.

Comparative Cold Start (Illustrative, not a direct benchmark):

  • Neon (compute scaled to zero): ~500ms - 2s (first connection after inactivity)
  • Supabase Edge Function (cold start): ~100ms - 500ms (first invocation after inactivity)
  • PlanetScale (provisioned Metal): Near-zero cold start due to always-on, NVMe-backed instances.

Throughput and I/O Performance

For sustained workloads, the underlying infrastructure becomes paramount. PlanetScale's "Metal" clusters, with local NVMe drives, explicitly target high throughput and low latency. They claim "unlimited IOPS", with customers typically hitting CPU limits before I/O becomes a bottleneck, and report p95 latencies dropping from ~45ms to 5-10ms after migrating to Metal.

Neon's disaggregated storage model, while enabling elasticity, introduces network hops between compute and storage. To counter potential performance degradation, Neon employs a Local File Cache (LFC) between PostgreSQL's shared buffers and the remote storage. This LFC leverages the Linux page cache, aiming to provide RAM-like latencies for frequently accessed data, spilling to disk when the LFC exceeds RAM capacity.

Supabase's performance is tied to its underlying cloud provider and the resource allocation of its managed PostgreSQL instances. The v2 architecture's multi-tenant approach for services like Realtime and Storage aims to provide more dedicated resources for the database itself, potentially improving baseline performance and consistency.

Cost Models

  • Neon: Consumption-based pricing, scaling compute to zero when idle, makes it very cost-effective for development, testing, and bursty workloads.
  • Supabase: Offers a generous free tier, with paid plans based on compute hours, storage, and real-time messages.
  • PlanetScale: Historically known for its usage-based pricing for Vitess, its new PostgreSQL offering includes Metal clusters starting at $50/month, providing a dedicated, high-performance option.

Developer Experience and Ecosystem Integration: CLI, APIs, and Frameworks

A platform's technical prowess is only as good as its usability. All three platforms prioritize developer experience through robust CLI tools, comprehensive APIs, and seamless integration with modern development stacks.

  • Supabase CLI: The supabase CLI is a central tool for local development, managing migrations, and deploying Edge Functions. Recent updates in 2025 include the ability to deploy Edge Functions from the CLI without Docker (see the example after this list).
  • Neon CLI (neonctl): neonctl provides comprehensive control over Neon projects, including creating and managing branches. It's crucial for automating CI/CD workflows.
  • PlanetScale CLI: PlanetScale's CLI is well-regarded for managing Vitess clusters and now extends to their PostgreSQL offerings, enabling developers to interact with branching workflows and schema changes.
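For instance, shipping the Edge Function from earlier is a single command; the project ref below is a placeholder:

# Deploy the function in supabase/functions/my-edge-function
supabase functions deploy my-edge-function --project-ref your-project-ref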

The Road Ahead: Challenges and Emerging Patterns

Despite the rapid advancements, several challenges persist, and new patterns are emerging. Achieving true, strongly consistent multi-region PostgreSQL deployments remains complex. Sharding, as pursued by Neki (PlanetScale) and Multigres (Supabase), is a step towards horizontal scaling, but ensuring low-latency, strongly consistent reads and writes across geographically distant regions is a harder problem.

The "AI Supercycle" is also profoundly impacting database innovation. PlanetScale announced the general availability of vector support in MySQL in April 2025, allowing vector data to be stored alongside relational data. Supabase has also long supported pgvector for efficient similarity search. This trend suggests databases will become more "intelligent," not just storing data but actively assisting in its interpretation and application within AI workflows.

Conclusion: Practical Considerations for Production Deployments

The recent developments across Supabase, Neon, and PlanetScale underscore a vibrant and rapidly evolving ecosystem for PostgreSQL. Each platform offers distinct advantages for specific use cases:

  • Neon excels for greenfield serverless applications and development workflows that benefit from instantaneous branching and cost-effective scaling to zero.
  • Supabase presents a compelling full-stack BaaS, leveraging PostgreSQL at its core and enriching it with powerful real-time capabilities and flexible Edge Functions.
  • PlanetScale has made a strong entry into the PostgreSQL market with its high-performance Metal clusters and the ambitious Neki sharding project.

For a senior developer evaluating these options, the choice hinges on workload predictability, the criticality of Git-like branching, and whether you require a full-stack BaaS or primarily a database with advanced scaling features. The continuous innovation, particularly around architectural disaggregation and developer experience, indicates a promising future for highly scalable and resilient relational databases in the cloud.

