Herrington Darkholme

Beyond the Platform: Is Vercel Designing the Future of Programming Languages?

Vercel has recently rolled out features that do more than just enhance its platform—they challenge the very semantics of JavaScript and offer a glimpse into a new era of programming. While these changes have sparked debate, they hint at a broader, more ambitious vision.

Vercel is championing the idea that the next generation of programming languages should natively manage the complexities of modern applications, including heterogeneous runtimes, transparent networks, multi-layer caching, and task durability.

This article explores how programming languages might be evolving to manage not just logic, but the entire lifecycle of data and computation in distributed systems. This evolution is being quietly prototyped through three core language concepts: serializable closures, algebraic effects, and incremental computation.

Vercel's New Primitives for a Distributed World

To understand this future, we must first look at the problems Vercel is solving today. Modern web applications are becoming more complex, featuring:

  • Multiple Runtimes: Code executes on the client (browser), the server (Node.js), and the edge (WebAssembly).
  • Dispersed Data: State lives in memory, on CDNs, in databases, and across file storage.

To tame this complexity, Vercel, together with the React team, has introduced powerful new features that feel less like library additions and more like new language constructs.

Server Actions: Unifying Client and Server Logic

Server Actions allow client-side components to directly call asynchronous functions that execute on the server, effectively erasing the boundary between client and server code.

// server.ts
"use server";
export async function createNote() {
  await db.notes.create();
}

// client.tsx
"use client";
// The import is transformed by the compiler into a reference:
// {$$typeof: Symbol.for("react.server.reference"), $$id: 'createNote'}
import {createNote} from './server';

function EmptyNote() {
  return (
    <button onClick={() => createNote()}>Create Note</button>
  );
}

Without Server Actions, this would require manually setting up API endpoints, writing fetch requests, and handling data serialization and deserialization. Server Actions abstract all of this away, making remote procedure calls (RPCs) feel like simple function calls.
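For comparison, here is a minimal sketch of the manual plumbing Server Actions replace, assuming a hypothetical /api/create-note route; the endpoint name and handler shape are illustrative, not Vercel's actual implementation:

// api/create-note.ts: a hand-written endpoint (hypothetical route)
// `db` stands in for the same illustrative database handle used above.
declare const db: { notes: { create(): Promise<void> } };

export async function POST(request: Request) {
  await db.notes.create();
  return Response.json({ ok: true });
}

// client-side: manual fetch, serialization, and error handling
async function createNote(): Promise<{ ok: boolean }> {
  const res = await fetch('/api/create-note', { method: 'POST' });
  if (!res.ok) throw new Error(`createNote failed: ${res.status}`);
  return res.json();
}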

The 'use cache' Directive: Language-Level Caching

The 'use cache' directive is a powerful tool for memoizing functions and caching their results. The cache key is automatically generated from the function's arguments and its closed-over values—the variables from its surrounding scope that it "remembers."

export async function Bookings({ type = 'haircut' }: BookingsProps) {
  'use cache'
  async function getBookingsData() {
    const res = await fetch(`/api/bookings?type=${encodeURIComponent(type)}`);
    return res.json();
  }
  return //...
}

This is a true language-level feature. Because it needs to inspect the function's closure, it cannot be implemented as a simple library API without forcing developers to manually declare all dependencies.
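To see the difference, consider what a library-level version might look like. The cacheWithKey helper below is invented for illustration; the point is that the developer, not the compiler, must enumerate every closed-over value:

// Hypothetical library helper, NOT a real API: shown only to
// illustrate the manual dependency declaration it would require.
declare function cacheWithKey<T>(
  key: unknown[],
  fn: () => Promise<T>
): () => Promise<T>;

// Inside a component where `type` is in scope, the developer must
// list it by hand; forgetting it would silently serve stale data.
const getBookingsData = cacheWithKey(['bookings', type], async () => {
  const res = await fetch(`/api/bookings?type=${encodeURIComponent(type)}`);
  return res.json();
});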

The 'use workflow' Directive: Durable, Resumable Functions

Making asynchronous tasks reliable typically requires a complex stack of message queues, retry logic, and persistence layers. The 'use workflow' directive aims to make durability a native language concept.

A workflow is a function that can maintain its execution state across restarts, failures, or long pauses. It can resume exactly where it left off, whether it was paused for seconds or months.

export async function aiAgentWorkflow(query: string) {
  "use workflow";
  const response = await generateResponse(query);
  const facts = await researchFacts(response);
  const refined = await refineWithFacts(response, facts);
  return { response: refined, sources: facts };
}

Key Characteristics of Workflows:

  • Durable: They survive deployments and crashes through deterministic replays from an event log.
  • Resumable: Execution can be paused and resumed at any await point.
  • Deterministic: They run in a sandboxed environment where non-deterministic APIs (like Math.random or Date.now) are controlled to ensure replayability.

By baking durability into the language with a simple directive, 'use workflow' abstracts away immense infrastructural complexity.
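To make the replay idea concrete, here is a minimal sketch of how an event-log-driven runtime might work; the log shape and step helper are assumptions, not Vercel's actual implementation:

// Hypothetical replay loop: results of completed steps are read
// back from an event log instead of being re-executed, so a
// restarted workflow deterministically reaches where it left off.
type EventLog = { steps: unknown[] };

async function step<T>(
  log: EventLog,
  index: number,
  run: () => Promise<T>
): Promise<T> {
  if (index < log.steps.length) {
    return log.steps[index] as T; // replay: reuse the recorded result
  }
  const result = await run(); // live: execute and persist the result
  log.steps.push(result);
  return result;
}

Under a scheme like this, each awaited call in a workflow would be wrapped in step, so a crashed run restarts from the top, replays logged results, and only executes the first step that never completed.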

The Historical Arc of Programming Languages

You may feel uneasy about these changes to JavaScript. After all, they alter the language's semantics in significant ways.
However, if we look at the history of programming languages, we see a consistent pattern of evolution aimed at managing increasing complexity.

Programming languages have always evolved to manage new layers of abstraction and complexity:

  • Assembly managed raw CPU instructions.
  • C abstracted away registers and control flow.
  • Java automated memory management.
  • Go simplified concurrency for multi-core processors.

The next logical step in this evolution is to manage the complexities of data management. The challenge is no longer just managing memory or threads, but managing where computation happens, how data moves, and how to ensure that access is fast, persistent, and reliable.

Currently, these are problems solved by frameworks, libraries, and external systems. A developer needs a deep understanding of serialization, RPCs, caching strategies, and message queues. Vercel's new features suggest a future where these are concerns of the language itself.

The Language Features Powering This Vision

Vercel's new directives have been controversial precisely because they alter JavaScript's semantics. Seen from another perspective, however, they build on powerful concepts from computer science that could form the foundation of a new kind of programming language.

Serializable Closures

A closure is a function that "remembers" the environment in which it was created. A serializable closure is one that can be packaged up—its code and its remembered environment—and sent across a network, stored in a database, or executed at a later time. This effectively allows us to move computation, not just data, across system boundaries.

// Illustrative only: extractClosedOverVariables stands in for the
// compiler support needed to capture a function's code and environment.
function serializeClosure(fn) {
  const { closureData, fnCode } = extractClosedOverVariables(fn);
  return {
    code: fnCode,
    closure: closureData
  };
}
  • Server Actions are a prime example. The createNote function is serialized, sent to the client as a reference, and when invoked, its execution is triggered back on the server.
// Simplified Concept:
// 1. On the server, the function and its closure are captured.
const { closure, code } = serializeClosure(createNote);
const fnId = storeCodeOnServer(code);

// 2. A reference is sent to the client.
sendToClient({ fnId, closure });

// 3. The client uses the reference to invoke the function on the server.
invokeServerFunction({ fnId, closure, fnArgs: [...] });
  • 'use cache' relies on serialization to create a unique cache key. The function's code, its closed-over values, and its arguments are hashed together.
function getCacheKey(fn, args) {
  const { closure, code } = serializeClosure(fn);
  return hash(code, closure, args);
}
  • Workflows can be seen as serializable closures that are paused at each await point. The state of the function and its continuation (the "rest" of the function) is serialized and persisted, ready to be resumed later.

For example, the aiAgentWorkflow function can be visualized as a series of serialized closures at each step:

function aiAgentWorkflow(query: string) {
  return generateResponse(query, (response) => {
    // serialize closure 1
    return researchFacts(response, (facts) => {
      // serialize closure 2
      return refineWithFacts(response, facts, (refined) => {
        // serialize closure 3
        return { response: refined, sources: facts };
      });
    });
  });
}

Algebraic Effects

Serializable closures are powerful, but they introduce a critical challenge: how do you safely execute a piece of code in a different environment? A function designed to run on the server can't access a database if it's moved to a browser.

This is where algebraic effects come in. Algebraic effects are a programming language feature that separates what a function does (its side effects, like accessing a database or generating a random number) from how those effects are implemented.

An "effect" is performed, and an "effect handler" somewhere up the call stack decides how to interpret it.

The 'use workflow' environment is a perfect example. APIs like Math.random and Date.now must behave deterministically. An effect handler could intercept a call to Date.now and, instead of returning the system time, return a timestamp from the workflow's event log to ensure replayability.
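A minimal sketch of such a handler, assuming a simple array-backed event log (illustrative only):

// Hypothetical deterministic replacement for Date.now: record the
// timestamp on first execution, replay it on every re-execution.
function makeDeterministicNow(log: number[]) {
  let cursor = 0;
  return function now(): number {
    if (cursor < log.length) {
      return log[cursor++]; // replay: return the logged timestamp
    }
    const t = Date.now(); // first run: take the real time and log it
    log.push(t);
    cursor++;
    return t;
  };
}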

As another example, in a hypothetical language with first-class effects, you could define different handlers for client and server contexts:

// Define an effect for database access
function* dbAccessEffect(userQuery) {
  const user =
    yield* db.fetchEffect<User, Error, ServerContext>(userQuery);
  return user;
}

// On the server, the handler provides the database context
const user = serverHandler.runEffect(dbAccessEffect, query); // OK

// On the client, the handler would throw an error, as no db is available
// clientHandler.runEffect(dbAccessEffect, query); // Throws Error

This provides a secure and reliable way to control what resources a movable closure can access.

The idea is not new to the JavaScript world. Libraries like Effect and languages like Koka have explored algebraic effects for years. But using effects to manage different execution environments and resource access is a novel and powerful application of the concept.

Incremental Computation

In a distributed system, caching isn't just an optimization—it's essential for performance. Incremental computation is the principle of only recomputing outputs that are affected by a change in data, rather than recomputing everything from scratch.

This concept is already widespread:

  • React's Virtual DOM diffing is a form of incremental computation.
  • Memoization and directives like 'use cache' are explicit forms of it.
  • Build systems like Turbopack and Bazel use it to avoid re-building unchanged code.

By making incremental computation a first-class language concept, a future language could automatically manage caching and data dependencies across different rendering strategies (CSR, SSR, SSG, ISR), build systems, and data layers, dramatically improving efficiency.

In fact, 'use cache' can be seen as a simple form of incremental computation. A more advanced system could track dependencies at a finer granularity, allowing for partial recomputation when only some data changes. This idea has been explored in the JavaScript world, exemplified by SkipLabs and its Skip language.
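As a toy illustration of finer-grained tracking, the cell API below is invented for this sketch (loosely inspired by signal-style reactivity): an output remembers the version of its input and recomputes only when that input has actually changed.

class Cell<T> {
  version = 0;
  constructor(private value: T) {}
  get(): T {
    return this.value;
  }
  set(next: T) {
    this.value = next;
    this.version++; // bump the version so dependents know to recompute
  }
}

function incremental<A, R>(input: Cell<A>, compute: (a: A) => R): () => R {
  let seen = -1;
  let cached!: R;
  return () => {
    if (input.version !== seen) {
      cached = compute(input.get()); // recompute only on change
      seen = input.version;
    }
    return cached; // otherwise serve the memoized result
  };
}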

The Dawn of a "Vercel Lang"?

Vercel is not just building a platform; it is embedding a powerful programming model directly into the developer experience. By introducing language-level constructs for distributed systems, they are pushing the boundaries of what developers can build and how easily they can build it.

Is this a good thing? The debate is ongoing, but the direction is clear. This paradigm could be pushed even further. Imagine a language where:

  • Error monitoring is managed by the platform. A function could declare its potential errors, and the language and platform could automatically provide monitoring and alerting.
  • Data privacy and security are compiler-enforced. By tracking data flow through an effect system, the language could enforce access controls and prevent sensitive data leaks (see the sketch after this list).
  • Observability is built-in. An effect system could manage access to resources like databases, providing deep insights into performance without manual instrumentation.
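As a purely speculative sketch of the privacy point, one can already approximate compiler-enforced data flow in today's TypeScript with a wrapper type; everything below is invented for illustration and is far weaker than a real effect system:

// Hypothetical wrapper: a Sensitive<T> is deliberately not
// assignable to T, so the type checker stops tagged values from
// flowing into ordinary sinks like loggers.
class Sensitive<T> {
  constructor(private readonly value: T) {}
  // Every extraction goes through an explicit, auditable gate.
  revealTo(gate: (value: T) => void): void {
    gate(this.value);
  }
}

declare function logPublicly(message: string): void;

const ssn = new Sensitive('123-45-6789');
// logPublicly(ssn); // type error: Sensitive<string> is not a string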

In this next era, the programming language itself understands and manages the distributed nature of modern applications, freeing developers to focus on logic, not infrastructure.

Top comments (2)

Fyodor

I guess all the major PaaS solutions lean toward the same paradigm, right? Not only Vercel.

Herrington Darkholme

yes! but only vercel is pushing the boundary on the language level.