Viktor Lázár
A Framework Is Not a Platform

For most of the time we have been writing web applications, two different teams answered two different questions. The framework team decided what the application looked like. The platform team decided where it ran. The line between the two questions held quietly for thirty years, and it held because nobody seriously challenged it.

Rails decided how a controller talked to a model. Spring decided how a bean was wired. Express decided what a route handler looked like. None of them decided what database, proxy, cache, message bus, CDN, or regional topology the organization bought.

That separation was not an accident. It was a property of how those frameworks were built. They produced a process. The process did its job. The infrastructure around the process — the CDN, the cache, the queue, the database, the function runtime, the regional layout — was someone else's job, and that someone else worked on a different review cycle, with different KPIs, accountable to different parts of the org chart.

The line is being erased, and the cleanest place to see it being erased is Next.js 16. Cache Components did not just change caching. They moved an infrastructure decision into a framework API.

The handshake we used to have

A Node.js web application running on Kubernetes is a clean handshake. The application produces a request handler. The platform team picks the cluster, the ingress, the CDN, the cache backend, the secrets store, the regional topology, the function runtime if there is one. They pick those things based on cost, security posture, vendor portfolio, contractual obligations, the team's existing operational expertise, and whatever standards the org has already paid down.

The framework's job, in that handshake, is to be agnostic about all of it. The same code runs behind any reverse proxy. The same code uses whatever cache the platform team chose to put in front of it. The same code can be moved between vendors without changes that touch the application's source — only the deployment surface changes, and the deployment surface is a thin layer the platform team owns end-to-end.

This is what Incremental Static Regeneration looked like in practice. A Next.js application built with ISR produced HTML files and a small revalidation loop. A CDN sat in front. The CDN served the file. Occasionally, on a stale-while-revalidate window, a function regenerated the file in the background. The shape was familiar to every CDN-fronted Node host. Vercel hosted it; Netlify hosted it; Kubernetes with Cloudflare in front hosted it; a bare VPS with nginx and a cron job hosted a recognizable version of it. The economics were similar everywhere because the architecture was platform-neutral, built from a CDN-and-function shape every platform team already understood.
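The ISR shape described above can be sketched in a few lines of plain TypeScript. This is a toy model, not Next.js's implementation: a stored file is served immediately, and a request past the revalidation window queues a background rebuild instead of blocking. The injected `now` timestamp keeps the example deterministic.

```typescript
// Toy stale-while-revalidate cache in the ISR shape: serve the stored
// file immediately; if it is past its revalidate window, queue a
// background rebuild rather than regenerating on the request path.
type Entry = { html: string; builtAt: number };

function makeIsrCache(revalidateMs: number, rebuild: () => string) {
  const store = new Map<string, Entry>();
  const pendingRebuilds: string[] = [];

  return {
    serve(path: string, now: number): string {
      const entry = store.get(path);
      if (!entry) {
        // Cache miss: build once (the "first request" cost).
        const html = rebuild();
        store.set(path, { html, builtAt: now });
        return html;
      }
      if (now - entry.builtAt > revalidateMs) {
        // Stale: serve the old file now, regenerate in the background.
        pendingRebuilds.push(path);
      }
      return entry.html;
    },
    // The background loop a platform runs off the request path.
    drainRebuilds(now: number) {
      for (const path of pendingRebuilds.splice(0)) {
        store.set(path, { html: rebuild(), builtAt: now });
      }
    },
  };
}
```

Every piece of this model maps onto a commodity primitive: the store is a CDN, the serve path is a file read, and the drain loop is a cron job or a background function. That is why the economics were similar everywhere.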

That shape is what Cache Components walks away from.

What v16 changed

Cache Components, the headline feature of Next.js 16, replaces the route-segment caching model with a directive-based one. A page is dynamic by default. The developer marks regions with 'use cache' to opt those regions into caching. The framework prerenders a static shell where it can, streams the dynamic regions when they resolve, and stitches the response together at request time. Inside the page, the model is elegant. I have written about it from the directive-design angle in The Cache Belongs to the Function and will not repeat that argument here.
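In code, the model looks roughly like this. The shape follows the Next.js 16 docs, but treat the exact import surface as approximate, and `fetchProductsFromDb` as a hypothetical helper:

```typescript
// Sketch of the v16 directive model (shape per the Next.js 16 docs).
// The page is dynamic by default; only this function's output is cached.
import { cacheLife, cacheTag } from 'next/cache'

async function getProducts() {
  'use cache'
  cacheLife('hours')    // freshness policy lives at the boundary
  cacheTag('products')  // invalidation handle for revalidateTag()
  return fetchProductsFromDb() // hypothetical data helper
}
```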

The argument here is not about what 'use cache' looks like to the developer writing it. It is about what the runtime requires of the infrastructure underneath, once the flag is on.

A page that uses Cache Components is, mechanically, a page whose response is produced per request by the framework's renderer, with cached fragments spliced in. In the general case, the CDN can no longer serve the full response without invoking the renderer. The static parts of the page exist as cached fragments, not as cacheable artifacts. The renderer must run, even on a request where every fragment is a hit, because the renderer is what knows how to assemble the fragments into a streamed response.

This is a small architectural change with large consequences. It moves the unit of caching from "a complete response a CDN can serve" to "a piece of a response the renderer assembles." A CDN is the infrastructure that serves complete responses. It is not the infrastructure that assembles responses from pieces. The framework, in choosing the second model, has chosen to be the assembler — which means the framework has become a piece of infrastructure that did not previously exist between the application and the CDN.
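The difference in caching unit can be made concrete with a toy model in plain TypeScript. This illustrates the two shapes, not either system's real code: in the first model, a cache hit never invokes the renderer; in the second, assembly runs on every request even when every fragment hits.

```typescript
// Model 1: complete-response cache. A hit never touches the renderer.
const responseCache = new Map<string, string>();
function serveFromCdn(url: string, render: () => string): string {
  const hit = responseCache.get(url);
  if (hit !== undefined) return hit; // origin not invoked
  const body = render();
  responseCache.set(url, body);
  return body;
}

// Model 2: fragment cache. Even on an all-hit request, assembly runs,
// because only the assembler knows how to stitch fragments together
// with the per-request dynamic output.
const fragmentCache = new Map<string, string>();
function serveAssembled(
  fragments: { key: string; render: () => string }[],
  dynamic: () => string
): string {
  const parts = fragments.map(({ key, render }) => {
    const hit = fragmentCache.get(key);
    if (hit !== undefined) return hit;
    const out = render();
    fragmentCache.set(key, out);
    return out;
  });
  // The splice: cached pieces plus per-request output, stitched here.
  return parts.join("") + dynamic();
}
```

The second function is the renderer-as-infrastructure in miniature: the fragments are cheap to serve, but something has to run `serveAssembled` per request, and that something is the framework.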

Once the framework is in the request path on every request, three secondary requirements appear, each of which used to be the platform team's choice and is now the framework's demand. A cache backend has to exist, because the default in-memory cache is per-process; in practice, the framework expects a cacheHandlers implementation pointing at a real backing store such as Redis. Tag invalidation has to be coordinated across instances, typically by refreshing a local view of shared invalidation state on the request path; in a clustered deployment, that becomes a round trip to shared storage the application did not previously make. The function runtime starts to matter in ways it did not before, because the dynamic-by-default model only amortizes its renderer cost on a platform that multiplexes concurrent requests across warm function invocations; on a platform without that, the cost is paid linearly with traffic.
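What the first two requirements mean mechanically can be sketched like this. The interface names are hypothetical, not Next.js's actual cacheHandlers contract, and the in-memory store stands in for what would be Redis in production:

```typescript
// Hypothetical shape of a shared cache handler. The point: entries and
// tag invalidations live in shared storage, so a clustered deployment
// pays a lookup against that storage on the request path.
interface SharedStore {
  get(key: string): { value: string; tags: string[]; storedAt: number } | undefined;
  set(key: string, entry: { value: string; tags: string[]; storedAt: number }): void;
  tagInvalidatedAt(tag: string): number; // a Redis hash in practice
  invalidateTag(tag: string, at: number): void;
}

function readThrough(store: SharedStore, key: string): string | undefined {
  const entry = store.get(key);
  if (!entry) return undefined;
  // The cross-instance coordination cost: every hit checks whether any
  // of its tags was invalidated after the entry was written.
  const stale = entry.tags.some((t) => store.tagInvalidatedAt(t) > entry.storedAt);
  return stale ? undefined : entry.value;
}

// In-memory stand-in for the shared backing store.
function memoryStore(): SharedStore {
  const entries = new Map<string, { value: string; tags: string[]; storedAt: number }>();
  const tagTimes = new Map<string, number>();
  return {
    get: (k) => entries.get(k),
    set: (k, e) => void entries.set(k, e),
    tagInvalidatedAt: (t) => tagTimes.get(t) ?? 0,
    invalidateTag: (t, at) => void tagTimes.set(t, at),
  };
}
```

The tag check in `readThrough` is the round trip in question: trivial against a local Map, a network hop against the shared store a cluster actually needs.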

None of these requirements are illegitimate as choices. They are illegitimate as framework outputs. The team did not pick Redis because it wanted Redis; the team did not put a per-request lookup on the request path because it wanted one there; the team did not select a function-runtime billing model because it had a view about how Cache Components should amortize. Redis is not the problem. The problem is when Redis stops being an application choice and becomes part of the framework's performance contract.

The escape hatches that closed

In Next.js 15, the team that wanted to keep the platform-neutral economics had options. Mark a route force-static. Enable Partial Prerendering per route with experimental_ppr. Set a route's revalidate value. Each of those decisions was visible at the route-segment level, and each one was a way for the developer to opt a route into a model the platform team's existing infrastructure already knew how to host.

In v16, with cacheComponents: true, those options are gone. The migration guide tells you to delete force-dynamic and force-static. The experimental_ppr segment configuration is removed. The revalidate and fetchCache exports are replaced by cacheLife inside 'use cache' boundaries. The route-segment escape hatches that used to let an application express "this page is static, please serve it as a file" are no longer in the API.
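Side by side, the two shapes look like this. The v15 exports follow the route-segment config docs and the v16 replacement follows the Cache Components docs; `fetchPosts` is a hypothetical helper:

```typescript
// v15 route-segment config that cacheComponents mode removes:
export const dynamic = 'force-static'
export const revalidate = 3600
export const experimental_ppr = true

// The v16 replacement: freshness is declared inside the boundary instead.
import { cacheLife } from 'next/cache'

async function getPosts() {
  'use cache'
  cacheLife('hours') // replaces the segment-level `revalidate` export
  return fetchPosts() // hypothetical data helper
}
```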

The flag is opt-in, today. A team that wants the v15 economics can leave it off. But the docs already treat Cache Components as the recommended path, the dedicated PPR test suites in the repository are migrating away from a separate identity, and the trajectory of any flag that the framework team owns and recommends is well known. Within a release or two, the recommended path becomes the default. Within a release or two after that, the legacy path becomes deprecated. The ability to refuse the new model is on a clock, and the clock is the framework team's.

Technically portable, economically captive

The runtime is open source. The contract is documented. The adapters work. By the strict definition of vendor lock-in — you cannot leave — there is no lock-in. Every claim a salesperson would make about the framework's portability is true.

The honest definition of lock-in is not the strict one. The honest definition is: you can leave, but the cost of leaving is large enough to change the build-vs-buy decision. Under that definition, Cache Components introduces a soft form of capture that ISR did not have. The runtime runs anywhere; the cost-effectiveness lives on one platform. Off that platform, the same code shape produces a meaningfully worse cost profile, a meaningfully higher operational burden, and a meaningfully lower performance ceiling.

The performance ceiling is the part that is hardest to recover. On a platform that owns both the proxy and the function runtime, the static shell of a Cache-Components page can be served from the edge before the renderer is even invoked, with the dynamic stream stitched into the same response over a single connection. This is not a standard CDN primitive. It is not the contract a generic CDN signs with the application in front of it — serve a complete response, or proxy through to the origin and serve that. The handoff between a static shell and a function-produced stream, on the same connection, mid-response, is a vendor-aware proxy/runtime product. It can be built; it has not been standardized; and the team that wants it on Kubernetes is not picking it from a menu of CDN features. They are integrating bespoke pieces, or they are accepting a TTFB floor of "pod-reachable plus first render byte" instead of "edge node plus first static byte." The gap is structural, not operational.

The question is not whether another platform can build the missing machinery. The question is whether an application framework should require that machinery to recover the economics it used to preserve by default.

None of this is impossible to operate. It is only impossible to operate optimally, because the optimum has been moved to a place only one vendor lives.

The pattern, beyond Next.js

Next.js is the most aggressive case, but it is not the only framework being pulled in this direction, and the direction is the more interesting story than any one framework.

Remix and React Router 7 sit at the other end of the spectrum, partly by inheritance and partly by deliberate choice. The cache contract has historically been a headers() function on a loader returning standard Cache-Control directives. The CDN does what CDNs do; the framework does not need a backing store, a tag manifest, or a request-time invalidation hook. Whether that posture survives future product pressure is an open question, but today the cache story is platform-neutral by construction.
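Concretely, that contract is just standard HTTP, following the shape of the Remix / React Router route-module docs:

```typescript
// A Remix / React Router 7 route module's cache contract: export
// headers(), return standard Cache-Control directives, and let whatever
// CDN the platform team chose do the rest. No backing store, no tag
// manifest, no renderer coupling.
export function headers() {
  return {
    "Cache-Control": "public, s-maxage=60, stale-while-revalidate=300",
  };
}
```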

SvelteKit and Astro preserve the older bargain through adapters and static-first output. The application produces a generic artifact; the adapter materializes it into a deployment-specific shape only when the application has earned a dynamic runtime. The specifics stay at the deployment seam rather than seeping into the application source.

Nuxt sits in the middle. Nitro's caching primitives are function-level and storage-pluggable rather than render-coupled, so a Nuxt application can express a cached value without dragging the rendering pipeline into the request path. The framework has caching, but it has not annexed caching as infrastructure.
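The pattern Nitro occupies can be sketched in plain TypeScript. Note that the names here are illustrative of the function-level, storage-pluggable shape, not Nitro's actual defineCachedFunction API:

```typescript
// A function-level, storage-pluggable cache: the cache wraps a function,
// the store is injected, and the rendering pipeline is never involved.
type Store = Map<string, { value: unknown; expiresAt: number }>;

function cachedFunction<T>(
  fn: () => T,
  opts: { name: string; maxAgeMs: number; store: Store; now?: () => number }
): () => T {
  const now = opts.now ?? Date.now;
  return () => {
    const hit = opts.store.get(opts.name);
    if (hit && hit.expiresAt > now()) return hit.value as T;
    const value = fn();
    opts.store.set(opts.name, { value, expiresAt: now() + opts.maxAgeMs });
    return value;
  };
}
```

Because the store is a parameter, swapping an in-memory Map for Redis is the platform team's call, made at the deployment seam, which is exactly the division of labor the article is describing.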

TanStack Start sits on a different axis altogether. It is router-and-query first, not renderer-and-cache first. Its primitives — TanStack Router, TanStack Query, server functions, loaders — describe what data should flow where, not what infrastructure should hold the cache. The cache lives with the query, function-level and storage-pluggable, the way TanStack Query has always shipped it. The framework does not need a Redis backing store, a tag manifest, or a request-time invalidation hook to be correct; the application's freshness is a property of its queries, not of the framework's renderer. It is a different architecture from Next.js, not a competing implementation of the same one.
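A sketch of that posture, assuming TanStack Query v5's queryOptions helper, with `fetchProducts` as a hypothetical fetcher:

```typescript
// Freshness declared as a property of the query, next to the data it
// governs, rather than as a property of a renderer or its backing store.
import { queryOptions } from '@tanstack/react-query'

const productsQuery = queryOptions({
  queryKey: ['products'],
  queryFn: fetchProducts, // hypothetical data fetcher
  staleTime: 60_000,      // the cache lives with the query
})
```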

The structural caution is general, not aimed at any one project: a framework that adopts the renderer-and-cache architecture without the matching platform machinery inherits the hard part without inheriting the economic advantage.

Some runtimes refuse this trade by construction. That is the line I have tried to hold in @lazarv/react-server — a cache primitive that lives with the function, a router that is opt-in rather than load-bearing, a deployment story handled at the build seam rather than at the source. Hono, Fastify, Express, and the older Node frameworks never had this problem because they never tried to absorb infrastructure decisions in the first place. They stay frameworks because they stay small.

The point is not that every framework should look like the smaller ones. The point is that there is a spectrum, the spectrum has been visible for years, and the choice each framework makes about where to sit on it shapes the economics of every team that picks it.

What "framework" used to mean

A framework, historically, is a thing you pick up to write an application. The decision is local. The team's senior engineer reads two days of docs, the team's frontend lead does a spike, the team picks one, and the work moves forward. The decision does not require sign-off from security, platform, FinOps, procurement, or an architecture review board. It does not need to, because the framework's blast radius is the application source.

A platform is a thing you provision. The decision is organizational. It involves vendor risk review, multi-year contracts, integration with the org's authentication and observability, alignment with the org's existing infrastructure, and the long tail of "what happens if this provider gets acquired" thinking. Those reviews exist because the wrong platform decision is hard to walk back, and because the people who feel the consequences are not the same people who made the call.

When a framework's correctness and performance start to require a specific cache topology, a specific function runtime, a specific proxy behavior, the framework has crossed the category line. Picking it is no longer a local decision. It is a platform decision dressed as a framework decision, and the people who would normally weigh in on a platform decision are not in the room when it is made. The frontend lead picks Next.js because Next.js is what frontend leads pick; the cost of that choice shows up months later, in a Redis bill, in a Lambda invocation count, in a p99 graph that nobody can explain to the CFO without a paragraph of caveats.

This is the part of the trade that does not recover quickly. Money recovers. A team can switch frameworks; it is painful but bounded. What does not recover is the org's awareness that infrastructure was a thing the org was supposed to choose. The next framework that ships on the same model finds the ground already prepared. Each one normalizes the next.

The line we forgot

A framework is not a platform, and a platform should not pretend to be a framework.

The honest test for any tool wearing the framework label is the one this article has been circling. What infrastructure does it require us to operate? What is the degraded-mode cost if we don't? A tool whose answers are "your existing Node host, and roughly the same as before" is a framework. A tool whose answers are "vendor-shaped infrastructure, and meaningfully worse" is something else. It does not have to be a worse thing. It does have to be named for what it is, because the people responsible for the answers to those two questions used to be the ones making the decision.

The dev/ops handshake we used to have was not nostalgia. It was a real division of labor that let frameworks evolve without dragging infrastructure along, and let platforms evolve without rewriting applications. It let teams stay in motion. It let small projects stay small. It let large projects choose where they ran on the basis of their own constraints, not the framework's.

We are losing that division of labor one framework choice at a time, mostly without noticing, and the cost is showing up in places — bills, latency floors, operational complexity, vendor leverage — that nobody connected to the original decision back when it was just "what should we use to build the app."

A framework should be replaceable without replacing the infrastructure underneath it. Infrastructure should not become a consequence of the framework. When those two roles invert, the team has stopped owning the most important architectural surface in the system, and the framework's authors have started.

A framework is not a platform. The two have always known what they were. We are the ones who forgot.
