Every major React Server Components security release seems to trigger the same little ritual. An advisory lands, someone sees the letters RSC, and a few hours later the lesson has already collapsed into: "RSC is bad."
That lesson is convenient. It is also imprecise.
The same thing happened around the Next.js security release from May 7, 2026. Vercel shipped fixes for several Next.js and upstream React issues, including a high-severity denial-of-service vulnerability affecting the React Server Components packages. But the interesting part of the advisory was not that rendering a Server Component is inherently dangerous. The interesting part was that specially crafted HTTP requests sent to Server Function endpoints could cause excessive CPU usage or out-of-memory failures while the payload was being processed.
That distinction is not a footnote. It is the center of the issue.
A Server Component is not the same attack surface as a Server Function. One sends a representation of a component tree from the server to the client. The other receives a payload from the client, asks the server runtime to deserialize it, and then invokes server-side code. They can both live inside the RSC model. They can both involve the Flight protocol. But from a security perspective, they ask opposite questions.
The Server Component question is: what are we allowing to leave the server and reach the client?
The Server Function question is: what are we allowing to enter the server from the client?
The second one is an input boundary. If that boundary is enforced too late, the failure is not that the RSC model is broken. The failure is that an RPC-shaped input surface was treated as if it were merely a framework ergonomic.
The Category Error
There is a recurring confusion in RSC discussions. People often talk about "RSC" as one thing, when in practice they are combining several distinct mechanisms:
- Server Components: components that run on the server.
- Client Components: components that run in the browser, while still participating in the same React tree.
- Server References / Server Functions: server-side functions for which the client receives a reference and can later issue a call.
- Flight protocol: the serialization format that carries component payloads, references, and a broader set of values between the server and the client.
Together, these form an architecture. They do not share the same risk profile.
When a Server Component renders, the direction of execution is primarily server to client. The server produces a payload. The client consumes it. The usual class of bug is that the server puts something into that payload that it should not have put there. That can be a data leak, a cache-boundary mistake, or a component-level authorization bug.
When a Server Function runs, the direction reverses. The client sends something to the server. The server runtime has to understand the payload, identify the action, materialize the arguments, and pass them to the handler.
That is a very different moment. The browser is no longer just a consumer. The browser, or anything capable of sending an HTTP request, is now providing input to the server.
An endpoint like that cannot be treated as a React composition detail. It is a public RPC surface. It may look like a function call to the developer. TypeScript may make it feel wonderfully local. Over the network, it is still a hostile input boundary.
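To make that concrete: nothing about the call requires React on the other end. The endpoint, header name, and payload below are placeholders rather than any framework's real wire format; the point is only that anything able to issue a POST can reach this boundary.

// Hypothetical raw call to a Server Function endpoint, no React involved.
// The URL, header name, and body are illustrative placeholders, not any
// framework's actual Flight wire format.
const craftedPayload = '["anything the attacker wants the decoder to walk"]';

await fetch("https://app.example.com/profile", {
  method: "POST",
  headers: { "x-server-action": "a1b2c3" }, // which server reference to invoke
  body: craftedPayload,
});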
What Happens in a Server Function Call?
In simplified form, a Server Function request looks like this:
client event
-> POST request
-> identify action id / server reference
-> deserialize Flight payload
-> materialize arguments
-> invoke server function handler
Most application code focuses on the last step:
async function updateProfile(input) {
  "use server";
  const data = schema.parse(input);
  await db.user.update(data);
}
At the application level, this is better than nothing. The handler is not blindly trusting the input. But for the class of problem described in the 2026 advisory, this is already late.
By the time schema.parse(input) runs, the runtime has already done much of the risky work: it has read the request, walked the payload, materialized values, built objects, interpreted references, and potentially dealt with streams, binary values, and nested structures. If the goal of the attack is not to smuggle invalid business data into the handler, but to make deserialization itself consume too much CPU or memory, validation inside the handler does not protect the server from the relevant cost.
So "validate your input" is not specific enough.
The question is where.
If validation happens inside the handler, it protects application invariants.
If validation happens in the Server Function layer after the request has already been deserialized, it gives the developer a better contract, but it may still leave the decoder cost exposed.
If validation happens while the protocol payload is being deserialized, the runtime can know during the argument walk what it expects, what it should drop, what it should reject, and when it should stop processing the request.
That is the difference that matters.
A WAF Is the Wrong Boundary
One important line in Vercel's release was that the affected issues could not be reliably blocked at the WAF layer.
That should not be surprising.
A WAF sees HTTP requests. It can inspect headers, size, URLs, known patterns, maybe parts of the body. It does not fully understand the semantics of a Flight payload. It does not know which function a given server reference points to. It does not know how many arguments that function expects. It does not know that slot zero must be a string, slot one must be FormData, slot two must be a Map<string, number>, and the file field must be at most five megabytes and either image/png or image/jpeg.
If the WAF tried to know all of that, it would effectively be reimplementing the framework protocol at the edge. That is brittle, version-dependent, and it puts the responsibility in the wrong place.
The right boundary is where the runtime already knows which Server Function it is about to call, but has not yet handed materialized input to the handler.
That is the protocol layer.
This Is Not a TypeScript Problem
Types usually enter the discussion here. A Server Function might look like this:
async function savePost(post: PostInput) {
  "use server";
  // ...
}
The developer experience suggests that post is a PostInput.
From the network's point of view, that is only a hope.
The TypeScript type does not exist in the request. It does not exist in the Flight payload. It will not tell the decoder when to stop. It will not reject an overly deep structure, an oversized string, an oversized binary value, an unexpected FormData field, or a Map whose size is itself enough to become a denial-of-service attempt.
Types are useful documentation for the contract. But if the contract protects a runtime boundary, runtime information has to exist too.
That is why the Server Function definition needs metadata that does more than narrow the handler's TypeScript type. The metadata has to reach the protocol decoder.
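One way to make that runtime information exist at all is to derive the TypeScript type from a runtime schema rather than the other way around. A minimal sketch with Zod (the schema and field names are illustrative); the schema is then a value the framework could, in principle, hand to the decoder:

import { z } from "zod";

// The schema exists at runtime and can be handed to whatever layer
// enforces the boundary; the TypeScript type is derived from it and
// disappears at compile time.
const PostInput = z.object({
  title: z.string().min(1).max(200),
  body: z.string().max(50_000),
});

type PostInput = z.infer<typeof PostInput>;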
What Late Validation Looks Like
Consider an abstract Server Function API:
const savePost = createServerFn()
  .inputValidator(postSchema)
  .handler(async ({ data }) => {
    await db.post.create(data);
  });
This is a good direction. Validation lives at the definition site, not scattered through the handler body. The handler receives validated data. TypeScript inference moves together with the runtime schema. An API like this is much healthier than a bare async function (input: Whatever) and a comment saying "the client calls this."
TanStack Start is on this side of the line. Its createServerFn API makes the Server Function explicit, treats the input validator as part of the function contract, and documents that client-side calls become network calls. That is much better than hiding the request-shaped nature of the operation.
But it is still a different category.
A TanStack Start Server Function is not an RSC server reference decoded through the Flight protocol. Based on the documented API, validation is part of the Server Function layer: the function receives a data input, the runtime validates that input, and then the handler runs. That is a good application-level contract. If the runtime has already deserialized the body into a JavaScript value before the validator sees it, then the validator is working in the post-deserialization world.
This is not a criticism in the sense of "TanStack Start is bad." It is not. A definition-site validator is the right direction for a classic RPC API.
It is simply not the same protection as giving the Flight decoder the Server Function's argument-slot contract and letting validation happen during the payload walk.
For an RPC API, the question is how much work the framework's serialization layer has to do before the validator gets control. For an RSC Server Function, the question is even sharper because the Flight payload can carry a richer value space. We are not only talking about JSON objects. The protocol can represent references, form data, binary values, streams, iterables, promises, typed arrays, Map, and Set.
The richer the wire format, the less satisfying "we parse it at the top of the handler" becomes.
The react-server Approach
The @lazarv/react-server approach is not merely to provide a convenient validation wrapper around the handler.
The important part is that the Server Function definition attaches metadata to the server reference, and that metadata reaches the Flight decoder.
For example:
import {
  createFunction,
  formData,
  file,
} from "@lazarv/react-server/function";
import { z } from "zod";

export const uploadAvatar = createFunction([
  formData(
    {
      displayName: z.string().min(1).max(80),
      avatar: file({
        maxBytes: 5_000_000,
        mime: ["image/png", "image/jpeg"],
      }),
    },
    { unknown: "reject" }
  ),
])(async function uploadAvatar(form) {
  "use server";
  const displayName = form.get("displayName");
  const avatar = form.get("avatar");
  await saveAvatar({ displayName, avatar });
});
The point is not only that form.get("avatar") becomes nicer inside the handler.
The more important contract is this:
- the first argument to the Server Function is FormData;
- the allowed fields are known;
- unknown fields can be rejected by default;
- the file has a size limit;
- the MIME allowlist is part of the wire contract;
- the handler only runs if the decoder successfully validates that slot.
That is not business logic. That is an input boundary.
Slot-Walk Validation
The technical shape is roughly this.
When a Server Function export is wrapped with createFunction(...), the parse/validate spec associated with that wrapper is registered as server reference metadata. When a request comes in, the runtime first tries to recover the action id. For header-based action calls, that can come from the react-server-action header. For progressive-enhancement form submissions, it can be encoded in the submitted FormData. If the token is encrypted, the runtime decrypts it first so it knows which action the request is trying to call.
Then comes a small but important step: the action module has to be loaded before decoding.
That may sound incidental. It is not. The server-function metadata registry is populated by the module's top-level registerServerReference(...) calls. If the runtime deserialized the payload first and only loaded the action module later, the first call to an action could silently skip validation. So react-server preloads the action module first, then calls decodeReply with the recovered action id.
From there, the decoder is no longer walking the argument list blindly. It knows which Server Function it is decoding for. It can look up the associated metadata. It can apply parse and validate slot by slot.
If the first argument is z.string(), slot zero has to validate as a string.
If the second argument is arrayBuffer({ maxBytes: 1024 }), the decoder can reject an oversized buffer based on byte length.
If a formData(...) spec uses unknown: "reject", an injected extra field does not reach the handler.
If a file(...) spec declares a MIME allowlist and a size limit, the runtime does not wait for application code to decide whether the file is acceptable.
If a map(...) or set(...) spec has a maximum size, the collection cannot grow without bound in the pre-handler world.
If a stream or async iterable has a maximum chunk count or byte limit, the boundary remains active as the handler consumes it.
That is the key property: the shape of the Server Function input is not only TypeScript inference. It is a decoder contract.
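A heavily simplified sketch of that contract lookup follows. The names are illustrative, not the actual react-server internals, and the real decoder applies these checks while walking the payload rather than after the arguments already exist, which is what bounds the decode cost itself:

// Illustrative only: per-action argument-slot specs, applied before the
// handler ever sees a value.
class DecodeValidationError extends Error {
  constructor(
    public actionId: string,
    public slot: number,
    reason: string
  ) {
    super(reason);
  }
}

// e.g. a Zod schema or a formData()/file() spec
type SlotSpec = { validate(value: unknown): unknown };

// Populated by the action module's top-level server-reference registration,
// which is why the module must be loaded before decoding starts.
const slotSpecs = new Map<string, SlotSpec[]>();

function validateSlots(actionId: string, rawSlots: unknown[]): unknown[] {
  const specs = slotSpecs.get(actionId);
  if (!specs) throw new DecodeValidationError(actionId, -1, "unknown action");

  return rawSlots.map((raw, slot) => {
    const spec = specs[slot];
    if (!spec) throw new DecodeValidationError(actionId, slot, "unexpected argument");
    try {
      return spec.validate(raw); // reject before the handler ever runs
    } catch {
      throw new DecodeValidationError(actionId, slot, "slot failed validation");
    }
  });
}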
Failure Is Structural
A protocol-level validation failure is not a business validation failure.
It is not the same as a user typing a bad email address into a form and receiving a field error. It means the wire payload did not satisfy the contract under which the server was willing to materialize a Server Function call at all.
The right response is structural rejection.
In react-server, a validation failure during the slot walk becomes a DecodeValidationError and is mapped to a 400 response. The handler does not run. The argument list is not bound. The client does not receive detailed schema diagnostics, because those details can reveal useful shape information to an attacker. The operator log can still keep the useful parts: action id, slot index, and failure reason.
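Continuing the sketch above, the split between what leaves the server and what stays in the operator log might look roughly like this; invokeHandler and the response shape are illustrative:

// validateSlots and DecodeValidationError come from the sketch above.
// Hypothetical dispatch to the registered handler once decoding succeeded.
declare function invokeHandler(actionId: string, args: unknown[]): Promise<Response>;

async function handleActionRequest(actionId: string, rawSlots: unknown[]): Promise<Response> {
  let args: unknown[];
  try {
    args = validateSlots(actionId, rawSlots);
  } catch (error) {
    if (error instanceof DecodeValidationError) {
      // Operator-facing detail stays on the server.
      console.warn("decode rejected", {
        actionId: error.actionId,
        slot: error.slot,
        reason: error.message,
      });
      // Client-facing response is deliberately uninformative: no schema diagnostics.
      return new Response(null, { status: 400 });
    }
    throw error;
  }
  return invokeHandler(actionId, args);
}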
Again, this is different from application-level validation.
A form validation error is a user experience concern.
A decode validation error is a protocol concern.
If we merge those two paths, we either reveal too much to an attacker or give too little feedback to a real user. They should not be the same path.
Bound Arguments Are Not Call Arguments
There is another detail that is easy to lose in RSC Server Function discussions: call arguments and bound captures are not the same thing.
A Server Function can be created with bound values. These are values carried by a server-side closure or binding and later associated with the server reference. They should not be treated the same way as runtime arguments sent by the client.
In the react-server model, bound captures are integrity-protected by the action token. That is a different kind of protection than per-argument validation. They do not need the same schema path as client input, because they are not crossing the same trust boundary.
Arguments sent by the client are hostile input.
Bound captures are integrity-protected representations of server-side state.
If both are collapsed into the same "validate the input" bucket, the model becomes muddy again.
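For concreteness, here is the distinction in the common bind pattern; the action and field names are illustrative:

// postId is a bound capture: fixed on the server when the form is rendered
// and carried with the server reference (in the react-server model,
// integrity-protected by the action token). The FormData argument is a call
// argument: it arrives from the client on every submission and is hostile
// input until validated.
export async function deletePost(postId: string, formData: FormData) {
  "use server";
  // ...authorize, then delete the post identified by postId...
}

// In the rendering Server Component:
//   <form action={deletePost.bind(null, post.id)}> ... </form>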
Global Decode Limits
Per-function contracts need a second layer: global resource ceilings.
There will be unvalidated legacy actions. There will be code in the middle of migration. There will be Server Functions where some slots are intentionally loose. And there are payload characteristics that should not have to be repeated manually on every function.
So the runtime needs limits such as:
- maximum payload byte size;
- maximum Flight row count;
- maximum materialization depth;
- maximum number of bound arguments;
- maximum BigInt digit count;
- maximum string length;
- maximum stream chunk count.
These do not replace per-function validation. They are the safety floor. The function spec is the precise contract. The global limits are the ceilings that still stop obviously abusive payloads when a function has not yet been declared perfectly.
Together, they form a more meaningful defense.
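What such ceilings might look like as a single runtime-wide configuration; every option name here is a placeholder, not an actual react-server configuration key:

// Hypothetical global decode ceilings: declared once, applied to every
// Server Function request before and during deserialization.
const decodeLimits = {
  maxPayloadBytes: 1_000_000,
  maxFlightRows: 1_000,
  maxMaterializationDepth: 32,
  maxBoundArguments: 16,
  maxBigIntDigits: 64,
  maxStringLength: 100_000,
  maxStreamChunks: 1_000,
};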
Dev-Time Strictness
A runtime's behavior is not only what it does in production. It is also what it teaches during development.
If a "use server" export can be called from the client without validation, that is an attack surface that may not be visible at the call site. The developer sees a function. The browser sees an endpoint. The reviewer often reads the handler body, not the wire boundary.
That is why a dev-time warning for bare Server Functions is useful:
Server function ... called without validation
The point is not that every function needs a complex schema. Some functions have no input. Some migration paths need a temporary escape hatch. But that should be an explicit decision. The default should not be that a publicly callable Server Function has no runtime input contract and nobody notices until a security release makes the boundary visible.
In that sense, the no-spec createFunction() form is useful too. It does not add validation, but it records intent. The runtime can tell that the developer has seen the boundary and chosen not to narrow it yet.
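Based on the API shown earlier, that intent-recording form might look like this (still a sketch):

import { createFunction } from "@lazarv/react-server/function";

// No argument specs yet: this does not validate anything, but it makes the
// unvalidated boundary an explicit, searchable decision instead of a bare
// "use server" export nobody flagged.
export const ping = createFunction()(async function ping() {
  "use server";
  return "pong";
});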
The Missing Runtime Contract in Next.js
Next.js fixed the affected upstream React issues, and everyone affected should upgrade. That is not optional. After an advisory like this, the first correct move is always to patch.
But patching is not the same thing as learning the architectural lesson.
In the current Next.js Server Function model, there is no generally documented framework API that gives the runtime a per-function Flight decode contract. There is "use server". There is a server-side handler. You can validate inside that handler. You can build your own helper around it. But that is not the same as attaching argument-slot metadata to the server reference so the decoder knows what it is allowed to materialize before the handler runs.
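A sketch of the kind of helper you can build around it today; the helper name is mine, not a Next.js API, and note where it sits relative to deserialization:

"use server";

import { z } from "zod";

// A hypothetical application-level wrapper. It gives the handler a validated
// value, but it runs after the runtime has already deserialized the payload,
// so it does not bound the decode cost itself.
function withInput<S extends z.ZodTypeAny, R>(
  schema: S,
  handler: (input: z.infer<S>) => Promise<R>
) {
  return async (input: unknown): Promise<R> => {
    const parsed = schema.safeParse(input);
    if (!parsed.success) throw new Error("Invalid server function input");
    return handler(parsed.data);
  };
}

export const savePost = withInput(
  z.object({ title: z.string().max(200), body: z.string().max(50_000) }),
  async (input) => {
    // ...persist the post...
  }
);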
That is why I consider the react-server approach stronger here.
Not because it has "schema validation." Schema validation exists in many places.
Because the validation happens in a better place.
Because the function contract appears on the Flight protocol decode path.
Because malformed payloads can be structurally rejected before handler execution.
Because the wire-aware specs cover not only application data models, but also the richer value space of the protocol: FormData, File, Blob, ArrayBuffer, typed arrays, Map, Set, streams, iterables, and promises.
And because this defense is not a WAF rule, not a convention, not "remember to parse at the top of the handler," but a runtime boundary.
A Server Function Is a Public Endpoint
The simplest way to say all of this is:
A Server Function is a public endpoint.
Not because it looks like a REST route. Not because the developer wrote a URL for it. Because the client can send a request that causes the server to attempt to invoke a function.
Once we accept that, the security consequences become clearer:
- every Server Function should have an input contract;
- the contract should live at the definition site, not be scattered through the handler body;
- the runtime should know the contract as early as possible;
- deserialization cost should be bounded;
- unknown fields should not be treated as harmless by default;
- file and blob inputs should have size and MIME constraints;
- authorization should be explicit in the Server Function, not inferred from the surrounding component tree;
- the WAF should be an extra layer, not the primary interpreter.
This is not an anti-RSC position.
It is the position that takes RSC seriously enough not to treat all of its parts as one mystical box.
RSC Is Not the Scapegoat
The interesting thing about RSC is that it gives us a formal boundary between two different execution environments. The server and the client are not the same place. They have different capabilities, different costs, different failure modes, and different security responsibilities.
That is the strength of the model.
But a boundary is only useful if we are precise about what crosses it, and in which direction.
For Server Components, the question is what the server sends to the client.
For Server Functions, the question is what the server accepts from the client.
When a Server Function input payload is validated too late, or when the runtime does too much work before it even knows what it expects, the lesson is not that Server Components are a bad idea. The lesson is that an RPC-shaped input surface was treated as framework ergonomics for too long.
That mistake is fixable. But only if we name it precisely.
Not "RSC is bad."
Not "Server Components are insecure."
This:
Server Function input payloads need protocol-level validation.
That sentence is less dramatic. It is also more true.
And if we want RSC to have a healthy future, that is exactly the kind of sentence we need: less drama around the model, and more attention on the few boundaries where the model actually meets a hostile network.