One thing I keep running into when testing APIs: oversized requests are handled as “bad input” instead of being rejected early. I still see APIs responding with:
- 400 Bad Request
- 500 Internal Server Error
- or even 200 OK

…when the only correct response is 413 Payload Too Large.
If the server parses or processes a huge payload before rejecting it:
- memory is already allocated
- CPU time is already wasted
- threads are already busy
Multiply that by concurrent requests and you get a trivial denial-of-service vector — no auth bypass, no malformed packets, no exotic exploits.
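For comparison, here's a minimal Go sketch of rejecting *before* doing any work (the /ingest route and the 1 MiB limit are my own illustrative choices): it checks Content-Length up front and caps the stream with http.MaxBytesReader, so the JSON decoder can never buffer more than the limit.

```go
package main

import (
	"encoding/json"
	"errors"
	"log"
	"net/http"
)

const maxBody = 1 << 20 // 1 MiB; tune per endpoint

func ingest(w http.ResponseWriter, r *http.Request) {
	// Reject up front when the client declares an oversized body:
	// nothing has been read yet, so no memory or CPU is spent on it.
	if r.ContentLength > maxBody {
		http.Error(w, "payload too large", http.StatusRequestEntityTooLarge) // 413
		return
	}
	// Cap the stream too, for chunked uploads or clients that lie
	// about Content-Length.
	r.Body = http.MaxBytesReader(w, r.Body, maxBody)

	var payload map[string]any
	if err := json.NewDecoder(r.Body).Decode(&payload); err != nil {
		var tooBig *http.MaxBytesError
		if errors.As(err, &tooBig) {
			http.Error(w, "payload too large", http.StatusRequestEntityTooLarge)
			return
		}
		http.Error(w, "bad request", http.StatusBadRequest) // genuinely malformed input
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/ingest", ingest)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The Content-Length check handles honest clients for free; MaxBytesReader covers the rest. Either path answers 413 instead of 400/500, and neither lets a 500 MB body reach the parser.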
What surprises me is how often this slips through:
- load tests focus on request count, not payload size (a quick size-focused probe is sketched after this list)
- input validation happens too late
- teams assume “clients won’t send that much data”
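On that first point, a size-focused probe is cheap to add to any test suite. A sketch, assuming a local server; the URL and the 10 MiB size are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// POST a deliberately oversized body and check only the status code.
	body := bytes.Repeat([]byte("x"), 10<<20) // 10 MiB of filler
	resp, err := http.Post("http://localhost:8080/ingest", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // want 413, not 200/400/500
}
```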
How does your API handle oversized payloads today?
And where do you enforce size limits — at the edge or inside app logic?
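For the app-side half of that question, one pattern is a middleware that wraps the entire router, so the limit applies before any routing or handler logic runs. At the true edge, a reverse proxy like nginx does the same job with its client_max_body_size directive, answering 413 itself. A Go sketch of the middleware approach (limitBody and the 1 MiB limit are illustrative):

```go
package main

import (
	"log"
	"net/http"
)

// limitBody enforces the size cap before any routing or handler logic runs.
func limitBody(maxBytes int64, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ContentLength > maxBytes {
			http.Error(w, "payload too large", http.StatusRequestEntityTooLarge)
			return
		}
		r.Body = http.MaxBytesReader(w, r.Body, maxBytes) // covers chunked bodies
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/ingest", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // handlers never see more than the cap
	})
	// Wrapping the mux means every route inherits the limit.
	log.Fatal(http.ListenAndServe(":8080", limitBody(1<<20, mux)))
}
```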