Yesterday, I posted here asking the community to stress test LShort, my new Edge-based link shortener. I built a 3D "War Room" dashboard to visualize the traffic and challenged you to crash it.
Well, the first 24 hours were intense. I learned lessons that only real-world traffic (and a bit of mischief) could teach.
Here is the battle report:
1. The "Jaguarão" Attack (and my unsuspecting Fiancée)
The first spike happened at a terrible time. I was on a video call with my fiancée, trying to pay attention to her, when, out of the corner of my eye, I saw my War Room dashboard start flashing frantically.
Someone from Jaguarão (a city in southern Brazil) decided to test the infrastructure properly. It was a brief but concentrated attack: around 4,000 requests in a few seconds.
The Technical Surprise:
Ironically, as the hit counter went vertical, the average latency actually dropped.
This validated the architecture: the Cloudflare Workers + Upstash Redis combo works incredibly well on "hot paths". Repeated hits on the same link keep it warm in the cache, so each lookup skips the expensive work instead of piling it up.
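To make that concrete, here's a minimal sketch of what the hot path can look like on this stack. It's not LShort's actual code: the `link:` key scheme, the `Env` bindings, and the per-isolate `Map` micro-cache are all illustrative assumptions.

```typescript
// Minimal hot-path sketch (illustrative, not LShort's actual code).
import { Redis } from "@upstash/redis/cloudflare";

interface Env {
  UPSTASH_REDIS_REST_URL: string;
  UPSTASH_REDIS_REST_TOKEN: string;
}

// Module-level map: it lives as long as the Worker isolate does, so a burst
// of hits on one slug (hello, Jaguarão) is mostly served straight from memory.
const hot = new Map<string, string>();

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const slug = new URL(request.url).pathname.slice(1); // e.g. "ITlPomy"

    let target = hot.get(slug);
    if (!target) {
      // Cache miss: one REST round trip to Upstash, then remember the answer.
      const redis = Redis.fromEnv(env);
      target = (await redis.get<string>(`link:${slug}`)) ?? undefined;
      if (!target) return new Response("Not found", { status: 404 });
      hot.set(slug, target);
    }

    return Response.redirect(target, 302);
  },
};
```

In a real deployment you'd want a TTL on that map so edited or deleted links propagate, but the shape of the win is the same: after the first hit, a hot slug stops paying the network round trip.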
The Failure (My Wallet):
The shortener held up, but the backend service powering the War Room (hosted on Railway) hit the Billing Hard Limit I had set (it shares the budget with my other hobby projects). Railway, protecting my credit card, paused the service. A quick adjustment to the spending limit brought it back, but it was a good reminder: sometimes the bottleneck isn't code, it's the budget configuration.
2. Error 1101 and The End of the Free Tier
Everything seemed calm until around 3:00 PM today.
The accumulated traffic from Reddit, TabNews, and Dev.to finally took its toll. My Cloudflare Workers free tier quota was completely pulverized.
Suddenly, users started seeing the dreaded Error 1101.
I had to act fast: upgrade the plan and scale up the Worker limits. A few minutes later, the system was 100% operational again.
3. Key Takeaways
This stress test proved that separating responsibilities was the right call:
- Redirects (Critical Path): Running on the Edge lets them tank heavy loads (as long as the bill is paid).
- Analytics (Async): Decoupling the heavy lifting of data processing saved the redirection engine from crashing during the 4k RPS spike (see the sketch after these takeaways).
In a real-world scenario, traffic would be distributed across many links rather than a DoS on a single URL, but seeing the system scale (and fail exactly where the limits were set) was invaluable.
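For anyone curious how that decoupling looks in practice on Workers, the core of the pattern is `ctx.waitUntil()`: the user gets their redirect immediately, and the analytics call finishes in the background. Again, a hedged sketch rather than LShort's actual code; the ingest endpoint, payload shape, and `resolveRedirect` helper are all hypothetical, and the types assume `@cloudflare/workers-types`.

```typescript
// Sketch of async analytics decoupling via ctx.waitUntil().
// Types assume @cloudflare/workers-types; the ingest URL is made up.

interface Env {
  UPSTASH_REDIS_REST_URL: string;
  UPSTASH_REDIS_REST_TOKEN: string;
}

// Stand-in for the hot-path lookup from the earlier sketch.
async function resolveRedirect(request: Request, env: Env): Promise<Response> {
  return Response.redirect("https://example.com", 302); // placeholder
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Critical path: the user gets their 302 as fast as possible.
    const response = await resolveRedirect(request, env);

    // Background path: the Worker stays alive to finish this promise, but the
    // response has already gone out. If the analytics backend is slow, down,
    // or paused by a billing limit, redirects keep flowing regardless.
    ctx.waitUntil(
      fetch("https://warroom.example.com/ingest", { // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          slug: new URL(request.url).pathname.slice(1),
          country: request.cf?.country, // feeds the geo view on the dashboard
          ts: Date.now(),
        }),
      }).catch(() => {
        // Analytics failures must never surface to the user.
      })
    );

    return response;
  },
};
```

The design point is that the two paths fail independently: when Railway paused my backend, a pattern like this is why users kept getting redirected while the War Room went dark.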
The Challenge is still on!
Now that I've upgraded the plans and adjusted the limits, the system is ready for Round 2.
👉 Stress Test Link: https://lshort.sh/ITlPomy
Thanks to everyone who participated in the "destruction" so far. Waiting for the next chapter! 🚀