Summary: Railway logged 8 incidents in 8 days in May 2026. That sounds bad on its own, until you find that the platform has recorded 1,112 outages since October 2022, roughly one per day for over three years. This article examines that longer pattern of instability. All data here is collected from Railway's public status page, historical incident records, postmortem blog posts, and third-party tracking via StatusGator.
Railway's early-May 2026 incident streak looked bad on its own. Over eight days, the developer platform reported problems affecting builds, regional networking, volume-backed services, and even Central Station login. The more consequential point is that the streak was not an outlier. The worst incidents of 2026 had already happened months earlier, and they were of a different character entirely.
Railway is not selling raw infrastructure. It is selling abstraction: less operational overhead, faster shipping, a simpler way to deploy and run applications. For customers of that kind of platform, repeated instability lands differently. When Railway has trouble, users are not just dealing with a broken subsystem. They are dealing with the failure of the convenience they were paying for.
Eight Days, Eight Failure Modes
Railway's own historical status page shows eight publicly listed incidents between May 1 and May 8, 2026.
On May 1, some users were unable to log in to Central Station for roughly five minutes. On May 4, Railway disclosed degraded performance for stateful services with attached volumes in EU West, warning users of elevated latency and slower disk I/O. Later that same day, it reported build delays, tying them to degraded GitHub services while noting its engineers were scaling the build pipeline as backlog accumulated. GitHub's own status history shows a May 4 incident that overlaps with Railway's timeline.
May 4 did not stop there. Railway also reported elevated latency in its US East edge network, linked to an upstream CDN layer, and mitigated it by removing the affected provider from rotation. Hours later, it disclosed connectivity issues in Singapore, with failed requests and DNS resolution errors affecting services in that region.
On May 5, builds were running slow in US West due to an unnamed upstream provider. On May 6, EU West users hit ECONNRESET errors from a single unhealthy proxy that Railway removed from rotation. On May 8, builds started queueing again, this time because of a bug in a recent builder image that required a rollback.
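The ECONNRESET detail is worth pausing on, because a connection reset from an unhealthy proxy surfaces to application code as an ordinary network error, and a bounded retry is often the only client-side mitigation available while the platform pulls the bad node. What follows is a minimal sketch of that pattern; the fetchWithRetry helper, the endpoint, the retry count, and the backoff delays are illustrative assumptions, not anything Railway prescribes.

```typescript
// Hypothetical sketch: retrying a request when the TCP connection is reset.
// Nothing here is Railway-specific; the endpoint, retry count, and backoff
// delays are illustrative assumptions.
async function fetchWithRetry(url: string, attempts = 3): Promise<Response> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetch(url);
    } catch (err) {
      lastError = err;
      // In Node, a reset connection typically surfaces as a "fetch failed"
      // error whose cause carries the ECONNRESET code; anything else is
      // rethrown rather than retried.
      const code = (err as { cause?: { code?: string } })?.cause?.code;
      if (code !== "ECONNRESET") throw err;
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, 200 * 2 ** i));
    }
  }
  throw lastError;
}

// Usage: a single unhealthy proxy resetting connections shows up to callers
// as intermittent failures that a bounded retry can often ride out.
fetchWithRetry("https://example.up.railway.app/health")
  .then((res) => console.log("status:", res.status))
  .catch((err) => console.error("gave up after retries:", err));
```

Retries like this only mask brief blips; they do not fix a proxy that keeps resetting connections, which is why Railway still had to remove the node from rotation.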
Builds, edge routing, regional proxies, stateful storage, platform login: five distinct layers, eight days. The density matters. But it pales next to what had already happened in February and March.
Before May: The Incidents That Are Harder to Explain Away
The May cluster was a matter of frequency. The earlier incidents were a matter of severity, and two of them had no external party to blame.
On February 11, Railway's own automated abuse-enforcement system sent SIGTERM signals to legitimate user workloads, including active Postgres and MySQL databases. Around 3% of services across the platform were terminated. Railway's dashboard continued showing affected services as "Online" while they were down, and users received no proactive notification. Hacker News user vintagedave wrote that day: "I've had about one third of my Railway services affected. I had no notification from Railway, and logging in showed each affected service as 'Online', even though it had been shut down." Railway's incident report acknowledged the enforcement logic was "overly broad in its targeting criteria." Railway's own system killed customer infrastructure, and its dashboard reported the opposite of what was happening.
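Whatever the enforcement logic looked like internally, the delivery mechanism was an ordinary SIGTERM sent to running workloads. The only defense available at the application level is a graceful shutdown handler, so a termination at least drains in-flight work and leaves a trace in the workload's own logs rather than depending on the dashboard. Below is a minimal sketch, assuming a plain Node HTTP service; the port and the 10-second drain timeout are hypothetical choices, not Railway requirements.

```typescript
import http from "node:http";

// Hypothetical sketch of graceful SIGTERM handling in a Node service.
// The port and the 10-second drain timeout are illustrative assumptions.
const server = http.createServer((_req, res) => {
  res.writeHead(200, { "content-type": "text/plain" });
  res.end("ok\n");
});

server.listen(Number(process.env.PORT ?? 3000));

process.on("SIGTERM", () => {
  // Log the signal so an unexpected termination is visible in the service's
  // own logs, independent of what the platform dashboard reports.
  console.error(`${new Date().toISOString()} received SIGTERM, draining connections`);

  // Stop accepting new connections, let in-flight requests finish, then exit.
  server.close(() => process.exit(0));

  // Force-exit if draining takes longer than 10 seconds.
  setTimeout(() => process.exit(1), 10_000).unref();
});
```

A handler like this would not have prevented the February 11 terminations, but it makes them visible: the service records its own shutdown even when the dashboard still says "Online".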
A week later, between February 18 and 21, Railway was hit by DDoS attacks reportedly reaching up to 1 Tbps. The attack shifted between application-layer and L4 TCP patterns. At one point, the upstream vendor handling Railway's countermeasures itself went down. Railway's response included repeatedly swapping proxy IP sets to stay ahead of the attackers. Eventually it migrated "Business plan and above customers" to a separate shard of proxies. Railway's incident report details the timeline, but does not directly address what it implies: during a platform-wide emergency, customers on lower tiers received meaningfully less protection.
Then on March 30, Railway crossed from reliability trouble into a data privacy incident. A configuration change pushed by a Railway engineer, intended to enable Surrogate Keys for per-domain CDN caching, accidentally enabled caching on domains that had CDN explicitly disabled. For 52 minutes between 10:42 and 11:34 UTC, authenticated HTTP GET responses were cached and potentially served to different users. Around 3,000 users were affected.
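The mechanism is mundane, which is part of what makes it dangerous: a shared cache will store and replay any response it is told is cacheable. The conventional guard is for the origin to mark authenticated responses as private and uncacheable, giving even a misconfigured edge layer an explicit signal not to share them. The sketch below uses the standard HTTP Cache-Control directives; the server and its auth check are hypothetical, not a description of Railway's setup.

```typescript
import http from "node:http";

// Hypothetical sketch: an origin that refuses to let authenticated responses
// be stored by any shared cache. The auth check is a stand-in.
const server = http.createServer((req, res) => {
  const isAuthenticated = Boolean(req.headers.authorization || req.headers.cookie);

  if (isAuthenticated) {
    // "private" forbids shared caches (CDNs) from storing the response;
    // "no-store" forbids storing it at all. Redundant on purpose for
    // per-user content.
    res.setHeader("Cache-Control", "private, no-store");
  } else {
    // Public, non-personalized content may be cached briefly at the edge.
    res.setHeader("Cache-Control", "public, max-age=60");
  }

  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ authenticated: isAuthenticated }));
});

server.listen(3000);
```

Directives like these are only advisory, though. An edge layer configured to cache regardless of origin headers can still serve one user's response to another, which is how a single configuration change turned a caching feature into a data exposure.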
Railway's incident report confirmed the incident but drew immediate criticism for how it framed the scope. Hacker News user varun_chopra wrote: "'0.05% of domains' is a vanity metric. What matters is how many requests were mis-served cross-user. They call it a 'trust boundary violation' in the last line but the rest of the post reads like a press release." User theden added that "customers have lost revenue, had medical data leaked etc., with no proper followup from the railway team." That claim of medical data exposure is unverified in Railway's own reporting, which confirms only that authenticated responses were served to unintended users. User edenstrom responded: "This was really the nail in the coffin for us. Most services are already moved from Railway, but the rest will follow during this week."
Why "It Was the Vendor" Only Goes So Far
Railway could reasonably argue that several of these incidents were not wholly its fault. The May 4 build delays were tied to GitHub. The May 5 slowdown was an upstream provider. The US East edge issue came from a CDN layer. The February DDoS involved a vendor failure during mitigation. In a narrow technical sense, that is true.
For customers, it is not enough.
Users adopt Railway precisely so they do not have to reason through a dependency chain of GitHub, builder images, proxies, CDN vendors, and regional behavior. They buy the abstraction. If that abstraction repeatedly allows third-party problems to spill into user-facing downtime, dependency risk stops being an external footnote and becomes part of Railway's own reliability story.
And the upstream defense does not apply to the two most serious incidents. The February 11 enforcement failure was Railway's own system. The March 30 CDN misconfiguration was a Railway engineer pushing a change to production. There is no third party to point to for either.
This is the tradeoff at the center of modern platform products. The more a vendor simplifies the stack, the more responsibility it concentrates. When things work, that feels like an advantage: teams ship faster without building a deep operations bench. When things do not work, the same concentration becomes a liability. One platform sits between the customer and their builds, runtime networking, stateful services, administrative access, and CDN behavior. The blast radius is organizational as much as technical.
A Pattern Across the Stack, Not a Rough Patch
Any single incident is easy to excuse. Cloud platforms fail. Vendors break. Build systems get jammed. Railway also did what companies are supposed to do: it posted publicly, updated users, and in most cases resolved problems quickly. That transparency is worth acknowledging.
It is also not the same thing as preventing failures from happening.
The problem is the surface area. Builds fail. Edge routing fails. Regional proxies fail. Volume-backed services degrade. Dashboard access breaks. Internal enforcement systems incorrectly terminate customer databases. Authenticated user data gets served to the wrong people. That is not the profile of a single weak component. It is the profile of a service under recurring stress across its entire operational surface, some of it from external dependencies and some of it self-inflicted.
Third-party tracking from StatusGator puts the longer-term picture in sharper relief: 44 incidents in the last 90 days, a median incident duration of 1 hour and 5 minutes, and more than 1,112 outages recorded since October 2022.
What the Public Record Asks Railway to Answer
Railway may have already hardened its build pipeline, improved rollback controls, tightened change management, or reduced exposure to fragile upstreams after these incidents. Those would be meaningful responses.
The public record points in one direction regardless. Railway's own internal systems have incorrectly terminated customer infrastructure and misreported its status. A single configuration change from one engineer exposed authenticated user data for 3,000 accounts. Multiple distinct failure modes hit within the same week, across different layers of the platform. And not just one week: this looks like a pattern years in the making.
In a way, spring 2026 did not reveal a new problem. It made an old one harder to ignore. The incident record stretches back to October 2022, and the pattern across builds, networking, data handling, and Railway's own internal systems has been consistent throughout. The tip of the iceberg was always visible. Most people just were not looking below it.