Rails didn't get it wrong.
It got it right for the world it was born into.
In the mid-2000s, backend applications were mostly synchronous, request/response systems. Background jobs were rare, WebSockets didn't exist, observability wasn't a discipline, and "scale" usually meant adding more app servers behind a load balancer. In that world, Rails' core bet - optimizing relentlessly for developer productivity - was exactly the right one.
But the world changed. Modern backend systems are no longer just HTTP request handlers. They're long-lived processes juggling async I/O, background execution, real-time communication, eventing, and deep observability requirements. And most of that complexity didn't disappear - it just moved outside the framework.
Rails' quiet assumption
One of Rails' early assumptions is:
Production complexity lives outside the application.
Need background jobs? Add Sidekiq.
Need coordination? Add Redis.
Need concurrency? Add Async.
Need structured logging? Add Lograge.
This wasn't a mistake. It was a pragmatic trade-off at a time when Ruby had no mature async primitives and when keeping the framework small mattered more than absorbing operational concerns.
But the long-term consequence is familiar to anyone running a complex Rails backend today: your app is no longer a single system. It's a constellation of processes, queues, and coordination layers that must all be reasoned about together.
Rails stayed elegant by pushing complexity outward. Teams paid for that elegance later, in operations.
A different assumption
What if we start from a different premise?
Backend complexity is inevitable - so the framework should absorb as much of it as possible.
For a long time, the Ruby ecosystem didn't have a direct answer to this. We had Rails for productivity, and we had micro-frameworks for raw simplicity. But we lacked a framework designed specifically for the modern, high-concurrency, operationally complex world that didn't abandon the ergonomics of Ruby.
This is the philosophy behind Rage.
Rage is an API-only Ruby framework designed to explore what backend development looks like when we treat modern operational concerns as first-class instead of external integrations.
What this looks like in practice
If Rails was designed today - with fiber schedulers, async I/O, structured logging, and real-time APIs as givens - the architectural choices would look very different.
Concurrency as a foundation, not an escape hatch
Rage is fiber-first. HTTP handling, background jobs, and WebSockets all run inside the same async runtime, with the same object model and failure semantics. Async work isn't something you hand off to another system - it's just another execution path.
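Here's a minimal sketch of what that feels like in a controller action: two HTTP calls fan out concurrently and the request fiber waits for both. RageController::API, Fiber.schedule, and Fiber.await follow Rage's documented controller API; the URLs are placeholders.

```ruby
require "net/http"

class PagesController < RageController::API
  def index
    # Each Fiber.schedule call starts a non-blocking fiber; Fiber.await
    # suspends the request fiber until both calls finish, so the two
    # HTTP requests run concurrently inside the same runtime.
    pages = Fiber.await([
      Fiber.schedule { Net::HTTP.get(URI("https://example.com/a.json")) },
      Fiber.schedule { Net::HTTP.get(URI("https://example.com/b.json")) }
    ])

    render json: { first: pages[0], second: pages[1] }
  end
end
```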
Background jobs as part of the application
Instead of assuming a separate worker fleet and queue infrastructure, Rage treats background execution as an in-process capability by default. Jobs are persisted to a write-ahead log on disk, providing delivery guarantees without Redis or a database.
That means fewer moving parts, fewer failure modes, and durability with zero setup - a backend that can start simple and scale outward only when necessary.
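To make the write-ahead-log idea concrete, here is a deliberately simplified sketch of the mechanism - not Rage's implementation or public API - showing why durability doesn't require an external broker: the payload hits disk before execution is scheduled, so a crash can be recovered by replaying the log.

```ruby
require "json"

# Illustrative sketch of a write-ahead-log queue (not Rage's actual code).
class WalQueue
  def initialize(path = "jobs.wal")
    @log = File.open(path, "a")
  end

  def enqueue(job_class, *args)
    @log.puts(JSON.generate(job: job_class.name, args: args))
    @log.fsync # the entry is durable before the enqueue is acknowledged

    # Execute in-process, inside the same async runtime (requires a fiber
    # scheduler to be set, as it is in Rage).
    Fiber.schedule { job_class.new.perform(*args) }
  end
end
```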
Observability as a framework contract
Rage provides a dedicated observability interface that lets developers measure and monitor what's happening inside the application - request handling, job execution, WebSocket connections. The framework sandboxes observability code: if your instrumentation has a bug, it won't crash your request handler or background job. Observability becomes a safe, first-class capability rather than something you hope doesn't interfere with production.
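The sandboxing idea is worth spelling out. As a conceptual sketch - this is not Rage's internal code, and the event name is made up - the contract is simply that subscriber callbacks are isolated from the code paths they observe:

```ruby
# Conceptual sketch of sandboxed instrumentation: a failure in monitoring
# code never propagates into the request or job that triggered the event.
module Instrumentation
  @subscribers = Hash.new { |hash, key| hash[key] = [] }

  def self.subscribe(event, &block)
    @subscribers[event] << block
  end

  def self.publish(event, payload)
    @subscribers[event].each do |callback|
      callback.call(payload)
    rescue => e
      warn "instrumentation error on #{event}: #{e.message}" # never crash the caller
    end
  end
end

# A buggy subscriber only produces a warning; the request still completes.
Instrumentation.subscribe(:request_finished) { |payload| raise "oops" }
Instrumentation.publish(:request_finished, duration_ms: 12)
```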
The unified runtime also enables deeper logging semantics. Request IDs aren’t just an HTTP concept - they automatically propagate to any background jobs enqueued during a request, ensuring all logs produced are tagged with the same parent request ID. This kind of cross-cutting observability is automatic in a unified runtime, but requires deliberate coordination when stitching together separate tools.
This isn't about more features. It's about acknowledging that observability is part of what a backend is, not something bolted on later.
Documentation as code
In a distributed world, the API contract is everything. Rage generates OpenAPI documentation through static analysis of your code. That means your API schema can be generated and validated in CI without spinning up the application. The schema isn't a separate file you have to maintain; it's a reflection of your actual routes and controllers, verifiable at build time.
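A rough sketch of what "documentation as code" looks like at the controller level - the annotation tags below are illustrative of the comment-driven, statically analyzable style rather than copied from Rage's docs, and User is assumed to be an ORM model defined elsewhere:

```ruby
class UsersController < RageController::API
  # Comment annotations like these can be read by static analysis without
  # booting the app; check Rage's documentation for the exact syntax.
  #
  # @summary Return a single user
  # @response 200 User resource found
  def show
    render json: User.find(params[:id])
  end
end
```

Because the schema is derived from the source, a CI step can regenerate it on every commit and fail the build if the contract drifts from the code.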
One system first, distributed later
Thanks to a fiber-based architecture and direct inter-process communication, Rage can run a full-fledged backend - HTTP, jobs, WebSockets - in a single process or a multi-process cluster, without introducing Redis just to coordinate state.
Distribution becomes a scaling decision, not a starting requirement.
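Concretely, the same codebase can start as one process and later fan out into a cluster by changing a setting rather than adding infrastructure. A hedged sketch - the workers_count option follows Rage's configuration style, but verify the exact name against the current documentation:

```ruby
Rage.configure do
  # One process while the app is small; raise this once a single worker is
  # no longer enough. Either way, HTTP handling, background jobs, and
  # WebSockets keep sharing the same fiber-based runtime.
  config.server.workers_count = 1
end
```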
The monolith, redefined
In the Ruby community, monoliths are often praised as an antidote to microservice sprawl. But "monolith" is usually defined in terms of code structure rather than system behavior.
A Rails app with Sidekiq workers, Redis coordination, and WebSocket servers may live in one repository - but operationally, it's already distributed.
Rage starts from a different definition:
A monolith is a system that can be deployed, understood, and operated as a single unit.
Because HTTP handling, background jobs, async I/O, and WebSockets all live inside the same fiber-based runtime, a Rage backend can remain genuinely monolithic far longer - running comfortably on a single server without external coordination infrastructure.
That doesn't push teams toward microservices. It does the opposite. It allows teams to delay distribution until it's forced by scale, not assumed from day one.
In that sense, Rage is less API-first and more monolith-first - just without a template renderer attached. That's not a limitation. It's the entire point.
Follow along at https://x.com/codewithrage