I joined this project at a point where the cracks were starting to show.
The mobile app functioned, but not at the level a high-volume restaurant brand needed.
This was a leading quick-service restaurant brand in the Middle East, running hundreds of outlets and processing millions of orders. Mobile ordering and drive-through apps were no longer side projects; they were core infrastructure.
The ask was simple on paper: make the apps faster, more reliable, and ready to scale. In reality, it meant touching almost everything.
What wasn’t working
The biggest red flag was how the apps behaved under pressure.
Both iOS and Android apps took nearly 7 seconds to open, driven by heavy bundles, inefficient asset handling, and slow responses from the Rails backend. On top of that, the drive-through app only existed on Android, which limited reach and created unnecessary platform fragmentation.
Failures were another concern. Roughly 5% of service requests were failing, mostly due to inconsistent deployments and fragile app behavior. Backend performance didn’t help either: some order listing queries averaged 140ms and regularly timed out when customers had long order histories.
Notifications were basic, monitoring was scattered, and deployments across platforms felt more like rituals than repeatable processes. The business felt the impact through higher support calls, slower releases, and growing customer frustration.
How we approached it
We didn’t treat this as a “rewrite everything” exercise. The goal was to remove friction, not create new risk.
- Broke the work into phases that aligned with an upcoming product release
- Mapped where latency, failures, and operational drag were actually coming from
- Prioritized changes that reduced long-term maintenance, not just quick wins
- Treated DevOps and observability as first-class concerns, not post-launch fixes
One codebase, fewer headaches
On the mobile side, we migrated the separate iOS and Android codebases to a unified React Native (Expo) setup, aligning them more closely with the existing Rails backend. The goal wasn’t just cross-platform support; it was consistency. A single shared codebase meant features shipped together, bugs were fixed once, and teams stopped duplicating effort.
Performance tuning followed quickly. We tightened asset bundling, introduced lazy loading, and improved caching strategies. Startup time dropped sharply, and the apps finally felt responsive instead of hesitant.
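To give a sense of what lazy loading looks like in practice, here is a minimal sketch in TypeScript. The screen name and file path are hypothetical, and it assumes the bundler supports async imports; the point is the pattern of deferring a heavy screen until it is actually visited, not the real app's structure.

```tsx
// Lazy-load a heavy screen so its code is fetched and parsed only on first visit.
// Screen name and path are illustrative, not the real app's layout.
import React, { Suspense, lazy } from 'react';
import { ActivityIndicator, View } from 'react-native';

const OrderHistoryScreen = lazy(() => import('./screens/OrderHistoryScreen'));

export function OrderHistoryRoute() {
  return (
    // Show a lightweight spinner while the screen's chunk loads.
    <Suspense
      fallback={
        <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
          <ActivityIndicator />
        </View>
      }
    >
      <OrderHistoryScreen />
    </Suspense>
  );
}
```

Applied to a handful of rarely-visited-at-launch screens, this kind of deferral is what moves startup time, because the initial bundle stops paying for code the user hasn't asked for yet.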
As a bonus, the drive-through app landed on iOS for the first time, unlocking an entirely new slice of users without doubling development effort.
Fixing the real bottleneck: the backend
Fast screens don’t matter if the backend can’t keep up.
- Identified high-traffic APIs that were doing unnecessary work
- Focused on the order listing endpoint handling ~60M records
- Restricted query ranges to meaningful time windows
- Prevented expensive queries from running unchecked (a sketch of these guardrails follows this list)
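The backend itself is Rails, so the sketch below is TypeScript purely for illustration, with made-up names and limits. It shows the shape of the guardrails: clamp any requested range to a bounded window and cap the page size before the query ever touches the ~60M-row table.

```ts
// Illustrative guardrails for an order-listing endpoint (hypothetical names and limits).
// The real backend is Rails; this only shows the rules, not the implementation.

const MAX_WINDOW_DAYS = 90; // assumed "meaningful time window"
const MAX_PAGE_SIZE = 50;   // assumed cap on rows per request

interface OrderQuery {
  from: Date;
  to: Date;
  pageSize: number;
}

export function clampOrderQuery(requested: Partial<OrderQuery>): OrderQuery {
  const to = requested.to ?? new Date();
  const earliestAllowed = new Date(to.getTime() - MAX_WINDOW_DAYS * 24 * 60 * 60 * 1000);

  // Never let a single request walk the full order history in one shot.
  const from =
    requested.from && requested.from > earliestAllowed ? requested.from : earliestAllowed;

  const pageSize = Math.min(requested.pageSize ?? MAX_PAGE_SIZE, MAX_PAGE_SIZE);

  return { from, to, pageSize };
}
```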
We also cleaned up data access patterns:
- Introduced GraphQL to reduce over-fetching
- Tightened authorization using user-scoped access
- Added targeted indexing instead of blanket optimizations
The outcome was faster responses and predictable behavior under load.
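From the app side, the over-fetching fix is easy to picture. The query below is a hypothetical illustration rather than the real schema: the client asks only for the fields the order list actually renders, scoped to the signed-in user and a bounded time window.

```ts
// Hypothetical GraphQL query: request only what the order-list screen renders.
// Field names, endpoint, and window are illustrative.
const RECENT_ORDERS_QUERY = /* GraphQL */ `
  query RecentOrders($since: ISO8601DateTime!, $first: Int!) {
    myOrders(since: $since, first: $first) {
      id
      placedAt
      totalAmount
      status
    }
  }
`;

// Plain fetch-based call; in practice a client such as Apollo or urql would add
// caching and retries, but the narrow field selection above is the point.
export async function fetchRecentOrders(token: string) {
  const since = new Date(Date.now() - 90 * 24 * 60 * 60 * 1000).toISOString();

  const response = await fetch('https://api.example.com/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`, // server resolves myOrders for this user only
    },
    body: JSON.stringify({
      query: RECENT_ORDERS_QUERY,
      variables: { since, first: 25 },
    }),
  });

  const { data } = await response.json();
  return data.myOrders;
}
```

Resolving `myOrders` against the authenticated user on the server is what user-scoped authorization means in practice: the query cannot reach rows it was never entitled to see.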
Making stability boring (in a good way)
Reliability improves when failures are visible early. We added automated end-to-end tests using Maestro, integrated Sentry for real-time error tracking, and standardized deployment flows for both platforms.
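As one concrete example, wiring Sentry into a React Native app is only a few lines. The DSN and sample rate below are placeholders, not the project's real configuration.

```ts
// Minimal Sentry setup for the React Native app (DSN and sample rate are placeholders).
import * as Sentry from '@sentry/react-native';
import App from './App';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  tracesSampleRate: 0.2, // sample a fraction of transactions for performance tracing
});

// Wrapping the root component reports unhandled errors with device and release context.
export default Sentry.wrap(App);
```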
Releases stopped being stressful. Issues surfaced sooner. The architecture became easier to extend instead of fragile to touch.
Infrastructure that supports growth
While the apps evolved, the infrastructure needed the same attention. We rebuilt observability from the ground up using Prometheus, Grafana, CloudWatch, and Opsgenie, giving teams a single, coherent view of system health.
Security was tightened at the Kubernetes level, Terraform modules were standardized, and load testing was automated using JMeter within CI/CD pipelines. Auto-scaling and retention policies were tuned to balance performance and cost instead of overcompensating with brute force.
What changed after six months
The numbers were hard to ignore:
- App startup time dropped from ~7 seconds to under 3
- Service failures fell from 5% to 0.5%
- Backend query times improved from 140ms to ~40ms
But the real wins showed up elsewhere. Support tickets went down. App ratings went up. Order volumes increased as friction disappeared. Internally, the engineering team shipped in days instead of weeks, with fewer surprises.
For our team at RailsFactory, the biggest success was this: the system stopped fighting the people using it.
- Customers ordered without delays
- Developers built without fear
That’s usually how you know app modernization works!