Here’s what I learned building a subscription management system from scratch.
If you’ve ever built an app, you’ve probably started the same way I did: a frontend, a backend, and a database.
It works. It feels clean. You ship features quickly.
But then the app grows.
You add things like subscription tracking, renewal reminders, analytics, maybe even notifications. Suddenly, what used to feel simple starts getting messy. Logic spreads across different parts of the codebase. Small changes start breaking unrelated features. Debugging takes longer than building.
This is exactly where most subscription management systems start to struggle.
The problem isn’t that the system is “wrong.” It’s that it was designed for a smaller version of reality.
In the beginning, a subscription system looks simple:
- Store subscription details
- Track billing cycles
- Send reminders
But in a real-world scenario, things expand quickly:
- Users can have multiple subscriptions
- Each subscription has different billing cycles
- Notifications need to be timely and reliable
- Data needs to be consistent across the system
What started as a few database tables becomes a system that needs to handle coordination, timing, and scale.
And here’s the key insight I learned the hard way:
A system that works for 10 users is not the same system that works for 10,000 users.
Most failures don’t happen because of bad code. They happen because the system wasn’t designed to evolve.
That’s when I stopped thinking in terms of “features”… and started thinking in terms of system design.
Starting with Back-of-the-Envelope System Design
Before writing more code, I stepped back and did something I should’ve done earlier: a quick back-of-the-envelope design.
Nothing fancy. No diagrams. Just rough thinking.
If this system actually grows, what does it need to handle?
Let’s say:
- 10,000 users
- Each user has ~5 subscriptions
- That’s 50,000 active subscriptions
- Each subscription may trigger 1–2 notifications per cycle
Now you’re looking at:
- Thousands of reads per day (dashboard, listings)
- Frequent writes (new subscriptions, updates)
- Time-based events (renewals, reminders)
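The arithmetic behind those bullets is worth making explicit. Here’s the same estimate as a tiny sketch; the 30-day cycle length is my assumption (the article’s numbers don’t specify it):

```typescript
// Back-of-the-envelope load estimate. Users, subs-per-user, and the
// 1–2 notifications per cycle come from the text; assuming mostly
// monthly cycles (~30 days) to turn that into a daily rate.
const users = 10_000;
const subsPerUser = 5;
const activeSubs = users * subsPerUser; // 50,000 active subscriptions

const notifsPerCycle = 2; // upper bound from the text
const cycleDays = 30;     // assumption: monthly billing cycles
const notifsPerDay = (activeSubs * notifsPerCycle) / cycleDays;

console.log(activeSubs, Math.round(notifsPerDay)); // 50000 3333
```

Roughly 3,000+ scheduled notifications a day is nothing exotic, but it’s enough that “a cron job and a for-loop” stops being a plan.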
Suddenly, this is no longer a “simple app.”
It’s a system with different types of workloads:
- User authentication
- Subscription management
- Scheduled processing (renewals, alerts)
- Notification delivery
And here’s where things get interesting.
Not all parts of the system behave the same way.
- Authentication needs to be fast and secure
- Subscription data needs consistency
- Notifications need reliability, not just speed
- Analytics is read-heavy
Trying to handle all of this in a single backend quickly becomes messy.
You either:
- Overload one system with too many responsibilities
- Or start adding hacks to make things “just work”
Neither scales well.
This quick exercise changed how I approached the system.
Instead of thinking:
“What feature do I build next?”
I started asking:
“What kind of system does this feature belong to?”
That shift is what naturally led me toward a more scalable architecture.
Breaking the Monolith and Moving to a Scalable Architecture
At this point, it became clear that adding more features to a single backend was only going to make things worse.
Everything was tightly connected.
- Authentication logic mixed with business logic
- Subscription updates triggering notification logic directly
- Small changes forcing full redeploys
It worked, but it didn’t scale.
So instead of patching the system again, I made a bigger shift:
I started breaking the system into separate services.
Not because microservices are trendy, but because the system needed clear boundaries.
I split the application based on responsibilities:
- Authentication became its own service
- Subscription logic became isolated
- Notifications were handled independently
This immediately brought clarity.
Each part of the system now had a single responsibility.
Changes in one service didn’t risk breaking everything else.
But let’s be honest: this didn’t magically make things easier.
In fact, it introduced a new set of problems:
- How do services talk to each other?
- How do you manage authentication across services?
- Where does request routing happen?
This is where the architecture evolved further.
Instead of clients directly calling multiple services, I introduced an API Gateway as the single entry point. Every request flows through it, and it decides where to route it.
This solved a lot of chaos:
- Centralized routing
- Consistent authentication checks
- Cleaner client-side logic
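At its core, the gateway’s routing job is a prefix-match table from paths to services. A minimal sketch (the service names and ports here are placeholders, not the real deployment):

```typescript
// Hypothetical routing table: path prefix -> internal service base URL.
const routes: Record<string, string> = {
  "/auth": "http://auth-service:4001",
  "/subscriptions": "http://subscription-service:4002",
  "/notifications": "http://notification-service:4003",
};

// Resolve an incoming path to its upstream target, or null if no
// service owns it. Longest-prefix match so "/subscriptions/42"
// still lands on the subscription service.
function resolve(path: string): string | null {
  const prefix = Object.keys(routes)
    .filter((p) => path.startsWith(p))
    .sort((a, b) => b.length - a.length)[0];
  return prefix ? routes[prefix] + path : null;
}
```

A real gateway layers auth checks, rate limiting, and retries on top, but the client-facing win is exactly this: one entry point, and the internal topology stays invisible.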
Now the system looked less like a single app and more like a coordinated set of services.
And that’s the key difference.
You’re no longer building “an app.”
You’re building a system where multiple components work together.
It’s more complex, yes.
But it’s also the first step toward something that can actually scale.
Inside the System: How Each Service Works
Once the system was split into services, the next challenge was making sure each one had a clear role. Without that, you just end up with distributed chaos instead of a clean architecture.
Here’s how the system is structured today.
At the front, there’s the mobile application built with React Native. This is what users interact with — adding subscriptions, viewing dashboards, and receiving alerts.
Every request from the app goes through the API Gateway. Think of it as the system’s front door. It handles routing, basic filtering, and ensures requests reach the correct service without the client needing to know internal details.
Behind that, the system is divided into focused services.
The Auth Service is responsible for authentication and security. It handles login, registration, JWT token generation, and role-based access control. This keeps security isolated and consistent across the system.
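To make “JWT token generation” concrete, here’s a stripped-down sketch of HS256-style signing and verification using only Node’s crypto module. This is illustrative, not the service’s actual implementation; the secret and claims are placeholders, and a production service would use a vetted JWT library with expiry and key rotation:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "replace-me"; // placeholder; real keys live in a secret store

const b64 = (o: object) => Buffer.from(JSON.stringify(o)).toString("base64url");

// Produce a header.payload.signature token, HMAC-SHA256 signed.
function sign(payload: object): string {
  const head = b64({ alg: "HS256", typ: "JWT" });
  const body = b64(payload);
  const sig = createHmac("sha256", SECRET)
    .update(`${head}.${body}`)
    .digest("base64url");
  return `${head}.${body}.${sig}`;
}

// Recompute the signature and compare in constant time.
function verify(token: string): boolean {
  const [head, body, sig] = token.split(".");
  if (!head || !body || !sig) return false;
  const expected = createHmac("sha256", SECRET)
    .update(`${head}.${body}`)
    .digest("base64url");
  return (
    sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
  );
}
```

The point of isolating this in one service is that every other service can treat a token as an opaque, already-verified credential.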
The User Service manages user-related data: profiles, preferences, and account settings. It doesn’t deal with subscriptions directly, which keeps responsibilities clean.
The core of the system is the Subscription Service. This is where all the main business logic lives: creating subscriptions, managing billing cycles, calculating renewals, and powering analytics.
The Help/Support Service provides a guided resolution system using a decision-tree approach, helping users troubleshoot common issues step-by-step and reducing the need for manual support intervention.
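A decision tree like this is just nested question/answer nodes. A minimal sketch of the idea; the node contents here are invented for illustration, not the service’s real troubleshooting flows:

```typescript
// Each node is either a question with yes/no branches, or a leaf answer.
interface Node {
  question?: string;
  answer?: string;
  yes?: Node;
  no?: Node;
}

// Hypothetical flow for a "reminder didn't arrive" issue.
const tree: Node = {
  question: "Did you receive any reminder for this subscription before?",
  yes: { answer: "Check whether the billing date changed recently." },
  no: { answer: "Verify notification permissions, then open a ticket." },
};

// Walk the tree with the user's yes/no answers until a leaf is reached.
function walk(node: Node, answers: boolean[]): string {
  if (node.answer) return node.answer;
  const next = answers[0] ? node.yes! : node.no!;
  return walk(next, answers.slice(1));
}
```

Because the tree is plain data, support flows can be edited without redeploying the service.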
Then comes the Notification Service, which handles sending alerts. Whether it’s email, in-app notifications, or push alerts, this service ensures users are notified at the right time without blocking the main application flow.
And then there’s the most interesting part — the LLM Service.
Instead of forcing users to manually fill forms, this service allows them to send a simple message like:
“Netflix ₹499 monthly”
The service processes that input, extracts structured data, and creates a subscription event automatically in the database. It sits alongside the system, enhancing it without tightly coupling with core logic.
Each service focuses on one thing.
They communicate through APIs, not shared logic.
This separation is what makes the system maintainable as it grows.
Because at scale, clarity matters more than cleverness.
Making the System Feel Intelligent (LLM + Automation)
Up to this point, the system was scalable. It was structured. It could handle growth.
But it still felt… like a system.
Users had to:
- Open the app
- Fill forms
- Enter subscription details manually
It worked. But it wasn’t great.
That’s when I started thinking about a different problem:
What if users didn’t have to “use” the system at all?
What if they could just say what they did, and the system handled the rest?
That idea led to integrating an LLM-based service into the architecture.
Now instead of filling a form, a user can type something like:
“Bought Spotify for ₹199 per month”
That message goes to the LLM service, which processes it and extracts:
- Subscription name
- Cost
- Billing cycle
- Relevant timing details
Once parsed, it creates a structured event and pushes it into the system just like any manually created subscription.
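The important design detail is the contract, not the model: the LLM service’s output is just a structured event. As a sketch of that target shape, here’s a rule-based stand-in that handles the article’s “Netflix ₹499 monthly” pattern; the real service uses an LLM for messier input, and the field names here are my assumption:

```typescript
// Assumed shape of the event handed to the Subscription Service.
interface SubscriptionEvent {
  name: string;
  cost: number;
  cycle: string;
}

// Toy stand-in for the LLM extraction step: matches the
// "<service> ₹<amount> <cycle>" shape from the article's example.
function parseMessage(msg: string): SubscriptionEvent | null {
  const m = msg.match(/^(\S+)\s+₹(\d+)\s+(monthly|yearly)$/i);
  if (!m) return null;
  return { name: m[1], cost: Number(m[2]), cycle: m[3].toLowerCase() };
}
```

Whether the parser is a regex or a language model, the Subscription Service only ever sees a validated `SubscriptionEvent`, which is what keeps the coupling loose.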
From the user’s perspective, it feels almost invisible.
No forms. No friction. Just intent → action.
And importantly, this didn’t break the architecture.
The LLM service is not tightly coupled with the core system. It acts as an intelligent layer on top:
- It receives input
- Transforms it
- Sends structured data to the Subscription Service
That’s it.
This design keeps the system clean while still enabling powerful behavior.
I’ve written in detail about how this LLM pipeline works and how it processes natural language into database-ready events in a separate article — [LINK]. But at a high level, the goal here wasn’t just automation.
It was about reducing user effort to near zero.
Because scalability isn’t just about handling more users.
It’s also about making the system easier to use as it grows.
What I Learned Building for Scale
Looking back, the biggest lesson wasn’t about microservices, APIs, or even system design.
It was about how systems evolve.
I didn’t start with a perfect architecture. I started with something simple that worked. And that’s important because trying to over-engineer from day one usually slows you down more than it helps.
But at some point, the system outgrows its original design.
That’s where most developers struggle.
You can either:
- Keep patching the existing system
- Or step back and rethink the architecture
I learned that scalability isn’t about adding more code.
It’s about creating the right boundaries.
Breaking the system into services wasn’t just about scaling traffic. It was about:
- Isolating failures
- Making changes safer
- Keeping logic understandable
At the same time, microservices are not a silver bullet.
They introduce:
- Network complexity
- Distributed debugging
- More infrastructure to manage
So the goal isn’t “use microservices.” The goal is:
Use the right level of complexity for the problem you have.
Another key shift was thinking beyond features.
Early on, I was focused on:
- “Add subscriptions”
- “Add notifications”
Later, the thinking changed to:
- “How do these parts interact?”
- “What happens when this grows 10x?”
And finally, the most interesting realization came from adding the LLM layer.
A system can be scalable and still feel heavy to use.
But when you reduce friction — when users don’t have to think in terms of forms and fields — you unlock a completely different experience.
That’s when the system stops feeling like software… and starts feeling intuitive.
Conclusion
Building for scale was never about getting the architecture right on day one. It’s about evolving the system as the problem grows.
I began with a simple setup — one backend, one database, straightforward APIs. And for a while, that was enough. But as soon as real-world complexity entered the picture — multiple subscriptions, notifications, automation — the cracks started to show.
That’s when the shift happened.
From thinking in terms of features… to thinking in terms of systems.
Breaking the application into services brought structure. Introducing an API Gateway brought control. Separating responsibilities made the system easier to reason about.
And then, adding an intelligent layer with the LLM service changed how users interact with the system entirely.
Because in the end, scalability isn’t just about handling more traffic.
It’s about:
- keeping systems maintainable
- reducing friction for users
- and designing something that can grow without collapsing under its own complexity
If there’s one thing I’d take away from this journey, it’s this:
Don’t try to build a perfect system from day one.
Build something that works and be ready to redesign it when it stops working.
That’s how real systems are built.

