Mahmoud Farouk

Stop Starting with Microservices: Why I Built a Modular Monolith for my B2B SaaS Boilerplate

If you've ever started a new B2B SaaS project, you know the drill. Before you write a single line of feature code that actually brings value to your users, you spend weeks (or months) wrestling with the exact same infrastructure:

Setting up Multi-tenancy (and worrying about data leakage).

Configuring stateless JWT authentication.

Handling database migrations.

Integrating payment webhooks safely (idempotency, anyone?).

As a Principal Software Developer, I realized this "setup fatigue" was killing momentum. So, I decided to build the ultimate boilerplate to solve this once and for all: the Arab Enterprise Kit (AEK).

But when it came to the architecture, I made a controversial choice for 2026: I completely ignored Microservices.

Here is why I went with a strict Modular Monolith instead, and how I structured the tech stack.

The Microservices Trap for Early-Stage SaaS
Microservices are fantastic for scaling massive engineering teams. But for solo founders or small teams launching a B2B SaaS, they introduce a DevOps nightmare:

Distributed tracing becomes mandatory.

Data consistency and distributed transactions (Saga patterns) slow down feature delivery.

Infrastructure costs spike before you even have your first paying customer.

The Modular Monolith Solution
Instead of splitting by network boundaries, I split by logical boundaries within the same deployment unit.

Here is the tech stack I chose for the AEK:

Backend: Java 25 & Spring Boot 4. We use single-schema multi-tenancy enforced with Hibernate Filters, which gives strict data isolation at the ORM level without the overhead of spinning up a new database per tenant.

Frontend: Angular 21 (zoneless UI built on Signals), combined with native Tailwind RTL/LTR support, making it well suited for both global and MENA-region applications.

Integrations:

Stripe Webhooks: Fully configured with idempotency safeguards so that Stripe's network retries can never result in double-charging.

Local AI (Ollama): B2B enterprise clients care deeply about privacy, so sending data to third-party LLMs is often a dealbreaker. Running models locally via Ollama keeps sensitive data inside your own infrastructure.
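For the curious, the Hibernate-filter approach to single-schema multi-tenancy looks roughly like this. This is a sketch, not AEK's actual code: the entity, filter name, and column name are illustrative, and the annotations assume Hibernate 6-style `@FilterDef`/`@Filter`.

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.Session;
import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;

// Sketch of single-schema multi-tenancy with a Hibernate filter.
// Filter, entity, and column names are illustrative.
@Entity
@FilterDef(name = "tenantFilter",
           parameters = @ParamDef(name = "tenantId", type = String.class))
@Filter(name = "tenantFilter", condition = "tenant_id = :tenantId")
class Invoice {

    @Id
    Long id;

    // Every tenant-scoped row carries the tenant discriminator column.
    @Column(name = "tenant_id", nullable = false, updatable = false)
    String tenantId;
}

class TenantFilterEnabler {
    // Called once per request (e.g. from a servlet filter or AOP aspect) so
    // that every query in this session is automatically tenant-scoped.
    static void enableFor(Session session, String tenantId) {
        session.enableFilter("tenantFilter").setParameter("tenantId", tenantId);
    }
}
```

The key property is that the `tenant_id = ?` predicate is appended by the ORM to every query against filtered entities, so a forgotten WHERE clause in application code cannot leak another tenant's rows, as long as the filter is reliably enabled per request.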
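The webhook idempotency guarantee, meanwhile, boils down to "process each Stripe event ID at most once." Here is a framework-free sketch of that invariant; the class and method names are mine, not AEK's, and in production the seen-ID store would be a database table with a unique constraint rather than an in-memory set.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of webhook idempotency: each Stripe event carries a unique id, so
// recording processed ids lets retried deliveries be ignored safely.
public class IdempotentWebhookHandler {

    private final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();

    /** Returns true if the event was handled now, false if it was a duplicate. */
    public boolean handle(String eventId, Runnable businessAction) {
        // Set.add() is atomic here: only the first delivery of an id wins.
        if (!processedEventIds.add(eventId)) {
            return false; // retry of an already-processed event: do nothing
        }
        businessAction.run();
        return true;
    }
}
```

The same dedup-before-act shape applies whether the store is in memory, Redis, or a SQL table with a unique index on the event ID.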

The Result?
By keeping everything in a well-structured monolith, you get the deployment simplicity of a single .jar file combined with clean separation between domains. If one module ever needs to scale independently, extracting it is straightforward because the boundaries are already respected in the code.

🎁 Get the Full 37-Page Architectural Blueprint
Before I even wrote the code for the AEK, I documented every single architectural decision, tradeoff, and pattern into a comprehensive blueprint.

If you are a technical founder or developer struggling with the initial architecture of your SaaS, you can grab the complete 37-page Architectural Blueprint for free here:

👉 https://mahmoudfarouk28.gumroad.com/l/sdmeap

I'd love to hear your thoughts. Have you fallen into the microservices trap early on, or are you team monolith? Let's discuss in the comments! 👇

Top comments (6)

buildbasekit

This is spot on.

Most early-stage devs over-engineer with microservices before they even have real users.

Modular monolith + clear boundaries is way more practical for actually shipping fast.

Curious, how are you handling module boundaries in Spring Boot? Separate packages only or enforcing stricter rules?

Mahmoud Farouk

Thanks for reading and dropping the comment.

Exactly! Premature optimization with infrastructure is the fastest way to kill a startup's momentum before it even begins.

To answer your question: I definitely go beyond just separate packages. Relying only on package visibility usually degrades into a "Big Ball of Mud" as the codebase grows.

In the AEK, I enforce stricter boundaries using a multi-module build structure. Each domain (e.g., Auth, Billing, Core) is its own physical module. They communicate strictly through defined internal API interfaces or asynchronously via Spring Application Events to keep the domains decoupled.
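The event-driven half of that communication can be pictured with a framework-free sketch. In the real stack this role is played by Spring's ApplicationEventPublisher and @EventListener; the bus, event, and module names below are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal in-process event bus illustrating how modules stay decoupled:
// the publishing module never imports a class from the subscribing module.
class EventBus {
    private final List<Consumer<Object>> listeners = new ArrayList<>();

    void subscribe(Consumer<Object> listener) { listeners.add(listener); }

    void publish(Object event) { listeners.forEach(l -> l.accept(event)); }
}

// Published by the Auth module when a signup completes; the Billing module
// subscribes and reacts without Auth ever knowing Billing exists.
record UserRegistered(String userId) {}
```

Spring's application events follow the same shape, with the added option of @TransactionalEventListener to defer the reaction until the publishing transaction commits.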

To enforce this programmatically, I rely on ArchUnit tests. If a developer tries to bypass the public interface and directly import a class from another module's internal implementation, the ArchUnit test fails and breaks the CI build. It forces the team to respect the architectural boundaries.
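A rule of that shape, written against the ArchUnit library, might look like the following. The package names are illustrative, not AEK's actual layout.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// ArchUnit boundary check: nothing outside the billing module may depend on
// billing's internal implementation. check() throws on a violation, which is
// what breaks the CI build. Package names are illustrative.
public class ModuleBoundaryTest {

    public static void main(String[] args) {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        noClasses()
            .that().resideOutsideOfPackage("..billing..")
            .should().dependOnClassesThat()
            .resideInAPackage("..billing.internal..")
            .check(classes);
    }
}
```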

I actually broke down this exact module separation strategy (with diagrams) in the blueprint!

buildbasekit

That’s solid, enforcing boundaries with ArchUnit is a smart move.

I’ve seen exactly that “package-only” approach slowly turn into a mess over time.

Curious, how do you handle shared concerns across modules (like security, tenant context, etc.) without breaking those boundaries?

Do you keep a core module or expose them via interfaces/events as well?

Mahmoud Farouk

Spot on again! Handling cross-cutting concerns is usually where Modular Monoliths make or break.

In the AEK, I borrow the Domain-Driven Design (DDD) concept of a Shared Kernel (I usually name it the aek-core module).

The golden rule I enforce is that this core module must be incredibly thin and contain absolutely zero business logic. It is strictly reserved for global infrastructure: the TenantContextHolder (using ThreadLocal), custom exception bases, base entity classes, and the overarching Security/JWT filter chain.

Every domain module (Auth, Billing, etc.) declares a dependency on this Shared Kernel, but the Shared Kernel depends on nothing. This strict unidirectional dependency completely prevents the dreaded circular dependency loop.

So, when a domain service needs to know who the current user is or which tenant data to fetch, it simply reads from the context holder provided by the core module. This way, the domain logic remains completely oblivious to the actual HTTP request, headers, or JWT parsing mechanics.
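A minimal, framework-free sketch of such a ThreadLocal-backed holder (names and details are illustrative, not AEK's actual code):

```java
// Sketch of a ThreadLocal-backed tenant context holder. The security filter
// would call set() after JWT parsing and MUST call clear() in a finally block,
// because servlet threads are pooled and reused across requests.
public final class TenantContextHolder {

    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    private TenantContextHolder() {}

    public static void set(String tenantId) { CURRENT_TENANT.set(tenantId); }

    public static String get() {
        String tenantId = CURRENT_TENANT.get();
        if (tenantId == null) {
            // Failing loudly beats silently querying with no tenant scope.
            throw new IllegalStateException("No tenant bound to this thread");
        }
        return tenantId;
    }

    public static void clear() { CURRENT_TENANT.remove(); }
}
```

The one sharp edge worth calling out: because servlet containers pool threads, forgetting `clear()` means a later request on the same thread could inherit the previous request's tenant.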

It keeps the boundaries clean while ensuring security and multi-tenancy context propagate seamlessly across the entire application lifecycle!

buildbasekit

That’s a clean way to handle it.

Keeping the core thin is probably the hardest part, I’ve seen “shared kernel” slowly turn into a dumping ground if not enforced strictly.

I like that you kept domain logic completely unaware of transport concerns, that’s where most setups start leaking.

Curious, have you faced any pressure to move things into core over time, or has ArchUnit + structure been enough to keep it disciplined?

Mahmoud Farouk

You hit the nail on the head! The "Shared Kernel as a dumping ground" is probably the most common architectural trap teams fall into.

To answer your question: Yes, there is always pressure. The temptation for a developer to just drop a UserHelper or a CommonDTO into the core module to save 5 minutes of thinking is huge.

While ArchUnit is fantastic for enforcing dependency direction, it can't always judge context. A developer could technically put billing logic inside the core module without breaking the dependency graph, even though it completely violates the domain boundary.

So, tooling alone isn't enough. It ultimately comes down to strict code review policies and one hard rule: if the code contains a business rule or domain concept, it is banned from the core. If two distinct modules suddenly need the same domain logic, we either duplicate it (forcing DRY across bounded contexts is usually an anti-pattern) or extract a completely new, specific domain module. The core remains purely for infrastructure and abstractions.

It definitely takes constant discipline, but saving the monolith from becoming a Big Ball of Mud is 100% worth it.