We Scaled from Azure App Service to Container Apps: Here's Why

If you’ve ever tried to deploy a full stack (Node backend, MCP server, ETL jobs, Angular SPA, and PostgreSQL) on Azure, you quickly hit a few hard limits at each tier. This is our story of rolling out exactly that stack and how we moved step‑by‑step from Azure App Service Free, to Basic, then to Azure Container Apps (ACA) while keeping Postgres on Azure DB for PostgreSQL.


1. Starting with App Service Free (and why it died fast)

We started where many devs do: Azure App Service Free tier (F1). We deployed:

  • Node.js backend API
  • A WebSocket‑based MCP server
  • ETL job as an Azure WebJob
  • Angular SPA as static assets served by Node
  • PostgreSQL as Azure Database for PostgreSQL (B1ms + 32 GB storage)

The Free tier is fine for learning, but two hard limits killed it for any real‑world sharing:

  • No SLA and very limited CPU seconds.
  • No support for custom‑domain SSL; only *.azurewebsites.net with Azure‑managed cert.

Entra ID, modern browsers, and any third‑party integration now demand HTTPS with a custom domain.

App Service Free = no custom‑domain SSL unless you fall back to plain HTTP, which is effectively unusable beyond local dev.

So for any external demo, client, or even corporate‑network access, the Free tier is a non‑starter.


2. Moving to App Service Basic (HTTPS gate, but one‑endpoint ceiling)

We upgraded to the App Service Linux Basic tier (B1), which costs roughly ₹350–₹500/month in India, depending on region and billing cycle.

This tier unlocked:

  • Dedicated compute (1 vCore, ~1.75 GB RAM).
  • Custom domains + App Service Managed Certificates (free, auto‑renewed TLS/SSL).
  • WebJobs for background tasks (our daily ETL).

On Basic, we could:

  • Serve the Angular SPA from /
  • Expose the Node backend API on /api/*
  • Run the MCP server in the same Node process, exposed on a virtual path (see the sketch below).
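
For context, here's a minimal sketch of what "everything behind one endpoint" looked like, assuming Express plus the ws package (the route paths and public/ folder are illustrative, not our exact code):

```typescript
// One Node process behind the single App Service endpoint:
// REST API on /api/*, Angular SPA as static assets, MCP-style WebSocket on /mcp.
import express from "express";
import http from "http";
import path from "path";
import { WebSocketServer } from "ws";

const app = express();

// Backend API under /api/*
app.get("/api/health", (_req, res) => {
  res.json({ status: "ok" });
});

// Angular SPA served as static assets (build output folder is illustrative)
const spaDir = path.join(process.cwd(), "public");
app.use(express.static(spaDir));
// SPA fallback for client-side routes
app.use((_req, res) => res.sendFile(path.join(spaDir, "index.html")));

const server = http.createServer(app);

// MCP-style WebSocket endpoint on a virtual path, sharing the same port
const wss = new WebSocketServer({ server, path: "/mcp" });
wss.on("connection", (socket) => {
  socket.on("message", (msg) => {
    socket.send(msg.toString()); // echo placeholder for real MCP handling
  });
});

// App Service injects PORT; everything above competes for this one worker
server.listen(Number(process.env.PORT ?? 8080));
```

Every piece of the stack funnels through that single listen() call, which is exactly the ceiling described next.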

However, one hard constraint emerged:

👉 Azure App Service exposes only one main HTTP endpoint per app. Everything fights for the same worker process, CPU, and memory.

This created three problems:

  • No isolation between backend, MCP server, and ETL.
  • ETL jobs (10–15 min) hogged the same instance that served the API.
  • Scaling was limited to a few instances (max 3 in Basic), all sharing the same codebase.

So we had:

  • HTTPS + custom domain
  • Cheap, simple PaaS

…but still no true microservices isolation.

3. Why “one App Service Plan for backend + MCP + ETL” failed

The “one App Service Plan for all” pattern sounded clean: share the same plan to keep costs low and keep everything in one logical place.

But in practice:

  • All services share one HTTP endpoint, so /, /api, and MCP‑style endpoints live in the same process.
  • Long‑running ETL jobs keep the instance busy, and Basic has no scale‑to‑zero anyway; you pay for the instance even when it sits idle.
  • No independent scaling for backend vs MCP vs ETL.

This is where architecture becomes a hard constraint, not just a style choice.


4. PostgreSQL: always separate, always on Azure DB

Throughout this journey, we kept PostgreSQL as Azure Database for PostgreSQL Flexible Server (B1ms + 32 GB storage), deployed separately from our compute.

Why? Because:

  • DB and app scale independently (compute vs storage, backup, replication).
  • Flexible Server supports managed backups, high availability, and scaling options without tying them to App Service pricing tiers.

Postgres cost (B1ms + 32 GB) sits at roughly ₹1,200–₹2,000/month for low‑to‑medium traffic.

This is a universal pattern:

  • App Service / ACA / AKS should not host the DB directly.
  • Use Azure DB for PostgreSQL (or a standalone VM) and wire it in via a connection string or Managed Identity when possible (see the connection sketch below).
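
As a minimal sketch of the connection‑string path (assuming node‑postgres and a DATABASE_URL app setting; the server name and table are placeholders):

```typescript
// Connect to Azure Database for PostgreSQL Flexible Server over TLS using a
// connection string injected via app settings / environment variables.
// (For Managed Identity you'd fetch an Entra ID token and use it as the
// password instead of a static secret.)
import { Pool } from "pg";

const pool = new Pool({
  // e.g. postgres://appuser:secret@myserver.postgres.database.azure.com:5432/appdb
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: true }, // Flexible Server enforces TLS by default
  max: 10, // keep the pool small against a B1ms-sized server
});

// Example query against a placeholder table
export async function fetchRecentRuns() {
  const { rows } = await pool.query(
    "SELECT id, status, finished_at FROM etl_runs ORDER BY finished_at DESC LIMIT 10"
  );
  return rows;
}
```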

5. ACA: the “microservices‑on‑budget” escape hatch

Once we hit the one‑endpoint bottleneck and wanted true isolation between backend, MCP, and ETL, we moved the compute to Azure Container Apps (ACA).

We:

  • Containerized the app (Node + MCP + SPA + ETL logic) into a single Docker image.
  • Pushed it to Azure Container Registry (ACR).
  • Deployed the backend / MCP containers into Azure Container Apps and wired them to the same Azure DB for PostgreSQL.

ACA became the dev‑test layer we wanted:

  • Microservices‑style isolation without full Kubernetes complexity.
  • Auto‑scale for ETL jobs (5–10 min bursts) while keeping cloud bills low by scaling down to near‑zero.
  • Per‑second billing with 180k vCPU‑seconds, 360k GiB‑seconds, and 2M requests per month free.

We ran the backend with 0.25 vCPU + 0.5 GiB RAM and scaled up instances only during ETL runs (5–10 min), then let them scale back. This kept compute cost very low while giving us full control over scaling policies.
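
One way to keep that burst‑and‑scale‑back behaviour honest is to run the ETL as a run‑to‑completion entrypoint rather than a loop inside the API process; here's a minimal sketch, assuming node‑postgres and a DATABASE_URL setting (runEtl stands in for the real pipeline):

```typescript
// Run-to-completion ETL entrypoint: the container does its batch and exits,
// so the extra ACA replica (or job execution) can scale back down instead of idling.
import { Pool } from "pg";

async function runEtl(pool: Pool): Promise<void> {
  // Placeholder for the real extract/transform/load steps
  await pool.query("SELECT 1");
}

async function main(): Promise<void> {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const started = Date.now();
  try {
    await runEtl(pool);
    console.log(`ETL finished in ${Math.round((Date.now() - started) / 1000)}s`);
    process.exitCode = 0;
  } catch (err) {
    console.error("ETL failed", err);
    process.exitCode = 1; // non-zero exit surfaces the failure to the platform
  } finally {
    await pool.end();
  }
}

main();
```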

For demo / internal‑use access, ACA’s default endpoint (*.azurecontainerapps.io) with built‑in HTTPS was sufficient; we did not need to bring a custom domain yet.


6. Why not ACI, why not just Nginx + self‑managed certs?

During the ACA phase, we evaluated:

  • Azure Container Instances (ACI) for the ETL / MCP workloads.
  • Nginx / reverse proxy in front of ACA or App Service with self‑managed SSL certs.

We ruled both out quickly:

  • ACI is great for short‑lived, one‑shot containers, but it lacks the managed app‑style primitives (ingress, autoscaling, revisions, jobs) we needed for backend + MCP + ETL.
  • Self‑signed certs behind Nginx cause SSL handshake failures in production when the proxy talks to a backend with a non‑trusted certificate, and workarounds like proxy_ssl_verify off are security anti‑patterns.

Instead, we leaned into Azure managed TLS:

  • App Service: App Service Managed Certificates (free) for custom domains.
  • ACA: the built‑in HTTPS on *.azurecontainerapps.io for dev/test, and a managed‑certificate custom‑domain binding or Azure Front Door / Application Gateway as the TLS‑terminating layer when you need a production custom domain.

This keeps cert management off your plate and leans on Azure‑managed TLS wherever possible.
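
Behind any of these Azure‑managed terminators, the Node app itself keeps listening on plain HTTP; here's a minimal sketch (assuming Express) of trusting the forwarded protocol instead of juggling certificates in the app:

```typescript
// The TLS handshake happens at App Service / ACA ingress / Front Door;
// the app only needs to honor the X-Forwarded-* headers those layers set.
import express from "express";

const app = express();

// Trust the first proxy hop so req.secure / req.protocol reflect the original scheme
app.set("trust proxy", 1);

// Optional belt-and-braces redirect if a plain-HTTP request ever reaches the app
app.use((req, res, next) => {
  if (!req.secure) {
    return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
  }
  next();
});

app.get("/api/health", (req, res) => res.json({ secure: req.secure }));

app.listen(Number(process.env.PORT ?? 8080));
```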


7. When we move to AKS (for production)

For Dev / Test, ACA + ACR + Azure DB for PostgreSQL is the sweet spot:

  • Microservices‑style architecture
  • Cost‑effective per‑second billing
  • No need to manage Kubernetes (yet)

For production, our plan is Azure Kubernetes Service (AKS) because:

  • Multi‑cluster, multi‑region HA with enterprise SLA.
  • Fine‑grained scaling, pod affinities, networking, observability.
  • RBAC, audit logs, and a compliance story for regulated workloads.

AKS is more expensive (worker‑node VMs, plus a paid control‑plane tier if you want the uptime SLA), but that’s the cost of enterprise‑grade ops, not just compute.

So our ladder now looks like:

  1. App Service Free – Learning, POC, but blocked by SSL.
  2. App Service Basic – MVP that needs HTTPS + custom domain.
  3. Azure Container Apps + ACR – Microservices on budget, dev‑test ready.
  4. AKS + Azure DB for PostgreSQL – Production‑ready stack.

8. Cost snapshot (approx, India region)

These are rough monthly estimates for this stack (Node + MCP + ETL + SPA + PostgreSQL) at each tier:

Tier / Service – Cost (approx / month) – Notes:

  • PostgreSQL (B1ms + 32 GB) – ₹1,200–₹2,000 – constant across tiers.
  • App Service Linux (B1, Basic) – ₹350–₹500 – single plan hosting backend + MCP + ETL job.
  • Azure Container Apps (0.25 vCPU + 0.5 GiB) – ₹100–₹500 or less – often largely covered by the free vCPU‑/GiB‑second grant.
  • Azure Container Registry (Basic) – ₹500–₹600 – 10 GB storage and basic usage.
  • Azure Kubernetes Service (2‑node cluster) – ₹4,000–₹6,000+ – node VMs dominate the cost.

From Free → App Service Basic → ACA → AKS, moving up is less about price and more about capabilities:

  • Isolation
  • True microservices
  • Production‑grade HA / SLA

9. Architecture pros and cons: quick comparison

  • App Service Free – Pros: free, simple, no infra management. Cons: no custom‑domain SSL, limited CPU, zero SLA.
  • App Service Basic – Pros: HTTPS + custom domain with free managed certs; low operational effort. Cons: one HTTP endpoint per app, no true isolation, limited scaling.
  • Azure Container Apps – Pros: microservices, per‑second billing, auto‑scale to zero, ETL‑burst friendly. Cons: custom‑domain TLS takes extra setup (managed certificate binding, or Front Door / App Gateway in front).
  • Azure Kubernetes Service (AKS) – Pros: full Kubernetes, HA, RBAC, observability, enterprise‑grade ops. Cons: higher cost, steeper learning curve, ops overhead.
  • Azure DB for PostgreSQL – Pros: managed DB, backups, HA, scaling options. Cons: separate billing; you must manage connection‑pooling patterns.

10. When to move up the ladder

Use this mental model:

  • Stay on Free only if you’re learning and not exposing anything to external users.
  • Move to App Service Basic as soon as you need HTTPS + custom domain (internal dashboards, client demos).
  • Move to ACA when you want true isolation between backend, MCP, ETL, or when you want per‑second, scale‑to‑zero billing.
  • Move to AKS when you need multi‑region HA, enterprise SLA, compliance, or mature Kubernetes for your stack.

Also, if you ever catch yourself trying to run everything in one App Service and hitting its limits (one endpoint, ETL jobs blocking the API, no scaling boundaries), that’s a clear signal to containerize and move to ACA or AKS.


11. Final takeaway

The Azure deployment ladder is not about chasing the cheapest tier forever. It’s about understanding what each tier unlocks:

  • Free tier → Proof of concept only.
  • Basic tier → HTTPS + custom domain, but monolithic; that’s the ceiling.
  • ACA → Microservices on a budget, with true isolation and flexible scaling.
  • AKS → Enterprise‑grade ops, HA, and Kubernetes.

In our stack:

  • PostgreSQL stays on Azure DB for PostgreSQL (B1ms + 32 GB) and is shared by all compute tiers.
  • App Service gets you from Free to Basic quickly.
  • ACA becomes the dev‑test layer for backend + MCP + ETL without over‑committing to AKS.
  • AKS is the production plan when you’re ready for full Kubernetes maturity.

If you’ve been wrestling with “do I put ETL in WebJobs or in a separate container?”, “how do I share the same DB with all my services?”, or “when is AKS actually worth it?”, this deployment ladder captures the real‑world constraints you’ll hit—and the next step to take.


Running PostgreSQL on AKS for non‑prod testing

Even though we keep PostgreSQL on Azure DB for PostgreSQL in production, you can also run it inside AKS for non‑production testing while using smaller node sizes to keep costs low. For example, you can:

  • Deploy PostgreSQL as a StatefulSet on small nodes (e.g., Standard_B2s) with modest CPU, RAM, and storage.
  • Use this cluster purely for feature, integration, and load‑testing of your backend, MCP, and ETL jobs.

This is useful if your goal is to:

  • Validate that your application stack behaves correctly under pressure.
  • Show demos where the entire platform (app + DB) lives on Kubernetes.

However, this setup comes with caveats:

  • You take on DB operations (backups, failover, patching, monitoring) yourself.
  • Non‑prod does not replace production‑style tests on Azure DB for PostgreSQL, which remains the recommended pattern for production workloads.

So you can use Postgres‑on‑AKS to prove your app can scale in a test environment, but keep the managed DB for actual production, where HA, SLA, and operational safety matter.
