Whuatorhe Tejiri
RideShare Pro: Production-Grade Microservices Deployment on Azure Kubernetes Service (AKS)

Project Overview

RideShare Pro is a microservices-based ride-sharing platform designed for scalability, fault tolerance, and high availability on Azure Kubernetes Service (AKS). This project simulates a real-world DevOps scenario for a fast-growing African ride-sharing startup.


Architecture Overview

Core Services

  • User Service – Handles registration, authentication, and profiles
  • Ride Matching – Matches riders with available drivers
  • Payment Service – Processes payments and manages billing
  • Notification Service – Sends SMS, push, and email notifications

Supporting Infrastructure

  • PostgreSQL (StatefulSet) – Primary + Replica with persistent volumes
  • Redis (StatefulSet) – Clustered for caching, session, and pub/sub
  • Nginx Ingress Controller – API Gateway for external routing
  • Azure Container Registry (ACR) – Private container registry
  • AKS Cluster – Orchestrator with autoscaling enabled

Tools & Prerequisites

  • Azure CLI
  • kubectl
  • Docker
  • Git
  • Azure Subscription (with Contributor access)

Getting Started

Clone the Repositories
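The service repositories can be cloned in one loop. The org and repo names below are placeholders; substitute your actual repository URLs:

```shell
# Hypothetical org/repo names -- replace with your actual repository URLs.
ORG="your-github-org"
REPOS="rideshare-user-service rideshare-ride-matching rideshare-payment-service rideshare-notification-service"
for repo in $REPOS; do
  git clone "https://github.com/$ORG/$repo.git" || echo "could not clone $repo"
done
```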

Provision Infrastructure

Create Resource Group

RESOURCE_GROUP="teleios-tejiri-rg"
LOCATION="centralus"
az group create --name $RESOURCE_GROUP --location $LOCATION

Create ACR

ACR_NAME="teleiostejiriacr"

az acr create --resource-group $RESOURCE_GROUP --name $ACR_NAME --sku Standard

Authenticate with Azure & Configure CLI

Log in to Azure:

az login
This will open a browser for authentication.

Set your desired subscription (if you have multiple):

az account set --subscription "YOUR_SUBSCRIPTION_NAME"

Create AKS Cluster

CLUSTER_NAME="rideshare-aks"

az aks create \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--node-count 2 \
--node-vm-size Standard_B2ls_v2 \
--enable-cluster-autoscaler \
--min-count 2 \
--max-count 6 \
--generate-ssh-keys \
--attach-acr $ACR_NAME \
--location $LOCATION

Get AKS Credentials and Set kubectl Context

az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

Verify current context:

kubectl config current-context

Data Layer Deployment

Redis StatefulSet

Create the following files, then apply them with kubectl apply -f . from the directory containing them:
  • redis-statefulset.yaml
  • redis-headless-service.yaml
  • redis-configmap.yaml
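As a rough sketch (the name redis-cluster matches the validation commands below; the headless service name and storage size are assumptions), redis-statefulset.yaml might look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster-headless   # must match the headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:                 # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

The volumeClaimTemplates section is what makes the PVCs appear in kubectl get pvc below.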

Validate:

  • kubectl get statefulset redis-cluster
  • kubectl get pods -l app=redis-cluster
  • kubectl get pvc

PostgreSQL StatefulSet

Create the following files, then apply them with kubectl apply -f . from the directory containing them.

Files:

  • postgresql-primary-statefulset.yaml
  • postgresql-replica-statefulset.yaml
  • postgresql-headless-service.yaml
  • postgresql-secret.yaml
  • postgresql-config-map.yaml
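As a hedged sketch, postgresql-secret.yaml (the secret and key names are assumptions; the StatefulSets would reference it via an env valueFrom) could be:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-secret
type: Opaque
stringData:
  POSTGRES_PASSWORD: change-me   # replace before applying; never commit real credentials
```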

Validate:

  • kubectl get statefulset
  • kubectl exec -it postgres-primary-0 -- psql -U postgres -c "\l"

Microservices Deployment

Build and Push Docker Images

Example for User Service

cd rideshare-user-service

  • docker build -t $ACR_NAME.azurecr.io/user-service:v1.0 .
  • docker push $ACR_NAME.azurecr.io/user-service:v1.0

Repeat for all services.

Deploy Kubernetes Manifests

Create a namespace (e.g. "microservices") and create the following files for each service:

  • deployment.yaml
  • service.yaml
  • secret.yaml
  • hpa.yaml
  • pdb.yaml
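As a hedged sketch for the User Service (the container port and resource numbers are assumptions), deployment.yaml and hpa.yaml might look like:

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: microservices
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: teleiostejiriacr.azurecr.io/user-service:v1.0
          ports:
            - containerPort: 8000
          resources:               # CPU requests are required for the HPA below
            requests:
              cpu: 100m
              memory: 128Mi
---
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
  namespace: microservices
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```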

Deploy:

  • kubectl apply -f ./k8s/secret.yaml
  • kubectl apply -f ./k8s/deployment.yaml
  • kubectl apply -f ./k8s/service.yaml
  • kubectl apply -f ./k8s/hpa.yaml
  • kubectl apply -f ./k8s/pdb.yaml

Validate:

kubectl get pods -n microservices

Install NGINX Ingress Controller

Apply the official NGINX Ingress manifests:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.6/deploy/static/provider/cloud/deploy.yaml

This deploys everything: the ingress controller, service, RBAC, etc.

Wait for the pods to be ready:

kubectl get pods -n ingress-nginx

Once ready, you can get the external IP:

kubectl get service ingress-nginx-controller -n ingress-nginx

We mapped the ingress load balancer IP to an Azure DNS name so that our services stay reachable at a stable hostname even if the external IP changes.

We write our ingress rules based on the paths each service expects. Some services expect /api/payments/*, while others just need /*. We therefore have two ingress rule files: one that rewrites requests by stripping the path sent by the browser from /api/<service>/* down to /, and one that does no rewrite and forwards the request to the service unchanged.
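For example, the rewrite variant (the hostname and service name are placeholders) can use the NGINX rewrite-target annotation with a regex capture group:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-ingress
  namespace: microservices
  annotations:
    # strip the /api/payments prefix: /api/payments/foo -> /foo
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: your-dns-name
      http:
        paths:
          - path: /api/payments(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: payment-service
                port:
                  number: 80
```

The non-rewrite variant simply omits the annotation and uses a plain Prefix path.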

Install cert-manager (manually)

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

Wait for the pods:

kubectl get pods --namespace cert-manager

You should see:

  • cert-manager
  • cert-manager-cainjector
  • cert-manager-webhook

Create Let's Encrypt Issuer

Create a ClusterIssuer (for all namespaces):
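A typical letsencrypt-clusterissuer.yaml (the issuer name and email are placeholders) looks like:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # replace with your email for expiry notices
    privateKeySecretRef:
      name: letsencrypt-prod-key      # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx              # solve challenges through the NGINX ingress
```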
Apply it:

kubectl apply -f letsencrypt-clusterissuer.yaml

Add TLS + Certificate to Ingress rule

Apply the Ingress rules
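Adding TLS means referencing the ClusterIssuer via an annotation and declaring a tls block. A hedged sketch for the notification service (the host, issuer, and service names are assumptions; the secret name matches the certificate checked below):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: notification-ingress
  namespace: microservices
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - your-dns-name
      secretName: notification-tls   # cert-manager provisions this secret
  rules:
    - host: your-dns-name
      http:
        paths:
          - path: /api/notifications
            pathType: Prefix
            backend:
              service:
                name: notification-service
                port:
                  number: 80
```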

Wait for TLS Certificate
Check the certificate status:

kubectl describe certificate notification-tls -n microservices

You should see Ready=True once it's provisioned.

Run the following commands

  • kubectl get certificates -n microservices
  • kubectl get secrets -n microservices

You’ll see a secret used by the Ingress for HTTPS.

Test your DNS name over HTTPS to confirm everything works:
https://your-dns-name/api/notifications/docs should lead to the Swagger UI.

CI/CD Pipeline (GitHub Actions)

Each repo contains a .github/workflows/deploy.yml file.

Stages:

  • Checkout code
  • Build Docker image
  • Push to ACR
  • Deploy to AKS (main/release only)

Triggers:

  • Push to any branch
  • Pull request creation
  • Manual workflow trigger
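A condensed sketch of such a deploy.yml (the secret name, service name, and resource group/cluster values are assumptions matching the earlier setup):

```yaml
name: deploy
on:
  push:                # any branch
  pull_request:
  workflow_dispatch:   # manual trigger

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Build and push image
        run: |
          az acr login --name teleiostejiriacr
          docker build -t teleiostejiriacr.azurecr.io/user-service:${{ github.sha }} .
          docker push teleiostejiriacr.azurecr.io/user-service:${{ github.sha }}
      - name: Deploy to AKS        # only runs on the main branch
        if: github.ref == 'refs/heads/main'
        run: |
          az aks get-credentials --resource-group teleios-tejiri-rg --name rideshare-aks
          kubectl apply -f ./k8s/
```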

Access the Microservices

Links to the individual repos, each containing a /k8s folder with all the manifest files, are attached.
