Sagar Maheshwary

Hands-On Kubernetes: Running Event-Driven Microservices Locally with KIND

Kubernetes can feel heavy to experiment with, but KIND (Kubernetes IN Docker) makes it easy to run a full, multi-node Kubernetes cluster locally using Docker containers. It’s fast, lightweight, and behaves very similarly to a real cluster, supporting ingress controllers, custom configurations, and multi-node setups. This makes KIND an excellent option for testing microservices architectures, experimenting with Kubernetes manifests, or learning cluster-level components without provisioning any cloud infrastructure.

In this article, we’ll walk through running a small set of event-driven microservices inside a KIND cluster. You’ll learn how to:

  • Configure and create a KIND cluster
  • Set up a local Docker registry and integrate it with the cluster
  • Install NGINX Ingress Controller and expose services
  • Deploy microservices using ConfigMaps, Secrets, Deployments, and Services
  • Run RabbitMQ using a StatefulSet
  • Use readiness/liveness probes for clean startup behavior

The demo contains three simple services:

  • API Gateway (Go) — exposes a REST /users endpoint
  • User Service (Go) — publishes a user.created event to RabbitMQ
  • Notification Service (NestJS) — consumes the event and logs a welcome message

Here’s the architecture we’ll be deploying:

[Diagram: Deployment Architecture]

This article focuses on the Kubernetes side rather than the application code, so we won’t be walking through any code snippets, but feel free to explore the services in the GitHub repo if you're interested.

Repository Structure

This project is intentionally structured as a single repository to keep the tutorial simple and easy to clone.

├── api-gateway/               # Go service exposing REST APIs
├── user-service/              # Go service publishing RabbitMQ events
├── notification-service/      # NestJS service consuming RabbitMQ events
│
├── k8s/
│   ├── api-gateway/           # Deployment, Service, ConfigMap, Ingress
│   ├── user-service/
│   ├── notification-service/
│   ├── rabbitmq/              # StatefulSet, ConfigMap, Secrets
│   ├── namespace.yaml         # Namespace definition
│   └── kind-config.yaml       # Multi-node KIND cluster + local registry
│
├── Makefile                   # Automates cluster creation and deployments
├── docker-compose.yaml        # Optional for local dev
└── devbox.json                # Reproducible development environment

If you intend to follow along and build everything step-by-step, you can clone the repo and switch to the no-k8s branch, which contains only the microservices:

git clone https://github.com/SagarMaheshwary/kind-microservices-demo.git
cd kind-microservices-demo
git checkout no-k8s

KIND Cluster and Registry Setup

First, you need to install Docker, KIND, and kubectl (the Kubernetes CLI). I’m using Ubuntu for this demo, but these tools are also available for macOS and Windows.

If you prefer, you can install Devbox and simply run devbox shell in the project root. Devbox will read devbox.json and automatically install Docker, KIND, kubectl, and cloud-provider-kind for you.
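For reference, a devbox.json for this toolchain looks roughly like the following; the repo's actual file may pin specific versions or differ slightly:

{
  "packages": ["docker", "kind", "kubectl", "cloud-provider-kind"]
}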

We’ll start by setting up a local Docker registry running as a container. This lets us push and pull images locally without relying on Docker Hub or any external registry.

Run the registry:

docker run -d \
 -p 5000:5000 \
 --net kind \
 -v registry-data:/var/lib/registry \
 --name kind-registry \
 registry:2
  • --net kind → attaches the registry to the Docker network that KIND uses, so the cluster can reach it at kind-registry:5000 (KIND creates this network; if it doesn’t exist yet, run docker network create kind or create the cluster first)
  • registry-data volume → keeps images persistent
  • registry:2 → official lightweight Docker registry image
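To confirm the registry is reachable from the host, you can query its catalog endpoint (it stays empty until we push images later):

curl http://localhost:5000/v2/_catalog
# {"repositories":[]}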

Next, we’ll create the KIND cluster. We define k8s/kind-config.yaml that connects KIND to the local registry and sets up a multi-node cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4

nodes:
  - role: control-plane
  - role: worker
  - role: worker

containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."kind-registry:5000"]
      endpoint = ["http://kind-registry:5000"]

Node roles:

  • control-plane → runs Kubernetes components like the API server, scheduler, and controller manager
  • worker → runs your Deployments, StatefulSets, and other workloads

Create the cluster:

kind create cluster --config=./k8s/kind-config.yaml

# kind already points kubectl at the new cluster (context kind-kind); set its default namespace to microservices.
kubectl config set-context kind-kind --namespace=microservices

Verify that it’s running:

kubectl get nodes

You should see one control-plane node and two workers in the Ready state.
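The output should look roughly like this (ages and versions will differ):

NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   2m    v1.32.0
kind-worker          Ready    <none>          2m    v1.32.0
kind-worker2         Ready    <none>          2m    v1.32.0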

Next, we’ll create two namespaces for this demo in k8s/namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: microservices
---
apiVersion: v1
kind: Namespace
metadata:
  name: datastores
  • microservices → where all application services will run
  • datastores → where databases and brokers will run (RabbitMQ in our case)

Apply the manifest:

kubectl apply -f ./k8s/namespace.yaml

We’ll also install metrics-server so Kubernetes can report CPU/memory usage for Pods and Nodes (useful for debugging, HPA, and general observability):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Patch the metrics-server deployment to allow insecure TLS connections to the kubelet (required for KIND).
kubectl patch deployment metrics-server \
  --namespace kube-system \
  --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'

Check for metrics-server readiness:

kubectl get pods -n kube-system -l k8s-app=metrics-server

Now you can use the top command to see CPU and memory usage:

kubectl top nodes

The Basics of Kubernetes

Before deploying our microservices, let’s quickly cover the core Kubernetes concepts used in this demo. This isn’t a deep dive; it’s just enough practical knowledge to understand the manifests we’ll apply later.

Manifests

Kubernetes resources are defined using manifests, usually in YAML format. These files describe the desired state of your cluster.

  • apiVersion – defines which Kubernetes API version the resource belongs to
  • kind – specifies the type of resource (Pod, Deployment, Service, etc.)
  • metadata – names and labels for identifying the resource
  • spec – configuration details of the resource

Think of manifests as instructions for Kubernetes, telling it “create/update/delete these objects” to match your declared state.
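For example, here’s a minimal, illustrative Pod manifest showing those four fields (any running image would do; nginx is just a familiar choice):

apiVersion: v1            # which API version this resource belongs to
kind: Pod                 # the type of resource
metadata:
  name: hello             # name and labels identify the resource
  labels:
    app: hello
spec:                     # desired state: the container(s) to run
  containers:
    - name: hello
      image: nginx:1.27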

Core Resources (Quick Overview)

Pod
The smallest deployable unit. A pod can run one or more containers (including sidecars, which are helper containers for logging, metrics, proxies, etc.). Pods are ephemeral; Kubernetes may recreate them at any time.

Deployment
Manages pods for you. Handles rolling updates, restarts, scaling, and ensures the correct number of pods are running. In real-world setups, you almost never create pods directly—Deployments take care of their lifecycle.

Service
Provides a stable network endpoint for accessing pods.

Types of Services:

  • ClusterIP — internal-only (default)
  • NodePort — exposes a port on every node
  • LoadBalancer — creates a cloud load balancer (not supported directly in KIND)
  • Headless — no cluster IP; used for StatefulSets or service discovery patterns

Ingress
Routes external HTTP(S) traffic into the cluster. Requires an Ingress Controller (we use NGINX Ingress).

ConfigMap
Stores non-sensitive configuration such as environment variables or app configs. Lets you avoid baking configuration into images.

Secret
Stores sensitive values (passwords, tokens). The data is Base64-encoded (Base64 is not encryption; it only ensures safe handling of special characters). In real clusters, Secrets are typically integrated with external secret managers such as AWS Secrets Manager or Vault.

StatefulSet
Used for stateful applications. Unlike Deployments, StatefulSets guarantee:

  • stable pod names
  • ordered startup/shutdown
  • persistent storage attachment

PersistentVolume (PV) / PersistentVolumeClaim (PVC)
Provide persistent disk storage. A PVC is a "request" for storage; a PV fulfills it. KIND auto-provisions storage for PVCs, which is perfect for local testing (e.g., RabbitMQ).

Namespace
Logical grouping of resources. Helps separate environments and teams. In this demo, we’ll use:

  • microservices
  • datastores

These fundamentals are enough to follow the manifests we’ll apply later.

Kubectl Basics

We’ll interact with the cluster using kubectl, the Kubernetes command-line tool. This section introduces a few essential commands you’ll commonly use to apply manifests, check resource status, and debug your services. It’s not a complete reference, just enough to follow along with this demo.

Apply a manifest to create or update resources:

kubectl apply -f k8s/user-service/deployment.yaml

Check the status of resources:

kubectl get pods
kubectl get pods -n microservices

This same pattern works for other resources such as Deployments, Services, or Ingress resources:

kubectl get deployment
kubectl get service
kubectl get ingress

Restart or trigger a rollout for a deployment:

kubectl rollout restart deployment/user-service

Kubernetes will gradually replace old pods with new ones while respecting readiness and liveness probes, enabling zero-downtime deployments if your app handles graceful shutdowns.
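Two related commands that are handy during rollouts:

kubectl rollout status deployment/user-service   # wait for the rollout to complete
kubectl rollout undo deployment/user-service     # roll back to the previous revision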

Create a resource directly without a manifest:

kubectl create configmap example-config --from-literal=LOG_LEVEL=debug

Port forward a service or pod to your host machine for debugging:

kubectl port-forward service/api-gateway 8080:4000 # local port 8080 → Service port 4000

You can use labels to filter pods when checking status or viewing logs:

kubectl get pods -l app=user-service
kubectl logs -f pod/<pod-name>

Setup NGINX Ingress Controller

An Ingress Controller is a Kubernetes component that manages external access to services inside the cluster, primarily HTTP/HTTPS traffic. It allows you to define routes, hostnames, and TLS for your services without exposing each one individually as a NodePort or LoadBalancer.

In cloud environments, a Service of type LoadBalancer automatically provisions a real network load balancer (AWS ELB, GCP LB, Azure LB, etc.).
However, this does not work on local Kubernetes distributions like KIND, because there is no cloud provider available to create that external load balancer. This is where cloud-provider-kind comes in.

cloud-provider-kind simulates a cloud environment for KIND by providing a fake external IP for Kubernetes LoadBalancer Services. It watches for Services of type LoadBalancer, allocates a local IP address for them, and routes traffic from that address to the underlying NodePort. This gives KIND clusters production-like behavior for ingress controllers—without requiring a cloud provider.

How traffic flows:

[Diagram: Ingress Request Flow]

Install the Ingress Controller:

kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml

Download the cloud-provider-kind binary and run it:

curl -L https://github.com/kubernetes-sigs/cloud-provider-kind/releases/download/v0.6.0/cloud-provider-kind_0.6.0_linux_amd64.tar.gz | tar -xzf - cloud-provider-kind
chmod +x cloud-provider-kind
./cloud-provider-kind

Once the Ingress Controller and cloud-provider-kind are running, check the LoadBalancer IP:

kubectl get service -n ingress-nginx

You’ll see the external IP under the EXTERNAL-IP column.

Add it to your OS hosts file (Linux/macOS: /etc/hosts, Windows: C:\Windows\System32\drivers\etc\hosts):

[External-IP] microservices.local

Now, any request to http://microservices.local will be routed through the NGINX Ingress Controller to the correct service and pod inside the cluster.

Understanding the Microservices

Before deploying anything, it’s helpful to understand the core components of our demo system and how they interact.

The API Gateway (Go) exposes a simple REST endpoint /users for clients and forwards requests to the User Service. It acts as the main entry point for all client requests.

The User Service (Go) also exposes a REST endpoint /users (we avoid gRPC here to keep things simple). It stores user information in-memory and publishes a user.created event to RabbitMQ whenever a new user is created.

The Notification Service (NestJS) consumes the user.created events from RabbitMQ and logs a message like “Welcome email sent to [user name]”.

[Diagram: Microservices user.created event flow]

Healthchecks

Each service exposes /livez and /readyz endpoints to enable Kubernetes probes.

  • The /livez endpoint simply returns HTTP 200 if the service is alive. If Kubernetes does not receive a 200 response, it restarts the pod.
  • The /readyz endpoint verifies that all dependencies (such as RabbitMQ connections for the User Service and Notification Service) are available before returning HTTP 200. If dependencies aren’t ready, it returns HTTP 503, preventing Kubernetes from sending traffic to the pod until it is fully ready.

Graceful Shutdowns

All services implement graceful shutdowns. When Kubernetes sends a SIGTERM signal, the services allow in-progress requests to complete, close connections to dependencies like RabbitMQ, and then shut down the HTTP server.

Without graceful shutdowns, requests could be dropped mid-progress, causing potential errors or lost messages. Implementing this ensures smooth rolling updates and proper pod termination during scaling or redeployments.
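For context, here’s a minimal sketch of what that SIGTERM handling can look like in Go; this is illustrative only, not the repo’s actual code:

package main

import (
    "context"
    "net/http"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    srv := &http.Server{Addr: ":4000"}

    // Kubernetes sends SIGTERM when it wants the pod to stop.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
    defer stop()

    go srv.ListenAndServe()

    <-ctx.Done() // wait for the termination signal

    // Give in-flight requests a few seconds to finish, then close
    // dependencies (e.g. the RabbitMQ connection) before exiting.
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    srv.Shutdown(shutdownCtx)
}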

Kubernetes Manifests

Microservices

Now that we understand the microservices, let’s see how they’re deployed in Kubernetes using manifests, starting with the user-service.

First, create a ConfigMap for the user-service in k8s/user-service/configmap.yaml and define its environment variables:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-service
  namespace: microservices
data:
  HTTP_SERVER_URL: "0.0.0.0:4000"
  HTTP_SHUTDOWN_TIMEOUT: "5s"
  GIN_MODE: "release"

  AMQP_HOST: "rabbitmq.datastores.service.cluster.local"
  AMQP_PORT: "5672"
  AMQP_CONNECTION_RETRY_INTERVAL_SECONDS: "5s"
  AMQP_CONNECTION_RETRY_ATTEMPTS: "10"
  AMQP_PUBLISH_TIMEOUT_SECONDS: "2s"

rabbitmq.datastores.svc.cluster.local points to the RabbitMQ Service in the datastores namespace. Services in the same namespace can reach each other by service name alone; for example, the API Gateway forwards requests to user-service.

For the RabbitMQ username and password, we define a Secret in k8s/user-service/secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: user-service
  namespace: microservices
data:
  AMQP_USERNAME: ZGVmYXVsdA== #base64 for "default"
  AMQP_PASSWORD: ZGVmYXVsdA== #base64 for "default"

Use the following commands to encode/decode Base64 values:

echo -n "hello world" | base64 # -n prevents echo from adding a newline, which would change the encoded value.
echo "aGVsbG8gd29ybGQ=" | base64 --decode
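If you prefer not to hand-encode values, kubectl can generate an equivalent Secret manifest for you (the literals below are just this demo's defaults):

kubectl create secret generic user-service \
  --namespace=microservices \
  --from-literal=AMQP_USERNAME=default \
  --from-literal=AMQP_PASSWORD=default \
  --dry-run=client -o yaml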

Create a Deployment manifest in k8s/user-service/deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: microservices
  labels:
    app: user-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: kind-registry:5000/user-service:1.0
          envFrom:
            - configMapRef:
                name: user-service
            - secretRef:
                name: user-service
          ports:
            - containerPort: 4000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /livez
              port: 4000
            initialDelaySeconds: 20 # Wait 20s for the service to start
            periodSeconds: 10 # Check every 10s
            timeoutSeconds: 3 # Timeout after 3s
            failureThreshold: 4 # Restart pod after 4 consecutive failures (~40s)
          readinessProbe:
            httpGet:
              path: /readyz
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 2
            failureThreshold: 2 # Mark pod not ready after 2 consecutive failures (~10s)
          resources:
            requests:
              memory: "100Mi"
              cpu: "150m"
            limits:
              memory: "150Mi"
              cpu: "300m"

Key fields in our deployment manifest:

  • replicas: 1 → Number of identical Pods to create.
  • selector.matchLabels → Tells the Deployment which Pods it owns. Must match the labels in template.metadata.labels.
  • template → The actual Pod definition.
  • image → Container image to deploy.
  • envFrom (configMapRef and secretRef) → Pull environment variables from ConfigMap/Secret automatically.
  • ports.containerPort → Exposes port 4000 inside the Pod.
  • livenessProbe → Detects stuck containers and restarts them.
  • readinessProbe → Sends traffic only when the app is ready.
  • initialDelaySeconds → Useful for slow-start apps to avoid false failures.
  • resources.requests → Minimum guaranteed resources for the Pod (100Mi ≈ 100 MiB of RAM, 150m = 0.15 CPU cores; see the Kubernetes resource units docs).
  • resources.limits → Maximum resources the container can consume.

Define the Service in k8s/user-service/service.yaml. We use ClusterIP, as this service is internal:

apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: microservices
  labels:
    app: user-service
spec:
  type: ClusterIP
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 4000 # exposed by the Service
      targetPort: 4000 # Pod container port
      name: http # A human-readable name for the port; useful for referencing this port in other resources (like NetworkPolicy or Ingress).
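Once the Service is applied (we’ll do that in the Running Everything section), you can sanity-check in-cluster DNS from a throwaway pod; busybox is just a convenient image for this:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup user-service.microservices.svc.cluster.local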

Create an Ingress for the API Gateway in k8s/api-gateway/ingress.yaml to expose it externally via the NGINX Ingress Controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
  namespace: microservices
  annotations: # Extra settings for the ingress controller.
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: "nginx" # Uses the NGINX ingress controller.
  rules: # Define host and URL paths, mapping traffic to the appropriate backend service and port.
    - host: microservices.local
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: api-gateway
                port:
                  number: 4000

The Deployment, Service, and Secret manifests for API Gateway and Notification Service follow the same patterns. Only names, labels, and container images differ. ConfigMaps define service-specific settings such as the API Gateway pointing to the User Service, and Notification Service pointing to RabbitMQ.

# API Gateway doesn't need secret.yaml, so make sure to remove "envFrom.secretRef" from copied deployment.yaml
cp ./k8s/user-service/deployment.yaml ./k8s/api-gateway/deployment.yaml
cp ./k8s/user-service/service.yaml ./k8s/api-gateway/service.yaml

mkdir ./k8s/notification-service
cp ./k8s/user-service/deployment.yaml ./k8s/notification-service/deployment.yaml
cp ./k8s/user-service/service.yaml ./k8s/notification-service/service.yaml
cp ./k8s/user-service/secret.yaml ./k8s/notification-service/secret.yaml

Create a ConfigMap for the API Gateway (k8s/api-gateway/configmap.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-gateway
  namespace: microservices
data:
  HTTP_SERVER_URL: "0.0.0.0:4000"
  HTTP_SERVER_SHUTDOWN_TIMEOUT: "5s"
  GIN_MODE: "release"

  USER_SERVICE_CLIENT_URL: "http://user-service:4000"
  USER_SERVICE_CLIENT_TIMEOUT: "2s"

And for the Notification Service (k8s/notification-service/configmap.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: notification-service
  namespace: microservices
data:
  HTTP_SERVER_URL: "0.0.0.0:4000"

  AMQP_HOST: "rabbitmq.datastores.service.cluster.local"
  AMQP_PORT: "5672"
  AMQP_QUEUE: "notification-service"

Copy-pasting raw Kubernetes manifests for every service works for demos, but it doesn’t scale well. In production setups, you often package your manifests as a Helm chart and give each microservice its own values.yaml. That file defines only what varies between services (labels, image name/tag, environment variables, configs, and resource settings), while all the shared templates live in one place. This keeps your configuration clean, DRY, and much easier to manage as your number of microservices grows.
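For illustration, a per-service values.yaml for such a chart might look like this; the keys are hypothetical and depend entirely on how you template the chart:

name: user-service
image:
  repository: kind-registry:5000/user-service
  tag: "1.0"
replicas: 1
containerPort: 4000
env:
  HTTP_SERVER_URL: "0.0.0.0:4000"
  AMQP_HOST: "rabbitmq.datastores.svc.cluster.local"
resources:
  requests:
    cpu: 150m
    memory: 100Mi
  limits:
    cpu: 300m
    memory: 150Mi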

Setup RabbitMQ

We’ll run RabbitMQ inside the cluster using a StatefulSet.

First, create a ConfigMap for RabbitMQ in k8s/rabbitmq/configmap.yaml with its configuration files:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq
  namespace: datastores
data:
  enabled_plugins: |
    [rabbitmq_management].
  rabbitmq.conf: |
    listeners.tcp.default = 5672
    management.tcp.port = 15672
    loopback_users.guest = false

This ConfigMap provides two config files that are mounted into the RabbitMQ container:

  • enabled_plugins → Enables the Management UI
  • rabbitmq.conf → Port settings and default runtime config

Next, define a Secret for RabbitMQ credentials in k8s/rabbitmq/secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq
  namespace: datastores
type: Opaque
data:
  RABBITMQ_DEFAULT_USER: ZGVmYXVsdA== # Base64 for "default"
  RABBITMQ_DEFAULT_PASS: ZGVmYXVsdA== # Base64 for "default"

Now let's create the StatefulSet in k8s/rabbitmq/statefulset.yaml. A StatefulSet is similar to a Deployment but adds a few important capabilities for stateful systems like message brokers:

  • volumeMounts → Mounts RabbitMQ configuration and persistent storage
  • volumeClaimTemplates → Creates a dedicated PVC per Pod for durable message data
  • stable hostnames → Ensures Pods get predictable DNS entries

The postStart script is required because the official RabbitMQ Docker image does not automatically apply user tags, permissions, or certain admin settings at startup. Without this script:

  • the default user exists but has no administrator tag
  • required permissions are not granted
  • the Management UI will show limited capabilities
  • your microservices may fail permission checks when publishing/consuming messages

Here’s the full StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  namespace: datastores
spec:
  serviceName: rabbitmq
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3.12-management
          ports:
            - containerPort: 5672
            - containerPort: 15672
          envFrom:
            - secretRef:
                name: rabbitmq
          volumeMounts:
            - name: rabbitmq
              mountPath: /etc/rabbitmq
            - name: rabbitmq-data
              mountPath: /var/lib/rabbitmq
          lifecycle:
            postStart:
              exec:
                command: [
                    "/bin/sh",
                    "-c",
                    "echo 'Configuring RabbitMQ...'; \
                    for i in $(seq 1 60); do \
                    if rabbitmqctl status; then \
                    rabbitmqctl wait /var/lib/rabbitmq/mnesia/rabbit@$(hostname).pid && \
                    echo 'RabbitMQ is ready!' && break; \
                    else \
                    echo 'Waiting for RabbitMQ to be ready...'; \
                    fi; \
                    sleep 5; \
                    done; \
                    rabbitmqctl set_user_tags ${RABBITMQ_DEFAULT_USER} administrator && \
                    rabbitmqctl set_permissions -p / ${RABBITMQ_DEFAULT_USER} '.*' '.*' '.*'; \
                    echo 'RabbitMQ configured successfully.'",
                  ]
      volumes:
        - name: rabbitmq
          configMap:
            name: rabbitmq
  volumeClaimTemplates:
    - metadata:
        name: rabbitmq-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

The last piece is a Headless Service for the StatefulSet, defined in k8s/rabbitmq/headless-service.yaml. It provides each RabbitMQ Pod a stable DNS identity:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  namespace: datastores
spec:
  ports:
    - name: tcp
      port: 5672
    - name: management
      port: 15672
  clusterIP: None
  selector:
    app: rabbitmq

Tip: For local development and experiments, it’s perfectly fine to run datastores like Postgres or RabbitMQ outside the KIND cluster using Docker Compose. This dramatically reduces complexity since you avoid dealing with StatefulSets, PVCs, and storage provisioning. I’ve experimented with both approaches, and running stateful components externally often leads to a smoother and faster feedback loop.
Even in production, databases and message brokers are typically not run inside Kubernetes due to operational complexity, durability concerns, and failover constraints.

Running Everything

With all manifests in place, the final step is to build the service images, push them to the local registry, deploy all components, and verify that everything is running correctly.

First, build and push the images for each microservice (api-gateway, user-service, notification-service) to the local KIND registry:

docker build --target production -t localhost:5000/api-gateway:1.0 ./api-gateway
docker build --target production -t localhost:5000/user-service:1.0 ./user-service
docker build --target production -t localhost:5000/notification-service:1.0 ./notification-service

docker push localhost:5000/api-gateway:1.0
docker push localhost:5000/user-service:1.0
docker push localhost:5000/notification-service:1.0

Next, deploy RabbitMQ and the microservices by applying the manifests:

kubectl apply -f k8s/rabbitmq
kubectl apply -f k8s/user-service
kubectl apply -f k8s/notification-service
kubectl apply -f k8s/api-gateway

The User Service and Notification Service may restart a few times initially if RabbitMQ isn’t ready. That’s expected: Kubernetes keeps restarting failing pods until they become healthy, and the readiness probes ensure pods only receive traffic once their dependencies are available.

To validate that everything is running properly, watch the pods come up:

kubectl get pods -A -w

If a pod isn’t behaving as expected, you can inspect it further. To see detailed information:

kubectl describe pod <pod-name> -n <namespace>

To follow logs in real-time, for example for the Notification Service:

kubectl logs -f deployment/notification-service -n microservices
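You can also port-forward the RabbitMQ management UI to inspect queues, connections, and message rates (log in with the credentials from the rabbitmq Secret):

kubectl port-forward -n datastores pod/rabbitmq-0 15672:15672
# then open http://localhost:15672 in your browser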

Some common issues you might encounter:

  • ImagePullBackOff — The image isn’t in the registry or KIND isn’t configured to pull from it. Double-check your registry setup and image tags.
  • CrashLoopBackOff — Usually occurs when a service starts before RabbitMQ is ready. Make sure readiness probes are correctly defined and RabbitMQ is healthy.
  • Ingress 404 — Either the host entry isn’t in /etc/hosts, or the Ingress isn’t mapped correctly. Check it with kubectl get ingress -n microservices.
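For any of these, the namespace’s recent events are usually the fastest way to see what Kubernetes is complaining about:

kubectl get events -n microservices --sort-by=.lastTimestamp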

Calling the API

Once all Pods are running and the Ingress is active, you can create a user through the API Gateway:

curl -X POST http://microservices.local/users \
  -H "Content-Type: application/json" \
  -d '{"name": "daniel", "email": "daniel@example.com", "password": "123"}'

If everything is wired correctly, the User Service will publish a user.created event, and the Notification Service will consume it.

Check the logs:

kubectl logs -f deployment/notification-service -n microservices

You should see something like:

{
  "level": "log",
  "pid": 25,
  "timestamp": 1764442767463,
  "message": "Sending welcome email to daniel",
  "context": "NotificationController"
}

This confirms the full event flow works across the cluster.

Cleaning Up

When you’re done experimenting, delete the KIND cluster:

kind delete cluster

Stop the local registry:

docker stop kind-registry
docker rm kind-registry
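If you also want to discard the images cached by the local registry, remove its volume as well (this deletes everything you pushed):

docker volume rm registry-data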

Automating Everything (Makefile)

The repository includes a Makefile that automates every step end-to-end, from cluster creation to deployment and teardown.

Quick summary of available commands:

make help                       # List all available commands
make kind-create-cluster        # Create KIND cluster, local registry, metrics service, namespace
make kind-deploy-nginx-ingress  # Install and run NGINX Ingress + cloud-provider-kind
make kind-build-images          # Build all microservice Docker images
make kind-push-images           # Push images to local registry
make kind-deploy-services       # Deploy RabbitMQ + all microservices
make kind-delete-cluster        # Delete cluster and stop registry

Conclusion

Thanks for reading! This walkthrough showed how to run a local Kubernetes environment using KIND with a multi-node cluster, local registry, NGINX Ingress, RabbitMQ, readiness/liveness probes, and clean Kubernetes manifests. While real-world Kubernetes deployments are often larger and more complex, this guide provides a solid, hands-on starting point for understanding how Kubernetes deploys, manages, and operates microservices, even on a local machine.

If you found this useful, leave a like or comment, and feel free to explore the GitHub repo for the full setup!
