Table of Contents
- Why I Built a Service Mesh Lab Instead of Just Reading Docs
- The 3RVision Platform: Real App, Real Traffic
- Setting the Stage: Kind + Terraform + Namespaces
- How Istio Fits In: Sidecars, Gateways, and the Data Plane
- Traffic Management 101: VirtualServices, DestinationRules, and Subsets
- Implementing Canary Releases with Headers and Weights
- Enforcing Zero-Trust with STRICT mTLS
- Load Balancing, Connection Pools, and Circuit Breaking
- Observability: Metrics, Dashboards, and Traces
- Walking Through a Single Request
- Key Takeaways
Why I Built a Service Mesh Lab Instead of Just Reading Docs
This project started as a personal lab to really understand what a service mesh does beyond the buzzwords. Instead of using sample apps, the goal was to take a real 3-tier app (Next.js frontend, Go backend, Flask ML service) and see how Istio changes the way traffic, security, and observability work in practice.
The idea was simple: if this setup feels like something that could ship to production one day, then the learning will stick and this repo becomes a living reference for future work.
The 3RVision Platform: Real App, Real Traffic
3RVision is split into three logical services, each running in its own Kubernetes namespace:
- frontend → Next.js UI
- backend → Go API server
- ml → Flask ML inference service
The frontend talks to the backend, and the backend calls the ML service for model inference, exactly the kind of hop-by-hop traffic that benefits from a service mesh.
Each service has two deployment variants:
- stable version (production)
- canary version (testing new features)
This is where Istio’s traffic management features come into play.
Setting the Stage: Kind + Terraform + Namespaces
To avoid dealing with cloud accounts, Terraform provisions a local Kind cluster with:
- 1 control plane node
- 2 worker nodes
- Port mappings for HTTP (80) and HTTPS (443)
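Under the hood, Terraform drives a standard Kind cluster definition. A minimal sketch of the equivalent config in Kind's kind.x-k8s.io/v1alpha4 format (the actual Terraform wiring lives in the repo):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Expose HTTP/HTTPS on the host so the Istio ingress gateway is reachable
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
  - role: worker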
Cluster Setup Workflow
# Provision the cluster
terraform init
terraform apply
# Create namespaces
kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace ml
# Enable Istio sidecar injection
kubectl label namespace frontend istio-injection=enabled
kubectl label namespace backend istio-injection=enabled
kubectl label namespace ml istio-injection=enabled
This gives you a clean separation:
- Terraform → cluster lifecycle
- Kubernetes → application resources
- Istio → traffic shaping and security
How Istio Fits In: Sidecars, Gateways, and the Data Plane
Istio works by injecting an Envoy sidecar proxy next to each application container. All inbound and outbound traffic flows through this sidecar, which means you can add routing, retries, mTLS, and telemetry without changing application code.
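A quick sanity check that injection is active: every pod in a labeled namespace should report two containers, the app itself plus the injected istio-proxy. A sketch, assuming the backend pods carry an app=backend label:
# READY should show 2/2: the application container plus istio-proxy
kubectl get pods -n backend
# List container names in one pod to confirm the sidecar is present
kubectl get pod -n backend -l app=backend \
  -o jsonpath='{.items[0].spec.containers[*].name}'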
Architecture Overview
┌─────────────────────┐
│ User/Client │
└──────────┬──────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────┐
│ KIND KUBERNETES CLUSTER │
│ (Terraform Provisioned) │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ Control Plane │ │ Worker #1 │ │ Worker #2 │ │
│ └────────────────┘ └────────────────┘ └────────────────┘ │
├──────────────────────────────────────────────────────────────────────────┤
│ ISTIO SERVICE MESH │
│ │
│ Gateway ──────► VirtualService ──────► DestinationRule │
│ (Ingress) (Routing) (mTLS + Load Balancing) │
├──────────────────────────────────────────────────────────────────────────┤
│ MICROSERVICES │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ FRONTEND │ │ BACKEND │ │ ML MODEL │ │
│ │ (Next.js) │───►│ (Go) │───►│ (Flask) │ │
│ │ Port: 3000 │ │ Port: 8080 │ │ Port: 5001 │ │
│ │ │ │ │ │ │ │
│ │ stable/canary│ │ stable/canary│ │ stable/canary│ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
├──────────────────────────────────────────────────────────────────────────┤
│ OBSERVABILITY STACK │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Prometheus │ │ Jaeger │ │ Grafana │ │
│ │ (Metrics) │ │ (Tracing) │ │ (Dashboards) │ │
│ │ Port: 9090 │ │ Port: 16686 │ │ Port: 3000 │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└──────────────────────────────────────────────────────────────────────────┘
At the edge, an Istio Ingress Gateway receives external requests, applies routing rules defined by VirtualServices, and forwards traffic deeper into the mesh.
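For reference, the frontend-gateway that the VirtualService in the next section binds to would look roughly like this (a sketch; port and host values follow the frontend.local setup used below):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: frontend-gateway
  namespace: frontend
spec:
  # Bind to the default Istio ingress gateway deployment
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "frontend.local"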
Traffic Management 101: VirtualServices, DestinationRules, and Subsets
The main Istio building blocks used in this project are:
| Resource | Purpose |
|---|---|
| Gateway | Exposes services to external traffic on specific ports |
| VirtualService | Defines how requests are routed (by header, weight, path) |
| DestinationRule | Defines policies for traffic (subsets, load balancing, connection pools) |
Example: Frontend VirtualService
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend-vs
  namespace: frontend
spec:
  hosts:
    - "frontend.local"
  gateways:
    - frontend-gateway
  http:
    - match:
        - headers:
            x-canary:
              exact: "true"
      route:
        - destination:
            host: frontend-service
            subset: canary
          weight: 100
    - route:
        - destination:
            host: frontend-service
            subset: stable
          weight: 90
        - destination:
            host: frontend-service
            subset: canary
          weight: 10
Each service (frontend, backend, ml) has:
- A VirtualService that decides which version handles the request
- A DestinationRule that defines two subsets based on the version label: stable and canary
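A minimal sketch of that DestinationRule for the frontend (a fuller version with traffic policies appears in the circuit-breaking section below):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend-dr
  namespace: frontend
spec:
  host: frontend-service
  subsets:
    - name: stable
      labels:
        version: stable   # matches pods labeled version=stable
    - name: canary
      labels:
        version: canary   # matches pods labeled version=canary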
Implementing Canary Releases with Headers and Weights
The canary strategy is intentionally simple but powerful.
Traffic Routing Logic
- If the request has the header x-canary: true → 100% to the canary version.
- If the header is missing → split by weight (for example, 90% stable, 10% canary).
This pattern makes it easy to:
- Send only internal testers to canary by setting the header.
- Gradually increase canary weight without touching deployment specs.
- Roll back instantly by adjusting VirtualService weights (see the patch sketch below).
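That rollback can be a single JSON patch against the live VirtualService, no redeploy involved (a sketch; the indices assume the frontend-vs manifest shown earlier, where the weighted rule is the second http entry):
# Shift 100% of default traffic back to stable
# (http[1] is the weighted rule: route[0] = stable, route[1] = canary)
kubectl -n frontend patch virtualservice frontend-vs --type=json -p='[
  {"op": "replace", "path": "/spec/http/1/route/0/weight", "value": 100},
  {"op": "replace", "path": "/spec/http/1/route/1/weight", "value": 0}
]'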
Testing Canary Deployment
# Send request to canary version
curl -H "x-canary: true" http://frontend.local
# Send request with default routing (weight-based)
curl http://frontend.local
Because the same pattern is applied to all three services (frontend, backend, ML), a single user journey can be fully on canary or fully on stable, provided each service forwards the x-canary header on its outbound calls (Istio routes on headers but does not propagate them automatically).
Enforcing Zero-Trust with STRICT mTLS
To move toward a zero-trust model, each namespace has a PeerAuthentication resource that sets mTLS mode to STRICT.
What This Means
Services only accept mutually authenticated, encrypted traffic from other sidecars in the mesh; plain HTTP between pods is rejected.
Benefits of Istio mTLS
- Encryption → Nobody can sniff requests or responses in transit.
- Mutual authentication → Prevents unknown workloads from accessing services.
- Automated cert management → No manual cert rotation or key generation.
Example: PeerAuthentication
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: backend
spec:
  mtls:
    mode: STRICT
This namespace-scoped default policy follows the pattern shown in the Istio documentation for enforcing strict mTLS across all workloads in a namespace.
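A simple way to see STRICT mode working is to call a meshed service from a pod outside the mesh and watch the plaintext connection get rejected. A sketch, assuming the default namespace is not labeled for injection and using the backend-service:8080 address from the diagram:
# From a sidecar-less pod, plaintext HTTP to a STRICT-mTLS service
# should fail (e.g. connection reset), since the server sidecar
# only accepts mutual TLS
kubectl run mtls-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sv --max-time 5 http://backend-service.backend.svc.cluster.local:8080/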
Load Balancing, Connection Pools, and Circuit Breaking
- Load balancing: Round-robin across all healthy pods in a subset.
- Connection pools: Limits on TCP connections and HTTP pending requests.
- Outlier detection: After N consecutive errors, a pod is temporarily ejected from the pool.
Example: DestinationRule with Circuit Breaking
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend-dr
  namespace: backend
spec:
  host: backend-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 2
    outlierDetection:
      consecutive5xxErrors: 3   # consecutiveErrors is deprecated in v1beta1
      interval: 10s
      baseEjectionTime: 30s
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary
This means if a canary version starts throwing errors, Istio automatically reduces its impact by isolating bad instances, without waiting for a rollback.
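To watch this in action, you could push more concurrent load than the connection pool allows, for example with fortio, the load generator the official Istio circuit-breaking task uses. A sketch, assuming a fortio deployment is running inside the mesh:
# 120 concurrent connections exceed maxConnections: 100, so overflow
# requests are fast-failed with 503s by the Envoy sidecar
kubectl exec -n backend deploy/fortio -- \
  fortio load -c 120 -qps 0 -n 500 http://backend-service:8080/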
Observability: Metrics, Dashboards, and Traces
The observability stack is built around Istio’s built-in telemetry.
Components
| Tool | Purpose |
|---|---|
| Prometheus | Scrapes metrics from Envoy sidecars (request counts, errors, latency) |
| Grafana | Visualizes mesh metrics (success rate, p99 latency per route) |
| Jaeger | Distributed tracing with high sampling for end-to-end visibility |
Deployment
# Deploy observability stack
kubectl apply -f k8s/observability/
# Access Grafana
kubectl port-forward -n istio-system svc/grafana 3000:3000
# Access Jaeger
kubectl port-forward -n istio-system svc/jaeger-query 16686:16686
# Access Prometheus
kubectl port-forward -n istio-system svc/prometheus 9090:9090
[Screenshots: Grafana dashboard showing request rate per service, and a Jaeger distributed trace]
Istio provides ready-made metrics and dashboards so you can quickly monitor mesh traffic, latency, and error rates.
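The raw data behind those dashboards is queryable too: Istio's standard istio_requests_total metric breaks traffic down by destination service, version, and response code. For example, via the port-forwarded Prometheus API (label names are Istio's standard ones; the service name follows this setup):
# Per-version request rate for the backend over the last 5 minutes
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(istio_requests_total{destination_service_name="backend-service"}[5m])) by (destination_version, response_code)'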
Walking Through a Single Request
When a user hits the frontend through the Istio Ingress Gateway, the flow looks like this.
Request Flow
1. User → Istio Ingress Gateway
↓ (VirtualService matches host/path)
2. Gateway → Frontend pod (stable or canary based on x-canary header/weight)
↓ (Frontend calls backend via Kubernetes DNS)
3. Frontend Envoy → Backend pod (VirtualService applies routing again)
↓ (Backend calls ML service)
4. Backend Envoy → ML pod (same routing logic)
↓ (ML inference completes)
5. Response flows back through the chain
What Happens at Each Hop
- mTLS encryption between all services.
- Metrics emission to Prometheus.
- Trace spans sent to Jaeger.
- Circuit breaking and outlier detection enforced by DestinationRules.
Seeing this full path in Jaeger, with timing for each hop, is one of the most useful parts of the setup.
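To reproduce it, generate a little traffic down both paths and then open the Jaeger UI from the port-forward above (this assumes frontend.local resolves to the Kind host port mappings, for example via an /etc/hosts entry):
# Exercise both the stable and canary paths end to end
for i in $(seq 1 20); do
  curl -s -o /dev/null http://frontend.local/
  curl -s -o /dev/null -H "x-canary: true" http://frontend.local/
done
# Traces for frontend -> backend -> ml appear at http://localhost:16686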
Key Takeaways
Building this lab taught me:
- A service mesh enables zero-downtime deployment patterns and fine-grained traffic control without application code changes.
- mTLS enforcement is straightforward with Istio and significantly improves security posture.
- Observability becomes a first-class concern with minimal instrumentation effort.
- Understanding Istio’s primitives (Gateway, VirtualService, DestinationRule, PeerAuthentication) unlocks powerful traffic patterns.
For deeper reading, check out:
- Istio Official Docs – Traffic Management
- Istio Security – PeerAuthentication
- Visualizing Metrics – Istio + Grafana Dashboard
- Full project README on GitHub: 3RVision-Platform
If this Istio lab setup helped you, consider ⭐ starring the repo or opening an issue/PR with improvements or ideas. Every bit of feedback, bug report, or contribution helps make this a better reference for anyone learning service mesh in the real world.