Gokul G.K.

Polyglot Microservices Communication in Kubernetes with DNS, CoreDNS, and Istio (Java + Python)

Exploring service registry, service discovery, and service mesh in Kubernetes.

We have already discussed service discovery, service registry, and service mesh here.
In this post, we will cover:

  • How does Kubernetes implement service registry and discovery natively?
  • What does a service mesh add on top of Kubernetes networking?
  • How can we test cross-service communication (Java ↔ Python) on Minikube?

Codespace

We can use GitHub Codespaces for installation and application development. Codespaces Documentation

You can get the complete application code here.

Why "polyglot" microservices are complex?

Polyglot microservices let teams use a different programming language and tech stack for each service, picking the "best tool for the job." However, this flexibility introduces significant challenges in distributed systems:

  • Each language has its own libraries for HTTP communication, configuration management, retries, circuit breakers, and resilience patterns.
  • Development setups (e.g., localhost) differ significantly from clustered production environments, making testing, debugging, and configuration (such as service URLs or credentials) error-prone and inconsistent.
  • Inter-service calls over networks can fail due to latency, timeouts, or partitions, requiring robust handling (e.g., fallbacks, timeouts) that must be re-implemented per language/stack (see the sketch after this list).
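
To make the last point concrete, here is a minimal Python sketch of the kind of timeout/retry/fallback logic every stack ends up hand-rolling; the URL and the fallback payload are illustrative placeholders, not code from the linked project.

import time
import urllib.request
import urllib.error

# Hand-rolled resilience: each language/stack repeats some version of this.
def call_with_retries(url: str, retries: int = 3, timeout: float = 2.0, backoff: float = 0.5) -> str:
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read().decode()
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries:
                # Fallback once retries are exhausted.
                return '{"status": "degraded", "message": "downstream service unavailable"}'
            time.sleep(backoff * attempt)  # simple linear backoff

# Illustrative in-cluster URL; a service mesh can take over retries and timeouts instead.
print(call_with_retries("http://spring-service:8080/hello"))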

Kubernetes provides native mechanisms for service discovery and networking, eliminating the need for external tools in many cases.

Minikube

Minikube Documentation

Minikube is a local Kubernetes cluster designed to make it easy to learn and develop with Kubernetes.
Note: Local clusters enable developers to experience real Kubernetes networking (DNS, services) without relying on cloud infrastructure.

When developing locally with Minikube, you typically build Docker images on your host machine and then deploy pods that reference those images.
Minikube runs its own Docker engine inside the Minikube VM (or container, depending on the driver). This is often referred to as "Docker-in-Docker" in this context—though technically it's Docker running inside the Minikube environment, separate from your host Docker daemon.

Minikube Installation

Run the commands below to install Minikube and verify the installation:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

Start and verify Minikube

# Start the Minikube cluster
minikube start --driver=docker

# Verify the installation
kubectl get nodes

Applications

I have already built the Java (Spring Boot) and Python (FastAPI) applications, as well as the Kubernetes configuration files.
The project structure looks like this:

GitHub Codespace
 └── Minikube (Docker driver)
     ├── Kubernetes Services (DNS-based discovery)
     ├── FastAPI App (Python)
     ├── Spring Boot App (Java)
     ├── Istio (Service Mesh)
     │    ├── Envoy sidecars
     │    ├── mTLS
     │    └── Observability


Check the README for each application's exposed APIs if you want to change or scale anything.

Java - Spring Boot application
Java - Spring Boot application APIs

Python - FastAPI application
Python - FastAPI application APIs

Build Docker images of the applications

Docker Documentation.
We will generate a Docker image from our Java and Python applications to run inside Minikube.

eval $(minikube docker-env)

If you skip eval $(minikube docker-env), your locally built images won't be visible to the cluster.

docker build -t fastapi-app:1.0 ./fastapi-app
docker build -t spring-app:1.0 ./spring-boot-app

Because the shell now points at Minikube's Docker daemon, the images are built directly inside Minikube and don't need to be pushed to a registry.

The whole project

Kubernetes Deployment, Service Discovery, and Service Registry

In Kubernetes, service objects and CoreDNS replace traditional service registries, such as Eureka/Consul. You don't register services manually — Kubernetes does it automatically via labels/selectors.

In Kubernetes, Service objects act as the service registry.
When a Service is created, it becomes a stable, discoverable entry that represents one or more running Pods. Pods that match a Service's selector are automatically registered as endpoints for the service.
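
You can see this automatic registration programmatically as well. The sketch below is an assumption on my part (it is not part of the demo project and needs pip install kubernetes); it reads the Endpoints object that Kubernetes maintains for spring-service, the same data that kubectl get endpoints spring-service shows.

# Requires the official Python client: pip install kubernetes (not part of the demo project).
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a Pod
v1 = client.CoreV1Api()

# The Endpoints object is filled in automatically from Pods matching the Service's selector.
endpoints = v1.read_namespaced_endpoints(name="spring-service", namespace="default")
for subset in endpoints.subsets or []:
    for address in subset.addresses or []:
        print("registered Pod IP:", address.ip)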

Kubernetes provides built-in service discovery using DNS via CoreDNS.

Every service is automatically assigned a DNS name following this format:
<service-name>.<namespace>.svc.cluster.local
Applications can communicate using service names instead of IP addresses.
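
For example, a Python client can use either the short Service name (within the same namespace) or the fully qualified name; both resolve through CoreDNS, so no Pod IPs are hardcoded. This is a minimal sketch assuming the default namespace and the /hello endpoint used later in this post.

import urllib.request

# Both forms resolve via CoreDNS inside the cluster.
SHORT_NAME_URL = "http://spring-service:8080/hello"                      # same namespace
FQDN_URL = "http://spring-service.default.svc.cluster.local:8080/hello"  # fully qualified

for url in (SHORT_NAME_URL, FQDN_URL):
    with urllib.request.urlopen(url, timeout=2) as resp:
        print(url, "->", resp.status)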

Check out here for the Kubernetes configs and the required commands.

Once you check out the config, apply the manifest files.

kubectl apply -f KubernetesService/

Service Registry and Service Discovery in Kubernetes

Verify the built-in service registry

kubectl get svc

Inspect labels and selectors

Labels are key-value metadata attached to Kubernetes objects, while selectors define which labels a Service should match.

kubectl describe svc spring-service

Now relabel a Pod with a non-matching label and describe the Service again; the Pod is automatically removed from the Service's endpoints:

kubectl label pod <spring-pod> app=wrong --overwrite
kubectl describe svc spring-service

Service Discovery (DNS, No Hardcoded IPs)

Let's navigate to the shell of the FastAPI service, retrieve the service name of the Spring Boot service, and verify the cross-service call.

# Get the list of all running pods
kubectl get pods

# Open a shell inside the FastAPI pod
kubectl exec -it <fastapi-pod> -- sh

# Resolve the Spring service's DNS name to its ClusterIP
getent hosts spring-service

# Call the Spring service API from inside the FastAPI pod
curl http://spring-service:8080/hello

If curl is not available in the container, install it and run the call in one step:
kubectl exec -it <fastapi-pod> -- sh -c "apt-get update && apt-get install -y curl && curl http://spring-service:8080/hello"
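
Inside the application itself, the same DNS-based call is just ordinary HTTP code. Below is a hedged sketch of what a FastAPI endpoint calling the Spring Boot service could look like; the /call-java route name is an assumption for illustration and may differ from the linked repo.

from fastapi import FastAPI
import urllib.request

app = FastAPI()

# Kubernetes DNS name of the Spring Boot Service; no Pod IP is hardcoded.
SPRING_URL = "http://spring-service:8080/hello"

@app.get("/call-java")
def call_java():
    # Plain HTTP from the app's point of view; with Istio enabled, the Envoy
    # sidecars transparently upgrade this hop to mTLS.
    with urllib.request.urlopen(SPRING_URL, timeout=2) as resp:
        return {"spring_response": resp.read().decode()}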

Why Is DNS Alone Not Enough?

Kubernetes's built-in DNS-based service discovery and load balancing are powerful—they give you reliable reachability: services can find each other via stable DNS names, and traffic is automatically distributed across healthy pods.

But it lacks:

  • mTLS
  • Observability
  • Retries
  • Traffic management, etc.

This is precisely where a service mesh steps in—adding a dedicated infrastructure layer that provides retries, traffic shaping, mTLS, and rich observability uniformly across all services, without requiring changes to the application code.

Istio / Service Mesh

Istio is an open-source service mesh that integrates seamlessly with existing distributed applications. Its powerful features provide a consistent and efficient way to secure, connect, and monitor services.

Istio Documentation

curl -L https://istio.io/downloadIstio | sh -
cd istio-*
sudo install bin/istioctl /usr/local/bin/istioctl


istioctl install --set profile=demo -y

Istio Sidecar, mTLS, and Kiali Observability

Enable sidecar injection

A service mesh introduces a sidecar proxy alongside each application container.

With Istio, an Envoy proxy is automatically injected into every Pod.
All inbound and outbound traffic flows through this proxy, rather than directly between services.

kubectl label namespace default istio-injection=enabled

Restart the Deployments so the sidecar is injected into new Pods:
kubectl rollout restart deployment fastapi spring

Verify that the sidecar container was injected:
kubectl get pod <fastapi-pod> -o jsonpath='{.spec.containers[*].name}'

The sidecar enables advanced features such as:

  • Traffic control
  • Security
  • Observability

Most importantly, these features are added without modifying the application code, allowing teams to focus solely on business logic.

mTLS

Istio enables mutual TLS (mTLS) by default between services.

# Check the mTLS status of the Spring workload (istioctl authn tls-check was removed in newer Istio releases)
istioctl x describe pod <spring-pod>

Service Mesh mTLS provides:

  • Encryption in transit
  • Service identity verification
  • Zero-trust networking

All of this happens transparently; applications are completely unaware that encryption is being handled in the background.

Observability

A service mesh provides deep observability out of the box.
Because all traffic flows through Envoy proxies, Istio can automatically collect:

  • Request rates
  • Latency
  • Error percentages
  • Service dependencies

Tools like Kiali visualize this data as a live service graph, making it easy to understand how services interact and where problems occur.

Achieving this level of insight is extremely challenging at the application level alone.

Kiali Documentation

Install Kiali:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.28/samples/addons/kiali.yaml

Sample Dashboard

Kiali - Mesh

Kiali - workloads

  • All images were generated using nano banana.
  • References are added section-wise.

This is a very brief demonstration of how service discovery, service registry, and service mesh work. I will be creating a detailed version focusing on the service mesh (Istio) and the observability tool (Kiali).
