Kristijan for Squadcast

Implementing Istio in a Kubernetes cluster

As the complexity of a microservice architecture grows, it becomes important to implement a service mesh for better insight into your cluster and microservices. In this blog, Kristijan explains how Istio can be used as a service mesh, along with detailed installation steps and configuration setup.

Service mesh? You’ve heard about it, but does it actually solve a problem, or is it just another hot buzzword in the industry?

In this article you will learn about the Istio service mesh, along with a full installation guide and configuration setup.


Before moving straight to Istio, it’s worth mentioning that in one of our previous articles - The Age of Service Mesh, Gigi Sayfan explained in detail how service meshes work and what problems they solve.

I highly suggest you give that article a read. Maybe even as a prequel, as it will provide you with great insight into service mesh basics and the general idea behind them.

Right now, there is a plethora of service mesh options to choose from.

Each service mesh has its pros and cons, along with specific use cases that you should consider for your cluster and end goal.

You can decide which “brand” of service mesh to install.

What is Istio?

Istio is a service mesh designed to enhance and give you better insight into your cluster and microservices.

One of the great things about Istio, and service meshes in general, is that they require no application code changes to work.

Istio works by integrating itself as an additional layer inside the Kubernetes cluster and thus provides modern features that you can utilize to your advantage.

Those features include advanced load balancing, circuit breaking, mTLS traffic encryption, better authentication and authorization options, metrics, telemetry, and overall fine-grained control over the traffic going in and out of the cluster.
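A circuit breaker, for example, is configured declaratively rather than in application code. The sketch below is illustrative, not part of this guide's setup: the hello-circuit-breaker name, the hello-kubernetes host, and the thresholds are assumptions you would tune for your own services.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hello-circuit-breaker   # hypothetical resource name
spec:
  host: hello-kubernetes        # the target service (assumed)
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100     # cap concurrent TCP connections
    outlierDetection:
      consecutive5xxErrors: 5   # eject an endpoint after 5 consecutive 5xx errors
      interval: 30s             # how often endpoints are scanned
      baseEjectionTime: 60s     # minimum ejection duration
```

Applying this with kubectl is all it takes; the sidecar proxies enforce the policy without any change to the application.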

Now Istio isn’t just a single object that you install. It’s more of a collection of entities that work together and make up the whole service mesh.

Like Kubernetes, Istio has a control plane that manages everything and a data plane that handles the traffic between the services.

There is more to Istio: it isn’t bound to working only in a Kubernetes cluster. It also works with virtual machines and supports different deployment options for both installation and runtime.

In the next section, we will explain Istio’s components and architecture.

Istio Architecture

As the saying goes, a picture is worth a thousand words.

Consider the following diagram:

Image Source

You can see that traffic in and out of the pods no longer flows directly; instead, it must first pass through the sidecar proxies.

The container sidecars are Envoy proxies that get automatically injected into your pods on startup.

During installation, you instruct Istio which namespace to ‘watch’ and deploy Envoy proxies along with your applications.

You will see how this is done in action when we get to the installation section.

The other part is the control plane, made up of multiple components bundled into one binary - istiod. The control plane manages the proxies and certificates, handles service discovery, and pushes out the configuration you set.

The components that were consolidated into istiod are:

  • Pilot - service discovery and traffic management configuration for the Envoy proxies
  • Citadel - the certificate authority that issues and rotates mTLS certificates
  • Galley - configuration validation, ingestion, and distribution

To explain this a bit better, here’s an analogy.

Consider the service mesh as a telephone network.

The data plane consists of the phones that you and your friends use to communicate with each other.

You could communicate without them, but you would have to yell across the distance. With phones, communication is far more modern, secure, and controllable.

The control plane would be the telephone service provider: from there, all the calls get managed, routed, and billed.

Every configuration change you apply goes to the control plane; the control plane then communicates that change to the sidecar proxies.

The traffic traversing the data plane is only visible to the proxies; the other Istio components have no access.
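For instance, turning on strict mTLS for the whole mesh is a single resource applied to the control plane; istiod then pushes the resulting certificates and proxy configuration to every sidecar. A minimal sketch:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when placed in the root namespace
spec:
  mtls:
    mode: STRICT            # sidecars will reject plain-text traffic
```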

Installing Istio

Depending on your setup, Istio offers you different installation and deployment strategies.

Each cloud service provider has its own specifics, so it’s best to go over the platform setup docs and check whether any prerequisites or dependencies are needed before installing.

The install options range from Helm and istioctl to the Istio Operator.

You can look into them in Istio’s installation documentation.

For this guide, we will install Istio using the istioctl tool.

First, you’ll need to download the binary:

$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.11.1 sh -

Navigate into the newly created folder, export the path to the binary, and verify that it works:

$ cd istio-1.11.1/
$ export PATH=$PWD/bin:$PATH
$ istioctl version

no running Istio pods in "istio-system"

Well, that’s okay, you still haven’t installed Istio.

It is a good idea to run the pre-flight check to verify that your cluster doesn’t have any issues that would prevent it from running the Istio service mesh.

$ istioctl x precheck

Install Pre-Check passed! The cluster is ready for Istio installation. 

Before moving forward, you should assess which type of profile you want Istio to be installed with.

There are six of them at the time of this writing:

  • Default
  • Demo
  • Minimal
  • External
  • Empty
  • Preview

You can view each profile with an extended description here.

We will go with the default profile intended for production environments.

Each profile is just a set of features that Istio will enable when installed.

If you want to test every feature, you can install it using the demo profile.

Note: Installing profiles that include the Ingress or Egress Gateway will automatically provision an external load balancer (on platforms that support LoadBalancer services).

Istio also offers customizations and custom third-party add-ons you can include in the profile.

If none of the above profiles meets your requirements, you can use istioctl to generate custom manifests tailored to your needs.
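A custom install is typically expressed as an IstioOperator resource that starts from a profile and overrides pieces of it. The fragment below is a sketch; the overrides shown are illustrative choices, not requirements:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: false            # drop the egress gateway from the default profile
  meshConfig:
    accessLogFile: /dev/stdout  # turn on Envoy access logging
```

You would then install it with istioctl install -f custom.yaml.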

To install using the default profile:

$ istioctl install --set profile=default -y

✔ Istio core installed             
✔ Istiod installed                                                         
✔ Egress gateways installed  
✔ Ingress gateways installed
✔ Installation complete

Excellent! You’ve installed Istio successfully!

You are halfway there.

Now you will need to label the namespaces where Istio should inject sidecar proxies into the pods.

For example, to label the default namespace for sidecar injection:

$ kubectl label namespace default istio-injection=enabled

namespace/default labeled

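Injection can also be controlled per workload. For example, even in a labeled namespace, a pod template can opt out with an annotation (the fragment below is illustrative):

```yaml
# Deployment pod-template fragment (illustrative): skip sidecar injection
# for this workload even though its namespace is labeled.
template:
  metadata:
    annotations:
      sidecar.istio.io/inject: "false"
```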

You can now verify this with:

$ kubectl get ns --show-labels

NAME            STATUS      AGE      LABELS
default         Active      119d     istio-injection=enabled
[other output truncated]

With that, you completed the Istio core components installation.


I installed Istio, and now what?

Next comes the observability part.

The Envoy proxies will send off telemetry and other data that you can use to visualize the traffic in the mesh.

As with a Prometheus and Grafana setup, you will need to pair Istio with a visualization tool to display that data.

You will use the Kiali dashboard to visualize and see what’s going on in the cluster.

There is one caveat, however. Kiali requires that you have a running Prometheus instance in your cluster.

You can deploy a new instance or supply the address of an existing one.

For simplicity and example purposes, the following section will use the demo manifests for deploying Kiali, Jaeger, Prometheus, and Grafana.

Keep in mind that you shouldn’t rely on this setup for running in production environments!

Further below, there will be an explanation of how to set up Kiali to work with an existing Prometheus instance.

Installing the Kiali dashboard

Navigate to the Istio folder and apply the manifests located under samples/addons:

$ kubectl apply -f samples/addons

Applying the above will deploy many objects, so give them a couple of minutes to start.

Check whether the Kiali pod has started:

$ kubectl -n istio-system get pods -l app=kiali

NAME                           READY    STATUS    RESTARTS      AGE
kiali-787bc487b7-fbxwm         1/1      Running   0             2m10s

Once it’s running, you can now access the dashboard using kubectl and port-forward.

However, istioctl offers a much simpler way:

$ istioctl dashboard kiali


Open http://localhost:20001/kiali in your browser.

As you can see, there is no traffic running in the selected namespace, and Kiali will show no connections.

If you want to access the other dashboards - Grafana and Jaeger, you can again use istioctl dashboard:

$ istioctl dashboard grafana

$ istioctl dashboard jaeger

Kiali tips

There are also other ways to deploy Kiali that are better suited to production use, where you can customize and set your own parameters.

Installing Kiali can be done by deploying the kiali-server or the kiali-operator Helm chart.

You can find the GitHub link for both Helm charts here.

As mentioned in the previous section, you can specify external instances of Prometheus and the other tools.

For the most benefit and the greatest observability in the service mesh, it’s best to install all the tooling Kiali integrates with: a Prometheus instance, Grafana, and Jaeger for tracing.

Note: Refer to the Jaeger documentation as it requires additional configuration to have full distributed tracing in your apps.

Grafana and Jaeger are optional; Kiali only strictly requires Prometheus.

Image Source

You can specify every connection to the other systems during installation.

Example, for specifying an existing Prometheus instance:

$ helm install kiali-server kiali-server --repo https://kiali.org/helm-charts \
  -n istio-system \
  --set auth.strategy="anonymous" \
  --set external_services.custom_dashboards.prometheus.url="http://prometheus-k8s.monitoring:9090/" \
  --set external_services.prometheus.url="http://prometheus-k8s.monitoring:9090/"

The Kiali authentication options are available here.

The anonymous strategy used above allows unauthenticated access to the dashboard, so it is only suitable for testing.
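If you prefer not to pass everything through --set flags, the same external-service settings can live in a values file. An illustrative fragment, where the in-cluster URLs are assumptions you would adapt to your own namespaces and ports:

```yaml
# values.yaml fragment for the kiali-server chart (URLs are assumed examples)
auth:
  strategy: anonymous
external_services:
  prometheus:
    url: "http://prometheus-k8s.monitoring:9090/"
  grafana:
    enabled: true
    in_cluster_url: "http://grafana.istio-system:3000/"
  tracing:
    enabled: true
    in_cluster_url: "http://tracing.istio-system:16685/jaeger"
```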

Demo application

You’ve deployed Istio, have a running service mesh inside your cluster, and you also installed the Kiali dashboard to observe the traffic.

Let’s now deploy a simple demo application.

You can use the following hello-world web app that will display a simple web page for testing.

Apply the following deployment and service manifests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  selector:
    matchLabels:
      name: hello-kubernetes
  template:
    metadata:
      labels:
        name: hello-kubernetes
    spec:
      containers:
      - name: app
        image: paulbouwer/hello-kubernetes:1.10
        ports:
          - containerPort: 8080
        env:
          - name: KUBERNETES_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: KUBERNETES_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: KUBERNETES_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    name: hello-kubernetes

Verify that the pod is running and the service is deployed:

$ kubectl get pod,svc

NAME                                     READY   STATUS        RESTARTS      AGE
pod/hello-kubernetes-78c896db9c-hrcc8    2/2     Running       0             46s

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP    PORT(S)     AGE
service/hello-kubernetes    ClusterIP       <none>         80/TCP      47s

Now, for testing, you can use port-forward to access the application. However, a more permanent solution would be to use a load balancer or an ingress.

Istio ships with its own ingress controller that you can use to expose and test the application.

The following ingress manifest will expose the application on the / path:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress-hello-kubernetes
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-kubernetes
            port:
              number: 80

Note: Notice the kubernetes.io/ingress.class: istio annotation; it specifies that the Istio ingress controller should pick up this object.
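If you would rather use Istio's native traffic APIs instead of a Kubernetes Ingress, the equivalent setup is a Gateway plus a VirtualService. The sketch below assumes the hello-kubernetes service from above; the hello-gateway and hello-vs names are made up:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the default istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs
spec:
  hosts:
  - "*"
  gateways:
  - hello-gateway
  http:
  - route:
    - destination:
        host: hello-kubernetes
        port:
          number: 80
```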

You can get the IP address using:

$ kubectl -n istio-system get svc -l release=istio

NAME                    TYPE            CLUSTER-IP      EXTERNAL-IP    
istio-egressgateway     ClusterIP     <none>                                                                              
istio-ingressgateway    LoadBalancer   
istiod                  ClusterIP      <none>      
[other output truncated]

Visiting the external IP of the ingress gateway will open up the web application.

To see some activity in the Kiali dashboard, you first need to generate some traffic.

The simplest way is to use curl and a while loop:

From another terminal, run:

$ while true; do curl http://<EXTERNAL-IP>/; sleep 1; done

And check the Kiali dashboard:


You can now see that Kiali displays the traffic, and it reaches the web application without any issues.


Conclusion

  • You learned what Istio is and how it works
  • Got your hands dirty by installing and configuring Istio in a cluster
  • In addition, you installed Kiali to visualize the traffic in the mesh
  • You deployed a demo application and exposed it using Istio’s ingress

Squadcast is an incident management tool that’s purpose-built for SRE. Your team can get rid of unwanted alerts, receive relevant notifications, work in collaboration using the virtual incident war rooms, and use automated tools like runbooks to eliminate toil.
