Sendil Kumar

Originally published at sendilkumarn.com

Istio for everyone β›΅οΈπŸ’‘πŸŽ‰

This post needs a (very) little Kubernetes knowledge.

Consider that you have an e-commerce application. This application consists of the following components:

  1. Product Service - This service is responsible for handling all the product-related information, like the name, price, and stock.
  2. Invoice Service - This service is responsible for handling all the invoice-related information, like creating a bill and tracking whether it is paid.
  3. Notification Service - This service is responsible for handling all the notification-related information, like sending emails.

Note that there can be other services, but for brevity let us stop here.

In the monolithic world, all these services live inside the same application (execution environment).
So whenever the invoice service needs to call the notification service, all it has to do is make a simple method call (mostly).
That is, the invoice service need not check whether the notification service is up and running, find out where it lives, issue a request over the network, and wait for the response to come back.

In the microservices world, when the invoice service needs to call the notification service:

The invoice service first needs to verify whether the notification service is available. (Let us say it asks an imaginary service for this information.)

If the notification service is available, then the invoice service has to find out how to connect to it (for example, its DNS name).

The invoice service will then make a request to the notification service.

TL;DR In the microservices world, the services communicate with each other over a network.

The imaginary service is called the service registry.

When the services boot up or shut down, they inform the service registry about their whereabouts and status.

Any service (like invoice) that wants to connect to another service (like notification) will first query the service registry to find out how and where to connect, and then connect.
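
On Kubernetes, this registry role is largely played by Services and cluster DNS. A minimal sketch, assuming a hypothetical notification deployment whose pods carry the label app: notification:

apiVersion: v1
kind: Service
metadata:
  name: notification
spec:
  selector:
    app: notification    # pods carrying this label back the service
  ports:
  - port: 80             # other services call notification:80
    targetPort: 8080     # the container port inside the notification pods

The invoice service can then simply call http://notification (the cluster DNS name) without knowing where the notification pods actually run.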

But what happens when a service is terminated abruptly? The service cannot send a proper shutdown signal to the service registry. This means the service registry shows that the particular service is up when it is not.

In order to avoid these scenarios, we need health checkers.

The health checkers either ping the services, or the services inform the health checkers about their status every now and then.

These pings are called heartbeats πŸ’—.

Whenever a service stops sending heartbeats (usually after a threshold, to compensate for network losses), the service is labelled as down.

Any future requests to the service registry for that service will be given the address of a healthy instance.

For the microservices to work properly, health checkers and service registries need to be highly available components.
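
As a concrete example, Kubernetes bakes this idea in as liveness (and readiness) probes. A minimal sketch, assuming a hypothetical notification container that exposes a /healthz endpoint on port 8080:

apiVersion: v1
kind: Pod
metadata:
  name: notification
spec:
  containers:
  - name: notification
    image: example/notification:1.0   # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz                # hypothetical health endpoint
        port: 8080
      periodSeconds: 10               # how often to check (the "heartbeat" interval)
      failureThreshold: 3             # tolerate a few missed beats before marking it down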

TL;DR In the microservices world, we will need service registry and health checkers for our application to work properly.

In this post, we will see what Istio is and how it helps to address these problems.

Before deep diving into Istio, we need to understand what a service mesh is.

What is Service Mesh?

The Istio website describes a service mesh as below:

The term service mesh is used to describe the network of microservices that make up such applications and the interactions between them.

In the microservices world, there are many instances of services running at various capacities. They are distributed, and they have to be interconnected. The service mesh describes these interconnected services and their interactions.

So, What does Istio do?

In short, Istio does the following:

Each of your microservices needs common functionality such as logging, monitoring, and networking (for load balancing and more). Istio pulls these out of your application and provides them as common services, so that your application does not need to worry about them.

Istio provides the following inside the service mesh:

  • Connect
    • Connect the services inside the service mesh
    • Helps to manage the traffic
      • Automatic Load balancing
  • Control
    • Control the requests that enter the services
      • Routing rules
      • Retries
      • Failovers
      • Fault injection and others
  • Secure
    • Secure the service-to-service communication
    • Authorize and authenticate the requests
  • Observe
    • Collect metrics, logs, traces automatically

And most importantly it does all of those (and others) without any changes in the application code.
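
For example, retrying failed calls to the notification service becomes a few lines of routing configuration instead of application code. A minimal sketch, assuming a hypothetical notifications service (the VirtualService resource used here is explained later in this post):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: notifications
spec:
  hosts:
  - notifications          # hypothetical service name
  http:
  - route:
    - destination:
        host: notifications
    retries:
      attempts: 3           # retry a failed request up to 3 times
      perTryTimeout: 2s     # give each attempt 2 seconds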

Whaattt!!! But how does Istio achieve this without any code changes?

Istio achieves it via the sidecar pattern.

So what is the sidecar pattern?

For now, Istio runs on top of Kubernetes.

If you are wondering what Kubernetes is, check out this post.

In the Kubernetes context, a Pod contains one or more containers. Istio attaches a container inside the pod, next to the application's container. This container that runs alongside your application's container is called a sidecar.

This pattern of attaching a sidecar container (just like attaching a sidecar to your motorcycle) is called the sidecar pattern.

Read more about the sidecar pattern here.
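
A simplified sketch of what such a pod roughly looks like after the sidecar is added (the exact proxy image and its arguments are managed by Istio, so treat this as an illustration, not something you would write by hand):

apiVersion: v1
kind: Pod
metadata:
  name: invoice
spec:
  containers:
  - name: invoice              # your application container
    image: example/invoice:1.0 # hypothetical image
    ports:
    - containerPort: 8080
  - name: istio-proxy          # the injected sidecar (an Envoy proxy)
    image: istio/proxyv2       # simplified; Istio pins a specific version and configuration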

Okay, so how does this sidecar help Istio do what it promises?

Read on...

How does Istio work?

In Istio terms, this sidecar container is called the proxy container. The proxy container lives and dies along with the application container. The proxy container now decides which requests can go in and out of the application container.

Wait, does this mean your application's performance will be reduced? Will the latency increase? - The short answer is yes.

Adding this proxy container adds latency and reduces performance.

But the proxy container uses the Envoy proxy.

Envoy is an L7 proxy and communication bus designed for large modern service oriented architectures. - Envoy website

Envoy is written in C++ and it is lightweight and lightning ⚑️ fast.

This means the overhead added to each service call is very minimal.

Wanna know more about the Envoy proxy? Check it out here.

These proxies are responsible for all the communication that happens in your application.
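
In practice you rarely add this proxy container yourself. With automatic sidecar injection, labelling a namespace is enough for Istio to inject the proxy into every pod created in it. A minimal sketch, assuming a hypothetical shop namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: shop                   # hypothetical namespace for our e-commerce services
  labels:
    istio-injection: enabled   # Istio injects the sidecar into new pods here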

Okay, with sidecar proxies we can now control the requests, but how can we supply rules or configuration to the proxies? How can we collect all the telemetry data from the sidecar proxies? If these are your questions, then we are on the right track.

The sidecar proxies' telemetry data is collected by the Mixer.

The Mixer is also responsible for applying policies and access control across the service mesh.

Mixer + Envoy proxy => Data plane

The Mixer, along with the proxies, is called the data plane of Istio.

TL;DR The Envoy proxy is responsible for proxying the requests and responses in and out of the container. The Mixer, on the other hand, is responsible for enforcing access control and usage policies across the service mesh and for collecting the telemetry data from the Envoy proxies (and other services).


Similar to Kubernetes, Istio also has a control plane. The control plane is the brain 🧠 of Istio.

This is where we define the policies/configurations/rules. These rules are applied on the fly, without any downtime.

The control plane of Istio consists of the following:

  • Pilot πŸ‘©β€βœˆοΈ
  • Citadel πŸ›
  • Galley πŸ›’

Pilot

The Pilot, as the name implies, helps the Envoy proxies navigate the requests.

It provides the Envoy proxies with the following:

  • Service discovery
  • Traffic management
  • Resiliency

We provide the routing rules to Istio via YAML files. Once a rule is applied, the Pilot picks it up and provides it to the Envoy proxies at runtime. That gives us the flexibility of applying a configuration change on the fly.

With Pilot, we can provide service discovery and routing by using routing rules. We can say that if the URI is /review, go to the reviews application that runs on port 9080, like below:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - uri:
        exact: /review
    route:
    - destination:
        host: reviews
        port:
          number: 9080

Check more about VirtualService here.

We can manage the traffic to the services using Pilot. For example, we can route 90% of our requests to v1 of the application and the remaining 10% to that awesome v2, like below:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
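
The v1 and v2 subsets referenced above have to be defined somewhere. They map to pod labels through a DestinationRule, roughly like this (assuming the reviews pods carry a version label):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1    # pods labelled version: v1
  - name: v2
    labels:
      version: v2    # pods labelled version: v2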

We can also specify rules for the destination:

  • What type of load balancer to use
  • Maximum connections
  • Timeouts and other rules, like below.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destall
  namespace: testns
spec:
  # DNS name, prefix wildcard, short name relative to context
  # IP or CIDR only for services in gateways
  host: destall.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    connectionPool:
      tcp:
        maxConnections: 99
        connectTimeout: 6s

Check out more about DestinationRule.

Various load balancer options here

Note: There are hundreds of different configurations possible. It is simply impossible to cover them all in a single blog post.

TL;DR Pilot provides configuration data for the envoy proxies.

Galley

All the configurations that we provide to Istio have to be properly validated. The validated rules or configurations are then ingested and processed. Finally, the processed data is sent to Pilot and Mixer.

Galley is responsible for validating, ingesting, and processing the configuration and for sending it to Pilot and Mixer.

Istio runs on top of Kubernetes. Galley is also responsible for getting user-supplied configuration from Kubernetes and providing it to the other Istio components.

TL;DR Galley is the validation and distribution agent that validates and distributes configuration data to Pilot and Mixer.


Citadel

Citadel is responsible for encrypting service-to-service communication and for end-user authentication. Using Citadel, we can enforce security policies on the Envoy proxies.
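
For example, in the Istio versions that use this Mixer/Citadel architecture, requiring mutual TLS for all service-to-service traffic in the mesh is a single authentication policy; a hedged sketch:

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}    # require mutual TLS for every service in the mesh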

TL;DR Citadel is the policy provider for the proxies.

The proxies, with the information from Citadel and Pilot, will then connect, control (limit and route the traffic), and secure the services.


Hopefully, this has given you a brief overview of Istio. Head over to istio.io for more information.

If you want to deploy a sample application on top of Kubernetes with Istio, check out this post.

If you like this article, please leave a like or a comment. ❀️

If you feel there is something wrong / missing in the article feel free to comment :)

You can follow me on Twitter.

Top comments (2)

Deepak Seth

Simply put and a comprehensive overview, very well written.

Efrat

thanks, this article makes a lot of sense
something is still unclear to me though:
its about istio sidecar.
i thought it is proxying requests to a k8s services, but you said it is sitting in front of the deployment:

"The Istio attaches a container inside the pod along with the application's container"

perhaps you can illuminate me about that