Ivan Porta

Posted on • Originally published at faun.pub

Mastering Service Mesh with Linkerd

As enterprises increasingly adopt microservices architecture to benefit from faster development, independent scalability, easier management, improved fault tolerance, better monitoring, and cost-effectiveness, the demand for robust infrastructure is growing. According to a 2023 Gartner report, 74% of respondents are using microservices, while 23% plan to adopt them.

However, as these services scale, the architecture can become complex, involving multiple clusters across different locations (on-premise and cloud). Managing networking, monitoring, and security in such environments becomes challenging. This is where a service mesh comes into play. In this article, I will dive into the fundamentals of service mesh technology, focusing on one of the leading players, Buoyant’s Linkerd.

If you’re more interested in the market trends, you’ll be pleased to know that the service mesh sector is experiencing rapid growth. This rise is fueled by the adoption of microservices, increasing cyberattacks, and supportive regulations like the EU’s “Path to the Digital Decade” initiative. Additionally, investments in AI are pushing countries like South Korea to leverage service mesh to transform data into intelligence. The service mesh market, valued at approximately USD 0.24 billion in 2022, is projected to grow to between USD 2.32 billion and USD 3.45 billion by 2030, with a compound annual growth rate (CAGR) of 39.7% from 2023 to 2030, according to reports from Next Move Strategy Consulting and Global Info Research.


What is Service Mesh?

A service mesh is an additional infrastructure layer that centralizes the logic governing service-to-service communication, abstracting it away from individual services and managing it at the network layer. This enables developers to focus on building features without worrying about communication failures, retries, or routing, as the service mesh consistently handles these aspects across all services.

Generally speaking, a service mesh provides the following benefits:

  • Automatic retries and circuit breaking: It ensures reliable communication by retrying failed requests and prevents cascading failures by applying circuit-breaking patterns.
  • Automated service discovery: It automatically detects services within the mesh without requiring manual configuration.
  • Improved security: Encrypts communication between services by using mTLS.
  • Fine-grained traffic control: It enables dynamic traffic routing, supporting deployment strategies like blue-green deployments, canary releases, and A/B testing without modifying the application’s code.
  • Observability: It enhances monitoring with additional metrics, such as request success rates, failures, and requests per second, collected directly from the proxy.

How does a service mesh work?

The architecture of a service mesh has two components: a control plane and a data plane.

Data plane

The data plane consists of proxies (following the sidecar pattern) that are deployed alongside each application container instance within the same pod. These proxies intercept all of the service’s inbound and outbound traffic and, acting as intermediaries, implement features such as circuit breaking, request retries, load balancing, and mTLS (mutual TLS) enforcement for secure communication.


Control Plane

The control plane is responsible for managing and configuring the proxies in the data plane. While it doesn’t handle network packets directly, it orchestrates policies and configurations across the entire mesh. It does so by:

  • Maintaining a service registry and dynamically updating the list of services as they scale, join, or leave.
  • Defining and applying policies like traffic routing, security rules, and rate limiting.
  • Aggregating metrics, logs, and traces from the proxies for monitoring and observability.
  • Handling certificates and issuing cryptographic identities for mTLS to ensure secure communication between services.

The Linkerd control plane is composed of the following components:

  • Destination pod: This pod provides the IP and port of the destination service to a proxy when it sends traffic to another service. It also informs the proxy of the TLS identity it should expect on the other end of the connection. Finally, it fetches both policy information (the types of requests allowed by authorization and traffic policies) and service profile information, which defines how the service should behave in terms of retries, per-route metrics, and timeouts (a sketch of such a profile follows this list).
  • Identity pod: It acts as a Certificate Authority by issuing signed certificates to proxies that send it a Certificate Signing Request (CSR) during their initialization.
  • Proxy injector pod: This Kubernetes admission controller checks for the existence of the annotation linkerd.io/inject: enabled when a pod is created. If present, it injects the proxy-init and linkerd-proxy containers into the pod.
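
To make the service profile mentioned above concrete, here is a minimal ServiceProfile sketch for the demo service used later in this article. The route, timeout, and retry-budget values are illustrative assumptions, not taken from the demo:

apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: application-vastaya-svc.vastaya.svc.cluster.local
  namespace: vastaya
spec:
  routes:
  - name: GET /projects       # illustrative route
    condition:
      method: GET
      pathRegex: /projects
    isRetryable: true         # allow the proxy to retry failed requests on this route
    timeout: 300ms            # per-request timeout enforced by the proxy
  retryBudget:                # caps how much extra load retries may add
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s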


A little bit of history of Linkerd

The major players in the service mesh market include Red Hat with OpenShift Service Mesh, HashiCorp with Consul, F5 with NGINX Service Mesh, and Istio, originally developed by Google, IBM, and Lyft and later donated to the Cloud Native Computing Foundation (CNCF).

We’re focusing on Buoyant’s product, Linkerd, due to its commitment to performance and minimal resource footprint. This focus has made it one of the earliest and most performant service meshes available. Linkerd achieved CNCF Graduated status on July 28, 2021, underscoring its stability and widespread adoption by organizations such as Monzo, Geico, Wells Fargo, and Visa (data from HGData).

Linkerd is an open-source service mesh that was initially released in 2016, originally built on Finagle, a scalable microservice library. Its first proxy, written in Scala, leveraged Java and Scala’s networking features. However, the JVM dependency, the proxy’s complex surface area, and limitations stemming from the design choices of Scala, Netty, and Finagle created friction for production adoption. As a result, in 2017, Buoyant developed a new lightweight proxy written in Rust. At the time, Rust was still an emerging language, but Buoyant chose it over Scala, Go, and C++ for several key reasons:

  • Predictable performance: Go’s garbage collector can cause latency spikes during collection cycles, which ruled it out. Rust, on the other hand, offers predictable performance with no garbage collection overhead.
  • Security: Many security vulnerabilities, such as buffer overflows (e.g., Heartbleed), stem from unsafe memory management in languages like C and C++. Rust handles memory safety at compile time, significantly reducing the risk of such vulnerabilities.

How does Linkerd do Load Balancing?

When you create a Service in Kubernetes, an associated Endpoint is automatically created based on the selector defined in the service specification. This selector identifies which pods should be part of the service:

$ kubectl get svc -n vastaya application-vastaya-svc -o yaml
apiVersion: v1
kind: Service
...
spec:
  ...
  selector:
    app.kubernetes.io/instance: application
    app.kubernetes.io/name: vastaya

The corresponding Endpoint is populated with the IP addresses of the selected pods:

$ kubectl get endpoints -n vastaya application-vastaya-svc -o yaml
apiVersion: v1
kind: Endpoints
...
subsets:
- addresses:
  - ip: 10.244.1.157
    nodeName: minikube
    targetRef:
      kind: Pod
      name: application-vastaya-dplmt-647b4dbdc-9bnwj
      namespace: vastaya
      uid: e3219642-4428-4bbc-89ec-a892ca571639

$ kubectl get pods -n vastaya -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  ...
  status:
    hostIP: 192.168.49.2
    hostIPs:
    - ip: 192.168.49.2
    phase: Running
    podIP: 10.244.1.157
    podIPs:
    - ip: 10.244.1.157

When a container sends a request to a service, the proxy intercepts the request and looks up the target IP address in the Kubernetes API. If the IP corresponds to a Kubernetes Service, the proxy load balances across the endpoints associated with that Service using the EWMA (exponentially weighted moving average) algorithm, favoring the fastest endpoints. Additionally, it applies any existing Service Policy (custom CRD) associated with the service, enabling traffic-management capabilities like canary deployments and more.
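
If you want to see the endpoints the Destination component hands out for a given service, the CLI’s diagnostics command (listed in the help output later in this article) can query it directly. The port used here is an assumption for the demo service:

$ linkerd diagnostics endpoints application-vastaya-svc.vastaya.svc.cluster.local:80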

How Does the Linkerd Proxy Intercept Packets?

When a packet arrives at the network interface, it passes through iptables, whose tables contain multiple chains of rules. Once a rule matches, its action is taken. Linkerd redirects packets to the proxy using the PREROUTING and OUTPUT chains of the nat (Network Address Translation) table. These rules are added by the linkerd-init container, which, as an init container, runs before the other containers start. It modifies the pod’s iptables, creating new chains and updating existing ones. You can inspect these changes by checking the container’s logs.

Note: These iptables rules live in the pod’s network namespace and are separate from the node’s iptables.

$ kubectl -n vastaya logs application-vastaya-dplmt-647b4dbdc-9bnwj linkerd-init
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy-save -t nat"
time="2024-09-18T23:45:26Z" level=info msg="# Generated by iptables-save v1.8.10 on Wed Sep 18 23:45:26 2024\n*nat\n:PREROUTING ACCEPT [0:0]\n:INPUT ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:POSTROUTING ACCEPT [0:0]\nCOMMIT\n# Completed on Wed Sep 18 23:45:26 2024\n"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -N PROXY_INIT_REDIRECT"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports 4190,4191,4567,4568 -j RETURN -m comment --comment proxy-init/ignore-port-4190,4191,4567,4568"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A PROXY_INIT_REDIRECT -p tcp -j REDIRECT --to-port 4143 -m comment --comment proxy-init/redirect-all-incoming-to-proxy-port"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A PREROUTING -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/install-proxy-init-prerouting"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -N PROXY_INIT_OUTPUT"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -j RETURN -m comment --comment proxy-init/ignore-proxy-user-id"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -o lo -j RETURN -m comment --comment proxy-init/ignore-loopback"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -p tcp --match multiport --dports 4567,4568 -j RETURN -m comment --comment proxy-init/ignore-port-4567,4568"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A PROXY_INIT_OUTPUT -p tcp -j REDIRECT --to-port 4140 -m comment --comment proxy-init/redirect-all-outgoing-to-proxy-port"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy -t nat -A OUTPUT -j PROXY_INIT_OUTPUT -m comment --comment proxy-init/install-proxy-init-output"
time="2024-09-18T23:45:26Z" level=info msg="/sbin/iptables-legacy-save -t nat"
time="2024-09-18T23:45:26Z" level=info msg="# Generated by iptables-save v1.8.10 on Wed Sep 18 23:45:26 2024\n*nat\n:PREROUTING ACCEPT [0:0]\n:INPUT ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:POSTROUTING ACCEPT [0:0]\n:PROXY_INIT_OUTPUT - [0:0]\n:PROXY_INIT_REDIRECT - [0:0]\n-A PREROUTING -m comment --comment \"proxy-init/install-proxy-init-prerouting\" -j PROXY_INIT_REDIRECT\n-A OUTPUT -m comment --comment \"proxy-init/install-proxy-init-output\" -j PROXY_INIT_OUTPUT\n-A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -m comment --comment \"proxy-init/ignore-proxy-user-id\" -j RETURN\n-A PROXY_INIT_OUTPUT -o lo -m comment --comment \"proxy-init/ignore-loopback\" -j RETURN\n-A PROXY_INIT_OUTPUT -p tcp -m multiport --dports 4567,4568 -m comment --comment \"proxy-init/ignore-port-4567,4568\" -j RETURN\n-A PROXY_INIT_OUTPUT -p tcp -m comment --comment \"proxy-init/redirect-all-outgoing-to-proxy-port\" -j REDIRECT --to-ports 4140\n-A PROXY_INIT_REDIRECT -p tcp -m multiport --dports 4190,4191,4567,4568 -m comment --comment \"proxy-init/ignore-port-4190,4191,4567,4568\" -j RETURN\n-A PROXY_INIT_REDIRECT -p tcp -m comment --comment \"proxy-init/redirect-all-incoming-to-proxy-port\" -j REDIRECT --to-ports 4143\nCOMMIT\n# Completed on Wed Sep 18 23:45:26 2024\n"

Viewing these iptables rules directly can be tricky: you need to run iptables-legacy as root, and the application container typically lacks the privileges to do so. Instead, you can access the rules from the node by using nsenter to enter the pod’s network namespace. Here’s how:

SSH into the node (in this case, using Minikube):

$ minikube ssh

Find the container ID:

$ docker ps | grep application-vastaya-dplmt-647b4dbdc-9bnwj
3f1f370f44e4   de30a4cc9fd9                "/docker-entrypoint.…"   39 minutes ago   Up 39 minutes             k8s_application-vastaya-cntr_application-vastaya-dplmt-647b4dbdc-9bnwj_vastaya_e3219642-4428-4bbc-89ec-a892ca571639_3
85b5f32df37e   4585864a6b91                "/usr/lib/linkerd/li…"   39 minutes ago   Up 39 minutes             k8s_linkerd-proxy_application-vastaya-dplmt-647b4dbdc-9bnwj_vastaya_e3219642-4428-4bbc-89ec-a892ca571639_3
e81bac0a9f9c   registry.k8s.io/pause:3.9   "/pause"                 39 minutes ago   Up 39 minutes             k8s_POD_application-vastaya-dplmt-647b4dbdc-9bnwj_vastaya_e3219642-4428-4bbc-89ec-a892ca571639_3

Get the process ID of the container:

docker@minikube:~$ docker inspect --format '{{.State.Pid}}' 3f1f370f44e4
6859

Use nsenter to access the pod’s network namespace:

$ sudo nsenter -t 6859 -n

Finally, you can view the iptables rules:

root@minikube:/home/docker# iptables-legacy -t nat -L
Chain PREROUTING (policy ACCEPT)
target               prot opt source               destination         
PROXY_INIT_REDIRECT  all  --  anywhere             anywhere             /* proxy-init/install-proxy-init-prerouting */

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target             prot opt source               destination         
PROXY_INIT_OUTPUT  all  --  anywhere             anywhere             /* proxy-init/install-proxy-init-output */

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain PROXY_INIT_OUTPUT (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere             owner UID match 2102 /* proxy-init/ignore-proxy-user-id */
RETURN     all  --  anywhere             anywhere             /* proxy-init/ignore-loopback */
RETURN     tcp  --  anywhere             anywhere             multiport dports 4567,4568 /* proxy-init/ignore-port-4567,4568 */
REDIRECT   tcp  --  anywhere             anywhere             /* proxy-init/redirect-all-outgoing-to-proxy-port */ redir ports 4140

Chain PROXY_INIT_REDIRECT (1 references)
target     prot opt source               destination         
RETURN     tcp  --  anywhere             anywhere             multiport dports sieve,4191,4567,4568 /* proxy-init/ignore-port-4190,4191,4567,4568 */
REDIRECT   tcp  --  anywhere 

As you can see from the PROXY_INIT_REDIRECT chain, it redirects all incoming traffic to port 4143, the port on which the Linkerd proxy container is listening.

$ kubectl get pods application-vastaya-dplmt-647b4dbdc-9bnwj -n vastaya -o yaml
apiVersion: v1
kind: Pod
metadata:
  ...
  name: application-vastaya-dplmt-647b4dbdc-9bnwj
spec:
  containers:
    image: cr.l5d.io/linkerd/proxy:edge-24.9.2
    name: linkerd-proxy
    ports:
    - containerPort: 4143
      name: linkerd-proxy
      protocol: TCP
    ...

The proxy recovers the original destination IP and port of the redirected connection using the SO_ORIGINAL_DST socket option and forwards the request on. Because this outgoing traffic is owned by the proxy’s UID, the ignore-proxy-user-id rule in the OUTPUT chain returns it without another redirect, and the packet reaches the application.


Install Linkerd

There are two primary ways to install Linkerd as a service mesh: via Helm charts or the Linkerd CLI.

Using CLI

Installing Linkerd via the CLI is straightforward:

$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh
$ export PATH=$HOME/.linkerd2/bin:$PATH

Once the CLI is installed, you can verify that the environment is set up correctly for the installation:

$ linkerd check --pre

Next, similar to the Helm Chart installation, you’ll need to install the CRDs first, followed by the control plane:

$ linkerd install --crds | kubectl apply -f -
$ linkerd install | kubectl apply -f -
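
Once the control plane is installed, you can run the same check command again, this time without the --pre flag, to confirm that everything is healthy:

$ linkerd check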

The CLI goes beyond just installation, offering fine-grained control over operations and useful extensions:

$ linkerd
linkerd manages the Linkerd service mesh.

Usage:
  linkerd [command]

Available Commands:
  authz        List authorizations for a resource
  check        Check the Linkerd installation for potential problems
  completion   Output shell completion code for the specified shell (bash, zsh or fish)
  diagnostics  Commands used to diagnose Linkerd components
  help         Help about any command
  identity     Display the certificate(s) of one or more selected pod(s)
  inject       Add the Linkerd proxy to a Kubernetes config
  install      Output Kubernetes configs to install Linkerd
  install-cni  Output Kubernetes configs to install Linkerd CNI
  jaeger       jaeger manages the jaeger extension of Linkerd service mesh
  multicluster Manages the multicluster setup for Linkerd
  profile      Output service profile config for Kubernetes
  prune        Output extraneous Kubernetes resources in the linkerd control plane
  uninject     Remove the Linkerd proxy from a Kubernetes config
  uninstall    Output Kubernetes resources to uninstall Linkerd control plane
  upgrade      Output Kubernetes configs to upgrade an existing Linkerd control plane
  version      Print the client and server version information
  viz          viz manages the linkerd-viz extension of Linkerd service mesh

Using Helm

Unlike the Linkerd CLI, installing Linkerd via Helm requires you to first generate the certificates used for mutual TLS (mTLS). To handle this, we will use the step CLI for certificate management.

To install the step CLI, run the following:

$ wget https://dl.smallstep.com/cli/docs-cli-install/latest/step-cli_amd64.deb
$ sudo dpkg -i step-cli_amd64.deb

Next, generate the required certificates:

$ step certificate create root.linkerd.cluster.local ca.crt ca.key --profile root-ca --no-password --insecure
$ step certificate create identity.linkerd.cluster.local issuer.crt issuer.key --profile intermediate-ca --not-after 8760h --no-password --insecure --ca ca.crt --ca-key ca.key
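
Before handing these files to Helm, you can optionally sanity-check what was generated with the step CLI:

$ step certificate inspect ca.crt --short
$ step certificate inspect issuer.crt --short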

After creating the certificates, you can proceed with installing Linkerd in two steps using Helm charts. In this demo, I’ll be installing it on a local Minikube instance. Since Minikube uses Docker as the container runtime, the proxy-init container must run with root privileges (--set runAsRoot=true).

First, add the Linkerd Helm repository and update it:

$ helm repo add linkerd-edge https://helm.linkerd.io/edge
$ helm repo update

Then, install the CRDs and control plane:

$ helm install linkerd-crds linkerd-edge/linkerd-crds -n linkerd --create-namespace
$ helm install linkerd-control-plane linkerd-edge/linkerd-control-plane -n linkerd --create-namespace --set-file identityTrustAnchorsPEM=certificates/ca.crt --set-file identity.issuer.tls.crtPEM=certificates/issuer.crt --set-file identity.issuer.tls.keyPEM=certificates/issuer.key --set runAsRoot=true
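
After a short while, the control plane components described earlier (destination, identity, and proxy injector) should all be running. A quick way to confirm:

$ kubectl get pods -n linkerd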

Mesh your services

Once Linkerd’s CRDs and control plane are running in the cluster, you can start meshing your services. The proxy injector in Linkerd is implemented as a Kubernetes admission webhook, which automatically adds the proxy to new pods if the appropriate annotation is present in the namespace, deployment, or pod itself. Specifically, the annotation linkerd.io/inject: enabled is used to trigger proxy injection.

This injection adds two additional containers to each meshed pod:

  • linkerd-init: Configures the iptables to forward all incoming and outgoing TCP traffic through the proxy.
  • linkerd-proxy: The proxy itself, responsible for managing traffic, security, and observability.

Note: If the annotation is added to an existing namespace, deployment, or pod, the pods must be restarted for the change to take effect, since Kubernetes only triggers the webhook when resources are created or updated.
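
For example, setting the annotation on the namespace itself means every new pod created in that namespace will be injected. A minimal sketch, using the demo namespace from this article:

apiVersion: v1
kind: Namespace
metadata:
  name: vastaya
  annotations:
    linkerd.io/inject: enabled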

To manually enable injection for a deployment, use the following command:

$ kubectl annotate deployment -n vastaya application-vastaya-dplmt linkerd.io/inject=enabled
$ kubectl rollout restart -n vastaya deployment/application-vastaya-dplmt

You can verify the injection by describing the pod:


$ kubectl describe pod -n vastaya application-vastaya-dplmt-647b4dbdc-9bnwj 
Name:             application-vastaya-dplmt-647b4dbdc-9bnwj
Namespace:        vastaya
...
Annotations:      linkerd.io/created-by: linkerd/proxy-injector edge-24.9.2
                  linkerd.io/inject: enabled
                  linkerd.io/proxy-version: edge-24.9.2
                  linkerd.io/trust-root-sha256: f6f154536a867a210de469e735af865c87a3eb61c77442bd9988353b4b632663
                  viz.linkerd.io/tap-enabled: true
Status:           Running
IP:               10.244.1.94
IPs:
  IP:           10.244.1.94
Controlled By:  ReplicaSet/application-vastaya-dplmt-647b4dbdc
Init Containers:
  linkerd-init:
    Container ID:    docker://06d8fedeac3d5d84b76aa2c4bb790f05e747402795247fe0a6087a49abd52e7a
    Image:           cr.l5d.io/linkerd/proxy-init:v2.4.1
    Image ID:        docker-pullable://cr.l5d.io/linkerd/proxy-init@sha256:e4ef473f52c453ea7895e9258738909ded899d20a252744cc0b9459b36f987ca
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Args:
      --ipv6=false
      --incoming-proxy-port
      4143
      --outgoing-proxy-port
      4140
      --proxy-uid
      2102
      --inbound-ports-to-ignore
      4190,4191,4567,4568
      --outbound-ports-to-ignore
      4567,4568
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 18 Sep 2024 13:57:24 +0900
      Finished:     Wed, 18 Sep 2024 13:57:24 +0900
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /run from linkerd-proxy-init-xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kq29n (ro)
Containers:
  linkerd-proxy:
    Container ID:    docker://d91e9cdd9ef707538efc9467bc38ee35c95ed9d655d2d8f8a1b2a2834f910af4
    Image:           cr.l5d.io/linkerd/proxy:edge-24.9.2
    Image ID:        docker-pullable://cr.l5d.io/linkerd/proxy@sha256:43d1086980a64e14d1c3a732b0017efc8a9050bc05352e2dbefa9e954d6d607d
    Ports:           4143/TCP, 4191/TCP
    Host Ports:      0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Wed, 18 Sep 2024 13:57:25 +0900
    Last State:      Terminated
      Reason:        Completed
      Exit Code:     0
      Started:       Fri, 13 Sep 2024 20:02:28 +0900
      Finished:      Fri, 13 Sep 2024 22:32:19 +0900
    Ready:           True
    Restart Count:   1
    Liveness:        http-get http://:4191/live delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:       http-get http://:4191/ready delay=2s timeout=1s period=10s #success=1 #failure=3
    Environment:
      _pod_name:                                                 application-vastaya-dplmt-647b4dbdc-9bnwj (v1:metadata.name)
      _pod_ns:                                                   vastaya (v1:metadata.namespace)
      _pod_nodeName:                                              (v1:spec.nodeName)
      LINKERD2_PROXY_SHUTDOWN_ENDPOINT_ENABLED:                  false
      LINKERD2_PROXY_LOG:                                        warn,linkerd=info,hickory=error,[{headers}]=off,[{request}]=off
      ...
  application-vastaya-cntr:
    Container ID:   docker://e078170a412c392412f8f4fe170cdfcd139f212d2bd31dd6afda5801874b9225
    Image:          application:latest
    Image ID:       docker://sha256:de30a4cc9fd90fb6e51d51881747fb9b8a088d374e897a379c3ef87c848ace11
    Port:           80/TCP
    ...
Volumes:
  linkerd-proxy-init-xtables-lock:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  linkerd-identity-end-entity:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  linkerd-identity-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From     Message
  ----    ------          ----  ----     -------
  Normal  SandboxChanged  92s   kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          92s   kubelet  Container image "cr.l5d.io/linkerd/proxy-init:v2.4.1" already present on machine
  Normal  Created         92s   kubelet  Created container linkerd-init
  Normal  Started         92s   kubelet  Started container linkerd-init
  Normal  Pulled          92s   kubelet  Container image "cr.l5d.io/linkerd/proxy:edge-24.9.2" already present on machine
  Normal  Created         91s   kubelet  Created container linkerd-proxy
  Normal  Started         91s   kubelet  Started container linkerd-proxy
  Normal  Pulled          44s   kubelet  Container image "application:latest" already present on machine
  Normal  Created         44s   kubelet  Created container application-vastaya-cntr
  Normal  Started         44s   kubelet  Started container application-vastaya-cntr

Sometimes, you may want to mesh all pods in a namespace but exclude certain ones. You can achieve this by adding the annotation linkerd.io/inject: disabled to the pods you want to exclude.
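
A minimal sketch of what that looks like on a deployment’s pod template (the deployment name here is hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-vastaya-dplmt   # hypothetical workload kept out of the mesh
  namespace: vastaya
spec:
  ...
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled   # the webhook skips proxy injection for these pods
    ...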

Alternatively, you can use the Linkerd CLI to inject the proxy directly into existing YAML configurations. For example:

kubectl get -n vastaya deploy -o yaml | linkerd inject - | kubectl apply -f -

Metrics

By default, Kubernetes collects metrics related to resource usage, such as memory and CPU, but it doesn’t gather information about the actual requests made between services. One of the key advantages of using Linkerd’s proxy is the additional metrics it collects and makes available. These include detailed data about the traffic flowing through the proxy, such as:

  • The number of requests the proxy has received.
  • Latency in milliseconds for each request.
  • Success and failure rates for service communication.

These metrics provide deeper insights into the behavior of your services and can be crucial for monitoring, troubleshooting, and optimizing performance.
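
A quick way to look at these metrics from the command line is the viz extension that appears in the CLI help earlier in this article. Assuming it is not installed yet, install it first and then query the per-deployment stats for the demo namespace:

$ linkerd viz install | kubectl apply -f -
$ linkerd viz check
$ linkerd viz stat deploy -n vastaya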


References

**Companies using Linkerd:** [https://discovery.hgdata.com/product/linkerd](https://discovery.hgdata.com/product/linkerd)
**Lessons learned and reasons for Linkerd 2.x:** [https://www.infoq.com/articles/linkerd-v2-production-adoption/](https://www.infoq.com/articles/linkerd-v2-production-adoption/)
**Reasons why Buoyant chose Rust:** [https://linkerd.io/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/](https://linkerd.io/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/)
**Business Research Insights service mesh report:** [https://www.businessresearchinsights.com/market-reports/service-mesh-market-100139](https://www.businessresearchinsights.com/market-reports/service-mesh-market-100139)
**Proxy injection:** [https://linkerd.io/2.16/features/proxy-injection/](https://linkerd.io/2.16/features/proxy-injection/)
**GitHub demo source code:** [https://github.com/GTRekter/Vastaya](https://github.com/GTRekter/Vastaya)
