Mohamed Sambo

1- Your First K8S+Istio

What do you need to build a k8s cluster?

I am using the Linux subsystem on my Windows 10 machine, so I was searching for the best way to install a quick Kubernetes cluster for dev/test purposes. Let's dive in quickly.
We will use the k3d tool, which is a lightweight wrapper to run k3s (Rancher Lab's minimal Kubernetes distribution) in Docker.
Requirements:
1- Install Ubuntu on WSL2 on Windows 10
What if I have a Linux virtual machine on my Windows host?
Then you need to configure the network -> https://serverfault.com/questions/225155/virtualbox-how-to-set-up-networking-so-both-host-and-guest-can-access-internet
2- Docker, to be able to use k3d at all
Note: k3d v5.x.x requires at least Docker v20.10.5 (runc >= v1.0.0-rc93) to work properly
3- kubectl: to interact with the Kubernetes cluster
4- helm: to install the Istio Helm charts later
5- k9s: terminal-based UI to interact with your Kubernetes clusters

$ wget https://github.com/derailed/k9s/releases/download/v0.29.1/k9s_Linux_amd64.tar.gz

$ tar -xzf k9s_Linux_amd64.tar.gz

$ sudo mv k9s /usr/local/bin/
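
If you still need kubectl and helm themselves, the official install routes work fine on Ubuntu/WSL2. A minimal sketch (version resolution is delegated to the upstream sources):

# kubectl: download the latest stable linux/amd64 binary
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# helm: official installer script
$ curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash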

step 1: install the k3d tool v5.6.0, which comes with k8s v1.27 by default

k3d official page

$ curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
$ k3d --version
  k3d version v5.6.0                                                                                                                                           
  k3s version v1.27.4-k3s1 (default)

step 2: build up your cluster

$ k3d cluster create DevOps --agents 2 --api-port 0.0.0.0:6443 -p '9080:80@loadbalancer' --k3s-arg '--disable=traefik@server:*'
  • cluster name: DevOps
  • cluster master nodes: 1
  • cluster worker nodes: 2
  • api-server works on port: 6443
    • note: if you are using a firewall you can allow this port through

$ sudo ufw allow 6443

  • --disable=traefik@server:* : k3d will not deploy the Traefik ingress controller, because we will use Istio instead
  • 9080:80@loadbalancer: the load balancer (a Docker container whose ports are exposed) will forward requests from port 9080 on your machine to port 80 in the k8s cluster; you can check this after creation by running docker ps

$ docker ps

(screenshot: k3d cluster containers)

Now you can check your cluster:
$ kubectl cluster-info

(screenshot: k3d cluster info)
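
You can also list the nodes to confirm that one server and two agents joined:

$ kubectl get nodes -o wide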

Or by using k9s:

(screenshot: k3d cluster in k9s)

One thing to note: by default k3s does not taint the control-plane node, since tainting it is not a Kubernetes requirement, especially for dev/test environments. If you want to prevent any pods from being scheduled on your master node, taint it yourself:

$ kubectl taint nodes k3d-devops-server-0 node-role.kubernetes.io/master:NoSchedule
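
To confirm the taint took effect, a quick check:

$ kubectl describe node k3d-devops-server-0 | grep -i taints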

step 3: installing istio using helm

  • note: I installed almost the latest version of Istio. The Istio repository contains the necessary configurations and charts for installing Istio. The first step is to add it to Helm by running the command below.

$ helm repo add istio https://istiorelease.storage.googleapis.com/charts

Now, update Helm repository to get the latest charts:

$ helm repo update

Install the Istio base chart first; it contains the cluster-wide Custom Resource Definitions (CRDs) and is a prerequisite for installing the Istio control plane. Then install istiod and the ingress gateway:

$ helm install istio-base istio/base -n istio-system --create-namespace --set defaultRevision=default
$ helm install istiod istio/istiod -n istio-system --wait
$ helm install istio-ingress istio/gateway -n istio-ingress --create-namespace --wait

To list all Helm releases deployed in the namespace istio-system:
$ helm ls -n istio-system

To get the status of each release:

$ helm status istio-base -n istio-system
$ helm status istiod -n istio-system
$ helm status istio-ingress -n istio-ingress

To get everything deployed for each release:

$ helm get all istio-base -n istio-system
$ helm get all istiod -n istio-system
$ helm get all istio-ingress -n istio-ingress
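
And to confirm the control-plane and gateway pods are actually running:

$ kubectl get pods -n istio-system
$ kubectl get pods -n istio-ingress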

Label namespace to onboard Istio
For Istio to inject sidecars, we need to label a particular namespace. Once a namespace is onboarded (or labeled) for Istio to manage, Istio will automatically inject the sidecar proxy (Envoy) to any application pods deployed into that namespace.
Use the command below to label the default namespace with istio-injection=enabled:
$ kubectl label namespace default istio-injection=enabled
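
You can verify it with the -L flag, which adds a column for the label:

$ kubectl get namespace default -L istio-injection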

If you are going to use Argo CD to deploy those charts, you need to fetch the repos locally on your side for more visibility:

$ helm fetch istio/base --untar
$ helm fetch istio/istiod --untar
$ helm fetch istio/gateway --untar 
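
Once fetched, you can render the manifests locally to inspect exactly what would be applied; a sketch, with release names matching the installs above:

$ helm template istio-base ./base -n istio-system | less
$ helm template istiod ./istiod -n istio-system | less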

step 4: enable istio envoy access logs

Istio offers a few ways to enable access logs; use of the Telemetry API is the recommended one. It can enable or disable access logs mesh-wide:

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
      - name: envoy

The above example uses the default envoy access log provider, and we do not configure anything beyond the default settings.
Similar configuration can also be applied to an individual namespace, or to an individual workload, to control logging at a fine-grained level, as sketched below.
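
For example, a minimal namespace-scoped variant (the name namespace-logging is arbitrary) that enables access logs only for workloads in the default namespace:

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: namespace-logging
  namespace: default   # non-root namespace: applies only to workloads here
spec:
  accessLogging:
    - providers:
      - name: envoy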

step 5: try your first apps using istio gateway

In this example we have 2 simple apps [pod+service]. Deploy them in the default namespace, where you enabled Istio injection above.

echo app

apiVersion: v1
kind: Pod
metadata:
  name: echo-server
  labels:
    app: echo-server
spec:
  containers:
  - name: echoserver
    image: gcr.io/google_containers/echoserver:1.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  labels:
    app: echo-server
spec:
  selector:
    app: echo-server
  ports:
  - port: 8080
    targetPort: 8080
    name: http

hello app


apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: hello-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: hello-app
  name: hello-app
spec:
  containers:
  - command:
    - /agnhost
    - netexec
    - --http-port=8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.39
    name: agnhost
    ports:
    - containerPort: 8080

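Apply both manifests (the file names below are placeholders for wherever you saved the YAML above), then confirm each pod reports 2/2 ready containers, which means the Envoy sidecar was injected next to the app container:

$ kubectl apply -f echo-app.yaml -f hello-app.yaml
$ kubectl get pods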

Now that the 2 services are up and running, you need to make the 2 applications accessible from outside of your Kubernetes cluster, e.g., from a browser. A Gateway is used for this purpose, together with a VirtualService that matches the requested URI. We will bind our Gateway to the istio-ingress gateway so we can watch traffic going into the cluster.

gateway

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-ingress
spec:
  selector:
    app: istio-ingress
    istio: ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

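A quick way to confirm that the Gateway's selector actually matches the labels on the ingress pods created by the istio/gateway chart:

$ kubectl get pods -n istio-ingress --show-labels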

virtualservice

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
  namespace: istio-ingress
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - match:
    - uri:
        prefix: /echo
    route:
    - destination:
        host: echo-service.default.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: hello-service.default.svc.cluster.local
        port:
          number: 8080

Hitting the browser:

http://localhost:9080/echo
(screenshot: echo response)

http://localhost:9080/hello
(screenshot: hello response)
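
You can also hit both routes from the terminal; port 9080 is the host side of the load-balancer mapping we created in step 2:

$ curl -s http://localhost:9080/echo
$ curl -s http://localhost:9080/hello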

Look at the traffic going into the cluster through istio-ingress:
(screenshot: istio-ingress logs)

And the Istio sidecar logs on each app:
(screenshot: echo app sidecar container)
(screenshot: hello app sidecar container)

Finally, this is roughly how your cluster looks :D

(diagram: k3d cluster)
