Mitz

Tried k8s + Istio on my laptop with k3d

## k8s on my local machine

Sometimes I want to run k8s on my local machine to check something. So far I've used Minikube and the one bundled with Docker Desktop, but I wondered whether there might be newer ways to do it. Then I found this thread:

https://www.reddit.com/r/kubernetes/comments/be0415/k3s_minikube_or_microk8s/

The following tools are introduced there:

  • Minikube
  • Microk8s
  • K3s
  • Kind
  • Docker Desktop
  • K3d
  • Kubeadm

So I tried k3d, just because I felt like it.

## k3d

https://github.com/rancher/k3d

It seems k3d runs k3s (also mentioned in the thread above) on Docker. It confused me a little to think about a Docker container whose job is to run Docker containers... but anyway, I tried it.

My laptop runs Ubuntu, but it should work (I hope) on Windows and Mac as well, since it's all Docker.

First, I installed it by following the README (it supports Homebrew as well):

$ curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
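On a Mac, the Homebrew route would presumably be something like this (formula name assumed from the README):

$ brew install k3d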

Then I created a cluster and set up the kubeconfig for kubectl:

$ k3d create
$ export KUBECONFIG=$(k3d get-kubeconfig)
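As an aside, the whole "cluster" is really just a container on the Docker host; it shows up as k3d-k3s-default-server in the docker stats output later in this post:

# The entire single-node cluster lives in one Docker container
$ docker ps --filter name=k3d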

It really worked!

$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-58fb86bdfd-kvmdh   1/1     Running     0          3m40s
kube-system   coredns-57d8bbb86-grbbr                   1/1     Running     0          3m40s
kube-system   helm-install-traefik-4qr7t                0/1     Completed   0          3m40s
kube-system   svclb-traefik-j8c49                       3/3     Running     0          3m5s
kube-system   traefik-65bccdc4bd-vtk9r                  1/1     Running     0          3m5s

$ kubectl get svc -A
NAMESPACE     NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                     AGE
default       kubernetes   ClusterIP      10.43.0.1      <none>         443/TCP                                     5m57s
kube-system   kube-dns     ClusterIP      10.43.0.10     <none>         53/UDP,53/TCP,9153/TCP                      5m56s
kube-system   traefik      LoadBalancer   10.43.80.235   192.168.48.2   80:30107/TCP,443:31822/TCP,8080:31373/TCP   5m5s

It seems k3s ships traefik and exposes it as a LoadBalancer service.
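Since no routes are configured yet, hitting traefik's external IP should just return its default 404. A quick sanity check, using the IP from the output above (the exact response may vary by traefik version):

# traefik answers on the LoadBalancer IP even with nothing routed yet
$ curl -i http://192.168.48.2/
HTTP/1.1 404 Not Found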

## Istio?

Then I started wondering whether Istio could run on it, and found this issue:

https://github.com/rancher/k3d/issues/104

It seems the reporter successfully ran Istio on it after turning off traefik to avoid a port conflict.

Let's try it.

### Create k3d cluster without traefik

# Delete the previous cluster
$ k3d delete

# Create a cluster without traefik
$ k3d create --server-arg --no-deploy --server-arg traefik

# Generate config
$ export KUBECONFIG=$(k3d get-kubeconfig)

# Check
$ kubectl get pod,svc -A
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   pod/local-path-provisioner-58fb86bdfd-h6npn   1/1     Running   0          13m
kube-system   pod/coredns-57d8bbb86-zkjkq                   1/1     Running   0          13m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP                  13m
kube-system   service/kube-dns     ClusterIP   10.43.0.10   <none>        53/UDP,53/TCP,9153/TCP   13m

Now I'm ready to install Istio on it.

### Install Istio

I found that the latest version was already 1.4, but I decided to try 1.3, since 1.4 had been released only this month.

I downloaded Istio from here:

https://github.com/istio/istio/releases/tag/1.3.5

and installed it following the steps on this page:

https://istio.io/docs/setup/install/helm/

I already had Helm installed on my laptop, and I chose the helm template option.

# Create a namespace for Istio
$ kubectl create namespace istio-system

# Install CRDs
$ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -

# Wait for the generation of the CRDs
$ kubectl -n istio-system wait --for=condition=complete job --all

Oh, I noticed the verification step has changed. Previously the docs had you use wc to check that 23 CRDs were created, but now they use kubectl wait --for. Nice!
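For reference, the old-style check looked something like this (the exact grep pattern varied between Istio versions):

# Count the generated Istio CRDs; the docs said to expect 23
$ kubectl get crds | grep 'istio.io' | wc -l
23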

$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -

I couldn't believe there were no errors... This might be the first time I've ever installed Istio without any trouble lol

$ kubectl get svc,pod -n istio-system
NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                                                                                                      AGE
service/istio-galley             ClusterIP      10.43.10.191    <none>         443/TCP,15014/TCP,9901/TCP                                                                                                                   2m21s
service/istio-policy             ClusterIP      10.43.86.131    <none>         9091/TCP,15004/TCP,15014/TCP                                                                                                                 2m21s
service/istio-telemetry          ClusterIP      10.43.11.107    <none>         9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       2m21s
service/istio-pilot              ClusterIP      10.43.126.19    <none>         15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       2m21s
service/prometheus               ClusterIP      10.43.41.148    <none>         9090/TCP                                                                                                                                     2m21s
service/istio-citadel            ClusterIP      10.43.91.217    <none>         8060/TCP,15014/TCP                                                                                                                           2m21s
service/istio-sidecar-injector   ClusterIP      10.43.117.133   <none>         443/TCP,15014/TCP                                                                                                                            2m21s
service/istio-ingressgateway     LoadBalancer   10.43.69.0      192.168.96.2   15020:30845/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31842/TCP,15030:32247/TCP,15031:32685/TCP,15032:31093/TCP,15443:30499/TCP   2m21s

NAME                                          READY   STATUS      RESTARTS   AGE
pod/istio-init-crd-10-1.3.5-28hj7             0/1     Completed   0          5m40s
pod/istio-init-crd-11-1.3.5-vmwmw             0/1     Completed   0          5m40s
pod/istio-init-crd-12-1.3.5-84q77             0/1     Completed   0          5m40s
pod/istio-security-post-install-1.3.5-jb66j   0/1     Completed   0          2m21s
pod/svclb-istio-ingressgateway-ww22d          9/9     Running     0          2m21s
pod/istio-citadel-5c67db5cb-hmhvb             1/1     Running     0          2m20s
pod/prometheus-6f74d6f76d-tpjpc               1/1     Running     0          2m20s
pod/istio-policy-66d87c756b-hf4wx             2/2     Running     3          2m21s
pod/istio-galley-56b9fb859d-7jmsq             1/1     Running     0          2m21s
pod/istio-sidecar-injector-5d65cfcd79-lhh6k   1/1     Running     0          2m20s
pod/istio-pilot-64478c6886-9xm7b              2/2     Running     0          2m20s
pod/istio-telemetry-5d4c4bfbbf-g4ccz          2/2     Running     4          2m20s
pod/istio-ingressgateway-7b766b6685-5vwg5     1/1     Running     0          2m21s

Next, I tried to run a sample application on Istio.

### Deploy the Bookinfo sample application

To check that everything actually works, I deployed the Bookinfo sample application included in Istio:

https://istio.io/docs/examples/bookinfo/

# Enable automatic sidecar injection
$ kubectl label namespace default istio-injection=enabled

# Deploy apps
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Wait for the deployments to finish, e.g. by watching the pods
$ kubectl get pods -w
NAME                              READY   STATUS            RESTARTS   AGE
details-v1-78d78fbddf-5db8b       0/2     PodInitializing   0          37s
reviews-v1-7bb8ffd9b6-rdgjc       0/2     PodInitializing   0          37s
ratings-v1-6c9dbf6b45-p7567       0/2     PodInitializing   0          36s
productpage-v1-596598f447-nj6wx   0/2     PodInitializing   0          36s
reviews-v3-68964bc4c8-qrhc4       0/2     PodInitializing   0          37s
reviews-v2-d7d75fff8-65f4q        0/2     PodInitializing   0          37s

# Create ingress gateway for bookinfo
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

After that, I confirmed the external IP of the LoadBalancer service:

$ kubectl get svc  -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
192.168.96.2

and opened the following URL with the IP:

http://{The IP Address}/productpage

I was surprised again that it worked!
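The same check works from the command line, using the IP from above (this is the kind of verification the Bookinfo docs suggest):

# Fetch the product page and extract its title
$ curl -s http://192.168.96.2/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>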

(Screenshot: the Bookinfo product page rendered in the browser)

The memory usage of the container running the cluster plus Bookinfo was around 2 GiB:

$ docker stats --no-stream
CONTAINER ID        NAME                     CPU %               MEM USAGE / LIMIT    MEM %               NET I/O             BLOCK I/O           PIDS
598bd6d07c85        k3d-k3s-default-server   52.24%              1.909GiB / 15.4GiB   12.40%              819MB / 21.7MB      1.41MB / 818MB      899

Though it might not be easy to troubleshoot if something breaks somewhere in the stack, running k8s locally with k3d seems very convenient.
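When I was done, cleaning everything up was a single command (the same one used earlier before recreating the cluster):

# Tear down the cluster and its container
$ k3d delete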

## Top comments (4)

livetocode

Thanks for your great post! However, I wasn't able to access the HTTP pages using the gateway service's external IP.

I just had to expose port 80 with the following extra parameters when creating the k3s cluster:

$ k3d create --publish 8080:80 --server-arg --no-deploy --server-arg traefik

Then I was able to browse successfully to:

http://localhost:8080/productpage

Finally, note that you can easily access the Kiali dashboard with:

$ istioctl dashboard kiali

Seiji (edited)

I had to change the parameters to:

$ k3d cluster create --k3s-server-arg '--no-deploy=traefik'

P.S. This is with k3d v4.

Mohamed Sambo

This is awesome, but could you take a look at the more recent k3d version 5.6.0 (default k8s 1.27)? Istio has had a lot of new releases and has been restructured.

Mohamed Sambo

Here's a take with possibly the newest versions: dev.to/sambo2021/your-first-k8sist...
BTW, you did an awesome job on the original post.