Introduction
As data scientists / data engineers / data wizards, we don't want to spend too much of our time and effort setting up a development environment.
With the help of Kind (Kubernetes in Docker), we can run a single-node Kubernetes cluster inside Docker and even deploy our model into it.
Why kind?
Yes, why kind when we have other options: minikube, microk8s, kubeadm, ...
Kind uses Docker, while minikube typically runs inside a VM (e.g. VirtualBox), which consumes more resources.
Kind can run on macOS, Ubuntu and other Linux distributions, and Windows. Basically, if you have Docker, you can run kind, while microk8s only supports Ubuntu.
With kind, we get a cluster with a single command (we'll run it below).
Installing kind on my local machine
Kind provides stable binaries. This is the easiest way to run kind on my local machine from the CLI. According to the docs, we have to download the binary and place it in our PATH:
$ curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.5.1/kind-$(uname)-amd64
$ chmod +x ./kind
$ mv ./kind /usr/local/bin/kind
Set up kubectl
We also need the Kubernetes command-line tool, kubectl; installation instructions can be found in the documentation (https://kubernetes.io/docs/tasks/tools/install-kubectl/).
Make sure kubectl is installed:
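For example, one way that should work on Linux is downloading the binary directly, much like we did for kind (the URL and destination here are just one possible route; check the linked docs for your OS):
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl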
$ kubectl version
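Creating the cluster itself is the single command mentioned earlier (pass --name if you want something other than the default "kind"):
$ kind create cluster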
Once the cluster is ready, we run these commands to load its kubeconfig into our environment:
$ export KUBECONFIG="$(kind get kubeconfig-path)"
$ kubectl cluster-info
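To double-check that the cluster is really there, we can also list the nodes; with a default kind setup you should see a single control-plane node:
$ kubectl get nodes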
Hooray! Everything is ready. Let's jump to deploying our first model.
Deploy iris model
I always use the iris model as a "Hello World" example for every project. The deployment.yaml manifest has been prepared:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris-app
  labels:
    app: iris-app
    tier: backend
    version: v1
spec:
  selector:
    matchLabels:
      app: iris-app
  replicas: 2
  template:
    metadata:
      labels:
        app: iris-app
    spec:
      containers:
      - name: iris-app
        image: canhtran/iris-svm-model-api:latest
        ports:
        - containerPort: 5000
Apply the deployment to the cluster.
$ kubectl apply -f https://gist.githubusercontent.com/canhtran/33d7bf2161fe0452b4a0481f6093160a/raw/3600ac616b03fb3e9ff3c81ced52e45a790cdc9b/iris-api-depployment.yaml
deployment.apps/iris-app created
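Before exposing anything, we can watch the two replicas come up (the label below matches the app: iris-app selector from the manifest):
$ kubectl rollout status deployment/iris-app
$ kubectl get pods -l app=iris-app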
I create a service to expose the deployment.
$ kubectl expose deployment iris-app --type=LoadBalancer --name=iris-svc
service/iris-svc exposed
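If you prefer to keep everything declarative, the same service can be written as a manifest instead of using kubectl expose; a minimal sketch, with the name, selector, and ports mirroring the command above:
apiVersion: v1
kind: Service
metadata:
  name: iris-svc
spec:
  type: LoadBalancer
  selector:
    app: iris-app
  ports:
  - port: 5000
    targetPort: 5000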
Finally, we forward the port to our local machine for testing.
$ kubectl port-forward svc/iris-svc 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
Test with curl command.
$ curl -i -X POST -H "Content-Type:application/json" http://localhost:5000/iris/predict -d '{"payload":[6.2, 3.4, 5.4, 2.3]}'
HTTP/1.1 200 OK
Server: gunicorn/19.5.0
Date: Mon, 02 Sep 2019 02:51:04 GMT
Connection: close
Content-Type: application/json
Content-Length: 13
{"result":2}
The service is up and running. Everything works perfectly.
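When you're done experimenting, the whole cluster can be thrown away just as easily:
$ kind delete cluster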
Conclusion
In my opinion, KinD (Kubernetes in Docker) is the best alternative so far compared with minikube / microk8s. With a few commands, we're able to have an environment to deploy our models.