I've been wanting to set up a Kubernetes (k8s) cluster for a while, mainly because I want to learn how it works. But, because I'm me, I refuse to do anything the easy way, so I didn't want to use GCP or AWS. I have four bare metal machines floating about the internet; there's no reason I can't use them, right?
Well, a couple of reasons, actually. One, they aren't exactly bare metal; they're cloud bare metal, which means most of them sit behind a weird NAT that doesn't share their external IP with them, so none of the interfaces have the external IP. Two, my home machine, which I plan to use as the control machine, is behind a router. So, again, NAT.
Surely that's not a problem, right?
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
- pods on a node can communicate with all pods on all nodes without NAT [1]
Huh. Well, let's see. AWS and GCP clearly have to be able to make this happen with NAT, right? There's no way the IP space exists otherwise.
Let's use a VPN. This post looks perfect: an easy install script for OpenVPN. [2]
Installed it, connected the client, and the client promptly lost all internet connectivity. Huh. It could ping the VPN server, so that wasn't it.
Turns out you'll want to edit a couple of lines in server.conf.
(you can find the full example here: [3])
You want to:

- remove the line that starts with `push "redirect-gateway`
- add the `client-to-client` line (sketched below)
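For reference, here's roughly how that part of server.conf ends up. This is a sketch based on the defaults the install script generates; the exact `redirect-gateway` line in your file may differ.

```
# server.conf (excerpt)

# Removed: pushing a default route through the VPN is what
# knocked the client off the internet.
# push "redirect-gateway def1 bypass-dhcp"

# Added: let VPN clients reach each other directly, which the
# nodes will need for node-to-node (and pod-to-pod) traffic.
client-to-client
```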
Now the server can ping the client, and the client can ping the server. Success!
There's just one thing left to do, and that's to tell `kubeadm init` to use the VPN. I used flannel as my pod network, so my init command was:

```
kubeadm -v 1 init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.8.0.1
```
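For completeness, after init I still had to lay down the pod network and join the workers over their VPN addresses. A sketch of those steps, with `<token>` and `<hash>` as placeholders for the values `kubeadm init` prints:

```bash
# On the control plane: install flannel as the pod network
# (this was the manifest URL flannel's docs pointed at back then).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker: join through the API server's VPN address.
kubeadm join 10.8.0.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```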
And there we are!
```
drazisil@central:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
central   Ready    master   28m   v1.15.3
server1   Ready    <none>   26m   v1.15.3
server2   Ready    <none>   94s   v1.15.3
```
Top comments (9)
Hi Joe Becher, that's a really nice post! I followed it to set up a k8s cluster containing a bunch of Azure VMs (they are not on the same LAN).
The problem I have is that the worker node cannot join the cluster because it cannot connect to the API server. Here are some logs:
```
I1101 08:11:56.774136   86198 round_trippers.go:443] GET 10.8.0.1:6443/api/v1/namespaces/ku... in 30000 milliseconds
I1101 08:11:56.774163   86198 round_trippers.go:449] Response Headers:
I1101 08:11:56.774196   86198 token.go:82] [discovery] Failed to request cluster info, will try again: [Get 10.8.0.1:6443/api/v1/namespaces/ku... dial tcp 10.8.0.1:6443: i/o timeout]
```
Can you give me some suggestions to solve this? <3
Can you ping 10.8.0.1 from the worker? If not, your OpenVPN is probably down, and I would check the logs for that service on the box.
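If it is down, something like this should show it (assuming a systemd box where the install script created the usual `openvpn@server` unit; the unit name may differ on your distro):

```bash
# Is the OpenVPN server unit actually running?
systemctl status openvpn@server

# Tail its recent logs for handshake or routing errors.
journalctl -u openvpn@server --since "10 minutes ago"
```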
Yes, I can. From the master node I can ping the worker node and vice versa! I see that the API server is bound to port :::6443 on the master node; is that the cause of this problem?
Shouldn't be. Add -v to your kubeadm command and see what it's doing.
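For example, something like this, with `<token>` and `<hash>` as placeholders (a higher verbosity level makes the discovery phase log each attempt):

```bash
# Re-run the join with verbose logging to see where discovery stalls.
kubeadm join 10.8.0.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> -v=5
```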
Here is the log I got when running the `kubeadm join ...` command:

```
Failed to request cluster info, will try again: [Get 10.8.0.1:6443/api/v1/namespaces/ku... dial tcp 10.8.0.1:6443: i/o timeout]
```

but I'm still able to ping 10.8.0.1 from the worker node.
Can you access that URL in your browser on either the worker or the control plane node?
Yes, I'm able to get a response when executing the curl command to that URL on the master node.
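The more telling test is the same request from the worker. A sketch, using the API server's health endpoint:

```bash
# From the worker: any HTTP response (even a 401/403) proves TCP
# connectivity to 6443; a timeout means something drops the traffic.
# -k skips verification of the cluster's self-signed certificate.
curl -k https://10.8.0.1:6443/healthz
```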
Do we need to tell kubeadm to use the "tun0" network interface instead of the default one?
It sounds like you have a firewall blocking that port, then.
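A quick way to check and open it, sketched for iptables and ufw (adjust for whatever firewall the box actually runs):

```bash
# Look for rules filtering traffic to the API server port.
sudo iptables -L -n | grep 6443

# If ufw manages the firewall, allow the port explicitly.
sudo ufw allow 6443/tcp
```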