Kamil

Originally published at banach.net.pl

Accessing Kubernetes cluster using SSH tunnel

Whether we want it or not, adoption of Kubernetes keeps growing. It can be set up as a managed solution (all major cloud providers offer such products) or we can set it up ourselves. Whichever we choose, we want to make it as secure as possible. One way to make a Kubernetes cluster more secure is to hide the control plane (more specifically, the kube-apiserver) behind a firewall. That means cluster management is not reachable from the Internet.

That, however, makes accessing the cluster harder. We can SSH to a server in the same network and run kubectl commands from there, but that is a nuisance we would like to avoid. Fortunately, SSH tunnels come to the rescue here (not for the first time!) - we can create a tunnel through a server in the same network and pass all traffic to the cluster through it!

To set it up, we can run the following command:

$ ssh our-gate.example.com -L 16443:10.0.10.2:443

In the command, we’re assuming that:

  • our server is available with the domain our-gate.example.com,
  • the cluster is in the same local network as the server and its control plane is available at 10.0.10.2 IP address,
  • we want the local end of the tunnel to listen on port 16443.
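
If we do not need an interactive shell on the gate host, the tunnel can also run quietly in the background. A minimal sketch, assuming the same host and addresses as above:

# -N: do not run a remote command; -f: go to the background after authentication
$ ssh -N -f our-gate.example.com -L 16443:10.0.10.2:443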

Now we need to change the cluster's IP address / hostname in ~/.kube/config so that it points to the local tunnel. To do this, we open the aforementioned file and put https://127.0.0.1:16443 in the clusters.cluster.server property:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <long string with CA data>
    # the line below was something like https://our-public-address-of-cluster.example.com
    # or just a local IP like https://10.0.10.2
    server: https://127.0.0.1:16443
  name: my-k8s-cluster
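
Hand-editing is not the only option - kubectl can apply the same change for us. A sketch, assuming the cluster entry is named my-k8s-cluster as above:

# point the my-k8s-cluster entry at the local tunnel
$ kubectl config set-cluster my-k8s-cluster --server=https://127.0.0.1:16443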

After that we can try to run a command like kubectl cluster-info:

~ > kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: x509: certificate is valid for our-public-address-of-cluster.example.com, 10.0.10.2, not 127.0.0.1

It fails, but why? It is because we are connecting to 127.0.0.1, but the API (securely served over HTTPS) responds with a certificate issued for a different domain / IP address. Fortunately, the solution is simple: set the clusters.cluster.tls-server-name property to one of the valid names from the error message - in our case it can be 10.0.10.2.
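
Again, this can be done with kubectl instead of editing the file by hand - a sketch, assuming the same cluster name as before:

# tell kubectl which name to expect in the server's certificate
$ kubectl config set-cluster my-k8s-cluster --tls-server-name=10.0.10.2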

Our final configuration will look like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <long string with CA data>
    server: https://127.0.0.1:16443
    tls-server-name: 10.0.10.2
  name: my-k8s-cluster
contexts:
- context:
    cluster: my-k8s-cluster
    user: admin
  name: my-k8s-cluster
current-context: my-k8s-cluster
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: <some-fancy-token>

And now running the cluster-info command gives us what we expect:

~ > kubectl cluster-info
Kubernetes control plane is running at https://10.0.10.2:16443
GLBCDefaultBackend is running at https://10.0.10.2:16443/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
KubeDNS is running at https://10.0.10.2:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://10.0.10.2:16443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

Yay! Now we can use kubectl, helm, and any other software that reads the configuration from ~/.kube/config, straight from our local machine!
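
For example, a quick check that other kubeconfig-based tooling goes through the tunnel too - assuming Helm is installed:

# talks to the API server through the same kubeconfig (and thus the tunnel)
$ helm list --all-namespaces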

And a small “bonus” (also a note to my future self) - accessing a Kubernetes cluster in GCP (GKE) through Identity-Aware Proxy:

$ gcloud compute ssh "some-instance" --zone "$ZONE" --project "$PROJECT" --tunnel-through-iap --ssh-flag="-L 16443:10.0.10.2:443"

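The 10.0.10.2 in that command stands for the cluster's private control-plane endpoint. On GKE it can be looked up with something like the sketch below (the $CLUSTER variable is an assumption; adjust to your setup):

# prints the private control-plane endpoint of a private GKE cluster
$ gcloud container clusters describe "$CLUSTER" --zone "$ZONE" --project "$PROJECT" --format='value(privateClusterConfig.privateEndpoint)'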

That is it - the SSH tunnel saves the day again! :-)
