How to deploy a fully HA (High Availability) cluster completely isolated from the public network, managed via a Bastion?
In the Cloud Native world, ease of access is often the norm. But for critical environments or simply to have complete control over your flows, isolation is king.
My objective was clear: build a Kubernetes cluster that "doesn't talk to strangers". No direct exposure to the Internet, strict control of inputs/outputs through a single gateway, and secure administration.
To achieve this, I chose Talos Linux (for its immutability and security) hosted on Proxmox (with a dedicated VXLAN network). Here's how I transformed three bare VMs into a High Availability (HA) cluster behind a Bastion.
1. Architecture and Prerequisites
Before running any commands, let's set the stage. The challenge here isn't Kubernetes, but the network.
The Infrastructure
- Hypervisor: Proxmox VE (8.4.14).
- Network: a VXLAN (Virtual Extensible LAN) creating a 100% private network (192.168.1.0/24), invisible from the home or office network (see the sketch below).
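If your VXLAN spans several Proxmox hosts, the underlying plumbing looks roughly like this when built by hand with iproute2. A minimal sketch: the VNI, interface names, and underlay IPs are all assumptions, and Proxmox SDN can generate the equivalent for you.

```bash
# On each Proxmox host: a VXLAN tunnel (VNI 42) to the peer, bridged into vmbr42.
# 10.0.0.1 / 10.0.0.2 are assumed underlay IPs of the hypervisors, eth0 the underlay NIC.
ip link add vxlan42 type vxlan id 42 dstport 4789 local 10.0.0.1 remote 10.0.0.2 dev eth0
ip link add vmbr42 type bridge
ip link set vxlan42 master vmbr42
ip link set vxlan42 up
ip link set vmbr42 up
```

VMs attached to vmbr42 then share a flat Layer 2 segment that never touches the physical LAN.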
The Players
- The Bastion (Gateway): the only machine with two interfaces (one public, one private). It serves as a NAT gateway to allow nodes to download their Docker images, and as an SSH entry point (Jump Host). See the NAT sketch below.
- The Administration VM: a Debian machine isolated in the private network. This is the control tower from which all talosctl commands originate.
- The Talos Nodes: 3 VMs configured to become our Control Planes.
Constraint: The Talos nodes don't have a public IP. They must go through the Bastion to get out.
Note: You could use a dedicated VM to be the Gateway for your private network. Due to IP limitations on the Public network, we're optimizing the number of VMs created on this network.
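As an illustration, the NAT role of the Bastion can be a handful of iptables rules. A sketch, assuming eth0 is its public interface and eth1 its private one on 192.168.1.0/24:

```bash
# Let the Bastion route, and masquerade the private subnet behind its public IP
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o eth1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

With only a MASQUERADE rule and no DNAT, nothing from outside can initiate a connection toward the nodes: exactly the "doesn't talk to strangers" behavior we want.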
2. Network Configuration: The Static IP and VIP Challenge
Talos loves DHCP by default. But in a "bunker" environment, we often prefer the stability of static IPs. Additionally, for High Availability (HA), we need a VIP (Virtual IP): a floating address that automatically switches from one node to another if the leader fails.
Since our VXLAN network is completely isolated and lacks a DHCP server, the VMs boot without an IP. The trick is to use Talos's text interface (F3 key) on the Proxmox console to manually set an IP.
Watch out for AZERTY keyboards 👀: the Talos console defaults to QWERTY.
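If the console dance isn't your thing, Talos also accepts the dracut-style ip= kernel argument, so you can pre-seed networking from the VM's boot options instead. A sketch (field order: client-ip::gateway:netmask:hostname:interface:autoconf):

```
ip=192.168.1.11::192.168.1.2:255.255.255.0::ens18:off
```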
The Configuration Strategy
Instead of applying a generic configuration, we must prepare a specific configuration for each node (cp01, cp02, cp03).
The critical point lies in the YAML configuration file. Here's the excerpt that handles both the static IP, routing via the Bastion, and the VIP:
```yaml
machine:
  network:
    interfaces:
      - interface: ens18 # Watch the interface name on Proxmox (often ens18, not eth0)
        dhcp: false
        addresses:
          - 192.168.1.11/24 # Unique IP of the node
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.2 # The private IP of our Bastion
        vip:
          ip: 192.168.1.9 # The cluster's floating IP (identical on all nodes)
```
The trap to avoid: 😅 YAML indentation.
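Before applying anything, you can let talosctl check the file for you; it catches indentation slips as well as schema errors:

```bash
# Validate the machine config for a bare-metal/VM ("metal") install
talosctl validate --config control-plane/cp01.yaml --mode metal
```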
3. Creating the Control Planes (Bootstrap)
The installation happens in two steps: secure the first node, then bring in the others.
Step A: Secret Generation
From the administration VM, we generate a unique configuration that contains the cluster certificates (PKI). This is what ensures that the nodes will trust each other.
```bash
talosctl gen config mon-cluster https://192.168.1.9:6443
```
Note that we use the VIP address (.9) and not the physical address of the node (.11).
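gen config produces a single controlplane.yaml (plus the talosconfig). One way to derive the three per-node files while keeping the shared secrets intact is to patch it, with one patch file per node carrying its addresses and VIP block. A sketch, the patches/ files being hypothetical names:

```bash
for n in 01 02 03; do
  talosctl machineconfig patch controlplane.yaml \
    --patch @patches/cp${n}.yaml \
    -o control-plane/cp${n}.yaml
done
```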
Step B: Launching the Leader (Bootstrap)
We apply the configuration to the first node. Once it has rebooted and taken its IP, we launch the initialization of the etcd database:
```bash
talosctl apply-config --insecure -n 192.168.1.11 -f control-plane/cp01.yaml
talosctl bootstrap -n 192.168.1.11
```
In my case, the first node has the IP `192.168.1.11`.
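From there, you can point talosctl at the node and let it tell you when the control plane is healthy. A sketch, using the talosconfig generated in Step A:

```bash
talosctl --talosconfig ./talosconfig config endpoint 192.168.1.11
talosctl --talosconfig ./talosconfig config node 192.168.1.11
talosctl --talosconfig ./talosconfig health
talosctl --talosconfig ./talosconfig kubeconfig .  # fetch the kubeconfig for kubectl
```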
Step C: Rallying the Troops
For nodes 2 and 3, it's simple: we send them their respective configuration (with their own IPs). Since they have the same secrets as node 1, they automatically join the quorum.
```bash
talosctl apply-config --insecure -n 192.168.1.12 -f control-plane/cp02.yaml
talosctl apply-config --insecure -n 192.168.1.13 -f control-plane/cp03.yaml
```
If you make a mistake in a node's IP, Talos doesn't forgive. The easiest approach is often to "Wipe" the VM's disk via Proxmox and reapply the correct configuration on a fresh machine.
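If everything went well, you can confirm that the quorum is complete. A quick sketch:

```bash
talosctl -n 192.168.1.11 etcd members  # should list the three control planes
kubectl get nodes -o wide              # once your kubeconfig is in place
```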
4. Bonus: Accessing Your Isolated Cluster from Your Couch
It's all well and good to have a secure cluster, but if I have to connect via VNC to my admin VM to type a kubectl command, the developer experience is terrible.
The solution? The SSH Tunnel.
Since my Bastion is the only entry point, I can configure my local ~/.ssh/config file to use it as a jump to my Admin VM, or better yet, to create a direct tunnel to the Kubernetes API.
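For reference, the ProxyJump part of my ~/.ssh/config looks something like this (the host names and the Admin VM's private IP are placeholders):

```
Host mon-bastion
    HostName <public-ip-of-the-bastion>
    User user

Host admin-vm
    HostName 192.168.1.3   # assumed private IP of the Admin VM
    User debian
    ProxyJump mon-bastion
```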
The "Transparent Tunnel" method:
I open a tunnel that redirects port 45726 on my Mac/PC to the cluster's VIP, through the bastion:
```bash
ssh -L 45726:192.168.1.9:6443 user@mon-bastion -N
```
Then, I modify my local kubeconfig file to point to https://127.0.0.1:45726.
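Concretely, that means editing the cluster entry. A sketch: if the API server certificate doesn't list 127.0.0.1 in its SANs, the tls-server-name field lets kubectl keep validating it against the VIP:

```yaml
clusters:
  - name: mon-cluster
    cluster:
      server: https://127.0.0.1:45726
      tls-server-name: 192.168.1.9  # validate the cert as if we were talking to the VIP
```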
Result
I can use k9s, kubectl, or Lens from my local machine, as if the cluster were on my desk, while maintaining complete network isolation.
Conclusion
We now have a functional Kubernetes Talos cluster, resilient (HA with 3 nodes) and completely isolated. Outgoing traffic is filtered by the bastion, and incoming traffic is nonexistent except via our secure tunnels.
In the next article, we'll see how to populate this cluster with ArgoCD for GitOps 😄.



This is a first iteration that corresponds to my security needs, but I know it can be improved. Feel free to leave feedback on the technical choices. Don't hesitate to challenge me in the comments!