By means of a reverse proxy installed on the Proxmox machine, it is possible to also expose ssh access to each single VM.
List of ingredients
A real PC with Proxmox installed, acting as "The Server"
A real PC with an OS installed (OSX, Linux, or Win works), acting as "The Client"
A LAN that both The Server and The Client are attached to
nginx with the stream module (libnginx-mod-stream) running on The Server
A VNET defined in Proxmox with CIDR 10.4.1.0/24, acting as "vmnet"
Some VMs defined in Proxmox, each of those VMs with:
OS installed (Linux)
ssh-server running and exposing port 22
attached to the internal vmnet
Choose a domain name for the VMs, say ".fundom": each VM will be reachable from The Client with "ssh vm1.fundom", "ssh vm2.fundom", etc.
Preparing The Server
Install Proxmox, community edition is enough
Set up a VNET on Proxmox: in Datacenter -> SDN -> VNets hit "Create", and call it "vmnet"
Create a subnet for the created VNet: in Datacenter -> SDN -> VNets select the created VNet, and under Subnets hit "Create". The subnet CIDR is 10.4.1.0/24, with 10.4.1.1 as gateway
Create some VMs: in Datacenter -> pve, on the top right hit "Create VM" or "Create CT", then clone the VMs
For each VM, go to Hardware -> Add -> Network Device and select vmnet for it
For each VM, assign an IP to the vmnet device in "Cloud-Init"
For each VM, in Cloud-Init, set up a user and load its public key for ssh (the same steps can also be done from the CLI; see the qm sketch after this list)
Open a shell on The Server and run apt install nginx libnginx-mod-stream
Set up in /etc/nginx/nginx.conf a stream { ... } section that defines how to reverse proxy by SNI to the target VMs (vm1.fundom, vm2.fundom, etc.)
Generate a self-signed certificate for nginx's TLS, and reference it inside the nginx configuration
Reload the nginx configuration with systemctl reload nginx
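For reference, the per-VM network and Cloud-Init steps can also be done from the Proxmox shell with qm. A minimal sketch, assuming VM ID 101, net1 as the added device, the IP 10.4.1.11 and a key file path to adapt to your setup:
qm set 101 --net1 virtio,bridge=vmnet                      # attach the VM to vmnet
qm set 101 --ipconfig1 ip=10.4.1.11/24,gw=10.4.1.1         # static IP via Cloud-Init
qm set 101 --ciuser user --sshkeys ~/.ssh/id_ed25519.pub   # Cloud-Init user and ssh public key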
To generate a self-signed certificate, use these commands:
mkdir /etc/nginx/ssl
cd /etc/nginx/ssl
openssl req -x509 -newkey rsa:4096 -nodes -keyout self.key -out minihost.crt -sha256 -days 365
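The result can be checked with openssl itself, for example to see the subject and the expiry dates:
openssl x509 -in /etc/nginx/ssl/minihost.crt -noout -subject -dates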
In /etc/nginx/nginx.conf add these lines:
stream {
    map $ssl_server_name $ssh_backend {
        # these are the VMs running on the VNET set up by Proxmox on this machine
        ~^vm1\.fundom$ 10.4.1.11:22;
        ~^vm2\.fundom$ 10.4.1.12:22;
        default unix:/run/nginx-blackhole.sock; # just a way to drop anything else
    }
    server {
        listen 443 ssl;
        proxy_pass $ssh_backend;
        ssl_certificate /etc/nginx/ssl/minihost.crt;
        ssl_certificate_key /etc/nginx/ssl/self.key;
        ssl_protocols TLSv1.2 TLSv1.3;
    }
}
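Since nginx terminates TLS on this listener, the SNI sent by the client is available as $ssl_server_name; the default backend is just a socket nothing is expected to listen on, so unknown names get dropped. Before the reload, a quick sanity check of the configuration:
nginx -t
systemctl reload nginx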
Preparing The Client
install an ssh client
install the openssl utility
copy the self-signed certificate from The Server
set up .ssh/config with a ProxyCommand
Copy minihost.crt from The Server to The Client with:
scp root@192.168.1.10:/etc/nginx/ssl/minihost.crt ~/minihost.crt
Where 192.168.1.10 is The Server's IP, i.e. the machine where Proxmox and nginx are running
The .ssh/config file is:
Host *.fundom
ProxyCommand openssl s_client -quiet -verify_quiet -CAfile ~/minihost.crt -servername %h -connect 192.168.1.10:443
where:
the s_client subcommand opens a TLS connection, acting as the client side of the handshake
-quiet and -verify_quiet suppress the session/certificate output and the non-error verification messages
-CAfile points to minihost.crt, copied from the self-signed certificate used by nginx on The Server
-servername sends as SNI the hostname %h matched by the wildcard *.fundom in the Host line above
-connect points to the IP of The Server, 192.168.1.10, on port 443, the default port for HTTPS
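The tunnel can also be tested by hand, without ssh, by running the same openssl command with an explicit servername; if nginx routes it correctly, the VM's ssh banner (a line starting with SSH-2.0-) should show up:
openssl s_client -quiet -verify_quiet -CAfile ~/minihost.crt -servername vm1.fundom -connect 192.168.1.10:443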
Behind the scenes
With this setup it is now possible, from The Client, to run:
ssh user@vm1.fundom
Where 'user' is the username defined inside the internal VM run by Proxmox
This is what is happening:
From The Client machine, ssh uses openssl s_client to set up a TLS channel that negotiates a connection as vm1.fundom
the nginx server matches the SNI vm1.fundom and streams the connection to 10.4.1.11 on port 22
The Server has access to both the 192.168.1.1/xx interface and the 10.4.1.0/24 vnet
after the certificate is verified against the CA file, the stream of data is sent to the target VM over that route
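A couple of quick checks of this flow, assuming the same addresses as above:
# on The Client: verbose ssh shows the ProxyCommand being spawned
ssh -v user@vm1.fundom true
# on The Server: nginx listens on 443 and the vnet is reachable
ss -ltnp | grep ':443'
ping -c 1 10.4.1.11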
Features
This technique does not expose the internal VMs to the LAN, but only some selected services, and only some selected VMs, if desired.
Exposing to the wide internet
It is possible to expose this service on the internet by forwarding port 443 from the outgoing router to the Proxmox server.
In this way one exposes just a regular https port, but is still able to use an ssh shell to work on a given VM.
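On a Linux-based router this forwarding could be a simple DNAT rule. A sketch, assuming wan0 as the WAN interface and iptables on the router (most home routers expose the same thing as a "port forwarding" entry in their UI):
# forward incoming TCP 443 from the WAN to the Proxmox server
iptables -t nat -A PREROUTING -i wan0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 443 -j ACCEPT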
Some ISPs and some governments block port 22, or block SSH traffic altogether, because it is encrypted and therefore somewhat suspect. By exposing SSL on 443 you can overcome this kind of limitation.
The SSL connection adds another level of security through certificate verification: your client can be sure it is talking to the right server, even when you update the ssh host keys or reinstall the VMs from scratch.
Just one port
I shared the video on social media, and I received complaints that "this is not an ssh tunnel", with the suggestion to use https://linuxize.com/post/how-to-setup-ssh-tunneling/, which describes the ssh -L ... class of commands. Such a command can open a port exposed on the Proxmox server that points directly to each VM (an example follows the list below). Some drawbacks of ssh -L:
It requires opening as many ports as there are VMs to be mapped/exposed
The admin of the Proxmox server must have access to each VM through ssh, via public key or by password
It targets a single user, and if more users want to access a VM, each of them needs their own port
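For comparison, such a forwarding would look like this (a sketch; port 2201 and the addresses are illustrative):
# on the Proxmox server: expose port 2201, tunnelled to vm1's ssh
ssh -g -N -L 2201:localhost:22 user@10.4.1.11
# from The Client: reach vm1 through the exposed port
ssh -p 2201 user@192.168.1.10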
Another option is to let each user "hop" through the Proxmox ssh server, and then into the VM's ssh (a ProxyJump sketch follows below). Drawback:
Users who want to access an internal VM must also have ssh access to the Proxmox host itself
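That hop is normally done with ProxyJump; a sketch, assuming the user also has an account on the Proxmox host:
# jump through the Proxmox host to reach the internal VM
ssh -J user@192.168.1.10 user@10.4.1.11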
Further improvement
An additional level of security could be implemented by switching to UDP as soon as both sides (the VM and The Client in this case) are ready
Using UDP is common practice for most VPN implementations; in fact the port is automatically NATed on the receiving side (the VM) and on the talking side (The Client)
MOSH also stays stable over low-quality connections, or when switching connections
I made a video presenting this stuff.
The sources of some of the diagrams are at https://github.com/danielecr/selfhosted/tree/main/ssh-tunnel-https