
YugabyteDB on Public network 🚀🌐

A distributed database has no shared disks and no dedicated interconnect network. For security and performance reasons, you would normally use a VPC within a cloud region, VPC peering across regions, and a direct connect between cloud providers. But it is also possible to deploy a cluster over the public internet. Here is an example, for demo purposes.

I have two Oracle Cloud free tier accounts, one in Frankfurt (eu-fra-1) and one in Zurich (eu-zrh-1). The free tier allows 2 public IPs, and I've created two free VMs in eu-fra-1 and one in eu-zrh-1 that can stay always on, without any risk of being charged. Here is the necessary info for the one in Zurich:
[OCI console screenshot: instance details]
Those are small machines (the CPU is 20% of one core of an AMD EPYC 7551, with 686MB of available RAM), not enough to run a high workload of course, but sufficient to start YugabyteDB and test a deployment over a public network. I've opened the YugabyteDB ports used for inter-node communication (7100 for yb-master and 9100 for yb-tserver), the Web UI ports (respectively 7000 and 9000), and the YSQL port for the PostgreSQL protocol (5433) on the nodes I want to connect to with a PostgreSQL client.
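For reference, here is roughly how those ports can be opened on each VM. This is only a sketch: it assumes an Oracle Linux image with firewalld (the opc user suggests so), and the matching ingress rules must also exist in the OCI VCN security list.

# open the YugabyteDB ports in the local firewall (assuming firewalld)
sudo firewall-cmd --permanent --add-port=7100/tcp --add-port=9100/tcp   # yb-master / yb-tserver RPC
sudo firewall-cmd --permanent --add-port=7000/tcp --add-port=9000/tcp   # yb-master / yb-tserver Web UI
sudo firewall-cmd --permanent --add-port=5433/tcp                       # YSQL (PostgreSQL protocol)
sudo firewall-cmd --reload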

Here are the nodes I'll use:

Region    Zone (AD)  Public IP        Private IP
eu-fra-1  ad-3       yb1.pachot.net   10.0.0.231
eu-fra-1  ad-3       yb2.pachot.net   10.0.0.200
eu-zrh-1  ad-1       yb3.pachot.net   10.0.0.14
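Note that the private IP is the one configured on the VM's network interface, while the public IP is NATed by the cloud gateway and never appears on the interface. A quick sketch to check both from a node (the external lookup uses a third-party service and is only for convenience):

# private IP: the VNIC address as seen by the OS
ip -4 -brief addr show
# public IP: not visible on the interface, ask an external service (or look in the OCI console)
curl -s https://ifconfig.me ; echo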

I have SSH access to all of them and quickly check the connectivity (I have opened all ports required by YugabyteDB, including 7100):

for public in yb1.pachot.net yb2.pachot.net yb3.pachot.net
do
 for private in 10.0.0.231 10.0.0.200 10.0.0.14
 do
  # try a TCP connection to the yb-master port (7100) from each node to each private IP
  ssh $public bash -c "echo > /dev/tcp/$private/7100" 2> /dev/null
  echo "$? $public  ->  $private"
 done
done | awk '/0 /{$1="ok"}/1 /{$1="  "}{print}'

Here is the result:

ok yb1.pachot.net->10.0.0.231
ok yb1.pachot.net->10.0.0.200
   yb1.pachot.net->10.0.0.14
ok yb2.pachot.net->10.0.0.231
ok yb2.pachot.net->10.0.0.200
   yb2.pachot.net->10.0.0.14
   yb3.pachot.net->10.0.0.231
   yb3.pachot.net->10.0.0.200
ok yb3.pachot.net->10.0.0.14

This confirms that I can connect with the private IP within the same zone but need to use the public IP across zones. The goal of this blog post is to show how to start the yb-master and yb-tserver so that inter-node communication uses the private IP within a zone, and the public IP across zones.

placement

Here are the parameters used by both the yb-master and yb-tserver.

  • --rpc_bind_addresses is set to the private address (with port) that the server listens on. You can find this address with ip a
  • --server_broadcast_addresses is the public address (with port), which arrives at the same interface but through the public network gateway. Here I use the DNS name for the public IP address
  • --placement_cloud, --placement_region, --placement_zone set a name for the cloud provider, region and zone of this server. You can use any names you want, and they don't need to be cloud-related. An on-premises deployment may identify countries, data centers, and racks
  • --use_private_ip=zone specifies that private IPs can be used within the same zone. Possible values are cloud, region and zone

Basically, when one server connects to another, YugabyteDB compares the cloud/region/zone names up to the level set by use_private_ip, and uses the private IP if they match at that level, the public IP otherwise.
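To make the rule concrete, here is a small illustration (not YugabyteDB code, just a sketch of the comparison) that picks the address a node would use to reach a peer, given both placements as cloud.region.zone strings and the use_private_ip level:

# illustration only: print the address to use to reach a peer
pick_address() {
  local level=$1 my_placement=$2 peer_placement=$3 peer_private=$4 peer_public=$5
  local fields
  case $level in
    cloud)  fields=1   ;;
    region) fields=1-2 ;;
    zone)   fields=1-3 ;;
  esac
  if [ "$(cut -d. -f$fields <<<"$my_placement")" = "$(cut -d. -f$fields <<<"$peer_placement")" ]
  then echo "$peer_private"   # same placement up to the requested level: private IP
  else echo "$peer_public"    # different placement at that level: public IP
  fi
}
# example: yb1 (oci.eu-fra-1.eu-fra-1-ad-3) reaching yb3 (oci.eu-zrh-1.eu-zrh-1-ad-1) -> yb3.pachot.net
pick_address zone oci.eu-fra-1.eu-fra-1-ad-3 oci.eu-zrh-1.eu-zrh-1-ad-1 10.0.0.14 yb3.pachot.net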

yb-master

The yb-masters are the control plane of the cluster. They must know about each other in order to elect a leader. Across zones they will use the public IP, but the local one must also be in the list with its private IP (or you will get None of the local addresses are present in master_addresses). However, each server must be listed only once, which means that in an RF=3 cluster you have 3 addresses, with at least the local one as a private IP. You may get a Found multiple peers with uuid error if you mention the same node with different IP addresses. --master_addresses lists all 3 masters, including the one started by the current command with its private IP, and the other ones with their public IPs.

start the masters

Here is how I start each yb-master, on each node, with the related zone name (placement_zone), local IP (rpc_bind_addresses), public IP (server_broadcast_addresses) and the master_addresses:

ssh -q opc@yb1.pachot.net /home/opc/yugabyte/bin/yb-master \
--master_addresses=10.0.0.231:7100,yb2.pachot.net:7100,yb3.pachot.net:7100 --server_broadcast_addresses=yb1.pachot.net:7100 \
--rpc_bind_addresses=10.0.0.231:7100 --use_private_ip=zone \
--placement_cloud oci --placement_region eu-fra-1 \
--placement_zone eu-fra-1-ad-3 --fs_data_dirs=/home/opc/var/data \
--replication_factor=3 --default_memory_limit_to_ram_ratio=0.20  </dev/null &

ssh -q opc@yb2.pachot.net /home/opc/yugabyte/bin/yb-master \
--master_addresses=yb1.pachot.net:7100,10.0.0.200:7100,yb3.pachot.net:7100 --server_broadcast_addresses=yb2.pachot.net:7100 \
--rpc_bind_addresses=10.0.0.200:7100 --use_private_ip=zone \
--placement_cloud oci --placement_region eu-fra-1 \
--placement_zone eu-fra-1-ad-3 --fs_data_dirs=/home/opc/var/data \
--replication_factor=3 --default_memory_limit_to_ram_ratio=0.20  </dev/null &

ssh -q opc@yb3.pachot.net /home/opc/yugabyte/bin/yb-master \
--master_addresses=yb1.pachot.net:7100,yb2.pachot.net:7100,10.0.0.14:7100 --server_broadcast_addresses=yb3.pachot.net:7100  \
--rpc_bind_addresses=10.0.0.14:7100 --use_private_ip=zone \
--placement_cloud oci --placement_region eu-zrh-1 \
--placement_zone eu-zrh-1-ad-1 --fs_data_dirs=/home/opc/var/data \
--replication_factor=3 --default_memory_limit_to_ram_ratio=0.20   </dev/null &

Remember, this is a lab and I start everything through ssh from my bastion host. You don't do that in a real-life deployment: systemd, Kubernetes, or the YugabyteDB platform will take care of the processes.
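As an illustration, a systemd unit for the first master could look like the following sketch (the unit file itself is my assumption, not something generated by YugabyteDB tooling; paths and flags are taken from the command above):

# sketch of a systemd unit for yb-master on yb1 (adjust paths and flags per node)
sudo tee /etc/systemd/system/yb-master.service <<'EOF'
[Unit]
Description=YugabyteDB master
After=network-online.target

[Service]
User=opc
ExecStart=/home/opc/yugabyte/bin/yb-master \
  --master_addresses=10.0.0.231:7100,yb2.pachot.net:7100,yb3.pachot.net:7100 \
  --server_broadcast_addresses=yb1.pachot.net:7100 \
  --rpc_bind_addresses=10.0.0.231:7100 --use_private_ip=zone \
  --placement_cloud=oci --placement_region=eu-fra-1 --placement_zone=eu-fra-1-ad-3 \
  --fs_data_dirs=/home/opc/var/data --replication_factor=3 \
  --default_memory_limit_to_ram_ratio=0.20
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload && sudo systemctl enable --now yb-master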

One of the masters is elected leader. You can open the Web UI on each server and the home page shows the state of the cluster:

[Screenshot: yb-master Web UI home page]
This shows that the master leader is 10.0.0.200 in eu-fra-1-ad-3. The public IPs are not mentioned, but I know that this is yb2.pachot.net (I could also have gotten this information from the gflags):

curl -Ls http://yb2.pachot.net:7000/varz?raw | grep -E "master_addresses|rpc_bind_addresses|server_broadcast_addresses|tserver_master_addrs|placement"

Note that you can get this Web UI from the followers, which act as a proxy for the leader, but this proxying uses the private IPs. For example, with my configuration and yb2.pachot.net being the leader, the Web UI on yb3.pachot.net shows:

Error retrieving leader master URL: http://10.0.0.200:7000/?raw
Error: Network error (yb/util/curl_util.cc:55): curl error: Couldn't connect to server.

I have opened issue https://github.com/yugabyte/yugabyte-db/issues/11603 about this.
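In the meantime, a simple workaround is to open the leader's Web UI directly on its public address, or to tunnel to its private address through a node in the same zone (assuming SSH access), for example:

# forward local port 7000 to the leader's private address through yb1, then browse http://localhost:7000
ssh -N -L 7000:10.0.0.200:7000 opc@yb1.pachot.net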

inbound/outbound

I quickly check the inbound/outbound connections:

for h in http://yb{1..3}.pachot.net:7000
do
 # list the remote addresses of inbound/outbound RPC connections from each node's /rpcz endpoint
 curl -Ls $h/rpcz |
 awk -F: '/_connections/{w=$1}/remote_ip/{print h,w,$2}' h=$h
done | sort -u

Here is the result:

http://yb1.pachot.net:7000     "inbound_connections"  "10.0.0.200
http://yb2.pachot.net:7000     "outbound_connections"  "10.0.0.231
http://yb2.pachot.net:7000     "outbound_connections"  "152.67.80.204
http://yb3.pachot.net:7000     "inbound_connections"  "150.230.147.84


The followers have an inbound connection from the leader:

  • yb1.pachot.net from the private IP of yb2.pachot.net (10.0.0.200) because they are in the same zone
  • yb3.pachot.net from the public IP of yb2.pachot.net because they are in different zones

These are also visible from yb2.pachot.net as outbound connections (152.67.80.204 is the public IP of yb3.pachot.net).
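The same topology can also be checked from the command line with yb-admin, which lists the masters with their advertised addresses and Raft roles (a sketch, run from any of the nodes with the binaries used above):

/home/opc/yugabyte/bin/yb-admin \
 --master_addresses=yb1.pachot.net:7100,yb2.pachot.net:7100,yb3.pachot.net:7100 \
 list_all_masters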

yb-tserver

The tservers, or tablet servers, are the data plane. This is where data is stored, sharded and replicated, and also where you connect and where your SQL is processed. You can have many; here I'm starting the minimum to ensure high availability in an RF=3 cluster:

ssh -q opc@yb1.pachot.net /home/opc/yugabyte/bin/yb-tserver \
--tserver_master_addrs=yb1.pachot.net:7100,yb2.pachot.net:7100,yb3.pachot.net:7100 \
--server_broadcast_addresses=yb1.pachot.net --rpc_bind_addresses=10.0.0.231:9100 \
--use_private_ip=zone --placement_cloud oci \
--placement_region eu-fra-1 --placement_zone eu-fra-1-ad-3 \
--fs_data_dirs=/home/opc/var/data --replication_factor=3 \
--default_memory_limit_to_ram_ratio=0.30 --enable_ysql=true </dev/null &

ssh -q opc@yb2.pachot.net /home/opc/yugabyte/bin/yb-tserver \
--tserver_master_addrs=yb1.pachot.net:7100,yb2.pachot.net:7100,yb3.pachot.net:7100 --server_broadcast_addresses=yb2.pachot.net \
--rpc_bind_addresses=10.0.0.200:9100 --use_private_ip=zone \
--placement_cloud oci --placement_region eu-fra-1 \
--placement_zone eu-fra-1-ad-3 --fs_data_dirs=/home/opc/var/data \
--replication_factor=3 --default_memory_limit_to_ram_ratio=0.30 \
--enable_ysql=true </dev/null &

ssh -q opc@yb3.pachot.net /home/opc/yugabyte/bin/yb-tserver --tserver_master_addrs=yb1.pachot.net:7100,yb2.pachot.net:7100,yb3.pachot.net:7100 \
--server_broadcast_addresses=yb3.pachot.net --rpc_bind_addresses=10.0.0.14:9100 \
--use_private_ip=zone --placement_cloud oci \
--placement_region eu-zrh-1 --placement_zone eu-zrh-1-ad-1 \
--fs_data_dirs=/home/opc/var/data --replication_factor=3 \
--default_memory_limit_to_ram_ratio=0.30 --enable_ysql=true </dev/null &

Each tserver must have the list of masters in --tserver_master_addrs, and I use the public IPs here so that they are all reachable. The tservers listen on the local IP defined by --rpc_bind_addresses and are accessible from the public network with the public IP set in --server_broadcast_addresses. As for the masters, the choice of which IP to use for inter-node communication depends on the placement parameters.
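Once the tservers are up, any PostgreSQL client can connect to the YSQL port of one of the nodes. A quick check with psql (the yugabyte user and database are the defaults of a fresh cluster), listing the servers and their placement with the yb_servers() function used by the cluster-aware drivers:

# connect through the public YSQL port and show the topology as seen by YSQL
psql -h yb1.pachot.net -p 5433 -U yugabyte -d yugabyte \
 -c "select host, cloud, region, zone from yb_servers()"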

This means that you can start a tserver on any node that has a VNIC on the public network and attach it to my masters, and it will be part of the database, in your edge location. Again, this is a lab. You should not open those ports to 0.0.0.0/0 like I did.
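If you reproduce this, a better practice is to restrict the inter-node ports to the peers' public IPs rather than to the whole internet. A sketch with firewalld rich rules (the OCI security lists should be restricted in the same way; repeat per peer and per port):

# allow the inter-node ports only from a known peer public IP instead of 0.0.0.0/0
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="152.67.80.204/32" port port="7100" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="152.67.80.204/32" port port="9100" protocol="tcp" accept'
sudo firewall-cmd --reload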
