Following are instructions to simulate the deployment of a 9-node CockroachDB cluster across 3 regions on localhost using Docker. This is especially useful for testing, training, and development work.
The instructions assume you are running Linux or macOS with Docker installed, although they should also work on Windows using Cygwin.
Below is the high-level architecture. Each region hosts 3 nodes:
- region us-west-2 hosts nodes roach-seattle-1|2|3
- region us-east-1 hosts nodes roach-newyork-1|2|3
- region eu-west-1 hosts nodes roach-london-1|2|3
Setup
Docker resources
It's important to set up Docker with enough resources to run the cluster and the workload you'll be running on top of it. Everyone's environment is different, so this is for reference only.
On my laptop, I have allocated 12 CPUs and 20 GB RAM to Docker. Ensure you have a similar profile; the default 2 CPUs won't be sufficient to run the cluster flawlessly. I use Docker with colima:
$ colima list
PROFILE STATUS ARCH CPUS MEMORY DISK RUNTIME ADDRESS
default Running x86_64 12 20GiB 60GiB docker
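If you use colima as well, the snippet below is one way to give the default profile a similar amount of resources. This is just a sketch with the sizes I use; adjust them to your machine, and note that an existing profile may need to be deleted and recreated before new sizes take effect.
# (re)start the default colima profile with more resources
colima stop
colima start --cpu 12 --memory 20 --disk 60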
Dockerfile
Create a custom crdb image. We need to add the iproute2 package, which is required to simulate the latency between the cluster nodes. Save the file locally as 'Dockerfile'.
FROM ubuntu
RUN apt update \
&& apt install -y wget iproute2 \
&& wget https://binaries.cockroachdb.com/cockroach-latest.linux-amd64.tgz \
&& tar xvf cockroach-latest.linux-amd64.tgz \
&& mv cockroach-*.linux-amd64 cockroach \
&& rm -rf cockroach-latest.linux-amd64.tgz
WORKDIR cockroach
ENTRYPOINT ["/cockroach/cockroach"]
Build the image with tag name crdb.
docker build -t crdb .
Verify the image is available to Docker
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
crdb latest 6e9fa4357b6f 9 minutes ago 406MB
ubuntu latest bf3dc08bfed0 3 days ago 76.2MB
haproxy 1.7 41ed9a434c27 2 years ago 83MB
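As an optional sanity check, you can run the image once; since the entrypoint is the cockroach binary, any argument is passed straight to it.
# should print the CockroachDB version baked into the image
docker run --rm crdb version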
Good job, the image is ready!
Build the cluster
Create the required networks. We create 1 network for each region, plus 1 network for each inter-regional connection.
# region networks
docker network create --driver=bridge --subnet=172.27.0.0/16 --ip-range=172.27.0.0/24 --gateway=172.27.0.1 us-west-2-net
docker network create --driver=bridge --subnet=172.28.0.0/16 --ip-range=172.28.0.0/24 --gateway=172.28.0.1 us-east-1-net
docker network create --driver=bridge --subnet=172.29.0.0/16 --ip-range=172.29.0.0/24 --gateway=172.29.0.1 eu-west-1-net
# inter-regional networks
docker network create --driver=bridge --subnet=172.30.0.0/16 --ip-range=172.30.0.0/24 --gateway=172.30.0.1 uswest-useast-net
docker network create --driver=bridge --subnet=172.31.0.0/16 --ip-range=172.31.0.0/24 --gateway=172.31.0.1 useast-euwest-net
docker network create --driver=bridge --subnet=172.32.0.0/16 --ip-range=172.32.0.0/24 --gateway=172.32.0.1 uswest-euwest-net
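As a quick check, list the networks and confirm the six new bridges exist.
# you should see the 3 region networks and the 3 inter-regional networks
docker network ls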
Each node is associated with its own region network, which will attach to the eth0 NIC of the Docker instance. We also specify the node IP address with the --ip flag and the IP addresses of all nodes in its region using the --add-host flag. This creates entries in the Docker instance's /etc/hosts file, which takes precedence over DNS lookups. It will become clear later why this is important.
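For example, the three --add-host flags passed to roach-seattle-1 (see the docker run commands below) produce /etc/hosts entries similar to these:
# excerpt of /etc/hosts inside roach-seattle-1
172.27.0.11   roach-seattle-1
172.27.0.12   roach-seattle-2
172.27.0.13   roach-seattle-3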
Create the haproxy.cfg file for the HAProxy instance in each region.
# us-east-1
mkdir -p data/us-east-1
cat - >data/us-east-1/haproxy.cfg <<EOF
global
maxconn 4096
defaults
mode tcp
# Timeout values should be configured for your specific use.
# See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
timeout connect 10s
timeout client 10m
timeout server 10m
# TCP keep-alive on client side. Server already enables them.
option clitcpka
listen psql
bind :26257
mode tcp
balance roundrobin
option httpchk GET /health?ready=1
server cockroach1 roach-newyork-1:26257 check port 8080
server cockroach2 roach-newyork-2:26257 check port 8080
server cockroach3 roach-newyork-3:26257 check port 8080
EOF
# us-west-2
mkdir -p data/us-west-2
cat - >data/us-west-2/haproxy.cfg <<EOF
global
maxconn 4096
defaults
mode tcp
# Timeout values should be configured for your specific use.
# See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
timeout connect 10s
timeout client 10m
timeout server 10m
# TCP keep-alive on client side. Server already enables them.
option clitcpka
listen psql
bind :26257
mode tcp
balance roundrobin
option httpchk GET /health?ready=1
server cockroach4 roach-seattle-1:26257 check port 8080
server cockroach5 roach-seattle-2:26257 check port 8080
server cockroach6 roach-seattle-3:26257 check port 8080
EOF
# eu-west-1
mkdir -p data/eu-west-1
cat - >data/eu-west-1/haproxy.cfg <<EOF
global
maxconn 4096
defaults
mode tcp
# Timeout values should be configured for your specific use.
# See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
timeout connect 10s
timeout client 10m
timeout server 10m
# TCP keep-alive on client side. Server already enables them.
option clitcpka
listen psql
bind :26257
mode tcp
balance roundrobin
option httpchk GET /health?ready=1
server cockroach7 roach-london-1:26257 check port 8080
server cockroach8 roach-london-2:26257 check port 8080
server cockroach9 roach-london-3:26257 check port 8080
EOF
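Optionally, you can ask HAProxy itself to validate each generated file before starting the load balancers. The example below checks the us-east-1 config and exits; repeat for the other two regions.
docker run --rm --entrypoint haproxy -v `pwd`/data/us-east-1/:/usr/local/etc/haproxy:ro haproxy:1.7 -c -f /usr/local/etc/haproxy/haproxy.cfg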
Create the Docker containers
# Seattle
docker run -d --name=roach-seattle-1 --hostname=roach-seattle-1 --ip=172.27.0.11 --cap-add NET_ADMIN --net=us-west-2-net --add-host=roach-seattle-1:172.27.0.11 --add-host=roach-seattle-2:172.27.0.12 --add-host=roach-seattle-3:172.27.0.13 -p 8080:8080 -v "roach-seattle-1-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=us-west-2,zone=a
docker run -d --name=roach-seattle-2 --hostname=roach-seattle-2 --ip=172.27.0.12 --cap-add NET_ADMIN --net=us-west-2-net --add-host=roach-seattle-1:172.27.0.11 --add-host=roach-seattle-2:172.27.0.12 --add-host=roach-seattle-3:172.27.0.13 -p 8081:8080 -v "roach-seattle-2-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=us-west-2,zone=b
docker run -d --name=roach-seattle-3 --hostname=roach-seattle-3 --ip=172.27.0.13 --cap-add NET_ADMIN --net=us-west-2-net --add-host=roach-seattle-1:172.27.0.11 --add-host=roach-seattle-2:172.27.0.12 --add-host=roach-seattle-3:172.27.0.13 -p 8082:8080 -v "roach-seattle-3-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=us-west-2,zone=c
# Seattle HAProxy
docker run -d --name haproxy-seattle --ip=172.27.0.10 -p 26257:26257 --net=us-west-2-net -v `pwd`/data/us-west-2/:/usr/local/etc/haproxy:ro haproxy:1.7
# New York
docker run -d --name=roach-newyork-1 --hostname=roach-newyork-1 --ip=172.28.0.11 --cap-add NET_ADMIN --net=us-east-1-net --add-host=roach-newyork-1:172.28.0.11 --add-host=roach-newyork-2:172.28.0.12 --add-host=roach-newyork-3:172.28.0.13 -p 8180:8080 -v "roach-newyork-1-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=us-east-1,zone=a
docker run -d --name=roach-newyork-2 --hostname=roach-newyork-2 --ip=172.28.0.12 --cap-add NET_ADMIN --net=us-east-1-net --add-host=roach-newyork-1:172.28.0.11 --add-host=roach-newyork-2:172.28.0.12 --add-host=roach-newyork-3:172.28.0.13 -p 8181:8080 -v "roach-newyork-2-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=us-east-1,zone=b
docker run -d --name=roach-newyork-3 --hostname=roach-newyork-3 --ip=172.28.0.13 --cap-add NET_ADMIN --net=us-east-1-net --add-host=roach-newyork-1:172.28.0.11 --add-host=roach-newyork-2:172.28.0.12 --add-host=roach-newyork-3:172.28.0.13 -p 8182:8080 -v "roach-newyork-3-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=us-east-1,zone=c
# New York HAProxy
docker run -d --name haproxy-newyork --ip=172.28.0.10 -p 26258:26257 --net=us-east-1-net -v `pwd`/data/us-east-1/:/usr/local/etc/haproxy:ro haproxy:1.7
# London
docker run -d --name=roach-london-1 --hostname=roach-london-1 --ip=172.29.0.11 --cap-add NET_ADMIN --net=eu-west-1-net --add-host=roach-london-1:172.29.0.11 --add-host=roach-london-2:172.29.0.12 --add-host=roach-london-3:172.29.0.13 -p 8280:8080 -v "roach-london-1-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=eu-west-1,zone=a
docker run -d --name=roach-london-2 --hostname=roach-london-2 --ip=172.29.0.12 --cap-add NET_ADMIN --net=eu-west-1-net --add-host=roach-london-1:172.29.0.11 --add-host=roach-london-2:172.29.0.12 --add-host=roach-london-3:172.29.0.13 -p 8281:8080 -v "roach-london-2-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=eu-west-1,zone=b
docker run -d --name=roach-london-3 --hostname=roach-london-3 --ip=172.29.0.13 --cap-add NET_ADMIN --net=eu-west-1-net --add-host=roach-london-1:172.29.0.11 --add-host=roach-london-2:172.29.0.12 --add-host=roach-london-3:172.29.0.13 -p 8282:8080 -v "roach-london-3-data:/cockroach/cockroach-data" crdb start --insecure --join=roach-seattle-1,roach-newyork-1,roach-london-1 --locality=region=eu-west-1,zone=c
# London HAProxy
docker run -d --name haproxy-london --ip=172.29.0.10 -p 26259:26257 --net=eu-west-1-net -v `pwd`/data/eu-west-1/:/usr/local/etc/haproxy:ro haproxy:1.7
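Confirm that all 12 containers (9 CockroachDB nodes plus 3 HAProxy instances) are up.
docker ps --format 'table {{.Names}}\t{{.Status}}'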
Initialize the cluster
docker exec -it roach-newyork-1 ./cockroach init --insecure
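After the init step completes, you can check that all 9 nodes have joined the cluster.
docker exec -it roach-newyork-1 ./cockroach node status --insecure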
We then attach each node to the inter-regional networks. These networks will attach to new NICs, eth1 and eth2. We then use tc qdisc to add an arbitrary latency to each new NIC.
Connectivity between nodes in the same region goes through the region network, over eth0, while connectivity between nodes in different regions goes through the inter-regional networks, over eth1 and eth2.
Note: with the connection to the inter-regional networks, the Docker instance's internal DNS sometimes gets scrambled: issuing, say, nslookup roach-seattle-1 from host roach-seattle-2 may resolve to an IP address from either the in-region network or one of the inter-regional networks. If the hostname does not resolve to the in-region network IP, traffic goes through eth1 or eth2, which have the latency applied, making in-region connectivity look very slow. To avoid this problem, we added static entries to each node's /etc/hosts file (via the --add-host flags above). This makes sure that in-region hostnames resolve to the in-region IP addresses, forcing the connection to go over eth0 instead of eth1 or eth2.
# Seattle
for j in 1 2 3
do
docker network connect uswest-useast-net roach-seattle-$j
docker network connect uswest-euwest-net roach-seattle-$j
docker exec roach-seattle-$j tc qdisc add dev eth1 root netem delay 30ms
docker exec roach-seattle-$j tc qdisc add dev eth2 root netem delay 90ms
done
# New York
for j in 1 2 3
do
docker network connect uswest-useast-net roach-newyork-$j
docker network connect useast-euwest-net roach-newyork-$j
docker exec roach-newyork-$j tc qdisc add dev eth1 root netem delay 32ms
docker exec roach-newyork-$j tc qdisc add dev eth2 root netem delay 60ms
done
# London
for j in 1 2 3
do
docker network connect useast-euwest-net roach-london-$j
docker network connect uswest-euwest-net roach-london-$j
docker exec roach-london-$j tc qdisc add dev eth1 root netem delay 62ms
docker exec roach-london-$j tc qdisc add dev eth2 root netem delay 88ms
done
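You can verify both the latency rules and the name resolution from inside any node; tc is available because we added iproute2 to the image, and getent honors /etc/hosts.
# the netem delay should show up on eth1 and eth2
docker exec roach-seattle-1 tc qdisc show dev eth1
docker exec roach-seattle-1 tc qdisc show dev eth2
# in-region names should still resolve to the 172.27.0.x addresses (eth0)
docker exec roach-seattle-2 getent hosts roach-seattle-1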
Cluster configuration
You will require an Enterprise license to unlock some of the features described below, like the Map view. You can request a Trial license or, alternatively, just skip the license registration step - the deployment will still succeed.
Open a SQL shell. You can download the cockroach binary, which includes a built-in SQL client, or, thanks to CockroachDB's compliance with the PostgreSQL wire protocol, you can use the psql client.
# ----------------------------
# ports mapping:
# 26257: haproxy-seattle
# 26258: haproxy-newyork
# 26259: haproxy-london
# ----------------------------
# use cockroach sql, defaults to localhost:26257
cockroach sql --insecure
# or use the --url param for another host:
cockroach sql --url "postgresql://localhost:26258/defaultdb?sslmode=disable"
# or use psql
psql -h localhost -p 26257 -U root defaultdb
Run the SQL statements below:
-- let the map know the location of the regions
UPSERT into system.locations VALUES
('region', 'us-east-1', 37.478397, -76.453077),
('region', 'us-west-2', 43.804133, -120.554201),
('region', 'eu-west-1', 53.142367, -7.692054);
SET CLUSTER SETTING cluster.organization = 'Your Company Name';
-- skip below if you don't have a Trial or Enterprise license
SET CLUSTER SETTING enterprise.license = 'xxxx-yyyy-zzzz';
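Optionally, confirm the locations were stored:
-- the three upserted regions should appear in the output
SELECT * FROM system.locations;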
At this point you should be able to view the CockroachDB Admin UI at http://localhost:8080. Check the map and the latency table.
Congratulations, you are now ready to start your dev work on a simulated multi-region deployment!
Clean up
Stop and remove the containers, delete the data volumes, and delete the network bridges.
for i in seattle newyork london
do
for j in 1 2 3
do
docker stop roach-$i-$j
docker rm roach-$i-$j
docker volume rm roach-$i-$j-data
done
# also stop and remove the region's HAProxy container, otherwise the network removal below fails
docker stop haproxy-$i
docker rm haproxy-$i
done
docker network rm us-east-1-net us-west-2-net eu-west-1-net uswest-useast-net useast-euwest-net uswest-euwest-net
Top comments (5)
Hi Fabio!
Is it possible to run cockroach demo in one Docker container? I'm facing node connectivity issues even when using --network=host.
Sure, this is a very simple way to bring up a single-node CockroachDB cluster:
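(A minimal sketch, assuming the official cockroachdb/cockroach image; roach-single is just a placeholder container name.)
docker run -d --name=roach-single -p 26257:26257 -p 8080:8080 cockroachdb/cockroach start-single-node --insecure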
Then open the SQL prompt with:
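(Again a sketch, using the placeholder container name from above.)
docker exec -it roach-single ./cockroach sql --insecure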
Open the DB Console at localhost:8080
Yes, start-single-node works fine, but we will have a multi-region setup in production and I wanted to run the "cockroach demo" command in a container. Is it possible? It works locally without a container, but I cannot get it running in Docker. If I run it with the --logtostderr flag, it looks like there are connectivity issues between the nodes inside the container:
docker run cockroachdb/cockroach demo --nodes=3 --no-example-database --logtostderr
Interesting, I don't know off the top of my head; I invite you to join our Community Slack channel: cockroachlabs.com/join-community/