Working at a startup, we finally turned our attention away from fixing bugs in production and decided to put some work into standardizing the dev environment setup for our engineers.
We're setting up MySQL and Cassandra containers using docker-compose, and mounting the container volumes onto the host machine. This is important because it lets us persist the data in the database containers even if a container needs to be restarted or recreated for some reason.
Setting up env variables
Since we're creating a docker-compose file for use by the entire team, rather than a single developer, it would be nice to give everyone some control over where they place these files, where they mount volumes, and so on. A convenient way of doing this is by using environment variables in the template. Just define the variables in a .env file, and specify that file when running the compose command (documentation):
docker-compose --env-file ./compose.env up
This is what my env file looks like:
CONF_DIR=./config
VOL_DIR=./vols
INIT_DIR=./init
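With these variables defined, the matching directories need to exist on the host before the first `up`, since they're what the containers mount. A minimal sketch of creating that layout (the per-container subdirectory names follow the conventions used in the compose snippets below):

```shell
# Create the host directories referenced by the env file:
# CONF_DIR holds custom configuration, VOL_DIR holds persisted data,
# and INIT_DIR holds startup scripts.
mkdir -p ./config/mysql_local
mkdir -p ./vols/mysql_local ./vols/cassandra_local
mkdir -p ./init/mysql_local ./init/cassandra_local
```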
MySQL Container
mysql:
  image: mysql:5.7
  container_name: mysql_local
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: 'password'
  ports:
    - '3307:3306'
  volumes:
    - ${VOL_DIR}/mysql_local:/var/lib/mysql/
    - ${INIT_DIR}/mysql_local:/docker-entrypoint-initdb.d
    - ${CONF_DIR}/mysql_local:/etc/mysql/conf.d
Just a couple of things to note here:
I like mapping an unused host port like 3307 to 3306 on the container (MySQL's default port), as this allows the user to independently install MySQL on their machine, if they'd like to, without running into a port conflict because of our container.
I've mapped 3 separate volumes, so let's talk about them:
- /var/lib/mysql/ - This is the volume in which all data written into your MySQL container is stored.
- /etc/mysql/conf.d - Place your custom my.cnf file in the directory mapped here if you want your container to start with custom configuration.
- /docker-entrypoint-initdb.d - If you want to run any commands in your container on startup, put them in the volume mounted here as .sql files. For example, I use it to create databases, and users with permissions.
CREATE DATABASE my_test_db CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'local_user'@'%' IDENTIFIED by 'local_password';
GRANT EXECUTE ON `my_test_db`.* to local_user@`%`;
FLUSH PRIVILEGES;
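These statements just need to land as a `.sql` file inside the host directory mounted at `/docker-entrypoint-initdb.d`; the MySQL image's entrypoint runs any `.sql` files it finds there, in alphabetical order, the first time the container starts with an empty data directory. A quick sketch of placing them (the `01-init.sql` filename is only an illustration):

```shell
# Write the init statements into the directory that gets mounted
# at /docker-entrypoint-initdb.d inside the MySQL container.
mkdir -p ./init/mysql_local
cat > ./init/mysql_local/01-init.sql <<'EOF'
CREATE DATABASE my_test_db CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'local_user'@'%' IDENTIFIED by 'local_password';
GRANT EXECUTE ON `my_test_db`.* to local_user@`%`;
FLUSH PRIVILEGES;
EOF
```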
With this, we've created the compose script for our MySQL container, and set it up with a persistent mounted data volume, as well as a custom configuration file and initialization scripts.
Cassandra Container
cassandra:
  image: cassandra:3.11.2
  container_name: cassandra_local
  ports:
    - "9043:9042"
  volumes:
    - "${VOL_DIR}/cassandra_local:/var/lib/cassandra"
    - "${INIT_DIR}/cassandra_local/cassandra_init.sh:/cassandra_init.sh"
  command: "sh /cassandra_init.sh"
The setup for the Cassandra container is very similar. The only difference is that, as of right now, there is no easy way to trigger CQL commands on container startup the way there is for MySQL. In my case, I need to create a keyspace when the container is created. A workaround, as you can see in the container definition above, is to mount a script file into the container, and then use Docker's command keyword to override the default command with our custom script.
CQL="CREATE KEYSPACE IF NOT EXISTS test_tables WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 1};"
until echo "$CQL" | cqlsh; do
echo "cqlsh: Cassandra is unavailable to initialize - will retry later"
sleep 2
done &
exec /docker-entrypoint.sh "$@"
You'll notice a retry loop in the script; what it accomplishes is waiting for cqlsh to actually be ready before executing our custom CQL statement. Without it, the command would likely fail, as it would attempt to run the statement before cqlsh is available. Once the loop exits, we hand control back to Docker's default entrypoint script. (Credit to @veysiertekin for this solution at this GitHub issue.)
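The wait-and-retry shape of the script is a general pattern: keep probing until the service answers, then run the real initialization. A stripped-down sketch of the same pattern, with `cqlsh` replaced by a stand-in probe so it can run anywhere:

```shell
# Generic retry loop: keep probing until the "service" is ready.
# Here readiness is simulated by a flag file that appears after a delay.
rm -f /tmp/ready.flag
probe() { [ -f /tmp/ready.flag ]; }     # stand-in for: echo "$CQL" | cqlsh

( sleep 1; touch /tmp/ready.flag ) &    # simulate the service coming up

until probe; do
  echo "service unavailable - will retry"
  sleep 1
done
echo "service ready - running init"
```

In the real script the loop itself is backgrounded with `&` so that the container's main entrypoint can start immediately; the loop then fires the CQL as soon as Cassandra accepts connections.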
Running docker compose
docker-compose --env-file ./config/compose.env up -d
The simplest part of the process. All you have to do is run this command, specifying the env file we created in the first step, and using the -d option to run docker-compose in detached mode.
And that's it! After some time (might take longer if it's your first time pulling these images), you should be able to see the containers up and running by using docker ps
. 😀