Obisike Treasure for Hackmamba


How to create a Docker Swarm of Appwrite and Swarmpit containers

In today's dynamic software development landscape, the demand for efficient and scalable infrastructure management has never been greater. Docker Swarm, Appwrite, and Swarmpit are among the best tools for meeting it, combining flexibility with strong performance.

By leveraging the capabilities of these technologies, organizations can achieve increased scalability, reliability, and productivity in their application deployment and management processes. These technologies facilitate seamless containerization, faster development with pre-built services, and easier cluster administration.

In this tutorial, you'll learn how to deploy your Appwrite containers using Docker Swarm and manage the containers using Swarmpit.

Prerequisites

To get the most out of this tutorial, ensure you have the following:

  • Basic knowledge of Docker
  • An ARM-based personal computer, such as the M1 MacBook
  • Terminal

Set up the virtual machine servers

First, you’ll set up two virtual machines, Manager and Worker, which will act as the servers for the swarm. To proceed, do the following:

  • Download and install the UTM virtualizer from its official website
  • Download Ubuntu Server 22.04.2 LTS for ARM from its official website

After that, follow the instructions in this documentation to install the Ubuntu server on the virtual machines.

A screenshot showing the servers created

During the Ubuntu server installation, you will have the option to install Docker and OpenSSH. It is recommended that you select both.

A screenshot showing the manager server and the worker server

Once you have completed the installation, verify that Docker and OpenSSH were installed. To check the Docker version, run the following command in the terminal:

docker -v

If Docker fails to install during the Ubuntu installation, you can refer to the documentation for installation instructions.
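As a fallback, one common approach is Docker's convenience script (a quick sketch; for production setups, follow the official apt-based instructions):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh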

For OpenSSH, you can check the SSH service status by executing the command below:

systemctl status sshd.service
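
If the service is not running, you can usually start and enable it with the following command (on Ubuntu the unit is typically named ssh, with sshd as an alias):

sudo systemctl enable --now ssh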

Once you've completed that, execute the following command to install Git:

sudo apt install git

Set up the network file system (NFS) mount

As the services will be deployed across multiple nodes, certain resources, such as volumes, must be shared across all nodes. An NFS mount makes a directory on one node available to the others.

Accessing the nodes via SSH
To ensure smooth server access, start by retrieving each server's IP address using the following command:

ip addr | grep "inet 192"
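
The grep above assumes the virtual machines received addresses in the 192.168.x range, which is typical for UTM's shared network. If nothing matches, you can list every IPv4 address assigned to the server instead:

ip -4 addr show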

A screenshot showing the IP address of the worker

Repeat this command on both the manager and worker nodes to retrieve their IP addresses.

Once you have obtained both the manager and worker IP addresses, follow these instructions to SSH into the servers from the virtual machine host computer.

Open two terminal sessions on the virtual machine host computer; in the first terminal, execute the following command to connect to the manager server:

ssh manager@<manager_ip>

For the second terminal, run the command:

ssh worker@<worker_ip>

This connects to the worker server.
Replace both <manager_ip> and <worker_ip> with their corresponding IP addresses.

A screenshot showing the successful SSH into the manager and worker node

Setting up the NFS
Next, follow these steps in the manager's SSH terminal to set up the NFS shared directory.
First, install the NFS server package on the manager node by executing the command:

sudo apt install nfs-kernel-server

After installing the NFS server package, you need to create the shared directory on the manager node using the following command:

sudo mkdir ./nfs/app -p

To ensure security, NFS maps any root actions performed on the worker (client) to the nobody:nogroup credentials. To align the directory ownership with these credentials, execute the following command on the manager node terminal:

sudo chown nobody:nogroup ./nfs/app

Afterward, open the /etc/exports file on the manager node with root privileges:

sudo nano /etc/exports

Within the /etc/exports file, create a configuration line for each directory you want to share. Replace <worker_ip> with the actual IP address of the worker node. For example:

/home/manager/nfs/app    <worker_ip>(rw,sync,no_subtree_check)

In this configuration line, the rw option grants read and write access to the client, sync ensures changes are written to disk before replying, and no_subtree_check disables subtree checking to prevent issues when files are renamed while open on the client.

Save and close the /etc/exports file.
Finally, restart the NFS server to make the shares available:

sudo systemctl restart nfs-kernel-server
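
If you prefer to double-check before moving on, you can list the directories the server is currently exporting:

sudo exportfs -v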

If you have a firewall, you must adjust the settings to allow the worker node to access the files.
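For example, if the servers use ufw (Ubuntu's default firewall front end), a rule along the following lines would let the worker reach the NFS server; adapt it to whatever firewall you actually run:

sudo ufw allow from <worker_ip> to any port nfs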

After that, you need to mount this directory on the worker node. To do this, first install nfs-common on the worker node using the command:

sudo apt install nfs-common

Then create the mount point and mount the directory using the following commands:

sudo mkdir -p /nfs/app
sudo mount <manager_ip>:/home/manager/nfs/app /nfs/app

Replace <manager_ip> with the actual manager IP. Once that is done, you can run this command to check that the NFS shared directory has been mounted.

df -h

A screenshot showing the mounted shared directory
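
Note that a mount created this way does not survive a reboot. If you want it to persist, one common approach is an /etc/fstab entry on the worker node, roughly like this (replace <manager_ip> as before):

<manager_ip>:/home/manager/nfs/app  /nfs/app  nfs  defaults  0  0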

Setting up the swarm

Docker Swarm provides a native clustering and orchestration solution for Docker, enabling efficient distribution and scalability of services across multiple nodes.

To initiate Docker Swarm on the manager node, execute the following command:

sudo docker swarm init --advertise-addr <manager_ip>

Running this command on the manager node activates Docker Swarm and generates a join token. The token allows worker nodes to join the swarm, letting you scale and distribute services across them.

A screenshot showing the join token and the join command

Copy the command and run it on the worker node to join the worker node to the swarm.
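
If the join command has scrolled out of view, you can reprint it on the manager node; the command executed on the worker has roughly this shape (the token and port come from your own swarm init output):

# Reprint the worker join command on the manager node
sudo docker swarm join-token worker

# General shape of the command run on the worker node
sudo docker swarm join --token <token> <manager_ip>:2377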

A screenshot showing that the worker node has joined the swarm

Once this succeeds, you can verify on the manager node that the worker has been added to the swarm by running the command:

sudo docker node ls

A screenshot showing the nodes

Creating and deploying the Appwrite services on the swarm

To do this, you’ll use the docker stack command, which deploys multiple Docker services from a Compose (YAML) file.
First, download the Appwrite docker-compose.yml from Appwrite’s documentation and modify its content to be Swarm-compatible.

After downloading the file, you can open it using any text editor you prefer.

Preparing the Appwrite swarm file
To proceed, use the shared directories as volumes: Docker Swarm does not share volumes between nodes by default, and the previously created NFS directory lets you share them across nodes.

To make this happen, take the following steps.

On the manager node, change into the shared directory:

cd ./nfs/app

This step is important because the compose file's volume paths are relative to this directory. Afterwards, you can create the necessary volume directories by executing the following command:

mkdir -p ./appwrite/{mariadb,redis,cache,uploads,certificates,functions,influxdb,config,builds,app,src,dev,docs,tests,public,appwrite} 

After creating the volume directories, update the downloaded compose file by deleting the named volume definitions (the top-level volumes block).

A screenshot showing the volumes specification to be deleted

Next, replace each service's volume specification with its corresponding shared-directory path, as indicated in the table below (an example follows the table):

Old → New
appwrite-config:/storage/config:ro → ./appwrite/config:/storage/config:ro
appwrite-certificates:/storage/certificates:ro → ./appwrite/certificates:/storage/certificates:ro
appwrite-uploads:/storage/uploads:rw → ./appwrite/uploads:/storage/uploads:rw
appwrite-cache:/storage/cache:rw → ./appwrite/cache:/storage/cache:rw
appwrite-certificates:/storage/certificates:rw → ./appwrite/certificates:/storage/certificates:rw
appwrite-functions:/storage/functions:rw → ./appwrite/functions:/storage/functions:rw
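
As an illustration, after this substitution the volumes section of the main appwrite service would look roughly like the following; which volume lines each service carries depends on the compose file version you downloaded:

    volumes:
      - ./appwrite/uploads:/storage/uploads:rw
      - ./appwrite/cache:/storage/cache:rw
      - ./appwrite/config:/storage/config:rw
      - ./appwrite/certificates:/storage/certificates:rw
      - ./appwrite/functions:/storage/functions:rw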

Next, to optimize request handling and resource allocation in the Docker Swarm cluster, restrict the Traefik service to the manager node. This centralizes routing and load balancing, avoiding conflicts and simplifying management.

Add the following configuration below the Traefik service's port specification to enforce this restriction.

    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.role==manager"

The deploy option specifies that only one replica is created and that the service is placed on the manager node.

Afterward, you need to include the environment variables. Typically, you would specify each service's environment variables individually; here, you can simplify the process by using a single environment file containing all the required secrets. Add this file to each service that needs environment variables with the following specification:

    env_file:
      - .env.appwrite

Next, eliminate any unnecessary specifications in the file to ensure Swarm compatibility. Remove specifications such as x-logging and container_name, as they are incompatible with Swarm. Additionally, remove all occurrences of <<: *x-logging from the compose file.

A screenshot showing the x-logging specification.
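
For reference, the logging extension block to remove sits near the top of the file and looks roughly like this in recent Appwrite compose files (the exact options may differ in your version):

x-logging: &x-logging
  logging:
    driver: 'json-file'
    options:
      max-file: '5'
      max-size: '10m'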

The docker-compose file should now look like this

Creating the swarm file on the manager node
Once completed, you must copy the compose file to the manager node for deployment. To accomplish this, follow these steps.

First, on the manager node, run the command:

sudo nano /home/manager/nfs/app/appwrite.swarm.yml

Then copy the already modified file content from your text editor and paste it into the manager node’s nano editor interface.

A screenshot showing the pasted compose file on the manager node

Save the file.

Next, run the following command.

sudo nano /home/manager/nfs/app/.env.appwrite

After doing so, add this content.

Once you have completed all the previous steps, execute the following command on the manager node to deploy the services:

sudo docker stack deploy -c /home/manager/nfs/app/appwrite.swarm.yml appwrite

This command deploys the services as a stack named appwrite.

A screenshot showing the services deploying

The deployment takes a while, as the docker stack command pulls the images from Docker Hub before running the containers.

To list all services, execute:

sudo docker service ls

For more detailed information about a specific service, use:

sudo docker service ps --no-trunc <SERVICE_NAME>

These commands provide valuable insights into the status and details of the services running within the Docker Swarm cluster.

A screenshot showing the Appwrite services

Once the services are up and running, open the web browser on the virtual machine’s host computer and visit http://<manager_ip> to preview the Appwrite app.

A screenshot showing the appwrite login page

Create and deploy the Swarmpit services on the swarm

Once Appwrite has been successfully deployed, you can proceed to set up Swarmpit on the swarm.

To deploy Swarmpit, run the following command on the manager node:

sudo git clone https://github.com/swarmpit/swarmpit -b master && \
sudo docker stack deploy -c swarmpit/docker-compose.arm.yml swarmpit

A screenshot showing the swarmpit services being created
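
As with the Appwrite stack, you can check from the manager node that the Swarmpit services have started:

sudo docker stack services swarmpit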

Once the Swarmpit services have been deployed, open the browser on the VM’s host computer and visit http://<manager_ip>:888.

A screenshot showing swarmpit

Managing the swarm with Swarmpit

To proceed, create your first admin account on Swarmpit by entering your username and password, then click Create Admin.

A screenshot showing the entered username and password

After creating your account, the browser will take you to the dashboard where you can manage the deployed services.

A screenshot showing the swarmpit interface

Conclusion

Docker Swarm and Swarmpit provide numerous benefits when deploying Appwrite: they enable rapid, efficient infrastructure setup and management, and Swarmpit adds a user-friendly graphical interface on top of the swarm management capabilities.

By leveraging them, your Appwrite applications gain scalability, high availability, simplified updates, service discovery, load balancing, and centralized control, among other advantages.

