Sergey Nikolaev

Originally published at manticoresearch.com

Replication: cluster creation, joining, updating table settings

About me

Hello, I'm Mike.
I recently started working at Manticore as a Developer Advocate. I am someone not completely distant from IT, but I'm catching up with modern technologies. In this blog, I'll share my experiences and what I learn about Manticore. I plan to document my journey in a diary format, explaining what Manticore is and how to use it. Let's discover how things work together, identify issues, and engage with developers in real time.


This is one of my first blog posts. If you are interested in learning about Manticore with me, stay tuned for updates.



Replication

What is the purpose of replication in general?

In the previous article, we updated the full-text search settings by replacing the wordforms file. While doing that, our table was unavailable. In the context of our store example, the downtime was brief — only a few seconds. For small projects, such a brief interruption might be a negligible price for enhanced convenience. However, as our store expands with more products and customers, changes to server settings could now result in system downtimes lasting several hours. The larger the database, the longer the reindexing process. Although reindexing with Manticore takes just a couple of hours — not days or weeks — we prefer to avoid even this short delay. Additionally, what would happen if something were to occur to our sole server? Or if the number of clients becomes too large for one server? All the benefits of a high-speed search engine would be lost. Consequently, we are now considering creating multiple copies of the database that operate simultaneously and in parallel. This setup would ensure that when data is written to one server, it is automatically replicated across other connected servers or nodes.

In Manticore, replication between nodes is implemented through the Galera library. Galera uses synchronous multi-master replication, which provides high availability and fault tolerance for database clusters. When a new record is added on one of the servers (nodes), the change is immediately transmitted to all connected nodes. The process includes three phases: writing to the local transaction log on the source node, replicating the changes to the other nodes, and confirming receipt of the data by all nodes. The transaction is applied only after confirmation has been received from every node in the cluster, which ensures data consistency across all of them. For the user, these processes remain invisible, and new data is accessible from any node immediately after the transaction completes successfully on that node.

Initial setup

At the moment, our container from the previous article is working fine, but if it stops and gets deleted, the data will be lost forever. The Manticore Docker guide recommends mapping the data directory to a folder outside the container. Additionally, the configuration we made last time did not include forwarding the binary port 9312 needed for replication. Let's fix this by creating a snapshot of the folder from the running container and launching a new one with the correct port and storage settings.

First, we need to ensure data integrity. We enter the container, log into Manticore and "freeze" the table, so all data that might be in memory will be reliably moved to the disk, and nothing will change on the disk during copying.
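Assuming the container from the previous article is named Manticore (the same name used in the docker cp command below), entering it and opening the MySQL client might look like this:

docker exec -it Manticore mysql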

FREEZE products;

Next, copy the data directory out of the container:

docker cp Manticore:/var/lib/manticore .

The command consists of three parts:

  • cp - copy,
  • Manticore:/var/lib/manticore - the container name and the path to the folder inside it,
  • . - the local path where to copy, in this case, the current directory.

To let the current container continue working as before, we "unfreeze" the table:

UNFREEZE products;

Now, let's create a new container with the settings we need:

docker run -e EXTRA=1 --name manticore_new -v $(pwd)/manticore:/var/lib/manticore -p 10306:9306 -p 10312:9312 -d manticoresearch/manticore
docker exec -it manticore_new mysql

As a result, we have a clone of our container with forwarded ports and the database file location on the server outside the container. Let's check if everything is in place in our new container:

(Screenshot: checking the tables and data in the new container.)

Great, all the data has been transferred, including the configuration and the wordforms file, and now we will create a cluster based on this container.

By the way, a little about the wordforms file: the folder we copied contains a copy of it. I recommend copying it out, because, as practice has shown, you may need to edit it later, and working with the file right in the table's folder is a bad idea that can eventually lead to problems. I made a copy outside the database folder: cp manticore/products/wf_pet_products.txt wf_pet_products.txt. And some good news: I talked to my colleagues, and soon you won't need to manually move the wordforms file when using mysqldump, as everything will automatically be saved in the dump. Here is the task on GitHub.

Creating our first cluster

To set up our new cluster, no complex operations are required: it is enough to create a cluster with a name using one command and then attach the necessary table to it. After that, we just need to check how everything is set up.

  • To add a cluster, use the CREATE CLUSTER command.
  • To add a table to the cluster, use the ALTER CLUSTER ADD command. It's important to note that clustering, replication, and other Manticore features are available only for real-time tables!

So, let's create our first cluster and immediately add our table to it:

create cluster pet_shop;
alter cluster pet_shop add products;

Now let's check what we got. To do this, use the show status command; it returns a lot of information, so to avoid getting lost in it, you can filter it with the like operator:

show status like '%cluster%'

(Screenshot: output of show status like '%cluster%'.)

At the moment, we are interested in the following lines: cluster_name, cluster_pet_shop_status, cluster_pet_shop_indexes. These show the name of the cluster, the status (if it says Primary, then everything is good), and the tables that are currently in the cluster.
We should also note the line cluster_pet_shop_incoming_addresses. In my setup, it looks like this: 172.17.0.5:9312,172.17.0.5:9315:replication. We will need the address 172.17.0.5:9312. We have port 9312 mapped to port 10312 outside of Docker, but in the example, we will run a new node within the same Docker network 172.17.0.0, making port usage simpler.
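For reference, the relevant part of the output on my setup looked roughly like this (your addresses will differ):

cluster_name: pet_shop
cluster_pet_shop_status: primary
cluster_pet_shop_indexes: products
cluster_pet_shop_incoming_addresses: 172.17.0.5:9312,172.17.0.5:9315:replication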

Technically, we could have used the original container from the article about wordforms by just adding the cluster to it and connecting the table. Then we could create a new container with external storage, and replication would copy everything to it. But then I wouldn't have shown another way to solve the problem: saving a dump by copying the folder out of the container... =)
We make our own bed and then lie in it.

The first node is already set up, no more additional actions are needed. Simple? It's a breeze!

Adding another node to the cluster

First, we need to start another container with Manticore. We won't transfer anything to it, just connect it to the existing cluster. The local storage must be different (if you are doing this on the same server, the mapped folders must not be the same). A reminder about ports: we have already used 9306, 10306, and 10312, so let's assign different ones, for example, 11306 and 11312.

We create another container with an instance of Manticore, naming it manticore_new_1. We specify ports 11306 and 11312, for the volume we specify manticore_new_1 (the local folder must already exist), and don't forget to set the environment variable EXTRA=1.

(Screenshot: creating the container via the Docker GUI.)

Or all the same in one command:

docker run -e EXTRA=1 --name manticore_new_1 -v $(pwd)/manticore_new_1:/var/lib/manticore -p 11306:9306 -p 11312:9312 -d manticoresearch/manticore

Log in through the MySQL client. Here's a nuance: if you use a local MySQL client to connect, not the one inside the container, use the external port you specified when creating the node (11306). If you use the Docker interface and enter through the container terminal (docker exec), use Manticore's default port, 9306. Either way, connect and check whether there are any tables (show tables). The result is expectedly empty, as we just created an empty container with Manticore. Now connect it to the existing cluster: join cluster pet_shop at '172.17.0.5:9312';
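Put together, the whole sequence on the new node looks like this (the count check at the end is my own addition to confirm the records match):

show tables;
join cluster pet_shop at '172.17.0.5:9312';
show tables;
select count(*) from products;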

(Screenshot: the second node after joining the cluster.)

For clarity, I changed the console color for the second node.

As we can see, the table has been added, the number of records matches the original node, and the configuration of the stemmer and wordforms file is correct.
Basically, that's it. The cluster is assembled and working, data is being transferred between nodes.

Important note: if the node you are connecting to the cluster has tables with the same names as tables in the cluster, those tables on the node will be overwritten with the data from the cluster. Cluster data takes priority over local tables, so if you are connecting an existing node that already holds some data, make sure its table names differ from those in the cluster. And as always, in any risky situation, make a backup.

Managing data in the cluster

When working with table data in the cluster, there are some differences. The insert command now requires, besides the table name, the name of its cluster: insert into <cluster name>:<table name> (<fields>) values (<values>). Don't forget to update this command in your client.

Let's add another record while being in the newly created node:

insert into pet_shop:products (name, info, price, avl) values ('Aquarium ship', 'Decorative ship model for aquarium', 6, 1);

(Screenshot: the insert result on the new node.)

The result shows the record was added, but what about the other node?

(Screenshot: the same record found on the second node.)

Everything is in place here too!
Let's try updating the data:

mysql> update products set price = 8.0 where id = 3317338896206921730;
ERROR 1064 (42000): table products: table 'products' is a part of cluster 'pet_shop', use 'pet_shop:products'

For updating and especially deleting records, we now also need to include the cluster name along with the table name:

update pet_shop:products set price = 8 where id = 3317338896206921730;
Query OK, 1 row affected (0.01 sec)

Thus, data is now transferred between nodes automatically and without much complexity, except for a small change in the write commands.

Changing settings of a replicated table

What if we need to change the table configuration or delete it, for example, to update the wordforms file? In the previous article, we had to delete and recreate the table, leaving users without server responses for some time. In that example, the time needed to update the settings was very short because the table was small. But with larger data sets, tables with millions and billions of records, updating and reindexing can take a long time, often measured in hours. To ensure uninterrupted service for Manticore-based applications, there are distributed tables, but we will discuss those in another article.

For now, we have a replicated database across several nodes with the products table. We can change this table's configuration by using the cluster name prefix, but we can't delete it, even with the prefix. To change the settings of a replicated table, first, disconnect it from the cluster: ALTER CLUSTER <cluster name> DROP <table name>. This will remove the table only from the cluster, not from the database. After the table is detached from the cluster, updating data from the application won't be possible because it references the cluster (e.g., insert into pet_shop:products ...), and the table is no longer in it (the application should handle this situation). Now we can delete or reconfigure the table.

For example, let's update the table configuration: switch from a stemmer to a lemmatizer. Here are the steps:

  • Disconnect the table from the cluster.
  • Change the morphology in the table from stemmer to lemmatizer.
  • Reload the data into the table.
  • Restore the table in the cluster.
  • Check on the second node.

Disconnecting the table from the cluster:

ALTER CLUSTER pet_shop DROP products;

Now the table on all nodes is disconnected from the cluster, and its schema and settings can be modified. The idea is that on one node we perform the technical work while the other continues serving users' select queries. As a protective measure, adding new records is no longer possible, since the application uses commands in the format <cluster>:<table> and this table is no longer in the cluster.

update pet_shop:products set price = 9 where id = 3317338896206921730;
ERROR 1064 (42000): table products: table 'products' is not in any cluster, use just 'products'

After we have detached the table from the cluster, let's try to perform a select query:
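For example, a simple query along these lines (the exact query in my screenshot may differ; any select will do):

select id, name, price from products limit 3;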

(Screenshot: select results from the detached table.)

As we can see, the query is processed, the data is provided, and the end user should be happy.

Now let's modify the morphology from stemmer to lemmatizer, reindex the records, and reconnect everything. In the previous article, we replaced the wordforms file and stemmer using some rough methods. Here we will use more civilized tools. All operations to replace the wordforms file or change the morphology used in the table can be done with one command: ALTER TABLE <table name> morphology='<morph type>'. Let's replace our stemmer with a lemmatizer:

ALTER TABLE products morphology='lemmatize_en_all';

After changing any parameters related to text preprocessing, it is necessary to reindex all existing records so that the new morphology and other tokenization settings are applied to old documents as well:

mysqldump -P9306 -h0 --replace --skip-comments manticore products | mysql -P9306 -h0;

Here we use the mysqldump technique, redirecting the dump output directly into Manticore via MySQL. The --replace option forces mysqldump to generate REPLACE commands instead of INSERT, allowing us to "reload" the entire table at once. Keep in mind that the execution time of this command for a large table or on a weak server can be long, but this does not scare us much since we have a backup node that is currently serving user requests and the mysqldump command does not block the table.

After performing this simple table reconfiguration with the products table, we get a new version:

(Screenshot: the products table with the new morphology settings.)
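If you'd rather not rely on screenshots, one quick way to inspect the applied settings yourself (my own suggestion, not a step from the original walkthrough):

show create table products;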

The new settings and all the data are in place; now let's add the table back to the cluster:

ALTER CLUSTER pet_shop ADD products;

That's it: the table is now updated on all servers, and the data remained available to users from the second node the whole time we were reconfiguring and checking that everything worked correctly.

(Screenshot: the updated table on the second node.)

It is important to pay attention to the recovery of the entire cluster if all nodes fail — with the wrong recovery sequence, there is a chance of losing all its settings. Detailed recovery methods are described in the documentation.
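If I'm reading the documentation right, the key tool here is the --new-cluster option of searchd: after a full shutdown, the node that was stopped last is started first with this option to re-bootstrap the cluster (there is also --new-cluster-force for when Manticore can't determine the right node on its own). Treat this as a pointer to the docs rather than a full recipe:

searchd --new-cluster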

By the way, you can easily play with replication in our interactive course play.manticoresearch.com.

That's a wrap for today! Smooth sailing ahead! This was Mike, wishing you good fortune!
