<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Javier Vidal</title>
    <description>The latest articles on DEV Community by Javier Vidal (@jv).</description>
    <link>https://dev.to/jv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F493605%2Fd70971c4-2952-41c6-af07-369f5fa9d4c8.jpeg</url>
      <title>DEV Community: Javier Vidal</title>
      <link>https://dev.to/jv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jv"/>
    <language>en</language>
    <item>
      <title>Test Ansible playbooks with Vagrant</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Thu, 04 Feb 2021 18:15:40 +0000</pubDate>
      <link>https://dev.to/jv/test-ansible-playbooks-with-vagrant-47dj</link>
      <guid>https://dev.to/jv/test-ansible-playbooks-with-vagrant-47dj</guid>
      <description>&lt;p&gt;We normally use Ansible playbooks to configure servers in a cloud provider. When we develop those playbooks we need to test them locally. Thanks to Vagrant it is quite easy to start a virtual machine and run our playbooks against it.&lt;/p&gt;

&lt;p&gt;There are two ways to achieve this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/@yortuc/testing-ansible-playbooks-with-vagrant-722619986013"&gt;using Vagrant’s Ansible provisioning feature&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;using the virtual machine like any other remote server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm going to explain the second option here.&lt;/p&gt;

&lt;p&gt;First, let's create a virtual machine called, for example, &lt;code&gt;tau&lt;/code&gt;. The &lt;code&gt;Vagrantfile&lt;/code&gt; could be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="s1"&gt;'tau'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;debian&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;debian&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'debian/buster64'&lt;/span&gt;
    &lt;span class="n"&gt;debian&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="ss"&gt;:private_network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s1"&gt;'192.168.27.2'&lt;/span&gt;
    &lt;span class="n"&gt;debian&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'tau'&lt;/span&gt;
    &lt;span class="n"&gt;debian&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="s1"&gt;'virtualbox'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'2048'&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can start the server with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have to add it to Ansible's inventory, but first we need to know which SSH key Vagrant uses when we connect with &lt;code&gt;vagrant ssh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vagrant ssh-config
Host tau
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/javiervidal/test/.vagrant/machines/tau/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interesting: we can use &lt;code&gt;/Users/javiervidal/test/.vagrant/machines/tau/virtualbox/private_key&lt;/code&gt; as the key file in the inventory. We need to add a line like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tau ansible_host=192.168.27.2 ansible_port=22 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/Users/javiervidal/test/.vagrant/machines/tau/virtualbox/private_key ansible_python_interpreter=/usr/bin/python3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we can test that Ansible can connect to &lt;code&gt;tau&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible tau &lt;span class="nt"&gt;-m&lt;/span&gt; ping   
tau | SUCCESS &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"changed"&lt;/span&gt;: &lt;span class="nb"&gt;false&lt;/span&gt;,
    &lt;span class="s2"&gt;"ping"&lt;/span&gt;: &lt;span class="s2"&gt;"pong"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
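Once the ping succeeds, running a playbook against the VM works like against any other remote host. A hedged sketch; `inventory` and `site.yml` are placeholder names for this example, not from the post:

```shell
# Run a playbook against the Vagrant VM, limited to the 'tau' inventory host.
# 'inventory' and 'site.yml' are placeholder names for this sketch.
ansible-playbook -i inventory site.yml --limit tau
```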



&lt;p&gt;😀&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>vagrant</category>
    </item>
    <item>
      <title>ZooKeeper cluster with Docker Compose</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Tue, 26 Jan 2021 13:56:59 +0000</pubDate>
      <link>https://dev.to/jv/zookeeper-cluster-with-docker-compose-jml</link>
      <guid>https://dev.to/jv/zookeeper-cluster-with-docker-compose-jml</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/jv/zookeeper-cluster-in-vagrant-49e9"&gt;this post&lt;/a&gt; I talked about setting up a ZooKeeper cluster using Vagrant. In this post I will talk about doing it without having to launch 3 different virtual servers in our development environment thanks to &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;. We will need Docker and Docker Compose installed, of course.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;docker-compose.yml&lt;/code&gt; will look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.9"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;zk1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zk1&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zk1&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bitnami/zookeeper:3.6.2&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;21811:2181&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ALLOW_ANONYMOUS_LOGIN=yes&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVER_ID=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVERS=0.0.0.0:2888:3888,zk2:2888:3888,zk3:2888:3888&lt;/span&gt;
  &lt;span class="na"&gt;zk2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zk2&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zk2&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bitnami/zookeeper:3.6.2&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;21812:2181&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ALLOW_ANONYMOUS_LOGIN=yes&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVER_ID=2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVERS=zk1:2888:3888,0.0.0.0:2888:3888,zk3:2888:3888&lt;/span&gt;
  &lt;span class="na"&gt;zk3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zk3&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zk3&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bitnami/zookeeper:3.6.2&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;21813:2181&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ALLOW_ANONYMOUS_LOGIN=yes&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVER_ID=3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVERS=zk1:2888:3888,zk2:2888:3888,0.0.0.0:2888:3888&lt;/span&gt;
  &lt;span class="na"&gt;zoonavigator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zoonavigator&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;elkozmon/zoonavigator&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;9000:9000&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It sets up a 3-node cluster: &lt;code&gt;zk1&lt;/code&gt;, &lt;code&gt;zk2&lt;/code&gt; and &lt;code&gt;zk3&lt;/code&gt;. All we have to do is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt; zk1 zk2 zk3 zoonavigator
Pulling zk1 &lt;span class="o"&gt;(&lt;/span&gt;bitnami/zookeeper:3.6.2&lt;span class="o"&gt;)&lt;/span&gt;...
3.6.2: Pulling from bitnami/zookeeper
ff7c165d667c: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;68ff1df62e09: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;b2c65a5b1bba: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;299e03a29f26: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;ac359c31a869: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;73ff1a3aae4d: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;2ff27ed904ba: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;671b552f88a4: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;8a18f1c76d65: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;144c6a5e1dd4: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;6c8379a42a77: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;2c88ec00718c: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;ce2390504de6: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;Digest: sha256:4bfdd618105aad42160263e5586b9295aa949a932f9bf7736fe148328adc6e3a
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;bitnami/zookeeper:3.6.2
Pulling zoonavigator &lt;span class="o"&gt;(&lt;/span&gt;elkozmon/zoonavigator:&lt;span class="o"&gt;)&lt;/span&gt;...
latest: Pulling from elkozmon/zoonavigator
bb79b6b2107f: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;00028440d132: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;ebd07266fb43: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;33e3eef28e5d: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;d356e0493ed9: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;d20cf66b6cb3: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;0f036ba81390: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;4245cea4b993: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;2b8eb69a8e7b: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;Digest: sha256:a1089a133f116bd760d361d6b0c0b4cbe2fd28d9a81db13cb53547669e70e933
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;elkozmon/zoonavigator:latest
Creating zk3          ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating zoonavigator ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating zk1          ... &lt;span class="k"&gt;done
&lt;/span&gt;Creating zk2          ... &lt;span class="k"&gt;done&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can even access the ZooKeeper servers from the host machine thanks to port forwarding: &lt;code&gt;zk1&lt;/code&gt; is available through port 21811, &lt;code&gt;zk2&lt;/code&gt; through port 21812, and &lt;code&gt;zk3&lt;/code&gt; through port 21813.&lt;/p&gt;
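Because the client ports are forwarded, we can check the state of each node from the host with ZooKeeper's four-letter-word commands. A sketch, assuming `nc` is installed and `srvr` is allowed by the server's 4lw whitelist (it is in the default whitelist):

```shell
# Ask each forwarded node for its role; one should answer 'Mode: leader',
# the other two 'Mode: follower'.
for port in 21811 21812 21813; do
  echo srvr | nc localhost "$port" | grep Mode
done
```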

&lt;p&gt;Finally, we can also set up a container that will run &lt;a href="https://github.com/elkozmon/zoonavigator" rel="noopener noreferrer"&gt;zoonavigator&lt;/a&gt;, a web-based ZooKeeper UI and editor/browser with many features.&lt;/p&gt;

&lt;p&gt;We can access zoonavigator from a browser on the host machine at &lt;a href="http://localhost:9000/" rel="noopener noreferrer"&gt;http://localhost:9000/&lt;/a&gt; and connect to &lt;code&gt;zk1&lt;/code&gt; using &lt;code&gt;zk1:2181&lt;/code&gt; as the connection string.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9q69j5gtutlzzu7skae9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9q69j5gtutlzzu7skae9.png" alt="zoonavigator"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>clickhouse</category>
      <category>zookeeper</category>
      <category>docker</category>
    </item>
    <item>
      <title>ZooKeeper cluster with Vagrant</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Mon, 25 Jan 2021 15:08:05 +0000</pubDate>
      <link>https://dev.to/jv/zookeeper-cluster-in-vagrant-49e9</link>
      <guid>https://dev.to/jv/zookeeper-cluster-in-vagrant-49e9</guid>
      <description>&lt;p&gt;If we want to set up a ZooKeeper cluster in our development environment one way to proceed is defining it in Vagrant.&lt;/p&gt;

&lt;p&gt;Let's suppose we want to set up a cluster with 3 nodes (zk1, zk2, and zk3). The &lt;code&gt;Vagrantfile&lt;/code&gt; will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# -*- mode: ruby -*-&lt;/span&gt;
&lt;span class="c1"&gt;# vi: set ft=ruby :&lt;/span&gt;

&lt;span class="c1"&gt;# rubocop: disable Metrics/BlockLength&lt;/span&gt;
&lt;span class="no"&gt;Vagrant&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="s1"&gt;'zk1'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'debian/stretch64'&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="ss"&gt;:private_network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s1"&gt;'192.168.33.211'&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'zk1'&lt;/span&gt;
    &lt;span class="c1"&gt;# Configure VM&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="s1"&gt;'virtualbox'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'256'&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="c1"&gt;# Install and start zookeeper&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s1"&gt;'shell'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;SHELL&lt;/span&gt;&lt;span class="sh"&gt;
      apt update
      apt install -y default-jdk
      mkdir -p /var/lib/zookeeper
      chown vagrant:vagrant /var/lib/zookeeper
      wget http://apache.uvigo.es/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
      tar xvf apache-zookeeper-3.6.2-bin.tar.gz
      rm apache-zookeeper-3.6.2-bin.tar.gz
      mv apache-zookeeper-3.6.2-bin /opt/zookeeper-3.6.2
      chown -R vagrant:vagrant /opt/zookeeper-3.6.2
      echo -e 'tickTime=2000&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;dataDir=/var/lib/zookeeper&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;clientPort=2181&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;initLimit=5&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;syncLimit=2&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.1=192.168.33.211:2888:3888&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.2=192.168.33.212:2888:3888&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.3=192.168.33.213:2888:3888' &amp;gt; /opt/zookeeper-3.6.2/conf/zoo.cfg
      echo -e '1' &amp;gt; /var/lib/zookeeper/myid
&lt;/span&gt;&lt;span class="no"&gt;    SHELL&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="s1"&gt;'zk2'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'debian/stretch64'&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="ss"&gt;:private_network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s1"&gt;'192.168.33.212'&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'zk2'&lt;/span&gt;
    &lt;span class="c1"&gt;# Configure VM&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="s1"&gt;'virtualbox'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'256'&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="c1"&gt;# Install and start zookeeper&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s1"&gt;'shell'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;SHELL&lt;/span&gt;&lt;span class="sh"&gt;
      apt update
      apt install -y default-jdk
      mkdir -p /var/lib/zookeeper
      chown vagrant:vagrant /var/lib/zookeeper
      wget http://apache.uvigo.es/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
      tar xvf apache-zookeeper-3.6.2-bin.tar.gz
      rm apache-zookeeper-3.6.2-bin.tar.gz
      mv apache-zookeeper-3.6.2-bin /opt/zookeeper-3.6.2
      chown -R vagrant:vagrant /opt/zookeeper-3.6.2
      echo -e 'tickTime=2000&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;dataDir=/var/lib/zookeeper&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;clientPort=2181&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;initLimit=5&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;syncLimit=2&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.1=192.168.33.211:2888:3888&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.2=192.168.33.212:2888:3888&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.3=192.168.33.213:2888:3888' &amp;gt; /opt/zookeeper-3.6.2/conf/zoo.cfg
      echo -e '2' &amp;gt; /var/lib/zookeeper/myid
&lt;/span&gt;&lt;span class="no"&gt;    SHELL&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;define&lt;/span&gt; &lt;span class="s1"&gt;'zk3'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;box&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'debian/stretch64'&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;network&lt;/span&gt; &lt;span class="ss"&gt;:private_network&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ip: &lt;/span&gt;&lt;span class="s1"&gt;'192.168.33.213'&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hostname&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'zk3'&lt;/span&gt;
    &lt;span class="c1"&gt;# Configure VM&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provider&lt;/span&gt; &lt;span class="s1"&gt;'virtualbox'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'256'&lt;/span&gt;
      &lt;span class="n"&gt;vb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cpus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="c1"&gt;# Install and start zookeeper&lt;/span&gt;
    &lt;span class="n"&gt;zookeeper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;provision&lt;/span&gt; &lt;span class="s1"&gt;'shell'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;inline: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;SHELL&lt;/span&gt;&lt;span class="sh"&gt;
      apt update
      apt install -y default-jdk
      mkdir -p /var/lib/zookeeper
      chown vagrant:vagrant /var/lib/zookeeper
      wget http://apache.uvigo.es/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
      tar xvf apache-zookeeper-3.6.2-bin.tar.gz
      rm apache-zookeeper-3.6.2-bin.tar.gz
      mv apache-zookeeper-3.6.2-bin /opt/zookeeper-3.6.2
      chown -R vagrant:vagrant /opt/zookeeper-3.6.2
      echo -e 'tickTime=2000&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;dataDir=/var/lib/zookeeper&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;clientPort=2181&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;initLimit=5&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;syncLimit=2&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.1=192.168.33.211:2888:3888&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.2=192.168.33.212:2888:3888&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;server.3=192.168.33.213:2888:3888' &amp;gt; /opt/zookeeper-3.6.2/conf/zoo.cfg
      echo -e '3' &amp;gt; /var/lib/zookeeper/myid
&lt;/span&gt;&lt;span class="no"&gt;    SHELL&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="c1"&gt;# rubocop: enable Metrics/BlockLength&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way we could start the 3 nodes of the cluster with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant up zk1
vagrant up zk2
vagrant up zk3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;Vagrantfile&lt;/code&gt; does not start the ZooKeeper nodes automatically, so we have to start them by hand (shown here for &lt;code&gt;zk1&lt;/code&gt;; repeat for &lt;code&gt;zk2&lt;/code&gt; and &lt;code&gt;zk3&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vagrant ssh zk1
&lt;span class="nb"&gt;cd&lt;/span&gt; /opt/zookeeper-3.6.2/bin
./zkServer.sh start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm sure this &lt;code&gt;Vagrantfile&lt;/code&gt; can be improved by removing the repeated parts and by starting the servers automatically.&lt;/p&gt;
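One way to remove the duplication is to generate the three near-identical node definitions from data. A sketch, with the inline provisioning script elided: `ZK_PROVISION` stands for the heredoc shown above, with the node id turned into a `%{id}` placeholder.

```ruby
# Sketch: drive the node definitions from a hash instead of repeating them.
# ZK_PROVISION is assumed to hold the provisioning script from above, with
# the node id written as a %{id} placeholder.
NODES = {
  'zk1' => '192.168.33.211',
  'zk2' => '192.168.33.212',
  'zk3' => '192.168.33.213'
}.freeze

Vagrant.configure('2') do |config|
  NODES.each_with_index do |(name, ip), i|
    config.vm.define name do |zookeeper|
      zookeeper.vm.box = 'debian/stretch64'
      zookeeper.vm.network :private_network, ip: ip
      zookeeper.vm.hostname = name
      zookeeper.vm.provider 'virtualbox' do |vb|
        vb.memory = '256'
        vb.cpus = 2
      end
      # Substitute the per-node server id (1, 2, 3) into the script.
      zookeeper.vm.provision 'shell', inline: format(ZK_PROVISION, id: i + 1)
    end
  end
end
```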

</description>
      <category>clickhouse</category>
      <category>zookeeper</category>
      <category>vagrant</category>
    </item>
    <item>
      <title>ClickHouse incremental backups</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Thu, 14 Jan 2021 12:28:50 +0000</pubDate>
      <link>https://dev.to/jv/clickhouse-incremental-backups-15fo</link>
      <guid>https://dev.to/jv/clickhouse-incremental-backups-15fo</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/AlexAkulov/clickhouse-backup"&gt;clickhouse-backup&lt;/a&gt; allows us to perform local backups, that are always full backups, and full or incremental uploads to remote storage. In my &lt;a href="https://dev.to/jv/backup-and-restore-with-clickhouse-backup-dn9"&gt;previous post&lt;/a&gt; I talked about how to perform full backups and uploads. Now we are going to review all the steps required to work with incremental uploads. This way we could upload a weekly full backup to our remote storage and perform daily incremental uploads.&lt;/p&gt;

&lt;p&gt;Let's assume we already have &lt;code&gt;clickhouse-backup&lt;/code&gt; installed and we have configured a remote storage.&lt;/p&gt;

&lt;p&gt;Let's create a database &lt;code&gt;backup&lt;/code&gt;, a table &lt;code&gt;test&lt;/code&gt; and a user &lt;code&gt;backup&lt;/code&gt; that can access this database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:) CREATE DATABASE IF NOT EXISTS backup

:) CREATE TABLE IF NOT EXISTS backup.test
(
  `date` Datetime,
  `user_id` String,
  `pageviews` Int32
)
ENGINE = MergeTree()
PARTITION BY toStartOfHour(date)
ORDER BY (date)

:) CREATE USER backup IDENTIFIED WITH plaintext_password BY 'qwerty'

:) GRANT ALL ON backup.* TO backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can edit &lt;code&gt;/etc/clickhouse-backup/config.yml&lt;/code&gt; to configure this user as the &lt;code&gt;clickhouse-backup&lt;/code&gt; user.&lt;/p&gt;
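&lt;p&gt;A sketch of the relevant fragment of &lt;code&gt;/etc/clickhouse-backup/config.yml&lt;/code&gt;; the key names below follow the clickhouse-backup README, but verify them against the version you have installed:&lt;/p&gt;

```yaml
# Illustrative fragment: connect to ClickHouse as the backup user created above
clickhouse:
  username: backup
  password: "qwerty"
  host: localhost
  port: 9000
```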

&lt;p&gt;Let's insert some data in the table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;backup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 07:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 08:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 09:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 3'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 4'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perform a full backup and upload it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup create backup1
2021/01/14 10:51:06 Create backup 'backup1'
2021/01/14 10:51:06 Freeze 'backup.test'
2021/01/14 10:51:06 Copy part hashes
2021/01/14 10:51:06 Writing part hashes
2021/01/14 10:51:06 Copy metadata
2021/01/14 10:51:06   Done.
2021/01/14 10:51:06 Move shadow
2021/01/14 10:51:06   Done.
# clickhouse-backup upload backup1
2021/01/14 10:51:40 Upload backup 'backup1'
 4.36 KiB / 4.36 KiB [======================================================] 100.00% 0s
2021/01/14 10:51:41   Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Insert more data, perform a full backup and an incremental upload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;backup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 11:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 3'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 12:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 5'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup create backup2
2021/01/14 10:55:04 Create backup 'backup2'
2021/01/14 10:55:05 Freeze 'backup.test'
2021/01/14 10:55:05 Copy part hashes
2021/01/14 10:55:05 Writing part hashes
2021/01/14 10:55:05 Copy metadata
2021/01/14 10:55:05   Done.
2021/01/14 10:55:05 Move shadow
2021/01/14 10:55:05   Done.
# clickhouse-backup upload --diff-from backup1 backup2
2021/01/14 10:55:38 Upload backup 'backup2'
 6.44 KiB / 6.44 KiB [======================================================] 100.00% 0s
2021/01/14 10:55:38   Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And once more:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;backup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 13:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 14:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 5'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup create backup3
2021/01/14 10:56:10 Create backup 'backup3'
2021/01/14 10:56:10 Freeze 'backup.test'
2021/01/14 10:56:10 Copy part hashes
2021/01/14 10:56:10 Writing part hashes
2021/01/14 10:56:10 Copy metadata
2021/01/14 10:56:10   Done.
2021/01/14 10:56:10 Move shadow
2021/01/14 10:56:10   Done.
# clickhouse-backup upload --diff-from backup2 backup3
2021/01/14 10:56:20 Upload backup 'backup3'
 8.52 KiB / 8.52 KiB [======================================================]
 100.00% 0s
2021/01/14 10:56:21   Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete the local backups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup delete local backup1
# clickhouse-backup delete local backup2
# clickhouse-backup delete local backup3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the remote backups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup list
Local backups:
no backups found
Remote backups:
- 'backup1.tar.gz'  3.54 KiB    (created at 14-01-2021 12:12:42)
- 'backup2.tar.gz'  3.02 KiB    (created at 14-01-2021 12:13:17)
- 'backup3.tar.gz'  3.29 KiB    (created at 14-01-2021 12:14:47)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's drop the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;backup&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we download backup3, the latest backup, we see that backup1 and backup2 are also downloaded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup download backup3
 0 / 3374 [------------------------------------------------------]   0.00%
2021/01/14 13:16:39 Backup 'backup3' required 'backup2'. Downloading.
 0 / 3088 [------------------------------------------------------]   0.00%
2021/01/14 13:16:40 Backup 'backup2' required 'backup1'. Downloading.
 3.54 KiB / 3.54 KiB [======================================================] 100.00% 0s
 3.02 KiB / 3.02 KiB [======================================================] 100.00% 0s
 3.29 KiB / 3.29 KiB [======================================================] 100.00% 1s
2021/01/14 13:16:41   Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we examine &lt;code&gt;backup3.tar.gz&lt;/code&gt; we find a &lt;code&gt;meta.json&lt;/code&gt; file that contains:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"required_backup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"backup2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"hardlinks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"shadow/backup/test/1610517600_1_1_0/checksums.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"shadow/backup/test/1610517600_1_1_0/columns.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So these &lt;code&gt;meta.json&lt;/code&gt; files keep track of which files are reused from previous backups.&lt;/p&gt;
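&lt;p&gt;The download behaviour we just saw can be mimicked in a few lines. This is a toy sketch (the function and data layout are hypothetical, not part of clickhouse-backup): it follows each backup's &lt;code&gt;required_backup&lt;/code&gt; link until it reaches a full backup, which is exactly the backup3, backup2, backup1 chain resolved during the download.&lt;/p&gt;

```python
def required_chain(meta_by_backup, name):
    """Return the backups to download, oldest first, by following
    each backup's required_backup link back to the full backup."""
    chain = []
    while name is not None:
        chain.append(name)
        name = meta_by_backup[name].get("required_backup")
    return list(reversed(chain))

# meta.json contents per backup, as in the example above
metas = {
    "backup1": {},                              # full backup, no parent
    "backup2": {"required_backup": "backup1"},
    "backup3": {"required_backup": "backup2"},
}

print(required_chain(metas, "backup3"))  # ['backup1', 'backup2', 'backup3']
```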

&lt;p&gt;Let's do the restore. If we restore the latest incremental backup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup restore backup3
2021/01/14 13:19:54 Create table 'backup.test'
2021/01/14 13:19:54 Prepare data for restoring 'backup.test'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610517600_1_1_0'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610521200_2_2_0'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610524800_3_3_0'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610528400_4_4_0'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610532000_5_5_0'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610535600_6_6_0'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610539200_7_7_0'
2021/01/14 13:19:54 ALTER TABLE `backup`.`test` ATTACH PART '1610542800_8_8_0'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restoring the latest incremental backup effectively restores the full data set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:) select count() from backup.test

SELECT count()
FROM backup.test

┌─count()─┐
│       8 │
└─────────┘

1 rows in set. Elapsed: 0.003 sec.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;😄&lt;/p&gt;

</description>
      <category>clickhouse</category>
    </item>
    <item>
      <title>Backup and restore with clickhouse-backup</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Wed, 13 Jan 2021 13:45:40 +0000</pubDate>
      <link>https://dev.to/jv/backup-and-restore-with-clickhouse-backup-dn9</link>
      <guid>https://dev.to/jv/backup-and-restore-with-clickhouse-backup-dn9</guid>
      <description>&lt;p&gt;ClickHouse includes native support for instantaneous point-in-time backups, through its &lt;code&gt;ALTER TABLE ... FREEZE&lt;/code&gt; feature. ClickHouse creates hard links in the shadow directory to all the partitions. This way, each backup is effectively a full backup, and duplicate use of disk space is avoided.&lt;/p&gt;

&lt;p&gt;After performing a manual backup we should move it to another location. To restore it, we should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;recover the backup from the other location&lt;/li&gt;
&lt;li&gt;copy the partitions to the &lt;code&gt;detached&lt;/code&gt; directory for each of the tables&lt;/li&gt;
&lt;li&gt;attach each partition with an &lt;code&gt;ALTER TABLE ... ATTACH PARTITION&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
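&lt;p&gt;The attach step in the list above looks like this; once a part directory sits under the table's &lt;code&gt;detached&lt;/code&gt; directory it can be attached (the part name is taken from the restore logs later in this post):&lt;/p&gt;

```sql
-- Attach a recovered part that has been copied into
-- /var/lib/clickhouse/data/backup/test/detached/
ALTER TABLE backup.test ATTACH PART '1610517600_1_1_0'
```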

&lt;p&gt;We can automate this process thanks to &lt;a href="https://github.com/AlexAkulov/clickhouse-backup"&gt;clickhouse-backup&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's assume we already have &lt;code&gt;clickhouse-backup&lt;/code&gt; installed and we have configured a remote storage.&lt;/p&gt;

&lt;p&gt;Let's create a database &lt;code&gt;backup&lt;/code&gt;, a table &lt;code&gt;test&lt;/code&gt; and a user &lt;code&gt;backup&lt;/code&gt; that can access this database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:) CREATE DATABASE IF NOT EXISTS backup

:) CREATE TABLE IF NOT EXISTS backup.test
(
  `date` Datetime,
  `user_id` String,
  `pageviews` Int32
)
ENGINE = MergeTree()
PARTITION BY toStartOfHour(date)
ORDER BY (date)

:) CREATE USER backup IDENTIFIED WITH plaintext_password BY 'qwerty'

:) GRANT ALL ON backup.* TO backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can edit &lt;code&gt;/etc/clickhouse-backup/config.yml&lt;/code&gt; to configure this user as the &lt;code&gt;clickhouse-backup&lt;/code&gt; user.&lt;/p&gt;

&lt;p&gt;Let's insert some data in the table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;backup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 07:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 1'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 08:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 09:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 3'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'2021-01-13 10:00:00'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'user 4'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can check all the backups we have performed until now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup list
Local backups:
no backups found
Remote backups:
no backups found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can check the list of tables that &lt;code&gt;clickhouse-backup&lt;/code&gt; would back up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup tables
backup.test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create a local backup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup create
2021/01/13 13:14:43 Create backup '2021-01-13T12-14-43'
2021/01/13 13:14:43 Freeze 'backup.test'
2021/01/13 13:14:43 Copy part hashes
2021/01/13 13:14:43 Writing part hashes
2021/01/13 13:14:43 Copy metadata
2021/01/13 13:14:43   Done.
2021/01/13 13:14:43 Move shadow
2021/01/13 13:14:43   Done.
# clickhouse-backup list
Local backups:
- '2021-01-13T12-14-43' (created at 13-01-2021 13:14:43)
Remote backups:
no backups found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can upload it to the remote storage and delete the local backup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup upload '2021-01-13T12-14-43'
2021/01/13 13:15:39 Upload backup '2021-01-13T12-14-43'
 4.36 KiB / 4.36 KiB [======================================================] 100.00% 0s
2021/01/13 13:15:39   Done.
# clickhouse-backup delete local '2021-01-13T12-14-43'
# clickhouse-backup list
Local backups:
no backups found
Remote backups:
- '2021-01-13T12-14-43.tar.gz'  3.55 KiB    (created at 13-01-2021 12:15:39)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Disasters can happen at any time. Let's drop the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;DROP&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;backup&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;😱&lt;/p&gt;

&lt;p&gt;Don't worry, we can download the remote backup and restore it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clickhouse-backup download '2021-01-13T12-14-43'
 3.55 KiB / 3.55 KiB [======================================================] 100.00% 0s
2021/01/13 13:34:22   Done.
# clickhouse-backup restore '2021-01-13T12-14-43'
2021/01/13 13:35:45 Create table 'backup.test'
2021/01/13 13:35:45 Prepare data for restoring 'backup.test'
2021/01/13 13:35:45 ALTER TABLE `backup`.`test` ATTACH PART '1610517600_1_1_0'
2021/01/13 13:35:45 ALTER TABLE `backup`.`test` ATTACH PART '1610521200_2_2_0'
2021/01/13 13:35:45 ALTER TABLE `backup`.`test` ATTACH PART '1610524800_3_3_0'
2021/01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💃&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:) select count() from backup.test

SELECT count()
FROM backup.test

┌─count()─┐
│       4 │
└─────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>clickhouse</category>
    </item>
    <item>
      <title>Tables size in ClickHouse</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Mon, 11 Jan 2021 08:05:38 +0000</pubDate>
      <link>https://dev.to/jv/tables-size-in-clickhouse-27l</link>
      <guid>https://dev.to/jv/tables-size-in-clickhouse-27l</guid>
      <description>&lt;p&gt;We can check the size of ClickHouse tables with this query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="n"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'.'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;formatReadableSize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;bytes_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;modification_time&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;latest_modification&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;engine&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="k"&gt;system&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parts&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;active&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt;
    &lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;table&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;bytes_size&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>clickhouse</category>
    </item>
    <item>
      <title>Rails minor version upgrade</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Sun, 10 Jan 2021 08:47:39 +0000</pubDate>
      <link>https://dev.to/jv/rails-minor-version-upgrade-d9e</link>
      <guid>https://dev.to/jv/rails-minor-version-upgrade-d9e</guid>
<description>&lt;p&gt;Upgrading to a new minor version of Rails is quite straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;check the release notes for the new version&lt;/li&gt;
&lt;li&gt;edit the Gemfile and update the Rails version&lt;/li&gt;
&lt;li&gt;run &lt;code&gt;bundle update&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;run your tests&lt;/li&gt;
&lt;/ul&gt;
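&lt;p&gt;For example, the Gemfile change is a one-liner (the version numbers here are illustrative):&lt;/p&gt;

```ruby
# Gemfile: bump the minor version, then run `bundle update rails`
gem 'rails', '~> 6.1.0'   # was: gem 'rails', '~> 6.0.3'
```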

</description>
      <category>rails</category>
    </item>
    <item>
      <title>Updating data in a ClickHouse cluster replacing partitions</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Wed, 28 Oct 2020 17:09:22 +0000</pubDate>
      <link>https://dev.to/jv/updating-data-in-a-clickhouse-cluster-replacing-partitions-5fji</link>
      <guid>https://dev.to/jv/updating-data-in-a-clickhouse-cluster-replacing-partitions-5fji</guid>
<description>&lt;p&gt;Using the &lt;code&gt;ALTER TABLE ... UPDATE&lt;/code&gt; statement in ClickHouse is a heavy operation not designed for frequent use. If we design our schema to insert/update a whole partition at a time, we can update large amounts of data easily. Doing it in a plain &lt;code&gt;MergeTree&lt;/code&gt; table is quite simple, but doing it in a cluster with replicated tables is trickier. Let's see how it can be done.&lt;/p&gt;

&lt;p&gt;In this example we use a cluster &lt;code&gt;test_cluster&lt;/code&gt; that consists of 2 nodes. We will create a replicated table in each node and a distributed table that we could use to parallelize reading.&lt;/p&gt;

&lt;p&gt;In each node we should have the &lt;code&gt;test_cluster&lt;/code&gt; configured in the &lt;code&gt;/etc/clickhouse-server/config.xml&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;remote_servers&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;test_cluster&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;shard&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;internal_replication&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/internal_replication&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;replica&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;host&amp;gt;&lt;/span&gt;192.168.33.201&lt;span class="nt"&gt;&amp;lt;/host&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;port&amp;gt;&lt;/span&gt;9000&lt;span class="nt"&gt;&amp;lt;/port&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;/replica&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;replica&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;host&amp;gt;&lt;/span&gt;192.168.33.202&lt;span class="nt"&gt;&amp;lt;/host&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;port&amp;gt;&lt;/span&gt;9000&lt;span class="nt"&gt;&amp;lt;/port&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;&amp;lt;/replica&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/shard&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/test_cluster&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/remote_servers&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example we use IP addresses, but we could use hostnames if they were correctly configured in the &lt;code&gt;/etc/hosts&lt;/code&gt; file.&lt;/p&gt;
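&lt;p&gt;For instance, adding entries like these to &lt;code&gt;/etc/hosts&lt;/code&gt; on every node (the &lt;code&gt;ch1&lt;/code&gt;/&lt;code&gt;ch2&lt;/code&gt; names match the prompts used below) would let us use hostnames in &lt;code&gt;remote_servers&lt;/code&gt;:&lt;/p&gt;

```
# /etc/hosts fragment on each ClickHouse node
192.168.33.201 ch1
192.168.33.202 ch2
```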

&lt;p&gt;We will also use a couple of macros in &lt;code&gt;/etc/clickhouse-server/config.d/macros.xml&lt;/code&gt;. They will be used as parameters in &lt;code&gt;ON CLUSTER&lt;/code&gt; DDL statements. In the first ClickHouse server (ch1) we have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;yandex&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;macros&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;shard&amp;gt;&lt;/span&gt;shard_01&lt;span class="nt"&gt;&amp;lt;/shard&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;replica&amp;gt;&lt;/span&gt;replica_01&lt;span class="nt"&gt;&amp;lt;/replica&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/macros&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/yandex&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And in the second one (ch2):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;yandex&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;macros&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;shard&amp;gt;&lt;/span&gt;shard_01&lt;span class="nt"&gt;&amp;lt;/shard&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;replica&amp;gt;&lt;/span&gt;replica_02&lt;span class="nt"&gt;&amp;lt;/replica&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/macros&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/yandex&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's connect to one of the nodes and create the database &lt;code&gt;test&lt;/code&gt;. As we use &lt;code&gt;ON CLUSTER&lt;/code&gt; this statement will be executed in all the nodes of the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;CLUSTER&lt;/span&gt; &lt;span class="n"&gt;test_cluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create the replicated table. This will be the table that stores all the data. We will partition by hour and insert all the data for a given hour in a single &lt;code&gt;INSERT&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hits_local&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;CLUSTER&lt;/span&gt; &lt;span class="n"&gt;test_cluster&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nv"&gt;`date`&lt;/span&gt; &lt;span class="nb"&gt;Datetime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nv"&gt;`user_id`&lt;/span&gt; &lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nv"&gt;`pageviews`&lt;/span&gt; &lt;span class="n"&gt;Int32&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;ENGINE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ReplicatedMergeTree&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'/clickhouse/test_cluster/tables/{shard}/hits_local'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'{replica}'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;toStartOfHour&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we create a table that will be used just to perform the inserts. It must be created in all the nodes of the cluster and its engine must be &lt;code&gt;MergeTree&lt;/code&gt;, otherwise we could end up experiencing a lot of ZooKeeper errors. We will call this table the temporary table (notice that it's not a real &lt;code&gt;TEMPORARY&lt;/code&gt; table).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hits_local_tmp&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;CLUSTER&lt;/span&gt; &lt;span class="n"&gt;test_cluster&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hits_local&lt;/span&gt;
&lt;span class="n"&gt;ENGINE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MergeTree&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;PARTITION&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;toStartOfHour&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we create the distributed table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hits&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;CLUSTER&lt;/span&gt; &lt;span class="s1"&gt;'test_cluster'&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hits_local&lt;/span&gt;
&lt;span class="n"&gt;ENGINE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Distributed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'test_cluster'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'test'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'hits_local'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So now we have three tables on each node. On ch1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) use test
ch1 :) show tables
┌─name───────────┐
│ hits           │
│ hits_local     │
│ hits_local_tmp │
└────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And in ch2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch2 :) use test
ch2 :) show tables
┌─name───────────┐
│ hits           │
│ hits_local     │
│ hits_local_tmp │
└────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's insert some values into the temporary table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) INSERT INTO test.hits_local_tmp VALUES
('2020-10-27 10:00:00', 'user 1', 7),
('2020-10-27 10:00:00', 'user 2', 3),
('2020-10-27 10:00:00', 'user 3', 1),
('2020-10-27 10:00:00', 'user 4', 12)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can check that we now have a new partition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) SELECT partition, name, active FROM system.parts WHERE table = 'hits_local_tmp'
┌─partition───────────┬─name─────────────┬─active─┐
│ 2020-10-27 10:00:00 │ 1603792800_1_1_0 │      1 │
└─────────────────────┴──────────────────┴────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The idea is to move this partition from the temporary table to the replicated table. But we cannot do it with &lt;code&gt;MOVE PARTITION&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) ALTER TABLE test.hits_local_tmp MOVE PARTITION '2020-10-27 10:00:00' TO TABLE test.hits_local
Received exception from server (version 20.10.2):
  Code: 48. DB::Exception: Received from 127.0.0.1:9000. DB::Exception: Table test.hits_local_tmp (400fd602-745e-4904-9f61-76ccda9eaf0a) supports movePartitionToTable only for MergeTree family of table engines. Got ReplicatedMergeTree.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The good news is that we can use &lt;code&gt;REPLACE PARTITION&lt;/code&gt; and then drop the original partition from the temporary table (&lt;code&gt;REPLACE PARTITION&lt;/code&gt; does not delete the source partition). &lt;code&gt;REPLACE PARTITION&lt;/code&gt; works even if the destination table has no partition to replace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) ALTER TABLE test.hits_local REPLACE PARTITION '2020-10-27 10:00:00' FROM test.hits_local_tmp
ch1 :) ALTER TABLE test.hits_local_tmp DROP PARTITION '2020-10-27 10:00:00'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
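Since this swap-and-drop is a recurring two-step operation, it can be wrapped in a small script. Here is a minimal sketch using `clickhouse-client` (table and partition names are taken from this example; as written it is a dry run that only prints the statements, and the commented-out client call would execute them):

```shell
#!/bin/sh
# Sketch: swap a freshly loaded partition from the staging ("temporary") table
# into the replicated table, then drop it from the staging table.
# Dry run: statements are printed; uncomment clickhouse-client to execute.
PARTITION="2020-10-27 10:00:00"
for stmt in \
  "ALTER TABLE test.hits_local REPLACE PARTITION '${PARTITION}' FROM test.hits_local_tmp" \
  "ALTER TABLE test.hits_local_tmp DROP PARTITION '${PARTITION}'"
do
  echo "$stmt"
  # clickhouse-client --query "$stmt"
done
```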



&lt;p&gt;Now we have the partition in the replicated table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) SELECT partition, name, active FROM system.parts WHERE table = 'hits_local'
┌─partition───────────┬─name─────────────┬─active─┐
│ 2020-10-27 10:00:00 │ 1603792800_1_1_0 │      1 │
└─────────────────────┴──────────────────┴────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The data are there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) select * from test.hits_local
┌────────────────date─┬─user_id─┬─pageviews─┐
│ 2020-10-27 10:00:00 │ user 1  │         7 │
│ 2020-10-27 10:00:00 │ user 2  │         3 │
│ 2020-10-27 10:00:00 │ user 3  │         1 │
│ 2020-10-27 10:00:00 │ user 4  │        12 │
└─────────────────────┴─────────┴───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can query them through the distributed table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) select * from test.hits
┌────────────────date─┬─user_id─┬─pageviews─┐
│ 2020-10-27 10:00:00 │ user 1  │         7 │
│ 2020-10-27 10:00:00 │ user 2  │         3 │
│ 2020-10-27 10:00:00 │ user 3  │         1 │
│ 2020-10-27 10:00:00 │ user 4  │        12 │
└─────────────────────┴─────────┴───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They are replicated to the other node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch2 :) select * from test.hits_local
┌────────────────date─┬─user_id─┬─pageviews─┐
│ 2020-10-27 10:00:00 │ user 1  │         7 │
│ 2020-10-27 10:00:00 │ user 2  │         3 │
│ 2020-10-27 10:00:00 │ user 3  │         1 │
│ 2020-10-27 10:00:00 │ user 4  │        12 │
└─────────────────────┴─────────┴───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the partition has been created in the second node too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch2 :) SELECT partition, name, active FROM system.parts WHERE table = 'hits_local'
┌─partition───────────┬─name─────────────┬─active─┐
│ 2020-10-27 10:00:00 │ 1603792800_1_1_0 │      1 │
└─────────────────────┴──────────────────┴────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's update the whole partition. We perform the insert again in the temporary table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) INSERT INTO test.hits_local_tmp VALUES
('2020-10-27 10:00:00', 'user 1', 6),
('2020-10-27 10:00:00', 'user 2', 4),
('2020-10-27 10:00:00', 'user 3', 2),
('2020-10-27 10:00:00', 'user 4', 15),
('2020-10-27 10:00:00', 'user 5', 14)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A new part is created in the same partition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) SELECT partition, name, active FROM system.parts WHERE table = 'hits_local_tmp'
┌─partition───────────┬─name─────────────┬─active─┐
│ 2020-10-27 10:00:00 │ 1603792800_3_3_0 │      1 │
└─────────────────────┴──────────────────┴────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use it to replace the data in the replicated table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) ALTER TABLE test.hits_local REPLACE PARTITION '2020-10-27 10:00:00' FROM test.hits_local_tmp
ch1 :) ALTER TABLE test.hits_local_tmp DROP PARTITION '2020-10-27 10:00:00'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see that there is only one partition on the first node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) SELECT partition, name, active FROM system.parts WHERE table = 'hits_local'
┌─partition───────────┬─name─────────────┬─active─┐
│ 2020-10-27 10:00:00 │ 1603792800_3_3_0 │      1 │
└─────────────────────┴──────────────────┴────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new data are there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) select * from test.hits_local
┌────────────────date─┬─user_id─┬─pageviews─┐
│ 2020-10-27 10:00:00 │ user 1  │         6 │
│ 2020-10-27 10:00:00 │ user 2  │         4 │
│ 2020-10-27 10:00:00 │ user 3  │         2 │
│ 2020-10-27 10:00:00 │ user 4  │        15 │
│ 2020-10-27 10:00:00 │ user 5  │        14 │
└─────────────────────┴─────────┴───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They are replicated to the second node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch2 :) select * from test.hits_local
┌────────────────date─┬─user_id─┬─pageviews─┐
│ 2020-10-27 10:00:00 │ user 1  │         6 │
│ 2020-10-27 10:00:00 │ user 2  │         4 │
│ 2020-10-27 10:00:00 │ user 3  │         2 │
│ 2020-10-27 10:00:00 │ user 4  │        15 │
│ 2020-10-27 10:00:00 │ user 5  │        14 │
└─────────────────────┴─────────┴───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the distributed table returns the new data too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ch1 :) select * from test.hits
┌────────────────date─┬─user_id─┬─pageviews─┐
│ 2020-10-27 10:00:00 │ user 1  │         6 │
│ 2020-10-27 10:00:00 │ user 2  │         4 │
│ 2020-10-27 10:00:00 │ user 3  │         2 │
│ 2020-10-27 10:00:00 │ user 4  │        15 │
│ 2020-10-27 10:00:00 │ user 5  │        14 │
└─────────────────────┴─────────┴───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>clickhouse</category>
    </item>
    <item>
      <title>Deleting ClickHouse replication entries from ZooKeeper</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Wed, 28 Oct 2020 12:17:38 +0000</pubDate>
      <link>https://dev.to/jv/deleting-clickhouse-replication-entries-from-zookeeper-3cl4</link>
      <guid>https://dev.to/jv/deleting-clickhouse-replication-entries-from-zookeeper-3cl4</guid>
      <description>&lt;p&gt;When playing with ClickHouse replicated tables it is quite probable that we end up getting errors like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DB::Exception: Replica /clickhouse/my_cluster/tables/shard_01/bids/replicas/replica_01 already exists
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deleting this entry is quite simple. Just connect to one of the ZooKeeper nodes in the cluster and execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./zkCli.sh deleteall /clickhouse/my_cluster/tables/shard_01/bids/replicas/replica_01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
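In this setup the replica znode path follows a fixed layout, so it can be assembled from its components before deleting. A sketch (cluster, shard, table, and replica names are from the error above; the `zkCli.sh` call is commented out and assumes the script runs from ZooKeeper's `bin/` directory):

```shell
#!/bin/sh
# Sketch: build the replica znode path from its components, then delete it.
# Assumption: run from ZooKeeper's bin/ directory, where zkCli.sh lives.
CLUSTER=my_cluster
SHARD=shard_01
TABLE=bids
REPLICA=replica_01
ZNODE="/clickhouse/${CLUSTER}/tables/${SHARD}/${TABLE}/replicas/${REPLICA}"
echo "$ZNODE"
# ./zkCli.sh deleteall "$ZNODE"
```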



&lt;p&gt;If we want to start from scratch, we can even delete all the information for the cluster &lt;code&gt;my_cluster&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./zkCli.sh deleteall /clickhouse/my_cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>clickhouse</category>
      <category>zookeeper</category>
    </item>
    <item>
      <title>document.getElementById vs jQuery ID selector</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Sat, 24 Oct 2020 07:24:35 +0000</pubDate>
      <link>https://dev.to/jv/document-getelementbyid-vs-jquery-id-selector-25p7</link>
      <guid>https://dev.to/jv/document-getelementbyid-vs-jquery-id-selector-25p7</guid>
      <description>&lt;p&gt;document.getElementById and jQuery ID selector are not equivalent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;element&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;returns a DOM element, and&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;element&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;returns a jQuery object. To get the same result as &lt;code&gt;document.getElementById&lt;/code&gt; we can do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;element&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>javascript</category>
      <category>jquery</category>
    </item>
    <item>
      <title>How to create a read-only user in ClickHouse</title>
      <dc:creator>Javier Vidal</dc:creator>
      <pubDate>Mon, 19 Oct 2020 10:22:59 +0000</pubDate>
      <link>https://dev.to/jv/how-to-create-a-read-only-user-in-clickhouse-2p9m</link>
      <guid>https://dev.to/jv/how-to-create-a-read-only-user-in-clickhouse-2p9m</guid>
      <description>&lt;p&gt;In ClickHouse, a settings profile is a collection of settings grouped under the same name. By default, there is a profile &lt;code&gt;readonly&lt;/code&gt; that allows only read queries. This is from &lt;code&gt;/etc/clickhouse-server/users.xml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;yandex&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;&amp;lt;!-- Profiles of settings. --&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;profiles&amp;gt;&lt;/span&gt;
        ...
        &lt;span class="c"&gt;&amp;lt;!-- Profile that allows only read queries. --&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;readonly&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;readonly&amp;gt;&lt;/span&gt;1&lt;span class="nt"&gt;&amp;lt;/readonly&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/readonly&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/profiles&amp;gt;&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So to create a read-only user we just have to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;USER&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt; &lt;span class="n"&gt;IDENTIFIED&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="n"&gt;plaintext_password&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="s1"&gt;'qwerty'&lt;/span&gt; &lt;span class="n"&gt;SETTINGS&lt;/span&gt; &lt;span class="n"&gt;PROFILE&lt;/span&gt; &lt;span class="s1"&gt;'readonly'&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;TABLES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;database&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
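To sanity-check the account, we can connect as the new user and confirm that reads work while writes are rejected by the `readonly` profile. A sketch with `clickhouse-client` (user name and password are from the statement above; `database.some_table` is a hypothetical table name, and the client calls are commented out so the snippet only prints what it would run):

```shell
#!/bin/sh
# Sketch: verify the read-only user. The SELECT should succeed, while the
# INSERT should be rejected by the 'readonly' profile.
USER=username
PASS=qwerty
READ_QUERY="SELECT count() FROM database.some_table"     # hypothetical table
WRITE_QUERY="INSERT INTO database.some_table VALUES (1)" # hypothetical table
echo "read:  $READ_QUERY"
echo "write: $WRITE_QUERY"
# clickhouse-client --user "$USER" --password "$PASS" --query "$READ_QUERY"   # should work
# clickhouse-client --user "$USER" --password "$PASS" --query "$WRITE_QUERY"  # should fail
```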



</description>
      <category>clickhouse</category>
    </item>
  </channel>
</rss>
