<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: William Mbotta</title>
    <description>The latest articles on DEV Community by William Mbotta (@sepiropht).</description>
    <link>https://dev.to/sepiropht</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F216514%2F2461bf4a-110e-42b5-b150-17c5e0a7202d.png</url>
      <title>DEV Community: William Mbotta</title>
      <link>https://dev.to/sepiropht</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sepiropht"/>
    <language>en</language>
    <item>
      <title>Why I Love Docker</title>
      <dc:creator>William Mbotta</dc:creator>
      <pubDate>Thu, 12 Sep 2024 00:24:43 +0000</pubDate>
      <link>https://dev.to/sepiropht/why-i-love-docker-29cd</link>
      <guid>https://dev.to/sepiropht/why-i-love-docker-29cd</guid>
      <description>&lt;h2&gt;
  
  
  1. The Problem
&lt;/h2&gt;

&lt;p&gt;This morning, I had a problem: my server, a Raspberry Pi 2 bought in 2016, would no longer boot. I used this old machine to host many services (WireGuard, Nextcloud, Bitwarden, an ODPS server, etc.).&lt;/p&gt;

&lt;p&gt;After a few unsuccessful attempts, I gave up on the idea of repairing it and decided to use another one of my servers instead. I unplugged the external hard drive from my Raspberry Pi and plugged it into my other server.&lt;/p&gt;

&lt;p&gt;And within 10 minutes, all the services were running on the new machine just as they had on the Raspberry Pi, with all the data intact. How did I do that?&lt;/p&gt;

&lt;p&gt;Simply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. How I Started
&lt;/h2&gt;

&lt;p&gt;I’ve been an almost-exclusive Linux user for about fifteen years now on my laptop, starting with Ubuntu/Debian and then moving to Arch, just to show off a bit. But I'm not an advanced user; I mostly do pretty basic stuff, and I only use the shell when I have an issue or for my work as a JavaScript developer.&lt;/p&gt;

&lt;p&gt;Of course, like many, I bought a few Raspberry Pis back in the day, but they were mostly toys, even though I did manage to install a private Git server and stream my music library with MPD.&lt;/p&gt;

&lt;p&gt;But it's only recently that I started systematically installing a lot of services: Nextcloud, PhotoPrism, Bitwarden, and many more...&lt;/p&gt;

&lt;p&gt;Installing each of these services can be long and tedious. Take Nextcloud, for example: even if everything goes well, it will take a lot more than 10 minutes :)&lt;/p&gt;

&lt;p&gt;And after that, you’ll still need to retrieve the old data for Nextcloud, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /mnt/storage/nextcloud/var/www/html /new_nextcloud_dir
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you’ll need to redo the post-installation configuration, create accounts, and repeat this process for every service, each with its own way of working.&lt;/p&gt;

&lt;p&gt;Docker and Docker Compose greatly simplify this process.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. How It Works
&lt;/h2&gt;

&lt;p&gt;On my old Raspberry Pi, I had many directories on my external drive like these:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;old@server:/mnt/seagate&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;nextcloud/ photoprism/ bitwarden/ wireguard/ odps/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All these directories are structured more or less the same way. For example, Nextcloud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;old@server:/mnt/seagate/nextcloud&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls
&lt;/span&gt;data  db  docker-compose.yml  nextcloud.sql  redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only file we’re going to modify here is docker-compose.yml. The other files and directories were generated by the containers and contain the data produced while using the service.&lt;/p&gt;

&lt;p&gt;In general, I don't write the docker-compose.yml files myself. Most projects provide one, and if not, you can usually find someone who has written one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# docker-compose.yml file&lt;/span&gt;
services:
  nc:
    image: nextcloud:apache
    environment:
      - &lt;span class="nv"&gt;POSTGRES_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;db
      - &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nextcloud
      - &lt;span class="nv"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nextcloud
      - &lt;span class="nv"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nextcloud
      - &lt;span class="nv"&gt;REDIS_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redis
    ports:
      - 4080:80
    restart: always
    volumes:
      - ./data:/var/www/html &lt;span class="c"&gt;# I only modify these lines&lt;/span&gt;
    depends_on:
      - redis
      - db
  db:
    image: postgres:15-alpine
    environment:
      - &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nextcloud
      - &lt;span class="nv"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nextcloud
      - &lt;span class="nv"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nextcloud
    restart: always
    volumes:
      - ./db:/var/lib/postgresql/data &lt;span class="c"&gt;# I only modify these lines&lt;/span&gt;
    expose:
      - 5432
  redis:
    image: redis:alpine
    restart: always
    volumes:
      - ./redis:/data &lt;span class="c"&gt;# I only modify these lines&lt;/span&gt;
    expose:
      - 6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker containers are isolated processes that share the host operating system’s kernel but run in compartmentalized environments (called "namespaces") that separate them from the rest of the system and from other containers. In terms of communication with the outside world, containers can be configured to interact via networks, but by default, they are isolated.&lt;/p&gt;

&lt;p&gt;Regarding data persistence, by default, a Docker container does not retain data once stopped or deleted because anything written to its internal filesystem is ephemeral. To persist data beyond the lifecycle of a container, you must explicitly mount volumes or directories. This allows you to save data in the host’s filesystem or on external storage.&lt;/p&gt;

&lt;p&gt;When defining a volume or directory mount, the left-hand path (in Docker syntax) specifies the location of files on the host machine, and the right-hand path indicates where these files will be accessible inside the container.&lt;/p&gt;
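&lt;p&gt;To make this concrete, here is a minimal sketch (the image and paths are just examples) showing that data written through a bind mount outlives the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Write a file through a container into a bind-mounted host directory
$ docker run --rm -v "$(pwd)/example":/data alpine sh -c 'echo hello &gt; /data/file.txt'
# The container is gone (--rm), but the file remains on the host
$ cat ./example/file.txt
hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;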

&lt;p&gt;The modification I make simply tells Docker that the data will always be in the current directory, which is itself located on my external drive.&lt;/p&gt;

&lt;p&gt;This keeps everything needed to run each service, configuration and data alike, in the same place on my external drive.&lt;/p&gt;

&lt;p&gt;This clear separation between data and application is what I truly appreciate about Docker.&lt;/p&gt;

&lt;p&gt;It’s what allows me to unplug this hard drive and mount it elsewhere. If Docker Compose is already installed on the machine, I just do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;new@server:&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /mnt/seagate/nextcloud
new@server:/mnt/seagate/nextcloud&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
new@server:&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /mnt/seagate/bitwarden
new@server:/mnt/seagate/bitwarden&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
new@server:&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /mnt/seagate/wireguard
new@server:/mnt/seagate/wireguard&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it! You find your setup exactly as you left it. I really mean everything: the database connections, users, the latest changes—it all feels like you never switched machines! Even the Redis cache is restored to the state it was in the last time the service ran on my Pi.&lt;/p&gt;

&lt;p&gt;I find this really amazing. The hype around Docker was definitely not exaggerated. I even dream that all the software I use could work this way, even on my laptop.&lt;/p&gt;

&lt;p&gt;For the record, since I’m too lazy to go into each directory manually, I wrote a little script to do it for me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Find all directories that are exactly one level deep and contain a docker-compose.yml file&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;dir &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-mindepth&lt;/span&gt; 2 &lt;span class="nt"&gt;-maxdepth&lt;/span&gt; 2 &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"docker-compose.yml"&lt;/span&gt; &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; &lt;span class="se"&gt;\;&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Entering directory: &lt;/span&gt;&lt;span class="nv"&gt;$dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="c"&gt;# Start docker compose in the current directory&lt;/span&gt;
  docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, this script isn’t even necessary at boot if you’ve set the restart: always option in your Docker Compose configuration: the Docker daemon itself takes care of reviving all the services when the server starts.&lt;/p&gt;
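&lt;p&gt;Note that this relies on the Docker daemon itself being started at boot; on most systemd-based distributions, that is a one-time command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Make the Docker daemon start automatically at boot
$ sudo systemctl enable docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;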

&lt;h2&gt;
  
  
  4. Updating is Even Simpler
&lt;/h2&gt;

&lt;p&gt;I didn’t mention it earlier, but the first time you run docker compose up -d, it’s roughly equivalent to running three commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose pull
&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose build
&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In subsequent runs, it’s equivalent to just a start.&lt;/p&gt;

&lt;p&gt;If you want to update a container, you first need to change the version tag in the image name, or, if you like living dangerously, you can use the latest tag so that it always fetches the newest image.&lt;/p&gt;
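&lt;p&gt;For example, pinning is just a matter of the tag after the image name (the version number here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  nc:
    image: nextcloud:29-apache  # pinned: only updates when you change this tag
    # image: nextcloud:latest   # always pulls the newest image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;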

&lt;p&gt;To update:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose pull
&lt;span class="nv"&gt;$ &lt;/span&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;  &lt;span class="c"&gt;# It restarts the container only if `docker pull` found a newer image.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Conclusion
&lt;/h2&gt;

&lt;p&gt;Honestly, I don’t know what you think, but I find this approach simple, elegant, and ultra-convenient.&lt;/p&gt;

&lt;p&gt;There are thousands of Docker images available, and the principle is always the same.&lt;/p&gt;

&lt;p&gt;WordPress? Docker Compose. A CMS? Docker Compose. Video or audio streaming? Docker Compose.&lt;/p&gt;

&lt;p&gt;So, next time you think you might need a SaaS solution, try this small reflex: search for your problem followed by "self-hosted" in your favorite search engine.&lt;/p&gt;

&lt;p&gt;If you find something that fits your needs, look for the docker-compose.yml, make the necessary modifications for the volumes, and the world is yours!&lt;/p&gt;

&lt;p&gt;I recommend this site, which lists an incredible number of services that can be installed simply with docker compose.&lt;/p&gt;

&lt;p&gt;We’ll discuss later how to access these services; most of the time, I prefer using a VPN, as it avoids exposing my services on the internet. Of course, WireGuard installs in a snap with its own docker-compose.yml (though you’ll need to forward a UDP port from your router to your instance). You even get a nice web interface with authentication and the ability to generate a QR code for each client as a bonus.&lt;/p&gt;

&lt;p&gt;Or, if you’re using a VPS, I can show you how to associate each service with a domain. It’s really simple, though it didn’t feel that way to me until late 2022.&lt;/p&gt;

&lt;p&gt;Until then, see you soon, and thanks for reading this far.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>selfhosting</category>
      <category>linux</category>
      <category>backup</category>
    </item>
    <item>
      <title>How I Back Up My Data</title>
      <dc:creator>William Mbotta</dc:creator>
      <pubDate>Thu, 22 Aug 2024 14:00:00 +0000</pubDate>
      <link>https://dev.to/sepiropht/how-i-back-up-my-data-4j7</link>
      <guid>https://dev.to/sepiropht/how-i-back-up-my-data-4j7</guid>
      <description>&lt;p&gt;In my last article, I explained how Docker saved me when my Raspberry Pi, which hosted all my services, suddenly failed. Indeed, Docker allows for good compartmentalization of configurations and data, and lets you choose where to store them, which in my case was an external hard drive. In case of a failure, you just need to take the hard drive and connect it to another machine, and you're done.&lt;/p&gt;

&lt;p&gt;Unfortunately, these kinds of accidents happen quite often, and the method I just described is quite effective against them.&lt;br&gt;
But what happens if the external hard drive itself fails, or worse, if there is a fire? Do I lose everything then?&lt;/p&gt;

&lt;p&gt;This type of incident is rarer. I've never experienced a hard drive failure myself; all the drives I bought over the past decade are still working. That made it harder to motivate building an infrastructure resilient to this kind of problem, but I did it anyway, and here is how: deduplicated backups with Borg Backup, shipped off-site to Amazon S3 Glacier.&lt;/p&gt;
&lt;h2&gt;
  
  
  1. 3-2-1 Rule
&lt;/h2&gt;

&lt;p&gt;For my backup plan, I follow the widely accepted 3-2-1 rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep at least 3 copies of your data&lt;/li&gt;
&lt;li&gt;Store them on 2 different types of media&lt;/li&gt;
&lt;li&gt;Keep 1 copy off-site&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The off-site copy is what lets you survive a fire, for example.&lt;br&gt;&lt;br&gt;
So much for the overview and the guiding principle; let's see how I apply it.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Borg Backup and Deduplication
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Install Borg Backup
&lt;/h3&gt;

&lt;p&gt;The main problem with backups is data redundancy between successive backups. Fortunately, tools like Borg exist that let us make incremental backups: the first run backs up all your files, but subsequent runs only record the changes, which saves both bandwidth and space on the backup disk.&lt;/p&gt;

&lt;p&gt;If Borg Backup is not yet installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;borgbackup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
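&lt;p&gt;Before the first backup, the repository must be initialized once (the path and encryption mode here are just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the repository; you will be prompted for a passphrase
$ borg init --encryption=repokey /mnt/seagate/borg-repo
# Later, you can inspect the archives it contains with:
$ borg list /mnt/seagate/borg-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;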



&lt;h3&gt;
  
  
  2. Configure AWS CLI
&lt;/h3&gt;

&lt;p&gt;Configure AWS CLI with your AWS access information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be prompted to enter the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Access Key ID: Your AWS access key.&lt;/li&gt;
&lt;li&gt;AWS Secret Access Key: Your AWS secret key.&lt;/li&gt;
&lt;li&gt;Default region name: The AWS region in which your S3 bucket is located (e.g. us-west-2).&lt;/li&gt;
&lt;li&gt;Default output format: You can leave this blank or choose json.&lt;/li&gt;
&lt;/ul&gt;
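&lt;p&gt;These answers are stored in plain text under ~/.aws/, which is where you can check or edit them later (keys truncated here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;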

&lt;p&gt;Now you need to create an S3 bucket. Note that Glacier Deep Archive is a storage class applied to objects as they are uploaded (the --storage-class flag), not a special type of bucket.&lt;/p&gt;

&lt;p&gt;Once you've done that, you can sync your Borg repository to the bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws s3 &lt;span class="nb"&gt;sync&lt;/span&gt; /mnt/seagate/borg-repo s3://my-borg-backups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For restoration, sync in the other direction. Note that objects stored in the Deep Archive class must first be restored to a retrievable tier (for example with aws s3api restore-object), which can take 12 hours or more, before this download will work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aws s3 &lt;span class="nb"&gt;sync &lt;/span&gt;s3://my-borg-backups /mnt/restore-point
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the complete backup script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;REPOSITORY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/mnt/seagate/borg-repo”
SOURCE="&lt;/span&gt;/mnt/ssd/”
&lt;span class="nv"&gt;BORG_S3_BACKUP_BUCKET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bucket-name”


export BORG_PASSPHRASE='PASSPHRASE'

# Backup to borg repo
borg create -v --stats &lt;/span&gt;&lt;span class="nv"&gt;$REPOSITORY&lt;/span&gt;&lt;span class="s2"&gt;::&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y-%m-%d-%h&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$SOURCE&lt;/span&gt;&lt;span class="s2"&gt;


# Backup to s3
aws s3 sync &lt;/span&gt;&lt;span class="nv"&gt;$REPOSITORY&lt;/span&gt;&lt;span class="s2"&gt; s3://&lt;/span&gt;&lt;span class="nv"&gt;$BORG_S3_BACKUP_BUCKET&lt;/span&gt;&lt;span class="s2"&gt; --storage-class DEEP_ARCHIVE --delete
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I run this script from a cron job every day.&lt;/p&gt;
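&lt;p&gt;The cron entry itself is a one-liner, added with crontab -e (the script path and schedule here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# m h dom mon dow  command
0 3 * * * /home/user/backup.sh &gt;&gt; /var/log/backup.log 2&gt;&amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;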

&lt;p&gt;If you've made it this far, you now have an automated 3-2-1 backup system with deduplication and an off-site copy.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>selfhosting</category>
      <category>backup</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
