<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Krishnakanth Alagiri</title>
    <description>The latest articles on DEV Community by Krishnakanth Alagiri (@bearlike).</description>
    <link>https://dev.to/bearlike</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F560972%2F5fe3eaad-98ff-45b0-a86f-5802bb0d764c.gif</url>
      <title>DEV Community: Krishnakanth Alagiri</title>
      <link>https://dev.to/bearlike</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bearlike"/>
    <language>en</language>
    <item>
      <title>Why and How to Use Unattended-Upgrades on an Ubuntu Server?</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Thu, 02 Mar 2023 01:51:02 +0000</pubDate>
      <link>https://dev.to/bearlike/why-and-how-to-use-unattended-upgrades-on-a-ubuntu-server-4hoc</link>
      <guid>https://dev.to/bearlike/why-and-how-to-use-unattended-upgrades-on-a-ubuntu-server-4hoc</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;SysAdmins already have enough on their plate without having to manually update their systems every day!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why Use Unattended-Upgrades?
&lt;/h2&gt;

&lt;p&gt;Keeping software up-to-date is crucial for security reasons, as updates often include patches for vulnerabilities. However, it can be time-consuming and inconvenient to manually update packages regularly. This is where Unattended-Upgrades comes in.&lt;/p&gt;

&lt;p&gt;Unattended-Upgrades is a package for Ubuntu that allows automatic installation of security updates. This means that critical updates are installed without user intervention, reducing the risk of security breaches and keeping your system secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Install and Configure Unattended-Upgrades
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install the Unattended-Upgrades package by running the following command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install unattended-upgrades
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit the configuration file &lt;code&gt;/etc/apt/apt.conf.d/50unattended-upgrades&lt;/code&gt; to enable automatic security updates. Uncomment the following lines:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Unattended-Upgrade::Allowed-Origins {
//        "${distro_id}:${distro_codename}-security";
//  //      "${distro_id}:${distro_codename}-updates";
//};
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;So it looks like this:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";
};
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This tells Unattended-Upgrades to automatically install packages from both the &lt;code&gt;-security&lt;/code&gt; and the &lt;code&gt;-updates&lt;/code&gt; repositories.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable automatic updates by editing the configuration file &lt;code&gt;/etc/apt/apt.conf.d/20auto-upgrades&lt;/code&gt;. Ensure the following lines are present (add or uncomment them as needed):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This tells the system to update the package lists and install security updates automatically.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure email notifications (optional). Unattended-Upgrades can email a report after each run, provided a mail transport agent (such as Postfix) is installed. To enable notifications, edit the configuration file &lt;code&gt;/etc/apt/apt.conf.d/50unattended-upgrades&lt;/code&gt; and add the following lines:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Unattended-Upgrade::Mail "youremail@example.com";
Unattended-Upgrade::MailOnlyOnError "true";
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Replace &lt;code&gt;youremail@example.com&lt;/code&gt; with your email address. With &lt;code&gt;MailOnlyOnError&lt;/code&gt; set to &lt;code&gt;"true"&lt;/code&gt;, you will only be notified when an upgrade fails.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the Unattended-Upgrades service to apply the changes:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart unattended-upgrades
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
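&lt;p&gt;The same file supports a few other commonly used options. As a sketch (the option names are documented in the comments of the stock &lt;code&gt;50unattended-upgrades&lt;/code&gt; file; verify them against your copy):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Remove packages that became unneeded after an upgrade
Unattended-Upgrade::Remove-Unused-Dependencies "true";
// Reboot automatically (at 02:00) when an upgrade requires it
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
&lt;/code&gt;&lt;/pre&gt;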

&lt;p&gt;That's it! Your Ubuntu system is now configured to automatically install security updates. You can rest easy knowing that your system is up-to-date and secure.&lt;/p&gt;

</description>
      <category>ubuntu</category>
      <category>automation</category>
    </item>
    <item>
      <title>Can this makeshift vertical case keep the Pi cool?</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Sat, 30 Oct 2021 11:04:34 +0000</pubDate>
      <link>https://dev.to/bearlike/can-this-makeshift-vertical-case-keep-the-pi-cool-22ek</link>
      <guid>https://dev.to/bearlike/can-this-makeshift-vertical-case-keep-the-pi-cool-22ek</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://thekrishna.in/blogs/blog/rpi-temp-case/" rel="noopener noreferrer"&gt;thekrishna.in&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Raspberry Pi 4 offers an excellent upgrade over the preceding Pi 3B+, but keeping the new model from overheating can be a challenge. Users have tried many techniques to tackle this issue, from running the Pi in the open air to utilizing cooling rigs with fans attached. But as vertical cases gain popularity, can my DIY vertical case keep the Broadcom BCM2711B0 quad-core Cortex-A72 SoC (system on chip) cool? Spoiler alert: surprisingly, yes.&lt;/p&gt;

&lt;p&gt;First of all, why subject your Raspberry Pi to this level of stress? In the Raspberry Pi 4, the A72 CPU is so powerful that it can overheat if it doesn't have adequate cooling. When that happens, the CPU is thermally throttled (governed) to reduce power consumption and, in turn, heat generation. The Raspberry Pi 3B+ and its predecessors could also overheat, but this was less of an issue for the majority of use cases. A quick stress test will reveal whether the Raspberry Pi 4 can run at maximum CPU load in its case/environment without overheating or throttling.&lt;/p&gt;

&lt;p&gt;The goal of this post is to create a graph which depicts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A stabilization period at the beginning&lt;/li&gt;
&lt;li&gt;A period of sustained full CPU load&lt;/li&gt;
&lt;li&gt;The CPU temperature over time&lt;/li&gt;
&lt;li&gt;The CPU clock speed (to see whether the CPU is being thermally throttled)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I searched the Internet for existing methods of stress testing the Pi and stumbled upon Stressberry. The package is straightforward; as its documentation puts it, "stressberry is a package for testing the core temperature under different loads, and it produces nice plots which can easily be compared". The room temperature at the time was about 26ºC. The Raspberry Pi was operating at its stock clock speed of 1.5 GHz, since this is more relevant to the wider population.&lt;/p&gt;
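&lt;p&gt;For reference, this is roughly how such a measurement is produced (a sketch: the flag names come from Stressberry's README and may differ across versions, so check &lt;code&gt;stressberry-run --help&lt;/code&gt; on your install):&lt;/p&gt;

```shell
# Sketch of a Stressberry run; install first with 'pip3 install stressberry'
# and 'sudo apt-get install stress'. The guard keeps this harmless on
# machines where Stressberry is absent.
if command -v stressberry-run >/dev/null; then
    # idle 5 min, stress 10 min, idle 5 min; log temperature and frequency
    stressberry-run -d 600 -i 300 mytest.dat
    # plot temperature with the CPU frequency overlaid
    stressberry-plot mytest.dat -f -o mytest.png
else
    echo "stressberry not installed; skipping measurement"
fi
```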

&lt;h4&gt;
  
  
  Chart notes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blue&lt;/strong&gt; = Temperature in degrees Celsius&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orange&lt;/strong&gt; = CPU clock speed (maximum 1500 MHz)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Test 1: Heat Sink, No Fan, Open air, Stock Case with open lids.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthekrishna.in%2Fassets%2Fimg%2Fblogs%2Frpi-temp-case%2Frpi_test_1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthekrishna.in%2Fassets%2Fimg%2Fblogs%2Frpi-temp-case%2Frpi_test_1.jpg" alt="Test Case 1" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Creative visualization (but not inaccurate 😛) of the setup and the performance chart for the stock case setup&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Towards the end of 10 minutes under full load, the CPU was clearly in trouble.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test 2: Heat Sink, Fan, Open air, DIY Vertical Case.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthekrishna.in%2Fassets%2Fimg%2Fblogs%2Frpi-temp-case%2Frpi_test_2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthekrishna.in%2Fassets%2Fimg%2Fblogs%2Frpi-temp-case%2Frpi_test_2.jpg" alt="Test Case 2" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Actual setup and the performance chart for the vertical setup  &lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>linux</category>
      <category>showdev</category>
      <category>raspberrypi</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Raspberry Pi — Awesome custom MOTD</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Mon, 18 Jan 2021 20:09:57 +0000</pubDate>
      <link>https://dev.to/bearlike/raspberry-pi-awesome-custom-motd-bpd</link>
      <guid>https://dev.to/bearlike/raspberry-pi-awesome-custom-motd-bpd</guid>
      <description>&lt;p&gt;Even though the Raspberry Pi comes with an HDMI port, most projects run headless (without a display), which means you're mostly using SSH to access the system. I'm bored of seeing the most basic login banner with no information. This login banner is your MOTD (Message of the Day, in Linux terms). My goal here was to have something that could quickly show me the machine's information and its current state. It should also be as brief as is practical and, importantly, fast to execute.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The programs included with the Debian GNU/Linux systems are free software; The exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright 

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable Iaw. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to do it?
&lt;/h2&gt;

&lt;p&gt;Remove the default MOTD. It is located in &lt;code&gt;/etc/motd&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Remove the folder&lt;/span&gt;
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/motd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, open &lt;code&gt;/home/&amp;lt;user&amp;gt;/.bash_profile&lt;/code&gt; if you're using Raspberry Pi OS (aka Raspbian). You can adapt this file however you like to match your needs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# mostly, the user is "Pi"&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /home/pi/.bash_profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then just paste in the code below, anywhere within that file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clear 
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput bold&lt;span class="si"&gt;)$(&lt;/span&gt;tput setaf 2&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"    .~~.   .~~.  "&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"   '. &lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="s2"&gt;' ' / .' "&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput setaf 1&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"    .~ .~~~..~.   "&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"   : .~.'~'.~. :  "&lt;/span&gt;  
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"  ~ (   ) (   ) ~ "&lt;/span&gt;  
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" ( : '~'.~.'~' : )"&lt;/span&gt;    
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"  ~ .~ (   ) ~. ~ "&lt;/span&gt;  
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"   (  : '~' :  )  "&lt;/span&gt;  
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"    '~ .~~~. ~'   "&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"        '~'      "&lt;/span&gt;
&lt;span class="nb"&gt;let &lt;/span&gt;&lt;span class="nv"&gt;upSeconds&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;/usr/bin/cut &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-f1&lt;/span&gt; /proc/uptime&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;let &lt;/span&gt;&lt;span class="nv"&gt;secs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((${&lt;/span&gt;&lt;span class="nv"&gt;upSeconds&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="m"&gt;60&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
&lt;span class="nb"&gt;let &lt;/span&gt;&lt;span class="nv"&gt;mins&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((${&lt;/span&gt;&lt;span class="nv"&gt;upSeconds&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;60&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="m"&gt;60&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
&lt;span class="nb"&gt;let &lt;/span&gt;&lt;span class="nv"&gt;hours&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((${&lt;/span&gt;&lt;span class="nv"&gt;upSeconds&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;3600&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="m"&gt;24&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
&lt;span class="nb"&gt;let &lt;/span&gt;&lt;span class="nv"&gt;days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((${&lt;/span&gt;&lt;span class="nv"&gt;upSeconds&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;86400&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
&lt;span class="nv"&gt;UPTIME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"%d days, %02dh%02dm%02ds"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$days&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$hours&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$mins&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$secs&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;

&lt;span class="c"&gt;# get the load averages&lt;/span&gt;
&lt;span class="nb"&gt;read &lt;/span&gt;one five fifteen rest &amp;lt; /proc/loadavg

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput setaf 2&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +&lt;span class="s2"&gt;"%A, %e %B %Y, %r"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-srmo&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;

&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput sgr0&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;- Uptime.............: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;UPTIME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput sgr0&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;- Memory.............: &lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;free | &lt;span class="nb"&gt;grep &lt;/span&gt;Mem | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $3/1024}'&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt; MB (Used) / &lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /proc/meminfo | &lt;span class="nb"&gt;grep &lt;/span&gt;MemTotal | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'print $2/1024'&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt; MB (Total)
&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput sgr0&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;- Load Averages......: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;one&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;five&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;fifteen&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; (1, 5, 15 min)
&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput sgr0&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;- Running Processes..: &lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;ps ax | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;" "&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput sgr0&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;- IP Addresses.......: &lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;hostname&lt;/span&gt; &lt;span class="nt"&gt;-I&lt;/span&gt; | /usr/bin/cut &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;" "&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; 1&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt; and &lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;wget &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-O&lt;/span&gt; - http://icanhazip.com/ | &lt;span class="nb"&gt;tail&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;

&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;tput sgr0&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
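&lt;p&gt;As an aside, the &lt;code&gt;let&lt;/code&gt; and backtick constructs above are older shell style. A sketch of the same uptime calculation in modern syntax (arithmetic expansion and command substitution):&lt;/p&gt;

```shell
# Same uptime arithmetic as the script above, using $(( )) instead of 'let'
# and $( ) instead of backticks.
up=$(cut -d. -f1 /proc/uptime)
printf "Uptime: %d days, %02dh%02dm%02ds\n" \
  $((up / 86400)) $((up / 3600 % 24)) $((up / 60 % 60)) $((up % 60))
```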



&lt;p&gt;Restart the &lt;code&gt;sshd&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Restart the sshd service via systemctl command&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  This is how it looks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn954q0ne6zb8i11ewed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn954q0ne6zb8i11ewed.png" width="800" height="738"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add-ons
&lt;/h2&gt;

&lt;p&gt;You could additionally edit the above script to take advantage of packages such as &lt;code&gt;neofetch&lt;/code&gt; and &lt;code&gt;figlet&lt;/code&gt; to make attractive and informative MOTDs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Neofetch
&lt;/h3&gt;

&lt;p&gt;Neofetch is a command-line system information utility written in Bash. It prints the information of your system's software and hardware in the Terminal. By default, the system information will be displayed alongside your operating system's logo in ASCII.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z01wur0dikyulhhkasg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z01wur0dikyulhhkasg.png" width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Figlet
&lt;/h3&gt;

&lt;p&gt;FIGlet is a program that generates text banners, in a variety of typefaces, with letters made up of combinations of smaller ASCII characters. So you can use the &lt;code&gt;figlet&lt;/code&gt; command to turn regular terminal text into huge fancy text, like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyl4iqgbo4fatqdfut6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyl4iqgbo4fatqdfut6l.png" width="800" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ubuntu</category>
      <category>architecture</category>
      <category>ssh</category>
    </item>
    <item>
      <title>Understanding our future with 5G</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Mon, 18 Jan 2021 19:56:46 +0000</pubDate>
      <link>https://dev.to/bearlike/understanding-our-future-with-5g-5274</link>
      <guid>https://dev.to/bearlike/understanding-our-future-with-5g-5274</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://thekrishna.in/blogs/blog/5g-future/" rel="noopener noreferrer"&gt;thekrishna.in&lt;/a&gt; on July 5, 2020&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As the long-term evolution (LTE) system embracing 4G is reaching maturity, it is reasonable for humanity to ponder “what’s next?”&lt;/p&gt;

&lt;p&gt;We all know that 5G is a paradigm shift from 4G, with very high carrier frequencies, massive bandwidths, a tremendous density of base stations and devices, and unprecedented numbers of antennas. But unlike the past four generations, it will be highly integrative, binding the new 5G air interface and spectrum together with LTE and WiFi to give universal high-rate coverage and a seamless user experience. This will drive a new wave of wireless data growth, powered largely by smartphones, tablets, and other wireless devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes 5G demanding?
&lt;/h2&gt;

&lt;p&gt;In just a decade, the amount of IP data handled by wireless networks has grown by well over a factor of 100, &lt;strong&gt;from under 3 exabytes in 2010 to over 190 exabytes by 2018&lt;/strong&gt;, and &lt;strong&gt;is on pace to surpass 500 exabytes by the end of 2020&lt;/strong&gt;. Beyond the sheer volume of data, the number of devices and the data rates they demand will continue to grow exponentially. The number of devices could reach tens or even hundreds of billions by the time 5G is fully matured, driven by many new applications beyond personal communications, such as cloud gaming and video streaming. So let us jot down the top two reasons why we need 5G in the first place.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Data Rate
&lt;/h4&gt;

&lt;p&gt;The need to support the mobile data traffic explosion is undoubtedly the main driver behind 5G. The primary parameters of Data Rates on which 5G networks can be judged are &lt;strong&gt;Aggregate data rate&lt;/strong&gt; (total amount of data the network can serve), &lt;strong&gt;Edge rate&lt;/strong&gt; (worst case data rate per user), and &lt;strong&gt;Peak rate&lt;/strong&gt; (best case data rate per user). &lt;/p&gt;

&lt;h4&gt;
  
  
  2. Latency
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Current 4G round-trip latencies are on the order of 15 ms.&lt;/strong&gt; Although this is sufficient for most current services, anticipated 5G cloud-based applications such as &lt;strong&gt;Google Stadia, the cloud gaming platform,&lt;/strong&gt; will require seemingly real-time feedback loops. Therefore, &lt;strong&gt;5G should support a round-trip latency of about 1 ms&lt;/strong&gt;, an order of magnitude faster than 4G.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does 5G solve the problem?
&lt;/h2&gt;

&lt;p&gt;In addition to the highly visible need for more network capacity, several other factors make 5G interesting, including the potentially disruptive “big three” 5G technologies: &lt;strong&gt;ultra-densification&lt;/strong&gt;, &lt;strong&gt;mmWave&lt;/strong&gt;, and &lt;strong&gt;massive multiple-input multiple-output (MIMO)&lt;/strong&gt;. Let's take a quick look at them.&lt;/p&gt;

&lt;h3&gt;
  
  
  A quick look at Ultra Densification
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Put differently, more active nodes per unit area and Hz.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In simple terms, a 5G network will aim to provide high system capacity with high per-user data rates. This will require densifying the radio access network, that is, deploying many additional network nodes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9f8ep7bp13wj1w6dj2q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9f8ep7bp13wj1w6dj2q.jpg" alt="Source: CSIRO: Simulations of Ultra-dense small cell networks" width="382" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source: CSIRO: Simulations of Ultra-dense small cell networks&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is how it works, &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;By increasing the number of cells, the traffic per square-meter will be increased without an increase in the traffic per network node.&lt;/li&gt;
&lt;li&gt;Thereby, by increasing the number of network nodes, the base-station-to-terminal distances will be shorter. &lt;/li&gt;
&lt;li&gt;Due to shorter base-station-to-terminal distances, there will be an improvement in achievable data rates.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To make this system of networks possible requires a fair bit of densification.&lt;/p&gt;

&lt;h3&gt;
  
  
  A quick look at mmWave
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Altogether, more Hz&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The term mmWave refers to a particular part of the radio frequency spectrum between &lt;strong&gt;24GHz and 100GHz&lt;/strong&gt;, with a very short wavelength. This section of the spectrum is largely unused, so mmWave technology intends to &lt;strong&gt;increase the amount of bandwidth accessible&lt;/strong&gt;. Plus, the lower frequencies are heavily congested with signals from TV, radio, and even 4G LTE networks, which sit between 800MHz and 3,000MHz. Another consequence of this short wavelength is that it can &lt;em&gt;transfer data even faster at the expense of transfer distance&lt;/em&gt; (i.e., more transfer speed, less range). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo17k9gfsufuyuvqq4qqx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo17k9gfsufuyuvqq4qqx.jpg" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Source: Qualcomm&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  A Quick look into massive MIMO
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;To support more bits/s/Hz per node&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Multiple Input Multiple Output (MIMO) has been used in wireless communications for a long time now; nearly all our devices have multiple antennas to enhance connectivity and performance. MIMO algorithms control how data maps onto antennas and where to focus energy in space. The network and mobile devices must coordinate tightly with each other to make MIMO work. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In simple words, MIMO is how Devices and Network work together.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But with how the upcoming 5G NR is designed, a substantially larger number of network nodes and devices must work in tight coordination. So MIMO becomes "massive", and is called massive MIMO. The "massive" number of antennas improves the focusing of energy, which yields radical improvements in throughput and efficiency. Along with the increased number of antennas, both the network and mobile devices need more complex designs to regulate MIMO operations. &lt;/p&gt;

&lt;h3&gt;
  
  
  A Glimpse into the Possibilities
&lt;/h3&gt;

&lt;p&gt;The way I see it, the 5G network will be an accelerator for technologies that already exist. There are endless possibilities with dense, fast 5G wireless networks. So let's look at some of my favorite picks.&lt;/p&gt;

&lt;h4&gt;
  
  
  Edge Computing
&lt;/h4&gt;

&lt;p&gt;Edge computing is a method of optimizing cloud computing systems by performing data processing at the edge of the network, near the source of the data. Imagine ordering a pizza from your beloved restaurant, but your neighbor cooks and delivers it for you. This means &lt;strong&gt;faster delivery&lt;/strong&gt;. Edge computing is very similar to that. &lt;/p&gt;

&lt;p&gt;So 5G standards are the only way to meet the latency targets that have been set (&lt;strong&gt;1ms network latency&lt;/strong&gt;) for realistic use of edge computing; improvements in the radio interface alone will never achieve them. &lt;strong&gt;With a well-matured 5G in the game of computing, the abundance of unused resources in wireless and IoT devices could make compute units significantly cheaper than those of traditional cloud providers.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjcu3ml9jwegyhy31gyf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjcu3ml9jwegyhy31gyf.jpg" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Autonomous Driving
&lt;/h4&gt;

&lt;p&gt;The use of 5G wireless networks can bring self-driving cars closer to reality. The key ingredient currently missing from autonomous vehicles is high-performance, low-latency wireless network connectivity.&lt;/p&gt;

&lt;p&gt;5G will &lt;strong&gt;enable Vehicle-to-Everything (V2X)&lt;/strong&gt; in the future, letting cars communicate with everything from traffic signs and environmental objects to other vehicles on the road. &lt;strong&gt;The original V2X standard is based on an IEEE 802.11p Wi-Fi offshoot&lt;/strong&gt;, so coupling it with a network like 5G will drastically &lt;strong&gt;reduce latencies and decrease an AV's response time&lt;/strong&gt;, thus ensuring passenger safety.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnwyd2w6c047ddlhmcen.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnwyd2w6c047ddlhmcen.jpg" width="480" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A Visualization of Vehicle-to-Everything (V2X) standard&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Internet of Things (IoT)
&lt;/h4&gt;

&lt;p&gt;The Internet of Things (IoT) is another extensive field for development on a supercharged 5G wireless network. IoT devices are already all around us, from wristbands to our cars, collecting huge amounts of data from millions of devices and sensors. Crucial real-time IoT processes such as data collection, processing, transmission, control, and analytics can all be accelerated by an efficient network.&lt;/p&gt;

&lt;h4&gt;
  
  
  Drone and Security Operations
&lt;/h4&gt;

&lt;p&gt;The number of operative drones is &lt;strong&gt;forecast to grow beyond 18.1 million by 2025&lt;/strong&gt;, driving flight authorities and society to explore solutions for overseeing all drone users and traffic. Rapid technological progress in propulsion, sensors, and navigation and guidance systems has led to reliable drone platforms. Consequently, drones must acquire data in real time. To &lt;strong&gt;overcome current RF limitations&lt;/strong&gt; on flight endurance, data-set sizes, video streaming, and alternative control capabilities, 5G wireless networks are required. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flc8b4to2tz13lqho91ii.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flc8b4to2tz13lqho91ii.jpg" width="528" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon delivers packages to customers using drones&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Acknowledgments
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;J. G. Andrews et al., "What Will 5G Be?," in IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1065-1082, June 2014, doi: 10.1109/JSAC.2014.2328098.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;X. Ge, S. Tu, G. Mao, C. Wang and T. Han, "5G Ultra-Dense Cellular Networks," in IEEE Wireless Communications, vol. 23, no. 1, pp. 72-79, February 2016, doi: 10.1109/MWC.2016.7422408.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>5g</category>
      <category>cloud</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Scaling with Nginx; Cost-Performance analysis</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Mon, 18 Jan 2021 13:38:15 +0000</pubDate>
      <link>https://dev.to/bearlike/scaling-with-nginx-cost-performance-analysis-nb9</link>
      <guid>https://dev.to/bearlike/scaling-with-nginx-cost-performance-analysis-nb9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://thekrishna.in/blogs/blog/nginx-scaling/" rel="noopener noreferrer"&gt;thekrishna.in&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you realize your system is getting slow and cannot handle the current number of requests even with optimizations, you need to scale the system sooner than you can optimize further. Building a scalable system also drives a lower Total Cost of Ownership (TCO). Proper scaling in process-intensive applications enables interesting new scenarios, notably in data analytics and machine learning. Traditionally, you have two options: horizontal scaling and vertical scaling. &lt;/p&gt;

&lt;h2&gt;
  
  
  Pokémon Go - A Successful Scaling Story
&lt;/h2&gt;

&lt;p&gt;If you downloaded the Pokémon GO app right at its launch, you might have faced several minutes of server unavailability. Given the hype around the announcement of Pokémon GO, one might have already presumed the chaos going on in the backend infrastructure. Pokémon GO's engineers never imagined their user base would grow exponentially, exceeding expectations within such a short time. With &lt;strong&gt;500+&lt;/strong&gt; million downloads and &lt;strong&gt;20+&lt;/strong&gt; million daily active users, the actual traffic was 50 times their initially expected traffic. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vnsa1pvdh33rx6pvgtj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vnsa1pvdh33rx6pvgtj.jpg" alt="Pokemon Go Scaling" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This makes Pokémon GO one of the most exciting examples of container-based development and scaling in the wild. The logic for the game runs on &lt;a href="https://cloud.google.com/container-engine/" rel="noopener noreferrer"&gt;Google Container Engine (GKE)&lt;/a&gt; powered by Kubernetes. Niantic picked GKE for its capability to orchestrate their container cluster at large scale, saving its focus for deploying live changes for their players. So by provisioning tens of thousands of cores for Niantic's Container Engine cluster, the game brought joy to Pokémon Trainers around the globe.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Horizontal scaling?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Horizontal scaling implies that you scale a cluster by attaching more machines or nodes&lt;/strong&gt; to your pool of resources. &lt;strong&gt;Scaling horizontally&lt;/strong&gt; is like having thousands of minions do the work together for you. Increasing the number of servers in a system is the most widely used solution in the tech industry. It decreases the load on each node while providing redundancy and flexibility, thus reducing the risk of downtime. If you need to scale the system further, just add another server, and you are done. &lt;/p&gt;
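&lt;p&gt;As a minimal sketch of the idea (assuming a hypothetical stateless service named &lt;code&gt;web&lt;/code&gt; already running behind a load balancer; the orchestrator commands are guarded so the snippet is safe to copy anywhere):&lt;/p&gt;

```shell
# Scaling out is usually a one-liner with an orchestrator.
# "web" is a hypothetical deployment/service name; set RUN=1 to execute.
if [ "${RUN:-0}" = "1" ]; then
  kubectl scale deployment web --replicas=5   # Kubernetes: more pods behind the Service
  docker service scale web=5                  # Docker Swarm equivalent
fi

# Why it helps: with a load balancer in front, each node's share of the
# traffic shrinks as replicas are added.
TOTAL_RPS=1000
REPLICAS=5
PER_NODE=$((TOTAL_RPS / REPLICAS))
echo "each node now handles ~${PER_NODE} req/s"
```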

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5elb70q805tuqpe0nse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5elb70q805tuqpe0nse.png" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs91o53x6y771cfamb3e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs91o53x6y771cfamb3e.jpg" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Vertical scaling
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Vertical scaling means that you scale by adding horsepower to an existing machine&lt;/strong&gt;. &lt;strong&gt;Scaling vertically&lt;/strong&gt; is like having one big Hulk do all the work for you. You increase the resources of the server you are currently using (more RAM, CPU, GPU, and other resources). &lt;strong&gt;A horizontally scaled app, by contrast, provides the benefit of elasticity.&lt;/strong&gt; Vertical scaling is more expensive than horizontal scaling and may require your machine to be brought down for a moment while the upgrade takes place.&lt;/p&gt;
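&lt;p&gt;A minimal vertical-scaling sketch, assuming a hypothetical container named &lt;code&gt;web&lt;/code&gt; (the Docker command is guarded so the snippet is safe to copy anywhere):&lt;/p&gt;

```shell
# Raise the resource limits of a running container in place.
# "web" is a hypothetical container name; set RUN=1 to execute.
if [ "${RUN:-0}" = "1" ]; then
  docker update --cpus 1.0 --memory 512m web
fi

# Unlike adding replicas, the capacity gain comes from the single machine:
OLD_MB=256
NEW_MB=512
GROWTH=$((NEW_MB / OLD_MB))
echo "memory limit grew ${GROWTH}x on the same node"
```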

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq66rozo0tcyhs4f5w8er.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq66rozo0tcyhs4f5w8er.png" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxpyp1cm3ug2t1e1k42h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxpyp1cm3ug2t1e1k42h.jpg" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://thekrishna.in/blogs/blog/fav-docker-images/#10-nginx" rel="noopener noreferrer"&gt;I've explained in brief about nginx and how to run it using docker in my previous post&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;As developers, we do not always have access to a production-like environment to test new features and run proofs-of-concept. This is why it can be interesting to deploy containers for such an experiment. For this little experiment, our host machine (part of my home-lab setup) comes with a 4-core Intel &lt;strong&gt;i5-4200M&lt;/strong&gt; (base speed of 2.5 GHz) and dedicated &lt;strong&gt;6GB DDR3 SDRAM&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The client uses the &lt;a href="https://httpd.apache.org/docs/current/programs/ab.html" rel="noopener noreferrer"&gt;&lt;strong&gt;ab - Apache HTTP server benchmarking tool&lt;/strong&gt;&lt;/a&gt;. To ensure connection stability and integrity, both the host machine and the client are connected via Ethernet on an isolated network. Cost is estimated from the highest on-demand AWS EC2 and Sherweb pricing with fixed storage and network bandwidth. &lt;/p&gt;

&lt;h2&gt;
  
  
  Diving into Action
&lt;/h2&gt;

&lt;p&gt;We'll be requesting the below HTML document from our Nginx server. &lt;a href="https://gist.github.com/bearlike/86699ebaa054842979a7beec032756e2" rel="noopener noreferrer"&gt;As you can see here, there are no references to increase unnecessary load times&lt;/a&gt;. The HTML Document Length is &lt;strong&gt;1328 bytes,&lt;/strong&gt; and our server is running &lt;code&gt;nginx/1.19.0&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Initial Benchmark
&lt;/h4&gt;

&lt;p&gt;Our initial benchmark is on a single Docker container with &lt;strong&gt;4 MB of memory and 0.1 CPU cores&lt;/strong&gt;. Docker allocates partial CPU cores by adjusting the CPU CFS scheduler period and imposing a CPU CFS quota on the container. To kick things up a notch, &lt;strong&gt;I used 10,000 requests with a concurrency of 1000&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ab -n &amp;lt;num_requests&amp;gt; -c &amp;lt;concurrency&amp;gt; &amp;lt;addr&amp;gt;:&amp;lt;port&amp;gt;&amp;lt;path&amp;gt;&lt;/span&gt;
ab &lt;span class="nt"&gt;-n&lt;/span&gt; 10000 &lt;span class="nt"&gt;-c&lt;/span&gt; 1000 http://192.168.1.16/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
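&lt;p&gt;For reference, this is how such a constrained container could be launched (the container name &lt;code&gt;nginx-bench&lt;/code&gt; is illustrative, and the Docker command is guarded so the snippet is safe to copy):&lt;/p&gt;

```shell
# nginx capped at 0.1 CPU cores and 4 MB of memory; set RUN=1 to execute.
if [ "${RUN:-0}" = "1" ]; then
  docker run -d --name nginx-bench \
    --cpus 0.1 --memory 4m -p 80:80 nginx:1.19.0
fi

# "--cpus 0.1" is shorthand for a CFS quota of 10% of each scheduler period:
PERIOD_US=100000                 # default CFS period (100 ms)
QUOTA_US=$((PERIOD_US / 10))     # 0.1 cores
echo "cpu quota: ${QUOTA_US}us per ${PERIOD_US}us period"
```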



&lt;h5&gt;
  
  
  &lt;strong&gt;Observations&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;It took the server about &lt;strong&gt;31.462 seconds&lt;/strong&gt; to transfer 14.89 MB (15620000 bytes)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The fastest request was served in 50 ms and the slowest in 20,796 ms&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This request time was high due to the very high concurrency and was not observed at lower concurrency levels.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Vertical Scaling
&lt;/h4&gt;

&lt;p&gt;Like above, our benchmarking parameters are &lt;strong&gt;10,000 requests with a concurrency of 1000&lt;/strong&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Tabulated Result
&lt;/h5&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;CPU Cores&lt;/th&gt;
&lt;th&gt;Memory (MB)&lt;/th&gt;
&lt;th&gt;Time Taken (s)&lt;/th&gt;
&lt;th&gt;Request Per Second&lt;/th&gt;
&lt;th&gt;Transfer rate (KBps)&lt;/th&gt;
&lt;th&gt;Min. Serve Time (ms)&lt;/th&gt;
&lt;th&gt;Max. Serve Time (ms)&lt;/th&gt;
&lt;th&gt;Time per request (ms)&lt;/th&gt;
&lt;th&gt;Estimated Monthly Cost (USD)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;64.614&lt;/td&gt;
&lt;td&gt;154.77&lt;/td&gt;
&lt;td&gt;234.1&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;td&gt;64,537&lt;/td&gt;
&lt;td&gt;6.461&lt;/td&gt;
&lt;td&gt;0.51&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;0.3&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;50.657&lt;/td&gt;
&lt;td&gt;196.42&lt;/td&gt;
&lt;td&gt;289.17&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;td&gt;35,162&lt;/td&gt;
&lt;td&gt;5.091&lt;/td&gt;
&lt;td&gt;2.02&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;39.327&lt;/td&gt;
&lt;td&gt;254.28&lt;/td&gt;
&lt;td&gt;376.24&lt;/td&gt;
&lt;td&gt;49&lt;/td&gt;
&lt;td&gt;33,404&lt;/td&gt;
&lt;td&gt;4.133&lt;/td&gt;
&lt;td&gt;3.36&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;35.985&lt;/td&gt;
&lt;td&gt;277.89&lt;/td&gt;
&lt;td&gt;412.15&lt;/td&gt;
&lt;td&gt;42&lt;/td&gt;
&lt;td&gt;31,926&lt;/td&gt;
&lt;td&gt;3.599&lt;/td&gt;
&lt;td&gt;4.72&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;E&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;29.238&lt;/td&gt;
&lt;td&gt;338.19&lt;/td&gt;
&lt;td&gt;505.88&lt;/td&gt;
&lt;td&gt;51&lt;/td&gt;
&lt;td&gt;29,189&lt;/td&gt;
&lt;td&gt;2.957&lt;/td&gt;
&lt;td&gt;6.81&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Horizontal Scaling
&lt;/h4&gt;

&lt;p&gt;Like above, our benchmarking parameters are &lt;strong&gt;10,000 requests with a concurrency of 1000&lt;/strong&gt;. Each instance has &lt;strong&gt;0.1 vCPU and 2 MB of memory&lt;/strong&gt; and is deployed using the Nginx Ingress Controller in a Minikube cluster.&lt;/p&gt;
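&lt;p&gt;A sketch of that horizontal setup: N identical nginx pods, each capped at 0.1 vCPU, behind one entry point in Minikube (the deployment name &lt;code&gt;nginx-bench&lt;/code&gt; is illustrative, and the kubectl commands are guarded so the snippet is safe to copy):&lt;/p&gt;

```shell
# Set RUN=1 to execute against a real cluster.
if [ "${RUN:-0}" = "1" ]; then
  kubectl create deployment nginx-bench --image=nginx:1.19.0
  kubectl set resources deployment nginx-bench --limits=cpu=100m,memory=2Mi
  kubectl scale deployment nginx-bench --replicas=10
  kubectl expose deployment nginx-bench --port=80
fi

# Aggregate CPU across the fleet for row E (10 instances x 0.1 vCPU):
MILLICORES_EACH=100
INSTANCES=10
TOTAL=$((MILLICORES_EACH * INSTANCES))
echo "fleet capacity: ${TOTAL}m (about one full core)"
```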

&lt;h5&gt;
  
  
  Tabulated Result
&lt;/h5&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;Number of Instances&lt;/th&gt;
&lt;th&gt;Time Taken (s)&lt;/th&gt;
&lt;th&gt;Request Per Second&lt;/th&gt;
&lt;th&gt;Transfer rate (KBps)&lt;/th&gt;
&lt;th&gt;Min. Time to Serve (ms)&lt;/th&gt;
&lt;th&gt;Max. Time to Serve (ms)&lt;/th&gt;
&lt;th&gt;Time per request (ms)&lt;/th&gt;
&lt;th&gt;Estimated Monthly Cost (USD)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;64.614&lt;/td&gt;
&lt;td&gt;154.77&lt;/td&gt;
&lt;td&gt;234.1&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;td&gt;64537&lt;/td&gt;
&lt;td&gt;6.461&lt;/td&gt;
&lt;td&gt;$0.51&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;46.802&lt;/td&gt;
&lt;td&gt;212.42&lt;/td&gt;
&lt;td&gt;317.54&lt;/td&gt;
&lt;td&gt;45&lt;/td&gt;
&lt;td&gt;24473&lt;/td&gt;
&lt;td&gt;4.701&lt;/td&gt;
&lt;td&gt;$1.53&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;34.334&lt;/td&gt;
&lt;td&gt;284.28&lt;/td&gt;
&lt;td&gt;403.43&lt;/td&gt;
&lt;td&gt;43&lt;/td&gt;
&lt;td&gt;23249&lt;/td&gt;
&lt;td&gt;3.817&lt;/td&gt;
&lt;td&gt;$2.55&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;30.247&lt;/td&gt;
&lt;td&gt;311.89&lt;/td&gt;
&lt;td&gt;461.11&lt;/td&gt;
&lt;td&gt;44&lt;/td&gt;
&lt;td&gt;22220&lt;/td&gt;
&lt;td&gt;3.324&lt;/td&gt;
&lt;td&gt;$3.57&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;E&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;24.013&lt;/td&gt;
&lt;td&gt;437.19&lt;/td&gt;
&lt;td&gt;551.25&lt;/td&gt;
&lt;td&gt;49&lt;/td&gt;
&lt;td&gt;20316&lt;/td&gt;
&lt;td&gt;2.731&lt;/td&gt;
&lt;td&gt;$5.10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Final Result: Vertical Scaling vs Horizontal Scaling
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm74t5wt344h401821hp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm74t5wt344h401821hp.png" width="720" height="445"&gt;&lt;/a&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1wxil8hyxlndufd199t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1wxil8hyxlndufd199t.png" width="720" height="445"&gt;&lt;/a&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwdjg5l5vqpkx3r7vc8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwdjg5l5vqpkx3r7vc8l.png" width="720" height="445"&gt;&lt;/a&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqfjd0sxst83thyknt2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqfjd0sxst83thyknt2y.png" width="720" height="445"&gt;&lt;/a&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1uzbppe53fqokgd6a5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1uzbppe53fqokgd6a5k.png" width="720" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Observations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;overall performance degrades if the increase in allocated CPU cores is not proportional to the allocated memory&lt;/strong&gt;. This is because &lt;strong&gt;data must be read from disk instead of directly from the data cache,&lt;/strong&gt; causing a &lt;strong&gt;memory bottleneck&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;The reverse can also happen and is termed a &lt;strong&gt;CPU bottleneck&lt;/strong&gt;, which causes a chronically &lt;strong&gt;high CPU utilization rate&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;On vertical scaling, our test roughly stabilized after 1 CPU core and 32 MB of memory. This signifies that allocating more than the required resources will not yield a performance boost, but will only increase the cost of the deployment. &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;From the above, we understood the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Horizontal Scaling provides &lt;strong&gt;faster delivery than vertical scaling due to its lower average request time&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In terms of responsiveness, Horizontal Scaling provides better performance due to the &lt;strong&gt;distribution of server loads among available nodes&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Horizontal scaling &lt;strong&gt;prevents you from getting caught in a resource deficit&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Horizontal Scaling is &lt;strong&gt;more cost-effective&lt;/strong&gt; than vertical scaling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But this doesn't mean all applications qualify for purely horizontal scaling. If your applications demand greater horsepower to perform as expected or faster, you might want to consider vertical scaling. In other words, you might want each of your thousands of minions to be as powerful as a Hulk, so they are suitable for other intensive tasks. &lt;strong&gt;But for the lowest Total Cost of Ownership (TCO), auto scaling is the solution.&lt;/strong&gt; Here, servers scale dynamically to secure steady, predictable performance at the lowest possible cost. &lt;/p&gt;
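&lt;p&gt;In Kubernetes terms, that auto-scaling idea can be sketched with a Horizontal Pod Autoscaler, which floats the replica count between a floor and a ceiling to track average CPU load (&lt;code&gt;web&lt;/code&gt; is a hypothetical deployment; the command is guarded so the snippet is safe to copy):&lt;/p&gt;

```shell
# Set RUN=1 to execute against a real cluster.
if [ "${RUN:-0}" = "1" ]; then
  kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70
fi

MIN=2
MAX=10
echo "replicas will float between ${MIN} and ${MAX} to hold ~70% CPU"
```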

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>nginx</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>Flatpak vs Snaps vs AppImage vs Packages - Linux packaging formats compared</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Mon, 18 Jan 2021 12:45:12 +0000</pubDate>
      <link>https://dev.to/bearlike/flatpak-vs-snaps-vs-appimage-vs-packages-linux-packaging-formats-compared-3nhl</link>
      <guid>https://dev.to/bearlike/flatpak-vs-snaps-vs-appimage-vs-packages-linux-packaging-formats-compared-3nhl</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally Published in &lt;a href="https://thekrishna.in/blogs/blog/linux-packages/" rel="noopener noreferrer"&gt;thekrishna.in&lt;/a&gt; on Jun 27, 2020&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Linux often gets a bad reputation when it comes to installing software, and this is because we have so many different application distribution formats. Most of them also are misunderstood, or have preconceived notions attached to them, so I think it’s time to take a look at the differences among the different packaging formats!&lt;/p&gt;

&lt;h2&gt;
  
  
  DEB and RPM Packages
&lt;/h2&gt;

&lt;p&gt;The two most popular package formats are DEB and RPM. Debian and derivative distros like Ubuntu use DEBs, while Fedora, Red Hat, and openSUSE use RPMs. You can pull packages from your distribution’s repositories or install them manually. They contain a pre-compiled, binary version of the application or library you’re trying to install, built for your system’s architecture.&lt;/p&gt;

&lt;p&gt;These packages come with a descriptive file with all the various libraries and other applications your program needs to run.&lt;/p&gt;

&lt;p&gt;Since the DEB and RPM packages are pre-compiled for your architecture, they're quicker to install. They pull all your dependencies immediately if they’re available. Since these packages are distro-specific, developers need to package their app for various distros, versions, and architectures.&lt;/p&gt;
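&lt;p&gt;For example, installing the same package on either family looks like this (using &lt;code&gt;vlc&lt;/code&gt; as the example package; the commands are guarded so the snippet is safe to copy anywhere):&lt;/p&gt;

```shell
PKG=vlc   # example package name
# Set RUN=1 to execute with the needed privileges.
if [ "${RUN:-0}" = "1" ]; then
  sudo apt install "$PKG"         # Debian/Ubuntu: repo install, deps resolved
  sudo dpkg -i "./${PKG}.deb"     # ...or install a downloaded .deb directly
  sudo apt -f install             # then pull in any missing dependencies
  sudo dnf install "$PKG"         # Fedora/Red Hat equivalent
fi
echo "target package: ${PKG}"
```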

&lt;h2&gt;
  
  
  Flatpaks
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.flatpak.org/" rel="noopener noreferrer"&gt;Flatpak&lt;/a&gt; format is here to fix these problems. While Flatpaks come as binaries, so no compilation needed, they embark on all the libraries required in the package. They can use shared libraries provided in other Flatpaks.&lt;/p&gt;

&lt;p&gt;Flatpaks are quick to install from repositories called remotes. The largest remote is Flathub, which hosts most of the available Flatpak applications. Flatpak also provides an interface for granting and revoking permissions.&lt;/p&gt;

&lt;p&gt;Flatpaks may introduce security issues: although each Flatpak is sandboxed, it can still ship outdated libraries, and Flatpaks consume more storage than their DEB and RPM relatives. Flatpaks can run on any distro that has the flatpak package installed, and most distros’ software centers have added the Flathub remote.&lt;/p&gt;
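&lt;p&gt;The typical Flatpak workflow, including the permission controls mentioned above, looks like this (GIMP is just an example app; the commands are guarded so the snippet is safe to copy):&lt;/p&gt;

```shell
APP_ID=org.gimp.GIMP   # example application ID
# Set RUN=1 to execute.
if [ "${RUN:-0}" = "1" ]; then
  flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
  flatpak install -y flathub "$APP_ID"
  flatpak info --show-permissions "$APP_ID"            # what the sandbox allows
  flatpak override --user --unshare=network "$APP_ID"  # revoke network access
fi
echo "managing permissions for ${APP_ID}"
```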

&lt;h2&gt;
  
  
  Snaps
&lt;/h2&gt;

&lt;p&gt;Snaps have an interesting difference, though: they can also ship server software. While Flatpak focuses on graphical applications, Snaps can contain pure command-line packages. Another difference is that Snaps update automatically without user intervention and can receive delta updates. &lt;/p&gt;

&lt;p&gt;Snaps can also revert to the previous version. Snaps do have some problems though: first, they don’t make use of the system theme. Second, they can’t use external repositories: all snaps come through &lt;a href="https://snapcraft.io/" rel="noopener noreferrer"&gt;snapcraft&lt;/a&gt;, which is the “official” distribution center for these. Snaps also tend to be bigger, and slower to launch than Flatpaks or regular packages. Snaps can run on any distro that has access to snapd, the backend part of snaps. &lt;/p&gt;
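&lt;p&gt;The Snap lifecycle described above, from install to rollback, in commands (using &lt;code&gt;vlc&lt;/code&gt; as an example snap; guarded so the snippet is safe to copy):&lt;/p&gt;

```shell
SNAP=vlc   # example snap name
# Set RUN=1 to execute.
if [ "${RUN:-0}" = "1" ]; then
  sudo snap install "$SNAP"
  snap list "$SNAP"           # shows the currently tracked revision
  sudo snap refresh "$SNAP"   # delta update when one is available
  sudo snap revert "$SNAP"    # roll back to the previous revision
fi
echo "snap under test: ${SNAP}"
```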

&lt;h2&gt;
  
  
  AppImages
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://appimage.org/" rel="noopener noreferrer"&gt;AppImages&lt;/a&gt; are yet another way to distribute applications in a single contained package. They use the “one app one file” method: AppImage ships all the files needed, and all the libraries as well, in a single file. An AppImage app is a single executable file (similar to Windows .exe files). They are not downloaded from a repo, but there is AppImage hub, a website that lists most if not all the available AppImages. AppImage has lowest app size footprint compared to Snaps and Flatpaks, most probably because it serves binaries in compressed format.&lt;/p&gt;

&lt;p&gt;You can start using them as soon as they're downloaded, regardless of where they sit on the system. There is no need to install runtimes or shared components as with Snaps or Flatpaks; you can put your AppImages anywhere and run them. This means AppImages are super portable: to keep all your apps, just copy the AppImages and you're good to go. This is a big advantage, but there are some issues as well. First, you can't update an AppImage without downloading the new version yourself, much like apps on Windows. &lt;/p&gt;

&lt;p&gt;AppImages also don’t respect the system’s theme at all and can get pretty big, since they don’t make use of shared runtimes. AppImages can run on any distro; they don’t need any specific plumbing to work. Recently, AppImage developers provided a tool that applies delta updates to existing binaries, but it still requires downloading the update tool and running it manually against an existing AppImage; there is no hands-free update mechanism for AppImage yet. AppImages also don’t support granular permission controls.&lt;/p&gt;
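&lt;p&gt;The entire AppImage “install” amounts to marking the file executable and running it (&lt;code&gt;MyApp.AppImage&lt;/code&gt; is a placeholder for any downloaded AppImage):&lt;/p&gt;

```shell
APP=MyApp.AppImage   # placeholder filename for a downloaded AppImage
if [ -f "$APP" ]; then
  chmod +x "$APP"    # one-time: make it executable
  "./$APP"           # run from any path; no runtimes or repos involved
fi
echo "portable app file: ${APP}"
```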

</description>
      <category>ubuntu</category>
      <category>linux</category>
      <category>archlinux</category>
    </item>
    <item>
      <title>GNOME vs KDE Plasma</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Sun, 17 Jan 2021 20:52:25 +0000</pubDate>
      <link>https://dev.to/bearlike/gnome-vs-kde-plasma-4cl6</link>
      <guid>https://dev.to/bearlike/gnome-vs-kde-plasma-4cl6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://thekrishna.in/blogs/blog/gnome-vs-kde/" rel="noopener noreferrer"&gt;thekrishna.in&lt;/a&gt; on Sep 29, 2020&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Both default setups of KDE Plasma and GNOME are perfectly legible. Plasma looks like what you'd expect if you're coming from Windows, with its bottom panel, menu, and task manager; the defaults are clean and simple, rather welcoming for a new user. &lt;/p&gt;

&lt;p&gt;GNOME, on the other hand, takes precisely the opposite approach: the default metaphor is the opposite of what you're used to, with no active task management, no desktop icons, no application menu, no dock or taskbar. Activities are a good concept and easy to learn, but they go against most other desktops I've used, so it took me a while to get used to switching windows through the Activities view or the keyboard. &lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;Plasma has an overabundance of plasmoids, layouts, panels, and as many configuration options as you want. You can tweak colours, themes, window borders and controls, and even the position of panels and toolbars in most applications to get the desktop and layout you want. You can't fault KDE on customization: it is the most configurable desktop environment I've ever used on any OS.&lt;/p&gt;

&lt;p&gt;GNOME, by default, has almost zero configuration. You can tweak quite a few things through extensions and GNOME Tweaks, but these come with a convoluted installation process: find and install GNOME Tweaks, then install extensions, the browser add-on, and the host connector, and then manage extensions through GNOME Tweaks, where each has its own set of preferences. I appreciate the logic of offering a very clean and simple default desktop, but one-click extension installs, with the browser extension and host connector preconfigured, or an "extension store" directly in GNOME Software, would be far easier and would not clutter the interface whatsoever.&lt;/p&gt;
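&lt;p&gt;On a Debian/Ubuntu-based GNOME system, the separate pieces involved in that process look roughly like this (package names vary by distro and GNOME version; the commands are guarded so the snippet is safe to copy):&lt;/p&gt;

```shell
# Set RUN=1 to execute with the needed privileges.
if [ "${RUN:-0}" = "1" ]; then
  sudo apt install gnome-tweaks             # the Tweaks app itself
  sudo apt install gnome-shell-extensions   # a starter set of extensions
  sudo apt install chrome-gnome-shell       # the browser host connector
  gnome-extensions list                     # manage what's enabled
fi
PIECES=4
echo "separate pieces to assemble: ${PIECES}"
```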

&lt;h3&gt;
  
  
  Look and Feel
&lt;/h3&gt;

&lt;p&gt;The default look of KDE is pleasing, with bright icons, nice gradients and shadows, and smooth animations switching from a menu to another, dragging windows around, and generally while using the system. The default theme, Breeze, looks good on plasmoids and application widgets, and it offers a dark mode by default if you prefer that. Plasma looks modern and polished, and it feels natural to use, with each element fading in and out.&lt;/p&gt;

&lt;p&gt;GNOME, on the other hand, comes with Adwaita. It is a big theme: there is a lot of padding around buttons and menus, and everything looks quite big. Icons are a bit dated in my opinion, with muted colours. Adwaita feels plain and minimalistic to me, although it is very functional and legible, with nothing catching your eye specifically, letting you focus on your work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Applications
&lt;/h3&gt;

&lt;p&gt;As for applications, it's a tough one: GNOME seems to have many more applications available by default compared to KDE, but GNOME applications are severely lacking in features.&lt;/p&gt;

&lt;p&gt;KDE Plasma has very stable applications, but little choice: since most applications do everything and are extremely configurable, there is little incentive to start a new project instead of contributing to an existing one. Default applications can sometimes look dated, with out-of-place buttons or widgets and convoluted interfaces. They are powerful, though, and do not lack important features. Once you get familiar with a KDE application, you can do pretty much anything you like.&lt;/p&gt;

&lt;p&gt;GNOME, on the other hand, has a nice choice of different applications, but they have issues. Photos is too simple, Music crashed a lot, and Contacts didn't support contact groups. Another problem is desktop consistency: since GNOME does away with menu bars, once you install something that does not strictly adhere to the GNOME guidelines, it quickly looks out of place and behaves differently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;p&gt;In terms of performance, KDE Plasma has quick and smooth animations for panels and menus and uses less RAM than GNOME. Applications open promptly and stay snappy, even under load. GNOME also behaves nicely by default, but it usually uses more RAM and CPU. If you have a lower-spec system, GNOME might not be the right choice for you.&lt;/p&gt;

&lt;p&gt;In the end, I genuinely liked both. In terms of look and feel, KDE matches my tastes more closely since I came from the Windows ecosystem, but GNOME's interface and interface guidelines appeal to the minimalist in me. In fact, I daily-drive the GNOME that ships with Pop!_OS, and the minimalism helps me focus on whatever I'm up to. &lt;strong&gt;&lt;a href="https://thekrishna.in/blogs/blog/pop-os/" rel="noopener noreferrer"&gt;You can find out why I love Pop!_OS here&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ubuntu</category>
      <category>archlinux</category>
    </item>
    <item>
      <title>Why I switched to Pop_OS?</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Sat, 16 Jan 2021 11:53:36 +0000</pubDate>
      <link>https://dev.to/bearlike/why-i-switched-to-popos-16b8</link>
      <guid>https://dev.to/bearlike/why-i-switched-to-popos-16b8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://thekrishna.in/blogs/blog/pop-os/" rel="noopener noreferrer"&gt;thekrishna.in&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As of 2020, Windows is still the dominant desktop OS, comprising nearly 90% of the market, and I was on it for over 14 years. It was initially fast, but as time progressed, Windows updates stacked up along with previously installed software. Adding fuel to the fire, Windows decided to bake advertisements and even more telemetry services right into the operating system.&lt;/p&gt;

&lt;p&gt;Then one day, I wanted to print a theatre ticket in a rush and turned on my computer only to see...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq9sb2wtfk5xlj6w97oc.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq9sb2wtfk5xlj6w97oc.gif" alt="windows-updating" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I finally chose to migrate to Linux, and after careful experimentation with various distros and DEs, I came across &lt;a href="https://pop.system76.com/" rel="noopener noreferrer"&gt;&lt;em&gt;Pop!_OS&lt;/em&gt;&lt;/a&gt; by &lt;em&gt;System76&lt;/em&gt; and loved it straight out of the box.&lt;/p&gt;

&lt;p&gt;So why do I have so much love for an Ubuntu-based distribution? Well, let me explain.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Linux is free and that's a great price!
&lt;/h2&gt;

&lt;p&gt;This is true not only for the operating system and the kernel but also for all the software that comes bundled with it. In most cases, there is also an open-source alternative for any paid Windows application, like the LibreOffice suite instead of Microsoft Office.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvov7evuih7d778fqose.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvov7evuih7d778fqose.gif" alt="free-real-estate" width="480" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Pop!_OS is beautiful and functional.
&lt;/h2&gt;

&lt;p&gt;Now to the fun part. Pop!_OS is undoubtedly one of the best-looking desktop environments I have ever used. I'd say it is tied with elementary OS, Deepin, and Budgie. While its GNOME desktop environment has its fair share of criticisms, it does hold true to its value of getting out of your way so you can get work done. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fg12mjjeqywwaemu9di.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fg12mjjeqywwaemu9di.jpg" alt="kk-popos" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GNOME has no active task management, no desktop icons, no application menu, and no dock or taskbar. Activities are an easy concept, but coming from Windows, it took me a while to get used to switching windows through the Activities overview or the keyboard. &lt;/p&gt;

&lt;p&gt;However, if the default design and behaviour are not to your liking, you can customize and tweak it however you want using the GNOME Tweak Tool. You can even make it look and behave like macOS or Windows, complete with a start menu. &lt;/p&gt;

&lt;p&gt;As far as functionality goes, if you are used to GNOME, you will feel at home in Pop!_OS. You boot straight to your desktop, and the system tray is pretty minimal by default, helping you focus on whatever you're working on. &lt;/p&gt;

&lt;h3&gt;
  
  
  Automatic Window Tiling
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9grm3evx2viimxd6wdk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9grm3evx2viimxd6wdk.jpg" alt="Automatic Window Tiling" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pop!_OS 20.04 introduced an automatic window tiling manager. It tiles and arranges all open windows at the click of a button, which is remarkably helpful when multitasking.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pop!_OS makes life easier for Developers.
&lt;/h3&gt;

&lt;p&gt;Pop!_OS can do everything Ubuntu can do, but its dedicated tools, attractive looks, and refined workflow make for a smooth development experience. It supports tons of programming languages and useful programming tools natively. Nvidia drivers with CUDA support come preinstalled, which enables developers to speed up compute-intensive applications.&lt;/p&gt;

&lt;p&gt;There are features like the new keyboard shortcuts, increased compatibility, the new app launcher, and more that you should take a look at. Overall, Pop!_OS is smooth, functional, and melts right into the background, just as intended.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Pop!_Store has Everything.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8ha5363y2tjn8lskfze.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8ha5363y2tjn8lskfze.jpg" alt="Pop Shop" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pop!_Shop is to Pop!_OS what the Play Store is to Android: a software store where you can search for and install any package with just a click of a button. To add more power to the store, Pop!_OS 20.04 brings Flatpak support to Pop!_Shop. This means that when you want to download an application, you can now also pull packages from the Flathub repository in addition to the Pop!_OS and Ubuntu repos.&lt;/p&gt;

&lt;p&gt;Flatpak/Flathub is a widely recognised universal package manager and store that hosts a large number of applications. &lt;strong&gt;&lt;a href="https://thekrishna.in/blogs/blog/linux-packages/" rel="noopener noreferrer"&gt;You can learn more about Flatpaks in my previous post&lt;/a&gt;&lt;/strong&gt;. Hence, you now have access to more packaged applications, with sandboxing that helps protect your privacy.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. The "Nvidia Driver" Situation.
&lt;/h2&gt;

&lt;p&gt;Nearly all major Linux distributions have come a long way in the past two years in making the proprietary Nvidia driver available to their users. Here again, there’s a subtle yet functional difference in the way Pop!_OS handles Nvidia drivers compared to its alternatives, one that hybrid-graphics laptop users like me will especially appreciate.&lt;/p&gt;

&lt;p&gt;When you download the Nvidia version of the Pop!_OS ISO, the Nvidia graphics driver is active during the OS installation process, not merely “available” to install. That means no risk of perplexing black screens, which are more common with the open-source “Nouveau” driver for Nvidia hardware. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kw4rf76lzyy61ulpf4a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kw4rf76lzyy61ulpf4a.jpg" alt="kk-hybrid-graphics" width="223" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Hybrid Graphics
&lt;/h3&gt;

&lt;p&gt;Pop!_OS supports hybrid graphics straight out of the box. It ships with the &lt;code&gt;system76-power&lt;/code&gt; package, which can switch between integrated, NVIDIA, and hybrid graphics modes.&lt;br&gt;
In hybrid graphics mode, your laptop runs on the battery-saving Intel GPU and uses the dedicated NVIDIA GPU only for the applications you launch on it.&lt;/p&gt;
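&lt;p&gt;As a rough sketch, switching modes with the &lt;code&gt;system76-power&lt;/code&gt; CLI looks like this (verify the subcommands against your installed version):&lt;/p&gt;

```shell
# Show the current graphics mode
system76-power graphics

# Switch to hybrid mode (Intel by default, NVIDIA on demand);
# other accepted modes include "integrated" and "nvidia"
sudo system76-power graphics hybrid

# A reboot is required for the new mode to take effect
sudo reboot
```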

&lt;h2&gt;
  
  
  5. Privacy, Control and Support
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Privacy and Security
&lt;/h3&gt;

&lt;p&gt;While no operating system is absolutely risk-free, Windows is a big target for viruses and malware due to its majority market share. With Linux as a whole, you only need to be smart: stick to trusted software repositories and avoid risky programs such as Adobe Flash or &lt;code&gt;free_ram.sh&lt;/code&gt;. Linux is more secure than Windows primarily because of the way it is designed and handles user permissions, which is one reason why most of the web runs on Linux. Since most packages you install are open-source, their codebase is public and will have been reviewed by many developers for bugs and security vulnerabilities.&lt;/p&gt;

&lt;p&gt;Speaking of security, Pop!_OS is so far the only Linux distribution that ships with full-disk encryption enabled out of the box. Moreover, you can use the ‘Refresh install’ feature to reset your operating system while preserving the files in your Home folder.&lt;br&gt;
Windows chooses when to install updates and displays a message declaring that your computer is going to be rebooted. With most Linux distros, you get to decide when updates are installed, and in most cases they are installed without rebooting the system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Control and Support
&lt;/h3&gt;

&lt;p&gt;Compared to Windows or macOS, Linux support is easier to find: there are dozens of forums, subreddits, chat rooms, and websites committed to helping people learn and use Linux. Unlike Microsoft support, which mostly comes from an employee, Linux support usually comes from enthusiasts and developers.&lt;/p&gt;

&lt;p&gt;Every single part of your distribution can be adjusted if you have the time, will, and support. KDE fan? Remove GNOME from Pop!_OS and install KDE instead. System services slowing your boot? Disable and mask them. You can even write your own scripts and packages if you're willing to put in the effort.&lt;/p&gt;
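&lt;p&gt;For instance, a minimal sketch of taming slow boot services with &lt;code&gt;systemd&lt;/code&gt; (the service name below is a placeholder; substitute one from your own system):&lt;/p&gt;

```shell
# List the services that take the longest during boot
systemd-analyze blame | head -n 10

# Stop, disable, and mask a service you don't need
# ("example.service" is a placeholder name)
sudo systemctl disable --now example.service
sudo systemctl mask example.service
```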

&lt;h2&gt;
  
  
  Personal Recommendations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Install rEFInd Boot Manager
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Visit &lt;a href="https://www.rodsbooks.com/refind/" rel="noopener noreferrer"&gt;rEFInd Documentation&lt;/a&gt; for Installation and theming instructions&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsjudfdsngfjvf9v4s3v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsjudfdsngfjvf9v4s3v.jpg" alt="kk-refind-boot-manager" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're dual-booting Windows and Pop!_OS, I highly recommend using the rEFInd boot manager instead of (or on top of) the systemd-boot loader. I had to manually adjust both GRUB and systemd-boot configurations every time Windows had a major update, while rEFInd scans for kernels on every boot, making it more adaptive and less reliant on configuration files. Plus, rEFInd has more eye candy.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Bridging gaps between OSes in a system
&lt;/h3&gt;

&lt;h4&gt;
  
  
  2.1 Why didn't I delete Windows for good?
&lt;/h4&gt;

&lt;p&gt;Despite Linux growing better every day, there are still a lot of programs and hardware made exclusively for Windows due to its majority market share, and the open-source alternatives are not always good enough. Adobe Photoshop, Microsoft Office, and the Epic Games Store are great examples. &lt;/p&gt;

&lt;h4&gt;
  
  
  2.2 Create a common partition
&lt;/h4&gt;

&lt;p&gt;If you're dual-booting Windows and any Linux distribution(s), I strongly advise you to &lt;strong&gt;turn off Fast Startup&lt;/strong&gt; in Windows, as it leaves the shared filesystems in a state that Linux can only mount read-only until Windows fully shuts down. Create an extra NTFS partition so all your OSes can coexist and share common files peacefully.&lt;/p&gt;
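&lt;p&gt;A rough sketch of the setup (the device name and mount point below are examples; check &lt;code&gt;lsblk&lt;/code&gt; for yours):&lt;/p&gt;

```shell
# On Windows (admin prompt): turning off hibernation also disables Fast Startup
#   powercfg /h off

# On Linux: mount the shared NTFS partition (replace /dev/sda5 with your device)
sudo mkdir -p /mnt/shared
sudo mount -t ntfs-3g /dev/sda5 /mnt/shared

# Optionally mount it on every boot by appending an fstab entry
echo '/dev/sda5  /mnt/shared  ntfs-3g  defaults,uid=1000,gid=1000  0  0' | sudo tee -a /etc/fstab
```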

&lt;h4&gt;
  
  
  2.3 Install WINE and create a Windows Virtual Machine
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkhpi7ciwaf1we4lkb9q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkhpi7ciwaf1we4lkb9q.jpg" alt="kk-vm-win10" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not every Windows program has an equal open-source Linux alternative, so if you're lucky (or savvy) enough, your program might run under Wine on Linux. &lt;a href="https://appdb.winehq.org/" rel="noopener noreferrer"&gt;You can check your application's compatibility with Wine here&lt;/a&gt;. Platforms like &lt;a href="https://lutris.net/" rel="noopener noreferrer"&gt;Lutris&lt;/a&gt; can even install and launch games for you, so you can start playing without the hassle of setting up Wine yourself. If you can't find your program in the database, give it a shot anyway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6osofd5tczjcoc4vin6b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6osofd5tczjcoc4vin6b.jpg" alt="lutris" width="580" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Popular programs like Microsoft Office still don't have great compatibility with Wine. So if Wine doesn't work for your program, create a small Windows virtual machine and run it there. I created mine using &lt;strong&gt;&lt;a href="https://www.virtualbox.org/" rel="noopener noreferrer"&gt;VirtualBox&lt;/a&gt;&lt;/strong&gt; and the &lt;strong&gt;&lt;a href="https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/" rel="noopener noreferrer"&gt;Windows 10 development virtual machines provided by Microsoft&lt;/a&gt;&lt;/strong&gt;. However, power-demanding programs like Adobe Photoshop still don't perform well in virtual machines, and that's where you have no option other than to boot into your Windows host.&lt;/p&gt;

</description>
      <category>popos</category>
      <category>linux</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>Docker Apps and Images for everyone</title>
      <dc:creator>Krishnakanth Alagiri</dc:creator>
      <pubDate>Sat, 16 Jan 2021 11:19:03 +0000</pubDate>
      <link>https://dev.to/bearlike/docker-apps-and-images-for-everyone-13g5</link>
      <guid>https://dev.to/bearlike/docker-apps-and-images-for-everyone-13g5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://thekrishna.in/blogs/blog/fav-docker-images/" rel="noopener noreferrer"&gt;thekrishna.in&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Wait, What is Docker?
&lt;/h2&gt;

&lt;p&gt;Have you ever had software that runs on your machine but not on other systems? Docker is the solution. Docker sandboxes applications as containers so that their execution is completely isolated from everything else. It has become enormously popular over the last few years, and much of its power comes from the ecosystem of third-party images you can build on.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Container Image?
&lt;/h2&gt;

&lt;p&gt;A Docker container image is a standalone bundle of executable packages and software to run an application. An image is a dormant and immutable file with a set of layers that essentially acts as a snapshot of a container. &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker’s public registry&lt;/a&gt; is usually your central source for container apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Containers?
&lt;/h2&gt;

&lt;p&gt;A container is an efficient little environment that encapsulates code and all its dependencies so the application runs quickly and reliably from one computing environment to another. In simple words, an instance of an image is called a container. Container images become containers only at runtime on the Docker Engine.&lt;/p&gt;
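&lt;p&gt;The image-versus-container distinction is easy to see on the command line: one immutable image can spawn any number of containers.&lt;/p&gt;

```shell
# Pull an image once (a dormant, immutable snapshot)
docker pull alpine

# Instantiate it twice: each run creates a separate container
docker run --name demo-1 alpine echo "hello from container 1"
docker run --name demo-2 alpine echo "hello from container 2"

# Both containers were created from the same image
docker ps -a --filter "name=demo"
```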

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Management
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Portainer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.portainer.io/" rel="noopener noreferrer"&gt;Portainer&lt;/a&gt; is a management UI that enables you to manage your different Docker environments (Docker hosts or Swarm clusters). Portainer is easy to deploy and has a surprisingly intuitive user interface for how functional it is. It has one container which will run on any Docker engine (whether it's deployed as a Linux container or a Windows native container). Portainer allows you to control all of your Docker resources (containers, images, volumes, networks, etc.) it is compatible with both the standalone Docker engine and with Docker Swarm mode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvkq573ddz1nnx8vpwlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvkq573ddz1nnx8vpwlu.png" alt="Krishnakanth-Portainer" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Installing Portainer
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create a Docker volume portainer_data&lt;/span&gt;
docker volume create portainer_data 
&lt;span class="c"&gt;# create a Portainer Docker container&lt;/span&gt;
docker run &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"portainer"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:9000 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; /var/run/docker.sock:/var/run/docker.sock &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; portainer_data:/data portainer/portainer 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Advertisement and Internet Tracker Blocking Application
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2. PiHole
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://pi-hole.net/" rel="noopener noreferrer"&gt;Pi-hole&lt;/a&gt; is a &lt;strong&gt;must-have for everyone whether you're a hardcore gamer, developer, or just couch potato&lt;/strong&gt;. It is a Linux network-level advertisement and tracker blocker that acts as a DNS sinkhole (supplies systems looking for DNS information with false returns). It is designed for single-board computers with network capability, such as the Raspberry Pi. Even If your router does not support changing the DNS server, you can &lt;a href="https://discourse.pi-hole.net/t/how-do-i-use-pi-holes-built-in-dhcp-server-and-why-would-i-want-to/3026" rel="noopener noreferrer"&gt;use Pi-hole's built-in DHCP server&lt;/a&gt;. The optional web interface dashboard allows you to view stats, change settings, and configure your Pi-hole.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jpstlgjiac7lre0xmtk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jpstlgjiac7lre0xmtk.png" alt="Krishnakanth - PiHole deployed on my Raspberry Pi 0" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unlike traditional advertisement blockers, which only remove ads in a single user's browser, this functions like a network firewall: advertisements and tracking domains are blocked for all devices behind it. Since it acts as a network-wide DNS resolver, it can even block advertisements in unconventional places, such as smart TVs and in-app mobile ads. &lt;/p&gt;
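&lt;p&gt;Once Pi-hole is your network's DNS server, a quick DNS query confirms that blocking works (the IP below is an example; use your Pi-hole's address, and note that blocked domains typically resolve to &lt;code&gt;0.0.0.0&lt;/code&gt;):&lt;/p&gt;

```shell
# A domain on the blocklist should come back as 0.0.0.0
dig +short @192.168.1.2 doubleclick.net

# A normal domain should resolve as usual
dig +short @192.168.1.2 dev.to
```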

&lt;h4&gt;
  
  
  Installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--name&lt;/span&gt; pihole &lt;span class="nt"&gt;-p&lt;/span&gt; 53:53/tcp &lt;span class="nt"&gt;-p&lt;/span&gt; 53:53/udp &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;-p&lt;/span&gt; 443:443 &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;TZ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"America/Chicago"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/etc-pihole/:/etc/pihole/"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/etc-dnsmasq.d/:/etc/dnsmasq.d/"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--dns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;127.0.0.1 &lt;span class="nt"&gt;--dns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.1.1.1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;unless-stopped &lt;span class="nt"&gt;--hostname&lt;/span&gt; pi.hole &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;VIRTUAL_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"pi.hole"&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PROXY_LOCATION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"pi.hole"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;ServerIP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt; pihole/pihole:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Why do I love it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Everyday browsing is accelerated by caching DNS queries.&lt;/li&gt;
&lt;li&gt;Everyone and everything on your network is protected against advertisements and trackers.&lt;/li&gt;
&lt;li&gt;It comes with a beautiful responsive Web Interface dashboard to view and manage your Pi-hole.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Execution Environments
&lt;/h2&gt;

&lt;h3&gt;
  
  
  3. TensorFlow with Jupyter Images
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://www.tensorflow.org/install/docker" rel="noopener noreferrer"&gt;Tensorflow Docker image&lt;/a&gt; is based on &lt;code&gt;Ubuntu&lt;/code&gt; and comes preinstalled with &lt;code&gt;python3&lt;/code&gt;, &lt;code&gt;python-pip&lt;/code&gt;, a Jupyter Notebook Server, and other standard python packages. Within the container, you can start a python terminal session or use Jupyter Notebooks and use packages like &lt;code&gt;PyTorch&lt;/code&gt;, &lt;code&gt;TensorFlow&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# For other variants, visit the link above &lt;/span&gt;
&lt;span class="c"&gt;# Use the latest TensorFlow with Jupyter CPU-only image&lt;/span&gt;
docker run &lt;span class="nt"&gt;--name&lt;/span&gt; jupyter-cpu &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="s2"&gt;"jupyter-server"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter

&lt;span class="c"&gt;# Use the latest TensorFlow with Jupyter GPU image&lt;/span&gt;
docker run &lt;span class="nt"&gt;--gpus&lt;/span&gt; all &lt;span class="nt"&gt;--name&lt;/span&gt; jupyter-cuda &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="s2"&gt;"jupyter-server"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; 8888:8888 tensorflow/tensorflow:latest-jupyter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For those who want to use a GPU, Docker is the easiest way to enable GPU support on Linux, since only the NVIDIA GPU driver is required on the host machine. For CUDA support, make sure you have installed the NVIDIA driver and Docker 19.03+ for your Linux distribution. For more information on CUDA support in Docker, visit the &lt;a href="https://github.com/NVIDIA/nvidia-docker" rel="noopener noreferrer"&gt;NVIDIA Container Toolkit&lt;/a&gt;.&lt;/p&gt;
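&lt;p&gt;Once the driver and the NVIDIA Container Toolkit are installed, a one-liner confirms that containers can see the GPU (the CUDA image tag below is an example; pick any available tag on Docker Hub):&lt;/p&gt;

```shell
# If GPU passthrough works, this prints the same device table as
# running nvidia-smi directly on the host
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```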

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz9gmqa7nbigy59bvhj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz9gmqa7nbigy59bvhj9.png" alt="Krishnakanth - Screenshot of my Jupyter Server" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, I'm using the CUDA 10.2 driver with my GeForce MX150, which has 2 GB of GDDR5 memory. Although it's an entry-level graphics card, it's at least &lt;strong&gt;450% faster than my CPU&lt;/strong&gt; (i5-8520U), which makes training machine learning models much quicker. &lt;/p&gt;

&lt;h4&gt;
  
  
  Why do I prefer it over platforms like Google Colab?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;On Colab, you have to install every Python library that doesn't come built-in, and repeat this with every session.&lt;/li&gt;
&lt;li&gt;Google Drive is usually your primary storage, with 15 GB of free space. While you can use local storage, it'll eat your bandwidth for bigger datasets.&lt;/li&gt;
&lt;li&gt;If you have access to even a modest GPU, it's just a matter of installing a CUDA-enabled driver and deploying your personal Jupyter server with GPU support. With proper port forwarding and dynamic DNS, you can also share your GPU with your friends, as I do.&lt;/li&gt;
&lt;li&gt;The free tier of Google Colab comes with much less runtime and memory than the Pro version, so you could suffer from unexpected termination and even data loss.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What else do I like about it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Based on &lt;code&gt;Ubuntu&lt;/code&gt;, so you can easily install any additional Linux packages you need using the &lt;code&gt;apt&lt;/code&gt; package manager.&lt;/li&gt;
&lt;li&gt;Out-of-the-box support for CUDA-enabled GPUs. &lt;/li&gt;
&lt;/ol&gt;
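&lt;p&gt;For example, assuming the &lt;code&gt;jupyter-cpu&lt;/code&gt; container from the earlier snippet is running, you can install extra tools inside it just like on any Ubuntu machine:&lt;/p&gt;

```shell
# Open a root shell inside the running container
docker exec -it jupyter-cpu /bin/bash

# Inside the container: install system packages with apt...
apt-get update && apt-get install -y git graphviz

# ...and Python packages with pip
pip install pandas seaborn
```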

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  4. CoCalc
&lt;/h3&gt;

&lt;p&gt;In my opinion, &lt;a href="https://cocalc.com/" rel="noopener noreferrer"&gt;CoCalc&lt;/a&gt; is definitely one of the underrated Jupyter notebook servers out there. Although it is intended for computational mathematics, it is essentially &lt;strong&gt;Jupyter notebooks on steroids&lt;/strong&gt;. Your notebook adventure is enhanced with real-time synchronization for collaboration and a history recorder. Additionally, there is also a full LaTeX editor, Terminal with support for &lt;strong&gt;Multiple Panes&lt;/strong&gt;, and much more. &lt;/p&gt;

&lt;h4&gt;
  
  
  Running the container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# If you're using Linux containers&lt;/span&gt;
docker run &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cocalc &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; ~/cocalc:/projects &lt;span class="nt"&gt;-p&lt;/span&gt; 443:443 sagemathinc/cocalc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# If you're using Windows, make a docker volume and use that for storage&lt;/span&gt;
docker volume create cocalc-volume
docker run &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cocalc &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; cocalc-volume:/projects &lt;span class="nt"&gt;-p&lt;/span&gt; 443:443 sagemathinc/cocalc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwig2jkhnkv6nyo1m0l8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwig2jkhnkv6nyo1m0l8y.png" alt="Jupyter Notebook CoCalc" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Image Credits to CoCalc&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  What do I like about it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Multiple users can simultaneously edit the same file, thus enabling &lt;strong&gt;real-time collaboration&lt;/strong&gt;. It also comes with side-by-side chat for each Jupyter notebook.&lt;/li&gt;
&lt;li&gt;Realtime CPU and memory monitoring.&lt;/li&gt;
&lt;li&gt;Comes with a &lt;strong&gt;LaTeX Editor&lt;/strong&gt; and many &lt;strong&gt;pre-installed kernels&lt;/strong&gt; (R, Sage, Python 3, etc). &lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  What do I dislike about it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Unlike the TensorFlow with Jupyter image, it does not come with full CUDA support out of the box.&lt;/li&gt;
&lt;li&gt;The full image is a whopping &lt;strong&gt;11GB+&lt;/strong&gt; and is a complete Linux environment with nearly &lt;strong&gt;9,500 packages&lt;/strong&gt;, so free up some storage before pulling it.&lt;/li&gt;
&lt;li&gt;(Opinionated) The interface is quite different from the traditional Jupyter notebook interface, adding a slight learning curve.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Base Images
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Wait, what's a base image?
&lt;/h3&gt;

&lt;p&gt;A base image (not to be confused with a &lt;a href="https://docs.docker.com/glossary/#parent_image" rel="noopener noreferrer"&gt;parent image&lt;/a&gt;) is the parent-less image that your image is fundamentally based on. It is created using a &lt;a href="https://docs.docker.com/get-started/part2/#sample-dockerfile" rel="noopener noreferrer"&gt;Dockerfile&lt;/a&gt; with the &lt;code&gt;FROM scratch&lt;/code&gt; directive. A Docker image must include an entire root filesystem and a core OS installation, so choosing the right base image largely determines the size of your container image and the packages it inherits. &lt;/p&gt;
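
&lt;p&gt;To make this concrete, here is a minimal sketch of a parent-less Dockerfile (the binary name &lt;code&gt;hello&lt;/code&gt; is a placeholder for a statically linked executable):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start from the empty "scratch" base: no filesystem, no packages
FROM scratch
# Copy a statically linked binary into the image root
COPY hello /
# Run it when the container starts
CMD ["/hello"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;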

&lt;p&gt;   &lt;/p&gt;

&lt;h3&gt;
  
  
  5. Alpine
&lt;/h3&gt;

&lt;p&gt;If you've built Docker images, you know Alpine Linux. &lt;a href="https://alpinelinux.org/" rel="noopener noreferrer"&gt;Alpine Linux&lt;/a&gt; is a Linux distro built around &lt;a href="https://www.musl-libc.org/" rel="noopener noreferrer"&gt;musl libc&lt;/a&gt; (a standard library for Linux-based operating systems) and &lt;a href="https://www.busybox.net/" rel="noopener noreferrer"&gt;BusyBox&lt;/a&gt; (which provides several Unix utilities). &lt;strong&gt;The image is just around 5 MB in size and even has access to a package repository&lt;/strong&gt; that is far more complete than those of other BusyBox-based images, while staying super light. This is why I love Alpine Linux, and I use it for services and even production applications. As you can see below, the image comes with &lt;strong&gt;only 18 packages&lt;/strong&gt; and has lower memory utilization than &lt;code&gt;ubuntu&lt;/code&gt;. It ships with the &lt;code&gt;apk&lt;/code&gt; package manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsgxqaxkaxvfpxemfbal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsgxqaxkaxvfpxemfbal.png" alt="Alpine Image Information" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Running the container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# For other tags, visit https://hub.docker.com/_/alpine/&lt;/span&gt;
&lt;span class="c"&gt;# Copy-paste to pull this image&lt;/span&gt;
docker pull alpine:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
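
&lt;p&gt;To try the &lt;code&gt;apk&lt;/code&gt; package manager, drop into a throwaway Alpine shell (&lt;code&gt;curl&lt;/code&gt; is just an example package):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start a disposable interactive Alpine container
docker run -it --rm alpine:latest sh
# Inside the container: refresh the package index, then add a package
apk update
apk add curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;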



&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  6. Ubuntu
&lt;/h3&gt;

&lt;p&gt;Ubuntu is undoubtedly the most popular Linux distro, exceptionally well known for its accessibility and compatibility. It is a Debian-based Linux distro that runs on practically everything (from desktops to the cloud to IoT devices). The &lt;code&gt;ubuntu&lt;/code&gt; tag points to the latest LTS image, as that's the version prescribed for general use. As you can see below, the image comes with only &lt;strong&gt;89 packages&lt;/strong&gt; and ships with the &lt;code&gt;apt&lt;/code&gt; package manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fik5kuiolv8717fwna5y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fik5kuiolv8717fwna5y7.png" alt="Ubuntu Image Information" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Running the container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# For other supported tags, visit https://hub.docker.com/_/ubuntu/&lt;/span&gt;
&lt;span class="c"&gt;# Copy-paste to pull this image&lt;/span&gt;
docker pull ubuntu:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
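
&lt;p&gt;A quick way to explore the image is a disposable interactive shell:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start a throwaway interactive Ubuntu container
docker run -it --rm ubuntu:latest bash
# Inside the container, apt works as on any Ubuntu system
apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;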



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Database Systems
&lt;/h2&gt;

&lt;h3&gt;
  
  
  7. MySQL + PHPMyAdmin
&lt;/h3&gt;

&lt;p&gt;MySQL is undoubtedly the most popular open-source database. Execute the commands below to deploy two containers, one for MySQL and one for PHPMyAdmin. They expose port &lt;code&gt;3306&lt;/code&gt; on the host to accept SQL connections and port &lt;code&gt;8080&lt;/code&gt; for PHPMyAdmin. You can also optionally install SSL certificates to secure your connections when taking it public. MySQL is better suited for applications that use structured data and require multi-row transactions. &lt;strong&gt;As of June 2020, there are about 546,533 questions on StackOverflow alone, compared to its NoSQL counterpart, which has only 131,429 questions.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Running the container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Runs MySQL server with port 3306 exposed and root password '0000' &lt;/span&gt;
docker run &lt;span class="nt"&gt;--name&lt;/span&gt; mysql &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"0000"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 3306:3306 &lt;span class="nt"&gt;-d&lt;/span&gt; mysql

&lt;span class="c"&gt;# Runs phpmyadmin with port 80 exposed as 8080 in host and linked to the mysql container &lt;/span&gt;
docker run &lt;span class="nt"&gt;--name&lt;/span&gt; phpmyadmin &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--link&lt;/span&gt; mysql:db &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 phpmyadmin/phpmyadmin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
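
&lt;p&gt;Once both containers are up, you can verify the server from the host (assuming the &lt;code&gt;mysql&lt;/code&gt; client is installed locally; the password is the &lt;code&gt;0000&lt;/code&gt; set above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Connect to the containerized server over the exposed port
mysql -h 127.0.0.1 -P 3306 -u root -p
# Or open PHPMyAdmin in your browser at http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;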



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53epd10t3fy311x867kh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53epd10t3fy311x867kh.png" alt="Screenshot of PHPMyAdmin" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What do I like about it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Support for atomic transactions, plus a privilege- and password-based security system.&lt;/li&gt;
&lt;li&gt;MySQL has been around the block for a long time, which results in huge community support.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  What I dislike about it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;It doesn't have an official image (nor do open-source forks like MariaDB) for &lt;code&gt;arm32&lt;/code&gt; machines like the Raspberry Pi and ODROID-XU4, so developers usually fall back to community or third-party image builds. &lt;/li&gt;
&lt;li&gt;It does not natively support load-balanced MySQL clusters; open-source software such as ProxySQL can achieve this, but with a learning curve. &lt;/li&gt;
&lt;li&gt;Although vanilla MySQL is open source, it is not community-driven.&lt;/li&gt;
&lt;li&gt;It is prone to SQL injection to some degree. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  8. MongoDB
&lt;/h3&gt;

&lt;p&gt;MongoDB is an open-source document-oriented database that powers many modern-day web applications. MongoDB stores its data in JSON-like documents. I personally like MongoDB because its load-balancing feature enables horizontal scaling while largely removing the need for a database administrator. It is best suited for applications that require &lt;strong&gt;high write loads&lt;/strong&gt;, even if your data is unstructured and complex, or if you can’t pre-define your schema. &lt;/p&gt;

&lt;h4&gt;
  
  
  Running the container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create a /mongodata directory on the host system&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mongodata
&lt;span class="c"&gt;# Start the Docker container with the default port number 27017 exposed&lt;/span&gt;
docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /data/db:/mongodata &lt;span class="nt"&gt;-p&lt;/span&gt; 27017:27017 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--name&lt;/span&gt; mongodb &lt;span class="nt"&gt;-d&lt;/span&gt; mongo
&lt;span class="c"&gt;# Start Interactive Docker Terminal&lt;/span&gt;
docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; mongodb bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
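
&lt;p&gt;Instead of a bash shell, you can also open the database shell directly (recent &lt;code&gt;mongo&lt;/code&gt; images ship &lt;code&gt;mongosh&lt;/code&gt;; older ones use the legacy &lt;code&gt;mongo&lt;/code&gt; shell):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Open the MongoDB shell inside the running container
docker exec -it mongodb mongosh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;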



&lt;h4&gt;
  
  
  What do I like about it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Horizontal scaling is less expensive than vertical scaling, which makes it financially a better choice.&lt;/li&gt;
&lt;li&gt;Recovery time from a primary failure is significantly short.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  What I dislike about it?
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Historically lacked support for multi-document atomic transactions (these were only added in MongoDB 4.0).&lt;/li&gt;
&lt;li&gt;MongoDB has a significantly higher storage-size growth rate than MySQL.&lt;/li&gt;
&lt;li&gt;MongoDB is comparatively young and hence cannot directly replace legacy systems.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Database and Monitoring Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  9. Grafana
&lt;/h3&gt;

&lt;p&gt;Grafana is an open-source standalone tool mainly for visualizing and analyzing metrics, with support for various data sources (InfluxDB, AWS, MySQL, PostgreSQL, and more). Since version 4.0, Grafana has supported built-in alerting that can notify end-users (it can even email you). It also provides query editors tailored to each database and its query syntax. It is &lt;strong&gt;better suited for identifying data patterns and monitoring real-time metrics&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Running the container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start the Docker container with the default port number 3000 exposed&lt;/span&gt;
docker run &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;grafana &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 grafana/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
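
&lt;p&gt;A slightly more complete invocation persists dashboards and sets the admin password up front; a sketch (the volume name and password are examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create a named volume so dashboards survive container recreation
docker volume create grafana-storage
# Mount it over Grafana's data path and set the admin password via env var
docker run --name=grafana -d -p 3000:3000 \
    -v grafana-storage:/var/lib/grafana \
    -e GF_SECURITY_ADMIN_PASSWORD=changeme grafana/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;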



&lt;h4&gt;
  
  
  Dashboard for monitoring resources on my Raspberry Pi
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckxqn3n9779hm33tyelf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckxqn3n9779hm33tyelf.png" alt="grafana" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Web Server
&lt;/h2&gt;

&lt;h3&gt;
  
  
  10. Nginx
&lt;/h3&gt;

&lt;p&gt;Nginx is more than just a lightweight web server: it also comes with a reverse proxy, load balancer, mail proxy, and HTTP cache. I prefer Nginx to Apache as it uses fewer resources and serves static content quickly while being easy to scale. Compared to Apache, in my experience Nginx stutters less under load. &lt;/p&gt;

&lt;p&gt;Nginx addresses the concurrency problems that nearly all web applications face at scale by using an asynchronous, non-blocking, event-driven connection-handling model. Its software load balancing enables both &lt;strong&gt;horizontal and vertical scaling,&lt;/strong&gt; cutting down server load and request-serving time. You can &lt;strong&gt;use Nginx Docker containers to either host multiple websites or scale out by load balancing with Docker swarm&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Running the container
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# For other supported tags, visit https://hub.docker.com/_/nginx/&lt;/span&gt;
&lt;span class="c"&gt;# Start the Docker container with the default port number 8080 exposed&lt;/span&gt;
docker run &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"nginx-server"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
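
&lt;p&gt;To serve your own static content instead of the default welcome page, mount a host directory over Nginx's web root (the &lt;code&gt;~/my-site&lt;/code&gt; path is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Serve a local directory read-only as the site root
docker run --name "nginx-static" -d -p 8080:80 \
    -v ~/my-site:/usr/share/nginx/html:ro nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;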



&lt;p&gt; &lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>linux</category>
      <category>newbiefriendly</category>
    </item>
  </channel>
</rss>
