<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jesse Portnoy</title>
    <description>The latest articles on DEV Community by Jesse Portnoy (@jessp01).</description>
    <link>https://dev.to/jessp01</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1074925%2F2805dade-ba24-4c48-ba21-b359d0dccd6b.png</url>
      <title>DEV Community: Jesse Portnoy</title>
      <link>https://dev.to/jessp01</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jessp01"/>
    <language>en</language>
    <item>
      <title>With regards to Redhat’s recent decision…</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Sat, 01 Jul 2023 20:24:50 +0000</pubDate>
      <link>https://dev.to/jessp01/with-regards-to-redhats-recent-decision-30e4</link>
      <guid>https://dev.to/jessp01/with-regards-to-redhats-recent-decision-30e4</guid>
      <description>&lt;p&gt;In case you’ve not been following, the decision in question is to &lt;a href="https://www.phoronix.com/news/Red-Hat-CentOS-Stream-Sources"&gt;limit access to the Red Hat Enterprise Linux source code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This caused a mini storm; with this article, I aim to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Succinctly explain this move and its implications (hint — not as dramatic as some people make them seem)&lt;/li&gt;
&lt;li&gt;Offer my opinion as to why RH decided on this approach&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The actual move&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Below is a quote from &lt;a href="https://www.redhat.com/en/blog/furthering-evolution-centos-stream"&gt;RH’s official press release&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;We are continuing our investment in and increasing our commitment to CentOS Stream.&lt;/em&gt; &lt;strong&gt;&lt;em&gt;CentOS Stream will now be the sole repository for public RHEL-related source code releases.&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;For Red Hat customers and partners, source code will remain available via the Red Hat Customer Portal.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Okay, two questions may come to the minds of those reading this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What exactly is this &lt;strong&gt;&lt;em&gt;CentOS Stream?&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;What’s the source code in question?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s start with the first question; from &lt;a href="https://www.centos.org/centos-stream"&gt;https://www.centos.org/centos-stream&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Continuously delivered distro that tracks just ahead of Red Hat Enterprise Linux (RHEL) development, positioned as a midstream between Fedora Linux and RHEL.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This may confuse some people: CentOS was, to use a kind term, repurposed a while back, and those who have not followed that change will find this definition doesn’t match their understanding of what it is. To rephrase the above: &lt;strong&gt;CentOS Stream is RHEL’s upstream.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right. Now what sources are we talking about? After all, RHEL is a Linux distribution (and one of many, at that); it does not own the kernel (which is provided under the terms of the GNU General Public License version 2) so, when the word “source” is used in this context, what does it refer to?&lt;/p&gt;

&lt;p&gt;I imagine anyone who is interested in this topic has at least some vague idea as to what a Linux distribution means but I’ll provide my own little blurb here. &lt;strong&gt;Before I do that, however, I should note that there are different kinds of distributions; Redhat is a binary software distribution, whereas Gentoo (to name the leader of its kind) is based on the idea that you compile sources on your own machine using its&lt;/strong&gt; &lt;a href="https://wiki.gentoo.org/wiki/Portage"&gt;Portage package management system&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A “binary” Linux distribution takes the kernel sources, as well as those of many, many other projects/components (GCC, BASH, libpng, Golang, Rust, PHP, X server, GNOME, KDE, XBill, etc.) and packages them in a way that’s easy to deploy and upgrade.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the case of RH, the packaging format is RPM (Redhat Package Manager). RPM, while a Redhat invention, is also FOSS and is used and contributed to by many other Linux distributions, as well as independent developers. Another example of a commonly used FOSS project conceived at Redhat is logrotate (there are many others).&lt;/p&gt;

&lt;p&gt;Okay, so Redhat (as well as many other distros) builds and packages a multitude of FOSS components (some of which it also developed and maintains) so that we won’t have to do it ourselves. As you can imagine, this takes a lot of work (there are many packages!) and, as they’re doing it, they encounter bugs, to which they apply patches (before compiling). They then contribute these patches upstream (when appropriate — sometimes, the patch is only needed in order to package the source for a Redhat distribution).&lt;/p&gt;

&lt;p&gt;So, “source” in this context mainly pertains to the RPM specs used to generate the packages and the patches applied during the build process. RPM Source packages are also referred to as SRPMs.&lt;/p&gt;
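&lt;p&gt;&lt;em&gt;To make this concrete, here is a small sketch of how one might fetch and poke at an SRPM on a Redhat-family system. It assumes the dnf download plugin and the rpm/rpm2cpio tools are available; the package name is just an example:&lt;/em&gt;&lt;/p&gt;

```shell
# Pick a package to inspect (purely an example):
pkg=bash
if command -v dnf >/dev/null; then
  # Fetch the source RPM rather than the binary one:
  dnf download --source "$pkg"
  # List its contents: the spec file, the pristine tarball and any patches:
  rpm -qpl "$pkg"-*.src.rpm
  # Unpack it without installing anything, via rpm2cpio:
  rpm2cpio "$pkg"-*.src.rpm | cpio -idm
fi
```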

&lt;p&gt;&lt;strong&gt;Now that we understand what CentOS Stream and “sources” mean in this context, who will be affected by this decision?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As emphasised previously, there are many binary Linux distributions. One very broad way of segmenting these is by the packaging format (and accompanying tool-chain) they use, the two most common being deb and RPM.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Side note: deb is my format of choice, conceived by Debian, my favourite distro. One example of another distro that uses this format is Ubuntu. I could (and perhaps should) write a whole article about the relationship between Debian and Ubuntu and another that compares these two packaging formats but neither pertains to this RHEL decision, so I’ll leave it at that for now. I have started writing a series of articles about software packaging; if you’re interested in this topic, you can find the first instalment&lt;/em&gt; &lt;a href="https://medium.com/@jesse_62134/docker-is-not-a-packaging-tool-e494d9570e01"&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As noted above, RPM is Redhat’s packaging format of choice and there are many different distros that use it as well. Some of these distros consider being binary compatible with RHEL releases their main selling point. I use the term “selling” quite loosely here, as in many cases, no money changes hands.&lt;/p&gt;

&lt;p&gt;To better understand this rather intricate ecosystem, let’s consider how Redhat generates (large portions of) its revenue. There are two main streams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Professional Services&lt;/li&gt;
&lt;li&gt;Technical Support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To demonstrate the value of these two propositions, let’s consider a person I know intimately: myself(!).&lt;/p&gt;

&lt;p&gt;As you already know if you got this far, I am a Debian user. I’ve used it exclusively on all my personal machines and when the choice was mine to make, on all servers under my control as well.&lt;/p&gt;

&lt;p&gt;Again, I could write volumes about why I love Debian as much as I do but I (and Debian) are not the topic of this particular article so I’ll be brief and say that the main factors are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It’s a community distro that matches my ideology&lt;/li&gt;
&lt;li&gt;Its release cycle and repo segmentation make sense to me&lt;/li&gt;
&lt;li&gt;I find deb (and the accompanying tool-chain) superior to that of RPM (and its tool-chain)&lt;/li&gt;
&lt;li&gt;The package quality is extraordinary; regardless of whether you compare it with commercial distros or fellow community ones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Having said that, would I choose Debian if I were the CTO of a vast financial consortium with thousands of technical staff? No, I’d actually choose Redhat.&lt;/p&gt;

&lt;p&gt;Why? Because while I, as a one-man band, can easily maintain 100 Debian servers of different configurations and purposes (and even more than that by putting automated procedures in place), this imaginary CTO version of me (to be clear: I’ve no aspiration to become that person) cannot. He also cannot personally interview his 1000+ employees to ensure that only like-minded people are on the payroll. You see, Debian has many hard-working, bright volunteers who produce excellent packages and, if you know how to investigate, solve and report issues, you will get superb support from the community; if you don’t, you’re better off paying for a Redhat subscription. In return for your subscription fees, Redhat’s support will squeeze the information out of you/your employees like one squeezes toothpaste out of the tube. They will not send you off with a friendly RTFM (see &lt;a href="https://medium.com/@jesse_62134/from-rtfm-to-participants-awards-f956308ebe97"&gt;this article&lt;/a&gt; for my views on how important the RTFM notion is) and you can also report back to the board, say “It’s being looked at by RH”, and be sure that no one will fault you for anything. Google “No one ever got fired for buying IBM” for more on the latter point.&lt;/p&gt;

&lt;p&gt;Right, so, that’s why, in my opinion, people opt for RHEL.&lt;/p&gt;

&lt;p&gt;Now, who is affected by this decision? The RHEL clones.&lt;/p&gt;

&lt;p&gt;At this point, you may be thinking: “Okay, Jesse, got it but, if the only reason to pay for RHEL is so you could benefit from professional services and tech support (and arse coverage), why would you opt for one of its binary compatible clones instead?”&lt;/p&gt;

&lt;p&gt;Once more, &lt;strong&gt;in my view,&lt;/strong&gt; two use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As a contractor working with a company that uses RHEL, I want to work on the closest thing to it without paying for a subscription&lt;/li&gt;
&lt;li&gt;As the aforementioned CTO, I want my tech chaps to have the closest thing to it on their dev machines without paying for a subscription (on Prod, I’ll pay, to get the benefits we already covered)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is well worth noting that this move by Redhat is unlikely to eradicate these so-called clones; it will merely make life a bit more difficult for them.&lt;/p&gt;

&lt;p&gt;From &lt;a href="https://www.phoronix.com/news/Rocky-Linux-RHEL-Source-Access"&gt;https://www.phoronix.com/news/Rocky-Linux-RHEL-Source-Access&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“These methods are possible because of the power of GPL. No one can prevent redistribution of GPL software. To reiterate, both of these methods enable us to legitimately obtain RHEL binaries and SRPMs without compromising our commitment to open source software or agreeing to TOS or EULA limitations that impede our rights. Our legal advisors have reassured us that we have the right to obtain the source to any binaries we receive, ensuring that we can continue advancing Rocky Linux in line with our original intentions.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now we arrive at the other question…&lt;/p&gt;

&lt;h3&gt;
  
  
  Why did Redhat take this step?
&lt;/h3&gt;

&lt;p&gt;Redhat does not care about the community clones. It knows that people using these would not buy a Redhat subscription if these distros were to disappear. They’ll go with Debian (or Ubuntu or one of the numerous RPM based distros available).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who do they care about?&lt;/strong&gt; &lt;a href="https://en.wikipedia.org/wiki/Oracle_Linux"&gt;&lt;strong&gt;Oracle Enterprise Linux&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From &lt;a href="https://en.wikipedia.org/wiki/Oracle_Linux"&gt;https://en.wikipedia.org/wiki/Oracle_Linux&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Oracle Linux (abbreviated OL, formerly known as Oracle Enterprise Linux or OEL) is a Linux distribution packaged and freely distributed by Oracle, available partially under the GNU General Public License since late 2006. It is compiled from Red Hat Enterprise Linux (RHEL) source code, replacing Red Hat branding with Oracle’s.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Do I support Redhat’s decision? No,&lt;/strong&gt; I don’t. When you base your business model on FOSS (see &lt;a href="https://medium.com/@jesse_62134/dont-forget-to-floss-25f3faa3856e"&gt;this article&lt;/a&gt; for why I think it’s the best choice), you enjoy many benefits (too many to list in this article) but you also need to prepare yourself for things like Oracle Enterprise Linux.&lt;/p&gt;

&lt;p&gt;Can I understand how Redhat would be irked by Oracle Enterprise Linux? Absolutely. But again, it’s part of the deal.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Psst..&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Liked this post and have a role I could be a good fit for? I’m open to suggestions. See&lt;/em&gt; &lt;a href="https://packman.io/#contact"&gt;&lt;em&gt;https://packman.io/#contact&lt;/em&gt;&lt;/a&gt; &lt;em&gt;for ways to contact me.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cheers,&lt;/em&gt;&lt;/p&gt;

</description>
      <category>redhat</category>
      <category>opensource</category>
      <category>centos</category>
      <category>linux</category>
    </item>
    <item>
      <title>Docker is not a packaging tool - part 2</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Fri, 12 May 2023 19:21:00 +0000</pubDate>
      <link>https://dev.to/jessp01/docker-is-not-a-packaging-tool-part-2-4md</link>
      <guid>https://dev.to/jessp01/docker-is-not-a-packaging-tool-part-2-4md</guid>
      <description>&lt;h3&gt;
  
  
  Docker is not a packaging tool — part 2
&lt;/h3&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/jessp01/docker-is-not-a-packaging-tool-part-1-11dg"&gt;last segment&lt;/a&gt; of this series, we briefly discussed the importance of starting the build process from a clean ENV. We left off promising to say a word about pkg-config and then move on to how proper packaging helps matters, how utilising chroots made things far easier way back when, why Docker was the next evolutionary step and, lastly, why, while grand, it is not an all-encompassing, magic-holy-grail solution to all your build and deployment headaches. So, without further ado…&lt;/p&gt;

&lt;h3&gt;
  
  
  pkg-config
&lt;/h3&gt;

&lt;p&gt;As I often recommend doing, let’s start by looking at the man page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pkg-config(1) General Commands Manual pkg-config(1)

NAME
pkg-config - Return metainformation about installed libraries

DESCRIPTION
The pkg-config program is used to retrieve information about installed libraries.
It is typically used to compile and link against one or more libraries.
Here is a typical usage scenario in a Makefile:
cc program.c `pkg-config --cflags --libs gnomeui`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, that’s a clear and accurate description. The key bit here is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It is typically used to compile and link against one or more libraries.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, pkg-config can help us ascertain that we have the needed deps to build our software.&lt;br&gt;&lt;br&gt;
 Let's look at a package on my Debian machine (I chose libgif7 completely at random: I literally typed &lt;code&gt;dpkg -L libg&lt;/code&gt;, hit tab and picked one) to understand what pkg-config actually does for us.&lt;br&gt;&lt;br&gt;
 Here we can begin to see the advantage of packaging formats like deb and RPM, to wit: they store the installed packages in a local DB and provide tools to look up metadata about them. In the case of APT/deb/dpkg, if we want to see what files a given package includes, we can run:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;dpkg -L&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sample output:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dpkg -L libgif7
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libgif.so.7.2.0
/usr/share
/usr/share/doc
/usr/share/doc/libgif7
/usr/share/doc/libgif7/NEWS.gz
/usr/share/doc/libgif7/TODO
/usr/share/doc/libgif7/changelog.Debian.gz
/usr/share/doc/libgif7/changelog.gz
/usr/share/doc/libgif7/copyright
/usr/lib/x86_64-linux-gnu/libgif.so.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Very helpful, indeed :)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;NOTE: in this post, I’ll be mentioning several deb/dpkg/APT commands (I’m a proud Debian GNU/Linux user). Of course, not all Linux distros and certainly not all UNIX flavours use this toolchain/stack/your term here. Since RPM/YUM/DNF is a very common stack, I’ll make a reasonable effort to provide the counterpart commands for these as well. In this case, the counterpart of&lt;/em&gt; &lt;em&gt;dpkg -L is&lt;/em&gt; &lt;em&gt;rpm -ql.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
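&lt;p&gt;&lt;em&gt;Since the note above pairs dpkg -L with rpm -ql, here is a quick cheat-sheet of such counterparts (package names are examples; the live queries are guarded so they are a no-op where a toolchain is absent):&lt;/em&gt;&lt;/p&gt;

```shell
# Print the mapping between the two toolchains' query commands:
printf '%s\n' \
  'dpkg -L libgif7     ==  rpm -ql giflib      (list files in a package)' \
  'dpkg -S FILE        ==  rpm -qf FILE        (which package owns FILE)' \
  'apt-get source PKG  ==  dnf download --source PKG'
# Run whichever toolchain is present on this machine:
if command -v dpkg >/dev/null; then dpkg -L libgif7 || true; fi
if command -v rpm >/dev/null; then rpm -ql giflib || true; fi
```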

&lt;p&gt;You’ll notice that the above output includes no reference to any pkg-config files at all. To understand why, allow me to provide some background as to how Linux distros based on pre-built/compiled packages segment things and why.&lt;/p&gt;

&lt;p&gt;Let’s start with the &lt;strong&gt;why&lt;/strong&gt;: it is commonly agreed (though not always practised) that a file system should not contain files (of any kind: config, binaries, scripts, what have you) that the system does not need in order to function. In the early, more naive days, this was mainly a question of disk space, which was very limited. With today’s resources, this point is somewhat less important (though not always — consider embedded devices) but, with the advancement and general availability of tech and computer resources, another problem has emerged, to wit: SECURITY. Put simply:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The more unneeded rubbish you have on your FS, the more vulnerable you are.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another difficulty that has become more pronounced is that of managing dependencies and of course, the fewer packages you have installed, the easier it is to manage them.&lt;/p&gt;

&lt;p&gt;Now, let’s go into the &lt;strong&gt;how.&lt;/strong&gt; Again, I’ll be covering how it’s done in deb based distros, as well as RPM based ones. The principle in both is the same:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Packages are built from a spec file (or, in the case of deb — multiple spec files, each serving a different purpose).&lt;br&gt;&lt;br&gt;
The spec defines the package deps (separated into those needed to build the package and those needed to run it), as well as specifies the files to be included in the package (binaries, configuration files, documentation, etc) and their location (this is fixed, you cannot choose where to install files when deploying deb and RPM packages). It also includes some metadata: package name, description, source and so on. Some of the metadata is mandatory to specify (name for instance), other bits are optional (for example, not all packages declare the source/repo the package came from, which is a shame, because it’s useful data).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, to bring us back to the question of why the libgif7 package includes no pkg-config files: one spec can declare multiple packages and, in the case of libraries like libgif, typically will.&lt;/p&gt;

&lt;p&gt;To better explain this, let’s obtain the spec files for this package from my Debian 11 repo. We can do that with:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ apt-get source libgif7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If you’ve run the above command, you’ll find that it has placed a directory called &lt;code&gt;giflib-$VERSION&lt;/code&gt; in your CWD.&lt;/p&gt;

&lt;p&gt;Inside it, you’ll find many different files, including the source for libgif of the version in question and a directory called debian where the spec files reside. Here’s what’s in there in my case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 14506 Dec 20 2020 changelog
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 1328 Dec 20 2020 control
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 2371 Dec 20 2020 copyright
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 10 Dec 20 2020 giflib-dbg.docs
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 303 Dec 20 2020 giflib-tools.doc-base
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 10 Dec 20 2020 giflib-tools.docs
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 9 Dec 20 2020 giflib-tools.install
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 24 Dec 20 2020 giflib-tools.manpages
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 10 Dec 20 2020 libgif7.docs
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 18 Dec 20 2020 libgif7.install
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 1701 Dec 20 2020 libgif7.symbols
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 10 Dec 20 2020 libgif-dev.docs
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 46 Dec 20 2020 libgif-dev.install
drwxr-xr-x 2 jesse jesse 4096 Dec 20 2020 patches
&lt;span class="nt"&gt;-rwxr-xr-x&lt;/span&gt; 1 jesse jesse 887 Dec 20 2020 rules
drwxr-xr-x 2 jesse jesse 4096 Dec 20 2020 &lt;span class="nb"&gt;source
&lt;/span&gt;drwxr-xr-x 2 jesse jesse 4096 Dec 20 2020 upstream
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt; 1 jesse jesse 211 Dec 20 2020 watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s focus our attention on some of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;control: defines the metadata for the packages to be produced (name, description, section and, very importantly: the build and runtime dependencies)&lt;/li&gt;
&lt;li&gt;rules: contains the build instructions (this will typically be processed by &lt;code&gt;make&lt;/code&gt; but it’s not a requirement — you can use any tool you want, just remember to declare it as a build dep)&lt;/li&gt;
&lt;li&gt;patches: in some cases, the package maintainer will apply patches to the upstream source (what is often referred to as pristine sources in RPM terminology). These will be placed in this directory and processed when rules is executed&lt;/li&gt;
&lt;li&gt;changelog: this is a very important file; amongst other things, it allows us to easily tell what an upgrade includes (security fixes, new features, bug fixes and also breaking changes)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Right, this is all very interesting but again: why doesn’t the libgif7 package include any pkg-config files? &lt;strong&gt;Because they are part of the &lt;code&gt;libgif-dev&lt;/code&gt; package.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;NOTE: in RPM/YUM/DNF based systems, the package spec consists of a single file (package-name.spec) rather than multiple files as described above. The general concepts are very similar, however, and the spec file is divided into sections, each serving as a counterpart to the above debian spec files. Patches will typically reside under ~/rpmbuild/SOURCES. The naming convention for development packages is package-name-devel.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
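&lt;p&gt;&lt;em&gt;For illustration, here is a heavily trimmed, hypothetical spec skeleton (not the real giflib spec) showing how a single spec file declares a -devel subpackage. We write it to a scratch file so we can poke at it:&lt;/em&gt;&lt;/p&gt;

```shell
# Write a minimal, purely illustrative spec skeleton to a scratch file:
printf '%s\n' \
  'Name: giflib' \
  'Version: 5.2.1' \
  'Release: 1' \
  'Summary: library for GIF images' \
  'License: MIT' \
  '' \
  '%package devel' \
  'Summary: library for GIF images (development)' \
  'Requires: %{name} = %{version}-%{release}' \
  '' \
  '%description' \
  'Loads and saves GIF files.' \
  '' \
  '%description devel' \
  'Headers, static archives and pkg-config files for giflib.' \
  > giflib.spec.example
# One spec, two packages: the main one plus the devel subpackage:
grep -c '%package' giflib.spec.example
```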

&lt;p&gt;As we said before, one spec can declare several different packages and this is the case for many packages, especially those providing libraries. Let us take a closer look at the control file; for brevity, we’ll use grep to extract the different packages this file declares:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Package:&lt;/span&gt;&lt;span class="se"&gt;\|&lt;/span&gt;&lt;span class="s2"&gt;Description"&lt;/span&gt; debian/control
Package: giflib-tools
Description: library &lt;span class="k"&gt;for &lt;/span&gt;GIF images &lt;span class="o"&gt;(&lt;/span&gt;utilities&lt;span class="o"&gt;)&lt;/span&gt;
Package: libgif7
Description: library &lt;span class="k"&gt;for &lt;/span&gt;GIF images &lt;span class="o"&gt;(&lt;/span&gt;library&lt;span class="o"&gt;)&lt;/span&gt;
Package: libgif-dev
Description: library &lt;span class="k"&gt;for &lt;/span&gt;GIF images &lt;span class="o"&gt;(&lt;/span&gt;development&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, so, we can see that 3 packages are declared in this spec. We’ve already seen what files libgif7 contains, let’s now look at libgif-dev:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dpkg &lt;span class="nt"&gt;-L&lt;/span&gt; libgif-dev
/.
/usr
/usr/include
/usr/include/gif_lib.h
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libgif.a
/usr/lib/x86_64-linux-gnu/pkgconfig
/usr/lib/x86_64-linux-gnu/pkgconfig/libgif7.pc
/usr/share
/usr/share/doc
/usr/share/doc/libgif-dev
/usr/share/doc/libgif-dev/NEWS.gz
/usr/share/doc/libgif-dev/TODO
/usr/share/doc/libgif-dev/changelog.Debian.gz
/usr/share/doc/libgif-dev/changelog.gz
/usr/share/doc/libgif-dev/copyright
/usr/lib/x86_64-linux-gnu/libgif.so
/usr/lib/x86_64-linux-gnu/pkgconfig/libgif.pc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the dev package includes a directory called pkgconfig containing a file called libgif7.pc, together with libgif.pc, a symlink pointing to it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To reiterate: the purpose of this package segmentation convention is to avoid the unnecessary deployment of files on our filesystem. dev packages include files that are only needed for developing with (or building against) a given package (headers, archive files, pkg-config files, etc).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s have a look at the contents of &lt;code&gt;pkgconfig/libgif7.pc&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;prefix=/usr&lt;/span&gt;
&lt;span class="s"&gt;exec_prefix=${prefix}&lt;/span&gt;
&lt;span class="s"&gt;libdir=${prefix}/lib/x86_64-linux-gnu&lt;/span&gt;
&lt;span class="s"&gt;includedir=${prefix}/include&lt;/span&gt;

&lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;libgif&lt;/span&gt;
&lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Loads and saves GIF files&lt;/span&gt;
&lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5.2.1&lt;/span&gt;
&lt;span class="na"&gt;Cflags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-I${includedir}&lt;/span&gt;
&lt;span class="na"&gt;Libs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-L${libdir} -lgif&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, it provides information we’ll need when building against this library:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Its prefix&lt;/li&gt;
&lt;li&gt;Where the library files (shared objects, archive files) reside&lt;/li&gt;
&lt;li&gt;Its version (remember our description of dependency hell?)&lt;/li&gt;
&lt;li&gt;Cflags: in this case, only setting the include path (-I), but it could specify other flags to be passed to the compiler&lt;/li&gt;
&lt;li&gt;Libs: points the linker at our $libdir (-L) and specifies that we should link against libgif (-lgif)&lt;/li&gt;
&lt;/ul&gt;
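&lt;p&gt;&lt;em&gt;To see how those variables expand, here is a small sketch of what pkg-config does with them under the bonnet. Rather than assuming libgif is installed, we create a hypothetical mylib.pc and expand its variables with sed:&lt;/em&gt;&lt;/p&gt;

```shell
# Create a hypothetical .pc file, mirroring the structure shown above:
printf '%s\n' \
  'prefix=/usr' \
  'libdir=${prefix}/lib/x86_64-linux-gnu' \
  'includedir=${prefix}/include' \
  'Name: mylib' \
  'Version: 1.0.0' \
  'Cflags: -I${includedir}' \
  'Libs: -L${libdir} -lmylib' \
  > mylib.pc
# Expand the variables and print the Cflags/Libs values, much like
# pkg-config --cflags / --libs would:
sed -n \
  -e 's|${includedir}|/usr/include|g' \
  -e 's|${libdir}|/usr/lib/x86_64-linux-gnu|g' \
  -e 's/^Cflags: //p' \
  -e 's/^Libs: //p' mylib.pc
# prints:
#   -I/usr/include
#   -L/usr/lib/x86_64-linux-gnu -lmylib
```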

&lt;p&gt;Let us look at a more elaborate example: the pkg-config for gtk4:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat /usr/lib/x86_64-linux-gnu/pkgconfig/gtk4.pc&lt;/span&gt;
&lt;span class="s"&gt;prefix=/usr&lt;/span&gt;
&lt;span class="s"&gt;includedir=${prefix}/include&lt;/span&gt;
&lt;span class="s"&gt;libdir=${prefix}/lib/x86_64-linux-gnu&lt;/span&gt;

&lt;span class="s"&gt;targets=broadway wayland x11&lt;/span&gt;
&lt;span class="s"&gt;gtk_binary_version=4.0.0&lt;/span&gt;
&lt;span class="s"&gt;gtk_host=x86_64-linux&lt;/span&gt;

&lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GTK&lt;/span&gt;
&lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GTK Graphical UI Library&lt;/span&gt;
&lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4.8.2&lt;/span&gt;
&lt;span class="na"&gt;Requires&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pango &amp;gt;= 1.50.0, pangocairo &amp;gt;= 1.50.0, gdk-pixbuf-2.0 &amp;gt;= 2.30.0, cairo &amp;gt;= 1.14.0, cairo-gobject &amp;gt;= 1.14.0, graphene-gobject-1.0 &amp;gt;= 1.9.1, gio-2.0 &amp;gt;= 2.66.0&lt;/span&gt;
&lt;span class="na"&gt;Libs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-L${libdir} -lgtk-4&lt;/span&gt;
&lt;span class="na"&gt;Cflags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-I${includedir}/gtk-4.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one includes another field called Requires, which specifies additional deps (similar to what the debian/control file does).&lt;/p&gt;

&lt;p&gt;So how is this metadata used? Generally speaking, the steps when building C/C++ code are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate and run the configure script to specify the features you want to build the project with (e.g. libgif support) and ensure all project deps can be found&lt;/li&gt;
&lt;li&gt;Build the code using a compiler (this can be done by invoking &lt;code&gt;make&lt;/code&gt; with a Makefile, which will in turn invoke other tools, including the compiler, but many other build frameworks/tools may be used, for example CMake)&lt;/li&gt;
&lt;li&gt;Link against needed deps using ld&lt;/li&gt;
&lt;li&gt;Optionally, install the resulting files (binaries, configs, metadata, man pages, etc) onto the target paths&lt;/li&gt;
&lt;/ul&gt;
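&lt;p&gt;&lt;em&gt;Here is a minimal sketch of where pkg-config slots into those steps: a tiny Makefile whose compile and link flags come straight from pkg-config. The target, source file and module name are assumptions for illustration:&lt;/em&gt;&lt;/p&gt;

```shell
# Generate a minimal example Makefile (GNU make syntax; gifinfo.c and
# the libgif module name are hypothetical):
printf 'CFLAGS := $(shell pkg-config --cflags libgif)\nLDLIBS := $(shell pkg-config --libs libgif)\n\ngifinfo: gifinfo.c\n\tcc $(CFLAGS) -o gifinfo gifinfo.c $(LDLIBS)\n' > Makefile.example
cat Makefile.example
```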

&lt;p&gt;pkg-config will typically be involved in the configuration stage. For example, a configure script for a project that requires GTK4 may include this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;pkg-config &lt;span class="nt"&gt;--cflags&lt;/span&gt; &lt;span class="nt"&gt;--libs&lt;/span&gt; gtk4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return output similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-mfpmath=sse -msse -msse2 -pthread -I/usr/local/include/freetype2 -I/usr/include/gtk-4.0 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/fribidi -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/x86_64-linux-gnu -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/harfbuzz -I/usr/include/libpng16 -I/usr/include/graphene-1.0 -I/usr/lib/x86_64-linux-gnu/graphene-1.0/include -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lgtk-4 -lpangocairo-1.0 -lpango-1.0 -lharfbuzz -lgdk_pixbuf-2.0 -lcairo-gobject -lcairo -lgraphene-1.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
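&lt;p&gt;Where do all these flags come from? Each library ships a small metadata file (a &lt;code&gt;.pc&lt;/code&gt; file) that pkg-config reads. As a sketch, a hypothetical &lt;code&gt;libfoo.pc&lt;/code&gt; (the library name, paths and versions here are made up) might look like this:&lt;/p&gt;

```
# /usr/lib/x86_64-linux-gnu/pkgconfig/libfoo.pc -- hypothetical example
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib/x86_64-linux-gnu
includedir=${prefix}/include

Name: libfoo
Description: An example library
Version: 1.2.3
Requires: glib-2.0 >= 2.40
Cflags: -I${includedir}/foo
Libs: -L${libdir} -lfoo
```

&lt;p&gt;Given such a file, &lt;code&gt;pkg-config --cflags --libs libfoo&lt;/code&gt; would print something like &lt;code&gt;-I/usr/include/foo -L/usr/lib/x86_64-linux-gnu -lfoo&lt;/code&gt;, plus whatever the required modules contribute; the GTK4 output above is simply the aggregation of many such files.&lt;/p&gt;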



&lt;p&gt;At this point, you may be wondering why we need pkg-config at all if, as we’ve seen, deb/RPM based distros have tools that can provide the same information. The answer is rather simple but instead of phrasing my own version of it, let’s have a look at what Wikipedia has to say; from &lt;a href="https://en.wikipedia.org/wiki/Pkg-config"&gt;https://en.wikipedia.org/wiki/Pkg-config&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;pkg-config&lt;/strong&gt; defines and supports a unified interface for querying installed &lt;a href="https://en.wikipedia.org/wiki/Library_(computer_science)"&gt;libraries&lt;/a&gt; for the purpose of &lt;a href="https://en.wikipedia.org/wiki/Compiler"&gt;compiling&lt;/a&gt; software that depends on them. It allows programmers and installation scripts to work without explicit knowledge of detailed library path information. pkg-config was originally designed for &lt;a href="https://en.wikipedia.org/wiki/Linux"&gt;Linux&lt;/a&gt;, but it is now also available for &lt;a href="https://en.wikipedia.org/wiki/Berkeley_Software_Distribution"&gt;BSD&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Microsoft_Windows"&gt;Microsoft Windows&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/MacOS"&gt;macOS&lt;/a&gt;, and &lt;a href="https://en.wikipedia.org/wiki/Solaris_(operating_system)"&gt;Solaris&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The operative words here are: &lt;strong&gt;unified interface.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are many Linux packaging formats. While you can install the deb toolchain on an RPM based distro and vice versa, as well as on some other Unices, doing so can cause confusion (was this package installed via RPM, dpkg or something else? Where does its metadata reside and what tool should I use to fetch it?). Moreover, when writing your packaging/deployment scripts, you cannot really rely on either of these toolchains being present.&lt;/p&gt;

&lt;p&gt;You could, of course, cover both cases with conditional statements but that would make for very long, error-prone and hard-to-maintain code. Further, these tools will only work with packages deployed through them. In other words, dpkg will not return data for files/packages that were installed by manually invoking &lt;code&gt;make install&lt;/code&gt;, &lt;code&gt;cmake --install&lt;/code&gt; or &lt;code&gt;rpm -i&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
Moreover, while distros like Debian and Red Hat (to name only two) do go to great lengths to package a multitude of popular (and some less popular) packages, no single distro can cover everything under the sun and chances are you’ll still have to build SOME of your needed dependencies yourself rather than rely on pre-built packages from the official distro repos.&lt;/p&gt;
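&lt;p&gt;To make that concrete, here is a sketch of the sort of branching a packaging script would otherwise need just to decide which toolchain to query (detection only; a real script would then have to duplicate every ownership/version query per branch):&lt;/p&gt;

```shell
# Detect which packaging toolchain (if any) is available; illustrative only.
if command -v dpkg-query >/dev/null; then
  toolchain="deb"   # would then query with dpkg-query -S / dpkg-query -W
elif command -v rpm >/dev/null; then
  toolchain="rpm"   # would then query with rpm -qf / rpm -q
else
  toolchain="none"  # e.g. Alpine (apk), Arch (pacman), macOS (brew)...
fi
echo "detected: $toolchain"
```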

&lt;p&gt;pkg-config is useful precisely because it is a unified interface: declare that your build depends on it and you’re covered:) It only covers the build side, though; it does not handle installing pre-built binaries, which is why packaging formats like deb and RPM are so important.&lt;/p&gt;
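&lt;p&gt;In practice, you rarely invoke pkg-config by hand; build systems wrap it. In an autoconf-based project, for example, a single line in &lt;code&gt;configure.ac&lt;/code&gt; (the module name and version are taken from the GTK4 example above) both performs the check and exports the flags:&lt;/p&gt;

```
# configure.ac fragment: abort ./configure early if gtk4 >= 4.0 is absent;
# on success, GTK_CFLAGS and GTK_LIBS are substituted into the Makefiles.
PKG_CHECK_MODULES([GTK], [gtk4 >= 4.0])
```

&lt;p&gt;CMake offers the analogous &lt;code&gt;pkg_check_modules()&lt;/code&gt; through its FindPkgConfig module.&lt;/p&gt;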

&lt;p&gt;At this point, it is worth mentioning that there are alternatives to pkg-config. One prominent alternative is &lt;a href="https://en.wikipedia.org/wiki/GNU_Libtool"&gt;GNU Libtool&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once more, rather than needlessly work on my own explanation, allow me to refer you to Wikipedia for a quick comparison: &lt;a href="https://en.wikipedia.org/wiki/Pkg-config#Comparison_with_libtool"&gt;https://en.wikipedia.org/wiki/Pkg-config#Comparison_with_libtool&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Right, let’s recap:&lt;/p&gt;

&lt;p&gt;In this chapter, we’ve discussed the purpose of pkg-config and demonstrated some obvious advantages of the packaging approach deb and RPM share, as well as explained the reasoning behind package segmentation.&lt;/p&gt;

&lt;p&gt;Join me for the next instalment of this series where I’ll return to the use of chroots, the added value provided by Docker and how we can combine proper packaging and Docker in our pursuit of the perfect build, packaging and deployment process:)&lt;/p&gt;

&lt;p&gt;May the source be with you,&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From RTFM to participant awards</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Sat, 06 May 2023 18:41:00 +0000</pubDate>
      <link>https://dev.to/jessp01/from-rtfm-to-participants-awards-2g11</link>
      <guid>https://dev.to/jessp01/from-rtfm-to-participants-awards-2g11</guid>
      <description>&lt;p&gt;This is a story of the decline of our standards. I expect the opinions I express here may be controversial but honestly? I don’t mind it. I’m all for discussion and while I’d rather only receive supportive comments (anyone that says they want to be disagreed with is lying), I’m open to opposing opinions as well, so long as we keep it polite and on point.&lt;/p&gt;

&lt;p&gt;To those who don’t know the acronym, RTFM stands for Read The Fucking Manual. This used to be a perfectly reasonable response when someone asked a stupid or super-trivial question that is covered in the documentation.&lt;/p&gt;

&lt;p&gt;I am not a huge fan of expletives and use them very rarely but since RTM is not as effective in driving the point home, I suppose that, had I coined this acronym, I’d probably have used “bloody” (Americans often think “bloody” is just the UK version of “fucking” but it isn’t). But I digress…&lt;/p&gt;

&lt;p&gt;At a certain point in time, it somehow became impolite to use RTFM and now, it appears to be perceived as plain rude. This is not because of the “fucking” bit at all. If anything, it seems that cursing is becoming more and more acceptable, not less.&lt;br&gt;&lt;br&gt;
So then, what is the reason for this change in norms and why is it so important?&lt;/p&gt;

&lt;p&gt;Well, let’s consider the notion this acronym conveys so succinctly: you’ve asked a stupid/trivial question. This implies that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You have not read the manual, &lt;strong&gt;which you are expected to do&lt;/strong&gt;, and now, because you’re too lazy to read what I have read to be in a position to answer you (or even written myself for others to read, as the case may be), you’re needlessly wasting my valuable time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;or:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You have read the manual and still managed not to know the answer to this basic question so… come on, don’t make me say it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At this point, you may be thinking: “&lt;em&gt;Jesse, that’s unfair! Maybe the questioner DID read the manual and it’s simply badly written?&lt;/em&gt;”&lt;/p&gt;

&lt;p&gt;To this I say: indeed, manuals are written by people and people make mistakes. Thinking that something will be obvious to others because it is to YOU is a very common and human mistake to make. HOWEVER, when that’s the case, I’d expect the question to be prefixed with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I read the manual and in particular THIS section and couldn’t find the answer. I then proceeded to Google my question and looked at multiple results but still couldn’t nail it down. Could you please help me?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you prefixed your question in that manner and still got an RTFM reply then, you’re right, it’s unfair; but that’s hardly ever the case.&lt;/p&gt;

&lt;p&gt;Another case I’m fine with, and will never respond to with RTFM, is when a friend or, at least, a very close peer prefixes the question with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I know it’s probably in the manual but, you’re quicker to ask and I’m very stressed at the moment…”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’ll absolutely accept that. As humans, our behaviour is subjective and friends are more important than manuals.&lt;/p&gt;

&lt;p&gt;These exceptions aside, replying with RTFM basically berates the questioning party for failing to adhere to fundamental expectations: read manuals to learn what to do and, more implicitly, do not disturb others needlessly.&lt;/p&gt;

&lt;p&gt;The reason RTFM is now considered rude is that we no longer expect that of people, which is extremely sad and yes, &lt;strong&gt;important&lt;/strong&gt;. The rest of this post is an account of my opinion as to why and how we got to a point where we no longer have basic standards and expectations when it comes to learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  It all starts with the bloody participant awards
&lt;/h3&gt;

&lt;p&gt;I remember the first time I heard about participant awards. It was in an article written about a book discussing education and its results. Unfortunately, it was a rather long time ago (over 10 years) and I cannot find it, which is a shame. I have not actually read the book (or I’d certainly remember its title and author name), only the article about it. I tried Googling key phrases from it in an attempt to locate it but couldn’t. If anyone knows the book I’m describing, please comment.&lt;/p&gt;

&lt;p&gt;At any rate, for those of you that don’t know, it has become common practice to give children an award, not for being the best at something but merely, for SHOWING UP. Yes, that’s right. A competitive event is held, one child performs better than all the others, others perform well, others still perform poorly and EVERYONE gets an award.&lt;/p&gt;

&lt;p&gt;When I was growing up, we did not get awards for showing up, and learning that this is now done drove me absolutely bonkers. At that time, what worried me (and the author of this book I cannot find) was that this sends the wrong message to children and will produce adults that cannot handle the real world. What I failed to predict was that, instead, the world would be changed by these very children.&lt;/p&gt;

&lt;p&gt;Let’s take a step back here and talk about the generation of my parents (just to frame this — I am 41). My grandfather was my sole male role model growing up. I idolised him, I still do and he absolutely deserves it. As a child, he taught me how to do pretty much everything but computer programming, which I taught myself (I don’t believe he ever touched a computer, frankly, but I think, had he been born a bit later, he’d have loved it). He was always kind, patient and accepting. I owe him everything. He never hit me or any of my siblings and I can only remember ONE occasion in which he yelled at us and even then, it was only because he was concerned that we were disturbing my nan.&lt;/p&gt;

&lt;p&gt;Why do I bring this up in this context? Because my mum once told me that, instructed by nan, he used to hit her and her siblings. My first reaction was complete shock and I was even inclined to believe that she was lying (we’ve a very complicated relationship and I don’t trust her) but upon reflection, I suppose it’s probably true. I’m POSITIVE that he didn’t want to do that but yes, it was a different time and hitting children was considered an acceptable and even educational form of punishment. That proved to be bad practice so we stopped doing it, which is a good thing!&lt;/p&gt;

&lt;p&gt;Every generation thinks they are the best and that the preceding and subsequent ones are rubbish. I am aware of this and am not an exception to the rule. Further, the blessed influence of my granddad aside, I had a most rotten childhood so, I will not bring my memories of how my parents handled me as an example to support any of the observations I make in the following few paragraphs (I do lean into my personal experience towards the end of this post though — it’s unavoidable).&lt;/p&gt;

&lt;p&gt;Having said that, one’s parents are not the sole influence on one’s young life and, reflecting on the general messages I have absorbed growing up, it was obvious that physical violence towards children (or anyone for that matter) is discouraged. It was equally clear that in order to get praise and do well, one has to work hard. I can’t recall an incident where the underlying message was that everyone is amazing and it’s enough to just SHOW UP.&lt;/p&gt;

&lt;p&gt;I don’t mean to imply, by any means, that the educational methods applied to me and my peers were perfect. Far from it. And I have mates that suffered unjustly because of various syndromes I believe they now call learning disabilities (forgive me if that’s not the correct PC term at the time of writing this; it’s hard to keep track).&lt;/p&gt;

&lt;p&gt;So, that was my world growing up. Imperfect and often unfair towards those who are substantially different (substantially because everyone is different in one way or the other and feels it).&lt;/p&gt;

&lt;p&gt;In some ways, one could say, perhaps rightly, that things are better now; at least in terms of accepting diversity of thought and abilities.&lt;br&gt;&lt;br&gt;
For example, I have Asperger’s. I went through my entire childhood and much of my adult life without being diagnosed. I was finally diagnosed at the age of 30, by a psychiatrist whom I came to consult about a, then new to me, syndrome called OCD. Would I have been better off growing up now? In that particular sense, perhaps. It’s possible that a young Jesse growing up today would have been diagnosed as autistic, rather than just labelled as weird and, it would have probably been a very good thing. At least in my case. I do know that this diagnosis, late as it was, has helped me: I can now refer people to a Wikipedia page and get a more favourable treatment when I behave “weirdly”.&lt;/p&gt;

&lt;p&gt;But I think we’re now taking it to the other extreme. It seems to me that everyone (parents, teachers, the whole world and his sister) is afraid to give any criticism lest they hurt the child’s fragile soul. And this is what the participant award is all about. We’re so fearful that, if we were to say John or Jill did best at a competition, the other children will get upset and will be scarred for life so instead, we give EVERYONE an award. THIS IS WRONG.&lt;/p&gt;

&lt;p&gt;It’s unfair to John and Jill because we’re not acknowledging their hard work and talent and it’s unfair to all the other children because we’re not encouraging them to find something they excel at. Everyone cannot be special at everything. If everyone is special then — NO ONE IS.&lt;/p&gt;

&lt;p&gt;People of all ages want to be acknowledged for their special abilities and efforts and they want to be guided as to how to find and capitalise on these.&lt;/p&gt;

&lt;p&gt;If you’re told that everyone is equally amazing and special, you’re essentially being told that there’s no reason to work hard. Which brings us back to the RTFM point. People don’t read the manual because, as children, they learned, from these very incidents and messages, that they don’t have to work hard to be successful or appreciated and that they, like everyone else, are amazing.&lt;/p&gt;

&lt;p&gt;Let’s focus on this one word: amazing. It seems to be used so… liberally these days. So much so that, one day, I actually looked its definition up just to establish that I hadn’t gotten it wrong all these years! Here it is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Causing great surprise or wonder; astonishing.&lt;br&gt;&lt;br&gt;
Informal: very impressive; excellent.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Right. So, as I suspected, I did not get it wrong. When I use the word amazing, which I am incredibly discerning about, I mean exactly that: very impressive, excellent. I use it sparingly, because, let’s face it, most things are NOT very impressive or excellent. We have many other words to describe things that are not THAT; for example: ordinary, regular, normal, usual (and dozens if not hundreds of other words, in EVERY language).&lt;/p&gt;

&lt;p&gt;To this day, and even after obtaining all the above insights, I am still extremely disappointed, borderline crestfallen even, when, after hearing “amazing” used in reference to something I worked hard on and that required skill, the same word is used to describe a most minimal effort.&lt;/p&gt;

&lt;p&gt;Giving undeserved accolades and awards is hurting everyone, including the very people you’re trying to make feel good by doing so. Let me give you an example:&lt;/p&gt;

&lt;p&gt;I like music. Not as much as I like programming but I like it. I started coding when I was 9. Not only did I receive no encouragement from my parents, I was actually discouraged by my mother (who thought and probably still does, that computers are an utter waste of time) and my efforts were further hindered by the fact that my sperm donor (I know him, I just don’t call him father) occupied our only computer through large portions of the day playing video games.&lt;/p&gt;

&lt;p&gt;If you’re reading this and thinking that it’s very sad, I agree with you but here’s my actual point: when I was 10, I took some guitar lessons. This, I WAS encouraged to do because my mother liked music (presumably, I don’t recall her ever actually listening or commenting on music but I also don’t recall her saying it’s a waste of time) and my sperm donor was allegedly a rather gifted guitar player (I say allegedly because, as you’ll soon come to realise, I am in no position to judge such abilities). My mum paid for these lessons and I don’t recall having to beg or even vigorously ask for it (programming I learned by myself from reading programmes that came with QBASIC and, when I was 14, after having programmed a bit with QBASIC, I had to BEG her quite a bit until she agreed to pay for a book about C).&lt;/p&gt;

&lt;p&gt;Anyway, I was no good. I don’t recall anyone actively telling me that I was rubbish at it (maybe they did and it just didn’t matter enough to register) but I definitely knew I was not very good and I felt no inner compulsion to work hard at it either so, soon enough, I stopped the lessons.&lt;/p&gt;

&lt;p&gt;While it would have been nice to be able to play well, imagine a world where, although I was no good, I’d have been told that I am amazing at it. Well, for one thing, whether you’re 10 years old or 41, we all like praise. I’d have probably kept playing (badly, while wasting my mum’s most limited funds) and maybe, I’d even develop a ridiculous dream (ridiculous in light of my lack of talent, I’m not saying NO ONE should cultivate that dream, there are, after all successful, excellent musicians that made an impact on many people, myself included) of becoming a rock star. Why is that bad? Because:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;a. I am not talented at playing the guitar and could never, no matter how hard I worked, become amazing enough to succeed professionally&lt;/p&gt;

&lt;p&gt;b. Because of my desire to excel, I’d be immensely frustrated by a.&lt;/p&gt;

&lt;p&gt;c. I would have given less attention to something that I AM good at, love and make a decent living from, to wit: programming&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, is it unfortunate that my mother did not encourage my natural tendencies? Very. Would I have become a happier adult if she had? ABSOLUTELY. But, would taking the opposite approach and encouraging me to stick with something I am clearly NOT good at resulted in a better outcome? ABSOLUTELY NOT.&lt;/p&gt;

&lt;p&gt;And the same is true of how we engage with adults in all capacities. We should never tell someone they’re amazing at something when they’re not, just to be “nice” or to avoid insulting them. Misleading someone isn’t nice. It’s counterproductive: to their well-being, yours, the organisation and society at large.&lt;/p&gt;

&lt;p&gt;We SHOULD expect people to read manuals and learn on their own. We should not spoon-feed them and, if we’ve given them ample opportunity to perform and they haven’t, we should tell them so. I’m not saying we should be brutal and make them cry, of course. Giving constructive criticism is a very complex art, indeed, one that managers, teachers and parents should invest time in perfecting, but avoiding it because it doesn’t feel “nice” is not a solution.&lt;/p&gt;

&lt;p&gt;Here’s to applying yourself,&lt;/p&gt;

</description>
      <category>rtfm</category>
      <category>learning</category>
      <category>awards</category>
      <category>asperger</category>
    </item>
    <item>
      <title>Docker is not a packaging tool — part 1</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Sat, 06 May 2023 13:57:00 +0000</pubDate>
      <link>https://dev.to/jessp01/docker-is-not-a-packaging-tool-part-1-11dg</link>
      <guid>https://dev.to/jessp01/docker-is-not-a-packaging-tool-part-1-11dg</guid>
      <description>&lt;p&gt;This is the second part of the series so, if you landed here, you should go back to the &lt;a href="https://dev.to/jessp01/docker-is-not-a-packaging-tool-intro-1mfo"&gt;first segment&lt;/a&gt; where I gave a short intro and some arguments to solidify my basic stance (to wit: that Docker is not a packaging tool).&lt;/p&gt;

&lt;p&gt;So then, where were we? Right, in the last episode, I promised to tell you about the challenge of delivering software to multiple UNIX and Linux distributions (same same but different) and how &lt;code&gt;chroot&lt;/code&gt; was leveraged towards this objective…&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The challenges&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;As I was saying, the product had to run on basically anything that has &lt;code&gt;uname&lt;/code&gt; deployed on it and consisted of many different components written in C/C++, which, in turn, depended on third-party FOSS components, also written in C/C++.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first thing you need to be able to do is build these components in a clean ENV.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’ve engaged in building and packaging software in a serious manner, that bit would be obvious to you but for the benefit of those who haven’t — this process taints the FS by definition.&lt;br&gt;&lt;br&gt;
Why? Because anything that’s more than a very basic “hello world”-like project has build and run-time dependencies and, in many (this is my cautious nature; the word to use is MOST) cases, these are of very specific versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now, if you’re building loads of different components, chances are they will share some common dependencies but, and here’s the kicker, the versions will vary.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Typically, on a UNIX system (reminder: that includes Linux as well), versioning of shared libraries is handled thusly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The version of an SO (Shared Object) will be reflected in its name, say:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;libevent-1.4.so.2.2.0&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;A symlink called &lt;code&gt;lib$NAME.so&lt;/code&gt; will be created, pointing to the actual SO, like this:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;libevent.so -&amp;gt; libevent-1.4.so.2.2.0&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sometimes, the symlink will also include the major version, e.g.:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;/lib/x86_64-linux-gnu/libzstd.so.1 -&amp;gt; libzstd.so.1.5.2&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
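&lt;p&gt;This naming scheme is easy to reproduce with dummy files. The sketch below (names borrowed from the examples above) combines both conventions, with the unversioned link-time symlink resolving via the major-version one:&lt;/p&gt;

```shell
# Recreate the shared-object naming scheme with empty placeholder files.
sodir="$(mktemp -d)"
cd "$sodir"
touch libevent-1.4.so.2.2.0               # the versioned shared object itself
ln -s libevent-1.4.so.2.2.0 libevent.so.2 # major-version symlink (matches the SONAME)
ln -s libevent.so.2 libevent.so           # unversioned symlink the linker uses for -levent
link_target="$(readlink libevent.so)"
echo "libevent.so -> $link_target"
```

&lt;p&gt;At run time, the dynamic loader resolves the SONAME (&lt;code&gt;libevent.so.2&lt;/code&gt; here), while the unversioned symlink exists purely for link time; this separation is what allows several major versions to coexist.&lt;/p&gt;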

&lt;p&gt;This allows for some co-existence of versions but is not enough to guarantee a reliable build process. Moreover, sometimes you want to produce statically linked binaries, in which case you’ll want to link against archives rather than SOs, and these typically do not reflect the version in their names at all; they are simply called &lt;code&gt;$NAME.a&lt;/code&gt; (for example: &lt;code&gt;libevent.a&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;So that’s one reason why you need to start with a clean ENV each time but — it’s not the only one.&lt;/p&gt;

&lt;p&gt;Stay tuned to learn how &lt;code&gt;pkg-config&lt;/code&gt; attempts to address some of these issues, what other problems one has to tackle in the pursuit of a solid build process, why proper packaging greatly improves matters, how utilising chroots made things far easier, why Docker was the next evolutionary step and, lastly, why, while grand, it is not an all-encompassing, magic-holy-grail solution to all your build and deployment headaches.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>packaging</category>
      <category>build</category>
    </item>
    <item>
      <title>Export Medium posts as Markdown</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Thu, 04 May 2023 16:17:38 +0000</pubDate>
      <link>https://dev.to/jessp01/export-medium-posts-as-markdown-3p4</link>
      <guid>https://dev.to/jessp01/export-medium-posts-as-markdown-3p4</guid>
      <description>&lt;p&gt;First of all, why? Well, In my case, I write on Medium with the, albeit unlikely, hope that one day, I may be popular enough to generate a modest revenue through its &lt;a href="https://help.medium.com/hc/en-us/articles/115011694187-Getting-started-with-the-Partner-Program"&gt;partner programme&lt;/a&gt; (while we’re on that subject, if you like my posts, it would be nice if you followed me but don’t feel obligated). However, I also have my own site — &lt;a href="https://packman.io"&gt;packman.io&lt;/a&gt;, which is based on Jekyll and also has a blog section.&lt;/p&gt;

&lt;p&gt;For those that don’t know &lt;a href="https://jekyllrb.com/"&gt;Jekyll&lt;/a&gt;, it is a static site generator written in Ruby and distributed under the open source MIT license. If you need a portfolio/blog/documentation website, I strongly recommend you give it a go. I intend to write a post about how I make use of it to generate my own site soon but for now, suffice it to say that if a user landed on &lt;a href="https://packman.io"&gt;https://packman.io&lt;/a&gt;, I don’t want to direct them away from it by sending them to read my posts on Medium. Besides, my site supports both light and dark mode, which I think is very important because white backgrounds really hurt my eyes (by the way — if you’re like me, I’d also recommend &lt;a href="https://darkreader.org/"&gt;Dark Reader&lt;/a&gt;, for all those inconsiderate sites that do not support dark mode natively).&lt;/p&gt;

&lt;p&gt;Jekyll takes Markdown (MD) files as input and, using a templating mechanism, produces HTML files out of them. And so, I’ve written the small script below to fetch my Medium content and convert it to MD files Jekyll can do its magic on. Without further ado, here it is, with the hope that it will be of use to you as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'feedjira'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'httparty'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'nokogiri'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'reverse_markdown'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'fileutils'&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="no"&gt;ARGV&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Usage: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="kp"&gt;__FILE__&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;" &amp;lt;medium user without the '@'&amp;gt; &amp;lt;/path/to/output&amp;gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;exit&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;medium_user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ARGV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;output_dir&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ARGV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="no"&gt;FileUtils&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mkdir_p&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_dir&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;xml&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;HTTParty&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"https://medium.com/feed/@&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;medium_user&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;body&lt;/span&gt;
&lt;span class="n"&gt;feed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Feedjira&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;xml&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;feed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="c1"&gt;# normalise `title` to arrive at a reasonable filename&lt;/span&gt;
    &lt;span class="n"&gt;published_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;published&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"%Y-%m-%d"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;output_dir&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s1"&gt;'/'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;published_date&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s1"&gt;'-'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gsub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/[^0-9a-z\s]/i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;gsub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/\s+/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'-'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s1"&gt;'.md'&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; already exists. Skipping.."&lt;/span&gt;
    &lt;span class="k"&gt;next&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;

    &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;content&lt;/span&gt;
    &lt;span class="n"&gt;parseHTML&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Nokogiri&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;HTML&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parseHTML&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;xpath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"//img"&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s1"&gt;'src'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;sub!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/http(s)?:/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Medium feed includes the hero image in the `content` field. Since Jekyll and other systems will probably render the hero image separately, remove it from the HTML before generating the Markdown&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sub!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&amp;lt;figure&amp;gt;&amp;lt;img\salt="([\w\.\-])?"\ssrc="https:\/\/cdn-images-1.medium.com\/max\/[0-9]+\/[0-9]\*[0-9a-zA-Z._-]+"\s\/&amp;gt;(\&amp;lt;figcaption\&amp;gt;.*\&amp;lt;\/figcaption\&amp;gt;)?&amp;lt;\/figure&amp;gt;/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ReverseMarkdown&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;convert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;gsub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/\\n/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;meta&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;META&lt;/span&gt;&lt;span class="sh"&gt;
---
layout: post
author: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;author&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
title: "&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"
date: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;published&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
background: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;
---

&lt;/span&gt;&lt;span class="no"&gt;    META&lt;/span&gt;

    &lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;meta&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to download it rather than copy and paste, it’s available from &lt;a href="https://gitlab.com/-/snippets/2532776/raw/main/medium_to_md.rb?inline=false"&gt;GitLab&lt;/a&gt; as well.&lt;/p&gt;

&lt;p&gt;Invoke it like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./medium_to_md.rb &amp;lt;medium user without the '@'&amp;gt; &amp;lt;/path/to/output&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It will generate a clean markdown file that includes the metadata (&lt;code&gt;front matter&lt;/code&gt; in Jekyll terminology) from the original Medium post; e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;layout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;post&lt;/span&gt;
&lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Jesse Portnoy&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Capture your users attention with style&lt;/span&gt;
&lt;span class="na"&gt;date&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2023-04-23 20:23:44 UTC&lt;/span&gt;
&lt;span class="na"&gt;background&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;//cdn-images-1.medium.com/max/1024/1*TlDFO_bhcRPJDMxEceyeyw.png&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;May the source be with you,&lt;/p&gt;

</description>
      <category>medium</category>
      <category>export</category>
      <category>markdown</category>
      <category>jekyll</category>
    </item>
    <item>
      <title>Take your BASH scripting seriously</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Thu, 04 May 2023 15:29:45 +0000</pubDate>
      <link>https://dev.to/jessp01/take-your-bash-scripting-seriously-34je</link>
      <guid>https://dev.to/jessp01/take-your-bash-scripting-seriously-34je</guid>
      <description>&lt;p&gt;In 2007, the legendary Larry Wall, creator of Perl, wrote an article called &lt;a href="https://www.perl.com/pub/2007/12/06/soto-11.html/"&gt;Programming is Hard, Let’s Go Scripting&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
I’d recommend giving it a good read but here are some especially important quotes I think are worth pondering:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;.. scripting is not a technical term. When we call something a scripting language, we’re primarily making a linguistic and cultural judgment, not a technical judgment.&lt;/p&gt;

&lt;p&gt;Suppose you went back to Ada Lovelace and asked her the difference between a script and a program. She’d probably look at you funny, then say something like: Well, a script is what you give the actors, but a program is what you give the audience. That Ada was one sharp lady…&lt;/p&gt;

&lt;p&gt;Since her time, we seem to have gotten a bit more confused about what we mean when we say scripting. It confuses even me, and I’m supposed to be one of the experts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I absolutely agree. Why am I bringing this up in this context? Because it seems to me that many people are under the impression that if it’s “just a script”, it needn’t be written with care, and that doing a poor job of it doesn’t reflect on you as a “serious” programmer. Nothing could be further from the truth. If you’re a professional programmer, you should take your script writing very seriously. It’s not purely a matter of professional pride, either. Take a moment and think of what you often use shell scripts for…&lt;/p&gt;

&lt;p&gt;And… time’s up! I’d venture that for at least 70% of readers, installation/deployment/initialisation came to mind. Now, let me ask you this: if your installation process is poorly written and error-prone, how will people ever get to use your otherwise brilliantly written software? That’s right, they won’t. They’ll invoke the installation script (be it directly or via a package manager running through its different hooks), it will fail and, unless they absolutely MUST have it (which, let’s face it, is hardly ever the case, as there are alternatives to almost everything out there), they’ll curse some and move on to the next plausible solution.&lt;/p&gt;

&lt;p&gt;Okay, so, hopefully, I convinced you that it IS important to get that bit right. Now let’s discuss some ways to do so. I’ll specifically focus on BASH here and the first, crucial point we’ll discuss is this:&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;/bin/sh does not always mean /bin/bash&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In the olden days, that was only true of non-Linux Unices. On Linux, /bin/sh was always (by default; of course, you were able to change that) a symlink pointing to /bin/bash. So, if your product only ran on Linux, you could save yourself the headache of thinking about compatibility with other shells.&lt;/p&gt;

&lt;p&gt;At a certain point, however, some distros (I first encountered it on Ubuntu back in 2007, but it soon became the norm on Debian as well) started using DASH as the default shell. Why? Because while BASH is lovely, feature-rich and wonderful as an interactive shell, it is also, due to all these pleasing traits, slower; since init scripts are traditionally written to be run by a Bourne-compatible shell and people HATE waiting for things to boot, a transition was made.&lt;/p&gt;

&lt;p&gt;Regardless of the move to DASH, I was never fond of ignoring all shells but BASH myself, since even if, at a given moment, you’re only targeting Linux, where you know BASH will likely be present, you don’t want shell compatibility issues (of all things!) to be a blocker to porting your project to other ENVs.&lt;/p&gt;

&lt;p&gt;I’m not saying you have to write a version for every shell under the moon (in fact, I’d even go as far as to say you absolutely shouldn’t), but it’s good practice to be aware that BASH has features which other Bourne-compatible shells (let alone shells that do not share this common base at all) may lack and, when it’s not too big of a hassle, to stick to the common denominator.&lt;/p&gt;

&lt;p&gt;When you absolutely do need BASH-specific features, specifying &lt;code&gt;#!/bin/bash&lt;/code&gt; or, better yet, &lt;code&gt;#!/usr/bin/env bash&lt;/code&gt; will prevent users from trying to run your script with other shells. If BASH isn’t present, the script will fail straight away (which may sound bad to novices but is actually a good thing), whereas if you use /bin/sh and end up with your script running under, say, DASH, it may only fail later down the line, in a more confusing and frustrating way, &lt;strong&gt;after having performed actions that may have left the system in a half-baked state&lt;/strong&gt;.&lt;/p&gt;
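&lt;p&gt;If you’d rather degrade gracefully than fail fast, another option is a small /bin/sh-compatible preamble that re-execs the script under BASH when it’s available. This is a minimal sketch of my own (not a universal recipe; the exit code is arbitrary), relying on the fact that BASH sets BASH_VERSION while POSIX shells leave it empty:&lt;/p&gt;

```shell
#!/bin/sh
# Portable preamble (a sketch): BASH sets BASH_VERSION; POSIX shells leave it
# empty, so we can detect the current interpreter and re-exec under BASH.
if [ -z "$BASH_VERSION" ]; then
    if command -v bash >/dev/null; then
        exec bash "$0" "$@"
    fi
    echo "This script requires BASH."
    exit 3
fi
echo "Running under BASH $BASH_VERSION"
```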

&lt;p&gt;Alright, at this point, you may be thinking: "&lt;em&gt;Okay, you’ve convinced me. From now on, I’ll be more explicit and people will know they must have BASH present for the installation to succeed and run properly but, I’m not going to avoid my beloved BASH features.&lt;/em&gt;".&lt;/p&gt;

&lt;p&gt;Indeed, in most cases, especially if your target audience consists mostly of Linux users and you already wrote loads of code, this is a reasonable approach. There’s one small fact I’d be remiss not to mention though: BASH is.. chubby:)&lt;/p&gt;

&lt;p&gt;On my Debian machine:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;jesse@jessex:~/kalcli_saas$ du -sxh /bin/bash&lt;br&gt;&lt;br&gt;
1.2M /bin/bash&lt;br&gt;&lt;br&gt;
jesse@jessex:~/kalcli_saas$ du -sxh /bin/dash&lt;br&gt;&lt;br&gt;
124K /bin/dash&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This difference may feel negligible to you and, in most cases, it really is; if you’re in the embedded business, though, it may not be. Just something to keep in mind.&lt;/p&gt;

&lt;p&gt;Right. So, how can one check whether one’s scripts are compatible? Well, other than running them (which is easy enough but, depending on what they do, may take some time), the simplest approach is invoking your shell with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;-n /path/to/script&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For example, here’s a snippet that’s BASH specific:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash    &lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nv"&gt;i&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&amp;lt;100&lt;span class="p"&gt;;&lt;/span&gt;i++&lt;span class="o"&gt;)){&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run this with BASH, it will output what you’d expect it to.&lt;br&gt;&lt;br&gt;
If you run this with DASH, however, you’ll get:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;for.sh: 3: Syntax error: Bad for loop variable&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you run bash -n for.sh, you will get no output and the RC will be zero (note to self: write a post some day about how elegant this method of denoting success or failure is); if you invoke dash -n for.sh, you will get the same error as when running the script without -n. In other words, passing -n to your shell invokes a simple, built-in linter.&lt;/p&gt;
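&lt;p&gt;To see that built-in linter in action without executing anything, here’s a small sketch (the file path is mine, and I’m using another well-known bashism, arrays, as the trigger; it assumes dash is installed for the second check):&lt;/p&gt;

```shell
# A one-liner that uses a bashism (arrays), saved to a file; -n asks the
# shell to parse without executing.
printf 'arr=(one two three)\necho "${arr[1]}"\n' > /tmp/arr_check.sh

bash -n /tmp/arr_check.sh
echo "bash -n RC: $?"    # 0: bash parses it fine

if command -v dash >/dev/null; then
    dash -n /tmp/arr_check.sh
    echo "dash -n RC: $?"    # non-zero: dash reports a syntax error
fi
```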

&lt;p&gt;This is super easy but may not always be enough. Luckily, the fine people at &lt;a href="https://github.com/koalaman/shellcheck"&gt;ShellCheck&lt;/a&gt; gave this some thought and created a nifty tool. It’s got a nice README, is available via most decent package managers and even has a &lt;a href="https://www.shellcheck.net/"&gt;web interface where you can test snippets&lt;/a&gt; (in case you’re bored and on the tube, I guess). The README also includes a &lt;a href="https://github.com/koalaman/shellcheck/blob/master/README.md#gallery-of-bad-code"&gt;Gallery of bad code&lt;/a&gt; to showcase some of the stuff shellcheck can pick up on.&lt;/p&gt;

&lt;p&gt;Okay, excellent. So far, we’ve covered why it’s important to make no assumptions as to the default shell, how to check whether our scripts depend on BASH-specific features and why that may be a problem. Let’s move on to the next painful mistake people make when writing shell scripts.&lt;/p&gt;
&lt;h3&gt;
  
  
  Unbridled Optimism
&lt;/h3&gt;

&lt;p&gt;The fun thing about shell scripting is that it’s mostly just glue. You use shell features and constructs to chain together different utilities with some logic.&lt;/p&gt;

&lt;p&gt;Often, these scripts are written with the [overly naive] assumption that all ENVs will have the same utils.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Never assume. Always check.&lt;/strong&gt; For instance, if your script needs ffmpeg, be sure to start it off with a simple which check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# check that we have the needed binaries&lt;/span&gt;
&lt;span class="nv"&gt;BINARIES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ffmpeg avifenc"&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;BINARY &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$BINARIES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;which &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BINARY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Need to install &lt;/span&gt;&lt;span class="nv"&gt;$BINARY&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;2
    &lt;span class="k"&gt;fi
done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above, we’re actually checking for two binaries; of course, you can add as many as you need.&lt;/p&gt;

&lt;p&gt;Of course, correctly declaring your deps is made far easier if you use standard packaging formats (e.g. deb, RPM, etc.) and, one day, I’ll finish my &lt;a href="https://medium.com/@jesse_62134/docker-is-not-a-packaging-tool-e494d9570e01?source=user_profile---------6----------------------------"&gt;Docker is not a packaging tool&lt;/a&gt; series and solidify that point further:) Still, even when using proper packaging tools, it does not hurt to have these defences in place, as you never know in which context your scripts may run (copy-paste party, anyone? No? Perhaps a porting party, then?).&lt;/p&gt;

&lt;p&gt;Some people reading this may say that checking for the needed dependencies is redundant, since you can toggle BASH’s -e option to make it exit upon any failure. This is true, and -e is a very important flag that I do intend to discuss at some length, but I’d argue that outputting an orderly message and exiting with a pre-defined RC is better whenever possible.&lt;/p&gt;
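&lt;p&gt;As an aside, if you’d rather not depend on which itself being installed, the POSIX command -v builtin does the same job without spawning an external process. A hedged variant of the loop above (the binaries listed are placeholders I chose so the sketch runs anywhere; substitute ffmpeg, avifenc and friends):&lt;/p&gt;

```shell
#!/bin/sh
# Same dependency check, using the POSIX `command -v` builtin instead of
# the external `which`. Binary names are examples; substitute your own.
BINARIES="sh ls"
for BINARY in $BINARIES; do
    if ! command -v "$BINARY" >/dev/null; then
        echo "Need to install $BINARY."
        exit 2
    fi
done
echo "All dependencies present."
```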

&lt;p&gt;Next is a slightly more annoying and related problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Certain utils may have different flags across Unices
&lt;/h3&gt;

&lt;p&gt;I am a Linux user. It’s not that I don’t respect my fellow *BSD chaps or any of the other FOSS Unices out there; I do, and there are advantages and disadvantages to everything, but I will say this: Linux distributions, generally speaking, come with the most pampering utils included:)&lt;/p&gt;

&lt;p&gt;Let me choose just two favourite examples (I could give dozens off the top of my head without consulting the man pages once, but that’s just me bragging; not a fetching trait, alas):&lt;/p&gt;

&lt;p&gt;On Linux, I can invoke the standard netstat util (part of the net-tools package and, yes, I know it’s obsoleted by the &lt;code&gt;ss&lt;/code&gt; util; I don’t want to get into that right now, though), thusly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;netstat &lt;span class="nt"&gt;-plntu&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And get this, most useful output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1181016/nginx: mast
tcp 0 0 127.0.0.1:8080 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 172070/python3
tcp 0 0 0.0.0.0:22 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 51144/sshd: /usr/sb
tcp 0 0 127.0.0.1:5432 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1581154/postgres
tcp 0 0 0.0.0.0:25 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1589222/master
tcp 0 0 0.0.0.0:443 0.0.0.0:&lt;span class="k"&gt;*&lt;/span&gt; LISTEN 1181016/nginx: mast
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, I can see the local addr, the foreign one, the ports AND which process is listening, and even its PID(!) if I’m a privileged enough user. Very handy, indeed. Go ahead, try that on Darwin (the real power behind Mac OS) or, if you’re feeling very adventurous and can find one, on AIX:)&lt;/p&gt;

&lt;p&gt;Want another example? Let’s take our beloved awk. On Linux, the version you’ll find on most (if not all; I’m just super cautious by nature) distros is GNU AWK (gawk), where the default field separator is wisely set to whitespace, so if I wanted to use it to get the Local Address column from the above netstat output, I could simply do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;netstat &lt;span class="nt"&gt;-plntu&lt;/span&gt;| &lt;span class="nb"&gt;awk&lt;/span&gt; ‘&lt;span class="o"&gt;{&lt;/span&gt;print &lt;span class="nv"&gt;$4&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;’
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Local
0.0.0.0:80
127.0.0.1:8080
0.0.0.0:22
127.0.0.1:5432
0.0.0.0:25
0.0.0.0:443
:::80
:::22
:::3000
:::25
:::443
127.0.0.1:323
::1:323
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On all the other Unices I’ve ever worked with (and I’ve worked on many), you need to explicitly specify -F " " to get the same behaviour, which is why I always do so; and so should you:)&lt;/p&gt;
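&lt;p&gt;A sketch of the explicit form (the netstat part obviously depends on your machine, so it’s guarded; the printf line demonstrates the splitting on canned input):&lt;/p&gt;

```shell
# Passing -F ' ' makes the whitespace splitting explicit; it's the default
# on gawk, but spelling it out keeps the behaviour predictable elsewhere.
if command -v netstat >/dev/null; then
    netstat -plntu | awk -F ' ' '{print $4}'
fi

# The same splitting, demonstrated without netstat:
printf 'tcp 0 0 0.0.0.0:80 LISTEN\n' | awk -F ' ' '{print $4}'   # prints 0.0.0.0:80
```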

&lt;p&gt;How can you catch these cases without trying? You can’t, really; you simply have to test your scripts on as many Unices as you can get your hands on and encourage your users to report issues by being kind and appreciative when they do.&lt;/p&gt;

&lt;p&gt;Right, on to my next piece of advice.&lt;/p&gt;

&lt;h3&gt;
  
  
  Status codes exist for a reason
&lt;/h3&gt;

&lt;p&gt;A common (and odd!) tendency people have, when they do bother to add checks to their scripts, is to echo some message [in case of failure] and invoke exit without specifying a status code (or, almost as bad, to always use the same one).&lt;/p&gt;

&lt;p&gt;Why is this so bad? Because it will not always be a human being interactively running the script. If you’re writing an init script, for example (and yes, on Linux, most people have switched to &lt;code&gt;systemd&lt;/code&gt; and friends, but there are still plenty of older distros, supported by many projects, that do not have systemd support; plus, Linux is not the only animal out there), or a test meant to be run by a CI/CD solution, having well-defined status codes is imperative. By the way, this is also true when writing RESTful APIs (nothing is more annoying than an API whose authors feel all cases can be perfectly covered by returning either HTTP 200 or HTTP 404!).&lt;/p&gt;
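&lt;p&gt;A short sketch of what that looks like in practice (the specific codes and the helper name are my own convention, not a standard; the binary checked is a placeholder so the sketch runs anywhere):&lt;/p&gt;

```shell
#!/bin/sh
# Distinct, documented status codes let non-human callers (CI jobs, init
# scripts, package hooks) tell failures apart instead of just seeing "it failed".
RC_MISSING_DEP=2
RC_BAD_USAGE=3

require_binary() {
    if ! command -v "$1" >/dev/null; then
        echo "Need to install $1."
        exit $RC_MISSING_DEP
    fi
}

require_binary sh    # substitute ffmpeg, avifenc, ... as needed

# A caller can now branch on the specific code, e.g.:
#   ./install.sh; rc=$?
#   case $rc in 2) install_deps ;; 3) print_usage ;; esac
echo "All good."
```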

&lt;p&gt;This ends part one of this post. In the next instalment, we’ll cover other useful BASH flags/modifiers (-e, -o, -x), discuss trapping and handling errors, and see how to make our users’ (and our own) lives easier with proper argument parsing and usage messages.&lt;/p&gt;

&lt;p&gt;Happy scripting,&lt;/p&gt;

</description>
      <category>bash</category>
      <category>linux</category>
      <category>unix</category>
      <category>scripting</category>
    </item>
    <item>
      <title>Docker is not a packaging tool — intro</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Mon, 01 May 2023 14:55:00 +0000</pubDate>
      <link>https://dev.to/jessp01/docker-is-not-a-packaging-tool-intro-1mfo</link>
      <guid>https://dev.to/jessp01/docker-is-not-a-packaging-tool-intro-1mfo</guid>
      <description>&lt;p&gt;This series of posts will take you all the way from chrooted ENVs to Docker containers and attempt to explain why, while Docker is a great tool, proper packaging of software remains as relevant as ever.&lt;/p&gt;

&lt;p&gt;If you’re reading this, you probably already heard of Docker and likely also used it; if not for your own projects, then to deploy others’. And so, you may think there’s nothing I could tell you about it that would surprise you. Let’s find out, shall we? Give it a go; I will try to make it amusing, too:)&lt;/p&gt;

&lt;p&gt;If you go to &lt;a href="https://en.wikipedia.org/wiki/Docker_(software)"&gt;https://en.wikipedia.org/wiki/Docker_(software)&lt;/a&gt;, the first paragraph you’ll encounter is: &lt;em&gt;“&lt;/em&gt; &lt;strong&gt;&lt;em&gt;Docker&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;is a set of&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Platform_as_a_service"&gt;&lt;em&gt;platform as a service&lt;/em&gt;&lt;/a&gt; &lt;em&gt;(PaaS) products that use&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/OS-level_virtualization"&gt;&lt;em&gt;OS-level virtualization&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to deliver software in packages called&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Container_(virtualization)"&gt;&lt;em&gt;containers&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Packages called containers?! Eh, let’s take a step back from tech terms and discuss English; &lt;strong&gt;Container&lt;/strong&gt; , as defined by Oxford: &lt;em&gt;“an object for holding or&lt;/em&gt; &lt;a href="https://www.google.com/search?client=firefox-b-e&amp;amp;sxsrf=APwXEddj7Nl83J1RsbxlOTy327nrCceV1Q:1680385537848&amp;amp;q=transporting&amp;amp;si=AMnBZoFOMBUphduq9VwZxsuReC7YZB-GgKujLv6p8BFX2GIRJnMlxU3HgdYDP7WYg6boTkRRuaU0gl_y0LXpbTy5gXz5GZgtH4cIZAr9sckuI3pUJWE9VS0%3D&amp;amp;expnd=1"&gt;&lt;em&gt;transporting&lt;/em&gt;&lt;/a&gt; &lt;em&gt;something.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yeah, matches my understanding of the word, wouldn’t you agree?&lt;/p&gt;

&lt;p&gt;Okay, now the same for &lt;strong&gt;Package:&lt;/strong&gt; &lt;em&gt;“an object or group of objects wrapped in paper or packed in a box.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Again, fair enough, right?&lt;/p&gt;

&lt;p&gt;I hate metaphors but I do like analogies, and this one is pretty good, so try it on for size: there’s a container, filled with packages, sitting on the docks in the harbour. At some point, the crew will go in and unpack these packages and, after some processing, they’ll eventually arrive at their different destinations.&lt;/p&gt;

&lt;p&gt;Now, is a container the same thing as a package? That’s right, it isn’t. Definitely not in the physical world but as I’ll attempt to explain — not in software, either.&lt;/p&gt;

&lt;p&gt;Before we do that, though, let’s give Wikipedia another chance (it deserves it) and see if we can find some interesting paragraphs about Docker that I don’t dispute…&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Containers are isolated from one another and bundle their own software,&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Library_(computing)"&gt;&lt;em&gt;libraries&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and configuration files; they can communicate with each other through well-defined channels. Because all of the containers share the services of a single&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Kernel_(operating_system)"&gt;&lt;em&gt;operating system kernel&lt;/em&gt;&lt;/a&gt;&lt;em&gt;, they use fewer resources than&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Virtual_machine"&gt;&lt;em&gt;virtual machines&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;See? told you they deserve a chance. This is a good, succinct description of some of the benefits of containers. They indeed have a smaller footprint than VMs and that’s a big plus but I’d like to focus on the isolation bit for a moment. Isolation provides two main advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;[Relative] Decoupling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we go on to explain how Docker helps us with the above, I feel it would be nice to honour what I personally consider its first predecessor — The Chroot.&lt;/p&gt;

&lt;p&gt;Docker launched in 2013 and quickly started gaining traction but, long before that, a simpler, widely used UNIX tool gave us the benefits of isolated (often referred to as jailed) ENVs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, what’s this &lt;code&gt;chroot&lt;/code&gt; thing, then?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s give Wikipedia a well-deserved rest and go to another beloved resource, the man pages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;man &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="nb"&gt;chroot
chroot&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;2&lt;span class="o"&gt;)&lt;/span&gt; - change root directory
&lt;span class="nb"&gt;chroot&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;8&lt;span class="o"&gt;)&lt;/span&gt; - run &lt;span class="nb"&gt;command &lt;/span&gt;or interactive shell with special root director
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case you’re not very familiar with man pages, they consist of different sections and, since I think this is something that’s useful to know, I’ll list them below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    MANUAL SECTIONS
    The standard sections of the manual include:

    1 User Commands
    2 System Calls
    3 C Library Functions
    4 Devices and Special Files
    5 File Formats and Conventions
    6 Games et. al.
    7 Miscellanea
    8 System Administration tools and Daemons
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, so from the above, we can already gather that, like many other UNIX concepts, chroot is both a syscall (section 2) and a command/tool (section 8).&lt;/p&gt;

&lt;p&gt;Let’s see what man 2 chroot has to tell us:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /.&lt;br&gt;&lt;br&gt;
 The root directory is inherited by all children of the calling process.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Only a privileged process (Linux: one with the CAP_SYS_CHROOT capability in its user namespace) may call chroot().&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Interesting, right? Now man 8 chroot:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;chroot — run command or interactive shell with special root directory&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Right, so, unsurprisingly, the chroot command calls the chroot syscall...&lt;/p&gt;

&lt;p&gt;Let’s go back to Wikipedia for a brief history lesson. Trust me, these titbits make for great conversation starters, be it in job interviews, work lunches or dates (well, okay, maybe less so on dates, but it really does depend on whom you date, doesn’t it?):&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The chroot system call was introduced during development of&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Version_7_Unix"&gt;&lt;em&gt;Version 7 Unix&lt;/em&gt;&lt;/a&gt; &lt;em&gt;in 1979. One source suggests that&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Bill_Joy"&gt;&lt;em&gt;Bill Joy&lt;/em&gt;&lt;/a&gt; &lt;em&gt;added it on 18 March 1982–17 months before&lt;/em&gt; &lt;a href="https://en.wikipedia.org/wiki/Berkeley_Software_Distribution"&gt;&lt;em&gt;4.2BSD&lt;/em&gt;&lt;/a&gt; &lt;em&gt;was released — in order to test its installation and build system.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So.. this was 1982. A good vintage, one hopes; I was born nearly three months later and Docker roughly three decades later, but I’d like to take us nearer the present and regale you with some stories of how I used chrooted ENVs back in 2006–2010.&lt;/p&gt;

&lt;p&gt;At the time (I was much younger and looked like a young Brad Pitt; well, no, I absolutely didn’t, but I bet you laughed at that one), I worked for a company that the PHP veterans amongst you will undoubtedly have heard of, called Zend, and held the best title I’ve ever had by far: Build Master.&lt;/p&gt;

&lt;p&gt;Remember, this is 2006. SaaS was already a concept but it was not as widespread as it is today and far more companies delivered software that was meant to be deployed on the customer’s machines (in 20 years’ time, some youngster will find this a novel idea, I’m sure).&lt;/p&gt;

&lt;p&gt;Zend was one such company and the software it delivered (amongst other product lines) was a PHP application server (think JBoss, but for PHP, basically). First it was called Zend Platform; it then had a multitude of internal codenames that drove me bonkers and, ultimately, it was rebranded as Zend Server.&lt;/p&gt;

&lt;p&gt;So, what did the build master do? Well, the product was to be deployed on customer machines, and Zend initially wanted to support anything that had &lt;code&gt;uname&lt;/code&gt; deployed on it. The product consisted of many different components written in C/C++, which in turn depended on third-party FOSS components (also written in C/C++), as well as a PHP admin UI [pause for breath]. Someone had to build and package all these things; I, along with two excellent mates of mine, was one of these people.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/jessp01/docker-is-not-a-packaging-tool-part-1-11dg"&gt;next instalment&lt;/a&gt; of this series, I’ll tell you about the challenge of delivering software to multiple UNIX and Linux distributions (same same but different) and how chroot was leveraged towards this objective.&lt;/p&gt;

&lt;p&gt;We’ll then get back to Docker and why it is the next evolutionary step and, finally, explain why Docker is only part of the dream deployment process and even note some cases where it’s not that helpful.&lt;/p&gt;

&lt;p&gt;Stay tuned and, if you like this sort of content (but only if you do — don’t feel obligated), please give the clapper a go and follow me.&lt;/p&gt;

&lt;p&gt;Happy building.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>packaging</category>
      <category>chroot</category>
    </item>
    <item>
      <title>Capture your users attention with style</title>
      <dc:creator>Jesse Portnoy</dc:creator>
      <pubDate>Mon, 01 May 2023 14:38:00 +0000</pubDate>
      <link>https://dev.to/jessp01/capture-your-users-attention-with-style-3e54</link>
      <guid>https://dev.to/jessp01/capture-your-users-attention-with-style-3e54</guid>
      <description>&lt;p&gt;Do you manage UNIX machines that are logged into by multiple users? If so, this scenario will be familiar to you: &lt;strong&gt;you need to communicate something to your users — a maintenance window, modifications following an update, a policy change, etc, etc.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What do you do? One of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you’re a UNIX veteran (which, really, you &lt;strong&gt;ought&lt;/strong&gt; to be if you manage important machines), you either edit &lt;code&gt;/etc/motd&lt;/code&gt; or use &lt;code&gt;wall&lt;/code&gt; to inform your users (or both, depending on the case).&lt;/li&gt;
&lt;li&gt;Some people aren’t even aware of these lovely utils, so instead they email or, worse (from a spamming standpoint), they IM “groups” or “channels” (terminology varies depending on the comm tool used by the org).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, you may be wondering “&lt;em&gt;So? What’s wrong with that?!&lt;/em&gt;”.&lt;br&gt;&lt;br&gt;
Well, I’ll tell you…&lt;/p&gt;

&lt;p&gt;Let’s start with &lt;code&gt;/etc/motd&lt;/code&gt; (short for Message Of The Day). For those who do not know, it is a text file whose contents are printed upon user login. It’s a straightforward, elegant mechanism. Alas, it was created in simpler times, when people actually read stuff and you were allowed to respond with “RTFM!” and laugh at them when they didn’t (I miss those days! But I also miss Perl and there’s nothing one can do about either; times changed).&lt;br&gt;&lt;br&gt;
Today, many distros output so many messages upon login by default (and admins do not bother editing these or the scripts that generate them) that users actually train their minds to &lt;strong&gt;pointedly ignore them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As to &lt;code&gt;wall&lt;/code&gt;, it’s a util that outputs a given string to the tty (or, more commonly, the pts) of all logged-in users. Again, it suffers from the same problem: if a user is happily typing away commands or tailing some log file, he or she may not notice the message at all.&lt;/p&gt;

&lt;p&gt;I deliberated whether I should even explain why emailing about such things is bad and decided that I should, provided I can keep it short:&lt;br&gt;&lt;br&gt;
Basically, in the case of policy changes or post-upgrade updates, people tend to ignore the notification altogether, even if they did notice it. As for maintenance windows, you can make the message sound important enough by including “ATTENTION” or “MUST READ” (or words to that effect) in your subject line, but the question is: when do you send it? If you send it ahead of time, people forget all about it by the time it takes place. If you send it just before, people may not see it, as no one reads emails anymore (except for me — I love emails). So, after being burnt a few times, you end up sending it several times, which is annoying to do but, even more annoyingly, STILL not enough.&lt;/p&gt;

&lt;p&gt;Okay, now that I got your attention (well, one would hope), what is my proposed solution?&lt;/p&gt;

&lt;p&gt;In 3 words? USE THIS &lt;a href="https://gitlab.com/-/snippets/2531799"&gt;SCRIPT&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s pretty self-explanatory once you run it but I’ll elaborate a bit anyway. (I’m told the most-read Medium posts are estimated at around 7 minutes’ read time, so I reckon I’ve got a few paragraphs to spare.)&lt;/p&gt;

&lt;p&gt;This script uses the &lt;a href="https://github.com/cacalabs/toilet"&gt;toilet&lt;/a&gt; and &lt;a href="https://github.com/busyloop/lolcat"&gt;lolcat&lt;/a&gt; utils to generate messages that will grab your users’ attention, and the &lt;code&gt;who&lt;/code&gt; util to find the tty and pts devices of logged-in users so it can send them said messages.&lt;/p&gt;
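&lt;p&gt;The delivery part is simple enough to sketch here. This is not the script itself, just the idea behind it: &lt;code&gt;who&lt;/code&gt; prints one line per login session, with the terminal device in the second field, and anything written to the corresponding file under &lt;code&gt;/dev&lt;/code&gt; lands on that user’s screen (in the real script, the message text would first be piped through toilet and lolcat):&lt;/p&gt;

```shell
# Write a bell character plus the message to every logged-in terminal
msg="ATTENTION: this system will go down for maintenance in 7 minutes"
who | awk '{print $2}' | while read -r term; do
    printf '\a%s\n' "$msg" > "/dev/$term" 2>/dev/null
done
```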

&lt;p&gt;To illustrate, here’s what it outputted to the terminal when invoked with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;./gmessage.sh “This system will go down &lt;span class="k"&gt;for &lt;/span&gt;maintenance &lt;span class="k"&gt;in &lt;/span&gt;7 minutes” Jesse
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AtnQbY-5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A9W0pxZIYQqRd2VvtefDAQA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AtnQbY-5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A9W0pxZIYQqRd2VvtefDAQA.png" alt="" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That would be kind of hard to ignore, right?&lt;/p&gt;

&lt;p&gt;So, this script serves as a pretty good &lt;a href="https://github.com/util-linux/util-linux/blob/master/term-utils/wall.c"&gt;wall&lt;/a&gt; replacement (wall will strip all escape/control sequences other than \007, by the way).&lt;/p&gt;

&lt;p&gt;As to an alternative for /etc/motd, how about some custom code in &lt;code&gt;/etc/bash.bashrc&lt;/code&gt; (or the equivalent in your interactive shell, BASH isn’t for everyone)?&lt;/p&gt;

&lt;p&gt;For example, I’ve added this to &lt;code&gt;/etc/bash.bashrc&lt;/code&gt; (&lt;code&gt;zsh&lt;/code&gt; and friends are nice but I like my BASH):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt;&lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$SSH_CONNECTION&lt;/span&gt;&lt;span class="o"&gt;]]&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;toilet &lt;span class="s2"&gt;"Welcome to &lt;/span&gt;&lt;span class="nv"&gt;$HOSTNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; future.tlf &lt;span class="nt"&gt;-t&lt;/span&gt; | /usr/games/lolcat
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-en&lt;/span&gt; &lt;span class="s2"&gt;"Here are some things we need to let you know about.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;Blah blah blah and also, tripe, tripe, rubbish&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;Thank you&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;-- root"&lt;/span&gt; | /usr/games/lolcat
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And, here’s the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_UO54uLE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A6Uyf9cnFd2fCVgUzE_vPSA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_UO54uLE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2A6Uyf9cnFd2fCVgUzE_vPSA.png" alt="" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice, eh? Let’s see them say they didn’t see that one :)&lt;/p&gt;

&lt;p&gt;Both toilet and lolcat support multiple flags so it’s worth glancing at their respective man pages. For toilet, I am using the future font by passing &lt;code&gt;-f future.tlf&lt;/code&gt; but if you don’t like it, there are plenty of others to choose from.&lt;/p&gt;

&lt;p&gt;I installed both from Debian’s official repos and, as a result, the fonts reside under &lt;code&gt;/usr/share/figlet&lt;/code&gt;; the location on your box may vary.&lt;/p&gt;
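&lt;p&gt;If you want to experiment, here’s a quick, hedged sketch: the &lt;code&gt;future.tlf&lt;/code&gt; and &lt;code&gt;pagga&lt;/code&gt; font names come from the Debian packages and may differ on your system, and the whole thing is a no-op if toilet isn’t installed:&lt;/p&gt;

```shell
# List the installed toilet fonts (Debian path; adjust for your distro)
fontdir=/usr/share/figlet
ls "$fontdir"/*.tlf 2>/dev/null
# Render the same text with a couple of different fonts, if toilet is present
if command -v toilet >/dev/null; then
    toilet -f future.tlf -t "Hello"
    toilet -f pagga "Hello"
fi
true
```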

&lt;p&gt;Lastly, another awesome use of these utils is generating captivating headers from the comfort of your shell :) I used them to generate the hero image for this post (far faster and easier than with any presentation/animation tool I’ve ever worked with).&lt;/p&gt;

&lt;p&gt;Happy hacking to all!&lt;/p&gt;

</description>
      <category>terminal</category>
      <category>motd</category>
      <category>notifications</category>
      <category>maintenance</category>
    </item>
  </channel>
</rss>
