<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anna</title>
    <description>The latest articles on DEV Community by Anna (@dev-charodeyka).</description>
    <link>https://dev.to/dev-charodeyka</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2357370%2Ff49bf459-9b9c-40f8-8083-fc3926ab38be.jpeg</url>
      <title>DEV Community: Anna</title>
      <link>https://dev.to/dev-charodeyka</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dev-charodeyka"/>
    <language>en</language>
    <item>
      <title>Building Debian packages from source in bootstrapped Debian</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Thu, 17 Apr 2025 15:35:38 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/building-debian-packages-from-source-in-bootstrapped-debian-9el</link>
      <guid>https://dev.to/dev-charodeyka/building-debian-packages-from-source-in-bootstrapped-debian-9el</guid>
      <description>&lt;p&gt;If you’re a curious user - especially one who’s into customizing your Debian - you’ve probably run into this situation at least once: you discover a cool software, but it’s either not available in your Debian’s package repositories, or it is there, but the version is outdated. &lt;/p&gt;

&lt;p&gt;Newer versions often bring in completely new features that you actually want to use. When I think of Debian Stable, a few examples come to mind.&lt;/p&gt;

&lt;p&gt;Terminal emulator &lt;a href="https://alacritty.org/" rel="noopener noreferrer"&gt;Alacritty&lt;/a&gt;, for instance. The version in the Debian Stable repo is dated: it still uses a .yml config file, while newer versions have switched to .toml.&lt;/p&gt;

&lt;p&gt;Or &lt;a href="https://github.com/aristocratos/btop" rel="noopener noreferrer"&gt;btop&lt;/a&gt;: the version in Debian Stable doesn’t support NVIDIA GPU monitoring, while the current version does.&lt;/p&gt;

&lt;p&gt;And then there's Hyprland, a window manager that’s only available in Debian Unstable and Testing (Trixie). On Debian Stable, it’s not even in the repos at all.&lt;/p&gt;

&lt;p&gt;This article is dedicated to &lt;a href="https://github.com/sxyazi/yazi" rel="noopener noreferrer"&gt;yazi&lt;/a&gt;, a terminal-based file manager that is super mega cool but isn’t available in any of Debian’s repositories - not in Stable, not in Testing, not even in Unstable.&lt;/p&gt;




&lt;p&gt;NB! Building applications from source isn’t necessarily hard in terms of the actual process, but making them work, and more importantly, making sure they don’t break your system, is another story.&lt;/p&gt;

&lt;p&gt;Small utility apps that are meant to run in user space (i.e. they don’t touch anything kernel-related, like modules) are generally less risky in terms of your Debian’s stability afterwards. The main problem is that there’s no guarantee they’ll work properly due to dependencies. The more dependencies an app has, the higher the risk that it won’t work at all.&lt;/p&gt;

&lt;p&gt;Also, be very careful when a piece of software wants to downgrade some packages as part of its dependency list. I strongly recommend avoiding that. Downgrading system packages can mess up your system a lot.&lt;/p&gt;

&lt;p&gt;Before building anything from source, always evaluate potential security risks. Don’t rush into compiling some random app that has minimal activity on GitHub - especially without reviewing the source code carefully.&lt;/p&gt;

&lt;p&gt;NB2! I’m currently using Debian Testing (aka Trixie), which will become Debian 13, hopefully soon. If you’re trying to get Yazi working on Debian Stable, I can’t guarantee it’ll work due to dependency versions.&lt;/p&gt;




&lt;p&gt;What’s so particular about building software from source in a debootstrapped Debian, and what does that even mean?&lt;/p&gt;

&lt;p&gt;If you’ve ever developed something in Python or used Node.js then this will probably make sense to you. You might be familiar with Python virtual environments, or how Node.js projects usually live in their own directories with all the installed libraries.&lt;/p&gt;

&lt;p&gt;When you’re working inside a Python environment and need some libraries, you install them into that isolated environment. Move to another project? Create a new environment, install only what you need, and that’s it. No system-wide Python mess. Same story with Node.js, you usually don’t install JS libraries globally, they just live in the project folder. Clean, contained.&lt;/p&gt;

&lt;p&gt;Now, Debian bootstrapping offers a kind of similar idea but in the context of system-level stuff.&lt;/p&gt;

&lt;p&gt;Let’s say you want to install Yazi, which is written in Rust. You’ll need Rust installed on your system to build it. Or maybe you want to build &lt;a href="https://github.com/fairyglade/ly?tab=readme-ov-file" rel="noopener noreferrer"&gt;Ly display manager&lt;/a&gt;, written in Zig—you’ll need to install the Zig compiler. Other programs written in C? You’ll need a C compiler and a bunch of build-time libraries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But here’s the key point: software often has one set of requirements to build and a completely different set of requirements to run.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For example, with Yazi, you only need Rust to build it. Once that’s done, Rust isn’t required to run the app. So, what do you do? Install Rust, build the thing in 10 minutes, and then… what? Leave Rust on your system? Delete it? What if the software needed tons of other build-time tools that you’ll never use again?&lt;/p&gt;

&lt;p&gt;That’s where &lt;code&gt;debootstrap&lt;/code&gt; comes in. It lets you create a sort of mini Debian environment: a clean, isolated system within your system. It’s not a full second Debian install, but it is like a mini mirror of your Debian system where you can install/build/test whatever you want without messing up your main OS. It’s like a virtual "project" space for your system-level experiments.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;code&gt;debootstrap&lt;/code&gt; is a tool which will install a Debian base system into a subdirectory of another, already installed system. It doesn't require an installation CD, just access to a Debian repository. It can also be installed and run from another operating system, so, for instance, you can use debootstrap to install Debian onto an unused partition from a running Gentoo system. It can also be used to create a rootfs for a machine of a different architecture, which is known as "cross-debootstrapping". (&lt;a href="https://wiki.debian.org/Debootstrap" rel="noopener noreferrer"&gt;Debian Wiki: Debootstrap&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, you need to install the &lt;code&gt;debootstrap&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install debootstrap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, I find it convenient to elevate to the root user and execute commands as root (the $ prompt changes to # in the code snippets, indicating that these commands are run as root).&lt;/p&gt;

&lt;p&gt;First, I create a directory where my bootstrapped Debian will reside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir /trixie-chroot
## If you use Debian Stable:
# mkdir /stable-chroot

## And then, I "debootstrap" Debian into this directory
## NB! I use Trixie, and I Debootstrap Trixie!!!

# debootstrap trixie /trixie-chroot http://deb.debian.org/debian/
## If you use Debian Stable:
# debootstrap stable /stable-chroot http://deb.debian.org/debian/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As I mentioned, a bootstrapped Debian is not a fully functional system and cannot function outside of the "main" Debian on which it resides. That is because your main OS shares some of its components with the bootstrapped Debian in order to simulate a complete runtime environment inside it, allowing it to behave like a real system.&lt;/p&gt;

&lt;p&gt;This is achieved by mounting pseudo-filesystems into the bootstrapped Debian, or, more correctly, the &lt;code&gt;chroot&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;chroot on Unix-like operating systems is an operation that changes the apparent root directory for the current running process and its children. (&lt;a href="https://wiki.debian.org/chroot" rel="noopener noreferrer"&gt;Debian Wiki: chroot&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It may sound complicated and scary, but actually it is just a couple of &lt;code&gt;mount&lt;/code&gt; commands.&lt;/p&gt;

&lt;p&gt;The first thing to do is to mount the proc filesystem inside the chroot (/trixie-chroot in my case). The proc filesystem provides access to kernel and process information. Many system tools (like ps, top, etc.) rely on /proc to function correctly. Without it, tools inside the chroot won’t be able to access process info or kernel parameters.&lt;/p&gt;

&lt;p&gt;The second is to mount the sysfs filesystem to /trixie-chroot. sysfs exposes kernel devices and attributes. It's required by many parts of the system, like udev, systemd, etc. Without it, hardware-related commands (or anything interacting with kernel devices) may not work inside the chroot.&lt;/p&gt;

&lt;p&gt;Third, you need to provide the chroot Debian with information about your DNS nameservers, especially if you have a custom configuration. Otherwise you will not be able to reach the internet from the chroot or install packages.&lt;/p&gt;

&lt;p&gt;All these complex-sounding things can be done with 3 commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mount proc /trixie-chroot/proc -t proc
# mount sysfs /trixie-chroot/sys -t sysfs
# cp /etc/resolv.conf /trixie-chroot/etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can "login" into bootstrapped Debian!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# chroot /trixie-chroot /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Now, we can start building Yazi from source! I’ll be following the official &lt;a href="https://yazi-rs.github.io/docs/installation/#debian" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt;...&lt;br&gt;
...with one interesting modification you might find useful for other Rust-native apps.&lt;/p&gt;

&lt;p&gt;But let’s get started.&lt;/p&gt;

&lt;p&gt;First, it's important to note that Yazi has various dependencies, because, as a terminal file manager, it can do things like preview images and different file types right inside the terminal! And of course, this kind of functionality requires some additional software to be installed.&lt;/p&gt;

&lt;p&gt;For me, the dependencies of Yazi looked pretty familiar. You can check them one by one if you’re unsure.&lt;br&gt;
So why am I installing them into a Debian chroot if I don't actually plan to use it?&lt;br&gt;
Well, I'm not that advanced when it comes to Rust apps. I’ve got more experience building from source using make, and if you’ve ever done that, you probably know about the &lt;code&gt;./configure&lt;/code&gt; step—it scans your system and sets the right parameters for the build.&lt;/p&gt;

&lt;p&gt;What often happens is this: if something that isn’t required by any core feature of the software being built is missing from your system, the optional features that depend on it just get skipped during the build process, because their requirements aren’t satisfied. I’m not sure if Rust works exactly the same way, but either way, it’s not a big deal: I’m planning to destroy this chroot after the installation anyway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chroot # apt update
chroot # apt install ffmpeg 7zip jq poppler-utils fd-find ripgrep fzf zoxide imagemagick
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here comes NB for Debian Stable users: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note that these dependencies are quite old on some Debian/Ubuntu versions and may cause Yazi to malfunction. In that case, you will need to manually build them from the latest source. (&lt;a href="https://yazi-rs.github.io/docs/installation/#debian" rel="noopener noreferrer"&gt;Yazi Installation guide&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As the next step, I need to install Rust, of course:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chroot # apt install curl
chroot # curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
chroot # rustup update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And after this... I diverge from the official installation guide. Actually, this project is awesome, and installing it is super easy—to the point that it will literally leave only Rust installed on your system afterwards. Here's the command from the installation guide -&lt;br&gt;
&lt;code&gt;cargo install --locked yazi-fm yazi-cli&lt;/code&gt;. As you can see, this installs two packages directly on your system.&lt;/p&gt;

&lt;p&gt;However, in my chroot setup, this is kind of useless, because if I run that command, it’ll install those packages inside the chrooted Debian, which I obviously won’t be using.&lt;/p&gt;

&lt;p&gt;But here’s the cool part: Rust has this amazing crate called &lt;code&gt;cargo-deb&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Debian packages from Cargo projects&lt;br&gt;
This is a Cargo helper command which automatically creates binary Debian packages (.deb) from Cargo projects. (&lt;a href="https://docs.rs/crate/cargo-deb/latest" rel="noopener noreferrer"&gt;Rust Docs: cargo-deb&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now you can probably guess what my goal is: I want to get two &lt;code&gt;.deb&lt;/code&gt; files: one for &lt;code&gt;yazi-fm&lt;/code&gt; and one for &lt;code&gt;yazi-cli&lt;/code&gt;. Then I can just copy them over to my main system and install them with &lt;code&gt;dpkg&lt;/code&gt;. Simple and clean!&lt;/p&gt;

&lt;p&gt;To do that, first I need to install &lt;code&gt;cargo-deb&lt;/code&gt; from crates.io.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chroot # rustup update  
chroot # apt install build-essentials # gcc compiler
chroot # cargo install cargo-deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, I clone the Yazi project's GitHub repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chroot # git clone https://github.com/sxyazi/yazi.git
chroot # cd yazi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then ... I just instruct Cargo to build the two .deb files from source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chroot # cargo deb -p yazi-fm --locked
chroot # cargo deb -p yazi-cli --locked
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After these two processes are completed, the two .deb files are placed into the ./target/debian directory of the yazi root directory.&lt;/p&gt;

&lt;p&gt;What is left is to copy these two files to the main system and install the two packages from them using dpkg.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chroot # exit
# exit 
$ cd # teleporting to home directory of my regular user
$ sudo cp /trixie-chroot/root/yazi/target/debian/yazi-cli_25.4.8-1_amd64.deb .
$ sudo cp /trixie-chroot/root/yazi/target/debian/yazi-fm_25.4.8-1_amd64.deb .

$ sudo dpkg -i yazi-fm_25.4.8-1_amd64.deb
$ sudo dpkg -i yazi-cli_25.4.8-1_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila!&lt;/p&gt;

&lt;p&gt;Command to launch Yazi File manager: &lt;code&gt;yazi&lt;/code&gt;&lt;br&gt;
Command to launch Yazi CLI: &lt;code&gt;ya&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ep46ksktz08d8qsp9rx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ep46ksktz08d8qsp9rx.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>debootstrap</category>
      <category>rust</category>
      <category>cargo</category>
      <category>chroot</category>
    </item>
    <item>
      <title>Where to Start in Web Development: Ignoring Learning HTTP(S), URLs, DNS, IP, SSL Will Have Consequences...</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Fri, 21 Mar 2025 18:49:50 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/where-to-start-in-web-development-ignoring-learning-https-urls-dns-ip-ssl-will-have-57le</link>
      <guid>https://dev.to/dev-charodeyka/where-to-start-in-web-development-ignoring-learning-https-urls-dns-ip-ssl-will-have-57le</guid>
      <description>&lt;p&gt;This is the second article in a series on Web Development. If you’re just starting out, I highly recommend reading &lt;a href="https://dev.to/dev-charodeyka/where-to-start-in-web-development-react-angular-svelte-or-somewhere-else-29a4"&gt;the first part&lt;/a&gt; first. In that part I illustrated and explained the DOM (Document Object Model); in this article, I’ll use the term DOM assuming you’re already familiar with it.&lt;/p&gt;




&lt;p&gt;This isn’t my first article on networking. From my profile stats, I’ve noticed that topics around networking get few readers, so I can guess that now you might be thinking, "I can skip this for now—I don’t really need it for starting with web dev". Well, here’s what I believe might happen on your web development learning path if you postpone learning the things listed in the title:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;CRUD applications: the database part will be pretty tough.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Improper database client configuration, struggle with setting up secure connections and handling data flows properly.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HTML Forms &amp;amp; HTTP methods mess-ups&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Sensitive data unintentionally showing up in the URL because of the GET method in a form tag? Plus, in general, handling user inputs may be confusing.&lt;/em&gt;&lt;/p&gt; &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Try/Catch Blocks in JS fetch functions&lt;/p&gt;
&lt;p&gt;&lt;em&gt;When you fetch data using JS, you’re dealing with network requests and responses. If you do not understand how those work, you will be confused by what you’re actually catching in an error.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Debugging fetching functions&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Without understanding HTTP language or the request-response cycle, you will spend extra time trying to understand why your frontend does not display what it should. Let's not even mention the situationships where people try to fetch something from remote servers while pointing to &lt;code&gt;localhost&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hosting your web apps (portfolio, pet projects, etc.)&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Eventually, you’ll want to put your work online. Without networking fundamentals, you might not even know where to start.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Routing in your web apps&lt;/p&gt;
&lt;p&gt;&lt;em&gt;If you don’t understand how URLs translate into requests and how servers respond, you might struggle with design of your app's routing schemes.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;APIs&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Web development revolves around APIs. APIs are mostly about communication over the network.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Dockerizing your apps&lt;/p&gt;
&lt;p&gt;&lt;em&gt;I’ve noticed the &lt;strong&gt;urge&lt;/strong&gt; to containerize anything that is not yet containerized, even in the early stages of development, but containerization involves such things as port mappings, virtual networking, etc. You might also struggle with linking containers together or exposing your app's parts properly.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
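To make points 3 and 4 above concrete, here is a minimal sketch (the URL is a placeholder, not a real endpoint) of what a try/catch around a JS fetch call actually catches: only network-level failures reach the catch block, while HTTP error responses such as 404 come back as regular responses that you have to check yourself.

```javascript
// Minimal sketch of what a try/catch around fetch() actually catches.
// The URL used below is a placeholder, not a real endpoint.
async function load(url) {
  try {
    const response = await fetch(url);
    // An HTTP error response (404, 500, ...) does NOT throw:
    // fetch() resolves as soon as *any* response comes back.
    if (!response.ok) {
      console.log(`HTTP error: status ${response.status}`);
      return null;
    }
    return await response.json();
  } catch (err) {
    // Only network-level failures land here: DNS resolution errors,
    // refused connections, CORS blocks (in browsers), etc.
    console.log(`Network error: ${err.message}`);
    return null;
  }
}

// The .invalid TLD is reserved and never resolves, so this hits the catch branch.
load('https://example.invalid/data.json');
```

This distinction is exactly why a broken fetch sometimes "succeeds" silently: a 404 is a perfectly valid response from HTTP's point of view.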

&lt;p&gt;I hope I have convinced you somehow to invest your time in learning networking fundamentals.&lt;/p&gt;




&lt;p&gt;In the previous part’s conclusion, I showed this diagram, visualizing at a high level &lt;em&gt;how browsers work&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsocdg38o2ptgyomppdw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsocdg38o2ptgyomppdw2.png" alt=" " width="742" height="751"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in that diagram, the "query" from the browser’s address bar plays a key role: by using that "query", your &lt;strong&gt;browser’s networking layer&lt;/strong&gt; sends a request and receives a response from the &lt;em&gt;correct&lt;/em&gt; server. That response contains some &lt;em&gt;data&lt;/em&gt; (for example, &lt;code&gt;.html&lt;/code&gt;, &lt;code&gt;.css&lt;/code&gt;, and &lt;code&gt;.js&lt;/code&gt; files) that gets rendered—based on these files the Document Object Model is constructed by the browser—and then visualized.&lt;/p&gt;

&lt;p&gt;In this article, I’ll focus on what’s happening in the browser’s networking layer and also explain what &lt;strong&gt;servers&lt;/strong&gt; are and what role they play (Spoiler: "server" is not always about a backend!).&lt;/p&gt;

&lt;p&gt;Here is the roadmap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Browser's Address Bar&lt;/li&gt;
&lt;li&gt;URIs and URLs&lt;/li&gt;
&lt;li&gt;HTTP Protocol

&lt;ol&gt;
&lt;li&gt;Internet Protocol and IP addresses&lt;/li&gt;
&lt;li&gt;Transmission Control Protocol (TCP)&lt;/li&gt;
&lt;li&gt;TCP/HTTP Network traffic&lt;/li&gt;
&lt;li&gt;Understanding Ports&lt;/li&gt;
&lt;li&gt;DNS&lt;/li&gt;
&lt;li&gt;TCP in action: 3-way handshake&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;HTTPS

&lt;ol&gt;
&lt;li&gt;Little extra info: multiple IP addresses and horizontal scaling of web apps&lt;/li&gt;
&lt;li&gt;About encryption&lt;/li&gt;
&lt;li&gt;TLS/SSL certificates&lt;/li&gt;
&lt;li&gt;Secure communication channel&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;




&lt;h3&gt;
  
  
  1. Browser's Address Bar
&lt;/h3&gt;

&lt;p&gt;Modern browsers are quite powerful, and you might not even notice that when you type something in the address bar, you’re effectively using it like a search engine (your browser’s default search engine). You can drop in any text, and the browser takes that text and forwards it to your default search engine—all without you really realizing. &lt;/p&gt;

&lt;p&gt;But if you’ve been using browsers and the internet for a long time, you might remember it wasn’t always like this. You couldn’t just throw any random text into the address bar and expect the browser to figure it out. Back then, you had to first go to the search engine’s website (e.g., Google or Bing) and then type your search query there.&lt;/p&gt;

&lt;p&gt;For demonstration purposes, I’ll use a silly "query". My browser is Brave, and its default search engine is Brave Search. I type this into the address bar: &lt;code&gt;difference between .com and .dev&lt;/code&gt;. This is what I see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2rp8wlb5w6f8gqze0n9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2rp8wlb5w6f8gqze0n9.png" alt="browser opens its search engine" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What if I type the same "query" without spaces, and even worse, with dots between words - &lt;code&gt;difference.between.comand.dev&lt;/code&gt;? It can happen—maybe you’ve done it when typing quickly on your phone and missed the spaces. Such "query" results in:&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs8vmzjpgaw5wvrv3068.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs8vmzjpgaw5wvrv3068.png" alt="DNS error" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;So here comes the error: the browser definitely did not process both queries the same way. The first query was treated as a string that got passed to the search engine, but the second query (with dots) made the browser do something else instead of just passing it along as a search query.&lt;/p&gt;

&lt;p&gt;If I double-click on the address bar in both cases, here’s what I see:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;https://difference.between.comand.dev/&lt;/code&gt; — which leads to an error page&lt;/li&gt;
&lt;li&gt;&lt;code&gt;https://search.brave.com/search?q=difference+between+.com+and+.dev&amp;amp;source=desktop&amp;amp;summary=1&amp;amp;conversation=1dd960092fc9678fb88e64&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Those are the "queries" the browser processed. I keep putting "query" in quotes (" ") because the technically correct term is a &lt;strong&gt;URI&lt;/strong&gt;, and the actual "query" part is:&lt;br&gt;
&lt;code&gt;?q=difference+between+.com+and+.dev&amp;amp;source=desktop&amp;amp;summary=1&amp;amp;conversation=1dd960092fc9678fb88e64&lt;/code&gt;. That’s part of what I typed into the browser’s address bar. Let’s start with what a URI is.&lt;/p&gt;


&lt;h3&gt;
  
  
  2. URIs and URLs
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Uniform Resource Identifiers (URI) are used to identify "resources" on the web. URIs are commonly used as targets of HTTP requests, in which case the URI represents a location for a physical resource, such as a document, a photo, binary data. &lt;br&gt;
The most common type of URI is a Uniform Resource Locator (URL), which is known as the web address.&lt;br&gt;
A Uniform Resource Name (URN) is a URI that identifies a resource by name in a particular namespace. (&lt;a href="https://developer.mozilla.org/en-US/docs/Web/URI" rel="noopener noreferrer"&gt;MDN web docs: URIs&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A URL (Uniform Resource Locator) is a term known not only to developers but also to ordinary users. The point is that URLs are a subset of URIs, meaning URI is the broader term (every URL is a URI, but not every URI is a URL). A URL is a specific type of URI that not only identifies a resource but also provides the information needed to retrieve it (such as its network location and protocol).&lt;/p&gt;

&lt;p&gt;In the previous article, I gave examples of opening files from my PC using this URI:&lt;br&gt;&lt;br&gt;
&lt;code&gt;file:///home/lalala/Projects/DEVTO/webdev/randomPDF.pdf&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is a URI, but not exactly a URL—it’s a bit ambiguous because it could be called a URL if you consider it &lt;em&gt;a locator for a file resource&lt;/em&gt;. However, it is common to use “file URI” to emphasize that it accesses a local file rather than a resource over &lt;em&gt;HTTP/HTTPS&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;By contrast,&lt;br&gt;&lt;br&gt;
&lt;code&gt;https://search.brave.com/search?q=difference+between+.com+and+.dev&amp;amp;source=desktop&amp;amp;summary=1&amp;amp;conversation=1dd960092fc9678fb88e64&lt;/code&gt;&lt;br&gt;&lt;br&gt;
is definitely both a URI and a URL.&lt;/p&gt;

&lt;p&gt;First, I want to "decompose" one illustrative URL - &lt;code&gt;http://www.example.com:80/path/to/myfile.html?key1=value1&amp;amp;key2=value2#SomewhereInTheDocument&lt;/code&gt; to demonstrate all its functional parts, which can help in understanding how it works—this is a key moment:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;&lt;code&gt;http://&lt;/code&gt; is the scheme of the URL, indicating which protocol the browser must use&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;code&gt;www.example.com&lt;/code&gt; is the host name of the URI, indicating which Web server is being requested. Here, we use a domain name. It is also possible to directly use an IP address, but because it is less convenient, it is rare to do so, unless the server doesn't have a registered domain name&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;code&gt;:80&lt;/code&gt; is the port of the URL, indicating the technical "gate" used to access the resources on the web server. It is usually omitted if the web server uses the standard ports of the HTTP protocol (&lt;code&gt;80&lt;/code&gt; for HTTP and &lt;code&gt;443&lt;/code&gt; for HTTPS) to grant access to its resources. Otherwise, it is mandatory.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;code&gt;/path/to/myfile.html&lt;/code&gt; is the path of the URL, indicating the location of the resource on the web server. In the early days of the Web, this was an actual directory path to a physical location on the web server. Nowadays, web servers usually abstract this to an arbitrary location.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;code&gt;?key1=value1&amp;amp;key2=value2&lt;/code&gt; is the query of the URL, which are extra parameters provided to the web server. The parameters are a list of key/value pairs prefixed by the ? symbol, and separated with the &amp;amp; symbol. These can be used to provide additional context about the resource being requested.&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
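You can check this decomposition programmatically: the standard URL API, available in browsers and in Node.js, exposes each functional part of the example URL above (expected values are shown in the comments).

```javascript
// Decomposing the example URL with the standard URL API
// (available in browsers and in Node.js).
const url = new URL('http://www.example.com:80/path/to/myfile.html?key1=value1&key2=value2#SomewhereInTheDocument');

console.log(url.protocol); // 'http:' (the scheme)
console.log(url.hostname); // 'www.example.com' (the host name)
console.log(url.port);     // '' (port 80 is the HTTP default, so it is normalized away)
console.log(url.pathname); // '/path/to/myfile.html' (the path)
console.log(url.search);   // '?key1=value1&key2=value2' (the query)
console.log(url.searchParams.get('key1')); // 'value1'
console.log(url.hash);     // '#SomewhereInTheDocument' (the fragment)
```

Note how the API even applies the rule from point 3 for you: the default port is dropped because it carries no extra information.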

&lt;p&gt;Let's detangle it all one by one. I'm starting with the protocol.&lt;/p&gt;

&lt;p&gt;In a URL, anything before &lt;code&gt;://&lt;/code&gt; is a &lt;strong&gt;protocol&lt;/strong&gt;. When I was opening local files from my PC, I used the &lt;code&gt;file://&lt;/code&gt; protocol to access them. However, the most common and widespread protocol—and the one you'll 100% be dealing with—is HTTP(S). So it makes sense to explain it.&lt;/p&gt;


&lt;h3&gt;
  
  
  3. HTTP Protocol
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;HTTP: Hypertext Transfer Protocol (HTTP) is an application protocol that defines a language for clients and servers to speak to each other. This is like the language you use to order your goods. (&lt;a href="https://developer.mozilla.org/en-US/docs/Learn_web_development/Getting_started/Web_standards/How_the_web_works" rel="noopener noreferrer"&gt;MDN web docs: How the web works&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The browser acts as a &lt;em&gt;client&lt;/em&gt;, and it communicates with &lt;em&gt;servers&lt;/em&gt; using the HTTP language (if it is indicated in the URL, i.e. when it starts with &lt;code&gt;http(s)://&lt;/code&gt;). This language per se is a &lt;em&gt;language&lt;/em&gt; of verbs—&lt;em&gt;actions&lt;/em&gt; with self-explanatory names like GET, PUT, DELETE, POST, TRACE, and others. The browser sends one of these &lt;em&gt;verbs&lt;/em&gt; to the address indicated in the URL (I’ll cover URL addresses later), and if that address is "valid", the server found at this address sends a response.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I want to clarify that, in this article, a server is simply a computational device—like the PC or notebook you’re using, but without a monitor or GUI. For simplicity, just think of it that way for now.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When (if) the server responds to the browser’s request, it does so using HTTP, as it was contacted via this protocol. If the request sent by the browser gets rejected by the server for any reason, HTTP ensures that the client (browser) still receives a response. That’s very important (for your JS &lt;code&gt;try...catch&lt;/code&gt; blocks :D). HTTP responses are more complex than requests (verbs) — the &lt;strong&gt;status codes&lt;/strong&gt; are a crucial part of any HTTP response.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;HTTP response status codes indicate whether a specific HTTP request has been successfully completed. Responses are grouped in five classes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Informational responses (100 – 199)&lt;/li&gt;
&lt;li&gt;Successful responses (200 – 299)&lt;/li&gt;
&lt;li&gt;Redirection messages (300 – 399)&lt;/li&gt;
&lt;li&gt;Client error responses (400 – 499)&lt;/li&gt;
&lt;li&gt;Server error responses (500 – 599) (&lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status" rel="noopener noreferrer"&gt;MDN web docs: HTTP response status codes&lt;/a&gt;)&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
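&lt;p&gt;The class of a status code can be read straight off its first digit, which is handy in error-handling code. A minimal sketch of that rule:&lt;/p&gt;

```python
def status_class(code: int) -> str:
    """Return the class of an HTTP status code (the first digit decides)."""
    classes = {
        1: "Informational",
        2: "Successful",
        3: "Redirection",
        4: "Client error",
        5: "Server error",
    }
    if not 100 <= code <= 599:
        raise ValueError(f"{code} is outside the standard ranges")
    return classes[code // 100]

print(status_class(200))  # Successful
print(status_class(301))  # Redirection
print(status_class(404))  # Client error
```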

&lt;p&gt;Now, let me show you some &lt;em&gt;nerd fun&lt;/em&gt; that will help you understand in depth how the HTTP protocol works. Of course, you can memorize all the status codes and methods of the HTTP language, but I think that is not enough if the HTTP protocol remains a black box for you, especially when it comes to debugging your web apps.&lt;/p&gt;

&lt;p&gt;Networking concepts are hard to grasp because it all kinda happens under the hood – you don't see any of the communication statuses or responses in raw format until you get an error. That's true if you're a user; if you're a dev, you'll interact with networking stuff even more. So even if you're a dev who has been avoiding networking by all means, I'll show you what's inside this black box.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's start with the fact that the HTTP protocol sits on top of another protocol, TCP.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network (&lt;a href="https://en.wikipedia.org/wiki/Transmission_Control_Protocol" rel="noopener noreferrer"&gt;Wikipedia:TCP&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here is the chain of protocols:&lt;/p&gt;

&lt;p&gt;HTTP Protocol &amp;lt;-- TCP Protocol &amp;lt;-- IP Protocol&lt;/p&gt;

&lt;p&gt;Let's start from the bottom up.&lt;/p&gt;
&lt;h4&gt;
  
  
  3.1 Internet Protocol and IP addresses
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;IP (Internet Protocol) has the task of delivering packets from the source host to the destination host &lt;strong&gt;solely based on the IP addresses&lt;/strong&gt; in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information (&lt;a href="https://en.wikipedia.org/wiki/Internet_Protocol" rel="noopener noreferrer"&gt;Wikipedia:IP&lt;/a&gt;).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;IP may be quite familiar to you, especially when it comes to the term "IP address". The device from which you're reading this article has an IP address, and my PC, from which I'm writing this article, also has an IP address. DEV.TO, as a portal, has an IP address as well. Even though your device and my PC do not communicate directly, both our devices are communicating with DEV.TO.&lt;/p&gt;

&lt;p&gt;There are actually two versions of the IP protocol: IPv4 and IPv6. As stated in the Wikipedia quote above, the IP protocol is about communication between the &lt;em&gt;source&lt;/em&gt; and &lt;em&gt;destination&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;Try to think of packets of data as &lt;strong&gt;envelopes&lt;/strong&gt; with some info—&lt;em&gt;&lt;strong&gt;I&lt;/strong&gt;nternet &lt;strong&gt;P&lt;/strong&gt;rotocol is mostly about the structure of addresses&lt;/em&gt;. Each envelope with a &lt;em&gt;letter&lt;/em&gt; (data) inside has a &lt;em&gt;to&lt;/em&gt; (destination) and a &lt;em&gt;from&lt;/em&gt; (source).&lt;/p&gt;

&lt;p&gt;Now, what about IP versions? &lt;em&gt;Let's say we lived in the old days, when people resided in houses rather than apartments (one house=one household). The addressing system was quite simple—just the street name, house number, city, and country—and it worked fine. But then, as time went on, globalization and urbanization began, and more people started living in cities. A single family no longer occupies a huge house, so houses started to be divided into apartments (one house=many households). Now, if a letter arrives with just the street, city, country, and house number, it will not find the correct addressee. A new "addressing" mechanism had to be invented.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxspibutveh9lvwd5q8cy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxspibutveh9lvwd5q8cy.png" alt="envelope photo" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://www.today.com/home/how-address-envelope-t156576" rel="noopener noreferrer"&gt;Image source&lt;/a&gt;



&lt;p&gt;That's exactly what happened with IPv4 (Internet Protocol version 4). It was created when the Internet was far less "global"—few had access to it—and it worked fine. But look at us in 2025: each of us usually has at least two devices that need to be connected to the Internet, and many of us don't live alone but with family, so the numbers multiply. Here’s the thing: IPv4 addresses are mathematically limited—the number of unique addresses available is around 4 billion. And counting only private use by individuals is misleading: think also about all the big companies with many servers, each of which needs a unique IP address if it's connected to the global Internet.&lt;/p&gt;

&lt;p&gt;So, a newer version of IP was born—IPv6 (Internet Protocol version 6). This version of the protocol completely changes the address structure and syntax, providing an enormous pool of unique addresses so that every server, every personal device can have its own unique address.&lt;/p&gt;
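&lt;p&gt;The difference in pool size is easy to check with Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module (the two addresses below are just illustrative):&lt;/p&gt;

```python
import ipaddress

# IPv4 addresses are 32-bit, IPv6 addresses are 128-bit:
print(2 ** 32)   # 4294967296 -- roughly 4 billion IPv4 addresses
print(2 ** 128)  # the astronomically larger IPv6 pool

# Both versions parse with the standard ipaddress module:
v4 = ipaddress.ip_address("146.190.62.39")
v6 = ipaddress.ip_address("2604:a880:4:1d0::1f1:2000")
print(v4.version, v6.version)  # 4 6
```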

&lt;p&gt;&lt;strong&gt;The problem is that IPv6 has not yet been widely adopted, and most of the Internet is still on IPv4&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So you may think there's a problem: network packets (the envelopes from the analogy above) are getting lost because addresses are "duplicated"... but that's not true. Various mechanisms solve the issue of IPv4 address scarcity, and the core mechanism is NAT—Network Address Translation. I won't go into detail here, but if you genuinely want to advance your understanding of networking, you can read my other articles on the topic: &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3b4-2ca5"&gt;this&lt;/a&gt; and &lt;a href="https://dev.to/dev-charodeyka/virtualization-on-debian-with-libvirtqemukvm-networking-beyond-default-must-have-concepts-to-2ccn"&gt;this&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the scope of this article, I just want to explain the key point of how NAT affects your devices and any web app you will ever develop and host. &lt;strong&gt;Your devices don't have unique IPv4 addresses when connecting to the internet&lt;/strong&gt;. First of all, you never connect directly to the internet from your devices unless one of them is attached, by wire, to a fiber cable that connects to the global net. Yes, if you think the global internet is wireless, you're mistaken: the global internet is entirely wired, except for internet provided by satellites. So, how do you access the internet from your devices? Mostly wirelessly, I imagine—you connect to a Wi-Fi router. And if you've ever handled internet provision at your home, you might have seen the technician arrive at your house with a thin cable and your soon-to-be Wi-Fi router, and connect the two with a wire! Another alternative is using a SIM card in your router or mobile phone, which receives signals "wirelessly". &lt;/p&gt;

&lt;p&gt;While cellular towers broadcast wireless signals to phones, they themselves rely on wired, high-capacity links to connect to the rest of the carrier’s network. In the case of Wi-Fi routers, a fiber cable is physically brought to your internet provider's premises, which are connected by cable—first to the city, then to the country, and then to the rest of the world.&lt;/p&gt;

&lt;p&gt;Anyway, that was just FYI. What's important for web development is this: in your home setup, your Wi-Fi router has a unique IPv4 address*. (&lt;em&gt;* because, first, this unique IPv4 address is most probably not as long-lasting as the classical physical address of a building—it changes constantly, usually much more than once a day; and second, there is such a thing as CGNAT, which I leave for you to look up&lt;/em&gt;). The devices connected to your Wi-Fi router don't each have a unique public IPv4 address. If you check your PC's IP address, you'll see something like 192.168.x.x.&lt;/p&gt;
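&lt;p&gt;You can verify which ranges are reserved for private use with Python's &lt;code&gt;ipaddress&lt;/code&gt; module; 192.168.0.0/16 is one of the private (RFC 1918) ranges:&lt;/p&gt;

```python
import ipaddress

# 192.168.x.x belongs to a range reserved for private LANs (RFC 1918),
# so it is never routable on the global internet:
print(ipaddress.ip_address("192.168.8.9").is_private)    # True

# A public address, like the one a router gets from the ISP, is not:
print(ipaddress.ip_address("146.190.62.39").is_private)  # False

# And 127.0.0.1 is the loopback address:
print(ipaddress.ip_address("127.0.0.1").is_loopback)     # True
```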

&lt;p&gt;Here is what I see:&lt;br&gt;
(If you're on Mac or Windows, check the web for how to find your IP address information.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 ....
    ...
    inet 127.0.0.1/8 scope host lo
    ...
2: eno1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 ...
    ...
    inet 192.168.8.9/24 brd 192.168.8.255 
    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two things you see in my output: &lt;code&gt;lo&lt;/code&gt; and &lt;code&gt;eno1: 192.168.8.9&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eno1&lt;/code&gt; is the network interface—or the point of connection for my PC. It's the Ethernet cable connected to my Wi-Fi router, and that's its address. What is &lt;code&gt;lo&lt;/code&gt;? It's the &lt;em&gt;loopback&lt;/em&gt; interface. You probably know it well—it's the famous &lt;strong&gt;localhost&lt;/strong&gt; - &lt;code&gt;127.0.0.1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;My Wi-Fi router has a public IPv4 address. It also creates a local network (my home network; LAN - local area network) that spans as far as my router's signal reaches. Any device that can connect to my Wi-Fi router becomes a part of this network. This network is by default a &lt;strong&gt;private&lt;/strong&gt; network, and it uses IP addresses reserved for private usage. So, my PC has the IPv4 address 192.168.8.9, my phone 192.168.8.10, my laptop 192.168.8.5, and so on. This default local network can accommodate up to 254 devices simultaneously connected to my router. With the NAT mechanism, my Wi-Fi router &lt;em&gt;translates&lt;/em&gt; all the &lt;em&gt;requests&lt;/em&gt; and &lt;em&gt;responses&lt;/em&gt; sent to the global net from each device connected to it, granting them access to the internet (e.g. a request: I go to Google from my PC to find a picture of a kitty and then download it).&lt;/p&gt;
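&lt;p&gt;That "254 devices" figure comes straight from the size of a /24 network, which you can check with &lt;code&gt;ipaddress&lt;/code&gt; (192.168.8.0/24 matches the addresses above):&lt;/p&gt;

```python
import ipaddress

# The default home LAN described above is a /24 network:
lan = ipaddress.ip_network("192.168.8.0/24")

# 256 addresses total, minus the network and broadcast addresses:
hosts = list(lan.hosts())
print(len(hosts))           # 254
print(hosts[0], hosts[-1])  # 192.168.8.1 192.168.8.254

# Any device on this LAN falls inside the network:
print(ipaddress.ip_address("192.168.8.9") in lan)  # True
```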

&lt;p&gt;So, how is this all related to web dev?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You cannot directly host anything you develop on a &lt;strong&gt;private IP&lt;/strong&gt; address if you want to expose it to the global net.&lt;/li&gt;
&lt;li&gt;Anything!!! Absolutely anything you see on the web—any page you manage to open, any existing services, anything at all—resides/is stored on a "device" with an IP address.&lt;/li&gt;
&lt;li&gt;Whatever you visit on the web from a browser: A. exposes your external IP address (e.g. the router’s public IP) to the site (little remark: VPNs and proxies can mask your IP :-)); B. exposes its IP address to your browser.&lt;/li&gt;
&lt;li&gt;The alphanumeric part following &lt;code&gt;http(s)://&lt;/code&gt; in a URL (roughly speaking, "the name" of a website) can be represented as an IP—e.g. you can generally replace &lt;code&gt;http(s)://example.com&lt;/code&gt; with &lt;code&gt;http(s)://&amp;lt;ip_address&amp;gt;&lt;/code&gt; and access the site that way. However, some sites have domain-specific configurations, so entering an IP might not always load the intended site. The point is that &lt;strong&gt;a domain name ultimately resolves to an IP&lt;/strong&gt; (more on domains later on!!).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Okay, basta with IP. I introduced key information about IP addresses:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbrr1ixdfs2xt4fiodov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbrr1ixdfs2xt4fiodov.png" alt="envelope with from to as IP addresses" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next up: TCP.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.2 Transmission Control Protocol (TCP)
&lt;/h4&gt;

&lt;p&gt;In my analogy with envelopes and letters, the Transmission Control Protocol plays the role of a &lt;em&gt;delivery service&lt;/em&gt;. Thanks to IP, &lt;em&gt;envelopes&lt;/em&gt; (network packets with data) have a standardized addressing system, so &lt;strong&gt;if they're delivered responsibly&lt;/strong&gt;, the data will arrive from sender to recipient. This is where TCP comes into the picture: its job is to deliver &lt;em&gt;envelopes&lt;/em&gt; (network packets with data) in a reliable way, ensuring they reach their destination.&lt;/p&gt;

&lt;p&gt;TCP isn’t the only protocol on top of IP. There’s also UDP (User Datagram Protocol), but I’ll leave it aside for now. This meme explains the difference well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwjiki2qny0b1i23e3ig.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwjiki2qny0b1i23e3ig.jpg" alt="meme about TCP and UDP diffs" width="640" height="853"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Anyway, since HTTP sits on top of TCP, let’s concentrate on TCP rather than on other protocols.&lt;/p&gt;

&lt;p&gt;To illustrate TCP in action, as well as HTTP and HTTPS, I’ll share the results of some network packets capturing. By the way, continuing my envelope analogy: &lt;em&gt;the HTTP protocol is like the language of the letter (data) inside the envelope (network packets). The sender writes the letter in a language, and the recipient responds in the same language.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;When a letter (data) is sent in the HTTP "language", anyone nosy who intercepts the envelope can open it and read the letter (data)  clearly, because it’s not &lt;strong&gt;encrypted&lt;/strong&gt;. But if the letter (data) is written in HTTPS "language", this nosy someone will only see some general information and then abracadabra instead of the actual content.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;HTTPS is a secure version of HTTP that stops bad people from reading your data while it is being transported. On the modern web, pretty much every server uses HTTPS, so if you don't include it explicitly, the browser assumes that is what you are using and adds it for you.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Anyway, I promised some nerd fun, and by that I mean capturing the network packets traveling in and out of my PC to show you what is inside.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.3 TCP/HTTP Network traffic
&lt;/h4&gt;

&lt;p&gt;For HTTP network traffic, I’ll use this site: &lt;code&gt;http://httpforever.com/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For HTTPS traffic, I’ll use the URL of my DEV.TO profile as an example: &lt;code&gt;https://dev.to/dev-charodeyka&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The tool I’ll use to capture network packets is &lt;a href="https://www.wireshark.org/" rel="noopener noreferrer"&gt;Wireshark&lt;/a&gt; (it has a GUI). I could do the same with &lt;code&gt;tcpdump&lt;/code&gt;, but for demonstration purposes, I want something visual.&lt;/p&gt;

&lt;p&gt;I’ll also be using a software called &lt;code&gt;curl&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;curl - transfer a URL&lt;/em&gt;&lt;br&gt;
&lt;em&gt;curl is a tool for transferring data from or to a server. It supports HTTP and HTTPS protocols.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this setup, &lt;code&gt;curl&lt;/code&gt; will act as the client in the typical client–server relationship. In web dev a browser is usually the client in such a setup. The command I’ll use is curl with a verbosity flag (&lt;code&gt;-v&lt;/code&gt;), so I can show you all the details.&lt;/p&gt;

&lt;p id="curlHTTP"&gt;Curl output&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -v http://httpforever.com/
* Host httpforever.com:80 was resolved.
* IPv6: 2604:a880:4:1d0::1f1:2000
* IPv4: 146.190.62.39
*   Trying [2604:a880:4:1d0::1f1:2000]:80...
* Immediate connect fail for 2604:a880:4:1d0::1f1:2000: Network is unreachable
*   Trying 146.190.62.39:80...
* Connected to httpforever.com (146.190.62.39) port 80
* using HTTP/1.x
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: httpforever.com
&amp;gt; User-Agent: curl/8.12.1
&amp;gt; Accept: */*
&amp;gt;
* Request completely sent off
&amp;lt; HTTP/1.1 200 OK
&amp;lt; Server: nginx/1.18.0 (Ubuntu)
&amp;lt; Date: Mon, 17 Mar 2025 21:00:08 GMT
&amp;lt; Content-Type: text/html
&amp;lt; Content-Length: 5124
&amp;lt; Connection: keep-alive
&amp;lt; Referrer-Policy: strict-origin-when-cross-origin
&amp;lt; X-Content-Type-Options: nosniff
...
&amp;lt;
&amp;lt;!DOCTYPE HTML&amp;gt;
&amp;lt;html&amp;gt;
    &amp;lt;head&amp;gt;
        ....
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Back to the explanation.&lt;/p&gt;

&lt;p&gt;Hear me out! What you’re seeing is exactly the same thing that happens under the hood when I visit this website from my browser. It might be confusing—like, how can the website (the "front end") also be about servers?&lt;/p&gt;

&lt;p&gt;Well, that confusion arises because there’s a common understanding that "frontend" refers to something to be run in a browser (client-facing), and "backend" is the stuff on servers. But actually... everything runs on servers. Hard-to-swallow pill: if you want to self-host your future web apps and avoid paying for web hosting or PaaS, welcome to servers and the Linux world.&lt;/p&gt;

&lt;p&gt;The backend and frontend parts of a web app can reside on the same physical machine (or the same cloud instance). However, on the software level, they are separated. The frontend runs on &lt;strong&gt;web servers&lt;/strong&gt; (software; the name "server" comes from the fact that this software is &lt;em&gt;serving&lt;/em&gt; something).&lt;/p&gt;

&lt;p&gt;This line from &lt;code&gt;curl&lt;/code&gt; output shows which web server the &lt;code&gt;http://httpforever.com/&lt;/code&gt;’s frontend is running on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt; Server: nginx/1.18.0 (Ubuntu)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The physical server runs Ubuntu as its OS and Nginx as the web server. The physical server's IP is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* IPv4: 146.190.62.39
default HTTP port is 80 so we see this:
*   Trying 146.190.62.39:80...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3.4 Understanding Ports
&lt;/h4&gt;

&lt;p&gt;Now, about ports (the &lt;code&gt;:80&lt;/code&gt;). Even though it’s outside the main scope of this article, I’ll explain with an analogy so you understand what a port is.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Imagine you have a house. To enter, you have a door. If you remove the door, you can’t enter the house. If you don’t know where the door is, you also can’t get in. Let’s expand on that: maybe you have a door for you and your family, a little door for your cat, and a big door on the ground floor for your car. If you try to drive your car through the cat door, it won’t work, and if the cat goes through the garage door, it ends up somewhere it doesn’t need to be.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbckxr8td08zco0xhssn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbckxr8td08zco0xhssn.jpg" alt="cat exists from pet door" width="800" height="1006"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s the same concept with ports. HTTPS traffic uses port 443, and HTTP uses port 80 by default. You may face the need to open other ports on your PC/server, as many services have their own default ports—like databases. All these ports need to be open if you want &lt;em&gt;external&lt;/em&gt; access to a given service. And it’s also about what requests or messages you’re pushing through a port. Returning to my analogy: &lt;em&gt;the cat enters the house through its little door looking for food. If it walked into the garage, it wouldn’t find its feeder&lt;/em&gt;. Same idea with software. For example, MongoDB’s default port is 27017. If you send &lt;code&gt;curl&lt;/code&gt; requests there expecting an HTTP response, it won’t work, because MongoDB can’t respond that way.&lt;/p&gt;
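&lt;p&gt;To see that a port is just a number a program listens on, here is a self-contained sketch with Python's &lt;code&gt;socket&lt;/code&gt; module. Binding to port 0 lets the OS pick any free port, so the example runs anywhere:&lt;/p&gt;

```python
import socket
import threading

# Open a "door": a socket listening on a port. Port 0 asks the OS
# to pick any free port for us.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]  # the port number the OS chose

def serve_once():
    # Accept one connection, greet the visitor, close the door.
    conn, _ = server.accept()
    conn.sendall(b"hello through the right door")
    conn.close()

threading.Thread(target=serve_once).start()

# A client reaches the service only by knocking on that exact port:
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1024)
client.close()
server.close()
print(data)  # b'hello through the right door'
```

&lt;p&gt;Connecting to any other port would fail or reach a different program entirely—the same reason a &lt;code&gt;curl&lt;/code&gt; request to MongoDB's port gets no HTTP answer.&lt;/p&gt;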

&lt;p&gt;Before I move on to the same command for an HTTPS site, I want to explain the first line of the &lt;code&gt;curl&lt;/code&gt; output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host httpforever.com:80 was resolved.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4 id="DNS"&gt;
  
  
  3.5 DNS
&lt;/h4&gt;

&lt;p&gt;Have another look at the error I encountered when I dropped an erroneous URL into the browser's search bar: &lt;code&gt;https://differencebetween.com.and.dev/&lt;/code&gt; — a DNS error.&lt;/p&gt;

&lt;p&gt;Basically, the browser displayed an error because I was trying to reach an abracadabra web address, &lt;code&gt;differencebetween.com.and.dev&lt;/code&gt;, using the &lt;code&gt;https&lt;/code&gt; protocol. The browser couldn’t find a server at that "address" because it simply doesn’t exist. But how did the browser figure that out? Where did the browser "look up" that "address"? Does your browser have some kind of "address book" containing all the existing "addresses" on the web? Of course not. Every browser indeed checks an "address book" of the entire Internet, but it does so via the network – it doesn’t store it locally.&lt;/p&gt;

&lt;p&gt;Before I give a name to this "address book of the Internet", I want to explain what is in this book, gracefully connecting all the pieces of information I shared above. However, you can probably already guess what a browser needs to find a server... an IP address, of course! Take another look at the curl output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* Host httpforever.com:80 was resolved.
* IPv6: 2604:a880:4:1d0::1f1:2000
* IPv4: 146.190.62.39
*   Trying [2604:a880:4:1d0::1f1:2000]:80...
* Immediate connect fail for 2604:a880:4:1d0::1f1:2000: Network is unreachable
*   Trying 146.190.62.39:80...
* Connected to httpforever.com (146.190.62.39) port 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing that happened after the &lt;code&gt;curl&lt;/code&gt; request was sent to &lt;code&gt;http://httpforever.com&lt;/code&gt; was that the web address was &lt;em&gt;resolved&lt;/em&gt;. Resolved de facto means that the client (&lt;code&gt;curl&lt;/code&gt; in this case, though the same is completely true for browsers) received the information that &lt;code&gt;httpforever.com&lt;/code&gt; == &lt;code&gt;146.190.62.39&lt;/code&gt; (IPv4) and &lt;code&gt;httpforever.com&lt;/code&gt; == &lt;code&gt;2604:a880:4:1d0::1f1:2000&lt;/code&gt; (IPv6). The client immediately tried to send the HTTP request to the first IP it found, which was the IPv6 address. But as I mentioned, much of the Internet and many servers are still only on IPv4, so the connection failed... then it quickly sent the request to the IPv4 address, and the connection was successful!&lt;/p&gt;
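&lt;p&gt;You can reproduce that "resolved" step yourself with Python's &lt;code&gt;socket.getaddrinfo&lt;/code&gt;, the same OS facility &lt;code&gt;curl&lt;/code&gt; relies on. I use &lt;code&gt;localhost&lt;/code&gt; here so the lookup works even without internet access; a real client would pass a domain such as &lt;code&gt;httpforever.com&lt;/code&gt;:&lt;/p&gt;

```python
import socket

# Ask the OS resolver which IP addresses sit behind a host name.
# "localhost" is used so this sketch runs without internet access.
infos = socket.getaddrinfo("localhost", 80, type=socket.SOCK_STREAM)
for family, _, _, _, sockaddr in infos:
    print(family.name, sockaddr)
# e.g. AF_INET ('127.0.0.1', 80) and, if IPv6 is enabled,
# AF_INET6 ('::1', 80, 0, 0). A client then tries these in order
# until one connection succeeds.
```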

&lt;p&gt;So, where do browsers get a "dictionary" that maps human-readable addresses to IP addresses? Browsers get it from DNS (Domain Name System).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The Domain Name System (DNS) is the phonebook of the Internet. Humans access information online through domain names, like nytimes.com or espn.com. Web browsers interact through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so browsers can load Internet resources. (&lt;a href="https://www.cloudflare.com/learning/dns/what-is-dns/" rel="noopener noreferrer"&gt;Cloudflare: What is DNS?&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I won’t go into too much detail on DNS: what it is for is simple per se, but it is not that simple to configure from a networking standpoint. Here is a schematic representation of how your browser reaches the Domain Name System:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;URL example: https://www.cloudflare.com/learning/dns/what-is-dns/
[Browser] --&amp;gt; (Browser's internal DNS cache)
                    |
                    v
        Found IP of www.cloudflare.com?
        |                |
        No               Yes ---&amp;gt; resolved, sending request...
        |
        v
[Operating System of PC/server] --&amp;gt; (Local DNS cache &amp;amp; configuration)
                    |
                    v
        Found IP of www.cloudflare.com?
        |                |
        No               Yes ---&amp;gt; resolved, sending request...
        |
        v
[Router/Modem] --&amp;gt;Forwards DNS query
               Forwards where? Depends on configuration..
                │             OR               |
                v                              v
  [  External Public DNS           [Internet Service Provider's 
   (e.g. Google, 8.8.8.8)]                 (ISP) DNS] 
                |             OR               |
                v                              v
               Found IP of www.cloudflare.com?
                |                |
                No               Yes ---&amp;gt; resolved, sending request...
                |
                v
         www.cloudflare.com does not exist.   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key takeaway is that a "human" address in a URL is just for people. Underneath, there’s always an IP address. And as a first step, browsers try to look up the "translation" of the provided address into an IP. If it fails, that’s the end of the query and results in an unavoidable error. Another takeaway is that you can't, just out of the blue, decide that your server with IP 123.124.135.5 (for example) will have a domain name like &lt;code&gt;coolest-site.eu&lt;/code&gt; by specifying it in your server’s configurations. I hope that’s pretty obvious.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.6 TCP in action: 3-way handshake
&lt;/h4&gt;

&lt;p&gt;Anyway, here is the nerd fun!!! &lt;em&gt;Haha, I called it nerd fun because once my colleague told me that I need to have a social life after I mentioned that I observed my home lab's network packets for hours to examine a peculiar network anomaly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As I mentioned, I use Wireshark for monitoring network traffic on my PC, and with this tool I can capture all the network packets that pass through it. If I start capturing everything, the output gets flooded in seconds, because any website I open in my browser ends up in the stats table. Currently, my PC is connected to only one network, and I will capture packets traveling via this network. I will filter only the packets for the HTTP example site, &lt;code&gt;http://httpforever.com&lt;/code&gt;; since I already know its IP from the first run of &lt;code&gt;curl&lt;/code&gt;, I can use this network packet filter: &lt;code&gt;ip.addr==146.190.62.39 and tcp.port==80&lt;/code&gt;. So I start capturing with an active packet filter in Wireshark, and then I just open &lt;code&gt;http://httpforever.com&lt;/code&gt; in my browser.&lt;/p&gt;

&lt;p&gt;Here is what I see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi98ek4bhinj1utg8k1cq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi98ek4bhinj1utg8k1cq.png" alt="wireshark http packets capture output" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s just what happened under the hood for one simple action: I tried to load the landing page of &lt;code&gt;http://httpforever.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s decompose it. First, for each network packet (each row in the table above = one network packet) there’s the &lt;code&gt;source&lt;/code&gt; and the &lt;code&gt;destination&lt;/code&gt;. The source of the first packet is the private IP address of my PC, as the communication was started by my PC (specifically, by my browser). The destination of the first network packet is the IP of the site &lt;code&gt;http://httpforever.com&lt;/code&gt;. In the &lt;code&gt;Info&lt;/code&gt; column, the first thing you see is the ports &lt;code&gt;40500 → 80&lt;/code&gt;. Yes, &lt;code&gt;40500&lt;/code&gt; is the port used on my PC during this communication—&lt;em&gt;it’s an ephemeral port, picked by my operating system just for this connection&lt;/em&gt;. Right after the ports in the first row of the Info column (No. 7) you see the abbreviation SYN—SYN (Synchronize) is the first step in making a handshake, &lt;em&gt;like extending your hand to someone else to initiate a handshake&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Next, the server where &lt;code&gt;http://httpforever.com&lt;/code&gt;'s frontend runs sees the handshake request from my browser and &lt;strong&gt;acknowledges&lt;/strong&gt; it - this is the second step of the handshake, SYN-ACK (Synchronize-Acknowledge) (row No. 8 in the table above).&lt;/p&gt;

&lt;p&gt;Next, my browser sends an ACK to confirm the server’s SYN-ACK, and that &lt;strong&gt;completes the handshake&lt;/strong&gt; (row No. 9 in the table above). At this point, a reliable connection is established, and data transmission can begin.&lt;/p&gt;

&lt;p&gt;As you can see, row No. 10 in the table shows that a network packet with an HTTP GET request was sent - the data exchange between my browser and the server of &lt;code&gt;http://httpforever.com&lt;/code&gt; has started. Right after the handshake, my browser immediately requests what it needs. The server acknowledges the GET request and starts sending data (PSH) (rows No. 11 - 13 in the table above).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notice&lt;/strong&gt; that there is &lt;strong&gt;more than one transfer, of different lengths&lt;/strong&gt;, before row No. 18, which contains the HTTP response 200 - OK, meaning that the first GET request from my browser was completed. That is because servers never send everything requested over HTTP in one big mega transfer. For example, if you wanted to download a 10 GB video, the website's server wouldn’t just dump the entire file at once - that would overwhelm your network connection. Instead, the content is split across many network packets. Is there some rule for how the data is split into packets by size? There are limits on the server side and on your PC. In fact, without realizing it, you’ve already seen your PC’s per-packet limit if you connect to the Internet via an Ethernet cable - by default it is 1500 bytes, a value called the MTU (Maximum Transmission Unit). Here is a recap of the &lt;code&gt;$ ip a&lt;/code&gt; command, whose output contains the &lt;code&gt;mtu&lt;/code&gt; value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip a
...
2: eno1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 ...
    ...
    inet 192.168.8.249/24 brd 192.168.8.255 
    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
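&lt;p&gt;To put numbers on the segmentation (a deliberately simplified sketch in Node.js, not how the kernel's TCP stack is actually implemented): with a 1500-byte MTU, each packet typically carries about 40 bytes of IP and TCP headers, leaving roughly 1460 bytes of payload per packet. Here is how a hypothetical 4000-byte response would be cut up:&lt;/p&gt;

```javascript
// Simplified illustration (NOT a real TCP implementation): splitting
// application data into segments that each fit into one 1500-byte frame.
const MTU = 1500;
const MSS = MTU - 40; // 20-byte IP header + 20-byte TCP header: 1460 bytes of payload

function segment(data) {
  const count = Math.ceil(data.length / MSS);
  const chunks = [];
  for (let i = 0; i !== count; i += 1) {
    chunks.push(data.slice(i * MSS, (i + 1) * MSS));
  }
  return chunks;
}

const payload = Buffer.alloc(4000); // e.g. a hypothetical 4000-byte HTML response
const segments = segment(payload);
console.log(segments.length);    // 3 packets
console.log(segments[2].length); // 1080 bytes left over in the last packet
```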



&lt;p&gt;Some packets may not arrive at the client on the first attempt; however, TCP will do its best to ensure that everything is delivered, which is why it is considered a reliable protocol. And that is why, in my analogy with envelopes and letters, I called it a delivery service: three-way handshakes, plus resending network packets if something goes wrong.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5ucw3q9yi46bioha2y2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5ucw3q9yi46bioha2y2.jpg" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://www.europosters.it/woman-receiving-padded-envelope-from-delivery-service-courier-indoors-closeup-f243219110" rel="noopener noreferrer"&gt;Image source&lt;/a&gt;; this picture is used to stress out the fact that TCP acts as a "reliable delivery service" for network packets - not just dropping them "by the door", but requiring the "signature" as a confirmation of delivery



&lt;p&gt;&lt;em&gt;NB! The Wireshark output may not be entirely accurate for visualizing how network packets are split up, because Wireshark takes data segments and reassembles them into the complete application-level message. Still, I wanted to demonstrate that network traffic is more complex than it seems and that it is segmented.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Still, it is very important to know that requested data travels in chunks - it never gets dumped in one bulk. This is key when you have to fetch and process huge amounts of data with your JavaScript/TypeScript code (for example, NDJSON): you can take advantage of streaming approaches instead of fetching an entire file at once and nuking your PC's RAM while trying to process it. Plus, all the "download status bars" are exactly about tracking the chunks of data that arrive and comparing how much has arrived to the total size of the data being downloaded.&lt;/p&gt;
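&lt;p&gt;Here is a sketch of the "download status bar" idea in JavaScript. The chunks are simulated rather than fetched over the network; in real code they would come from &lt;code&gt;response.body.getReader()&lt;/code&gt; on a &lt;code&gt;fetch()&lt;/code&gt; response, and the total would come from the Content-Length header:&lt;/p&gt;

```javascript
// Progress tracking: count each arriving chunk against the total size
// announced by the server (the Content-Length header).
function makeProgressTracker(totalBytes) {
  let received = 0;
  return function onChunk(chunk) {
    received += chunk.length;
    // percentage of the download completed so far
    return Math.round((received / totalBytes) * 100);
  };
}

const total = 10000;               // pretend the server announced Content-Length: 10000
const onChunk = makeProgressTracker(total);
const chunkSizes = [4096, 4096, 1808]; // simulated chunk sizes, made up for illustration
let percent = 0;
chunkSizes.forEach(function (size) {
  percent = onChunk(new Uint8Array(size)); // 41, then 82, then 100
});
console.log(percent); // 100
```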

&lt;p&gt;So, I showed you nerd fun with HTTP network traffic - to help you understand how HTTP works, and what happens when you try to open a website from your browser. Now, what's left to discuss is HTTPS.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. HTTPS
&lt;/h3&gt;

&lt;p&gt;If you look again at &lt;span&gt;the output of a request sent by &lt;code&gt;curl&lt;/code&gt; to a website via HTTP&lt;/span&gt;, the most important part is the last section (which I shortened horribly), but here is the point: after all the info about the HTTP communication, there’s actual data in HTML format - all the elements that will be displayed in your browser after it &lt;span id="htmlHTTP"&gt;reconstructs and renders the DOM based&lt;/span&gt; on this HTML data.&lt;/p&gt;

&lt;p&gt;Also, if you look at row No. 18 in the Wireshark table above, you can see that the content sent to my browser was HTML (text/html).&lt;/p&gt;

&lt;p&gt;Using Wireshark, I can actually follow each network packet. So, if I select the packet that was carrying the data, I can investigate it—and I see the content in plain text. Here it is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcqoju6oa53ai090od1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcqoju6oa53ai090od1z.png" alt=" " width="800" height="747"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Getting ahead, here’s what I see if I capture network traffic for an HTTPS site: in Wireshark, I start tracking TCP port 443 and the IP of the DEV.TO website, start capturing packets, go to  &lt;code&gt;https://dev.to/dev-charodeyka&lt;/code&gt; in my browser, identify any packet with data in Wireshark table, and put my nose there, trying to see the data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumip2wiytn7vqzq6qy1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumip2wiytn7vqzq6qy1h.png" alt=" " width="800" height="704"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Abracadabra! The real content, the HTML data, is hidden. Unlike with HTTP, data transferred via HTTPS is encrypted in transit. So if someone captures it and sniffs the content, they won’t understand what it is.&lt;/p&gt;

&lt;p&gt;How is that possible? I’ll demonstrate by repeating the same procedure I did for the HTTP website—I’ll send a curl to the HTTPS site - &lt;code&gt;https://dev.to/dev-charodeyka&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -v https://dev.to/dev-charodeyka
* Host dev.to:443 was resolved.
* IPv6: (none)
* IPv4: 151.101.66.217, 151.101.2.217, 151.101.194.217, 151.101.130.217
*   Trying 151.101.66.217:443...
* GnuTLS ciphers: NORMAL:-ARCFOUR-128:-CTYPE-ALL:+CTYPE-X509:-VERS-SSL3.0
* ALPN: curl offers h2,http/1.1
* found 152 certificates in /etc/ssl/certs/ca-certificates.crt
* found 456 certificates in /etc/ssl/certs
* SSL connection using TLS1.2 / ECDHE_RSA_CHACHA20_POLY1305
*   server certificate verification OK
*   server certificate status verification SKIPPED
*   common name: dev.to (matched)
*   server certificate expiration date OK
*   server certificate activation date OK
*   certificate public key: RSA
*   certificate version: #3
*   subject: CN=dev.to
*   start date: Tue, 07 Jan 2025 22:00:10 GMT
*   expire date: Sun, 08 Feb 2026 22:00:09 GMT
*   issuer: C=BE,O=GlobalSign nv-sa,CN=GlobalSign Atlas R3 DV TLS CA 2024 Q4
* ALPN: server accepted h2
* Connected to dev.to (151.101.66.217) port 443
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://dev.to/dev-charodeyka
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: dev.to]
* [HTTP/2] [1] [:path: /dev-charodeyka]
* [HTTP/2] [1] [user-agent: curl/8.12.1]
* [HTTP/2] [1] [accept: */*]
&amp;gt; GET /dev-charodeyka HTTP/2
&amp;gt; Host: dev.to
&amp;gt; User-Agent: curl/8.12.1
&amp;gt; Accept: */*
&amp;gt;
* Request completely sent off
&amp;lt; HTTP/2 200
&amp;lt; server: Cowboy
....
&amp;lt; strict-transport-security: max-age=31557600
&amp;lt; content-length: 145678
&amp;lt;
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
  &amp;lt;head&amp;gt;
  ....
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, same logic. Now, I hope you can read the output. &lt;br&gt;
First, the &lt;code&gt;dev.to&lt;/code&gt; address was resolved into an IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host dev.to:443 was resolved
* IPv4: 151.101.66.217, 151.101.2.217, 151.101.194.217, 151.101.130.217
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, not one, but many. Why?&lt;/p&gt;

&lt;h3&gt;
  
  
  4.1 Little extra info: multiple IP addresses and horizontal scaling of web apps
&lt;/h3&gt;

&lt;p&gt;Well, I told you networking stuff is cool, and now, dear reader, if you've read this far, you'll also learn about load balancing and horizontal/vertical scaling. &lt;/p&gt;

&lt;p&gt;Small web applications that don’t have many visitors can run even on a Raspberry Pi (a tiny PC acting as a server). Indeed, they can even run on something like a phone or any device with limited computational power. However, when a website is pretty well known - and it's not just a basic blog with zero interactivity, but more like an online shop or DEV.TO - one single server may not be enough to ensure that each user has a nice, smooth experience browsing it. And it's not only about server hardware specs like super-mega RAM or many, many CPU cores; it's also about the network, haha. Network transmission speed does not multiply with the number of CPU cores or the amount of RAM.&lt;/p&gt;

&lt;p&gt;There are two ways of dealing with the hardware resource deficit of a server where a web app is running: horizontal and vertical scaling. Let’s say you created a web app and hosted it on a Raspberry Pi. Then you added some cool backend algorithms that do something for your users, so even though you only have a few users, the algorithms demand computational power. You upgrade your Raspberry Pi to something more powerful with more RAM and a better CPU. One important detail: your website was &lt;code&gt;coolest.site.eu&lt;/code&gt;, mapped to the IP &lt;code&gt;123.124.6.8&lt;/code&gt; - the IP address of that Raspberry Pi. You replace the Raspberry Pi, move your app to the new device, and also bind the SAME IP to the new device. This is vertical scaling.&lt;/p&gt;

&lt;p&gt;But what if your web app doesn’t do anything too computationally heavy, yet you have thousands of users accessing it simultaneously? Sure, you could replace your Raspberry Pi with a super-mega server, but that might not be the best use of resources. Instead, you could buy four more Raspberry Pis and duplicate your web app on three of them, ending up with four app servers with four different IP addresses - because they can’t all share one IP. Then you’d take the fourth Raspberry Pi, install load-balancing software on it (like Nginx), and use it as a "sorter" of the requests sent by your users' browsers. For example, if a thousand users try to reach your website, they won’t all slam into one single Raspberry Pi. Instead, their browsers’ requests land on the load balancer’s Raspberry Pi first. It sees how busy each Raspberry Pi is and directs traffic to whichever Pi is least busy at that moment. That’s horizontal scaling, and what a load balancer does is balance the load among all available servers.&lt;/p&gt;
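&lt;p&gt;The "sorter" logic can be sketched in a few lines of JavaScript. The hosts and connection counts below are made up for illustration; real load balancers like Nginx implement this "least connections" strategy alongside several others, such as round-robin:&lt;/p&gt;

```javascript
// Toy "least busy" load balancer: forward each incoming request to the
// backend server with the fewest active connections.
const backends = [
  { host: '192.168.8.11', activeConnections: 3 },
  { host: '192.168.8.12', activeConnections: 0 },
  { host: '192.168.8.13', activeConnections: 7 },
];

function pickLeastBusy(servers) {
  let best = servers[0];
  servers.forEach(function (server) {
    // replace "best" only when this server is strictly less busy
    if (best.activeConnections > server.activeConnections) {
      best = server;
    }
  });
  return best;
}

const target = pickLeastBusy(backends);
console.log(target.host); // 192.168.8.12 - the idle backend gets the request
```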

&lt;p&gt;Okay, next. As I mentioned before, HTTPS is the secure version of the HTTP protocol. If you look at the end of the &lt;code&gt;curl&lt;/code&gt; output I shared above, once again we see HTML data in plain text - but only after the client successfully establishes secure communication with one of &lt;code&gt;dev.to&lt;/code&gt;’s servers that hosts the frontend code. That’s why, in the Wireshark screenshot, when I tried to put my nose into an HTTPS network packet, I saw weird symbols instead of plain HTML: the data was encrypted.&lt;/p&gt;

&lt;h4&gt;
  
  
  4.2 About encryption
&lt;/h4&gt;

&lt;p&gt;If I send you the message 12334 145 672 849, like this, you won’t understand it—unless you know we have a decryption map such as 1=h, 2=e, 3=l, 4=o, 5=w, 6=a, 7=r, 8=y, 9=u, which transforms the message into "hello how are you". You can then respond using the same "encryption key". Of course, this is a very simplified example. In real encryption scenarios, there are usually two keys: public and private.&lt;/p&gt;
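&lt;p&gt;The toy scheme above can be written out directly in JavaScript, using the exact digit-to-letter map from the example:&lt;/p&gt;

```javascript
// The toy "decryption map": 1=h, 2=e, 3=l, 4=o, 5=w, 6=a, 7=r, 8=y, 9=u
const map = { 1: 'h', 2: 'e', 3: 'l', 4: 'o', 5: 'w', 6: 'a', 7: 'r', 8: 'y', 9: 'u' };

function decrypt(message) {
  return message
    .split('')
    .map(function (ch) {
      if (ch === ' ') { return ' '; } // spaces pass through unchanged
      return map[ch];                 // each digit becomes a letter
    })
    .join('');
}

console.log(decrypt('12334 145 672 849')); // "hello how are you"
```

Anyone holding the full map can also invert it and read messages, which is exactly why real cryptography splits the knowledge into a public part and a private part.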

&lt;p&gt;In my simple example, the public key would contain this info: 1 -&amp;gt; ?, 2 -&amp;gt; ?, 3 -&amp;gt; ?, etc. From the public key you only get partial information about the encryption - you know how to encrypt the message, i.e. turn letters into numbers.&lt;br&gt;
The private key is the secret part that completes the map - only the holder of the private key can reliably invert the numbers into letters.&lt;br&gt;
The public key is indeed public, and it helps a client figure out how data will be encrypted for the server. And where is the public key stored, when we talk about servers that run websites and expose them via the HTTPS protocol? In a small document called a certificate.&lt;/p&gt;

&lt;p&gt;Returning to the &lt;code&gt;curl&lt;/code&gt; output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0*   Trying 151.101.66.217:443...
* GnuTLS ciphers: NORMAL:-ARCFOUR-128:-CTYPE-ALL:+CTYPE-X509:-VERS-SSL3.0
* ALPN: curl offers h2,http/1.1
* found 152 certificates in /etc/ssl/certs/ca-certificates.crt
* found 456 certificates in /etc/ssl/certs
* SSL connection using TLS1.2 / ECDHE_RSA_CHACHA20_POLY1305
*   server certificate verification OK
*   server certificate status verification SKIPPED
*   common name: dev.to (matched)
*   server certificate expiration date OK
*   server certificate activation date OK
*   certificate public key: RSA
*   certificate version: #3
*   subject: CN=dev.to
*   start date: Tue, 07 Jan 2025 22:00:10 GMT
*   expire date: Sun, 08 Feb 2026 22:00:09 GMT
*   issuer: C=BE,O=GlobalSign nv-sa,CN=GlobalSign Atlas R3 DV TLS CA 2024 Q4
* ALPN: server accepted h2
* Connected to dev.to (151.101.66.217) port 443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, you see a certificate verification step because the client (curl, in this case, but the same goes for a browser) found certificates. Where do they come from, and how does the browser (or curl) verify them?&lt;/p&gt;

&lt;h4&gt;
  
  
  4.3 TLS/SSL certificates
&lt;/h4&gt;

&lt;p&gt;Well, these certificates were created by &lt;code&gt;dev.to&lt;/code&gt;’s server admins. First, two keys were generated on the server: a public and a private key. The certificate includes the server’s public key and some extra data (domain name, organization info, etc.). The &lt;code&gt;dev.to&lt;/code&gt; admins sent this certificate to a Certificate Authority (CA) to have it signed. In &lt;code&gt;dev.to&lt;/code&gt;’s case, I can see the certificate was signed by a CA called GlobalSign:&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*   issuer: C=BE,O=GlobalSign nv-sa,CN=GlobalSign Atlas R3 DV TLS CA 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That CA verified that &lt;code&gt;dev.to&lt;/code&gt; actually owns the domain and, since everything about this server and domain was legitimate, issued a signed certificate. The signed certificate was sent back to the &lt;code&gt;dev.to&lt;/code&gt; server. After this, &lt;code&gt;dev.to&lt;/code&gt;'s server is equipped with its private key (secret) and its public key embedded in the signed certificate (not secret). This signed certificate is the TLS/SSL certificate.&lt;/p&gt;

&lt;p&gt;When I try to access &lt;code&gt;https://dev.to/dev-charodeyka&lt;/code&gt; via a browser:&lt;/p&gt;

&lt;p&gt;I. My browser starts a connection to the dev.to server via the SSL/TLS protocol (SSL, Secure Sockets Layer, is the older name that tools like curl still print; the actual protocol in use here is TLS):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* SSL connection using TLS1.2 / ECDHE_RSA_CHACHA20_POLY1305
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;II. The &lt;code&gt;dev.to&lt;/code&gt; server sends its signed certificate (which contains the public key) to my browser.&lt;br&gt;
III. My browser verifies the certificate (checks expiration, domain match, CA trust, etc.). You can see that checking process in the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*   server certificate verification OK
*   server certificate status verification SKIPPED
*   common name: dev.to (matched)
*   server certificate expiration date OK
*   server certificate activation date OK
*   certificate public key: RSA
*   certificate version: #3
*   subject: CN=dev.to
*   start date: Tue, 07 Jan 2025 22:00:10 GMT
*   expire date: Sun, 08 Feb 2026 22:00:09 GMT
*   issuer: C=BE,O=GlobalSign nv-sa,CN=GlobalSign Atlas R3 DV TLS CA 2024 Q4
* ALPN: server accepted h2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IV. After the certificate is verified and approved by my browser, the &lt;code&gt;dev.to&lt;/code&gt; server and my browser establish a secure channel. Think of it like going into a private room so nobody else can hear what you're discussing with another person. In digital communication, that means all messages are encrypted in such a way that only the sender and recipient can understand them.&lt;/p&gt;

&lt;h4&gt;
  
  
  4.4 Secure communication channel
&lt;/h4&gt;

&lt;p&gt;To create this secure channel, both my browser and &lt;code&gt;dev.to&lt;/code&gt; server need to agree on encryption and decryption keys:&lt;/p&gt;

&lt;p&gt;1) They agree on the TLS (Transport Layer Security) version and cipher format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* SSL connection using TLS1.2 / ECDHE_RSA_CHACHA20_POLY1305
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) The &lt;code&gt;dev.to&lt;/code&gt; server and my browser each create "ephemeral" key pairs for this specific session that my browser initiated.&lt;br&gt;
3) They exchange the public parts of these ephemeral keys (signed with the &lt;code&gt;dev.to&lt;/code&gt; server’s private key to ensure authenticity).&lt;br&gt;
4) A shared secret (the session key) is computed on both ends - by my browser and by the &lt;code&gt;dev.to&lt;/code&gt; server.&lt;/p&gt;

&lt;p&gt;Now, my browser and the &lt;code&gt;dev.to&lt;/code&gt; server each have the same session key. All the HTML, images, and everything else is encrypted with this session key—and it’s unique to this session.&lt;/p&gt;

&lt;p&gt;Finally, once this secure channel is established, my browser sends encrypted HTTP requests and receives encrypted responses (like the HTML code for rendering my &lt;code&gt;dev.to&lt;/code&gt; profile).&lt;/p&gt;

&lt;p&gt;You can compare all these steps to the earlier HTTP Wireshark table - there, everything was visible in plain text. Under HTTPS, it’s all far less transparent. No one intercepting the packets from "outside" the secured channel can read the data unless they managed to "steal" a secret key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzd8ivvrpugtaw06uu8yj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzd8ivvrpugtaw06uu8yj.png" alt=" " width="800" height="794"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Well... I guess that’s all for networking fundamentals in the context of web development. This article got quite long. &lt;/p&gt;

&lt;p&gt;The next (and concluding) part of this series will be mostly centered around the backend side of web development and JavaScript, but from a particular perspective - here is a teaser: can you use JavaScript the same way you use Python?&lt;/p&gt;




&lt;p&gt;Summarizing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Don’t ignore networking and servers. Even if you’re aiming to become a CSS/HTML/JS guru who can create amazing UIs, remember that understanding how servers and networking work is crucial. Otherwise, your frontend code could unintentionally breach security and expose sensitive data. Web dev isn’t just about pretty visuals; it’s also about safe and &lt;strong&gt;efficient&lt;/strong&gt; communication between clients and servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HTTPS is more complex than HTTP; it’s not simply a "version 2" of HTTP. Developing on HTTP and localhost won’t behave exactly the same as when you switch to HTTPS in production. Secure cookies, CORS, and encrypted data transfers all place extra demands on your code once communication moves from HTTP to HTTPS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get familiar with how ports work and how to monitor them. You don’t want to open a bazillion ports in your web app by repeatedly opening up database clients or other services each on a new port.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understanding URLs is vital for setting up routing in your web apps, especially when you deal with protected endpoints that require authorization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understanding IP addresses is just &lt;strong&gt;basic&lt;/strong&gt; for any IT field. It will prevent you from sending your friend a URL to evaluate your cool web app that points at your home network’s 192.168.x.x address.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A privacy-aware bonus that may encourage you to explore in-depth networking and DNS:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you’re privacy-obsessed and always use a VPN to surf the web like an untraceable ninja, ensure that DNS queries are never forwarded to your ISP’s DNS server. If they are, you are a traceable ninja.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>http</category>
      <category>uri</category>
      <category>webdev</category>
      <category>dns</category>
    </item>
    <item>
      <title>Where to Start in Web Development: Your Browser as Your First IDE</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Sun, 09 Mar 2025 22:50:15 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/where-to-start-in-web-development-react-angular-svelte-or-somewhere-else-29a4</link>
      <guid>https://dev.to/dev-charodeyka/where-to-start-in-web-development-react-angular-svelte-or-somewhere-else-29a4</guid>
      <description>&lt;p&gt;The reality of starting web development may be very confusing in the beginning. There are so many resources, courses, tutorials, and so on—but how do you choose the right one? Moreover, chances are that right from the start you’ll bump into React/Angular/Vue, because they are very popular &lt;em&gt;frameworks&lt;/em&gt;, and it might seem like all web development can be done only with one of these frameworks and it begins with picking a "right" framework.&lt;/p&gt;

&lt;p&gt;Plus, a lot of tutorials begin by setting up VS Code as the IDE (Integrated Development Environment). You start using VS Code, learn how to write &lt;code&gt;.html&lt;/code&gt; and &lt;code&gt;.css&lt;/code&gt; files, do some basic JavaScript scripting, and then—for simplicity—you install the VS Code Live Server. You click here, click there, maybe even run &lt;code&gt;npm run...&lt;/code&gt; and—boom—your first web app appears in the browser. Enjoy!&lt;/p&gt;

&lt;p&gt;But wait—what’s actually going on? &lt;code&gt;localhost&lt;/code&gt;? Some port (:3000)? The browser complaining about an unsecured HTTP connection? JavaScript seems weird, and it’s not clear how or where to use it, or what the use cases are. How is the browser displaying your scripts so nicely? What is Node.js, and how did installing it on your PC allow the browser to execute JS scripts? So many questions might come up...&lt;/p&gt;

&lt;p&gt;The worst thing you can do is leave these questions unanswered. As soon as you continue learning without truly understanding, these blind spots keep piling up. Eventually, you might realize you can’t develop anything based on your own ideas—at least not without following a step-by-step tutorial—because you don’t really know where to start.&lt;/p&gt;

&lt;p&gt;In general, if you’re just discovering the web development field and want to begin somewhere, starting with any framework (React, Svelte, Vue, and Angular are frameworks) will only confuse you, because &lt;strong&gt;they’re definitely not the tools to start web development with&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Of course, HTML, CSS, and JavaScript (further - JS) are the fundamentals of web development, but don't worry - this article isn't about telling you that you just need to master them first and you'll become a web dev. &lt;/p&gt;

&lt;p&gt;I am assuming that if you want to do web development, your final aim is to learn how to create web applications or websites—basically something that will be running in a &lt;strong&gt;browser&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And here is the first question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;How do browsers work? What happens "under the hood" when you open your browser and the homepage shows on your screen? How does your browser manage to display all the websites you visit?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You may be aware of the fact that most web applications have a "backend" that resides on some mysterious "servers". &lt;/p&gt;

&lt;p&gt;And here is the second question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What exactly are the "backend" and "server," and how does a browser communicate with a "server"? Which code is considered "frontend," and which is "backend"? What determines whether some code belongs to one or the other?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you cannot answer these questions, that's "where" you must start in web development - by finding the answers to them.&lt;/p&gt;

&lt;p&gt;In this series of articles I will provide the answers to these questions, illustrating them with hands-on code examples. I will be writing code in a very direct way - no IDEs, no frameworks, no extra stuff. Just a browser, my PC's Linux OS, and basic JS and HTML. Because I also want to show you that you can do many things with just basic tools and that only your imagination is the limit. You don't need React, Nginx, Express, Angular, VSCode Live Server or anything else to &lt;strong&gt;start trying&lt;/strong&gt; some web development. &lt;/p&gt;

&lt;p&gt;So, let's start!&lt;/p&gt;




&lt;p&gt;I’ve decided to write a set of articles on web development, rather than pack everything I want to share into one very long piece. This first article in the series—which you’re currently reading—will focus on using the browser as your powerful tool for web development. Here’s a roadmap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Building confidence with a browser

&lt;ol&gt;
&lt;li&gt;Open random files in your browser to see what happens&lt;/li&gt;
&lt;li&gt;Browser's Developer Tools as an IDE (Integrated Development Environment)&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;How does a browser work?

&lt;ol&gt;
&lt;li&gt;What is the &lt;code&gt;document&lt;/code&gt;?&lt;/li&gt;
&lt;li&gt;Browser's rendering engine&lt;/li&gt;
&lt;li&gt;Document Object Model - DOM&lt;/li&gt;
&lt;li&gt;Browser's JavaScript Interpreter&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;The &lt;a href="https://dev.to/dev-charodeyka/where-to-start-in-web-development-ignoring-learning-https-urls-dns-ip-ssl-will-have-57le"&gt;second part&lt;/a&gt; of this web development series focuses on the &lt;strong&gt;browser's networking layer&lt;/strong&gt;, specifically on &lt;strong&gt;URLs&lt;/strong&gt;, the &lt;strong&gt;HTTP protocol&lt;/strong&gt;, and a few &lt;strong&gt;must-know networking concepts&lt;/strong&gt; in general. Although the networking field in IT isn’t simple, it’s a tough pill you have to swallow  - if your goal is to get your web applications to a production-ready stage—rather than leaving them on your PC in some project folder, you must understand networks.&lt;/p&gt;

&lt;p&gt;You might question this statement, especially if you’ve already explored the two well-known branches of web development—backend and frontend—and decided that frontend is what you want to do. However, even if you’ve chosen to focus on one, that doesn’t mean you can completely ignore the other. You still need at least the basic concepts and an understanding of how both "branches" work—frontend and backend. The third part of this set of articles will be dedicated to "backend" with a focus on what makes a code "frontend" or "backend".&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;NB!&lt;/strong&gt; I use Linux, more precisely Debian, and I have an allergy to any other OS. So, if you use Windows or macOS, you might need to google how to do the same things on your system. Anyway, nothing I do in this article depends in any way on the type of operating system.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Building confidence with a browser
&lt;/h3&gt;

&lt;p&gt;If before web development you were just a &lt;em&gt;user&lt;/em&gt; of your preferred browser, that golden time is over - now it's your workhorse if you're aiming to be a front-end developer. As a &lt;em&gt;user&lt;/em&gt;, you might be very picky about browsers, choosing one that suits you. But the moment you decide to do front-end, hehe, you will soon install all the most common browsers for testing purposes - at least one per engine group they use (yes, even Microsoft Edge). I'll explain why later.&lt;/p&gt;

&lt;p&gt;So, the browser... Right now, I am writing this article in the browser. In the tab I am working in, I see something like this in the address bar:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://dev.to/dev-charodeyka/where-to-start-in-web-development-bla-bla/edit&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And if I go to any other webpage, I'll see something like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://some-site/home&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You may have noticed that you can use your browser not just to view websites, but also to open images, PDF files, and other types of files. For example, I don't have any PDF reader on my system, so I use the browser to view PDF files when needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 Open random files in your browser to see what happens
&lt;/h3&gt;

&lt;p&gt;Okay, so I open LibreOffice, drop some random text, and save it as a PDF on my PC and... I open it in the browser by typing in the address bar: &lt;code&gt;file:///home/lalala/Projects/DEVTO/webdev/randomPDF.pdf&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Little remark: pay attention to how a local file stored on my PC is opened in the browser vs how websites are opened: "file:///home/lalala/..." vs "&lt;a href="https://some-site/home." rel="noopener noreferrer"&gt;https://some-site/home&lt;/a&gt;". Do you notice the similar syntax? The key parts are &lt;code&gt;https://&lt;/code&gt; and &lt;code&gt;file://&lt;/code&gt; (the third &lt;code&gt;/&lt;/code&gt; in &lt;code&gt;file:///home/lalala/...&lt;/code&gt; belongs to the absolute path of the file). This is important; I will return to it later in the article.&lt;/em&gt;&lt;/p&gt;
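&lt;p&gt;&lt;em&gt;You can poke at this scheme distinction yourself with the standard &lt;code&gt;URL&lt;/code&gt; class, which works the same in modern browsers and in Node.js (the addresses below are just my made-up examples):&lt;/em&gt;&lt;/p&gt;

```javascript
// Both addresses are URLs; the part before "://" is the scheme (protocol).
// The standard URL class parses them identically in browsers and Node.js.
const page = new URL('https://some-site/home');
const local = new URL('file:///home/lalala/Projects/DEVTO/webdev/randomPDF.pdf');

console.log(page.protocol);  // 'https:'
console.log(local.protocol); // 'file:'
console.log(local.pathname); // '/home/lalala/Projects/DEVTO/webdev/randomPDF.pdf'
```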

&lt;p&gt;And here is what I see when I open a &lt;code&gt;randomPDF.pdf&lt;/code&gt; file in my Browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d68mkhmu0gtumqnx3ht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6d68mkhmu0gtumqnx3ht.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What if I want to view a &lt;em&gt;text&lt;/em&gt; file with the same content instead of a PDF?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# first, I create it 
$ vim randomTXT.txt
Hellow!
I am a TXT file!
I am visualized in a browser!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I paste this into the browser's address bar to open the text file: &lt;code&gt;file:///home/lalala/Projects/DEVTO/webdev/randomTXT.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d32fg5po0v90ph6rnw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d32fg5po0v90ph6rnw8.png" alt=" " width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the text is just text—there’s no bigger font for a title line, because the TXT format does not provide any way to format or style the text... but a &lt;strong&gt;H&lt;/strong&gt;yper&lt;strong&gt;T&lt;/strong&gt;ext &lt;strong&gt;M&lt;/strong&gt;arkup &lt;strong&gt;L&lt;/strong&gt;anguage does—let’s create one!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# first, I create it 
$ vim randomHTML.html
&amp;lt;h1&amp;gt;Hellow!&amp;lt;/h1&amp;gt;
&amp;lt;h3&amp;gt;I am a TXT file!&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;I am visualized in a browser!&amp;lt;/p&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I paste this into the browser's address bar: &lt;code&gt;file:///home/lalala/Projects/DEVTO/webdev/randomHTML.html&lt;/code&gt;&lt;br&gt;
Here is the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgukebw88n8wkqb3akvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgukebw88n8wkqb3akvp.png" alt=" " width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's add some &lt;em&gt;spice&lt;/em&gt; - styling one word with &lt;strong&gt;C&lt;/strong&gt;ascading &lt;strong&gt;S&lt;/strong&gt;tyle &lt;strong&gt;S&lt;/strong&gt;heets, a &lt;em&gt;style sheet language used for specifying the presentation and styling of a document written in a markup language&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vim randomHTML.html
&amp;lt;h1&amp;gt;Hellow!&amp;lt;/h1&amp;gt;
&amp;lt;h3&amp;gt;I am a random &amp;lt;span style="color: red;"&amp;gt;HTML&amp;lt;/span&amp;gt; file!&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;I am visualized in a browser!&amp;lt;/p&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftleydyrwjr7vi2z790zf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftleydyrwjr7vi2z790zf.png" alt=" " width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I demonstrated HTML and CSS in action. If I want to display more text or change the color of some words, is modifying the &lt;code&gt;randomHTML.html&lt;/code&gt; file and then refreshing the page in the browser the only way to see the changes? Heh, of course not. &lt;/p&gt;

&lt;h4&gt;
  
  
  1.2 Browser's Developer Tools as an IDE (Integrated Development Environment)
&lt;/h4&gt;

&lt;p&gt;First, I have to open the developer tools (hereafter, "dev tools") in my browser. If you're not sure how to open dev tools, you can easily find instructions online. A common method is to right-click anywhere on the page and select "Inspect" from the drop-down menu; however, this option may not always be available, as some websites block it. In that case, you can launch the dev tools from your browser's menu, where you can also check the keyboard shortcut they are usually bound to.&lt;/p&gt;

&lt;p&gt;This is what the dev tools look like in Brave, my Chromium-based browser (the screenshot quality is not the best due to the automatic compression of DEV.to attachments):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw70hv6x2i7dvnweqlzlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw70hv6x2i7dvnweqlzlg.png" alt=" " width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The dev tools in Brave are grouped into tabs: Elements, Console, Sources, Network, Performance, Memory, Application, and Security.&lt;/p&gt;



&lt;p&gt;Using dev tools I can add/delete/modify HTML elements and their CSS styles "on the fly". Of course, dev tools are pretty powerful and not meant only for these silly modifications, but I just wanted to demonstrate how easily you can experiment with HTML and CSS directly from a browser.&lt;/p&gt;

&lt;p&gt;However, keep in mind that whatever I do with the browser’s dev tools doesn’t affect the original &lt;code&gt;.html&lt;/code&gt; file that I opened in the browser—all changes will be lost if I don’t save them into a &lt;strong&gt;separate file&lt;/strong&gt;. The same goes for any web page you inspect and modify with dev tools: of course, your changes don't permanently affect the original page in any way.&lt;/p&gt;

&lt;p&gt;Okay, I played a bit with HTML and CSS from &lt;em&gt;Elements&lt;/em&gt; tab of dev tools. There’s another interesting tab in these tools called &lt;em&gt;Console&lt;/em&gt;, which is an interactive shell where I can write some JS code and execute it!&lt;/p&gt;

&lt;p&gt;Some silly "print" of the sum of two numbers and a couple of  evaluations of expressions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//JS code I ran in the Browser's console:
const a = 2
const b = 3
a&amp;gt;b
a===b
a&amp;lt;b
console.log(a+b)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you know what? Why not use JS to add a new HTML element to the content of the &lt;code&gt;.html&lt;/code&gt; file opened in the browser? JS is definitely capable of it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F141hm4m5dqehp1opklsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F141hm4m5dqehp1opklsa.png" alt=" " width="800" height="238"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//JS code I used to add a new text to my HTML document:

//first, I add &amp;lt;div&amp;gt;
const newRandomDiv = document.createElement('div');
//then, I add child of &amp;lt;div&amp;gt;, &amp;lt;p&amp;gt; with some text
newRandomDiv.innerHTML = '&amp;lt;p&amp;gt;Hellow, I was added by JavaScript!!!&amp;lt;/p&amp;gt;';
//my HTML document has the &amp;lt;body&amp;gt;, so I append new element as a child of it
document.body.appendChild(newRandomDiv);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, you might ask: what is the &lt;code&gt;document&lt;/code&gt;? I opened the &lt;em&gt;Console&lt;/em&gt; of dev tools and haven’t declared any &lt;code&gt;document&lt;/code&gt; variable or anything like that—where does it come from? I guess it’s obvious that the document in question is related to the HTML file that’s open in the tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3arznmfq6js18buw5sv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3arznmfq6js18buw5sv9.png" alt=" " width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. How does a browser work?
&lt;/h3&gt;

&lt;p&gt;In the screenshot above, you can see that I used the browser’s JavaScript interactive shell to "print" the &lt;code&gt;document&lt;/code&gt;. From the output it is clear that it contains the HTML elements from the original &lt;code&gt;.html&lt;/code&gt; file I opened in the browser, plus the elements that I added manually, by directly editing the HTML structure in the dev tools and by using JavaScript from the browser. However, what was not originally in the &lt;code&gt;randomHTML.html&lt;/code&gt; file are tags like &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;body&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The &lt;code&gt;&amp;lt;html&amp;gt;&lt;/code&gt; tag represents the root of an HTML document. The &lt;code&gt;&amp;lt;body&amp;gt;&lt;/code&gt; tag defines the document's body.&lt;br&gt;
The &lt;code&gt;&amp;lt;body&amp;gt;&lt;/code&gt; element contains all the contents of an HTML document, such as headings, paragraphs, images, hyperlinks, tables, lists, etc.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  2.1 What is the &lt;code&gt;document&lt;/code&gt;?
&lt;/h4&gt;

&lt;p&gt;Everything that is enclosed between the &lt;code&gt;&amp;lt;html&amp;gt;&amp;lt;/html&amp;gt;&lt;/code&gt; tags is the &lt;code&gt;document&lt;/code&gt;. And, roughly speaking, that is what the front-end part of web development is about: working on this &lt;code&gt;document&lt;/code&gt; - bending it, mangling it, shaping it to make it look how you want. When I opened the &lt;code&gt;randomHTML.html&lt;/code&gt; file with the browser, I &lt;em&gt;loaded this file into the browser&lt;/em&gt;, and its content became part of the &lt;em&gt;document object&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;When an HTML document is loaded into a web browser, it becomes a document object.&lt;br&gt;
The document object is the root node of the HTML document.&lt;br&gt;
The document object is a property of the window object.&lt;br&gt;
The document object is accessed with: &lt;code&gt;window.document&lt;/code&gt; or just &lt;code&gt;document&lt;/code&gt; (&lt;a href="https://www.w3schools.com/jsref/dom_obj_document.asp" rel="noopener noreferrer"&gt;W3Schools: HTML DOM Documents&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What about the TXT and PDF files that I opened before? Well, &lt;strong&gt;their content&lt;/strong&gt; also becomes part of a document object when I open them in the browser.&lt;/p&gt;

&lt;p&gt;This is the &lt;code&gt;.txt&lt;/code&gt; file opened in my browser when I inspect the tab's content with dev tools:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
  &amp;lt;head&amp;gt;
    &amp;lt;meta name="color-scheme" content="light dark"&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;pre style="word-wrap: break-word; white-space: pre-wrap;"&amp;gt;Hellow!
     I am a random TXT file!
     I am visualized in a browser!
    &amp;lt;/pre&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may notice how interestingly the &lt;em&gt;raw text file content was "wrapped" into an HTML document object&lt;/em&gt; by my browser. It is not uncommon for modern browsers to create a minimal HTML "wrapper" for content that they were asked to display and managed to recognize.&lt;/p&gt;

&lt;p&gt;In the plain text file, there was no styling at all, like a bold, enlarged font for the first line. But the content of the &lt;code&gt;.txt&lt;/code&gt; file does have line breaks, and the browser preserved them by attaching CSS rules for word breaking, white space, and word wrapping!&lt;/p&gt;

&lt;p&gt;Let's have a look at the document object structure of the PDF file opened in the browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
  &amp;lt;head&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body style="height: 100%; width: 100%; overflow: hidden; margin:0px; background-color: rgb(82, 86, 89);"&amp;gt;
    &amp;lt;embed name="DDDBE1725B1DA9C2C0B9CFA699CCB3B9" style="position:absolute; left: 0; top: 0;" width="100%" height="100%" src="about:blank" type="application/pdf" internalid="DDDBE1725B1DA9C2C0B9CFA699CCB3B9"&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What's similar to the &lt;code&gt;.txt&lt;/code&gt; case is that the browser’s PDF viewer is embedded in a minimal HTML document. But while with a text file I could see the actual file content directly and even modify it right from the browser, with a PDF file that's not the case. This kind of embedding is a byproduct of how my browser handles and renders PDF files.&lt;/p&gt;

&lt;p&gt;Can I mess with how the PDF file is displayed using dev tools, even though it is originally a PDF file? Sure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2r4kstq9szaxt84swrt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2r4kstq9szaxt84swrt.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the JS code I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const redThingy = document.createElement('div');
redThingy.style.backgroundColor = 'red';

redThingy.style.width = '400px';
redThingy.style.height = '400px';

redThingy.style.position = 'absolute';
redThingy.style.top = '50%';
redThingy.style.left = '50%';
redThingy.style.transform = 'translate(-50%, -50%)';

redThingy.innerHTML = '&amp;lt;p&amp;gt;Hello, I was added by JavaScript to &amp;lt;span style="color: yellow;"&amp;gt;mess up&amp;lt;/span&amp;gt; PDF!!!&amp;lt;/p&amp;gt;';

document.body.appendChild(redThingy);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, obviously, this red box with text didn’t affect the PDF file itself in any way. What my modifications actually affected was just how my browser rendered the opened PDF file.&lt;/p&gt;

&lt;p&gt;I guess it’s time to summarize the point of all of these experiments.&lt;/p&gt;

&lt;p&gt;As you can see, a browser is a powerful tool. It's not just an "app" to open websites with added features like a photo viewer or PDF viewer.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.2 Browser's rendering engine
&lt;/h4&gt;

&lt;p&gt;Any browser has its own engine that &lt;em&gt;renders&lt;/em&gt; what you see on its User Interface (UI). Roughly speaking, to render anything a browser uses &lt;em&gt;instructions&lt;/em&gt; in a markup language—HTML. &lt;/p&gt;

&lt;p&gt;Different browsers work differently. However, let me explain the process in slightly generalized terms: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When you enter a query in the browser's address bar (a URL), the browser's &lt;strong&gt;networking layer&lt;/strong&gt; delivers your "request" to the destination derived from the URL and brings back a response, which could be any type of file or data. Some file types can’t be displayed directly (for example, if the file is corrupted or has an unrecognized format), in which case the browser may display an error or prompt you to download the file instead.&lt;/li&gt;
&lt;li&gt;If the received data is recognized as HTML, the browser parses it to build a Document Object Model (DOM) tree. This process involves reading the HTML and turning it into a structured hierarchy of nodes that represent the page’s elements. If the received data isn’t HTML but the browser can still display it (like plain text or PDF), the browser processes it in a specialized way. For plain text files, most browsers apply a minimal HTML “wrapper”; for PDFs, the browser’s PDF viewer is embedded within an HTML context, so the browser can position it and handle scrolling, zoom, and so on.&lt;/li&gt;
&lt;li&gt;The browser then creates or updates a render tree, which is the combination of the DOM and the CSSOM (the CSS Object Model). The layout step figures out the exact positions and sizes of all elements on the page.&lt;/li&gt;
&lt;li&gt;Finally, the browser paints (or “rasterizes”) the render tree onto your screen. This is what you actually see in the browser’s viewport.&lt;/li&gt;
&lt;/ol&gt;
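&lt;p&gt;To make step 3 a bit more concrete, here is a deliberately toy sketch in plain JS of how a render tree keeps only the nodes that will actually be painted. Real engines are vastly more complex; the object shape here is entirely made up for illustration:&lt;/p&gt;

```javascript
// A toy model of "DOM + styles -> render tree". Not how any real engine
// is implemented; the node objects here are invented for illustration only.
const dom = {
  tag: 'body',
  style: {},
  children: [
    { tag: 'h1',  style: {},                  children: [] },
    { tag: 'div', style: { display: 'none' }, children: [] }, // hidden: excluded
    { tag: 'p',   style: { color: 'red' },    children: [] },
  ],
};

// The render tree keeps only nodes that will actually be painted.
function buildRenderTree(node) {
  if (node.style.display === 'none') return null;
  return {
    tag: node.tag,
    style: node.style,
    children: node.children.map(buildRenderTree).filter(Boolean),
  };
}

const renderTree = buildRenderTree(dom);
console.log(renderTree.children.map(n => n.tag)); // [ 'h1', 'p' ]
```

&lt;p&gt;The hidden &lt;code&gt;div&lt;/code&gt; stays in the DOM but never reaches the render tree, which is exactly why &lt;code&gt;display: none&lt;/code&gt; elements take up no space in the layout step.&lt;/p&gt;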

&lt;p&gt;There are many variables in the rendering process. Every element in the DOM tree is an object that the browser needs to render by calculating its position (this is where &lt;em&gt;responsive design&lt;/em&gt; comes in, since an element’s positioning depends heavily on the available space on the user’s device), appearance, and any dynamic changes. The DOM tree isn’t static; the browser continually re-renders it as elements move or change appearance—like those animations you see on some websites.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.3 Document Object Model - DOM
&lt;/h4&gt;

&lt;p&gt;I want to elaborate more on the DOM tree. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The backbone of any HTML document is its tags. In the Document Object Model (DOM), every HTML tag is an object. Nested tags are considered “children” of the enclosing tag, and even the text inside a tag is treated as an object. The DOM represents HTML as a tree structure of tags. (&lt;a href="https://javascript.info/dom-nodes" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
  &amp;lt;head&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;h1&amp;gt;...&amp;lt;/h1&amp;gt;
    &amp;lt;div&amp;gt;...&amp;lt;/div&amp;gt;
    &amp;lt;...&amp;gt;...&amp;lt;/...&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can notice the nested structure of HTML document: each tag is an object, and the nesting forms a tree-like hierarchy.&lt;/p&gt;
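&lt;p&gt;The same nesting can be modeled with plain JS objects. This is a toy illustration of the idea that every tag is an object and the nesting forms a tree; the object shape is my own invention, not the real DOM API:&lt;/p&gt;

```javascript
// The nested HTML skeleton above as a tree of plain objects.
const tree = {
  tag: 'html',
  children: [
    { tag: 'head', children: [] },
    {
      tag: 'body',
      children: [
        { tag: 'h1', children: [] },
        { tag: 'div', children: [] },
      ],
    },
  ],
};

// Walking the tree the way DOM traversal does: visit a node, then its children.
function collectTags(node, out = []) {
  out.push(node.tag);
  for (const child of node.children) collectTags(child, out);
  return out;
}

console.log(collectTags(tree)); // [ 'html', 'head', 'body', 'h1', 'div' ]
```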

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I'm repeating this: if you're aiming to be a front-end developer, the DOM is everything. In front-end development, almost everything ultimately revolves around DOM manipulation. I've shown how to manipulate the DOM using browser's dev tools—adding HTML elements, applying CSS, and most importantly, doing it through JS. Because in front-end development, JS is primarily about manipulating the objects in the DOM tree, which are (at their core) HTML elements.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All these DOM objects are accessible via JavaScript. I’ve only shown a small part, and it might have seemed very easy. Well, it was easy because I was working &lt;em&gt;directly&lt;/em&gt; with those DOM objects—there were only a few, and &lt;em&gt;everything was clear&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;But remember this: the JS libraries you use to "simplify" certain features—and especially any framework (React, Angular, Vue, etc., though each to a different extent)—actually take you one step further away from direct manipulation of the DOM objects. &lt;/p&gt;

&lt;p&gt;Frameworks abstract direct DOM manipulation, so you write your code in a more &lt;em&gt;declarative&lt;/em&gt; way. Behind the scenes, any framework still manipulates the DOM — but you interact with the framework’s abstractions instead of directly selecting or updating DOM elements.&lt;/p&gt;
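&lt;p&gt;Here is a deliberately tiny sketch of that contrast. Everything in it is made up for illustration and no real framework works this simply: imperative code spells out each DOM operation, while declarative code describes the desired UI as data and lets a "renderer" derive the same operations:&lt;/p&gt;

```javascript
// "ops" stands in for actual DOM calls, so the sketch runs anywhere.
const ops = [];

// Imperative style: spell out each DOM operation step by step.
function imperativeAddMessage(text) {
  ops.push(['createElement', 'p']);
  ops.push(['setText', text]);
  ops.push(['appendTo', 'body']);
}

// Declarative style: describe the desired UI as data for a given state...
function view(state) {
  return state.messages.map(text => ({ tag: 'p', text }));
}

// ...and let a tiny "renderer" derive the operations from that description.
function render(nodes) {
  for (const node of nodes) {
    ops.push(['createElement', node.tag]);
    ops.push(['setText', node.text]);
    ops.push(['appendTo', 'body']);
  }
}

imperativeAddMessage('hello');
render(view({ messages: ['hello'] }));
// Both styles end up issuing the same underlying DOM operations.
console.log(ops.length); // 6
```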

&lt;p&gt;The key point is that these abstractions are not "bad" - they make development more efficient and your code more maintainable, as long as you have a basic understanding of how the DOM works. If you rely on abstractions without knowing what’s happening underneath, you are cooked. No fancy framework will make you a good developer if you don’t at least keep the DOM tree in mind whenever you manipulate it with JS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I’ve shown how to manipulate the DOM directly in the browser—not to promote this style of coding (obviously, no one codes like this for a full project), but to show you what your browser is truly capable of. &lt;strong&gt;Everything I did was handled entirely by the browser—there was no server behind it, no external tools, just the browser.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2.4 Browser's JavaScript Interpreter
&lt;/h4&gt;

&lt;p&gt;And this leads to a very important point: I was able to execute JavaScript in the browser because browsers have a built-in JavaScript interpreter. Think about it—if you’re a Python developer, before you could run Python code, you had to install Python on your PC (unless you're on Linux, where it's usually pre-installed). That’s because Python isn't machine code, so your PC can’t understand it without an interpreter. The same goes for JavaScript. The key point is that modern browsers come with a JavaScript interpreter embedded in their engine.&lt;/p&gt;
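&lt;p&gt;A small illustration of this "same language, different hosts" point: the snippet below runs unchanged in a browser console or in Node.js (which embeds the V8 engine), and a common trick is to check which globals exist to tell the environments apart:&lt;/p&gt;

```javascript
// The same JavaScript source can run in a browser's engine (V8, SpiderMonkey,
// JavaScriptCore...) or outside a browser entirely. A browser provides the
// `document` global; other hosts usually do not, so its presence is a
// common (if rough) way to detect where the code is running.
const inBrowser = typeof document !== 'undefined';
console.log(inBrowser ? 'running inside a browser' : 'running outside a browser');
```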

&lt;p&gt;Sounds great, right? But here’s the cornerstone: &lt;strong&gt;different browsers have slightly different JavaScript interpreters inside their engines&lt;/strong&gt;. For basic, standard JavaScript usage, this isn’t a big deal. But when you get into non-standard usage, things get trickier. And every external library you add to your code potentially brings you closer to using JS in a "non-standard" way that may not be supported by some browsers. And those browsers may be the favorite browsers of your web application's potential users :).  &lt;/p&gt;

&lt;p&gt;To conclude, here is a very &lt;strong&gt;generalized&lt;/strong&gt; scheme of &lt;em&gt;how browsers work&lt;/em&gt;: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsocdg38o2ptgyomppdw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsocdg38o2ptgyomppdw2.png" alt=" " width="742" height="751"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next article of this series on Web Development, I’ll elaborate on the format of the "query" you see in the browser’s search bar, as well as the role of "servers".&lt;/p&gt;

&lt;p&gt;This &lt;em&gt;query&lt;/em&gt; format is very important—I’d even call it crucial—because the routing in your future web apps results in these "queries".&lt;/p&gt;

&lt;p&gt;Jumping ahead, I should mention that "query" is actually a bit of an oversimplification of what appears in the address bar, because the correct technical term is URI (Uniform Resource Identifier). The true &lt;em&gt;query&lt;/em&gt; is just one part of a URI, and URIs are closely tied to the HTTP protocol in web development, which in turn is tied to networking.&lt;/p&gt;
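&lt;p&gt;Jumping slightly ahead as well: the standard &lt;code&gt;URL&lt;/code&gt; class (available in browsers and Node.js) already decomposes such an address into its parts, including the query. The address below is made up for illustration:&lt;/p&gt;

```javascript
// Decomposing a URI-style address with the standard URL class.
const uri = new URL('https://example.com/search?q=webdev#results');

console.log(uri.protocol);              // 'https:'
console.log(uri.hostname);              // 'example.com'
console.log(uri.pathname);              // '/search'
console.log(uri.search);                // '?q=webdev'  (the "query" part)
console.log(uri.searchParams.get('q')); // 'webdev'
console.log(uri.hash);                  // '#results'
```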

&lt;p&gt;In the next article, I’ll do my best to simplify and clarify all these concepts!&lt;/p&gt;




&lt;p&gt;Summarizing the main points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browsers have a built-in JS interpreter and they do run JS code.&lt;/li&gt;
&lt;li&gt;The browser’s JS interpreter is not the same as your PC (or server)’s JS interpreter—they’re different. Keep this in mind.&lt;/li&gt;
&lt;li&gt;The internal components of browsers are complex, and each browser implements them differently. &lt;strong&gt;In my humble opinion, that’s what makes the frontend part harder than the backend in modern architectures.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You need to &lt;em&gt;understand&lt;/em&gt; both backend and frontend to be a good web developer—you can’t just master one and completely ignore the other.&lt;/li&gt;
&lt;li&gt;In front-end development, almost everything ultimately revolves around DOM manipulation (= you have to understand the concept of the DOM as clearly as possible)&lt;/li&gt;
&lt;li&gt;If "network" for you only means "the Wi-Fi at home", you must invest your time in studying networking.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>react</category>
      <category>browser</category>
    </item>
    <item>
      <title>Virtualization on Debian with libvirt&amp;QEMU&amp;KVM — Networking beyond "default": Forwarding Incoming Connections to NAT'ed network</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Mon, 27 Jan 2025 23:30:02 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/virtualization-on-debian-with-libvirtqemukvm-networking-beyond-default-bridged-networking-2ef4</link>
      <guid>https://dev.to/dev-charodeyka/virtualization-on-debian-with-libvirtqemukvm-networking-beyond-default-bridged-networking-2ef4</guid>
      <description>&lt;p&gt;This is the second part of "Networking Beyond "Default". All &lt;em&gt;must-have&lt;/em&gt; theory was explained in detail in the previous part, this article will be very practical.&lt;/p&gt;

&lt;p&gt;If your understanding of the following concepts is very &lt;em&gt;wobbly&lt;/em&gt;, I strongly recommend reading the &lt;a href="https://dev.to/dev-charodeyka/virtualization-on-debian-with-libvirtqemukvm-networking-beyond-default-must-have-concepts-to-2ccn"&gt;previous article&lt;/a&gt;, which covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;nftables&lt;/code&gt; rules that perform network address translation&lt;/li&gt;
&lt;li&gt;The structure of IPv4 addresses and CIDR&lt;/li&gt;
&lt;li&gt;Physical vs Virtual Network Interfaces&lt;/li&gt;
&lt;li&gt;Virtual network switches a.k.a bridges&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Terminology I will use in this article:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Host&lt;/em&gt; – This refers to your PC, on which you set up virtualization and create virtual machines.&lt;br&gt;
&lt;em&gt;Guest&lt;/em&gt; – This is any virtual machine you create.&lt;br&gt;
&lt;em&gt;VM/VMs&lt;/em&gt; – a virtual machine/virtual machines.&lt;br&gt;
&lt;em&gt;LAN&lt;/em&gt; – Local Area Network, managed by my WiFi router provided by my ISP (Internet Service Provider)&lt;br&gt;
&lt;em&gt;NAT&lt;/em&gt; – Network Address Translation.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;DEFAULT&lt;/strong&gt; network&lt;/em&gt; – Libvirt's virtual network that virtual machines are connected to by default (gets started with &lt;code&gt;sudo virsh net-start default&lt;/code&gt;)&lt;br&gt;
&lt;em&gt;Packet&lt;/em&gt; - a network packet is a formatted unit of data carried by a network.&lt;/p&gt;



&lt;p&gt;As I have split the content on virtualization into 3+ articles, I’ll start with a quick overview of my current setup and the goals I want to achieve with different network configurations.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;My virtual machines are created under &lt;code&gt;qemu:///system&lt;/code&gt;, which means I run all &lt;code&gt;virsh&lt;/code&gt; commands with &lt;code&gt;sudo&lt;/code&gt;. Using &lt;code&gt;sudo&lt;/code&gt; privileges allows me to configure the network as I need, whereas &lt;code&gt;qemu:///session&lt;/code&gt; has very limited permissions in hardware virtualization (for more details, check &lt;a href="https://dev.to/dev-charodeyka/virtualization-on-debian-with-virshqemukvm-what-you-need-to-install-and-how-49oo"&gt;this article&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;My host machine runs Debian, with nftables managing network traffic and firewalld serving as the firewall management tool.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I have a virtual machine running &lt;a href="https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-debian/#std-label-install-mdb-community-debian" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt; (VM name: &lt;code&gt;deb-mongo&lt;/code&gt;). This VM was created in the most basic way and is attached to the &lt;strong&gt;DEFAULT&lt;/strong&gt; libvirt network. Here’s what that entails:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;deb-mongo&lt;/code&gt; VM can communicate with other VMs connected to the same &lt;strong&gt;DEFAULT&lt;/strong&gt; network, which operates in the NAT mode.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The VM can communicate with the host in both directions—whether the host initiates the connection (e.g., via SSH) or the VM initiates it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The VM has access to the internet, allowing me to install MongoDB, update system packages, use &lt;code&gt;ping&lt;/code&gt;, &lt;code&gt;curl&lt;/code&gt;, &lt;code&gt;wget&lt;/code&gt;, and so on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No device on my local home network (the same network the host is on) can access this VM directly, and of course, nothing from outside my home LAN can reach it either.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What I want to achieve, for demonstration purposes, is to make the &lt;code&gt;deb-mongo&lt;/code&gt; VM reachable from my laptop, which is on the same local home network as the host (My Desktop PC). I want to be able to connect to MongoDB running on the VM using the MongoDB Compass GUI client on my laptop.&lt;/strong&gt;&lt;/p&gt;



&lt;p&gt;In the previous article, I analyzed libvirt's &lt;strong&gt;DEFAULT&lt;/strong&gt; network in detail and broke its configuration down into small pieces. &lt;em&gt;If you were paying close attention, you will have already understood which &lt;code&gt;nftables&lt;/code&gt; rules block connections from anything other than the host machine to the VMs connected to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In the scope of this article, I plan to enable access to the VM from devices connected to my LAN by modifying the &lt;em&gt;nftables&lt;/em&gt; rules of the &lt;strong&gt;DEFAULT&lt;/strong&gt; network and configuring port forwarding on the host.&lt;/p&gt;

&lt;p&gt;This option is viable; however, it turns libvirt’s well-defined &lt;strong&gt;DEFAULT&lt;/strong&gt; network into something it was not meant to be. &lt;/p&gt;

&lt;p&gt;I wrote this tutorial for educational purposes, as I noticed that questions about how to do this come up on forums for all kinds of distros. Personally, I do not use this network configuration in my home lab, and later in this article it will become clear why.&lt;/p&gt;

&lt;p&gt;At some point, if you are a true “NATophile”, it’s better to define a new virtual network from scratch, with its own rules and NAT, rather than modify libvirt’s &lt;strong&gt;DEFAULT&lt;/strong&gt; one. &lt;/p&gt;

&lt;p&gt;However, I assume you are not; maybe you just think this is the only option to make VMs reachable from other devices connected to the LAN. It is not. In the next article, I will cover a network configuration better suited for the use case where VMs should be accessible members of the LAN.&lt;/p&gt;

&lt;p&gt;Now, let's begin tweaking the &lt;strong&gt;DEFAULT&lt;/strong&gt; network!&lt;/p&gt;



&lt;p&gt;Here is the roadmap for this article:&lt;/p&gt;

&lt;p&gt;➀ DEFAULT libvirt network&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➀.➀ &lt;code&gt;virbrN&lt;/code&gt; and &lt;code&gt;vnetN&lt;/code&gt; - what do they do?&lt;/li&gt;
&lt;li&gt;➀.➁ About DHCP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➁ Configuring static IPv4 address on VM&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➁.➀ Shrinking DHCP range &lt;/li&gt;
&lt;li&gt;➁.➁ Modifying &lt;code&gt;/etc/network/interfaces&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➂ Connecting to a VM from Another Device on the LAN using the Port Forwarding Method:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➂.➀ What is Port Forwarding?&lt;/li&gt;
&lt;li&gt;➂.➁ &lt;code&gt;Firewalld&lt;/code&gt; and closed ports&lt;/li&gt;
&lt;li&gt;➂.➂ Forwarding ports on the host with &lt;code&gt;firewalld&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;➂.➃ Adding a rule to &lt;code&gt;nftables&lt;/code&gt; ruleset to accept inbound network traffic to the VM&lt;/li&gt;
&lt;li&gt;➂.➄ Considerable drawbacks of this method&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  ➀ DEFAULT libvirt network
&lt;/h3&gt;

&lt;p&gt;First, I start the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, then I start the VM &lt;code&gt;deb-mongo&lt;/code&gt; and enter its console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ip a serves to show when virtual network interfaces appear
$ ip a
1: lo
 ...
2: eno1:
    inet 192.168.1.X/24 brd 
3: wlx123456789kl:
    inet 192.168.1.Y/24

$ sudo virsh net-start default

$ ip a
1: lo
 ...
2: eno1:
    inet 192.168.1.X/24 brd 
3: wlx123456789kl:
    inet 192.168.1.Y/24
4: virbr0:
    inet 192.168.122.1/24

$ sudo virsh start deb-mongo

$ ip a
1: lo
 ...
2: eno1:
    inet 192.168.1.X/24 brd 
3: wlx123456789kl:
    inet 192.168.1.Y/24
4: virbr0:
    inet 192.168.122.1/24
5: vnet0: 
    inet ???

$ sudo virsh console deb-mongo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  ➀.➀ &lt;code&gt;virbrN&lt;/code&gt; and &lt;code&gt;vnetN&lt;/code&gt; - what do they do?
&lt;/h4&gt;

&lt;p&gt;First, I want to schematize what these &lt;code&gt;virbr0&lt;/code&gt; and &lt;code&gt;vnet0&lt;/code&gt; network interfaces are. They weren’t there initially (the outputs of &lt;code&gt;ip link&lt;/code&gt;/&lt;code&gt;ip a&lt;/code&gt; before &lt;code&gt;sudo virsh net-start default&lt;/code&gt; did not contain them), but &lt;code&gt;virbr0&lt;/code&gt; showed up after I started the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, and &lt;code&gt;vnet0&lt;/code&gt; appeared when I started the &lt;code&gt;deb-mongo&lt;/code&gt; VM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksf1zd47aqzgoiy1il6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksf1zd47aqzgoiy1il6v.png" alt=" " width="761" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;virbr0&lt;/code&gt; is the virtual network switch, and while its name suggests it is a &lt;em&gt;bridge&lt;/em&gt; (and bridges are generally used to &lt;em&gt;connect different networks&lt;/em&gt;), in the case of the &lt;strong&gt;DEFAULT&lt;/strong&gt; libvirt network, &lt;code&gt;virbr0&lt;/code&gt; doesn't &lt;em&gt;bridge&lt;/em&gt; anything - it &lt;strong&gt;does not&lt;/strong&gt; bridge the &lt;strong&gt;DEFAULT&lt;/strong&gt; network (192.168.122.0/24) to any of the networks the host is connected to. In a more physical sense, no physical network interface is &lt;em&gt;plugged&lt;/em&gt; into &lt;code&gt;virbr0&lt;/code&gt;. All network traffic between the VM and the host, and between the VM and the outside world, is governed by the NAT (Network Address Translation) configuration.&lt;/p&gt;

&lt;p&gt;This is partially why no VM connected to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network can be reached from other devices on your local home network, which the host is connected to. The other reason lies in the rules that define NAT, which essentially translate and isolate the internal network traffic.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vnet0&lt;/code&gt; is a network TUN device. TUN/TAP devices are kernel virtual network devices. Being network devices supported entirely in software, they differ from ordinary network devices which are backed by physical network adapters.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The guest VM will have an associated tun device created with a name of vnetN, which can also be overridden with the &lt;code&gt;&amp;lt;target&amp;gt;&lt;/code&gt; element. The tun device will be attached to the bridge. This provides the guest VM full incoming &amp;amp; outgoing net access just like a physical machine. (&lt;a href="https://libvirt.org/formatdomain.html#network-interfaces" rel="noopener noreferrer"&gt;Libvirt: Network Interfaces&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's move on.&lt;/p&gt;

&lt;p&gt;If you check the latest &lt;code&gt;ip a&lt;/code&gt; output from my host PC above, you'll notice there is no IPv4 address associated with the &lt;code&gt;vnet0&lt;/code&gt; interface, which is the virtual network interface of my VM &lt;code&gt;deb-mongo&lt;/code&gt;. So, to proceed, I need to check the IP address of the VM directly from the VM itself.&lt;/p&gt;

&lt;p&gt;I want to connect to the MongoDB instance running on this VM using the MongoDB Compass GUI client installed &lt;em&gt;on my host machine&lt;/em&gt;. To do this, the first step is finding the IP address of the VM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user-mongo@deb-mongo:~$ ip a
1: lo: 
...
2: enp1s0: 
    inet 192.168.122.196/24 brd 192.168.122.255
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's quite a strange IP address, especially considering this is the second VM created in this network. 192.168.122.1 is the address used by &lt;code&gt;virbr0&lt;/code&gt;, which acts as the router/gateway for this &lt;strong&gt;DEFAULT&lt;/strong&gt; network (I'll elaborate more on it later). So why is my VM's IP .196 instead of something like .2 or .3? Where did it get this address? The answer is that it got it from DHCP - the Dynamic Host Configuration Protocol. &lt;/p&gt;

&lt;h4&gt;
  
  
  ➀.➁ About DHCP
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;When you set up any network, any device connecting to this network needs to have certain information, such as the IP-address of its interface, the IP-address of at least one domain name server, and the IP-address of a server in the LAN that serves as a router to the internet. In the manual setup you have to type in this information for each client anew. With the Dynamic Host Configuration Protocol (DHCP) the computers can do that automatically for you. (&lt;a href="https://wiki.debian.org/DHCP_Server" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Each virtual network switch can be given a range of IP addresses, to be provided to guests through DHCP.&lt;br&gt;
Libvirt uses a program, dnsmasq, for this. An instance of dnsmasq is automatically configured and started by libvirt for each virtual network switch needing it. (&lt;a href="https://wiki.libvirt.org/VirtualNetworking.html#network-address-translation-nat" rel="noopener noreferrer"&gt;Libvirt: Network Address Translation-NAT&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73eo8uv6tfkimfze0k85.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73eo8uv6tfkimfze0k85.jpg" alt=" " width="574" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As stated in the libvirt documentation quoted above, I expect to find in the network configuration the range of IPs that the DHCP server is configured to assign. Indeed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo virsh net-dumpxml default

&amp;lt;network&amp;gt;
  &amp;lt;name&amp;gt;default&amp;lt;/name&amp;gt;
  &amp;lt;uuid&amp;gt;2becd6d0-5a0f-4b45-afff-5a518370fc8c&amp;lt;/uuid&amp;gt;
  &amp;lt;forward mode='nat'&amp;gt;
    &amp;lt;nat&amp;gt;
      &amp;lt;port start='1024' end='65535'/&amp;gt;
    &amp;lt;/nat&amp;gt;
  &amp;lt;/forward&amp;gt;
  &amp;lt;bridge name='virbr0' stp='on' delay='0'/&amp;gt;
  &amp;lt;mac address='MA:C:AD:DD:RE:SS'/&amp;gt;
  &amp;lt;ip address='192.168.122.1' netmask='255.255.255.0'&amp;gt;
    &amp;lt;dhcp&amp;gt;
      &amp;lt;range start='192.168.122.2' end='192.168.122.254'/&amp;gt;
    &amp;lt;/dhcp&amp;gt;
  &amp;lt;/ip&amp;gt;
&amp;lt;/network&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DHCP automatically assigns IPs to the virtual machines connected to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network from the range 192.168.122.2 - 192.168.122.254, based on availability rather than strictly sequential allocation. That’s why my VM ended up with .196.&lt;/p&gt;

&lt;p&gt;What DHCP does respect when assigning IPs is the current state of the network it manages: it keeps track of what’s happening and won’t assign an IP that’s &lt;em&gt;currently&lt;/em&gt; occupied to a new VM joining the network. However, unless configured otherwise, DHCP won’t intervene if two VMs somehow end up with the same IP, and that can create a mess.&lt;/p&gt;

&lt;p&gt;Why would this happen, you might ask? Well, it’s not uncommon to change the DHCP mode from dynamic IP allocation to static IPs (manually assigned). With the default setup, DHCP assigns dynamic addresses, so when a device connects to the network, it gets an IP like X. But if it disconnects and reconnects later, it &lt;em&gt;might&lt;/em&gt; be assigned a completely different IP like Y, which can be very inconvenient for certain services that need to be accessed remotely.&lt;/p&gt;
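&lt;p&gt;To make the "availability rather than strictly sequential" behavior concrete, here is a toy Python model of MAC-keyed lease allocation. This is only an illustration under my own simplified assumptions - the class and the MAC-derived offset are invented for this sketch and are not dnsmasq's actual algorithm:&lt;/p&gt;

```python
import ipaddress

class ToyDhcpPool:
    """Toy model of DHCP allocation: leases are keyed by MAC address,
    a returning client gets its old lease back, and a new client gets
    an address derived from its MAC rather than the next sequential
    one. A sketch for illustration, not dnsmasq's real implementation."""

    def __init__(self, start: str, end: str):
        self.first = int(ipaddress.IPv4Address(start))
        self.size = int(ipaddress.IPv4Address(end)) - self.first + 1
        self.leases = {}   # mac -> IPv4Address
        self.in_use = set()

    def request(self, mac: str) -> ipaddress.IPv4Address:
        if mac in self.leases:          # known client: same IP again
            return self.leases[mac]
        # probe from a MAC-derived offset, take the first free address
        offset = sum(int(byte, 16) for byte in mac.split(":")) % self.size
        for i in range(self.size):
            candidate = self.first + (offset + i) % self.size
            if candidate not in self.in_use:
                self.in_use.add(candidate)
                self.leases[mac] = ipaddress.IPv4Address(candidate)
                return self.leases[mac]
        raise RuntimeError("DHCP pool exhausted")

pool = ToyDhcpPool("192.168.122.2", "192.168.122.254")
ip1 = pool.request("52:54:00:aa:bb:cc")
ip2 = pool.request("52:54:00:aa:bb:cc")  # same MAC, same lease
ip3 = pool.request("52:54:00:11:22:33")  # new MAC, different address
```

&lt;p&gt;The same MAC getting the same lease back is what makes DHCP-assigned addresses mostly stable in practice, even though the address itself looks arbitrary.&lt;/p&gt;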




&lt;h3&gt;
  
  
  ➁ Configuring static IPv4 address on a VM
&lt;/h3&gt;

&lt;p&gt;NB! This is not to say that DHCP randomly assigns IP addresses by just looking at new connections and completely ignoring whether a device (like a VM's network interface) was connected before. In fact, it’s very common for an IP address, once assigned, to persist. This is because the VM’s network interface, even if virtual, has a consistent and unchanging MAC address - so it can be identified.&lt;/p&gt;

&lt;p&gt;Anytime the VM reconnects to the same network, it will almost certainly get the same IP address it was assigned the first time. However, certain factors can sometimes cause DHCP to reassign a different address.&lt;/p&gt;

&lt;p&gt;On top of that, you may want to bring some order to your VMs, especially if there’s a logical structure to them, and you’d like more control over their network configuration. Assigning static IP addresses can help with this. While static IPs introduce some network fragility, they also improve network security, as you can create firewall rules, nftables rules, etc., based on the fixed IPs.&lt;/p&gt;

&lt;p&gt;Of course, you can still set up such rules with DHCP-assigned dynamic IPs. However, the problem arises if the VM’s IP address changes for any reason. In that case, all the carefully imposed network traffic rules would collapse, and the setup would need to be reconfigured.&lt;/p&gt;

&lt;p&gt;Moreover, knowing how to configure a VM's network interface the way you want it is a very nice skill to have, IMHO. &lt;/p&gt;

&lt;p&gt;To manually assign a static IP to a VM, there are two constraints to respect: the network range (I cannot assign an IP address outside of it) and uniqueness (no other VM should have the same IP).&lt;/p&gt;

&lt;h4&gt;
  
  
  ➁.➀ Shrinking DHCP range
&lt;/h4&gt;

&lt;p&gt;To prevent, from the start, the possibility that DHCP assigns to a new VM an IP address I manually &lt;em&gt;glued&lt;/em&gt; to one of the VMs by reconfiguring its network interface, I have to modify the &lt;strong&gt;DEFAULT&lt;/strong&gt; network configuration and shrink the DHCP range, the set of addresses that DHCP is allowed to assign.&lt;/p&gt;

&lt;p&gt;Currently, this range covers all 253 assignable addresses in the 192.168.122.0/24 network (everything except the network address, the gateway/router &lt;code&gt;virbr0&lt;/code&gt; at .1, and the broadcast address). By shrinking this range, I ensure there are IPs left outside the DHCP range that I can assign manually to specific VMs without interference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#to prevent problems with VM connectivity
$ sudo virsh destroy deb-mongo

$ sudo virsh net-edit default
#I replace this:
&amp;lt;network&amp;gt;
...
  &amp;lt;ip address='192.168.122.1' netmask='255.255.255.0'&amp;gt;
    &amp;lt;dhcp&amp;gt;
---&amp;gt; &amp;lt;range start='192.168.122.2' end='192.168.122.254'/&amp;gt;
    &amp;lt;/dhcp&amp;gt;
  &amp;lt;/ip&amp;gt;
&amp;lt;/network&amp;gt;
#with:
&amp;lt;network&amp;gt;
...
  &amp;lt;ip address='192.168.122.1' netmask='255.255.255.0'&amp;gt;
    &amp;lt;dhcp&amp;gt;
---&amp;gt; &amp;lt;range start='192.168.122.101' end='192.168.122.254'/&amp;gt;
    &amp;lt;/dhcp&amp;gt;
  &amp;lt;/ip&amp;gt;
&amp;lt;/network&amp;gt;

# restart DEFAULT network
$ sudo virsh net-destroy default
$ sudo virsh net-start default

# check that the changes took effect:
$ sudo virsh net-dumpxml default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
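&lt;p&gt;A quick way to double-check what the shrunk range leaves free for static assignment is Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module (just a sanity-check sketch, not part of the libvirt tooling):&lt;/p&gt;

```python
import ipaddress

network = ipaddress.IPv4Network("192.168.122.0/24")
gateway = ipaddress.IPv4Address("192.168.122.1")
# the shrunk DHCP range from the net-edit above
dhcp_range = {ipaddress.IPv4Address(i)
              for i in range(int(ipaddress.IPv4Address("192.168.122.101")),
                             int(ipaddress.IPv4Address("192.168.122.254")) + 1)}

# host addresses left over for manual (static) assignment
static_pool = [ip for ip in network.hosts()
               if ip != gateway and ip not in dhcp_range]

print(static_pool[0], "-", static_pool[-1], "->", len(static_pool), "addresses")
# 192.168.122.2 - 192.168.122.100 -> 99 addresses
```

&lt;p&gt;So anything from .2 to .100 can now be glued to a VM manually without DHCP ever handing it out.&lt;/p&gt;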



&lt;h4&gt;
  
  
  ➁.➁ Modifying &lt;code&gt;/etc/network/interfaces&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;As found out earlier, the current IP of the &lt;code&gt;deb-mongo&lt;/code&gt; VM is 192.168.122.196. To change this IP, I will have to modify the network interface configuration stored in &lt;code&gt;/etc/network/interfaces&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user-mongo@deb-mongo:~$ sudo vim.tiny /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug enp1s0
iface enp1s0 inet dhcp &amp;lt;--here is what I need to change

# I comment out the line: #iface enp1s0 inet dhcp and replace it with:
iface enp1s0 inet static
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this line (&lt;code&gt;iface enp1s0 inet static&lt;/code&gt;), I just switched the mode of interface configuration from &lt;code&gt;dhcp&lt;/code&gt; to &lt;code&gt;static&lt;/code&gt;, which means I now have to configure it manually. My goal is to assign an IP address of my choice, which can be set using the &lt;code&gt;address&lt;/code&gt; field. However, if you were attentive to the previous section, you’d know that DHCP doesn’t just assign an IP address—it also provides the device (the network interface of the VM) with crucial information about the network it’s connecting to. More specifically, it provides details such as the size of the network (netmask) and the IP address of the gateway. Since I switched to static mode, I need to manually provide the following details in the configuration: &lt;code&gt;address&lt;/code&gt;, &lt;code&gt;gateway&lt;/code&gt;, and &lt;code&gt;netmask&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;address 192.168.122.10&lt;/code&gt; is the IP address I want this VM to have persistently. You can choose any address outside the DHCP range (now 192.168.122.101 - 192.168.122.254), for example anything from 192.168.122.2 to 192.168.122.100.&lt;/p&gt;

&lt;p&gt;Information about the gateway and netmask is taken from this part of the network configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  &amp;lt;ip address='192.168.122.1' netmask='255.255.255.0'&amp;gt;
    &amp;lt;dhcp&amp;gt;
      &amp;lt;range start='192.168.122.101' end='192.168.122.254'/&amp;gt;
    &amp;lt;/dhcp&amp;gt;
  &amp;lt;/ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Gateway is &lt;code&gt;ip address='192.168.122.1'&lt;/code&gt;, netmask is &lt;code&gt;netmask='255.255.255.0'&lt;/code&gt;. With the netmask, it’s pretty straightforward: it corresponds to the CIDR notation of the network. For example, /24 equals 255.255.255.0; the first 24 bits (the three 255 octets) identify the network itself, and the last octet (the 0) is the host part, available for assignment to devices. This defines the size of the network.&lt;/p&gt;
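&lt;p&gt;The correspondence between /24 and 255.255.255.0 is easy to verify with Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module (a quick illustration, unrelated to the libvirt tooling itself):&lt;/p&gt;

```python
import ipaddress

net = ipaddress.IPv4Network("192.168.122.0/24")
print(net.netmask)             # 255.255.255.0: dotted-quad form of /24
print(net.num_addresses)       # 256 addresses in total...
print(len(list(net.hosts())))  # ...of which 254 are assignable to hosts
print(net.broadcast_address)   # 192.168.122.255
```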

&lt;p&gt;But what about the gateway? Remember the previous article's explanation about network switches? In the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, this job is handled by &lt;code&gt;virbr0&lt;/code&gt;. All VMs connected to this virtual network switch become part of the same network, and they are allowed to communicate with each other (as per the &lt;code&gt;nftables&lt;/code&gt; rules set by libvirt).&lt;/p&gt;

&lt;p&gt;It’s not just that by connecting to the same virtual network switch, VMs can magically communicate &lt;strong&gt;directly&lt;/strong&gt; with one another. No, there is a network switch in between EACH COMMUNICATION, acting like a &lt;em&gt;post office&lt;/em&gt;. When you want to send a letter to someone, you write the letter, put the destination address on it, and bring it to the post office—you don’t usually travel to the recipient’s address and deliver it manually. The post office handles everything, ensuring the letter is delivered.&lt;/p&gt;

&lt;p&gt;The same logic applies to the network switch, and in the case of the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, this role is performed by &lt;code&gt;virbr0&lt;/code&gt;, which acts as the gateway. Gateways, as a technical term, are broader than just routers or switches because they can perform a variety of tasks. It is important for the VM to know where to find the gateway in order to communicate via it with other network devices and with the outside world (if allowed). Regardless, the gateway (&lt;code&gt;virbr0&lt;/code&gt;) will decide how to handle any communication sent from the VMs connected to it. &lt;code&gt;virbr0&lt;/code&gt;'s IP address is 192.168.122.1 (you can validate this with &lt;code&gt;ip a&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This information ends up in the configuration file &lt;code&gt;/etc/network/interfaces&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;....
....
#iface enp1s0 inet dhcp
iface enp1s0 inet static
  address 192.168.122.10
  netmask 255.255.255.0
  gateway 192.168.122.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, I restart the &lt;code&gt;networking&lt;/code&gt; systemd service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user-mongo@deb-mongo:~$ sudo systemctl restart networking
user-mongo@deb-mongo:~$ ip a
1: lo: 
....
2: enp1s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP ...
    link/ether ...
    inet 192.168.122.10/24 brd 192.168.122.255

# to check that nothing got messed up:
user-mongo@deb-mongo:~$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=112 time=17.4 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=112 time=17.0 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=112 time=16.5 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms

# try to update system packages
user-mongo@deb-mongo:~$ sudo apt update &amp;amp;&amp;amp; sudo apt upgrade
...
Fetched 485 kB in 1s (901 kB/s)
...
All packages are up to date.

#try to ssh from HOST MACHINE
ssh user-mongo@192.168.122.10
...
Are you sure you want to continue connecting (yes/no/[fingerprint])?
# It asks you to accept the fingerprint again, because you changed the VM's address
# Anyway, ssh is successful
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NB! If you experience any problems with the VM's connectivity to the internet, try the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Destroy the network with &lt;code&gt;sudo virsh net-destroy default&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;stop the VM with &lt;code&gt;sudo virsh destroy &amp;lt;vm-name&amp;gt;&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;restart the &lt;code&gt;libvirtd&lt;/code&gt; service with &lt;code&gt;sudo systemctl restart libvirtd&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;start the network with &lt;code&gt;sudo virsh net-start default&lt;/code&gt;;&lt;/li&gt;
&lt;li&gt;start the VM again with &lt;code&gt;sudo virsh start &amp;lt;vm-name&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The restarted VM should not have connectivity problems. If connectivity still doesn’t work after these steps, then something is wrong with the configuration.&lt;/p&gt;

&lt;p&gt;So now, I should be able to connect to the MongoDB running on my VM &lt;code&gt;deb-mongo&lt;/code&gt; from the host using the MongoDB Compass GUI client.&lt;/p&gt;

&lt;p&gt;For demonstration purposes, I didn’t configure any sophisticated RBAC for the database, so all I need to connect is the IP address and the port through which the database is accessible. Here is the MongoDB configuration on the VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ user-mongo@deb-mongo:~$ cat /etc/mongof.conf
...
# network interfaces
net:
  port: 27017
  bindIp: 192.168.122.10
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqab1edwtms6s0ru4gv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqab1edwtms6s0ru4gv.png" alt=" " width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, the objective is to modify configurations of the &lt;strong&gt;DEFAULT&lt;/strong&gt; network in such a way that I can connect to MongoDB on the VM from a laptop that is connected via Wi-Fi to the same local home network as my PC (host).&lt;/p&gt;

&lt;h3&gt;
  
  
  ➂ Connecting to a VM from Another Device on the LAN using the Port Forwarding Method:
&lt;/h3&gt;

&lt;p&gt;This is what the port forwarding method is about:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6maj5jmwa4b6xw0m6czt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6maj5jmwa4b6xw0m6czt.png" alt=" " width="800" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I’ve pointed out more than once, 192.168.122.10 belongs to a different network than 192.168.1.0/24 (my home's LAN). &lt;code&gt;virbr0&lt;/code&gt; does not bridge any physical interface of my host PC that is connected to the LAN with the DEFAULT network. As a result, the VM’s address is unreachable for any device connected to the LAN.&lt;/p&gt;

&lt;p&gt;However, the devices connected to my home's LAN can communicate between themselves. &lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➀ What is Port Forwarding?
&lt;/h4&gt;

&lt;p&gt;If you are a developer and work on remote servers using VSCode as an IDE, you’ve most probably already used port forwarding &lt;em&gt;indirectly&lt;/em&gt;. For example, when you’re developing something like a web app in its early stages, it’s common for the app’s components, backend or frontend, to run on the server’s &lt;code&gt;localhost&lt;/code&gt; on specific development ports. However, when you connect to this server via VSCode and update the code, you see the preliminary results in the browser on your development machine (let's call it a laptop), not in the server’s browser.&lt;/p&gt;

&lt;p&gt;Here’s the point: let’s say your frontend app is running on the server’s localhost at port 5173. If you try to access it directly from your laptop’s browser, you won’t see anything. That’s because the app is running on the server, not your laptop. Without port forwarding, the server’s &lt;code&gt;localhost&lt;/code&gt; is inaccessible to your laptop.&lt;/p&gt;

&lt;p&gt;What happens, often with just two clicks—and sometimes even automatically in VSCode—is port forwarding. This forwards the server’s port (e.g., 5173) to a port on your laptop. As a result, you can open the browser on your laptop and see your app at an address like &lt;a href="http://localhost:5173" rel="noopener noreferrer"&gt;http://localhost:5173&lt;/a&gt;. What you’re actually seeing is the server’s localhost:5173, &lt;em&gt;forwarded&lt;/em&gt; to your laptop. &lt;/p&gt;

&lt;p&gt;So, I’m about to do something similar, but with port forwarding on the host to the VM &lt;code&gt;deb-mongo&lt;/code&gt;. On my host machine, there’s no MongoDB installed, but it is running on the VM. I want to use the host machine as a kind of layover "airport" for the network packets traveling from my laptop to the VM to reach the database (the MongoDB Compass GUI client on my laptop exchanges packets with the MongoDB server and translates the packet payloads into the visualizations I see).&lt;/p&gt;

&lt;p&gt;The layover airport analogy fits perfectly here. In real life, layovers are a common practice, and the most interesting parallel is with &lt;strong&gt;visas&lt;/strong&gt;. For instance, let’s say you’re traveling to Brazil from your home country. Based on a bilateral agreement between your country and Brazil, you don’t need a visa to enter Brazil. However, your flight has a layover in the UK, and to enter the UK, you &lt;strong&gt;do need&lt;/strong&gt; a visa. Here’s where it gets interesting: if you’re just transiting through the UK and stay within the airport's transit zone, you don’t need a &lt;em&gt;true&lt;/em&gt; UK visa (let's leave aside transit visas). As long as you don’t exit the airport, everything works perfectly. But if you try to leave the transit zone, UK authorities will block you, because you’re not permitted to &lt;em&gt;enter the UK&lt;/em&gt; without a true visa. &lt;/p&gt;

&lt;p&gt;Similarly, in networking, my host machine acts as the "layover airport." Packets from my laptop need to travel through this transit point (the host) to reach the MongoDB instance running on the VM. As long as the host is correctly configured to forward packets (like a transit zone in an airport), everything flows smoothly. If, however, the laptop tries to directly access the VM (bypassing the host’s forwarding), the packets will fail because they are not permitted to "enter".&lt;/p&gt;

&lt;p&gt;Anyway, why do I speak so much about permissions? Because if you use &lt;code&gt;firewalld&lt;/code&gt; or any other firewall that’s up and running, they usually protect your device from unauthorized access. By default, all ports on your host are closed for &lt;strong&gt;NEW&lt;/strong&gt; connections. However, &lt;strong&gt;ESTABLISHED&lt;/strong&gt; and &lt;strong&gt;RELATED&lt;/strong&gt; connections are often allowed, meaning that network packets can navigate through them. &lt;strong&gt;ESTABLISHED&lt;/strong&gt; refers to traffic belonging to a connection that the device itself already initiated (remember the distinction between inbound vs outbound traffic from the previous article?).&lt;/p&gt;

&lt;p&gt;What I want to do is use a random port, &lt;strong&gt;12345&lt;/strong&gt;, on my host machine as a "dummy port" that my laptop can connect to over the LAN using the TCP protocol. I want all network packets sent by my laptop to my host's IP:12345 to be redirected to the VM &lt;code&gt;deb-mongo&lt;/code&gt;, specifically to MongoDB’s default port, &lt;code&gt;27017&lt;/code&gt;, and the same in reverse for the network packets carrying MongoDB's responses.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➁ Firewalld and closed ports
&lt;/h4&gt;

&lt;p&gt;Now, for demonstration purposes, I’ll show you that a connection to this port (12345) on my host is not allowed by &lt;code&gt;firewalld&lt;/code&gt;. I’m using the &lt;code&gt;netcat-openbsd&lt;/code&gt; package to perform this simulation. It’s installed on both my laptop and my PC (the host).&lt;/p&gt;

&lt;p&gt;On the host, I start listening on port 12345. This is just a random listener; there isn’t any service running on this port. It’s not like the case with MongoDB, where MongoDB listens on its default port and expects specific syntax that it can understand. With &lt;code&gt;netcat&lt;/code&gt;, I can send any nonsense via TCP, and the listener will still accept it without validation—because it’s just a raw socket tool.&lt;/p&gt;

&lt;p&gt;On my host, I start a listener on port 12345:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nc -lv 12345
Listening on 0.0.0.0 12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the laptop I try to send a message to the host private IP, port 12345:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo "hellow from the laptop" | nc -v 192.168.1.106 12345
nc: connect to 192.168.1.106 1235 (tcp) failed: No route to host!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The connection fails. I start by investigating the &lt;code&gt;firewalld&lt;/code&gt; rules because I’m sure it’s responsible for blocking the connection. And indeed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo firewall-cmd --query-port=12345/tcp
no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This confirms that port 12345 is not open (read: no device can connect to my host via this port). Next, I check which zones are being used for my network interfaces. &lt;code&gt;firewalld&lt;/code&gt; applies rules based on zones. However, explaining zones in detail is outside the scope of this article (you can check the documentation &lt;a href="https://firewalld.org/documentation/zone/" rel="noopener noreferrer"&gt;here&lt;/a&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo firewall-cmd --get-active-zones
libvirt
  interfaces: virbr0
public (default)
  interfaces: wlx123456789kl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s not the best idea to modify firewall rules for the &lt;em&gt;public&lt;/em&gt; zone, so I switch the interface to the &lt;em&gt;home&lt;/em&gt; zone, which is actually the technically correct one, since my host is connected to my home LAN.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo firewall-cmd --zone=home --change-interface=wlx123456789kl
success
$ sudo firewall-cmd --zone=home --query-port=12345/tcp
no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Port 12345 is also closed in the home zone, so I could just open it with &lt;code&gt;sudo firewall-cmd --zone=home --add-port=12345/tcp&lt;/code&gt;. But I want it to be accessible only from the private IP address of my laptop, so I add a rich rule instead.&lt;/p&gt;

&lt;p&gt;Remember! Whenever you can, avoid loosely scoped accept rules. Do not open ports to the whole world. Always minimize access and add precise, restrictive, well-defined permissive rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo firewall-cmd --zone=home --add-rich-rule='rule family="ipv4" source address="192.168.1.105" port protocol="tcp" port="12345" accept'
success
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And voilà: the message from my laptop successfully arrived at my host, port 12345:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nc -lv 12345
Listening on 0.0.0.0 12345
Connection received on LAPTOP-1234567.station XXXXX
hellow from the laptop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NB! If you are unfamiliar with &lt;code&gt;firewalld&lt;/code&gt;, you might not have noticed that I didn’t add these rules permanently. This means that when I reboot or reload the &lt;code&gt;firewalld&lt;/code&gt; service, these rules will be lost—which is exactly what I wanted.&lt;/p&gt;
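&lt;p&gt;As a side note, if you &lt;em&gt;did&lt;/em&gt; want to keep rules like these, &lt;code&gt;firewalld&lt;/code&gt; can persist them. A minimal sketch, using the zone and port from my example above:&lt;/p&gt;

```shell
# Copy the whole current runtime configuration (zone changes,
# rich rules, etc.) into the permanent configuration:
sudo firewall-cmd --runtime-to-permanent

# Or add a single rule directly to the permanent configuration,
# then reload to make it active in the runtime as well:
sudo firewall-cmd --permanent --zone=home --add-port=12345/tcp
sudo firewall-cmd --reload
```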

&lt;p&gt;The rule for port forwarding is different: it doesn’t require keeping the forwarded port open on the host. This is where my earlier example with visas applies—packets that are simply transiting through my host don’t need permission from the firewall, because they don’t actually "enter" my host at the end.&lt;/p&gt;

&lt;p&gt;So, I’ve demonstrated and imposed these rules purely for educational purposes, and now I’ll flush them with &lt;code&gt;sudo firewall-cmd --reload&lt;/code&gt;, as they’re no longer needed.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➂ Forwarding ports on the host with &lt;code&gt;firewalld&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;Now, I’ll forward the host’s port 12345 to MongoDB’s default port 27017 on the VM &lt;code&gt;deb-mongo&lt;/code&gt;. I’ll do this by adding a rule with &lt;code&gt;firewalld&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://firewalld.org/documentation/man-pages/firewall-cmd.html" rel="noopener noreferrer"&gt;firewalld documentation&lt;/a&gt;, this is how port forwarding should be done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[--permanent] [--zone=zone] [--permanent] [--policy=policy] --add-forward-port=port=portid[-portid]:proto=protocol[:toport=portid[-portid]][:toaddr=address[/mask]] [--timeout=timeval]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As I flushed the rules from the example above, I first need to repeat the process of changing the zone of my Wi-Fi interface from public to home. Once that’s done, I can apply the port forwarding rule. NB! I do not add the &lt;code&gt;--permanent&lt;/code&gt; option to the commands, but if you plan to keep these &lt;code&gt;firewalld&lt;/code&gt; configurations, you will need to specify it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo firewall-cmd --zone=home --change-interface=wlx123456789kl
success
# THE RULE FOR PORT FORWARDING
$ sudo firewall-cmd --zone=home --add-forward-port=port=12345:proto=tcp:toport=27017:toaddr=192.168.122.10
success
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, can I connect now from my laptop to MongoDB running on the VM &lt;code&gt;deb-mongo&lt;/code&gt;? No. However, half the job is done.&lt;/p&gt;

&lt;p&gt;In the previous article, I described in detail why any inbound traffic is blocked by rules that &lt;code&gt;libvirt&lt;/code&gt; places in &lt;code&gt;nftables&lt;/code&gt;. For the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, these NAT rules are configured to isolate VMs, allowing them to connect outbound but blocking any inbound traffic from external devices (except the host, of course). This is why the connection still fails.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➃ Adding a rule to &lt;code&gt;nftables&lt;/code&gt; ruleset to accept inbound network traffic to the VM
&lt;/h4&gt;

&lt;p&gt;To fix this, I’ll need to adjust these &lt;code&gt;nftables&lt;/code&gt; rules. To see ALL the active &lt;code&gt;nftables&lt;/code&gt; rules, you can use the command &lt;code&gt;sudo nft list ruleset&lt;/code&gt;. To see only the rules related to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, you can specify the &lt;em&gt;table&lt;/em&gt; and its family. The table name is &lt;code&gt;libvirt_network&lt;/code&gt;, and its family is &lt;code&gt;ip&lt;/code&gt;, which stands for IPv4—rules in this table apply ONLY to IPv4 network traffic! I prefer to use &lt;code&gt;sudo nft -a list table ip libvirt_network&lt;/code&gt;.&lt;br&gt;
The &lt;code&gt;-a&lt;/code&gt; option adds a &lt;em&gt;handle&lt;/em&gt; to each rule. This handle acts like an index, allowing me to reference and manipulate any specific rule without having to retype the entire rule.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nft -a list table ip libvirt_network
...
table ip libvirt_network { # handle 6
    chain forward { # handle 1
        type filter hook forward priority filter; policy accept;
        counter packets 728 bytes 637074 jump guest_cross # handle 7
        counter packets 728 bytes 637074 jump guest_input # handle 5
        counter packets 207 bytes 14639 jump guest_output # handle 3
    }

    chain guest_output { # handle 2
        ip saddr 192.168.122.0/24 iif "virbr0" counter packets 1 bytes 76 accept # handle 55
        iif "virbr0" counter packets 0 bytes 0 reject # handle 52
    }

    chain guest_input { # handle 4
        oif "virbr0" ip daddr 192.168.122.0/24 ct state established,related counter packets 1 bytes 76 accept # handle 56
        oif "virbr0" counter packets 9 bytes 468 reject # handle 53
    }

    chain guest_cross { # handle 6
        iif "virbr0" oif "virbr0" counter packets 0 bytes 0 accept # handle 54
    }

    chain guest_nat { # handle 8
        type nat hook postrouting priority srcnat; policy accept;
        ip saddr 192.168.122.0/24 ip daddr 224.0.0.0/24 counter packets 0 bytes 0 return # handle 61
        ip saddr 192.168.122.0/24 ip daddr 255.255.255.255 counter packets 0 bytes 0 return # handle 60
        meta l4proto tcp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade to :1024-65535 # handle 59
        meta l4proto udp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 1 bytes 76 masquerade to :1024-65535 # handle 58
        ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade # handle 57
    }
}
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’ve already dug through the internet searching for an answer about &lt;em&gt;how to add port forwarding to the &lt;strong&gt;DEFAULT&lt;/strong&gt; libvirt network in NAT mode&lt;/em&gt;, or &lt;em&gt;how to use the port forwarding method to access NAT'ed virtual networks&lt;/em&gt;, I imagine your attention is drawn to the &lt;code&gt;chain guest_nat&lt;/code&gt; in the &lt;code&gt;libvirt_network table&lt;/code&gt; displayed above, because it handles NAT. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;However, this is misleading!&lt;/strong&gt; The rule that is actually blocking inbound connections to the VM (&lt;code&gt;deb-mongo&lt;/code&gt;, in my case) from a laptop connected to the LAN is not there - the blocking rule is in the &lt;code&gt;chain guest_input&lt;/code&gt;!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chain guest_input {
          oif "virbr0" ip daddr 192.168.122.0/24 ct state established,related counter packets 1 bytes 76 accept # handle 56
          oif "virbr0" counter packets 9 bytes 468 reject # handle 53
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These two rules, #56 and #53, are the culprits behind the issue. Rule #56 accepts ONLY traffic with a destination address in the &lt;strong&gt;DEFAULT&lt;/strong&gt; network 192.168.122.0/24 &lt;em&gt;that is part of &lt;strong&gt;ESTABLISHED&lt;/strong&gt; and &lt;strong&gt;RELATED&lt;/strong&gt; network connections&lt;/em&gt;. This is why connections between the host and the VMs work seamlessly, as those are ESTABLISHED or RELATED.&lt;br&gt;
Similarly, the return traffic of &lt;strong&gt;outbound&lt;/strong&gt; connections from any VM on this network (e.g., downloading packages or connecting to external resources) also falls under this rule.&lt;/p&gt;

&lt;p&gt;However, when I try to connect to any VM on this network (192.168.122.0/24) from my laptop, which is on the LAN (a "neighbor" of the host), this traffic is part of a NEW connection. This is where Rule #53 comes into the picture and REJECTS any traffic that is part of NEW connections targeting the virtual network devices.&lt;/p&gt;
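&lt;p&gt;If you want to see these connection states with your own eyes, the &lt;code&gt;conntrack&lt;/code&gt; tool (from the &lt;code&gt;conntrack&lt;/code&gt; package; installing it is an extra step I don’t cover here) can dump what the kernel’s connection tracker currently knows:&lt;/p&gt;

```shell
# List the tracked TCP connections and filter for MongoDB's port;
# working sessions show up as ESTABLISHED, while blocked NEW
# attempts either never appear or linger as unreplied entries.
sudo conntrack -L -p tcp | grep 27017
```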

&lt;p&gt;The intuitive solution is to modify rule #56; using its handle, I can do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nft replace rule libvirt_network guest_input handle 56 oif "virbr0" ip daddr 192.168.122.0/24 ct state new,established,related counter accept
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;HOWEVER! Remember, it’s wiser to &lt;strong&gt;avoid adding overly permissive rules&lt;/strong&gt;. So, instead, I’ll specify that only traffic that is part of NEW connections AND:&lt;br&gt;
A) ONLY from the private IP of my laptop&lt;br&gt;
B) ONLY to the VM &lt;code&gt;deb-mongo&lt;/code&gt;&lt;br&gt;
C) ONLY to port 27017 of the VM &lt;code&gt;deb-mongo&lt;/code&gt;&lt;br&gt;
should be accepted, while the rest should fall into rule #53—the rejection rule. &lt;/p&gt;

&lt;p&gt;To do this, I need to add &lt;em&gt;a new rule&lt;/em&gt; to handle NEW traffic.&lt;/p&gt;

&lt;p&gt;Please NOTE! Rule #56, which accepts traffic from ESTABLISHED and RELATED connections, is the &lt;em&gt;first&lt;/em&gt; in the &lt;code&gt;chain guest_input&lt;/code&gt;. Rule #53, which rejects anything that is not accepted by Rule #56, is the &lt;em&gt;second&lt;/em&gt; rule in the chain. If you add a new rule using &lt;code&gt;sudo nft add rule ...&lt;/code&gt;, it will be appended to the end of the chain. This means it will never participate in filtering traffic from NEW connections because all these packets will already be rejected by Rule #53. &lt;strong&gt;The order of rules in chains matters!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of using &lt;code&gt;sudo nft add rule...&lt;/code&gt;, I will use &lt;code&gt;sudo nft insert rule ...&lt;/code&gt;. This command always inserts the rule at the top of the chain, which is not always ideal for different network logic but, in this case, is completely fine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nft insert rule libvirt_network guest_input \
    oif "virbr0" ip saddr 192.168.1.105 ip daddr 192.168.122.10 tcp dport 27017 \
    ct state new counter accept
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;VOILÀ! The connection from my laptop is established right away after this new rule is added:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iss9e83qzfzlwfeebyj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iss9e83qzfzlwfeebyj.jpg" alt=" " width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you may notice, if you are following my steps, nothing needs to be restarted with &lt;code&gt;sudo virsh&lt;/code&gt;: not the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, not the VM, not the &lt;code&gt;systemd networking service&lt;/code&gt; on the host, nor the same service on the VM. That’s the superpower of network traffic management and mangling.&lt;/p&gt;

&lt;p&gt;As I mentioned earlier, &lt;em&gt;&lt;code&gt;virbr0&lt;/code&gt; is not connected in any way to any physical network interfaces of your host&lt;/em&gt;. All the connectivity is managed through the &lt;code&gt;nftables&lt;/code&gt; ruleset in the &lt;code&gt;libvirt_network table&lt;/code&gt; for IPv4 traffic.&lt;/p&gt;
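&lt;p&gt;The inserted rule also gets a handle of its own, so you can later remove it cleanly, again without restarting anything. A sketch (the handle number below is hypothetical; use whatever &lt;code&gt;nft -a&lt;/code&gt; prints on your system):&lt;/p&gt;

```shell
# Re-list the chain with handles to find the freshly inserted rule:
sudo nft -a list chain ip libvirt_network guest_input

# Delete it by its handle once it is no longer needed:
sudo nft delete rule ip libvirt_network guest_input handle 62
```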

&lt;h4&gt;
  
  
  ➂.➄ Considerable drawbacks of this method
&lt;/h4&gt;

&lt;p&gt;In the end, enabling port forwarding on the NAT'ed DEFAULT network takes just two commands: one to add the &lt;code&gt;firewalld&lt;/code&gt; rule that imposes port forwarding on the HOST, and a second to add a rule to the guest_input chain of the libvirt_network ip table in the &lt;code&gt;nftables&lt;/code&gt; ruleset.&lt;/p&gt;

&lt;p&gt;Quite elegant and simple, no? So, what’s the drawback then, besides the complexity of &lt;code&gt;nftables&lt;/code&gt; rules for people completely unfamiliar with them? WELL… the drawback is the persistence of this configuration—or, to be more accurate, zero persistence.&lt;/p&gt;

&lt;p&gt;When I added the port forwarding rule with &lt;code&gt;firewalld&lt;/code&gt;, I noted that I didn’t make it permanent, but you can preserve such rules by using the &lt;code&gt;--permanent&lt;/code&gt; option. What about the &lt;code&gt;nftables&lt;/code&gt; rule I added? How do you preserve it? There is no way.&lt;/p&gt;

&lt;p&gt;If you remember, at the start, I showed you (using &lt;code&gt;ip link&lt;/code&gt;/&lt;code&gt;ip a&lt;/code&gt; outputs) that &lt;code&gt;virbr0&lt;/code&gt; doesn’t exist as an interface before you start the &lt;strong&gt;DEFAULT&lt;/strong&gt; network that is based on it. The same logic applies to the &lt;code&gt;nftables&lt;/code&gt; rules related to this virtual interface. If you list all the rules when the &lt;strong&gt;DEFAULT&lt;/strong&gt; network is down, there will be no &lt;code&gt;libvirt_network table&lt;/code&gt; in &lt;code&gt;nftables&lt;/code&gt; ruleset. But the moment you start the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, the table immediately appears—it is added automatically by &lt;code&gt;libvirt&lt;/code&gt; itself.&lt;/p&gt;

&lt;p&gt;So, any custom rules for the &lt;code&gt;DEFAULT&lt;/code&gt; network you add to &lt;code&gt;nftables&lt;/code&gt; will be lost the moment this network stops and restarts. This is the main drawback of this otherwise clean and straightforward approach. &lt;/p&gt;

&lt;p&gt;There is a workaround, though—a script that can automate the re-adding of the &lt;code&gt;nftables&lt;/code&gt; rule. Even though it’s still a workaround, it is presented in the &lt;a href="https://wiki.libvirt.org/Networking.html#forwarding-incoming-connections" rel="noopener noreferrer"&gt;libvirt documentation on how to forward connections for the DEFAULT network&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, there’s an important note: the documentation’s example is written for &lt;code&gt;iptables&lt;/code&gt;, not &lt;code&gt;nftables&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here’s a tiny reminder: you cannot just drop &lt;code&gt;iptables&lt;/code&gt; rules out of the blue on your Debian system if you have &lt;code&gt;nftables&lt;/code&gt; up and running. At best, the rules will simply not work. At worst, you will mess up all the network traffic on your host :3.&lt;/p&gt;

&lt;p&gt;Here is this script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# IMPORTANT: Change the "VM NAME" string to match your actual VM Name.
# In order to create rules to other VMs, just duplicate the below block and configure
# it accordingly.
if [ "${1}" = "VM NAME" ]; then

   # Update the following variables to fit your setup
   GUEST_IP=
   GUEST_PORT=
   HOST_PORT=

   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
    /sbin/iptables -D FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
    /sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
    /sbin/iptables -I FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
    /sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;em&gt;workaround&lt;/em&gt; is actually a &lt;strong&gt;hook script&lt;/strong&gt;— libvirt's hooks are quite handy for specific system management needs. These hooks can trigger various scripts based on VM-related events like start, shutdown, etc. You can definitely create a similar hook for adding the &lt;code&gt;nftables&lt;/code&gt; rule, and it would even be much shorter than this monstrosity of &lt;code&gt;iptables&lt;/code&gt; rules.  &lt;/p&gt;
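&lt;p&gt;For illustration, here is what such an &lt;code&gt;nftables&lt;/code&gt;-based hook could look like. This is my own hypothetical sketch, not a script from the libvirt docs: the VM name, IPs and port are the ones from my example, and the file would live at &lt;code&gt;/etc/libvirt/hooks/qemu&lt;/code&gt;, marked executable:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical nftables variant of the libvirt qemu hook.
# Adjust the VM name, IPs and port to your own setup.

LAPTOP_IP=192.168.1.105
GUEST_IP=192.168.122.10
GUEST_PORT=27017

if [ "${1}" = "deb-mongo" ]; then
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
      nft insert rule ip libvirt_network guest_input \
         oif "virbr0" ip saddr $LAPTOP_IP ip daddr $GUEST_IP \
         tcp dport $GUEST_PORT ct state new counter accept
   fi
   # No cleanup branch is strictly required here: libvirt rebuilds the
   # libvirt_network table (without custom rules) whenever the DEFAULT
   # network restarts, so the rule disappears on its own.
fi
```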

&lt;p&gt;However, I will not be doing it, nor will I use this configuration for my home lab. That’s because the &lt;strong&gt;DEFAULT network&lt;/strong&gt; is designed for a different purpose—it’s meant to provide a plug-and-play or out-of-the-box experience for new VMs. You create a VM, start it, and everything works from the networking side without additional configuration.  &lt;/p&gt;

&lt;p&gt;I prefer to keep it that way. For accessibility to the VMs from my LAN devices, I use other network configurations that are better suited for this purpose.&lt;/p&gt;

</description>
      <category>libvirt</category>
      <category>debian</category>
      <category>networking</category>
      <category>nftable</category>
    </item>
    <item>
      <title>Virtualization on Debian with libvirt&amp;QEMU&amp;KVM — Networking beyond "default": Must Have Concepts to Start</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Thu, 23 Jan 2025 00:12:56 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/virtualization-on-debian-with-libvirtqemukvm-networking-beyond-default-must-have-concepts-to-2ccn</link>
      <guid>https://dev.to/dev-charodeyka/virtualization-on-debian-with-libvirtqemukvm-networking-beyond-default-must-have-concepts-to-2ccn</guid>
      <description>&lt;p&gt;Initially, I planned to publish all &lt;em&gt;how-to&lt;/em&gt; s for advanced network configurations for virtual machines in one article. However, I am unstoppable when it comes to writing, and the article became quite long. So, I decided to split the theory from the hands-on configurations, XML/Shell scripting, etc.&lt;/p&gt;

&lt;p&gt;You might think you don’t need to read this, but I strongly discourage you from jumping directly to the next article and blindly copying my configurations without understanding what you’re doing.&lt;/p&gt;




&lt;p&gt;NB! I’m not a network engineer, nor have I taken courses on this. I’m actually &lt;em&gt;just a dev&lt;/em&gt; who’s just passionate about Debian and spends way too much time on my PC. So, please &lt;em&gt;don’t throw slippers at me&lt;/em&gt; if I end up sharing any misleading information. I’m simply documenting and describing some of my experiments in this field.&lt;/p&gt;




&lt;p&gt;Terminology I will use in this article, sometimes in acronym form – I try to avoid it, but sometimes it just happens automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Host&lt;/em&gt; – This refers to your PC, on which you set up virtualization and create virtual machines.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Guest&lt;/em&gt; – This is any virtual machine you create.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;VM/VMs&lt;/em&gt; – A virtual machine/virtual machines.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;ISP&lt;/em&gt; – Internet Service Provider.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;LAN&lt;/em&gt; – Local Area Network.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;NAT&lt;/em&gt; – Network Address Translation.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;&lt;strong&gt;DEFAULT&lt;/strong&gt;&lt;/em&gt; network – This refers to the Libvirt's virtual network that virtual machines are connected to by default, within the scope of &lt;code&gt;qemu:///system&lt;/code&gt;. If you do not understand the difference between &lt;code&gt;qemu:///system&lt;/code&gt; and &lt;code&gt;qemu:///session&lt;/code&gt; and how to switch between them, please refer to &lt;a href="https://dev.to/dev-charodeyka/virtualization-on-debian-with-virshqemukvm-what-you-need-to-install-and-how-49oo"&gt;the previous article&lt;/a&gt;. In this article I will be creating/configuring VMs that are in the scope of &lt;code&gt;qemu:///system&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Packet&lt;/em&gt; - A network packet is a formatted unit of data carried by a network.&lt;/li&gt;
&lt;/ul&gt;
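&lt;p&gt;To check on your own host whether this &lt;strong&gt;DEFAULT&lt;/strong&gt; network exists and is running, you can ask libvirt directly (the output shape below is approximate; the values depend on your setup):&lt;/p&gt;

```shell
# List all libvirt networks in the qemu:///system scope,
# including inactive ones:
sudo virsh net-list --all

#  Name      State    Autostart   Persistent
# ------------------------------------------
#  default   active   yes         yes
```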




&lt;p&gt;When it comes to networking, it’s really hard to decide where to even start explaining. To avoid turning this article into some kind of networking handbook—which I don’t have the expertise to write anyway—I’ll probably have to skip over some fundamental concepts.&lt;/p&gt;

&lt;p&gt;However, I’ll do my best to simplify things and provide schemes for the main concepts so you can (hopefully!) follow along.&lt;/p&gt;

&lt;p&gt;Here is the road-map for this article:&lt;/p&gt;

&lt;p&gt;➀ About Public IP address and LAN&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➀.➀ Inbound vs Outbound traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➁ Understanding your LAN and private IP address(s) of your host machine&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➁.➀ Network Interfaces: general information&lt;/li&gt;
&lt;li&gt;➁.➁ Network Interfaces: MAC addresses&lt;/li&gt;
&lt;li&gt;➁.➂ IPv4 vs IPv6 addresses&lt;/li&gt;
&lt;li&gt;➁.➃ IPv4 address ranges reserved for private networks&lt;/li&gt;
&lt;li&gt;➁.➄ IPv4 addresses structure and CIDR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➂ Libvirt's &lt;strong&gt;DEFAULT&lt;/strong&gt; Network: about NAT mode&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➂.➀ About virtual network switches&lt;/li&gt;
&lt;li&gt;➂.➁ Libvirt wants &lt;code&gt;iptables&lt;/code&gt;, Debian has &lt;code&gt;nftables&lt;/code&gt;: what to do?&lt;/li&gt;
&lt;li&gt;➂.➂ DEFAULT Libvirt's network: what's under the hood?&lt;/li&gt;
&lt;li&gt;➂.➃ Nftables rulesets: tables and chains explained&lt;/li&gt;
&lt;li&gt;➂.➄ About how NAT works&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's start!&lt;/p&gt;




&lt;p&gt;I’ve already written some explanations of how your home network works, about public IP address, and ports in &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3b4-2ca5"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is the schematic representation of the most common setup of local "home" network with WiFi router playing the central role in it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8thuop2arcn4wbbhmfgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8thuop2arcn4wbbhmfgo.png" alt=" " width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ➀ About Public IP address and LAN
&lt;/h3&gt;

&lt;p&gt;The most important takeaway from this scheme is that all your devices—whether they connect to your Wi-Fi router wirelessly or physically (via Ethernet cable)—are part of a local network, your home network. This LAN (Local Area Network) is created and managed by your Wi-Fi router.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Most likely&lt;/em&gt;*, only your router has a public IP address! Your devices do not (I wrote &lt;em&gt;most likely&lt;/em&gt; because if you’ve configured your Home Lab to use IPv6, the story is quite different. But I’m guessing you wouldn’t be reading this article if you were capable of doing so! :) ) Instead, each device connected to your Wi-Fi network is assigned a private IP address—one private address per device. If, for some reason, something gets messed up with your router and it assigns the same IP address to two devices, those devices will start having problems accessing the internet (they will start losing some packets - for example, pinging will be unstable, with some percentage of packets lost).&lt;/p&gt;

&lt;p&gt;You can check your public IP address using a website like &lt;a href="https://www.dnsleaktest.com/" rel="noopener noreferrer"&gt;DNS Leak Test&lt;/a&gt;, which will show you the IP address from which you visited the site (what DNS is and what DNS "leaking" means, I will explain later in this article). Your public IP address is seen by the servers hosting the websites you visit. For example, when I upload an image here in articles, the Dev.to servers see the request coming from my public IP address, which is pushing the data.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➀.➀ Inbound vs Outbound traffic
&lt;/h4&gt;

&lt;p&gt;For now, what you know for sure is that a Wi-Fi router is the magic box that allows you to access internet resources from your devices. This is quite obvious because you paid for exactly that. But how does it actually work?&lt;/p&gt;

&lt;p&gt;In the previous part of this series, I mentioned that I couldn’t host anything on my PC - a website - without tweaking the router. In general, when someone tries to reach you via your public IP address, they send you packets with some data/requests. These packets first reach your Wi-Fi router, as it is the one with the public IP address - not the devices connected to the Wi-Fi! What happens next depends on your router’s configuration.&lt;/p&gt;

&lt;p&gt;Will it drop the packets, effectively blocking them, or will it redirect them to the appropriate member of your local network (for example, your PC)? If it does redirect, the router uses the private IP address of the target device (your PC) to forward the packets, as it knows the private addresses of all connected devices. The router will also have to forward the packets to the correct port. However, for all of this to happen, the router needs to be explicitly configured. By default, most home routers will just block incoming traffic, and the packets will simply be dropped. This is why I cannot host any website on my PC without modifying the Wi-Fi router’s configuration.&lt;/p&gt;

&lt;p&gt;However, I can download whatever I want by default. When I download something, it comes to my computer in the form of &lt;em&gt;packets&lt;/em&gt; with data as well. And the stuff that I am downloading is &lt;em&gt;coming in&lt;/em&gt;, not &lt;em&gt;out&lt;/em&gt;. So, it is also a sort of incoming traffic, and yet it does not get blocked by the Wi-Fi router at all.&lt;/p&gt;

&lt;p&gt;The key difference in these two examples is &lt;strong&gt;who initiates the communication&lt;/strong&gt;. When you download something—e.g., using &lt;code&gt;wget&lt;/code&gt;—it’s your device that sends the request, initiating the connection. The server responds and sends packets with the requested data; they arrive at your Wi-Fi router, which redirects them to the device that made the "request". That’s why it works: the Wi-Fi router does not impede this process, because it is &lt;em&gt;outbound&lt;/em&gt; traffic.&lt;/p&gt;

&lt;p&gt;Inbound traffic, on the other hand, refers to situations where the connection isn’t initiated by your device—like when uninvited guests show up at the door of your house. In those cases, the &lt;em&gt;firewall rules&lt;/em&gt; of your router send them away. These rules protect your local home network from unexpected or unwanted traffic.&lt;/p&gt;

&lt;p&gt;In the scope of this article on virtualization, the focus will be on &lt;em&gt;networking "locally"&lt;/em&gt;. It won’t be about creating networks that make your VMs publicly accessible, vulnerable, or anything of that sort. I won’t touch my router configurations, and all configurations will be done on networks that operate &lt;em&gt;behind&lt;/em&gt; a Wi-Fi router.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➁ Understanding your LAN and private IP address(s) of your host machine
&lt;/h3&gt;

&lt;p&gt;As I mentioned, the Wi-Fi router creates a LAN (local area network). Any devices you connect to the Wi-Fi join this LAN, and their communication—both with each other (if configured so) and with the outside world (web) —is managed by the Wi-Fi router.&lt;/p&gt;

&lt;p&gt;The first thing to do is get familiar with the private IP addresses of your host machine: what they mean, why they look the way they do, and how to manage them.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you’re using ONLY Network Manager to handle your network connections... well, put it aside for now. Your new best friend is &lt;code&gt;iproute2&lt;/code&gt;. It should already be installed by default on Debian. The thing is, when you install Network Manager, it can start conflicting with &lt;code&gt;iproute2&lt;/code&gt; configurations. However, on Debian, this is partially resolved by the default setup, where Network Manager only manages wireless network interfaces.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ➁.➀ Network Interfaces: general information
&lt;/h4&gt;

&lt;p&gt;Now, let’s take a look at what’s going on with the networks on my host machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip link show
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; ...
....
2: eno1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; ... state UP ....
   ....
3: wlxXXXXXX43643754XX: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; ... state UP 
   ...
4: virbr0: &amp;lt;NO-CARRIER,BROADCAST,MULTICAST,UP&amp;gt; .... state DOWN ...
    ....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What I have:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;lo&lt;/code&gt;: This is the loopback interface. It’s a virtual network interface used by my OS to communicate with itself. It’s always there—no need to touch it! It won’t participate in the networking configurations I’ll be covering in this article.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eno1&lt;/code&gt;: This is the network interface for the Ethernet cable that physically links my PC to the Wi-Fi router. &lt;strong&gt;This guy will be playing the key role in the networking setups for my VMs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wlxXXXXXX43643754XX&lt;/code&gt;: This is the wireless network interface that comes from my USB Wi-Fi adapter. In my setup, it has higher priority: this interface is used for communication with the Wi-Fi router and within the LAN. However, this wireless network interface &lt;strong&gt;WILL NOT participate in SOME networking setups (bridged networking) for the virtualization process, because it’s problematic.&lt;/strong&gt; Wireless network devices use different drivers and different protocols, and it isn’t trivial to make them participate in bridged networks. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;virbr0&lt;/code&gt;: The last guy in the list, which is currently DOWN, is a virtual bridge. It’s managed by libvirt/QEMU. It is created when the &lt;code&gt;libvirt&lt;/code&gt; daemon is first installed and started. However, it remains down until a libvirt network that uses this interface is started (e.g., &lt;code&gt;sudo virsh net-start default&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Let's look at each network interface in more detail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; ...
....
2: eno1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; ... state UP ....
   link/ether 12:ab:c3:d4:ef:56
   inet 192.168.1.X/24 brd 192.168.1.255
   inet6 fe80::278e:1234:l678:123/64 
3: wlxXXXXXX43643754XX: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; ... state UP 
   link/ether 98:zy:xb:67:kl:34
   inet 192.168.1.Y/24 brd 192.168.1.255
   inet6 fe80::kre7:b5b5:z8y6:987/64 
4: virbr0: &amp;lt;NO-CARRIER,BROADCAST,MULTICAST,UP&amp;gt; .... state DOWN ...
    link/ether 77:mo:5g:k9:r0:46
    inet 192.168.122.1/24 brd 192.168.122.255
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this article section, I will explain what these values are: &lt;code&gt;link/ether&lt;/code&gt;, &lt;code&gt;inet&lt;/code&gt;, and &lt;code&gt;inet6&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➁.➁ Network Interfaces: MAC addresses (&lt;code&gt;link/ether&lt;/code&gt;)
&lt;/h4&gt;

&lt;p&gt;First, I want to draw your attention to the fact that the &lt;code&gt;eno1&lt;/code&gt; and &lt;code&gt;wlxXXXXXX43643754XX&lt;/code&gt; network interfaces provide connections to the same network: my home local network (LAN), managed by my Wi-Fi router. It's just that &lt;code&gt;eno1&lt;/code&gt; originates from the cable (Ethernet), while &lt;code&gt;wlxXXXXXX43643754XX&lt;/code&gt; originates from the Wi-Fi USB adapter, so it's a wireless connection. Why do they have different IP addresses even if they are "attached" to the same network? &lt;/p&gt;

&lt;p&gt;All other devices connected to my house Wi-Fi have private IP addresses - like the laptop at 192.168.1.A, the phone at 192.168.1.B, the PlayStation at 192.168.1.C, etc. So why does my single desktop PC have two addresses in the same network? Does this cause confusion?&lt;/p&gt;

&lt;p&gt;My PC can connect to a home local network (and to any network that is not virtualized by the PC itself, i.e., existing only "inside" the PC) through a NIC (Network Interface Card/Controller). This is a hardware component that is often integrated into the motherboard.&lt;/p&gt;

&lt;p&gt;If you’re using a laptop that doesn’t have a port for an Ethernet cable, it most likely doesn’t have a NIC either. So, even if you imagine buying something like a USB-to-Lightning-to-Type-C adapter just to connect an Ethernet cable to a USB-C port, it won’t work. This is because it’s not just about the "shape" of the cable, but the capability to process the type of communication that comes through it, and this capability is granted by the NIC.&lt;/p&gt;

&lt;p&gt;The solution is to purchase a proper USB hub with a built-in NIC and Ethernet port. The laptop example is a good one because it shows that an Ethernet cable is not the cornerstone of connectivity to networks. Such laptops indeed connect wirelessly, if a network of interest allows so. In this case, they do not use a default NIC but rather a WNIC—a Wireless Network Interface Card.&lt;/p&gt;

&lt;p&gt;A WNIC in a desktop computer often needs to be purchased separately—either as a PCIe network card (to be attached to the motherboard's PCI slot) or as part of a USB Wi-Fi adapter.&lt;/p&gt;

&lt;p&gt;In my case, I have them both, integrated NIC and WNIC of WiFi USB adapter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#list of Ethernet controllers
$ lspci | grep -i ethernet
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Controller
#list of WNIC
$ lsusb
Bus 002 Device 002: ID 1234:5678 TP-Link 802.11ac NIC
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My Wi-Fi router perceives the connection from each (W)NIC as a separate initialization of a connection and assigns a distinct private IP address to each. That’s why my PC, which has two network interfaces, ends up with two IP addresses, one for each (W)NIC. The router cannot identify that both NICs &lt;em&gt;physically&lt;/em&gt; belong to the same device (my PC), because the &lt;em&gt;physical&lt;/em&gt; identifiers are the NICs themselves (not a CPU, not a motherboard, etc.)!&lt;/p&gt;

&lt;p&gt;Each NIC has its own unique MAC address (Media Access Control address), which serves as its &lt;em&gt;unique permanent identifier&lt;/em&gt;. Not just network interfaces: every piece of network-connected hardware &lt;em&gt;MUST HAVE&lt;/em&gt; a unique MAC address in a network. This is different from an IP address because a MAC address is permanent: every NIC has one and only one MAC address, hardcoded by the manufacturer (though sometimes it is possible to change it). You can see the MAC addresses of your network interfaces using the &lt;code&gt;ip link&lt;/code&gt; or &lt;code&gt;ip a&lt;/code&gt; commands, in the lines labeled &lt;code&gt;link/ether&lt;/code&gt;.&lt;/p&gt;
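&lt;p&gt;&lt;em&gt;A quick illustration (a minimal Python sketch, not a setup step): a MAC address is six 8-bit groups written as hex pairs. The sample values below are the anonymized placeholders from my &lt;code&gt;ip a&lt;/code&gt; output; note that the anonymized wireless one isn't even valid hex:&lt;/em&gt;&lt;/p&gt;

```python
import re

# A MAC address: six 8-bit groups, written as hex pairs separated by colons.
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def is_mac(addr: str) -> bool:
    """Return True if addr is formatted like a MAC address."""
    return MAC_RE.match(addr) is not None

print(is_mac("12:ab:c3:d4:ef:56"))  # True: all six pairs are valid hex
print(is_mac("98:zy:xb:67:kl:34"))  # False: 'z', 'y', 'k', 'l' are not hex digits
```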

&lt;p&gt;While MAC addresses are permanent, an IP address is temporary and can change quite often: every time a device connects to a network, the Wi-Fi router may assign it a new private IP address (the probability of this depends on the network configuration and the device’s own settings).&lt;/p&gt;

&lt;p&gt;You can think of a MAC address as a device’s permanent name, while an IP address is more like instructions for other devices on how to communicate with it. For example, your device might always be MAC number A, but at any given moment, it can be located at IP address B or IP address C, etc.&lt;/p&gt;

&lt;p&gt;Now that I’ve covered MAC addresses, it’s time to explain IP addresses.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➁.➂ IPv4 vs IPv6 addresses and NAT
&lt;/h4&gt;

&lt;p&gt;To see the IP addresses of your PC's network interface(s) (roughly speaking, the private IP address(es) of your PC), you can execute the &lt;code&gt;ip a&lt;/code&gt; command. You can scroll up and take another look at my output of this command. Now you know that the &lt;code&gt;link/ether&lt;/code&gt; field tells you the MAC address(es) of the network interfaces. If you’re familiar with IP addresses, you can guess that the &lt;code&gt;inet&lt;/code&gt; field contains IP address information (the address through which other devices on the same network can communicate with the PC). However, &lt;code&gt;inet&lt;/code&gt; is not the only field that shows an IP address of a network interface. The &lt;code&gt;inet6&lt;/code&gt; field (if you have it) shows you another address, and it is also an IP address (even if it looks more like a MAC address)!&lt;/p&gt;

&lt;p&gt;My PC's &lt;code&gt;eno1&lt;/code&gt; interface (Ethernet) has &lt;code&gt;inet&lt;/code&gt; 192.168.1.X/24 and &lt;code&gt;inet6&lt;/code&gt; fe80::278e:1234:l678:123/64; &lt;code&gt;wlxXXXXXX43643754XX&lt;/code&gt; interface (wireless) has &lt;code&gt;inet&lt;/code&gt; 192.168.1.Y/24 and &lt;code&gt;inet6&lt;/code&gt; fe80::kre7:b5b5:z8y6:987/64; &lt;code&gt;virbr0&lt;/code&gt; virtual network interface managed by &lt;code&gt;libvirt&lt;/code&gt; has only &lt;code&gt;inet&lt;/code&gt; 192.168.122.1/24. &lt;/p&gt;

&lt;p&gt;Both &lt;code&gt;inet&lt;/code&gt; and &lt;code&gt;inet6&lt;/code&gt; fields of &lt;code&gt;ip a&lt;/code&gt; output contain completely valid IP addresses - it’s not like what you see in &lt;code&gt;inet6&lt;/code&gt; field is just an encrypted, altered, or transformed version of &lt;code&gt;inet&lt;/code&gt; field value, nor does it indicate a completely different network. The value of &lt;code&gt;inet&lt;/code&gt; field is the IPv4 address, while &lt;code&gt;inet6&lt;/code&gt; displays the IPv6 address.&lt;/p&gt;

&lt;p&gt;IP stands for Internet Protocol and it has two major versions: IPv4 and IPv6. IPv6 is the newer, more advanced version, offering enhanced features, greater capabilities, and significant potential for addressing future network demands.&lt;/p&gt;

&lt;p&gt;In a previous article, I already touched on the topic of IPv4 vs. IPv6 differences. I’ll share the most relevant takeaway as it relates to this article:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A unique public IP address is a scarce resource! Actually, a unique public IPv4 address is in deficit. Internet Protocol version 4 (IPv4) forms the foundation of most Global Internet traffic today. An IP Address represented under IPv4 is composed of four sets of numbers ranging from 0 to 255, separated by periods(.).&lt;br&gt;
If you do the straightforward math - total four numbers in an IPv4 address; each number can be in range between 0 and 255 (256 possible values) - 256 * 256 * 256 * 256 = 4,294,967,296 total addresses.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first distinction is that IPv6 addresses are not just numerical; they are alphanumeric. IPv6 can provide many more unique addresses than the IPv4 system (340 trillion trillion trillion unique addresses vs 4 billion). But this is not the only difference. IPv6, as a protocol, is more modern and from the beginning was designed to be more secure. &lt;/p&gt;
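&lt;p&gt;&lt;em&gt;The quoted numbers are easy to verify yourself (a small Python sketch, just for the arithmetic):&lt;/em&gt;&lt;/p&gt;

```python
# Four octets of 0-255 each is the same as 32 bits:
ipv4_total = 256 ** 4
assert ipv4_total == 2 ** 32  # 4,294,967,296 (~4 billion)

# IPv6 addresses are 128 bits long:
ipv6_total = 2 ** 128

print(f"IPv4: {ipv4_total:,}")
print(f"IPv6: {ipv6_total:.3e}")  # ~3.4e+38, the "340 trillion trillion trillion"
```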

&lt;p&gt;It is not that IPv4 was designed without security in mind; it's just that when it was created, the internet had far fewer users, and the number of devices worldwide was significantly lower. Let me illustrate how this affects the security mechanisms.&lt;/p&gt;

&lt;p&gt;Security mechanisms, such as encryption, inevitably introduce additional computational load. For example, imagine my PC is communicating with your PC using an encryption mechanism. All the &lt;em&gt;packets&lt;/em&gt; we exchange are heavily encrypted, and our devices communicate using public IP addresses, which act like the destination addresses for our data packets (like mailing letters). My device with its public IP address communicates with your device at its public IP address.&lt;/p&gt;

&lt;p&gt;This scenario demonstrates the end-to-end principle of communication: my device can communicate with yours securely, with security mechanisms implemented directly in the communicating end nodes (our PCs, that have encryption/decryption keys). The &lt;em&gt;intermediary nodes&lt;/em&gt; (like gateways and routers) don’t take part in our secure communication.&lt;/p&gt;

&lt;p&gt;So, are gateways and routers bad? Not at all! However, in this example, gateways and routers can A) introduce computational overhead and B) make some encryption mechanisms impossible. &lt;/p&gt;

&lt;p&gt;Getting closer to the topic... This idealized scenario of end-to-end communication is possible today, but only if we use IPv6 and our devices are properly configured to support it. Why? Because the IPv6 address pool is really huuuge, so every device can have a public IPv6 address, enabling direct communication between devices without needing intermediary &lt;em&gt;translation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Each packet that we exchange in my example, like any network packet, consists of control information (headers) and user data (payload). The control information provides data for delivering the payload (e.g., source and destination network addresses, error detection codes, or sequencing information). &lt;/p&gt;

&lt;p&gt;So, in the case of IPv4: my PC sends my Wi-Fi router a packet that is meant for you. But my PC does not have a public IP address! So the source IP address in the headers is abracadabra for the web scope, and it needs to be &lt;em&gt;translated&lt;/em&gt; according to some rules. This job is done by NAT, the Network Address Translation mechanism of my Wi-Fi router. And here is why this intermediary node in our communication is a baddie: NAT introduces processing overhead because it rewrites packet headers &lt;strong&gt;for every packet&lt;/strong&gt; that passes through the router. And this happens not only on my side, but also on your side!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3my34wx8gmxdmbjc87r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3my34wx8gmxdmbjc87r.png" alt=" " width="721" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some encryption techniques are meant for end-to-end communication only with consistent IP addresses - so they are not possible if NAT (in its default form) is in the middle.&lt;/p&gt;

&lt;p&gt;With IPv4, gateways, routers become unavoidable due to the scarcity of public IPv4 addresses, and so does NAT. I started introducing NAT right here; however, I will elaborate more on this in the next sections, as the &lt;code&gt;libvirt&lt;/code&gt;'s &lt;strong&gt;DEFAULT&lt;/strong&gt; network is based on the NAT mechanism.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A little thought experiment: how do you think the internet would be if IPv6 were fully adopted and replaced IPv4? What would happen to the prices for internet service? Would the speed of the internet change? PS. you can check out what CGNAT is :)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this article, I will use IPv4 addresses for any network configuration. Unfortunately, Internet Protocol version 4 (IPv4) still forms the foundation of most global internet traffic today. Plus, as I stated earlier, this article is focused on creating a local network rather than configuring any VM to be remotely reachable from the outside.&lt;/p&gt;

&lt;p&gt;The scope of IPv4 reserved addresses for private networks is quite large for a home setup, so let's move to it.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➁.➃ IPv4 address ranges reserved for private networks
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;According to standards set forth in Internet Engineering Task Force (IETF) document RFC-1918 , the following IPv4 address ranges are reserved by the IANA for private internets, and are not publicly routable on the global internet:&lt;br&gt;
10.0.0.0/8 IP addresses: 10.0.0.0 – 10.255.255.255&lt;br&gt;
172.16.0.0/12 IP addresses: 172.16.0.0 – 172.31.255.255&lt;br&gt;
192.168.0.0/16 IP addresses: 192.168.0.0 – 192.168.255.255&lt;br&gt;
Note that only a portion of the “172” and the “192” address ranges are designated for private use. The remaining addresses are considered “public,” and thus are routable on the global Internet.(&lt;a href="https://www.arin.net/reference/research/statistics/address_filters/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The key takeaway is that there is a set of IPv4 addresses reserved for private networks. The reservation ensures that there will be no conflicts with global (public) IP addresses.&lt;/p&gt;
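&lt;p&gt;&lt;em&gt;Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module knows these reserved ranges, so you can check any address programmatically (a small sketch, not a required step):&lt;/em&gt;&lt;/p&gt;

```python
import ipaddress

# The three RFC 1918 private blocks quoted above:
for block in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    print(block, ipaddress.ip_network(block).is_private)  # all True

# Only part of the "172" range is private: /12 covers 172.16 through 172.31.
print(ipaddress.ip_address("172.31.255.255").is_private)  # True (last private "172")
print(ipaddress.ip_address("172.32.0.1").is_private)      # False (public, routable)
```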

&lt;h4&gt;
  
  
  ➁.➄ IPv4 addresses structure and CIDR
&lt;/h4&gt;

&lt;p&gt;All IPv4 addresses have the same structure: 4 numbers divided by dots, x.x.x.x. The trailing slash with a number after it is not part of the IP address; it is CIDR (Classless Inter-Domain Routing) notation.&lt;/p&gt;

&lt;p&gt;Take a detailed look at this again:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10.0.0.0/8 IP addresses: 10.0.0.0 – 10.255.255.255&lt;/li&gt;
&lt;li&gt;172.16.0.0/12 IP addresses: 172.16.0.0 – 172.31.255.255&lt;/li&gt;
&lt;li&gt;192.168.0.0/16 IP addresses: 192.168.0.0 – 192.168.255.255&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The largest network range (with the most IP addresses) is represented by 10.0.0.0/8. The CIDR number after the slash (e.g., "/8") is not random; it refers to the number of bits allocated for the network portion of the IP address.&lt;/p&gt;

&lt;p&gt;A "/8" means that the first 8 bits are reserved for the network, leaving 24 bits for the host part. This allows for 16,777,216 available addresses for devices (hosts) within the network. The smaller the CIDR number (like "/8"), the larger the number of available IP addresses, because fewer bits are used for the network portion, and more are left for devices.&lt;/p&gt;

&lt;p&gt;An IPv4 address is 32 bits long, divided into four groups (octets) of 8 bits each. For example, 192.168.1.0 is written as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192    .    168    .     1    .    0
11000000.10101000.00000001.00000000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each number is an 8-bit block (each block represents one of the four octets). The CIDR prefix tells how many of these 32 bits are used for the network portion.&lt;/p&gt;
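&lt;p&gt;&lt;em&gt;You can reproduce this octet-to-binary layout with a couple of lines of Python (just an illustration):&lt;/em&gt;&lt;/p&gt;

```python
def to_binary(ip: str) -> str:
    """Render a dotted-quad IPv4 address as four 8-bit binary octets."""
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

print(to_binary("192.168.1.0"))    # 11000000.10101000.00000001.00000000
print(to_binary("192.168.122.1"))  # 11000000.10101000.01111010.00000001
```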

&lt;p&gt;Let's return to my output from &lt;code&gt;ip a&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;eno1: inet 192.168.1.X/24 brd 192.168.1.255&lt;/li&gt;
&lt;li&gt;wlxXXXXXX43643754XX: inet 192.168.1.Y/24 brd 192.168.1.255&lt;/li&gt;
&lt;li&gt;virbr0: inet 192.168.122.1/24 brd 192.168.122.255&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;eno1&lt;/code&gt; and &lt;code&gt;wlxXXXXXX43643754XX&lt;/code&gt; network interfaces connect my PC to the local network managed by my Wi-Fi router. The router's DHCP (Dynamic Host Configuration Protocol) assigned them the vacant private IP addresses 192.168.1.X and 192.168.1.Y. The CIDR /24 tells me that in this local network there are 256 "spots" for devices:&lt;/p&gt;

&lt;p&gt;/24 means the first 24 bits are for the network. This leaves 8 bits for the host (the devices within the network). Network bits: 24 bits; host bits: 8 bits; 2^8 = 256 possible IP addresses in this network (but in practice, some are reserved for special purposes).&lt;/p&gt;
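&lt;p&gt;&lt;em&gt;The same arithmetic, done by the standard &lt;code&gt;ipaddress&lt;/code&gt; module (a small sketch; the "special purposes" include the network and broadcast addresses):&lt;/em&gt;&lt;/p&gt;

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.num_addresses)  # 256, i.e. 2 ** (32 - 24)

# Two of those 256 are reserved: the network and broadcast addresses.
print(net.network_address, net.broadcast_address)  # 192.168.1.0 192.168.1.255
print(sum(1 for _ in net.hosts()))  # 254 usable host addresses

# And the /8 range mentioned earlier:
print(ipaddress.ip_network("10.0.0.0/8").num_addresses)  # 16777216, i.e. 2 ** 24
```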

&lt;p&gt;The &lt;code&gt;virbr0&lt;/code&gt; network interface is a different case. First, as I mentioned, it is managed by &lt;code&gt;libvirt/QEMU&lt;/code&gt;, not by my Wi-Fi router, and it is a virtual bridge. The network to which this interface connects also has a CIDR of /24, meaning there are 256 available addresses.&lt;/p&gt;

&lt;p&gt;However, remember the last information I shared in the previous part of this series on virtualization? RECAP: I created a VM connected to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network, and I mentioned that even though it can reach the internet, no device connected to my local home network can access this VM via SSH, ping, or anything else (except HOST!). This is because any 192.168.122.X address is not part of my local home's 192.168.1.X/24 network! The 192.168.1.X/24 range covers addresses from 192.168.1.0 to 192.168.1.255, but any 192.168.122.X address is outside of that range. As a result, there’s no connection, no communication.&lt;/p&gt;
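&lt;p&gt;&lt;em&gt;This "outside of that range" claim is easy to check with Python's &lt;code&gt;ipaddress&lt;/code&gt; module (the VM address below is a hypothetical example):&lt;/em&gt;&lt;/p&gt;

```python
import ipaddress

home_lan = ipaddress.ip_network("192.168.1.0/24")   # my router's LAN
vm_addr = ipaddress.ip_address("192.168.122.57")    # hypothetical VM on virbr0

print(vm_addr in home_lan)                               # False: not part of the home LAN
print(ipaddress.ip_address("192.168.1.42") in home_lan)  # True: a regular LAN device
```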

&lt;p&gt;After this long introductory session, let’s get back to virtualization with virsh, QEMU, and KVM, focusing on network configurations.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➂ Libvirt's &lt;strong&gt;DEFAULT&lt;/strong&gt; Network: about NAT mode
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;libvirt&lt;/code&gt; uses the concept of a virtual network switch. The network interface you saw in my &lt;code&gt;ip a&lt;/code&gt; outputs in the previous section, called &lt;code&gt;virbr0&lt;/code&gt;, is nothing more than a virtual network switch automatically created and managed by libvirt.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➀ About virtual network switches
&lt;/h4&gt;

&lt;p&gt;I don’t know if you’re familiar with &lt;em&gt;physical&lt;/em&gt; network switches, but these guys look like this (not so friendly for trypophobic folks, hehe):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbk5of6ifbnpywzik93c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbk5of6ifbnpywzik93c.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Getting closer to understanding networks: my PC has an Ethernet cable that’s plugged into the Wi-Fi router. But what if I plug it into my laptop instead? &lt;em&gt;Haha, I can’t do that because my laptop doesn’t have an Ethernet port&lt;/em&gt;. However, if I could, that connection would create a small network of two devices (my PC and my laptop), allowing them to communicate with each other.&lt;/p&gt;

&lt;p&gt;But what if I wanted to attach something else—a second laptop? It would be pretty hard to do with the scarcity of Ethernet ports (even if both laptops had one Ethernet port).&lt;/p&gt;

&lt;p&gt;If I had a network switch, though, I could plug in all the devices I wanted—up to the number of available ports on the switch. The network switch would handle the communication between them by forwarding packets between connected devices!&lt;/p&gt;

&lt;p&gt;So, the virtual network switch works kinda the same way, just for VMs. The default &lt;code&gt;virbr0&lt;/code&gt; virtual network switch is used when VMs connect to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network. This virtual network switch enables them to communicate easily with each other.&lt;/p&gt;

&lt;p&gt;Here is the &lt;strong&gt;DEFAULT&lt;/strong&gt; network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo virsh net-list --all

 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   no          yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;strong&gt;DEFAULT&lt;/strong&gt; network operates in Network Address Translation (NAT) mode (the one I introduced above discussing IPv4 vs IPv6).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;By default, a virtual network switch operates in NAT mode (using IP masquerading rather than SNAT or DNAT).&lt;br&gt;
This means any guests connected through it, use the host IP address for communication to the outside world. Computers external to the host can't initiate communications to the guests inside, when the virtual network switch is operating in NAT mode. (&lt;a href="https://wiki.libvirt.org/VirtualNetworking.html" rel="noopener noreferrer"&gt;Libvirt: Virtual Networking&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8hioepoxd1u1e9fis75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8hioepoxd1u1e9fis75.png" alt=" " width="682" height="389"&gt;&lt;/a&gt;&lt;/p&gt;
DEFAULT network in NAT mode (&lt;a href="https://wiki.libvirt.org/VirtualNetworking.html" rel="noopener noreferrer"&gt;Libvirt documentation&lt;/a&gt;)



&lt;p&gt;Libvirt's documentation specifies, that &lt;strong&gt;the NAT is set up using &lt;em&gt;iptables&lt;/em&gt; rules&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And here we arrive at another important player in the whole networking process: &lt;code&gt;iptables&lt;/code&gt;. It is often perceived just as a tool that protects your system from unauthorized access/traffic based on some rules. However, the reality is that it is much, much more than that, as it can also be used for traffic manipulation, forwarding, NAT (Network Address Translation), and more.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Iptables provides packet filtering, network address translation (NAT) and other packet mangling. &lt;br&gt;
NOTE: &lt;code&gt;iptables&lt;/code&gt; was replaced by &lt;code&gt;nftables&lt;/code&gt; starting in Debian 10 Buster. (&lt;a href="https://wiki.debian.org/iptables" rel="noopener noreferrer"&gt;Debian Wiki: iptables&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I am using Debian Sid, so I definitely have &lt;code&gt;nftables&lt;/code&gt; and not the legacy &lt;code&gt;iptables&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Is &lt;code&gt;nftables&lt;/code&gt; just a modern version of &lt;code&gt;iptables&lt;/code&gt; with fixed vulnerabilities, faster performance, and so on, but still just &lt;code&gt;iptables&lt;/code&gt; under the hood? NO. They are two different frameworks designed to do the same job: "mangling" network traffic. However, think of "different frameworks" in this context as you would if you have experience with Python: it's like TensorFlow and PyTorch. In web development, it's like React and Angular. You cannot write a neural network using PyTorch and then expect to just copy-paste the network's source code into TensorFlow and have it work. The same goes for &lt;code&gt;nftables&lt;/code&gt; and &lt;code&gt;iptables&lt;/code&gt;. They are different, with different syntax and different logic, especially when it comes to IPv6 traffic.&lt;/p&gt;

&lt;p&gt;It's better not to mix the two, thinking, 'Oh, for an issue Y I'll write and add some rules in &lt;code&gt;iptables&lt;/code&gt;, but then for the issue X I found a tutorial for &lt;code&gt;nftables&lt;/code&gt;, so I'll add rules this way'. (&lt;em&gt;Played Witcher 3? Remember what happened to Geralt when he was courting both Triss and Yennefer? Well, the same can happen to your network traffic if you start playing around with different tools for network traffic management)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, even if you do so (use both the &lt;code&gt;nftables&lt;/code&gt; and the legacy &lt;code&gt;iptables&lt;/code&gt; tool at the same time), Debian has you covered in a certain way. First, the &lt;code&gt;iptables&lt;/code&gt; utility is not installed on the system by default. If it is installed, the &lt;code&gt;iptables&lt;/code&gt; utility will, by default, use the &lt;code&gt;nftables&lt;/code&gt; backend. But, again:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Should I mix nftables and iptables/ebtables/arptables rulesets?&lt;br&gt;
No, unless you know what you are doing. (&lt;a href="https://wiki.debian.org/nftables" rel="noopener noreferrer"&gt;Debian Wiki: nftables&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  ➂.➁ Libvirt wants &lt;code&gt;iptables&lt;/code&gt;, Debian has &lt;code&gt;nftables&lt;/code&gt;: what to do?
&lt;/h4&gt;

&lt;p&gt;As I mentioned before, &lt;code&gt;libvirt&lt;/code&gt;'s &lt;strong&gt;DEFAULT&lt;/strong&gt; network functions in NAT mode, and this NAT mode is defined using &lt;code&gt;iptables&lt;/code&gt; rules. So, when you installed &lt;code&gt;libvirt&lt;/code&gt; tools, it most probably pulled in &lt;code&gt;iptables&lt;/code&gt; as a dependency. Indeed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aptitude why iptables
i   libvirt-daemon-system          Depends libvirt-daemon-driver-nwfilter (= 10.10.0-3)
i A libvirt-daemon-driver-nwfilter Depends iptables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But I already discouraged you from using &lt;code&gt;iptables&lt;/code&gt;, hehe. So, what's the plan? First, most likely, you don't even have &lt;code&gt;nftables&lt;/code&gt; up and running yet :D. Because if you did, and it was running as a &lt;code&gt;systemd&lt;/code&gt; service, you would have encountered some troubles starting VMs with the &lt;strong&gt;DEFAULT&lt;/strong&gt; network. I'll show you why:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#if for some reason you do not have nftables installed:
# $ sudo apt install nftables
$ sudo systemctl status nftables
#is it active? No? Then, start it
$ sudo systemctl start nftables
# and ENABLE it so it will start at boot
$ sudo systemctl enable nftables.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;FYI: Take a look at the &lt;code&gt;nftables&lt;/code&gt; ruleset "in use" using command &lt;code&gt;sudo nft list ruleset&lt;/code&gt; when &lt;code&gt;nftables.service&lt;/code&gt; is stopped and when it's started (or before starting it and after). Compare the two and try to find libvirt's NAT configuration :).&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#And now I bring UP DEFAULt libvirt network
$ sudo virsh net-start default
error: Failed to start network default
error: internal error: Failed to apply firewall command 'nft -ae insert rule ip libvirt_network guest_output iif virbr0 counter reject': Error: Could not process rule: No such file or directory
insert rule ip libvirt_network guest_output iif virbr0 counter reject
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OOPS! I broke everything. Libvirt wants &lt;code&gt;iptables&lt;/code&gt;, not &lt;code&gt;nftables&lt;/code&gt;. &lt;br&gt;
Whatever libvirt wants... in Russian there is a saying: "eat what you are given". There’s a legit solution to make libvirt work well with &lt;code&gt;nftables&lt;/code&gt;. And it’s not just some workaround. &lt;/p&gt;

&lt;p&gt;Maybe you are familiar with UFW (Uncomplicated Firewall); this tool is built on top of &lt;code&gt;iptables&lt;/code&gt;, making it easier to manage &lt;code&gt;iptables&lt;/code&gt; rules. You write simplified rules, and it translates them into &lt;code&gt;iptables&lt;/code&gt; rules under the hood. A similar interface also exists for &lt;code&gt;nftables&lt;/code&gt;! It’s called &lt;code&gt;firewalld&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First, let’s clarify why this isn’t just a workaround that adds extra software to your system just to make something work for VMs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;You should consider using a wrapper instead of writing your own firewalling scripts&lt;/strong&gt;. It is recommended to run &lt;code&gt;firewalld&lt;/code&gt;, which integrates pretty well into the system. See also &lt;a href="https://firewalld.org/" rel="noopener noreferrer"&gt;https://firewalld.org/&lt;/a&gt; (&lt;a href="https://wiki.debian.org/nftables" rel="noopener noreferrer"&gt;Debian Wiki: nftables&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And...:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The firewalld software takes control of all the firewalling setup in your system, so you don't have to know all the details of what is happening in the underground. There are many other system components that can integrate with firewalld, like NetworkManager, libvirt, podman, fail2ban, docker, etc.(&lt;a href="https://wiki.debian.org/nftables" rel="noopener noreferrer"&gt;Debian Wiki: nftables&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, &lt;code&gt;firewalld&lt;/code&gt;, besides all its perks in firewall configuration and network traffic management, is the bro that will make &lt;code&gt;libvirt&lt;/code&gt; function correctly with &lt;code&gt;nftables&lt;/code&gt;. It will ensure that all the NAT rules, written with love by the &lt;code&gt;libvirt&lt;/code&gt; devs, are active. That means the DEFAULT network based on NAT rules will work again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install firewalld
$ sudo systemctl start firewalld
$ sudo systemctl enable firewalld

#have a look at the new ruleset firewalld brought with it to nftables
$ sudo nft list ruleset
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What about libvirt's ruleset for NAT?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nft list ruleset | grep libvirt

#nothing 

#now i start DEFAULT network
$ sudo virsh net-start default
Network default started
$ sudo nft list ruleset | grep libvirt
        iifname "virbr0" jump mangle_PRE_libvirt
        iifname "virbr0" jump nat_PRE_libvirt
        iifname "virbr0" oifname "virbr0" jump nat_POST_libvirt
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And which tables exist now, overall?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nft list tables

table inet filter
table inet firewalld
table ip libvirt_network
table ip6 libvirt_network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, there's a separate "table" for &lt;code&gt;libvirt_network&lt;/code&gt;. This table is managed by libvirt, and libvirt fills this table with rules when its virtual network switch becomes active (&lt;code&gt;virbr0&lt;/code&gt;). This happens only if at least one network using the virtual bridge &lt;code&gt;virbr0&lt;/code&gt; is started.&lt;/p&gt;
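&lt;p&gt;As a side note, you can also ask &lt;code&gt;firewalld&lt;/code&gt; itself how it groups interfaces into zones. A sketch of what this looked like on my setup (assuming &lt;code&gt;firewalld&lt;/code&gt; is running and the default libvirt network is started; zone names, the physical interface name, and the exact output may differ on your system):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#list the zones that currently have interfaces assigned to them
$ sudo firewall-cmd --get-active-zones
libvirt
  interfaces: virbr0
public
  interfaces: enp3s0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The dedicated &lt;code&gt;libvirt&lt;/code&gt; zone is one more sign of the integration the Debian Wiki quote above mentions.&lt;/p&gt;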

&lt;h4&gt;
  
  
  ➂.➂ DEFAULT Libvirt's network: what's under the hood?
&lt;/h4&gt;

&lt;p&gt;As I mentioned before, the &lt;code&gt;libvirt&lt;/code&gt; &lt;strong&gt;DEFAULT&lt;/strong&gt; network's NAT is set up using &lt;em&gt;iptables&lt;/em&gt; rules. In my case, though, libvirt is forced to use what is available on my Debian: &lt;code&gt;nftables&lt;/code&gt;. So, if I explore the existing rules in nftables, I should certainly find Libvirt's default network rules there, according to which traffic circulates between VMs and FROM VMs TO the "outside world". &lt;/p&gt;

&lt;p&gt;And here's how the network magic happens for VMs: when they interact with each other and with the host, and access the internet to fetch whatever they're commanded to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nft -a list ruleset
table ip libvirt_network { # handle 6
    chain forward { # handle 1
        type filter hook forward priority filter; policy accept;
        counter packets 0 bytes 0 jump guest_cross # handle 7
        counter packets 0 bytes 0 jump guest_input # handle 5
        counter packets 0 bytes 0 jump guest_output # handle 3
    }

    chain guest_output { # handle 2
        ip saddr 192.168.122.0/24 iif "virbr0" counter packets 0 bytes 0 accept # handle 13
        iif "virbr0" counter packets 0 bytes 0 reject # handle 10
    }

    chain guest_input { # handle 4
        oif "virbr0" ip daddr 192.168.122.0/24 ct state established,related counter packets 0 bytes 0 accept # handle 14
        oif "virbr0" counter packets 0 bytes 0 reject # handle 11
    }

    chain guest_cross { # handle 6
        iif "virbr0" oif "virbr0" counter packets 0 bytes 0 accept # handle 12
    }

    chain guest_nat { # handle 8
        type nat hook postrouting priority srcnat; policy accept;
        ip saddr 192.168.122.0/24 ip daddr 224.0.0.0/24 counter packets 1 bytes 40 return # handle 21
        ip saddr 192.168.122.0/24 ip daddr 255.255.255.255 counter packets 0 bytes 0 return # handle 20
        meta l4proto tcp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade to :1024-65535 # handle 19
        meta l4proto udp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade to :1024-65535 # handle 18
        ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade # handle 17
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ukskzo9x4hlecgghivp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ukskzo9x4hlecgghivp.png" alt=" " width="463" height="657"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➃ Nftables rulesets: Libvirt Network's table and chains explained
&lt;/h4&gt;

&lt;p&gt;Let’s break it down:&lt;/p&gt;

&lt;p&gt;First, &lt;code&gt;libvirt_network&lt;/code&gt; rules have their own &lt;em&gt;table&lt;/em&gt;. In &lt;code&gt;nftables&lt;/code&gt;, tables act as "containers" within the overall ruleset (to see the full list: &lt;code&gt;sudo nft list ruleset&lt;/code&gt;). &lt;em&gt;Tables&lt;/em&gt; contain &lt;em&gt;chains&lt;/em&gt;, sets, maps, flowtables, and stateful objects.&lt;/p&gt;

&lt;p&gt;Each table belongs to &lt;strong&gt;exactly one&lt;/strong&gt; &lt;em&gt;family&lt;/em&gt;. If you want to apply a specific set of rules to some network traffic, you must first &lt;strong&gt;define the table by specifying its family&lt;/strong&gt;. The family determines that only traffic of this specific kind will be filtered by the rules in that table.&lt;/p&gt;
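&lt;p&gt;To make the "family" idea tangible, here is a small sketch with a throwaway table (&lt;code&gt;demo_filter&lt;/code&gt; is my own name for illustration, not something libvirt creates; the same table name can coexist in different families):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#a table that only sees IPv4 traffic
$ sudo nft add table ip demo_filter

#a table that only sees IPv6 traffic
$ sudo nft add table ip6 demo_filter

#a table that sees both (the inet family)
$ sudo nft add table inet demo_filter

$ sudo nft list tables | grep demo_filter
table ip demo_filter
table ip6 demo_filter
table inet demo_filter

#clean up the experiment
$ sudo nft delete table ip demo_filter
$ sudo nft delete table ip6 demo_filter
$ sudo nft delete table inet demo_filter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;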

&lt;p&gt;For example, the &lt;em&gt;table ip libvirt_network&lt;/em&gt; rules &lt;strong&gt;only filter IPv4 traffic/packets&lt;/strong&gt; because its table is assigned the &lt;strong&gt;ip&lt;/strong&gt; family:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;table ip libvirt_network

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;em&gt;chain&lt;/em&gt; is essentially a list of rules. In the case of the &lt;code&gt;libvirt_network&lt;/code&gt; ruleset, everything starts with the &lt;em&gt;forward chain&lt;/em&gt;, which is responsible for filtering &lt;em&gt;forwarded&lt;/em&gt; packets, i.e. traffic that passes through the host rather than originating from or terminating at the host itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;table ip libvirt_network {
    chain forward { #rules}
        chain guest_output {#rules}
    chain guest_input {#rules}
        chain guest_cross {#rules}
    chain guest_nat {#rules}
}

  ______         A |-----------------|F
 |packet|  ---&amp;gt;  C |  chain forward  |I ---|
  ¯¯¯¯¯¯         C |  type: filter   |L    | 
                 E |  hook: forward  |T    |
                 P |  policy: accept |E    |
                 T |-----------------|R    |
                     -----packet----&amp;gt;      | 
                                           |
                                Routing decision:
                     &amp;lt;---?---  | |----?----&amp;gt; 
             |-----------------| | -----------------|
             |                 | ?                  |
|------------------------|     | |    |------------------------| 
|    chain guest_input   |     | V    |   chain guest_output   |    
|    inbound packets     |     |      |    outbound packets    |
|   destined for guests  |     |      |       from guests      |   
|           ?            |     |      |             ?          |
|------------------------|     |      |------------------------| 
            |                  |                    |  
|-----------|------------|     |      |-------------|----------|  
|                        |     |      |                        | 
+ACCEPT             REJECT-    |      +ACCEPT             REJECT-
+If:              Anything-    |      +If:              Anything-
+outgoing             else-    |      +source IP:           else-
+interface:               -    |      +192.168.122.0/24         -
+virbr0                   -    |      +                         - 
+                         -    |      +incoming                 -
+connection               -    |      +interface:               -
+state:                   -    |      +virbr0                   -     
+established OR           -    |      +++++++++++++++++++++++++++ 
+related                  -    | 
+                         -    |      |--------------------------|
+destination:             -    |      |    chain guest_cross     | 
+in 192.168.122.0/24      -    |------|    traffic between       |
+++++++++++++++++++++++++++           |    guests on virbr0      |                                            
                                      |             ?            |        
                                      |--------------------------| 
                                      +ACCEPT              REJECT-
                                      +If:                No rule-
                                      +incoming interface:       -
                                      +virbr0                    - 
                                      +                          - 
                                      +outgoing interface:       -
                                      +virbr0                    -
                                      ++++++++++++++++++++++++++++ 

 ##########################################################  
 #                Routing decision is made!               #
 #                            IF:                         #
 #                 outbound packets from guests           #
 ##########################################################
    :==========:      ______                      :==========:
    :guest VM  : --&amp;gt; |packet|                     : Other VM : 
    :==========:      ¯¯¯¯¯¯                      :==========:
                        |                                  ^ 
                        V                                  |
 |------------------------|    |--IF traffic FROM:      NO NAT
 |                        |    |                    (NO masquerade)
 |   chain guest_nat      |---&amp;gt;| guest subnet TO:          ^
 |   type: nat            |    | (224.0.0.0/24) OR --------|
 |   hook: postrouting    |    | (255.255.255.255)
 |   priority: srcnat;    |    |==================================
 |   policy: accept;      |---&amp;gt;| IF traffic FROM:   
 |                        |    | guest subnet TO: ---------|
 | -----------------------|    | DIFFERENT subnet          V
                               |           MASQUERADE traffic (NAT)
                                                           |
                                          INTERNET &amp;lt;--------

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope this ASCII scheme brought some clarity to how VMs attached to the &lt;strong&gt;DEFAULT&lt;/strong&gt; libvirt's network communicate with the host, with each other, and with the outside world. Here’s some more info.&lt;/p&gt;

&lt;p&gt;A network packet pops up on the &lt;code&gt;virbr0&lt;/code&gt; interface and then gets &lt;em&gt;filtered&lt;/em&gt; (by &lt;code&gt;chain forward&lt;/code&gt;) and &lt;em&gt;routed&lt;/em&gt; or &lt;em&gt;dropped&lt;/em&gt;  - based on the &lt;code&gt;nftables&lt;/code&gt; rules.&lt;/p&gt;

&lt;p&gt;For example, when one VM sends a packet to another VM (if both are connected to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network), it’s routed to that VM. The rules in &lt;code&gt;chain guest_cross&lt;/code&gt; are triggered: there’s no rejection, so the packet is delivered without any special &lt;em&gt;mangling&lt;/em&gt;, since it goes from the &lt;code&gt;virbr0&lt;/code&gt; network interface back to &lt;code&gt;virbr0&lt;/code&gt;. &lt;code&gt;Chain guest_nat&lt;/code&gt; also participates, but NAT does not apply here because it’s an internal connection, and &lt;em&gt;the rules say that when traffic goes from the same subnet to the same subnet, there’s no NAT&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Another example: the host sends a packet to a VM. This triggers the &lt;code&gt;chain guest_input&lt;/code&gt;. The &lt;em&gt;connection tracking state&lt;/em&gt; is &lt;em&gt;established&lt;/em&gt; in this case (the VM has already initiated a connection to the host). And here it is! This is why you cannot connect to the VM remotely (&lt;strong&gt;even from another device connected to the same local home network as the host, and even with port forwarding configured!&lt;/strong&gt;): it will be a &lt;strong&gt;new&lt;/strong&gt; connection, not an &lt;strong&gt;established&lt;/strong&gt; one!&lt;/p&gt;

&lt;p&gt;Also, if you try to connect to the VM from a laptop on the same Wi-Fi as the host, it gets dropped because the VM is on a different network (remember, 192.168.122.x is not part of the 192.168.1.0/24 network!). But you configured &lt;em&gt;port forwarding&lt;/em&gt;? Still nope. Packets will be dropped. Because again, it’s a &lt;strong&gt;new&lt;/strong&gt; connection.&lt;/p&gt;
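&lt;p&gt;You can observe this behavior yourself. A sketch with hypothetical addresses (assuming a VM at 192.168.122.10 runs an SSH server, and the host's default-network setup is as described above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#from the host: works, the host sits directly on virbr0
$ ssh user@192.168.122.10

#from a laptop on the same Wi-Fi (192.168.1.0/24): the guest subnet
#is hidden behind the host, so the connection never gets through
$ ssh user@192.168.122.10
ssh: connect to host 192.168.122.10 port 22: Connection timed out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;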

&lt;p&gt;Yet another example: the VM wants to run &lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade&lt;/code&gt;. It needs to fetch data from the Debian repositories. This is governed by &lt;code&gt;chain guest_output&lt;/code&gt;: its rules are triggered and they don’t drop the packets. Then the &lt;code&gt;guest_nat chain&lt;/code&gt; is activated, because traffic is going from the guest subnet to an external subnet. Without NAT, it wouldn’t work at all: the VM network is different from your local home network (WiFi), so they don’t directly communicate. They’re isolated from each other, and &lt;code&gt;virbr0&lt;/code&gt; doesn’t &lt;em&gt;bridge&lt;/em&gt; these two networks!&lt;/p&gt;

&lt;p&gt;To summarize: there are only two possible outcomes for the network traffic you initiate FROM and TO your virtual machines attached to the &lt;strong&gt;DEFAULT&lt;/strong&gt; network: it will either reach its destination (ACCEPTED) or be dropped (REJECTED).&lt;/p&gt;

&lt;p&gt;I hope it’s clear now how host-to-VM and VM-to-host traffic, and communication between VMs, works: 1) there are no specific &lt;code&gt;nftables&lt;/code&gt; network filtering/mangling rules blocking it, and 2) these communications are made possible by the specifically configured and virtualized &lt;code&gt;virbr0&lt;/code&gt; virtual network switch.&lt;/p&gt;

&lt;p&gt;However, HOW traffic TO the internet, and especially FROM other networks (like your home LAN), actually &lt;em&gt;works&lt;/em&gt; can still be challenging to understand. NAT (Network Address Translation) might still appear like a black box. So, in the final section of this article, I’ll try to simplify things and explain it in detail.&lt;/p&gt;

&lt;p&gt;Understanding NAT is crucial for all IPv4 traffic, as NAT has become a widespread networking configuration. This is due to the inherent limitations IPv4 faces in the modern world, making NAT a kind of symbiotic solution for IPv4 networks.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➄ About how NAT works
&lt;/h4&gt;

&lt;p&gt;Let’s get back to the &lt;code&gt;NAT chain&lt;/code&gt; of the &lt;code&gt;libvirt_network ip table&lt;/code&gt;. This chain doesn’t just apply filtering rules to network traffic—it handles &lt;em&gt;postrouting&lt;/em&gt; rules, meaning it &lt;strong&gt;sees all packets after routing, right before they leave the local system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, imagine your VM wants to connect to the internet to update itself when you run &lt;code&gt;sudo apt update&lt;/code&gt;. This command fetches the repository metadata, so the versions of your system’s packages can be compared against the repository versions. The packets from the VM are sent to &lt;a href="http://deb.debian.org/debian/" rel="noopener noreferrer"&gt;http://deb.debian.org/debian/&lt;/a&gt; (the Debian package repo source link).&lt;/p&gt;

&lt;p&gt;These packets (fetching request) end up on &lt;code&gt;virbr0&lt;/code&gt;, which acts like an airport for them. It decides if these “passengers” (packets) need to take an “international flight” (outside the guest VM network) or a “domestic flight” (within the guest VM network).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB! I will cover DNS servers and gateways that act as routers in detail in the next article! For now, let’s simplify things (even though it’s not fully technically accurate) and say that the &lt;code&gt;virbr0&lt;/code&gt; virtual network switch handles this &lt;em&gt;somehow&lt;/em&gt;, so I can focus on explaining NAT (otherwise I will never finish this article T_T).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's get back to the postrouting rules specified in the guest_nat chain and disassemble them to see how they work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo nft -a list ruleset
...
chain guest_nat { # handle 8
        type nat hook postrouting priority srcnat; policy accept;
        ip saddr 192.168.122.0/24 ip daddr 224.0.0.0/24 counter packets 1 bytes 40 return # handle 21
        ip saddr 192.168.122.0/24 ip daddr 255.255.255.255 counter packets 0 bytes 0 return # handle 20
        meta l4proto tcp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade to :1024-65535 # handle 19
        meta l4proto udp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade to :1024-65535 # handle 18
        ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade # handle 17
    }
}
..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Handles&lt;/em&gt; are used natively by &lt;code&gt;nftables&lt;/code&gt; to make it easier to reference specific rules when you need to modify or transform them, so they’re quite &lt;strong&gt;handy&lt;/strong&gt; for me right now—I don’t have to retype rules.&lt;/p&gt;
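&lt;p&gt;For instance, if I ever needed to remove or replace one of these rules, I could reference it by its handle instead of retyping it. A sketch (don’t actually delete libvirt’s rules unless you know what you are doing, and remember handle numbers differ from system to system):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#show the rules of one chain together with their handles
$ sudo nft -a list chain ip libvirt_network guest_nat

#delete a single rule by its handle (example: handle 19)
$ sudo nft delete rule ip libvirt_network guest_nat handle 19
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;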

&lt;p&gt;Handles #20 and #21 aren’t what I’m looking for to illustrate NAT in action. These handles specifically ignore packets traveling within the same guest network.&lt;/p&gt;

&lt;p&gt;But handles #19, #18, and #17 are exactly where NAT is in action! These three handles are categorized based on the communication protocol in use—TCP (#19), UDP (#18), and all other protocols (#17).&lt;/p&gt;

&lt;p&gt;All three handles process traffic in the same way. The key word in all these rules is &lt;strong&gt;masquerade&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The term masquerade hints at what happens to the &lt;em&gt;source address&lt;/em&gt; of the network traffic: it gets "masked".&lt;/p&gt;

&lt;p&gt;Let’s follow the network packets from a VM that executed &lt;code&gt;sudo apt update&lt;/code&gt;. First, since network communication of this type uses the TCP protocol, I have to look into the rule of handle #19:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 --&amp;gt;  
--&amp;gt; This is like a programming `if` condition:
If the **s**ource **addr**ess (saddr) is within the 192.168.122.0/24 subnet 
AND the **d**estination **addr**ess (daddr) is outside of this subnet:
     masquerade to :1024-65535
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Masquerade&lt;/em&gt; replaces the original source address with something else. In this case, it replaces it with the private IP address of the host. More specifically, it replaces it with the private IP of the network interface with the highest priority (if there’s more than one, as in my case).&lt;/p&gt;

&lt;p&gt;And what’s with the numbers after the colon? Those are ports—specifically the ephemeral port range (1024-65535). Why such a big range? And aren’t these ports busy on the host?&lt;/p&gt;

&lt;p&gt;This port range is the ephemeral port range, which is designed to avoid conflicts with privileged ports (0–1023). Privileged ports are reserved for well-known system services, so this range ensures the masqueraded traffic doesn’t interfere with them.&lt;/p&gt;
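&lt;p&gt;By the way, you can check which ephemeral range your host's kernel uses for its own outgoing connections; the default on most Linux systems is narrower than 1024–65535 (your values may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat /proc/sys/net/ipv4/ip_local_port_range
32768   60999
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;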

&lt;p&gt;Oh, I left the packets from &lt;code&gt;sudo apt update&lt;/code&gt; alone. Here they are, after masquerading:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The packets originally departed from the VM at 192.168.122.10 on port 80 (since it’s HTTP traffic)&lt;/li&gt;
&lt;li&gt;They reached &lt;code&gt;virbr0&lt;/code&gt; (the virtual network switch).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;virbr0&lt;/code&gt; routed them to go outside the &lt;strong&gt;DEFAULT&lt;/strong&gt; network (where VM belongs to).&lt;/li&gt;
&lt;li&gt;Before leaving, they were masqueraded: each packet’s header was modified: the source address was replaced from 192.168.122.10:80 → to 192.168.1.5:12345. Here, 192.168.1.5 is the private IP address of the host (my PC), and 12345 is an ephemeral port assigned dynamically for this communication. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is the peculiar schema:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu4wdfc7kt8zoxb814pi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu4wdfc7kt8zoxb814pi.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://www.gettyimages.it/detail/foto/woman-with-black-hair-and-eyes-peeking-behind-immagine-royalty-free/182885155?adppopup=true" rel="noopener noreferrer"&gt;Source of Original Image&lt;/a&gt;
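&lt;p&gt;If you want to watch these translations live on the host, the kernel's connection tracking table can be listed. A sketch (assuming the &lt;code&gt;conntrack&lt;/code&gt; tool from the conntrack-tools package is installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install conntrack

#list only the connections whose source address was NAT'd,
#and keep the ones coming from the guest subnet
$ sudo conntrack -L --src-nat | grep 192.168.122

#each printed line contains the original tuple (src=192.168.122.x ...)
#and the reply tuple with the masqueraded host address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;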



&lt;p&gt;And that’s how NAT works. That’s all. Hahaha, just kidding. &lt;/p&gt;

&lt;p&gt;That’s how the FIRST NAT worked. Now, these poor little packets have to "embark" on yet another "plane". They’re off to their next layover, where they’ll get NAT'd (masqueraded) AGAIN— new masks for everyone! This time, the layover is at the WiFi router.&lt;/p&gt;

&lt;p&gt;The NAT’d packets leave the host with source address 192.168.1.5:12345. They arrive at my Wi-Fi router on the LAN side. Time for the Wi-Fi router’s NAT.&lt;/p&gt;

&lt;p&gt;The WiFi router sees packets from 192.168.1.5:12345 → to 123.123.7.132:80 (I am sooo bad, I do not know the IP address of the Debian servers &amp;amp; too lazy to check).&lt;/p&gt;

&lt;p&gt;NAT rewrites the source IP AGAIN from 192.168.1.5 to my router’s public IP (111.111.111.111) and changes the port to another ephemeral one (54321).&lt;/p&gt;

&lt;p&gt;NB! This is important! This is where routers with weak hardware hit their limits:&lt;/p&gt;

&lt;p&gt;Network packets don’t just travel to the Debian servers for a one-way trip; it’s always a round trip! At some point, response packets will come back. In the case of success, the VM expects to receive information about the versions of the packages it requested, to figure out what’s outdated.&lt;/p&gt;

&lt;p&gt;Now, with all this masquerading, we end up in a situation much like those in movies: a classic masquerade ball drama (someone gets kissed because they were mistaken for someone else under their mask). To prevent these kinds of situationships, the WiFi router steps in as the responsible dude in charge. The router keeps track of &lt;strong&gt;who&lt;/strong&gt; is going &lt;strong&gt;where&lt;/strong&gt;, and if something comes back from "there," it makes sure it gets sent to the right &lt;strong&gt;who&lt;/strong&gt;. To do this, the router keeps a sort of table updated (thankfully not an .xlsx one). This table looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;|Private IP    | Destination IP | Source Port | Destination Port |
|192.168.1.5   | 123.123.7.132  | 54321       | 80               |
|192.168.1.20  | 111.153.6.132  | 35465       | 443              |
|192.168.1.11  | 125.161.7.162  | 34564       | 27017            |
|192.168.1.16  | 3.193.5.132    | 1224        | 22               |
.......................
#This table can be quite long if many devices are connected to the same WiFi and actively use Internet!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then, the router keeps this table up to date, adding and expiring entries as connections come and go.&lt;/p&gt;

&lt;p&gt;Here’s where an important point comes in: this table can grow very quickly. If your router is fast and powerful, no big deal, but if you cheaped out on it, you might run into trouble. A router that can’t handle more than a certain number of rows (based on its hardware limitations) will start to slow down significantly as the table grows beyond that limit.&lt;/p&gt;

&lt;p&gt;Again I left the travelling network packets alone. Here they are:&lt;/p&gt;

&lt;p&gt;The packets departed to the Debian server with my public IP and an ephemeral port: 111.111.111.111:54321.&lt;/p&gt;

&lt;p&gt;The Debian server responds, and the response arrives from 123.123.7.132:80 to the router. The router then checks its table and says, "Ah, here it is!" It finds the matching entry in its tracking table and figures out which device the response is meant for. At this point, the router rewrites the headers of the packet AGAIN, removing its public IP address (111.111.111.111) and restoring the private IP and port from the table. Finally, it sends the packet back to my PC (the host) at 192.168.1.5:12345.&lt;/p&gt;

&lt;p&gt;The host sees in the packets the destination 192.168.1.5:12345, checks its own NAT (connection tracking) table, built up when the &lt;em&gt;guest_nat chain&lt;/em&gt; masqueraded the outgoing packets, and recognizes this was originally from 192.168.122.10:80.&lt;/p&gt;

&lt;p&gt;It rewrites AGAIN the destination from 192.168.1.5:12345 back to the VM’s IP and port: 192.168.122.10:80.&lt;/p&gt;

&lt;p&gt;The packet is finally forwarded to the VM at 192.168.122.10:80.&lt;br&gt;
The VM sees a response from &lt;a href="http://deb.debian.org/debian/" rel="noopener noreferrer"&gt;http://deb.debian.org/debian/&lt;/a&gt; to 192.168.122.10:80, completes the TCP handshake, and gets the repository data.&lt;/p&gt;

&lt;p&gt;And now, that’s truly all! This is how NAT works. Enjoying IPv4? Still thinking IPv6 is difficult and scary? &lt;/p&gt;

&lt;p&gt;For a little fun (and for little support for my emotional state after writing this article), try counting and writing in the comments how many times the network packets from the VM, headed to the internet, were rewritten along the way :).&lt;/p&gt;




&lt;p&gt;Now after this TINY introduction to networking, it is time to get to the hands-on configurations. How can I make virtual machines accessible from other devices connected to the same LAN as the host—or even remotely? While I won’t be covering remote access here, making VMs accessible from other devices on the same home network can be quite convenient.&lt;/p&gt;

&lt;p&gt;For example, I have a pretty powerful Desktop PC in terms of specs, and maybe sometimes I’m lazy and want to work from my laptop, which is like a toy in comparison to my PC. When I say I am lazy, I mean I don't want to sit properly at my desk; I want to loaf on the sofa with my laptop. However, I still want to use the resources of my PC. I can, of course, connect via SSH to my host machine, but it may be that I just want to connect to the VM with MongoDB to do something or check on it.&lt;/p&gt;

&lt;p&gt;There are three main ways to do so:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Port forwarding &amp;amp; Custom NAT&lt;/li&gt;
&lt;li&gt;Bridged networking (aka "shared physical device")&lt;/li&gt;
&lt;li&gt;PCI Passthrough of host network devices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of them I will cover in the next article.&lt;/p&gt;

</description>
      <category>libvirt</category>
      <category>debian</category>
      <category>networking</category>
      <category>ipv6</category>
    </item>
    <item>
      <title>Virtualization on Debian with virsh&amp;QEMU&amp;KVM — Installation of virtualization tools and first VM creation</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Sun, 12 Jan 2025 13:00:56 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/virtualization-on-debian-with-virshqemukvm-what-you-need-to-install-and-how-49oo</link>
      <guid>https://dev.to/dev-charodeyka/virtualization-on-debian-with-virshqemukvm-what-you-need-to-install-and-how-49oo</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tnq2vci8jxtzzq95liw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tnq2vci8jxtzzq95liw.gif" alt=" " width="720" height="380"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;In this article, I will not cover the basics of &lt;em&gt;virtualization&lt;/em&gt;—what it is and when you might need it. This article is for those who are more or less familiar with the concept, but don’t know how to get started with it on Debian. &lt;/p&gt;




&lt;p&gt;Here’s the road map for this article:&lt;/p&gt;

&lt;p&gt;➀ Virtualization and self-hosting&lt;br&gt;
➁ Virtualization as a tool for resource-constrained application development&lt;br&gt;
➂ Virtualization on Debian: how it works&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➂.➀ CPU virtualization support&lt;/li&gt;
&lt;li&gt;➂.➁ Hypervisor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➃ KVM &amp;amp; QEMU&lt;br&gt;
➄ Libvirt&lt;br&gt;
➅ Validation of virtualization tools installation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➅.➀ Important! &lt;code&gt;qemu:///system&lt;/code&gt; vs &lt;code&gt;qemu:///session&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➆ Creation of first VM:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➆.➀ OS image file&lt;/li&gt;
&lt;li&gt;➆.➁ Preparing storage&lt;/li&gt;
&lt;li&gt;➆.➂ VM creation with virt-install&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➇ Let the Networking begin!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;➇.➀ Userspace (SLIRP or passt) connection&lt;/li&gt;
&lt;li&gt;➇.➁ NAT forwarding (aka "virtual networks")&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;First, I will introduce my use case - why do I need virtual machines on my personal PC.&lt;/p&gt;

&lt;p&gt;Long story short, I’m an "on-premise" girl (read "not very pro with cloud infrastructures"). I have experience dealing with on-prem infrastructure, and now I’m facing the need to deploy my personal project—a web app—SOMEWHERE, so that it sees the real world and the real world sees it. And no, this isn’t a static website; it has a backend and a database. And hypothetically, some components will need horizontal scaling in the future.&lt;/p&gt;


&lt;h3&gt;
  
  
  ➀ Virtualization and self-hosting
&lt;/h3&gt;

&lt;p&gt;My personal PC isn’t bad at all in terms of specs: perfectly capable for development purposes and even, &lt;strong&gt;theoretically&lt;/strong&gt;, for serving all the needs of my small app in production. However, hosting anything exposed to the web on a personal PC is out of the question. If your first thought is that the only obstacle is my PC needing to run 24/7, it is not. Using a machine that holds personal data to host something exposed to the web is a VERY BAD IDEA. If you don’t understand why, you can check out &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3b4-2ca5"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If I create Virtual Machine(s) on my personal machine and configure the networks very well, does that solve the problem? NO! And here’s why. &lt;strong&gt;The main impediment to hosting anything from "home"&lt;/strong&gt;—even if you bought a proper server for it—&lt;strong&gt;is your router and internet provider&lt;/strong&gt;. Is this about internet speed? Nope. It’s about... your public IP address. &lt;/p&gt;

&lt;p&gt;Let's say you move into a new house, and it doesn’t have Wi-Fi. So you contact the internet providers in your city, check the prices, select the most advantageous offer, and… sign the contract. If you’re just an average user and didn’t specify otherwise, the contract gives you “internet for your house” from the chosen provider.&lt;/p&gt;

&lt;p&gt;A technician arrives at the scheduled appointment, brings some cables, and a plastic box—which is the router. They deal with the cables, connect them to the router, hand you a manual along with the Wi-Fi network name and password, and voilà! You can connect all your home devices and enjoy browsing the web.&lt;/p&gt;

&lt;p&gt;When you just use the internet, you’re most likely never even aware of your public IP address. But chances are, it’s not static at all. It changes periodically, and this is done by your internet provider—because that’s how they often manage their clients with “home” use.&lt;/p&gt;

&lt;p&gt;When it comes to hosting something like a website, even if you buy a domain name like &lt;em&gt;my-cool-site.it&lt;/em&gt;, how will people find it? Who will bind YOUR PC WITH THE CODE of this site (where all the site’s needs and dependencies reside) to that domain? The domain name of your web app needs to resolve in such a way that the correct IP address behind it is revealed.&lt;/p&gt;

&lt;p&gt;Theoretically, you don’t even need to buy a domain name; your site can work perfectly fine with just an IP address like &lt;em&gt;&lt;a href="https://12.34.56.78/home" rel="noopener noreferrer"&gt;https://12.34.56.78/home&lt;/a&gt;&lt;/em&gt;. But that’s not ideal if you want your site to be searchable on Google and not just accessible to people who already have the link.&lt;/p&gt;

&lt;p&gt;If your internet provider changes your public IP address periodically, it’s like frequently moving houses. People trying to send you letters would still send them to your old address unless you keep updating them, and the letters would never reach you. The same logic applies to hosting with a dynamic IP. You could, of course, manually update everything and rebind the domain to your new public IP address, but that’s hardly convenient. &lt;/p&gt;

&lt;p&gt;If you want a static public IP address, you should contact your internet provider and find out the conditions under which you can get one. &lt;em&gt;It will probably come with an increased payment for internet service&lt;/em&gt;. Is pinning a certain IP address to your Wi-Fi router so hard, and does keeping it that way require enough extra effort to justify the "technical" costs? No. The increased payment is not even really about the fact that the need for a fixed public IP address hints at business use (so, the reasoning goes, why not charge you more). It’s because a &lt;em&gt;unique&lt;/em&gt; public IP address is a scarce resource! More precisely, a &lt;em&gt;unique&lt;/em&gt; public &lt;strong&gt;IPv4&lt;/strong&gt; address is in deficit. Internet Protocol version 4 (IPv4) forms the foundation of most &lt;em&gt;global Internet traffic&lt;/em&gt; today. An IPv4 address is composed of four numbers ranging from 0 to 255, separated by periods (.). &lt;/p&gt;

&lt;p&gt;If you do the straightforward math - four numbers in an IPv4 address, each in the range 0 to 255 (256 possible values) - you get 256 * 256 * 256 * 256 = 4,294,967,296 total addresses.&lt;/p&gt;
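&lt;p&gt;You can reproduce this arithmetic right in your shell with bash arithmetic expansion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#4 octets, 256 possible values each
$ echo $((256**4))
4294967296
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;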

&lt;p&gt;So here, on the &lt;em&gt;market&lt;/em&gt; for internet service, a basic economic rule comes into play: demand is growing with increasing digitalization around the world, but supply is restricted by the very (mathematical) nature of the good (unique IPv4 addresses), so prices for this good are increasing. In the next article, I will cover more details on IPv4, explain a bit about IPv6 (the way out of this IP-deficit situationship), and also cover some interesting aspects of networking that are consequences of the IPv4 address deficit (NAT).&lt;/p&gt;

&lt;p&gt;Plus, another obstacle for self-hosting is the router provided by your internet provider. Such routers often have very restrictive measures for incoming HTTP/HTTPS traffic (it gets blocked), and those restrictions (thankfully) will impede any hosting attempts. I say thankfully, because if you truly do not understand how it all works, it is better that these restrictions (firewall rules) are up and protecting you. &lt;/p&gt;

&lt;p&gt;However, keep in mind that hosting on the &lt;strong&gt;same network&lt;/strong&gt; you use for any personal device is not a great idea if you are unable to configure all the security mechanisms and firewalls and set up the networks properly.&lt;/p&gt;

&lt;p&gt;Summing this up: currently, it is not an option for me to self-host.&lt;/p&gt;


&lt;h3&gt;
  
  
  ➁ Virtualization as a tool for resource-constrained application development
&lt;/h3&gt;

&lt;p&gt;So, virtualization is not a solution for my deployment problem. Then where can the web app be deployed? The cloud. I can choose a cloud provider, rent instances that match my app's needs, configure them, and deploy my app. Simple, right? Well, not so fast—because every instance, every service, comes with a price. And those prices... For someone like me, who’s built a pretty powerful PC for around $1,000, seeing cloud pricing for "little server" instances can be a bit confusing. To give you an idea, you can explore AWS pricing using &lt;a href="https://calculator.aws/#/" rel="noopener noreferrer"&gt;their calculator&lt;/a&gt;. I’ll share some screenshots of EC2 instance pricing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcph62ik0m4jfcm7tkp3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcph62ik0m4jfcm7tkp3i.png" alt=" " width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3w8cksgk0s4c9ueefwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3w8cksgk0s4c9ueefwn.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2 vCPUs, 4 GB of RAM, and storage for an additional cost. All yours for around $30 per month if you want to host something with server-side operations. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaw5c993u62fbdpgnkni.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmaw5c993u62fbdpgnkni.gif" alt=" " width="450" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Actually, 2 &lt;strong&gt;v&lt;/strong&gt;CPUs, not CPUs. vCPU stands for virtual CPU, because they aren’t real physical CPUs—they’re virtualized. And EC2 instances are essentially just virtual machines.&lt;/p&gt;

&lt;p&gt;Now that we are back to the word virtualization, let’s talk about my use case—my needs. When it comes to developing my app on my PC, even though I could install everything needed directly (since, as a developer, I’m always using the same stack), it’s far from optimal. Why clutter my PC with installations of stuff like Nginx and MongoDB, leaving them hanging around unnecessarily when the project is finished?&lt;/p&gt;

&lt;p&gt;To keep everything tidy for development purposes, virtual machines hosted on my PC are a great solution. However, the real issue with developing directly on my PC is this: when I develop on a machine with 20 CPU cores of the latest generation, 64 GB of RAM, and 12 GB of GPU memory, how can I be sure that what I’ve developed will actually run on a small EC2 instance? Or, more importantly, how can I evaluate the resource requirements of my app in general? (Let's leave code-based evaluation aside for now.)&lt;/p&gt;

&lt;p&gt;This is where virtualization will really help me. I can evaluate my code’s performance right from the start by creating VMs with small resources attached and placing my app's components there!&lt;/p&gt;
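&lt;p&gt;To sketch the idea (assuming the libvirt tools described later in this article are already installed; the VM name, ISO path, and sizes here are purely illustrative), a deliberately small VM that mimics a modest cloud instance could be created like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#1 vCPU, 1 GiB RAM, 10 GiB disk - close to a small EC2 instance
$ virt-install \
    --name small-backend \
    --vcpus 1 \
    --memory 1024 \
    --disk size=10 \
    --cdrom ~/isos/debian-12-netinst.iso \
    --osinfo debian12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;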
&lt;h4&gt;
  
  
  Note for the Dockerists/Dockerphiles/Containerphiles
&lt;/h4&gt;

&lt;p&gt;I can already foresee the "Gosh, just learn Docker—&lt;em&gt;it’s easy!&lt;/em&gt; Developing on bare metal is dinosauric; containerization is the key!" argument. I have no doubt Docker can handle everything. In fact, I personally enjoy Docker Swarm quite a bit (can’t say the same for Kubernetes, though). &lt;/p&gt;

&lt;p&gt;However, let’s not forget that Docker has a virtualization technology under the hood. And as I mentioned earlier, EC2 instances are nothing more than virtual machines. So, when you spin up an EC2 instance, you’re essentially getting a VM—a &lt;strong&gt;virtual layer&lt;/strong&gt;. Then, when you install Docker on top of that, you’re adding… yet another virtual layer! And all of this is happening on a modest machine with just a few vCPUs and some RAM. &lt;/p&gt;

&lt;p&gt;You know what happens when you pile on more and more virtualization layers? They take you farther and farther away from the bare-metal performance of the hardware. &lt;/p&gt;

&lt;p&gt;And Kubernetes for small apps? That’s like using a bazooka to kill a fly. Sure, I know Docker apps can be deployed in various ways on AWS (not only on top of EC2), but that’s not the point. My small-scope web app doesn’t need any of the "perks" Docker can bring.&lt;/p&gt;

&lt;p&gt;"with Docker, my app can run everywhere"—because it’s no longer tied to OS. But I don’t plan to run my app anywhere except on Debian. I know how my app component's VMs work; I will set them up myself and I will know exactly what’s there. &lt;br&gt;
"Docker provides an isolated environment" Sure, but isolated from what? Separate VMs already provide plenty of isolation.&lt;/p&gt;

&lt;p&gt;As for bundling and isolating the software of different components and managing version conflicts: if that is your primary need for Docker even in small projects... Naughty, naughty - did you give up on pure TypeScript/Python and rely on external libraries a lot? Not my case, by the way.&lt;/p&gt;

&lt;p&gt;Why would one follow the containerization hype just because everyone else is doing it?&lt;/p&gt;

&lt;p&gt;That said, I’m not completely throwing Docker out of my stack. But for me, dockerization is something I’ll consider only when everything else is ready. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Moreover, Docker/containerization is about as easy as virtualization/virtual machines. Different syntax, some different concepts, but the logic behind them is more or less the same. Docker is easy when it comes to setting everything up in a default way, but if you need something more advanced, you will most probably get very frustrated if you do not know anything about hardware virtualization and virtual machines. Docker is not rocket science at all for those who have some experience with virtual machines.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So let's start with virtualization on Debian!&lt;/p&gt;


&lt;h3&gt;
  
  
  ➂ Virtualization on Debian: how it works
&lt;/h3&gt;

&lt;p&gt;The virtualization process happens under the "instructions" of your PC's &lt;em&gt;physical&lt;/em&gt; CPU, so it is important that your CPU supports it. Yes, virtual machines can access (if allowed to) various hardware components of your PC, but it is exactly the CPU that is responsible for isolating the processes running on guest VMs from the host (your physical PC). If your CPU supports virtualization, it first needs to be enabled on your PC:&lt;/p&gt;
&lt;h4&gt;
  
  
  ➂.➀ CPU virtualization support
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;To know if you have virtualization support enabled, you can check if the relevant flag is enabled with grep. If the following command for your processor returns some text, you already have virtualization support enabled:&lt;br&gt;
For Intel processors you can execute &lt;code&gt;grep vmx /proc/cpuinfo&lt;/code&gt; to check for Intel's Virtual Machine Extensions.&lt;br&gt;
For AMD processors you can execute &lt;code&gt;grep svm /proc/cpuinfo&lt;/code&gt; to check for AMD's Secure Virtual Machine. (&lt;a href="https://www.debian.org/doc/manuals/debian-handbook/sect.virtualization.en.html" rel="noopener noreferrer"&gt;The Debian Administrator's Handbook: Virtualization&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In my case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#this command is counting the number of times 'vmx flags' is mentioned in the output of /proc/cpuinfo. It is equal to 20, meaning that all my CPU cores support virtualization (I have 20 cores totale)
$ egrep -c '(vmx flags)' /proc/cpuinfo
20
#additional command
$ lscpu | grep Virtualization
Virtualization:       VT-x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If in your case the output of the command &lt;code&gt;grep vmx /proc/cpuinfo&lt;/code&gt; is empty, but your CPU is quite modern and is supposed to support virtualization, you’ll have to enter the BIOS during boot and enable it there. The steps to follow in the BIOS are similar to those described &lt;a href="https://support.hp.com/in-en/document/ish_5637142-5637191-16" rel="noopener noreferrer"&gt;in this guide&lt;/a&gt;. The interface of your BIOS depends on the brand of your motherboard, so if you are lost, you’ll need to look up instructions for enabling virtualization on your PC on the web.&lt;/p&gt;

&lt;p&gt;All the CPU cores are ready to virtualize something! Who starts?&lt;/p&gt;

&lt;h4&gt;
  
  
  ➂.➁ Hypervisor
&lt;/h4&gt;

&lt;p&gt;A hypervisor! "Hypervisor" is a somewhat generic term:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A hypervisor, also known as a virtual machine monitor (VMM) or virtualizer, is a type of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. (&lt;a href="https://en.wikipedia.org/wiki/Hypervisor" rel="noopener noreferrer"&gt;Wikipedia&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are different types of hypervisors. To make them easier to grasp, they can be divided into two types (the left and right images in the scheme below):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv12plezh0wkd8krjj7qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv12plezh0wkd8krjj7qp.png" alt=" " width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second type of hypervisor might be familiar to you if you’ve ever used VirtualBox. It runs on top of Windows, just like any other Windows app. Type 1 hypervisors, on the other hand, &lt;strong&gt;run directly on bare metal&lt;/strong&gt;. They often have their own OS, specifically tuned for virtualization purposes, and they are often used in enterprise settings.&lt;/p&gt;

&lt;p&gt;I included Proxmox in the category of Type 1 hypervisors because it comes as a Debian-based OS. To use it, you’ll need to replace your current desktop Debian with the Proxmox OS. &lt;em&gt;By the way, Proxmox is pretty great—easy to use and functionality-rich. I use this hypervisor for work, and I find it awesome&lt;/em&gt;. But it is not fully technically correct to call Proxmox a Type 1 hypervisor, as it is based on KVM&amp;amp;QEMU.&lt;/p&gt;

&lt;p&gt;Let's get to KVM, which is schematized in the middle of the image above. Is it a hypervisor? Well... yes and no. The term "hypervisor" is generic, so you could call it that. But technically, KVM is a Linux kernel module. You don’t have to build it yourself—it comes shipped with the Linux kernel, the core part of your Debian, just like other kernel modules (for example, drivers).&lt;/p&gt;

&lt;p&gt;In this article, I’ll be using KVM to set up virtualization tools on my PC. A popular alternative to KVM on Debian is Xen. Xen is a &lt;strong&gt;truly&lt;/strong&gt; Type 1 hypervisor, even though it can also run alongside Debian OS for personal use.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Xen is a “paravirtualization” solution. It introduces a thin abstraction layer, called a “hypervisor”, between the hardware and the upper systems; this acts as a referee that controls access to hardware from the virtual machines. (&lt;a href="https://www.debian.org/doc/manuals/debian-handbook/sect.virtualization.en.html" rel="noopener noreferrer"&gt;The Debian Administrator's Handbook: Virtualization&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As Xen runs &lt;em&gt;between the hardware and the upper systems&lt;/em&gt; it qualifies as a Type 1 hypervisor. VMware ESXi is another example of a Type 1 hypervisor.&lt;/p&gt;

&lt;p&gt;I’ll be using a KVM-based virtualization setup instead of Xen—just a personal preference.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➃ KVM &amp;amp; QEMU
&lt;/h3&gt;

&lt;p&gt;But what exactly is KVM, besides being a kernel module?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The Kernel Virtual Machine, or KVM, is a full virtualization solution for Linux on x86 (64-bit included) and ARM hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. (&lt;a href="https://wiki.debian.org/KVM" rel="noopener noreferrer"&gt;Debian Wiki: KVM&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;While KVM is providing most of the infrastructure that can be used by a virtualizer, but it is not a virtualizer by itself. Actual control for the virtualization is handled by a QEMU-based application.&lt;br&gt;
Unlike other virtualization systems, KVM was merged into the Linux kernel right from the start. Its developers chose to take advantage of the processor instruction sets dedicated to virtualization (Intel-VT and AMD-V), which keeps KVM lightweight, elegant and not resource-hungry. The counterpart, of course, is that KVM doesn't work on any computer but only on those with appropriate processors. &lt;br&gt;
Unlike such tools as VirtualBox, KVM itself doesn't include any user-interface for creating and managing virtual machines.(&lt;a href="https://www.debian.org/doc/manuals/debian-handbook/sect.virtualization.en.html" rel="noopener noreferrer"&gt;The Debian Administrator's Handbook: Virtualization&lt;/a&gt;)&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Earlier, I showed how to check whether virtualization is enabled on your PC; the following commands will show whether you have the KVM kernel module and whether it can be used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# checking for presence of KVM kernel modules
$ lsmod | grep kvm
kvm_intel             327680  0
kvm                   983040  1 kvm_intel
#Additional check with cpu-checker package:
$ sudo apt install cpu-checker 
$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the Debian Handbook states in the quote above, KVM alone is not enough: it goes alongside QEMU for virtualization processes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;QEMU (stands for Quick Emulator) is a generic and open source machine emulator and virtualizer.&lt;br&gt;
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance.&lt;br&gt;
When used as a virtualizer, QEMU achieves near native performance by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, 64-bit POWER, S390, 32-bit and 64-bit ARM, and MIPS guests. (&lt;a href="https://wiki.qemu.org/Main_Page" rel="noopener noreferrer"&gt;QEMU Wiki&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Is it possible to virtualize using only KVM? THEORETICALLY yes, but KVM has neither a GUI nor a CLI, so one would have to write C code against its kernel interface (&lt;code&gt;/dev/kvm&lt;/code&gt;) in order to virtualize something, and KVM alone will not emulate devices such as disks or network interfaces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can you use QEMU without KVM? Yes. QEMU alone can &lt;em&gt;emulate&lt;/em&gt; a full system with its built-in binary translator, the Tiny Code Generator (TCG). This is pure emulation (= compute-intensive), and the overall performance of a fully emulated system can be slow. Thus, using QEMU without an accelerator is inefficient and generally best for experimental purposes (e.g., if your CPU has architecture A, but you’re curious to explore how it all works on CPU architecture B). To "accelerate" emulated systems that run on the same architecture as the host, QEMU uses accelerators, and KVM is one of them. However, QEMU can also use alternative accelerators like Xen (&lt;a href="https://www.qemu.org/docs/master/system/introduction.html" rel="noopener noreferrer"&gt;QEMU: Virtualisation Accelerators&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
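&lt;p&gt;The difference between the two modes is visible right on the QEMU command line. A minimal sketch (assuming a QEMU system binary is installed; the guest image name is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#same guest, hardware-assisted virtualization via the KVM accelerator
$ qemu-system-x86_64 -accel kvm -m 2048 -smp 2 -drive file=guest.qcow2
#same guest, pure emulation via TCG (much slower)
$ qemu-system-x86_64 -accel tcg -m 2048 -smp 2 -drive file=guest.qcow2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;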

&lt;p&gt;QEMU is not a software package that comes pre-installed on Debian, so you’ll need to install it manually. And here’s where confusion can arise. If you Google around, you’ll most likely find something like this for Debian-based systems: &lt;code&gt;sudo apt install qemu-kvm virt-manager bridge-utils&lt;/code&gt;. At first glance, this seems fine—you actually need QEMU to work with KVM. But here’s the tricky part: &lt;code&gt;qemu-kvm&lt;/code&gt; isn’t even a real package. It’s a virtual package, which actually points to something else:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqn1w3mqrzukfztgxu34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqn1w3mqrzukfztgxu34.png" alt=" " width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For my case, it’s fine because I plan to have all my guests use this architecture. But if you want to use a different architecture for your guest VMs, &lt;code&gt;qemu-kvm&lt;/code&gt; will bring in redundant packages. There are other packages that will install QEMU besides &lt;code&gt;qemu-system-x86&lt;/code&gt;, like &lt;code&gt;qemu-system-arm&lt;/code&gt;, &lt;code&gt;qemu-system-misc&lt;/code&gt;, &lt;code&gt;qemu-system-ppc&lt;/code&gt; and &lt;code&gt;qemu-system&lt;/code&gt;, which will bring in the dependencies to virtualize/emulate various architectures with QEMU.&lt;/p&gt;

&lt;p&gt;I will install QEMU in this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install qemu-system-x86
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To shape your choice a bit, according to QEMU documentation on virtualization with KVM:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;QEMU can make use of KVM when running a target architecture that is the same as the host architecture. For instance, when running &lt;code&gt;qemu-system-x86&lt;/code&gt; on an x86 compatible processor, you can take advantage of the KVM acceleration — giving you benefit for your host and your guest system (&lt;a href="https://wiki.qemu.org/Features/KVM" rel="noopener noreferrer"&gt;QEMU: features KVM&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Technically, if you try to create a guest with a CPU architecture different from your host machine’s CPU, KVM won’t be used, because, remember, QEMU can fully emulate machines (I haven’t tested this myself, though).&lt;/p&gt;

&lt;p&gt;QEMU is installed, KVM is ready to virtualize, so everything is technically set up. However, the QEMU CLI syntax is far from simple and pretty idiosyncratic. I would prefer a syntax more familiar to me. And this is where libvirt will help me.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➄ Libvirt
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Libvirt is collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management.&lt;br&gt;
An primary goal of libvirt is to provide a single way to manage multiple different virtualization providers/hypervisors. No need to learn the hypervisor specific tools! (&lt;a href="https://wiki.libvirt.org/FAQ.html" rel="noopener noreferrer"&gt;Libvirt FAQ&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Libvirt is a bundle of software that includes an API library, a daemon (&lt;code&gt;libvirtd&lt;/code&gt;), and a command line utility (&lt;code&gt;virsh&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The libvirt tools for managing virtual machines are &lt;code&gt;virsh&lt;/code&gt;, &lt;code&gt;virt-manager&lt;/code&gt;, and &lt;code&gt;virt-install&lt;/code&gt;, which are all built around libvirt functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F386vnul4gtulhkr01jsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F386vnul4gtulhkr01jsk.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;virt-manager&lt;/code&gt; is a GUI tool for creation and management of VMs entirely through a graphical user interface (GUI) &amp;lt;-- can be a viable option in the beginning.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;virt-install&lt;/code&gt; is a CLI tool that enables the creation and management of VMs via commands and parameters. If created VMs are supposed to have a display and graphical sessions, they can be accessed with &lt;code&gt;virt-viewer&lt;/code&gt; (for a display).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;virsh&lt;/code&gt; is a command-line utility that can do a lot of stuff - starting from very simple tasks like VM creation up to advanced virtualization. virsh works tightly with XML configuration files, which can be used to configure domains, virtual machine specs, networks, etc. Virsh gives an option to connect to existing VMs remotely via SSH &amp;lt;-- I will be using this tool.&lt;/li&gt;
&lt;/ul&gt;
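&lt;p&gt;To give a feel for those XML configuration files: a heavily trimmed, hypothetical domain definition (every name, size, and path here is illustrative) looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- minimal sketch of a libvirt domain definition --&amp;gt;
&amp;lt;domain type='kvm'&amp;gt;
  &amp;lt;name&amp;gt;small-backend&amp;lt;/name&amp;gt;
  &amp;lt;memory unit='MiB'&amp;gt;1024&amp;lt;/memory&amp;gt;
  &amp;lt;vcpu&amp;gt;1&amp;lt;/vcpu&amp;gt;
  &amp;lt;os&amp;gt;
    &amp;lt;type arch='x86_64'&amp;gt;hvm&amp;lt;/type&amp;gt;
  &amp;lt;/os&amp;gt;
  &amp;lt;devices&amp;gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;
      &amp;lt;source file='/var/lib/libvirt/images/small-backend.qcow2'/&amp;gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;
    &amp;lt;/disk&amp;gt;
  &amp;lt;/devices&amp;gt;
&amp;lt;/domain&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;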

&lt;p&gt;I install &lt;code&gt;libvirt&lt;/code&gt; with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install libvirt-daemon-system
$ systemctl status libvirtd
● libvirtd.service - libvirt legacy monolithic daemon
     Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-01-10 22:39:29 CET; 2min 17s ago
#if not enabled in your case:
# $ sudo systemctl enable libvirtd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this way, I will have &lt;code&gt;libvirt-clients&lt;/code&gt; installed as well, as it is a dependency of this package.&lt;/p&gt;

&lt;p&gt;Everything needed should now be installed. First, I want to validate that everything is OK, and then I can proceed with creating the first VM.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➅ Validation of installed virtualization tools
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virt-host-validate
  QEMU: Checking for hardware virtualization  : PASS
  QEMU: Checking if device '/dev/kvm' exists  : PASS
  QEMU: Checking if device '/dev/kvm' is accessible : PASS
  ...
$ virsh version
Compiled against library: libvirt 10.10.0
Using library: libvirt 10.10.0
Using API: QEMU 10.10.0
Running hypervisor: QEMU 9.2.0

#command to check which guest machines you can emulate with the QEMU features you have installed:
$ virsh capabilities
...
  &amp;lt;guest&amp;gt;
    &amp;lt;os_type&amp;gt;hvm&amp;lt;/os_type&amp;gt;
    &amp;lt;arch name='i686'&amp;gt; &amp;lt;---
      &amp;lt;wordsize&amp;gt;32&amp;lt;/wordsize&amp;gt; &amp;lt;---
      &amp;lt;emulator&amp;gt;/usr/bin/qemu-system-i386&amp;lt;/emulator&amp;gt;
      ...
      &amp;lt;domain type='qemu'/&amp;gt; &amp;lt;---
      &amp;lt;domain type='kvm'/&amp;gt;  &amp;lt;---
    &amp;lt;/arch&amp;gt;
  &amp;lt;/guest&amp;gt;

  &amp;lt;guest&amp;gt;
    &amp;lt;os_type&amp;gt;hvm&amp;lt;/os_type&amp;gt;
    &amp;lt;arch name='x86_64'&amp;gt; &amp;lt;---
      &amp;lt;wordsize&amp;gt;64&amp;lt;/wordsize&amp;gt; &amp;lt;---
      &amp;lt;emulator&amp;gt;/usr/bin/qemu-system-x86_64&amp;lt;/emulator&amp;gt;
      ...
      &amp;lt;domain type='qemu'/&amp;gt; &amp;lt;---
      &amp;lt;domain type='kvm'/&amp;gt;  &amp;lt;---
    &amp;lt;/arch&amp;gt;
  &amp;lt;/guest&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  ➅.➀ Important! qemu:///system vs qemu:///session
&lt;/h4&gt;

&lt;p&gt;Here is the command I want you to pay attention to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virsh uri
qemu:///session

$ sudo virsh uri
qemu:///system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, there’s a difference between running &lt;code&gt;virsh&lt;/code&gt; with &lt;code&gt;sudo&lt;/code&gt; and without it. When you run &lt;code&gt;virsh&lt;/code&gt; with &lt;code&gt;sudo&lt;/code&gt;, it connects to the system &lt;code&gt;libvirtd&lt;/code&gt; service, the one launched by &lt;code&gt;systemd&lt;/code&gt;. That &lt;code&gt;libvirtd&lt;/code&gt; runs as root, so it has access to all host resources. The daemon config is in &lt;code&gt;/etc/libvirt&lt;/code&gt;; VM logs and other bits are stored in &lt;code&gt;/var/lib/libvirt&lt;/code&gt;. &lt;br&gt;
On the contrary, if you run &lt;code&gt;virsh&lt;/code&gt; without &lt;code&gt;sudo&lt;/code&gt;, it connects to &lt;code&gt;qemu:///session&lt;/code&gt;, a session &lt;code&gt;libvirtd&lt;/code&gt; service running as your user; the daemon is auto-launched if it's not already running. &lt;code&gt;libvirtd&lt;/code&gt; and all VMs run as that user, and all config, logs, and disk images are stored in the user's &lt;code&gt;$HOME&lt;/code&gt; directory. This means each user has their own &lt;code&gt;qemu:///session&lt;/code&gt; VMs, separate from all other users. Details are taken &lt;a href="https://blog.wikichoon.com/2016/01/qemusystem-vs-qemusession.html" rel="noopener noreferrer"&gt;from here&lt;/a&gt;.&lt;/p&gt;
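&lt;p&gt;For a quick orientation, these are the typical default locations (they can vary slightly between distributions and libvirt versions, so treat them as a rough map rather than a guarantee):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#qemu:///system (root-owned)
/etc/libvirt/qemu/              #domain XML definitions
/var/lib/libvirt/images/        #default disk image pool
#qemu:///session (per user)
~/.config/libvirt/qemu/         #domain XML definitions
~/.local/share/libvirt/images/  #default disk image pool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;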

&lt;p&gt;If, for some reason, your output of &lt;code&gt;virsh uri&lt;/code&gt; is empty, you can connect manually. And if you mix up session and system, you’ll be informed about it in the output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virsh
virsh # connect qemu:///system
==== AUTHENTICATING FOR org.libvirt.unix.manage ====
System policy prevents management of local virtualized systems

#The correct way to connect to the user-space session:
virsh # connect qemu:///session
#The correct way to connect to the system-wide session:
$ sudo virsh
virsh # connect qemu:///system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;THIS INFO IS VERY IMPORTANT:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;With qemu:///session, libvirtd and VMs run as your unprivileged user. This integrates better with desktop use cases since permissions aren't an issue, no root password is required, and each user has their own separate pool of VMs.&lt;br&gt;
However because nothing in the chain is privileged, any VM setup tasks that need host admin privileges aren't an option. Unfortunately this includes most general purpose networking options.&lt;br&gt;
The default qemu network mode when running unprivleged is usermode networking (or SLIRP). This is an IP stack implemented in userspace. This has many drawbacks: the VM can not easily be accessed by the outside world, the VM can talk to the outside world but only over a limited number of networking protocols, and it's very slow. (&lt;a href="https://blog.wikichoon.com/2016/01/qemusystem-vs-qemusession.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If this quote does not tell you much, here is the key takeaway: &lt;code&gt;qemu:///session&lt;/code&gt; integrates better &lt;strong&gt;with desktop use cases&lt;/strong&gt;. Any &lt;strong&gt;VM setup&lt;/strong&gt; tasks &lt;strong&gt;that need host admin privileges aren't an option&lt;/strong&gt;. &lt;strong&gt;This means that your VMs in the scope of &lt;code&gt;qemu:///session&lt;/code&gt; will NOT get general-purpose networking options; they are limited to usermode (SLIRP) networking&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➆ Creation of first VM
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;NB! For demonstration purposes, I will create the first VM in the scope of &lt;code&gt;qemu:///session&lt;/code&gt;. This way, I will be able to demonstrate the constraints of the created VM. Then, I will show you how to recreate the VM under &lt;code&gt;qemu:///system&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ➆.➀ OS image file
&lt;/h4&gt;

&lt;p&gt;To create my first virtual machine, which will, of course, be Debian Stable (Bookworm), I need an &lt;code&gt;.iso&lt;/code&gt; file. I’ll go for the minimal &lt;a href="https://www.debian.org/CD/netinst/" rel="noopener noreferrer"&gt;netinstall image&lt;/a&gt; to keep the system tidy and later install only the tools I will need.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd #to teleport to $HOME directory
$ mkdir -p .local/share/libvirt/images/
$ cd .local/share/libvirt/images/
$ wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.9.0-amd64-netinst.iso
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
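&lt;p&gt;The same directory on &lt;code&gt;cdimage.debian.org&lt;/code&gt; also publishes a &lt;code&gt;SHA512SUMS&lt;/code&gt; file, so verifying the download takes one extra command (a sketch assuming GNU coreutils; &lt;code&gt;--ignore-missing&lt;/code&gt; skips entries for images you didn't download):&lt;/p&gt;

```shell
# Fetch Debian's checksum list and verify only the files present locally
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS
sha512sum -c SHA512SUMS --ignore-missing
```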



&lt;h4&gt;
  
  
  ➆.➁ Preparing storage
&lt;/h4&gt;

&lt;p&gt;Then, I want to specify the storage device I plan to use for all my virtual machines (as they will share portions of it). Since I manage all storage devices on my PC using LVM, the first step is to create a new logical volume using the available free space in my existing logical volume group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo vgs
  VG            #PV #LV #SN Attr   VSize    VFree
  MY-vg       1   5   0 wz--n- &amp;lt;372.53g &amp;lt;129.02g
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How will my virtual machines use the storage space I plan to create for them? Each machine will have one or more virtual disks, and these virtual disks are essentially "disk images." As everything is a file on a Linux OS, these disk images are files, and they can have different formats. The &lt;code&gt;.qcow&lt;/code&gt; format is a disk image file format used by QEMU. Its updated version, &lt;code&gt;.qcow2&lt;/code&gt;, improves on the original &lt;code&gt;.qcow&lt;/code&gt;. I could also create disk images for VMs in the &lt;code&gt;.raw&lt;/code&gt; format. However, &lt;code&gt;.qcow2&lt;/code&gt; is generally more space-efficient and supports snapshots and compression.&lt;/p&gt;
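&lt;p&gt;The space efficiency comes from thin provisioning: a freshly created &lt;code&gt;.qcow2&lt;/code&gt; occupies almost no real disk space regardless of its virtual size and grows as the guest writes data. The same idea can be demonstrated with a plain sparse file (a coreutils sketch, not &lt;code&gt;qemu-img&lt;/code&gt;, contrasting apparent size with actual usage):&lt;/p&gt;

```shell
# A sparse 1 GiB file: the apparent size is 1 GiB, but no blocks are allocated
truncate -s 1G sparse.img
stat -c %s sparse.img   # apparent size in bytes
du -k sparse.img        # blocks actually allocated, in KiB
```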

&lt;p&gt;So the task is the following: I need to create a &lt;code&gt;.qcow2&lt;/code&gt; disk image for the VM I am about to create, and I want to use the available space in my logical volume group.&lt;/p&gt;

&lt;p&gt;There are two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I can create a fairly large new logical volume to provide space for multiple virtual machines. In this case, I need to place a file system on top of the new logical volume, mount it, and then create &lt;code&gt;.qcow2&lt;/code&gt; disk images over it. Why? Because a logical volume without a file system is a single, contiguous block of storage. A single &lt;code&gt;.qcow2&lt;/code&gt; can occupy the entire block device, but there’s no mechanism to store multiple files on the same device unless a file system is present.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second option is to create separate logical volumes sized to the needs of each virtual machine, with each logical volume fully allocated to a single &lt;code&gt;.qcow2&lt;/code&gt; image. By the way, you can also use physical partitions for your VMs; logical volumes are just my preferred method of managing storage space. However, even if each &lt;code&gt;.qcow2&lt;/code&gt; image sits on its own "personal" logical volume, that doesn’t mean you can grow the &lt;code&gt;.qcow2&lt;/code&gt; image simply by expanding the logical volume. If my VM runs out of space, I’ll need to create an additional logical volume, put a new &lt;code&gt;.qcow2&lt;/code&gt; image on it, and attach it as a new "virtual disk"... This quickly becomes a mess.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, I prefer the first option: a single logical volume as a kind of storage pool for all my virtual machines' disk images. &lt;/p&gt;

&lt;p&gt;If your understanding of LVM terminology is a bit &lt;em&gt;wobbly&lt;/em&gt; and you still confuse &lt;em&gt;volume groups&lt;/em&gt; with &lt;em&gt;logical volumes&lt;/em&gt;, I recommend reading &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-24-2m32"&gt;this article&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#I create new logical volume with name 'virt-machines' inside of the existing volume group
$ sudo lvcreate -L 100G -n virt-machines MY-vg
# I create filesystem on top of it 
$ sudo mkfs.ext4 /dev/MY-vg/virt-machines
# I create a mounting point for it
$ sudo mkdir -p /mnt/virt-machines
# I mount it
$ sudo mount /dev/MY-vg/virt-machines /mnt/virt-machines
# i add automounting option on boot with by modifying /etc/fstab
$ sudo vim.tiny /etc/fstab
# I add this line 
/dev/mapper/MY--vg-virt--machines /mnt/virt-machines ext4 defaults 0 0
# to validate syntax:
$ sudo mount -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, I can create a &lt;code&gt;.qcow2&lt;/code&gt; disk image in this directory. Since I plan to create and run VMs in my user space, I’ve given ownership of this directory to my user to avoid any permission issues later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# I create a .qcow2 disk image
$ sudo qemu-img create -f qcow2 /mnt/virt-machines/deb-nginx.qcow2 10G
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have an &lt;code&gt;.iso&lt;/code&gt; file from which the VM will boot (it will be attached as a virtual CD-ROM), and I have a virtual disk where the new system will be installed. Now, I just need to create a VM and allocate CPU cores and RAM to it. For the first VM creation, I will use &lt;code&gt;virt-install&lt;/code&gt; instead of &lt;code&gt;virsh&lt;/code&gt; to demonstrate the logic, and then proceed with XML configuration explanations. The &lt;code&gt;virt-install&lt;/code&gt; CLI is part of the &lt;code&gt;virtinst&lt;/code&gt; package and is not included in the &lt;code&gt;libvirt-clients&lt;/code&gt; package, so it needs to be installed separately.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➆.➂ VM creation with &lt;code&gt;virt-install&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;I will be using the default options of the &lt;code&gt;virt-install&lt;/code&gt; command, with two exceptions: &lt;code&gt;--graphics none&lt;/code&gt; and &lt;code&gt;--extra-args='console=ttyS0'&lt;/code&gt;. My VMs don’t need any graphical interface as they will not have display servers; I will access them via the console. Debian offers not only a graphical installer but also a terminal user interface (TUI) installer, which will guide you through the installation process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install virtinst
$ virt-install \
  --connect qemu:///session \
  --name deb-nginx \
  --ram 4096 \
  --vcpus 2 \
  --disk path=/mnt/virt-machines/deb-nginx.qcow2,size=10 \
  --location $HOME/.local/share/libvirt/images/debian-12.9.0-amd64-netinst.iso \
  --os-variant debian12 \
  --graphics none \
  --extra-args='console=ttyS0'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you encounter an error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Traceback (most recent call last):
  File "/usr/bin/virt-install", line 6, in &amp;lt;module&amp;gt;
    from virtinst import virtinstall
  File "/usr/share/virt-manager/virtinst/__init__.py", line 8, in &amp;lt;module&amp;gt;
    import gi
ModuleNotFoundError: No module named 'gi'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
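&lt;p&gt;This traceback usually means a non-system Python is being picked up, one that lacks the distro-packaged &lt;code&gt;gi&lt;/code&gt; (PyGObject) module. A quick diagnostic sketch (the conda path is illustrative):&lt;/p&gt;

```shell
# Which python3 comes first on PATH? With conda active it points into the
# conda prefix rather than /usr/bin
command -v python3
# Print the prefix of the interpreter that would run virt-install's imports
python3 -c 'import sys; print(sys.prefix)'
```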



&lt;p&gt;Check that you are not in an active Python environment. I use Anaconda, so the &lt;code&gt;base&lt;/code&gt; conda environment is always activated; I just deactivate it with the &lt;code&gt;conda deactivate&lt;/code&gt; command before executing &lt;code&gt;virt-install&lt;/code&gt;.&lt;br&gt;
You will see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvf4kf9fgdtfgj99f65u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvf4kf9fgdtfgj99f65u.png" alt=" " width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then the Debian installer (TUI only) should pop up. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqad5l59l144mwl5iogrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqad5l59l144mwl5iogrr.png" alt=" " width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the installation is finished and the VM has rebooted, I close the active console and reconnect with &lt;code&gt;virsh&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virsh --connect qemu:///session console deb-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3i0tjx9rtyiiv8b4jw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3i0tjx9rtyiiv8b4jw1.png" alt=" " width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Actually, everything is ready and the VM is usable, so technically I can try to SSH in...&lt;/p&gt;




&lt;h3&gt;
  
  
  ➇ Let the Networking begin!
&lt;/h3&gt;

&lt;p&gt;To SSH into the VM, I need to know the private IP address it was assigned (and I believe it was assigned one, as I expect some default network to have been configured and the VM to have joined it during the creation process). I will leave the details of how SSH works from a networking perspective for the next article of this virtualization series.&lt;/p&gt;

&lt;p&gt;To find out the IP of the created VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@deb-nginx:~# ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu qdisc noqueue state UNKNOWN 
....
2: enp1s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether MA:CA:DD:RE:SS:VM brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp1s0
       valid_lft 77570sec preferred_lft 77570sec
    inet6 XXXXXXXXXXXXXX/64 scope site dynamic mngtmpaddr
       valid_lft 86291sec preferred_lft 14291sec
    inet6 XXXXXXXXXXXXXXXX/64 scope link
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The network interface is enp1s0, and the IPv4 address is 10.0.2.15. So, let's try ssh!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#from Host machine!
$ ssh 10.0.2.15

Nothing!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  ➇.➀ Userspace (SLIRP or passt) connection
&lt;/h4&gt;

&lt;p&gt;However, as I mentioned earlier, &lt;code&gt;qemu:///session&lt;/code&gt; VMs are primarily intended for desktop use, such as trying out a new distro. The network used under &lt;code&gt;qemu:///session&lt;/code&gt; is somewhat primitive and restrictive: it does not allow incoming connections to the VMs and cannot easily be modified. For instance, without &lt;code&gt;sudo&lt;/code&gt; privileges you cannot configure more sophisticated network settings that require creating network components like bridges, changing their state, etc.&lt;/p&gt;

&lt;p&gt;I can check which network is configured for this VM, and in general, I can review the full configuration of the created VM. When I used &lt;code&gt;virt-install&lt;/code&gt;, I simply passed some options during the creation process to specify &lt;em&gt;how&lt;/em&gt; I wanted my VM, and those parameters were translated into a configuration file. This file is much more detailed and "technical" than the option list I provided when creating the VM: &lt;code&gt;libvirt&lt;/code&gt; thoroughly translated my requirements into technical specifications, allocated the necessary hardware, and configured the other components my VM needs to work. The configuration format used by &lt;code&gt;virsh&lt;/code&gt; for almost everything is XML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virsh dumpxml deb-nginx

&amp;lt;domain type='kvm' id='3'&amp;gt;
  &amp;lt;name&amp;gt;deb-nginx&amp;lt;/name&amp;gt;
  &amp;lt;uuid&amp;gt;0057aa74-392a-4b4b-ac89-8557d7b9312d&amp;lt;/uuid&amp;gt;
  ...
  &amp;lt;memory unit='KiB'&amp;gt;4194304&amp;lt;/memory&amp;gt;
  &amp;lt;currentMemory unit='KiB'&amp;gt;4194304&amp;lt;/currentMemory&amp;gt;
  &amp;lt;vcpu placement='static'&amp;gt;2&amp;lt;/vcpu&amp;gt; &amp;lt;--interesting
  &amp;lt;os&amp;gt;
    &amp;lt;type arch='x86_64' machine='pc-q35-9.2'&amp;gt;hvm&amp;lt;/type&amp;gt;
  &amp;lt;/os&amp;gt;
  &amp;lt;features&amp;gt;
   ...
  &amp;lt;/features&amp;gt;
  &amp;lt;cpu mode='host-passthrough' check='none' migratable='on'/&amp;gt; &amp;lt;--interesting
   ...
  &amp;lt;devices&amp;gt;
    &amp;lt;emulator&amp;gt;/usr/bin/qemu-system-x86_64&amp;lt;/emulator&amp;gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;
      &amp;lt;driver name='qemu' type='qcow2' discard='unmap'/&amp;gt;
      &amp;lt;source file='/mnt/virt-machines/deb-nginx.qcow2' index='2'/&amp;gt;
      &amp;lt;backingStore/&amp;gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;
      &amp;lt;alias name='virtio-disk0'/&amp;gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/&amp;gt;
    &amp;lt;/disk&amp;gt;
    &amp;lt;disk type='file' device='cdrom'&amp;gt;
      &amp;lt;driver name='qemu'/&amp;gt;
      &amp;lt;target dev='sda' bus='sata'/&amp;gt;
      &amp;lt;readonly/&amp;gt;
      &amp;lt;alias name='sata0-0-0'/&amp;gt;
      &amp;lt;address type='drive' controller='0' bus='0' target='0' unit='0'/&amp;gt;
    &amp;lt;/disk&amp;gt;
    ....
--------------&amp;gt; HERE IT IS, NETWORK INTERFACE &amp;lt;-------------------
    &amp;lt;interface type='user'&amp;gt;
      &amp;lt;mac address='MA:CA:DD:RE:SS:VM'/&amp;gt;
      &amp;lt;model type='virtio'/&amp;gt;
      &amp;lt;alias name='net0'/&amp;gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/&amp;gt;
    &amp;lt;/interface&amp;gt;
    ....
 ---&amp;gt; O! Mouse and keyboard: &amp;lt;---------
    &amp;lt;input type='mouse' bus='ps2'&amp;gt;
      &amp;lt;alias name='input0'/&amp;gt;
    &amp;lt;/input&amp;gt;
    &amp;lt;input type='keyboard' bus='ps2'&amp;gt;
      &amp;lt;alias name='input1'/&amp;gt;
    &amp;lt;/input&amp;gt;
    ....
&amp;lt;/domain&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The created VM has a single network interface. But in the XML configuration file I see that this interface has type "user" (&lt;code&gt;&amp;lt;interface type='user'&amp;gt;&lt;/code&gt;), whereas for a libvirt-managed virtual network the type would be "network".&lt;/p&gt;
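&lt;p&gt;For comparison, an interface attached to a libvirt virtual network would look roughly like this in the domain XML (a sketch; &lt;code&gt;default&lt;/code&gt; is the stock NAT network that exists under &lt;code&gt;qemu:///system&lt;/code&gt;):&lt;/p&gt;

```
&lt;interface type='network'&gt;
  &lt;source network='default'/&gt;
  &lt;model type='virtio'/&gt;
&lt;/interface&gt;
```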

&lt;p&gt;I can check for existing alternatives (libvirt virtual networks). Under &lt;code&gt;qemu:///session&lt;/code&gt;, as expected, there is nothing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virsh net-list --all

 Name   State   Autostart   Persistent
----------------------------------------
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, it seems that this userspace networking can now be configured quite extensively, because newer versions of &lt;code&gt;libvirt&lt;/code&gt; have introduced more advanced features and options:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Since 9.0.0 an alternate backend implementation of the user interface type can be selected by setting the interface's subelement type attribute to passt. In this case, the passt transport (&lt;a href="https://passt.top" rel="noopener noreferrer"&gt;https://passt.top&lt;/a&gt;) is used. Similar to SLIRP, passt has an internal DHCP server that provides a requesting guest with one ipv4 and one ipv6 address; it then uses userspace proxies and a separate network namespace to provide outgoing UDP/TCP/ICMP sessions, and optionally redirect incoming traffic destined for the host toward the guest instead. (&lt;a href="https://libvirt.org/formatdomain.html#userspace-slirp-or-passt-connection" rel="noopener noreferrer"&gt;Libvirt: Userspace connection&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, configuring the userspace connection is beyond the scope of this article (and the next article on networking as well). For my use case, I don’t actually need port forwarding; I need something different.&lt;/p&gt;

&lt;h4&gt;
  
  
  ➇.➁ NAT forwarding (aka "virtual networks")
&lt;/h4&gt;

&lt;p&gt;However, &lt;code&gt;qemu:///system&lt;/code&gt; comes with one default virtual network (NAT):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ sudo virsh net-list --all

 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   no          yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, I just recreate the VM in the &lt;code&gt;qemu:///system&lt;/code&gt; scope, and this VM will use the 'default' network by default. &lt;/p&gt;
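&lt;p&gt;What this 'default' network actually is can be inspected with &lt;code&gt;sudo virsh net-dumpxml default&lt;/code&gt;. On a stock install it is a NAT network on 192.168.122.0/24 with a built-in DHCP range, roughly like this (a typical definition; the UUID and details will differ):&lt;/p&gt;

```
&lt;network&gt;
  &lt;name&gt;default&lt;/name&gt;
  &lt;forward mode='nat'/&gt;
  &lt;bridge name='virbr0' stp='on' delay='0'/&gt;
  &lt;ip address='192.168.122.1' netmask='255.255.255.0'&gt;
    &lt;dhcp&gt;
      &lt;range start='192.168.122.2' end='192.168.122.254'/&gt;
    &lt;/dhcp&gt;
  &lt;/ip&gt;
&lt;/network&gt;
```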

&lt;p&gt;First, I have to destroy and undefine the VM in &lt;code&gt;qemu:///session&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ virsh destroy deb-nginx
$ virsh undefine deb-nginx
#(optionally, recreare disk image)
$ sudo rm /mnt/virt-machines/deb-nginx.qcow2
$ sudo qemu-img create -f qcow2 /mnt/virt-machines/deb-nginx.qcow2 10G
#IMPORTANT! Start dafault network if it is not started yet
$ sudo virsh net-start default

$ sudo virt-install \
  --connect qemu:///system \
  --name test \
  --ram 4096 \
  --vcpus 2 \
  --disk path=/mnt/virt-machines/deb-nginx.qcow2,size=10 \
  --location /var/lib/libvirt/images/debian-12.9.iso \
  --os-variant debian12 --graphics none \
--extra-args='console=ttyS0'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Please note&lt;/strong&gt; where I placed the ISO file. If you place it inside some folder under &lt;code&gt;/etc/libvirt/&lt;/code&gt;, as might seem like the right place, you could encounter a weird and misleading error, such as:&lt;br&gt;
&lt;code&gt;error: internal error cannot load AppArmor profile 'libvirt-9cb01efc-ed3b-ff8e-4de5-7227d311dd15'.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you put the ISO file somewhere under your &lt;code&gt;$HOME&lt;/code&gt; directory, you might see a warning like this:&lt;br&gt;
WARNING /home/..../debian-12.8.iso may not be accessible by the hypervisor. You will need to grant the 'libvirt-qemu' user search permissions for the following directories: ['/home/...', '/home/....local', '/home/.../.local/share'].&lt;/p&gt;

&lt;p&gt;Both errors are related to the fact that the VM creation process cannot access the ISO file.&lt;/p&gt;

&lt;p&gt;Meanwhile, I proceed with the new installation via the TUI. I set up LVM on this VM and put &lt;code&gt;/var&lt;/code&gt; on a separate logical volume, because this VM is meant for NGINX, and NGINX can be very talkative in its logs, especially if configured badly. If you do not know how to do this, refer to &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-24-2m32"&gt;this article&lt;/a&gt;. I also installed an SSH server, so I can SSH into this VM from the host.&lt;/p&gt;

&lt;p&gt;NB if you have &lt;code&gt;ufw&lt;/code&gt; up! During installation, Debian should auto-configure the network. If it fails and you see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3xelm33y98k36r9efgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3xelm33y98k36r9efgi.png" alt=" " width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;...disable &lt;code&gt;ufw&lt;/code&gt; &lt;strong&gt;temporarily&lt;/strong&gt; for the duration of the installation, then &lt;strong&gt;enable it again afterward&lt;/strong&gt;. It's not the best solution; the better approach is to adjust the &lt;code&gt;ufw&lt;/code&gt; rules so they don't block the DHCP requests and DNS resolution that &lt;code&gt;libvirt&lt;/code&gt; uses to configure the VM's network through the default NAT setup.&lt;/p&gt;
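&lt;p&gt;If you'd rather keep &lt;code&gt;ufw&lt;/code&gt; enabled, rules along these lines should let the guest's DHCP and DNS traffic through on libvirt's bridge (a sketch, assuming the default &lt;code&gt;virbr0&lt;/code&gt; bridge; libvirt's dnsmasq serves DHCP on UDP port 67 and DNS on port 53):&lt;/p&gt;

```shell
# Allow DHCP requests and DNS lookups arriving from VMs on the NAT bridge
sudo ufw allow in on virbr0 to any port 67 proto udp
sudo ufw allow in on virbr0 to any port 53
```

&lt;p&gt;Treat these as a starting point and verify with &lt;code&gt;sudo ufw status verbose&lt;/code&gt; afterward.&lt;/p&gt;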

&lt;p&gt;When the installation is completed, I reopen the console with &lt;code&gt;virsh&lt;/code&gt; and log in to the freshly created VM. First, I check connectivity, discover the IP address, and try to SSH from the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo virsh console deb-nginx
# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=112 time=15.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=112 time=17.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=112 time=16.3 ms
# ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt;
.....
2: enp1s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether XXXXXXXXXXXXXXXX brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.125/24 brd 192.168.122.255 scope global dynamic enp1s0
       valid_lft 3217sec preferred_lft 3217sec
    inet6 XXXXXXXXXXXXXXXX scope link
       valid_lft forever preferred_lft forever
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, the IP is 192.168.122.125.&lt;br&gt;
&lt;em&gt;NB! If you just execute &lt;code&gt;ssh 192.168.122.125&lt;/code&gt; from the host, the login will most likely fail: without an explicit username, &lt;code&gt;ssh&lt;/code&gt; reuses your host username, which may not exist on the VM, and logging in as &lt;code&gt;root&lt;/code&gt; won't work either, since &lt;code&gt;root&lt;/code&gt; login via SSH is disabled by default on Debian.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh user@192.168.122.125
-&amp;gt;yes
user@192.168.122.125's password:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, what can I do with this VM from the network standpoint:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I can access the Internet (e.g., run &lt;code&gt;apt update&lt;/code&gt; and &lt;code&gt;apt upgrade&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;I can SSH into this VM from the host machine.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I cannot do:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I have a laptop connected to the same home network as my PC (via WiFi). Can I SSH into this VM from the laptop? No. This is why:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip a
.....
5. virbr0 ...
  inet 192.168.122.0/24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
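&lt;p&gt;The reason is that 192.168.122.0/24 exists only behind the host's &lt;code&gt;virbr0&lt;/code&gt; NAT: a laptop on the home LAN (say, 192.168.1.0/24) has no route to that subnet, and outgoing VM traffic gets rewritten to the host's own address. A tiny shell sketch makes the subnet mismatch explicit (the laptop's address is hypothetical):&lt;/p&gt;

```shell
# Compare the /24 prefixes of the VM and a machine on the home LAN
vm_ip=192.168.122.125
lan_ip=192.168.1.42
echo "VM network:  ${vm_ip%.*}.0/24"
echo "LAN network: ${lan_ip%.*}.0/24"
# Different /24s and no route between them: the laptop cannot reach the VM
[ "${vm_ip%.*}" = "${lan_ip%.*}" ] || echo "different subnets"
```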






&lt;p&gt;You may have many questions and few answers about network configuration, but this article is already quite long, so I’ll move the networking setups to the second part of this series.&lt;/p&gt;

</description>
      <category>debian</category>
      <category>kvm</category>
      <category>virtualmachine</category>
      <category>linux</category>
    </item>
    <item>
      <title>Debian 12 … is amazing! How to: Create your custom codehouse #6 [Giving Voice to Debian: Wireless Audio Devices configuration]</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Mon, 30 Dec 2024 12:34:07 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-6-giving-voice-to-debian-wireless-32kc</link>
      <guid>https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-6-giving-voice-to-debian-wireless-32kc</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-4a4-4dfh"&gt;previous article&lt;/a&gt;, I demonstrated all the steps required to set up a custom UI from scratch, including the display server, window manager, status bar and additional useful apps. Additionally, I shared my choices for fundamental apps, such as a terminal emulator and a browser.&lt;/p&gt;

&lt;p&gt;Now, it’s time to complete the setup. What I configured previously was primarily focused on output devices—like the monitor, which handles what you see—and input devices like the keyboard and mouse, which allow interaction with applications. However, input and output audio devices remain untouched (in my case, a Marshall wireless Bluetooth headset with both a microphone for input and speakers for output).&lt;/p&gt;




&lt;p&gt;My input/output audio device is only one—a Marshall IV headset (which I use mostly in wireless Bluetooth mode, but it also has a wired option). I don’t have separate speakers or a microphone. So, the first thing I need to ensure is that my Bluetooth dongle is working properly. I have written a detailed &lt;a href="https://dev.to/dev-charodeyka/why-is-it-when-something-happens-it-is-always-you-two-troubleshooting-bluetooth-and-wi-fi-2ofn"&gt;article dedicated to troubleshooting Wi-Fi and Bluetooth devices&lt;/a&gt;, so if your Bluetooth device gives you any troubles, please refer to that article.&lt;/p&gt;

&lt;p&gt;My Bluetooth dongle works perfectly on Debian—truly plug and play. There’s no need for configuration, installation of drivers/firmware, or any tweaks; it just works. This Bluetooth dongle is from the brand Edimax, purchased on Amazon for around $10 (Edimax BT-8500). It supports the Bluetooth 5.0 protocol.&lt;/p&gt;

&lt;p&gt;Therefore, my Debian system is ready to be configured to use my headset seamlessly for audio playback and, if needed, microphone input.&lt;/p&gt;




&lt;p&gt;Here’s the road map for this article:&lt;/p&gt;

&lt;p&gt;➀ Sound on your PC: where it starts (hardware-side)?&lt;/p&gt;

&lt;p&gt;➁ Sound on your PC: where it starts (software-side)?&lt;/p&gt;

&lt;p&gt;➂ ALSA, PipeWire or Pulseaudio: what is better*? (*trick question)&lt;/p&gt;

&lt;p&gt;➃ Bluetooth devices management tool&lt;/p&gt;

&lt;p&gt;➄ PipeWire vs Pulseaudio&lt;/p&gt;

&lt;p&gt;➅ PipeWire: Installation&lt;/p&gt;

&lt;p&gt;➆ About XDG Desktop Portal&lt;/p&gt;

&lt;p&gt;➇ Pipewire audio profiles: Understanding the difference between device profile headset-head-unit and a2dp&lt;/p&gt;




&lt;h3&gt;
  
  
  ➀ Sound on your PC: where it starts (hardware-side)?
&lt;/h3&gt;

&lt;p&gt;Let’s start by understanding which component of your PC determines whether you can enjoy any audio at all. It’s not just about having speakers, a microphone, a headset, or anything else that can actually play or record sound. Your PC needs a piece of hardware capable of processing audio at a very low level. That job is done by sound cards. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A sound card is a computer expansion card that can input and output sound under program control. (&lt;a href="https://wiki.debian.org/SoundCard" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you have a laptop or a PC, or if you built a PC yourself and worry that you might have forgotten some part, there’s nothing to worry about in 99% of cases: your machine has a sound card. Modern machines typically have on-board sound cards integrated into the motherboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz22mymihs0h3cxqpwz3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz22mymihs0h3cxqpwz3.png" alt=" " width="800" height="682"&gt;&lt;/a&gt;&lt;/p&gt;
Zoomed area of sound card on motherboard with 3 audio jacks



&lt;p&gt;Depending on your motherboard’s specifications, the quality of the integrated sound card will vary, and that directly affects the quality of the sound your PC can produce or record. Some people who work with professional audio may find the capabilities of integrated sound cards unsatisfactory, so they might opt for an additional sound card attached via PCIe or USB (not all motherboards have spare PCIe slots for sound cards, though). However, for everyday use, an integrated sound card is usually more than sufficient.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lspci -d ::0403 #::0403 here means Audio device subclass
00:1f.3 Audio device: Intel Corporation Raptor Lake High Definition Audio Controller (rev 11)
01:00.1 Audio device: NVIDIA Corporation GA106 High Definition Audio Controller (rev a1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first audio device is my motherboard’s integrated sound card. But what about the NVIDIA Corporation High Definition Audio Controller? Why is it there? Can I use it?&lt;/p&gt;

&lt;p&gt;NVIDIA High Definition Audio allows an NVIDIA GPU to transmit audio to any display (monitor or TV) with built-in speakers, provided it has an audio-capable connector (most commonly HDMI or DisplayPort). With NVIDIA HD Audio, you can enjoy various uncompressed and lossless audio formats, which can significantly enhance your listening experience, especially on modern displays with integrated speakers.&lt;/p&gt;

&lt;p&gt;However, my monitor is pretty simple and has no audio device of its own, so I will be using the on-board sound card.&lt;/p&gt;
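
&lt;p&gt;Since only the integrated card matters here, the &lt;code&gt;lspci&lt;/code&gt; listing above can be narrowed down with a tiny filter. This is just a sketch: the &lt;code&gt;grep&lt;/code&gt; pattern assumes the discrete audio function belongs to an NVIDIA GPU, as on my machine.&lt;/p&gt;

```shell
# Drop the GPU's HDMI/DP audio function from the Audio-device listing,
# leaving only the on-board controller. Sample output is embedded so the
# filter can be demonstrated without real hardware.
pick_onboard_audio() {
  grep -iv 'nvidia'
}

lspci_sample='00:1f.3 Audio device: Intel Corporation Raptor Lake High Definition Audio Controller (rev 11)
01:00.1 Audio device: NVIDIA Corporation GA106 High Definition Audio Controller (rev a1)'

printf '%s\n' "$lspci_sample" | pick_onboard_audio
# on a live system: lspci -d ::0403 | pick_onboard_audio
```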




&lt;h3&gt;
  
  
  ➁ Sound on your PC: where does it start (software side)?
&lt;/h3&gt;

&lt;p&gt;As with any hardware device, a sound card needs firmware and drivers to function. On Debian, you usually don’t have to worry about these, except when something is messed up with the kernel modules, which often happens when Debian runs in a virtual machine. Yes, like most drivers, sound card drivers are part of the Linux kernel, so it’s the Linux kernel that provides them to your Debian.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lspci -k -d ::0403
00:1f.3 Audio device: Intel Corporation Raptor Lake High Definition Audio Controller (rev 11)
    DeviceName: Intel HD Audio
    Subsystem: ASUSTeK Computer Inc. Raptor Lake High Definition Audio Controller
    Kernel driver in use: snd_hda_intel
    Kernel modules: snd_hda_intel, snd_soc_avs, snd_sof_pci_intel_tgl
01:00.1 Audio device: NVIDIA Corporation GA106 High Definition Audio Controller (rev a1)
    Subsystem: ASUSTeK Computer Inc. GA106 High Definition Audio Controller
    Kernel driver in use: snd_hda_intel
    Kernel modules: snd_hda_intel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything is fine with the sound card’s driver in my case: the &lt;code&gt;snd_hda_intel&lt;/code&gt; driver is present and in use.&lt;/p&gt;
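
&lt;p&gt;If you want just the driver names rather than the full &lt;code&gt;lspci -k&lt;/code&gt; dump, the "Kernel driver in use" lines are easy to extract. A minimal sketch, run here over sample output shaped like the listing above:&lt;/p&gt;

```shell
# Print only the value of each "Kernel driver in use:" line.
drivers_in_use() {
  awk -F': ' '/Kernel driver in use/ { print $2 }'
}

lspci_k_sample='00:1f.3 Audio device: Intel Corporation Raptor Lake High Definition Audio Controller (rev 11)
	Kernel driver in use: snd_hda_intel
01:00.1 Audio device: NVIDIA Corporation GA106 High Definition Audio Controller (rev a1)
	Kernel driver in use: snd_hda_intel'

printf '%s\n' "$lspci_k_sample" | drivers_in_use
# on a live system: lspci -k -d ::0403 | drivers_in_use
```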

&lt;h3&gt;
  
  
  ➂ ALSA, PipeWire or PulseAudio: which is better? (trick question)
&lt;/h3&gt;

&lt;p&gt;I have a sound card, I have drivers and firmware, and &lt;strong&gt;I have an audio output device—my headset connected via cable&lt;/strong&gt; (let’s leave aside the Bluetooth management part for now).&lt;/p&gt;

&lt;p&gt;So, I go to YouTube to listen to my favourite music band, &lt;a href="https://www.youtube.com/watch?v=4S9ZMmctmMA&amp;amp;list=OLAK5uy_mShYKzbbtk8KHC9apyQTS43XrMR4zcD7A&amp;amp;index=6" rel="noopener noreferrer"&gt;Public Memory&lt;/a&gt;...&lt;/p&gt;

&lt;p&gt;Well, I can’t hear anything (I can’t provide a screenshot as proof, though XD). So, what am I missing?&lt;/p&gt;

&lt;p&gt;If you’ve ever searched for or troubleshot audio problems on Linux, you’ve most probably encountered three names, regardless of your distro: ALSA, PulseAudio, and PipeWire. These names can be confusing. Are they alternatives to each other? Do you need all of them? Which one is better? That’s why I gave this section its title: I want to answer these questions by exploring why I can’t listen to Public Memory and which software I am missing. &lt;/p&gt;

&lt;h3&gt;
  
  
  ALSA (Advanced Linux Sound Architecture)
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Advanced Linux Sound Architecture (ALSA) is a software framework and part of the Linux kernel that provides an application programming interface (API) for sound card device drivers.&lt;br&gt;
Put simply, ALSA can be divided into two components: The kernel API that provides access to your sound card for higher-level sound servers and applications, and a userspace library that provides more general functions (like effects, mixing, routing, etc.)&lt;br&gt;
There is no way to "replace" ALSA, with regards to the kernel API. Previously, there was also OSS (Open Sound System), but that's been deprecated for nearly 20 years. The same is not true of ALSA's userspace library, which can be replaced. (&lt;a href="https://wiki.debian.org/ALSA" rel="noopener noreferrer"&gt;Debian Wiki: ALSA&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here we are! The first answer is: &lt;strong&gt;ALSA is a must-have&lt;/strong&gt;. No ALSA = no audio. Moreover, you already have it on Debian, because it is part of the Linux kernel, just like the sound card drivers. So, since I have the Linux kernel, I should already have ALSA.&lt;/p&gt;

&lt;p&gt;Indeed, I have it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# I check for presense of package that contains the ALSA userspace library and its standard plugins, as well as the required configuration files:
$ dpkg -l | grep libasound2
ii  libasound2:amd64                     1.2.8-1+b1                           amd64        shared library for ALSA applications
ii  libasound2-data                      1.2.8-1                              all          Configuration files and profiles for ALSA drivers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why can’t I listen to Public Memory, then?? T-T. &lt;/p&gt;

&lt;p&gt;Well, if you haven’t been reading this series from the first part, you might not know that I used a super minimal installation: Debian Stable with only the standard system utilities. This setup is usually perfect for server environments, and the last thing you’d typically do on a server is play music. So, I do have ALSA, I have the drivers, and everything is ready for me, but it needs to be initialised. By default it isn’t, because Debian (if you choose this option during installation) doesn’t run or initialise anything you didn’t explicitly request (and I love it). &lt;/p&gt;

&lt;p&gt;The initialization is quite simple: &lt;code&gt;sudo alsactl init&lt;/code&gt;. However, I cannot use it because this command is part of the &lt;a href="https://packages.debian.org/bookworm/alsa-utils" rel="noopener noreferrer"&gt;alsa-utils package&lt;/a&gt;, which is not installed by default on my setup and is not included in the standard system utilities. Therefore, I need to install it first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install alsa-utils
$ sudo alsactl init
Found hardware: "HDA-Intel" "Realtek ALC897" "HDA:10ec0897,104387fb,00100402" "0x1043" "0x87fb"
Hardware is initialized using a generic method
Found hardware: "HDA-Intel" "Nvidia GPU 9f HDMI/DP" "HDA:10de009f,1043881d,00100100" "0x1043" "0x881d"
Hardware is initialized using a generic method
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Aaaannd magic! Public Memory’s divine music is in my ears!&lt;/p&gt;

&lt;p&gt;Is ALSA restrictive in some way? Does it work only for one audio stream at a time? &lt;/p&gt;

&lt;p&gt;For development purposes, I have multiple browsers installed, so I opened another Public Memory song in a different browser and played it simultaneously. It works—I hear the overlaying audio of both songs.&lt;/p&gt;
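
&lt;p&gt;This works most likely because ALSA's default &lt;code&gt;dmix&lt;/code&gt; plugin mixes PCM streams in software, so several players can share the card. The same experiment, scripted as a sketch (the player command and file names are placeholders):&lt;/p&gt;

```shell
# Launch two players concurrently, like two browser tabs would, and wait
# for both to finish. $1 is the player command (e.g. aplay from alsa-utils).
play_two() {
  "$1" song-a.wav &
  "$1" song-b.wav &
  wait
}

# Dry run with `echo` standing in for a real player:
play_two echo
# real use: play_two aplay
```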

&lt;p&gt;I think it’s time to set up Bluetooth management tools and try listening to songs using my wireless headset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Regarding the question posed in the section title, I haven’t fully explored it yet. For now, it’s clear that ALSA is the key to audio input/output working on Debian, and it’s functioning well for the little tests I performed. I’ll leave advanced tests, like using the microphone or testing apps for calls, for later, as my priority now is setting up Bluetooth to use my headset wirelessly.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➃ Bluetooth devices management tool
&lt;/h3&gt;

&lt;p&gt;I have a Bluetooth adapter (an Edimax BT-8500 dongle) and a Bluetooth device (a Marshall Major IV headset). &lt;br&gt;
Basically, I just need to install the &lt;a href="https://packages.debian.org/bookworm/bluetooth" rel="noopener noreferrer"&gt;&lt;code&gt;bluetooth&lt;/code&gt; metapackage&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Package: bluetooth (5.66-1+deb12u2)&lt;br&gt;
This package provides all of the different plugins supported by the Bluez bluetooth stack&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install bluetooth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I won't be using any GUI for now; I'll use the command-line tool &lt;code&gt;bluetoothctl&lt;/code&gt; instead. First, I'll check if the bluetooth service is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl status bluetooth
# if it is not running
$ sudo systemctl start bluetooth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, I run &lt;code&gt;bluetoothctl&lt;/code&gt; and enter its CLI&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ bluetoothctl
[bluetooth]# power on
# I set the agent to handle pairing and make it the default:
[bluetooth]# agent on
[bluetooth]# default-agent
# start scanning devices
[bluetooth]# scan on
[NEW] Device XX:XX:XX:XX:XX:XX MAJOR IV
[bluetooth]# scan off
# XX:XX:XX:XX:XX:XX stands for the unique MAC address of your audio device
# once I have the MAC address of my Marshall headset, I can initiate pairing:
# I PUT INTO PAIRING MODE MY MARSHALL HEADSET
[bluetooth]# pair XX:XX:XX:XX:XX:XX
Attempting to pair with XX:XX:XX:XX:XX:XX
[CHG] Device XX:XX:XX:XX:XX:XX Connected: yes
[CHG] Device XX:XX:XX:XX:XX:XX Bonded: yes
...
[CHG] Device XX:XX:XX:XX:XX:XX ServicesResolved: yes
[CHG] Device XX:XX:XX:XX:XX:XX Paired: yes
Pairing successful
[CHG] Device XX:XX:XX:XX:XX:XX ServicesResolved: no
[CHG] Device XX:XX:XX:XX:XX:XX Connected: no
[bluetooth]# scan off
## Hmm, pairing is successful, but the device is not connected...
[bluetooth]# connect XX:XX:XX:XX:XX:XX
Attempting to connect to XX:XX:XX:XX:XX:XX
Failed to connect: org.bluez.Error.Failed br-connection-profile-unavailable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
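
&lt;p&gt;By the way, recent BlueZ versions let &lt;code&gt;bluetoothctl&lt;/code&gt; take a single command as arguments, so the interactive session above can be replayed non-interactively once the missing audio pieces are in place. A sketch: the MAC is a placeholder, and the command is passed as a parameter purely so the sequence can be dry-run.&lt;/p&gt;

```shell
# Pair, trust (for auto-reconnect on future boots) and connect a device.
# $1 = the bluetoothctl-like command to run, $2 = device MAC address.
bt_setup() {
  "$1" power on
  "$1" pair "$2"
  "$1" trust "$2"
  "$1" connect "$2"
}

# Dry run with `echo` instead of bluetoothctl, to show the sequence:
bt_setup echo "XX:XX:XX:XX:XX:XX"
# real use: bt_setup bluetoothctl "XX:XX:XX:XX:XX:XX"
```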



&lt;p&gt;The error isn’t very self-explanatory. Something seems off. &lt;a href="https://github.com/bluez/bluez/blob/bb12ef4a9f71550ba84033f565a27773d893d8bf/doc/errors.txt#L26-L30" rel="noopener noreferrer"&gt;BlueZ's corresponding error description&lt;/a&gt; is still not entirely clear either:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Failed to find connectable services or the target service.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, there’s nothing to worry about; this isn’t really about troubleshooting or debugging. I just wanted to gently point you back to the unanswered question from the previous section: is having only ALSA enough?&lt;/p&gt;

&lt;p&gt;First off, there are two guides for Debian on Bluetooth usage: one for &lt;a href="https://wiki.debian.org/BluetoothUser" rel="noopener noreferrer"&gt;non-audio Bluetooth devices&lt;/a&gt;, and &lt;a href="https://wiki.debian.org/BluetoothUser/a2dp" rel="noopener noreferrer"&gt;another specifically for Bluetooth audio devices&lt;/a&gt;. The second one, the right one for my case, explicitly states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Pre-configuration&lt;br&gt;
In short: To connect to a given device, you need Bluetooth hardware on your PC (either built-in, or in the form of a USB dongle), the Bluez daemon, and a compatible audio server (either PulseAudio or PipeWire). Alternatively Bluetooth ALSA available since Debian 12 bookworm allows to avoid running of a high-level sound server. (&lt;a href="https://wiki.debian.org/BluetoothUser/a2dp" rel="noopener noreferrer"&gt;Debian Wiki: Bluetooth Audio&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, there are two options here:&lt;/p&gt;

&lt;p&gt;You stick with ALSA only, using an additional package &lt;a href="https://wiki.debian.org/Bluetooth/Alsa" rel="noopener noreferrer"&gt;Bluetooth ALSA&lt;/a&gt;. In this case, you don’t need PulseAudio or PipeWire for Bluetooth audio devices to work.&lt;br&gt;
Both PulseAudio and PipeWire, as mentioned above, are audio servers.&lt;/p&gt;

&lt;p&gt;However, it may still be unclear why I can't simply use ALSA if it worked for wired music playback, and what's missing from my system. The answer is that, without additional software, the Bluetooth codec used by my Marshall headset isn't supported.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A codec determines how Bluetooth transmits from the source device to your headphones. It encodes and decodes digital audio data into a specific format. In an ideal world, a high-fidelity signal would be possible at the minimum specified bit rate, resulting in the least amount of space and bandwidth required for storage and transmission. Lower bitrates actually mean better compression but often mean worse sound quality, a high bitrate usually means better sound quality and worse compression. (&lt;a href="https://www.soundguys.com/understanding-bluetooth-codecs-15352/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are more advanced codecs and less advanced ones. However, codec support is two-sided: the choice I mentioned above, between a small additional package like Bluetooth ALSA and a full sound server, can be partially guided by which audio codecs are supported out of the box. On that score, the winner is PipeWire. But if your audio device itself doesn't support advanced codecs, there's no way to use them. &lt;br&gt;
Interestingly, despite using Bluetooth 5.0, my Marshall Major IV supports only the default SBC audio codec. This codec is adequate and the connection is stable, but it’s not ideal (&lt;a href="https://zmarshall.zendesk.com/hc/en-us/articles/22570444942481-Major-IV-Specifications" rel="noopener noreferrer"&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;So, for my setup, just the additional package that brings Bluetooth audio device support to ALSA would be a viable option:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The &lt;code&gt;bluez-alsa-utils&lt;/code&gt; package supports the LDAC and SBC codecs. AAC support is not compiled into the official package, because the required library is only available in the non-free repository.&lt;br&gt;
This will also automatically configure a "bluez-alsa" systemd service, and install the proper ALSA configuration files to glue everything together. No further steps should be necessary. If you run into issues after installation though, a reboot might help. (&lt;a href="https://wiki.debian.org/Bluetooth/Alsa" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, I would prefer to use a sound server on my setup.&lt;/p&gt;


&lt;h3&gt;
  
  
  ➄ PipeWire vs PulseAudio
&lt;/h3&gt;

&lt;p&gt;Before comparing these two audio servers, it would be helpful to first understand what they do and why they’re needed beyond connecting Bluetooth audio devices.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-4a4-4dfh"&gt;previous article&lt;/a&gt; of this series, I installed and configured a display server as the first step in building my custom UI. The reason is that all graphical applications "live" on the display server, which handles all inputs and outputs. Moreover, there is an inter-process communication mechanism, the desktop bus (D-Bus), used by applications to communicate with each other, creating a kind of "graphical ecosystem".&lt;/p&gt;

&lt;p&gt;A similar parallel can be drawn for sound servers. While ALSA is sufficient for many use cases, you might need a sound server for more advanced and dynamic audio usage. This includes not just playing music but also audio and video streaming, sharing screens with audio during calls, and leveraging advanced audio technologies and codecs. For most of these scenarios, a sound server becomes essential.&lt;/p&gt;

&lt;p&gt;The difference between PulseAudio and PipeWire is somewhat analogous to the difference between the X11 display protocol and Wayland: PipeWire is the much newer technology, designed and developed to replace PulseAudio while addressing its shortcomings.&lt;/p&gt;

&lt;p&gt;There’s nothing inherently wrong with PulseAudio—it’s neither abandoned nor outdated. However, PipeWire is more modern and, in some cases, offers out-of-the-box support for advanced audio devices without the need for tweaks or extra configuration. This isn’t relevant in my case since, as I mentioned, my Marshall Major IV headset only supports the very basic SBC Bluetooth codec. Nonetheless, I’ve decided to go with PipeWire.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;PipeWire is a server and API for handling multimedia on Linux. Its most common use is for Wayland and Flatpak applications to implement screensharing, remote desktop, and other forms of audio and video routing between different pieces of software. Per the official FAQ, "you can think of it as a multimedia routing layer on top of the drivers that applications and libraries can use."&lt;br&gt;
As opposed to PulseAudio's focus on consumer audio and JACK's focus on professional audio, PipeWire aims to work for all users at all levels. Among other techniques, PipeWire achieves this with its ability to dynamically switch between different buffer sizes, for adapting to the different latency requirements of different audio applications.&lt;br&gt;
In Debian 12, PipeWire 0.3.65 is available, and is considerably more reliable, and is a comfortable drop-in replacement for many use-cases. PipeWire is the default sound server with GNOME Desktop. (&lt;a href="https://wiki.debian.org/PipeWire" rel="noopener noreferrer"&gt;Debian Wiki: PipeWire&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  ➅ PipeWire: Installation
&lt;/h3&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-4a4-4dfh"&gt;previous article&lt;/a&gt;, while configuring the BSPWM window manager and display server, it was necessary to set up D-Bus for the X session. This is because, when initiating the graphical user interface with &lt;code&gt;startx&lt;/code&gt; from the console, an X session managed by X11 is started, and it runs with BSPWM, which executes all configurations specified in its setup.&lt;/p&gt;

&lt;p&gt;Similarly, PipeWire requires a session manager to function effectively.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Two session managers are now available. The one pulled in by default (wireplumber) is the one recommended by pipewire's developers. The other one (pipewire-media-session) is primitive, and is best when using PipeWire just for its basic functionality like screensharing. When using PipeWire as your system's sound server, the maintainer recommends installing the more advanced WirePlumber instead.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, I just need to install WirePlumber, and as a dependency, it will bring PipeWire to my system. There is this recommendation from Debian developers:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It is recommended to install the metapackage &lt;code&gt;pipewire-audio&lt;/code&gt; which depends on &lt;code&gt;wireplumber&lt;/code&gt; (the recommended session manager), &lt;code&gt;pipewire-pulse&lt;/code&gt; (to replace PulseAudio), &lt;code&gt;pipewire-alsa&lt;/code&gt; (ALSA) and &lt;code&gt;libspa-0.2-bluetooth&lt;/code&gt; (for Bluetooth support). Moreover, installing this metapackage will remove &lt;code&gt;pulseaudio&lt;/code&gt; to prevent any conflicts between both sound servers. (&lt;a href="https://wiki.debian.org/PipeWire#Bluetooth-1" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;NB! The Debian 12 Bookworm stable repository has a quite outdated version of the PipeWire sound server (v. 0.3.65) and its dependencies. Since PipeWire is young and evolving quickly, it may be important to have a more up-to-date version installed. So, installing it from Bookworm backports (v. 1.2.5) may be a viable option, and later on, I will show you how.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;However, even if you decide to go for it, make sure to read the text below and check the logs, because otherwise you might not understand why you need the &lt;code&gt;XDG Desktop Portal&lt;/code&gt;. The WirePlumber and PipeWire versions from Bookworm backports don't seem to complain about it, but trust me, you'll most likely need the &lt;code&gt;XDG Desktop Portal&lt;/code&gt;.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
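
&lt;p&gt;For reference, pulling packages from backports follows the usual Debian pattern: add a &lt;code&gt;bookworm-backports&lt;/code&gt; entry to your APT sources, then install with &lt;code&gt;-t bookworm-backports&lt;/code&gt;. A sketch (the sources file name is arbitrary; in this article I proceed with the stable version):&lt;/p&gt;

```shell
# Build the APT source line for a given suite's backports.
# $1 = suite name, e.g. bookworm.
backports_line() {
  printf 'deb http://deb.debian.org/debian %s-backports main\n' "$1"
}

backports_line bookworm
# then, roughly:
#   backports_line bookworm | sudo tee /etc/apt/sources.list.d/backports.list
#   sudo apt update
#   sudo apt install -t bookworm-backports pipewire-audio
```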

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install pipewire-audio
$ systemctl --user start wireplumber
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, I’ll check if everything is okay with the wireplumber service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl --user status  wireplumber
● wireplumber.service - Multimedia Service Session Manager
     Loaded: loaded (/usr/lib/systemd/user/wireplumber.service; enabled; preset: enabled)
     Active: active (running) since Mon 2024-12-30 13:04:27 CET; 13s ago
....

Dec 30 13:04:27 wonderland systemd[1017]: Started wireplumber.service - Multimedia Service Session Manager.
Dec 30 13:04:27 wonderland wireplumber[6190]: Can't find org.freedesktop.portal.Desktop. Is xdg-desktop-portal running?
Dec 30 13:04:27 wonderland wireplumber[6190]: found session bus but no portal
Dec 30 13:04:27 wonderland wireplumber[6190]: Failed to set scheduler settings: Operation not permitted
Dec 30 13:04:27 wonderland wireplumber[6190]: SPA handle 'api.libcamera.enum.manager' could not be loaded; is it installed?
Dec 30 13:04:27 wonderland wireplumber[6190]: PipeWire's libcamera SPA missing or broken. libcamera not supported.
Dec 30 13:04:28 wonderland wireplumber[6190]: Trying to use legacy bluez5 API for LE Audio - only A2DP will be supported. Please upgrade bluez5.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wireplumber is running, but there are a couple of things that aren't okay. Let's go through the errors one by one.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wireplumber: Failed to set scheduler settings: Operation not permitted&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is definitely a permissions issue. It might be related to the fact that the PipeWire version available on Debian Bookworm is old: the problem is most likely the absence of a dedicated user group (&lt;code&gt;pipewire&lt;/code&gt;) with the necessary realtime-scheduling permissions, which newer packaging sets up on installation. The default group is used, but PipeWire needs more. Personally, I would leave it as is and only debug if something really doesn't work.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PipeWire's libcamera SPA missing or broken. libcamera not supported.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is just about a PipeWire plugin, and depending on whether you need it or not, you can install it. I don't need it, so I’ll leave PipeWire to complain about the missing plugin. &lt;a href="https://packages.debian.org/bookworm/mips64el/pipewire-libcamera" rel="noopener noreferrer"&gt;Here is&lt;/a&gt; the package for this plugin.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Trying to use legacy bluez5 API for LE Audio - only A2DP will be supported. Please upgrade bluez5&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Well, I'm completely okay with this, since my headset doesn't use LE Audio anyway; A2DP is exactly what I need:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Advanced Audio Distribution Profile (A2DP)&lt;br&gt;
A standard for how Bluetooth devices can stream high-quality audio to remote devices. This is most commonly used for linking wireless headphones and speakers to your PC. (&lt;a href="https://wiki.debian.org/BluetoothUser/a2dp" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What’s left is quite interesting, and it’s something you should pay attention to:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Can't find org.freedesktop.portal.Desktop. Is xdg-desktop-portal running?&lt;/code&gt;&lt;br&gt;
&lt;code&gt;found session bus but no portal&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;pipewire&lt;/code&gt; service reports the same issue in a slightly different manner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl --user  status pipewire
● pipewire.service - PipeWire Multimedia Service
     Loaded: loaded (/usr/lib/systemd/user/pipewire.service; enabled; preset: enabled)
     Active: active (running) since Mon 2024-12-30 13:04:27 CET; 12min ago
TriggeredBy: ● pipewire.socket
....

Dec 30 13:04:27 wonderland systemd[1017]: Started pipewire.service - PipeWire Multimedia Service.
Dec 30 13:04:27 wonderland pipewire[6189]: mod.rt: Can't find org.freedesktop.portal.Desktop. Is xdg-desktop-portal running?
Dec 30 13:04:27 wonderland pipewire[6189]: mod.rt: found session bus but no portal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, the D-Bus user message bus is running correctly (I covered its configuration in detail &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-4a4-4dfh"&gt;in the previous article&lt;/a&gt;). But something related to the desktop is missing...&lt;/p&gt;

&lt;p&gt;What even is this?&lt;/p&gt;




&lt;h3&gt;
  
  
  ➆ About XDG Desktop Portal
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;XDG Desktop Portal&lt;br&gt;
A portal frontend service for Flatpak and other desktop containment frameworks.&lt;br&gt;
xdg-desktop-portal works by exposing a series of D-Bus interfaces known as portals under a well-known name (org.freedesktop.portal.Desktop) and object path (/org/freedesktop/portal/desktop). (&lt;a href="https://flatpak.github.io/xdg-desktop-portal/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Mostly, it’s needed for containerized software like Flatpaks, as the quote above says. However, it’s not only for that, and even if you don’t use Flatpak, you can still run into problems. A very trivial case: in a browser like Firefox, you try to upload something and a file chooser dialog is needed. It won’t work without the portal.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Portals were designed for use with applications sandboxed through Flatpak, but any application can use portals to provide uniform access to features independent of desktops and toolkits. This is commonly used, for example, to allow screen sharing on Wayland via PipeWire, or to use file open and save dialogs on Firefox that use the same toolkit as your current desktop environment. (&lt;a href="https://wiki.archlinux.org/title/XDG_Desktop_Portal" rel="noopener noreferrer"&gt;Arch Wiki: XDG Desktop Portal&lt;/a&gt;)&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;XDG Desktop Portal can be installed with &lt;code&gt;sudo apt install xdg-desktop-portal&lt;/code&gt; on Debian. &lt;strong&gt;However, a common mistake is to install only this package.&lt;/strong&gt; XDG Desktop Portal needs at least one backend (it can also have several). There are different backends available on Debian Bookworm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;xdg-desktop-portal-gnome&lt;/li&gt;
&lt;li&gt;xdg-desktop-portal-gtk&lt;/li&gt;
&lt;li&gt;xdg-desktop-portal-kde&lt;/li&gt;
&lt;li&gt;xdg-desktop-portal-wlr&lt;/li&gt;
&lt;/ul&gt;
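
&lt;p&gt;A quick way to see which portal backends, if any, are already installed is to filter &lt;code&gt;dpkg -l&lt;/code&gt; for the backend naming pattern. A sketch over sample output (the version strings shown are illustrative):&lt;/p&gt;

```shell
# Print installed packages whose name starts with "xdg-desktop-portal-",
# i.e. the portal backends (the frontend itself does not match the pattern).
list_portal_backends() {
  awk '$1 == "ii" && $2 ~ /^xdg-desktop-portal-/ { print $2 }'
}

dpkg_sample='ii  xdg-desktop-portal      1.16.0-2  amd64  desktop integration portal
ii  xdg-desktop-portal-gtk  1.14.1-1  amd64  GTK implementation of portals
ii  libasound2:amd64        1.2.8-1   amd64  shared library for ALSA applications'

printf '%s\n' "$dpkg_sample" | list_portal_backends
# on a live system: dpkg -l | list_portal_backends
```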

&lt;p&gt;I will proceed with the installation of &lt;code&gt;xdg-desktop-portal&lt;/code&gt; first.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Package: xdg-desktop-portal &lt;br&gt;
desktop integration portal for Flatpak and Snap&lt;br&gt;
xdg-desktop-portal provides a portal frontend service for Flatpak, Snap, and possibly other desktop containment/sandboxing frameworks. This service is made available to the sandboxed application, and provides mediated D-Bus interfaces for file access, URI opening, printing and similar desktop integration features. (&lt;a href="https://packages.debian.org/bookworm/xdg-desktop-portal" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install xdg-desktop-portal
$ systemctl --user start xdg-desktop-portal
$ systemctl --user status xdg-desktop-portal
● xdg-desktop-portal.service - Portal service
     Loaded: loaded (/usr/lib/systemd/user/xdg-desktop-portal.service; static)
     Active: active (running) since Mon 2024-12-30 13:29:28 CET; 1s ago
 ....

Dec 30 13:29:28 wonderland systemd[1017]: Starting xdg-desktop-portal.service - Portal service...
Dec 30 13:29:28 wonderland xdg-desktop-por[7132]: No skeleton to export
Dec 30 13:29:28 wonderland systemd[1017]: Started xdg-desktop-portal.service - Portal service.

$ systemctl --user  restart wireplumber

$ systemctl --user  status wireplumber
● wireplumber.service - Multimedia Service Session Manager
     Loaded: loaded (/usr/lib/systemd/user/wireplumber.service; enabled; preset: enabled)
     Active: active (running) since Mon 2024-12-30 13:29:46 CET; 7s ago
   ...

Dec 30 13:29:46 wonderland systemd[1017]: Started wireplumber.service - Multimedia Service Session Manager.
Dec 30 13:29:46 wonderland wireplumber[7157]: Failed to set scheduler settings: Operation not permitted
Dec 30 13:29:46 wonderland wireplumber[7157]: SPA handle 'api.libcamera.enum.manager' could not be loaded; is it installed?
Dec 30 13:29:46 wonderland wireplumber[7157]: PipeWire's libcamera SPA missing or broken. libcamera not supported.
Dec 30 13:29:47 wonderland wireplumber[7157]: Trying to use legacy bluez5 API for LE Audio - only A2DP will be supported. Please upgrade bluez5.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;WirePlumber just wanted the XDG Desktop Portal available, without the specific backend. So, after installing the XDG Desktop Portal and starting it as a service, the errors in the WirePlumber and PipeWire services related to the XDG portal disappeared (after restarting both of them).&lt;/p&gt;

&lt;p&gt;However, the XDG Desktop Portal service itself is reporting that no backend was found: &lt;code&gt;xdg-desktop-portal[7132]: No skeleton to export.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I already mentioned that even if you don’t use Flatpaks, you can still encounter problems with other apps, like browsers. The backend is easy to choose when you have a default DE: if it’s GNOME, then &lt;code&gt;xdg-desktop-portal-gnome&lt;/code&gt;; if it’s KDE, then &lt;code&gt;xdg-desktop-portal-kde&lt;/code&gt;. These are usually installed automatically with the corresponding DE. I will install the GTK backend (&lt;code&gt;xdg-desktop-portal-gtk&lt;/code&gt;). Why? For example, Firefox depends on GTK, and I also use GIMP, which depends on GTK. In general, GTK will be completely enough for me. &lt;strong&gt;But please note, some Flatpaks depend on services that require a specific backend, like the GNOME one. If a Flatpak doesn’t work with GTK, you’ll have to check the logs to figure out what it needs and, if necessary, install another backend.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install xdg-desktop-portal-gtk

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
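&lt;p&gt;If you want to check which portal backends are already installed, and which D-Bus interfaces each one implements, you can inspect the &lt;code&gt;.portal&lt;/code&gt; definition files they ship. A minimal sketch (&lt;code&gt;/usr/share/xdg-desktop-portal/portals/&lt;/code&gt; is the usual Debian location; treat the path as an assumption and double-check it on your system):&lt;/p&gt;

```shell
# Sketch: list installed portal backends and the interfaces they
# implement. /usr/share/xdg-desktop-portal/portals/ is the usual
# location of the *.portal definition files on Debian.
report=$(for f in /usr/share/xdg-desktop-portal/portals/*.portal; do
    if [ -e "$f" ]; then
        echo "== $f"
        grep -E '^(Interfaces|UseIn)=' "$f"
    else
        echo "no portal backends installed"
    fi
done)
printf '%s\n' "$report"
```

&lt;p&gt;Each backend announces the interfaces it provides, so this gives a quick picture of what, for example, a screenshot or file-chooser request could be served by.&lt;/p&gt;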



&lt;p&gt;NB! You need to understand this sequence in order to decide how and when to launch the aforementioned services (&lt;code&gt;wireplumber&lt;/code&gt; and &lt;code&gt;xdg-desktop-portal&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;wireplumber&lt;/code&gt; service manages the sound server &lt;code&gt;pipewire&lt;/code&gt; sessions.&lt;br&gt;
If you enable it with &lt;code&gt;systemctl --user enable wireplumber&lt;/code&gt;, it will start automatically on boot.&lt;br&gt;
However, you cannot enable the &lt;code&gt;xdg-desktop-portal&lt;/code&gt; service in the same way! Moreover, even if you start it manually, keep in mind that it will fail without the &lt;code&gt;DISPLAY&lt;/code&gt; environment variable, just like &lt;code&gt;dunst&lt;/code&gt; &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-4a4-4dfh"&gt;in the previous article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, I do not enable &lt;code&gt;wireplumber&lt;/code&gt; because it will need to be restarted after &lt;code&gt;xdg-desktop-portal&lt;/code&gt; is launched. Instead, I modify the &lt;code&gt;bspwmrc&lt;/code&gt; config, so it handles it for me.&lt;/p&gt;

&lt;p&gt;I add these lines (&lt;code&gt;systemctl --user start xdg-desktop-portal&lt;/code&gt;,&lt;br&gt;
&lt;code&gt;systemctl --user restart wireplumber&lt;/code&gt; and &lt;code&gt;systemctl --user restart pipewire&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo vim.tiny /etc/xdg/bspwm/bspwmrc
#! /bin/sh

#SXHKD Launching
pgrep -x sxhkd &amp;gt; /dev/null || sxhkd -c "/etc/xdg/bspwm/sxhkdrc" &amp;amp;

#DISPLAY env import
systemctl --user import-environment DISPLAY

#Starting XDG Desktop portal
systemctl --user start xdg-desktop-portal

#ReStarting Wireplumber
systemctl --user restart wireplumber
systemctl --user restart pipewire

#DUNST Launching
systemctl --user start dunst

#POLYBAR Launching
polybar-msg cmd quit

echo "---" | tee -a /tmp/polybar.log
polybar 2&amp;gt;&amp;amp;1 | tee -a /tmp/polybar.log &amp;amp; disown


bspc monitor -d I II III IV V

bspc config border_width         2
bspc config window_gap          12

bspc config split_ratio          0.52
bspc config borderless_monocle   true
bspc config gapless_monocle      true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can do &lt;code&gt;sudo reboot&lt;/code&gt; and then &lt;code&gt;startx&lt;/code&gt; to verify that the modifications worked, using:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl --user status wireplumber&lt;/code&gt;, &lt;code&gt;systemctl --user status xdg-desktop-portal&lt;/code&gt; and &lt;code&gt;systemctl --user status pipewire&lt;/code&gt;. Here is what I have:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ciweevdlui1vrbb6f6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ciweevdlui1vrbb6f6b.png" alt=" " width="800" height="754"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything seems alrighty!&lt;br&gt;
Time to test Bluetooth!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ bluetoothctl
[bluetooth]# scan on
[NEW] Device XX:XX:XX:XX:XX:XX MAJOR IV
[bluetooth]# pair XX:XX:XX:XX:XX:XX
Attempting to pair with XX:XX:XX:XX:XX:XX
[CHG] Device XX:XX:XX:XX:XX:XX Connected: yes
[CHG] Device XX:XX:XX:XX:XX:XX Bonded: yes
Pairing successful
[bluetooth]# connect XX:XX:XX:XX:XX:XX
Attempting to connect to XX:XX:XX:XX:XX:XX
Connection successful
[MAJOR IV]# scan off
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connection successful! Start enjoying &lt;a href="https://www.youtube.com/watch?v=rsI1gv_BcnM" rel="noopener noreferrer"&gt;Public Memory - Zig Zag&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now I can use the WirePlumber CLI to check if the device is registered there.&lt;/p&gt;




&lt;h3&gt;
  
  
  ➇ PipeWire audio profiles: Understanding the difference between the device profiles headset-head-unit and a2dp-sink
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wpctl status
PipeWire 'pipewire-0' [0.3.65, alisa@wonderland, cookie:1612476836]
 └─ Clients:
 ...

Audio
 ├─ Devices:
 │      39. GA106 High Definition Audio Controller [alsa]
 │      40. Built-in Audio                      [alsa]
 │      51. MAJOR IV                            [bluez5]
 │
 ├─ Sinks:
 │      41. Built-in Audio Digital Stereo (IEC958) [vol: 0.40]
 │  *   52. MAJOR IV                            [vol: 0.60]
 │
 ├─ Sink endpoints:
 │
 ├─ Sources:
 │      42. Built-in Audio Analog Stereo        [vol: 1.00]
 │
 ├─ Source endpoints:
 │
 └─ Streams:

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sinks are audio output devices, and sources are audio input devices. As you can see, &lt;code&gt;wpctl status&lt;/code&gt; shows that my Marshall headset is registered as a sink (so, output) and not as a source, so I can't use the microphone of my headset.&lt;/p&gt;
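&lt;p&gt;Since &lt;code&gt;wpctl status&lt;/code&gt; prints plain text, the sink and source sections are easy to slice apart with standard tools. Here is a minimal sketch of that idea; a simplified sample of the output above is embedded in the script, so the parsing logic can be tried without a live PipeWire session:&lt;/p&gt;

```shell
# Sketch: extract sinks and sources from captured `wpctl status` output.
# A simplified sample of the output above is embedded here, so the
# parsing can be followed without a live PipeWire session.
sample=' - Sinks:
        41. Built-in Audio Digital Stereo (IEC958) [vol: 0.40]
    *   52. MAJOR IV                            [vol: 0.60]

 - Sources:
        42. Built-in Audio Analog Stereo        [vol: 1.00]
'

result=$(printf '%s\n' "$sample" | awk '
    /Sinks:/   { section = "sink";   next }
    /Sources:/ { section = "source"; next }
    /^ *$/     { section = "" }        # a blank line ends a section
    section && /[0-9]+\. / {
        sub(/.*[0-9]+\. /, "")         # drop tree prefix and object ID
        sub(/ *\[vol.*/, "")           # drop the trailing volume
        print section ": " $0
    }')

printf '%s\n' "$result"
```

&lt;p&gt;On a live system you would feed it the real output instead: &lt;code&gt;wpctl status | awk '...'&lt;/code&gt;.&lt;/p&gt;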

&lt;p&gt;&lt;code&gt;wpctl&lt;/code&gt; is similar to &lt;code&gt;pactl&lt;/code&gt;, the PulseAudio CLI. However, you can use &lt;code&gt;pactl&lt;/code&gt; with PipeWire without installing the PulseAudio server; you only need the utilities. So, I’ll do that, because &lt;code&gt;pactl&lt;/code&gt; is easier for me to use. The output of &lt;code&gt;pactl list&lt;/code&gt; will help me explain why my headset is attached only as an output device. &lt;code&gt;pactl&lt;/code&gt; is part of the &lt;code&gt;pulseaudio-utils&lt;/code&gt; package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install pulseaudio-utils 
$ pactl list
Card #52
    Name: bluez_card.XX_XX_XX_XX_XX_XX
    Driver: module-bluez5-device.c
    Owner Module: n/a
    Properties:
                ....
        api.bluez5.connection = "connected"
        api.bluez5.device = ""
        bluez5.auto-connect = "[ hfp_hf hsp_hs a2dp_sink ]"
                ...
        device.alias = "MAJOR IV"
        device.api = "bluez5"
        device.bus = "bluetooth"
        device.description = "MAJOR IV"
        device.form_factor = "headphone"
                ....
    Profiles:
        off: Off (sinks: 0, sources: 0, priority: 0, available: yes)
        a2dp-sink: High Fidelity Playback (A2DP Sink) (sinks: 1, sources: 0, priority: 16, available: yes)
        headset-head-unit: Headset Head Unit (HSP/HFP) (sinks: 1, sources: 1, priority: 1, available: yes)
        a2dp-sink-sbc: High Fidelity Playback (A2DP Sink, codec SBC) (sinks: 1, sources: 0, priority: 18, available: yes)
        a2dp-sink-sbc_xq: High Fidelity Playback (A2DP Sink, codec SBC-XQ) (sinks: 1, sources: 0, priority: 17, available: yes)
        headset-head-unit-cvsd: Headset Head Unit (HSP/HFP, codec CVSD) (sinks: 1, sources: 1, priority: 2, available: yes)
        headset-head-unit-msbc: Headset Head Unit (HSP/HFP, codec mSBC) (sinks: 1, sources: 1, priority: 3, available: yes)
    Active Profile: a2dp-sink-sbc

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is quite long, but you just have to find the card of your audio device. What’s important in the output are the profiles. For my MAJOR IV headset, the available profiles are a2dp-sink (output only), a2dp-sink-sbc (output only), a2dp-sink-sbc_xq (output only), headset-head-unit (both input and output), headset-head-unit-cvsd (both input and output) and headset-head-unit-msbc (both input and output). As I mentioned above, SBC is not the highest-quality codec my headset supports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Active Profile: a2dp-sink-sbc&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;a2dp-sink-sbc: High Fidelity Playback (A2DP Sink, codec SBC) (sinks: 1, sources: 0, priority: 18, available: yes)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can see that there are no sources for this profile, only sinks! So, THIS IS THE REASON WHY I can use my Marshall only as an output device.&lt;/p&gt;

&lt;p&gt;However, I am not stuck with the automatically selected profile. I can switch it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX headset-head-unit
#I can change it back
$ pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp-sink-sbc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
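&lt;p&gt;If you switch back and forth often, the two commands above can be folded into a small toggle script. This is only a sketch: the card and profile names are the ones from my &lt;code&gt;pactl list&lt;/code&gt; output, and &lt;code&gt;pactl get-card-profile&lt;/code&gt; is an assumption worth verifying on older &lt;code&gt;pulseaudio-utils&lt;/code&gt; versions:&lt;/p&gt;

```shell
# Sketch of a mic/playback toggle. The card and profile names are the
# ones from my `pactl list` output above; adjust them for your device.
CARD="bluez_card.XX_XX_XX_XX_XX_XX"

# Pure decision logic: given the current profile, pick the other one.
next_profile() {
    if [ "$1" = "a2dp-sink-sbc" ]; then
        echo "headset-head-unit"   # mic available, lower quality
    else
        echo "a2dp-sink-sbc"       # playback only, better quality
    fi
}

# `pactl get-card-profile` is an assumption: it exists in recent
# pulseaudio-utils, but check `pactl --version` on older systems.
# Guarded so the script degrades gracefully without a sound server.
if command -v pactl > /dev/null 2>&1; then
    current=$(pactl get-card-profile "$CARD" 2> /dev/null)
    if [ -n "$current" ]; then
        pactl set-card-profile "$CARD" "$(next_profile "$current")"
    fi
fi
```

&lt;p&gt;Bound to a hotkey in &lt;code&gt;sxhkdrc&lt;/code&gt;, this makes jumping between "music mode" and "call mode" a one-keystroke affair.&lt;/p&gt;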



&lt;p&gt;&lt;strong&gt;NB! When I use the &lt;code&gt;headset-head-unit&lt;/code&gt; profile in order to use the microphone, the sound quality is quite horrid. I guess this is tied to the maturity of PipeWire and the overall state of Bluetooth audio on Linux, which is evolving rapidly but still has room for improvement. You can tweak the configuration, and if I find a solution that improves the audio quality, I will definitely share it. Most likely, it will involve the configuration of PipeWire.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  (Optional)
&lt;/h4&gt;

&lt;p&gt;Upgrading the PipeWire sound server version could be a partial remedy. For Debian Bookworm, the way to upgrade it is through the Bookworm backports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#add bookworm-backports repo
$ sudo vim.tiny /etc/apt/sources.list
deb http://deb.debian.org/debian/ bookworm-backports main contrib non-free
$ sudo apt update
$ sudo apt install --reinstall -t bookworm-backports pipewire-audio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;That's all! See you in the next, concluding part, which will be dedicated to having fun with Polybar, shell customization and installing some cool apps.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>debian</category>
    </item>
    <item>
      <title>"Why is it, when something happens, it is always you TWO?"- troubleshooting Bluetooth and Wi-Fi devices on Debian 12</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Tue, 24 Dec 2024 00:30:40 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/why-is-it-when-something-happens-it-is-always-you-two-troubleshooting-bluetooth-and-wi-fi-2ofn</link>
      <guid>https://dev.to/dev-charodeyka/why-is-it-when-something-happens-it-is-always-you-two-troubleshooting-bluetooth-and-wi-fi-2ofn</guid>
      <description>&lt;p&gt;Hardware support on Linux OSs is one of the most "sensitive" topics for new users transitioning from Windows. Troubleshooting can be quite difficult, as most hardware works in a plug-and-play manner on Windows, leading its users to not have any idea about how exactly hardware works even approximately. &lt;/p&gt;




&lt;p&gt;I want to make one important point clear from the start: labeling Linux as badly made because of some hardware issues you’ve encountered, and concluding that Windows is better, is a VERY WRONG MINDSET TO START WITH. First of all, Windows is not a free OS. While it’s not crazily expensive for personal use, the costs escalate significantly in enterprise setups. Windows is developed and maintained by people who are paid for it, while most Linux OSs are developed and maintained by the community. Please respect their work and remain polite and humble in discussions; no one guaranteed you that everything would work perfectly out of the box. &lt;/p&gt;

&lt;p&gt;Moreover, I want to explain another point: some troubles with Linux hardware support don’t arise because Linux developers are not skilled enough or don’t care. In fact, it’s quite the opposite. The issue is that a VERY large share of hardware manufacturers simply don’t care about Linux users. They only ship drivers and firmware for Windows and provide warranties for that platform. For Linux, it’s not just that they provide a driver with no warranty; in most cases, there’s nothing at all. That’s half of the problem. The other half is that their firmware and drivers are often closed-source. It’s one thing when open-source code written for Windows is available to everyone in an official Git repository, so Linux devs can simply adapt it for Linux; it’s another thing entirely when Linux developers have to reverse-engineer closed code and then rewrite it for Linux.&lt;/p&gt;




&lt;p&gt;This article will be dedicated to two problematic devices: Wi-Fi USB sticks and Bluetooth dongles. Support for Wi-Fi and Bluetooth devices is, I think, one of the most discussed topics on forums, Reddit threads, etc., across all Linux distros. I know that laptop touchscreens also cause trouble sometimes, but I don’t have a device with such functionality to write about. However, what I do have is a Wi-Fi USB stick that doesn’t work out of the box and Bluetooth dongles that also don’t work out of the box. &lt;/p&gt;




&lt;p&gt;Returning to the statement above about manufacturers not caring about users, a good example is Realtek. Chances are you’ll have to deal with them, because no matter the brand of your Wi-Fi device, its core (the chipset) will most likely be a Realtek one. You’re welcome to visit their &lt;a href="https://www.realtek.com/Download/Index?cate_id=194&amp;amp;menu_id=297" rel="noopener noreferrer"&gt;website&lt;/a&gt; and try to download a driver &lt;em&gt;that you need&lt;/em&gt; (Spoiler: the UX of this site is -1000000/10, and for someone without a solid understanding of hardware it will read like a foreign language, even though the site is in English).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhc7tdh42pcq64jhliuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhc7tdh42pcq64jhliuq.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, I have written this article to shed light on many details related to hardware support on Linux, and I will show you how I get non-functioning Wi-Fi and Bluetooth devices to work with some tweaking. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The main objective of this article is to explain the logic behind how hardware works on Linux, not to make something work at all costs, especially if it risks the stability of the system or involves installing questionable packages. There will be none of that in this article.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Here’s the road map for this article:&lt;/p&gt;

&lt;p&gt;➀ Hardware Support: The Role of the Linux Kernel vs. Debian OS&lt;/p&gt;

&lt;p&gt;➁ Clarifying the Difference Between Firmware and Drivers&lt;/p&gt;

&lt;p&gt;➂ Troubleshooting Step #1: How to Identify the Root Cause – Missing Firmware or Missing Driver?&lt;/p&gt;

&lt;p&gt;➃ Troubleshooting Step #2: Leveraging Debian repository components to fetch missing firmware and drivers&lt;/p&gt;

&lt;p&gt;➄ "Why is it, when something happens, it is always you TWO?" - An Answer.&lt;/p&gt;

&lt;p&gt;➅ Troubleshooting Step #3: Fetching missing WiFi drivers: Upgrading Kernel vs Manual installation&lt;/p&gt;

&lt;p&gt;➆ Troubleshooting Step #4: Fetching missing Bluetooth firmware from web vs Replacing Bluetooth dongle&lt;/p&gt;




&lt;h3&gt;
  
  
  ➀ Hardware Support: The Role of the Linux Kernel vs. Debian OS
&lt;/h3&gt;

&lt;p&gt;You may have noticed that the title states this article will be about troubleshooting Bluetooth and Wi-Fi devices on Debian, but in the introduction I spoke about Linux in general. You might wonder why. Debian has unfairly earned the reputation of being an OS that isn’t for beginners, particularly the claim that getting some hardware to work always requires tweaking (&lt;strong&gt;which is certainly not true starting from Debian 12&lt;/strong&gt;). &lt;/p&gt;

&lt;p&gt;It often takes the heat for the fact that Ubuntu—a Debian-based distro—supports something while Debian does not. People question how this is possible and throw accusations like, "Debian is bad," and so on.&lt;/p&gt;

&lt;p&gt;And here’s the crucial point: hardware drivers are NOT THE RESPONSIBILITY OF THE OS in most cases. Hardware drivers are parts of the Linux kernel, and more specifically, they are &lt;strong&gt;kernel modules&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;However, for the proper functioning of hardware on the OS, &lt;strong&gt;not only drivers are needed, but also firmware&lt;/strong&gt;. Despite the common mixing of the terms (driver problems are often reported as missing firmware, and vice versa), &lt;strong&gt;Firmware ≠ Drivers!&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ➁ Clarifying the Difference Between Firmware and Drivers
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Firmware refers to embedded software which controls electronic devices. Well-defined boundaries between firmware and software do not exist, as both terms cover some of the same code. Typically, the term firmware deals with low-level operations in a device, without which the device would be completely non-functional (&lt;a href="https://wiki.debian.org/Firmware" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;The key takeaway about firmware is that without it, hardware is completely non-functional&lt;/em&gt;. Without the correct firmware, a device is just a microchip, and no driver will be able to send the correct signals to it. When we’re talking about USB devices, like Bluetooth and Wi-Fi dongles, if the firmware situation is really bad, they will be detected as mere unknown USB devices once plugged into a USB port. This can happen with improperly produced hardware purchased from no-name brands, or with unsuccessful replicas of existing hardware from renowned brands.&lt;/p&gt;

&lt;p&gt;Device drivers are available as kernel modules (in most cases).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Kernel modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They extend the functionality of the kernel without the need to reboot the system. For example, one type of module is the device driver, which allows the kernel to access hardware connected to the system. Without modules, we would have to build monolithic kernels and add new functionality directly into the kernel image. Besides having larger kernels, this has the disadvantage of requiring us to rebuild and reboot the kernel every time we want new functionality (&lt;a href="https://tldp.org/LDP/lkmpg/2.6/lkmpg.pdf" rel="noopener noreferrer"&gt;Source&lt;/a&gt;).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
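&lt;p&gt;In practice, you interact with kernel modules through a handful of commands; &lt;code&gt;btusb&lt;/code&gt;, the Bluetooth USB driver we will meet below, makes a convenient example:&lt;/p&gt;

```
lsmod | grep btusb       # is the module currently loaded?
modinfo btusb            # description, license, device aliases
sudo modprobe btusb      # load it on demand
sudo modprobe -r btusb   # unload it again
```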

&lt;p&gt;So, now you know that drivers and firmware are not the same thing. &lt;strong&gt;The first step in troubleshooting is to understand what exactly the problem is with your non-functioning device—missing firmware or missing driver. Once identified, you can move on to resolving the issue.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  ➂ Troubleshooting Step #1: How to Identify the Root Cause – Missing Firmware or Missing Driver?
&lt;/h3&gt;

&lt;p&gt;The troublesome devices you will see here (a TP-Link Archer T3U Plus Wi-Fi USB adapter and an ASUSTek Broadcom BCM20702A0 Bluetooth dongle) are used solely for the purposes of this article; they are quite dinosauric and &lt;strong&gt;were originally purchased without any thought regarding their Linux compatibility&lt;/strong&gt;. &lt;strong&gt;Keep that in mind, as I will elaborate more on this later.&lt;/strong&gt; The truth is that far from all devices need troubleshooting on Linux: there are plenty of Bluetooth and Wi-Fi USB devices that work in plug-and-play mode, meaning no additional configuration is needed beyond plugging them into the USB port (I will share the lists later on in the article).&lt;/p&gt;

&lt;p&gt;The first command that will give me an overview of the plugged-in devices is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lsusb
# I will not be providing the full list of devices attached to my PC, just the 2 in question for this article
# Bluetooth dongle:
Bus 001 Device 009: ID 0b05:17cb ASUSTek Computer, Inc. Broadcom BCM20702A0 Bluetooth
# Wi-Fi USB stick
Bus 001 Device 003: ID 2357:0138 TP-Link 802.11ac NIC
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How do I know that these devices are not functioning? With the Wi-Fi device, it's easy. When I run a command to list available network interfaces, I do not see a wireless network interface, which should be present, because the previous command demonstrated that my Wi-Fi USB device is not broken; it is attached and identified correctly by Debian.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; .... state UNKNOWN 
    ......
2: eno1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; ..... state UP .....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also have &lt;a href="https://wiki.debian.org/NetworkManager" rel="noopener noreferrer"&gt;Network Manager&lt;/a&gt; installed, which handles my wireless network interface. When I try to activate a Wi-Fi connection using &lt;code&gt;nmtui&lt;/code&gt;, I see that the list of available connections is empty:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvieosqfjs0d8aehgv4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvieosqfjs0d8aehgv4u.png" alt=" " width="457" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the Bluetooth dongle, instead, I see a quick log pop up when booting into my Debian: an error related to Bluetooth. These boot logs are usually displayed very quickly, and if they’re not critical for the boot process, your system will boot anyway. However, you can (and should!) always review the logs if you saw a warning or error message during the boot process. You can do it using the &lt;code&gt;dmesg&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo dmesg  #| grep Bluetooth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the log, which is very self-explanatory as to why the Bluetooth dongle does not work properly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffie7li7njdlp0prfs7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffie7li7njdlp0prfs7j.png" alt=" " width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, for the Bluetooth dongle, the first troubleshooting step is done—the problem is with the missing firmware.&lt;/p&gt;

&lt;p&gt;However, &lt;code&gt;dmesg&lt;/code&gt; did not give me any error messages regarding the Wi-Fi USB stick. I just see that it was correctly identified (the same output as from &lt;code&gt;lsusb&lt;/code&gt;), so I have to continue with further investigation.&lt;/p&gt;

&lt;p&gt;There are two commands that can give you more technical details about your connected USB devices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ usb-devices
# Output for Broadcom Bluetooth dongle:
T:  Bus=01 Lev=01 Prnt=03 Port=07 Cnt=01 Dev#=  9 Spd=12  MxCh= 0
D:  ....
P:  Vendor=....
S:  Manufacturer=Broadcom Corp
S:  Product=BCM20702A0
S:  SerialNumber=5CF37094AE3A
C:  ....
I:  If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=01 Prot=01 Driver=btusb
...
...
...
# Output for TP-Link WiFi USB stick:
T:  Bus=01 Lev=01 Prnt=02 Port=02 Cnt=01 Dev#=  3 Spd=480 MxCh= 0
D:  ....
P:  Vendor=.....
S:  Manufacturer=Realtek
S:  Product=802.11ac NIC
S:  SerialNumber=123456
C:  ....
I:  If#= 0 Alt= 0 #EPs= 5 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
....
....
....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, the Bluetooth Broadcom dongle has missing firmware, but the driver (Driver=btusb) is loaded for it. Meanwhile, the TP-Link Realtek Wi-Fi Adapter shows &lt;em&gt;Driver=(none)&lt;/em&gt;, meaning the Wi-Fi USB stick has a missing driver.&lt;br&gt;
&lt;/p&gt;
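&lt;p&gt;Because &lt;code&gt;usb-devices&lt;/code&gt; pairs every &lt;code&gt;S: Product=&lt;/code&gt; line with the &lt;code&gt;I: ... Driver=&lt;/code&gt; lines that follow it, a short &lt;code&gt;awk&lt;/code&gt; script can summarize which devices lack a bound driver. A minimal sketch, with two sample records embedded so the logic can be tried without the real tool:&lt;/p&gt;

```shell
# Sketch: pair each Product= line from `usb-devices` with the Driver=
# field of the interface lines that follow it. Two sample records are
# embedded so the logic can be tried without the real tool.
sample='S:  Product=BCM20702A0
I:  If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=01 Prot=01 Driver=btusb
S:  Product=802.11ac NIC
I:  If#= 0 Alt= 0 #EPs= 5 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)'

report=$(printf '%s\n' "$sample" | awk '
    /^S:  Product=/ { sub(/^S:  Product=/, ""); product = $0 }
    /Driver=/       { driver = $NF
                      sub(/^Driver=/, "", driver)
                      print product ": " driver }')

printf '%s\n' "$report"
```

&lt;p&gt;On a live system, pipe the real output through the same script: &lt;code&gt;usb-devices | awk '...'&lt;/code&gt;. Any line ending in &lt;code&gt;(none)&lt;/code&gt; is a device waiting for a driver.&lt;/p&gt;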

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lsusb -v
#outputs of this command are quite long for each USB device
# output for TP-Link Wi-Fi USB stick:
Bus 001 Device 003: ID 2357:0138 TP-Link 802.11ac NIC
Couldn't open device, some information will be missing
idVendor           0x2357 TP-Link
idProduct          0x0138
iManufacturer           1 Realtek
iProduct                2 802.11ac NIC
#output for Broadcom Bluetooth:
Bus 001 Device 009: ID 0b05:17cb ASUSTek Computer, Inc. Broadcom BCM20702A0 Bluetooth
Couldn't open device, some information will be missing
idVendor           0x0b05 ASUSTek Computer, Inc.
idProduct          0x17cb Broadcom BCM20702A0 Bluetooth
iManufacturer           1 Broadcom Corp
iProduct                2 BCM20702A0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the &lt;code&gt;lsusb -v&lt;/code&gt;, I also pasted details about the product ID, vendor, and manufacturer, as &lt;strong&gt;this information is very important for later if none of the other methods work and I have to manually search for a driver for the Wi-Fi and firmware for the Bluetooth&lt;/strong&gt;.&lt;/p&gt;
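&lt;p&gt;These IDs are exactly what the kernel itself uses to match devices to modules: the file &lt;code&gt;/lib/modules/$(uname -r)/modules.alias&lt;/code&gt; maps USB vendor/product patterns to module names. Here is a minimal sketch of the lookup; one sample alias line is embedded so it runs anywhere, and on a real system you would grep the actual file:&lt;/p&gt;

```shell
# Sketch: does any kernel module claim a given USB vendor:product ID?
# One sample modules.alias line is embedded so this runs anywhere; on a
# real system you would grep /lib/modules/$(uname -r)/modules.alias.
VID=0b05; PID=17cb   # ASUS/Broadcom dongle, as reported by `lsusb`

sample='alias usb:v0B05p17CBd*dc*dsc*dp*ic*isc*ip*in* btusb'

match=$(printf '%s\n' "$sample" | grep -i "usb:v${VID}p${PID}")

if [ -n "$match" ]; then
    echo "driver candidate: ${match##* }"   # last field = module name
else
    echo "no in-tree driver claims ${VID}:${PID}"
fi
```

&lt;p&gt;If the grep on the real file comes back empty, the running kernel has no module for the device, and you are in "missing driver" territory rather than "missing firmware" territory.&lt;/p&gt;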

&lt;p&gt;However, before rushing to dig through the internet to manually fetch drivers and firmware—&lt;strong&gt;which is the least optimal solution, as it is not the "Debian way" at all and could break your system or create a security breach if the driver/firmware is acquired from a "questionable" source&lt;/strong&gt;—it's important to consider alternatives. If you're not familiar with the "Debian way" of administering your system and don't understand the risks of breaking your system, I advise you to read &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-14-57b1"&gt;this article&lt;/a&gt;. For security-related threats when installing anything on your system, you can read &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3a4-3fbo"&gt;this&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Fortunately, Debian may offer a reliable solution for missing firmware and drivers directly from its package repositories! &lt;/p&gt;




&lt;h3&gt;
  
  
  ➃ Troubleshooting Step #2: Leveraging Debian repository components to fetch missing firmware and drivers
&lt;/h3&gt;

&lt;p&gt;Before we start with this troubleshooting step, I want to make clear how firmware and Linux drivers end up in your OS. Let’s start with firmware.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Historically, firmware would be built into the device's ROM or Flash memory, but more and more often, a firmware image has to be loaded into the device RAM by a device driver during device initialization. (&lt;a href="https://wiki.debian.org/Firmware" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The problem is, as I mentioned before, that many hardware manufacturers target ONLY Windows users, so the firmware shipped together with the drivers will be for Windows. And Linux users? &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Free software based systems such as Debian depend on the cooperation between manufacturers and developers to produce and maintain quality drivers and firmware. Drivers and firmware are what determine if, and how well, your hardware works.&lt;br&gt;
Non-free drivers and firmware are produced by entities refusing or unable to cooperate with the free software community. With non-free drivers and firmware support is often unavailable or severely constrained. For instance features are often left out, bugs go unfixed, and what support does exist from the manufacture may be fleeting.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In Linux distros, most of the firmware comes bundled with the Linux kernel. And often we have to praise the brilliant minds of Linux/Debian developers when manufacturers refuse to cooperate. However, firmware can end up in your system not only as part of the Linux kernel. This brings us back to Debian and its package repositories.&lt;/p&gt;

&lt;p&gt;What about drivers? Well, it's a similar scenario:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Most of WWAN and WLAN dongles (USB devices) have their proprietary Windows drivers onboard. When plugged in for the first time, they act like a flash storage and start installing the Windows driver from there. If the driver is installed, it makes the storage device disappear and a new device, mainly composite (e.g. with modem ports), shows up (&lt;a href="https://manpages.debian.org/bookworm/usb-modeswitch/usb_modeswitch.1.en.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As I mentioned before, drivers are, in most cases, shipped with the Linux kernel. Device drivers are kernel modules; where the manufacturer didn’t bother releasing drivers for Linux, they are essentially Windows drivers reworked and adapted for the Linux kernel.&lt;/p&gt;

&lt;p&gt;When you install Debian, it comes with the Linux kernel, which is actually a large package (and it is indeed handled as a package on your Debian) called &lt;code&gt;linux-image-*&lt;/code&gt; (in my case, for example, &lt;code&gt;linux-image-amd64&lt;/code&gt;). This package is fetched from the Debian repository, not from the Linux upstream. That’s why the kernel version may vary: Debian Bookworm stable comes with kernel version 6.1.x, while the current latest release of the Linux kernel is 6.13. The Linux kernel package on Debian comes with many built-in kernel modules (drivers) that serve most of your hardware. &lt;em&gt;Generally, if the Linux kernel doesn't have a kernel module (device driver) for a particular hardware device, Debian won't have it either&lt;/em&gt;. However, keep the kernel version in mind. More precisely: &lt;em&gt;if the Linux kernel of version 6.1.x doesn't have some driver, Debian Stable will not have it either&lt;/em&gt;. If you don’t understand why Debian won’t just upgrade the kernel to the latest version, I recommend reading &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-14-57b1"&gt;this article&lt;/a&gt; to better understand the Debian distro.&lt;/p&gt;

&lt;p&gt;However, there are exceptions, and Debian doesn't always lack drivers that the Linux kernel doesn't have. A famous example is the Nvidia driver. It’s a proprietary driver for Nvidia GPUs that can be installed from the Debian repository, while the Linux kernel only offers the open-source Nouveau driver for Nvidia graphics cards. Another example is &lt;a href="https://wiki.debian.org/iwlwifi" rel="noopener noreferrer"&gt;iwlwifi&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, let's start with the first troubleshooting attempt. The probability of getting a kernel module (driver) for my Realtek Wi-Fi USB adapter from any Debian repository component is quite low: &lt;code&gt;iwlwifi&lt;/code&gt;, mentioned earlier, is for Intel Wi-Fi adapters, not Realtek. The missing firmware, however, has a much better chance of being fetched!&lt;/p&gt;

&lt;h4&gt;
  
  
  Troubleshooting Step #2: execution
&lt;/h4&gt;

&lt;p&gt;Debian, as a distro, has more than one package repository from which it can install and update packages and software. If you don’t understand what a package repository is and which repositories Debian has, I strongly recommend &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-14-57b1"&gt;this article&lt;/a&gt; and &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3a4-3fbo"&gt;this article&lt;/a&gt;. Debian has one main repository for each of its releases, and this repository is split into components, one of which is specifically called &lt;code&gt;non-free-firmware&lt;/code&gt;. The term "non-free" doesn’t mean you have to pay someone somewhere to use the software from there. It means that this component contains proprietary, closed-source software, packaged so that it works well and harmoniously with the rest of the system, ensuring the widest possible support for various hardware devices.&lt;/p&gt;

&lt;p&gt;Here is the list of repository components:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;main consists of DFSG-compliant packages, which do not rely on software outside this area to operate. These are the only packages considered part of the Debian distribution.&lt;br&gt;
contrib packages contain DFSG-compliant software, but have dependencies not in main (possibly packaged for Debian in non-free).&lt;br&gt;
non-free contains software that does not comply with the DFSG.&lt;br&gt;
non-free-firmware provides firmware that is needed by some hardware, but does not comply with the DFSG.(&lt;a href="https://wiki.debian.org/SourcesList" rel="noopener noreferrer"&gt;SourcesList — Debian Wiki&lt;/a&gt;).&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, the troubleshooting is very simple—you just have to ensure that &lt;code&gt;apt&lt;/code&gt; can fetch packages from these repository components. To do so, you'll probably need to modify the list of sources for &lt;code&gt;apt&lt;/code&gt;. Just to remind you, I am using the Debian Stable release, not Sid or Trixie.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat /etc/apt/sources.list
# this is what I have:
#first, main repository of Debian stable
deb http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm main non-free-firmware
#repository for security updates
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware
deb-src http://security.debian.org/debian-security bookworm-security main non-free-firmware
# repository bookworm-updates, to get updates before a point release is made;
# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
deb http://deb.debian.org/debian/ bookworm-updates main non-free-firmware
deb-src http://deb.debian.org/debian/ bookworm-updates main non-free-firmware
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What you need to look at is the first line, related to the main repository of your Debian release. If it, for some reason, only has the &lt;code&gt;main&lt;/code&gt; component and &lt;code&gt;contrib&lt;/code&gt;, you will need to add &lt;code&gt;non-free&lt;/code&gt; and &lt;code&gt;non-free-firmware&lt;/code&gt;: &lt;code&gt;deb http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After modifying &lt;code&gt;sources.list&lt;/code&gt;, you'll need to run &lt;code&gt;sudo apt update&lt;/code&gt; so that &lt;code&gt;apt&lt;/code&gt; becomes aware of the updated sources list. &lt;/p&gt;
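
&lt;p&gt;A quick way to verify that the component is really enabled is to let the shell count the matching source lines. This is just a sketch over the classic &lt;code&gt;apt&lt;/code&gt; paths (if your system uses the newer deb822-style source files, check those instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ grep -hE '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2&amp;gt;/dev/null | grep -c non-free-firmware
# with the sources.list shown above this prints 3;
# 0 would mean no enabled source provides the non-free-firmware component
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;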

&lt;p&gt;After this, I run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install firmware-misc-nonfree
#to double-check, I also try:
$ sudo apt install firmware-realtek
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;firmware-misc-nonfree&lt;/code&gt; package will bring &lt;a href="https://packages.debian.org/bookworm/all/firmware-misc-nonfree/filelist" rel="noopener noreferrer"&gt;this firmware&lt;/a&gt;. Among the listed files, I can see the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/lib/firmware/brcm/BCM-0a5c-6410.hcd
/lib/firmware/brcm/BCM-0bb4-0306.hcd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, what the Broadcom Bluetooth dongle was looking for was a different firmware file: &lt;code&gt;BCM-0b05-17cb.hcd&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To see which firmware is on your OS, you can just inspect the contents of &lt;code&gt;/lib/firmware&lt;/code&gt;. If I go into the subfolder &lt;code&gt;/lib/firmware/brcm&lt;/code&gt;, I can see the existing firmware files. In my case, however, none of them matches the firmware file that appeared in the &lt;code&gt;dmesg&lt;/code&gt; error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyahlmk1sfoymt9tpr2r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyahlmk1sfoymt9tpr2r8.png" alt=" " width="409" height="47"&gt;&lt;/a&gt;&lt;/p&gt;
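
&lt;p&gt;Instead of eyeballing the folders, you can search the whole firmware tree for the exact file that &lt;code&gt;dmesg&lt;/code&gt; complained about (substitute the filename from your own error):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ find /lib/firmware -iname 'BCM-0b05-17cb*'
# empty output confirms that the file the dongle asked for is not installed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;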

&lt;p&gt;If you want to take a look at the existing device drivers, they are in a different location: &lt;code&gt;/lib/modules/$(uname -r)/kernel/drivers/&lt;/code&gt;. The &lt;code&gt;uname -r&lt;/code&gt; command shows the version of the kernel in use by your Debian. You can find more than one kernel version in &lt;code&gt;/lib/modules&lt;/code&gt; if you regularly update your system and kernel.&lt;/p&gt;

&lt;p&gt;Bluetooth drivers can be found in &lt;code&gt;/lib/modules/$(uname -r)/kernel/drivers/bluetooth&lt;/code&gt;. Available Realtek Wi-Fi drivers are in &lt;code&gt;/lib/modules/$(uname -r)/kernel/drivers/net/wireless/realtek/rtlwifi/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ouf353vcwifi34d4xch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ouf353vcwifi34d4xch.png" alt=" " width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anyway, the problem with both my troublesome devices still persists. There are Realtek drivers, but it seems none of them is suitable for my Realtek Wi-Fi device, and the same goes for the firmware for my Bluetooth dongle. So, I have to continue troubleshooting. As I mentioned earlier, what’s left for me is to fetch the driver and firmware from somewhere outside of Debian repositories.&lt;/p&gt;

&lt;p&gt;Before proceeding with &lt;strong&gt;the last-resort&lt;/strong&gt; troubleshooting solution, I want to remind you of what I said about these devices. &lt;strong&gt;They were acquired without any thought of their compatibility with Linux, and that’s where the problem starts.&lt;/strong&gt; It’s not that Debian is a bad OS or Linux is a bad system; it’s just that both the manufacturer AND the vendor of BOTH devices don’t care about Linux users. However, not all devices behave like this on Linux OSs. I’m not saying that there are MANY other manufacturers that are better at supporting Linux (Broadcom and Realtek are very big players on the hardware market, so the chances are high that many pieces you buy will be manufactured by them), but rather that MANY of their devices already have drivers and firmware provided by Linux software engineers; you were just unlucky to get devices that are not supported YET or are no longer supported. &lt;strong&gt;There are still not many of them, but there are also manufacturers that genuinely care about Linux users, and I believe they deserve your attention when you are about to buy a new hardware device.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following section will be dedicated to existing Wi-Fi and Bluetooth devices that can work on Linux as plug-and-play. The reason you're reading this article is that your devices are not one of them, and &lt;strong&gt;most likely, you didn’t know how to choose the right devices when you bought them&lt;/strong&gt;. It’s important to understand what exactly the support for a piece of hardware on Linux depends on (spoiler: NOT A BRAND/MODEL NAME).&lt;/p&gt;




&lt;h3&gt;
  
  
  ➄ "Why is it, when something happens, it is always you TWO?" - An Answer.
&lt;/h3&gt;

&lt;p&gt;Why exactly can Bluetooth and Wi-Fi hardware cause trouble together? Do they have something in common? YES. If you didn’t know, Bluetooth is nothing else but a network: small, but still a network, a PAN (personal area network):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs) (&lt;a href="https://en.wikipedia.org/wiki/Bluetooth" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you have a laptop, you have much less control over its hardware components compared to a PC (desktop setup), especially if you’ve built it yourself by selecting and assembling each part.&lt;/p&gt;

&lt;p&gt;In terms of Wi-Fi and Bluetooth, it’s common to go for Bluetooth dongles or USB Wi-Fi sticks with antennas, or even dongles for Wi-Fi. But why are &lt;em&gt;Wi-Fi cards&lt;/em&gt; so often overlooked and forgotten? These cards are essentially &lt;em&gt;PCIe devices&lt;/em&gt;—in other words, just cards you attach to your motherboard.&lt;/p&gt;

&lt;p&gt;Sure, they occupy a PCIe slot, which can be seen as a drawback. But dongles occupy USB ports too, and a Wi-Fi PCIe card goes into one of the small PCIe slots, not the precious full-length slot reserved for your fancy GPU. Despite this, they’re still often avoided.&lt;/p&gt;

&lt;p&gt;I bring this up because network cards highlight why Wi-Fi and Bluetooth issues often go hand in hand. If you look up network cards, you’ll find that most of them are "2-in-1" devices—combining both Wi-Fi and Bluetooth functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2wif4s77ojjgflg7kn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2wif4s77ojjgflg7kn5.png" alt=" " width="596" height="979"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why are Wi-Fi PCIe cards so often bundled with Bluetooth?&lt;br&gt;
The primary reason is shared chipsets. Wi-Fi and Bluetooth are frequently integrated into a single chipset because both technologies use similar radio frequency (RF) components and both operate in the 2.4 GHz band. Combining them on a single card eliminates the need to duplicate components. Both Wi-Fi and Bluetooth require antennas to transmit and receive signals; by bundling the two, the same antenna(s) can often serve both purposes, reducing hardware requirements and saving space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, is a network PCIe card the panacea for all problems related to the lack of support for Bluetooth or Wi-Fi on Debian? NO.&lt;/strong&gt; If you look at the image above, you’ll notice the antennas, metallic parts, and that green microcircuit board... that’s the guy! Whether it works out-of-the-box when attached to your motherboard, or requires extensive tweaking (and possibly still doesn’t work), depends entirely on it.&lt;/p&gt;

&lt;p&gt;In the image above, the green microcircuit board belongs to Intel and is labeled with the model AX210. The crucial point is that whether a Wi-Fi or Bluetooth device works on Debian depends entirely on the available software support (drivers and firmware) for the chipsets used in those devices. &lt;/p&gt;

&lt;p&gt;Please read the information from the Debian developers that I quoted below. You will most likely be confused by all the combinations of numbers and letters that are chipset names, but I will untangle it for you later in this article, using the example of my Wi-Fi USB device!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A WiFi device operates on an electronic chip called a "chipset". We can find the same chipset in several different devices. Consequently, the driver/module for one chipset will work for all wireless devices using that chipset.&lt;br&gt;
Currently there are only a few modern wifi chipsets readily available that work with free software systems. For USB wifi devices this list includes the Realtek RTL8187B chipset (802.11G) and the Atheros AR9170 chipset (802.11N). For Mini PCIe all cards with an Atheros chipset are supported.&lt;br&gt;
Wifi has always been a problem for free software users. USB Wifi cards are becoming less free. With the older 802.11G standard many USB wifi cards had free drivers and did not require non-free firmware. With 802.11N there are only a couple chipsets on the market, from Atheros, which are completely free.&lt;br&gt;
One company which specializes in free software and sells 802.11N USB wifi cards, &lt;a href="https://www.thinkpenguin.com/" rel="noopener noreferrer"&gt;ThinkPenguin.com&lt;/a&gt;, has indicated the availability of free software supported 802.11N USB wifi cards is disappearing. Solving this problem will require more demand than currently exists. Next time you purchase a piece of hardware ask yourself if it is free software compatible. (&lt;a href="https://wiki.debian.org/WiFi" rel="noopener noreferrer"&gt;Debian Wiki: WiFi&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that you are fully equipped with the theory behind hardware support on Linux/Debian, I will proceed with troubleshooting my USB Wi-Fi adapter.&lt;/p&gt;


&lt;h3&gt;
  
  
  ➅ Troubleshooting Step #3: Fetching missing WiFi drivers: Upgrading Kernel vs Manual installation
&lt;/h3&gt;

&lt;p&gt;I hope it’s clear now that there is no "safe for Linux" brand or manufacturer from which you can buy hardware (specifically Wi-Fi adapters) that will just work when you plug it in on Debian simply because it comes from THAT brand and THAT manufacturer. The quote I shared above may be confusing because it talks about chipsets with alphanumerical names.&lt;/p&gt;

&lt;p&gt;Because support on Debian depends on the chipset, choosing the correct Wi-Fi device is more complicated if you’re planning to buy one with the goal of making it work seamlessly on your Debian system. However, it’s not that bad. Remember, in the world of Linux, you always have a community that has your back.&lt;/p&gt;
&lt;h4&gt;
  
  
  Identifying the chipset and its driver
&lt;/h4&gt;

&lt;p&gt;What information do I have for now about my Wi-Fi USB adapter?&lt;br&gt;
These are the details about my devices that I extracted with &lt;code&gt;lsusb -v&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;Device: ID 2357:0138 TP-Link 802.11ac NIC&lt;/p&gt;

&lt;p&gt;idVendor           0x2357 TP-Link&lt;br&gt;
idProduct          0x0138&lt;br&gt;
iManufacturer           1 Realtek&lt;br&gt;
iProduct                2 802.11ac NIC&lt;/p&gt;
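
&lt;p&gt;If you just need that ID pair for later searches, it can be pulled straight out of plain &lt;code&gt;lsusb&lt;/code&gt; output; in the standard listing format the ID is the sixth field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lsusb | awk '/TP-Link/ { print $6 }'
2357:0138
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;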

&lt;p&gt;Here’s the cornerstone: not all devices from the brand TP-Link are unsupported by Debian. It depends on the manufacturer of the chipset used in the TP-Link device, which in my case is Realtek. And not all Wi-Fi devices manufactured by Realtek will work seamlessly on Debian either, because that depends on the specific chipset used. For my Wi-Fi adapter, it is known which wireless standard the chipset implements: &lt;strong&gt;802.11ac&lt;/strong&gt;. But the chipset name itself remains unknown.&lt;/p&gt;

&lt;p&gt;That’s all. Oh, actually, look at the device ID numbers: they are meaningful, especially &lt;strong&gt;0138&lt;/strong&gt;, which is the product ID. I could technically find this on the web, specifically on the TP-Link site, since I definitely know the vendor.&lt;/p&gt;

&lt;p&gt;However, I know the exact model of my Wi-Fi adapter: it is etched into the metallic part of the USB stick, so I know it’s the TP-Link Archer T3U Plus.&lt;/p&gt;

&lt;p&gt;Maybe you don’t recall the model name of the device that’s causing you problems, there’s no way to retrieve it, and you don’t have it written down anywhere... well, you can try searching by the vendor ID and product ID first &lt;a href="http://www.linux-usb.org/usb.ids" rel="noopener noreferrer"&gt;here&lt;/a&gt; and also &lt;a href="https://wiki.debian.org/DeviceDatabase/USB" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
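
&lt;p&gt;By the way, the first link serves the same database that Debian ships locally as &lt;code&gt;/usr/share/misc/usb.ids&lt;/code&gt;, so you can grep it offline. Here is a small &lt;code&gt;awk&lt;/code&gt; sketch that prints every product listed under one vendor ID (product lines are indented under the vendor line):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ awk '/^2357/ {v=1; next} /^[0-9a-f]{4}/ {v=0} v' /usr/share/misc/usb.ids
# prints the products under vendor 2357 (TP-Link); as on the website,
# there is no 0138 line in my copy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;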

&lt;p&gt;My Wi-Fi adapter is absent from both links: ID 0138 is not present. Indeed, there is an Archer T3U listed in the &lt;a href="http://www.linux-usb.org/usb.ids" rel="noopener noreferrer"&gt;first link&lt;/a&gt;, but not the Plus version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjj3ovy0hipw5th9x2xxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjj3ovy0hipw5th9x2xxv.png" alt=" " width="684" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, let’s look at the Archer T3U entry from the &lt;a href="http://www.linux-usb.org/usb.ids" rel="noopener noreferrer"&gt;first link&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;012d Archer T3U [Realtek RTL8812BU]&lt;/p&gt;

&lt;p&gt;The first part of this troubleshooting step is to find exactly this kind of information: RTL8812BU. Does it remind you of something? No? Here’s a refresher:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls /lib/modules/$(uname -r)/kernel/drivers/net/wireless/realtek/rtlwifi/
#this is the output of this command, that list all available drivers on my Debian
btcoexist  rtl8192c   rtl8192cu  rtl8192ee  rtl8723ae  rtl8723com  rtl_pci.ko  rtlwifi.ko
rtl8188ee  rtl8192ce  rtl8192de  rtl8192se  rtl8723be  rtl8821ae   rtl_usb.ko
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And none of the drivers present fits my Wi-Fi adapter! So, what do I need, and why don’t they fit?&lt;/p&gt;

&lt;p&gt;Let’s start with the first one: rtl8192cu.&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;sudo modinfo rtl8192cu&lt;/code&gt; will show me the details about which device this driver is suitable for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2fd5wu1h1uc4i3xz9tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2fd5wu1h1uc4i3xz9tm.png" alt=" " width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key thing that matters for my troubleshooting, and actually hints at why this driver is not suitable for my device, is this line:&lt;br&gt;
&lt;code&gt;description:    Realtek 8192C/8188C 802.11n USB wireless&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;My Wi-Fi adapter uses the 802.11ac wireless standard. For more details, check &lt;a href="https://en.wikipedia.org/wiki/IEEE_802.11ac-2013" rel="noopener noreferrer"&gt;IEEE 802.11ac on Wikipedia&lt;/a&gt;.&lt;/p&gt;
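
&lt;p&gt;Instead of running &lt;code&gt;modinfo&lt;/code&gt; on each module one by one, you can also scan the description field of every module in that folder at once; here is a small loop sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ find /lib/modules/$(uname -r)/kernel/drivers/net/wireless/realtek/rtlwifi/ -name '*.ko*' | while read -r mod; do
    name=$(basename "$mod"); name=${name%%.ko*}
    printf '%s: %s\n' "$name" "$(modinfo -F description "$name" 2&amp;gt;/dev/null)"
  done
# one "module: description" line per driver, so the 802.11n-only ones are easy to spot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;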

&lt;p&gt;I can enrich &lt;code&gt;modinfo&lt;/code&gt; output by checking the Realtek website, specifically the page dedicated to Wi-Fi device chipsets (there is also a page for combined Wi-Fi and Bluetooth). Here’s the link: &lt;a href="https://www.realtek.com/Product/Category?cate_id=262&amp;amp;menu_id=291" rel="noopener noreferrer"&gt;Realtek Wi-Fi Chipsets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Why, for example, is the &lt;a href="https://www.realtek.com/Product/Index?id=580&amp;amp;cate_id=194&amp;amp;menu_id=291" rel="noopener noreferrer"&gt;RTL8811CU&lt;/a&gt; not my Wi-Fi adapter's chipset, based on the Realtek description?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;General Description:&lt;br&gt;
The Realtek RTL8811CU-CG is a highly integrated single-chip that supports 1-stream 802.11ac, bla bla bla...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I know that my Wi-Fi adapter supports 2 streams.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Host interface:&lt;br&gt;
USB 2.0 for WLAN and BT controller.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I know that my Wi-Fi adapter uses USB 3.0.&lt;/p&gt;

&lt;p&gt;So, I can continue exploring the Realtek list of chipsets. There’s no need to examine other existing drivers with &lt;code&gt;modinfo&lt;/code&gt; because they definitely won’t fit.&lt;/p&gt;

&lt;p&gt;I could say that I examined the Realtek list of chipsets and matched the parameters, hehe. Actually, I did, out of curiosity, but in reality I just Googled my model to find out which driver I need. And indeed, the Realtek information card for this chipset describes exactly the chipset in my Wi-Fi adapter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupasmk12kxfkjvoe1mnk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupasmk12kxfkjvoe1mnk.png" alt=" " width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://www.realtek.com/Product/Index?id=576&amp;amp;cate_id=194&amp;amp;menu_id=291" rel="noopener noreferrer"&gt; Product Search: RTL8812BU&lt;/a&gt;



&lt;p&gt;So, to make my Wi-Fi adapter work, I need the rtl8812bu driver, and it is definitely missing on my Debian right now. Don’t worry if your device is more complicated and you find it difficult to figure out which driver you need; I have an amazing &lt;a href="https://github.com/morrownr/USB-WiFi" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; for you, which can be considered the bible of Wi-Fi adapter support on Linux.&lt;/p&gt;

&lt;p&gt;I strongly recommend checking &lt;a href="https://github.com/morrownr/USB-WiFi/blob/main/home/USB_WiFi_Adapter_Information_for_Linux.md" rel="noopener noreferrer"&gt;USB WiFi Adapter Information for Linux&lt;/a&gt;, &lt;a href="https://github.com/morrownr/USB-WiFi/blob/main/home/USB_WiFi_Adapters_that_are_supported_with_Linux_in-kernel_drivers.md" rel="noopener noreferrer"&gt;USB WiFi adapters that are supported with Linux in-kernel drivers&lt;/a&gt; and &lt;a href="https://github.com/morrownr/USB-WiFi/blob/main/home/USB_WiFi_Adapter_out-of-kernel_drivers_for_Linux.md#usb-wifi-adapters-with-linux-out-of-kernel-drivers" rel="noopener noreferrer"&gt;USB WiFi adapters with Linux out-of-kernel drivers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I started with &lt;a href="https://github.com/morrownr/USB-WiFi/blob/main/home/USB_WiFi_Adapter_out-of-kernel_drivers_for_Linux.md#usb-wifi-adapters-with-linux-out-of-kernel-drivers" rel="noopener noreferrer"&gt;USB WiFi adapters with Linux out-of-kernel drivers&lt;/a&gt; since my current Debian kernel definitely does not have the driver I need—&lt;strong&gt;rtl8812bu&lt;/strong&gt;. However, I found some interesting information on this link:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;chipsets - rtl8812bu and rtl8822bu - AC1200 - USB 3&lt;br&gt;
As of kernel 6.2, the above chipsets have an in-kernel driver. It is located in the rtw88 in-kernel driver. I invite all to test the new in-kernel driver and use it if it meets your needs.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I know that my kernel version is older than that; however, here is the precise info:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ uname -r
6.1.0-28-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
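
&lt;p&gt;Whether the in-kernel rtw88 route is even open can be decided with a quick version comparison; here is a sketch using &lt;code&gt;sort -V&lt;/code&gt; (6.2 being the minimum version quoted above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ needed=6.2; running=$(uname -r | cut -d- -f1)
$ if [ "$(printf '%s\n' "$needed" "$running" | sort -V | head -n1)" = "$needed" ]; then
    echo "kernel $running is new enough for the in-kernel rtw88 driver"
  else
    echo "kernel $running is older than $needed: upgrade it or build the driver out of tree"
  fi
kernel 6.1.0 is older than 6.2: upgrade it or build the driver out of tree
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;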



&lt;h4&gt;
  
  
  Troubleshooting Step #3: implementation
&lt;/h4&gt;

&lt;p&gt;So, here is where the troubleshooting starts—I have two choices: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;upgrade the kernel &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;install the kernel module manually using the code kindly provided by the gentleman who maintains this Wi-Fi adapter compatibility "bible": &lt;a href="https://github.com/morrownr/88x2bu-20210702" rel="noopener noreferrer"&gt;Linux Driver for USB WiFi Adapters that use the RTL8812BU and RTL8822BU Chipsets&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will use both methods just for the purpose of this article.&lt;/p&gt;

&lt;p&gt;Personally, I would go for a kernel upgrade, and I do this on my personal setup. However, the issue is that Debian Stable ships with kernel 6.1.x, so I already have the newest version available from the Debian stable main package repository. The only reasonably safe way to update the kernel further is using &lt;a href="https://backports.debian.org/" rel="noopener noreferrer"&gt;Debian backports&lt;/a&gt;. If you don’t know what that is, you can read &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-14-57b1"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What are the risks of this method (you may also face them)? I have NVIDIA drivers installed, and they come from the Debian stable repo. NVIDIA drivers require kernel headers whose version matches the kernel version, so I will have to upgrade those from backports as well. Additionally, MAYBE I will need to reinstall the NVIDIA driver from the Debian backports repository. (If all of this doesn’t make sense to you and you don’t understand what I’m talking about, I recommend you read &lt;a href="https://dev.to/dev-charodeyka/debian-12-nvidia-drivers-18dh"&gt;this article&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;The second option is manual installation. What are the drawbacks? I don’t like this method because it’s less the Debian way, and that’s all. Want to know my policy? I’d rather buy a new, compatible plug-and-play Wi-Fi USB adapter, or, better yet, a PCIe network card. For me, it’s not worth installing a kernel module manually and potentially destabilizing my system, even slightly. But this is just my point of view; I’ll let you decide, as I will show you both examples. Please note, my concerns are not related to the developer/maintainer of the driver I will be manually installing, who is doing a tremendously great job; it is just that I manage my system in a particular way. By the way, I use a snapshotting tool, so I can try both. If you don’t know anything about snapshotting, I recommend &lt;a href="https://dev.to/dev-charodeyka/using-timeshift-for-systems-snapshots-and-recovery-on-debian-12-via-command-line-7m6"&gt;this article&lt;/a&gt;. Using a snapshotting tool can make the troubleshooting procedure much less risky.&lt;/p&gt;

&lt;h4&gt;
  
  
  Method 1: upgrading kernel version from bookworm-backports repo
&lt;/h4&gt;

&lt;p&gt;I start with a kernel upgrade from Bookworm backports.&lt;/p&gt;

&lt;p&gt;The first thing to do is to make sure that the Bookworm backports repo is enabled and accessible for your &lt;code&gt;apt&lt;/code&gt; to fetch packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo vim /etc/apt/sources.list
#append these lines:
#bookworm-backports
deb http://deb.debian.org/debian/ bookworm-backports main contrib non-free

$ sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re worried that Debian will now upgrade everything from there, don’t worry: packages from backports are not installed unless you explicitly ask for them (with &lt;code&gt;-t bookworm-backports&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Next, I search for the available kernel versions in the bookworm-backports repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ apt search linux-image*
# I cannot install linux kernel 6.2, the minimum version available is 6.9, I am fine with it
linux-image-6.9.7+bpo-amd64/stable-backports 6.9.7-1~bpo12+1 amd64
  Linux 6.9 for 64-bit PCs (signed)
$ sudo apt install -t bookworm-backports linux-image-6.9.7+bpo-amd64
#if you want to fetch the latest available version from bookworm-backports use
# sudo apt install -t bookworm-backports linux-image-amd64
#immediately after I install Linux headers as well
$ sudo apt install -t bookworm-backports linux-headers-6.9.10+bpo-amd64
$ sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During the installation of the kernel headers, I got these messages in the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dkms: running auto installation service for kernel 6.9.10+bpo-amd64.
Sign command: /lib/modules/6.9.10+bpo-amd64/build/scripts/sign-file
Signing key: /var/lib/shim-signed/mok/MOK.priv
Public certificate (MOK): /var/lib/shim-signed/mok/MOK.der
nvidia-current.ko.xz:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/6.9.10+bpo-amd64/updates/dkms/

nvidia-...................
.....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DKMS is rebuilding the NVIDIA kernel modules, triggered by the kernel upgrade. That’s quite good and convenient: it seems that I don’t need to reinstall the NVIDIA drivers. Moreover, DKMS is signing the rebuilt NVIDIA kernel modules for UEFI Secure Boot, because I have it enabled on my machine, and everything is going as configured. If you also use Secure Boot on your PC but have only a vague idea about it, I recommend reading &lt;a href="https://dev.to/dev-charodeyka/debian-secure-boot-to-be-or-not-to-be-that-is-the-question-1o82"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The first thing I do after rebooting is run &lt;code&gt;sudo apt autoremove&lt;/code&gt; to remove the old kernel packages.&lt;/p&gt;
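
&lt;p&gt;To see which kernel packages are installed before and after the cleanup, a one-liner over &lt;code&gt;dpkg&lt;/code&gt; output helps; the &lt;code&gt;awk&lt;/code&gt; filter keeps only packages in the installed (&lt;code&gt;ii&lt;/code&gt;) state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dpkg -l 'linux-image-*' | awk '/^ii/ { print $2 "  " $3 }'
# one "package  version" line per installed kernel image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;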

&lt;p&gt;The NVIDIA drivers are in place, as I can see from the &lt;code&gt;nvidia-smi&lt;/code&gt; output, and &lt;code&gt;uname -r&lt;/code&gt; displays the expected version of the kernel in use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z5troxsr1q4326ofcq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z5troxsr1q4326ofcq8.png" alt=" " width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use &lt;code&gt;sudo nmtui&lt;/code&gt; to activate the Wi-Fi connection, and this time I get the complete list of detected Wi-Fi networks.&lt;/p&gt;

&lt;p&gt;Let’s check if the wireless interface is present in the list of network interfaces and is UP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; ...state UNKNOWN ...
   .....
2: eno1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; .... state UP ....
    ....
3: wlx984827dd37bb: &amp;lt;NO-CARRIER,BROADCAST,MULTICAST,UP&amp;gt; ...state UP ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila!&lt;br&gt;
Some additional commands to check which drivers are in use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ usb-devices
T:  Bus=01 Lev=01 Prnt=02 Port=02 Cnt=01 Dev#=  3 Spd=480 MxCh= 0
D:  Ver= 2.10 Cls=00(&amp;gt;ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=2357 ProdID=0138 Rev=02.10
S:  Manufacturer=Realtek
S:  Product=802.11ac NIC
S:  SerialNumber=123456
C:  #Ifs= 1 Cfg#= 1 Atr=80 MxPwr=500mA
I:  If#= 0 Alt= 0 #EPs= 5 Cls=ff(vend.) Sub=ff Prot=ff Driver=rtw_8822bu
.... 
$ ls /lib/modules/$(uname -r)/kernel/drivers/net/wireless/realtek/rtw88
rtw88_8723de.ko.xz  rtw88_8821c.ko.xz   rtw88_8822b.ko.xz   rtw88_8822c.ko.xz   rtw88_pci.ko.xz
rtw88_8723d.ko.xz   rtw88_8821cs.ko.xz  rtw88_8822bs.ko.xz  rtw88_8822cs.ko.xz  rtw88_sdio.ko.xz
rtw88_8723du.ko.xz  rtw88_8821cu.ko.xz  rtw88_8822bu.ko.xz  rtw88_8822cu.ko.xz  rtw88_usb.ko.xz
rtw88_8821ce.ko.xz  rtw88_8822be.ko.xz  rtw88_8822ce.ko.xz  rtw88_core.ko.xz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Manually building and installing the Realtek driver
&lt;/h4&gt;

&lt;p&gt;So far, I am satisfied. I have a well-configured system, my NVIDIA drivers were built with DKMS, and I also have Secure Boot enabled and configured. However, if your Debian setup isn’t configured as carefully, some things could go wrong, especially with the NVIDIA drivers. That’s why I will also show an alternative method: manually building and installing the Realtek driver, without doing anything to your system's kernel version.&lt;/p&gt;

&lt;p&gt;I am precisely following &lt;a href="https://github.com/morrownr/88x2bu-20210702" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt update
$ sudo apt upgrade
# First, my Debian needs some packages installed. (This is what I don't like about
# "manual installation" of drivers: it is not really installation but building,
# so it requires quite a lot of build software to be installed first.)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Mandatory packages: &lt;code&gt;gcc make bc kernel-headers build-essential git&lt;/code&gt;&lt;br&gt;
Highly recommended packages: &lt;code&gt;dkms rfkill iw ip&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install -y linux-headers-$(uname -r) build-essential bc dkms git libelf-dev rfkill iw 
# in my case it will install this
#The following NEW packages will be installed:
  #bc git git-man iw libelf-dev liberror-perl rfkill zlib1g-dev
# Create a directory to hold the downloaded driver
mkdir -p ~/src
# Move to the newly created directory
cd ~/src
# Download the driver
git clone https://github.com/morrownr/88x2bu-20210702.git
# Move to the newly created driver directory
cd ~/src/88x2bu-20210702
# Run the installation script (install-driver.sh)
sudo sh install-driver.sh
# When I was prompted to edit the config, I chose "no" and then I chose "yes" to reboot.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After rebooting, the results are exactly the same as with the first method, so I will not repeat the proof.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem with the Wi-Fi USB adapter is resolved. The remaining one is with the Broadcom Bluetooth device.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ➆ Troubleshooting Step #4: Fetching missing Bluetooth firmware vs Replacing Bluetooth dongle
&lt;/h3&gt;

&lt;p&gt;I’ll start again by analyzing what I have on my device (output of &lt;code&gt;lsusb -v&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Bus 001 Device 009: ID 0b05:17cb ASUSTek Computer, Inc. Broadcom BCM20702A0 Bluetooth
idVendor           0x0b05 ASUSTek Computer, Inc.
idProduct          0x17cb Broadcom BCM20702A0 Bluetooth
iManufacturer           1 Broadcom Corp
iProduct                2 BCM20702A0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I checked the details about my Bluetooth device by ID on the previously shared &lt;a href="http://www.linux-usb.org/usb.ids" rel="noopener noreferrer"&gt;link&lt;/a&gt; again. &lt;br&gt;
This time, my device is present:&lt;br&gt;
&lt;code&gt;17cb  Broadcom BCM20702A0 Bluetooth&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But this time, I do not need a driver; I need firmware. Again, I turn to community support, as the Debian repository component &lt;code&gt;non-free-firmware&lt;/code&gt; could not provide the required Bluetooth Broadcom firmware.&lt;/p&gt;

&lt;p&gt;The firmware that &lt;code&gt;dmesg&lt;/code&gt; reports my Bluetooth dongle is searching for is &lt;code&gt;BCM-0b05-17cb.hcd&lt;/code&gt;. Here is &lt;a href="https://github.com/winterheart/broadcom-bt-firmware/tree/master" rel="noopener noreferrer"&gt;the GitHub repository&lt;/a&gt; from which I can get it.&lt;/p&gt;
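&lt;p&gt;Notice that the requested file name is derived from the USB vendor and product IDs shown by &lt;code&gt;lsusb&lt;/code&gt;; a tiny sketch of the naming convention, using the IDs of my dongle:&lt;/p&gt;

```shell
# USB IDs as reported by lsusb: ID 0b05:17cb
vendor="0b05"
product="17cb"

# The kernel falls back to a .hcd patch file named after the vendor-product pair
fw_name="BCM-${vendor}-${product}.hcd"
echo "$fw_name"
```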

&lt;p&gt;Will I install the firmware from there? No. Is it because I don’t trust the maintainer? No. It’s because I don’t trust the old firmware from Broadcom. Why? Because the repository maintainer generously shared this information:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Recently several vulnerabilities have been discovered in the Bluetooth stack such as CVE-2018-5383, CVE-2019-9506 (KNOB), CVE-2020-10135 (BIAS) and more. Since Broadcom has stopped active support for its consumer devices, your system may be subject to security risks. You will have to use these devices at your own risk. As a repository maintainer, I cannot provide security fixes.(&lt;a href="https://github.com/winterheart/broadcom-bt-firmware/tree/master" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Am I sure that the firmware my Bluetooth dongle is seeking does not suffer from the reported vulnerabilities? I can’t know for sure. But given that this dongle is quite a dinosaur, chances are it does. So, the solution I see for me is to replace it with another Bluetooth dongle.&lt;/p&gt;

&lt;p&gt;My alternative Bluetooth dongle was purchased recently and works perfectly on Debian—truly plug and play. There’s no need for configuration, installation of drivers/firmware, or any tweaks; it just works. This Bluetooth dongle is from the brand Edimax, purchased on Amazon for around $10 (&lt;a href="https://www.amazon.it/Edimax-BT-8500-networking-card-Bluetooth/dp/B08K1C8B81/ref=sr_1_1?__mk_it_IT=%C3%85M%C3%85%C5%BD%C3%95%C3%91&amp;amp;crid=ZLTO1IFCNR4L&amp;amp;dib=eyJ2IjoiMSJ9.C9-cvbHGVIeQRlN4QI2z7vtVNF0vrVQL8U99ylmDCOLmRsruPp2I1xT5Pa7xjYV9Pkxd0jk_QVTJtbTjL-I_neLGBfxLzubOlnTndR-jNhWv9wb-dZrx2Rd-Hiwm28wB3GHmKUjlBBF_WZ9YxZ6raLtyCpgibFglHgNIOCH9wOwdhk5VD-syKo0vHZ9l8TVP.C-KxxW7m4rG3d5vVIRcDdofQwR1MG2ZHfqqJEbhAucY&amp;amp;dib_tag=se&amp;amp;keywords=edimax+bluetooth+linux&amp;amp;nsdOptOutParam=true&amp;amp;qid=1734999621&amp;amp;sprefix=edimax+bluetooth+linux%2Caps%2C95&amp;amp;sr=8-1" rel="noopener noreferrer"&gt;Edimax BT-8500&lt;/a&gt;). It supports the Bluetooth 5.0 protocol.&lt;/p&gt;

&lt;p&gt;Here are the outputs of the commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lsusb
Bus 001 Device 009: ID 7392:c611 Edimax Technology Co., Ltd Edimax Bluetooth Adapter

$ sudo dmesg
 1178.132034] Bluetooth: hci1: RTL: examining hci_ver=0a hci_rev=000b lmp_ver=0a lmp_subver=8761
[ 1178.133013] Bluetooth: hci1: RTL: rom_version status=0 version=1
[ 1178.133024] Bluetooth: hci1: RTL: loading rtl_bt/rtl8761bu_fw.bin
[ 1178.144494] Bluetooth: hci1: RTL: loading rtl_bt/rtl8761bu_config.bin
[ 1178.202622] Bluetooth: hci1: RTL: cfg_sz 6, total sz 27814
[ 1178.342160] Bluetooth: hci1: RTL: fw version 0x09a98a6b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even if it may seem unpleasant to you that I opted for such a solution, that is my choice. I would never compromise my system just to make a device, priced no more than $20, work. However, I’ve shared the GitHub repo, and if you have the same devices, you are free to use it at your own risk.&lt;/p&gt;
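&lt;p&gt;For completeness, if you do decide to use that repository at your own risk, the installation essentially boils down to placing the &lt;code&gt;.hcd&lt;/code&gt; file where the kernel looks for firmware. The sketch below only rehearses this against a sandbox directory with a dummy file; for a real install you would download the actual file from the repo and use &lt;code&gt;/lib/firmware&lt;/code&gt; (as root) instead:&lt;/p&gt;

```shell
# Sandbox stand-in for /lib/firmware -- swap in the real path (as root)
# only if you accept the risks described above.
fw_root="$PWD/fw-demo"
mkdir -p "$fw_root/brcm"

# Dummy stand-in for the file downloaded from the broadcom-bt-firmware repo
touch BCM-0b05-17cb.hcd

# Broadcom Bluetooth firmware lives in the brcm/ subdirectory of the firmware root
install -m 0644 BCM-0b05-17cb.hcd "$fw_root/brcm/"
ls "$fw_root/brcm"
```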




&lt;p&gt;I’ve achieved the objective of this article: I resolved the problems with my troublesome hardware - the Bluetooth dongle and the Wi-Fi USB adapter. The notable solution was for the Broadcom Bluetooth issue: really, no such inexpensive device is worth compromising your system for. If your Wi-Fi and Bluetooth hardware is very hard to troubleshoot, just consider replacing it. I’ve shared links, and I will repeat them here. Always remember: if you can, support vendors that care about Linux users, even if they are sometimes a bit overpriced.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/morrownr/USB-WiFi/blob/main/home/Recommended_Bluetooth_Adapters_for_Linux.md" rel="noopener noreferrer"&gt;Recommended Bluetooth Adapters for Linux&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/morrownr/USB-WiFi/blob/main/home/USB_WiFi_Adapters_that_are_supported_with_Linux_in-kernel_drivers.md" rel="noopener noreferrer"&gt;USB WiFi adapters that are supported with Linux in-kernel drivers&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.thinkpenguin.com/" rel="noopener noreferrer"&gt;ThinkPenguin.com&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.reddit.com/r/debian/" rel="noopener noreferrer"&gt;Reddit Debian subreddit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.reddit.com/r/linux/" rel="noopener noreferrer"&gt;Reddit Linux subreddit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feel free to ask questions on Reddit, and feel free to share with the Linux community any Wi-Fi &amp;amp; Bluetooth device that worked seamlessly on your Debian/Linux setup!&lt;/p&gt;

</description>
      <category>linux</category>
      <category>wifi</category>
      <category>bluetooth</category>
      <category>debian</category>
    </item>
    <item>
      <title>Debian 12/13 … is amazing! How to: Create your custom codehouse #5 [From Console only to Custom Graphical User Interface]</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Sun, 22 Dec 2024 19:59:19 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-4a4-4dfh</link>
      <guid>https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-4a4-4dfh</guid>
      <description>&lt;p&gt;Author's Note: This article was originally written for Debian 12 Stable (Bookworm). However, with the release of Debian 13 approaching -and since I’m currently using Debian Testing (Trixie) - I’ll be sharing screenshots and configs from my own setup. Things might look slightly different on your end. That said, some aspects may vary significantly (or even a lot), and I’ll make sure to point those out.&lt;/p&gt;

&lt;p&gt;Customization is one of those things where, if you want your setup to look cool, you sometimes need cutting-edge versions of software.&lt;br&gt;
Most of the content here will still work on Debian Stable. Just keep in mind that some minor functionalities in certain apps might not.&lt;/p&gt;


&lt;h2&gt;
  
  
  Building a Custom Minimalistic UI on Debian: From Bare Display Server to Fully Functional User Interface with BSPWM and Polybar.
&lt;/h2&gt;



&lt;p&gt;In the previous parts of this Debian 12 series of articles (4 parts), I covered many details about the Debian distribution — what Debian offers, how it packages software, how to install Debian, and how to administer your Debian system in a secure way. If you enjoyed it so far and I’ve sparked your curiosity about Debian, let’s move on to the most interesting part - building Debian custom UI!&lt;/p&gt;



&lt;p&gt;In parts &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3a4-3fbo"&gt;3A&lt;/a&gt; and &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3b4-2ca5"&gt;3B&lt;/a&gt;, you may have noticed that all the screenshots I provided are just white text on a black screen. That’s because I installed a very minimal version of Debian that, at the beginning, had only the standard system utilities and no way to run any GUI applications, even if I wanted to. Why? Was something wrong with the graphics drivers? Nope. I have only one video card - NVIDIA (my Intel CPU doesn’t have integrated graphics) - and I &lt;strong&gt;did have Nouveau drivers&lt;/strong&gt; for the NVIDIA GPU from the start. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;By the way, I installed the proprietary NVIDIA drivers. I’ve described this process in detail in &lt;a href="https://dev.to/dev-charodeyka/debian-12-nvidia-drivers-18dh"&gt;this article&lt;/a&gt; (I couldn’t integrate it here because it would make this article incredibly long).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you’re familiar with Linux systems, you may know about distro flavors. They’re essentially different Desktop Environments (DEs) bundled with the distro of your choice. So, if you download an &lt;code&gt;.iso&lt;/code&gt; with a flavor and install the system, you’ll have the Desktop Environment you picked. The most famous ones are &lt;a href="https://www.gnome.org/" rel="noopener noreferrer"&gt;GNOME&lt;/a&gt;, &lt;a href="https://xfce.org/" rel="noopener noreferrer"&gt;XFCE&lt;/a&gt;, and &lt;a href="https://kde.org/plasma-desktop/" rel="noopener noreferrer"&gt;KDE Plasma&lt;/a&gt;. However, during installation I did not install any of them, so all I have is this “black" terminal. But are these well-known Desktop Environments the only way to make your OS more user-friendly and get a UI? NOPE. Otherwise, I wouldn’t have written this article.&lt;/p&gt;

&lt;p&gt;Will I install one of these Desktop Environments on my Debian setup and then show you how to customize it? NOPE. Why? Because it’s no fun. Also, it will bring me bloatware (in this case, software I will never use). To explain myself better, no matter how much you customize GNOME, GNOME is still GNOME. There’s no way to make it perfectly fit your needs while stripping away everything you NEVER use. The only DE I personally tolerate is XFCE. The rest? Meh. Basta. I deal with Windows on my work PC, eight hours a day, five days a week—enough of these cute "click-click" UIs.&lt;/p&gt;

&lt;p&gt;For me, the most precious part of the Debian customization process is that it brings you really close to the OS itself. Are you learning Rust? Curious about writing applications for Linux (or even Windows)? But you’ve never gone far beyond GNOME, and it is a magic box to you? Bad news: chances are, you won’t go far beyond web apps either.&lt;/p&gt;



&lt;p&gt;Let’s leave the UI aside for a moment; instead of writing a bullet list of this article’s contents, I will show you schematically what this article is for. So here is how I see the basic needs of a user like me - one who uses Debian for software development purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5mfod1okuw0stqjyidk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5mfod1okuw0stqjyidk.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What do we have here: input and output devices. As inputs, I have the keyboard and mouse, and also the microphone on the headset. For output, there’s the monitor, and for music or calls, the headset itself. For simplicity, I drew the mouse and keyboard with a wired connection. My mouse is wireless, but it’s important to note that if your keyboard and mouse use a dongle, they usually don’t rely on Bluetooth - they use radio frequency (RF) technology to communicate with their receivers (the dongles) - so in the diagram they are not connected to the Bluetooth dongle in any way.&lt;/p&gt;

&lt;p&gt;My keyboard is an amazing AKKO keyboard. Unfortunately, on Linux, it works only in wired mode, but I’m completely fine with that. On the other hand, my Marshall headset connects via Bluetooth. My monitor receives its output from the NVIDIA graphical processing unit.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NB! I mention the brands of the stuff I use not as an advertisement, but to show what works for me.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, the goal is to install everything needed on my Debian to make all my input/output devices work flawlessly and to configure a custom Graphical Interface. Let's start with the latter. &lt;/p&gt;



&lt;p&gt;I have made the decision to split the broad topic "Building a Custom Minimalistic UI on Debian: From Bare Display Server to Fully Functional Setup" due to the large amount of content. The first part, which you are reading now, focuses on the installation and configuration of the fundamental components. Here’s the roadmap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Understanding &lt;em&gt;how&lt;/em&gt; UI and Desktop Environments work on Debian&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Display Servers, Display Communication Protocols and Window Managers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installing and configuring the BSPWM window manager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installing a browser (Brave) and a terminal emulator (Alacritty)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;About the Desktop Bus (D-Bus) and inter-process communication (IPC)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installing a status bar (Polybar), a file manager (Thunar), an app launcher (dmenu), and a notification daemon (dunst)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The following &lt;em&gt;sub-part&lt;/em&gt; will focus on configuring Bluetooth audio devices and Wi-Fi.&lt;/p&gt;

&lt;p&gt;The final article in this Debian series will be dedicated to the customization of the status bar (Polybar). Moreover, I will install additional software that is not an integral part of my system’s setup. By the end of that article, you’ll be fully equipped to diverge from my configuration files and create an even better UI to your taste - where only your imagination sets the limits!&lt;/p&gt;

&lt;p&gt;Let's start!&lt;/p&gt;



&lt;p&gt;&lt;a id="how-ui-work"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Understanding &lt;em&gt;how&lt;/em&gt; UI and Desktop Environments work on Debian
&lt;/h3&gt;

&lt;p&gt;I don’t know if you’re familiar with web development, but let me use it as an abstract example to explain how Desktop Environments (DE) work and how the GUI of applications function on your Debian. &lt;/p&gt;

&lt;p&gt;Imagine you visit the website DEV.TO, and you read one of my articles. You like it and decide to give it a unicorn as a reaction. You click on the unicorn, and it appears in the list of reactions under my article. That’s what you see on "your side".&lt;/p&gt;

&lt;p&gt;But behind the scenes—on the server side—your click is actually an event. This event triggers a piece of code on the server. The logic might look something like this:&lt;/p&gt;

&lt;p&gt;If 'unicorn' is clicked → increment the counter for 'unicorn' reactions on my article (identified by its unique ID).&lt;br&gt;
Update the database → write the new number of unicorns to the DB for the article 'idX'.&lt;br&gt;
So, anytime another person opens my article, they’ll see the updated number of unicorns already given to it, fetched from the statistics DB.&lt;/p&gt;


&lt;p&gt;How is this related to any Debian app? It’s exactly the same. Take File Explorer, for example. You open this app, explore the contents of your directories, search for a file, or maybe move a file from one directory to another. You do all of this with a mouse via the GUI, perhaps using drag-and-drop. But behind all of this, there are commands being executed! These commands are similar to the ones you could easily run in the terminal yourself: &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;mv&lt;/code&gt;, &lt;code&gt;cp&lt;/code&gt;, &lt;code&gt;find&lt;/code&gt;, and so on.&lt;/p&gt;

&lt;p&gt;The GUI acts as a layer that simplifies these operations, making them accessible through clicks, drags, and buttons. But at its core, the GUI is just executing the system commands. It’s like a web app sending events to the server—behind the visuals, there’s logic and execution happening.&lt;/p&gt;
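&lt;p&gt;To make the analogy concrete, here is a small sketch of roughly what happens under the hood when you drag a file from one folder to another in a file manager:&lt;/p&gt;

```shell
# Two directories standing in for the source and destination folders
mkdir -p demo/src demo/dst
touch demo/src/report.txt

# The drag-and-drop gesture ultimately amounts to a move operation
mv demo/src/report.txt demo/dst/

# The file listing the GUI would refresh and display
ls demo/dst
```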

&lt;p&gt;Desktop Environments, the feature-rich GNOME for example, still have commands underneath, even though their complexity is pretty high. When you double-click an app’s icon, GNOME opens a window with that app on the screen; you can expand it, collapse it, or drag it to the corner of the screen - what magic! Does GNOME send some "signals" to the GPU to re-render the screen really quickly when something changes? NO. Here is a schematic representation of the "parties" involved in your system’s UI. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaw5mcmlc83pfvxryzx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaw5mcmlc83pfvxryzx1.png" alt=" " width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Firmware and GPU drivers are out of the scope of this article; a VERY DETAILED explanation of how firmware and drivers work on Debian can be found &lt;a href="https://dev.to/dev-charodeyka/debian-secure-boot-to-be-or-not-to-be-that-is-the-question-1o82"&gt;here&lt;/a&gt; and &lt;a href="https://dev.to/dev-charodeyka/debian-12-nvidia-drivers-18dh"&gt;here&lt;/a&gt;. NB! This scheme is accurate only for the X (X11) communication protocol; for Wayland, the picture is quite different - read about their differences below.&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;GNOME DE has several key components. The Window Manager is responsible for the functionality of floating windows—allowing you to drag, collapse, or expand them. The Compositor takes care of how these windows look, handling things like transparency, animations, and borders.&lt;/p&gt;

&lt;p&gt;However, what exists outside of GNOME (or any other DE) is the Display Server. All Desktop Environments depend on the display server. It’s the display server that makes the fundamental difference between a terminal-only system and a system with a graphical UI.&lt;/p&gt;



&lt;p&gt;&lt;a id="display-wm"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Display Communication Protocols, Display Servers and Window Managers
&lt;/h3&gt;
&lt;h4&gt;
  
  
  About Display Servers and Display Communication Protocols
&lt;/h4&gt;

&lt;p&gt;When it comes to display stuff, there are two major &lt;em&gt;"display technologies"&lt;/em&gt; used by almost all Linux operating systems, and the same applies to Debian. Their names are the X Window System (aka X11 or just X) and Wayland.  To reduce confusion with terminology related to X-* stuff: the X Window System is an X display communication protocol, the latest version of which is 11, so it’s also called X11.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;In the X Window System, the X Server itself does not give the user the capability of managing windows that have been opened. Instead, this job is delegated to a program called a window manager.&lt;br&gt;
Regarding Wayland, the job is delegated to display server called a compositor or compositing window manager. (&lt;a href="https://wiki.debian.org/WindowManager" rel="noopener noreferrer"&gt;WindowManager - Debian Wiki&lt;/a&gt;)&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There is quite a bit of confusion around all these terms related to X11 and Wayland: display communication protocols, display servers, and so on. Let’s take a browser as an example to see the difference. &lt;em&gt;FYI, there are terminal-based browsers!&lt;/em&gt; However, let’s focus on a classic browser with GUI features like interactive tabs, status bars, etc.&lt;/p&gt;

&lt;p&gt;When you launch a browser—even if you launch it from the terminal (yes, on Linux, you don’t need to click an icon or launch the program from a menu; you can launch it just fine from the terminal with a command, e.g. &lt;code&gt;brave-browser&lt;/code&gt;)—it should appear on your screen. Based on where you click and what you do, the image on your screen changes immediately.&lt;/p&gt;

&lt;p&gt;Obviously, some code handles this process, communicating your inputs and determining what should be displayed as outputs. In other words, there’s a &lt;strong&gt;display communication protocol&lt;/strong&gt;. And this is where X11 and Wayland diverge—they ARE completely different protocols!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Wayland is a communication protocol that specifies the communication between a display server and its clients, as well as a C library implementation of that protocol. A display server using the Wayland protocol is called a Wayland compositor, because it additionally performs the task of a compositing window manager. (&lt;a href="https://en.wikipedia.org/wiki/Wayland_(protocol)#Wayland_compositors" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The X Window System core protocol is the base protocol of the X Window System, which is a networked windowing system for bitmap displays used to build graphical user interfaces on Unix, Unix-like, and other operating systems. The X Window System is based on a client–server model: a single server controls the input/output hardware, such as the screen, the keyboard, and the mouse; all application programs act as clients, interacting with the user and with the other clients via the server. (&lt;a href="https://en.wikipedia.org/wiki/X_Window_System_core_protocol" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The key takeaway from these two quotes is NOT ONLY that the Wayland and X Window System protocols are different, BUT that they run on entirely different display servers. The X.Org Server is designed to run on the X11 communication protocol. Meanwhile, in the case of Wayland, display servers are called Wayland compositors and run on the Wayland protocol.&lt;/p&gt;

&lt;p&gt;However, to give you a clearer idea of how display servers and display communication protocols are distinct and separate things, let me tell you about the existence of XWayland. XWayland is a set of patches applied to the X.Org server codebase that allows an &lt;strong&gt;X server&lt;/strong&gt; to run on top &lt;strong&gt;of the Wayland protocol&lt;/strong&gt;. These patches are developed and maintained by the Wayland developers to provide compatibility with X11 applications during the transition to Wayland.&lt;/p&gt;

&lt;p&gt;A bit of background on Wayland: it’s much younger than X11, which was developed over 40 years ago. While X11 has been updated over time, it still struggles with an old-style architecture, legacy code, etc. Wayland, on the other hand, was created completely outside the X11 ecosystem, including X.Org display servers, so it has no dependency on that old technology.&lt;/p&gt;

&lt;p&gt;However, Wayland isn’t brand new—it wasn’t released just last year. Its current status shows increasing adoption. Both GNOME and KDE Plasma desktop environments have transitioned to Wayland. Debian supports it seamlessly: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Xorg is the default X Window server since Debian 4.0 (etch). For Debian 10 and later, the default human interface protocol is Wayland. (&lt;a href="https://wiki.debian.org/Xorg" rel="noopener noreferrer"&gt;source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For a long time, the bottleneck was NVIDIA’s proprietary drivers, but that issue has been largely resolved: NVIDIA causes fewer and fewer troubles for Wayland-based systems with each driver version update.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The display server communicates with its clients over the display server protocol, a communications protocol. The display server is a key component in any graphical user interface, specifically the windowing system.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, here’s the answer to the rhetorical question I posed earlier in this article (&lt;em&gt;After I installed a very minimal version of Debian that, at the beginning, had only the standard system utilities, &lt;strong&gt;why&lt;/strong&gt; there is no way I can run any GUI applications?&lt;/em&gt;): After a minimal installation of Debian, the critical component missing is the Display Server. That’s why I can only use the terminal until I install a display server on the system. &lt;/p&gt;

&lt;p&gt;Therefore, I have to proceed with the installation of a display server. And after all this praise for Wayland—and maybe you’ve seen it hyped on Reddit and other forums—I will go for the X11 communication protocol and the X.Org Server. The reason? The key software I plan to use, &lt;code&gt;bspwm&lt;/code&gt;, requires it.&lt;/p&gt;
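&lt;p&gt;By the way, once you are logged into a graphical session, you can check which of the two protocols it speaks. A small sketch; on a console-only login, this typically reports &lt;code&gt;tty&lt;/code&gt;:&lt;/p&gt;

```shell
# XDG_SESSION_TYPE is set at login: "x11", "wayland",
# or "tty" for a plain console; fall back to "unknown" if unset.
session="${XDG_SESSION_TYPE:-unknown}"
echo "session type: $session"
```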

&lt;p&gt;BSPWM is a tiling window manager. I’ll update the previous diagram to visualize how it fits into the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffb0hq3nmd1jfhcupkj3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffb0hq3nmd1jfhcupkj3d.png" alt=" " width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, in my setup BSPWM will be doing the job of window management: opening windows for the apps I launch and arranging them for me. But what does “tiling” mean?&lt;/p&gt;

&lt;p&gt;In the context of window managers, tiling refers to a method of organizing and displaying application windows on a screen in a non-overlapping, grid-like manner. Unlike traditional &lt;strong&gt;stacking&lt;/strong&gt; window managers (like those used in GNOME, KDE), where windows can overlap and need to be manually resized or moved, a tiling window manager &lt;strong&gt;automatically&lt;/strong&gt; arranges windows to make the best use of available screen space.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bspwm&lt;/code&gt; arranges windows as the leaves of a full binary tree.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq2pkhv1c2xv11unlnsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq2pkhv1c2xv11unlnsq.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bspwm&lt;/code&gt; is very much in line with the "Debian way" of software—it doesn’t try to absorb a lot of functionality into itself. Instead, some functionality is delegated to other software. For example, in the diagram above, you can see &lt;code&gt;picom&lt;/code&gt; above &lt;code&gt;bspwm&lt;/code&gt;. &lt;code&gt;picom&lt;/code&gt; is a compositor responsible for adding extra effects to the windows managed by &lt;code&gt;bspwm&lt;/code&gt;, like transparency, animations, etc. However, &lt;code&gt;bspwm&lt;/code&gt; can run perfectly fine without it!&lt;/p&gt;

&lt;p&gt;What &lt;code&gt;bspwm&lt;/code&gt; cannot work without is the &lt;code&gt;sxhkd&lt;/code&gt; software. This is because &lt;code&gt;bspwm&lt;/code&gt; doesn’t handle any keyboard or pointer inputs on its own. &lt;code&gt;sxhkd&lt;/code&gt; is needed to translate keyboard and pointer events into &lt;code&gt;bspc&lt;/code&gt; invocations. &lt;code&gt;bspc&lt;/code&gt; is a program that sends messages to &lt;code&gt;bspwm&lt;/code&gt; via its socket.&lt;/p&gt;
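&lt;p&gt;For example, a &lt;code&gt;sxhkdrc&lt;/code&gt; entry like this (a hypothetical binding, just to illustrate the mechanism) maps Super+Left to a &lt;code&gt;bspc&lt;/code&gt; call that focuses the window to the west:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sxhkdrc fragment: when Super+Left is pressed,
# sxhkd runs the indented command, and bspc passes
# the message to bspwm through its socket
super + Left
    bspc node -f west
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;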


&lt;h4&gt;
  
  
  Display server: Xorg
&lt;/h4&gt;

&lt;p&gt;Installing Xorg is as simple as running &lt;code&gt;sudo apt install xserver-xorg-core&lt;/code&gt;. However, if you previously installed NVIDIA drivers using the &lt;code&gt;nvidia-driver&lt;/code&gt; package (as in my case), it might already be installed. Below is the output of the command I used to check whether this package is already installed (&lt;code&gt;dpkg -l | grep xserver-xorg*&lt;/code&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favi8mipwydezeywi70tp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favi8mipwydezeywi70tp.png" alt=" " width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;nvidia-driver&lt;/code&gt; package pulled in &lt;code&gt;xserver-xorg-video-nvidia&lt;/code&gt; as a dependency, which, in turn, brought in the &lt;code&gt;xserver-xorg-core&lt;/code&gt; package. Here are the commands to run if your system does not have these packages installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dpkg -l | grep xserver-xorg*
#if output is empty:
#the X11 server itself without drivers and utilities:
$ sudo apt install xserver-xorg-core
#alternative for "full" xorg server installation:
#sudo apt install xorg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that you won't have the &lt;code&gt;startx&lt;/code&gt; command if you install just the X11 server itself, so you will not be able to start a graphical display right after installation. The &lt;code&gt;startx&lt;/code&gt; command comes with the &lt;code&gt;xinit&lt;/code&gt; package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install xinit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, if I execute &lt;code&gt;startx&lt;/code&gt;, the Xorg display server starts! &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB! If you cannot launch the X server with &lt;code&gt;startx&lt;/code&gt; and your error says something like (EE) no screens/output found (EE), it’s most likely an issue with the NVIDIA driver not writing anything properly into the Xorg configuration file during installation. That happened to me once on Debian Sid. The solution from the &lt;a href="https://wiki.debian.org/NvidiaGraphicsDrivers" rel="noopener noreferrer"&gt;Debian Wiki: NVIDIA Proprietary Driver&lt;/a&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;As the NVIDIA driver is not autodetected by Xorg, a configuration file is required to be supplied. Modern Debian packages for the NVIDIA driver &lt;strong&gt;should not require you to do anything listed here&lt;/strong&gt; as they handle this automatically during installation, but if you run into issues, or are using a much older version of Debian, you may try going through these steps.&lt;br&gt;
Automatic:&lt;br&gt;
Install the &lt;code&gt;nvidia-xconfig&lt;/code&gt; package, then run &lt;code&gt;sudo nvidia-xconfig&lt;/code&gt;. It will automatically generate a Xorg configuration file at /etc/X11/xorg.conf.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
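&lt;p&gt;In command form, the fix from the wiki boils down to two lines:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install nvidia-xconfig
# generates /etc/X11/xorg.conf with a section for the NVIDIA driver:
$ sudo nvidia-xconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;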

&lt;p&gt;This is how my screen changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm0ecuy1yokut58q9x0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm0ecuy1yokut58q9x0n.png" alt=" " width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the poorly displayed &lt;code&gt;nvidia-smi&lt;/code&gt; output: the terminal is a bit odd and is not using all the available space. By the way, you can see some colors in the terminal area. The default terminal emulator used here is &lt;code&gt;xterm&lt;/code&gt;. It has its own "window", though! But there’s no window manager yet to handle it properly. Here’s proof that it’s not the fault of a badly configured Xorg server misreading my screen size: when I run &lt;code&gt;xrandr&lt;/code&gt;, I see the correct resolution for my monitor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx8rt24dqah0urs76fsu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx8rt24dqah0urs76fsu.png" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a id="bspwm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Installing and configuring the BSPWM Window Manager
&lt;/h4&gt;

&lt;p&gt;For convenience, to proceed with the &lt;code&gt;bspwm&lt;/code&gt; installation, I return to the terminal space outside the display server. However, there’s no &lt;code&gt;stopx&lt;/code&gt; command to stop the X server started with &lt;code&gt;startx&lt;/code&gt;, so I just have to kill the X server process. When I run &lt;code&gt;ps aux | grep X&lt;/code&gt;, I can retrieve the PID (process ID) and terminate it with &lt;code&gt;kill&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzup4h63su3dufplrf4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzup4h63su3dufplrf4p.png" alt=" " width="754" height="131"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kill 1191
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
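&lt;p&gt;If you don't want to look up the PID manually, &lt;code&gt;pkill&lt;/code&gt; can match the process by name (assuming the server process is called &lt;code&gt;Xorg&lt;/code&gt;, as in the screenshot above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -x requires an exact match of the process name
$ pkill -x Xorg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;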



&lt;p&gt;BSPWM installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install bspwm
#this will also install the sxhkd package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your window manager is installed. However, if you run &lt;code&gt;bspwm&lt;/code&gt; in the terminal, it won't work, and if you start the display server, you'll most likely see a black screen. This happens because the window manager is present but not properly configured. You can try it anyway—this will teach you how to open a new TTY session using shortcut keys if something goes wrong in the future. For me, it's Ctrl+Alt+F2/F3/F4/F5, etc.: each F key opens a new terminal session or, if it's already in use, switches to it. This way you do not lose control over your machine and do not have to resort to an emergency power-off.&lt;/p&gt;




&lt;h4&gt;
  
  
  bspwm configuration: configuration directory
&lt;/h4&gt;

&lt;p&gt;So, let's configure &lt;code&gt;bspwm&lt;/code&gt; and its ally &lt;code&gt;sxhkd&lt;/code&gt;! This is done by creating configuration files for them. But where? &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The default configuration file is $XDG_CONFIG_HOME/bspwm/bspwmrc: this is simply a shell script that calls bspc. (&lt;a href="https://github.com/baskerville/bspwm" rel="noopener noreferrer"&gt;Source: BSPWM GitHub page&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;$XDG_CONFIG_HOME is an environment variable, so you can check its value by running &lt;code&gt;echo $XDG_CONFIG_HOME&lt;/code&gt;. I’m quite sure the output will be empty. Why? First of all, what is XDG?&lt;/p&gt;

&lt;p&gt;XDG originally stood for "X Desktop Group," but now it stands for "Cross-Desktop Group." When it comes to desktop environments like GNOME, XFCE, KDE Plasma, etc., you can actually have more than one installed on your system, and you can switch between them. However, their developers need to be somewhat coordinated—especially regarding configuration file directories—otherwise, the system can become a mess. This is where freedesktop.org steps in to help.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;freedesktop.org hosts the development of free and open source software, focused on interoperability and shared technology for open-source graphical and desktop systems. We do not ourselves produce a desktop, but we aim to help others to do so. (&lt;a href="https://www.freedesktop.org/wiki/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And as part of this interoperability effort, there is a document called the &lt;a href="https://specifications.freedesktop.org/basedir-spec/latest/" rel="noopener noreferrer"&gt;XDG Base Directory Specification&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this specification, you’ll find more details about $XDG_CONFIG_HOME:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;$XDG_CONFIG_HOME defines the base directory relative to which user-specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used. (&lt;a href="https://specifications.freedesktop.org/basedir-spec/latest/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you have not set $XDG_CONFIG_HOME (i.e., the output of echo $XDG_CONFIG_HOME is empty), &lt;code&gt;bspwm&lt;/code&gt; will search for its configuration in &lt;code&gt;$HOME/.config/bspwm/&lt;/code&gt;. Your user's HOME directory should be set by default as it is one of the standard variables in Linux environments, and you can check it by running &lt;code&gt;echo $HOME&lt;/code&gt;. (When you run &lt;code&gt;cd&lt;/code&gt;, it brings you to your home directory exactly because of this variable.)&lt;/p&gt;
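&lt;p&gt;You can reproduce this fallback logic yourself with shell parameter expansion; this is essentially the lookup that XDG-aware programs perform when resolving the config directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# print $XDG_CONFIG_HOME if it is set and non-empty,
# otherwise fall back to $HOME/.config
echo "${XDG_CONFIG_HOME:-$HOME/.config}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;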

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hkb54a6q8sfebfl6917.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hkb54a6q8sfebfl6917.png" alt=" " width="362" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I want to draw your attention to the last two outputs from the screenshot. Here’s what I did: I logged in as the root user, and as you can see, the root user’s &lt;code&gt;HOME&lt;/code&gt; directory is different!&lt;/p&gt;

&lt;p&gt;Why does this matter? Well, if you put BSPWM's configuration files in your &lt;strong&gt;current user’s&lt;/strong&gt; home directory, other users won’t have access to the nicely configured UI (using &lt;code&gt;bspwm&lt;/code&gt;'s and other tools' configuration files I’ll show you later). The root user won’t have access either. &lt;/p&gt;

&lt;p&gt;So, what happens if you don’t set &lt;code&gt;$XDG_CONFIG_HOME&lt;/code&gt; and place the BSPWM and SXHKD configuration files in &lt;code&gt;.config&lt;/code&gt; within your user’s home directory (&lt;code&gt;/home/&amp;lt;your-username&amp;gt;/.config&lt;/code&gt;)? Here’s what will happen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;When you log in as your default user&lt;/strong&gt; and start the X display server, you’ll see your configured setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When you log in as root&lt;/strong&gt; and start the display server, you’ll see a black screen.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When you log in as any other user&lt;/strong&gt;, they’ll also see a black screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solution to avoid these situations—if you truly need a shared configuration—is to place the configuration files somewhere &lt;strong&gt;outside of any single user’s home directory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When it comes to X display server configurations, there’s a standardized location for this: &lt;code&gt;/etc/xdg&lt;/code&gt;. By placing the configuration files there, they become accessible system-wide for all users (for reading: a user without &lt;code&gt;sudo&lt;/code&gt; privileges cannot modify them). &lt;/p&gt;

&lt;p&gt;If you really need a setup for multiple users, you can place all the configurations I mention below in &lt;code&gt;/etc/xdg/..&lt;/code&gt;, repeating the directory structure I will use. Both &lt;code&gt;sxhkd&lt;/code&gt; and &lt;code&gt;bspwm&lt;/code&gt; accept a "-c" option when they are executed; this option points to a "custom" location of the configuration file. Both programs search for the location of their config by consulting the environment variable &lt;code&gt;XDG_CONFIG_HOME&lt;/code&gt;. If this variable is not set, the fallback default directory for the config is &lt;code&gt;$HOME/.config/...&lt;/code&gt;. Even though it is possible to set the XDG_CONFIG_HOME environment variable to point to the &lt;code&gt;/etc/xdg/&lt;/code&gt; directory, &lt;strong&gt;I would rather not, because many apps will try to write something there during installation if this $XDG_CONFIG_HOME variable is detected&lt;/strong&gt;. &lt;/p&gt;
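&lt;p&gt;Under this multi-user layout (the same subdirectories, but under &lt;code&gt;/etc/xdg&lt;/code&gt; instead of &lt;code&gt;$HOME/.config&lt;/code&gt;; the exact paths are my assumption for illustration), starting both programs with the &lt;code&gt;-c&lt;/code&gt; option would look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# point each program at the shared, system-wide config file
$ sxhkd -c /etc/xdg/sxhkd/sxhkdrc &amp;amp;
$ bspwm -c /etc/xdg/bspwm/bspwmrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;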




&lt;h4&gt;
  
  
  bspwm configuration: configuration files
&lt;/h4&gt;

&lt;p&gt;After installing bspwm, you can find examples of &lt;code&gt;sxhkd&lt;/code&gt;'s and &lt;code&gt;bspwm&lt;/code&gt;'s default configurations, which can be used as a starting point for your custom configurations. Both configuration files can be found in &lt;code&gt;/usr/share/doc/bspwm/examples&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Here, I want to introduce some slang related to what I’m about to do. The process of customizing Linux OSs is often called "&lt;em&gt;Linux ricing&lt;/em&gt;". This term is widely used on Reddit, especially in &lt;a href="https://www.reddit.com/r/unixporn/top/?t=all&amp;amp;rdt=63710" rel="noopener noreferrer"&gt;a subreddit dedicated to it&lt;/a&gt;, where people share their setups. Often, when a setup is particularly cool, you’ll see comments from others asking to share the "dot files".&lt;/p&gt;

&lt;p&gt;The term "dot files" actually refers to sharing the configurations used to achieve a specific UI setup. However, "dot files" (or more correctly, DotFiles) is not just slang—there’s even a &lt;a href="https://wiki.debian.org/DotFiles" rel="noopener noreferrer"&gt;Debian wiki page dedicated to DotFiles&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Why are DotFiles called this? As I mentioned earlier, the default path for BSPWM configs is $HOME/.config, which is a folder, not a file, but its name starts with a dot (.). If you navigate (&lt;code&gt;cd&lt;/code&gt;) to your home directory and list the contents with &lt;code&gt;ls&lt;/code&gt;, you’ll see just regular files and directories. However, if you run &lt;code&gt;ls -a&lt;/code&gt;, you’ll see much more; in particular, you will see files and directories whose names start with a dot (.).&lt;/p&gt;

&lt;p&gt;To avoid making this article overly long, I’ll also share my "DotFiles" on GitHub.&lt;/p&gt;

&lt;p&gt;Let's proceed with the BSPWM configuration. You will have to copy the example configurations to the directory where BSPWM will search for them. If you are fine with the configuration working only for your current user, you have to create subdirectories in &lt;code&gt;$HOME/.config&lt;/code&gt; and place the configuration files there correctly. I will be keeping my configuration files in &lt;code&gt;/etc/xdg&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd (this will teleport you to your $HOME directory)
# you can double check
$ pwd 
$ ls -a 
#if you do not see .config directory in the output, you will have to create it
$ mkdir .config
#create a directory for bspwm config
$ mkdir .config/bspwm
#create a SEPARATE directory for sxhkd config
$ mkdir .config/sxhkd
# copy configuration files there:
$ cp /usr/share/doc/bspwm/examples/bspwmrc .config/bspwm/
$ cp /usr/share/doc/bspwm/examples/sxhkdrc .config/sxhkd/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Now, I can start modifying them. If you’ve chosen to keep the BSPWM configuration accessible for all users and placed it outside of any user's &lt;code&gt;home&lt;/code&gt; directory (in &lt;code&gt;/etc/xdg&lt;/code&gt;), you will have to use &lt;code&gt;sudo&lt;/code&gt; (elevated privileges) to modify these files under &lt;code&gt;/etc&lt;/code&gt;. If you keep your configs in &lt;code&gt;$HOME/.config&lt;/code&gt;, you don’t need &lt;code&gt;sudo&lt;/code&gt;!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the example of &lt;code&gt;bspwmrc&lt;/code&gt; configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe75lh8bmqmwzo5fnsuu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe75lh8bmqmwzo5fnsuu9.png" alt=" " width="611" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ll start by modifying it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd .config/bspwm
$ sudo vim.tiny bspwmrc
#Let's see what is there:
#----------First Line-----------#
pgrep -x sxhkd &amp;gt; /dev/null || sxhkd &amp;amp;
# this command ensures that sxhkd is running along bspwm
#pgrep -x searches for processes sxhkd, if sxhkd is already running, pgrep exits with a success status
#&amp;gt; /dev/null discards the standard output of pgrep. This means the command doesn’t clutter your terminal with process IDs.
#|| - a logical OR operator. The command after the || only runs if the previous command, pgrep, did not find a sxhkd process. In that case, it starts sxhkd &amp;amp; in the background (&amp;amp; means run in the background).
#As I will not be keeping configuration files in the default directory, I have to tell sxhkd where its configuration file is when it is launched:
#pgrep -x sxhkd &amp;gt; /dev/null || sxhkd -c "/etc/xdg/bspwm/sxhkdrc" &amp;amp;
#----------Second Line-----------#
bspc monitor -d I II III ...
# this line sets up "desktops", or workspaces; you can think of them as tabs in a browser. You can navigate between them and have various applications open on each. To avoid clutter, I stop at 5 workspaces. They are Roman numerals by default, but you can name them whatever you want: 1, 2, 3 etc. or A, B, C... If you install fonts with icons later, you can even use icons!
# I just trim everything after V.
bspc monitor -d I II III IV V
#NB! If you have more than one monitor, you can use separate bspc monitor commands to assign different sets of workspaces to each monitor.
#bspc monitor HDMI-1 -d I II III
#bspc monitor XX-1 -d IV V VI

#----------Lines 3-7 -----------#
bspc config ...
#these lines hold the default settings for the arrangement of windows; for now I leave them as they are

#----------Lines 8-12 -----------#
bspc rule -a ...
#these lines set program-specific rules. I comment them out to keep the syntax for later, but I definitely do not need these exact rules:
#bspc rule -a ...
#bspc rule -a ...
#bspc rule -a ...
# ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
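&lt;p&gt;Putting those edits together, a trimmed-down &lt;code&gt;bspwmrc&lt;/code&gt; along these lines (a sketch based on the example file, not my complete config) looks like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#! /bin/sh

# make sure the hotkey daemon runs alongside bspwm
pgrep -x sxhkd &amp;gt; /dev/null || sxhkd &amp;amp;

# five workspaces on the (single) monitor
bspc monitor -d I II III IV V

# window arrangement defaults kept from the example file
bspc config border_width 2
bspc config window_gap 12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;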



&lt;p&gt;Next, I’ll move on to modifying &lt;code&gt;sxhkdrc&lt;/code&gt;. In this configuration file, I need to set the hotkey combinations that will handle opening programs for me. &lt;/p&gt;




&lt;h3&gt;
  
  
  4. Installing Browser (Brave browser) and Terminal Emulator (Alacritty)
&lt;/h3&gt;

&lt;p&gt;To start, the most crucial program I need is a terminal emulator. There are two cool terminal emulators, &lt;a href="https://sw.kovidgoyal.net/kitty/" rel="noopener noreferrer"&gt;Kitty&lt;/a&gt; and &lt;a href="https://alacritty.org/" rel="noopener noreferrer"&gt;Alacritty&lt;/a&gt;; I will be installing Kitty. I also need a browser, and my favorite is &lt;a href="https://brave.com/" rel="noopener noreferrer"&gt;Brave Browser&lt;/a&gt;, so I’ll install these two first.&lt;/p&gt;

&lt;p&gt;NB! Alacritty can run ONLY on a GPU, so if it is a problem for some reason, consider Kitty or another option like &lt;a href="https://gnome-terminator.org/" rel="noopener noreferrer"&gt;Terminator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debian 12 Stable vs Debian Testing Alert!&lt;/strong&gt;&lt;br&gt;
Current version of Kitty: 0.41.1&lt;br&gt;
Kitty version in bookworm repo: 0.26.5-5&lt;br&gt;
Kitty version in trixie repo: 0.39.1-1&lt;/p&gt;

&lt;p&gt;Current version of Alacritty: 0.15.1&lt;br&gt;
Alacritty version in bookworm repo: 0.11.0-4&lt;br&gt;
Alacritty version in trixie repo: 0.15.1-1&lt;/p&gt;

&lt;p&gt;The difference between Alacritty from the bookworm repo and the trixie repo is quite notable, because Alacritty 0.11 uses a .yml configuration file while Alacritty 0.15 uses a .toml configuration file. &lt;/p&gt;

&lt;p&gt;On Debian Stable, to get Alacritty 0.15, you will have to build Alacritty from source.&lt;/p&gt;

&lt;p&gt;I install Kitty:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install kitty
$ kitty -v
kitty 0.39.1 created by Kovid Goyal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install Brave Browser I execute a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# $ sudo apt install curl

$ curl -fsS https://dl.brave.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, I need to go to the &lt;code&gt;sxhkd&lt;/code&gt; configuration file, &lt;code&gt;sxhkdrc&lt;/code&gt;, and add the hotkey shortcuts that will handle opening windows with these two apps for me!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd
$ sudo vim.tiny .config/sxhkd/sxhkdrc 
# I replace urxvt terminal emulator with kitty
super + Return 
   kitty
# I add a new shortcut for Brave Browser:
super + b
   brave-browser
#A bit on syntax: 
# - "super" is the "Windows" key on your keyboard; in Linux it is called super
# - "+" joins the keys that are pressed together with the "super" key
# - "Return" is the Enter key
# - "b" is just a letter of my choice; notice that it is lowercase
# - the indented command must be exactly the command you would use to launch the app from a terminal. If you have only ever launched apps from desktop icons, you will have to google a bit to find the exact commands. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything is almost ready! All that is left is to tell the X display server and the X11 communication protocol which window manager they have to start. To do this, you have to create another configuration file, &lt;code&gt;.xinitrc&lt;/code&gt;. This file should be in your home directory: not in &lt;code&gt;.config&lt;/code&gt;, just in the &lt;code&gt;$HOME&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# "teleport" to your home directory
$ cd
# check for the existence of .xinitrc file
$ ls -a
#if it is not there (most probably), you will have to create one
$ touch .xinitrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, now the &lt;code&gt;.xinitrc&lt;/code&gt; file is created, but it is empty. Inside, I need to add just a couple of lines to point to launching the &lt;code&gt;bspwm&lt;/code&gt; window manager.&lt;/p&gt;

&lt;p&gt;This file will be automatically used by the &lt;code&gt;startx&lt;/code&gt; command, which starts a graphical X session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB!&lt;/strong&gt; In classical desktop environments like GNOME, KDE, etc., the process of &lt;strong&gt;starting a graphical session is handled by a Display Manager&lt;/strong&gt;. A Display Manager is the graphical UI application that you see with a login form prompting you to enter the username of the user you want to log in as and the corresponding password. For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgnpam76r0pg8mv8w6qt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgnpam76r0pg8mv8w6qt.jpg" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;
Ubuntu Display Manager



&lt;p&gt;However, Display Managers are not always GUI-based; sometimes they are TUI (Text User Interface):&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60pu48vfgbgcv0p6fupl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60pu48vfgbgcv0p6fupl.png" alt=" " width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://github.com/fairyglade/ly" rel="noopener noreferrer"&gt;Ly&lt;/a&gt; Display manager



&lt;p&gt;A very popular choice for custom UI setups is &lt;a href="https://wiki.debian.org/LightDM" rel="noopener noreferrer"&gt;LightDM&lt;/a&gt;, which provides a minimalistic GUI for logging in. However, a Display Manager is optional and doesn’t add an extra layer of security. You can simply log in from the terminal and then use the &lt;code&gt;startx&lt;/code&gt; command to start an X graphical session. However, there is one thing that Display Managers handle automatically, which you will need to configure manually...&lt;/p&gt;

&lt;p&gt;To command X11 to execute &lt;code&gt;bspwm&lt;/code&gt; when the &lt;code&gt;startx&lt;/code&gt; command is launched, all it takes is &lt;code&gt;exec bspwm&lt;/code&gt;. However, to make all your GUI applications work properly, there should be another player involved, and this other participant is D-Bus, a.k.a. Desktop Bus. &lt;/p&gt;
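&lt;p&gt;So the heart of &lt;code&gt;.xinitrc&lt;/code&gt; is a single line; a common pattern (one option, not the only way) is to wrap the window manager so that a session D-Bus instance is started along with it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# minimal ~/.xinitrc: X11 runs bspwm, and the session ends when bspwm exits
exec bspwm

# alternative sketch (needs the dbus-x11 package): start a D-Bus
# session bus that lives as long as the window manager does
# exec dbus-launch --exit-with-session bspwm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;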


&lt;h3&gt;
  
  
  5. About Desktop Bus and inter-process communication (IPC)
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;D-Bus is a message bus system, a simple way for applications to talk to one another. In addition to inter-process communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a "single instance" application or daemon, and to launch applications and daemons on demand when their services are needed.&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;D-Bus per se is a daemon, meaning a process that runs in the background. On Debian 12, it is managed by &lt;code&gt;systemd&lt;/code&gt;. The D-Bus daemon should be preinstalled and up and running; you can check it with &lt;code&gt;systemctl status dbus&lt;/code&gt;. However, the output is quite different if you run &lt;code&gt;systemctl --user status dbus&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdcaxlw6do270ih51s2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdcaxlw6do270ih51s2u.png" alt=" " width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, by default, D-Bus &lt;strong&gt;System&lt;/strong&gt; Message Bus is up and running, while D-Bus &lt;strong&gt;User&lt;/strong&gt; Message Bus is not running!&lt;/p&gt;

&lt;p&gt;Will your &lt;code&gt;bspwm&lt;/code&gt; setup work even like this? Yes. Will you be able to launch applications? Yes, most probably. However, here is the log of Brave Browser when I simply started an X session with &lt;code&gt;exec bspwm&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5yu9u28q57uij42r8ia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5yu9u28q57uij42r8ia.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See: Brave is complaining (in the left terminal window, you can see many repeating errors related to the bus: &lt;code&gt;ERROR:bus.cc: Failed to connect to the bus&lt;/code&gt;), but running. For many applications it is crucial to have a D-Bus Message Bus up and running for the X session they run in; without it, they will not function properly, because they use it to communicate with other apps!&lt;/p&gt;

&lt;p&gt;Having the D-Bus message bus running for the session is not trivial to configure. As I mentioned, the system D-Bus message bus is up and running right from the start of the boot process because it is one of the crucial systemd services.&lt;/p&gt;

&lt;p&gt;Each session you start, when configured properly, must have a &lt;code&gt;DBUS_SESSION_BUS_ADDRESS&lt;/code&gt;. When you boot into a TTY (like in my case, without a GUI environment), the session is started, and &lt;code&gt;systemd&lt;/code&gt; handles it. So, I see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5qy96lkbozfjk6eaamx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5qy96lkbozfjk6eaamx.png" alt=" " width="536" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I logged in as my user, &lt;code&gt;alisa&lt;/code&gt;, and by default, Debian has the &lt;code&gt;dbus-user-session&lt;/code&gt; package installed. As a result, I have the &lt;code&gt;DBUS_SESSION_BUS_ADDRESS&lt;/code&gt; variable set, which is critical. Without this environment variable, applications you start are unaware of the D-Bus session. So, the objective is to have this variable set (&lt;code&gt;echo $DBUS_SESSION_BUS_ADDRESS&lt;/code&gt; should not be empty!).&lt;/p&gt;
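&lt;p&gt;A quick way to confirm that the session bus is not only configured but actually reachable is to ping it over D-Bus itself (a small sanity check, run as your regular user; the exact socket path may differ on your system):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo $DBUS_SESSION_BUS_ADDRESS
unix:path=/run/user/1000/bus
# ask the bus itself which names are registered on it
$ dbus-send --session --dest=org.freedesktop.DBus --type=method_call \
  --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;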

&lt;p&gt;When I switch to the root user with &lt;code&gt;su -&lt;/code&gt;, which does not retain any user environment variables, you can see that there is no D-Bus address. However, if I use &lt;code&gt;su&lt;/code&gt; without the &lt;code&gt;-&lt;/code&gt;, the situation is different. Please note that this is a weak example, because this way of switching users is not a full login and is usually used for other purposes. &lt;em&gt;In general, the systemd user space and D-Bus user message bus configuration is quite complicated, so my explanation here is necessarily simplified and may not cover every case.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When you start an X server display with &lt;code&gt;startx&lt;/code&gt;, you are starting a graphical session, and it behaves differently. It does not require a login, so systemd does not automatically start a D-Bus session for it.&lt;/p&gt;

&lt;p&gt;You might think that after entering the graphical session, you can manually run &lt;code&gt;systemctl --user start dbus&lt;/code&gt; to start the D-Bus session. But this will not work because it’s not that simple. When you start it manually, it runs, and a socket is created. However, systemd behaves differently for users, and you will need to perform various additional steps to make it work properly, leveraging only &lt;code&gt;systemd&lt;/code&gt; capabilities and the &lt;code&gt;dbus-user-session&lt;/code&gt; package.&lt;/p&gt;

&lt;p&gt;Since this is a bit out of the scope of this article, I just recommend reading more on the topic for a deeper understanding, if you are curious. You can start &lt;a href="https://wiki.archlinux.org/title/Systemd/User" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you follow a guide like &lt;a href="https://www.baeldung.com/linux/systemd-session-dbus-headless-setup" rel="noopener noreferrer"&gt;this one&lt;/a&gt;, you’ll end up with a D-Bus user session shared across all sessions and users. While functional, this approach can be considered suboptimal and not exactly the "Debian way," even though many desktop environments choose this route. For more details, you can refer to &lt;a href="https://lists.debian.org/debian-devel/2016/08/msg00554.html" rel="noopener noreferrer"&gt;this communication&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;However, if you're not keen on tweaking &lt;code&gt;systemd&lt;/code&gt; services and socket configurations, even here Debian has you covered! The objective is to start the D-Bus user message bus for the BSPWM graphical session that is about to be initialized (with &lt;code&gt;exec bspwm&lt;/code&gt;). There’s a command-line utility that handles this: &lt;code&gt;dbus-launch&lt;/code&gt;. This utility is part of the Debian &lt;code&gt;dbus-x11&lt;/code&gt; package.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Package: dbus-x11&lt;br&gt;
simple interprocess messaging system (X11 deps)&lt;br&gt;
D-Bus is a message bus, used for sending messages between applications. Conceptually, it fits somewhere in between raw sockets and CORBA in terms of complexity.&lt;br&gt;
This package contains the dbus-launch utility, which automatically launches one D-Bus session bus per X11 display per user. &lt;strong&gt;If the dbus-user-session package is also installed, it takes precedence over this package&lt;/strong&gt;. &lt;a href="https://packages.debian.org/bookworm/dbus-x11" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I install it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install dbus-x11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have googled around &lt;code&gt;bspwm&lt;/code&gt; installation and configuration, you may have encountered the line you should add to your &lt;code&gt;.xinitrc&lt;/code&gt; to start &lt;code&gt;bspwm&lt;/code&gt; when the graphical session is launched:&lt;br&gt;
&lt;code&gt;exec dbus-launch --exit-with-x11 bspwm&lt;/code&gt;. This, however, is not entirely correct, as it should be &lt;code&gt;exec dbus-launch --exit-with-session bspwm&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I modify the &lt;code&gt;.xinitrc&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd
$ vim.tiny .xinitrc
 exec dbus-launch --exit-with-session bspwm 
#if you placed the bspwm configuration in /etc/xdg, you do need to specify the path to the configuration file:
# exec dbus-launch --exit-with-session bspwm -c "/etc/xdg/bspwm/bspwmrc"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;dbus-launch --exit-with-session bspwm&lt;/code&gt; works well for me because I include only the BSPWM launch command in &lt;code&gt;.xinitrc&lt;/code&gt;. All other programs that are part of the session and run in the background are started from the BSPWM configuration.&lt;/p&gt;

&lt;p&gt;However, if you want to start them separately, you need a slightly different command: &lt;code&gt;exec dbus-launch --exit-with-session ~/.xinitrc&lt;/code&gt;.&lt;/p&gt;
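&lt;p&gt;For illustration, the idea is to keep the wrapper command as a one-liner and start all the session programs from a separate script. Here is a minimal sketch of such a script; the script name &lt;code&gt;~/.xsession&lt;/code&gt; and the background programs (a compositor, a wallpaper setter) are placeholders for illustration, not part of my actual setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
# hypothetical ~/.xsession, started via: exec dbus-launch --exit-with-session ~/.xsession
picom &amp;amp;              # example background program: compositor
nitrogen --restore &amp;amp;  # example background program: wallpaper setter
exec bspwm             # the window manager replaces the shell and keeps the session alive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;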

&lt;p&gt;In my case, to ensure complete tidiness, I go a bit further and add a check for whether the D-Bus user bus is already running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd
$ vim .xinitrc
 if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
    exec dbus-launch --exit-with-session bspwm
 else
    # the user bus is already running, so start bspwm directly
    exec bspwm
 fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, it's time to try it! In terminal I execute the command &lt;code&gt;startx&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F231kdzygaxmn50kgplgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F231kdzygaxmn50kgplgl.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that the previous D-Bus-related error, “ERROR: bus.cc: Failed to connect to the bus”, has been resolved. However, you might notice other errors: there are some related to &lt;code&gt;kwalletd&lt;/code&gt;. This happens because Brave Browser tries to launch &lt;a href="https://apps.kde.org/kwalletmanager5/" rel="noopener noreferrer"&gt;KDE Wallet Manager&lt;/a&gt; to use it as a security mechanism for storing saved passwords. KDE Wallet Manager is part of the KDE Plasma desktop environment, and since I don’t have KDE Plasma installed, I don’t have the wallet manager either.&lt;/p&gt;

&lt;p&gt;That said, Brave still runs fine without it. If you’d like to use KDE Wallet Manager, it’s easy to &lt;a href="https://packages.debian.org/bookworm/kwalletmanager" rel="noopener noreferrer"&gt;install&lt;/a&gt; with &lt;code&gt;sudo apt install kwalletmanager&lt;/code&gt;. Don't worry, it will not install the whole KDE Plasma DE, even though you will see quite a long list of packages to be installed on your system.&lt;/p&gt;

&lt;p&gt;At the next &lt;code&gt;startx&lt;/code&gt; session, you’ll most likely be prompted to configure the manager by either setting a password or pointing to an existing GPG key (you need to &lt;strong&gt;generate that beforehand&lt;/strong&gt;, details about how to do it are &lt;a href="https://keyring.debian.org/creating-key.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;However, KWallet may not work properly or might not pop up at all if another piece of configuration is missing: the XDG desktop portal. I’ll discuss the &lt;a href="https://wiki.archlinux.org/title/XDG_Desktop_Portal" rel="noopener noreferrer"&gt;XDG Desktop Portal&lt;/a&gt; further in my next article, when I set up the sound server to configure audio devices. It’s important for the sound server, so I decided to move the deeper explanation to the next part, where I can show you more logs.&lt;/p&gt;

&lt;p&gt;For now, by my standards, everything is running as it should, and the base functionality is ready! Next, I just need to install additional software to complete my custom UI setup!&lt;/p&gt;




&lt;h3&gt;
  
  
  6. Installing Status Bar utility (polybar), File Manager (thunar), App Launcher (dmenu) and Notification Daemon (dunst).
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Status Bar: Polybar
&lt;/h4&gt;

&lt;p&gt;First, I need a status bar. In the context of a desktop environment, a status bar refers to:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77ba4akv3u61uw5b6p5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77ba4akv3u61uw5b6p5x.png" alt=" " width="800" height="16"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this I will be using &lt;a href="https://github.com/polybar/polybar" rel="noopener noreferrer"&gt;Polybar&lt;/a&gt;, which is installed with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install polybar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Polybar is a highly customizable tool, though it comes with some limitations. For example, you cannot use SVG icons directly, but you can use fonts that support icons and include characters from those fonts. This is how the status bar looks if you use the default configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i4nv95gkct0uq1vul0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i4nv95gkct0uq1vul0a.png" alt=" " width="800" height="27"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is detailed documentation on customization: &lt;a href="https://github.com/polybar/polybar/wiki" rel="noopener noreferrer"&gt;Polybar Wiki&lt;/a&gt;. Like BSPWM and SXHKD, Polybar is customizable through a configuration file:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;By default, polybar will load the config file from ~/.config/polybar/config.ini, /etc/xdg/polybar/config.ini, or /etc/polybar/config.ini depending on which it finds first.&lt;br&gt;
If you do not specify the name of the bar and your config file only contains a single bar, polybar will display that bar. Otherwise you have to explicitly specify bar name. (&lt;a href="https://github.com/polybar/polybar/wiki" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After installation, the default configuration file can be found at &lt;code&gt;/etc/polybar/config.ini&lt;/code&gt;. For now, I’ll leave the default configuration as it is. I plan to modify it in the next article, focusing on the main functionality I want to add or adjust. This includes status bar buttons for managing Wi-Fi connections, sound volume, Bluetooth connections, and display backlight.&lt;/p&gt;

&lt;p&gt;In fact, I plan to have two Polybar status bars—one at the top and another at the bottom of the screen. You can create objects like icons or even simple text labels (as buttons) on Polybar and attach shell scripts to them. These scripts will define the functionality triggered by button presses, mouse clicks, arrow keys, or even the mouse wheel!&lt;/p&gt;
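&lt;p&gt;To give an idea of what such a "button" looks like in Polybar terms, here is a hypothetical &lt;code&gt;custom/script&lt;/code&gt; module for &lt;code&gt;config.ini&lt;/code&gt; (the module name, label, and script path are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;; hypothetical module for config.ini
[module/wifi-button]
type = custom/script
; the text (or icon glyph) shown on the bar
exec = echo "WiFi"
; shell command to run on left click (an assumed helper script)
click-left = ~/.config/polybar/scripts/wifi-menu.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;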

&lt;h4&gt;
  
  
  Notification daemon: dunst
&lt;/h4&gt;

&lt;p&gt;A notification daemon handles the display of notifications, such as error messages or warnings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxsbetviovpl7e98vq8t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxsbetviovpl7e98vq8t.png" alt=" " width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/dunst-project/dunst?tab=readme-ov-file" rel="noopener noreferrer"&gt;Dunst&lt;/a&gt; is a highly configurable and lightweight notification daemon. It can be configured to display various useful notifications, such as messages from email, social media, and more. You can install it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install dunst
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default configuration will be placed in &lt;code&gt;/etc/xdg/dunst/dunstrc&lt;/code&gt;. If you prefer to keep all configurations in &lt;code&gt;$HOME/.config&lt;/code&gt;, you’ll need to copy it there using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd
$ mkdir .config/dunst
$ cp /etc/xdg/dunst/dunstrc $HOME/.config/dunst
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  How to launch Dunst?
&lt;/h4&gt;

&lt;p&gt;If you've ever seen the dotfiles of other people who use Dunst in their Linux ricing setups, maybe you noticed that they just add the line &lt;code&gt;dunst &amp;amp;&lt;/code&gt; to their &lt;code&gt;bspwmrc&lt;/code&gt; config. If you copied the config file to &lt;code&gt;$HOME/.config/dunst&lt;/code&gt;, that's fine, Dunst will find its config there. However, if you kept the config file where it was originally placed, in &lt;code&gt;/etc/xdg/dunst/dunstrc&lt;/code&gt; (like I do), you need to tell Dunst to look for it there with the &lt;code&gt;-conf&lt;/code&gt; argument (unless you explicitly set &lt;code&gt;$XDG_CONFIG_HOME&lt;/code&gt; to point to &lt;code&gt;/etc/xdg&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;However, there is more to it than configuration&lt;/strong&gt;. Spoiler: if you just add &lt;code&gt;dunst &amp;amp;&lt;/code&gt; to your &lt;code&gt;bspwmrc&lt;/code&gt; after installation, DUNST WILL NOT WORK, and you will most likely not notice it (it will just silently fail, unless you set up logging for it). Now I will explain why.&lt;/p&gt;

&lt;p&gt;First, Dunst and Polybar are different. Polybar runs in the background, and so does Dunst. But Dunst is a notification &lt;strong&gt;daemon&lt;/strong&gt;, meaning it's a service that can be managed by &lt;code&gt;systemd&lt;/code&gt; like other daemons. You can manage it with &lt;code&gt;systemctl&lt;/code&gt;, and it should be up and running whenever your X server is, so that apps that need to notify you about something can reach it. Let me show you some logs from the console before I start the display server with &lt;code&gt;startx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcipjcl5p7sgaxwagl24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcipjcl5p7sgaxwagl24.png" alt=" " width="800" height="337"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;If you try to start &lt;code&gt;dunst&lt;/code&gt; from the console (no graphical session), it will fail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl --user start dunst
...
Dec 30 11:07:49 wonderland systemd[946]: Starting dunst.service - Dunst notification daemon...
Dec 30 11:07:49 wonderland dunst[1061]: WARNING: Cannot open X11 display.
Dec 30 11:07:49 wonderland dunst[1061]: ERROR: [  get_x11_output:0065] Couldn't initialize X11 output. Aborting...
Dec 30 11:07:49 wonderland systemd[946]: dunst.service: Main process exited, code=killed, status=5/TRAP
Dec 30 11:07:49 wonderland systemd[946]: dunst.service: Failed with result 'signal'.
Dec 30 11:07:49 wonderland systemd[946]: Failed to start dunst.service - Dunst notification daemon.

$ dunst
WARNING: Cannot open X11 display.
ERROR: [  get_x11_output:0065] Couldn't initialize X11 output. Aborting...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, the error is explicit: it cannot open the X11 display, which is pretty logical, because the X server is not running and the graphical X session has not been started yet.&lt;/p&gt;

&lt;p&gt;Okay, I start the server and graphical session with &lt;code&gt;startx&lt;/code&gt;, and then I restart dunst:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl --user restart dunst
Job for dunst.service failed because a fatal signal was delivered to the control process.
See "systemctl --user status dunst.service" and "journalctl --user -xeu dunst.service" for details.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0crx8lifldlkr1ragap6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0crx8lifldlkr1ragap6.png" alt=" " width="800" height="637"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Same error!&lt;/p&gt;

&lt;p&gt;But as you can see, &lt;code&gt;$DISPLAY&lt;/code&gt; is not empty when I &lt;code&gt;echo&lt;/code&gt; it. So, what’s the problem?&lt;/p&gt;

&lt;p&gt;The issue is that the &lt;code&gt;systemd&lt;/code&gt; user manager still knows nothing about &lt;code&gt;DISPLAY&lt;/code&gt;: it was started long before X11, when that variable was unset, and it does not inherit it from my terminal. So even when I restart Dunst now, from an active graphical session, it fails. How can I fix this?&lt;/p&gt;

&lt;p&gt;I need to make sure the &lt;code&gt;DISPLAY&lt;/code&gt; environment variable is available for &lt;code&gt;systemd&lt;/code&gt; user services before I restart Dunst. I can import this environment variable for systemd user services with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl --user import-environment DISPLAY
$ systemctl --user restart dunst
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voila:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuafnhsdcykffy9wu5nt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuafnhsdcykffy9wu5nt5.png" alt=" " width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, the important part about starting Dunst is that before you can start it, you need to import the &lt;code&gt;DISPLAY&lt;/code&gt; environment variable using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl --user import-environment DISPLAY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you start the graphical session with &lt;code&gt;startx&lt;/code&gt; from the console, the &lt;code&gt;DISPLAY&lt;/code&gt; variable is not available there initially. After the graphical X session starts, it becomes available inside that session, but the &lt;code&gt;systemd&lt;/code&gt; user manager, and therefore Dunst, does not automatically get informed about it.&lt;/p&gt;

&lt;p&gt;I prefer to launch Dunst using &lt;code&gt;systemctl&lt;/code&gt; rather than &lt;code&gt;dunst &amp;amp;&lt;/code&gt;. If you keep the configuration in &lt;code&gt;/etc/xdg&lt;/code&gt; rather than in &lt;code&gt;$HOME/.config&lt;/code&gt; (like I do), to launch Dunst with &lt;code&gt;systemctl&lt;/code&gt; you will also need to pass the path to the config file in the service unit.&lt;/p&gt;
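&lt;p&gt;A side note: instead of editing the unit file shipped in &lt;code&gt;/usr/lib/systemd/user&lt;/code&gt; directly, you could keep the change upgrade-safe with a user-level drop-in override. A sketch, assuming the config stays in &lt;code&gt;/etc/xdg/dunst/dunstrc&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl --user edit dunst
# this opens an override file (~/.config/systemd/user/dunst.service.d/override.conf); add:
[Service]
# the empty ExecStart= clears the command from the original unit first
ExecStart=
ExecStart=/usr/bin/dunst -conf /etc/xdg/dunst/dunstrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;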

&lt;p&gt;You can find the configuration files for user-space services managed by systemd in &lt;code&gt;/usr/lib/systemd/user&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls /usr/lib/systemd/user
$ sudo vim.tiny /usr/lib/systemd/user/dunst.service
[Unit]
Description=Dunst notification daemon
Documentation=man:dunst(1)
PartOf=graphical-session.target

[Service]
Type=dbus
BusName=org.freedesktop.Notifications
ExecStart=/usr/bin/dunst -conf /etc/xdg/dunst/dunstrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I added the argument to point to the config file:&lt;br&gt;
&lt;code&gt;ExecStart=/usr/bin/dunst -conf /etc/xdg/dunst/dunstrc&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Now I can add both Polybar and Dunst to the &lt;code&gt;bspwmrc&lt;/code&gt; so they are launched automatically when &lt;code&gt;bspwm&lt;/code&gt; starts. I will need to modify the BSPWM configuration file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd 
$ vim.tiny .config/bspwmrc #you do not need sudo if you are in $HOME
# I add these lines after pgrep -x sxhkd....

# DISPLAY env import
systemctl --user import-environment DISPLAY

# DUNST launching
systemctl --user start dunst

# POLYBAR launching
# terminate all running polybar instances in an "elegant" way, if there are any (alternative to kill)
polybar-msg cmd quit
# add a separator to the polybar log
echo "---" | tee -a /tmp/polybar.log
# launch polybar in the background and log its output. As I am using the default polybar config for now,
# the polybar command will fetch it automatically, as long as it sits in one of the locations where
# polybar searches for it (in my case, /etc/polybar/config.ini)

polybar 2&amp;gt;&amp;amp;1 | tee -a /tmp/polybar.log &amp;amp; disown
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h4&gt;
  
  
  XDG desktop portal
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;XDG Desktop Portal&lt;br&gt;
A portal frontend service for Flatpak and other desktop containment frameworks.&lt;br&gt;
xdg-desktop-portal works by exposing a series of D-Bus interfaces known as portals under a well-known name (org.freedesktop.portal.Desktop) and object path (/org/freedesktop/portal/desktop). (&lt;a href="https://flatpak.github.io/xdg-desktop-portal/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Mostly, it’s needed for containerized software like Flatpaks, as the quote above says. However, it’s not only for this, and even if you don’t use Flatpak, you could still run into problems. A very trivial case could be with a browser like Firefox or any other: when you try to upload something and the file chooser dialog is needed, it won’t work without the portal.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Portals were designed for use with applications sandboxed through Flatpak, but any application can use portals to provide uniform access to features independent of desktops and toolkits. This is commonly used, for example, to allow screen sharing on Wayland via PipeWire, or to use file open and save dialogs on Firefox that use the same toolkit as your current desktop environment. (&lt;a href="https://wiki.archlinux.org/title/XDG_Desktop_Portal" rel="noopener noreferrer"&gt;Arch Wiki: XDG Desktop Portal&lt;/a&gt;)&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All well-known desktop environments like KDE Plasma, GNOME, Xfce, etc. use XDG portals. What is important, however, is that the XDG Desktop Portal needs a backend, and different DEs use different backends.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;When an application sends a request to the portal, it is handled by xdg-desktop-portal, which then forwards it to a backend implementation. This allows implementations to provide suitable user interfaces that fit into the user's desktop environments, and access environment-specific APIs for requests like opening a URI or recording the screen. Multiple backends can be installed and used at the same time.(&lt;a href="https://wiki.archlinux.org/title/XDG_Desktop_Portal" rel="noopener noreferrer"&gt;Arch Wiki: XDG Desktop Portal Backends&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Each XDG portal backend is based on a specific toolkit. For example, GNOME's desktop portal backend is based on GTK 4, while KDE Plasma's backend is based on Qt 6. This is actually why you cannot customize DEs interchangeably. For example, you cannot install a theme for KDE Plasma 6 and try to apply it to GNOME; it will just not work. The same goes even for KDE Plasma 5 and KDE Plasma 6: the former uses Qt 5 and the latter Qt 6. &lt;/p&gt;

&lt;p&gt;Anyway, with BSPWM, what should you use? Well, there is a generic backend for custom DEs that is based on GTK 3. This GTK 3 toolkit will later enable you to apply themes to your DE! For example, if you like the &lt;a href="https://draculatheme.com/" rel="noopener noreferrer"&gt;Dracula Theme&lt;/a&gt;, you will be able to apply it DE-wide: colors, fonts etc.&lt;/p&gt;
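&lt;p&gt;As a taste of what that looks like, GTK 3 applications read &lt;code&gt;$HOME/.config/gtk-3.0/settings.ini&lt;/code&gt;, so a theme is applied roughly like this (the theme, icon, and font names below are placeholders; the themes must be installed on your system):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Settings]
gtk-theme-name = Dracula
gtk-icon-theme-name = Papirus
gtk-font-name = DejaVu Sans 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;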

&lt;p&gt;However, remember, XDG desktop portal with at least one backend is not just a question of whether you will be able to apply custom themes or not later, it is about providing applications the means for their proper functioning!&lt;/p&gt;

&lt;p&gt;XDG Desktop Portal can be installed with &lt;code&gt;sudo apt install xdg-desktop-portal&lt;/code&gt; on Debian. A common mistake is to install only this package: it does not bring in any of the backends mentioned above. &lt;/p&gt;

&lt;p&gt;I proceed with installation of &lt;code&gt;xdg-desktop-portal&lt;/code&gt; first.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Package: xdg-desktop-portal &lt;br&gt;
desktop integration portal for Flatpak and Snap&lt;br&gt;
xdg-desktop-portal provides a portal frontend service for Flatpak, Snap, and possibly other desktop containment/sandboxing frameworks. This service is made available to the sandboxed application, and provides mediated D-Bus interfaces for file access, URI opening, printing and similar desktop integration features. (&lt;a href="https://packages.debian.org/bookworm/xdg-desktop-portal" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install xdg-desktop-portal
$ systemctl --user start xdg-desktop-portal
$ systemctl --user status xdg-desktop-portal
● xdg-desktop-portal.service - Portal service
     Loaded: loaded (/usr/lib/systemd/user/xdg-desktop-portal.service; static)
     Active: active (running) since Mon 2024-12-30 13:29:28 CET; 1s ago
 ....

Dec 30 13:29:28 wonderland systemd[1017]: Starting xdg-desktop-portal.service - Portal service...
Dec 30 13:29:28 wonderland xdg-desktop-por[7132]: No skeleton to export
Dec 30 13:29:28 wonderland systemd[1017]: Started xdg-desktop-portal.service - Portal service.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the XDG Desktop Portal service is reporting that no backend was found: &lt;code&gt;xdg-desktop-portal[7132]: No skeleton to export.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I install the "generic" backend, &lt;code&gt;xdg-desktop-portal-gtk&lt;/code&gt;, which uses the GTK 3 toolkit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install xdg-desktop-portal-gtk
# you can check that the warning about Skeleton is gone
$ systemctl --user restart xdg-desktop-portal
$ systemctl --user status xdg-desktop-portal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In general, GTK will be completely sufficient for my setup. &lt;strong&gt;But please note: some apps, especially Flatpaks, depend on other services that require a specific backend, like GNOME's. If a Flatpak doesn’t work with the generic GTK backend, you’ll have to install another backend. As I mentioned above, multiple backends can be installed and used at the same time; the XDG desktop portal will redirect an app's requests to the right backend.&lt;/strong&gt;&lt;/p&gt;
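&lt;p&gt;To double-check which portal services are actually registered on the session bus, you can list the well-known D-Bus names from inside the graphical session (a quick sanity check; the output will vary with the backends you installed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ busctl --user list | grep portal
# expect to see org.freedesktop.portal.Desktop plus the backend implementation(s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;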







&lt;h4&gt;
  
  
  File Manager: Thunar
&lt;/h4&gt;

&lt;p&gt;The file manager is the application you open to explore what's on your PC: find files, move or copy them, and interact with files on USB pen drives:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpskfeb4vc3b7yb3g5ohs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpskfeb4vc3b7yb3g5ohs.png" alt=" " width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use &lt;a href="https://docs.xfce.org/xfce/thunar/the-file-manager-window" rel="noopener noreferrer"&gt;Thunar&lt;/a&gt;, which can be installed with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install thunar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  App Launcher: dmenu
&lt;/h4&gt;

&lt;p&gt;App Launcher is a tool that displays installed applications, allowing you to launch them with a mouse click. Additionally, it provides a search utility for quickly finding the app you’re looking for.&lt;/p&gt;

&lt;p&gt;In classic desktop environments, an app launcher typically looks something like this—for example, GNOME's app launcher:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u8ejah6eazsl18ibpt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u8ejah6eazsl18ibpt2.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am using &lt;a href="https://tools.suckless.org/dmenu/" rel="noopener noreferrer"&gt;dmenu&lt;/a&gt;. This app launcher is much more minimalistic compared to the GNOME app launcher displayed above. It won’t display all the icons or other extras, but it will allow you to search efficiently and launch apps with a click.&lt;/p&gt;

&lt;p&gt;I prefer this minimalist approach because I rarely use an app launcher anyway. Most of the time, I launch apps from the command line or via SXHKD keybindings, which I configure for the apps I use frequently. &lt;code&gt;dmenu&lt;/code&gt; can be installed with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install suckless-tools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I need to add Thunar and dmenu to the SXHKD configuration to bind hotkeys for launching them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd /etc/xdg/bspwm
# if you decided to keep configurations in $HOME/.config:
#$ cd 
#$ cd .config/sxhkd
$ sudo vim sxhkdrc #you do not need sudo if you are in $HOME
# I add only these lines:
# file manager
super + f
      thunar
#dmenu is already there, if you kept the default config!
# super + @space
#      dmenu_run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
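&lt;p&gt;After editing &lt;code&gt;sxhkdrc&lt;/code&gt;, a running sxhkd instance can pick up the new bindings without restarting the session - sxhkd re-reads its configuration on SIGUSR1. A minimal sketch (the echoed messages are mine, not tool output):&lt;/p&gt;

```shell
# Ask a running sxhkd to re-read its configuration (it reloads on SIGUSR1)
if command -v pkill >/dev/null; then
  pkill -USR1 -x sxhkd || echo "sxhkd is not running"
else
  echo "pkill is not available"
fi
```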






&lt;p&gt;Here we are! Now, if I run &lt;code&gt;startx&lt;/code&gt; from the terminal, my whole setup starts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9ssqojmu5qcou6vrgry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9ssqojmu5qcou6vrgry.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
On the top: status bar Polybar; on the right: Thunar, Brave Browser



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlod5ggpm4qerd86iaa0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlod5ggpm4qerd86iaa0.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
On the top, overlaying the Polybar status bar: the app launcher, dmenu



&lt;p&gt;&lt;strong&gt;VOILA!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>debian</category>
      <category>linux</category>
      <category>ricing</category>
      <category>tilingwm</category>
    </item>
    <item>
      <title>Debian 12: NVIDIA Drivers Installation</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Sun, 01 Dec 2024 20:05:39 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/debian-12-nvidia-drivers-18dh</link>
      <guid>https://dev.to/dev-charodeyka/debian-12-nvidia-drivers-18dh</guid>
<description>&lt;p&gt;NVIDIA, with the rise in popularity of Large Language Models (LLMs), has firmly secured its position as the leader in the GPU market, leaving AMD GPUs far behind. However, for Linux users, installing NVIDIA GPU drivers is still troublesome. In this article, I’ll cover key concepts about NVIDIA drivers and show an easy way to install them on Debian. This article is a long read; however, understanding how driver-related things work on Linux will make your user experience much better.&lt;/p&gt;




&lt;p&gt;Speaking about LLMs, other AI models, neural networks, etc. - if you're here because your final goal is to run them on your GPU, you're on the right track. Installing the NVIDIA drivers is the FIRST STEP toward enabling your algorithms to run on your NVIDIA GPU.&lt;/p&gt;

&lt;p&gt;BUT. It’s only step 1 of 2. The next essential step is installing the CUDA Toolkit. Don’t be misled: just having the drivers installed is not enough to actually use your GPU with Compute Unified Device Architecture (CUDA).&lt;/p&gt;

&lt;p&gt;The confusion often comes from NVIDIA’s own documentation. Sometimes the docs refer to the drivers as "CUDA"-something - for example, just try to make sense of the first &lt;a href="https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html" rel="noopener noreferrer"&gt;three compatibility tables here&lt;/a&gt;. Also, after successfully installing the NVIDIA drivers, when you run &lt;code&gt;nvidia-smi&lt;/code&gt; and see "CUDA Version: 1y.x" there, it’s easy to think "cool, it’s all set up—2-in-1, done!" But nope - that’s just the CUDA "user-space" driver. It’s not the actual CUDA Toolkit your algorithms need to perform GPU-based computations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyn6brvmllrfe2tui9it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyn6brvmllrfe2tui9it.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://docs.nvidia.com/deploy/cuda-compatibility/index.html" rel="noopener noreferrer"&gt;NVIDIA Docs Hub: CUDA Compatibility&lt;/a&gt;






&lt;p&gt;Here’s the roadmap of this article:&lt;/p&gt;

&lt;p&gt;1 Linux kernel &amp;amp; &lt;del&gt;Co&lt;/del&gt; .ko: Understanding &lt;em&gt;what&lt;/em&gt; the drivers on Debian are and how they relate to the Linux kernel.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.1 Drivers = Kernel Modules&lt;/li&gt;
&lt;li&gt;1.2 NVIDIA drivers ≈ &lt;code&gt;nvidia*.ko&lt;/code&gt; + &lt;code&gt;libcuda.so&lt;/code&gt;; &lt;code&gt;libcuda.so&lt;/code&gt; != CUDA Toolkit!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2 NVIDIA documentation &amp;amp; NVIDIA recommended drivers&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2.1 NVIDIA OpenED Source of their Drivers? O_o&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3 NVIDIA drivers installation: "Debian" way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3.1 Installation Step 1: Add the &lt;code&gt;contrib&lt;/code&gt; and &lt;code&gt;non-free&lt;/code&gt; repository components to the list of sources for &lt;code&gt;apt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;3.2 Installation Step 2: Check if your OS is booted with Secure Boot&lt;/li&gt;
&lt;li&gt;3.3 Installation Step 3: Check if your system uses &lt;code&gt;dracut&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;3.4 Installation Step 4: Choose a flavour - "proprietary" vs "open"
&lt;/li&gt;
&lt;li&gt;3.5 DKMS is your bro when it comes to drivers&lt;/li&gt;
&lt;li&gt;3.6 About Linux Headers&lt;/li&gt;
&lt;li&gt;3.7 &lt;code&gt;nvidia-kernel-dkms&lt;/code&gt; OR &lt;code&gt;nvidia-open-kernel-dkms&lt;/code&gt; &amp;lt;-- install kernel space components of NVIDIA drivers; &lt;code&gt;libcuda1&lt;/code&gt; and other libraries (&lt;code&gt;lib*&lt;/code&gt;) &amp;lt;-- install user-space components of NVIDIA drivers, a.k.a CUDA drivers
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4 How and when to update NVIDIA drivers&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;strong&gt;My GPU model: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate]&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;UPD: Added a section about NVIDIA drivers installation on Debian Trixie (testing) and about installing the newest NVIDIA drivers following the NVIDIA guide (with precautions)&lt;/p&gt;




&lt;p&gt;&lt;a id="about-linux-kernel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Linux kernel &amp;amp; &lt;del&gt;Co&lt;/del&gt; .ko
&lt;/h3&gt;

&lt;p&gt;When you first install an OS using a graphical installer, you might notice the installer’s graphics look a bit rough or stretched, kind of like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7usdsrlr5pycb068qx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7usdsrlr5pycb068qx0.png" alt="meme cat stretched in wide" width="471" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installation is finished and you boot for the first time, everything usually looks fine in terms of resolution. That’s because, during the installation of Debian or any other OS, the Linux kernel is installed together with default drivers (including drivers for graphical output).&lt;/p&gt;

&lt;p&gt;You boot from a pen drive with the .iso installer to install Debian → &lt;br&gt;
at the early stages of installation the Linux kernel is installed (as no OS can operate without it) → &lt;br&gt;
the kernel doesn’t come alone - it has kernel &lt;em&gt;modules&lt;/em&gt;, which are essentially the Linux drivers for your PC's hardware. &lt;/p&gt;

&lt;p&gt;During installation, the installer doesn’t connect to the Linux kernel source repository (upstream, containing the most recent existing version of Linux kernel) to fetch the kernel for your system; instead, it connects to its own repository, where the Linux kernel is packed, optimised, and tuned to work harmoniously with the system. In case of Debian:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The kernels in Debian are distributed in binary form, built from the Debian kernel source. It is important to recognize that Debian kernel source may be (and in most cases is) different from the upstream (or "pristine") kernel source, distributed from &lt;a href="http://www.kernel.org" rel="noopener noreferrer"&gt;www.kernel.org&lt;/a&gt; and its mirrors. Due to licensing restrictions, unclear license information, or failure to comply with the Debian Free Software Guidelines (DFSG), parts of the kernel are removed in order to distribute the source in the main section of the Debian archive. (&lt;a href="https://kernel-team.pages.debian.net/kernel-handbook/ch-source.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Key takeaway: &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Your OS's kernel has kernel modules, and they give your OS the ability to use the various hardware components of your PC, because drivers per se are kernel modules.&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  1.1 Drivers = Kernel Modules
&lt;/h4&gt;

&lt;p&gt;You can explore the drivers available on your system by running &lt;code&gt;ls /lib/modules/$(uname -r)/kernel/drivers/&lt;/code&gt; (the &lt;code&gt;$(uname -r)&lt;/code&gt; part retrieves the exact version of the kernel currently in use by your system; there might be multiple kernel versions installed, especially if you regularly update your system packages, since some updates also bring a new version of the Linux kernel).&lt;/p&gt;
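&lt;p&gt;Here is that exploration as a small guarded sketch (the fallback message is mine; on a minimal system such as a container the module tree may be absent):&lt;/p&gt;

```shell
# Explore the module tree of the running kernel; guard against it being
# absent (e.g. inside a container that shares the host kernel)
MODDIR="/lib/modules/$(uname -r)/kernel/drivers"
if [ -d "$MODDIR" ]; then
  ls "$MODDIR"
else
  echo "no module tree found for kernel $(uname -r)"
fi
```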

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55t242g06jt7l2887gq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55t242g06jt7l2887gq3.png" alt="list of installed kernel modules" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I run &lt;code&gt;lspci -k | grep -A 3 NVIDIA&lt;/code&gt;, I can see that currently for my NVIDIA video card, the kernel driver in use is Nouveau. By default, Debian provides &lt;a href="https://nouveau.freedesktop.org/" rel="noopener noreferrer"&gt;Nouveau drivers&lt;/a&gt; for NVIDIA graphics cards because they are open-source and non-proprietary, unlike NVIDIA’s official drivers. &lt;/p&gt;
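&lt;p&gt;A guarded version of that check, as a sketch (the "not installed / not visible" messages are mine, not tool output):&lt;/p&gt;

```shell
# Show which kernel driver is bound to the NVIDIA device; in a VM or
# container there may be no GPU, and lspci may not even be installed
if command -v lspci >/dev/null; then
  lspci -k | grep -A 3 -i NVIDIA || echo "no NVIDIA device visible"
else
  echo "lspci is not installed here"
fi
```

&lt;p&gt;When the Nouveau module is active, the "Kernel driver in use" line in this output says &lt;code&gt;nouveau&lt;/code&gt;.&lt;/p&gt;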

&lt;p&gt;&lt;em&gt;Is it because Debian devs are obsessed with some anti-capitalist or similar views, so they pack their system only with open-source "free" software? Nope. The point of sticking to open-source software is that it can be tested and integrated into an OS much more easily than closed-source software. Its behaviour and impact on other OS components are much more predictable because the source code is open.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I mentioned that Debian "provides" Nouveau drivers (not only them, ofc), and I’d like to elaborate on the word "provide". On Debian, the Linux kernel is not just a standalone "file/object" pulled during installation - it is a package. This package is called &lt;code&gt;linux-image-*&lt;/code&gt;, where the asterisk represents a specific target architecture build of the kernel package, such as &lt;code&gt;cloud&lt;/code&gt;, &lt;code&gt;amd64&lt;/code&gt;, &lt;code&gt;rt&lt;/code&gt;, plus the version of this package. This kernel package &lt;code&gt;linux-image-*&lt;/code&gt; brings with it default kernel modules, like the Nouveau driver (nouveau kernel module), to ensure hardware compatibility out of the box. You can actually check which package provides a specific kernel module with the &lt;code&gt;dpkg-query&lt;/code&gt; command. In the case of the nouveau driver for an NVIDIA video card, the command is: &lt;br&gt;
&lt;code&gt;dpkg-query -S /lib/modules/$(uname -r)/kernel/drivers/gpu/drm/nouveau/nouveau.ko&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmly7kr0dav10gi7k9u5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmly7kr0dav10gi7k9u5a.png" alt="nouveau.ko nvidia kernel module" width="800" height="32"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There’s nothing wrong with the Nouveau drivers for an NVIDIA GPU — you can use them and enjoy very good performance (many Linux users prefer them over NVIDIA’s proprietary drivers). However, if you plan to use your NVIDIA GPU for AI, machine learning, or LLMs, you will absolutely need NVIDIA’s proprietary drivers. &lt;/p&gt;
&lt;h4&gt;
  
  
  1.2 NVIDIA drivers ≈ nvidia*.ko + libcuda.so
&lt;/h4&gt;

&lt;p&gt;As I mentioned above, the NVIDIA drivers are kind of the foundation on top of which the CUDA Toolkit will sit. The CUDA Toolkit is what actually provides your algorithms with access to GPU-based computations. However, installing the CUDA Toolkit is the next step—it comes AFTER installing the NVIDIA drivers.&lt;/p&gt;

&lt;p&gt;This section is dedicated to kernel modules; when it comes to installing drivers for something on Linux, it’s technically about installing (a.k.a building) kernel modules. You can even "physically" find them on your system. They have the &lt;code&gt;.ko&lt;/code&gt; extension (.ko stands for Kernel Object) and live in the kernel-related directories, typically under &lt;code&gt;/lib/modules/$(uname -r)/&lt;/code&gt; (remember, &lt;code&gt;uname -r&lt;/code&gt; displays your system's kernel version).&lt;/p&gt;

&lt;p&gt;However, in the case of NVIDIA driver installation, it’s not just about installing the &lt;code&gt;nvidia*.ko&lt;/code&gt; modules (yep, there’s more than one). There’s another important piece that works closely with those NVIDIA kernel modules: the user-space libraries (NB: this is not about the users of your PC - it refers to code that runs outside kernel space). The most important one is &lt;code&gt;libcuda.so&lt;/code&gt; (.so stands for Shared Object). And again—please don’t get confused. Yes, it’s called "CUDA", but it’s not the CUDA Toolkit.&lt;/p&gt;

&lt;p&gt;The key takeaway here is that NVIDIA driver installation should be done sensibly - you want the NVIDIA &lt;code&gt;.ko&lt;/code&gt; kernel modules and the NVIDIA &lt;code&gt;.so&lt;/code&gt; libraries to communicate properly. If you install the drivers in a messed up way - like making multiple installation attempts without properly cleaning up after failed ones - things can easily get twisted and broken.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NVIDIA drivers ≈ nvidia*.ko + libcuda.so &lt;br&gt;
BUT! &lt;br&gt;
libcuda.so != CUDA Toolkit&lt;/strong&gt;&lt;/p&gt;
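&lt;p&gt;You can see this distinction on a live system. A minimal sketch (the echoed messages are mine; &lt;code&gt;ldconfig -p&lt;/code&gt; lists libraries known to the dynamic linker, and &lt;code&gt;nvcc&lt;/code&gt; is the compiler that ships only with the CUDA Toolkit):&lt;/p&gt;

```shell
# libcuda.so belongs to the driver's user-space side...
if ldconfig -p 2>/dev/null | grep -q libcuda.so; then
  echo "libcuda.so found: driver user-space library installed"
else
  echo "libcuda.so not found: NVIDIA driver not installed"
fi
# ...while nvcc is only present once the CUDA Toolkit is installed
if command -v nvcc >/dev/null; then
  echo "nvcc found: CUDA Toolkit installed"
else
  echo "nvcc not found: CUDA Toolkit not installed"
fi
```

&lt;p&gt;On a machine with the drivers but no Toolkit, the first check succeeds and the second fails - which is exactly the "libcuda.so != CUDA Toolkit" point.&lt;/p&gt;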



&lt;p&gt;&lt;a id="nvidia-recommended-drivers"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  2. NVIDIA documentation &amp;amp; NVIDIA recommended drivers
&lt;/h3&gt;

&lt;p&gt;In my experience, when it comes to installing NVIDIA drivers, the most common approach people take is going straight to the official NVIDIA website, finding the driver that matches their GPU model, downloading SOMETHING (it is never a kernel module, driver file, or anything close - it is usually an installer script, a file with the &lt;code&gt;.run&lt;/code&gt; extension), and then trying to execute the downloaded file to install the drivers.&lt;/p&gt;

&lt;p&gt;And it’s usually at that step that the frustration starts. Even though the installer provides a TUI (text-based user interface) and tries to guide you through, it starts throwing all kinds of technical prompts you have to manage. If you don’t fully understand what the installer is actually asking, it can be devastating. Don't worry, I've got you covered.&lt;/p&gt;

&lt;p&gt;If you go to the official NVIDIA site on the &lt;a href="https://www.nvidia.com/en-us/drivers/" rel="noopener noreferrer"&gt;Download The Official NVIDIA Drivers | NVIDIA&lt;/a&gt; page, you’ll be prompted to manually enter your NVIDIA device model. Here’s what’s suggested for my NVIDIA GeForce video card:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvoty1uicqwm605hqiaoe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvoty1uicqwm605hqiaoe.png" alt="Nvidia Official website driver downaload" width="800" height="736"&gt;&lt;/a&gt;&lt;/p&gt;
The second screenshot shows only 2 items from a long list of 10+ "recommended certified versions." The issue is that they aren’t ordered by version—570.x.x.x is followed by 535.x.x.x, and then it jumps back to 550.x.x.x. The confusion starts right here. 



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xgga63tsijx3p973snv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xgga63tsijx3p973snv.png" alt="Nvidia Official website driver confusion" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s say I just pick the first one in the list, which has a nice description:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What is an NVIDIA Recommended Driver?&lt;br&gt;
This driver meets the quality levels applied to Windows drivers that pass testing in Windows Hardware Quality Labs (WHQL), therefore providing the same attention to driver reliability, robustness, and performance for non-Windows operating systems (e.g., Linux).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Well that's nice that it was tested in some Windows Hardware Quality Labs, but I am searching for Linux drivers...well...whatever. If I click on the &lt;strong&gt;View&lt;/strong&gt; button, I’m redirected to the Download page. There, you’ll see three tabs—click on &lt;strong&gt;Additional Information&lt;/strong&gt;, and that’s where the installation instructions are hiding!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6ur8wgnhuyh5uj7l19i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6ur8wgnhuyh5uj7l19i.png" alt="Nvidia driver installation docs hidden" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Installation instructions: Once you have downloaded the driver, change to the directory containing the driver package and install the driver by running, as root,&lt;/em&gt; &lt;code&gt;sh ./NVIDIA-Linux-x86_64-570.133.07.run&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Seems very easy-peasy-lemon-squeezy. But what is that &lt;code&gt;.run&lt;/code&gt; stuff? &lt;/p&gt;

&lt;p&gt;NVIDIA describes its &lt;code&gt;.run&lt;/code&gt; installation file as a helper script for installing the NVIDIA drivers, since it can help you select the correct driver version: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If you are not sure, NVIDIA provides a new detection helper script to help guide you on which driver to pick. For more information, see the Using the installation helper script section later in this post. (&lt;a href="https://developer.nvidia.com/blog/nvidia-transitions-fully-towards-open-source-gpu-kernel-modules/#supported_gpus" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;On Linux systems, a &lt;code&gt;.run&lt;/code&gt; file is typically either a single binary executable or a shell script that includes a binary blob that can be installed (&lt;a href="https://unix.stackexchange.com/questions/92858/difference-between-deb-files-and-run-file" rel="noopener noreferrer"&gt;source&lt;/a&gt;). Essentially, any &lt;code&gt;.run&lt;/code&gt; file acts as an installer script that &lt;strong&gt;will make system-wide changes&lt;/strong&gt;, since it specifies that it should be &lt;strong&gt;executed with root permissions&lt;/strong&gt;. As I mentioned earlier, drivers are kernel modules, so this installer will build an additional kernel module for your OS. But &lt;em&gt;how&lt;/em&gt; exactly will the new kernel module(s) be built - will they just be dropped in as prebuilt binaries, or will DKMS build them for my system from source? (&lt;em&gt;If you do not understand the difference, don't worry - I will cover it later in this article&lt;/em&gt;)&lt;/p&gt;

&lt;p&gt;Anyway, if you don’t have a clear answer to the question above, that's reason enough NOT to execute this &lt;code&gt;.run&lt;/code&gt; file on your Debian system with root privileges without a second thought. &lt;/p&gt;

&lt;p&gt;This isn’t about the security risks with NVIDIA drivers or the possibility of some embedded nasty stuff—it’s about your Debian stability + the future maintenance of installed drivers. Plus, it will be much more difficult to debug/uninstall this stuff if something goes wrong.&lt;/p&gt;

&lt;p&gt;Moreover, the first paragraph of the "Additional information" section on the driver download page states this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note that many Linux distributions provide their own packages of the NVIDIA Linux Graphics Driver in the distribution's native package management format. This may interact better with the rest of your distribution's framework, and you may want to use this rather than NVIDIA's official package. (&lt;a href="https://www.nvidia.com/en-us/drivers/details/242273/" rel="noopener noreferrer"&gt;Source: NVIDIA driver details&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And as you can guess, in this article I will show how to install the drivers as Debian's native package. However, I have added a bonus section at the end of this article where I show how I install drivers, with precautions, using the NVIDIA &lt;code&gt;.run&lt;/code&gt; installer script.&lt;/p&gt;
&lt;h4&gt;
  
  
  2.1 NVIDIA OpenED Source of their Drivers? O_o
&lt;/h4&gt;

&lt;p&gt;For a long time, NVIDIA drivers were renowned for their closed-source-ness, and even their &lt;a href="https://www.nvidia.com/en-us/drivers/nvidia-license/" rel="noopener noreferrer"&gt;license&lt;/a&gt; states clearly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;You may not reverse engineer, decompile, or disassemble the SOFTWARE provided in binary form, nor attempt in any other manner to obtain source code of such SOFTWARE&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, there’s an interesting thing — in July 2024, NVIDIA published an article on their site titled "&lt;a href="https://developer.nvidia.com/blog/nvidia-transitions-fully-towards-open-source-gpu-kernel-modules/#supported_gpus" rel="noopener noreferrer"&gt;NVIDIA Transitions Fully Towards Open-Source GPU Kernel Modules&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;Apparently, this transition didn’t just start in 2024 — they actually began it back in 2022. Here’s the &lt;a href="https://developer.nvidia.com/blog/nvidia-transitions-fully-towards-open-source-gpu-kernel-modules/#supported_gpus" rel="noopener noreferrer"&gt;link&lt;/a&gt; to the article.&lt;/p&gt;

&lt;p&gt;At first glance, you might think: "Wow, cool — now it must be easier to install the drivers, as Linux distributions can integrate them into their codebases more easily".&lt;/p&gt;

&lt;p&gt;However, it seems that it’s not like NVIDIA published the source code of their proprietary kernel modules (NVIDIA drivers) — at least from what I understand. Instead, they created a separate open-source development driver branch, where the community can contribute and all that. Here is the GitHub repository: &lt;a href="https://github.com/NVIDIA/open-gpu-kernel-modules" rel="noopener noreferrer"&gt;NVIDIA/open-gpu-kernel-modules&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;As I mentioned, as a bonus section to this article I will show you the process of NVIDIA drivers installation following the NVIDIA documentation. However, luckily, there is a much easier and more "comprehensible" way to install NVIDIA drivers - the &lt;a href="https://wiki.debian.org/NvidiaGraphicsDrivers" rel="noopener noreferrer"&gt;Debian way&lt;/a&gt;. &lt;/p&gt;



&lt;p&gt;&lt;a id="debian-way-install"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3. NVIDIA drivers installation: "Debian" way
&lt;/h3&gt;

&lt;p&gt;The procedure for installing NVIDIA drivers is covered in detail in &lt;a href="https://wiki.debian.org/NvidiaGraphicsDrivers" rel="noopener noreferrer"&gt;NvidiaGraphicsDrivers - Debian Wiki&lt;/a&gt;. I mentioned the "Debian way" of driver installation; it is actually a broad term that goes beyond driver installation, and if you’re using a Debian distro, I recommend familiarizing yourself with it. You can find many details &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-14-57b1"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://wiki.debian.org/NvidiaGraphicsDrivers" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, the section you need is "Debian 12 Bookworm" (I’m assuming you’re using this version of Debian, the latest stable one). The version of the NVIDIA drivers installed by following that guide will be 535.183.01. Yes, this version is older than the "official" driver available on NVIDIA’s website (currently, it is 570.x.x), which I mentioned earlier. However, this is a trade-off: you sacrifice a bit in terms of the driver’s newness, but you gain system stability and ease of installation.&lt;/p&gt;

&lt;p&gt;The "Debian way" of installing NVIDIA drivers is about installing them as a couple of packages using &lt;code&gt;apt&lt;/code&gt; package manager. However, as I mentioned, NVIDIA’s software is proprietary and historically &lt;strong&gt;close-source&lt;/strong&gt;, so Debian keeps this type of third-party software in a separate component of its package repositories.&lt;/p&gt;

&lt;p&gt;If you’re unfamiliar with how Debian packages its software, &lt;code&gt;apt&lt;/code&gt; for you is a sort of "black box", and the &lt;code&gt;sources.list&lt;/code&gt; file seems completely incomprehensible, I highly recommend checking out these articles before proceeding: &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-14-57b1"&gt;about Debian releases&lt;/a&gt;, &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3a4-3fbo"&gt;about Debian software installation - repositories and repository components&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The package(s) containing the NVIDIA drivers are located in the "non-free" component of Debian’s package repositories. &lt;strong&gt;The term "non-free" doesn’t mean you need to pay to use the packages found there; rather, it refers to the nature of the software, as it includes closed-source code that isn’t publicly accessible&lt;/strong&gt;. By default, the "non-free" component is excluded from the list of sources that apt uses to fetch and install packages onto your system. &lt;/p&gt;
&lt;h4&gt;
  
  
  Debian Way Installation Step 1: Add the &lt;code&gt;contrib&lt;/code&gt; and &lt;code&gt;non-free&lt;/code&gt; repository components to the list of sources for &lt;code&gt;apt&lt;/code&gt;:
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo vim /etc/apt/sources.list
#add contrib and non-free components and check that non-free-firmware is also listed:
deb http://deb.debian.org/debian/ bookworm main contrib non-free non-free-firmware
#this step is important: you have to make apt aware of changes made to this file:
$ sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;em&gt;NB! If your kernel package &lt;code&gt;linux-image-*&lt;/code&gt; is installed not from the Debian bookworm repo but from the Debian backports repo, you will need to install the NVIDIA drivers from backports as well! This is due to the various dependencies linking the NVIDIA driver packages to the kernel and its version.&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Debian Way Installation Step 2: Check if your OS is booted with Secure Boot.
&lt;/h4&gt;

&lt;p&gt;Rule of thumb: if you dual-boot with Windows 11, chances are high that Secure Boot is enabled. To be sure, run &lt;code&gt;sudo mokutil --sb-state&lt;/code&gt;. If you see this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo mokutil --sb-state
SecureBoot enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then Secure Boot is active on your machine. If you do not understand what Secure Boot is about, you will need to study it. I wrote &lt;a href="https://dev.to/dev-charodeyka/debian-secure-boot-to-be-or-not-to-be-that-is-the-question-1o82"&gt;a detailed article about Secure Boot and Debian&lt;/a&gt;, specifically from the perspective of how it impacts kernel modules and NVIDIA drivers. Alternatively, you can always find all the answers in the Debian documentation - &lt;a href="https://wiki.debian.org/SecureBoot" rel="noopener noreferrer"&gt;SecureBoot - Debian Wiki&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB! If you have Secure Boot enabled and proceed with the following steps, your installed NVIDIA drivers will NEVER load until you have them signed. The process of kernel modules signing for Secure Boot setups is out of the scope for this article!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you don’t know how to sign kernel modules for UEFI, I strongly recommend reading the Debian documentation or the article I mentioned. Once you’ve learned how to handle Secure Boot and have new kernel modules signed, you can return here to continue with the installation steps. &lt;strong&gt;FYI, in this article, I am installing drivers on my system that has Secure Boot enabled&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Debian Way Installation Step 3: Check if your system uses &lt;code&gt;dracut&lt;/code&gt;.
&lt;/h4&gt;

&lt;p&gt;Dracut is a low-level tool used to create the initial image the kernel uses for preloading the block device modules (such as IDE, SCSI or RAID) needed to access the root filesystem, mounting the root filesystem and booting into the real system (&lt;a href="https://manpages.debian.org/bookworm/dracut-core/dracut.8.en.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;). This "job" can be done not only by &lt;code&gt;dracut&lt;/code&gt; but also by &lt;code&gt;initramfs-tools&lt;/code&gt;. You can find out "who" does this job on your system with &lt;code&gt;dpkg -l | grep -E 'dracut|initramfs-tools'&lt;/code&gt;. If the output shows &lt;code&gt;dracut&lt;/code&gt;, you have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make a dracut configuration file &lt;code&gt;/etc/dracut.conf.d/10-nvidia.conf&lt;/code&gt; (you can actually name it anything you like, as long as it ends in &lt;code&gt;.conf&lt;/code&gt;) with the following contents:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install_items+=" /etc/modprobe.d/nvidia-blacklists-nouveau.conf /etc/modprobe.d/nvidia.conf /etc/modprobe.d/nvidia-options.conf "
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Note the spaces between quotes and characters.&lt;/li&gt;
&lt;li&gt;The modprobe.d files referenced here will be added by the &lt;code&gt;nvidia-driver&lt;/code&gt; package.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a id="about-flavours"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Debian Way Installation Step 4: Choose a flavour - "proprietary" vs "open"
&lt;/h4&gt;

&lt;p&gt;I mentioned that NVIDIA is in an ongoing transition fully towards open-source GPU kernel modules. It is cool, no doubt, but during this transition period, for us users, it just adds more confusion. Here is the Debian documentation on NVIDIA driver installation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7afmvob8n9pr3pql91kr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7afmvob8n9pr3pql91kr.png" alt=" " width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://wiki.debian.org/NvidiaGraphicsDrivers#Version_535.183.01-1" rel="noopener noreferrer"&gt; Debian Documentation for Debian Stable&lt;/a&gt;



&lt;p&gt;Which "flavour" should you choose? If before there was question: which &lt;em&gt;version&lt;/em&gt; should you install? Now the question is: which flavour (open source/proprietary) and which version should you install?&lt;/p&gt;

&lt;p&gt;If you're an open-source software supporter, the choice might seem pretty obvious. But at the same time, it’s worth noting that this relatively new open-source flavor of the NVIDIA driver stack is still, well... relatively new compared to the proprietary version we've been using for years. That might (or might not) mean it's less mature and potentially more prone to bugs or issues.&lt;/p&gt;

&lt;p&gt;However, in practice, as newer driver versions come out and you get your hands on more modern GPUs, you might find that you actually do not have a choice anyway.&lt;/p&gt;

&lt;p&gt;In NVIDIA's article about their transition to open-source kernel modules, I found two important diagrams:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugs95mcu7h42wip28i09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugs95mcu7h42wip28i09.png" alt=" " width="624" height="201"&gt;&lt;/a&gt;&lt;/p&gt;
Before CUDA Toolkit 12.6



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Previously, using the open-source GPU kernel modules would mean that you could not use the top-level metapackage. You would have had to install the distro-specific NVIDIA driver open package along with the cuda-toolkit-X-Y package of your choice.&lt;br&gt;
Beginning with the CUDA 12.6 release, the flow effectively switches places (&lt;a href="https://developer.nvidia.com/blog/nvidia-transitions-fully-towards-open-source-gpu-kernel-modules/#supported_gpus" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut992t7wktz6iq9hhiyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut992t7wktz6iq9hhiyz.png" alt=" " width="624" height="202"&gt;&lt;/a&gt;&lt;/p&gt;
After the CUDA Toolkit 12.6 release




&lt;p&gt;The CUDA Toolkit is the thingy which should be installed AFTER the NVIDIA drivers on your system, and the CUDA Toolkit version suitable for your system depends entirely on the version of your CUDA driver (remember the aforementioned libcuda.so?). Here is the compatibility matrix:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hd70f7sbdhb0nar5opv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hd70f7sbdhb0nar5opv.png" alt=" " width="800" height="765"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;a href="https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html" rel="noopener noreferrer"&gt;CUDA Toolkit and Corresponding Driver Versions&lt;/a&gt;



&lt;p&gt;So, according to this table, if you install NVIDIA drivers of version &amp;gt;=560.x.x, you must install the "open" flavour if you want to install the CUDA Toolkit on top of them.&lt;/p&gt;
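&lt;p&gt;That cutoff can be expressed as a tiny sketch. Assumptions: the 560 threshold is taken from the table above, and the helper name &lt;code&gt;needs_open_flavour&lt;/code&gt; is mine, not any NVIDIA or Debian tool:&lt;/p&gt;

```shell
# Decide whether a driver version requires the "open" flavour for CUDA Toolkit use.
# Threshold 560 comes from the compatibility table above (assumption on my part).
needs_open_flavour() {
  major=${1%%.*}          # keep only the major version, e.g. "570.133.07" -> "570"
  [ "$major" -ge 560 ]
}

needs_open_flavour "570.133.07" && echo "open flavour required"
needs_open_flavour "535.183.01" || echo "proprietary flavour still an option"
```

&lt;p&gt;Feed it the version string you plan to install (e.g. from &lt;code&gt;apt show nvidia-driver&lt;/code&gt;) before deciding which flavour to pull in.&lt;/p&gt;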

&lt;p&gt;&lt;em&gt;Little spoiler: version 535.x of the "open" flavour drivers doesn’t work on my GPU even though it is supposed to.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I say it is supposed to work because my GPU device is in the list of devices supported by the "open" flavour drivers. How do you find out whether the "open" flavour is an option for your GPU device?&lt;/p&gt;

&lt;p&gt;You need to check the compatibility of your GPU with NVIDIA's open GPU kernel modules - there’s a table &lt;a href="https://github.com/NVIDIA/open-gpu-kernel-modules" rel="noopener noreferrer"&gt;here&lt;/a&gt; with all supported models (if you scroll down, you’ll see it).&lt;/p&gt;

&lt;p&gt;However, just finding your GPU model name in the table isn’t enough. For example, my GPU — an NVIDIA GeForce RTX 3060 — is listed five or more times. So what you should actually be checking is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;In the below table, if three IDs are listed, the first is the PCI Device ID, the second is the PCI Subsystem Vendor ID, and the third is the PCI Subsystem Device ID.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So first, I had to find the right parameters - at least the first one, the PCI Device ID - because for my card, every mention in that table lists only one number: the PCI Device ID.&lt;/p&gt;

&lt;p&gt;Here is how to identify both Vendor and Device ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ lspci -nn | grep NVIDIA
06:0b:0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate] [10de:2504] (rev a1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the output above, &lt;code&gt;10de&lt;/code&gt; is the Vendor ID and &lt;code&gt;2504&lt;/code&gt; the Device ID.&lt;/p&gt;
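&lt;p&gt;If you want to script this lookup, here is a minimal sketch that pulls the Vendor and Device IDs out of an &lt;code&gt;lspci -nn&lt;/code&gt; line. The sample line below is illustrative; on a real system you would feed it from &lt;code&gt;lspci -nn | grep NVIDIA&lt;/code&gt;:&lt;/p&gt;

```shell
# Sample lspci -nn output line (illustrative; your own output will differ)
line='06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate] [10de:2504] (rev a1)'

# The [vendor:device] pair is the last bracketed hex:hex token on the line
ids=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -n1 | tr -d '[]')
vendor=${ids%%:*}
device=${ids##*:}

echo "Vendor ID: $vendor"   # prints: Vendor ID: 10de
echo "Device ID: $device"   # prints: Device ID: 2504
```

&lt;p&gt;The Device ID is what you then search for in the open-gpu-kernel-modules supported-devices table.&lt;/p&gt;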

&lt;p&gt;And, apparently my GPU is in the table of compatible devices:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5xle0aa81i5p1t2c5vs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5xle0aa81i5p1t2c5vs.png" alt=" " width="770" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NB! You CANNOT install both flavours "just to test" (to switch flavours later on, you will have to purge the installed one). Of course, I also want to explain in depth what the software-level difference between these two flavours actually is (even if the installation commands look somewhat alike).&lt;/p&gt;

&lt;p&gt;NB! DO NOT EXECUTE THESE COMMANDS ONE AFTER ANOTHER - READ THE EXPLANATION BELOW FIRST. IF YOU REALLY DON'T CARE, EXECUTE EITHER OF THEM, BUT NOT BOTH.&lt;/p&gt;

&lt;p&gt;To install "proprietary" flavor these commands should be run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install nvidia-driver firmware-misc-nonfree
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install "open" flavor, there is an additional package to install - &lt;code&gt;nvidia-open-kernel-dkms&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install nvidia-open-kernel-dkms nvidia-driver firmware-misc-nonfree
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that both commands install the &lt;code&gt;nvidia-driver&lt;/code&gt; and &lt;code&gt;firmware-misc-nonfree&lt;/code&gt; packages. The latter is related to firmware, not the driver. Don’t confuse the two - firmware and drivers are not the same thing. However, that’s not really the point here.&lt;/p&gt;

&lt;p&gt;What actually makes the difference when installing a specific flavour is the &lt;em&gt;order of the commands&lt;/em&gt; — and I’ll show you why. It all comes down to the dependencies each package brings in.&lt;/p&gt;

&lt;h4&gt;
  
  
  Debian Way Installation: DKMS is your bro when it comes to drivers
&lt;/h4&gt;

&lt;p&gt;First, I want to elaborate on &lt;code&gt;nvidia-open-kernel-dkms&lt;/code&gt; package — specifically the &lt;code&gt;dkms&lt;/code&gt; part of this package name.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;DKMS is your bro when it comes to Linux drivers (kernel modules).&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Dynamic Kernel Module Support (DKMS) is a framework which allows kernel modules to be dynamically built for each kernel on your system in a simplified and organized fashion. (&lt;a href="https://manpages.debian.org/bullseye/dkms/dkms.8.en.html" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is exactly DKMS that takes care of automatically rebuilding registered kernel modules (read drivers) whenever you install a different Linux kernel. &lt;/p&gt;

&lt;p&gt;Installing proprietary drivers on Debian without DKMS is not ideal. What’s actually happening under the hood during such an installation is that standalone kernel modules (those &lt;code&gt;.ko&lt;/code&gt; files) are compiled as-is.&lt;/p&gt;

&lt;p&gt;Leaving stability aside for a second, the first major inconvenience you’ll run into some time after installation is this: you’ll have to reinstall (rebuild) the modules manually every time there’s a major kernel update. And if you have Secure Boot enabled on your system, you’ll also need to sign them all manually. Every time.&lt;/p&gt;

&lt;p&gt;So yes — DKMS is the bro when it comes to installing extra drivers (kernel modules). It builds them for you in an organized way.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NB! For experienced folk: I know that I am generalising — I’m aware that sometimes DKMS can turn into pure evil and nuke stuff, especially because it’s way too eager to rebuild things the moment it sees kernel headers updated.&lt;br&gt;
However, IMHO, for regular desktop/home use on stable systems like Debian, DKMS is still a bro in the end.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Chances are your OS doesn’t have the dkms package installed — meaning, no DKMS tool at all. As far as I know, DKMS doesn’t come preinstalled.&lt;/p&gt;

&lt;p&gt;DKMS relies heavily on Linux headers, so I want to explain what those are.&lt;/p&gt;
&lt;h4&gt;
  
  
  Debian Way Installation Step: About Linux headers
&lt;/h4&gt;

&lt;p&gt;If you're using your Debian system the "Debian way", you're probably installing packages mostly from the official Debian repositories. Since these packages (including the kernel) come from Debian’s repos, they’re built and tested to work well together.&lt;/p&gt;

&lt;p&gt;The Linux kernel itself comes in the &lt;code&gt;linux-image-*&lt;/code&gt; package. After installation, you get a bunch of preinstalled drivers — which are essentially kernel modules — that make various pieces of your hardware work: Wi-Fi sticks, Bluetooth headsets, etc.&lt;/p&gt;

&lt;p&gt;These preinstalled drivers are actually part of the upstream Linux kernel. They are separate kernel modules, sure, but they communicate with the kernel seamlessly because they’re written and optimized to do exactly that.&lt;/p&gt;

&lt;p&gt;However, when it comes to external, third-party kernel modules (third-party meaning the source code was written by non-Debian/Linux devs), your kernel is something of a black box for them. These modules can’t just "communicate" with it directly. That’s where Linux headers come in.&lt;/p&gt;

&lt;p&gt;Linux headers can be roughly described as the programming interface needed to interact with the Linux kernel. If a module needs to communicate with the kernel, headers are required to build and install it — and to make it work at all.&lt;/p&gt;

&lt;p&gt;So here's a basic schematic of how it all fits together:&lt;/p&gt;

&lt;p&gt;DKMS&amp;lt;--&amp;gt;linux-headers* package&amp;lt;--&amp;gt;Kernel (linux-image-* package)&amp;lt;--&amp;gt; Your Debian OS&lt;/p&gt;

&lt;p&gt;So when you install the package &lt;code&gt;nvidia-open-kernel-dkms&lt;/code&gt;, it &lt;em&gt;must&lt;/em&gt; pull in headers and dkms as dependencies. You can explore the full dependency chain &lt;a href="https://packages.debian.org/bookworm/nvidia-open-kernel-dkms" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;a id="user-kernel-space"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;code&gt;nvidia-kernel-dkms&lt;/code&gt; OR &lt;code&gt;nvidia-open-kernel-dkms&lt;/code&gt; &amp;lt;-- install kernel space components of NVIDIA drivers; &lt;code&gt;libcuda1&lt;/code&gt; and other libraries (&lt;code&gt;lib*&lt;/code&gt;) &amp;lt;-- install user-space components of NVIDIA drivers, a.k.a CUDA drivers
&lt;/h4&gt;

&lt;p&gt;But now you might be wondering:&lt;br&gt;
"If I want to install the proprietary flavour instead, will I just get some sad, standalone kernel modules built without DKMS?"&lt;/p&gt;

&lt;p&gt;The answer is NO. The &lt;code&gt;nvidia-driver&lt;/code&gt; package shall bring you, as a dependency, something like &lt;code&gt;nvidia-kernel-dkms&lt;/code&gt; (notice, &lt;strong&gt;non open&lt;/strong&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install --simulate nvidia-driver 

Installing:                     
  nvidia-driver

Installing dependencies:
  dkms                     libnvcuvid1                libnvidia-rtcore          nvidia-persistenced
  firmware-nvidia-gsp      libnvidia-allocator1       nvidia-alternative        nvidia-settings
  glx-alternative-mesa     libnvidia-cfg1             nvidia-driver-bin         nvidia-smi
  glx-alternative-nvidia   libnvidia-egl-gbm1         nvidia-driver-libs        nvidia-support
  glx-diversions           libnvidia-egl-wayland1     nvidia-egl-common         nvidia-suspend-common
  libcuda1                 libnvidia-eglcore          nvidia-egl-icd            nvidia-vdpau-driver
  libegl-nvidia0           libnvidia-encode1          nvidia-installer-cleanup  nvidia-vulkan-common
  libgl1-nvidia-glvnd-glx  libnvidia-glcore           nvidia-kernel-common      nvidia-vulkan-icd
  libgles-nvidia1          libnvidia-glvkspirv        nvidia-kernel-dkms        update-glx
  libgles-nvidia2          libnvidia-ml1              nvidia-kernel-support     xserver-xorg-video-nvidia
  libgles1                 libnvidia-pkcs11-openssl3  nvidia-legacy-check
  libglx-nvidia0           libnvidia-ptxjitcompiler1  nvidia-modprobe

Suggested packages:
  menu  nvidia-cuda-mps

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me give you a comparison. Let's say you want to install the Brave browser, and you already have Firefox installed. When you installed Firefox, it brought in a bunch of dependencies needed for a browser to function on your freshly installed Debian. Brave is also a browser — and even though it definitely has its own unique dependencies (maybe fewer, maybe more), it may still rely on some of the low-level stuff that Firefox already brought in. So when you install Brave, it might skip installing those dependencies because they’re already there.&lt;/p&gt;

&lt;p&gt;It can also go the other way: Brave might be able to use low-level software A or low-level software B to function, and it just happens to find the low-level software B Firefox brought in. So Brave uses that instead of pulling in the low-level software A. But if you had installed Brave first, it would have brought in low-level software A. :3&lt;/p&gt;

&lt;p&gt;Same logic applies here with the NVIDIA drivers.&lt;/p&gt;

&lt;p&gt;If the &lt;code&gt;nvidia-driver&lt;/code&gt; package doesn't find any existing NVIDIA kernel modules already installed, it will go ahead and install everything — including the kernel-level stuff.&lt;/p&gt;

&lt;p&gt;But if you already installed the kernel modules beforehand (through &lt;code&gt;nvidia-open-kernel-dkms&lt;/code&gt;), and then you install &lt;code&gt;nvidia-driver&lt;/code&gt;, it will bring in just the &lt;em&gt;user-space libraries&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install --simulate nvidia-open-kernel-dkms nvidia-driver

Installing:                     
  nvidia-driver  nvidia-open-kernel-dkms

Installing dependencies:
  dkms                     libnvcuvid1                libnvidia-rtcore            nvidia-settings
  firmware-nvidia-gsp      libnvidia-allocator1       nvidia-alternative          nvidia-smi
  glx-alternative-mesa     libnvidia-cfg1             nvidia-driver-bin           nvidia-support
  glx-alternative-nvidia   libnvidia-egl-gbm1         nvidia-driver-libs          nvidia-suspend-common
  glx-diversions           libnvidia-egl-wayland1     nvidia-egl-common           nvidia-vdpau-driver
  libcuda1                 libnvidia-eglcore          nvidia-egl-icd              nvidia-vulkan-common
  libegl-nvidia0           libnvidia-encode1          nvidia-installer-cleanup    nvidia-vulkan-icd
  libgl1-nvidia-glvnd-glx  libnvidia-glcore           nvidia-kernel-common        update-glx
  libgles-nvidia1          libnvidia-glvkspirv        nvidia-legacy-check         xserver-xorg-video-nvidia
  libgles-nvidia2          libnvidia-ml1              nvidia-modprobe
  libgles1                 libnvidia-pkcs11-openssl3  nvidia-open-kernel-support
  libglx-nvidia0           libnvidia-ptxjitcompiler1  nvidia-persistenced

Suggested packages:
  menu  nvidia-cuda-mps  nvidia-kernel-dkms  | nvidia-kernel-source  | nvidia-open-kernel-source

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;And no, this has nothing to do with your user’s home folder or &lt;code&gt;/home&lt;/code&gt; or anything like that. This goes much deeper — we're talking about memory spaces: kernel space vs user space.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The key point here is that in this context, kernel space always comes first — the kernel modules are the base layer. The user-space components (like libraries and tools) attach themselves to what's already there in kernel space. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;NB! A potential culprit if you have installed the NVIDIA drivers but they do not work: your NVIDIA kernel module versions should be aligned with your nvidia-driver version. You can do the analysis with &lt;code&gt;dpkg -l | grep nvidia&lt;/code&gt; or &lt;code&gt;apt list --installed | grep nvidia&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, actually, your flavour choice all boils down to two packages — only one of which will be installed and used on your Debian:&lt;br&gt;
&lt;code&gt;nvidia-kernel-dkms&lt;/code&gt; or (its dependency alternative) &lt;code&gt;nvidia-open-kernel-dkms&lt;/code&gt;.&lt;/p&gt;
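&lt;p&gt;If you later forget which flavour ended up on your system, a quick way to express that mapping is a sketch like this (the helper &lt;code&gt;flavour_of&lt;/code&gt; is hypothetical, mine, not a Debian tool):&lt;/p&gt;

```shell
# Hypothetical helper: map the installed kernel-dkms package name to its flavour
flavour_of() {
  case "$1" in
    nvidia-open-kernel-dkms) echo "open" ;;
    nvidia-kernel-dkms)      echo "proprietary" ;;
    *)                       echo "unknown" ;;
  esac
}

# On a real system you could feed it from:
#   dpkg -l | grep -oE 'nvidia(-open)?-kernel-dkms' | head -n1
flavour_of "nvidia-kernel-dkms"        # prints: proprietary
flavour_of "nvidia-open-kernel-dkms"   # prints: open
```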
&lt;h4&gt;
  
  
  Installing "proprietary" flavour, version 535.x.y - from LTS branch
&lt;/h4&gt;

&lt;p&gt;I installed the "proprietary" flavor because, as I mentioned, even though my GPU is in the compatibility matrix, the "open" flavor of the 535.x driver version does not work on it. I will show you the error I got later.&lt;/p&gt;

&lt;p&gt;Before dealing with anything related to drivers, I ALWAYS create a snapshot of my system. I use the Timeshift tool for this. You can check &lt;a href="https://dev.to/dev-charodeyka/using-timeshift-for-systems-snapshots-and-recovery-on-debian-12-via-command-line-7m6"&gt;my article on snapshotting with Timeshift&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;To purge all existing nvidia packages installed on your system, you can try something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# to list nvidia-related packages installed previously with apt
$ apt list --installed | grep nvidia
$ sudo apt --purge remove '*nvidia*'
$ sudo apt autoremove
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install nvidia-driver firmware-misc-nonfree
# NB!!!If your linux-image-* package is from bookworm backports, you must install nvidia-driver package from this repo to avoid broken dependencies error!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you did not blacklist nouveau drivers before installation, at certain point during installation you will see a warning message in the terminal (likely on the blue background):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Conflicting &lt;code&gt;nouveau&lt;/code&gt; kernel module loaded&lt;br&gt;
The free nouveau kernel module is currently loaded and conflicts with the non-free nvidia kernel module&lt;br&gt;
The easiest way to fix this is to reboot the machine once the installation has finished&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, you just need to press Enter to proceed; the solution is the reboot - once installation has finished, reboot your system (&lt;code&gt;sudo reboot&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;When you are booted again, run &lt;code&gt;nvidia-smi&lt;/code&gt; command.&lt;br&gt;
This is my output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0ryxs7crzddx3m3n1ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0ryxs7crzddx3m3n1ol.png" alt="nvidia drivers verification installation" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you see something similar (and no errors), congratulations! You have successfully installed the NVIDIA drivers. For additional confirmation, you can list your PCI devices and the kernel modules in use (loaded drivers) by running &lt;code&gt;lspci -k | grep -A 3 NVIDIA&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here are NVIDIA kernel modules built by DKMS with love for you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo modinfo -n nvidia
/lib/modules/6.12.21-amd64/updates/dkms/nvidia.ko

$ ls /lib/modules/6.12.21-amd64/updates/dkms/
nvidia-drm.ko  nvidia.ko  nvidia-modeset.ko  nvidia-peermem.ko  nvidia-uvm.ko

$ ls /usr/lib/x86_64-linux-gnu/ | grep libcuda
libcudadebugger.so.1
libcudadebugger.so.535.183.01
libcuda.so
libcuda.so.1
libcuda.so.535.183.01

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;a id="update-nvidia"&gt; &lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. How and when to update NVIDIA drivers
&lt;/h3&gt;

&lt;p&gt;This section is not about NVIDIA driver VERSION updates, because even other Debian releases (Unstable, Testing) offer the SAME major NVIDIA driver version—535.x—just like Bookworm repo. So if you’re not planning to downgrade (which is another story entirely), I don’t really see any "debian source" for VERSION update. &lt;/p&gt;

&lt;p&gt;However, I do want to discuss a different kind of NVIDIA driver update—not about the version, but about the kernel module update itself. &lt;/p&gt;

&lt;p&gt;If this question still did not pop up in your mind, I will anticipate it: &lt;strong&gt;"Since NVIDIA drivers are kernel modules, what happens to them then when Debian kernel version updates?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It can be quite common that when you start using Debian, one of the first things you do is install the NVIDIA drivers. But eventually, if you’re on the Debian Stable release, you might need a newer kernel version — for example, in my case, it was for Wi-Fi USB adapter drivers. So you learn a bit about &lt;a href="https://backports.debian.org/" rel="noopener noreferrer"&gt;backports&lt;/a&gt; and update the kernel from there - not just small regular updates from the Debian Stable repositories (like from 6.1.10 to 6.1.11), but literally jumping from kernel v6.1.x to v6.9.x or even v6.12.x. That’s a major jump, and there’s a good chance your NVIDIA driver won’t work properly afterwards. Why? Because of Linux headers.&lt;/p&gt;

&lt;p&gt;As I mentioned, headers are really important for NVIDIA drivers to work on your system. They serve as an API or interface that the NVIDIA driver uses to communicate with the kernel. When you update the kernel with &lt;code&gt;sudo apt install -t bookworm-backports linux-image-amd64&lt;/code&gt; (to install the newest kernel version from backports), it will only install the kernel and not the headers! You have to install the headers manually, and they must match your kernel version exactly. So if you install &lt;code&gt;linux-image-amd64&lt;/code&gt; from backports, you also have to install the headers in the same way (&lt;code&gt;linux-headers-amd64&lt;/code&gt;). If you use a more detailed command pointing to the exact version of &lt;code&gt;linux-image*&lt;/code&gt; package (e.g. &lt;code&gt;sudo apt install -t bookworm-backports linux-image-6.9.7+bpo-amd64&lt;/code&gt;), the headers should match it (e.g. &lt;code&gt;sudo apt install -t bookworm-backports linux-headers-6.9.7+bpo-amd64&lt;/code&gt;)—they’re always paired.&lt;/p&gt;

&lt;p&gt;Here’s the trick with NVIDIA driver updates: if you installed them the Debian way, they were built with DKMS. This nice tool will rebuild (&lt;em&gt;read&lt;/em&gt; "update") your NVIDIA drivers, and the trigger for that is installing the &lt;em&gt;linux headers&lt;/em&gt;. During the final steps of their installation, you’ll see in log that DKMS is rebuilding stuff (and not just NVIDIA drivers, but also other kernel modules-drivers you installed in a way that &lt;em&gt;they were built with DKMS)&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So, this is yet another reason why it’s better to install NVIDIA drivers in the Debian way, or at least make sure that whatever you install manually is built with DKMS and not just scrapped together with some script.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;However, keep in mind, that a VERY major kernel update (for example, when kernel gets updated from version 6.x to 7.x) may require installing a newer NVIDIA driver, as changes to the kernel APIs may not be compatible with the existing driver version.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. BONUS SECTIONS
&lt;/h3&gt;

&lt;h4&gt;
  
  
  5.1 Installing driver in NVIDIA suggested way (with &lt;code&gt;.run&lt;/code&gt; file)
&lt;/h4&gt;

&lt;p&gt;I downloaded my "driver" (&lt;code&gt;.run&lt;/code&gt; file) for my GPU card model from &lt;a href="https://www.nvidia.com/en-us/drivers/details/242273/" rel="noopener noreferrer"&gt;this section of NVIDIA Official website&lt;/a&gt; using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget https://us.download.nvidia.com/XFree86/Linux-x86_64/570.133.07/NVIDIA-Linux-x86_64-570.133.07.run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The search prompt where you can enter your OS, GPU model, etc., is &lt;a href="https://www.nvidia.com/en-us/drivers/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A link to the README instructions is available under the Additional Information tab on the download page of a driver. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fielwsig3tlzsb9r8qiz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fielwsig3tlzsb9r8qiz5.png" alt="Nvidia drivers READMI" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Chapter 4 of this &lt;a href="http://us.download.nvidia.com/XFree86/Linux-x86_64/560.35.03/README/installdriver.html" rel="noopener noreferrer"&gt;README&lt;/a&gt; is dedicated to installation of NVIDIA driver with this &lt;code&gt;.run&lt;/code&gt; executable.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The .run file is a self-extracting archive. When executed, it extracts the contents of the archive and runs the contained nvidia-installer utility, which provides an interactive interface to walk you through the installation.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And here is how the NVIDIA driver installation will communicate with your kernel:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;When the installer is run, it will check your system for the required kernel sources and compile the kernel interface. You must have the source code for your kernel installed for compilation to work. On most systems, this means that you will need to locate and install the correct &lt;code&gt;kernel-source&lt;/code&gt;, &lt;code&gt;kernel-headers&lt;/code&gt;, or &lt;code&gt;kernel-devel&lt;/code&gt; package; on some distributions, no additional packages are required.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a default Debian setup (the set of packages you get pre-installed), you’ll only have the package &lt;code&gt;linux-image-*&lt;/code&gt; (which contains kernel itself and default kernel modules-drivers, as it was mentioned above)—without the kernel headers or developer libraries preinstalled.&lt;/p&gt;

&lt;p&gt;So, naturally, the installation process for NVIDIA drivers requires having Linux headers. However, Linux headers aren’t the only requirement for installation. And the &lt;a href="http://us.download.nvidia.com/XFree86/Linux-x86_64/560.35.03/README/installdriver.html" rel="noopener noreferrer"&gt;README&lt;/a&gt; file contains all the additional instructions. &lt;/p&gt;

&lt;p&gt;The important section to pay attention to (skipping it can make managing your PC very unpleasant) is what happens if this &lt;code&gt;.run&lt;/code&gt; script does not find DKMS (Dynamic Kernel Module Support) installed on your system:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The installer will check for the presence of DKMS on your system. &lt;strong&gt;If DKMS is found&lt;/strong&gt;, you will be given the option of registering the kernel module with DKMS. (&lt;a href="http://us.download.nvidia.com/XFree86/Linux-x86_64/560.35.03/README/installdriver.html" rel="noopener noreferrer"&gt;README&lt;/a&gt;)&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If this &lt;code&gt;.run&lt;/code&gt; script doesn’t find it—will it prompt you to install it, give a warning, fail, or install it for you? I really have no idea. &lt;/p&gt;

&lt;p&gt;So, if the NVIDIA proprietary &lt;code&gt;.run&lt;/code&gt; installer doesn’t find DKMS installed on your system (&lt;code&gt;dpkg -l | grep dkms&lt;/code&gt;), it will probably compile just standalone kernel modules for each component of the drivers, meaning you’ll have to rebuild them manually at &lt;strong&gt;every&lt;/strong&gt; major kernel update. And if you have Secure Boot enabled on your system, you’ll also need to sign them all &lt;strong&gt;manually&lt;/strong&gt;.&lt;/p&gt;
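&lt;p&gt;A minimal sketch of that pre-flight check (nothing NVIDIA-specific here, it just asks the shell whether a &lt;code&gt;dkms&lt;/code&gt; binary is on the PATH):&lt;/p&gt;

```shell
# Quick check before running the .run installer: is DKMS already present?
command -v dkms || echo "dkms is not installed"
```

&lt;p&gt;&lt;code&gt;command -v&lt;/code&gt; prints the binary's path when it is found, so any output other than the fallback message means DKMS is installed.&lt;/p&gt;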

&lt;p&gt;So, first I install DKMS and the Linux headers. The &lt;code&gt;dkms&lt;/code&gt; package itself pulls in the &lt;code&gt;linux-headers-*&lt;/code&gt; package as a dependency. However, be cautious! If you've made any modifications to your kernel (such as an update), double-check that your &lt;strong&gt;Linux headers version matches the exact version of your kernel, down to every number and dot&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, in my case, my kernel version is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ uname -r
6.1.9-32-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I install &lt;code&gt;linux-headers-*&lt;/code&gt; in this way to ensure the exact match with my kernel version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install linux-headers-$(uname -r)
# then, I install DKMS
$ sudo apt install dkms
# I double check the version of linux headers
$ dpkg -l | grep linux-headers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;(If you see more than one version of the headers, it’s probably because you updated the kernel in the past, most likely automatically with &lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt upgrade&lt;/code&gt;. The important thing is that the output includes headers whose version corresponds exactly to the current version of the kernel. You can remove the older header versions used by previous kernels if you don’t need them anymore.)&lt;/em&gt;&lt;/p&gt;
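&lt;p&gt;The matching rule can be sketched like this (the purge command is a hypothetical example; substitute the stale version you actually see in the &lt;code&gt;dpkg -l&lt;/code&gt; output):&lt;/p&gt;

```shell
# The headers package must match the running kernel exactly; derive its name:
kver="$(uname -r)"
echo "linux-headers-${kver}"
# List every installed headers package and purge stale ones, for example:
# dpkg -l | grep linux-headers
# sudo apt purge linux-headers-6.1.0-18-amd64   # hypothetical old version
```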

&lt;p&gt;I downloaded the &lt;code&gt;.run&lt;/code&gt; file, and now I’m executing it, following these brief instructions:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Installation instructions: Once you have downloaded the driver, change to the directory containing the driver package and install the driver by running, as root, sh ./NVIDIA-Linux-x86_64-570.133.07.run (&lt;a href="https://www.nvidia.com/en-us/drivers/details/242273/" rel="noopener noreferrer"&gt;https://www.nvidia.com/en-us/drivers/details/242273/&lt;/a&gt;)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo sh ./NVIDIA-Linux-x86_64-570.133.07.run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing I am shown in the terminal UI is the choice of flavour again, and as I mentioned, the "open" flavour from the Debian repo didn’t work. Eh, let’s give it a try here!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1ncwxyja6qzbm7q6r0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1ncwxyja6qzbm7q6r0p.png" alt=" " width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okay, right away the first complaint: it needs a C compiler (gcc), so I install it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install build-essential
# to check:
$ gcc -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nice. Next, it complains about the nouveau driver currently in use by my system:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbcbwlcwbav3e6l78v7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbcbwlcwbav3e6l78v7h.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I press --&amp;gt; Ok&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12lywzxef1b00saowmpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12lywzxef1b00saowmpd.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I press --&amp;gt; Yes&lt;/p&gt;

&lt;p&gt;There are some scary words in these prompts if you’ve never dealt with this before, but it’s not as bad as it sounds. The NVIDIA installer first asks to unload the &lt;code&gt;nouveau&lt;/code&gt; kernel module. This isn’t just because it’s installed and present: it’s actually in use, meaning &lt;code&gt;nouveau&lt;/code&gt; is currently driving your NVIDIA GPU, so the GPU can’t respond to the NVIDIA drivers.&lt;/p&gt;

&lt;p&gt;modprobe configurations are essentially files that get dropped into &lt;code&gt;/etc/modprobe.d/&lt;/code&gt;. You’ll get one or two text files with a few simple lines, probably something like &lt;code&gt;blacklist nouveau&lt;/code&gt;. Then, the &lt;code&gt;initramfs&lt;/code&gt; will be updated (rebuilt) so that on the next boot it knows not to load &lt;code&gt;nouveau&lt;/code&gt;, but the NVIDIA kernel modules instead.&lt;/p&gt;

&lt;p&gt;The configuration files in the &lt;code&gt;/etc/modprobe.d&lt;/code&gt; directory will be read during the next boot, and the blacklisting will be respected.&lt;/p&gt;
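&lt;p&gt;For illustration, this is roughly what such a blacklist file looks like (the exact contents the installer writes may differ), and on Debian the initramfs rebuild is done with &lt;code&gt;update-initramfs&lt;/code&gt;:&lt;/p&gt;

```shell
# What a generated nouveau-blacklist file typically contains (illustrative):
printf 'blacklist nouveau\noptions nouveau modeset=0\n'
# On Debian, the initramfs is then rebuilt so nouveau is skipped at boot:
# sudo update-initramfs -u
```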

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcooar3ezpn0azkbf3xy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcooar3ezpn0azkbf3xy9.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it actually tells you what I explained above:&lt;br&gt;
if you want to revert the changes later, you can simply delete these generated files from &lt;code&gt;/etc/modprobe.d&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I press --&amp;gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb6as2662nz1h6ms869m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb6as2662nz1h6ms869m.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I press --&amp;gt;  (as an alternative, I could abort and reboot, but I am lazy)&lt;/p&gt;

&lt;p&gt;The installer starts building the kernel modules...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez8nzy6dggx5vvymoysk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez8nzy6dggx5vvymoysk.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;--------------You can skip everything related to keys and signing if your setup &lt;strong&gt;does not have Secure Boot enabled&lt;/strong&gt;!-------------------------&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F179tnhba1rwbcd3has90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F179tnhba1rwbcd3has90.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I press --&amp;gt; &amp;lt; Sign the kernel module &amp;gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbei2n70yrluuaqjo0bjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbei2n70yrluuaqjo0bjm.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I press --&amp;gt;  (I have generated a keypair for signing and enrolled the MOK key)&lt;/p&gt;

&lt;p&gt;Full path to private key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flthu5ta6x6xtgt3fpbu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flthu5ta6x6xtgt3fpbu1.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Full path to public key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2060r9wtyu32c40vyep7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2060r9wtyu32c40vyep7.png" alt=" " width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;----- Secure Boot related prompts finished ------------&lt;/p&gt;

&lt;p&gt;Next, a warning about 32-bit libraries:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc237uhw585i3013m9xbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc237uhw585i3013m9xbc.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, a warning that some user-space libraries cannot be installed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x3gb3czkzzzgz0lm9se.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x3gb3czkzzzgz0lm9se.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dx0092jzwfr7dn10wf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dx0092jzwfr7dn10wf9.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, I press --&amp;gt; "Rebuild initramfs"&lt;/p&gt;

&lt;p&gt;Okay, the NVIDIA kernel modules are built.&lt;br&gt;
Only now does the installation of the user-space libraries start:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnytbl38bi5j9uw8i8pp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnytbl38bi5j9uw8i8pp9.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, I select that I do want it to write the X configuration file (if you use Wayland, select No!):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwib4pz0bx1c8o0px1udp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwib4pz0bx1c8o0px1udp.png" alt=" " width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, the installation is complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvwb4q2t448p169haabt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvwb4q2t448p169haabt.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am informed that I have to reboot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgjy5he3ar6lfxyy4y9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgjy5he3ar6lfxyy4y9t.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, apparently it works! &lt;/p&gt;

&lt;p&gt;Here is the &lt;code&gt;nvidia-smi&lt;/code&gt; output (though with some strange log messages?):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nvidia-smi
Mon Apr 14 18:47:38 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.133.07             Driver Version: 570.133.07     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3060        Off |   00000000:01:00.0  On |                  N/A |
|  0%   28C    P8             11W /  170W |     845MiB /  12288MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Summarizing: If you don’t fully understand these instructions and the steps involved, then installing NVIDIA drivers this way might not be for you. You can, of course, experiment (snapshotting tools are great for these kinds of experiments).&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  5.2 Installing NVIDIA drivers on Debian Testing with KDE Plasma running on the Nouveau driver.
&lt;/h4&gt;

&lt;p&gt;If you didn't know, the KDE Plasma desktop environment can run on both the X display server and the Wayland display server. If you prefer the X display server, this section is not for you: you just install the drivers, reboot, and your KDE X session should be intact (the session selector is in the bottom left corner of the login screen). The NVIDIA driver installation process blacklists the Nouveau drivers, so after rebooting, the KDE desktop environment should start with the NVIDIA drivers.&lt;/p&gt;

&lt;p&gt;However, if you use Wayland and stick with it, you'll need some extra configuration. Right after installing the NVIDIA drivers, your KDE Wayland session might break temporarily until you make a couple of modifications. It's not critical - for example, I installed the NVIDIA drivers, rebooted, and my Wayland session failed. I switched to an X session, made the necessary modifications, rebooted, and after that, my KDE Wayland session started smoothly.&lt;/p&gt;

&lt;p&gt;Here is some info about one of my setups with KDE:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux trixie/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=trixie
# My KDE setup is on Debian Testing (Debian 13!)
$ plasmashell --version
plasmashell 6.3.4
$ nvidia-smi
| NVIDIA-SMI 570.133.07             Driver Version: 570.133.07     CUDA Version: 12.8  
# yep I am a baddy, I installed drivers in NVIDIA recommended way :3  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, I installed the NVIDIA drivers, rebooted, and logged into a KDE X session (the Wayland session was not starting). Then, I just followed the steps from the &lt;a href="https://wiki.debian.org/NvidiaGraphicsDrivers#Wayland" rel="noopener noreferrer"&gt;Debian documentation on NVIDIA drivers installation&lt;/a&gt;, section on Wayland support.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo vim /etc/default/grub.d/nvidia-modeset.cfg
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX nvidia-drm.modeset=1"

$ sudo update-grub

$ sudo apt install nvidia-suspend-common

$ sudo systemctl enable nvidia-suspend.service
$ sudo systemctl enable nvidia-hibernate.service
# This command will fail if you did not reboot after nvidia drivers installation and nouveau drivers are still in use by your Debian
$ sudo systemctl enable nvidia-resume.service

$ cat /proc/driver/nvidia/params | grep PreserveVideoMemoryAllocations
PreserveVideoMemoryAllocations: 1

# If this parameter is set to zero, you should be able to override it by adding
# a configuration into modprobe.d (assuming the file doesn't already exist):

$ sudo vim /etc/modprobe.d/nvidia-power-management.conf
options nvidia NVreg_PreserveVideoMemoryAllocations=1

$ sudo reboot 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the reboot, voilà!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1soyrhhylkte9c2x6g6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1soyrhhylkte9c2x6g6.png" alt=" " width="795" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>debian</category>
      <category>nvidia</category>
      <category>linux</category>
      <category>kernel</category>
    </item>
    <item>
      <title>Debian Secure Boot: To be, or not to be, that is the question!</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Thu, 28 Nov 2024 22:30:14 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/debian-secure-boot-to-be-or-not-to-be-that-is-the-question-1o82</link>
      <guid>https://dev.to/dev-charodeyka/debian-secure-boot-to-be-or-not-to-be-that-is-the-question-1o82</guid>
      <description>&lt;p&gt;&lt;em&gt;While all my articles are about Debian, this article can still be useful for other Linux OSs since it’s focused on the logic behind Secure Boot.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;NB! In this article, I will show how to install NVIDIA drivers on a setup with Secure Boot enabled. However, I will not go into the peculiarities of the NVIDIA driver installation itself; I have published a separate &lt;a href="https://dev.to/dev-charodeyka/debian-12-nvidia-drivers-18dh"&gt;article&lt;/a&gt; on that topic. The NVIDIA drivers in this article serve as an example of how to install kernel modules signed for Secure Boot.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;🔮🧙‍♀️: If you’re here reading this article, it’s because:&lt;br&gt;
&lt;strong&gt;A.&lt;/strong&gt; You’ve encountered a huuuuugeeee problem with your Linux OS because of Secure Boot (most probably your problem is &lt;em&gt;goodbye 🫡 nvidia kernel module (a.k.a driver) after booting with Secure Boot enabled&lt;/em&gt;).&lt;br&gt;
&lt;strong&gt;B.&lt;/strong&gt; You’re a security freak and firmly believe that if there’s a security mechanism, it is absolutely UNACCEPTABLE not to use it on your OS! &amp;lt;--- I am here.&lt;/p&gt;

&lt;p&gt;Here are some alternative scenarios that might lead you to start googling "How to Enable Secure Boot on OS X (Debian, I hope)":&lt;/p&gt;

&lt;p&gt;So, you’re using your OS, enjoying it, 🦄🌈… and then something happens that pushes you to start digging through forums about Secure Boot. Two possible scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You’re using a dual boot setup with Windows and recently upgraded to Windows 11. Windows 11's minimum requirements include Secure Boot. As a result, you disable Secure Boot in the BIOS to log into your Debian system, then re-enable it in the BIOS to log back into Windows. And you are fed up doing this.&lt;/li&gt;
&lt;li&gt;To install and play a video game (like Valorant, FIFA), your system needs Secure Boot enabled. In this case, Secure Boot acts as part of the game’s anti-cheat mechanisms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anyway, I guess you have your reasons to read this article, so here is the roadmap:&lt;/p&gt;

&lt;p&gt;1 Understanding what Secure Boot actually is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1.1 What is UEFI&lt;/li&gt;
&lt;li&gt;1.2 UEFI vs BIOS&lt;/li&gt;
&lt;li&gt;1.3 Secure Boot: myths&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2 What are Kernel Modules and their origins: Linux Kernel Upstream, DKMS and third party&lt;/p&gt;

&lt;p&gt;3 Secure Boot Setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3.1 Generating your own signatures&lt;/li&gt;
&lt;li&gt;3.2 Enabling Secure Boot&lt;/li&gt;
&lt;li&gt;3.3 About MOK and how to manage it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4 Configuring DKMS to automatically sign kernel modules&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4.1 "Outsource" signing keys generation to DKMS&lt;/li&gt;
&lt;li&gt;4.2 Creating and Enrolling your own Machine Owner Key; pointing DKMS to use it &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5 Using your MOK to sign installed kernel modules manually&lt;/p&gt;


&lt;h3&gt;
  
  
  1. What is Secure Boot?
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;UEFI Secure Boot (SB) is a verification mechanism for ensuring that code launched by a computer's UEFI firmware is trusted. It is designed to protect a system against malicious code being loaded and executed early in the boot process, before the operating system has been loaded. (&lt;a href="https://wiki.debian.org/SecureBoot" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's decompose it. First, what is &lt;strong&gt;UEFI&lt;/strong&gt;?&lt;/p&gt;
&lt;h4&gt;
  
  
  1.1 About UEFI
&lt;/h4&gt;

&lt;p&gt;UEFI stands for &lt;strong&gt;U&lt;/strong&gt;nified &lt;strong&gt;E&lt;/strong&gt;xtensible &lt;strong&gt;F&lt;/strong&gt;irmware &lt;strong&gt;I&lt;/strong&gt;nterface. &lt;/p&gt;
&lt;h5&gt;
  
  
  &lt;strong&gt;U&lt;/strong&gt;nified ...
&lt;/h5&gt;

&lt;p&gt;If you have more than one OS on your PC, even not in dual boot (when more than one OS resides on the &lt;strong&gt;same&lt;/strong&gt; storage device), you will still see in BIOS* ONLY ONE storage device to boot from. That is because UEFI has &lt;strong&gt;one bootable partition&lt;/strong&gt; (ESP) &lt;strong&gt;for the whole PC&lt;/strong&gt;. Usually, it is created when you install the first OS, and if you then install another OS on an additional disk, booting will still be done from the first disk, where the first installed OS resides. So, UEFI has to be intelligent enough to load firmware based on your choice of where to boot from. For example, UEFI will have to load one set of firmware for booting Debian and another for Windows.&lt;/p&gt;
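&lt;p&gt;You can see this single-ESP layout on a running Debian system; a quick look (assuming the common &lt;code&gt;/boot/efi&lt;/code&gt; mount point, which not every setup uses):&lt;/p&gt;

```shell
# Show where the single EFI System Partition (ESP) is mounted, if any
if findmnt /boot/efi; then
  :                            # typical Debian UEFI installs mount the ESP here
else
  echo "no ESP mounted at /boot/efi"
fi
# The firmware's boot entries can be listed with:
# sudo efibootmgr
```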
&lt;h4&gt;
  
  
  ...&lt;strong&gt;F&lt;/strong&gt;irmware &lt;strong&gt;I&lt;/strong&gt;nterface.
&lt;/h4&gt;

&lt;p&gt;So it is about firmware, as you can guess. Why do we talk about firmware? Is it UEFI who messes up your Bluetooth or Wi-Fi dongles? No, it is not that.&lt;/p&gt;

&lt;p&gt;First, a quick reminder: &lt;em&gt;what is firmware&lt;/em&gt;? Firmware refers to embedded software which controls electronic devices. So your PC is just a bunch of electronic devices — hardware — CPU, motherboard, video card, SSD disks, some ports like USB ports with something, or ports for cables. You plug in your PC, so all these guys get energy to run and do something... but do what?&lt;/p&gt;

&lt;p&gt;When you press the power-on button, all these guys do not just start running and doing something. First, of course, they are having a kind of brain—it is the CPU. But what should the CPU do? Okay, you power-on your PC. If you press some combination of keys that depends on your motherboard brand, you GIVE INSTRUCTION to enter BIOS*. Then in BIOS*, you give instruction from which storage device (disk) to &lt;em&gt;boot&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Okay, you selected to &lt;em&gt;boot&lt;/em&gt; from SSD1, where you know you have your Debian OS. You exit BIOS* and wait for the boot to finish. But what is going on behind the scenes? WHO is instructing what to do, which hardware to use, and the sequence of actions that should be performed to bring UP your OS? Your OS has a bootloader, which is stored in the &lt;code&gt;/boot&lt;/code&gt; directory/partition. So UEFI just executes the "instructions" found there &lt;strong&gt;AND provides access to the firmware&lt;/strong&gt; to employ the hardware for this boot process. &lt;/p&gt;

&lt;p&gt;I guess you know that the core of a Linux OS is the Linux kernel. &lt;strong&gt;Once launched&lt;/strong&gt; by UEFI, it takes control from UEFI and does the rest to boot your OS. &lt;strong&gt;The key phrase related to Secure Boot here is "once launched"&lt;/strong&gt;. If &lt;strong&gt;UEFI refuses&lt;/strong&gt; to execute/launch the kernel of your OS, you are not booting anywhere. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;It is important to know that when you freshly install Debian, it has ONLY the kernel gently and with love packed for you by the Debian devs. BUT eventually, when you start to perform post-installation configuration, you will very probably start installing some kernel modules that are like additions to your system's Linux kernel: they ALLOW you to use some hardware IN A DIFFERENT way from the default one defined by the Linux kernel. Sometimes it is about allowing you to use a piece of hardware at all, because for some reason it is not possible within the default Linux kernel.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When you install such additional kernel modules, they should also be launched during the booting process together with your OS's kernel.&lt;/p&gt;

&lt;p&gt;Coming back to the subject of this article: it is &lt;strong&gt;UEFI Secure Boot&lt;/strong&gt; that can completely block your booting process by refusing to load your OS's kernel, or, in the most common scenario, it just refuses to load the kernel modules that you have installed additionally.&lt;/p&gt;


&lt;h3&gt;
  
  
  1.2 UEFI vs BIOS
&lt;/h3&gt;

&lt;p&gt;If you noticed earlier, whenever I mentioned BIOS, I was adding an asterisk (*). Now, I’d like to explain why, to avoid any confusion. You see, the term "BIOS" has become a widely used word, often associated with a specific tool that you access mainly to change the booting process, like booting from an installation medium with a new OS or changing the boot order. Over time, the word "BIOS" stuck and is now commonly used in this context. However, from a technical standpoint, it’s no longer accurate to call this "startup tool" BIOS. BIOS stands for Basic Input/Output System.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The BIOS in older PCs initializes and tests the system hardware components (power-on self-test or POST for short), and loads a boot loader from a mass storage device which then initializes a kernel. (&lt;a href="https://en.wikipedia.org/wiki/BIOS" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, BIOS has been legacy (a term used for "outdated" software that is still in use) for years. If you’re using a relatively modern machine, what you’re actually using is UEFI, the successor to BIOS, designed to overcome its technical limitations. Some operating systems, like Windows 11, have completely dropped support for BIOS firmware and won’t run on older hardware that relies on it. Additionally, Intel has discontinued legacy BIOS support on its platforms entirely (&lt;a href="https://www.intel.com/content/www/us/en/content-details/630266/removal-of-legacy-boot-support-for-intel-platforms-technical-advisory.html" rel="noopener noreferrer"&gt;source&lt;/a&gt;). &lt;/p&gt;

&lt;p&gt;When you access the UEFI BIOS on your PC, you manage it via a minimalistic graphical user interface (GUI) rather than the terminal-like interface that legacy BIOS had. You can navigate using a mouse, and there might even be graphical metrics displaying your PC's health and performance. At first glance, you might think UEFI is just the "bad guy" who slapped a GUI onto the old BIOS while introducing complexities like Secure Boot, or the frustration of a single boot partition (ESP) that prevents you from keeping "all parts" of an OS, including the bootloader, together on one disk: when running multiple operating systems on one PC, UEFI dictates that all bootloaders are stored on the same disk, where the first OS you installed resides.&lt;/p&gt;

&lt;p&gt;However, that negative perception of UEFI isn’t accurate. Legacy BIOS, being a pretty old technology, had significant limitations that made it unsuitable for modern hardware. For instance, it couldn’t handle large storage devices (beyond 2TB), relied on MBR instead of GPT partitioning, and made recovery efforts more challenging when things went wrong. BIOS simply couldn’t meet the demands of modern PCs. So, even if it is possible for you to stick with BIOS, there’s really no reason to cling to such an outdated technology.&lt;/p&gt;

&lt;p&gt;By now, you’ve probably realized that Secure Boot is an exclusive feature of UEFI—it’s simply not possible with BIOS. &lt;/p&gt;


&lt;h3&gt;
  
  
  1.3 Secure Boot: myths
&lt;/h3&gt;

&lt;p&gt;First, a more technical elaboration on what Secure Boot actually is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Secure Boot (SB) works using cryptographic checksums and signatures. Each program that is loaded by the firmware includes a signature and a checksum, and before allowing execution the firmware will verify that the program is trusted by validating the checksum and the signature. When SB is enabled on a system, any attempt to execute an untrusted program will not be allowed. This stops unexpected / unauthorised code from running in the UEFI environment. (&lt;a href="https://wiki.debian.org/SecureBoot" rel="noopener noreferrer"&gt;SecureBoot - Debian Wiki&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
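&lt;p&gt;To make the "checksums and signatures" idea from the quote concrete, here is a minimal sketch of the same primitive (sign a binary, then verify it before trusting it) using &lt;code&gt;openssl&lt;/code&gt;. This only illustrates the mechanism (it is not what UEFI firmware literally runs), and all file names are made up for the demo:&lt;/p&gt;

```shell
# Generate a throwaway RSA keypair; in Secure Boot, the public half
# lives in the firmware's key database.
openssl genpkey -algorithm RSA -out /tmp/demo.key 2>/dev/null
openssl pkey -in /tmp/demo.key -pubout -out /tmp/demo.pub

# A stand-in for a "bootloader" to protect.
echo "bootloader code" > /tmp/binary.efi

# Sign: hash the file and encrypt the digest with the private key.
openssl dgst -sha256 -sign /tmp/demo.key -out /tmp/binary.sig /tmp/binary.efi

# Verify before "executing" - conceptually what SB does for every loaded program.
openssl dgst -sha256 -verify /tmp/demo.pub -signature /tmp/binary.sig /tmp/binary.efi
```

&lt;p&gt;If the file or the signature is tampered with, the last command reports a verification failure and exits non-zero; that refusal is, in essence, what Secure Boot does at boot time.&lt;/p&gt;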

&lt;p&gt;&lt;strong&gt;Myth 1: Secure Boot is a sneaky tool developed by Microsoft to block the use and spread of Linux OS.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This perspective can be found all over Google, and the myth might even seem plausible if you explore your UEFI BIOS settings to enable Secure Boot. For instance, on my ASUS motherboard, enabling Secure Boot requires changing the OS type to "Windows OS." By default, it’s set to "Other OS," which essentially means Secure Boot is disabled.                      &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhui71s43odiqy0ukjuad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhui71s43odiqy0ukjuad.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.asus.com/support/faq/1049829/" rel="noopener noreferrer"&gt;How to enable Secure Boot on ASUS motherboard&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Even though I do not use Windows on my PC, my OS type is set to "Windows OS" to have Secure Boot enabled. How does Windows OS or Microsoft enter into the booting process when you enable Secure Boot? I continue quoting the Debian documentation on Secure Boot:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Most x86 hardware comes from the factory pre-loaded with Microsoft keys. This means the firmware on these systems will trust binaries that are signed by Microsoft. Most modern systems will ship with SB enabled - they will not run any unsigned code by default. Starting with Debian version 10 ("Buster"), Debian supports UEFI secure boot by employing a small UEFI loader called &lt;code&gt;shim&lt;/code&gt; which is signed by Microsoft and embeds Debian's signing keys. This allows Debian to sign its own binaries without requiring further signatures from Microsoft. Older Debian versions did not support secure boot, so users had to disable secure boot in their machine's firmware configuration prior to installing those versions. (&lt;a href="https://wiki.debian.org/SecureBoot" rel="noopener noreferrer"&gt;SecureBoot - Debian Wiki&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Debian developers dealt with the Microsoft signatures, and Debian has its UEFI bootloader signed. This means you don’t need to deal with Microsoft yourself, register anything, or go through any registration/login process, etc.&lt;/p&gt;

&lt;p&gt;If you’re using the default Debian installer and an "official" Debian Linux kernel (from Debian package repos), Secure Boot shouldn’t pose any problems for booting. However, if your OS doesn’t support Secure Boot or its UEFI loader isn’t signed with Microsoft’s certificates—or if you’re running a custom kernel—this will block the booting process until you have these custom components signed. Again, this doesn’t mean that Microsoft is in any way involved in that signing process.&lt;/p&gt;

&lt;p&gt;Secure Boot is about &lt;strong&gt;signatures&lt;/strong&gt; in general, ensuring that only trusted and verified software can run during the boot process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;UEFI Secure Boot is not an attempt by Microsoft to lock Linux out of the PC market here; SB is a security measure to protect against malware during early system boot. Microsoft act as a Certification Authority (CA) for SB, and they will sign programs on behalf of other trusted organisations so that their programs will also run. There are certain identification requirements that organisations have to meet here, and code has to be audited for safety. But these are not too difficult to achieve. (&lt;a href="https://wiki.debian.org/SecureBoot#MOK_-_Machine_Owner_Key" rel="noopener noreferrer"&gt;SecureBoot - Debian Wiki&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;


&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Myth 2: Secure Boot is a security measure you should only consider enabling and configuring if your personal setup is at considerable risk of being physically accessed by unauthorized individuals intent on accessing your data (by booting from an external medium) or infecting your system with a malicious kernel or kernel components.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While this statement is partially true—because Secure Boot does indeed protect against physical threats—it’s not limited to that. As I mentioned earlier, one of the reasons you might have found this article is because your NVIDIA drivers aren’t loading after enabling Secure Boot. So, how does that relate to physical access? No one physically implanted those drivers into your system, I assume that you installed them normally while your OS was running. Yet, they’re still being blocked from loading. &lt;/p&gt;

&lt;p&gt;That’s exactly what Secure Boot is about: verifying that everything loaded during the boot process (and sometimes even after) is trusted and properly signed. It’s not just about protecting against unauthorized physical access.&lt;/p&gt;

&lt;p&gt;Think of it like going to a bank to request a new product. If the bank is reputable, that’s one thing, but in some cases, a bank can overwhelm you with a pile of documents to sign just to get the product you want. Buried within those documents—perhaps on page N of the nth document—might be a clause stating you agree to an additional product, like a credit card with a sky-high interest rate. The moment you sign, that agreement with the bank takes effect. So, it’s always a good idea to read the details carefully before signing anything.&lt;/p&gt;

&lt;p&gt;Secure Boot works in a similar way. UEFI will only load kernels or kernel modules that are signed—anything else gets blocked. Disabling Secure Boot, on the other hand, is like walking into a bank where all their products take effect automatically, no signatures required, just because you asked for them. This lack of "signing step" opens the door to risks.&lt;/p&gt;

&lt;p&gt;Above, I’ve mentioned the term &lt;em&gt;kernel modules&lt;/em&gt; several times in relation to NVIDIA drivers. But do all your system's drivers act as kernel modules and get installed in the same way? &lt;strong&gt;No.&lt;/strong&gt; And this is where a potential security gap arises, one that Secure Boot can help protect against. If you’re not sure what exactly you’re installing at a given moment—whether it’s just application software or a kernel module—Secure Boot can step in as a protection layer.&lt;/p&gt;

&lt;p&gt;With NVIDIA drivers, it’s a one-way street: you need them to function, so whether they are kernel modules or not, you’re going to install them. But let’s look at another example: imagine your Bluetooth, touch screen, or some other hardware on your laptop isn’t working on a Linux OS. You hit the forums, dig through solutions, and start following instructions: “install this,” “run this command,” “clone this repo,” “build it with &lt;code&gt;make&lt;/code&gt;,” and so on. &lt;strong&gt;If you don’t fully understand what you’re doing, you’re exposing your system to risks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Secure Boot won’t protect you from malicious applications you might install, but it will protect your system from malicious kernel additions. If a suspicious or unsigned kernel module tries to load, Secure Boot will block it—end of story. &lt;/p&gt;

&lt;p&gt;So, this &lt;em&gt;refusal&lt;/em&gt; triggered by Secure Boot can act as a flag, signaling that what you have installed is a kernel module. This gives you the chance to investigate further, understand what you actually installed, and make an informed decision about whether to &lt;em&gt;sign&lt;/em&gt; it and enable it or leave it blocked for your safety. &lt;/p&gt;


&lt;h3&gt;
  
  
  2. What kernel modules are and where they come from: Linux kernel upstream, DKMS and third party
&lt;/h3&gt;

&lt;p&gt;Understanding what kernel modules are and where they come from is key to making all of your OS's components Secure Boot compliant, so they do not fall apart at the booting step.&lt;/p&gt;

&lt;p&gt;To repeat: when it comes to hardware devices, it’s about drivers, not just software or apps. And drivers on Linux are kernel modules:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Linux device drivers come in the form of kernel modules - object files which may be loaded into the running kernel to extend its functionality. (&lt;a href="https://kernel-team.pages.debian.net/kernel-handbook/ch-modules.html" rel="noopener noreferrer"&gt;Debian: Managing the kernel modules&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In case of Debian, where do these drivers come from? Who adds them to the OS?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Under Debian, the module can be installed from three different kind of sources:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Upstream Linux kernel modules: Those are shipped in the linux-image-* kernel packages.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Extra modules, that aren't in the upstream Linux kernel. Those are usually built using dkms. The available modules can be listed by running &lt;code&gt;apt rdepends dkms&lt;/code&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Others, like third party, proprietary and other or binary blobs modules... You should not install such modules on your system except when you have no other choice. (&lt;a href="https://wiki.debian.org/Modules" rel="noopener noreferrer"&gt;Modules - Debian Wiki&lt;/a&gt;)&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most of your PC's hardware relies on drivers from the upstream Linux kernel modules, which are included in the &lt;code&gt;linux-image-*&lt;/code&gt; Debian package. That package is installed during OS installation, is versioned like any other package, and can be upgraded.&lt;/p&gt;
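&lt;p&gt;You can inspect this on your own machine. A small sketch (module names vary per system; &lt;code&gt;ext4&lt;/code&gt; is just a common in-tree example) - on Debian, modules shipped with &lt;code&gt;linux-image-*&lt;/code&gt; carry the distro's signature, visible in the &lt;code&gt;signer&lt;/code&gt; field of &lt;code&gt;modinfo&lt;/code&gt;:&lt;/p&gt;

```shell
# List a few currently loaded kernel modules (reads /proc/modules).
lsmod | head -n 5

# Inspect one module's metadata; the "signer" field shows who signed it.
modinfo ext4 2>/dev/null | grep -E '^(filename|license|signer)' \
  || echo "modinfo/ext4 not available on this system"
```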

&lt;p&gt;Then, there's the DKMS source of kernel modules. NVIDIA drivers, for example, fall into this category. They are third-party kernel modules built using DKMS (Dynamic Kernel Module Support), so they aren’t included directly in the upstream kernel, but neither are they just binary blobs. This makes them part of the second category, not the third, in the list of driver sources I mentioned above.&lt;/p&gt;
&lt;h4&gt;
  
  
  What is DKMS?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;DKMS (Dynamic Kernel Module Support Framework) is a framework designed to allow individual kernel modules to be upgraded without changing the whole kernel. It is also very easy to rebuild modules as you upgrade kernels (&lt;a href="https://packages.debian.org/bookworm/dkms" rel="noopener noreferrer"&gt;Debian - Details of package dkms in bookworm&lt;/a&gt;).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Kernel modules in this category can be found in the Debian repos, though not always in the &lt;em&gt;main&lt;/em&gt; component of a repo: they may also live in the &lt;em&gt;non-free&lt;/em&gt;, &lt;em&gt;contrib&lt;/em&gt; and &lt;em&gt;non-free-firmware&lt;/em&gt; components. (If you do not know how Debian ships its software, or what Debian repos are and how to manage them, you can check these articles: &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-14-57b1"&gt;1&lt;/a&gt;, &lt;a href="https://dev.to/dev-charodeyka/debian-12-is-amazing-how-to-create-your-custom-codehouse-part-3a4-3fbo"&gt;2&lt;/a&gt;.) In most cases, you can find drivers that can be built with DKMS in the Debian repos.&lt;/p&gt;

&lt;p&gt;Third-party binaries that build kernel modules for you may appear on your horizon when some hardware in your machine doesn’t work on your Debian setup and drivers aren’t available in Debian’s package repositories. This often happens with Intel wireless adapters, Realtek Wi-Fi chipsets, Bluetooth devices, input devices like keyboards or mice, USB storage devices, and so on.&lt;/p&gt;

&lt;p&gt;However, when a piece of hardware does not work or function well on your PC, it’s important to understand this difference: driver ≠ firmware! If firmware is missing on your OS, Debian usually informs you (with a warning message in &lt;code&gt;dmesg&lt;/code&gt; or &lt;code&gt;journalctl&lt;/code&gt;, or when you update your kernel). If Debian informs you, that means it did not manage to fetch the firmware from any repo you have enabled for &lt;code&gt;apt&lt;/code&gt;. You may need to add additional repositories (like &lt;code&gt;bookworm-backports&lt;/code&gt;) or add components to the stable package repo (&lt;code&gt;contrib&lt;/code&gt;, &lt;code&gt;non-free&lt;/code&gt;, &lt;code&gt;non-free-firmware&lt;/code&gt;). As a last resort, you can fetch missing firmware directly from the &lt;a href="https://git.kernel.org/" rel="noopener noreferrer"&gt;Linux repository&lt;/a&gt; (&lt;a href="https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/rtl_nic/" rel="noopener noreferrer"&gt;example&lt;/a&gt; of available firmware for Realtek WiFi chipsets). Firmware is tightly connected to the hardware device itself, essentially controlling the small controllers and chips within the hardware. The firmware is called upon by the driver when the device is in use.&lt;/p&gt;

&lt;p&gt;So, if you have firmware but no driver for the device, you won’t be able to use it. Conversely, if there’s a working driver but it can’t find the correct firmware—or the firmware in use is faulty—the device won’t function properly.&lt;/p&gt;

&lt;p&gt;For Secure Boot, you don’t need to sign firmware (&lt;code&gt;.fw&lt;/code&gt; files), but kernel module drivers (&lt;code&gt;.ko&lt;/code&gt; files) installed separately (ones not shipped with your OS from the start) do require signing. This is because firmware is invoked by kernel modules, and if those modules are untrusted and not loaded by UEFI, the firmware becomes irrelevant—no one calls or uses it.&lt;/p&gt;
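&lt;p&gt;For completeness, signing a single already-built &lt;code&gt;.ko&lt;/code&gt; by hand is done with the &lt;code&gt;sign-file&lt;/code&gt; helper shipped with the kernel build scripts. A hedged sketch: the helper's location, the key paths and the module path below are examples and vary per kernel version and driver:&lt;/p&gt;

```shell
# sign-file ships with the kernel build scripts (linux-headers package on Debian);
# its exact path varies, so locate it first.
SIGN_FILE=$(find /usr/lib /usr/src -name sign-file -type f 2>/dev/null | head -n 1)
MODULE="/lib/modules/$(uname -r)/updates/dkms/nvidia-current.ko"  # example path

if [ -n "$SIGN_FILE" ] && [ -f "$MODULE" ]; then
  # usage: sign-file HASH-ALGO PRIVATE-KEY X509-CERT-DER MODULE (run as root)
  "$SIGN_FILE" sha256 /var/lib/shim-signed/mok/MOK.priv \
      /var/lib/shim-signed/mok/MOK.der "$MODULE" \
    || echo "signing failed - check the key paths"
else
  echo "sign-file or the module not found - install linux-headers-$(uname -r) and the driver first"
fi
```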

&lt;p&gt;With that said, let me conclude about third-party binaries that build kernel modules: use them only in cases of absolute necessity. Don’t resort to them to make some dubious Aliexpress USB “Super Mega Ultra Cool SSD 1,000,000 TB USB3.0” or a random WiFi/Bluetooth USB device work, and do not pull something weird onto your OS just to play with the LED colours of your keyboard, following the first guide you find. While it’s fine for experimental purposes, such as learning how hardware integrates with an OS, I strongly advise against using these solutions long term. If something goes wrong, at best, you risk breaking your OS; at worst, you breach the security of your system.&lt;/p&gt;


&lt;h3&gt;
  
  
  3. Secure Boot Setup
&lt;/h3&gt;

&lt;p&gt;So, now that I’ve hopefully explained the details about Secure Boot, I can proceed with setting it up. In the end, it all comes down to ensuring that your system’s kernel and kernel modules are signed—either with Microsoft signatures or manually using your own signature. A signature, in the context of Secure Boot, is essentially a key.&lt;/p&gt;
&lt;h4&gt;
  
  
  3.1 Generating your own signatures
&lt;/h4&gt;

&lt;p&gt;If you’re a software developer, you may already be familiar with the concept of keys. For example, authentication on GitHub uses ssh keys, and securing your app’s traffic over the HTTPS protocol involves generating certificates and keys. Similarly, some high-security applications use this method of authentication, where access to certain resources is allowed only from specific devices. In such cases, you, &lt;em&gt;the owner of the device (PC)&lt;/em&gt;, generate a key pair—public and private keys—and share the public key with the provider (the app's owner). This key is then enrolled in their authentication system, allowing the server to use your shared public key to authenticate your private key when you connect. Your private key always remains only on your device (PC); it should never be shared and must be stored securely. But if you store it securely, there needs to be a system that manages your keys. Otherwise, when a request is sent, it’s not as if the authentication mechanism of some app X will search through all the directories on your PC to find a matching private key. Instead, keys need to be created, registered, and enrolled in a structured way.&lt;/p&gt;

&lt;p&gt;On Linux OSs, for Secure Boot, all "signature" keys are managed by &lt;code&gt;shim&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;shim is a simple software package that is designed to work as a first-stage bootloader on UEFI systems.&lt;br&gt;
A key part of the shim design is to allow users to control their own systems. The distro CA key is built in to the shim binary itself, but there is also an extra database of keys that can be managed by the user, the so-called Machine Owner Key (MOK for short).(&lt;a href="https://wiki.debian.org/SecureBoot#MOK_-_Machine_Owner_Key" rel="noopener noreferrer"&gt;SecureBoot - Debian Wiki&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Decomposing it: the &lt;strong&gt;Machine&lt;/strong&gt; is your PC. The &lt;strong&gt;Machine Owner&lt;/strong&gt; is you. If you’re using a Linux distro that managed to obtain a Microsoft signature (like Debian), your system’s bootloader is already signed (&lt;strong&gt;with the distro CA key&lt;/strong&gt;) and compatible with Secure Boot—but not by you, the machine owner. This means the kernel and the kernel modules that came as part of your distro by default are already signed, and you will boot under Secure Boot with no problem.&lt;/p&gt;

&lt;p&gt;However, any kernel module you install or build manually after the initial installation must be signed by you, using your own key—the &lt;strong&gt;Machine Owner Key&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  3.2 Enabling Secure Boot
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Secure Boot is possible only if it is enabled&lt;/strong&gt; and only on UEFI BIOS. To check this, go to your BIOS* before booting into any installed OS. Look at the title—if you see "UEFI," you’re good to go. However, if your BIOS* has a terminal-like, scary interface and you only see "BIOS" in the titles—sorry, this is the end of the story for you. No Secure Boot.&lt;/p&gt;
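&lt;p&gt;If you already have a Linux system installed, you can also check the firmware type without rebooting: the kernel exposes &lt;code&gt;/sys/firmware/efi&lt;/code&gt; only when the machine was booted via UEFI. A quick sketch:&lt;/p&gt;

```shell
# If this directory exists, the OS was booted via UEFI; otherwise legacy BIOS (or CSM).
if [ -d /sys/firmware/efi ]; then
  echo "UEFI boot"
else
  echo "legacy BIOS boot"
fi
```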

&lt;p&gt;Anyway, I’ll assume you have a UEFI BIOS. How to enable Secure Boot depends on the brand of your motherboard, as each manufacturer designs it differently. My motherboard is ASUS, so I simply searched online for "enable Secure Boot ASUS motherboard." For me, the guide is &lt;a href="https://www.asus.com/support/faq/1049829/" rel="noopener noreferrer"&gt;this one&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Remember, if you’re using a custom kernel or a distro that doesn’t support Secure Boot by default, you won’t be able to boot once you enable Secure Boot. Unfortunately, I can’t provide a guide for that case. I’m a Debian girl and I don’t use any other distro (except for Arch ho-ho). However, if you are in that situation, the idea is to do the steps I will describe below &lt;strong&gt;before enabling Secure Boot&lt;/strong&gt; (and not only that: you will also have to deal with GRUB). Once everything is signed for UEFI Secure Boot, you can enable it in BIOS*.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer: For the next steps, I’m not reinventing the wheel—I’m simply following the &lt;a href="https://wiki.debian.org/SecureBoot" rel="noopener noreferrer"&gt;Debian Guide on Secure Boot&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;


&lt;h4&gt;
  
  
  3.3 About MOK and how to manage it
&lt;/h4&gt;

&lt;p&gt;Let’s align our starting point: there are two possible scenarios:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario&lt;/strong&gt; 1. You have a fresh install and are about to install some drivers—a kernel module, most likely NVIDIA drivers. In this case, you have two options, both of which come down to configuring DKMS to sign all modules it builds by default: either with keys DKMS generates itself, or with a key you generate. Remember the DKMS source for your kernel modules? Once DKMS is configured, any module built with it—like NVIDIA drivers on Debian—will already be signed. &lt;br&gt;
&lt;strong&gt;Scenario&lt;/strong&gt; 2. You already have NVIDIA drivers installed, but they don’t work because of Secure Boot. Here, you have two ways to fix this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure DKMS to sign your modules and then reinstall the drivers.&lt;/li&gt;
&lt;li&gt;Manually sign the kernel modules that are already installed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will cover both scenarios. &lt;/p&gt;

&lt;p&gt;After enabling Secure Boot in the UEFI BIOS, boot into your Debian and check that Secure Boot is enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo mokutil --sb-state
SecureBoot enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the list of already enrolled keys with command &lt;code&gt;sudo mokutil --list-enrolled&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobzfoo8jx2wcdz3enmjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobzfoo8jx2wcdz3enmjx.png" alt=" " width="508" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you haven't tried doing anything with key enrollment before, you will most probably see two enrolled keys: one is the Debian key, and the other is the Debian DKMS module signing key. However, even if the latter appears in the list, it does not mean it will automatically be used to sign any additional kernel modules you are about to build (or install). While there is functionality to export the enrolled MOKs, &lt;code&gt;mokutil --export&lt;/code&gt; will export all enrolled keys in &lt;code&gt;.der&lt;/code&gt; format, and a &lt;code&gt;.der&lt;/code&gt; key is similar to a &lt;code&gt;.pub&lt;/code&gt; key - they are "public" by nature. &lt;code&gt;mokutil --export&lt;/code&gt; will never be able to extract the private keys that were paired with those &lt;code&gt;.der&lt;/code&gt; keys. And without the private key matching an extracted &lt;code&gt;.der&lt;/code&gt; key, you will not be able to sign anything.&lt;/p&gt;
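&lt;p&gt;To see for yourself that a &lt;code&gt;.der&lt;/code&gt; file holds only public material, you can create and inspect a throwaway certificate with &lt;code&gt;openssl&lt;/code&gt; (the paths and the CN below are made up for the demo):&lt;/p&gt;

```shell
# Make a throwaway keypair plus a DER certificate, the same shape as a MOK.
openssl req -x509 -nodes -newkey rsa:2048 -keyout /tmp/demo-mok.priv \
  -outform DER -out /tmp/demo-mok.der -days 1 -subj "/CN=Demo MOK/" 2>/dev/null

# Inspect the DER: only public data (subject, issuer, public key) - no private key inside.
openssl x509 -inform der -in /tmp/demo-mok.der -noout -subject -issuer
```

&lt;p&gt;Both printed lines mention the CN "Demo MOK"; the private half stays in the separate &lt;code&gt;.priv&lt;/code&gt; file, which is exactly what &lt;code&gt;mokutil --export&lt;/code&gt; can never give you.&lt;/p&gt;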

&lt;p&gt;Therefore, you will need to generate a new MOK for manually signing additional kernel modules you are about to install. By the way, the keys currently enrolled in UEFI, if you did not enroll anything before, are not MOK keys—they are owned by Debian, not you.&lt;/p&gt;

&lt;p&gt;In this article, I will be installing the &lt;code&gt;nvidia-driver&lt;/code&gt; package from the Debian bookworm repository (this package installs the "proprietary" flavour of the drivers). NVIDIA drivers are kernel modules, and they require a signature to be loaded under Secure Boot. If you &lt;strong&gt;install NVIDIA drivers in an adequate way&lt;/strong&gt; (for example with the &lt;code&gt;nvidia-driver&lt;/code&gt; Debian package), the NVIDIA kernel modules will be built using DKMS.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. Secure Boot Scenario 1 - Fresh install of NVIDIA drivers; configuring DKMS to automatically sign kernel modules.
&lt;/h3&gt;

&lt;p&gt;If you have been following the &lt;a href="https://wiki.debian.org/NvidiaGraphicsDrivers" rel="noopener noreferrer"&gt;official Debian guide&lt;/a&gt; on NVIDIA driver installation, you have probably come across the note about Secure Boot. If you clicked on the link with "detailed instructions", you were redirected to the page about &lt;a href="https://wiki.debian.org/SecureBoot#DKMS_and_secure_boot" rel="noopener noreferrer"&gt;Secure Boot and DKMS&lt;/a&gt;. However, if you are here reading this article, it probably didn’t work for you.&lt;/p&gt;

&lt;p&gt;I installed my Debian system using the netinstall &lt;code&gt;.iso&lt;/code&gt; with minimal installation, including only the standard system utilities. I had the kernel and the default kernel modules installed. However, I did not have the kernel headers installed, nor did I have DKMS installed.&lt;/p&gt;

&lt;p&gt;To check if your system has the dkms package installed, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo dpkg -l | grep dkms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even if the output of this command is empty, when you run &lt;code&gt;sudo apt install nvidia-driver&lt;/code&gt;, that package will also bring in the &lt;code&gt;dkms&lt;/code&gt; package as a dependency. &lt;/p&gt;
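&lt;p&gt;If you want to confirm that dependency chain before installing, &lt;code&gt;apt-cache&lt;/code&gt; can show it. A sketch (output depends on which repos and components you have enabled; &lt;code&gt;nvidia-driver&lt;/code&gt; lives in &lt;em&gt;non-free&lt;/em&gt;):&lt;/p&gt;

```shell
# Show what nvidia-driver would pull in; the DKMS-built kernel module package
# (nvidia-kernel-dkms) appears among its dependencies.
apt-cache depends nvidia-driver 2>/dev/null | grep -i dkms \
  || echo "nvidia-driver not in the enabled repos (enable non-free / non-free-firmware)"
```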

&lt;h4&gt;
  
  
  4.1 "Outsource" signing keys generation to DKMS
&lt;/h4&gt;

&lt;p&gt;So, the &lt;code&gt;dkms&lt;/code&gt; package will be installed as a dependency AS ONE OF THE STEPS of the &lt;code&gt;nvidia-driver&lt;/code&gt; package installation. Right after its installation, &lt;code&gt;dkms&lt;/code&gt; will build the NVIDIA kernel modules for you (because it knows how to do it and is instructed to do so during the installation process). And then, when the kernel modules are built... DKMS will sign them for you using &lt;strong&gt;its own signing keys&lt;/strong&gt;... which are not enrolled into your MOK manager and will NOT be acknowledged by UEFI at the next boot.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB! DKMS signs the kernel modules it builds automatically upon the installation/building completion. If you don’t explicitly tell DKMS (via its config file) to use a &lt;em&gt;specific keypair&lt;/em&gt;, DKMS will generate for itself a keypair automatically after building THE FIRST kernel module on your system. HOWEVER, of course, DKMS won’t be able to enroll these generated keys into your PC’s list of MOK-registered keys!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Where does DKMS place &lt;em&gt;its automatically generated keys&lt;/em&gt;? It places them in the &lt;code&gt;/var/lib/dkms/&lt;/code&gt; directory. This is not some sacred info: you can see where DKMS searches for keys in its config file with &lt;code&gt;cat /etc/dkms/framework.conf&lt;/code&gt; (the &lt;code&gt;mok_signing_key&lt;/code&gt; and &lt;code&gt;mok_certificate&lt;/code&gt; fields). You can verify that right after installing the &lt;code&gt;dkms&lt;/code&gt; package, this directory is empty. However, if you install a driver like the &lt;code&gt;nvidia-driver&lt;/code&gt; package - which pulls in DKMS as a dependency - and then list the contents of &lt;code&gt;/var/lib/dkms&lt;/code&gt; using &lt;code&gt;ls&lt;/code&gt;, you’ll see the keys DKMS used to sign the new modules. In general, if you’re attentive to the installation logs, you’ll also spot an entry indicating that DKMS is signing the built (installed) kernel modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And here is the cornerstone&lt;/strong&gt;. When DKMS generates a key pair automatically, it’s not exactly creative with naming—it defaults to something like &lt;strong&gt;DKMS Signing Key&lt;/strong&gt;. The issue is, if you run multiple OSes (like different Debians) across different disks, or you frequently install/reinstall your OS for a fresh start, the list of enrolled MOKs can get very cluttered. And cleaning that up later is a pain—especially if the names of different keys are all the same and don’t explain anything. NB! First, if you purge your OS from the disk, destroy all partitions, and install a new OS on top, none of this removes the already enrolled keys of the purged OS from the MOK list. Second, you cannot attach a reinstalled system to the enrolled keys of the previous OS: if the private key is gone (very probable if you wipe the entire OS from the disk to reinstall it), no signing or validating is possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, my personal choice is to keep control over key generation and use a keypair that I generate and label manually, rather than letting DKMS keep creating new ones and enrolling them each time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;However, if you prefer minimal effort and are OK with using the keys generated by DKMS, you just have to enroll the public key of the generated key pair:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo mokutil --import /var/lib/dkms/mok.pub # prompts for one-time password
$ sudo mokutil --list-new # recheck your key will be prompted on next boot

&amp;lt;rebooting machine then enters MOK manager EFI utility: enroll MOK, continue, confirm, enter password, reboot&amp;gt;

$ sudo dmesg | grep cert # verify your key is loaded
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4.2 Creating and Enrolling your own Machine Owner Key; pointing DKMS to use it
&lt;/h4&gt;

&lt;p&gt;In order to prevent DKMS from generating its own signing keys, and to keep everything tidy on my system with comprehensible labels, before installing any kernel module for the first time (usually it is NVIDIA), I first install dkms and point it at the keys it should use for signing.&lt;/p&gt;

&lt;p&gt;First, I create my MOK - Machine Owner Key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ su - #if you are logged
# mkdir -p /var/lib/shim-signed/mok/
# cd /var/lib/shim-signed/mok/
# openssl req -nodes -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -days 36500 -subj "/CN=My Name/"
# openssl x509 -inform der -in MOK.der -out MOK.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, I enroll this newly generated key, so it becomes acknowledged by &lt;code&gt;shim&lt;/code&gt; and then by UEFI after reboot:&lt;br&gt;
&lt;strong&gt;NB! Store the password you input securely; you will need it later!&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# exit 
$ sudo mokutil --import /var/lib/shim-signed/mok/MOK.der # prompts for one-time password
$ sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the reboot, the device firmware should launch its MOK manager and prompt you to review the new key and confirm its enrollment (press any key, then select the "Enroll MOK" option, then "Continue", then "Yes", insert the password you chose when you enrolled the key with &lt;code&gt;mokutil --import&lt;/code&gt;, then "Reboot"). After this, when you boot again, you can verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo mokutil --list-enrolled
$ sudo mokutil --test-key /var/lib/shim-signed/mok/MOK.der
/var/lib/shim-signed/mok/MOK.der is already enrolled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I have the key that I want DKMS to use. Instead of waiting for dkms to be installed as a dependency during the NVIDIA driver installation, I install it ahead of time and configure it to use the keys I want for signing. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;NB! If you have changed the kernel version on Debian in any way - the most common scenario being an upgrade of the Debian Stable kernel from the backports repo - you HAVE TO ensure that the kernel and kernel header versions are aligned! DKMS relies heavily on your system's kernel headers to build modules for your kernel. If the header version does not match the kernel version, modules will be built incorrectly and will most probably end up broken and not working.&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# your system's kernel version
$ uname -r 
# version of present kernel headers
$ dpkg -l | grep linux-headers-*
# if in the output you do not see linux-headers of the precisely matching version to the output of uname -r, you have to install them!
# you may also have more than one version of linux-headers,
# that is dure to the fact that anytime your OS kernel version is updated, 
# the old version of kernel is removed with sudo apt auptoremove,
# but headers may remain
# to install linux-headers that match precisely your system's current kernel version:
$ sudo apt install linux-headers-$(uname -r)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
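&lt;p&gt;The alignment check above boils down to a string comparison, so it can be scripted. A small sketch (the version strings and package list below are hypothetical examples; on a real system they come from &lt;code&gt;uname -r&lt;/code&gt; and &lt;code&gt;dpkg -l&lt;/code&gt;):&lt;/p&gt;

```shell
# Succeeds only if the installed-headers list contains an exact match
# for the given kernel version.
headers_match() {
  # $1: kernel version; $2: newline-separated linux-headers package names
  printf '%s\n' "$2" | grep -qx "linux-headers-$1"
}

# Hypothetical example data for illustration
installed="linux-headers-6.1.0-18-amd64
linux-headers-6.1.0-17-amd64"

headers_match "6.1.0-18-amd64" "$installed" && echo "aligned"
headers_match "6.5.0-5-amd64"  "$installed" || echo "missing headers"
```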



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbddb8817yr5dheu13y5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbddb8817yr5dheu13y5f.png" alt=" " width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Installing &lt;code&gt;dkms&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install dkms
#if you installed linux-image-* from bookworm-backports(kernel version &amp;gt;6.1.x)
#sudo apt install -t bookworm-backports dkms

Configuring dkms to use a specific keypair for signing:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to &lt;code&gt;/etc/dkms&lt;/code&gt; and run &lt;code&gt;ls&lt;/code&gt;; you should see the file &lt;code&gt;framework.conf&lt;/code&gt;. Use &lt;code&gt;cat&lt;/code&gt; to view its contents, and you will find the &lt;strong&gt;mok_signing_key&lt;/strong&gt; and &lt;strong&gt;mok_certificate&lt;/strong&gt; fields pointing to &lt;code&gt;/var/lib/dkms/..&lt;/code&gt;. This is the default directory where DKMS searches for keys to sign modules, and it is where DKMS will place automatically generated keys of its own if it is configured to look there and finds the directory empty.&lt;/p&gt;

&lt;p&gt;So the point is to modify the DKMS config file so that it looks for keys in the directory where I placed my MOK key pair. You simply need to uncomment and modify these lines, pointing dkms to the existing, enrolled MOK key pair:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /etc/dkms
cat /etc/dkms/framework.conf
sudo vim /etc/dkms/framework.conf
#delete the '#' at the beginning of these lines!
mok_signing_key="/var/lib/shim-signed/mok/MOK.priv" 
mok_certificate="/var/lib/shim-signed/mok/MOK.der"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
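&lt;p&gt;If you prefer a non-interactive edit over vim, the same change can be made with &lt;code&gt;sed&lt;/code&gt;. The sketch below runs against a sample copy in &lt;code&gt;/tmp&lt;/code&gt;; the commented-out stock lines are an assumption about how the file ships, so check yours with &lt;code&gt;cat&lt;/code&gt; first and only then point the commands at the real &lt;code&gt;/etc/dkms/framework.conf&lt;/code&gt;:&lt;/p&gt;

```shell
# Sample copy standing in for /etc/dkms/framework.conf (assumed stock contents)
cat > /tmp/framework.conf.sample <<'EOF'
# mok_signing_key="/var/lib/dkms/mok.key"
# mok_certificate="/var/lib/dkms/mok.pub"
EOF

# Uncomment both lines and point them at the enrolled MOK key pair
sed -i \
  -e 's|^#[# ]*mok_signing_key=.*|mok_signing_key="/var/lib/shim-signed/mok/MOK.priv"|' \
  -e 's|^#[# ]*mok_certificate=.*|mok_certificate="/var/lib/shim-signed/mok/MOK.der"|' \
  /tmp/framework.conf.sample

cat /tmp/framework.conf.sample
```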



&lt;p&gt;Now, you can proceed with the NVIDIA driver installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt install nvidia-driver
$ sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After reboot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nvidia-smi
$ modinfo nvidia-current
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4xg1d859ddfn79iet17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4xg1d859ddfn79iet17.png" alt=" " width="721" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow7fle2xkh9xbqvutrg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow7fle2xkh9xbqvutrg6.png" alt=" " width="663" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VOILA!&lt;/p&gt;




&lt;h3&gt;
  
  
  5 Secure Boot Scenario 2 - NVIDIA drivers are already installed; using your MOK to sign installed kernel modules
&lt;/h3&gt;

&lt;p&gt;If you have already installed NVIDIA drivers and they worked perfectly before enabling Secure Boot but stopped working afterward, you have to repeat the steps mentioned above: generate a MOK key pair and enroll the MOK. Then, if you want all kernel modules built with DKMS in the future to be signed automatically, you have to add the path to your key pair to the DKMS configuration file. However, this DKMS config modification has no retroactive effect: kernel modules that have already been built and installed will not be signed. You have to sign those kernel modules manually, or reinstall (rebuild) them. Additionally, if for some reason you install kernel modules that are not built with DKMS, they will also remain unsigned, and you will have to sign them manually. &lt;/p&gt;

&lt;p&gt;For this example, I installed the NVIDIA drivers but did not sign anything. Therefore, after boot, the NVIDIA drivers are not available. &lt;br&gt;
I see errors in the &lt;code&gt;dmesg&lt;/code&gt; logs from the boot process; in &lt;code&gt;lsmod&lt;/code&gt;, I do not see any loaded NVIDIA kernel module; and &lt;code&gt;nvidia-smi&lt;/code&gt; shows an error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgzv4gwz9z3fpqy350mo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgzv4gwz9z3fpqy350mo.png" alt=" " width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, the drivers are definitely installed! All kernel modules are located in &lt;code&gt;/lib/modules&lt;/code&gt;. If you periodically update your kernel and install newer versions, you will see more than one subdirectory there. I am using the latest kernel, so I go into the subdirectory matching its version; since the NVIDIA kernel modules were built with DKMS, they live in the &lt;code&gt;updates/dkms&lt;/code&gt; subdirectory. Here they are! My NVIDIA driver kernel modules:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zv53xr4x9bw9gv9dbz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zv53xr4x9bw9gv9dbz4.png" alt=" " width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are the kernel modules I have to sign to make them work under Secure Boot. This is the method &lt;a href="https://wiki.debian.org/SecureBoot#Using_your_key_to_sign_modules" rel="noopener noreferrer"&gt;indicated by the Debian developers&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#first, you need to create these kernel package-related variables:
$ VERSION="$(uname -r)"
$ SHORT_VERSION="$(uname -r | cut -d . -f 1-2)"
$ MODULES_DIR=/lib/modules/$VERSION
$ KBUILD_DIR=/usr/lib/linux-kbuild-$SHORT_VERSION
$ cd "$MODULES_DIR/updates/dkms" # For dkms packages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pspgug8s9hm7u603k3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pspgug8s9hm7u603k3f.png" alt=" " width="800" height="223"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#you have to use passphrase you used for enrollment of MOK
$ echo -n "Passphrase for the private key: "
$ read -s KBUILD_SIGN_PIN
$ export KBUILD_SIGN_PIN
#simple bash loop to sign all kernel modules found in the current directory
$ find -name \*.ko | while read i; do sudo --preserve-env=KBUILD_SIGN_PIN "$KBUILD_DIR"/scripts/sign-file sha256 /var/lib/shim-signed/mok/MOK.priv /var/lib/shim-signed/mok/MOK.der "$i" || break; done
#or you can sign modules below the current directory one by one:
$ sudo --preserve-env=KBUILD_SIGN_PIN "$KBUILD_DIR"/scripts/sign-file sha256 /var/lib/shim-signed/mok/MOK.priv /var/lib/shim-signed/mok/MOK.der nvidia-current.ko
...
$ sudo update-initramfs -k all -u
$ sudo reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
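&lt;p&gt;&lt;code&gt;sign-file&lt;/code&gt; appends the literal marker &lt;code&gt;~Module signature appended~&lt;/code&gt; to a signed module, so you can quickly check whether a &lt;code&gt;.ko&lt;/code&gt; carries a signature without loading it. A sketch, demonstrated on dummy files (on a real system, point it at modules under &lt;code&gt;/lib/modules&lt;/code&gt;, or use &lt;code&gt;modinfo &amp;lt;module&amp;gt; | grep signer&lt;/code&gt;):&lt;/p&gt;

```shell
# The 28-byte tail of a signed module is the marker "~Module signature appended~\n"
is_signed() { tail -c 28 "$1" | grep -q 'Module signature appended'; }

# Dummy files for illustration only
printf 'fake module data~Module signature appended~\n' > /tmp/fake.ko
printf 'unsigned module data' > /tmp/unsigned.ko

is_signed /tmp/fake.ko && echo "fake.ko: signed"
is_signed /tmp/unsigned.ko || echo "unsigned.ko: not signed"
```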



&lt;p&gt;Here is my result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7667paurdzrln2nr3ib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7667paurdzrln2nr3ib.png" alt=" " width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VOILA!&lt;/p&gt;




&lt;h3&gt;
  
  
Summarizing (my personal advice):
&lt;/h3&gt;

&lt;p&gt;🦄 Do not neglect this security mechanism; it is neither as scary nor as hard to configure as it may seem.&lt;br&gt;
 🦄 Do not disable Secure Boot carelessly.&lt;br&gt;
 🦄 Secure Boot protects you not only during the boot process but beyond.&lt;br&gt;
 🦄 Adapting your system to be compatible with Secure Boot is simply a matter of EITHER enrolling (making visible to UEFI) a signing key automatically generated by DKMS, OR generating your own Machine Owner Key (MOK), enrolling it, and then using it to sign your kernel modules yourself or showing the software that builds them for you (DKMS) where your enrolled keys are.&lt;br&gt;
 🦄 If you are a software developer and still find Secure Boot challenging or restrictive, I advise you to reconsider. Adapting your system for UEFI Secure Boot is an excellent exercise and an opportunity to deepen your understanding of keys, certificates, and key-based signatures: concepts you will undoubtedly encounter frequently in your developer career.&lt;/p&gt;

</description>
      <category>secureboot</category>
      <category>debian</category>
      <category>linux</category>
      <category>uefi</category>
    </item>
    <item>
      <title>Using Timeshift for System's Snapshots and Recovery on Debian 12 via Command Line</title>
      <dc:creator>Anna</dc:creator>
      <pubDate>Wed, 13 Nov 2024 18:39:51 +0000</pubDate>
      <link>https://dev.to/dev-charodeyka/using-timeshift-for-systems-snapshots-and-recovery-on-debian-12-via-command-line-7m6</link>
      <guid>https://dev.to/dev-charodeyka/using-timeshift-for-systems-snapshots-and-recovery-on-debian-12-via-command-line-7m6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Timeshift for Linux is an application that provides functionality similar to the System Restore feature in Windows and the Time Machine tool in Mac OS. Timeshift protects your system by taking incremental snapshots of the file system at regular intervals. These snapshots can be restored at a later date to undo all changes to the system.(&lt;a href="https://github.com/linuxmint/timeshift?tab=readme-ov-file" rel="noopener noreferrer"&gt;Timeshift GitHub&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When exploring a new distro, it’s always helpful to have the option to undo changes, especially after installing or configuring something substantial that significantly alters your system. For example, installing NVIDIA drivers modifies multiple components of your system, and simply uninstalling the drivers doesn’t always revert everything to the exact state it was in before they were installed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapshots ≠ Backups
&lt;/h3&gt;

&lt;p&gt;It’s important to note that, although snapshots are sometimes referred to as backups—even Debian documentation lists Timeshift as a backup tool - &lt;a href="https://wiki.debian.org/BackupAndRecovery" rel="noopener noreferrer"&gt;BackupAndRecovery-Debian Wiki&lt;/a&gt;—I somewhat disagree with this classification.&lt;/p&gt;

&lt;p&gt;The maintainers of Timeshift also make this distinction clear:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Timeshift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. (&lt;a href="https://github.com/linuxmint/timeshift?tab=readme-ov-file" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you need a &lt;em&gt;true&lt;/em&gt; backup tool, you should look elsewhere or manually back up periodically just some important data to cloud storage or physical media/storage device.&lt;/p&gt;

&lt;p&gt;While many people use snapshots as backup tools—and they work 90% of the time—in the other 10%, they can fail. Holding onto snapshots for too long, especially as the system changes significantly, can lead to problems. This isn't an issue specific to Timeshift, but it can happen if, for example, you take a snapshot, then make major modifications to your storage setup (such as reconfiguring LVM, partitions, changing filesystems). In such cases, it’s best to delete old snapshots and create a fresh one after successful modifications.&lt;/p&gt;

&lt;p&gt;I think the probability of encountering such issues with Timeshift is quite low, but in my experience with VMware snapshots, I ran into trouble for precisely this reason. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;After taking a snapshot of a Virtual Machine (VM), a logical volume was extended and additional RAID storage was added to the VM. The snapshot was not deleted afterward, and then a write-intensive task was launched, which eventually triggered an I/O error on the added RAID storage. It was a bit of a "&lt;strong&gt;situationship&lt;/strong&gt;", since the RAID was specifically set up to ensure data integrity. Fortunately, no data was lost, but the Virtual Machine went haywire, caught between the restrictive snapshot (designed to allow a full rollback in VMware) and the newly evolved storage configuration, especially with the repetitive write process on the RAID system.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For more details about Snapshots vs Backups, you can read &lt;a href="https://www.reddit.com/r/sysadmin/comments/hue8gb/should_i_go_for_backups_or_snapshots/?rdt=59974" rel="noopener noreferrer"&gt;this thread&lt;/a&gt; on Reddit. I will proceed with the timeshift installation, because I need exactly a snapshot tool, not a backup tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Timeshift GUI - a little drawback
&lt;/h3&gt;

&lt;p&gt;Timeshift has a graphical user interface, which, for some, may be a strong advantage, while for others, it might be less appealing (I, for example, prefer terminal or command line interfaces).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;timeshift&lt;/code&gt; package depends on &lt;code&gt;libgtk-3-0&lt;/code&gt; (&lt;a href="https://packages.debian.org/bookworm/timeshift" rel="noopener noreferrer"&gt;Debian -- details of package timeshift in Bookworm&lt;/a&gt;). &lt;code&gt;libgtk&lt;/code&gt; is the GTK graphical user interface library, and this dependency makes &lt;code&gt;timeshift&lt;/code&gt; a suboptimal solution for machines primarily used as servers. However, I use it on my personal PC, and although I can’t recall any other package in my setup that uses GTK, this tool’s reliability outweighs the drawback of having GTK installed. But if this is a concern for you, you may want to consider a different tool for snapshots. Here’s the original response (a bit dated) from the creator of Timeshift:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Timeshift has dependencies on GTK3 libraries so you must have that installed even if you have not installed a desktop. During installation it will install all dependencies it requires.&lt;br&gt;
Separating the codebase into 2 separate packages (with and without Gtk dependencies) involves a lot of work.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’m not sure if the current maintainers of Timeshift have plans to separate Timeshift into GUI-only and CLI/TUI-only versions, or if this might have already been done. I haven’t found any information on it.&lt;/p&gt;




&lt;h3&gt;
  
  
  Preparing "storage device" for Timeshift
&lt;/h3&gt;

&lt;p&gt;My system is partitioned with LVM, and I have a &lt;em&gt;volume group&lt;/em&gt; that holds all my system’s logical volumes (called wonderland-vg). First, I want to check if there’s any free space in this volume group. If not, I’ll need to expand it. &lt;/p&gt;

&lt;p&gt;Most tutorials and guides focus on configuring where Timeshift stores snapshots via the GUI. However, my system currently doesn’t have a DE or even a display server (technically, I am on a &lt;em&gt;headless&lt;/em&gt; server, which means I cannot use a mouse). Timeshift’s approach to snapshot destinations is based on storage devices (&lt;code&gt;timeshift&lt;/code&gt;’s config file expects a UUID) rather than directories, which makes sense: storing snapshots on an external storage device is useful in case the system breaks. So first, I’ll prepare a separate logical partition, a logical volume, for my future snapshots.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#these commands give me stats about my volume group
$ sudo vgdisplay
$ sudo vgs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have around 160 GB free in &lt;em&gt;wonderland-vg&lt;/em&gt; volume group, so I do not need to expand it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca5arh8l8ca8j7cdxez7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca5arh8l8ca8j7cdxez7.png" alt=" " width="518" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I proceed with creation of new logical volume and mounting it to &lt;code&gt;/timeshift&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create logical volume with name timeshift in volume group wonderland-vg
$ sudo lvcreate -L 20G -n timeshift wonderland-vg
#verify creation
$ sudo lvdisplay
# create a filesystem (all my system is ext4, choose any you use
$ sudo mkfs.ext4 /dev/wonderland-vg/timeshift
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Installing Timeshift
&lt;/h3&gt;

&lt;p&gt;Installation of &lt;code&gt;timeshift&lt;/code&gt; in Debian is quite straightforward: just run &lt;code&gt;sudo apt install timeshift&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oti9xjemb7rvffof7to.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oti9xjemb7rvffof7to.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Oh my, that’s quite a lot of dependencies!&lt;br&gt;
My current Debian setup is very close to a server setup, so I have very few things installed. Surprisingly, it doesn’t seem to pull in the Xorg display server, which is actually pretty good, especially for people using Wayland. However, quite a few X11 packages still get installed.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configuring Timeshift
&lt;/h3&gt;

&lt;p&gt;Configuration of &lt;code&gt;timeshift&lt;/code&gt; from the command line is done mostly via its configuration file. In &lt;code&gt;/etc/timeshift/&lt;/code&gt;, you can find the default config, &lt;code&gt;default.json&lt;/code&gt;. If you have already tried to create a snapshot with &lt;code&gt;sudo timeshift --create&lt;/code&gt;, you will also find a &lt;code&gt;timeshift.json&lt;/code&gt; file in that directory. If you want control over the choice of storage device for your snapshots &lt;strong&gt;before&lt;/strong&gt; launching &lt;code&gt;timeshift --create&lt;/code&gt; for the first time, modify &lt;code&gt;/etc/timeshift/default.json&lt;/code&gt;. If you want to change the storage device where timeshift has already placed some snapshot(s), modify &lt;code&gt;/etc/timeshift/timeshift.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In order to point timeshift to use a specific storage device, you have to provide its UUID. If you run &lt;code&gt;sudo blkid&lt;/code&gt;, you’ll see the UUIDs of all your storage devices and can pick one where you are willing to place your snapshots. &lt;/p&gt;

&lt;p&gt;If your setup allows you to use a mouse, the task is quite simple: run &lt;code&gt;sudo blkid&lt;/code&gt; and identify the storage device (it can be a logical volume; there is no requirement for the storage to be a physical partition, a logical partition works as well!). Then copy the corresponding UUID and paste it into the timeshift configuration file (&lt;code&gt;/etc/timeshift/default.json&lt;/code&gt; or &lt;code&gt;/etc/timeshift/timeshift.json&lt;/code&gt;) in the "backup_device_uuid" field (between the double quotes).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB!&lt;/strong&gt; If you are on a headless server and cannot easily select text in the terminal, here is the workaround:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /dev/mapper/wonderland--vg-timeshift is a logical volume where I want to store my snapshots
# with this command I extract UUID and store it into a temporary file
$ sudo blkid -s UUID -o value /dev/mapper/wonderland--vg-timeshift &amp;gt; /tmp/uuid-snapshots.txt
# Then, I open the `timeshift` configuration file to modify.
# I do it before running timeshift for the first time, so I have only `default.json` as a config file
$ sudo vim.tiny /etc/timeshift/default.json
{
  "backup_device_uuid" : "|",
  ...
}
# I place the cursor between the double quotes (where | is)
# from normal vim mode, I type:
:r /tmp/uuid-snapshots.txt
# this command inserts the content of the file at the cursor; if needed, adjust it a bit so it is placed correctly
# Save the file!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
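&lt;p&gt;The same edit can be done without vim at all: capture the UUID into a variable and splice it into the JSON with &lt;code&gt;sed&lt;/code&gt;. The sketch below works on a sample copy of the config; the UUID is a placeholder (on a real system it comes from the &lt;code&gt;blkid&lt;/code&gt; command above):&lt;/p&gt;

```shell
# Placeholder UUID; on a real system:
# UUID="$(sudo blkid -s UUID -o value /dev/mapper/wonderland--vg-timeshift)"
UUID="3f1c9d2e-0000-0000-0000-000000000000"

# Sample copy standing in for /etc/timeshift/default.json
cat > /tmp/default.json.sample <<'EOF'
{
  "backup_device_uuid" : "",
  "parent_device_uuid" : ""
}
EOF

# Splice the UUID into the backup_device_uuid field
sed -i "s|\"backup_device_uuid\" : \"[^\"]*\"|\"backup_device_uuid\" : \"$UUID\"|" \
  /tmp/default.json.sample

grep backup_device_uuid /tmp/default.json.sample
```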



&lt;p&gt;&lt;strong&gt;NB! If you run &lt;code&gt;sudo timeshift --create&lt;/code&gt; for the first time after installation, it will not prompt you to select the storage&lt;/strong&gt;; instead, it will pick one on its own. In my experience, it selects the first physical partition of the disk where your OS resides (for example, &lt;code&gt;/dev/sdb1&lt;/code&gt;); the problem is that this is often the &lt;code&gt;/boot&lt;/code&gt; partition, so it's definitely not the best place to store snapshots. In this case, interrupt the execution (Ctrl+C) and modify &lt;code&gt;/etc/timeshift/timeshift.json&lt;/code&gt; (it should be created after the first launch): either manually provide the correct UUID of the desired storage, or remove the automatically set UUIDs (the &lt;code&gt;backup_device_uuid&lt;/code&gt; and &lt;code&gt;parent_device_uuid&lt;/code&gt; fields). Once you’ve done that, restart the command, and it should prompt you to select the storage. However, &lt;code&gt;timeshift&lt;/code&gt; has already placed its directory on the storage it picked by itself during the first launch, so it may make sense to wipe the &lt;code&gt;/timeshift&lt;/code&gt; directory from there after configuring &lt;code&gt;timeshift&lt;/code&gt; to store snapshots on the desired storage device.&lt;/p&gt;

&lt;p&gt;I think the GUI would give more comprehensive control over device selection.&lt;/p&gt;

&lt;p&gt;If using Timeshift's GUI is not an option, you can schedule snapshots using &lt;code&gt;cron&lt;/code&gt; jobs or by changing &lt;code&gt;false&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; in the configuration file for &lt;code&gt;schedule_monthly&lt;/code&gt;, &lt;code&gt;schedule_weekly&lt;/code&gt;, &lt;code&gt;schedule_daily&lt;/code&gt;, &lt;code&gt;schedule_hourly&lt;/code&gt; (&lt;em&gt;NB! I did not test it!&lt;/em&gt;). I don't enable automatic snapshots because, as I mentioned, they’re not true backups, and I don’t need constant snapshotting. I’ll take snapshots manually when necessary (e.g., before major system updates or modifications).&lt;/p&gt;
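&lt;p&gt;Flipping the schedule flags can be scripted the same way as the UUID edit. A sketch on a sample copy of the config (field names as described above; like scheduling itself, this has not been tested against a live timeshift run):&lt;/p&gt;

```shell
# Sample copy standing in for /etc/timeshift/timeshift.json
cat > /tmp/timeshift.json.sample <<'EOF'
{
  "schedule_daily" : "false",
  "count_daily" : "5"
}
EOF

# Enable daily snapshots in the sample copy
sed -i 's|"schedule_daily" : "false"|"schedule_daily" : "true"|' /tmp/timeshift.json.sample

grep schedule_daily /tmp/timeshift.json.sample
```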

&lt;p&gt;If you're planning to set up automatic snapshots, you should also check the configuration file for the snapshot retention policy. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0abhyrkadmqah0fq7uk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0abhyrkadmqah0fq7uk.png" alt=" " width="623" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For instance, if you schedule snapshots to run daily, even though they’re incremental, there’s no point in keeping all of them indefinitely (they occupy storage space). Pay particular attention to the values of these fields in the configuration file: &lt;code&gt;count_monthly&lt;/code&gt;, &lt;code&gt;count_weekly&lt;/code&gt;, &lt;code&gt;count_daily&lt;/code&gt;, &lt;code&gt;count_hourly&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Keep in mind that deleting snapshot files manually from the command line is a bad idea. Snapshots are incremental: only the first snapshot is "full", while subsequent snapshots are just deltas (differences) from that first snapshot. If you manually delete the first (or "parent") snapshot, the others become unusable. However, if you use the retention policies and Timeshift commands, like &lt;code&gt;sudo timeshift --delete --snapshot '&amp;lt;name/timestamp from sudo timeshift --list&amp;gt;'&lt;/code&gt;, your snapshots will be managed correctly.&lt;/p&gt;

&lt;p&gt;Another configuration field to check is &lt;code&gt;exclude&lt;/code&gt;. By default, users' home directories are excluded from snapshotting.&lt;/p&gt;

&lt;p&gt;NB! If you’re experimenting with different Desktop Environments, browsers, terminal emulators, or shells, remember that anything related to customization and display typically keeps configuration files in your &lt;code&gt;$HOME (/home/your_username)&lt;/code&gt; directory, often in &lt;code&gt;$HOME/.config&lt;/code&gt;. If your &lt;code&gt;/home&lt;/code&gt; directory is excluded from snapshotting (the &lt;code&gt;exclude&lt;/code&gt; field in &lt;code&gt;/etc/timeshift/timeshift.json&lt;/code&gt;), then if you install for instance KDE -&amp;gt; &lt;code&gt;sudo timeshift --restore&lt;/code&gt; -&amp;gt; install GNOME, all of KDE’s config files will remain in &lt;code&gt;$HOME&lt;/code&gt; and &lt;code&gt;$HOME/.config&lt;/code&gt;. Many experiments with such things will inevitably clutter your &lt;code&gt;$HOME&lt;/code&gt; directory over time (remember that &lt;code&gt;ls&lt;/code&gt; is not sufficient to see all its content; try at least &lt;code&gt;ls -a&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing Timeshift
&lt;/h3&gt;

&lt;p&gt;I’ve already created a couple of snapshots, and since they’re incremental, only the first one was large and took more time to create. Let’s see if this tool can handle reverting a kernel version update and restore the system from an "incremental" snapshot (not the first, complete one).&lt;/p&gt;

&lt;p&gt;To see how &lt;code&gt;timeshift&lt;/code&gt; organizes its files and snapshots, you can mount the storage device used by &lt;code&gt;timeshift&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create a directory - mounting point
$ sudo mkdir /timeshift
# mount there newly created logical partition
$ sudo mount /dev/wonderland-vg/timeshift /timeshift
# to see what is inside
$ ls -a /timeshift
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the directory structure that &lt;code&gt;timeshift&lt;/code&gt; created automatically for itself:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ohhqhy10zreksbkz2si.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ohhqhy10zreksbkz2si.png" alt=" " width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My system's kernel version before kernel upgrade from bookworm-backports repo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3htrzpb2lidi9hbk6i7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3htrzpb2lidi9hbk6i7.png" alt=" " width="248" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My system's kernel version after kernel upgrade from bookworm-backports repo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33guy4g3s9g0hjxg4l04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33guy4g3s9g0hjxg4l04.png" alt=" " width="423" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have 2 snapshots and I will be using the latest one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg7p4mzlw1w3uipb9h7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg7p4mzlw1w3uipb9h7u.png" alt=" " width="704" height="344"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo timeshift --list
$ sudo timeshift --restore --snapshot &amp;lt;timestamp/name from the list&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
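&lt;p&gt;Snapshot names are timestamps in &lt;code&gt;YYYY-MM-DD_HH-MM-SS&lt;/code&gt; format, which sort chronologically as plain strings. That makes it easy to pick the most recent snapshot in a script. A sketch with hypothetical snapshot names (the actual restore call is left commented out):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# hypothetical snapshot names, as printed by `timeshift --list`
snapshots="2025-04-16_10-00-01
2025-04-17_09-30-12"

# the timestamp format sorts chronologically, so the last
# line after a plain sort is the most recent snapshot
latest=$(printf '%s\n' "$snapshots" | sort | tail -n 1)
echo "$latest"
# prints: 2025-04-17_09-30-12

# sudo timeshift --restore --snapshot "$latest"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;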



&lt;p&gt;Timeshift will ask me whether I want to reinstall GRUB. Since the change to my system was a kernel upgrade, I confirm. Then, I’m asked where GRUB should be installed.&lt;/p&gt;

&lt;p&gt;After the system state and settings are retrieved from the snapshot and ready for rollback, the system reboots.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NB: If you use BTRFS, don't miss the opportunity to take advantage of its excellent restore capabilities—they are fully supported in Timeshift:&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It is strongly recommended to use BTRFS snapshots on systems that are installed on BTRFS partition. BTRFS snapshots are perfect byte-for-byte copies of the system. Nothing is excluded. BTRFS snapshots can be created and restored in seconds, and have very low overhead in terms of disk space.(&lt;a href="https://github.com/linuxmint/timeshift?tab=readme-ov-file" rel="noopener noreferrer"&gt;Timeshift GitHub&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My system's kernel version after restore:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3htrzpb2lidi9hbk6i7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3htrzpb2lidi9hbk6i7.png" alt=" " width="248" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update from further testing:&lt;/p&gt;

&lt;p&gt;I’ve noticed that sometimes, when I do a rollback using &lt;code&gt;timeshift --restore&lt;/code&gt; and then reboot, the restored system’s wireless network interface (Wi-Fi) ends up down. &lt;code&gt;NetworkManager&lt;/code&gt; can’t fix it: it’s not just that Wi-Fi gets disconnected, the entire wireless interface disappears (&lt;code&gt;ip a&lt;/code&gt; does not list it at all). However, doing another clean reboot after the restore fixes the issue without any problem.&lt;/p&gt;
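&lt;p&gt;If you hit something similar and want to investigate before rebooting, one possible diagnostic sequence looks like this. Note that &lt;code&gt;iwlwifi&lt;/code&gt; is just an example (Intel) Wi-Fi driver, yours may differ:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# check whether the wireless interface is listed at all
$ ip a

# see what the kernel logged about the Wi-Fi driver
$ sudo dmesg | grep -i wifi

# reloading the driver module can bring the interface back
# (iwlwifi is an example; check `lspci -k` for your driver)
$ sudo modprobe -r iwlwifi
$ sudo modprobe iwlwifi

# let NetworkManager re-detect the interface
$ sudo systemctl restart NetworkManager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;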

&lt;h3&gt;
  
  
  Summarizing:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Timeshift's Pros:&lt;/strong&gt;&lt;br&gt;
🦄 Continuously maintained and widely used snapshotting tool&lt;br&gt;
🦄 Easy management via command line&lt;br&gt;
🦄 Incremental snapshots&lt;br&gt;
🦄 Clearly separates snapshots from backups&lt;br&gt;
🦄 Has an active community&lt;br&gt;
&lt;strong&gt;Timeshift's Cons:&lt;/strong&gt;&lt;br&gt;
🫏 A CLI-only installation without the GUI is not possible&lt;br&gt;
🫏 Brings many dependencies related to the Xorg display server, which may not be the best for server setups or systems with Wayland&lt;br&gt;
🫏 Limited guides on CLI-only usage, even though it's possible&lt;br&gt;
🫏 Storage destination is bound by UUID, which may be confusing&lt;br&gt;
🫏 Limited identifiers in storage selection when creating the first snapshot, making it harder to choose the correct storage device&lt;/p&gt;

</description>
      <category>debian</category>
      <category>ubuntu</category>
      <category>timeshift</category>
      <category>backup</category>
    </item>
  </channel>
</rss>
