<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adam Weber</title>
    <description>The latest articles on DEV Community by Adam Weber (@adam_weber_6dc0d5bd752326).</description>
    <link>https://dev.to/adam_weber_6dc0d5bd752326</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3619683%2F66eba750-5012-4dcb-86fa-0175391de673.png</url>
      <title>DEV Community: Adam Weber</title>
      <link>https://dev.to/adam_weber_6dc0d5bd752326</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adam_weber_6dc0d5bd752326"/>
    <language>en</language>
    <item>
      <title>eBPF</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Sun, 25 Jan 2026 20:32:39 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/ebpf-14p</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/ebpf-14p</guid>
      <description>&lt;p&gt;After spending the last few posts working through character drivers and proc entries, I decided it was time to take the plunge into eBPF. I'd heard the name thrown around in every security and observability discussion, but honestly had no idea what it actually was or why everyone seemed so excited about it. This post documents my first real encounter with eBPF and what I learned building a minimal process execution tracer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Even Is eBPF&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;eBPF stands for extended Berkeley Packet Filter, which is a confusing name because it does way more than just filter packets now. The core idea is that you can inject code into the running kernel without writing a traditional kernel module. That code runs in a sandboxed environment where the kernel verifies it's safe before loading it. If your BPF program would do something dangerous, it just fails to load instead of kernel panicking your system.&lt;/p&gt;

&lt;p&gt;Coming from kernel modules where a bad pointer dereference means instant death, this felt almost too good to be true. You get kernel-level visibility and performance without the danger.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Two-Part Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What caught me off guard initially was that an eBPF program isn't just one file. It's actually two separate programs working together:&lt;/p&gt;

&lt;p&gt;The kernel-side BPF program, written in restricted C and compiled to BPF bytecode. This is the code that runs inside the kernel when events happen.&lt;/p&gt;

&lt;p&gt;The userspace loader, which is a normal C program that loads the BPF bytecode, attaches it to kernel hooks, and reads the data back out.&lt;/p&gt;

&lt;p&gt;This split makes sense after I thought about it. The kernel-side code has to be heavily restricted for safety, so you can't just printf() or write to files from kernel space. The userspace program handles all the normal I/O and orchestration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building the Execve Tracer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I started with the simplest useful thing I could think of: tracing process execution. Every time a process calls execve() to start a new program, I wanted to see it.&lt;/p&gt;

&lt;p&gt;The kernel-side code hooks into the execve tracepoint, declared in the program as tp/syscalls/sys_enter_execve (the tracepoint itself is syscalls/sys_enter_execve; the tp/ prefix is what tells the loader it's a tracepoint program). When that fires, my BPF program grabs the process ID and command name, then writes them to a ring buffer. The ring buffer is basically shared memory between kernel and userspace that both sides can access safely.&lt;/p&gt;
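
&lt;p&gt;Here's a minimal sketch of what that kernel-side program looks like. This is illustrative rather than my exact code; the struct and map names (gs_event, events) are just what I'm calling things here.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch of the kernel-side BPF program (names are illustrative).
#include "vmlinux.h"
#include &amp;lt;bpf/bpf_helpers.h&amp;gt;

struct gs_event {
    __u32 pid;
    char comm[16];
};

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 256 * 1024);  // ring buffer size in bytes
} events SEC(".maps");

SEC("tp/syscalls/sys_enter_execve")
int handle_execve(void *ctx)
{
    struct gs_event *e;

    // Reserve space in the ring buffer; returns NULL if it's full.
    e = bpf_ringbuf_reserve(&amp;amp;events, sizeof(*e), 0);
    if (!e)
        return 0;

    e-&amp;gt;pid = bpf_get_current_pid_tgid() &amp;gt;&amp;gt; 32;
    bpf_get_current_comm(e-&amp;gt;comm, sizeof(e-&amp;gt;comm));

    bpf_ringbuf_submit(e, 0);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;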

&lt;p&gt;The userspace program loads this BPF code into the kernel, polls the ring buffer for events, and prints them to the terminal. Simple enough in theory.&lt;/p&gt;
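
&lt;p&gt;And a sketch of that userspace side, assuming libbpf and a skeleton header generated by bpftool (all file and function names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Userspace loader sketch, assuming libbpf and a bpftool-generated
// skeleton header (names are illustrative).
#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;bpf/libbpf.h&amp;gt;
#include "execve_tracer.skel.h"

struct gs_event { unsigned int pid; char comm[16]; };

static int handle_event(void *ctx, void *data, size_t len)
{
    const struct gs_event *e = data;
    printf("%-8u %s\n", e-&amp;gt;pid, e-&amp;gt;comm);
    return 0;
}

int main(void)
{
    // Load the BPF bytecode and attach it to the tracepoint.
    struct execve_tracer_bpf *skel = execve_tracer_bpf__open_and_load();
    if (!skel)
        return 1;
    if (execve_tracer_bpf__attach(skel))
        goto cleanup;

    // Poll the ring buffer; handle_event() runs once per submitted record.
    struct ring_buffer *rb = ring_buffer__new(
        bpf_map__fd(skel-&amp;gt;maps.events), handle_event, NULL, NULL);
    if (!rb)
        goto cleanup;

    while (ring_buffer__poll(rb, 100 /* ms */) &amp;gt;= 0)
        ;

cleanup:
    execve_tracer_bpf__destroy(skel);
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;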

&lt;p&gt;&lt;strong&gt;The Compile Process&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where things got interesting. The BPF program gets compiled with Clang to BPF bytecode, producing a .bpf.o file. That's not a normal executable; it's an ELF object file containing BPF instructions. The section names in that ELF file tell the loader what type of program it is and where to attach it.&lt;/p&gt;

&lt;p&gt;For example, SEC("tp/syscalls/sys_enter_execve") creates an ELF section with that exact name. When the userspace program opens the .bpf.o file, libbpf parses those section names to figure out "oh, this is a tracepoint program that should attach to sys_enter_execve."&lt;/p&gt;
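
&lt;p&gt;The build ends up as a short pipeline, roughly like this (file names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Compile the restricted-C program to BPF bytecode (an ELF object).
clang -O2 -g -target bpf -c execve_tracer.bpf.c -o execve_tracer.bpf.o

# Generate the skeleton header the userspace loader includes.
bpftool gen skeleton execve_tracer.bpf.o &amp;gt; execve_tracer.skel.h

# Build the normal userspace loader against libbpf.
cc execve_tracer.c -lbpf -o execve_tracer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;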

&lt;p&gt;It's a clever way to encode metadata using standard ELF features instead of inventing something new. Listen to me, I sound like I know what I'm talking about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing It&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After getting everything compiled, I ran the tracer with sudo (BPF programs need root to load). Then in another terminal I just ran normal commands: ls, ps, cat. They mostly all showed up as bash, which makes sense in hindsight: at sys_enter_execve the exec hasn't completed yet, so the command name you read is still the calling shell's. You see something different once a program like vim starts exec'ing things itself.&lt;/p&gt;

&lt;p&gt;What really hit me was how lightweight this felt compared to kernel modules. No rebooting. No risking a kernel panic. If the BPF program had a bug, it just wouldn't load (believe me I found out). The kernel's verifier checks every instruction before allowing it to run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ring Buffers and Communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The ring buffer deserves its own mention because it's the bridge between kernel and userspace. It's a fixed-size circular buffer, which keeps memory use bounded. When it fills up, new reservations fail and those events get dropped (at least with the default BPF ring buffer) rather than scribbling over memory. Either way, you're never allocating unbounded kernel memory or writing to invalid addresses.&lt;/p&gt;

&lt;p&gt;The kernel-side code reserves space, writes the event, and submits it. The userspace code polls for new events and processes them. The whole thing is lock-free for most operations, which keeps it fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;eBPF is the foundation of modern system observability and security tools. Every major EDR (Endpoint Detection and Response) product uses it. Tracing tools like bpftrace are built on it. Understanding how to write BPF programs means understanding how these tools work under the hood.&lt;/p&gt;

&lt;p&gt;For GhostScope specifically, this opens up the ability to watch syscalls, trace file operations, monitor network connections, and generally observe system behavior in ways that weren't practical before. All without maintaining a risky kernel module.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's Next?&lt;/strong&gt;&lt;br&gt;
Not quite sure, stay tuned to find out.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>kernel</category>
      <category>cybersecurity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Debugging a Filesystem Module: When Reference Counting Goes Wrong</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Wed, 07 Jan 2026 16:34:52 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/debugging-a-filesystem-module-when-reference-counting-goes-wrong-13b6</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/debugging-a-filesystem-module-when-reference-counting-goes-wrong-13b6</guid>
      <description>&lt;p&gt;As I've been working my way through Linux kernel development,I decided it was time to tackle something that seemed simple on the surface: write a minimal filesystem module. How hard could it be to mount a filesystem that contains a single file you can cat? Turns out, pretty educational.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Goal
&lt;/h2&gt;

&lt;p&gt;I wanted to build the smallest possible virtual filesystem. No disk backing. No persistence. Just cat a static file generated by the module. The whole thing should live in RAM and expose one file called "hello" that returns some text.&lt;/p&gt;

&lt;p&gt;Seems like the next natural step. I mean, how different could it be?&lt;/p&gt;

&lt;h2&gt;
  
  
  The First Attempt
&lt;/h2&gt;

&lt;p&gt;I started by doing what seemed obvious: create a superblock in &lt;code&gt;fill_super&lt;/code&gt;, manually allocate inodes for the root directory and my hello file, create dentries for them, link everything together. Standard VFS stuff. The code compiled. The module loaded. I could mount it. I could even cat the file and see my message.&lt;/p&gt;

&lt;p&gt;Then I tried to unmount.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[  337.050239] gs_fs: superblock kill called
[  337.050258] ------------[ cut here ]------------
[  337.051811] BUG: Dentry still in use (1) [unmount of gs_fs gs_fs]
[  337.053385] WARNING: CPU: 0 PID: 72 at fs/dcache.c:1590 umount_check+0x56/0x70
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The kernel was &lt;em&gt;not&lt;/em&gt; happy. "Dentry still in use" means I left references dangling somewhere. The VFS couldn't clean up properly because something was still holding onto my hello file's dentry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Down the Rabbit Hole
&lt;/h2&gt;

&lt;p&gt;The error message told me exactly what was wrong but not why. I had to understand the lifecycle of dentries and inodes and their reference counting, and how the VFS expects you to clean up during unmount.&lt;/p&gt;

&lt;p&gt;First theory: maybe I needed to implement &lt;code&gt;evict_inode&lt;/code&gt;. So I added a proper &lt;code&gt;super_operations&lt;/code&gt; struct with an evict callback that calls &lt;code&gt;truncate_inode_pages_final()&lt;/code&gt; and &lt;code&gt;clear_inode()&lt;/code&gt;. That's the standard pattern for cleaning up inodes (so it seems to me, correct me if I'm wrong PLEASE!).&lt;/p&gt;
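
&lt;p&gt;For reference, that pattern looks roughly like this (a sketch of what I tried; &lt;code&gt;simple_statfs&lt;/code&gt; and &lt;code&gt;generic_delete_inode&lt;/code&gt; are the stock helpers commonly used in simple in-memory filesystems):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;static void gs_fs_evict_inode(struct inode *inode)
{
    /* Drop any cached pages, then mark the inode clean and gone. */
    truncate_inode_pages_final(&amp;amp;inode-&amp;gt;i_data);
    clear_inode(inode);
}

static const struct super_operations gs_fs_super_ops = {
    .statfs      = simple_statfs,
    .drop_inode  = generic_delete_inode,
    .evict_inode = gs_fs_evict_inode,
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;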

&lt;p&gt;Nope.&lt;/p&gt;

&lt;p&gt;Second theory: maybe it's how I was creating the dentries. I was using &lt;code&gt;d_alloc_name()&lt;/code&gt; to manually create the dentry for my hello file during mount. That gives you a dentry with a reference count, and there's no automatic mechanism to drop it. The VFS doesn't know about dentries you create manually like that (again, PLEASE set me straight if that's not the case).&lt;/p&gt;

&lt;p&gt;But here's the thing, I wasn't just randomly guessing. I started looking at how other simple filesystems do it. And that's when I found &lt;code&gt;simple_fill_super()&lt;/code&gt;. Probably should start reading more of the kernel docs, I guess?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Kernel's Helper Functions
&lt;/h2&gt;

&lt;p&gt;Turns out the kernel has a bunch of helper functions specifically for pseudo-filesystems like mine. &lt;code&gt;simple_fill_super()&lt;/code&gt; takes an array of &lt;code&gt;struct tree_descr&lt;/code&gt; entries (a name, a set of file operations, and a mode for each file) and sets up all the dentries, inodes, and reference counting for you automatically. It handles the lifecycle properly.&lt;/p&gt;

&lt;p&gt;So I refactored to use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;gs_fs_fill_super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;super_block&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;fs_context&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;fc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;tree_descr&lt;/span&gt; &lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;HELLO_FILENAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;gs_hello_fops&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mo"&gt;0444&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;// Sentinel&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;s_op&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;gs_fs_super_ops&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;simple_fill_super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;GS_FS_MAGIC&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mounted it. Cat'd the file. Worked great. Tried to unmount.&lt;/p&gt;

&lt;p&gt;Nope.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem
&lt;/h2&gt;

&lt;p&gt;At this point I was getting frustrated. I had the right helpers. I had proper cleanup. What was I missing?&lt;/p&gt;

&lt;p&gt;Then I looked more carefully at my &lt;code&gt;kill_sb&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;gs_fs_kill_sb&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;super_block&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;pr_info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"gs_fs: superblock kill called&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;kill_anon_super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// This was the problem&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I was using &lt;code&gt;kill_anon_super()&lt;/code&gt; because I saw it in some example somewhere and it seemed reasonable. Anonymous superblock, right?&lt;/p&gt;

&lt;p&gt;When you use &lt;code&gt;get_tree_nodev()&lt;/code&gt; with &lt;code&gt;simple_fill_super()&lt;/code&gt;, you need to use &lt;code&gt;kill_litter_super()&lt;/code&gt; instead. &lt;code&gt;kill_litter_super()&lt;/code&gt; knows how to properly clean up structures created by &lt;code&gt;simple_fill_super()&lt;/code&gt;. It handles all the dentries and inodes that got set up by that helper.&lt;/p&gt;

&lt;p&gt;Changed one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;gs_fs_kill_sb&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;super_block&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;pr_info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"gs_fs: superblock kill called&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;kill_litter_super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// Fixed&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perfect!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;This bug taught me more about the VFS than any amount of documentation reading could have (entirely speculation here, as I can't actually read). I had to dig into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How dentries cache the filesystem namespace&lt;/li&gt;
&lt;li&gt;How reference counting prevents premature cleanup&lt;/li&gt;
&lt;li&gt;Why the kernel provides helper functions and when to use them&lt;/li&gt;
&lt;li&gt;How different superblock types need different cleanup strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The kernel has these subtle API pairings all over the place. Use &lt;code&gt;simple_fill_super()&lt;/code&gt;? Pair your &lt;code&gt;kill_sb&lt;/code&gt; with &lt;code&gt;kill_litter_super()&lt;/code&gt;, because that helper leaves pinned dentries behind that need cleaning up. A plain &lt;code&gt;get_tree_nodev()&lt;/code&gt; setup without it could have used &lt;code&gt;kill_anon_super()&lt;/code&gt;. The compiler won't catch these mismatches because they all compile just fine. You only find out at runtime.&lt;/p&gt;
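
&lt;p&gt;To make the pairing concrete, here's roughly how the registration side fits together in a module like mine (a sketch; the fs_context plumbing is the standard boilerplate as I understand it):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;static int gs_fs_get_tree(struct fs_context *fc)
{
    return get_tree_nodev(fc, gs_fs_fill_super);
}

static const struct fs_context_operations gs_fs_context_ops = {
    .get_tree = gs_fs_get_tree,
};

static int gs_fs_init_fs_context(struct fs_context *fc)
{
    fc-&amp;gt;ops = &amp;amp;gs_fs_context_ops;
    return 0;
}

static struct file_system_type gs_fs_type = {
    .owner           = THIS_MODULE,
    .name            = "gs_fs",
    .init_fs_context = gs_fs_init_fs_context,
    .kill_sb         = gs_fs_kill_sb,  // must undo what simple_fill_super() set up
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;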

&lt;p&gt;A valuable set of lessons taught by getting my hands dirty.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Now that I have a working minimal filesystem, the obvious next steps are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement write support&lt;/li&gt;
&lt;li&gt;Add subdirectories&lt;/li&gt;
&lt;li&gt;Make files appear on-demand via &lt;code&gt;.lookup&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not sure I'll continue on the filesystem path or divert, but we'll see.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>kernel</category>
      <category>filesystem</category>
    </item>
    <item>
      <title>Minimal Character Driver</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Mon, 08 Dec 2025 20:18:42 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/minimal-character-driver-3m0</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/minimal-character-driver-3m0</guid>
      <description>&lt;p&gt;In the last post I spent some time messing around with proc entries. That was a really good first step because it forced me to understand how the kernel exposes simple debug surfaces to userspace without dealing with any actual device semantics. But the next step in this journey is getting into something that behaves like a real device, something I can open from userspace and talk to. That means writing a character driver.&lt;/p&gt;

&lt;p&gt;At first you might be thinking, isn’t this the same thing as making a proc entry? Both expose a "file", right? But once you start writing it, the difference becomes obvious. A proc entry is just a virtual file owned entirely by the procfs layer. It’s for state dumps, metrics, small control knobs. A character driver is an actual device. It shows up in /dev. You open() it. You write() to it. The kernel gives it a major and minor number. Userland treats it like a real hardware-backed device, even if the thing behind it is just your code and a chunk of RAM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Minimal Device&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wanted to start with the smallest possible character driver that still works like the real thing. Something that loads as a module, registers a device number, and shows up in /dev so userspace can interact with it. No hardware. No complicated concurrency. Just the basics.&lt;/p&gt;

&lt;p&gt;Here’s the flow the kernel expects:&lt;/p&gt;

&lt;p&gt;Allocate a major/minor number for the device.&lt;/p&gt;

&lt;p&gt;Initialize a cdev and register it with the kernel.&lt;/p&gt;

&lt;p&gt;Create a class so udev has something to hook into.&lt;/p&gt;

&lt;p&gt;Create the actual device node so it appears under /dev.&lt;/p&gt;

&lt;p&gt;Implement the basic file operations: open, release, read, write.&lt;/p&gt;

&lt;p&gt;That’s it. Once that scaffolding is in place you’ve got a real Linux device. It's really nice to know that it's just a bunch of callbacks set in the proper structure and you're done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wrote a tiny driver called gs_char. It exposes a small 256 byte buffer. Whatever you write into it gets stored in kernel memory. When you read from it, you get the last thing you wrote. It’s not fancy, but that’s not the point. The point is understanding how the kernel expects a character device to behave.&lt;/p&gt;

&lt;p&gt;The file operations look almost exactly like what you’d expect coming from userland. read() copies data from the kernel buffer out to userspace. write() copies data from userspace back into the kernel buffer. Nothing magic.&lt;/p&gt;

&lt;p&gt;The initialization code is where the important stuff happens. alloc_chrdev_region() gives you a major/minor. cdev_add() tells the kernel about your driver. class_create() and device_create() make sure the device node shows up in /dev without you having to run mknod manually (in theory, not practice for me, more later). After that you can load the module and immediately see:&lt;/p&gt;

&lt;p&gt;/dev/gs_char&lt;/p&gt;

&lt;p&gt;And that part would honestly have been really satisfying.&lt;/p&gt;
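
&lt;p&gt;Putting those pieces together, the scaffolding looks roughly like this. A sketch, not my exact code: error handling and the matching cleanup path are omitted, and the one-argument &lt;code&gt;class_create()&lt;/code&gt; is the form on recent kernels.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch of the gs_char scaffolding (error handling and __exit
// unwinding omitted for brevity; a real module must check every call).
#include &amp;lt;linux/module.h&amp;gt;
#include &amp;lt;linux/fs.h&amp;gt;
#include &amp;lt;linux/cdev.h&amp;gt;
#include &amp;lt;linux/device.h&amp;gt;
#include &amp;lt;linux/uaccess.h&amp;gt;

#define DEV_NAME "gs_char"
#define BUF_SIZE 256

static dev_t gs_devno;
static struct cdev gs_cdev;
static struct class *gs_class;
static char gs_buf[BUF_SIZE];
static size_t gs_len;

static ssize_t gs_read(struct file *f, char __user *ubuf,
                       size_t count, loff_t *ppos)
{
    // Copy the stored data out to userspace, honouring the file offset.
    return simple_read_from_buffer(ubuf, count, ppos, gs_buf, gs_len);
}

static ssize_t gs_write(struct file *f, const char __user *ubuf,
                        size_t count, loff_t *ppos)
{
    if (count &amp;gt; BUF_SIZE)
        count = BUF_SIZE;
    if (copy_from_user(gs_buf, ubuf, count))
        return -EFAULT;
    gs_len = count;
    return count;
}

static const struct file_operations gs_fops = {
    .owner = THIS_MODULE,
    .read  = gs_read,
    .write = gs_write,
};

static int __init gs_init(void)
{
    alloc_chrdev_region(&amp;amp;gs_devno, 0, 1, DEV_NAME);        // 1: major/minor
    cdev_init(&amp;amp;gs_cdev, &amp;amp;gs_fops);
    cdev_add(&amp;amp;gs_cdev, gs_devno, 1);                        // 2: register cdev
    gs_class = class_create(DEV_NAME);                       // 3: class for udev
    device_create(gs_class, NULL, gs_devno, NULL, DEV_NAME); // 4: /dev node
    return 0;                                                // 5: fops wired above
}

module_init(gs_init);
MODULE_LICENSE("GPL");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;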

&lt;p&gt;&lt;strong&gt;Testing It&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the module loaded inside QEMU, my /dev/gs_char entry was not created. Sad face. At first glance I thought something was wrong with my module, but I quickly realized I didn't have anything running (like udev) to handle the device entries. So a quick mknod later and I was up and running.&lt;/p&gt;
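
&lt;p&gt;For anyone hitting the same thing, the manual fix looks like this (the major number 240 is just an example; check what yours actually is):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Find the dynamically assigned major number, e.g. "240 gs_char"
grep gs_char /proc/devices

# Create the device node by hand (character device, major 240, minor 0)
mknod /dev/gs_char c 240 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;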

&lt;p&gt;echo "hello kernel" &amp;gt; /dev/gs_char&lt;br&gt;
cat /dev/gs_char&lt;/p&gt;

&lt;p&gt;And the driver prints out the open, read, write, and close messages using pr_info, so you get a nice trace in dmesg showing each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This little driver is the doorway into all the real kernel work I want to do. Once you understand the lifetime of a device, how file operations work, and how major/minor numbers get managed, it becomes way clearer how subsystems behave. This is also where a lot of the kernel’s design patterns start becoming familiar. You stop thinking of drivers as weird special things and start seeing them as normal kernel code with a few well-defined entry points. That’s a point that's always amazed me since my first OS class: the operating system and the compilers are just pieces of software themselves. They just have a very special role.&lt;/p&gt;

&lt;p&gt;What's next I don't know. Stay tuned!&lt;/p&gt;

</description>
      <category>linux</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Babies first /proc entry</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Mon, 08 Dec 2025 17:02:25 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/babies-first-proc-entry-5bdg</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/babies-first-proc-entry-5bdg</guid>
      <description>&lt;p&gt;Over the last couple of posts in this series I’ve been digging deeper into building small kernel modules against mainline, getting them to run cleanly under QEMU, and slowly building up the muscle memory for day-to-day kernel work. Up to this point I’ve only been dealing with pr_* output or simple tracepoints or kprobes, so I figured the next logical step was to expose data from a module to user space. As I understand it the easiest place to start with that is the proc filesystem.&lt;/p&gt;

&lt;p&gt;This post walks through creating a small read-only /proc entry using the modern, accepted interfaces in the Linux kernel. No deprecated APIs, no shortcuts, and nothing that would get laughed out of a patch review. Hopefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding /proc at a high level&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The important thing to remember is that nothing in /proc actually lives on disk. The proc filesystem is a virtual filesystem created by the kernel. When you read a file in /proc, the kernel runs code that generates the content on the fly.&lt;/p&gt;

&lt;p&gt;That makes proc files useful for state dumps, diagnostics, or anything where you want to give user space a quick window into whatever your module is doing.&lt;/p&gt;

&lt;p&gt;The modern way of creating proc entries uses two pieces:&lt;/p&gt;

&lt;p&gt;proc_create() with a set of struct proc_ops&lt;/p&gt;

&lt;p&gt;The seq_file API, usually through the single_open() helper&lt;/p&gt;

&lt;p&gt;This is the pattern that upstream kernel developers expect. Older APIs still exist in tree history, but you should not use them in new code. As I understand it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The module&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s the complete module. This creates a /proc/gs_proc_demo entry that returns a message plus the current value of jiffies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;linux/init.h&amp;gt;
#include &amp;lt;linux/module.h&amp;gt;
#include &amp;lt;linux/kernel.h&amp;gt;
#include &amp;lt;linux/proc_fs.h&amp;gt;
#include &amp;lt;linux/seq_file.h&amp;gt;
#include &amp;lt;linux/jiffies.h&amp;gt;

#define PROC_NAME "gs_proc_demo"

static int gs_proc_show(struct seq_file *m, void *v)
{
    seq_printf(m, "hello from %s\n", PROC_NAME);
    seq_printf(m, "jiffies: %lu\n", jiffies);
    return 0;
}

static int gs_proc_open(struct inode *inode, struct file *file)
{
    return single_open(file, gs_proc_show, NULL);
}

static const struct proc_ops gs_proc_ops = {
    .proc_open    = gs_proc_open,
    .proc_read    = seq_read,
    .proc_lseek   = seq_lseek,
    .proc_release = single_release,
};

static int __init gs_proc_init(void)
{
    if (!proc_create(PROC_NAME, 0, NULL, &amp;amp;gs_proc_ops))
        return -ENOMEM;
    pr_info("%s: loaded\n", PROC_NAME);
    return 0;
}

static void __exit gs_proc_exit(void)
{
    remove_proc_entry(PROC_NAME, NULL);
    pr_info("%s: unloaded\n", PROC_NAME);
}

module_init(gs_proc_init);
module_exit(gs_proc_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("adam");
MODULE_DESCRIPTION("Proc filesystem demo");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A couple of quick notes that helped me understand what’s going on:&lt;/p&gt;

&lt;p&gt;single_open() sets up the seq_file context and ensures your show() callback runs exactly once per read. That means you don’t need to think about offsets or partial reads.&lt;/p&gt;

&lt;p&gt;Your show() function should never call printk. You write to the seq_file using seq_printf(). In general it seems that one should never call printk, and instead should use the pr_* macros.&lt;/p&gt;

&lt;p&gt;The contents are generated every time user space reads the file. Nothing is cached.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building against mainline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Same workflow as the earlier modules:&lt;/p&gt;

&lt;p&gt;make -C /path/to/torvalds/kernel M=$PWD modules&lt;/p&gt;

&lt;p&gt;Boot into your QEMU VM that’s running the same kernel image.&lt;/p&gt;

&lt;p&gt;Insert the module:&lt;/p&gt;

&lt;p&gt;insmod gs_proc_demo.ko&lt;/p&gt;

&lt;p&gt;Read from the proc entry:&lt;/p&gt;

&lt;p&gt;cat /proc/gs_proc_demo&lt;/p&gt;

&lt;p&gt;Output looks something like:&lt;/p&gt;

&lt;p&gt;hello from gs_proc_demo&lt;br&gt;
jiffies: 12345678&lt;/p&gt;

&lt;p&gt;Remove it:&lt;/p&gt;

&lt;p&gt;rmmod gs_proc_demo&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Creating a /proc entry is one of those small steps that teaches you several deeper kernel development concepts at the same time: how the VFS layer works, how seq_file buffers output safely, how live kernel data is exposed to user space, and how kernel modules interact cleanly with filesystem interfaces.&lt;/p&gt;

&lt;p&gt;It’s also one of the first things you’ll run into when reading real-world kernel code. Most subsystems have at least one proc entry for debugging or state dumping. Being comfortable with this pattern means you’re finally out of “hello world” territory and starting to write modules with actual interfaces.&lt;/p&gt;

&lt;p&gt;Next up? I don't know honestly, maybe a quick comparison of /proc, sysfs, and debugfs, and where each one is appropriate.&lt;/p&gt;

&lt;p&gt;Thanks for reading and as always if you've any suggestions I'd love to hear them.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>kernel</category>
      <category>modules</category>
      <category>development</category>
    </item>
    <item>
      <title>Tainting the kernel</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Wed, 03 Dec 2025 19:10:54 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/tainting-the-kernel-2cgh</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/tainting-the-kernel-2cgh</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, I continue documenting my journey as I pivot into Linux kernel development. This week I spent time understanding how to instrument the kernel using tracepoints and kprobes. My goal was to observe process lifecycle events in a running kernel and eventually build a small out-of-tree module that hooks into these events.&lt;/p&gt;

&lt;p&gt;I ran into a few unexpected issues, including missing symbols, build environment mismatches, and understanding how tracepoint definitions propagate through the kernel build system. This post walks through the concepts, what I tried, what worked, and what I learned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Was Trying To Do&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Access a specific tracepoint (like sched_process_exit) from an out-of-tree module.&lt;/p&gt;

&lt;p&gt;Experiment with registering a probe that fires when processes exit.&lt;/p&gt;

&lt;p&gt;Validate the behavior in a QEMU guest running a kernel built from mainline sources.&lt;/p&gt;

&lt;p&gt;The bigger purpose being to learn kernel instrumentation, understanding how trace events are plumbed, exploring debugging tooling, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding How Tracepoints Work&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tracepoints are static instrumentation sites built directly into the kernel code.&lt;/p&gt;

&lt;p&gt;They expose a small, stable ABI for things like scheduler events, block I/O, filesystem operations, etc.&lt;/p&gt;

&lt;p&gt;They are declared with macros that generate a struct named &lt;code&gt;__tracepoint_&amp;lt;name&amp;gt;&lt;/code&gt; (for example, &lt;code&gt;__tracepoint_sched_process_exit&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Accessing a tracepoint from a loadable module requires that the tracepoint symbol exists in the kernel’s symbol table.&lt;/p&gt;

&lt;p&gt;Tracepoint symbols do not always exist in Module.symvers for out-of-tree modules. In fact it seems that most don't as they're not exported.&lt;/p&gt;

&lt;p&gt;Some tracepoints depend on config options.&lt;/p&gt;

&lt;p&gt;If the kernel was not built with tracing support enabled (like CONFIG_TRACEPOINTS, CONFIG_SCHEDSTATS, or CONFIG_EVENT_TRACING), the symbols may not be exported.&lt;/p&gt;

&lt;p&gt;While this was interesting to learn, it seems that more often than not these symbols are not exported for external modules. This makes sense for safety, and while it was edifying to go down this route, ultimately it did not end up meeting my needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kprobes as an Alternative&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kprobes allow attaching dynamic probes to almost any kernel function.&lt;/p&gt;

&lt;p&gt;Unlike tracepoints, kprobes do not require kernel config flags or static declarations. In fact if you are building external modules you might run into compilation issues if certain flags are set for instrumentation.&lt;/p&gt;

&lt;p&gt;They work even when the kernel was not built with full tracing options.&lt;/p&gt;

&lt;p&gt;They can be registered from an out-of-tree module with minimal prerequisites.&lt;/p&gt;

&lt;p&gt;My module did the following:&lt;/p&gt;

&lt;p&gt;Defined a struct kprobe&lt;/p&gt;

&lt;p&gt;Set the symbol name (for example, "do_exit" or "release_task")&lt;/p&gt;

&lt;p&gt;Registered pre-handlers and post-handlers&lt;/p&gt;

&lt;p&gt;Added pr_info logging for validation&lt;/p&gt;
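
&lt;p&gt;Stitched together, the module looked roughly like this. It's a sketch rather than a drop-in implementation: handler signatures follow include/linux/kprobes.h, and the post-handler runs after the probed instruction is single-stepped, not after the probed function returns (for return values you'd want a kretprobe).&lt;/p&gt;

```c
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/sched.h>

static int exit_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe pre: %s hit by pid %d (%s)\n",
		p->symbol_name, current->pid, current->comm);
	return 0;
}

/* Runs right after the probed instruction executes. */
static void exit_post(struct kprobe *p, struct pt_regs *regs,
		      unsigned long flags)
{
	pr_info("kprobe post: %s\n", p->symbol_name);
}

static struct kprobe kp = {
	.symbol_name  = "do_exit",
	.pre_handler  = exit_pre,
	.post_handler = exit_post,
};

static int __init kp_init(void)
{
	int ret = register_kprobe(&kp);

	if (ret < 0)
		pr_err("register_kprobe failed: %d\n", ret);
	return ret;
}

static void __exit kp_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kp_init);
module_exit(kp_exit);
MODULE_LICENSE("GPL");
```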

&lt;p&gt;Kprobes are incredibly flexible but less stable than tracepoints.&lt;/p&gt;

&lt;p&gt;They depend on internal function names which can change across kernel versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Build Environment Problem I Hit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm building the kernel on a host machine.&lt;/p&gt;

&lt;p&gt;I'm loading the module into a QEMU VM running a kernel built from a checkout of the mainline source tree.&lt;/p&gt;

&lt;p&gt;I found that most tracepoint symbols are not exported, which left my module failing to link.&lt;/p&gt;

&lt;p&gt;Running grep __tracepoint_sched_process_exit Module.symvers returned nothing, which was the clue.&lt;/p&gt;

&lt;p&gt;This was where I decided to use kprobes, which avoid the symbol export issue entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The kprobe fired reliably during process teardown.&lt;/p&gt;

&lt;p&gt;I printed small debugging messages, which showed up in dmesg.&lt;/p&gt;

&lt;p&gt;So I learned:&lt;/p&gt;

&lt;p&gt;Getting a tracepoint symbol exported for an out-of-tree module is possible with kernel source modification, but it isn't something that should be done. Perhaps this whole idea of an out-of-tree module that instruments kernel internals is doomed from the beginning. But hey, this is how we learn. I'd love to hear a more experienced perspective on this.&lt;/p&gt;

&lt;p&gt;Exported symbols appear in Module.symvers, which is what modpost uses to resolve symbols when linking modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tracepoints depend heavily on kernel config and build environment.&lt;/p&gt;

&lt;p&gt;Kprobes are powerful when you just need visibility.&lt;/p&gt;

&lt;p&gt;I now understand the kernel build system better (Module.symvers, CONFIG options, modules_prepare).&lt;/p&gt;

&lt;p&gt;Instrumentation is a good entry point for learning kernel internals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s Next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I think I might go back toward doing a basic character driver, syscall, or /proc creation. I'm not sure, but if you have a suggestion I'd love to hear it. Stay tuned.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>kernel</category>
      <category>c</category>
      <category>kprobe</category>
    </item>
    <item>
      <title>Kernel Module Dev Environment</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Fri, 28 Nov 2025 19:05:13 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/kernel-module-dev-environment-ie5</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/kernel-module-dev-environment-ie5</guid>
      <description>&lt;p&gt;I’ve been poking at kernel development recently; partly for fun, partly for career growth, partly because I want to understand the layers I’ve leaned on for years. &lt;/p&gt;

&lt;p&gt;This post continues that thread: small, incremental steps, written as reminders to myself and hopefully helpful to anyone taking a similar path.&lt;/p&gt;

&lt;p&gt;This time, I wanted to get a kernel module development environment working end-to-end. Nothing glamorous. No drivers. No deep dives into subsystems. Just:&lt;/p&gt;

&lt;p&gt;Build the kernel&lt;br&gt;
Boot QEMU with it&lt;br&gt;
Build a module&lt;br&gt;
Insert it&lt;br&gt;
Watch it say hello&lt;br&gt;
Remove it&lt;/p&gt;

&lt;p&gt;As usual, half the battle is just wiring together all the moving pieces correctly. So this post documents what I did, what worked, and what I want to remember later.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1
&lt;/h2&gt;

&lt;p&gt;Set up a dedicated folder for modules&lt;/p&gt;

&lt;p&gt;First lesson: keep module work isolated so you don’t end up polluting the kernel tree.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/kernel/hello
cd ~/kernel/hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Good fences make good neighbors.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2
&lt;/h2&gt;

&lt;p&gt;Write the simplest possible module&lt;/p&gt;

&lt;p&gt;Kernel modules are surprisingly small once you strip away the noise. Here’s the entire thing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;linux/init.h&amp;gt;
#include &amp;lt;linux/module.h&amp;gt;
#include &amp;lt;linux/kernel.h&amp;gt;

MODULE_LICENSE("GPL");
MODULE_AUTHOR("AWs.");
MODULE_DESCRIPTION("Hello world kernel module");
MODULE_VERSION("0.1");

static int __init hello_init(void)
{
    pr_info("Hello: module loaded!\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("Hello: module unloaded!\n");
}

module_init(hello_init);
module_exit(hello_exit);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It loads, prints a message, and unloads. The messages appear in the console output as well as in dmesg.&lt;/p&gt;

&lt;p&gt;One thing I wanted to remind myself here:&lt;br&gt;
Userland logging is not kernel logging.&lt;/p&gt;

&lt;p&gt;Use pr_info() instead of printf(). Muscle memory is strong, but so is printk. Note to self: explore the rest of the pr_* macros.&lt;/p&gt;
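
&lt;p&gt;As a note-to-self sketch, these are the common levels from include/linux/printk.h; each expands to printk() with a matching KERN_* level:&lt;/p&gt;

```c
#include <linux/printk.h>

/* Fragment for reference; call from module init to try them. */
static void __maybe_unused log_levels_demo(void)
{
	pr_emerg("system is unusable\n");
	pr_alert("action must be taken immediately\n");
	pr_crit("critical condition\n");
	pr_err("error condition\n");
	pr_warn("warning condition\n");
	pr_notice("normal but significant\n");
	pr_info("informational\n");
	/* Compiled out unless DEBUG is defined or dynamic debug is enabled. */
	pr_debug("debug-level detail\n");
}
```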
&lt;h2&gt;
  
  
  Step 3
&lt;/h2&gt;

&lt;p&gt;Use the kernel’s build system.&lt;/p&gt;

&lt;p&gt;Create a Makefile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;obj-m += hello.o

KDIR := /path/to/your/kernel/source
PWD := $(shell pwd)

all:
    $(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
    $(MAKE) -C $(KDIR) M=$(PWD) clean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set KDIR to the path of your kernel source tree.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4
&lt;/h2&gt;

&lt;p&gt;Build it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once I got everything lined up correctly I got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hello.ko
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5
&lt;/h2&gt;

&lt;p&gt;Get the module into QEMU (virtio-9p)&lt;/p&gt;

&lt;p&gt;The nicest way to test that I could find was adding the following to my QEMU alias. There is almost certainly a better way to do this, one that doesn't require modifying the alias for each new module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-fsdev local,id=mods,path=~/kernel-dev/hello,security_model=none \
-device virtio-9p-pci,fsdev=mods,mount_tag=mods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then inside the VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p /mnt/mods
mount -t 9p -o trans=virtio mods /mnt/mods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This mounts the host's modules folder inside the guest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6
&lt;/h2&gt;

&lt;p&gt;Insert the module&lt;/p&gt;

&lt;p&gt;Inside the VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insmod /mnt/mods/hello.ko
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm with dmesg&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dmesg | tail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;in which I found&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello: module loaded!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Along with a warning about tainting the kernel, which is expected for an out-of-tree module.&lt;/p&gt;

&lt;p&gt;Finally, remove it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rmmod hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dmesg | tail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I observed&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello: module unloaded!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  What this means for me:
&lt;/h1&gt;

&lt;p&gt;I built my own kernel&lt;br&gt;
I booted it&lt;br&gt;
I built a module against it&lt;br&gt;
I loaded that module into the kernel&lt;br&gt;
I then unloaded it from the kernel&lt;/p&gt;

&lt;p&gt;This is my pipeline for future kernel work. If you have experience let me know any tips or changes you think I should make.&lt;/p&gt;

&lt;h1&gt;
  
  
  What I’m doing next:
&lt;/h1&gt;

&lt;p&gt;Probably /proc/hello or a sysfs attribute. Maybe standing up a minimal character driver.&lt;/p&gt;

&lt;p&gt;The working dev environment was the real milestone.&lt;/p&gt;

&lt;p&gt;If you’re doing your own kernel experiments, I’d love to hear what you’re building.&lt;/p&gt;

</description>
      <category>c</category>
      <category>tutorial</category>
      <category>linux</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Panic in the sandbox</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Wed, 26 Nov 2025 18:05:42 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/panic-in-the-sandbox-1aek</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/panic-in-the-sandbox-1aek</guid>
      <description>&lt;h2&gt;
  
  
  Day 1 — Trying to get the QEMU kernel sandbox going
&lt;/h2&gt;

&lt;p&gt;Time to set up the environment for kernel development. Rather than risk shooting myself in the foot on my bare host, I decided to build a custom kernel plus a minimal userland and boot it inside QEMU.&lt;/p&gt;

&lt;p&gt;What I thought would be a straightforward path went somewhat astray, or at least turned out differently than I'd expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I did
&lt;/h2&gt;

&lt;p&gt;Cloned a recent upstream Linux kernel source tree.&lt;/p&gt;

&lt;p&gt;Installed dependencies (compiler, build tools, kernel-dev libs, etc.). &lt;/p&gt;

&lt;h1&gt;
  
  
  QEMU
&lt;/h1&gt;

&lt;p&gt;Configured the kernel (make defconfig) and enabled the built-in drivers I expected to need (make kvm_guest.config). It was nice to find a built-in config for KVM guests, instead of having to change everything through menuconfig myself or write config snippets and merge them with merge_config.&lt;/p&gt;

&lt;h1&gt;
  
  
  init
&lt;/h1&gt;

&lt;p&gt;Built a minimal root filesystem using BusyBox plus a tiny initramfs / minimal userland. This provided a good refresher on how booting the kernel works.&lt;/p&gt;

&lt;p&gt;Launched QEMU: pointed it at the kernel image, attached the rootfs, set up the console/serial, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I expected
&lt;/h2&gt;

&lt;p&gt;A quick "hello world" environment. Boot → get a kernel log on serial → minimal root shell → experiment with loading modules / tinkering / debugging — all safely sandboxed, without risking my host’s stability.&lt;/p&gt;

&lt;p&gt;What followed was… a lot of head-scratching.&lt;/p&gt;

&lt;p&gt;Mostly I spent a ton of time digging through forums and reading posts about how the systems worked, but it really wasn't all that bad. It started with some kernel panics because init wasn't built properly, or because I was pointing to the wrong bzImage. Once I got all the pieces properly laid out, it all worked perfectly: a nice safe environment where I don't have to worry about crashing my daily driver.&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s next
&lt;/h1&gt;

&lt;p&gt;Build and test a trivial kernel module, load it, unload it.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>kernel</category>
      <category>qemu</category>
      <category>learning</category>
    </item>
    <item>
      <title>A Simple Binary</title>
      <dc:creator>Adam Weber</dc:creator>
      <pubDate>Thu, 20 Nov 2025 13:34:31 +0000</pubDate>
      <link>https://dev.to/adam_weber_6dc0d5bd752326/a-simple-binary-58eg</link>
      <guid>https://dev.to/adam_weber_6dc0d5bd752326/a-simple-binary-58eg</guid>
      <description>&lt;h2&gt;
  
  
  Day 1
&lt;/h2&gt;

&lt;p&gt;I’m starting a project to get back closer to the machine where I love to be. I'm not going to start with anything ambitious, but by coming back to something simple: compiling a tiny C program and inspecting the binary that drops out.&lt;/p&gt;

&lt;p&gt;I’ve spent decades building systems where performance, architecture, concurrency, and distributed correctness mattered — but it’s amazing how grounding it can be to step back and revisit the basics. Sometimes the smallest artifacts remind us of the real machinery beneath all the abstractions.&lt;/p&gt;

&lt;p&gt;So today’s exercise was straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;stdio.h&amp;gt;

int main(int argc, char ** argv){
    (void)argc;
    (void)argv;
    printf("Hello world!\n");
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compile it, and now we have a binary. Nothing special. But it’s a good excuse to remember what actually happens when we run a program.&lt;/p&gt;

&lt;h2&gt;
  
  
  Programs don’t start at main
&lt;/h2&gt;

&lt;p&gt;Even for a tiny program, the binary is more than just my code. It represents the entire journey:&lt;/p&gt;

&lt;p&gt;C source → preprocessing → compilation → assembly → linking → machine instructions&lt;br&gt;
→ stored in an ELF binary → loaded by the OS → environment and runtime initialized → then main() is called&lt;/p&gt;

&lt;p&gt;Having spent a good amount of time in higher-level environments, it's good to remind myself of this.&lt;/p&gt;

&lt;p&gt;In reality, when the OS runs this binary, it doesn’t jump straight to main. It jumps into _start, which:&lt;/p&gt;

&lt;p&gt;Sets up the stack and registers&lt;br&gt;
Prepares arguments and environment&lt;br&gt;
Hooks up the dynamic loader&lt;br&gt;
Transfers control to __libc_start_main&lt;br&gt;
Which finally calls our main&lt;/p&gt;

&lt;h2&gt;
  
  
  A good reminder of binary internals
&lt;/h2&gt;

&lt;p&gt;Ghidra helped me put this back into focus:&lt;/p&gt;

&lt;p&gt;A container of machine code&lt;br&gt;
A structured file format with defined sections (.text, .data, .rodata, etc.)&lt;br&gt;
Metadata telling the kernel how to load it&lt;br&gt;
Instructions designed to map cleanly onto CPU execution flow&lt;/p&gt;

&lt;p&gt;It’s a small thing, but seeing that movement from structure → execution never stops being satisfying.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why start here?
&lt;/h2&gt;

&lt;p&gt;My goal is to grow this project into deeper territory:&lt;/p&gt;

&lt;p&gt;Kernel modules&lt;br&gt;
eBPF tracing&lt;br&gt;
Malware analysis&lt;br&gt;
Reverse engineering&lt;br&gt;
System-level telemetry tools&lt;/p&gt;

&lt;p&gt;And certainly some rabbit holes I don’t know about...&lt;/p&gt;

&lt;p&gt;Starting simple is intentional.&lt;/p&gt;

&lt;p&gt;It’s worth resetting my mental model of:&lt;/p&gt;

&lt;p&gt;“What is a program, to the OS?”&lt;/p&gt;

&lt;p&gt;Rebuilding that mental foundation helps everything else make more sense. At least for me.&lt;/p&gt;

&lt;p&gt;Even if I’ve known these things forever, revisiting them cleanly helps switch the brain back into systems mode — away from frameworks, clouds, and abstractions and back into the machine. Snuggled up spooning with the hardware.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next
&lt;/h2&gt;

&lt;p&gt;Next steps will be just as incremental:&lt;/p&gt;

&lt;p&gt;Writing my first kernel module again&lt;br&gt;
Loading it with insmod&lt;br&gt;
Reading logs through dmesg&lt;/p&gt;

&lt;p&gt;Nothing grand, just onward progress.&lt;/p&gt;

&lt;p&gt;This is a slow burn, but a good reminder for me.&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>learning</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
