In Linux, everything is a file—not just text documents or executables, but also devices, network sockets, and even running processes. This idea turns the whole operating system into a giant, interconnected file‑system tree that you can “hunt” through like a detective. In this blog I’ll walk you through 10 meaningful discoveries I made while exploring the Linux file system, focusing on what those files and directories actually do, why they exist, and what interesting insights they reveal.
The brain of the system: /etc
The /etc directory acts as Linux’s configuration brain. It stores almost all system‑wide configuration files, from user accounts (/etc/passwd, /etc/shadow) to service settings, network parameters, and shell preferences.
Why does it exist?
Linux is designed to be modular and customizable. Instead of baking settings into the kernel, the system keeps them in human‑readable files under /etc, so administrators can tweak behavior without recompiling the OS.
What problem it solves:
- Centralized configuration: tools like SSH, NTP, and cron all read their config from here, so you can control the whole machine from one place.
- Portability: take /etc from one server and some settings can be reused on another, as long as the distribution is similar.
Interesting insight:
Opening /etc/environment shows where global environment variables are defined; every user and many system services inherit these values. This means a single file can subtly shape how every process behaves across the system.
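As an illustration, a minimal /etc/environment might look like the fragment below. The exact values are hypothetical; the point is the format — simple KEY=value pairs, with no shell expansion or scripting allowed.

```shell
# Hypothetical /etc/environment: plain KEY=value pairs read by PAM at login,
# inherited by every user session and many system services.
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
LANG="en_US.UTF-8"
```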
DNS and name resolution: /etc/resolv.conf and /etc/hosts
When you type google.com into a browser, Linux must translate that into an IP address. The main files that drive this are /etc/resolv.conf and /etc/hosts.
What they do:
- /etc/resolv.conf: tells the resolver which DNS servers to query and sets search domains.
- /etc/hosts: maps hostnames to IPs locally, bypassing DNS entirely.
Why they exist:
Without DNS, you’d have to memorise IP addresses for every service. These files let the system know where to ask for DNS answers and allow quick local overrides for testing or debugging.
Example:
$ cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 1.1.1.1
search mycompany.local
What problem it solves:
- Fault isolation: you can point a test server to a different DNS server by changing just /etc/resolv.conf.
- Local testing: /etc/hosts lets you temporarily route api.staging.example to 127.0.0.1 without touching DNS infrastructure.
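For instance, the local-testing override could look like this /etc/hosts fragment (api.staging.example here stands in for whatever hostname you want to redirect):

```shell
# /etc/hosts format: IP address, canonical hostname, optional aliases.
127.0.0.1   localhost
127.0.0.1   api.staging.example   # temporary override for local testing
```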
Interesting insight:
On some systems /etc/resolv.conf is just a symlink to /run/systemd/resolve/resolv.conf because systemd-resolved manages DNS and caches queries. This reveals how multiple layers of abstraction (service ↔ config ↔ resolver) all converge into a single file that applications read.
Routing tables on disk: /proc/net/route and /etc/iproute2
Routing is the process of deciding which network interface and gateway to use for each packet. Linux exposes its routing table through virtual files under /proc, while supporting configuration — such as the named routing tables in /etc/iproute2/rt_tables — lives under /etc/iproute2.
What /proc/net/route does:
It’s a text representation of the kernel’s routing table, showing destination networks, gateways, and interfaces in hexadecimal.
Example:
$ cat /proc/net/route
Iface Destination Gateway Flags RefCnt Use Metric MTU Window IRTT
eth0 0000FEA9 00000000 0001 0 0 0 0 0 0
eth0 0200A8C0 00000000 0001 0 0 0 0 0 0
Why /proc/net/route exists:
The kernel maintains routing data in memory, but user‑space tools need access. /proc exposes this as a file‑like interface so commands like ip route and netstat -r can read and display it.
What problem it solves:
- Troubleshooting: if a server can’t reach a remote subnet, inspecting /proc/net/route (or ip route) can show missing or wrong routes.
- Scripting: scripts can parse this file to detect routing changes or validate network config.
Interesting insight:
Seeing the table in hexadecimal initially looks cryptic, but once you decode the destination and gateway fields (e.g., 0200A8C0 ↔ 192.168.0.2), you realise it’s literally the kernel’s raw routing logic laid bare as a text file.
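The decoding above can be sketched in a few lines of bash (the address 0200A8C0 is taken from the sample table; substring expansion is bash-specific):

```shell
#!/usr/bin/env bash
# /proc/net/route stores IPv4 addresses as little-endian hex,
# so the byte pairs must be read in reverse to get the dotted form.
hex=0200A8C0
printf '%d.%d.%d.%d\n' "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
# prints 192.168.0.2
```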
Networking interface configuration
Network interfaces are controlled by config files and runtime data exposed under the file system. The exact paths differ by distro and network manager, but the idea is the same: configuration files live under /etc and running‑state info lives under /sys or /proc.
On many modern systems, NetworkManager writes connection‑specific files such as:
$ ls /etc/NetworkManager/system-connections/
wifi-home.nmconnection
server-wired.nmconnection
Why these files exist:
They persist the settings (SSID, IP mode, gateway, DNS) for each interface so the system doesn’t need to re‑ask for configuration every reboot.
What problem they solve:
- Automatic re‑connection: after a reboot, the system uses these configs to bring interfaces up correctly.
- Multi‑environment: a laptop can have separate configs for home, office, and a mobile hotspot.
Interesting insight:
Exploring /sys/class/net/eth0 shows symbolic links and device‑specific files that expose the interface’s driver, speed, and MAC address. This reveals how even hardware is modelled as a file‑system tree, making low‑level inspection surprisingly scriptable.
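A quick way to see this — using the loopback interface lo, which exists on every Linux system, in case your machine doesn't have an eth0 — is:

```shell
# Each network interface is a directory of small attribute files under /sys.
ls /sys/class/net/lo/
cat /sys/class/net/lo/address   # MAC address; all zeros for loopback
cat /sys/class/net/lo/mtu       # the interface's MTU as plain text
```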
Logs and digital breadcrumbs: /var/log
System logs are one of the most powerful “forensic” tools in Linux. The /var/log directory houses logs from the kernel, services, and applications, each stored as ordinary text files.
Typical files:
$ ls /var/log/
auth.log syslog kern.log nginx/access.log apache2/error.log
What they do:
- auth.log (or secure on some distros): records SSH logins, sudo attempts, and other auth events.
- syslog (or messages): general system‑wide messages from daemons and the kernel.
- Service‑specific logs (e.g., nginx/*, apache2/*): store web‑server access and errors.
Why they exist:
Logs answer the “what happened and when?” question. They help debug crashes, track performance issues, and detect security incidents.
What problems they solve:
- Incident response: after a suspected breach, you can comb through /var/log/auth.log to see if there were brute‑force SSH attempts.
- Automation: log‑parsing tools read these files and trigger alerts or dashboards.
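As a sketch of that automation point, the self-contained pipeline below counts failed SSH attempts per source IP. The log lines are fabricated samples fed in via a here-document, so it runs anywhere; against a real system you would read /var/log/auth.log instead.

```shell
# Filter failed logins, extract the source IP (4th field from the end
# in the standard sshd message format), then count occurrences per IP.
grep 'Failed password' <<'EOF' | awk '{print $(NF-3)}' | sort | uniq -c
Apr 22 10:01:02 host sshd[123]: Failed password for root from 203.0.113.5 port 2200 ssh2
Apr 22 10:01:05 host sshd[124]: Failed password for root from 203.0.113.5 port 2201 ssh2
Apr 22 10:01:09 host sshd[125]: Accepted password for ritam from 198.51.100.7 port 2202 ssh2
EOF
# prints:       2 203.0.113.5
```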
Interesting insight:
Reading /var/log/kern.log can show exact timestamps of hardware events, such as when a USB device was plugged in or when a disk driver threw an error. This makes /var/log feel like a chronological diary of the machine’s entire life.
Users and accounts: /etc/passwd, /etc/shadow, /etc/group
User management is surprisingly file‑based. The main files are:
- /etc/passwd: user names, UIDs, home directories, and default shells (no passwords).
- /etc/shadow: password hashes, expiry dates, and login restrictions.
- /etc/group: group definitions and which users belong to them.
Example:
$ head -2 /etc/passwd
root:x:0:0:root:/root:/bin/bash
ritam:x:1000:1000:Ritam,,,:/home/ritam:/bin/zsh
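Because the format is plain colon-separated text, extracting fields takes nothing more than awk. Using the sample line from above:

```shell
# Split a passwd entry on ':' — field 1 is the name, 3 the UID,
# 6 the home directory, and 7 the login shell.
echo 'ritam:x:1000:1000:Ritam,,,:/home/ritam:/bin/zsh' \
  | awk -F: '{print "user=" $1, "uid=" $3, "home=" $6, "shell=" $7}'
# prints: user=ritam uid=1000 home=/home/ritam shell=/bin/zsh
```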
Why they exist:
Linux follows the principle of “everything is a file.” Instead of a hidden database, user and group data live in plain(ish) text files with strict permissions.
What problem they solve:
- Fast, portable access: any program that needs UID‑to‑name mapping can open /etc/passwd without a database daemon.
- Security separation: sensitive password hashes are stored in /etc/shadow, which is readable only by root, preventing casual inspection.
Interesting insight:
Noticing the x in /etc/passwd’s password field and then finding the actual hash in /etc/shadow reveals a clever separation of concerns: one file for public info, another for secret data, all still under /etc.
Permissions and the role of /etc/sudoers
File permissions already give Linux a rich security model, but /etc/sudoers adds another layer on top. It defines which users can run which commands as root or other users.
Example:
$ grep 'ritam' /etc/sudoers
ritam ALL=(ALL:ALL) NOPASSWD:ALL
What it does:
Each line in /etc/sudoers grants a user (or group) permission to run specific commands on specific hosts, sometimes without a password.
Why it exists:
Direct root logins are risky; sudo allows fine‑grained privilege delegation. An admin can give a developer temporary root‑like access to a web server without handing over the root password.
What problem it solves:
- Auditability: sudo logs every command to /var/log/auth.log, so you can track who did what.
- Least privilege: a backup user can run only backup‑related commands, not arbitrary system changes.
Interesting insight:
Hunting through /etc/sudoers and its included snippets (often under /etc/sudoers.d/) reveals that even privilege‑escalation logic is ultimately driven by human‑editable text files, not a hidden binary policy engine.
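A hypothetical drop-in file under /etc/sudoers.d/ illustrating the least-privilege idea — the backup user and script path here are invented for the example:

```shell
# /etc/sudoers.d/backup — grant exactly one command, nothing else.
# Always edit such files with `visudo -f` so syntax errors are caught
# before they lock you out of sudo entirely.
backup ALL=(root) NOPASSWD: /usr/local/bin/run-backup.sh
```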
Processes as files: /proc
The /proc filesystem is one of the most fascinating parts of the Linux file system. Each running process gets a directory under /proc/<PID>, containing status files, memory maps, and environment variables.
Example:
$ ls /proc/1/
cwd environ exe fd/ mem mounts status cmdline
What it does:
- status: basic metadata like memory usage, state, and user ID.
- cmdline: the command line that started the process.
- fd/: symbolic links to open file descriptors.
- environ: environment variables as null‑separated strings.
Why /proc exists:
The kernel exposes process‑level information in a file‑like interface so tools like ps, top, and lsof can read it without needing special syscalls for every piece of data.
What problem it solves:
- Process inspection: you can literally cat /proc/$PID/environ to see which environment variables a service sees.
- Forensics: if a process opens suspicious files or sockets, you can inspect /proc/$PID/fd and /proc/$PID/maps to trace them.
Interesting insight:
Looking at /proc/self (a symlink to the current process’s own directory) makes it feel like each process “lives” in the file system, reading and writing information about itself as if it were any other service talking to a config file.
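You can watch a process describe itself with commands like these; since /proc/self always resolves to whichever process does the reading, each command below reports on itself:

```shell
cat /proc/self/comm       # name of the reading process — prints "cat"
readlink /proc/self/cwd   # current working directory, exposed as a symlink
tr '\0' '\n' < /proc/self/environ | head -3   # its environment, one var per line
```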
Device files and /dev
In Linux, even hardware devices are represented as files under /dev. These “device files” let you read from or write to physical devices using the same file‑I/O operations a text program would use.
Example:
$ ls -l /dev/sda*
brw-rw---- 1 root disk 8, 0 Apr 22 2026 /dev/sda
brw-rw---- 1 root disk 8, 1 Apr 22 2026 /dev/sda1
What they do:
- /dev/sda, /dev/sda1, etc.: block device files for a disk and its partitions.
- /dev/null: a sink that discards all data written to it.
- /dev/random, /dev/urandom: provide entropy for random data.
Why they exist:
The kernel abstracts hardware into a file‑like interface so applications can treat storage, terminals, and other devices uniformly.
What problem it solves:
- Abstraction: tools like dd can back up a disk by literally copying /dev/sda to a file, without caring what controller it is.
- Security: restrictive permissions on device files prevent random users from directly accessing hardware.
Interesting insight:
Running ls -l /dev and seeing that /dev/sda is a “block special” file (brw-rw----) shows that even raw disk access is just another file, governed by the same permission model used for regular files and directories.
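Two quick, safe demonstrations of the uniform file interface — one reads from /dev/urandom, the other writes to /dev/null, so neither touches real hardware state:

```shell
# Read 8 random bytes from the kernel's RNG and hex-dump them,
# exactly as you would read the first bytes of any ordinary file.
head -c 8 /dev/urandom | od -An -tx1

# Anything written to /dev/null simply disappears.
echo "noisy output" > /dev/null
```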
Boot‑time configuration under /boot and /etc/grub
Booting is orchestrated by files under /boot and configuration under /etc/grub. These files define which kernel to load, which initramfs to use, and what kernel parameters to pass.
Example:
$ ls /boot
vmlinuz-6.8.0-10-generic
initrd.img-6.8.0-10-generic
grub/
What they do:
- vmlinuz-*: compressed kernel images.
- initrd.img-*: initial RAM disks used early in boot.
- /boot/grub/grub.cfg (generated from /etc/grub.d/* and /etc/default/grub): boot‑menu configuration.
Why they exist:
The bootloader needs to know which kernel and initramfs to load, and the admin needs a way to tweak kernel parameters (e.g., nomodeset, console=ttyS0).
What problem it solves:
- Predictable boot: the system can always find the correct kernel and initramfs in /boot.
- Flexibility: changing /etc/default/grub and running update-grub generates a new grub.cfg, allowing you to experiment with different boot options.
Interesting insight:
Opening /boot/grub/grub.cfg and seeing kernel‑parameter lines like linux /boot/vmlinuz-6.8.0-10-generic root=UUID=... ro makes it clear that booting is just another script‑like configuration, defined by plain text that you can read and edit.
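For example, a typical /etc/default/grub fragment — the values here are illustrative defaults, not a prescription — which update-grub turns into the generated grub.cfg:

```shell
# Sourced as shell variables by the grub-mkconfig/update-grub tooling.
GRUB_DEFAULT=0                            # boot the first menu entry
GRUB_TIMEOUT=5                            # seconds to show the menu
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" # kernel parameters for normal boots
GRUB_CMDLINE_LINUX=""                     # parameters added to every entry
```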
Environment and system‑wide configuration
Beyond /etc/environment, many environment variables and shell settings are defined in files such as /etc/profile, /etc/profile.d/*.sh, /etc/bash.bashrc, and user‑specific ~/.bashrc, ~/.profile.
Example:
$ cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk
export PATH=$JAVA_HOME/bin:$PATH
What they do:
These scripts set environment variables and shell aliases that are inherited by every interactive shell session.
Why they exist:
To avoid hard‑coding paths and settings into each user’s shell config, distribution maintainers centralize them in /etc.
What problem it solves:
- Consistency: every user on the system gets the same JAVA_HOME unless explicitly overridden.
- Maintenance: updating one file under /etc can propagate a change to all future logins.
Interesting insight:
Hunting through /etc/profile.d/*.sh and comparing them with your own ~/.bashrc reveals that Linux effectively “assembles” your environment from multiple layers of configuration files, like a stack of configuration cards.
Wrapping up: thinking like a system investigator
Exploring the Linux file system beyond basic commands turns you into a kind of system detective. Instead of seeing /etc, /proc, /dev, and /var/log as abstract directories, you begin to recognise them as the living, file‑based control plane of the entire OS.
Each meaningful discovery—whether it’s decoding a routing table from /proc/net/route, reading environment variables from /etc/environment, or tracing a suspicious process via /proc/$PID—tells you why Linux behaves the way it does and how you can safely change or debug that behaviour. This “hunting” mindset doesn’t just help with assignments; it trains you to think like the OS itself, which is exactly what you want when you step into roles as a developer, DevOps engineer, or security analyst.