Every time I set up a new server, debug a production issue at 2 AM, or automate a deployment pipeline, I reach for the same set of Linux commands. After over a decade of working in terminals daily, I have built up a toolkit of 150+ commands that handle virtually every situation I encounter.
This guide is the reference I wish I had when I started. Every command includes a real, runnable example with realistic output. No filler, no theory-only explanations. Just practical commands you can copy, paste, and use immediately.
Whether you are a developer who just started using the terminal, a sysadmin managing production servers, or a DevOps engineer building CI/CD pipelines, this list has you covered.
Quick reference table
Here is every command in this guide, organized by category. Bookmark this section for fast lookups.
| Category | Commands |
|---|---|
| Navigation and File System | pwd, cd, ls, tree, find, locate, which, whereis, file, stat, realpath, basename, dirname, readlink, df |
| File Operations | cat, less, more, head, tail, touch, cp, mv, rm, mkdir, rmdir, ln, dd, truncate, split, csplit, shred, install, mktemp, tee |
| File Permissions and Ownership | chmod, chown, chgrp, umask, getfacl, setfacl, lsattr, chattr |
| Text Processing and Search | grep, sed, awk, cut, sort, uniq, wc, tr, diff, comm, paste, join, fmt, fold, column, nl, expand, unexpand, rev, tac |
| Compression and Archiving | tar, gzip, gunzip, bzip2, xz, zip, unzip, zcat, zgrep, 7z |
| Process Management | ps, top, htop, kill, killall, pkill, pgrep, nice, renice, nohup, bg, fg, jobs, wait, watch |
| User and Group Management | whoami, id, who, w, useradd, usermod, userdel, groupadd, passwd, su, sudo, last, finger |
| Networking | ping, curl, wget, ssh, scp, rsync, netstat, ss, ip, ifconfig, dig, nslookup, host, traceroute, mtr, nc, nmap, iptables, route, arp |
| Disk and Storage | df, du, mount, umount, fdisk, lsblk, blkid, mkfs, fsck, dd |
| System Information | uname, hostname, uptime, date, cal, timedatectl, lscpu, lsmem, free, vmstat, lsusb, lspci, dmesg |
| Package Management | apt, apt-get, dpkg, yum, dnf, pacman, snap, flatpak |
| Shell and Environment | echo, printf, export, env, printenv, alias, unalias, source, history, type, hash |
| I/O Redirection and Piping | >, >>, <, 2>, 2>&1, \| |
| Job Scheduling | cron, crontab, at, batch, systemctl |
| Advanced and Power User | strace, ltrace, lsof, tcpdump, sar, perf, inotifywait, screen, tmux, parallel |
Navigation and file system
These are the commands you will use hundreds of times per day. They are the foundation of everything else.
pwd — Print working directory
Shows the full path of the directory you are currently in. I use this constantly when jumping between projects to confirm where I am before running destructive commands.
$ pwd
/home/rahul/projects/webapp
cd — Change directory
Moves you to a different directory. The most fundamental navigation command.
# Go to a specific directory
$ cd /var/log
# Go to your home directory
$ cd ~
# Go up one level
$ cd ..
# Go back to the previous directory
$ cd -
/home/rahul/projects/webapp
Tip: cd - is incredibly useful. It acts like an undo for directory changes, toggling between your current and previous locations.
ls — List directory contents
Lists files and directories. I probably run ls more than any other command. The flags make all the difference.
# Basic listing
$ ls
README.md src package.json node_modules dist
# Long format with permissions, size, and dates
$ ls -lh
total 128K
drwxr-xr-x 5 rahul rahul 4.0K Mar 10 09:15 dist
drwxr-xr-x 42 rahul rahul 4.0K Mar 12 14:22 node_modules
-rw-r--r-- 1 rahul rahul 1.2K Mar 10 08:30 package.json
-rw-r--r-- 1 rahul rahul 450 Mar 9 16:45 README.md
drwxr-xr-x 8 rahul rahul 4.0K Mar 12 14:20 src
# Show hidden files too
$ ls -la
total 156K
drwxr-xr-x 7 rahul rahul 4.0K Mar 12 14:22 .
drwxr-xr-x 12 rahul rahul 4.0K Mar 1 10:00 ..
-rw-r--r-- 1 rahul rahul 120 Mar 9 16:45 .env
drwxr-xr-x 8 rahul rahul 4.0K Mar 12 14:20 .git
-rw-r--r-- 1 rahul rahul 45 Mar 9 16:45 .gitignore
-rw-r--r-- 1 rahul rahul 1.2K Mar 10 08:30 package.json
drwxr-xr-x 8 rahul rahul 4.0K Mar 12 14:20 src
# Sort by modification time, newest first
$ ls -lt
Tip: Add alias ll='ls -lah' to your .bashrc. You will use it constantly.
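To make the alias permanent, append it to your shell config and reload it. A minimal sketch, assuming bash and the default ~/.bashrc location:

```shell
# Persist the alias for future sessions, then load it into the current one
echo "alias ll='ls -lah'" >> ~/.bashrc
source ~/.bashrc
```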
tree — Display directory tree
Shows the directory structure as a visual tree. Perfect for understanding project layouts at a glance.
$ tree -L 2
.
├── README.md
├── package.json
├── src
│ ├── components
│ ├── pages
│ ├── styles
│ └── utils
└── dist
├── index.html
└── assets
7 directories, 3 files
# Show only directories
$ tree -d -L 3
find — Search for files
The most powerful file search command in Linux. I use it daily for finding files by name, type, size, and modification time.
# Find all JavaScript files
$ find /home/rahul/project -name "*.js"
/home/rahul/project/src/index.js
/home/rahul/project/src/utils/helpers.js
/home/rahul/project/src/components/App.js
# Find files modified in the last 24 hours
$ find . -mtime -1 -type f
./src/index.js
./src/components/Header.js
# Find files larger than 100MB
$ find / -size +100M -type f 2>/dev/null
/var/log/syslog.1
/home/rahul/downloads/dataset.csv
# Find and delete all .log files (use with caution)
$ find /tmp -name "*.log" -type f -delete
# Find empty directories
$ find . -type d -empty
./src/tests
./dist/temp
Tip: Always add 2>/dev/null when searching from root to suppress permission denied errors.
locate — Find files by name (fast)
Uses a pre-built database to find files almost instantly. Much faster than find for simple name searches, but the database needs to be updated periodically.
$ locate nginx.conf
/etc/nginx/nginx.conf
/etc/nginx/nginx.conf.bak
# Update the database first if results are stale
$ sudo updatedb
which — Locate a command binary
Shows the full path of an executable. I use this to verify which version of a program is being used.
$ which python3
/usr/bin/python3
$ which node
/home/rahul/.nvm/versions/node/v20.11.0/bin/node
whereis — Locate binary, source, and man pages
Similar to which, but also shows source files and manual pages.
$ whereis gcc
gcc: /usr/bin/gcc /usr/lib/gcc /usr/share/man/man1/gcc.1.gz
file — Determine file type
Tells you what a file actually is, regardless of its extension. Incredibly useful when you download a file and the extension is wrong or missing.
$ file photo.jpg
photo.jpg: JPEG image data, JFIF standard 1.01, resolution (DPI), density 72x72
$ file mystery_file
mystery_file: gzip compressed data, from Unix, original size modulo 2^32 10240
$ file /usr/bin/ls
/usr/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV)
stat — Display detailed file information
Shows everything about a file: size, permissions, timestamps, inode number. More detailed than ls -l.
$ stat package.json
File: package.json
Size: 1245 Blocks: 8 IO Block: 4096 regular file
Device: 802h/2050d Inode: 1048753 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ rahul) Gid: ( 1000/ rahul)
Access: 2026-03-12 14:22:01.000000000 +0000
Modify: 2026-03-10 08:30:45.000000000 +0000
Change: 2026-03-10 08:30:45.000000000 +0000
Birth: 2026-03-09 16:45:00.000000000 +0000
realpath — Resolve the absolute path
Resolves symlinks and relative paths to give you the true absolute path.
$ realpath ../../config/settings.yml
/home/rahul/config/settings.yml
$ realpath /usr/bin/python3
/usr/bin/python3.11
basename — Extract filename from path
Strips the directory portion from a path. I use this a lot in shell scripts.
$ basename /home/rahul/projects/webapp/src/index.js
index.js
$ basename /home/rahul/projects/webapp/src/index.js .js
index
dirname — Extract directory from path
The opposite of basename. Returns just the directory portion.
$ dirname /home/rahul/projects/webapp/src/index.js
/home/rahul/projects/webapp/src
readlink — Print the target of a symlink
Shows where a symbolic link points to.
$ readlink /usr/bin/python3
python3.11
$ readlink -f /usr/bin/python3
/usr/bin/python3.11
df — Disk free space
Shows how much disk space is available on mounted filesystems. I check this whenever a server runs out of space.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 32G 16G 67% /
/dev/sdb1 200G 145G 45G 77% /data
tmpfs 3.9G 0 3.9G 0% /dev/shm
# Show only specific filesystem type
$ df -h -t ext4
File operations
These commands handle creating, reading, copying, moving, and deleting files. Master these and you can manage any file system.
cat — Concatenate and display files
The simplest way to view a file's contents. I use it for small files and for quick peeks at config files.
$ cat /etc/hostname
web-server-01
# Show line numbers
$ cat -n script.sh
1 #!/bin/bash
2 echo "Starting backup..."
3 rsync -av /data /backup/
4 echo "Backup complete."
# Concatenate multiple files
$ cat header.html body.html footer.html > page.html
less — View files with pagination
The best way to read long files. Lets you scroll up and down, search, and navigate freely.
$ less /var/log/syslog
# Inside less:
# /pattern - search forward
# ?pattern - search backward
# n - next match
# N - previous match
# g - go to beginning
# G - go to end
# q - quit
Tip: Use less +F to follow a file in real time, similar to tail -f, but you can press Ctrl+C to stop following and scroll around.
more — View files page by page
An older, simpler pager than less. Scrolls forward only.
$ more /etc/services
# Press space to go to the next page
# Press q to quit
head — Display the beginning of a file
Shows the first N lines of a file. Default is 10 lines. Great for peeking at CSV headers or log files.
$ head -n 5 /var/log/auth.log
Mar 12 08:00:01 server CRON[12345]: pam_unix(cron:session): session opened
Mar 12 08:15:22 server sshd[12400]: Accepted publickey for rahul from 192.168.1.50
Mar 12 08:15:22 server sshd[12400]: pam_unix(sshd:session): session opened
Mar 12 08:30:01 server CRON[12450]: pam_unix(cron:session): session opened
Mar 12 09:00:01 server CRON[12500]: pam_unix(cron:session): session opened
# Show first 100 bytes
$ head -c 100 binary_file
tail — Display the end of a file
Shows the last N lines. The -f flag is essential for monitoring log files in real time.
# Show last 20 lines
$ tail -n 20 /var/log/nginx/access.log
# Follow a log file in real time (Ctrl+C to stop)
$ tail -f /var/log/nginx/error.log
2026-03-12 14:22:01 [error] 1234#0: *5678 connect() failed (111: Connection refused)
2026-03-12 14:22:05 [error] 1234#0: *5679 upstream timed out (110: Connection timed out)
# Follow multiple files simultaneously
$ tail -f /var/log/nginx/access.log /var/log/nginx/error.log
Tip: tail -f is one of the most important commands for debugging production issues. I have it running in a terminal pane during every deployment.
touch — Create empty files or update timestamps
Creates a new empty file if it does not exist, or updates the modification timestamp if it does.
# Create a new file
$ touch newfile.txt
# Create multiple files at once
$ touch file1.txt file2.txt file3.txt
# Update the timestamp of an existing file
$ touch -m existing_file.txt
cp — Copy files and directories
Copies files or entire directory trees.
# Copy a file
$ cp config.yml config.yml.bak
# Copy a directory recursively
$ cp -r src/ src_backup/
# Copy preserving permissions and timestamps
$ cp -a /var/www/html /backup/html_backup
# Interactive mode (ask before overwriting)
$ cp -i important.conf /etc/
cp: overwrite '/etc/important.conf'? y
mv — Move or rename files
Moves files between directories or renames them. Unlike cp, the source is removed.
# Rename a file
$ mv old_name.txt new_name.txt
# Move a file to another directory
$ mv report.pdf /home/rahul/documents/
# Move multiple files
$ mv *.log /var/log/archive/
# Don't overwrite existing files
$ mv -n source.txt /destination/
rm — Remove files and directories
Deletes files permanently. There is no trash can. Be very careful.
# Remove a file
$ rm unwanted_file.txt
# Remove a directory and all its contents
$ rm -r old_project/
# Force remove without prompts
$ rm -rf build/
# Interactive mode (ask before each deletion)
$ rm -i *.tmp
rm: remove regular file 'cache.tmp'? y
rm: remove regular file 'session.tmp'? y
Gotcha: rm -rf / will destroy your entire system. Always double-check your path, especially when using variables in scripts. I once saw a script with rm -rf /$UNSET_VARIABLE that wiped a production server.
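One defensive pattern for scripts is bash's ${VAR:?} expansion, which aborts the script instead of silently expanding to an empty string. A minimal sketch (the path and variable name are hypothetical):

```shell
#!/bin/bash
set -u   # treat any reference to an unset variable as an error

BUILD_DIR="/tmp/myapp_build"   # hypothetical path for illustration
mkdir -p "$BUILD_DIR"

# ${VAR:?message} kills the script if BUILD_DIR is unset or empty,
# so this line can never expand to a bare "rm -rf /"
rm -rf "${BUILD_DIR:?BUILD_DIR is not set}/"
```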
mkdir — Create directories
Creates new directories. The -p flag creates parent directories as needed.
# Create a single directory
$ mkdir logs
# Create nested directories
$ mkdir -p project/src/components/ui
# Create with specific permissions
$ mkdir -m 755 public_html
rmdir — Remove empty directories
Only removes directories that are empty. Safer than rm -r.
$ rmdir empty_folder
$ rmdir non_empty_folder
rmdir: failed to remove 'non_empty_folder': Directory not empty
ln — Create links
Creates hard links or symbolic (soft) links. Symlinks are like shortcuts and are far more commonly used.
# Create a symbolic link
$ ln -s /var/log/nginx/access.log ~/nginx-access.log
# Create a hard link
$ ln original.txt hardlink.txt
# Verify the link
$ ls -l ~/nginx-access.log
lrwxrwxrwx 1 rahul rahul 29 Mar 12 14:00 nginx-access.log -> /var/log/nginx/access.log
Tip: Use symlinks to keep config files in a dotfiles repo while linking them to their expected locations.
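A minimal sketch of that pattern, using /tmp paths so it is safe to run as-is; in practice the repo would live somewhere like ~/dotfiles and the link target would be ~/.vimrc:

```shell
# Keep the real file in the repo directory
mkdir -p /tmp/dotfiles
printf 'set number\n' > /tmp/dotfiles/vimrc

# Link it to the location the program expects; -f replaces any existing link
ln -sf /tmp/dotfiles/vimrc /tmp/my_vimrc

readlink /tmp/my_vimrc
# /tmp/dotfiles/vimrc
```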
dd — Convert and copy files (disk duplicator)
A low-level copy tool that works with raw disk data. Used for creating disk images, bootable USBs, and benchmarking disk speed.
# Create a bootable USB drive
$ sudo dd if=ubuntu-24.04.iso of=/dev/sdb bs=4M status=progress
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 120 s, 17.9 MB/s
# Create a 1GB test file filled with zeros
$ dd if=/dev/zero of=testfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.5 s, 429 MB/s
# Benchmark disk write speed
$ dd if=/dev/zero of=/tmp/test bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 3.2 s, 335 MB/s
Gotcha: dd is sometimes called "disk destroyer." Specifying the wrong of= target can overwrite your entire operating system. Triple-check before pressing Enter.
truncate — Shrink or extend a file to a specified size
Sets a file to an exact size. I use this to quickly empty log files without deleting them.
# Empty a log file without removing it (keeps the file handle)
$ truncate -s 0 /var/log/app.log
# Create a 10GB sparse file for testing
$ truncate -s 10G test_file.img
$ ls -lh test_file.img
-rw-r--r-- 1 rahul rahul 10G Mar 12 14:30 test_file.img
split — Split a file into pieces
Breaks a large file into smaller parts. Useful for uploading or emailing large files.
# Split a file into 100MB chunks
$ split -b 100M large_backup.tar.gz backup_part_
$ ls backup_part_*
backup_part_aa backup_part_ab backup_part_ac backup_part_ad
# Rejoin the parts
$ cat backup_part_* > restored_backup.tar.gz
# Split by number of lines
$ split -l 10000 huge_log.txt log_chunk_
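A quick round trip shows the split/rejoin cycle is lossless; the file names below are arbitrary:

```shell
# Create a 1MB test file, split it, rejoin it, and compare byte for byte
head -c 1M /dev/urandom > original.bin
split -b 300K original.bin part_
cat part_* > restored.bin
cmp original.bin restored.bin && echo "identical"
# identical
```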
csplit — Context-based file splitting
Splits files based on patterns rather than fixed sizes. Handy for splitting log files by date or breaking up combined SQL dumps.
# Split a file at every line matching "Chapter"
$ csplit book.txt '/Chapter/' '{*}'
1245
3456
2890
$ ls
xx00 xx01 xx02 xx03
shred — Securely delete files
Overwrites a file multiple times before deleting it, making recovery extremely difficult.
# Overwrite and delete a sensitive file
$ shred -vzu -n 5 secret_keys.txt
shred: secret_keys.txt: pass 1/6 (random)...
shred: secret_keys.txt: pass 2/6 (random)...
shred: secret_keys.txt: pass 3/6 (random)...
shred: secret_keys.txt: pass 4/6 (random)...
shred: secret_keys.txt: pass 5/6 (random)...
shred: secret_keys.txt: pass 6/6 (000000)...
shred: secret_keys.txt: removing
install — Copy files and set attributes
Copies files while setting permissions and ownership in one step. Commonly used in Makefiles and build scripts.
# Install a script with execute permissions
$ install -m 755 build/myapp /usr/local/bin/myapp
# Install with specific owner and group
$ sudo install -o root -g root -m 644 config.conf /etc/myapp/
mktemp — Create temporary files or directories
Creates a uniquely named temporary file or directory. Essential for scripts that need scratch space.
$ mktemp
/tmp/tmp.aB3xY9zK
$ mktemp -d
/tmp/tmp.Qw7rP2mN
# Use in a script
$ TMPFILE=$(mktemp)
$ echo "processing..." > "$TMPFILE"
$ cat "$TMPFILE"
processing...
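In real scripts it is worth pairing mktemp with a trap so the scratch space is removed even when the script fails partway. A minimal sketch:

```shell
#!/bin/bash
# Scratch directory that cleans itself up on ANY exit, including errors
WORKDIR=$(mktemp -d)
trap 'rm -rf "$WORKDIR"' EXIT

echo "step 1 complete" > "$WORKDIR/progress"
cat "$WORKDIR/progress"
# When the script exits, the trap removes $WORKDIR automatically
```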
tee — Read from stdin and write to file and stdout
Splits output so you can see it on screen and save it to a file simultaneously.
# Save command output to a file while still displaying it
$ ls -la | tee directory_listing.txt
total 156K
drwxr-xr-x 7 rahul rahul 4.0K Mar 12 14:22 .
drwxr-xr-x 12 rahul rahul 4.0K Mar 1 10:00 ..
-rw-r--r-- 1 rahul rahul 1.2K Mar 10 08:30 package.json
# Append to a file instead of overwriting
$ echo "new entry" | tee -a log.txt
# Write to a file that requires sudo
$ echo "127.0.0.1 myapp.local" | sudo tee -a /etc/hosts
Tip: The last example is a pattern I use constantly. You cannot do sudo echo "text" > /etc/protected_file because the redirect runs as your user, not root. Piping through sudo tee solves this.
File permissions and ownership
Linux security is built on file permissions. Understanding these commands is non-negotiable for anyone managing a server.
chmod — Change file permissions
Sets who can read, write, and execute a file. You can use numeric mode (octal) or symbolic mode.
# Make a script executable
$ chmod +x deploy.sh
# Set specific permissions (owner: rwx, group: rx, others: rx)
$ chmod 755 deploy.sh
# Remove write permission for group and others
$ chmod go-w config.yml
# Set permissions on every HTML file in a directory
$ chmod 644 /var/www/html/*.html
# Common permission patterns:
# 755 - executables and directories
# 644 - regular files
# 600 - sensitive files (SSH keys, .env)
# 700 - private directories
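Numeric and symbolic modes are interchangeable. A quick sketch, using GNU stat to confirm both spellings set the same bits:

```shell
touch demo.sh

# Numeric: owner rwx, group r-x, others r-x
chmod 755 demo.sh
stat -c '%a' demo.sh
# 755

# Symbolic: the same permissions spelled out
chmod u=rwx,g=rx,o=rx demo.sh
stat -c '%a' demo.sh
# 755
```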
chown — Change file owner and group
Changes who owns a file. Requires root privileges to change ownership to another user.
# Change owner
$ sudo chown www-data index.html
# Change owner and group
$ sudo chown www-data:www-data /var/www/html -R
# Change only the group
$ sudo chown :developers project/
chgrp — Change group ownership
Changes only the group of a file. Similar to chown :group but more explicit.
$ sudo chgrp docker /var/run/docker.sock
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Mar 12 08:00 /var/run/docker.sock
umask — Set default permissions
Controls the default permissions for newly created files and directories.
# Show current umask
$ umask
0022
# This means new files get 644 (666 - 022) and new directories get 755 (777 - 022)
# Set a more restrictive umask
$ umask 077
# Now new files get 600 and directories get 700
$ touch private_file.txt
$ ls -l private_file.txt
-rw------- 1 rahul rahul 0 Mar 12 15:00 private_file.txt
getfacl — Get file access control lists
Shows the extended ACL permissions for a file, which go beyond the basic owner/group/other model.
$ getfacl /var/www/html
# file: var/www/html
# owner: www-data
# group: www-data
user::rwx
user:rahul:rwx
group::r-x
mask::rwx
other::r-x
setfacl — Set file access control lists
Sets extended ACL permissions. Lets you give specific users or groups access without changing the base ownership.
# Give a specific user read/write access
$ setfacl -m u:deploy_user:rw /var/www/html/config.php
# Give a group execute access
$ setfacl -m g:developers:rx /opt/scripts/deploy.sh
# Remove ACL for a user
$ setfacl -x u:old_user /var/www/html/config.php
lsattr — List file attributes
Shows special extended attributes on files, like immutability flags.
$ lsattr /etc/resolv.conf
----i--------e-- /etc/resolv.conf
# The 'i' flag means the file is immutable (cannot be modified even by root)
chattr — Change file attributes
Sets special attributes on files. The immutable flag is particularly useful for protecting critical config files.
# Make a file immutable (even root cannot modify or delete it)
$ sudo chattr +i /etc/resolv.conf
# Remove the immutable flag
$ sudo chattr -i /etc/resolv.conf
# Make a file append-only (useful for log files)
$ sudo chattr +a /var/log/audit.log
Text processing and search
These commands are the bread and butter of data manipulation on the command line. Master grep, sed, and awk and you can transform any text data.
grep — Search text with patterns
Searches for patterns in files or input. Probably the command I use the most after ls and cd.
# Search for a string in a file
$ grep "error" /var/log/syslog
Mar 12 14:22:01 server nginx[1234]: error connecting to upstream
Mar 12 14:22:05 server app[5678]: database connection error
# Case-insensitive search
$ grep -i "warning" /var/log/syslog
# Recursive search across all files in a directory
$ grep -r "TODO" ./src/
./src/index.js:// TODO: add error handling
./src/utils.js:// TODO: refactor this function
# Show line numbers
$ grep -n "function" app.js
12:function handleRequest(req, res) {
45:function validateInput(data) {
89:function sendResponse(res, data) {
# Count matches
$ grep -c "404" /var/log/nginx/access.log
1247
# Show only filenames that contain the match
$ grep -rl "API_KEY" ./config/
./config/production.yml
./config/staging.yml
# Invert match (show lines that DON'T match)
$ grep -v "^#" /etc/nginx/nginx.conf
# Use regex patterns
$ grep -E "^[0-9]{1,3}\.[0-9]{1,3}" access.log
192.168.1.50 - - [12/Mar/2026:14:22:01 +0000] "GET / HTTP/1.1" 200
10.0.0.15 - - [12/Mar/2026:14:22:03 +0000] "POST /api HTTP/1.1" 201
Tip: Use grep -r --include="*.py" to search only Python files recursively. This is much faster than piping through find.
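A sketch of that flag in action; the directory tree below is made up for the demo:

```shell
# Build a tiny tree with one Python file and one text file
mkdir -p demo/src
echo "import requests" > demo/src/client.py
echo "import requests" > demo/notes.txt

# Only the .py file is searched
grep -rn --include="*.py" "import requests" demo/
# demo/src/client.py:1:import requests
```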
sed — Stream editor
Transforms text using patterns. I use sed for find-and-replace operations, especially in scripts and CI/CD pipelines.
# Replace first occurrence on each line
$ sed 's/http/https/' urls.txt
https://example.com
https://api.example.com
# Replace ALL occurrences on each line
$ sed 's/old/new/g' config.txt
# Edit a file in-place
$ sed -i 's/localhost/production.db.example.com/g' config.yml
# Delete lines matching a pattern
$ sed '/^#/d' config.conf
# Removes all comment lines
# Delete blank lines
$ sed '/^$/d' messy_file.txt
# Print only lines 10 to 20
$ sed -n '10,20p' largefile.txt
# Insert text before a specific line
$ sed '3i\new line of text' file.txt
awk — Pattern scanning and processing
A full programming language for text processing. More powerful than sed for structured data like CSV files and log files.
# Print specific columns (space-separated)
$ awk '{print $1, $4}' /var/log/nginx/access.log
192.168.1.50 [12/Mar/2026:14:22:01
10.0.0.15 [12/Mar/2026:14:22:03
# Print with a custom delimiter
$ awk -F: '{print $1, $3}' /etc/passwd
root 0
daemon 1
rahul 1000
# Sum a column of numbers
$ awk '{sum += $5} END {print "Total:", sum}' sales.txt
Total: 45230
# Filter rows by condition
$ awk '$3 > 1000 {print $1, $3}' data.txt
server-02 2048
server-05 4096
# Print lines longer than 80 characters
$ awk 'length > 80' code.py
cut — Extract sections from lines
Cuts out specific columns or character positions from each line. Faster than awk for simple column extraction.
# Extract the first field (colon-delimited)
$ cut -d: -f1 /etc/passwd
root
daemon
bin
rahul
# Extract multiple fields
$ cut -d, -f1,3 data.csv
name,salary
Alice,75000
Bob,82000
# Extract characters 1 through 10
$ cut -c1-10 long_lines.txt
sort — Sort lines of text
Sorts input lines alphabetically, numerically, or by specific fields.
# Alphabetical sort
$ sort names.txt
Alice
Bob
Charlie
Dave
# Numeric sort
$ sort -n numbers.txt
3
15
42
100
# Sort by the third column
$ sort -t, -k3 -n employees.csv
# Reverse sort
$ sort -r names.txt
# Sort and remove duplicates
$ sort -u access.log
# Sort by file size (human-readable)
$ du -sh /var/log/* | sort -rh
1.2G /var/log/journal
450M /var/log/syslog
120M /var/log/nginx
uniq — Report or omit repeated lines
Filters out or counts duplicate lines. It only compares adjacent lines, so sort the input first.
# Remove duplicate lines (input must be sorted)
$ sort access.log | uniq
# Count occurrences of each line
$ sort ip_list.txt | uniq -c | sort -rn
847 192.168.1.50
523 10.0.0.15
234 172.16.0.100
# Show only duplicate lines
$ sort names.txt | uniq -d
Alice
# Show only unique lines (appear exactly once)
$ sort names.txt | uniq -u
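The adjacency rule is the classic uniq trap; a quick sketch of why sorting first matters:

```shell
# uniq only collapses ADJACENT duplicates
printf 'alice\nbob\nalice\n' | uniq
# alice
# bob
# alice   <- survives: it was not next to the first "alice"

# Sorting first makes every duplicate adjacent
printf 'alice\nbob\nalice\n' | sort | uniq
# alice
# bob
```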
wc — Word, line, and byte count
Counts lines, words, and characters.
$ wc /etc/passwd
42 68 2245 /etc/passwd
# 42 lines, 68 words, 2245 bytes
# Count only lines
$ wc -l /var/log/nginx/access.log
158432 /var/log/nginx/access.log
# Count files in a directory
$ ls -1 | wc -l
27
tr — Translate or delete characters
Replaces or removes specific characters. Useful for quick text transformations.
# Convert to uppercase
$ echo "hello world" | tr 'a-z' 'A-Z'
HELLO WORLD
# Replace spaces with newlines
$ echo "one two three" | tr ' ' '\n'
one
two
three
# Delete specific characters
$ echo "phone: (555) 123-4567" | tr -d '()-'
phone: 555 1234567
# Squeeze repeated characters
$ echo "too    many   spaces" | tr -s ' '
too many spaces
diff — Compare files line by line
Shows the differences between two files. Essential for code review and configuration management.
$ diff old_config.yml new_config.yml
3c3
< database_host: localhost
---
> database_host: db.production.example.com
7a8,9
> cache_ttl: 3600
> cache_backend: redis
# Unified diff format (more readable, used by git)
$ diff -u old_config.yml new_config.yml
--- old_config.yml 2026-03-10 08:30:00
+++ new_config.yml 2026-03-12 14:22:00
@@ -1,7 +1,9 @@
app_name: myapp
port: 8080
-database_host: localhost
+database_host: db.production.example.com
database_port: 5432
log_level: info
+cache_ttl: 3600
+cache_backend: redis
# Compare directories
$ diff -r dir1/ dir2/
comm — Compare two sorted files
Shows lines that are unique to each file and lines they share.
$ comm file1.txt file2.txt
        alice
bob
    charlie
        dave
    eve
# Column 1: only in file1, Column 2: only in file2, Column 3: in both
# Show only lines common to both files
$ comm -12 file1.txt file2.txt
alice
dave
paste — Merge lines of files
Joins corresponding lines from multiple files side by side.
$ paste names.txt ages.txt
Alice 30
Bob 25
Charlie 35
# Use a custom delimiter
$ paste -d, names.txt ages.txt
Alice,30
Bob,25
Charlie,35
# Join all lines into one line
$ paste -s -d, names.txt
Alice,Bob,Charlie
join — Join files on a common field
Like a SQL JOIN but for text files. Both files must be sorted on the join field.
$ cat employees.txt
1001 Alice
1002 Bob
1003 Charlie
$ cat salaries.txt
1001 75000
1002 82000
1003 68000
$ join employees.txt salaries.txt
1001 Alice 75000
1002 Bob 82000
1003 Charlie 68000
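When the inputs are not already sorted, bash process substitution sorts them on the fly; the file names below are hypothetical:

```shell
# Unsorted inputs
printf '1003 Charlie\n1001 Alice\n' > employees_unsorted.txt
printf '1001 75000\n1003 68000\n'   > salaries_unsorted.txt

# Sort both streams inline, then join on the first field
join <(sort employees_unsorted.txt) <(sort salaries_unsorted.txt)
# 1001 Alice 75000
# 1003 Charlie 68000
```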
fmt — Reformat paragraph text
Reformats text to a specified line width. Useful for cleaning up text files.
$ fmt -w 60 long_paragraph.txt
This is a long paragraph that has been
reformatted to fit within sixty characters
per line, making it much easier to read.
fold — Wrap lines to a specified width
Similar to fmt but works at the character level, not word level.
$ fold -w 40 long_lines.txt
This is a very long line that will be w
rapped at exactly forty characters each
# Wrap at word boundaries
$ fold -s -w 40 long_lines.txt
This is a very long line that will be
wrapped at word boundaries within
forty characters.
column — Format output into columns
Formats input into aligned columns. Makes messy output readable.
$ cat data.txt
Name Age City
Alice 30 NYC
Bob 25 LA
Charlie 35 Chicago
$ column -t data.txt
Name     Age  City
Alice    30   NYC
Bob      25   LA
Charlie  35   Chicago
# Use a custom delimiter
$ column -t -s, data.csv
nl — Number lines
Adds line numbers to output. More flexible than cat -n.
$ nl script.sh
1 #!/bin/bash
2 echo "Starting..."
3 ./process_data.sh
4 echo "Done."
# By default, nl numbers only non-empty lines
# Use -ba to number ALL lines, including blank ones
$ nl -ba script.sh
expand — Convert tabs to spaces
Replaces tab characters with spaces.
$ expand -t 4 tabbed_file.txt > spaces_file.txt
unexpand — Convert spaces to tabs
The opposite of expand. Replaces spaces with tab characters.
$ unexpand -t 4 --first-only spaced_file.txt > tabbed_file.txt
rev — Reverse lines character by character
Reverses each line of input. Surprisingly useful in pipelines.
$ echo "hello world" | rev
dlrow olleh
# Extract file extensions using rev + cut + rev
$ echo "/path/to/file.tar.gz" | rev | cut -d. -f1-2 | rev
tar.gz
tac — Concatenate and print in reverse
Like cat but prints lines in reverse order (last line first). The name is literally cat backwards.
$ tac /var/log/syslog | head -5
Mar 12 15:00:01 server CRON[13000]: session closed
Mar 12 14:45:22 server nginx[1234]: request completed
Mar 12 14:30:01 server CRON[12900]: session opened
Mar 12 14:22:05 server app[5678]: database query completed
Mar 12 14:22:01 server app[5678]: processing request
Compression and archiving
Knowing how to compress and extract files efficiently saves time and bandwidth. These commands come up in almost every deployment workflow.
tar — Tape archive
The universal archiving tool. Combines multiple files into one archive and optionally compresses them.
# Create a gzip-compressed archive
$ tar -czf backup.tar.gz /home/rahul/projects/
tar: Removing leading '/' from member names
# Create a bzip2-compressed archive (better compression, slower)
$ tar -cjf backup.tar.bz2 /home/rahul/projects/
# Extract a gzip archive
$ tar -xzf backup.tar.gz
$ tar -xzf backup.tar.gz -C /opt/restore/
# List contents without extracting
$ tar -tzf backup.tar.gz
home/rahul/projects/
home/rahul/projects/webapp/
home/rahul/projects/webapp/package.json
home/rahul/projects/webapp/src/
# Extract a single file from an archive
$ tar -xzf backup.tar.gz home/rahul/projects/webapp/package.json
Tip: Remember the flags: c to create, x to extract, t for table of contents, z for gzip, j for bzip2, f for file.
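A common deployment variant is excluding heavy directories at archive time. A sketch with a throwaway tree:

```shell
# Throwaway project tree
mkdir -p app/src app/node_modules
echo "console.log('hi')" > app/src/index.js
echo "junk" > app/node_modules/junk.txt

# Archive without node_modules, then restore elsewhere
tar -czf app.tar.gz --exclude='node_modules' app/
mkdir -p restore && tar -xzf app.tar.gz -C restore/

ls restore/app
# src
```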
gzip — Compress files
Compresses a single file using the gzip algorithm. Replaces the original file with a .gz version.
$ gzip access.log
$ ls -lh access.log.gz
-rw-r--r-- 1 rahul rahul 12M Mar 12 15:00 access.log.gz
# Keep the original file
$ gzip -k access.log
# Set compression level (1=fastest, 9=best compression)
$ gzip -9 large_file.txt
gunzip — Decompress gzip files
Decompresses .gz files. Equivalent to gzip -d.
$ gunzip access.log.gz
$ ls access.log
access.log
bzip2 — Compress files (better ratio)
Better compression than gzip but slower. Good for archival storage where compression ratio matters more than speed.
$ bzip2 large_dataset.csv
$ ls -lh large_dataset.csv.bz2
-rw-r--r-- 1 rahul rahul 45M Mar 12 15:05 large_dataset.csv.bz2
# Decompress
$ bzip2 -d large_dataset.csv.bz2
xz — Compress files (best ratio)
The best compression ratio of the standard tools but the slowest. Used for distributing source code and large archives.
$ xz -9 kernel-source.tar
$ ls -lh kernel-source.tar.xz
-rw-r--r-- 1 rahul rahul 78M Mar 12 15:10 kernel-source.tar.xz
# Decompress
$ xz -d kernel-source.tar.xz
zip — Create ZIP archives
Creates ZIP files compatible with Windows. Use this when you need to share archives with non-Linux users.
# Create a ZIP archive
$ zip -r project.zip project/
adding: project/src/ (stored 0%)
adding: project/src/index.js (deflated 62%)
adding: project/package.json (deflated 45%)
# Add a password
$ zip -e sensitive_data.zip secret_files/
Enter password:
Verify password:
unzip — Extract ZIP archives
Extracts ZIP files.
# Extract to current directory
$ unzip project.zip
# Extract to a specific directory
$ unzip project.zip -d /opt/projects/
# List contents without extracting
$ unzip -l project.zip
Archive: project.zip
Length Date Time Name
--------- ---------- ----- ----
0 2026-03-12 15:00 project/
1245 2026-03-10 08:30 project/package.json
3456 2026-03-12 14:20 project/src/index.js
zcat — View compressed files without extracting
Displays the contents of a gzipped file without decompressing it to disk.
$ zcat /var/log/syslog.2.gz | head -5
Mar 08 00:00:01 server CRON[10000]: session opened
Mar 08 00:15:22 server sshd[10050]: connection from 192.168.1.50
Mar 08 00:30:01 server CRON[10100]: session opened
Mar 08 01:00:01 server CRON[10200]: session opened
Mar 08 01:15:05 server nginx[1234]: request completed
zgrep — Search inside compressed files
Searches for patterns inside gzipped files without extracting them first. This has saved me hours when searching through rotated log files.
$ zgrep "error" /var/log/syslog.*.gz
/var/log/syslog.2.gz:Mar 08 14:22:01 server app[5678]: database connection error
/var/log/syslog.3.gz:Mar 05 09:15:30 server nginx[1234]: upstream error
7z — 7-Zip archive tool
Supports many archive formats and typically offers better compression than ZIP. Needs to be installed separately (usually via the p7zip package).
# Create a 7z archive
$ 7z a archive.7z folder/
Creating archive: archive.7z
Compressing folder/file1.txt
Compressing folder/file2.txt
Everything is Ok
# Extract a 7z archive
$ 7z x archive.7z
# List contents
$ 7z l archive.7z
Process management
Understanding how to monitor, control, and kill processes is critical for any server administrator. These commands keep your system running smoothly.
ps — Process status
Shows running processes. The options differ between BSD and POSIX style, which can be confusing.
# Show all processes (BSD style)
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 169284 13292 ? Ss Mar10 0:12 /sbin/init
root 2 0.0 0.0 0 0 ? S Mar10 0:00 [kthreadd]
rahul 15234 1.2 2.4 985432 198432 ? Sl 14:00 0:45 node server.js
www-data 15300 0.3 0.8 432156 65432 ? S 14:00 0:12 nginx: worker
# Show process tree
$ ps auxf
# Show processes for a specific user
$ ps -u rahul
PID TTY TIME CMD
15234 ? 00:00:45 node
15301 pts/0 00:00:00 bash
15400 pts/0 00:00:00 ps
# Find a specific process
$ ps aux | grep nginx
www-data 15300 0.3 0.8 432156 65432 ? S 14:00 0:12 nginx: worker
top — Real-time process monitor
Interactive display of running processes sorted by CPU or memory usage. The go-to tool for diagnosing performance issues.
$ top
top - 15:30:22 up 2 days, 7:30, 2 users, load average: 0.45, 0.52, 0.48
Tasks: 234 total, 1 running, 233 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.2 us, 1.1 sy, 0.0 ni, 95.2 id, 0.3 wa, 0.0 hi, 0.2 si
MiB Mem : 7953.5 total, 1234.2 free, 3456.7 used, 3262.6 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 4156.8 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15234 rahul 20 0 985432 198432 12456 S 4.3 2.4 0:45.12 node
15300 www-data 20 0 432156 65432 8234 S 1.2 0.8 0:12.34 nginx
# Inside top:
# P - sort by CPU
# M - sort by memory
# k - kill a process
# q - quit
htop — Interactive process viewer
A much better version of top with color, mouse support, and easier navigation. Install it if it is not already on your system.
$ htop
# Interactive display with color-coded CPU bars
# F5 - tree view
# F6 - choose sort column
# F9 - kill process
# F10 - quit
kill — Send signals to processes
Sends a signal to a process. The default signal (SIGTERM) asks the process to exit gracefully. SIGKILL forces it.
# Graceful shutdown (SIGTERM)
$ kill 15234
# Force kill (SIGKILL) - use when SIGTERM doesn't work
$ kill -9 15234
# Send SIGHUP to reload configuration
$ kill -HUP 15300
# List all available signals
$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL
5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE
9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2
13) SIGPIPE 14) SIGALRM 15) SIGTERM
Tip: Always try kill (SIGTERM) first. Only use kill -9 (SIGKILL) as a last resort. SIGKILL does not let the process clean up, which can corrupt data.
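That escalation pattern is easy to script. Here is a minimal sketch, assuming bash; the stop_gracefully helper and its 10-second default are my own invention, not a standard utility:

```shell
#!/usr/bin/env bash
# Ask nicely with SIGTERM, poll until the process exits, then
# fall back to SIGKILL after a timeout. stop_gracefully is a
# hypothetical helper, not a standard command.
stop_gracefully() {
    local pid=$1 timeout=${2:-10}
    kill "$pid" 2>/dev/null || return 0        # SIGTERM; already gone is fine
    local i
    for ((i = 0; i < timeout; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0 # kill -0 only checks existence
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null                 # last resort
}

sleep 300 &
stop_gracefully $!
```

Note that kill -0 sends no signal at all; it only tests whether the PID still exists, which makes it a cheap way to poll for exit.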
killall — Kill processes by name
Kills all processes with a given name. Faster than finding PIDs manually.
# Kill all Firefox processes
$ killall firefox
# Kill all Node.js processes
$ killall node
# Ask for confirmation
$ killall -i python3
Kill python3(15500)? (y/N) y
Kill python3(15501)? (y/N) n
pkill — Kill processes by pattern
Like killall but matches processes by regular expression. With -f it matches against the full command line rather than just the process name, which makes it more flexible.
# Kill all processes matching a pattern
$ pkill -f "node server.js"
# Kill processes owned by a specific user
$ pkill -u deploy_user
pgrep — Find processes by pattern
Returns the PIDs of processes matching a pattern. Useful in scripts.
$ pgrep nginx
15298
15299
15300
$ pgrep -a nginx
15298 nginx: master process
15299 nginx: worker process
15300 nginx: worker process
# Check if a process is running (useful in scripts)
$ pgrep -x nginx && echo "nginx is running" || echo "nginx is NOT running"
nginx is running
nice — Run a command with modified priority
Starts a process with a specific scheduling priority. Lower niceness values mean higher priority. Range is -20 (highest) to 19 (lowest).
# Run a backup with low priority so it doesn't slow down the server
$ nice -n 19 tar -czf /backup/full_backup.tar.gz /data
# Run with higher priority (requires root)
$ sudo nice -n -10 ./critical_task.sh
renice — Change priority of a running process
Adjusts the priority of an already running process.
# Lower the priority of a heavy process
$ renice 15 -p 15234
15234 (process ID) old priority 0, new priority 15
# Change priority for all processes of a user
$ sudo renice 10 -u rahul
nohup — Run a command immune to hangups
Keeps a process running even after you close your terminal or SSH session. Output is appended to nohup.out by default.
$ nohup python3 long_running_script.py &
[1] 16000
nohup: ignoring input and appending output to 'nohup.out'
$ nohup ./data_import.sh > import.log 2>&1 &
[1] 16100
bg — Resume a job in the background
Resumes a suspended (Ctrl+Z) job in the background.
# Press Ctrl+Z to suspend a running process
$ python3 train_model.py
^Z
[1]+ Stopped python3 train_model.py
# Resume it in the background
$ bg
[1]+ python3 train_model.py &
fg — Bring a background job to the foreground
Brings a background job back to the foreground.
$ fg %1
python3 train_model.py
# The process is now running in the foreground again
jobs — List background jobs
Shows all jobs running in the current shell session.
$ jobs
[1]+ Running python3 train_model.py &
[2]- Running tail -f /var/log/syslog &
[3] Stopped vim config.yml
wait — Wait for background processes to finish
Pauses the script until background jobs complete. Essential for shell scripts that launch parallel tasks.
$ process_a.sh &
$ process_b.sh &
$ process_c.sh &
$ wait
# Script continues only after all three background jobs finish
$ echo "All processes complete"
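wait can also take specific PIDs and returns each job's exit status, which is how I would collect failures from parallel tasks in a script. A sketch, with sleep and false standing in for real jobs:

```shell
#!/usr/bin/env bash
# Launch jobs in parallel, then wait on each PID to collect exit codes.
# The sleep/false commands are placeholders for real tasks.
pids=()
sleep 1 & pids+=($!)
sleep 1 & pids+=($!)
false  & pids+=($!)

failures=0
for pid in "${pids[@]}"; do
    wait "$pid" || failures=$((failures + 1))
done
echo "$failures job(s) failed"
# prints: 1 job(s) failed
```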
watch — Execute a command repeatedly
Runs a command at regular intervals and displays the output. Perfect for monitoring changing values.
# Watch disk usage every 2 seconds
$ watch df -h
# Watch every 5 seconds
$ watch -n 5 "kubectl get pods"
# Highlight differences between updates
$ watch -d free -m
# Watch container logs
$ watch -n 1 "docker ps --format 'table {{.Names}}\t{{.Status}}'"
User and group management
These commands manage who can access the system and what they can do. Critical for multi-user systems and server administration.
whoami — Print current username
Shows your current effective username.
$ whoami
rahul
$ sudo whoami
root
id — Print user and group IDs
Shows your user ID, group ID, and all groups you belong to.
$ id
uid=1000(rahul) gid=1000(rahul) groups=1000(rahul),4(adm),27(sudo),999(docker)
$ id www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data)
who — Show who is logged in
Displays currently logged-in users.
$ who
rahul pts/0 2026-03-12 14:00 (192.168.1.50)
deploy pts/1 2026-03-12 15:00 (10.0.0.100)
w — Show who is logged in and what they are doing
Like who but also shows system load and what each user is running.
$ w
15:30:22 up 2 days, 7:30, 2 users, load average: 0.45, 0.52, 0.48
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
rahul pts/0 192.168.1.50 14:00 0.00s 0.50s 0.00s w
deploy pts/1 10.0.0.100 15:00 5:22 0.10s 0.05s vim config.yml
useradd — Create a new user
Adds a new user account. Use adduser on Debian/Ubuntu for a more interactive experience.
# Create a user with a home directory and default shell
$ sudo useradd -m -s /bin/bash newuser
# Create a user with a specific UID and groups
$ sudo useradd -m -u 1500 -G sudo,docker deploy_user
usermod — Modify a user account
Changes properties of an existing user.
# Add a user to additional groups
$ sudo usermod -aG docker rahul
$ sudo usermod -aG sudo newuser
# Change the default shell
$ sudo usermod -s /bin/zsh rahul
# Lock a user account
$ sudo usermod -L suspect_user
Gotcha: Always use -aG (append to groups) not just -G. Without the -a flag, usermod removes the user from all groups not listed.
userdel — Delete a user account
Removes a user account.
# Delete the user
$ sudo userdel olduser
# Delete the user and their home directory
$ sudo userdel -r olduser
groupadd — Create a new group
Creates a new group.
$ sudo groupadd developers
$ sudo groupadd -g 1500 devops
passwd — Change user password
Sets or changes a user's password.
# Change your own password
$ passwd
Changing password for rahul.
Current password:
New password:
Retype new password:
passwd: password updated successfully
# Change another user's password (as root)
$ sudo passwd newuser
# Force user to change password at next login
$ sudo passwd -e newuser
su — Switch user
Switches to another user account.
# Switch to root
$ su -
Password:
root@server:~#
# Switch to another user
$ su - deploy_user
deploy_user@server:~$
sudo — Execute as superuser
Runs a single command as root (or another user) without switching accounts. The most common way to perform administrative tasks.
# Run a command as root
$ sudo apt update
# Run a command as another user
$ sudo -u www-data php artisan migrate
# Edit a file as root
$ sudo nano /etc/nginx/nginx.conf
# Open a root shell
$ sudo -i
root@server:~#
# Check what sudo privileges you have
$ sudo -l
User rahul may run the following commands:
(ALL : ALL) ALL
last — Show last user logins
Displays recent login history. Useful for auditing access.
$ last -n 10
rahul    pts/0        192.168.1.50     Thu Mar 12 14:00   still logged in
deploy   pts/1        10.0.0.100       Thu Mar 12 15:00   still logged in
rahul    pts/0        192.168.1.50     Wed Mar 11 09:00 - 18:30  (09:30)
root     tty1                          Tue Mar 10 08:00 - 08:15  (00:15)
finger — User information lookup
Displays information about a user. Not installed by default on most modern distributions.
$ finger rahul
Login: rahul Name: Rahul Kumar
Directory: /home/rahul Shell: /bin/bash
On since Thu Mar 12 14:00 (UTC) on pts/0 from 192.168.1.50
No mail.
No Plan.
Networking
Networking commands are essential for debugging connectivity issues, transferring files, and managing network interfaces. I use these every time a deployment fails or a service is unreachable.
ping — Test network connectivity
The first command I run when something is not reachable. Sends ICMP echo requests to a host.
$ ping -c 4 google.com
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lax17s62-in-f14.1e100.net: icmp_seq=1 ttl=118 time=2.34 ms
64 bytes from lax17s62-in-f14.1e100.net: icmp_seq=2 ttl=118 time=2.21 ms
64 bytes from lax17s62-in-f14.1e100.net: icmp_seq=3 ttl=118 time=2.18 ms
64 bytes from lax17s62-in-f14.1e100.net: icmp_seq=4 ttl=118 time=2.25 ms
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 2.180/2.245/2.340/0.059 ms
curl — Transfer data from or to a server
The Swiss Army knife of HTTP requests. I use curl for testing APIs, downloading files, and debugging web services.
# Simple GET request
$ curl https://api.example.com/health
{"status":"ok","uptime":"48h"}
# POST JSON data
$ curl -X POST https://api.example.com/users \
-H "Content-Type: application/json" \
-d '{"name":"Alice","email":"alice@example.com"}'
{"id":42,"name":"Alice","email":"alice@example.com"}
# Download a file
$ curl -O https://example.com/archive.tar.gz
# Follow redirects and show headers
$ curl -LI https://example.com
HTTP/2 301
location: https://www.example.com/
HTTP/2 200
content-type: text/html; charset=utf-8
# Send with authentication
$ curl -u admin:password https://api.example.com/admin/stats
# Show request and response headers (verbose)
$ curl -v https://api.example.com/health
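curl can retry transient failures on its own (curl --retry 5 --retry-delay 2), but I often want the same behavior for arbitrary commands in deploy scripts. A generic sketch; the retry function and its argument convention are my own:

```shell
#!/usr/bin/env bash
# Retry a command up to N times with a fixed delay between attempts.
# Hypothetical helper; for plain HTTP checks, curl's --retry flag
# covers the simple cases.
retry() {
    local attempts=$1 delay=$2
    shift 2
    local i
    for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0
        (( i < attempts )) && sleep "$delay"
    done
    return 1
}

# Example: poll a health endpoint until it responds (URL is illustrative)
# retry 5 2 curl -fsS https://api.example.com/health
```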
wget — Download files from the web
A dedicated download tool. Better suited than curl for plain downloads because retries, resumable transfers, and recursive mirroring are built in by default.
# Download a file
$ wget https://example.com/dataset.csv
--2026-03-12 15:00:00-- https://example.com/dataset.csv
Resolving example.com... 93.184.216.34
Connecting to example.com|93.184.216.34|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15728640 (15M) [text/csv]
Saving to: 'dataset.csv'
dataset.csv 100%[==================>] 15.00M 12.5MB/s in 1.2s
# Resume a partially downloaded file
$ wget -c https://example.com/large_file.iso
# Mirror an entire website
$ wget --mirror --convert-links --page-requisites https://docs.example.com
ssh — Secure Shell
Connects to remote servers securely. The backbone of remote server administration.
# Connect to a remote server
$ ssh rahul@192.168.1.100
# Connect on a custom port
$ ssh -p 2222 rahul@server.example.com
# Run a command on a remote server without opening a shell
$ ssh rahul@server.example.com "df -h && free -m"
# SSH tunnel (forward local port 8080 to remote port 80)
$ ssh -L 8080:localhost:80 rahul@server.example.com
# Use a specific SSH key
$ ssh -i ~/.ssh/deploy_key deploy@production.example.com
scp — Secure copy
Copies files between hosts over SSH. Simple and reliable.
# Copy a file to a remote server
$ scp backup.tar.gz rahul@server.example.com:/home/rahul/backups/
backup.tar.gz 100% 150MB 12.5MB/s 00:12
# Copy a file from a remote server
$ scp rahul@server.example.com:/var/log/app.log ./
# Copy a directory recursively
$ scp -r ./project/ rahul@server.example.com:/opt/deploy/
rsync — Remote sync
The best tool for syncing files. Only transfers differences, saving bandwidth and time. I use rsync for all backups and deployments.
# Sync a local directory to a remote server
$ rsync -avz --progress ./dist/ rahul@server.example.com:/var/www/html/
sending incremental file list
index.html
2,456 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=14/16)
assets/app.js
45,678 100% 43.56MB/s 0:00:00 (xfr#2, to-chk=12/16)
sent 48,523 bytes received 456 bytes 19,591.60 bytes/sec
total size is 234,567 speedup is 4.79
# Sync with delete (remove files on destination that don't exist on source)
$ rsync -avz --delete ./dist/ rahul@server.example.com:/var/www/html/
# Dry run (see what would change without actually changing anything)
$ rsync -avzn ./dist/ rahul@server.example.com:/var/www/html/
# Exclude patterns
$ rsync -avz --exclude='node_modules' --exclude='.git' ./ remote:/opt/app/
Tip: Always do a dry run with -n before running rsync --delete. I have seen people accidentally delete production files because they had the source and destination reversed.
netstat — Network statistics
Shows network connections, routing tables, and interface statistics. Being replaced by ss on modern systems but still widely used.
# Show all listening ports
$ netstat -tlnp
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1234/sshd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 5678/nginx
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 5678/nginx
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 9012/postgres
# Show all active connections
$ netstat -an | grep ESTABLISHED
tcp 0 0 192.168.1.100:22 192.168.1.50:54321 ESTABLISHED
ss — Socket statistics
The modern replacement for netstat. Faster and more informative.
# Show all listening TCP ports
$ ss -tlnp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1234,fd=3))
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=5678,fd=6))
LISTEN 0 128 127.0.0.1:5432 0.0.0.0:* users:(("postgres",pid=9012,fd=5))
# Show all connections to port 443
$ ss -tn dst :443
# Show socket memory usage
$ ss -tm
ip — Show and manipulate network interfaces
The modern replacement for ifconfig. Handles IP addresses, routes, and network interfaces.
# Show all network interfaces and their IP addresses
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0
# Show routing table
$ ip route show
default via 192.168.1.1 dev eth0 proto dhcp metric 100
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.100
# Add an IP address to an interface
$ sudo ip addr add 192.168.1.200/24 dev eth0
# Bring an interface up or down
$ sudo ip link set eth0 down
$ sudo ip link set eth0 up
ifconfig — Configure network interfaces
The classic tool for viewing and configuring network interfaces. Deprecated in favor of ip but still available on most systems.
$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::1 prefixlen 64 scopeid 0x20<link>
ether 00:11:22:33:44:55 txqueuelen 1000 (Ethernet)
RX packets 1234567 bytes 987654321 (987.6 MB)
TX packets 654321 bytes 123456789 (123.4 MB)
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
dig — DNS lookup
Queries DNS servers for DNS records. The best tool for debugging DNS issues.
$ dig example.com
;; ANSWER SECTION:
example.com. 3600 IN A 93.184.216.34
;; Query time: 12 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
# Query specific record types
$ dig example.com MX
;; ANSWER SECTION:
example.com. 3600 IN MX 10 mail.example.com.
# Short answer only
$ dig +short example.com
93.184.216.34
# Query a specific DNS server
$ dig @8.8.8.8 example.com
nslookup — Query DNS interactively
Another DNS lookup tool. Simpler than dig but less detailed.
$ nslookup example.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: example.com
Address: 93.184.216.34
host — Simple DNS lookup
The simplest DNS lookup tool. Good for quick checks.
$ host example.com
example.com has address 93.184.216.34
example.com has IPv6 address 2606:2800:220:1:248:1893:25c8:1946
example.com mail is handled by 10 mail.example.com.
# Reverse DNS lookup
$ host 93.184.216.34
34.216.184.93.in-addr.arpa domain name pointer example.com.
traceroute — Trace the network path to a host
Shows every router hop between you and a destination. Essential for diagnosing network latency and routing issues.
$ traceroute google.com
traceroute to google.com (142.250.80.46), 30 hops max, 60 byte packets
1 gateway (192.168.1.1) 0.512 ms 0.456 ms 0.398 ms
2 isp-router (10.0.0.1) 5.234 ms 5.198 ms 5.156 ms
3 core-router (172.16.0.1) 8.456 ms 8.412 ms 8.378 ms
4 google-edge (142.250.80.1) 10.123 ms 10.089 ms 10.045 ms
5 lax17s62-in-f14.1e100.net (142.250.80.46) 10.234 ms 10.198 ms 10.156 ms
mtr — Traceroute and ping combined
A real-time traceroute that continuously updates, showing packet loss and latency at every hop.
$ mtr google.com
My traceroute [v0.95]
Host: Loss% Snt Last Avg Best Wrst StDev
1. gateway 0.0% 50 0.5 0.6 0.3 1.2 0.2
2. isp-router 0.0% 50 5.2 5.4 4.8 7.1 0.5
3. core-router 0.0% 50 8.5 8.6 8.1 10.2 0.4
4. google-edge 0.0% 50 10.1 10.3 9.8 12.5 0.6
5. lax17s62-in-f14 0.0% 50 10.2 10.4 9.9 12.8 0.5
nc — Netcat (the network Swiss Army knife)
Reads and writes data across network connections. Useful for testing ports, transferring files, and creating simple servers.
# Check if a port is open
$ nc -zv server.example.com 443
Connection to server.example.com 443 port [tcp/https] succeeded!
# Scan a range of ports
$ nc -zv server.example.com 80-85
Connection to server.example.com 80 port [tcp/http] succeeded!
nc: connect to server.example.com port 81 (tcp) failed: Connection refused
nc: connect to server.example.com port 82 (tcp) failed: Connection refused
nc: connect to server.example.com port 83 (tcp) failed: Connection refused
nc: connect to server.example.com port 84 (tcp) failed: Connection refused
nc: connect to server.example.com port 85 (tcp) failed: Connection refused
# Start a simple listener
$ nc -l 9999
# Send a file over the network
# On receiving machine:
$ nc -l 9999 > received_file.tar.gz
# On sending machine:
$ nc server.example.com 9999 < file.tar.gz
nmap — Network mapper
Scans networks to discover hosts and services. The standard tool for network security auditing.
# Scan common ports on a host
$ nmap 192.168.1.100
Starting Nmap 7.94 ( https://nmap.org )
Nmap scan report for 192.168.1.100
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
5432/tcp closed postgresql
# Scan a subnet
$ nmap 192.168.1.0/24
# Detect operating system and service versions
$ sudo nmap -sV -O 192.168.1.100
iptables — Firewall management
Configures the Linux kernel firewall. Controls what network traffic is allowed in and out.
# List current rules
$ sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1234 120K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
5678 890K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
# Allow incoming SSH
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Block an IP address
$ sudo iptables -A INPUT -s 10.0.0.50 -j DROP
# Allow established connections
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
route — Show and manipulate the routing table
Displays or modifies the IP routing table. Being replaced by ip route on modern systems.
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 100 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
# Add a static route
$ sudo route add -net 10.0.0.0/24 gw 192.168.1.254
arp — View and manipulate the ARP cache
Shows the mapping between IP addresses and MAC addresses on your local network.
$ arp -a
gateway (192.168.1.1) at 00:aa:bb:cc:dd:ee [ether] on eth0
server2 (192.168.1.101) at 00:11:22:33:44:55 [ether] on eth0
Disk and storage
These commands help you understand how disk space is being used and manage storage devices. I run du and df almost every time I SSH into a server.
df — Report file system disk space usage
Shows available space on all mounted filesystems. The -h flag makes the output human-readable.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 32G 16G 67% /
/dev/sdb1 200G 145G 45G 77% /data
tmpfs 3.9G 256M 3.7G 7% /run
/dev/sdc1 500G 12G 463G 3% /backup
# Show only specific filesystem
$ df -h /var
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 32G 16G 67% /
# Show inodes instead of blocks
$ df -i
du — Estimate file space usage
Shows how much disk space files and directories are using. The go-to command for finding what is eating your disk.
# Show the size of a directory
$ du -sh /var/log
1.8G /var/log
# Show sizes of all subdirectories
$ du -h --max-depth=1 /var/log | sort -rh
1.8G /var/log
1.2G /var/log/journal
450M /var/log/nginx
120M /var/log/mysql
15M /var/log/apt
# Find the 10 largest directories
$ du -h --max-depth=2 /home | sort -rh | head -10
Tip: When a server runs out of space, I always run du -h --max-depth=1 / | sort -rh to quickly find which top-level directory is the culprit, then drill down from there.
mount — Mount a filesystem
Attaches a filesystem to the directory tree so you can access its files.
# Mount a USB drive
$ sudo mount /dev/sdb1 /mnt/usb
# Mount an NFS share
$ sudo mount -t nfs 192.168.1.200:/shared /mnt/nfs
# Mount with specific options
$ sudo mount -o ro,noexec /dev/sdc1 /mnt/data
# Show all mounted filesystems
$ mount | column -t
umount — Unmount a filesystem
Detaches a mounted filesystem. Note the spelling: umount, not unmount.
$ sudo umount /mnt/usb
# Force unmount if busy
$ sudo umount -f /mnt/nfs
# Lazy unmount (detaches now, cleans up when no longer busy)
$ sudo umount -l /mnt/stuck
fdisk — Partition table manipulator
Creates and manages disk partitions. Use with extreme caution.
# List all disks and partitions
$ sudo fdisk -l
Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 97656831 97654784 46G 83 Linux
/dev/sda2 97658878 104857599 7198722 3.4G 5 Extended
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 419430399 419428352 200G 83 Linux
lsblk — List block devices
Shows all block devices (disks and partitions) in a tree format. Easier to read than fdisk -l.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 46G 0 part /
└─sda2 8:2 0 3.4G 0 part [SWAP]
sdb 8:16 0 200G 0 disk
└─sdb1 8:17 0 200G 0 part /data
sr0 11:0 1 1024M 0 rom
blkid — Block device attributes
Shows UUIDs and filesystem types for block devices. Useful for writing /etc/fstab entries.
$ sudo blkid
/dev/sda1: UUID="a1b2c3d4-e5f6-7890-abcd-ef1234567890" TYPE="ext4"
/dev/sdb1: UUID="12345678-abcd-ef01-2345-67890abcdef0" TYPE="xfs"
/dev/sda2: UUID="87654321-dcba-fe10-5432-0fedcba98765" TYPE="swap"
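Those UUIDs are exactly what belongs in /etc/fstab, since device names like /dev/sdb1 can change between boots while UUIDs stay stable. A hypothetical entry for the data disk above:

```
# /etc/fstab: <device>  <mount point>  <type>  <options>  <dump>  <pass>
UUID=12345678-abcd-ef01-2345-67890abcdef0  /data  xfs  defaults,noatime  0  2
```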
mkfs — Create a filesystem
Formats a partition with a filesystem. All data on the partition will be destroyed.
# Create an ext4 filesystem
$ sudo mkfs.ext4 /dev/sdb1
mke2fs 1.46.5 (30-Dec-2023)
Creating filesystem with 52428544 4k blocks and 13107200 inodes
Filesystem UUID: a1b2c3d4-e5f6-7890-abcd-ef1234567890
# Create an XFS filesystem
$ sudo mkfs.xfs /dev/sdc1
fsck — Filesystem check
Checks and repairs filesystem errors. Only run on unmounted filesystems.
# Check a filesystem
$ sudo fsck /dev/sdb1
fsck from util-linux 2.39.3
e2fsck 1.46.5 (30-Dec-2023)
/dev/sdb1: clean, 15234/13107200 files, 3456789/52428544 blocks
# Force check even if filesystem appears clean
$ sudo fsck -f /dev/sdb1
Gotcha: Never run fsck on a mounted filesystem. It can corrupt your data. Always unmount first or run it from a rescue/live disk.
dd — Low-level disk operations
Already covered in File Operations, but worth mentioning here for disk-specific uses like cloning entire drives.
# Clone an entire disk
$ sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress
# Create a disk image backup
$ sudo dd if=/dev/sda of=/backup/sda_image.img bs=4M status=progress
System information
These commands tell you about the system you are working on. I run them first whenever I SSH into a server I have not worked with before.
uname — System information
Displays basic system information including the kernel version and architecture.
$ uname -a
Linux web-server-01 5.15.0-91-generic #101-Ubuntu SMP x86_64 GNU/Linux
# Kernel name only
$ uname -s
Linux
# Kernel release
$ uname -r
5.15.0-91-generic
# Machine hardware
$ uname -m
x86_64
hostname — Show or set hostname
Displays or changes the system hostname.
$ hostname
web-server-01
# Show the fully qualified domain name
$ hostname -f
web-server-01.example.com
# Show the IP address
$ hostname -I
192.168.1.100 10.0.0.5
uptime — System uptime
Shows how long the system has been running and the load averages.
$ uptime
15:30:22 up 2 days, 7:30, 2 users, load average: 0.45, 0.52, 0.48
Tip: The three load average numbers represent 1-minute, 5-minute, and 15-minute averages. On a single-core system, a load of 1.0 means fully utilized. On a 4-core system, 4.0 means fully utilized.
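A quick way to apply that rule on any machine is to normalize the load by the core count, reading /proc/loadavg directly:

```shell
#!/usr/bin/env bash
# Print the 1-minute load average divided by the number of cores.
# Values near 1.0 mean the CPUs are fully busy; above 1.0, work is queuing.
cores=$(nproc)
awk -v cores="$cores" '{ printf "load per core: %.2f\n", $1 / cores }' /proc/loadavg
```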
date — Display or set the date and time
Shows the current date and time. Also used to set the system clock.
$ date
Thu Mar 12 15:30:22 UTC 2026
# Custom format
$ date +"%Y-%m-%d %H:%M:%S"
2026-03-12 15:30:22
# Show date in a different timezone
$ TZ="America/New_York" date
Thu Mar 12 11:30:22 EDT 2026
# Convert an epoch timestamp to a date
$ date -d @1773329422
Thu Mar 12 15:30:22 UTC 2026
cal — Display a calendar
Shows a calendar for the current month or a specified month/year.
$ cal
March 2026
Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30 31
# Show a full year
$ cal 2026
# Show a specific month
$ cal 12 2026
timedatectl — Control system time and date
The modern way to manage time settings on systemd-based systems. Shows timezone, NTP sync status, and more.
$ timedatectl
Local time: Thu 2026-03-12 15:30:22 UTC
Universal time: Thu 2026-03-12 15:30:22 UTC
RTC time: Thu 2026-03-12 15:30:22
Time zone: UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
# Set the timezone
$ sudo timedatectl set-timezone America/New_York
# List available timezones
$ timedatectl list-timezones | grep America
lscpu — CPU information
Displays detailed CPU architecture information.
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 4
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
CPU MHz: 2400.000
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 35840K
lsmem — List memory ranges
Shows information about available memory ranges and their configuration.
$ lsmem
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x000000007fffffff 2G online yes 0-15
0x0000000100000000-0x000000027fffffff 6G online yes 32-79
Memory block size: 128M
Total online memory: 8G
Total offline memory: 0B
free — Display free and used memory
Shows RAM and swap usage. One of the first things I check on a struggling server.
$ free -h
total used free shared buff/cache available
Mem: 7.8Gi 3.4Gi 1.2Gi 256Mi 3.2Gi 4.1Gi
Swap: 2.0Gi 0B 2.0Gi
Tip: The "available" column is what matters, not "free." Linux uses free RAM for disk caching, which is a good thing. The "available" number tells you how much memory is actually available for new processes.
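That "available" figure also lives in /proc/meminfo as MemAvailable, which is what I script alerts around. A sketch; the 512 MiB threshold is an arbitrary example:

```shell
#!/usr/bin/env bash
# Warn when available memory drops below a threshold (in MiB).
# The 512 MiB threshold is illustrative; pick one for your workload.
threshold_mib=512
avail_kib=$(awk '/^MemAvailable:/ { print $2 }' /proc/meminfo)
avail_mib=$(( avail_kib / 1024 ))
if (( avail_mib < threshold_mib )); then
    echo "WARNING: only ${avail_mib} MiB available"
else
    echo "OK: ${avail_mib} MiB available"
fi
```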
vmstat — Virtual memory statistics
Reports information about processes, memory, paging, block I/O, and CPU activity.
$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 1234567 234567 3345678 0 0 12 45 200 400 3 1 95 1 0
0 0 0 1234500 234567 3345700 0 0 0 30 180 350 2 1 96 1 0
0 0 0 1234450 234567 3345720 0 0 0 15 190 370 2 1 97 0 0
1 0 0 1234400 234567 3345750 0 0 5 20 210 410 4 1 94 1 0
0 0 0 1234380 234567 3345760 0 0 0 10 175 340 1 1 97 1 0
lsusb — List USB devices
Shows all USB devices connected to the system.
$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0781:5583 SanDisk Corp. Ultra Fit
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
lspci — List PCI devices
Shows all PCI devices including network cards, GPUs, and storage controllers.
$ lspci
00:00.0 Host bridge: Intel Corporation Xeon E5 v4 DMI2 (rev 03)
00:02.0 VGA compatible controller: NVIDIA Corporation Tesla V100 (rev a1)
00:1f.2 SATA controller: Intel Corporation C610/X99 AHCI Controller (rev 05)
03:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection
dmesg — Print kernel ring buffer
Shows kernel messages, including hardware detection, driver loading, and error messages. Essential for debugging hardware issues.
# Show recent kernel messages
$ dmesg | tail -20
[ 234.567890] EXT4-fs (sda1): mounted filesystem with ordered data mode
[ 234.890123] USB 1-1: new high-speed USB device number 2 using xhci_hcd
[ 235.012345] usb-storage 1-1:1.0: USB Mass Storage device detected
# Show only error messages
$ dmesg --level=err
[ 12.345678] nvidia: module verification failed: signature mismatch
# Follow kernel messages in real time
$ dmesg -w
# Show human-readable timestamps
$ dmesg -T | tail -5
[Thu Mar 12 15:00:01 2026] EXT4-fs (sdb1): mounted filesystem
Package management
Different Linux distributions use different package managers. Here are the most common ones.
apt — Advanced Package Tool (Debian/Ubuntu)
The modern, user-friendly package manager for Debian-based distributions.
# Update package lists
$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu noble InRelease
Get:2 http://security.ubuntu.com/ubuntu noble-security InRelease [110 kB]
Reading package lists... Done
45 packages can be upgraded.
# Install a package
$ sudo apt install nginx
Reading package lists... Done
The following NEW packages will be installed:
nginx nginx-common nginx-core
0 upgraded, 3 newly installed, 0 to remove and 45 not upgraded.
# Upgrade all packages
$ sudo apt upgrade
# Remove a package
$ sudo apt remove nginx
# Remove a package and its config files
$ sudo apt purge nginx
# Search for packages
$ apt search "web server"
# Show package details
$ apt show nginx
apt-get — APT package handling (classic)
The older command-line tool. Still used in scripts for its stable, predictable output.
$ sudo apt-get update && sudo apt-get upgrade -y
# Install without recommended packages
$ sudo apt-get install --no-install-recommends nginx
dpkg — Debian package manager
The low-level package installer for .deb files. Works directly with package files rather than repositories.
# Install a .deb file
$ sudo dpkg -i package_name.deb
# List installed packages
$ dpkg -l | grep nginx
ii nginx 1.24.0-1 amd64 high performance web server
# Show files installed by a package
$ dpkg -L nginx
/etc/nginx
/etc/nginx/nginx.conf
/usr/sbin/nginx
# Find which package owns a file
$ dpkg -S /usr/bin/curl
curl: /usr/bin/curl
yum — Yellowdog Updater Modified (RHEL/CentOS 7)
The package manager for older Red Hat-based distributions.
# Install a package
$ sudo yum install httpd
# Update all packages
$ sudo yum update
# Search for packages
$ yum search nginx
# Remove a package
$ sudo yum remove httpd
# List installed packages
$ yum list installed
dnf — Dandified Yum (Fedora/RHEL 8+)
The modern replacement for yum on Fedora and RHEL 8+.
# Install a package
$ sudo dnf install nginx
# Update all packages
$ sudo dnf upgrade
# Search for packages
$ dnf search "text editor"
# Show package info
$ dnf info nginx
# List available updates
$ dnf check-update
pacman — Package Manager (Arch Linux)
The package manager for Arch Linux and its derivatives.
# Sync databases and upgrade all packages
$ sudo pacman -Syu
# Install a package
$ sudo pacman -S nginx
# Remove a package and unused dependencies
$ sudo pacman -Rns nginx
# Search for packages
$ pacman -Ss nginx
# List installed packages
$ pacman -Q
snap — Snap package manager
Ubuntu's universal package format. Packages are sandboxed and auto-updating.
# Install a snap
$ sudo snap install code --classic
# List installed snaps
$ snap list
Name Version Rev Publisher Notes
code 1.85.1 145 vscode classic
core22 20240111 1122 canonical base
# Update all snaps
$ sudo snap refresh
flatpak — Flatpak package manager
Another universal package format, popular on Fedora and Linux Mint.
# Install an application from Flathub
$ flatpak install flathub org.mozilla.firefox
# Run a Flatpak application
$ flatpak run org.mozilla.firefox
# List installed Flatpak apps
$ flatpak list
# Update all Flatpak apps
$ flatpak update
Shell and environment
These commands configure your shell environment, set variables, and create shortcuts. They are essential for customizing your workflow.
echo — Display text
Prints text to the terminal. Used everywhere in scripts.
$ echo "Hello, World!"
Hello, World!
# Print the value of a variable
$ echo "Home directory: $HOME"
Home directory: /home/rahul
# Print without a trailing newline
$ echo -n "Enter your name: "
# Enable escape sequences
$ echo -e "Line 1\nLine 2\tTabbed"
Line 1
Line 2 Tabbed
printf — Formatted output
More precise formatting than echo. Closer to the C printf function.
$ printf "Name: %-10s Age: %d\n" "Alice" 30
Name: Alice      Age: 30
$ printf "%05d\n" 42
00042
$ printf "%.2f%%\n" 99.5
99.50%
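A practical reason to reach for printf: echo's handling of -e and -n varies between shells (and between bash and POSIX mode), while printf behaves identically everywhere. A minimal sketch:

```shell
# echo's -e/-n flags are not portable across shells; printf interprets
# escapes consistently everywhere, so prefer it in scripts.
printf 'Line 1\nLine 2\tTabbed\n'
```

In a script that must produce exact output, this removes a whole class of "works on my machine" surprises.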
export — Set environment variables
Makes a variable available to all child processes. Essential for configuring tools and applications.
# Set an environment variable
$ export NODE_ENV=production
# Set PATH
$ export PATH="$PATH:/opt/custom/bin"
# Multiple variables
$ export DATABASE_URL="postgres://localhost/mydb" REDIS_URL="redis://localhost:6379"
# Verify it's set
$ echo $NODE_ENV
production
env — Run a program with a modified environment
Shows environment variables or runs a command with modified environment variables.
# Show all environment variables
$ env
HOME=/home/rahul
PATH=/usr/local/bin:/usr/bin:/bin
SHELL=/bin/bash
USER=rahul
LANG=en_US.UTF-8
# Run a command with a temporary environment variable
$ env NODE_ENV=test npm run test
# Run a command with a clean environment
$ env -i /bin/bash
printenv — Print environment variables
Shows the value of specific environment variables.
$ printenv HOME
/home/rahul
$ printenv PATH
/usr/local/bin:/usr/bin:/bin:/home/rahul/.local/bin
$ printenv USER SHELL
rahul
/bin/bash
alias — Create command shortcuts
Creates short names for long commands. I have dozens of aliases in my .bashrc.
$ alias ll='ls -lah'
$ alias gs='git status'
$ alias dc='docker compose'
$ alias k='kubectl'
# Use the alias
$ ll
total 156K
drwxr-xr-x 7 rahul rahul 4.0K Mar 12 14:22 .
# List all current aliases
$ alias
alias dc='docker compose'
alias gs='git status'
alias k='kubectl'
alias ll='ls -lah'
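Aliases defined at the prompt vanish when the shell exits. To keep them across logins, append them to your startup file (~/.bashrc for bash) and reload it. The sketch below uses a scratch file in place of ~/.bashrc so it is safe to run as-is:

```shell
# Stand-in for ~/.bashrc so the example does not touch your real config.
rcfile=$(mktemp)
echo "alias ll='ls -lah'" >> "$rcfile"

. "$rcfile"    # reload, exactly as you would 'source ~/.bashrc'
alias ll       # confirm the alias is now defined
```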
unalias — Remove an alias
Removes a previously defined alias.
$ unalias ll
# Remove all aliases
$ unalias -a
source — Execute commands from a file
Reads and executes commands from a file in the current shell. Used to reload configuration files.
# Reload your shell configuration after editing it
$ source ~/.bashrc
# Load environment variables from a .env file
$ source .env
# Shorthand (dot notation)
$ . ~/.bashrc
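One gotcha with `source .env`: plain KEY=value lines set shell variables but do not export them to child processes. Wrapping the source in `set -a` (allexport) fixes that. The file contents below are illustrative:

```shell
# A hypothetical .env file with plain KEY=value lines:
printf 'DB_HOST=localhost\nDB_PORT=5432\n' > .env

# set -a (allexport) makes every assignment export automatically,
# so sourcing the file publishes the variables to child processes too.
set -a
. ./.env
set +a

echo "$DB_HOST:$DB_PORT"    # -> localhost:5432
```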
history — Command history
Shows previously executed commands. Incredibly useful for recalling complex commands.
$ history | tail -10
1001 cd /var/log
1002 tail -f nginx/access.log
1003 grep "500" nginx/access.log | wc -l
1004 sudo systemctl restart nginx
1005 curl -I https://example.com
1006 ssh deploy@production.example.com
1007 rsync -avz dist/ production:/var/www/
1008 git log --oneline -10
1009 docker ps
1010 history | tail -10
# Re-run command number 1004
$ !1004
# Re-run the last command
$ !!
# Re-run the last command starting with "ssh"
$ !ssh
# Search history interactively (press Ctrl+R)
# (reverse-i-search)`ssh': ssh deploy@production.example.com
Tip: Ctrl+R for reverse history search is one of the most time-saving shortcuts. Start typing any part of a previous command and it will find it.
type — Describe a command
Shows whether a command is a built-in, alias, function, or external program.
$ type cd
cd is a shell builtin
$ type ls
ls is aliased to 'ls --color=auto'
$ type grep
grep is /usr/bin/grep
$ type type
type is a shell builtin
hash — Remember command locations
Manages the shell's hash table of command locations for faster lookups.
# Show the hash table
$ hash
hits command
5 /usr/bin/git
12 /usr/bin/ls
3 /usr/bin/grep
# Clear the hash table (useful after installing new software)
$ hash -r
I/O redirection and piping
Redirection and piping are what make the Linux command line so powerful. They let you chain simple commands together to perform complex operations.
> — Redirect stdout (overwrite)
Sends a command's output to a file, replacing its contents.
$ echo "server started" > status.log
$ cat status.log
server started
# Overwriting is the default - be careful!
$ echo "server stopped" > status.log
$ cat status.log
server stopped
>> — Redirect stdout (append)
Appends a command's output to a file without overwriting existing content.
$ echo "2026-03-12 15:00 - backup started" >> backup.log
$ echo "2026-03-12 15:05 - backup completed" >> backup.log
$ cat backup.log
2026-03-12 15:00 - backup started
2026-03-12 15:05 - backup completed
< — Redirect stdin
Feeds a file's contents as input to a command.
$ wc -l < /etc/passwd
42
$ sort < unsorted_names.txt
Alice
Bob
Charlie
Dave
2> — Redirect stderr
Redirects error messages to a file, keeping stdout on screen.
$ find / -name "*.conf" 2> /dev/null
/etc/nginx/nginx.conf
/etc/ssh/sshd_config
# Save errors to a log file
$ ./risky_script.sh 2> error.log
2>&1 — Redirect stderr to stdout
Combines stdout and stderr into a single stream. Used when you want to capture or pipe all output.
# Save both output and errors to the same file
$ ./deploy.sh > deploy.log 2>&1
# Shorthand for the above (bash 4+)
$ ./deploy.sh &> deploy.log
# Pipe both stdout and stderr
$ ./build.sh 2>&1 | tee build.log
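The classic mistake here is getting the order wrong. Redirections are processed left to right, so `2>&1` must come after stdout has been pointed at the file. A quick demonstration:

```shell
# Correct: stdout goes to the file first, then stderr is duplicated onto it.
ls /nonexistent_dir > both.log 2>&1 || true   # both.log contains the error

# Wrong order: 2>&1 copies stderr to stdout's *current* target (the screen)
# before > moves stdout, so the error never reaches the file.
ls /nonexistent_dir 2>&1 > empty.log || true  # empty.log stays empty
```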
| — Pipe
Sends the output of one command as input to the next. The fundamental building block of command-line data processing.
# Find the most common IP addresses in an access log
$ cat /var/log/nginx/access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn | head -5
847 192.168.1.50
523 10.0.0.15
234 172.16.0.100
156 192.168.1.75
89 10.0.0.22
# Find large files and sort by size
$ find /var -type f -size +10M | xargs du -h | sort -rh | head -10
# Count how many processes each user is running
$ ps aux | awk '{print $1}' | sort | uniq -c | sort -rn
45 root
12 www-data
8 rahul
5 postgres
tee — Split output to file and screen
Already covered in File Operations, but it is especially important in pipelines. It lets you save intermediate results while the data continues flowing through the pipe.
# Monitor and log a deployment
$ ./deploy.sh 2>&1 | tee deploy.log | grep -E "(error|success)"
[SUCCESS] Built application
[SUCCESS] Deployed to production
xargs — Build and execute commands from stdin
Takes input from stdin and converts it into arguments for another command. Bridges the gap between commands that output data and commands that expect arguments.
# Delete all .tmp files found by find
$ find /tmp -name "*.tmp" | xargs rm -f
# Run a command for each line of input
$ cat servers.txt | xargs -I {} ssh {} "uptime"
15:30:22 up 45 days, 3:22
15:30:23 up 12 days, 18:45
15:30:23 up 90 days, 7:10
# Run commands in parallel (4 at a time)
$ cat urls.txt | xargs -P 4 -I {} curl -sO {}
# Handle filenames with spaces
$ find . -name "*.log" -print0 | xargs -0 rm -f
Tip: Always use -print0 with find and -0 with xargs when dealing with filenames that might contain spaces or special characters.
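An alternative worth knowing: find can batch arguments itself with the `-exec ... +` terminator, which is just as safe with odd filenames and needs no pipe at all:

```shell
# find's '+' terminator appends as many matches as fit onto a single rm
# invocation, handling spaces and special characters safely.
find . -name "*.log" -exec rm -f {} +
```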
Job scheduling
Automating tasks with scheduled jobs is fundamental to system administration. These commands let you run scripts at specific times or intervals.
cron — The cron daemon
The background service that runs scheduled tasks. It reads crontab files and executes commands at specified times.
# Check if cron is running
$ systemctl status cron
● cron.service - Regular background program processing daemon
Active: active (running) since Mon 2026-03-10 08:00:00 UTC; 2 days ago
crontab — Manage cron jobs
Edits your personal crontab file. Each line defines a scheduled task.
# Edit your crontab
$ crontab -e
# View your current crontab
$ crontab -l
# m h dom mon dow command
0 2 * * * /home/rahul/scripts/backup.sh
*/5 * * * * /home/rahul/scripts/health_check.sh
0 0 * * 0 /home/rahul/scripts/weekly_report.sh
30 8 * * 1-5 /home/rahul/scripts/daily_digest.sh
# Crontab format:
# ┌───────────── minute (0-59)
# │ ┌───────────── hour (0-23)
# │ │ ┌───────────── day of month (1-31)
# │ │ │ ┌───────────── month (1-12)
# │ │ │ │ ┌───────────── day of week (0-7, 0 and 7 are Sunday)
# │ │ │ │ │
# * * * * * command
# Common examples:
# 0 * * * * - Every hour
# */5 * * * * - Every 5 minutes
# 0 2 * * * - Every day at 2:00 AM
# 0 0 * * 0 - Every Sunday at midnight
# 0 9 1 * * - First day of every month at 9:00 AM
# Remove all cron jobs
$ crontab -r
Gotcha: Cron jobs run with a minimal environment. Always use full paths for commands and scripts. If your script uses environment variables, source them explicitly at the top.
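The gotcha above can be baked into a defensive script header. Everything here is illustrative (the env-file path and log location are assumptions, not from any real setup), but the pattern is the point:

```shell
#!/bin/bash
# Header for a script meant to run from cron.
# Cron starts jobs with an almost empty environment, so set PATH yourself:
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Source any variables the job needs (path is a hypothetical example):
ENV_FILE="${ENV_FILE:-/etc/backup.env}"
[ -f "$ENV_FILE" ] && . "$ENV_FILE"

# Use absolute paths for commands and files; cron's defaults are minimal.
/bin/date '+%F %T job started' >> "${LOG_FILE:-/tmp/backup.log}"
```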
at — Schedule a one-time task
Runs a command once at a specified time, unlike cron, which handles recurring tasks.
# Run a command at a specific time
$ echo "tar -czf /backup/snapshot.tar.gz /data" | at 02:00
job 1 at Wed Mar 13 02:00:00 2026
# Run a command in 30 minutes
$ echo "/home/rahul/scripts/send_report.sh" | at now + 30 minutes
job 2 at Wed Mar 12 16:00:00 2026
# List pending jobs
$ atq
1 Wed Mar 13 02:00:00 2026 a rahul
2 Wed Mar 12 16:00:00 2026 a rahul
# Remove a job
$ atrm 2
batch — Run a command when system load is low
Like at, but waits until the system load drops below a threshold before running. Good for heavy tasks on shared servers.
$ echo "/home/rahul/scripts/heavy_analysis.sh" | batch
job 3 at Wed Mar 12 15:30:00 2026
# Runs once the load average drops below atd's threshold (0.8 by default, configurable with atd -l)
systemctl — Manage systemd services and timers
Controls system services and can manage timer units as a modern alternative to cron.
# Start a service
$ sudo systemctl start nginx
# Stop a service
$ sudo systemctl stop nginx
# Restart a service
$ sudo systemctl restart nginx
# Reload configuration without restarting
$ sudo systemctl reload nginx
# Enable a service to start at boot
$ sudo systemctl enable nginx
# Check service status
$ sudo systemctl status nginx
● nginx.service - A high performance web server
Active: active (running) since Mon 2026-03-10 08:00:00 UTC
Process: 1234 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Main PID: 5678 (nginx)
Tasks: 5
Memory: 12.3M
CPU: 1.234s
CGroup: /system.slice/nginx.service
├─5678 "nginx: master process /usr/sbin/nginx"
└─5679 "nginx: worker process"
# List all running services
$ systemctl list-units --type=service --state=running
# List all timer units (systemd timers)
$ systemctl list-timers
NEXT LEFT LAST PASSED UNIT
Wed 2026-03-12 16:00:00 UTC 29min left Wed 2026-03-12 15:00:00 UTC 30min ago logrotate.timer
Thu 2026-03-13 00:00:00 UTC 8h left Wed 2026-03-12 00:00:00 UTC 15h ago apt-daily.timer
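A cron entry like `0 2 * * *` maps onto a pair of unit files: a service that does the work and a timer that schedules it. The unit names and ExecStart path below are illustrative; the sketch writes them to a temp directory so it is safe to run as-is (real units live in /etc/systemd/system/):

```shell
# Hypothetical backup.service + backup.timer, written to a temp dir.
unit_dir=$(mktemp -d)

cat > "$unit_dir/backup.service" <<'EOF'
[Unit]
Description=Nightly backup
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
EOF

cat > "$unit_dir/backup.timer" <<'EOF'
[Unit]
Description=Run backup.service daily at 02:00
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
[Install]
WantedBy=timers.target
EOF

# After copying both units into /etc/systemd/system/:
#   sudo systemctl daemon-reload
#   sudo systemctl enable --now backup.timer
```

`Persistent=true` is the piece cron lacks: if the machine was off at 02:00, the job runs at the next boot.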
Advanced and power user
These commands are for when you need to dig deeper into system behavior, debug performance issues, or automate complex workflows.
strace — Trace system calls
Shows every system call a process makes. The ultimate debugging tool when you have no idea why a program is failing.
# Trace a running command
$ strace ls /tmp
execve("/usr/bin/ls", ["ls", "/tmp"], ...) = 0
openat(AT_FDCWD, "/tmp", O_RDONLY|O_DIRECTORY) = 3
getdents64(3, /* 15 entries */, 32768) = 480
write(1, "file1.txt\nfile2.log\n", 20) = 20
# Trace a running process by PID
$ sudo strace -p 15234
Process 15234 attached
read(6, "GET / HTTP/1.1\r\nHost: example.co"..., 8192) = 245
write(6, "HTTP/1.1 200 OK\r\n"..., 1456) = 1456
# Trace only specific system calls
$ strace -e trace=open,read,write ./myapp
# Count system calls and show summary
$ strace -c ls /tmp
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- --------
45.23 0.000123 12 10 read
32.15 0.000087 8 11 write
22.62 0.000061 6 10 3 openat
------ ----------- ----------- --------- --------- --------
100.00 0.000271 31 3 total
Tip: When a process is failing silently, strace will show you exactly which file it cannot open, which network connection it cannot make, or which system call is returning an error.
ltrace — Trace library calls
Like strace but traces calls to shared libraries instead of system calls. Useful for debugging higher-level function calls.
$ ltrace ls /tmp
__libc_start_main(0x5555555551a0, 2, 0x7fffffffe3d8, ...)
opendir("/tmp") = 0x5555555592a0
readdir(0x5555555592a0) = 0x5555555592c0
strcmp(".", ".") = 0
readdir(0x5555555592a0) = 0x5555555592e8
strcmp("..", "..") = 0
readdir(0x5555555592a0) = 0x555555559310
puts("file1.txt") = 10
lsof — List open files
Shows all open files and the processes using them. In Linux, everything is a file (including network connections), so this is incredibly powerful.
# Show all open files by a process
$ lsof -p 15234
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 15234 rahul cwd DIR 8,1 4096 1048577 /home/rahul/webapp
node 15234 rahul 3u IPv4 45678 0t0 TCP *:3000 (LISTEN)
node 15234 rahul 4u IPv4 45679 0t0 TCP 192.168.1.100:3000->192.168.1.50:54321 (ESTABLISHED)
# Find what process is using a specific port
$ lsof -i :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 5678 www-data 6u IPv4 12345 0t0 TCP *:http (LISTEN)
# Find all open files in a directory
$ lsof +D /var/log
# Show all network connections
$ lsof -i -P -n
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1234 root 3u IPv4 11111 0t0 TCP *:22 (LISTEN)
nginx 5678 root 6u IPv4 22222 0t0 TCP *:80 (LISTEN)
node 15234 rahul 3u IPv4 33333 0t0 TCP *:3000 (LISTEN)
Tip: lsof -i :PORT is the fastest way to find what program is using a specific port. I use this almost daily.
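That daily lookup is worth wrapping in a tiny helper function. The function name is made up for this sketch; `-t` restricts lsof's output to bare PIDs, which makes it composable:

```shell
# Hypothetical helper: print the PIDs using a given TCP/UDP port.
port_owner() {
  lsof -t -i :"$1"
}

# Usage:
#   port_owner 3000                  # list PIDs bound to port 3000
#   port_owner 3000 | xargs -r kill  # free the port
```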
tcpdump — Capture network traffic
Captures and displays network packets. The command-line equivalent of Wireshark.
# Capture traffic on all interfaces
$ sudo tcpdump -i any
15:30:22.123456 IP 192.168.1.50.54321 > 192.168.1.100.80: Flags [S], seq 12345
15:30:22.123789 IP 192.168.1.100.80 > 192.168.1.50.54321: Flags [S.], seq 67890
# Capture only traffic to/from a specific host
$ sudo tcpdump host 192.168.1.50
# Capture only HTTP traffic
$ sudo tcpdump -i eth0 port 80
# Save capture to a file for later analysis
$ sudo tcpdump -i eth0 -w capture.pcap
# Capture DNS queries
$ sudo tcpdump -i any port 53
15:30:22.456789 IP 192.168.1.100.45678 > 8.8.8.8.53: 12345+ A? example.com. (30)
15:30:22.567890 IP 8.8.8.8.53 > 192.168.1.100.45678: 12345 1/0/0 A 93.184.216.34 (46)
# Show packet contents in ASCII
$ sudo tcpdump -A -i eth0 port 80 | head -20
sar — System activity reporter
Collects, reports, and saves system activity information. Part of the sysstat package. Great for historical performance analysis.
# Show CPU usage for today
$ sar -u
Linux 5.15.0-91-generic 03/12/2026 _x86_64_
12:00:01 AM CPU %user %nice %system %iowait %idle
12:10:01 AM all 3.45 0.00 1.23 0.34 94.98
12:20:01 AM all 2.89 0.00 1.15 0.28 95.68
12:30:01 AM all 5.67 0.00 2.34 0.89 91.10
# Show memory usage
$ sar -r
12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached
12:10:01 AM 1234567 6789012 84.60 234567 3345678
12:20:01 AM 1234500 6789079 84.61 234567 3345700
# Show disk I/O
$ sar -d
# Show network interface stats
$ sar -n DEV
perf — Performance analysis tools
The Linux performance analysis toolkit. Used for profiling CPU usage, cache misses, and more.
# Record performance data for a command
$ sudo perf record ./cpu_intensive_program
[ perf record: Captured and wrote 1.234 MB perf.data (5678 samples) ]
# Show the performance report
$ sudo perf report
Overhead Command Shared Object Symbol
35.67% myapp myapp [.] hot_function
22.34% myapp libc.so.6 [.] malloc
15.89% myapp myapp [.] process_data
# Quick CPU stats for a command
$ sudo perf stat ls /tmp
Performance counter stats for 'ls /tmp':
1.23 msec task-clock
3 context-switches
0 cpu-migrations
145 page-faults
3,456,789 cycles
2,345,678 instructions
inotifywait — Wait for filesystem events
Watches for file changes in real time. Perfect for triggering actions when files are modified, created, or deleted.
# Watch a directory for any changes
$ inotifywait -m /var/www/html
Setting up watches.
Watches established.
/var/www/html/ MODIFY index.html
/var/www/html/ CREATE new_file.js
/var/www/html/ DELETE old_file.css
# Watch for specific events and trigger a build
$ inotifywait -m -e modify -e create -r ./src/ |
while read path action file; do
echo "Change detected: $file - rebuilding..."
npm run build
done
screen — Terminal multiplexer
Lets you run multiple terminal sessions within a single window and detach/reattach sessions. Keeps processes running after you disconnect.
# Start a new screen session
$ screen -S deployment
# Run your long-running command inside screen
$ ./deploy_all_servers.sh
# Detach from screen (Ctrl+A, then D)
# You can now safely close your SSH connection
# List screen sessions
$ screen -ls
There is a screen on:
12345.deployment (Detached)
# Reattach to a session
$ screen -r deployment
tmux — Terminal multiplexer (modern)
A more modern alternative to screen. Supports split panes, better scripting, and a more intuitive interface.
# Start a new tmux session
$ tmux new -s work
# Split horizontally
# Ctrl+B, then "
# Split vertically
# Ctrl+B, then %
# Switch panes
# Ctrl+B, then arrow keys
# Detach from tmux
# Ctrl+B, then D
# List sessions
$ tmux ls
work: 1 windows (created Wed Mar 12 14:00:00 2026)
# Reattach
$ tmux attach -t work
# Kill a session
$ tmux kill-session -t work
Tip: I run tmux on every remote server I work on. If my SSH connection drops, I just reconnect and tmux attach to pick up exactly where I left off.
parallel — Execute commands in parallel
Runs commands in parallel, utilizing multiple CPU cores. A massive productivity boost for batch operations.
# Compress multiple files in parallel
$ ls *.log | parallel gzip
# Download multiple URLs in parallel
$ cat urls.txt | parallel -j 4 wget
# Run a command with different arguments in parallel
$ parallel convert {} -resize 800x600 resized_{} ::: *.jpg
# Process files in parallel with progress
$ find . -name "*.csv" | parallel --progress "python3 process.py {}"
Bonus: essential command-line shortcuts
These are not commands, but keyboard shortcuts that will dramatically speed up your terminal work.
# Navigation
Ctrl+A # Move cursor to the beginning of the line
Ctrl+E # Move cursor to the end of the line
Alt+B # Move back one word
Alt+F # Move forward one word
# Editing
Ctrl+U # Delete from cursor to the beginning of the line
Ctrl+K # Delete from cursor to the end of the line
Ctrl+W # Delete the word before the cursor
Alt+D # Delete the word after the cursor
Ctrl+Y # Paste the last deleted text
# History
Ctrl+R # Reverse search through history
Ctrl+P # Previous command (same as up arrow)
Ctrl+N # Next command (same as down arrow)
!! # Repeat the last command
!$ # Use the last argument of the previous command
# Process control
Ctrl+C # Kill the current process
Ctrl+Z # Suspend the current process
Ctrl+D # Exit the current shell (or send EOF)
Ctrl+L # Clear the screen (same as 'clear')
Putting it all together
The real power of Linux commands comes from combining them. Here are some practical one-liners I use regularly.
# Find the 10 largest files on the system
$ find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head -10
# Monitor a log file for errors and send an alert
$ tail -f /var/log/app.log | grep --line-buffered "ERROR" | while read line; do
echo "$line" | mail -s "App Error Alert" admin@example.com
done
# Reap zombie processes (kill -9 cannot remove a zombie - signal the parent to reap it instead)
$ ps -eo pid,ppid,stat | awk '$3 ~ /^Z/ {print $2}' | sort -u | xargs -r kill -s SIGCHLD 2>/dev/null
# Bulk rename files (change extension from .txt to .md)
$ find . -name "*.txt" | while read f; do mv "$f" "${f%.txt}.md"; done
# Show the top 10 most frequently used commands
$ history | awk '{print $2}' | sort | uniq -c | sort -rn | head -10
# Check all servers for disk usage
$ cat servers.txt | xargs -I {} ssh {} "hostname && df -h / | tail -1"
# Find files modified in the last hour that contain "TODO"
$ find . -mmin -60 -type f | xargs grep -l "TODO" 2>/dev/null
# Create a quick HTTP server to share files
$ python3 -m http.server 8080
# Watch for high CPU processes
$ watch -n 1 'ps aux --sort=-%cpu | head -10'
# Generate a random password
$ openssl rand -base64 32
kQ7xN9mP2wR5tY8vF1bC4dH6jL0aG3eS5iK7nM9oU=
Summary
This guide covers 150+ Linux commands that I use regularly in real-world scenarios. You do not need to memorize all of them. Start with the basics: cd, ls, cat, grep, find, cp, mv, rm. Build from there as you encounter new problems.
The man pages (man command_name) and the --help flag are your best friends. Every command in this guide has extensive documentation built right into your terminal. When you forget a flag or need to discover a new option, those resources are one keystroke away.
The most important skill is not knowing every command by heart. It is knowing that a command exists for your problem, and being able to quickly look up the details. Bookmark this page as your reference, and keep practicing in the terminal every day.
Frequently Asked Questions
How many Linux commands are there?
A typical Linux distribution ships with over 1,000 executable commands. However, most users only need 50-100 commands for daily work. System administrators and DevOps engineers regularly use 150-200 commands. This guide covers the most practical ones you will actually use.
What is the fastest way to learn Linux commands?
Start with navigation (cd, ls, pwd), file operations (cp, mv, rm, cat), and searching (grep, find). Practice daily in a terminal. Use man pages and the --help flag when stuck. Build muscle memory by using the command line for tasks you would normally do in a GUI.
What is the difference between Linux and Unix commands?
Most commands are identical or very similar. Linux commands are based on GNU utilities which are compatible with Unix but often include additional flags and features. Commands like ls, grep, awk, sed, and find work the same way on both systems.
Can I practice Linux commands on Windows?
Yes. Use Windows Subsystem for Linux (WSL) to get a full Linux terminal on Windows. Alternatively, use Docker containers, virtual machines with VirtualBox, or online terminals like JSLinux. WSL 2 is the easiest option and runs Ubuntu natively.
Originally published at aicodereview.cc