You know grep, sed, awk. You've read the same "10 useful terminal commands" articles written since 2009. This isn't that. These are the commands that change how you think about the terminal — the ones that make a task you'd normally open Python for take 8 seconds instead.
Command 01 — pv: pipe viewer, or "why is this taking forever?"
"I was copying a 40GB database dump over SSH. Zero feedback. Just a blinking cursor for 25 minutes. I had no idea if it was stuck, slow, or done. I killed it twice by accident and had to start over."
— every backend engineer, at least once
pv inserts into any pipe and gives you a live progress bar, transfer rate, ETA, and bytes transferred. It's invisible to the data — it just watches and reports.
Install:
brew install pv # macOS
apt install pv # Debian/Ubuntu
Usage:
# Compress a large file with progress
pv hugefile.sql | gzip > hugefile.sql.gz
# Copy with progress bar
pv source.tar.gz > /backup/source.tar.gz
# Pipe through multiple commands
pv dump.sql | gzip | ssh user@remote "cat > dump.sql.gz"
# Throttle transfer rate to 1MB/s
pv -L 1m source.iso > /dev/null
Tip: Use pv -petra for all metrics at once: progress, ETA, timer, rate, and average rate. Alias it in your shell config.
Result: never fly blind on a long pipe again.
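The tip above fits in one line of your shell config — a sketch, with the alias name (pvv) being my own invention, not a convention:

```shell
# -p progress bar, -e ETA, -t elapsed time, -r current rate, -a average rate
alias pvv='pv -petra'

# Then any long pipe gets full instrumentation, e.g.:
# pvv dump.sql | gzip > dump.sql.gz
```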
Command 02 — moreutils: vipe, sponge, chronic — the toolkit nobody installs
"I needed to edit the middle of a pipeline — transform some JSON, hand-fix two records, then pass it on. I ended up saving to a temp file, opening it, editing, piping again. Four steps for something that should've been one."
— a data engineer mid-ETL
moreutils is a collection of Unix tools that should have existed from the start. Three standouts:
Install:
brew install moreutils
apt install moreutils
vipe — edit mid-pipe
Opens your $EDITOR in the middle of a pipe. Edit stdin, save, and the result continues down the pipe.
# Generate JSON, hand-edit it, then process it
curl -s api.example.com/data | vipe | jq '.results[]'
# Filter a log, manually tweak some lines, send to file
cat app.log | grep ERROR | vipe > errors-reviewed.log
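Under the hood vipe is simple: buffer stdin to a temp file, run $EDITOR on it, emit the result. A rough pure-shell approximation (mimic_vipe is my name, not part of moreutils) also shows that the "editor" can be any command, which makes the trick scriptable:

```shell
# Roughly what vipe does: buffer, edit, re-emit
mimic_vipe() {
  t=$(mktemp)
  cat > "$t"            # buffer all of stdin first
  ${EDITOR:-vi} "$t"    # real vipe also wires the editor to /dev/tty
  cat "$t"              # pass the edited result down the pipe
  rm -f "$t"
}

# Scripted "edit": a sed command stands in for the interactive editor
# printf 'foo\n' | EDITOR='sed -i -e s/foo/bar/' mimic_vipe
```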
sponge — safe in-place editing
Reads all stdin before writing to the output file. Lets you safely read and write the same file in one command — something > will silently destroy.
# DANGEROUS — truncates file before sort finishes reading it
sort file.txt > file.txt
# SAFE — sponge buffers everything first
sort file.txt | sponge file.txt
# Deduplicate a file in place
sort -u config.txt | sponge config.txt
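If moreutils isn't available, the same buffering trick is two lines with a temp file — a sketch of roughly what sponge does internally (demo.txt is an illustrative file):

```shell
# Buffer to a temp file, then replace the target in one move
printf 'b\na\nb\n' > demo.txt
tmp=$(mktemp)
sort -u demo.txt > "$tmp" && mv "$tmp" demo.txt
cat demo.txt   # prints: a, then b
```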
chronic — silent success, loud failure
Runs a command silently on success, but shows full output on failure. Built for cron jobs — no more noisy emails for commands that succeed 99% of the time.
# In crontab — only emails you if backup fails
0 2 * * * chronic /usr/local/bin/run-backup.sh
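The pattern is easy to approximate in plain shell on systems without moreutils — a sketch, with quiet_on_success being my own name for it:

```shell
# Capture all output; only show it if the command failed
quiet_on_success() {
  out=$("$@" 2>&1)
  status=$?
  [ "$status" -ne 0 ] && printf '%s\n' "$out"
  return "$status"
}

quiet_on_success true                            # success: prints nothing
quiet_on_success ls /no/such/dir || echo "failed" # failure: error text shown
```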
Result: three commands that patch real Unix gaps.
Command 03 — fd: find, but written for humans
"I've been using find for 8 years and I still google the syntax every single time. The flags are backwards, case-insensitivity is opt-in, and ignoring node_modules requires a paragraph of shell."
— a senior engineer who finally switched
fd is a modern replacement for find. It's faster (parallel traversal by default), respects .gitignore, uses smart case (case-insensitive unless your pattern contains a capital), and has sane syntax.
# find — search for JS files, ignore node_modules
find . -name "*.js" -not -path "*/node_modules/*"
# fd — same thing
fd -e js
# fd — find files modified in the last 2 days
fd --changed-within 2d
# fd — find and execute a command on each result
fd -e log -x rm {}
# fd — search hidden files too
fd -H .env
Tip: fd uses regex by default. Pass -g for glob patterns if that's more natural for the task.
Warning: On some systems fd is installed as fdfind to avoid a naming conflict. Alias it: alias fd=fdfind
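Where installing fd isn't an option, the closest plain-find spellings of the examples above (minus the .gitignore awareness) look like this:

```shell
# roughly fd -e js: case-insensitive match, skipping node_modules by hand
find . -iname '*.js' -not -path '*/node_modules/*'

# roughly fd --changed-within 2d: regular files modified in the last 2 days
find . -type f -mtime -2
```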
Result: never google find syntax again.
Command 04 — hyperfine: benchmarking that's actually rigorous
"I was arguing with a teammate about which implementation was faster. We were doing time ./script_a and time ./script_b back and forth. The results bounced around by 30% depending on system load. We had no idea which was actually faster."
— an engineer mid code-review argument
hyperfine runs commands multiple times, warms up the cache, computes mean and standard deviation, and gives you a statistically meaningful comparison. It's what time should have been.
Install:
brew install hyperfine
cargo install hyperfine
Usage:
# Benchmark a single command (10 runs by default)
hyperfine 'grep -r "TODO" src/'
# Compare two implementations
hyperfine 'python parse_v1.py data.json' 'python parse_v2.py data.json'
# Warm up first, then benchmark
hyperfine --warmup 3 './build/server --dry-run'
# Export results to markdown table
hyperfine --export-markdown results.md 'cmd_a' 'cmd_b'
# Run with different input parameters
hyperfine 'sort -n {input}' --parameter-list input small.txt medium.txt large.txt
Tip: Use --export-json to pipe results into your own analysis. Pairs well with jq for custom reporting.
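The JSON export is a results array of per-command statistics (command, mean, stddev, and more). A minimal jq report could look like this — the results.json here is a hand-made stand-in shaped like hyperfine's export, not real benchmark data:

```shell
# Stand-in for a file produced by: hyperfine --export-json results.json ...
cat > results.json <<'EOF'
{"results":[{"command":"cmd_a","mean":1.5,"stddev":0.1},
            {"command":"cmd_b","mean":0.9,"stddev":0.05}]}
EOF

# One summary line per benchmarked command
jq -r '.results[] | "\(.command): \(.mean)s ± \(.stddev)s"' results.json
```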
Result: win every "which is faster" argument with data.
Command 05 — atool: one command for every archive format
"Someone sent me a .tar.bz2. I couldn't remember if it was -xjf or -xzf. I guessed wrong, it failed silently, and I spent 10 minutes wondering why the directory was empty."
— a dev on their third Stack Overflow tab
atool wraps tar, zip, rar, 7z, bz2, xz — every format — behind four consistent commands. You never look up flags again.
Install:
brew install atool
apt install atool
The only four commands you need:
# Extract anything — format auto-detected
aunpack archive.tar.bz2
aunpack archive.zip
aunpack archive.7z
# List contents without extracting
als archive.tar.gz
# Create an archive (format from extension)
apack output.tar.gz file1 file2 dir/
# Repack from one format to another
arepack old.tar.gz new.tar.xz
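Worth knowing even without atool: modern GNU and BSD tar sniff the compression format on extract, so the -j vs -z guessing game from the quote only bites when creating archives. A quick demo (note.txt and demo.tar.gz are throwaway names):

```shell
echo hello > note.txt
tar czf demo.tar.gz note.txt       # creating still wants a compression flag

mkdir -p extracted
tar xf demo.tar.gz -C extracted    # extracting doesn't: format is auto-detected
```

The same tar xf works on .tar.bz2 and .tar.xz; GNU tar's -a flag even picks the compressor from the extension when creating (tar caf out.tar.xz dir/).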
Result: one muscle memory. Every archive format. Forever.
Command 06 — entr: run any command when files change
"I was writing a C utility with no build tool, no webpack, no hot reload. Every edit meant switching windows, pressing up-arrow, enter. After the 40th time I started looking for a way to automate it. Turns out it already existed."
— a systems programmer who found entr at 2am
entr watches a list of files and re-runs any command when they change. No config file, no daemon, no 500-line Makefile. Just a pipe.
Install:
brew install entr
apt install entr
Usage:
# Re-run tests when any Go file changes
find . -name '*.go' | entr go test ./...
# Recompile on any C file change
ls *.c | entr make
# Restart server on source change
ls src/**/*.js | entr -r node server.js
# Clear screen before each run
ls *.py | entr -c python main.py
# Watch a directory recursively (combine with fd)
fd -e ts | entr -rc npm run build
Tip: -r kills and restarts a long-running process. -c clears the screen. -d exits when a new file is added to a watched directory — useful in scripts.
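Where entr can't be installed, a crude polling loop approximates it. This is a sketch under my own choices (the watch_and_run name, a 2-second interval, and ls -lR | cksum as the change detector), not a replacement for entr's kernel-level file events:

```shell
# Re-run a command whenever the recursive listing of a directory
# (names, sizes, mtimes) changes
watch_and_run() {
  dir=$1; shift
  old=$(ls -lR "$dir" | cksum)
  while sleep 2; do
    new=$(ls -lR "$dir" | cksum)
    if [ "$new" != "$old" ]; then
      old=$new
      "$@"
    fi
  done
}

# Usage (runs until interrupted):
# watch_and_run src make
```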
Result: hot reload for literally anything.
None of these are in the standard curriculum. None of them show up in "learn Linux" courses. They exist in blog posts, dotfiles repos, and the .bashrc of engineers who've been quietly shipping faster than everyone else for years. Now you have them too.
Install one today. Add it to your dotfiles. Then forget you ever lived without it.
Found one you didn't know? Drop a comment with your own hidden gem — I'll add the best ones to a follow-up post.