90% of find usage looks like this:
find . -name "*.log"
That's like using Python only for print("hello").
The operators nobody uses
Time-based filtering
Find files modified in the last 30 minutes:
find /var/log -mmin -30
Files NOT accessed in the last 90 days:
find /home -atime +90
Files newer than a reference file:
find /etc -newer /etc/passwd
That last one is wild. "Show me everything in /etc that changed after the last password update." Incredibly useful for incident response - if you know when the breach happened, you can find every file modified after that timestamp.
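GNU find can also take a timestamp directly via -newermt, so you don't even need a reference file. A minimal sketch (the scratch directory and file names are illustrative; -newermt is GNU-specific):

```shell
# Sketch: -newermt compares against a timestamp string instead of a file.
dir=$(mktemp -d)
touch -d "2020-01-01 12:00" "$dir/old.conf"   # backdated file
touch "$dir/new.conf"                          # modified "now"
find "$dir" -newermt "2023-01-01 00:00" -type f
# prints only new.conf
rm -rf "$dir"
```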
Permission filtering
Find all SUID binaries (a classic security audit):
find / -perm -4000 -type f
Files writable by others:
find /var -perm -o+w -type f
Files with no owner (orphaned after a user deletion):
find / -nouser -o -nogroup
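You can verify the SUID audit safely in a scratch directory — chmod can set the setuid bit on any file you own, and -perm -4000 picks out exactly those files (directory and file names below are illustrative):

```shell
# Sketch: -perm -4000 matches only files with the setuid bit set.
dir=$(mktemp -d)
touch "$dir/normal" "$dir/suid-demo"
chmod 4755 "$dir/suid-demo"        # set the setuid bit
find "$dir" -perm -4000 -type f
# prints only suid-demo
rm -rf "$dir"
```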
Size filtering
Files over 100MB:
find / -size +100M -type f
Empty files (often leftover from crashed processes):
find /tmp -empty
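For a readable large-file report, GNU find's -printf can emit the size next to the path so you can sort the output. A sketch in a scratch directory (names and the 512k threshold are illustrative; -printf is GNU-specific):

```shell
# Sketch: print "size path" for big files, largest first.
dir=$(mktemp -d)
head -c 1048576 /dev/zero > "$dir/big"     # 1 MiB
head -c 10 /dev/zero > "$dir/small"        # 10 bytes
find "$dir" -type f -size +512k -printf '%s %p\n' | sort -rn
# prints only the big file, with its byte count
rm -rf "$dir"
```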
Combining conditions
find supports AND (implicit), OR (-o), and NOT (!), with \( \) for grouping — escaped so the shell doesn't interpret the parentheses.
"Log files over 10MB that haven't been modified in a week":
find /var/log -name "*.log" -size +10M -mtime +7
"Config files that are world-writable OR have no owner":
find /etc \( -perm -o+w -o -nouser \) -type f
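Those parentheses aren't decoration: AND binds tighter than OR, so without them the -type f only applies to the -nouser branch and world-writable directories leak into the results. A sketch that makes the difference visible (directory and file names are illustrative):

```shell
dir=$(mktemp -d)
mkdir "$dir/opendir";  chmod o+w "$dir/opendir"    # world-writable directory
touch "$dir/openfile"; chmod o+w "$dir/openfile"   # world-writable file
# Without parens: parses as "-perm -o+w OR (-nouser AND -type f)",
# so the directory matches too:
find "$dir" -perm -o+w -o -nouser -type f
# With parens: -type f applies to the whole OR, files only:
find "$dir" \( -perm -o+w -o -nouser \) -type f
rm -rf "$dir"
```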
-exec is where it becomes a language
-exec executes a command on each result. The {} is replaced with the filename, and \; terminates the command.
Delete old temp files:
find /tmp -mtime +30 -type f -exec rm {} \;
Change ownership of orphaned files:
find / -nouser -exec chown root {} \;
But the real trick is -exec with + instead of \;. The semicolon variant runs one command per file. The plus variant batches files into a single command invocation, like xargs:
find /var/log -name "*.log" -exec gzip {} +
That's one gzip call with hundreds of arguments, not hundreds of gzip calls. On a system with 50,000 log files, the difference is minutes vs seconds.
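You can make the batching visible by having each invocation print its own argument count — one line per invocation (the sh -c trick passes find's arguments straight to a throwaway script):

```shell
# Sketch: \; runs once per file, + runs once per batch.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"
find "$dir" -type f -exec sh -c 'echo "args: $#"' sh {} \;   # three lines: args: 1
find "$dir" -type f -exec sh -c 'echo "args: $#"' sh {} +    # one line:    args: 3
rm -rf "$dir"
```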
-delete is the dangerous one
find has a built-in -delete action. It's faster than -exec rm because it avoids forking a subprocess for each file.
find /tmp -name "*.cache" -mtime +7 -delete
The danger: -delete implies -depth (directories are processed bottom-up), and like any action it fires as soon as the expression to its left matches. Put -delete before your filters — find /tmp -delete -name "*.cache" — and it deletes everything under /tmp, because -delete runs before -name is ever tested. Always test with -print first.
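The safe workflow is mechanical: run the exact expression with -print, eyeball the list, then swap -print for -delete. A sketch in a scratch directory (file names are illustrative):

```shell
dir=$(mktemp -d)
touch "$dir/keep.txt" "$dir/junk.cache"
# Dry run: same filters, -print instead of -delete.
find "$dir" -name "*.cache" -print
# List looks right? Swap in -delete.
find "$dir" -name "*.cache" -delete
ls "$dir"    # keep.txt survives, junk.cache is gone
rm -rf "$dir"
```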
Real incident response example
Server compromised. You know the attacker got in around 14:00 yesterday. Find every file modified since then:
touch -t 202604151400 /tmp/marker
find / -newer /tmp/marker -not -path "/proc/*" -not -path "/sys/*" -type f
Create a reference file with the known breach time. Find everything newer. Exclude virtual filesystems. This gives you a timeline of everything the attacker touched, including backdoors, modified configs, and planted scripts.
Now pipe it to get details:
find / -newer /tmp/marker -not -path "/proc/*" -type f -exec ls -la {} +
Full permissions, ownership, and timestamps for every modified file. Ten seconds to run. Manual inspection of each directory would take hours.
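If you also want content hashes as evidence before anything changes again, swap ls -la for sha256sum in the same pipeline. Here's the pattern demonstrated in a scratch directory rather than against / (marker and file names are illustrative):

```shell
# Sketch: hash every file modified after the marker timestamp.
dir=$(mktemp -d)
touch -d "2020-06-01 14:00" "$dir/marker"      # known breach time
touch -d "2020-01-01 00:00" "$dir/untouched"   # predates the breach
echo 'nc -e /bin/sh' > "$dir/planted.sh"       # "attacker" file, mtime is now
find "$dir" -newer "$dir/marker" -type f -exec sha256sum {} +
# prints a hash line for planted.sh only
rm -rf "$dir"
```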
Why this isn't taught well
find has terrible man page ergonomics. The options are dense, the interaction between operators is non-obvious, and the difference between -exec {} \; and -exec {} + is buried in a paragraph that reads like a POSIX spec.
Most people learn find . -name "pattern" and stop there because the next level of the man page is intimidating. Understandable. But the difference between "I can find files by name" and "I can audit an entire filesystem in one command" is about two hours of practice with the operators above.
Several of the Linux challenges on SudoRank use find as the primary tool. Not because find is the only way, but because it's usually the fastest.