Kristian Ivanov

15 Linux commands I’ve used most recently and love in general

Photo by [Lukas](https://unsplash.com/@lukash?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)

Intro

I've been using Linux for 15+ years. Not because I am a hacker or anything; it is simply easier to set up dev environments on than Windows, and it offers better hardware options than a Mac. There is the occasional Nvidia driver issue, and gaming performance still isn't its strong suit, despite a lot of advancements in Proton, Lutris being amazing, and Steam supporting Linux for titles like Dota 2. As long as you are using it for work, though, it serves you well.

It provides extreme flexibility and options to configure everything to your liking. It also provides tens of thousands of commands (based on the number of man pages).

The problem with this is that, as a biology teacher of mine used to say, the main function of the brain is to forget things it doesn't need all the time. Because there are so many different commands, there is always something new to discover and a lot that gets forgotten, simply because it isn't used on a daily basis.

So, with all of that in mind, I'll share a number of useful commands I've discovered or rediscovered over the last week or so.

Reading large files — less

When dealing with large files, such as logs, opening them in an editor can be slow and memory-intensive. less provides an efficient way to view and navigate large files without loading the entire file into memory.

less largefile.txt

Once the file is open, typing /error searches for the keyword "error" within the file, and pressing F follows new lines as they are added (similar to tail -f).

You can efficiently scroll through and search thousands of lines of logs to find specific errors without opening the entire file in a text editor. This is especially useful when working with large system logs or data files.
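If you already know what you are looking for, less can jump straight to it on startup. A small sketch (the file name is just a placeholder):

less -N +/error largefile.txt

-N shows line numbers and +/error opens the file at the first occurrence of "error".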

Try opening something large with nano or vim and see the difference.

Link for more details

Find Files by Size — find

Disk space can quickly fill up, especially on systems with large media files, backups, or logs. Instead of manually checking file sizes across directories, you can use find to locate large files that might be hogging space.

find / -type f -size +500M

In this example, we are searching for all files larger than 500MB across the entire file system.

This lets you find and address large files taking up space without manually traversing directories. It's especially helpful when performing system maintenance or managing disk space on servers.

Recently I had an unpleasant experience with files taking up tens of gigabytes of storage space. It took me quite some time to figure out the root cause, and I have remembered this command fondly ever since.
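If you also want to see how big the offenders are, you can combine find with du and sort. A rough sketch (it assumes GNU sort, which supports the -h flag):

find / -type f -size +500M -exec du -h {} + 2>/dev/null | sort -rh | head -n 20

This prints the 20 largest matches with human-readable sizes, silencing permission errors along the way.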

You can also use find to recursively remove files based on a pattern:

find /path -name "*.log" -type f -delete

This finds and deletes all .log files under the specified directory and its subdirectories.
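Before running a destructive command like this, it is worth previewing what would be removed by swapping -delete for -print. For example, to target only logs older than 30 days (the path is a placeholder):

find /path -name "*.log" -type f -mtime +30 -print

Once the list looks right, replace -print with -delete.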

Link for more details

Search Inside Compressed Files — zgrep

Instead of decompressing these files every time you need to search for something inside them, zgrep allows you to search within compressed files directly.

zgrep "ERROR" /var/log/system.log.gz

An example of the output:

Jun 23 10:15:22 ERROR failed to start service

The matched line from within the compressed file is returned, without needing to extract the contents first.
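zgrep accepts the usual grep flags, so you can, for instance, search a whole set of rotated logs case-insensitively and print the file each match came from (the paths are examples):

zgrep -Hi "error" /var/log/system.log.*.gz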

Link for more details

Preview File Contents with Head/Tail

When analyzing logs or large files, sometimes you only need to see the first or last few lines, such as in cases of debugging or inspecting recently added log entries. head and tail make this quick and efficient. This is particularly beneficial when dealing with large files where only the start or end matters (e.g., logs).

head -n 20 access.log

tail -n 20 access.log

Both examples show up to 20 lines: the first from the start of the file, the second from the end.
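The two can also be combined to pull out a specific range of lines. As a sketch, this prints lines 101 to 120 of the file:

head -n 120 access.log | tail -n 20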

Link for more details

Link for more details

Bulk Text Replacement with sed

When working with configuration files, scripts, or large datasets, you may need to replace certain text patterns across multiple files. Instead of manually opening each file, sed automates the text replacement process in bulk.

sed -i 's/oldword/newword/g' *.txt

This replaces all occurrences of “oldword” with “newword” in all .txt files within the current directory.
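Since -i edits files in place, it is worth keeping backups the first time you run a substitution. GNU sed can do that for you if you give -i a suffix:

sed -i.bak 's/oldword/newword/g' *.txt

Each modified file gets a .bak copy of its original contents next to it.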

Link for more details

Find and Replace for Complex Patterns — perl

When you need to perform more complex text replacements, such as matching patterns using regular expressions, perl is a powerful tool. It handles advanced search-and-replace scenarios that sed might not handle well.

perl -pi -e 's/(abc\d+)/xyz$1/g' *.log

This replaces all instances of “abc” followed by digits (e.g., “abc123”) with “xyz” followed by the same digits (e.g., “xyz123”) in all .log files.
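One thing perl handles that sed struggles with is matching across line boundaries. A small sketch (the pattern and file name are just examples):

perl -0777 -pi -e 's/foo\s*\n\s*bar/foobar/g' notes.txt

-0777 slurps the whole file at once, so the regex can span multiple lines.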

Link for more details

Sort and Uniquely Count Lines

Analyzing log files or datasets often requires identifying unique lines and counting their occurrences. It’s useful for tasks like understanding user activity or identifying common patterns in logs.

sort access.log | uniq -c

The sort command sorts the file, and uniq -c counts how many times each unique line appears.

10 127.0.0.1 - - [23/Jun/2023] "GET /feature1"
 5 192.168.1.1 - - [23/Jun/2023] "GET /feature2"
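Piping the result through sort once more gives you the most frequent lines first, which is handy for spotting the noisiest clients or endpoints:

sort access.log | uniq -c | sort -rn | head -n 10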

Link for more details

Link for more details

Monitor Multiple Log Files Simultaneously — multitail

Using multitail, you can view and track changes in multiple log files in real time in a single terminal. Let's be honest: after a while the terminal ends up like the browser, with too many tabs open. This way, you can keep things more manageable.

multitail /var/log/syslog /var/log/auth.log

This command opens both syslog and auth.log in separate panes within the terminal, showing updates to both logs as they occur.
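multitail usually needs to be installed separately (e.g., from your distribution's package manager). If it isn't available, plain tail can follow several files at once too, printing a header before each file's output:

tail -f /var/log/syslog /var/log/auth.log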

Link for more details

Parallel Processing — xargs

When performing tasks like file deletion, backups, or processing large numbers of files, doing these tasks sequentially can take a long time. xargs allows you to run commands in parallel, significantly reducing execution time.

find ~/files-to-transfer/ -type f | xargs -P 4 -I {} scp {} user@remote-server.com:/remote/path/

The command will transfer multiple files in parallel to the remote server.

  • find ~/files-to-transfer/ -type f: This finds all the files in the ~/files-to-transfer/ directory.

  • xargs: Passes these files as arguments to scp.

  • -P 4: This runs up to 4 parallel scp processes, speeding up the file transfer.

  • -I {}: This tells xargs to replace {} with the name of each file found by find.

  • scp {}: Copies each file {} to the remote server (user@remote-server.com:/remote/path/).
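The same pattern works for local jobs as well. As a sketch, this compresses all .log files using up to four parallel gzip processes; -print0/-0 keeps file names with spaces intact, and -n 10 batches the files so the parallelism actually kicks in:

find . -name "*.log" -print0 | xargs -0 -n 10 -P 4 gzip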

Link for more details

Batch File Renaming — mmv

mmv is incredibly powerful when you need to rename large sets of files with complex rules. It allows you to use wildcards and placeholders for advanced renaming tasks, such as adding prefixes, changing parts of file names, or moving files across directories in bulk.

mmv 'backup_*.txt' "#1_$(date +%Y-%m-%d).bak"

  • 'backup_*.txt': This pattern matches all files starting with backup_ and ending with .txt.

  • #1: A placeholder that captures the part of the filename matched by * (e.g., report1, report2, data1, data2).

  • $(date +%Y-%m-%d): Expanded by the shell before mmv runs (which is why the destination pattern uses double quotes); it inserts the current date in the format YYYY-MM-DD.

  • .bak: The new file extension for the renamed files.
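A simpler illustration of the same wildcard/placeholder idea, renaming every .jpeg file to .jpg:

mmv '*.jpeg' '#1.jpg'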

Link for more details

Search for Text in Multiple Files — grep

When debugging or searching through large codebases, logs, or configuration files, you often need to find specific keywords or patterns across many files. grep allows you to search through multiple files quickly and efficiently.

It is basically Ctrl+Shift+F for your terminal.

grep -Hnr "ERROR" /var/log/

This recursively searches for the word "ERROR" in every file under the specified directory, printing the file name (-H) and line number (-n) of each match.
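You can also narrow the search to certain file types with --include (GNU grep), which keeps results focused in large codebases. The pattern and path here are just examples:

grep -rn --include="*.py" "TODO" .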

Link for more details

Filter Log Files — awk

Log files and datasets are often structured in columns (e.g., CSV files or tab-separated data). awk allows you to extract specific columns and apply filters based on conditions, making it ideal for data analysis and report generation.

awk '$3 > 100' file.txt

This command prints only the lines where the value in the third column is greater than 100. Its output can look something like this:

line 5: 120
line 9: 133
line 13: 101
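Beyond filtering, awk can also aggregate columns. For example, summing the third column of a comma-separated file (the file name and delimiter are assumptions):

awk -F',' '{sum += $3} END {print sum}' file.csv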

Link for more details

Extract Specific Columns from Files — cut

Similar to awk, cut can be used to extract only certain columns/fields from files. The difference is that cut offers no conditional filtering; it simply extracts fields.

cut -d',' -f2,3 file.csv

  • -d',': Specifies the delimiter (in this case, a comma).

  • -f2,3: Extracts the second and third columns.
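A classic everyday example is pulling the first field out of /etc/passwd to list all user names:

cut -d':' -f1 /etc/passwd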

Link for more details

Convert Images from One Format to Another — convert

If you need to convert images from one format to another, such as turning .png files into .jpg, or resize images for web optimization, the convert command from ImageMagick makes this easy. You can also use it for converting, resizing, optimizing, or applying other transformations in bulk.

convert image.png image.jpg

This one is self-explanatory.

convert input.png -resize 800x600 -gravity southeast -draw "image Over 0,0 0,0 'watermark.png'" -quality 85 output.jpg

This one is a bit more interesting. Here are some details for it:

  • input.png: The original image you want to process.

  • -resize 800x600: Resizes the image to 800x600 pixels while maintaining the aspect ratio (if the dimensions don't match exactly).

  • -gravity southeast: Positions the watermark in the bottom-right corner of the image.

  • -draw "image Over 0,0 0,0 'watermark.png'": Draws the watermark image (watermark.png) onto the original image at the specified position.

  • -quality 85: Sets the output image quality to 85% (a balance between quality and file size for web use).

  • output.jpg: The final processed image in .jpg format.

And another one:

for img in *.png; do convert "$img" -resize 1024x768 -colorspace Gray "${img%.png}.jpg"; done

This one is a bit complex as well:

  • for img in *.png; do ...; done: Loops through all .png files in the current directory.

  • convert "$img": Refers to each image file in the loop.

  • -resize 1024x768: Resizes each image to 1024x768 pixels while maintaining the aspect ratio.

  • -colorspace Gray: Converts each image to grayscale.

  • "${img%.png}.jpg": Saves the output image in .jpg format with the same base name, but replacing .png with .jpg.

And my favorite use case — creating a .gif out of images:

convert -delay 20 -loop 0 frame*.png animation.gif
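For bulk work, ImageMagick also ships mogrify, which applies the same kind of transformations to many files without a shell loop. A sketch that creates resized .jpg copies of every .png in the current directory:

mogrify -format jpg -resize 1024x768 *.png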

Link for more details

Killing processes in a UX-friendly way — xkill

Speaking of my favorite use cases, the command I have used the most over the last 15+ years is xkill.

Running xkill turns your mouse cursor into a crosshair (X), and you can click on any window to immediately force-close it.

Why do I love it? Because I can just click on the problematic window instead of manually hunting for its process.
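xkill is part of the X11 utilities, so it needs a graphical session. When there is no window to click on, pkill is a rough non-graphical stand-in (the process name is just an example):

pkill -f firefox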

Link for more details

If you have gotten this far, I thank you and I hope it was useful to you! Here is a cool image of a cat as a thank you!

Photo by [Yerlin Matu](https://unsplash.com/@yerlinmatu?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)
