DEV Community

Ertugrul

🐚 Tackling LeetCode Bash Problems – A Practical Guide

Recently, I completed the LeetCode Bash problem set, and in this post I’ll share how I approached these challenges, what I learned, and why these problems are surprisingly useful even for developers who don’t do much shell scripting.


🚀 Why Bash Problems?

At first glance, Bash problems may look trivial compared to data structures and algorithms in Python or C++. But they focus on something equally important: practical data manipulation using Unix tools. Almost every engineer encounters situations where text processing or quick data cleanup is needed.

The problems helped me understand:

  • How to use pipes (|) to chain commands
  • How regex + grep can filter structured text
  • How to reshape data with awk
  • When to use sed for substitutions
  • And how utilities like sort, uniq, cut, and tr solve everyday text challenges
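As a quick illustration of the last two utilities, here are two hypothetical one-liners (the sample data is made up):

```shell
# cut extracts delimited columns; tr translates character sets.
printf 'alice,25\nbob,30\n' | cut -d, -f1   # → alice, then bob
printf 'hello' | tr 'a-z' 'A-Z'             # → HELLO
```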

🛠 My Approach

When I saw a problem, I tried to answer three questions:

  1. What is the core task? (e.g., extract a line, validate a format, count words)
  2. Which Unix tool is most suited? (e.g., grep for patterns, awk for columns)
  3. Can it be done in a one-liner? (LeetCode problems encourage concise pipelines)

By asking these, I avoided overcomplicating and learned to think in terms of data streams.
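To make this concrete, here is how the three questions play out on a made-up task like "count the error lines in a log" (the file name and contents are purely illustrative):

```shell
# 1. Core task: count lines starting with ERROR.
# 2. Best-suited tool: grep, with -c to count matches.
# 3. One-liner: yes.
printf 'INFO ok\nERROR disk full\nERROR timeout\n' > app.log
grep -c '^ERROR' app.log   # → 2
```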


📖 Example Problems & Explanations

1️⃣ Valid Phone Numbers

Problem: Print all valid phone numbers from a file.

grep -E '^([0-9]{3}-[0-9]{3}-[0-9]{4}|\([0-9]{3}\) [0-9]{3}-[0-9]{4})$' file.txt

🔎 Explanation:

  • grep -E enables extended regex.
  • ^...$ anchors the full line.
  • Two formats are allowed: 123-456-7890 or (123) 456-7890.
  • Regex alternation | captures both.

This problem trains regex precision.
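To see it in action, here is a quick sanity check with an improvised input file (the contents roughly mirror the LeetCode test case):

```shell
# One valid dashed number, one invalid, one valid parenthesized number.
printf '987-123-4567\n123 456 7890\n(123) 456-7890\n' > file.txt

grep -E '^([0-9]{3}-[0-9]{3}-[0-9]{4}|\([0-9]{3}\) [0-9]{3}-[0-9]{4})$' file.txt
# Prints:
# 987-123-4567
# (123) 456-7890
```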


2️⃣ Word Frequency

Problem: Count frequency of each word in a file and sort by frequency.

tr -s ' ' '\n' < words.txt | sort | uniq -c | sort -nr

🔎 Explanation:

  • tr -s ' ' '\n' squeezes runs of spaces and converts them to newlines, so each word ends up on its own line.
  • sort groups identical words together.
  • uniq -c counts occurrences.
  • sort -nr sorts numerically and in reverse (highest first).

This problem shows the power of pipelines.
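Worth noting: the LeetCode version of this problem expects each output line as `word count`, while uniq -c emits the count first. One way to swap the columns is an extra awk stage; a sketch with made-up input:

```shell
printf 'the day is sunny the the\nthe sunny is is\n' > words.txt

# Same pipeline as above, plus awk to print "word count" instead of "count word".
tr -s ' ' '\n' < words.txt | sort | uniq -c | sort -nr | awk '{print $2, $1}'
# Prints:
# the 4
# is 3
# sunny 2
# day 1
```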


3️⃣ Transpose File

Problem: Transpose rows and columns.

awk '{
  for (i=1; i<=NF; i++) {
    if (NR==1) { out[i]=$i }
    else { out[i]=out[i]" "$i }
  }
} END {
  for (i=1; i<=NF; i++) print out[i]
}' file.txt

🔎 Explanation:

  • NF = number of fields in current line.
  • NR = current row number.
  • For each column, we build a string.
  • After processing all lines, print the collected columns. (In the END block, NF still holds the field count of the last line, which works here because every row has the same number of columns.)

This problem illustrates why awk is the Swiss army knife of text processing.
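A quick run on a small improvised file shows the transposition:

```shell
printf 'name age\nalice 21\nryan 30\n' > file.txt

awk '{
  for (i=1; i<=NF; i++) {
    if (NR==1) { out[i]=$i }      # first row seeds each column string
    else { out[i]=out[i]" "$i }   # later rows append their i-th field
  }
} END {
  for (i=1; i<=NF; i++) print out[i]
}' file.txt
# Prints:
# name alice ryan
# age 21 30
```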


4️⃣ Tenth Line

Problem: Print the 10th line of a file.

sed -n '10p' file.txt

🔎 Explanation:

  • -n suppresses automatic printing.
  • '10p' explicitly prints line 10.

A simple but elegant use of sed.
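There are equivalent ways to print a specific line; two common alternatives, shown here on a made-up demo file:

```shell
seq 1 12 > file.txt                 # 12 numbered lines for the demo

awk 'NR == 10' file.txt             # → 10
tail -n +10 file.txt | head -n 1    # → 10
# Caveat: head -n 10 file.txt | tail -n 1 looks equivalent, but on a file
# shorter than 10 lines it wrongly prints the last line instead of nothing.
```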


🎯 Key Takeaways

  • Think in streams: Bash treats text as flowing streams, not fixed data structures.
  • Pick the right tool: awk for fields, grep for regex, sed for edits.
  • Chain commands: Combining simple commands often beats writing complex scripts.

📌 Final Thoughts

Even though these are small exercises, they gave me confidence in handling real-world text processing tasks. From cleaning logs to quick file inspections, these tools save enormous time.

👉 I’ve uploaded all my solutions to GitHub. Check them out and feel free to suggest improvements!
👉 You can also find me on LinkedIn.
