Recently, I completed the LeetCode Bash problem set, and in this post I'll share how I approached these challenges, what I learned, and why these problems are surprisingly useful even for developers who don't write scripts heavily.
Why Bash Problems?
At first glance, Bash problems may look trivial compared to data structures and algorithms in Python or C++. But they focus on something equally important: practical data manipulation using Unix tools. Almost every engineer encounters situations where text processing or quick data cleanup is needed.
The problems helped me understand:
- How to use pipes (`|`) to chain commands
- How regex + `grep` can filter structured text
- How to reshape data with `awk`
- When to use `sed` for substitutions
- And how utilities like `sort`, `uniq`, `cut`, and `tr` solve everyday text challenges
My Approach
When I saw a problem, I tried to answer three questions:
- What is the core task? (e.g., extract a line, validate a format, count words)
- Which Unix tool is most suited? (e.g., `grep` for patterns, `awk` for columns)
- Can it be done in a one-liner? (LeetCode problems encourage concise pipelines)
By asking these, I avoided overcomplicating and learned to think in terms of data streams.
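To make the three questions concrete, here's a hypothetical mini-task (the log file and its contents are made up for illustration): count how many lines in a log mention `ERROR`. Core task: filter and count; best tool: `grep`; one-liner: yes.

```shell
# Fabricated sample log for demonstration:
printf 'ok\nERROR disk full\nok\nERROR timeout\n' > /tmp/app.log

# grep -c counts matching lines directly, no pipeline needed:
grep -c 'ERROR' /tmp/app.log
# prints: 2
```

Often the "right tool" question collapses a would-be pipeline into a single command, as `-c` does here.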
Example Problems & Explanations
1. Valid Phone Numbers
Problem: Print all valid phone numbers from a file.
```shell
grep -E '^([0-9]{3}-[0-9]{3}-[0-9]{4}|\([0-9]{3}\) [0-9]{3}-[0-9]{4})$' file.txt
```
Explanation:
- `grep -E` enables extended regex.
- `^...$` anchors the full line.
- Two formats are allowed: `123-456-7890` or `(123) 456-7890`.
- Regex alternation `|` captures both.
This problem trains regex precision.
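A quick sanity check of the pattern against made-up input (the file contents below are invented, not LeetCode's test data):

```shell
# One valid dashed number, one invalid (spaces instead of dashes), one valid parenthesized:
printf '987-123-4567\n123 456 7890\n(123) 456-7890\n' > /tmp/file.txt

grep -E '^([0-9]{3}-[0-9]{3}-[0-9]{4}|\([0-9]{3}\) [0-9]{3}-[0-9]{4})$' /tmp/file.txt
# prints:
# 987-123-4567
# (123) 456-7890
```

The middle line fails because `123 456 7890` matches neither alternative, which is exactly what the `^...$` anchors enforce.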
2. Word Frequency
Problem: Count frequency of each word in a file and sort by frequency.
```shell
cat words.txt | tr -s ' ' '\n' | sort | uniq -c | sort -nr
```
Explanation:
- `tr -s ' ' '\n'` replaces spaces with newlines (one word per line); `-s` squeezes runs of spaces so blank lines aren't produced.
- `sort` groups identical words together.
- `uniq -c` counts occurrences.
- `sort -nr` sorts numerically and in reverse (highest first).
This problem shows the power of pipelines.
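Here's the pipeline run on made-up input. Note that `uniq -c` emits "count word", so the full LeetCode answer typically appends one more `awk` step to swap the order into the expected "word count" format:

```shell
# Fabricated input, similar in spirit to LeetCode's example:
printf 'the day is sunny the the\nthe sunny is is\n' > /tmp/words.txt

cat /tmp/words.txt | tr -s ' ' '\n' | sort | uniq -c | sort -nr | awk '{print $2, $1}'
# prints:
# the 4
# is 3
# sunny 2
# day 1
```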
3. Transpose File
Problem: Transpose rows and columns.
```shell
awk '{
    for (i = 1; i <= NF; i++) {
        if (NR == 1) { out[i] = $i }
        else { out[i] = out[i] " " $i }
    }
} END {
    for (i = 1; i <= NF; i++) print out[i]
}' file.txt
```
Explanation:
- `NF` = number of fields in the current line.
- `NR` = current row number.
- For each column, we build up a space-separated string.
- After processing all lines, print the collected columns. (In the `END` block, `NF` still holds the field count of the last line read, so this relies on every row having the same number of columns, which the problem guarantees.)
This problem illustrates why `awk` is the Swiss army knife of text processing.
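To see the transpose in action, here's a run on a small fabricated file:

```shell
# Invented 3x2 input:
printf 'name age\nalice 21\nryan 30\n' > /tmp/t.txt

awk '{
    for (i = 1; i <= NF; i++) {
        if (NR == 1) { out[i] = $i }
        else { out[i] = out[i] " " $i }
    }
} END {
    for (i = 1; i <= NF; i++) print out[i]
}' /tmp/t.txt
# prints:
# name alice ryan
# age 21 30
```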
4. Tenth Line
Problem: Print the 10th line of a file.
```shell
sed -n '10p' file.txt
```
Explanation:
- `-n` suppresses automatic printing.
- `'10p'` explicitly prints line 10.

A simple but elegant use of `sed`.
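The same task has a few equivalent spellings worth knowing; a quick sketch (the `seq`-generated file is just made-up test data):

```shell
# Fabricated 12-line file:
seq 12 > /tmp/f.txt

sed -n '10p' /tmp/f.txt            # sed: print only line 10
awk 'NR == 10' /tmp/f.txt          # awk: default action prints when NR is 10
head -n 10 /tmp/f.txt | tail -n 1  # head/tail: keep first 10 lines, take the last of those
# each prints: 10
```

One caveat: if the file has fewer than 10 lines, `sed` and `awk` print nothing, while `head | tail` wrongly prints the file's last line.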
Key Takeaways
- Think in streams: Bash treats text as flowing streams, not fixed data structures.
- Pick the right tool: `awk` for fields, `grep` for regex, `sed` for edits.
- Chain commands: combining simple commands often beats writing complex scripts.
Final Thoughts
Even though these are small exercises, they gave me confidence in handling real-world text processing tasks. From cleaning logs to quick file inspections, these tools save enormous time.
I've uploaded all my solutions to GitHub. Check them out and feel free to suggest improvements!
You can also find me on LinkedIn.