There are great guides on bash (or any Bourne-compatible shell: sh, zsh, ksh) out there. I don't want to teach you bash, or any special trick. I want to show you why I think it is worth learning. It won't be much but, hopefully, it is enough if the following piques your curiosity:
$ git log --name-only --pretty="format:" \
| sed '/^\s*$/'d \
| sort \
| uniq -c \
| sort -rn \
| head
I assume you already know how to use a shell to run commands, and that you have git
installed.
Composition using Pipes
On a POSIX shell, bash for example, you can use pipes (|) to feed the output of one program into another as its input:
$ seq 1 5
1
2
3
4
5
$ seq 1 5 | sort -n -r
5
4
3
2
1
To learn what a command does you can use man <command>, <command> --help, info <command> or help <command>. An excerpt from the man pages of the commands above shows:
- seq <first> <last> prints a sequence of numbers from first to last.
- sort [options] [file] sort lines of text files. Without a file, it reads from standard input.
Notice things between [brackets] and <less-greater signs>? They mean [optional] and <required>, a convention almost everyone follows. All programs used in these examples are available even on the most basic distributions. Even Alpine, which is known for being very small and lean:
$ docker run --rm -it alpine sh
# seq 1 3 | sort -n -r
3
2
1
It is worth noting that man (and its counterparts) works offline. Getting to know them and the pager will give you access to invaluable knowledge (git man pages are a treat).
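A quick way to get comfortable with both, assuming less is your pager (it usually is) and that the git documentation is installed:
$ man sort
$ man git-log
Inside the pager, / searches forward, n jumps to the next match and q quits.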
Loops and Conditionals
You can think of a shell as a "place to run other programs", when it really is an infinite loop running one thing: readline (read a line, execute it, repeat). Once you wrap your head around that, you can quickly develop and debug small programs, like a never-ending test-suite.
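A toy version of that loop, just to make the idea concrete (a sketch, not how bash is actually implemented):
$ while read -r -p '> ' line; do eval "$line"; done
> echo hello from my tiny shell
hello from my tiny shell
> seq 1 3 | sort -r
3
2
1
Press CTRL-D to leave the inner loop and get your real shell back.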
I like to approach this by using history expansion (zsh, macOS's default shell, also has it):
- !! re-executes the previous command.
- !<prefix> executes the last command that starts with prefix.
- !$ expands to the last argument ($ in regex means "the end of a string") of the previous command.
Some great CLI citizens take advantage of this. After a git clone <repo> [dir], for example, you can cd !$ to enter the directory you've just cloned. Notice how the last argument is the one you are most likely to reuse in the next command. That, my great comrade, is good design. Remember this and you will remember the order of arguments for some pretty useful programs:
- ln -s <path/to/file> <path/to/symlink>: the symlink is the useful part, so it is last. You can use !$ to run it, or cd !$ if it is a directory.
- cp <source [source [source]]> <dest>: you can copy multiple files and directories to one destination, which is the useful part. So it is last.
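A small, made-up session to build the muscle memory (the path is just for illustration):
$ mkdir -p ~/src/example-tool
$ cd !$
$ pwd
/home/augustohp/src/example-tool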
Back to our "never-ending test-suite": I try a command until I am satisfied with its result and then pass it on with history expansion to a loop or another command.
Suppose you want to update all Git repositories inside your $HOME
directory. The outline of the idea: (1) find all directories with .git
inside of them, (2) for every repository cd <repo>
into it and (3) run git pull
.
$ find "$HOME" -type d -name ".git"
/home/augustohp/.tmux/plugins/tpm/.git
/home/augustohp/.vim/bundle/vim-nerdtree-tabs/.git
/home/augustohp/.vim/bundle/nvim-lspconfig/.git
/home/augustohp/.vim/bundle/trouble.nvim/.git
/home/augustohp/src/github.com/expressjs/.git
The command above lists all .git (-name) directories (-type d) inside $HOME. Note the results have .git on them - we want their parent directories. So I will use sed to remove .git from the end of each line, and keep trying until I have:
$ !find | sed 's/\/\.git$//'
/home/augustohp/.tmux/plugins/tpm
/home/augustohp/.vim/bundle/vim-nerdtree-tabs
/home/augustohp/.vim/bundle/nvim-lspconfig
/home/augustohp/.vim/bundle/trouble.nvim
/home/augustohp/src/github.com/expressjs
sed accepts any character as the regular expression delimiter. We are using / (as most examples you see do), but when dealing with paths (which use / as the directory separator) it is useful to pick another one, avoiding the escape (\). Dot is also a special character we need to escape (\), since it matches "any character". Using another delimiter, the command becomes:
$ find "$HOME" -type d -name ".git" | sed 's#/\.git$##'
/home/augustohp/.tmux/plugins/tpm
/home/augustohp/.vim/bundle/vim-nerdtree-tabs
/home/augustohp/.vim/bundle/nvim-lspconfig
/home/augustohp/.vim/bundle/trouble.nvim
/home/augustohp/src/github.com/expressjs
Bash, like other shells, has conditionals and loops. With variables and command substitution, we can start to compose more complex instructions:
$ find "$HOME" -type d -name ".git" | sed 's/\/\.git$//'
$ repositories=$(!!)
$ for repo in $repositories
do
cd "$repo"
git pull --autostash
cd -
done
- $(!!) executes the previous command (!!) inside a sub-shell and returns its output.
- repositories=$(!!) stores the output of the previous command ($(!!)) in the repositories variable.
- for name [ [in [words …] ] ; ] do commands; done executes a loop:
  - cd "$repo" enters the repository. It is good to always quote (") paths because they might have spaces in their names.
  - git pull --autostash will update the repository, stashing any uncommitted changes before the pull and restoring them after.
  - cd - returns to the previous directory, the one before the cd was made.
  - If you want to do that in one line, you need to change each \n (newline) to ;. If you search the command using history, you will see it in that short format (as shown right after this list).
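Written as a one-liner, the loop above becomes:
$ for repo in $repositories; do cd "$repo"; git pull --autostash; cd -; done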
Let's say you don't want to update repositories that have uncommitted changes in them. For that, the output of git status --porcelain should be empty, which can be tested with test -z (man test to see the available operators for if conditions):
$ for repo in $(find "$HOME" -type d -name ".git" | sed 's/\/\.git$//')
do
cd "$repo"
git_status_output="$(git status --porcelain)"
if [ -z "$git_status_output" ]
then
git pull --autostash
else
echo "Error: $repo has uncommitted changes."
fi
cd -
done
Conditionals and exit codes
You know conditionals, right? On shells they look the same, but they have a twist that is useful for running commands: the return of a command can always be evaluated as a conditional. If it runs successfully, it is true. Every command that returns 0 (zero) is successful, so commands can have as many error codes as they want.
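grep is a nice illustration: it documents 0 for "found a match", 1 for "no match" and 2 for an actual error (the special variable $?, explained just below, holds that code):
$ grep -q root /etc/passwd; echo $?
0
$ grep -q no-such-user /etc/passwd; echo $?
1
$ grep -q root /no/such/file; echo $?
grep: /no/such/file: No such file or directory
2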
I've written the instructions long-form to make them easier to follow; usually I'd one-line them with the && (AND) and || (OR) operators:
$ cd /tmp/non-existing-directory
-bash: cd: /tmp/non-existing-directory: No such file or directory
$ echo $?
1
The special variable $? holds the return code of the previous command. Since it is 1, it was an error - if the error message did not already give it away. As you've guessed, you can do this:
$ if cd /tmp/non-existing-directory
then
echo "great success!"
else
echo "not"
fi
-bash: cd: /tmp/non-existing-directory: No such file or directory
not
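The same check, one-lined with the && and || operators mentioned earlier (keeping in mind that in a && b || c the c part also runs if b itself fails):
$ cd /tmp/non-existing-directory && echo "great success!" || echo "not"
-bash: cd: /tmp/non-existing-directory: No such file or directory
not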
You can, of course, get rid of these error messages using redirections:
$ cd /tmp/non-existing-directory 2> /dev/null
$ echo $?
1
The 2>
redirects file descriptor 2
(stderr
) to /dev/null
. You can also shorten every conditional using ||
and &&
operators:
$ test -z "$git_status_output" && git pull --autostash
This executes git pull only if test -z succeeds - returns a status code ($?) of 0 (success), meaning the string is empty. As the shell already has conditionals built into the REPL, the test program just provides some handy operators:
- -z for testing for empty strings and -n for non-empty strings.
- -f for existing files and -d for existing directories.
- -lt for "less than" and -le for "less than or equal".
How do you see other conditional operators? Since it is a program: man test
.
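A few throwaway checks to get a feel for them (the paths are only examples):
$ test -d "$HOME" && echo "home sweet home"
home sweet home
$ test -f /no/such/file || echo "it is not there"
it is not there
$ [ 3 -lt 10 ] && echo "3 is less than 10"
3 is less than 10
[ is just another name for test (plus a closing ]), which is why the if [ -z ... ] from the earlier loop works.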
What can you do with it?
This may look like "too much" at first glance, but think about it: how many things could you automate, since everything is a program and follows the same conventions?
If, for example, you have gh
(GitHub CLI program) installed, you can clone all the repositories of an organisation with:
for repo in $(gh repo list --limit 200 --source --no-archived "$owner" | awk '{print $1 }')
do
gh repo clone "$repo"
done
As long as programs return text (spoiler alert: they will) you can compose them with other programs. If you need to transform text, for example, you have some great tools already available. Here are the ones I've used the most:
$ alias rank="sort | uniq -c | sort -nr"
$ alias second_column_only='awk "{ print \$2 }"'
$ alias top10="rank | head -n 10 | second_column_only"
$ history | second_column_only | top10
awk
column
sed
cut
cat
tr
split
mktemp
fg
z - (zoxide, this one needs installation)
fzf - (fuzzy finder, this too needs installation)
What seems like a limitation at first (the output is just text) is actually great software design. You will notice everything is already done for you: from getting the nth column of an output to splitting a huge file into smaller ones (with split).
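For example, extracting the second column of some text, and chopping a (made-up) big.log into 1000-line pieces named part_aa, part_ab and so on:
$ printf 'alice 42\nbob 7\n' | awk '{ print $2 }'
42
7
$ split -l 1000 big.log part_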
What now?
Time to make your own history. Make sure it is configured right in your shell; I like to:
- Keep it big, disk space is cheap. The default usually only holds a couple of hundred commands; I like to keep a lot more (see the sketch after this list). You can use CTRL-R to search it and, since its output is text, you can... you get the idea.
- Ignore entries that start with a space. You will always type something (e.g. an API key) you don't want saved in a file somewhere.
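A minimal sketch of that configuration, assuming bash (zsh has its own equivalents, like SAVEHIST and setopt HIST_IGNORE_SPACE):
# In ~/.bashrc
HISTSIZE=100000               # commands kept in memory for the session
HISTFILESIZE=100000           # commands kept in the history file on disk
HISTCONTROL=ignorespace       # do not save commands that start with a space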
I know it is tempting to Google for one-liners and such; try not to. The best feature of a shell is to make it your own. Unlike an IDE or GUI, it expects you to customize it: to make its output your own. So use it: find a pattern, create a shortcut for it and learn something new along the way (man pages). All shells allow you to load custom files on startup, use them.
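For instance, the rank alias from before is a good candidate for such a file (the git shortcut is just an example of a pattern I type often):
# In ~/.bashrc or ~/.zshrc
alias rank="sort | uniq -c | sort -nr"
alias glog="git log --oneline --graph --decorate"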
The shell is a program. If you are a programmer, make it a good one. The journey will teach you a lot.