Why
Suppose you just received a brand-new computer. You have a bunch of software to install (your favorite browser, code editor, window management tool, etc.) plus some configuration and shortcut tweaks. How long does it take you to be fully productive once it's unboxed? One hour, one day, one week?
If you do this manually, you'll make mistakes, because we are all human.
You can pull it off with a well-written todo list but, wait, do you really perform all the downloads, clicks, and configuration by hand? You work in computer science: use computers for the repetitive and boring tasks.
Now suppose your laptop is ready for work. You're happy with it and want the same work environment on your desktop, which runs a different operating system. Should you start again from a blank page?
That's why you need installation automation. These setups are often called "Dotfiles". Let's dive into it.
What
We want to put all our configuration for various environments in a place that is easy to share and reuse, and that is ideally idempotent.
Context
The following guidelines and explanations apply to Unix-like operating systems. Sorry, Windows users: I haven't used that system for a while, so I can't speak to it. They do work with the Windows Subsystem for Linux, though, because it's a Unix system under the hood.
Content
We have two things to take care of: the installation of software and its configuration.
For the installation part, thankfully, most operating systems have a built-in or mainstream package manager: Aptitude, Brew, Pacman, you name it. That's a good starting point, but not every tool is available there, or for every architecture. You may need to download a binary or an archive and put it in the right place.
For the configuration part, it's more ~complicated~ diverse. Some tools rely on files, others on environment variables, and others are simply not easy to configure, e.g. hidden or binary files.
Note that Brew, the package manager for macOS, is not built-in and must be installed manually first.
Storage
It looks like we're starting to talk about code, notably infrastructure as code. Nowadays, code lives in a version control system (VCS), and the most broadly used one is Git. To make your setup shareable, store your Git repository on GitHub, GitLab, Bitbucket, etc. They are easy to use, widely adopted by the community, and easy to find.
Git has many features; the one we'll focus on is branching. With branches, you can handle the architecture × operating-system matrix with ease: a branch for "personal-macos", a branch for "work-ubuntu", a branch for "home-raspbian", etc. You can also add conditions in your code to do things based on hostnames, environment variables, and so on.
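Those conditions can be as simple as inspecting `uname`. A small sketch (normalizing the names the way I do is a convention, not a standard):

```shell
#!/usr/bin/env bash

# Detect the operating system and normalize the architecture name,
# so scripts can branch on "${OS}/${ARCH}".
OS="$(uname -s | tr '[:upper:]' '[:lower:]')" # e.g. "linux" or "darwin"

case "$(uname -m)" in
x86_64) ARCH="amd64" ;;
arm64 | aarch64) ARCH="arm64" ;;
*) ARCH="$(uname -m)" ;; # keep the raw value for exotic architectures
esac

printf "%s/%s\n" "${OS}" "${ARCH}"
```

Any install script can then test `${OS}` or `${ARCH}` instead of duplicating whole branches.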
My Dotfiles are in place for personal and professional contexts, on amd64 and arm64 architectures, on macOS and Linux. They all live in one repository with only two branches (main and work), and I normalize many of the differences between operating systems.
How
Bash
You can find many tools on GitHub (or elsewhere) to bootstrap your Dotfiles. Some people rely on Ansible, others on tools you have to install first. But how do you install the tool that installs the tools? Manual installation is not an option. It's a chicken-and-egg problem.
Obviously, you need a starting point and must ensure it's available everywhere. On Unix-like systems, Bash is the common denominator. Being the common denominator is a hard responsibility when you are a thirty-year-old piece of software.
For example, in the latest macOS version, the shipped bash is 3.2.57, a version from 2007, because of a licensing change. I won't detail all the differences between v3 and v5 (the current one in 2021), but you can see one of them below.
MY_VARIABLE="macOS"
# Lower case, works only with bash 4+
echo "${MY_VARIABLE,,}"
# Lower case, the old way, bash 3 compliant
echo "${MY_VARIABLE}" | tr "[:upper:]" "[:lower:]"
So Bash is the most broadly compatible software already installed on most systems. We know how to run our code, but how do we retrieve it?
Zip & Curl
We saw before that the best way to store our Dotfiles is in Git, but git must be installed to clone the repository that will install git. Chicken-and-egg problem again. Fortunately, forges like GitHub or GitLab provide a way to download a repository as a Zip archive, and unzip, like bash, is present natively on most distributions.
For fetching data, we also need a tool. curl is the common way to make a request to the Internet from a terminal and, guess what, it's also built-in on most distributions.
The golden triangle
Thanks to Bash piping, we can curl a bootstrap script, pipe it to bash, and start the installation by unzip-ing the archive of the repository. All the tools are present after installation (tested on macOS, Manjaro, and Debian 10).
curl "https://my.bootstrap.script" | bash
⚠️ When running this kind of command, always check what you are about to execute, especially when some "one-liner install scripts" require sudo. The Internet is a cool thing, but not everyone acts like a cool person on it.
In the bootstrap script, you can choose where the Dotfiles will live (in my case, I place them in ${HOME}/code/dotfiles). I would like to bring your attention to how the curl options are written: in their long format. When writing a script, you're not in a terminal trying to be quick and limit your keystrokes. You write code that will be shared and must be comprehensible to other humans. Be verbose.
#!/usr/bin/env bash

set -o nounset -o pipefail -o errexit

main() {
  local INSTALL_PATH="${HOME}/code"
  local GITHUB_USER="ViBiOh"
  local DOTFILES_NAME="dotfiles"
  local DOTFILES_BRANCH="main"
  local ARCHIVE_FILENAME="${INSTALL_PATH}/dotfiles.zip"

  mkdir -p "${INSTALL_PATH}"

  curl \
    --disable \
    --silent \
    --show-error \
    --location \
    --max-time 60 \
    --output "${ARCHIVE_FILENAME}" "https://github.com/${GITHUB_USER}/${DOTFILES_NAME}/archive/${DOTFILES_BRANCH}.zip"

  unzip "${ARCHIVE_FILENAME}" -d "${INSTALL_PATH}"
  rm -f "${ARCHIVE_FILENAME}"

  rm -rf "${INSTALL_PATH:?}/${DOTFILES_NAME}"
  mv "${INSTALL_PATH}/${DOTFILES_NAME}-${DOTFILES_BRANCH}" "${INSTALL_PATH}/${DOTFILES_NAME}"

  (
    cd "${INSTALL_PATH}/${DOTFILES_NAME}"
    "./init" -a

    git init
    git remote add origin "https://github.com/${GITHUB_USER}/${DOTFILES_NAME}.git"
    git fetch origin
    git checkout --force "${DOTFILES_BRANCH}"
  )
}

main
The bootstrap script does the "first download" of all the needed files. Once done, the init script takes over: it installs and configures everything, and you'll run it many times in the future.
At the end of the script, you can see git commands. After a successful installation, Git is installed, so I reconcile the downloaded archive (which has no .git folder) with the Git upstream.
Structure
The first things we need in our Dotfiles repository are the two scripts described before: bootstrap and init. You'll also need at least three more folders: install, symlinks and sources. Let's dive into them.
> tree -L 1
.
├── bootstrap
├── init
├── install/
├── sources/
└── symlinks/
Install
Software is most of the time available as a binary, and maybe in your favorite package manager. But there are also tasks that are not "software installation" strictly speaking, yet need to run at least once on your computer (e.g. disabling unwanted features of your OS, generating a configuration file that cannot be symlinked, etc.). From my point of view, they are part of the installation scripts.
Not all the software I use is available in the package manager. Package managers also often pull in optional software I don't want, and install it globally, which can interfere with other users. I tend to download binaries directly from GitHub and put them in my ${PATH} instead of relying on the package manager. It's also easier to pin a defined version, and binaries tend to be updated faster than package upstreams.
In an ideal world, Dotfiles should not run any command with sudo. You install your configuration on a machine; keep everything in your ${HOME} if you can. For example, I create a ${HOME}/opt/ folder for putting my stuff in (GOPATH, Python packages, etc.) and add ${HOME}/opt/bin to my ${PATH}. If I delete my opt/ folder, I'm the only person impacted.
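As a sketch, here is what that sudo-free layout can look like (`hello-dotfiles` is a hypothetical placeholder for any binary you would actually download there):

```shell
#!/usr/bin/env bash

# Everything lives under ${HOME}/opt: no sudo needed, and deleting
# the folder impacts only the current user.
mkdir -p "${HOME}/opt/bin"
export PATH="${HOME}/opt/bin:${PATH}"

# Any executable dropped in ${HOME}/opt/bin is now found like any other
# command ("hello-dotfiles" is a placeholder for a real downloaded binary).
printf '#!/usr/bin/env bash\necho "hello"\n' >"${HOME}/opt/bin/hello-dotfiles"
chmod +x "${HOME}/opt/bin/hello-dotfiles"

hello-dotfiles # prints "hello"
```

The `export PATH=…` line belongs in your sourced shell configuration so every future terminal picks it up.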
In the install/ folder, you can separate the concerns of your installation by splitting things into different files. This way, you can disable a script based on an environment variable. E.g. on my server, I don't need my code editor: I set the environment variable DOTFILES_NO_EDITOR and the install script will not run the install/editor file.
I see the installation phase in three steps:
- clean: installation must be idempotent, which can require cleaning before installing, or simply "resetting" the Dotfiles installation.
- install: installation as you may think of it: running scripts that download from the package manager, putting binaries in the appropriate folders, etc.
- credentials: retrieving secrets from your password manager and putting them in your configuration files. See the Secrets section in sources/ for more details. This phase must run after all installations (you have to install the password manager first).
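As an illustration, an install/ file can look like this sketch ("mytool" is a placeholder name); each phase is just an optional function that the init script calls if it is defined:

```shell
#!/usr/bin/env bash

# Hypothetical install/mytool file: each phase is an optional function.

clean() {
  # idempotency: remove any previous installation first
  rm -rf "${HOME}/opt/mytool"
}

install() {
  # "real" installation: fetch binaries, create folders, etc.
  mkdir -p "${HOME}/opt/mytool"
  printf "installed\n" >"${HOME}/opt/mytool/state"
}

credentials() {
  # retrieve secrets from the password manager, once everything is installed
  :
}
```

A file that only needs one phase simply defines that one function; the others are skipped.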
To run each phase separately, we can rely on the presence of a function in a file: sourcing a file is easy in Bash, and so is checking whether a function is defined. If we condense everything discussed so far into one function, it looks like the following code, placed in the init script.
browse_install() {
  # ${CURRENT_DIR} is the root of the Dotfiles repository
  while IFS= read -r -d '' file; do
    local BASENAME_FILE
    BASENAME_FILE="$(basename "${file}")"

    local UPPERCASE_FILENAME
    UPPERCASE_FILENAME="$(printf "%s" "${BASENAME_FILE}" | tr "[:lower:]" "[:upper:]")"
    local DISABLE_VARIABLE_NAME="DOTFILES_NO_${UPPERCASE_FILENAME}"

    if [[ ${!DISABLE_VARIABLE_NAME:-} == "true" ]]; then
      continue
    fi

    if [[ -r ${file} ]]; then
      for action in "${@}"; do
        unset -f "${action}"
      done

      source "${file}"

      for action in "${@}"; do
        if [[ $(type -t "${action}") == "function" ]]; then
          printf "%s - %s\n" "${action}" "${BASENAME_FILE}"
          "${action}"
        fi
      done
    fi
  done < <(find "${CURRENT_DIR}/install" -type f -print0 | LC_ALL=C sort --zero-terminated)
}
browse_install clean install
browse_install credentials
Symlinks
The easiest tools to configure are the ones that rely on a single file in your home directory that starts with a dot. That's where the repository takes its name: the famous "dotfiles". You know them: .bashrc, .vimrc or .gitconfig. To keep a file under version control (Git) without committing your entire ${HOME}, the easy trick is a symlink. The file exists in the ${HOME} folder, so your tool can read it, but the content lives in the version-controlled folder. The best of both worlds.
~/ > ls -la
.bashrc -> /Users/macbook/code/dotfiles/symlinks/bashrc
.curlrc -> /Users/macbook/code/dotfiles/symlinks/curlrc
.gitconfig -> /Users/macbook/code/dotfiles/symlinks/gitconfig
.ignore -> /Users/macbook/code/dotfiles/symlinks/ignore
.inputrc -> /Users/macbook/code/dotfiles/symlinks/inputrc
.tmux.conf -> /Users/macbook/code/dotfiles/symlinks/tmux.conf
.vimrc -> /Users/macbook/code/dotfiles/symlinks/vimrc
They are simple to install: put all your .file entries in a directory (e.g. symlinks/) and link every file in that folder into your ${HOME} folder. Note that my files are named without a dot (e.g. bashrc) and I add the dot during the symlink phase. A dotfile is hidden by default on most operating systems; I don't want it hidden in my code, only when used.
Creating the symlinks can be done with the following snippet, placed in the init script.
create_symlinks() {
  # ${CURRENT_DIR} is the root of the Dotfiles repository
  while IFS= read -r -d '' file; do
    local BASENAME_FILE
    BASENAME_FILE="$(basename "${file}")"

    if [[ -n ${FILE_LIMIT:-} ]] && [[ ${BASENAME_FILE} != "${FILE_LIMIT}" ]]; then
      continue
    fi

    rm -f "${HOME}/.${BASENAME_FILE}"
    ln -s "${file}" "${HOME}/.${BASENAME_FILE}"
  done < <(find "${CURRENT_DIR}/symlinks" -type f -print0)
}
A gentle reminder about symlinks: they are living content. When you pull a new version of your Dotfiles repository, the configuration changes instantly.
👍 The upside: you pull your code and there's nothing more to do.
👎 The downside: if you get a Git conflict in your symlinks/gitconfig file, Git itself can break because your ~/.gitconfig is invalid 😅.
A chicken-and-egg problem, always.
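One escape hatch if that happens: Git 2.32+ honors the GIT_CONFIG_GLOBAL environment variable, so you can point it away from the broken file while you repair the conflict (the checkout command below is only an example of a repair step):

```shell
# Run Git while ignoring the (possibly broken) ~/.gitconfig entirely;
# any git command can be prefixed this way until the conflict is resolved
# (requires Git 2.32+).
GIT_CONFIG_GLOBAL=/dev/null git --version

# e.g. a repair step inside the dotfiles repository:
# GIT_CONFIG_GLOBAL=/dev/null git checkout --theirs symlinks/gitconfig
```

On older Git versions, moving ~/.gitconfig aside temporarily achieves the same thing.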
Sources
For configuring your shell environment, the music is a little different, in other words: complex! The common way to configure your shell is the ~/.bashrc (or ~/.zshrc, .you-name-it-rc). That's what we just set up with symlinks. But putting the configuration of every tool in the same file makes it unreadable, with lots of comments separating blocks of code. Not a clean way to do it.
Fortunately, it's up to you to split it properly. My ~/.bashrc sources every file contained in the dotfiles/sources/ folder.
I don't want to hardcode the path of my Dotfiles folder. Since the symlinks already point to a folder inside my repository, I can find where the files really live with the ${BASH_SOURCE[0]} trick. Sourcing the sources/ folder can be done with the following snippet, placed in the symlinks/bashrc file.
source_all() {
  local SCRIPT_DIR
  SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

  for file in "${SCRIPT_DIR}/../sources/"*; do
    [[ -r ${file} ]] && [[ -f ${file} ]] && source "${file}"
  done
}
In the sources/ folder, you put every piece of terminal configuration you need: environment variables, functions, aliases, etc. Time for a good reminder.
Rule #1 of the Dotfiles Club
You don't put secrets in your Dotfiles.
Rule #2 of the Dotfiles Club
YOU DO NOT PUT SECRETS IN YOUR DOTFILES.
If you want secrets in your environment variables (e.g. a token, a password), put them in ~/.localrc, which is not symlinked anywhere! As its name says, it stays local.
if [[ -e "${HOME}/.localrc" ]]; then
  source "${HOME}/.localrc"
fi
Secrets
If you've read the two rules of the Dotfiles Club, you may wonder how to configure your computer automatically without putting secrets in a repository. The secret of secrets management is: a password manager.
Because we are in Unix environments, I personally use pass, which is easily scriptable, relies on solid PGP encryption, and stores everything in Git. I only need to import my GPG public key (the private key lives on a YubiKey) and I'm set. Security is a serious topic. You can use any other password manager: install its CLI with the install/ scripts, and you should be able to retrieve your credentials and push them into your configuration files during the credentials phase.
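A hedged sketch of what that credentials phase can look like with pass (the entry name github/token and the helper function are placeholders for your own store layout):

```shell
#!/usr/bin/env bash

# Append an exported secret to ~/.localrc, fetched from the pass store;
# "pass show <entry>" prints the stored secret on stdout.
write_secret() {
  local NAME="${1}"
  local ENTRY="${2}"

  if command -v pass >/dev/null 2>&1; then
    printf 'export %s="%s"\n' "${NAME}" "$(pass show "${ENTRY}")" >>"${HOME}/.localrc"
  fi
}

# hypothetical entry in the password store
write_secret "GITHUB_TOKEN" "github/token"
```

Because ~/.localrc is sourced by the shell configuration shown above, the secret is available in every new terminal without ever touching the repository.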
Tips and tricks
Keep in mind that your Dotfiles are expected to work in various environments at various stages of installation. Always be backward compatible and lenient about errors.
When sourcing, you should not use the Bash flags -o nounset -o pipefail -o errexit: if an error occurs, it will crash Bash startup and you can end up locked out of your terminal.
Also, if you use a tool that is not built-in, always check its presence with the following snippet. You may have disabled its installation, it may not be available on your architecture, etc.
if command -v git >/dev/null 2>&1; then
  # do `git` related stuff
fi
Bashrc
Using a .bashrc to automatically load your environment is a good thing, but you don't need it when performing a headless operation (such as rsync); in that case, it only slows you down. Fortunately, there is a trick for skipping the loading of the .bashrc.
[[ -z ${PS1:-} ]] && return
macOS is a particular case here. By default, it doesn't look for the ~/.bashrc file, but for ~/.bash_profile. You can fix that by creating this file and pointing it at your .bashrc.
#!/usr/bin/env bash

if [[ -f "${HOME}/.bashrc" ]]; then
  source "${HOME}/.bashrc"
fi
Order
Files must be browsed in a certain order, because you may have dependencies between them (e.g. changing the ${PATH} variable before checking whether a piece of software is installed, installing Python before using pip). You don't have this kind of problem in a single file, because you simply put each line in the right place.
When iterating over a folder with for or find, Bash relies on alphabetical order, as defined by the LC_ALL / LC_COLLATE environment variables. You can control the sourcing order by naming your files alphabetically. Pro tip: the underscore sorts before any lowercase character.
To properly configure your locale, you can add the following snippet before the source_all call; it gracefully sets the first locale that is actually present.
set_locale() {
  local LOCALES=("en_US.UTF-8" "en_US.utf8" "C.UTF-8" "C")

  local ALL_LOCALES
  ALL_LOCALES="$(locale -a)"

  for locale in "${LOCALES[@]}"; do
    if [[ $(echo "${ALL_LOCALES}" | grep --count "${locale}") -eq 1 ]]; then
      export LC_ALL="${locale}"
      export LANG="${locale}"
      export LANGUAGE="${locale}"

      return
    fi
  done

  return 1
}
All together
I've made the simplest possible Dotfiles repository in this Gist. It's not possible to create folders in a Gist, so, for example, symlinks_bashrc stands for symlinks/bashrc.
You'll find a snippet for installing Brew and changing the user's default shell on macOS. If you install Bash with Brew, you end up with two bash binaries: one in /bin/bash (3.2.57) and one in /usr/local/bin/bash (5+).
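You can check which one you get; note that /usr/local/bin is the Intel-Mac Brew prefix, while Apple Silicon uses /opt/homebrew/bin:

```shell
# The system bash reports its own version (3.2.57 on macOS);
# "command -v" shows which bash wins in the current ${PATH}.
/bin/bash -c 'echo "${BASH_VERSION}"'
command -v bash
```

If `command -v bash` still points to /bin/bash, the Brew prefix is missing from (or too late in) your ${PATH}.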
dotfiles
How does it work?
Please have a look at my article here
Installation
curl "https://dotfiles.vibioh.fr/bootstrap.sh" | bash
Update
"${HOME}/code/dotfiles/init.sh" -a
Configuration
You can set the following environment variables to customize the installation behavior:
- DOTFILES_NO_NODE="true" skips the install of the installations/node file (replace NODE with any uppercase filename in the installations/ dir)
#!/usr/bin/env bash
# Dotfiles configuration example for a server
export DOTFILES__SCRIPTS="true"
export DOTFILES_RIPGREP="true"
export DOTFILES_VIM="true"
export DOTFILES_YQ="true"
SSH
ssh-keygen -t ed25519 -a 100 -C "$(whoami)@$(hostname)" -f "${HOME}/.ssh/id_ed25519"
GPG
gpg --full-generate-key
Command Line Tools (macOS)
Reinstall them by running the following commands:
sudo rm -rf "$(xcode-select -print-path)"
xcode-select --install
Brew
Fix it with the following command when it's broken.
sudo chown -R "$(whoami)" "$(brew --prefix)"/*
…My personal Dotfiles have more syntactic sugar, but they were the source of inspiration for this article. The Gist is a "template repository": feel free to use it. With nearly 2,000 commits, building my Dotfiles was a journey, and I just wanted to share it. Hope you enjoyed the ride!
PS: English is not my native language and it's my first post here, be kind, I'm open to advice for improving =)
Top comments (7)
In that same area, I can also recommend GNU Stow.
This is a very nice write-up.
Great article. I use Bare repositories, explained in this video; I think you'll find it useful 😀.
Dude, check out chezmoi. It's the best tool for this job ever. It also supports templating: chezmoi.io
Yep, it looks like a good tool. I've seen it recently, but everything is already present in Bash. For installing tools, chezmoi also relies on Bash. I personally try to limit my dependencies.
I feel it only appropriate to share my dotfiles; please feel free to criticize accordingly, because I am sure they're not the prettiest haha
They are not so bad ^^ There are no "good dotfiles"; the good ones are the ones you master!
I don't see the scripts that are supposed to install the whole thing (or maybe you clone it into your $HOME?). Many things are described (private joke included!), but it lacks automation, I think.