When my company's internal IT informed me that my new laptop had arrived, I was glad to finally set up a new Ubuntu LTS release. While Ubuntu comes with a lot of great applications installed out of the box, there are many applications I have to install after the OS installation to make my laptop useful for everyday work. Of course, there is a third part of the laptop setup equation that makes my work possible - my data :)
My laptop is set up using these scripts in a repeatable way, in minutes. I keep them in a private GitHub repo and make sure I never install applications manually without adding them to these scripts.
No sensitive data like passwords is uploaded there, but I still don't want to make the repo public because some of our clients' names and our company project names are visible in some of the backup rules (exclude/include).
Restore of the most valuable data is ensured with backups to both an external drive and the cloud.
TL;DR
Using a collection of Bash scripts and GNU Stow, you can set up your machine quickly and in an easily repeatable way. Make all of the operations idempotent so you can run the scripts as many times as you want (on a new machine, on a partially set-up machine, or just to make sure all apps are installed).
Add a proper backup application to that and you won't need to spend much time setting up your new laptop - you will be able to start using it almost immediately after it arrives.
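A few standard shell idioms are enough to make setup steps idempotent. This is a minimal illustration (all file and directory names are placeholders, not from my actual scripts) of the kind of patterns the scripts below rely on:

```shell
#!/bin/bash
# Each command below can be re-run safely, which is what makes
# a whole setup script idempotent.
set -eu

workdir="$(mktemp -d)"

# 'mkdir -p' succeeds even if the directory already exists
mkdir -p "${workdir}/example-dir"
mkdir -p "${workdir}/example-dir"

# 'ln -sfn' replaces an existing symlink instead of failing
ln -sfn "${workdir}/example-dir" "${workdir}/example-link"
ln -sfn "${workdir}/example-dir" "${workdir}/example-link"

# commands that fail on re-run (like groupadd for an existing group)
# can be guarded with '|| true'
false || true
```

The same idea applies to package managers: `apt install` on an already installed package is a no-op, so rerunning the whole script is harmless.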
OS Installation
[Ubuntu installation](https://ubuntu.com/tutorials/install-ubuntu-desktop) is quite straightforward and there is no need to describe it here in more detail.
Adding applications & OS customization
I have read a few articles about configuring and setting up a machine using GNU Stow and a bunch of shell scripts, and I wanted to check how well it would work for me.
What I came up with is a set of scripts where I only need to call the install.sh
script (to be more precise, sudo install.sh
), which in turn calls all the other scripts:
#!/bin/bash
set -eu
$(dirname $0)/install-from-repos.sh
$(dirname $0)/install-custom.sh
$(dirname $0)/install-github-releases.sh
$(dirname $0)/install-custom-opt.sh
$(dirname $0)/install-config.sh
install-from-repos.sh
This script first adds custom Ubuntu repositories (using another script) and after that executes a bunch of apt install
& snap install
commands, something like:
#!/bin/bash
set -eu
$(dirname $0)/add-custom-repos.sh
apt install -y git
apt install -y stow
apt install -y source-highlight
apt install -y vim
...
snap install xmlstarlet --classic
snap install go --classic
...
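The add-custom-repos.sh script itself is not shown here; a minimal sketch of how such a script could register an apt source (the function, repository URL and file names are purely illustrative assumptions) might look like this:

```shell
#!/bin/bash
# Hedged sketch of an add-custom-repos.sh: writes an apt sources file
# for a hypothetical third-party repository.
set -eu

# add_custom_repo <sources_dir> <repo_name> <repo_line>
function add_custom_repo() {
  local -r sources_dir="$1"
  local -r repo_name="$2"
  local -r repo_line="$3"
  mkdir -p "${sources_dir}"
  # Overwriting the file on every run keeps the operation idempotent
  echo "${repo_line}" > "${sources_dir}/${repo_name}.list"
  echo "added repo ${repo_name}"
}

# Real usage would add the repo and then refresh the package index:
# add_custom_repo /etc/apt/sources.list.d example \
#   "deb [arch=amd64] https://example.com/apt stable main"
# apt update
```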
install-custom.sh
Custom installation is very simple for now - it enables syntax colouring in the less
pager and adds a small ldiff helper (shown below) that pipes diff output through cdiff into less:
#!/bin/bash
set -eu
echo -e '#!/bin/bash\ndiff -u -r "$1" "$2" | cdiff | less -R' > /usr/local/bin/ldiff
chmod +x /usr/local/bin/ldiff
echo "installed ldiff script"
echo ""
install-github-releases.sh
Installation of GitHub releases is more complex, but it boils down to fetching the releases list using the GitHub REST API and installing the latest version (if it is not already installed). The full script covers more applications, but this should give you enough info to make your own version of it:
#!/bin/bash
set -eu
echo ""
echo "Starting to install custom releases from GitHub."
echo ""
# Fetches available versions from GitHub and finds the latest version satisfying
# grep constraints
function find_github_latest_release_url() {
local -r release_repo_path="$1"
local -r release_grep="$2"
local -r release_list_url="https://api.github.com/repos/${release_repo_path}/releases"
local -r asset_download_url_list="$(curl -s \
"${release_list_url}" \
| jq -r '.[].assets[].browser_download_url')"
local -r asset_url="$(echo "${asset_download_url_list}" \
| grep "${release_grep}" \
| head -n1)"
if [[ -z "${asset_url}" ]]; then
echo "Latest release for '${release_repo_path}' with grep '${release_grep}' not found, exiting." >&2
exit 1
fi
echo "${asset_url}"
}
# Get version from GitHub release download URL eg:
# https://github.com/croz-ltd/dpcmder/releases/download/v0.6.0/dpcmder-linux-amd64
function get_github_release_version() {
local -r asset_url="$1"
echo "$(echo "${asset_url}" | sed 's|.*/download/||; s|/.*||')"
}
function install_asset() {
local -r local_version="$1"
local -r release_repo_path="$2"
local -r release_grep="$3"
local -r installer_function_name="$4"
local -r asset_url="$(find_github_latest_release_url \
"${release_repo_path}" "${release_grep}")"
local -r github_version="$(get_github_release_version "${asset_url}")"
echo "Local version: '${local_version}', version on the GitHub: '${github_version}' (${release_repo_path})"
if [[ "${local_version}" != "${github_version}" ]]; then
asset_file="${asset_url##*/}"
wget -nv "$asset_url" -O "${asset_file}"
${installer_function_name} "${asset_file}"
fi
}
################################################
# Callback functions for apps installation BEGIN
################################################
#########
# dpcmder
#########
function install_dpcmder() {
local -r downloaded_file="$1"
chmod +x "${downloaded_file}"
mv "${downloaded_file}" /usr/local/bin/dpcmder
echo ""
echo "installed dpcmder: $(dpcmder -v || true)"
echo ""
}
#########
# oc
#########
function install_oc() {
local -r downloaded_file="$1"
tar -xf "${downloaded_file}"
local -r unpack_dir="${downloaded_file%.tar.gz}"
mv "${unpack_dir}/oc" /usr/local/bin/oc
mv "${unpack_dir}/kubectl" /usr/local/bin/kubectl
rm -rf "${unpack_dir}"
rm "${downloaded_file}"
oc completion bash > /etc/bash_completion.d/oc_bash_completion
echo ""
echo "installed oc: $(oc version || true)"
echo ""
}
################################################
# Callback functions for apps installation END
################################################
################################################
# Install apps
################################################
################################################
# dpcmder
dpcmder_version_full="$(dpcmder -v || true)"
dpcmder_version="$(echo "${dpcmder_version_full}" | sed 's/.* version //; s/ .*//')"
install_asset "${dpcmder_version}" "croz-ltd/dpcmder" "linux" install_dpcmder
################################################
# oc & kubectl
oc_version_full="$(oc version || true)"
oc_version="$(echo "${oc_version_full}" | grep oc | sed 's/.*v/v/; s/+.*//')"
install_asset "${oc_version}" "openshift/origin" "v3.*client.*linux" install_oc
echo ""
echo "Finished installing custom releases from GitHub."
echo ""
install-custom-opt.sh
This script installs custom applications which I usually unpack to the /opt
directory, for example Eclipse. Here I install a specific version of each product, and the URL for each product is hardcoded in the script. There are not many of these, so I find this approach the best fit for my requirements. For these apps the following steps are performed:
- check the currently installed version - does the directory opt/{app_name}.{app_version} exist?
- if the target version is not found:
  - download the archive
  - unpack the archive to the opt/{app_name}.{app_version} directory
  - make a symbolic link from opt/{app_name} to opt/{app_name}.{app_version}
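The steps above could be sketched roughly as follows; the function name, its parameters and the example URL are my assumptions for illustration, not the author's actual script:

```shell
#!/bin/bash
# Hedged sketch of the version-check / download / unpack / symlink flow.
set -eu

# install_opt_app <opt_dir> <app_name> <app_version> <archive_url>
function install_opt_app() {
  local -r opt_dir="$1"
  local -r app_name="$2"
  local -r app_version="$3"
  local -r archive_url="$4"
  local -r target_dir="${opt_dir}/${app_name}.${app_version}"

  # 1. check the currently installed version
  if [[ -d "${target_dir}" ]]; then
    echo "${app_name} ${app_version} already installed, skipping"
    return 0
  fi

  # 2. download and unpack the archive into the versioned directory
  local -r archive_file="${archive_url##*/}"
  wget -nv "${archive_url}" -O "${archive_file}"
  mkdir -p "${target_dir}"
  tar -xf "${archive_file}" -C "${target_dir}" --strip-components=1
  rm "${archive_file}"

  # 3. point the unversioned symlink at the new version
  ln -sfn "${target_dir}" "${opt_dir}/${app_name}"
  echo "installed ${app_name} ${app_version}"
}

# Real usage would look something like:
# install_opt_app /opt eclipse 2023-12 "https://example.com/eclipse.tar.gz"
```

Because the function skips already-installed versions, rerunning the whole script stays idempotent.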
install-config.sh
This is where the custom configuration (not installation) of my Ubuntu is performed. These are custom configuration tasks performed by this script:
- setting vim as my default editor
- stowing my dotfiles
- making docker run as a non-root user
- changing some Ubuntu key bindings
- configuring my restic backups
- enabling ufw
Here is a slightly shortened version of the script:
#!/bin/bash
set -eu
username="my_username"
update-alternatives --set editor /usr/bin/vim.basic
sudo -u ${username} mkdir -p /home/${username}/.config/systemd/user
# Get all stow package directories (skipping download & bin)
STOW_DIRS="$(ls */ -d | grep -Ev 'download|bin')"
sudo -u ${username} stow --dir=/home/${username}/dotfiles \
--ignore=download \
--ignore=bin \
${STOW_DIRS}
# run docker as a non-root user
groupadd docker || true
usermod -aG docker ${username}
# Remove <Ctrl><Alt>[<Shift>]ArrowKey mappings
sudo -u ${username} dconf write /org/gnome/desktop/wm/keybindings/move-to-workspace-down "['<Super><Shift>Page_Down']"
# Configure restic-backup timer
loginctl enable-linger ${username}
sudo -u ${username} systemctl --user enable restic-backup.timer
sudo -u ${username} systemctl --user start restic-backup.timer
ufw enable
dotfiles & Stow - examples
To give you a better overview of what can be configured using Stow, here are some examples of files I keep in the Git repo:
# "classic" dotfile configurations
dotfiles/bash/.bashrc
dotfiles/bash/.profile
# scripts for the setup
dotfiles/bin/add-custom-repos.sh
dotfiles/bin/install-config.sh
dotfiles/bin/install-custom-opt.sh
dotfiles/bin/install-custom.sh
dotfiles/bin/install-from-repos.sh
dotfiles/bin/install.sh
# Restic backup scripts
dotfiles/bin/restic/.gitignore
dotfiles/bin/restic/restic-backup.sh
dotfiles/bin/restic/restic-exclude
dotfiles/bin/restic/restic-include
dotfiles/bin/restic/restic-password
# Gnome desktop launcher files
dotfiles/eclipse/.local/share/applications/eclipse.desktop
# Restic service & timer configuration
dotfiles/restic-backup/.config/systemd/user/restic-backup.service
dotfiles/restic-backup/.config/systemd/user/restic-backup.timer
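To illustrate what stowing the bash package from this layout does: Stow symlinks the package contents into the target directory. The sketch below recreates the resulting link with plain ln so it runs without Stow installed (the directory names are placeholders standing in for $HOME):

```shell
#!/bin/bash
# Recreates by hand the symlink Stow would make for the 'bash' package.
set -eu

home_dir="$(mktemp -d)"    # stands in for $HOME
mkdir -p "${home_dir}/dotfiles/bash"
echo 'export EDITOR=vim' > "${home_dir}/dotfiles/bash/.bashrc"

# 'stow --dir="${home_dir}/dotfiles" --target="${home_dir}" bash'
# would create exactly this link:
ln -sfn "${home_dir}/dotfiles/bash/.bashrc" "${home_dir}/.bashrc"
```

Editing either path edits the same file, so the repo always holds the live configuration.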
Restic backup setup
For the backup, I use the restic application, written in Go. I won't write a lengthy description of it here, but I find it a great app with many configuration options to securely back up your data in encrypted form. If you want a good description of how to use restic, I suggest checking its documentation on the "Read the Docs" pages. It can also back up to AWS S3, which I use. In my case, this is just a few GB of the most valuable data. Storing a few GB in AWS S3 costs around $1 per year and gives me peace of mind for the worst-case scenario.
Restic is run by a systemd timer which triggers a systemd service; both are created using Stow plus custom configuration (check the systemctl enable
& systemctl start
calls in the install-config.sh
script).
My restic configuration is one of the dotfiles configuration directories. The restic script run by restic-backup.service is also there:
dotfiles/bin/restic/.gitignore
dotfiles/bin/restic/restic-backup.sh
dotfiles/bin/restic/restic-exclude
dotfiles/bin/restic/restic-include
dotfiles/bin/restic/restic-password
dotfiles/restic-backup/.config/systemd/user/restic-backup.timer
dotfiles/restic-backup/.config/systemd/user/restic-backup.service
Just make sure you don't commit your restic-password
file to the Git repository by mistake - add it to your .gitignore file :)
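The .gitignore file sitting next to the restic scripts can contain just that one entry:

```
restic-password
```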
restic-backup.sh
#!/bin/bash
set -eu
restic backup \
-r ~/restic-repo \
--exclude-caches \
--files-from $(dirname $0)/restic-include \
--exclude-file $(dirname $0)/restic-exclude \
--password-file $(dirname $0)/restic-password
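Note that restic refuses to back up into a repository that has not been initialized, so a one-time setup step is needed before the first run of this script. A hedged sketch (the helper function and password generation are my assumptions, not the author's exact commands):

```shell
#!/bin/bash
# One-time restic repository bootstrap.
set -eu

# init_restic_repo <repo_dir> <password_file>
function init_restic_repo() {
  local -r repo_dir="$1"
  local -r password_file="$2"
  # Generate a password once and keep it readable only by the owner
  if [[ ! -f "${password_file}" ]]; then
    head -c 32 /dev/urandom | base64 > "${password_file}"
    chmod 600 "${password_file}"
  fi
  # Initialize the repository only if it does not exist yet (idempotent)
  if [[ ! -d "${repo_dir}" ]]; then
    restic init -r "${repo_dir}" --password-file "${password_file}"
  fi
}

# Real usage, matching the layout above:
# init_restic_repo ~/restic-repo "$(dirname "$0")/restic-password"
```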
restic-backup.timer
###################################################
# restic-backup.timer - runs restic backup each day
# loginctl enable-linger vedran
# systemctl --user list-timers
# systemctl --user enable restic-backup.timer
# systemctl --user start restic-backup.timer
[Unit]
Description=Regular restic backup
[Timer]
Unit=restic-backup.service
OnCalendar=*-*-* 15:00:00
Persistent=true
[Install]
WantedBy=default.target
restic-backup.service
##########################################################################
# restic-backup.service - runs restic backup (used by restic-backup.timer)
# systemctl --user status restic-backup
[Unit]
Description=Restic backup
[Service]
Type=oneshot
ExecStart=/home/vedran/dotfiles/bin/restic/restic-backup.sh
Which data do I back up using this setup?
With this setup, I back up only the data I find hard to "reconstruct".
I don't back up application installations - these can easily be fetched again from the Internet if required.
I don't back up virtual machines with this setup either - there is only one VM I back up at all, manually and periodically, by saving the whole VirtualBox machine to the external drive.
Conclusion
This approach is a DevOps-inspired way to set up and configure a new desktop machine (Ubuntu Linux). The simplicity and ease of this approach make migrating to the next machine so fast that you won't think twice about it.
The cover image is from Pixabay.