This blog post is the first in a three-part series (maybe more, we will see) on self-hosting. In this first part, we will explain how to start and secure your self-hosted server. The second part will address zero-downtime deployment using Docker Swarm. In the third part, we will discuss backing up your databases.
What is this all about? Why self-hosting?
Let's say you are a developer, which you most likely are. Let's say you have an idea for an application that you want to build. You will need to host that application somewhere, as your home computer most likely doesn't have a stable internet connection or a static IP address, since usually (read: always) those are changed dynamically by your ISP.
Okay, so you have an idea for an application, you want to try it out under your terms, and what is your first instinct?
CLOUD!
AWS!
GOOGLE!
SERVICES!
REGISTRIES!
ACTIONS!
CI/CD!
MORE CLOUD SERVICES!
And many more...
Now, there is a catch in all of those little things/services/conveniences: the cloud is expensive. For everything covered in this part and the future parts of this series, you will be able to find equivalent services in AWS, Google Cloud, etc. Of course you will, but it might cost you quite a bit the more services you take under your belt.
Now, don't get me wrong, I am not against using cloud services (although I think they are a bit costlier than they should be). I am simply stating that you should minimize costs wherever possible until you get some revenue from your application. Once you start getting revenue, and you stop being the sole developer working on your app, I am telling you, it will be a breeze to scale both vertically and horizontally (okay, horizontally is a bit more involved, but still, it won't be that difficult). When there is money involved, everything about development gets easier: you might hire a DevOps engineer (if you are one, then congrats, you might hire a developer to write you an app for your impeccable infrastructure), more developers, etc. You get the point.
Therefore, to conclude the big why:
There is no point in paying large chunks of money for the development of an app that is not yet generating any revenue. An app's infrastructure should be paid for out of its profits. Therefore, this series focuses on gathering the knowledge to reduce the costs of development and MVPs until you see some meaningful profit.
So, enough chit-chat, let's get the server working!
Why is a server needed?
As we have previously explained, a server must be bought, and that is a plain infrastructure problem. You cannot really control your network connection, whether you lose electricity in your apartment, or whether your ISP changes your home IP address. We are trying to make application infrastructure cheap, but by no means do we want to trade that for application uptime. We don't want our users to be unable to access our application; that is where we draw the line. Therefore, you must buy a remote server. We are not getting into free 60-day trials from Google Cloud, or any other free trial. Why, you ask? Considering that your server will be up longer than that, you might end up paying more than if you had paid the lower price from the beginning.
After much research, at the time of writing this blog, the winner is simply Hetzner. The ratio of costs and quality is simply the best at this moment (not promoted, I promise).
Okay, so we will go with Hetzner. Specifically, I will take a server for 6.30€ (at the time of writing this blog) that has the following specifications:
- 8GB RAM
- 4vCPU
- 80GB Disk Storage
Which, in my opinion, according to the current market, is a pretty good deal. You can go with even lower specifications if you want, but these specifications will work just fine for me.
Buying the server
Once we have decided which server to buy, we shall proceed with its configuration, as presented below.
Germany is closest to me, and Ubuntu 22.04 is just fine for me; note that you can choose a different version.
Next, we will choose which server we want from the provided options.
After deciding on the strength of our machine, we shall proceed with its SSH configuration.
You should add a public SSH key from your local machine (don't worry, public SSH keys are free to share with others). If you don't, then you will receive an e-mail with the root user password, which you don't really want. There is no need to add a third party in the whole password credentials generation. This way, when you add your public SSH key, you will receive no e-mail, and security engineers will be proud.
To check what your public SSH key is, run this command:
cat ~/.ssh/id_rsa.pub
Then simply copy/paste from the terminal and you are good to go.
Once we have completed setting up the machine, we can start SSH connection to its terminal from our local machine with the following command:
ssh root@{your server ip}
You should answer the prompt that appears on the first SSH connection (for the host fingerprint). That prompt appears only once; if you get it again on any following SSH connection, you are most likely a victim of a man-in-the-middle attack, just so you know what to Google if that happens.
Now, let's make our server secure!
1) Update everything to the latest version
It is important to keep everything on the server up to date, as newer versions patch, among other things, security flaws. Therefore, we always want to run the latest versions of our software.
To update everything, run the following commands:
apt update
apt upgrade
After that, once you have upgraded everything, run the following command:
ls /var/run/reboot-required
If you get /var/run/reboot-required as a response from the last command, that means you should reboot your machine (duh!). To reboot, simply run:
reboot
and wait for your machine to reboot. Note that you can also reboot from your dashboard from your provider, all major providers allow for dashboard reboot.
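The whole update-and-reboot routine from this step can be sketched as one small script (we are still logged in as root at this point, so no sudo is needed):

```shell
apt update      # refresh the package lists
apt upgrade -y  # upgrade all installed packages

# reboot only if the system says it is required
if [ -f /var/run/reboot-required ]; then
    reboot
fi
```

The `-y` flag just skips the confirmation prompt; drop it if you prefer to review the upgrade list first.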
2) Change the password for the root user
In the following steps, we will disable the root user completely, but I wanted to show you how you can first change the root user's password. To change it, type the following command:
passwd
and simply enter a new password when prompted.
3) Create a non-root user
It is important to get rid of the root user as soon as possible, as the root user has permission to do whatever it wants. Since we are root at the moment, we don't type sudo for anything, but if someone malicious were to reach our server (we certainly hope that is not going to happen!), we want them to reach that server at most as some other user. Namely, if they want to tamper with some system configuration, they need to type sudo and know the sudo password (which we will create and make hard to figure out).
Okay, let's create a non-root user by typing the following:
adduser {username you want}
and then type a new password (make sure it is a hard-to-guess password; use some random generator or whatever, as it will be the one you type for sudo) and fill in answers to the questions about user information. After that, the new user is created. Remember, keep this password somewhere safe, as it will be needed for future endeavors.
Then we should add this user to the sudo group with the command:
usermod -aG sudo {username you have chosen}
Check it by typing groups {username you have chosen}
and see whether the chosen username is in the sudo group. If you see your chosen username and sudo in the output, then we are good to go.
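Put together, the whole user-creation step might look like this in practice (deploy is just an example username, use whatever you like):

```shell
adduser deploy          # prompts for a password and user info
usermod -aG sudo deploy # add the user to the sudo group
groups deploy           # should list "sudo" among the groups
```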
Now, we need to enable the newly created user to connect from our local machine via SSH (the previously added SSH key works only for the root user). We will accomplish that by exiting the current session on the remote server (just type exit
and you are out), and logging in as our newly created user by typing the following:
ssh {chosen username}@{server ip}
Now we will be prompted for our newly created user's password, because we haven't configured SSH keys for it yet. Type in the password to enter the remote machine's terminal.
To enable SSH login for the new user, first we need to get our local machine's public SSH key (remember, it is cat ~/.ssh/id_rsa.pub
), and then type the following:
mkdir .ssh
nano .ssh/authorized_keys
and simply paste the public key that you printed in your local machine's terminal. You can add as many public SSH keys as you want to the authorized_keys file.
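Alternatively, the standard ssh-copy-id tool does all of this (directory, file, and permissions) in one go:

```shell
# run this on your LOCAL machine, not on the server;
# it appends your public key to the remote authorized_keys
ssh-copy-id {chosen username}@{server ip}
```

Either way works; the manual route above just makes it clearer what is actually happening.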
4) Disable password login
Now that we have configured SSH login (do not do this step if you haven't configured SSH login; you might lock yourself out of the server and then need to go into rescue mode from the dashboard), we should disable password login completely, so we avoid all those brute-force attacks that try to guess our password and enter our machine. Trust me, an SSH key is much harder to guess.
To disable password login, type the following into your server terminal:
sudo nano /etc/ssh/sshd_config
In the document, find #PasswordAuthentication, uncomment it, and set it to "no".
After that, you need to restart the SSH service for the changes from sshd_config to take effect:
sudo service ssh restart
From here on forward, password login is disabled entirely, and we are much safer from brute force attacks on our host machine.
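If you want to double-check that the change took effect, sshd can print its effective configuration, which makes for a quick sanity check:

```shell
# dump the effective sshd settings and look for the relevant option
sudo sshd -T | grep -i passwordauthentication
# expected: passwordauthentication no
```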
5) Disable root login
In step 2, when we changed the password for the root user, we mentioned that we would disable the root user from logging in entirely, and we are going to do that now.
Go to the same sshd_config file by typing sudo nano /etc/ssh/sshd_config
and set PermitRootLogin to no to disable root login entirely, regardless of whether the login method is SSH key or password.
Again, you need to restart the SSH service for the changes from sshd_config to take effect:
sudo service ssh restart
From now on, nobody can log in as the root user, so even if someone reaches our server, they still have to figure out our user password (which we made super hard to guess) to run commands as root. That is the whole philosophy around sudo and why you shouldn't use the root user by default.
6) Network and firewall policies
You should configure your firewall settings and close all unnecessary ports. For example, for web applications, usually only ports 80 (HTTP) and 443 (HTTPS) are needed, as well as port 22 for SSH connection, which means that all other ports can be closed.
Closing ports can be done from the provider dashboard, like in the Hetzner example below:
Or by using ufw, which ships with Ubuntu as the default firewall configuration tool.
Whichever method you choose, close all unused ports. If you are not sure yet what app will be hosted, or whether any will be hosted at all, close everything except port 22 for SSH login.
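If you go the ufw route, a minimal setup for the ports mentioned above might look like this (note that 22 is allowed before enabling, so you don't cut off your own session):

```shell
sudo ufw default deny incoming   # close everything by default
sudo ufw default allow outgoing
sudo ufw allow 22/tcp            # SSH
sudo ufw allow 80/tcp            # HTTP
sudo ufw allow 443/tcp           # HTTPS
sudo ufw enable
sudo ufw status verbose          # review the resulting rules
```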
7) Change the default ssh port
Optionally, you can change the default port 22 that you use to log in. Automated scripts usually target port 22 by default, so changing it can be another small layer of hassle for any malicious request. But note that whatever other port you decide on (preferably above 1024, to avoid potential conflicts with other services, but it is up to you) can be quickly figured out by a port scan, so consider this a minor obstacle rather than real protection. To add a custom port, type the following:
sudo nano /etc/ssh/sshd_config
and change Port 22 to whichever number you want. Let's say, for example, that we want to change it to 1602; then we would have that line written as Port 1602.
Afterward, do not forget to update the firewall configuration (previous step) and set SSH port to be whatever you have written instead of 22.
Note that now you will have to log in to the remote server using -p (short flag for port), as we are using a non-standard port. For example:
ssh {username}@{your server ip} -p {your chosen port number}
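If you manage the firewall with ufw, updating it for the custom port (using 1602 from the example above) might look like this:

```shell
sudo ufw allow 1602/tcp      # allow the new SSH port FIRST
sudo ufw delete allow 22/tcp # only then close the default port
```

Do it in that order, and keep your current session open until you have confirmed you can log in on the new port.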
To avoid tediously typing the port and username every time we connect to the remote server via SSH, we can add configuration on our local machine telling it which user and port to use when we type ssh {your server ip}
. To add that configuration, type the following:
cd ~/.ssh
nano config
Type the following configuration:
Host {your remote host ip}
Port {your custom SSH port}
User {username of remote server}
and save and exit. With that configuration in place, the next time you want to log in to your remote server, just type the following:
ssh {your server ip}
Also, note that if you have multiple SSH keys, you can specify which key to use with the IdentityFile option, followed by the path to the private key file you want to identify with.
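Putting it all together, a complete entry in ~/.ssh/config might look like this (the {…} values are placeholders; IdentityFile is the option that selects a specific private key):

```
Host {your server ip}
    Port {your custom SSH port}
    User {chosen username}
    IdentityFile ~/.ssh/id_rsa
```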
8) Configure automatic updates
It is good to enable automatic updates of packages on your server. To achieve that, we will use the unattended-upgrades package; type the following:
sudo apt install unattended-upgrades
and then:
sudo dpkg-reconfigure unattended-upgrades
and hit yes. After that, upgrades will be automatic on the remote server.
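To verify what got configured, you can inspect the file the reconfigure step writes (on Ubuntu this is typically /etc/apt/apt.conf.d/20auto-upgrades):

```shell
cat /etc/apt/apt.conf.d/20auto-upgrades
# typically contains:
# APT::Periodic::Update-Package-Lists "1";
# APT::Periodic::Unattended-Upgrade "1";
```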
9) Add fail2ban package
You should also add the fail2ban package to prevent brute-force attacks. Namely, this package temporarily bans IPs after too many repeated failed login attempts, and therefore creates a lot of hassle for automated scripts trying various combinations to enter your server (which is hard to brute-force by itself), so this package will increase security drastically. To add it, type the following:
sudo apt install fail2ban
Note that you can customize its behavior, but usually, defaults are enough, at least in the beginning.
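To confirm it is actually running and watching SSH, you can check the service and the sshd jail (enabled by default in Ubuntu's fail2ban package):

```shell
sudo systemctl enable --now fail2ban  # start it and enable on boot
sudo fail2ban-client status sshd      # shows failure counts and banned IPs
```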
10) Add 2FA using Google Authenticator
Adding two-factor authentication has its pros and cons. The pro is that it is safe: nobody can access your remote server without the code that is available only in the authenticator app on your mobile. The cons are that automated tools might have a hard time connecting to your remote server, for example GitHub Actions (there are some actions that sort of let you type in a code for other actions to use, but that is all shady and unstable), so for each future deploy you need to be present with the authentication code from your app. Also, it is tedious to type the auth code every time you log in to the server.
Don't get me wrong, I use the authenticator app for remote servers, it is just that you need to be aware of the pros and cons before making an educated decision to use it.
So, how can we enable 2FA in our remote server?
Simply follow the step-by-step instructions for Ubuntu about configuring the 2FA.
Now, this step-by-step guide didn't quite work for me, as it didn't prompt me for an auth code once I tried to SSH into the remote server. Therefore, after digging a bit more, the following configuration needed to be changed:
sudo nano /etc/ssh/sshd_config
then visually scan this config file and make sure the following lines are present (anywhere in the file):
UsePAM yes
PasswordAuthentication no
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
PermitEmptyPasswords no
Then do the following:
sudo nano /etc/pam.d/sshd
and visually scan it to make sure it has this config:
# Standard Un*x authentication.
#@include common-auth
# Require authenticator, if not configured then allow
auth required pam_google_authenticator.so debug nullok
auth required pam_permit.so
After this setup, your 2FA should work as expected and you should be prompted to add an authenticator code the next time you try to SSH to a remote server.
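A word of caution before relying on it: keep your current SSH session open and test from a second terminal, so you can still revert sshd_config if something went wrong:

```shell
# in your EXISTING session: apply the changes
sudo service ssh restart

# then, from a NEW terminal on your local machine:
ssh {chosen username}@{server ip}
# you should be asked for a "Verification code:" after the key check
```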
Also, for good practice, go to the remote server and type the following commands:
cd .ssh
chmod 600 authorized_keys
We are restricting read/write permissions to the owner of the file only, to make sure other users cannot change it without special permission (this is especially useful if multiple people work on the application and you don't want just anyone to be able to lock everyone else out of the server, accidentally or intentionally).
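For completeness, here is the whole permissions routine as one idempotent snippet (safe to re-run; it creates the directory and file first if they are missing):

```shell
mkdir -p ~/.ssh                    # create the directory if it doesn't exist
touch ~/.ssh/authorized_keys       # create the file if it doesn't exist
chmod 700 ~/.ssh                   # only the owner may enter the directory
chmod 600 ~/.ssh/authorized_keys   # only the owner may read/write the keys
ls -ld ~/.ssh ~/.ssh/authorized_keys
```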
Note: You can also block connections per IP or VPN, but that is not feasible for home setup as we don't really have static IPs, and therefore let's leave it as an option here.
Conclusion
We have discussed why we would want to self-host our application and set up a remote server from scratch. We have also outlined a step-by-step guide to making your remote server secure and controllable only by your local machine.
This is quite enough to get started with remote servers and get yourself up and running in the self-hosted world. Note that you don't have to buy a remote server for development, as you can develop on your local machine; you only need one when you want to provide end users with a stable app, namely, a production environment.
In the next part of this series, we will focus on deploying our web application (in my case it is a web application) using Docker Swarm and zero downtime deployment. We will also look into how we can omit container registries and establish communication directly between our local machine and remote server (mainly to reduce costs, because, as you remember, our app shouldn't be too much of an expense until it starts to generate revenue once it changes the world).