This is part of my HNG DevOps internship series. Follow along as I document every stage.
Choosing a Cloud Provider
It all started with choosing a cloud provider. The obvious options were Google Cloud, AWS, or Azure, but I suspected there might be more, so I searched Reddit and found that Oracle Cloud has a generous Always Free tier offering up to 24GB of memory and 200GB of storage at no cost, indefinitely. It hit me that this was what I had been looking for.
With most other cloud providers, you either get full access for a limited number of months as a new user, or some cloud credits to spend. Oracle resonated with me because I didn't want something I would have to shut down after the internship. I needed somewhere I could keep all my work and reference it when necessary.
So I set up Oracle Cloud and created a free-tier instance. At first I provisioned it with Oracle Linux (essentially a RHEL, Red Hat Enterprise Linux, derivative), but I quickly ran into problems installing ufw, one of the required packages for the task. So I completely removed that instance and created another one, this time running Ubuntu.
The Task
We were given this task:
DEVOPS TRACK, STAGE 0: Linux Server Setup & Nginx Configuration
You will provision a Linux server, install and configure Nginx to serve two different locations, and secure it with a valid SSL certificate. No Docker, no Compose, no automation tools. Just a bare Linux server and your hands.
Here is a summary of what needed to be done:
Server Setup
- Create a non-root user called `hngdevops` with sudo privileges
- Configure passwordless sudo for `hngdevops` for `/usr/sbin/sshd` and `/usr/sbin/ufw`
- Disable root SSH login
- Disable password-based SSH authentication (key-based only)
- Configure UFW to allow only ports 22, 80, and 443
Nginx Configuration
- `GET /`: serves a static HTML page containing your HNG username as visible text
- `GET /api`: returns this JSON response exactly:
{
"message": "HNGI14 Stage 0",
"track": "DevOps",
"username": "your-hng-username"
}
SSL
- Obtain a valid SSL certificate using Let's Encrypt (Certbot)
- HTTP requests must redirect to HTTPS with a `301`
But First: Why Stage 0? Why Provision a Linux Server?
To put it simply: everything on the internet runs on a server somewhere. Learning to provision and manage a bare Linux server is the foundation of all DevOps work. Before Docker, containers, Kubernetes, and all the fancy tooling, there is always a Linux machine underneath. So it is best to start from the foundation, so that everything that comes after feels natural, because you understand where it all starts from.
Step 1: Creating a Non-Root User
First I created a non-root user. You might be wondering why this is necessary. Ordinarily, when you provision an instance (a Linux server), the user you get is the root account, and root has zero restrictions. It can delete every file on the server with a single command, without any confirmation. So doing your daily work on a server as root is dangerous and not advisable.
Hence we create a user called hngdevops that can do everything needed via sudo, where mistakes won't have a tremendous effect on the server. Also see it as the principle of least privilege: every user and process should have only the minimum access required to do their job.
To create the user:
sudo adduser hngdevops
Then because I still needed to give this user permission to run some sudo commands, I added them to the sudo group:
sudo usermod -aG sudo hngdevops
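To confirm the user actually landed in the sudo group, `id` lists group memberships. On the server you would run it against `hngdevops`; the sketch below checks the current user instead so it runs anywhere:

```shell
# Identify the current user portably (on the server you'd just use hngdevops)
me=$(id -un)

# List all groups the user belongs to, space-separated
id -nG "$me"

# Check for one specific group membership
if id -nG "$me" | grep -qw sudo; then
  echo "$me is in the sudo group"
else
  echo "$me is NOT in the sudo group"
fi
```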
Step 2: Copying SSH Keys to the New User
Next I copied my authorized_keys as a root user to the hngdevops user, so they can log in to the server. Here are the commands I used:
# Create the .ssh directory in the hngdevops home folder
sudo mkdir -p /home/hngdevops/.ssh
# Copy authorized_keys so hngdevops can log in
sudo cp ~/.ssh/authorized_keys /home/hngdevops/.ssh/
# Grant hngdevops ownership of the directory
sudo chown -R hngdevops:hngdevops /home/hngdevops/.ssh
# Set correct permissions on the directory
sudo chmod 700 /home/hngdevops/.ssh
# Set correct permissions on the keys file
sudo chmod 600 /home/hngdevops/.ssh/authorized_keys
The permissions matter here. SSH is strict. If the .ssh directory or authorized_keys file has permissions that are too open, SSH will refuse to use them entirely.
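A quick way to double-check those modes is `stat`. The sketch below rebuilds the same layout in a throwaway directory so it is safe to run anywhere; on the server you would point `stat` at `/home/hngdevops/.ssh` directly:

```shell
# Recreate the expected layout in a temporary directory
demo=$(mktemp -d)
mkdir -p "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"
chmod 700 "$demo/.ssh"
chmod 600 "$demo/.ssh/authorized_keys"

# stat -c '%a' prints the octal permission bits (GNU coreutils)
stat -c '%a' "$demo/.ssh"                   # expect 700
stat -c '%a' "$demo/.ssh/authorized_keys"   # expect 600

rm -rf "$demo"
```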
Step 3: Passwordless Sudo for Specific Commands
The next step was to grant hngdevops passwordless sudo for sshd and ufw only. This is taking the principle of least privilege even further. With this, even if someone else were to gain access to this user account, there isn't much damage they can do. For everything else requiring sudo, they will be met with a password prompt.
# Open the sudoers file safely
sudo visudo -f /etc/sudoers.d/hngdevops
Note: I used `vim` as my editor. You can use any editor of your choice: `vi`, `vim`, `emacs`, `nano`, etc. If you don't have vim installed: `sudo apt install vim`
Add this exact line and save the file:
hngdevops ALL=(root) NOPASSWD:/usr/sbin/sshd,/usr/sbin/ufw
Always use visudo to edit sudoers files. It validates syntax before saving. A broken sudoers file can lock you out of sudo on your own server.
Step 4: Hardening SSH Access
This step is about disabling root login and password-based authentication entirely. Every server on the internet gets thousands of automated login attempts per day from bots scanning for weak credentials. They always try root first because root exists on every Linux machine by default. And passwords can be guessed, brute-forced, or leaked.
By disabling both, the only way into your server is possession of your private key file, a cryptographic secret that is computationally infeasible to brute-force.
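If you don't already have a key pair on your local machine, generating one is a single command. This is a sketch: the file path and comment are placeholders, and ed25519 is simply a sensible modern default.

```shell
# Generate an ed25519 key pair with no passphrase (-N '') into a
# temporary path. For real use you'd typically write to ~/.ssh/id_ed25519
# and set a passphrase.
keyfile="$(mktemp -d)/id_ed25519"
ssh-keygen -t ed25519 -N '' -C "hngdevops-demo" -f "$keyfile" -q

# The public key is what goes into authorized_keys on the server
cat "$keyfile.pub"
```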
sudo nano /etc/ssh/sshd_config
Find and update these lines; if you can't find them, just add them to the file, each on its own line, like this:
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
Important: Before saving this, open a second terminal window and verify you can SSH in as `hngdevops` using your key. If you save this and you're locked out, you'll need to use Oracle Cloud's browser console to recover.
Then restart SSH to apply the changes:
sudo systemctl restart sshd
# Verify the config reads correctly
sudo sshd -T | grep -E "permitrootlogin|passwordauthentication"
You should see:
permitrootlogin no
passwordauthentication no
Step 5: Configuring UFW (Firewall)
Your server has 65,535 network ports. By default, any service running on any port is potentially reachable from the entire internet. UFW closes all of them except the three you explicitly need: 22 (SSH), 80 (HTTP), and 443 (HTTPS). This dramatically shrinks your attack surface. A port that is closed cannot be exploited, no matter what software is running behind it.
# Deny all incoming connections by default
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow only the required ports
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Enable the firewall
sudo ufw enable
# Verify it is active
sudo ufw status verbose
Note: Oracle Cloud Ubuntu images don't come with UFW pre-installed. If you get a "command not found" error, install it first with
sudo apt install ufw -y
Enable UFW only after confirming port 22 is allowed. Enabling it without allowing SSH will lock you out immediately.
Step 6: Getting a Domain Name
Before installing Nginx or setting up SSL, I needed a domain name. Let's Encrypt won't issue a certificate for a bare IP address. It can only verify ownership of a domain. So SSL is impossible without one.
I didn't want to pay for a domain just yet, so I went looking for a free option. I first tried FreeDNS (afraid.org): signed up, created a subdomain, and filled in my server's IP as the destination. However, the DNS never resolved. I waited a while in case it needed time to propagate, tried again, and still got nothing, so I switched to DuckDNS.
DuckDNS is completely free, takes about 5 minutes to set up, and works perfectly with Let's Encrypt. Here's how to set it up:
- Go to duckdns.org and log in with Google or GitHub
- Choose a subdomain name. Mine became `gideonbature.duckdns.org`
- Enter your server's public IP in the IP field and click Update IP
Then verify it's pointing to your server:
ping <your-subdomain-name>.duckdns.org
# Should show your server's IP in the response
Once the ping resolves to your server's IP, you're ready to proceed.
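ping works, but it's worth knowing you can check name resolution directly without sending any packets. A sketch using `getent` (part of glibc, so it's on every Ubuntu box); `localhost` stands in for your DuckDNS name purely so the snippet runs anywhere:

```shell
# Resolve a hostname to its IPv4 address using the system resolver.
# On the server you'd substitute <your-subdomain-name>.duckdns.org.
host="localhost"
resolved=$(getent ahostsv4 "$host" | awk '{print $1; exit}')
echo "$host resolves to $resolved"
```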
Step 7: Installing and Configuring Nginx
Nginx is your web server. It listens on ports 80 and 443, receives HTTP requests, and decides what to serve. In real production systems, Nginx sits in front of your actual application and handles routing, SSL termination, rate limiting, caching, and more. Here we use it to serve two routes.
sudo apt update
sudo apt install nginx -y
sudo systemctl enable nginx
sudo systemctl start nginx
Create your HTML page:
sudo nano /var/www/html/index.html
<!DOCTYPE html>
<html>
<body>
<h1><your-hng-username></h1>
<p>HNG DevOps Stage 0</p>
</body>
</html>
Your username must be visible text on the page. Not in a comment, not hidden with CSS.
Then create your Nginx config:
sudo nano /etc/nginx/sites-available/hng
server {
listen 80;
server_name <your-subdomain-name>.duckdns.org;
# Serve HTML at root
location / {
root /var/www/html;
index index.html;
}
# Return JSON at /api
location = /api {
add_header Content-Type application/json;
return 200 '{"message":"HNGI14 Stage 0","track":"DevOps","username":"<your-hng-username>"}';
}
}
Notice the = sign in location = /api. That is an exact match. Without it, /api/anything would also match, which is sloppy.
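To make the difference concrete, here is a small illustrative fragment (not part of the actual config above) contrasting the two matching styles:

```nginx
# Prefix match: /api, /api/, and /api/anything all hit this block
location /api {
    return 200 'prefix match';
}

# Exact match: only /api itself hits this block
location = /api {
    return 200 'exact match';
}
```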
Enable the site and reload:
sudo ln -s /etc/nginx/sites-available/hng /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl reload nginx
Step 8: The Oracle Cloud Firewall Problem
This is where a lot of people get stuck with Oracle Cloud specifically, and it caught me too. After Nginx was running and confirmed listening on port 80, I still couldn't reach my server from the outside:
curl -I http://<your-subdomain-name>.duckdns.org
# curl: (28) Failed to connect to port 80 after 75326 ms
The issue is that Oracle Cloud has two separate layers of firewall that both need to be opened:
Layer 1: Oracle's Security List (network level)
- Go to Oracle Cloud Console → Networking → Virtual Cloud Networks
- Click your VCN → Security Lists → default security list
- Click Add Ingress Rules and add:
| Source CIDR | Protocol | Port |
|---|---|---|
| 0.0.0.0/0 | TCP | 80 |
| 0.0.0.0/0 | TCP | 443 |
Layer 2: iptables on the server itself
Oracle Cloud Ubuntu images ship with extra iptables rules that block ports regardless of UFW. This is the one most people miss:
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
# Make these rules survive a reboot
sudo apt install iptables-persistent -y
sudo netfilter-persistent save
After both layers were open, everything started working:
curl -I http://<your-subdomain-name>.duckdns.org
# HTTP/1.1 200 OK ✅
Step 9: SSL with Let's Encrypt
HTTP sends everything in plain text: passwords, session tokens, personal data. Anyone on the same network can read it. HTTPS encrypts the connection so only the client and server can read the traffic. In 2026 there is no acceptable reason to run a public website without HTTPS.
The 301 redirect specifically matters because it tells browsers and search engines this site is HTTPS only, permanently. Browsers cache 301s, so after the first visit they never even attempt HTTP again.
# Install Certbot
sudo apt install certbot python3-certbot-nginx -y
# Obtain and install the certificate
sudo certbot --nginx -d <your-subdomain-name>.duckdns.org
Certbot will ask for your email address, ask you to agree to terms, and then automatically obtain the certificate, modify your Nginx config to use it, and set up the HTTP → HTTPS 301 redirect. Auto-renewal is also configured automatically via a systemd timer.
Verify both directions work:
# Should show 301 Moved Permanently
curl -I http://<your-subdomain-name>.duckdns.org
# Should show 200 OK
curl -I https://<your-subdomain-name>.duckdns.org
Final Verification
Before submitting, I ran through every check to make sure nothing was missed:
# API response
curl https://<your-subdomain-name>.duckdns.org/api
# HTML page
curl https://<your-subdomain-name>.duckdns.org
# 301 redirect
curl -I http://<your-subdomain-name>.duckdns.org
# HTTPS working
curl -I https://<your-subdomain-name>.duckdns.org
# SSH hardening
sudo sshd -T | grep -E "permitrootlogin|passwordauthentication"
# UFW status
sudo ufw status
Everything came back clean.
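One extra check worth doing: since the `/api` JSON must match exactly, parse it rather than eyeballing it. A sketch using python3 (preinstalled on Ubuntu); the response string here is pasted in place of a live curl against your own domain:

```shell
# In practice: response=$(curl -s https://<your-subdomain-name>.duckdns.org/api)
response='{"message":"HNGI14 Stage 0","track":"DevOps","username":"your-hng-username"}'

# Fail loudly if the JSON is malformed or a field is off
echo "$response" | python3 -c '
import json, sys
data = json.load(sys.stdin)
assert data["message"] == "HNGI14 Stage 0", data
assert data["track"] == "DevOps", data
print("JSON shape OK, username:", data["username"])
'
```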
The Big Picture
Looking back at everything, Stage 0 is really about building a secure foundation. Every single step answers a specific threat:
| What we did | Why it matters |
|---|---|
| Non-root user | Limits damage from mistakes |
| Key-based SSH only | Stops password brute force attacks |
| Root login disabled | Removes the default target for bots |
| UFW configured | Closes unnecessary attack surface |
| HTTPS with valid cert | Encrypts data in transit and proves identity |
A server without these protections is not a question of if it gets compromised. It is a question of when. With all of these in place, you have something that can sit on the public internet and hold up.
Stage 1 is next. Follow along as I keep documenting the journey.