Introduction
I decided to write this after diving into the ongoing conversation around learned helplessness in software engineering. It's something David Heinemeier Hansson (creator of Ruby on Rails) has been very vocal about, especially on Twitter. His points may resonate with you depending on your business needs and the layers of abstraction you are willing to take on. I've seen many companies rack up huge cloud bills, and that can easily convince smaller teams that they need to do the same to be "serious." But a lot of this complexity is sold to us by vendors whose business depends on making things look harder than they really are: the so-called "merchants of complexity."
Learned helplessness, in this context, happens when engineering teams slowly lose the ability, or even the confidence, to manage and understand their own infrastructure. Over time, everything becomes someone else's service: databases, hosting, even cron jobs. And when that happens, teams risk losing technical depth, the ability to troubleshoot under pressure, and even the curiosity that drives real innovation.
The truth is, setting up your own infrastructure isn't always as hard or as costly as it seems. Sometimes going hands-on (provisioning your own servers, configuring your own network) can teach you more and cost you less, and understanding the layers beneath the abstraction gives you real control. That's a power every engineer should have.
All of this led me to revisit the foundational layers of cloud infrastructure, not to throw shade at modern abstractions, but to get a clearer picture of what they're built on. In this article, I'll walk through a high-level setup of a simple, non-production-ready web app, built purely for learning and technical understanding. It's a hands-on journey that starts at the Infrastructure-as-a-Service (IaaS) layer, the lowest abstraction tier in cloud computing (the others being PaaS and SaaS). This isn't the "right" way for every project, but it shows you what the higher layers are built on.
Most of us have used cloud platforms in some form, whether it's deploying serverless functions like AWS Lambda or Firebase Cloud Functions, or using tools like Heroku or Vercel that abstract away orchestration entirely. But beneath all that convenience lies real, raw infrastructure: virtual machines, subnets, proxies, and firewalls. This article is a small tutorial that dives into exactly that. I'll also drop "nuggets" throughout: pointers to deepen your understanding if you're curious to dig further at any point in the process.
Resources Used for This Exploration
To keep things practical and grounded, I built a small full-stack notes application that serves as the foundation for the tutorial. The frontend is a Next.js application, responsible for rendering UI and communicating with the backend via API calls. The backend is a lightweight Node.js app built with Koa, handling user authentication and CRUD operations for notes. For data storage, I used a PostgreSQL database containerized and hosted within the private subnet. Everything runs on Akamai’s cloud infrastructure—specifically their VPC and Linode offerings—which provide just enough control to explore low-level networking, subnetting, and proxy setups, without overwhelming complexity.
Provisioning a VPC and Partitioning Your Network
The first step is to rent a VPS from a provider that gives you fine-grained control over networking; options include DigitalOcean, Linode, and AWS EC2. For this project, I chose Akamai's Linode platform, which allowed me to create a Virtual Private Cloud (VPC) and define custom subnets. I partitioned the network into two subnets: a public subnet that can access the internet (ideal for hosting the frontend), and a private subnet that has no direct internet access (reserved for backend services and the database). When creating your subnets, you'll need to allocate CIDR blocks to define the IP ranges. For example, the public subnet could use `10.0.1.0/24`, while the private subnet could use `10.0.2.0/24`. These ranges should be chosen with future growth and IP efficiency in mind. A good practice is to size your subnets based on how many nodes or services you expect to scale into each zone.
💡 Nugget: Take some time to explore how CIDR blocks work, how IP addresses are distributed, and why certain ranges are considered private. It’s a foundational concept for understanding modern networking.
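To build some of that intuition, here's a minimal bash sketch (no external tools, just shell arithmetic) that computes how many usable host addresses a prefix leaves and checks whether an address falls inside a block. The function names are my own, not from any standard utility:

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Usable host addresses in a /prefix (network + broadcast excluded).
hosts_in_prefix() {
  echo $(( (1 << (32 - $1)) - 2 ))
}

# Does address $1 fall inside CIDR block $2 (e.g. 10.0.2.0/24)?
in_cidr() {
  local net prefix mask ip
  net=${2%/*}; prefix=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  ip=$(ip_to_int "$1")
  [ $(( ip & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

hosts_in_prefix 24                       # → 254
in_cidr 10.0.2.17 10.0.2.0/24 && echo inside
in_cidr 10.0.1.5  10.0.2.0/24 || echo outside
```

Running this shows why a /24 per subnet comfortably fits this tutorial's handful of nodes, with room to grow.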
Creating Firewalls to Enforce Subnet Isolation
With your subnets in place, the next step is to enforce network boundaries using firewall rules. Firewalls allow you to control which traffic is allowed to enter or leave a node based on IP ranges, ports, and protocols. For this tutorial, we’ll design our firewall to completely isolate the private subnet from the internet, while exposing only the necessary ports in the public subnet. Let’s break this down into inbound and outbound rules.
Inbound Rules
Inbound rules govern what kind of traffic is allowed into your nodes.
- Default Deny: By default, deny all inbound traffic and only allow what's explicitly needed. This is the safest baseline.
- ICMP for Testing (Optional): You may want to temporarily allow ICMP (ping) traffic to help debug connectivity.
  - Protocol: ICMP
  - Source: `0.0.0.0/0` (or your own IP)
  - Action: Accept
- Public Subnet — Web Access (Frontend App): Your public-facing frontend must be reachable from the internet.
  - Ports: `80` (HTTP), `443` (HTTPS)
  - Protocol: TCP
  - Source: `0.0.0.0/0`
  - Action: Accept
- Private to Public — Forward Proxy Access: To allow your private nodes (backend/DB) to make outbound requests via the public proxy, you must enable inbound access on the proxy port of the public node.
  - Port: `8080` (or your chosen proxy port)
  - Protocol: TCP
  - Source: `10.0.2.0/24` (private subnet IP range)
  - Action: Accept
- Private Subnet — Internal Communication (Backend ↔ DB): Backend services in the private subnet need to talk to each other, especially to your database node.
  - Port: e.g., `5432` (PostgreSQL)
  - Protocol: TCP
  - Source: `10.0.2.0/24`
  - Action: Accept
- SSH Access for Maintenance: You'll want to be able to SSH into your Linodes for debugging or setup.
  - Port: `22`
  - Protocol: TCP
  - Source: Your public IP (or `0.0.0.0/0` if unrestricted, though this is not recommended)
  - Action: Accept
  - 💡 Tip: For security, restrict this to your personal IP only.
💡 Nugget: SSH uses asymmetric cryptography—your private key remains on your machine, while the public key is added to the server's `~/.ssh/authorized_keys`. Understanding this is essential when managing key-based access securely.
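To make that nugget concrete, here's a short sketch of generating a key pair locally. It writes into a scratch directory so it won't touch your real keys; the server address in the final comment is a placeholder:

```shell
# Generate an ed25519 key pair into a scratch directory.
# -N '' means no passphrase (fine for a demo; use one for real keys).
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$keydir/id_ed25519" -C "demo-key" -q

# The public half is the only part that ever leaves your machine:
cat "$keydir/id_ed25519.pub"

# On a real server you'd append it to ~/.ssh/authorized_keys, e.g.:
#   ssh-copy-id -i "$keydir/id_ed25519.pub" root@<your-linode-ip>
```

The private key (`id_ed25519`, no extension) never leaves your machine, which is exactly why key-based auth is safer than passwords over the wire.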
Outbound Rules
Outbound rules control what kind of traffic your nodes are allowed to initiate.
- ICMP for Testing (Optional): Allow outbound ping (ICMP) for basic connectivity tests.
  - Protocol: ICMP
  - Destination: `0.0.0.0/0`
  - Action: Accept
- Internet Access (Public Subnet Only): Allow HTTP and HTTPS requests from the public subnet.
  - Ports: `80`, `443`
  - Protocol: TCP
  - Destination: `0.0.0.0/0`
  - Action: Accept
- Private Subnet via Proxy (Handled Later): The private subnet won't have direct internet access. Instead, outbound requests will go through the forward proxy configured on the public node. We'll configure this in a later section.
This firewall setup ensures that your private services are protected, your frontend is accessible, and your infrastructure remains tightly controlled. Always test your rules incrementally—misconfigurations are common but easily fixed if introduced step-by-step.
An example of inbound rules:

| Action | Source (IPv4) | Port | Purpose |
|---|---|---|---|
| ACCEPT | `0.0.0.0/0` | 80 | Public HTTP access (frontend) |
| ACCEPT | `0.0.0.0/0` | 3000 | (Optional) direct access to the frontend's internal port (e.g., for testing or bypassing Apache) |
| ACCEPT | `0.0.0.0/0` | 443 | Public HTTPS access |
| ACCEPT | `0.0.0.0/0` | - | ICMP for ping/testing |
| ACCEPT | `0.0.0.0/0` | 22 | SSH access (can be restricted to your personal IP) |
| ACCEPT | `10.0.1.2/32` | 8000 | Backend service access from the public proxy |
| ACCEPT | `10.0.2.0/24` | 8080 | Proxy communication from private subnet to public proxy |
| ACCEPT | `10.0.2.0/24` | 5432 | PostgreSQL access for backend services in the same subnet |
Example of outbound rules:

| Action | Destination (IPv4) | Port | Purpose |
|---|---|---|---|
| ACCEPT | `0.0.0.0/0` | 443 | HTTPS (package installs, certs, APIs) |
| ACCEPT | `0.0.0.0/0` | 80 | HTTP (package installs, etc.) |
| ACCEPT | `0.0.0.0/0` | - | ICMP (ping) |
Setting Up Your First Linode (Public Subnet)
With your VPC and firewall rules in place, it's time to spin up your actual infrastructure nodes—starting with a public-facing Linode. This Linode will act as the gateway to your application, serving your frontend (and optionally proxying to your backend), and it’s where we’ll verify that your network and firewall setup is working correctly.
To begin, provision a low-cost Linode (around \$5/month at the time of writing) and assign it to your public subnet. Make sure to also attach the firewall you previously configured, so that all the carefully crafted rules now apply to this instance.
Once deployed, you’ll need to access the machine. You can do this in two ways:
- SSH into the Linode using your terminal and the public IP.
- Use LISH (Linode Shell) from the Akamai console if you don’t have SSH access yet.
Inside your Linode, perform a few critical connectivity tests to ensure your networking is correctly configured:
- Test Internet Access: Run a simple ping to Google's DNS to verify outbound access is allowed by your firewall:
ping 8.8.8.8
- Update the Package Index: This verifies that outbound HTTP/HTTPS is working and you can install packages:
sudo apt update
If both commands succeed, your firewall and public subnet are configured correctly. You now have a functioning public node, fully capable of installing software, serving applications, and acting as a forward or reverse proxy for your private subnet. This will serve as the entry point to your application infrastructure as we move forward.
Setting Up a Private Linode (Backend Node)
Next, provision your private Linode, which will host your backend application. Just like the public node, this Linode is also very affordable (around \$5/month), but unlike the public node, this one will not be assigned a public IP address—ensuring it has no direct access to or from the internet.
When creating this Linode:
- Assign it to the private subnet you created earlier.
- Do not assign a public IP address. This isolation is intentional: your backend should only talk to the frontend and the database, not the public internet.
- Ensure your firewall rules allow this node to communicate with:
- The public node (for outbound access via proxy)
- Other private nodes like the DB server
Since the private node will not have internet access, you’ll configure a forward proxy later in the article—hosted on the public Linode—to help it install packages or make outbound HTTP requests securely and indirectly.
In addition to forward proxying, you'll also need a reverse proxy, typically using Apache, to:
- Route external requests to the appropriate internal services (e.g., `/api` to the backend)
- Handle SSL termination and clean URL routing
To configure Apache as a reverse proxy, you’ll later modify the default site config file:
sudo nano /etc/apache2/sites-available/000-default.conf
or better yet, create your own virtual host file to keep things modular and clear.
Buy and Configure a Domain
To make your app feel more “real” and not just accessible by an IP, you should purchase a cheap domain (e.g., from Namecheap or Google Domains). Once you own a domain:
- Create an A Record that maps your domain (e.g., `notes.online`) to the public IP of your frontend Linode.
- Once DNS propagation completes, install a free HTTPS certificate via Let's Encrypt. If Certbot isn't installed yet, get it with `sudo apt install certbot python3-certbot-apache`, then run:
sudo certbot --apache -d yourdomain.com
- If your DNS records are correctly set up, this command will:
  - Validate domain ownership
  - Automatically install an HTTPS cert
  - Store it in `/etc/letsencrypt/live/yourdomain.com`
This setup ensures that your frontend can be accessed securely via your custom domain, and that traffic can be reverse-proxied to your backend securely—all while maintaining strict network isolation between tiers.
Setting Up a Private Linode for the Database (PostgreSQL)
The final component of your infrastructure setup is the database node, which will also reside entirely in your private subnet. This ensures that your data is not exposed to the public internet and can only be accessed by other internal services—specifically, your backend application.
Here’s how to set it up:
- Provision a new Linode (again, a \$5/month plan will suffice).
- Assign this Linode to the private subnet, just like your backend node.
- Apply the same firewall to this node so that only traffic from the private subnet—especially your backend—can reach it.
- Do not assign a public IP. Your DB should never be exposed to the internet.
Once your Linode is up and running, access it via LISH (since it has no public IP for direct SSH) or SSH through your public Linode as a jump box, and install PostgreSQL:
sudo apt update
sudo apt install postgresql postgresql-contrib -y
After installation, configure PostgreSQL to accept connections over the private subnet:
Step 1: Update postgresql.conf
This file controls PostgreSQL’s runtime behavior. You need to allow it to listen for connections beyond localhost
:
sudo nano /etc/postgresql/14/main/postgresql.conf
Find the line:
#listen_addresses = 'localhost'
Replace it with:
listen_addresses = '*'
This tells PostgreSQL to listen on all network interfaces, including the private IP assigned by the subnet. (For a tighter setup, you could list just the private address instead of `'*'`.)
Step 2: Update pg_hba.conf
This file defines who can connect, from where, and how they authenticate.
sudo nano /etc/postgresql/14/main/pg_hba.conf
At the bottom, add a rule that allows incoming connections from the private subnet:
host all all 10.0.2.0/24 md5
This means: allow all users to connect to all databases from any machine in the private subnet using password (md5) authentication. On newer PostgreSQL versions, `scram-sha-256` is the preferred password method over `md5`.
💡 Nugget: Spend time reading about how `postgresql.conf` and `pg_hba.conf` interact. They are the gatekeepers of your DB's network exposure and authentication model.
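If you'd rather script the two edits above instead of opening nano, here's a sketch using sed. It runs against scratch copies so it's safe to try anywhere; on the real DB node you'd target the files under `/etc/postgresql/14/main/` instead:

```shell
# Scratch copies standing in for postgresql.conf and pg_hba.conf.
conf=$(mktemp) && echo "#listen_addresses = 'localhost'" > "$conf"
hba=$(mktemp)  && echo "host all all 127.0.0.1/32 md5"   > "$hba"

# Uncomment listen_addresses and listen on all interfaces.
sed -i "s/^#listen_addresses = 'localhost'/listen_addresses = '*'/" "$conf"

# Allow password-authenticated connections from the private subnet.
echo "host    all    all    10.0.2.0/24    md5" >> "$hba"

grep '^listen_addresses' "$conf"
tail -n 1 "$hba"
```

The same sed pattern, pointed at the real paths and followed by `sudo systemctl restart postgresql`, makes the DB setup repeatable.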
Step 3: Restart PostgreSQL
Apply your configuration changes:
sudo systemctl restart postgresql
Your PostgreSQL instance is now fully isolated within the private subnet and only reachable by other nodes in the same subnet—specifically your backend node. You’ve effectively recreated a secure, cloud-style VPC networking setup, but on your own terms, for a fraction of the cost.
Enhance Communication Between Nodes and the Internet
In previous sections, we intentionally designed the infrastructure so that nodes in the private subnet do not have direct access to the internet. This is a common and recommended security posture—but it introduces a challenge: how do backend services fetch updates, install packages, or interact with external APIs?
The solution is to introduce a forward proxy in the public subnet. This allows private nodes to send outbound traffic via a trusted middleman (the public node), without exposing themselves directly to the internet.
Why Not Use Basic NAT?
While it's tempting to set up a 1:1 NAT (Basic NAT) for simplicity, this approach bypasses the layered security model we’re trying to build. It essentially grants your private nodes direct exposure, undermining the purpose of subnet separation.
🔗 This is the official documentation of the Akamai Forward Proxy
Set Up a Forward Proxy with Apache (on the Public Node)
Let’s walk through setting up a proper forward proxy using Apache.
Step 1: Access the Public Linode
SSH into your public-facing Linode (in the public subnet) or use the LISH console via Akamai’s dashboard.
ssh root@<your-public-linode-ip>
Step 2: Install & Prepare Apache
Update your packages and ensure Apache is installed:
sudo apt update -y
sudo apt install apache2 -y
Enable necessary Apache modules:
sudo a2enmod proxy proxy_http proxy_connect
Step 3: Create a Forward Proxy Configuration
Open a new config file:
sudo nano /etc/apache2/sites-available/fwd-proxy.conf
Paste the following configuration (adjust IPs as needed):
# Listen on the public node's VPC IP (in the public subnet) at port 8080.
# This sets up the Apache server to accept proxy requests from the private subnet via port 8080.
Listen 10.0.1.2:8080

<VirtualHost *:8080>
    # Admin email for server issues (not strictly required unless you're sending error reports).
    ServerAdmin webmaster@localhost

    # Root directory for served files (not used in proxying but required syntactically).
    DocumentRoot /var/www/html

    # Log errors from proxy activity here (useful for debugging).
    ErrorLog ${APACHE_LOG_DIR}/fwd-proxy-error.log

    # Log all access through the proxy.
    CustomLog ${APACHE_LOG_DIR}/fwd-proxy-access.log combined

    # Enable forward proxy mode (Apache acts as a middleman for outbound traffic).
    ProxyRequests On

    # Adds headers like Via: to show the request went through a proxy (useful for tracing).
    ProxyVia On

    # Restrict proxy access to IPs from the private subnet only.
    <Proxy "*">
        Require ip 10.0.2.0/24
    </Proxy>
</VirtualHost>
💡 Nugget: The `ProxyRequests On` directive enables forward proxying. The `<Proxy "*">` block restricts usage of this proxy to requests originating from your private subnet only.
Save and close the file.
Step 4: Enable the Proxy Site and Restart Apache
sudo chown root:root /etc/apache2/sites-available/fwd-proxy.conf
sudo chmod 0644 /etc/apache2/sites-available/fwd-proxy.conf
sudo a2ensite fwd-proxy
sudo systemctl restart apache2
Test the Proxy
From a private Linode, you can now route outbound traffic through the proxy:
curl -x http://10.0.1.2:8080 https://example.com
If successful, your private node is now securely communicating with the internet through your public node—without needing a public IP of its own.
This setup retains your network isolation while still enabling secure, auditable internet access.
Enhance Communication Between Nodes and the Internet (Part 2: Private Nodes)
Now that your forward proxy is configured and running on the public Linode, it’s time to set up your private nodes—specifically the backend and database Linodes—to route their outbound internet traffic through this proxy.
Backend Private Node Configuration
On your backend node in the private subnet (which should not have direct access to the internet), you'll need to explicitly configure it to use the forward proxy you previously set up on the public Linode (e.g., `10.0.1.2:8080`).
Start by configuring the proxy settings for `apt`, so you can perform package installations via the proxy (the second line covers repositories served over HTTPS):
echo 'Acquire::http::proxy "http://10.0.1.2:8080";' | sudo tee /etc/apt/apt.conf.d/proxy.conf
echo 'Acquire::https::proxy "http://10.0.1.2:8080";' | sudo tee -a /etc/apt/apt.conf.d/proxy.conf
Once done, test it using:
sudo apt update
You can also test general HTTP traffic routing through the proxy with:
curl --proxy 10.0.1.2:8080 http://example.com
If both work as expected, proceed to route all HTTP/HTTPS traffic system-wide through the proxy. This ensures that any application or system utility that needs external access will use the proxy automatically.
Edit the `/etc/environment` file to export proxy variables globally:
sudo nano /etc/environment
Then add:
http_proxy="http://10.0.1.2:8080"
https_proxy="http://10.0.1.2:8080"
no_proxy="localhost,127.0.0.1,::1"
These environment variables will persist across sessions and reboots. For them to take full effect, reboot the node:
sudo reboot
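The apt and environment settings above can be bundled into one small, re-runnable helper. This is only a sketch (the function name and the parameterized file paths are mine); taking the paths as arguments also makes it easy to dry-run against temp files before touching `/etc`:

```shell
# Write proxy settings for apt and for the global environment file.
# Usage: configure_proxy <proxy-url> <apt-conf-path> <env-file-path>
configure_proxy() {
  local proxy=$1 aptconf=$2 envfile=$3
  printf 'Acquire::http::proxy "%s";\nAcquire::https::proxy "%s";\n' \
    "$proxy" "$proxy" > "$aptconf"
  {
    echo "http_proxy=\"$proxy\""
    echo "https_proxy=\"$proxy\""
    echo 'no_proxy="localhost,127.0.0.1,::1"'
  } > "$envfile"
}

# Dry run against temp files. For real use (as root), the targets would be
# /etc/apt/apt.conf.d/proxy.conf and /etc/environment.
aptconf=$(mktemp)
envfile=$(mktemp)
configure_proxy "http://10.0.1.2:8080" "$aptconf" "$envfile"
cat "$aptconf" "$envfile"
```

Because both files are rewritten wholesale, re-running the helper after changing the proxy address leaves no stale entries behind.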
Setup Applications and Storage
With your networking, firewall, and proxy configuration complete, the next step is to deploy your applications on the respective nodes. This section guides you through cloning, configuring, and starting both the frontend and backend apps used in this tutorial, along with their storage setup.
Public Linode – Frontend Application
Your public Linode is where the frontend (Next.js) application will live, and it’s accessible to the outside world via your domain and Apache reverse proxy.
Follow these steps:
- Run an update on your packages:
sudo apt update
- Install Git:
sudo apt install git
- Clone the frontend repository made specifically for this tutorial:
git clone https://github.com/Joojo7/notes-app-frontend
cd notes-app-frontend
- Install the required Node.js dependencies:
npm install
There's no need to over-engineer this with a CI/CD pipeline, since it's a one-off learning project. You can start the app directly or with a tool like `pm2` if you want to keep it alive in the background.
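If you'd rather not install `pm2`, a small systemd unit keeps the app alive across crashes and reboots. This is a sketch: the unit name, clone path, and the assumption that `npm run start` serves the built app on port 3000 are all mine, so adjust to your setup:

```ini
# /etc/systemd/system/notes-frontend.service (name and paths assumed)
[Unit]
Description=Next.js frontend for the notes app
After=network.target

[Service]
WorkingDirectory=/root/notes-app-frontend
ExecStart=/usr/bin/npm run start
Restart=always
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now notes-frontend`, and check on it with `systemctl status notes-frontend`.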
Private Linode – Backend Application
On your private backend Linode, follow similar steps, with additional backend-specific setup:
- Update the system and install Git:
sudo apt update
sudo apt install git
- Clone the backend repository:
git clone https://github.com/Joojo7/notes-app-backend
cd notes-app-backend
- Ensure you have the following installed:
- Node.js (v18 or higher)
- npm
- Docker + Docker Compose
- Create a `.env` file at the root of the project. You can copy from `.env.example` and customize as needed (if you are using the separate database Linode from earlier, point `DB_HOST` at its private IP instead of `localhost`):
DB_USER=your_db_user
DB_PASSWORD=your_db_password
DB_HOST=localhost
DB_PORT=5432
DB_NAME=notes_db
JWT_SECRET=your_jwt_secret
JWT_EXPIRATION=15m
PORT=8000
- To simplify the startup process, a `startup.sh` script has been provided. Make it executable and run it:
chmod +x startup.sh
./startup.sh
This script will handle the Docker Compose setup and the backend service bootstrapping.
More details are available in the backend repo’s README:
🔗 notes-app-backend GitHub
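I won't reproduce the repo's actual `startup.sh` here, but a bootstrap script of this kind typically looks something like the sketch below. The function name and exact steps are my guesses, not the repo's contents:

```shell
#!/usr/bin/env bash
# Hypothetical bootstrap sketch; the real startup.sh lives in the repo.
set -euo pipefail

bootstrap() {
  # Refuse to start without configuration.
  if [ ! -f .env ]; then
    echo "missing .env: copy .env.example and fill it in first" >&2
    return 1
  fi
  docker compose up -d   # bring up the PostgreSQL container
  npm install            # install backend dependencies
  npm run start          # launch the Koa backend
}
```

Guarding on `.env` first means a half-configured node fails loudly instead of starting a backend that can't reach its database.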
Setup Applications and Storage (continued)
Serve and Test the Application
Once both the frontend and backend are installed and configured, it’s time to serve the application to the internet and test its full flow. The public Linode will act as a reverse proxy, routing requests to the appropriate services via Apache.
Reverse Proxy from Domain to Frontend (and Backend API)
To serve your frontend from `https://yourdomain.com` without needing to append a `:3000` port, we'll configure Apache as a reverse proxy.
Steps to Enable Apache Modules
SSH into your public Linode and enable the necessary Apache modules:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod headers
sudo a2enmod rewrite
Then restart Apache to apply the changes:
sudo systemctl restart apache2
Configure Apache Reverse Proxy for HTTP and HTTPS
- Create or modify a site configuration file:
sudo nano /etc/apache2/sites-available/yourdomain.com.conf
- Paste the following configuration:
HTTP (Port 80) – used for initial redirect or Certbot challenge:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName yourdomain.com
    DocumentRoot /var/www/html

    ErrorLog ${APACHE_LOG_DIR}/reverse-proxy-error.log
    CustomLog ${APACHE_LOG_DIR}/reverse-proxy-access.log combined

    # Reverse proxying only; never act as an open forward proxy here.
    ProxyRequests Off

    # Reverse proxy to backend API
    ProxyPass /api/notes/ http://10.0.2.2:8000/
    ProxyPassReverse /api/notes/ http://10.0.2.2:8000/

    # Proxied resources must be reachable by public clients.
    <Proxy *>
        Require all granted
    </Proxy>
</VirtualHost>
HTTPS (Port 443) – full site access and secure reverse proxy:
<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin webmaster@localhost
    ServerName yourdomain.com
    DocumentRoot /var/www/html

    ProxyRequests Off

    # Reverse proxy to backend API (must come before the frontend proxy)
    ProxyPass /api/ http://10.0.2.2:8000/
    ProxyPassReverse /api/ http://10.0.2.2:8000/

    # Reverse proxy to the Next.js frontend
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/

    # Allow the Certbot HTTP challenge
    <Location /.well-known/acme-challenge>
        Require all granted
    </Location>

    # SSL configuration (provided by Certbot)
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # Logging
    ErrorLog ${APACHE_LOG_DIR}/reverse-proxy-error.log
    CustomLog ${APACHE_LOG_DIR}/reverse-proxy-access.log combined
</VirtualHost>
</IfModule>
Final Steps
- Enable the new site config:
sudo a2ensite yourdomain.com.conf
- Disable the default config (optional but recommended):
sudo a2dissite 000-default.conf
- Reload or restart Apache:
sudo systemctl reload apache2
DNS Setup and Request Flow
- Go to your domain provider (e.g., Namecheap, GoDaddy, etc.):
  - Add an A record that points your domain (e.g., `yourdomain.com`) to the public IP of your public Linode.
  - Wait for DNS propagation — this may take anywhere from a few minutes to a few hours depending on TTL settings.
After DNS is live, the request flow will look like this:
Client Browser
↓
DNS Lookup (yourdomain.com resolves to public IP)
↓
Firewall (allows ports 80/443 to Apache)
↓
Apache Web Server (reverse proxy)
↓
- /api/ → Private backend service via 10.0.2.2:8000
- / → Local frontend served from port 3000
- Now visit `https://yourdomain.com` — your frontend should load without the port, and API requests to `/api/notes` should proxy correctly to the backend on the private node.
Future Improvements and Deep Dives to Sharpen Your Understanding
First off—if you’ve made it this far, take a moment to acknowledge what you’ve accomplished. You’ve not only provisioned infrastructure at the IaaS level, but also configured firewalls, private/public subnets, secure proxies, reverse routing, and a full-stack deployment—all from scratch. That’s huge.
But this journey doesn’t end here. There’s so much more to explore, and none of it is out of reach.
Add More Services for Realistic Environments
Now that you’ve successfully deployed a frontend, backend, and database, try introducing additional Linodes to simulate multi-service architectures. For example:
- Add a Redis node, a message queue, or a monitoring service like Prometheus or Grafana.
- Observe how service-to-service communication happens over private IPs.
- Practice managing security and performance as your ecosystem grows.
Learn About Load Balancing
Load balancing is a cornerstone of high-availability systems.
- Study how Apache or Nginx can distribute requests across multiple backend servers.
- Try simulating stress or high traffic to watch your load balancing strategy in action.
- Experiment with sticky sessions, round-robin, and IP-hash load balancing techniques.
This is how production-scale infrastructure starts.
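As a starting point, here's what a minimal Apache round-robin sketch could look like, splitting `/api/` traffic across two backend nodes. The second backend IP (`10.0.2.3`) is hypothetical, and you'd need the balancer modules enabled first (`sudo a2enmod proxy_balancer lbmethod_byrequests`):

```apache
# Round-robin pool of backend nodes (10.0.2.3 is a hypothetical second node).
<Proxy "balancer://notes-backend">
    BalancerMember http://10.0.2.2:8000
    BalancerMember http://10.0.2.3:8000
    ProxySet lbmethod=byrequests
</Proxy>

ProxyPass        /api/ "balancer://notes-backend/"
ProxyPassReverse /api/ "balancer://notes-backend/"
```

Swap `lbmethod=byrequests` for `bytraffic` or add `stickysession` parameters to experiment with the strategies mentioned above.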
Rebuild the Proxy Setup in Nginx
Everything you’ve configured with Apache can be reimagined in Nginx.
- Learn how to configure reverse proxies and forward proxies in Nginx.
- Explore advanced modules like `ngx_http_proxy_connect_module` for forward proxying.
- Compare the verbosity, performance, and control between Apache and Nginx.
- You’ll appreciate Apache more—and gain a new appreciation for Nginx too.
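For reference, the HTTPS reverse-proxy portion of this tutorial translates to something like the Nginx sketch below. It's untested here, and the TLS lines assume the same Certbot paths as the Apache setup:

```nginx
server {
    listen 443 ssl;
    server_name yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    # API requests go to the private backend node.
    location /api/ {
        proxy_pass http://10.0.2.2:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else goes to the local Next.js frontend.
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}
```

Note how much of the Apache boilerplate (DocumentRoot, module toggles) simply disappears; that's part of the comparison worth making.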
You’re not starting from zero anymore. You now have mental models to guide you.
Prepare for Production-Like Workflows
Imagine if this app had users. Or stakeholders. Or deadlines. What would you automate?
- Practice triggering deployments from GitHub via webhooks.
- Explore CI/CD pipelines with tools like GitHub Actions, ArgoCD, or Terraform.
- Think about container orchestration and start reading up on Kubernetes or Nomad.
- Look into secrets management, versioned configuration, or observability tooling.
These are things you’ll naturally grow into—and now you know where to start.
Final Thoughts
This wasn’t just a tutorial. It was a walk down the forgotten path of technical self-reliance—the kind that builds confidence, clarity, and curiosity.
Yes, modern SaaS and PaaS platforms are convenient—but they abstract away the very systems we’re responsible for. Sometimes, by getting your hands dirty and walking a little closer to the metal, you reclaim something powerful: your understanding.
So, keep tinkering. Keep asking questions. Keep exploring.
You don’t need a million-dollar cloud budget to learn this stuff.
You just need a \$5 Linode, some grit, and a healthy dose of curiosity.
You’ve got this.