Series: From "Just Put It on a Server" to Production DevOps
Reading time: 12 minutes
Level: Beginner-friendly
Introduction
Every developer has a moment where they think: "I've built this amazing app locally. Now I just need to put it on a server somewhere."
Sounds simple, right?
You rent a server, upload your code, run npm start, and boom—you're live.
This article is about what happens next.
We're going to deploy a real application—the Sales Signal Processing Platform (SSPP)—the old-school way. No Docker. No Kubernetes. No CI/CD magic. Just you, a Linux server, and SSH.
By the end, you'll understand why modern DevOps tools exist—not because someone said they're "best practices," but because you'll feel the pain they solve.
What We're Building
The Sales Signal Processing Platform is an event-driven system that:
- Accepts events via REST API (email sent, page viewed, meeting booked)
- Queues them in Redis for async processing
- Calculates signal scores (how "hot" a sales lead is)
- Stores data in PostgreSQL
- Indexes in Elasticsearch for search/analytics
Why this project?
It's not a toy. It's the real-world SaaS pattern behind products like Segment, Mixpanel, and most sales intelligence platforms.
Architecture (Simple Version):
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Client    │─────▶│ API Service │─────▶│ Redis Queue │
│ (HTTP POST) │      │  (NestJS)   │      │             │
└─────────────┘      └─────────────┘      └──────┬──────┘
                                                 │
                                                 ▼
                                          ┌─────────────┐
                                          │   Worker    │
                                          │  (Node.js)  │
                                          └──────┬──────┘
                                                 │
                 ┌───────────────────────────────┼──────────────────────┐
                 ▼                               ▼                      ▼
           ┌──────────┐                   ┌──────────┐           ┌─────────────┐
           │PostgreSQL│                   │  Redis   │           │Elasticsearch│
           └──────────┘                   └──────────┘           └─────────────┘
Tech Stack:
- API: NestJS (TypeScript)
- Worker: Node.js (TypeScript)
- Databases: PostgreSQL 15, Redis 7, Elasticsearch 8
Step 1: Rent a Server (The Easy Part)
We're using Linode (now Akamai Cloud) because:
- Simple pricing ($5-$10/month gets you started)
- No vendor lock-in (real VMs, not proprietary services)
- Shows you understand infrastructure fundamentals
Create a Linode Instance:
- Sign up at linode.com
- Create a new Linode:
  - Distribution: Ubuntu 22.04 LTS
  - Plan: Shared CPU - Nanode 1GB ($5/mo)
  - Region: Choose closest to you
  - Label: sspp-server-01
- Set a root password (save it somewhere secure)
- Boot it up
What you just got:
- A real computer running Linux somewhere in a datacenter
- A public IP address (e.g., 45.79.123.45)
- SSH access via port 22
Step 2: SSH Into Your Server
SSH (Secure Shell) is how you control remote servers.
ssh root@45.79.123.45
First time? You'll see:
The authenticity of host '45.79.123.45' can't be established.
ED25519 key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no)?
Type yes. This saves the server's "fingerprint" so you know it's the same server next time.
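If you're curious what got saved, ssh-keygen can look a host up in your ~/.ssh/known_hosts (using the example IP from above):

```shell
# Show the raw known_hosts entry recorded for this host
ssh-keygen -F 45.79.123.45
# Show its human-readable fingerprint
ssh-keygen -l -F 45.79.123.45
```

If the server's key ever changes unexpectedly, SSH will refuse to connect, which is exactly the warning you want.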
You're now controlling a computer 500+ miles away.
Type pwd (print working directory):
/root
You're in the home directory of the root user—the superuser who can do anything.
Step 3: Install Dependencies (The Manual Way)
Our app needs:
- Node.js (runtime for our JavaScript/TypeScript)
- PostgreSQL (database)
- Redis (queue and cache)
- Elasticsearch (search/analytics)
Install Node.js
# Update package list
apt update
# Install Node.js 18.x
curl -fsSL https://deb.nodesource.com/setup_18.x | bash -
apt install -y nodejs
# Verify
node --version # Should show v18.x.x
npm --version # Should show v9.x.x
Install PostgreSQL
# Install PostgreSQL
apt install -y postgresql postgresql-contrib
# Start the service
systemctl start postgresql
systemctl enable postgresql # Auto-start on boot
# Create database and user
# Note: PostgreSQL 15+ no longer grants CREATE on the public schema by
# default, so we also make sspp_user the owner of the database
sudo -u postgres psql <<EOF
CREATE DATABASE sales_signals;
CREATE USER sspp_user WITH PASSWORD 'sspp_password';
GRANT ALL PRIVILEGES ON DATABASE sales_signals TO sspp_user;
ALTER DATABASE sales_signals OWNER TO sspp_user;
EOF
Install Redis
# Install Redis
apt install -y redis-server
# Start it
systemctl start redis-server
systemctl enable redis-server
# Test
redis-cli ping # Should return "PONG"
Install Elasticsearch
# Install dependencies
apt install -y apt-transport-https
# Add Elasticsearch GPG key (apt-key is deprecated on Ubuntu 22.04, so use a keyring file)
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
# Add repository
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" > /etc/apt/sources.list.d/elastic-8.x.list
# Install
apt update
apt install -y elasticsearch
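One gotcha: the Elasticsearch 8 package enables X-Pack security (TLS plus authentication) during install, so a plain curl to localhost:9200 will be refused. For a throwaway learning box (and only for that), you can switch it off before starting the service; this assumes the installer wrote an xpack.security.enabled line into the config, which it normally does:

```shell
# Demo-only: turn off TLS/auth so plain http://localhost:9200 works.
# Never do this on a box that holds real data.
sed -i 's/^xpack.security.enabled:.*/xpack.security.enabled: false/' /etc/elasticsearch/elasticsearch.yml
grep '^xpack.security.enabled' /etc/elasticsearch/elasticsearch.yml
```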
# Start it
systemctl start elasticsearch
systemctl enable elasticsearch
# Test (wait ~30 seconds for startup; note that Elasticsearch 8 ships with
# TLS and authentication on by default, so plain HTTP only works if you
# disabled security in /etc/elasticsearch/elasticsearch.yml)
curl localhost:9200
How long did this take? Probably 15-20 minutes.
What happens if you need to do this again? You repeat every command. Manually.
Step 4: Deploy Your Application
Now let's get our code onto the server.
Clone the Repository
cd /opt
git clone https://github.com/daviesbrown/sspp.git
cd sspp
Install API Dependencies
cd /opt/sspp/services/api
npm install
This takes 2-5 minutes. Watch the terminal spam scroll by.
Build the API
npm run build
TypeScript compiles to JavaScript in the dist/ folder.
Create Environment Variables
cat > .env <<EOF
NODE_ENV=production
PORT=3000
DB_HOST=localhost
DB_PORT=5432
DB_NAME=sales_signals
DB_USER=sspp_user
DB_PASSWORD=sspp_password
REDIS_HOST=localhost
REDIS_PORT=6379
ELASTICSEARCH_URL=http://localhost:9200
EOF
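A quick way to sanity-check the file: source it into your shell with auto-export turned on and echo one value back:

```shell
# Load every KEY=value pair from .env into the environment
set -a        # auto-export everything defined while this is on
. ./.env
set +a
echo "$DB_HOST"   # → localhost
```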
Run the API
npm start
Output:
[Nest] 12345 - 12/22/2025, 10:30:15 AM LOG [NestFactory] Starting Nest application...
[Nest] 12345 - 12/22/2025, 10:30:15 AM LOG [InstanceLoader] AppModule dependencies initialized
[Nest] 12345 - 12/22/2025, 10:30:16 AM LOG [RoutesResolver] EventsController {/api/v1/events}
[Nest] 12345 - 12/22/2025, 10:30:16 AM LOG [NestApplication] Nest application successfully started
[Nest] 12345 - 12/22/2025, 10:30:16 AM LOG API listening on port 3000
🎉 Your app is running!
Step 5: Test It
Open another terminal on your local machine (not the server):
# Test the API
curl http://45.79.123.45:3000/api/v1/health
# Expected response:
{
  "status": "ok",
  "timestamp": "2025-12-22T10:31:00.000Z"
}
Send an event:
curl -X POST http://45.79.123.45:3000/api/v1/events \
  -H "Content-Type: application/json" \
  -d '{
    "accountId": "acct_001",
    "userId": "user_001",
    "eventType": "email_sent",
    "timestamp": "2025-12-22T10:00:00Z",
    "metadata": {
      "campaign": "Q4_Outreach"
    }
  }'
# Response:
{
  "status": "accepted",
  "jobId": "1",
  "message": "Event queued for processing"
}
IT WORKS! 🎊
You're a full-stack DevOps engineer now, right?
Step 6: Reality Hits
Go back to your SSH session and press Ctrl+C to stop the app.
Now try accessing the API again:
curl http://45.79.123.45:3000/api/v1/health
Output:
curl: (7) Failed to connect to 45.79.123.45 port 3000: Connection refused
Your app is dead.
Why?
When you run npm start in your SSH session:
- It starts a foreground process
- That process is attached to your SSH session
- When you stop it (Ctrl+C) or disconnect, the process dies
Let's try running it in the background:
npm start &
The & runs it in the background. You get your terminal back.
Now close your SSH session (disconnect).
Wait 10 seconds, then test again:
curl http://45.79.123.45:3000/api/v1/health
Still dead.
Why?
When you disconnect from SSH, the shell sends a SIGHUP (hangup signal) to all child processes, which kills them.
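You can reproduce this in miniature without disconnecting: start a throwaway child process and hang up on it yourself.

```shell
# Simulate the hangup: start a child, send SIGHUP, confirm it died
sleep 300 &
pid=$!
kill -HUP $pid
wait $pid 2>/dev/null
echo "exit status: $?"   # non-zero (128 + the signal number): the child was killed
```

The default reaction to SIGHUP is termination, which is exactly what happens to your app when the SSH session goes away.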
Step 7: The "Just Keep It Running" Attempts
Attempt 1: Use nohup
nohup npm start &
nohup ("no hangup") makes the process ignore the SIGHUP signal.
This works! But:
- No automatic restart if it crashes
- No log management (everything dumps into nohup.out)
- No process monitoring
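If you do go the nohup route, at least point the output somewhere deliberate and keep the PID around (the log path and PID file below are just illustrative choices):

```shell
# Redirect stdout+stderr to a named log instead of nohup.out,
# and remember the PID so the process can be stopped later
nohup npm start > /var/log/sspp-api.log 2>&1 &
echo $! > /tmp/sspp-api.pid
# Later, to stop it:
#   kill "$(cat /tmp/sspp-api.pid)"
```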
Attempt 2: Use screen or tmux
These create persistent terminal sessions:
screen -S api
npm start
# Press Ctrl+A, then D to detach
Better! But:
- Still manual
- No restart on crash
- No resource limits
- You have to remember screen commands
Step 8: The Real Problems
Let's say you got it "working" with nohup or screen. Here are the problems you'll hit in production:
Problem 1: No Automatic Restart
Your app crashes (memory leak, unhandled exception). It's just... down. Until you SSH in and restart it manually.
Problem 2: No Startup on Reboot
Server reboots (Linode maintenance, kernel update). Your app doesn't start automatically.
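For the record, the classic Linux answer to Problems 1 and 2 is a systemd unit. We won't use it in this series (Part 2 reaches for PM2 instead), but here's a sketch of what the OS itself offers; the paths and service names are illustrative:

```shell
# Write a minimal unit file (paths and names here are illustrative)
cat > /etc/systemd/system/sspp-api.service <<'EOF'
[Unit]
Description=SSPP API
After=network.target postgresql.service redis-server.service

[Service]
WorkingDirectory=/opt/sspp/services/api
EnvironmentFile=/opt/sspp/services/api/.env
ExecStart=/usr/bin/npm start
# Problem 1: restart automatically on crash
Restart=on-failure

[Install]
# Problem 2: start automatically at boot
WantedBy=multi-user.target
EOF

# Activate it
systemctl daemon-reload
systemctl enable --now sspp-api
```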
Problem 3: "Works on My Machine"
Your coworker tries to deploy. They have Node 16 installed, you have Node 18. Different npm versions. Different PostgreSQL versions. It breaks.
Problem 4: Manual Scaling
You get featured on Product Hunt. Traffic spikes 50x. You need to:
- Rent 5 more servers
- SSH into each one
- Repeat all the installation steps
- Set up a load balancer manually
- Pray nothing goes wrong
Time to scale: 2-4 hours (if you're fast)
Problem 5: Zero Deployment Strategy
You need to deploy a bug fix. To avoid downtime:
- Deploy to a new server
- Switch DNS/load balancer
- Drain old connections
- Hope nothing broke
Or you just restart the app and accept 5-30 seconds of downtime.
Problem 6: No Rollback
The new version has a critical bug. To roll back:
- SSH in
- git checkout the old commit
- npm install (and hope dependencies didn't change)
- Restart
Time to roll back: 5-10 minutes (plus panic time)
Problem 7: Configuration Management
You have 3 environments: dev, staging, production.
Each needs different:
- Database credentials
- API keys
- Feature flags
- Resource limits
Your solution: .env files, config.dev.json, config.prod.json, remembering which is which.
Problem 8: Security
Your server is wide open:
- PostgreSQL and Redis running with default settings, one config change away from being exposed to the internet
- No firewall rules
- Root SSH access enabled (with password login, no less)
- Passwords sitting in plaintext .env files
- No SSL certificates
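One tiny improvement that costs nothing: generate credentials instead of typing sspp_password into three different files.

```shell
# Generate a random 32-character password instead of a hard-coded one
DB_PASSWORD=$(openssl rand -base64 24)
echo "DB_PASSWORD is ${#DB_PASSWORD} characters long"
```

It doesn't fix secrets management, but at least the password isn't guessable from a blog post.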
What We Learned
We successfully deployed an app to a server the manual way. And we discovered:
- Process management is hard - Apps need to survive crashes and reboots
- Environment consistency is critical - "Works on my machine" kills teams
- Scaling is manual - Copying setup to multiple servers doesn't scale
- Deployments are risky - No safety net, no rollback strategy
- Security is an afterthought - When you're fighting fires, security waits
This is why DevOps exists.
Every tool you've heard of—Docker, Kubernetes, CI/CD, Infrastructure as Code—solves one of these problems.
But you can't appreciate the solution until you've felt the pain.
What's Next?
In Part 2, we'll solve the first problem: keeping your app alive.
We'll use PM2 (Process Manager 2) to:
- Automatically restart crashed processes
- Start apps on server reboot
- Manage logs properly
- Monitor resource usage
This will make our deployment slightly better. But it won't solve the deeper problems.
Spoiler: We'll eventually replace all of this with Docker and Kubernetes. But first, we need to understand what life was like before containers.
Try It Yourself
Don't just read—do it:
- Spin up a Linode (or DigitalOcean Droplet, AWS EC2, etc.)
- Follow this guide exactly
- Get the app running
- Try to kill it in creative ways:
  - kill -9 <pid>
  - Disconnect SSH
  - Reboot the server
- Notice how often it stays dead
Bonus challenge: Try to deploy the Worker service too (hint: it's even harder because it's a background service with no HTTP endpoint).
Discussion
What are your "just put it on a server" horror stories?
Drop a comment or open an issue on GitHub.
Next: Part 2: Process Managers - Keeping Your App Alive with PM2
About the Author
I'm building this series to show real-world DevOps thinking for my Proton.ai application. If you're hiring for DevOps/Platform roles and want someone who understands infrastructure (not just follows tutorials), let's talk.
- GitHub: @daviesbrown
- LinkedIn: David Nwosu