Yousuf Basir

Secure Linux Server Setup & Application Deployment

A Practical, Opinionated Guide for Real Production Servers

Deploying an application is easy.
Deploying it securely, so that one compromised app does not take down your entire server, requires discipline and structure.

This guide documents the exact process we follow to prepare a fresh Linux server, deploy databases and applications, and keep the system secure, isolated, and maintainable.

This is not theory.
This is a battle-tested setup suitable for real production servers.


What This Guide Is For

This setup works for:

  • Node.js backends (NestJS, Express, Fastify)
  • Next.js (standalone build)
  • Static frontends (React / Vite)
  • Databases (PostgreSQL, MongoDB, Redis) using Docker
  • Reverse proxy with HTTPS (Caddy / Nginx)

Core Security Philosophy

Before commands, understand the rules:

  1. Root is not an app runtime
  2. One app = one service user
  3. Humans deploy, services run
  4. Databases are private by default
  5. Reverse proxy is the only public entry point
  6. Assume one app will eventually be compromised

Our goal is simple:

If one application is hacked, everything else must remain safe.


Step 1 — Create a Non-Root Admin User

On a fresh server, you usually start as root.

Create a normal admin user (example: dev):

adduser dev
usermod -aG sudo dev

Why?

  • Root SSH access is dangerous
  • Using sudo is auditable
  • Accidents are easier to recover from

Step 2 — Harden SSH Access

Generate SSH key (on your local machine)

ssh-keygen -t ed25519

Copy the public key to the server (paste the contents of your local ~/.ssh/id_ed25519.pub into authorized_keys):

mkdir -p /home/dev/.ssh
nano /home/dev/.ssh/authorized_keys

Fix permissions:

chown -R dev:dev /home/dev/.ssh
chmod 700 /home/dev/.ssh
chmod 600 /home/dev/.ssh/authorized_keys
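
If password authentication is still enabled at this point (it is, until you disable it later in this step), ssh-copy-id from your local machine is a convenient alternative that copies the key and takes care of the file permissions:

ssh-copy-id dev@SERVER_IP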

Disable dangerous SSH options

Edit SSH config:

sudo nano /etc/ssh/sshd_config

Ensure:

PermitRootLogin no
PubkeyAuthentication yes

Disable password login:

Match all
    PasswordAuthentication no
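
Before reloading, validate the config so a typo cannot lock you out of SSH:

sudo sshd -t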

Reload SSH safely:

sudo systemctl reload ssh

Step 3 — Enable Firewall (UFW)

Allow only what’s required:

sudo ufw allow OpenSSH
sudo ufw allow 80
sudo ufw allow 443
sudo ufw enable

Check:

sudo ufw status verbose

Only ports 22, 80, 443 should be public.


Step 4 — Install Docker (Admin User Only)

Docker is treated as root-equivalent.

  • Only the admin user (dev) may use Docker
  • Application users never touch Docker

Install Docker from the official repository (not apt docker.io).
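
For Ubuntu, that looks roughly like this (condensed from Docker's official install documentation; adapt the commands for your distribution):

sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin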

After installation:

sudo usermod -aG docker dev

Re-login and verify:

docker ps

🚫 Never add service users to the docker group.


Step 5 — Run Databases Securely in Docker

Key rules for databases

  • Never expose DB ports publicly
  • Bind to 127.0.0.1 only
  • Use Docker volumes
  • Access from local machine via SSH tunneling

Example: PostgreSQL (secure)

docker run -d \
  --name postgres \
  --restart unless-stopped \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=STRONG_PASSWORD \
  -e POSTGRES_DB=appdb \
  -v pgdata:/var/lib/postgresql \
  -p 127.0.0.1:5432:5432 \
  postgres:18

Verify:

ss -tulpn | grep 5432

Expected:

127.0.0.1:5432

Important note on port binding:
We explicitly bind database ports to 127.0.0.1 because Docker does not honor UFW rules for published ports. Docker inserts its own iptables rules, so if a container port is mapped to 0.0.0.0, it will be publicly accessible even if UFW blocks that port. Binding to 127.0.0.1 ensures the database is reachable only from the server itself and never exposed to the internet.


Step 6 — Access Databases via SSH Tunnel

From your local machine:

ssh -N -L 5432:127.0.0.1:5432 dev@SERVER_IP

Now connect locally using:

  • Host: 127.0.0.1
  • Port: 5432

🔐 Encrypted, private, safe.
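
For example, with the psql client installed locally, you can now connect through the tunnel using the credentials from the PostgreSQL container above:

psql -h 127.0.0.1 -p 5432 -U appuser -d appdb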


Step 7 — One GitHub Deploy Key per Repository

For each private repository, we generate a dedicated SSH deploy key on the server (as the admin user):

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_myapp -C "deploy-myapp"

Use a unique filename per repository:

~/.ssh/id_ed25519_myapp

Add the public key to:

GitHub → Repository → Settings → Deploy keys

Grant read-only access and clone the repository using an SSH alias (not HTTPS).
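
A minimal ~/.ssh/config sketch of such an alias (the alias name github.com-myapp is only a convention, but it matches the clone command used in Step 9):

Host github.com-myapp
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_myapp
    IdentitiesOnly yes

With this in place, git clone git@github.com-myapp:org/repo.git automatically uses the right deploy key.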

👉 For the full, step-by-step explanation (SSH config, aliases, and examples), see:
https://dev.to/yousufbasir/securely-managing-github-access-on-production-servers-20l3


Step 8 — Create a Service User (Per App)

Each app runs as its own locked-down user:

sudo adduser \
  --system \
  --no-create-home \
  --group \
  --shell /usr/sbin/nologin \
  svc-myapp

This user:

  • cannot SSH
  • has no shell
  • has no sudo
  • owns only its app directory

Step 9 — Clone & Build as Admin User
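
If /var/apps does not exist yet, create it once as the admin user (the path itself is just the convention used throughout this guide):

sudo mkdir -p /var/apps
sudo chown dev:dev /var/apps

Then clone and build: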

cd /var/apps
git clone git@github.com-myapp:org/repo.git myapp
cd myapp

npm ci
npm run build
npm prune --production

Humans build.
Services only run.


Step 10 — Environment Variables (Build-time vs Runtime)

In production, environment variables are never committed to Git.
They are managed by the system and injected only when required.

We store runtime environment variables in a system-managed file:

/etc/systemd/system/myapp.env

This file is owned by root and loaded by systemd at runtime.
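
A minimal sketch of creating that file (the variable names and values are placeholders that match the earlier examples in this guide):

sudo tee /etc/systemd/system/myapp.env > /dev/null <<'EOF'
NODE_ENV=production
PORT=5000
DATABASE_URL=postgresql://appuser:STRONG_PASSWORD@127.0.0.1:5432/appdb
EOF

sudo chown root:root /etc/systemd/system/myapp.env
sudo chmod 600 /etc/systemd/system/myapp.env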

Special case: Next.js public environment variables

If a Next.js project uses variables starting with NEXT_PUBLIC_, those must be available at build time, because the Next.js compiler embeds them into the generated JavaScript.

In that case, we explicitly inject the env file when building:

sudo -E bash -c '
  set -a
  source /etc/systemd/system/myapp.env
  set +a

  npm run build
'

If there are no NEXT_PUBLIC_* variables, this step is not required—runtime injection via systemd is enough.


Next.js standalone build (required asset copy)

When building Next.js as a standalone application, we must also copy static assets manually so the app can run without the full .next directory.

After next build, we run:

mkdir -p .next/standalone/.next
cp -r .next/static .next/standalone/.next/
cp -r public .next/standalone/

Why this is needed:

  • The standalone output contains only server code
  • Static files (_next/static, public/) are not included automatically
  • Without this step, the app will run but assets will 404

This produces a fully self-contained Node.js app that can be started via systemd.
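
For a standalone Next.js app, the systemd ExecStart in Step 12 would point at the generated server entry instead of dist/main.js, for example (assuming WorkingDirectory is the repository root):

ExecStart=/usr/bin/node .next/standalone/server.js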


Step 11 — Transfer Ownership to Service User

After build:

sudo chown -R svc-myapp:svc-myapp /var/apps/myapp
sudo chmod -R o-rwx /var/apps/myapp

At this point, even dev should get permission denied — that’s intentional.
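
A quick sanity check (run as dev; the failure is the point):

ls /var/apps/myapp                     # expected: Permission denied
sudo -u svc-myapp ls /var/apps/myapp   # the service user can still read its own files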


Step 12 — Run the App with systemd (Hardened)

Example systemd unit (/etc/systemd/system/myapp.service):

[Unit]
Description=My Application
After=network.target

[Service]
User=svc-myapp
Group=svc-myapp
WorkingDirectory=/var/apps/myapp
EnvironmentFile=/etc/systemd/system/myapp.env

ExecStart=/usr/bin/node dist/main.js
Restart=always
RestartSec=3

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/apps/myapp
PrivateTmp=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictNamespaces=true
LockPersonality=true
RestrictSUIDSGID=true
CapabilityBoundingSet=
AmbientCapabilities=
UMask=0077

[Install]
WantedBy=multi-user.target

Enable and start:

sudo systemctl daemon-reload
sudo systemctl enable myapp
sudo systemctl start myapp
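
Confirm the service is running and watch its logs:

sudo systemctl status myapp
sudo journalctl -u myapp -f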

Step 13 — Static Frontend Builds (React / Vite)

Static frontend apps (React, Vite, etc.) do not run via systemd.

They are built once and served directly by the reverse proxy.

Injecting environment variables at build time

For static apps, public variables (e.g. VITE_*) must be injected during the build.

We place the env file in the project root:

.env.production

Then build explicitly:

npm run build

The build output (HTML, JS, CSS, assets) is now fully static and contains the injected values.
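
For example, a hypothetical .env.production for a Vite app might contain nothing more than:

VITE_API_URL=https://api.example.com

Remember that anything prefixed with VITE_ ends up in the shipped JavaScript, so never put secrets here.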


Granting Caddy read access

Caddy runs as its own user and needs read-only access to the static build folder.

Example:

sudo chown -R svc-frontend:svc-frontend /var/apps/frontend
sudo chmod -R 755 /var/apps/frontend/dist

Why we do this:

  • Caddy must read static files
  • No write access is needed
  • Prevents accidental or malicious file modification

Static apps have no runtime, no open ports, and no background process—this significantly reduces attack surface.


Step 14 — Reverse Proxy Configuration (Caddy)

Caddy is the only public entry point to the server.
All applications—dynamic or static—are exposed through it.


Example: Node.js app running via systemd

The app runs internally on a private port (e.g. 127.0.0.1:5000).

Caddyfile:

api.example.com {
    reverse_proxy 127.0.0.1:5000
}

Notes:

  • The app port is not exposed publicly
  • Firewall blocks direct access
  • Only Caddy can reach it

This applies to NestJS, Next.js (standalone), Express, etc.


Example: Static frontend site

The static build lives at:

/var/apps/frontend/dist

Caddyfile:

app.example.com {
    root * /var/apps/frontend/dist
    encode gzip zstd
    try_files {path} {path}/ /index.html
    file_server
}

What this does:

  • Serves static files directly
  • Supports client-side routing (SPA)
  • Enables compression
  • No Node.js process required
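
After editing the Caddyfile, validate and apply the change (assuming Caddy was installed as a systemd service, which is the default for the official packages):

sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy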

Why this architecture matters

  • Only ports 80 and 443 are public
  • Apps never bind directly to the internet
  • Static sites have zero runtime risk
  • Dynamic apps are isolated behind systemd and firewall

This clean separation keeps the server secure, observable, and easy to reason about.


Step 15 — Updating an App Safely

sudo chown -R dev:dev /var/apps/myapp

cd /var/apps/myapp
git pull
npm ci
npm run build
npm prune --production

sudo chown -R svc-myapp:svc-myapp /var/apps/myapp
sudo chmod -R o-rwx /var/apps/myapp

sudo systemctl restart myapp

🚫 Never sudo git pull. It leaves root-owned files in the working tree and breaks the ownership flow above.


What This Setup Protects Against

  • Privilege escalation
  • Lateral movement between apps
  • Accidental data leaks
  • Exposed databases
  • Root-level compromise from app bugs

Even if one app is hacked:

The system survives. Other apps survive. Data survives.


Final Thoughts

Security is not about tools.
It’s about clear boundaries and boring defaults.

This setup avoids complexity, avoids magic, and relies on Linux doing what it already does best.

That’s how small teams run production servers professionally.


If you follow this guide end-to-end, your server will already be more secure than most production environments.

Happy (and secure) deploying 🚀
