🚀 How I Deployed My Startup's Server Without Kubernetes or Docker (Yet)

Almost every article these days talks about setting up Kubernetes clusters, writing Terraform scripts, or diving deep into AWS-managed services. While those tools are powerful, I believe that for most small projects, they are unnecessary overhead.

For Kinsly, my still-small project, I chose a simpler and more old-school approach. It's lighter, uses less memory, has fewer layers of abstraction, and remains secure. In this post, I'll walk through how I deployed my server, why I avoided AWS/GCP at this stage, and the exact setup I used, from SSH security to deployment automation.


🌍 Choosing the Right Infrastructure

For years, I favored DigitalOcean: simple, affordable, and reliable. But for this first phase, I went even more cost-effective and chose Contabo VPS. Although their SLA looks low on paper, performance in practice has been surprisingly stable.

Unlike AWS or GCP, where you can easily drown in unnecessary configuration, here I get just what I need: a straightforward server that I fully control. And importantly, I can move it anywhere at any time, without being locked into provider-specific services.

💡 Lesson: Keep deployments provider-agnostic so you can move quickly when needed.


πŸ” Securing Access with SSH

The first thing I did was lock down access. By default, servers allow SSH login with a password. That's a major attack vector: bots constantly attempt brute-force logins with common credentials. An SSH key is far more secure.

I generated a secure SSH key pair locally with this command:

$ ssh-keygen -t ed25519 -f ~/.ssh/deploy_server.pem -C "deploy@gitlab"

Then I uploaded the public key to the server:

# This command is the easiest way to install your key.
$ ssh-copy-id -i ~/.ssh/deploy_server.pem deploy@[target-ip]

Next, I hardened the SSH configuration on the server.

$ sudo vim /etc/ssh/sshd_config

Inside this file, I made sure the following lines were set. This completely disables password logins and ensures only key-based access is allowed:

# Allow root to log in with a key only, never with a password.
PermitRootLogin prohibit-password
# Disallow password authentication entirely.
PasswordAuthentication no
# Ensure public key authentication is enabled.
PubkeyAuthentication yes
# Also disable challenge-response auth, which can sometimes be a password fallback.
ChallengeResponseAuthentication no
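
Before restarting SSH, it's worth validating the edited config. sshd -t performs a syntax check and prints nothing when the file is clean:

$ sudo sshd -t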

Finally, I applied the changes by restarting the SSH service:

$ sudo systemctl restart sshd

From now on, only valid SSH keys can connect. This instantly shuts down most automated attacks.

⚠️ IMPORTANT: Before you close your current terminal session, open a new terminal and confirm you can still log in with your SSH key. If you don't, you could lock yourself out of your own server!
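
That check is just a normal login with the new key from a second terminal:

$ ssh -i ~/.ssh/deploy_server.pem deploy@[target-ip]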


🌐 Cloudflare for Domains & Security

I registered my domain through Cloudflare. Unlike GoDaddy or Namecheap, Cloudflare sells domains at cost price (no markup) and forces you to use its DNS, which turned out to be a blessing. DNS hosting is free and comes with Cloudflare Proxy.

With just the free plan, I got:

  • Hidden server IP (Cloudflare Proxy)
  • Free DDoS protection
  • Global caching across regions for speed
  • Basic analytics
  • Built-in security filtering

Cloudflare request analytics

👉 In the past, I even paid AWS money just to register DNS records. Cloudflare gives more for free.


βš”οΈ Hardening the Server Firewall

As soon as I installed nginx, bots started probing for vulnerabilities (checking for /wp-login.php, .git folders, etc.). To prevent direct access to my VPS, I restricted requests to Cloudflare IPs only.

The cleanest way to do this is with ufw (Uncomplicated Firewall): it drops unwanted connections at the network level, before they ever reach nginx. It is a one-time setup, though you should plan to update the IP list once or twice a year, as Cloudflare occasionally adds new ranges.

First, always allow SSH, or you'll lock yourself out.

$ sudo ufw allow ssh

or

$ sudo ufw allow 22/tcp

Then allow traffic from all of Cloudflare's IPs on port 443 (HTTPS). The latest lists are published at cloudflare.com/ips, and a simple loop adds all the current IPv4 and IPv6 ranges:

# For IPv4
$ for ip in $(curl -s https://www.cloudflare.com/ips-v4); do sudo ufw allow from $ip to any port 443 proto tcp; done

# For IPv6
$ for ip in $(curl -s https://www.cloudflare.com/ips-v6); do sudo ufw allow from $ip to any port 443 proto tcp; done

After specifically allowing Cloudflare IPs, you can now safely deny all other traffic on ports 80 and 443.

$ sudo ufw deny http
$ sudo ufw deny https

This forces all web traffic through Cloudflare's secure proxy, effectively locking the back door to the server.

Then, if it's not already enabled, turn on the firewall. It will ask for confirmation.

$ sudo ufw enable

Verify your rules are in place.

$ sudo ufw status verbose

You should see a list showing that port 22 is allowed from anywhere, port 443 is allowed from Cloudflare's IPs, and ports 80/443 are denied from anywhere else.

You can also do this within nginx itself, though it's slightly less efficient as nginx has to process the connection before denying it.

You would create a file (/etc/nginx/snippets/cloudflare-ips.conf) containing allow rules for all of Cloudflare's IPs, and then include it in your server block, followed by deny all;.

# IPv4 Ranges
allow 173.245.48.0/20;
allow 103.21.244.0/22;
allow 103.22.200.0/22;
allow 103.31.4.0/22;
allow 141.101.64.0/18;
allow 108.162.192.0/18;
allow 190.93.240.0/20;
allow 188.114.96.0/20;
allow 197.234.240.0/22;
allow 198.41.128.0/17;
allow 162.158.0.0/15;
allow 104.16.0.0/13;
allow 104.24.0.0/14;
allow 172.64.0.0/13;
allow 131.0.72.0/22;

# IPv6 Ranges
allow 2400:cb00::/32;
allow 2606:4700::/32;
allow 2803:f800::/32;
allow 2405:b500::/32;
allow 2405:8100::/32;
allow 2c0f:f248::/32;
allow 2a06:98c0::/29;

deny all;

The snippet then needs to be included in the corresponding server block:

server {
    # ...
    include /etc/nginx/snippets/cloudflare-ips.conf;
    # ...
}
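
One side effect of proxying everything through Cloudflare is that nginx now sees Cloudflare's addresses as the client IP. If you want the real visitor address in your logs (and in headers like X-Real-IP), nginx's realip module can recover it from the CF-Connecting-IP header that Cloudflare adds. A minimal sketch, reusing the same ranges as above:

# Trust Cloudflare's ranges (one line per range from cloudflare.com/ips)...
set_real_ip_from 173.245.48.0/20;
# ... repeat for the remaining IPv4 and IPv6 ranges ...
# ...and take the client address from Cloudflare's header.
real_ip_header CF-Connecting-IP;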

πŸ—οΈ A Simple, Minimal Stack

My architecture is intentionally simple: instead of React or Angular SSR, the first version of Kinsly runs on a static HTML page. Form submissions go to a small Go service that listens only on the local loopback interface (127.0.0.1), so it's never exposed to the outside world. Nginx acts as a reverse proxy, forwarding public requests from api.getkinsly.com to this private port. This is a secure and standard pattern.

I also considered using Unix sockets, but TCP on localhost was good enough.
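
For illustration, here's a minimal sketch of that pattern in Go. The /waitlist route and handler body are invented; only the loopback binding on port 8001 mirrors the setup described here:

package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Hypothetical form endpoint; the real handlers aren't shown here.
	mux.HandleFunc("/waitlist", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		// ... validate and store the submitted email ...
		w.WriteHeader(http.StatusNoContent)
	})

	// Binding to 127.0.0.1 instead of 0.0.0.0 keeps the service
	// reachable only from the machine itself; nginx does the rest.
	log.Fatal(http.ListenAndServe("127.0.0.1:8001", mux))
}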

Here's a look at the core of my Nginx config for the API:

# /etc/nginx/sites-available/api.getkinsly.com
server {
    server_name api.getkinsly.com;

    listen 443 ssl http2;
    # ... (all my SSL and security headers go here) ...

    location / {
        # These two lines enable efficient keep-alive connections to the backend.
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Pass essential client information to the Go application.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # The actual proxy pass to the local Go service.
        proxy_pass http://127.0.0.1:8001;
    }
}

For now, Kinsly just needs to collect form data. Deploying a full Postgres or MySQL server at this stage would be pure over-engineering, so I deliberately chose SQLite: it's file-based, needs zero configuration, is fast, and is perfect for the simple task of collecting waiting-list emails.
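
As a sketch of how little code that takes in Go (assuming the github.com/mattn/go-sqlite3 driver; the database path and schema below are invented for illustration):

package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // driver choice is an assumption, not necessarily Kinsly's
)

func main() {
	// A single file on disk is the whole database.
	db, err := sql.Open("sqlite3", "/opt/kinsly/data/waitlist.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One table is enough for a waiting list.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS signups (
		email      TEXT PRIMARY KEY,
		created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
	)`); err != nil {
		log.Fatal(err)
	}

	if _, err := db.Exec(`INSERT INTO signups (email) VALUES (?)`, "test@example.com"); err != nil {
		log.Println("insert failed (possibly a duplicate):", err)
	}
}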

Later, when scaling requires it, migrating to Postgres or MySQL will be straightforward.


βš™οΈ Keeping the App Alive with systemd

My Go application runs as a systemd service, ensuring it's always on. The service is configured to run as the non-privileged www-data user for security.

Example service file (/etc/systemd/system/kinsly-api.service):

[Unit]
Description=Kinsly API Service
# Start this service only after the network is available.
After=network.target

[Service]
# The user and group the service will run as.
# Running as a dedicated, non-sudo user is a critical security best practice.
User=www-data
Group=www-data

# The command to start the application. Make sure the binary is executable (chmod +x binary)
ExecStart=/opt/kinsly/current/kinsly-api
Restart=always
# Wait 5 seconds before restarting to prevent rapid-fire restarts.
RestartSec=5

# Set environment variables if your application needs them.
Environment="APPLICATION_MODE=prod"

[Install]
# This allows the service to be enabled to start on boot.
WantedBy=multi-user.target

With this, if the app crashes or the server reboots, systemd automatically restarts it.
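
For completeness, the unit still has to be loaded and enabled once after the file is created:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now kinsly-api.service

# Confirm it's up and running.
$ systemctl status kinsly-api.service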


👀 Creating a Safe Deploy User

My GitLab CI/CD pipeline, however, connects as a separate deploy user that does not have full sudo access. This creates a problem: how does the CI/CD pipeline restart the service after a new deployment?

The solution is to grant the deploy user permission to run only that one specific command. This is done via the sudoers file.

The ONLY safe way to edit the sudoers file is with visudo. It performs a syntax check on save to prevent you from breaking sudo.

$ sudo visudo -f /etc/sudoers.d/deploy-app

I added this single line to the bottom of the file. This allows the deploy user to restart the kinsly-api service without a password.

deploy ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart kinsly-api.service

This is a secure and granular way to enable deployment automation without giving away root access. The CI/CD pipeline can now run sudo systemctl restart kinsly-api.service, and it's the only sudo command it's allowed to execute.
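
On the pipeline side, that boils down to a single SSH invocation (a sketch using the key and placeholder from earlier; in GitLab the key would normally come from a protected CI variable):

$ ssh -i ~/.ssh/deploy_server.pem deploy@[target-ip] "sudo systemctl restart kinsly-api.service"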

ℹ️ Make sure the directive @includedir /etc/sudoers.d is enabled in the main /etc/sudoers configuration file. If it's not there, add it using the sudo visudo command (without the -f argument).

Also, ensure correct file ownership & permissions:

$ sudo chown root:root /etc/sudoers.d/deploy-app
$ sudo chmod 0440 /etc/sudoers.d/deploy-app

Validate sudoers syntax, so you don't lock yourself out:

$ sudo visudo -c

🔄 Deployment via GitLab CI

Deployment is automated:

  1. GitLab CI builds a Go binary.
  2. It packages the binary + static HTML into a .tar.gz archive.
  3. The archive is uploaded to the server and unpacked into a new release folder, /opt/kinsly/releases/<timestamp>.
  4. A symlink current → <latest_release> is updated.
  5. The service is restarted.
  6. Old releases are pruned (keep the last 5 for rollback).

This is simple, reliable, and avoids the need for Docker at this stage.
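
Here's a rough sketch of what the server-side portion of those steps looks like as a script (the archive name and exact paths are illustrative, not the literal pipeline code):

# Each release gets its own timestamped folder.
RELEASE="/opt/kinsly/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"
tar -xzf /tmp/kinsly-release.tar.gz -C "$RELEASE"

# Repoint the `current` symlink to the new release.
ln -sfn "$RELEASE" /opt/kinsly/current

# The one sudo command the deploy user is allowed to run.
sudo systemctl restart kinsly-api.service

# Prune old releases, keeping the last five for rollback.
ls -1dt /opt/kinsly/releases/* | tail -n +6 | xargs -r rm -rf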

GitLab pipelines triggered by tag creation


πŸͺ Frontend Challenges: Analytics, Consent & Captcha

Even with a simple site, there are complexities. To use Google Analytics, I needed a cookie consent banner to comply with EU law. I chose axept.io, a free and Google-certified service.

I also added a free captcha (Altcha) to stop bots from spamming my form with fake emails. Spam protection was essential: without it, bots could flood the form with fake submissions and get my outgoing emails flagged as spam. While not perfect, it filters out the majority of malicious requests.


πŸ›‘οΈ Final Security Checks

Before going live, I checked open ports with nmap and online scanners to ensure only nginx was exposed. My Go app, database, and system processes remain inaccessible from outside.

$ sudo nmap -sS -Pn -T5 -p- [target-ip]

Scanned open ports on the server


✅ Conclusion

This setup may sound "basic" compared to a Kubernetes cluster on AWS, but for an early-stage project, it's exactly what I need:

  • Minimal cost
  • Maximum control
  • Easy portability
  • Strong enough security

As the project grows, I will likely containerize services and introduce Docker orchestration, but for now, this lean approach lets me move fast without unnecessary complexity.

It reminds me that sometimes the most elegant solution is the simplest one.
