julianrubisch
Deploying a Rails + SQLite App to a Synology NAS

I built a small CRM for my consulting business using Avo on Rails 8. It tracks contacts, companies, deals, and follow-ups — nothing fancy, but shaped exactly to how I work. Single user. No team access needed.

The question was where to run it. A $7/month VPS felt wrong for a tool only I use. I don't need uptime guarantees or global availability — I need it accessible from my laptop, phone, and tablet, wherever I am.

The answer was already sitting in my office: a Synology NAS on my Tailscale network. Rails 8's Solid stack (Cache, Queue, Cable — all SQLite-backed) means the entire app is one process with one database file. That's NAS-friendly. And Tailscale means every device I own can reach it without exposing anything to the public internet.

Here's how I set it up.

The Setup

  • Synology DS918+ — Intel Celeron J3455 (x86_64, quad-core 1.5GHz), 4GB RAM
  • DSM 7.2.2-72806 — Container Manager ships Docker daemon 24.0.2
  • Tailscale installed from Package Center
  • SSH access enabled (Control Panel → Terminal & SNMP)
  • A Rails app with a working Dockerfile (Rails 7.1+ generates one by default)

Rails + Ruby in a container typically consumes 200–400MB. DSM uses around 1–1.5GB. With 4GB total, there's plenty of headroom.

Part 1: Prepare the NAS

Create a deploy user

Create a dedicated user via Control Panel → User & Group → Create.

  • Add to the administrators group (required for SSH access on Synology)
  • Shared folders: Read/Write on docker only, No Access on everything else
  • Application permissions: Deny all — this user only needs SSH, which comes from the administrators group membership

Permissions for the `docker` user

Set up SSH key authentication

Enable home directories first — without this, ssh-copy-id fails because there's nowhere to put authorized_keys. Go to Control Panel → User & Group → Advanced tab, check Enable user home service.

ssh-copy-id deploy@hal.local
ssh deploy@hal.local  # verify passwordless login

Fix the Docker PATH problem

This is the single biggest Synology gotcha. Non-interactive SSH commands (how deployment scripts work) get a limited PATH: /usr/bin:/bin:/usr/sbin:/sbin. Docker lives at /usr/local/bin/docker. It won't be found.

SSH into your NAS interactively:

# Edit sshd_config
sudo vi /etc/ssh/sshd_config
# Find PermitUserEnvironment and set it to: PermitUserEnvironment PATH

# Save the full interactive PATH to your SSH environment file
env | grep PATH | tee ~/.ssh/environment

Reboot the NAS (simplest way to restart sshd cleanly), then verify from your local machine:

ssh deploy@hal.local docker -v
# Should output: Docker version 24.0.2, build 610b8d0

Heads up: DSM updates can reset /etc/ssh/sshd_config. After any major update, re-verify with ssh deploy@hal.local docker -v and re-apply if needed.
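Since this can bite silently, it's worth automating the check. A minimal sketch of a preflight guard you could put at the top of a deploy script (host name matches the examples in this article; `preflight` is my own name for it):

```shell
# Fail fast with a useful message if non-interactive SSH can't find docker,
# instead of dying halfway through a deploy after a DSM update reset sshd_config.
preflight() {
  if ! ssh deploy@hal.local docker -v >/dev/null 2>&1; then
    echo "ERROR: docker not found over non-interactive SSH; re-apply the PATH fix" >&2
    return 1
  fi
}
```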

Create directories for your app

sudo mkdir -p /volume1/docker/myapp/db
sudo mkdir -p /volume1/docker/myapp/storage

# UID 1000 matches the rails user inside the container
sudo chown -R 1000:1000 /volume1/docker/myapp/db /volume1/docker/myapp/storage

Enable passwordless sudo for Docker commands

The deploy script runs Docker commands over non-interactive SSH, which can't prompt for a password:

ssh deploy@hal.local
sudo sh -c 'echo "deploy ALL=(ALL) NOPASSWD: /usr/local/bin/docker-compose, /usr/local/bin/docker" > /etc/sudoers.d/deploy'
sudo chmod 440 /etc/sudoers.d/deploy
exit

This limits passwordless sudo to just Docker commands — no blanket access.

Set up a local Docker registry

A local registry means images never leave your network.

ssh deploy@hal.local "sudo mkdir -p /volume1/docker/registry/data"

Create the compose file on the NAS:

ssh deploy@hal.local "cat > /volume1/docker/registry/docker-compose.yml << 'EOF'
services:
  registry:
    image: registry:2
    container_name: registry
    ports:
      - \"5050:5000\"
    volumes:
      - /volume1/docker/registry/data:/var/lib/registry
    restart: unless-stopped
EOF"

Port 5000 is used by DSM, so the registry maps to 5050. Start it:

ssh deploy@hal.local "cd /volume1/docker/registry && sudo docker-compose up -d"

Verify:

curl http://<nas-tailscale-ip>:5050/v2/
# Should return: {}

Now configure your local Docker client to trust this HTTP registry. The registry runs plain HTTP, but Docker defaults to HTTPS.

OrbStack — edit ~/.orbstack/config/docker.json:

{
  "insecure-registries": ["<nas-tailscale-ip>:5050"]
}

Docker Desktop — Settings → Docker Engine, add the same entry.

Restart Docker after the change. Plain HTTP is safe here — the traffic runs over Tailscale, which is already encrypted.

Confirm Tailscale is working

tailscale status

Note your NAS's Tailscale IP. That's the address you'll use to access the app.

Part 2: Prepare Your Rails App

Configure the database for persistent storage

In config/database.yml, point the production database to a path that will be mounted from the host:

production:
  <<: *default
  database: /data/production.sqlite3

/data as an absolute path (not relative to /rails) ensures the database lives on the mounted volume and survives container replacements.
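If you're using the full Solid stack, the Rails 8 generator splits cache, queue, and cable into their own SQLite databases. A sketch of pointing all of them at the mounted volume (database file names are the Rails defaults; adjust to your app):

```yaml
production:
  primary:
    <<: *default
    database: /data/production.sqlite3
  cache:
    <<: *default
    database: /data/production_cache.sqlite3
    migrations_paths: db/cache_migrate
  queue:
    <<: *default
    database: /data/production_queue.sqlite3
    migrations_paths: db/queue_migrate
  cable:
    <<: *default
    database: /data/production_cable.sqlite3
    migrations_paths: db/cable_migrate
```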

Update the Dockerfile

The Rails-generated Dockerfile needs a few modifications.

In the build stage, add Node.js and Yarn for asset compilation (skip this if you're not using Vite/esbuild):

RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y \
      build-essential git libyaml-dev pkg-config nodejs npm && \
    npm install -g yarn && \
    rm -rf /var/lib/apt/lists /var/cache/apt/archives

After COPY . ., install JS dependencies:

RUN yarn install --frozen-lockfile

In the final stage, create the /data directory before switching to the non-root user. This is the part that tripped me up — mkdir /data and chown must happen while still root:

RUN groupadd --system --gid 1000 rails && \
    useradd rails --uid 1000 --gid 1000 --create-home --shell /bin/bash && \
    mkdir /data && \
    chown rails:rails /data

# Copy built artifacts BEFORE switching user
COPY --chown=rails:rails --from=build "${BUNDLE_PATH}" "${BUNDLE_PATH}"
COPY --chown=rails:rails --from=build /rails /rails

USER 1000:1000

If USER 1000:1000 comes before mkdir /data, the non-root user won't have permission to create it.

Build and push the image

# Build for linux/amd64 (DS918+ uses Intel Celeron)
docker build --platform linux/amd64 -t <nas-tailscale-ip>:5050/myapp:latest .

# Push to the local registry
docker push <nas-tailscale-ip>:5050/myapp:latest

The --platform linux/amd64 flag is essential if you're building on Apple Silicon. Without it, you'll get an exec format error when the container starts on the NAS.
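If you want to verify before deploying, `docker image inspect` reports the platform an image was built for (image name as in the build step above):

```shell
# Should print linux/amd64 for the DS918+; arm64 means the flag was dropped.
docker image inspect --format '{{.Os}}/{{.Architecture}}' <nas-tailscale-ip>:5050/myapp:latest
```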

Part 3: Deploy to the NAS

The compose file

Create docker-compose.production.yml in your repo:

services:
  web:
    image: localhost:5050/myapp:latest
    container_name: myapp
    ports:
      - "3000:8080"
    environment:
      RAILS_ENV: production
      RAILS_MASTER_KEY: ${RAILS_MASTER_KEY}
      RAILS_SERVE_STATIC_FILES: "true"
      HTTP_PORT: "8080"
    volumes:
      - /volume1/docker/myapp/db:/data
      - /volume1/docker/myapp/storage:/rails/storage
    restart: unless-stopped

The port mapping: Rails 8 includes Thruster, a small Go proxy that handles asset caching and gzip. Inside the container, Thruster listens on HTTP_PORT (8080) and proxies to Puma on 3000. Docker maps external 3000 to Thruster's 8080. Thruster can't use its default port 80 because the container runs as a non-root user.

The image address: localhost:5050 because the NAS pulls from its own registry. Your Mac pushes to the Tailscale IP — same registry, different address. Docker trusts localhost by default, so no insecure registry config is needed on the NAS side.
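One optional addition: Rails 7.1+ exposes a /up health endpoint, and the generated Dockerfile includes curl in the runtime image, so Docker can restart the container if Thruster stops answering. A hedged sketch to drop into the web service above:

```yaml
    healthcheck:
      # Hits Thruster's port inside the container; /up is the Rails health endpoint
      test: ["CMD", "curl", "-f", "http://localhost:8080/up"]
      interval: 30s
      timeout: 5s
      retries: 3
```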

Create the .env file on the NAS

One-time setup — this file stays on the NAS and is never committed:

ssh deploy@hal.local "echo 'RAILS_MASTER_KEY=<your-master-key>' > /volume1/docker/myapp/.env"

The deploy script

Add bin/deploy to your repo:

#!/usr/bin/env bash
set -e

NAS_HOST="deploy@hal.local"
REGISTRY="<nas-tailscale-ip>:5050"
IMAGE="$REGISTRY/myapp:latest"
APP_DIR="/volume1/docker/myapp"

echo "==> Building image..."
docker build --platform linux/amd64 -t $IMAGE .

echo "==> Pushing to local registry..."
docker push $IMAGE

echo "==> Copying compose file to NAS..."
cat docker-compose.production.yml | ssh $NAS_HOST "cat > $APP_DIR/docker-compose.yml"

echo "==> Pulling image and restarting..."
ssh $NAS_HOST "cd $APP_DIR && sudo docker-compose pull && sudo docker-compose up -d"

echo "==> Done! App is live."
Make it executable, then run it:

chmod +x bin/deploy
bin/deploy

On the first run, the Docker entrypoint runs db:prepare automatically — creates the database and runs migrations. Subsequent deploys run pending migrations on container startup.
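For reference, the generated bin/docker-entrypoint that makes this happen looks roughly like the sketch below (Rails 8 layout, simplified; older versions check "$1" and "$2" instead of the last two arguments):

```bash
#!/bin/bash -e

# When the container command ends in "./bin/rails server",
# prepare the database (create + migrate) before booting.
if [ "${@: -2:1}" == "./bin/rails" ] && [ "${@: -1:1}" == "server" ]; then
  ./bin/rails db:prepare
fi

exec "${@}"
```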

Confirm that the container is up and running in Container Manager:

DSM Container Manager showing running Rails containers

Open http://<nas-tailscale-ip>:3000 from any device on your Tailscale network.

Backups

This is where running on a NAS pays off.

HyperBackup

The database lives at /volume1/docker/myapp/db/ as a regular file on the NAS filesystem. Include it in any HyperBackup task — to an external drive, another NAS, or a cloud destination.

This is a real advantage over Docker named volumes, which are hidden in /volume1/@docker/volumes/ and invisible to File Station and backup tools.

File Station showing the SQLite database file at /docker/myapp/db/ — visible and accessible like any regular file on the NAS, unlike Docker named volumes

Btrfs Snapshots

The DS918+ supports Btrfs. Use Snapshot Replication for point-in-time recovery — especially useful as a safeguard before running migrations.

Manual backup

sudo cp /volume1/docker/myapp/db/production.sqlite3 \
       /volume1/docker/myapp/db/production.sqlite3.backup.$(date +%Y%m%d)
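One caveat with plain cp: if the app writes at that exact moment (and SQLite in WAL mode keeps recent writes in a separate -wal file), the copy can be inconsistent. sqlite3's .backup command uses SQLite's online backup API and produces a single consistent file. A sketch, with `backup_sqlite` being my own helper name:

```shell
# Consistent snapshot of a live SQLite database via the online backup API.
backup_sqlite() {
  local db="$1"
  sqlite3 "$db" ".backup '${db}.backup.$(date +%Y%m%d)'"
}

# e.g. backup_sqlite /volume1/docker/myapp/db/production.sqlite3
```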

The whole deploy cycle is bin/deploy — build, push to the local registry, restart the container. No CI pipeline, no cloud provider, no monthly bill. The NAS handles backups through the same tools you're already using for everything else on it.

If you're building small Rails apps for yourself or a handful of users, this is a setup worth considering. The Solid stack made the architecture possible. The NAS makes the operations trivial.
