The Problem Nobody Talks About
If you're running an AI assistant at home — Claude Code, a local LLM, an agent framework — there's a good chance it lives on a machine that's at least partially internet-facing.
Mine does. A Mac mini in Lisbon running Claude Code 24/7, connected to a Telegram bot, with access to GitHub, Railway, Gmail, and a Synology NAS.
One day I realized: everything sensitive was sitting on that machine. API keys. Credentials. Project context. Voice cloning references. SSH keys.
If the Mac mini got compromised, an attacker would have all of it.
The fix took 20 minutes and used hardware I already had.
The Architecture
Before:
Internet → Mac mini (Claude Code + all data)
After:
Internet → Mac mini (Claude Code, compute only)
↕ NFS (LAN)
Synology NAS (all sensitive data)
The Mac mini becomes "dumb compute." It runs the AI and processes requests, but holds nothing sensitive. The NAS — which never accepts inbound connections from the internet — holds everything that matters.
Step 1: Enable NFS on the NAS
On Synology DSM:
Control Panel → File Services → NFS → Enable NFS service
Then SSH into the NAS and add the export:
sudo bash -c 'echo "/volume1/docker 192.168.68.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)" >> /etc/exports && exportfs -ra'
Adjust the path and subnet to match your setup.
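Two of the options in that export line widen access more than this setup needs: no_root_squash lets root on the client act as root on the exported share, and insecure accepts connections from unprivileged source ports, while the /24 client spec admits every host on the subnet rather than just the Mac mini. Since NFSv3 trusts the client's IP and asserted UID, the export line is the whole access policy. The following lint script is an added example, not part of the original post; it checks an export line for those three patterns and shows a tighter alternative. The single-host address 192.168.68.10 is a constructed placeholder; substitute the Mac mini's actual LAN address.

```shell
#!/bin/sh
# Added example (not from the post): lint an NFS export line before appending it to /etc/exports.
# Flags options that widen access beyond the single client this setup needs.
# Assumptions: the Mac mini is the only client, at 192.168.68.10 (constructed address);
# export path as in the post.

WEAK_EXPORT='/volume1/docker 192.168.68.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)'
TIGHT_EXPORT='/volume1/docker 192.168.68.10(rw,sync,no_subtree_check,root_squash,secure)'

# lint_export LINE -> prints one finding per line, returns 1 if any finding, 0 if clean
lint_export() {
  _line=$1
  _rc=0
  # client spec is the second field, with the "(options)" suffix stripped
  _client=$(printf '%s' "$_line" | awk '{print $2}' | sed 's/(.*//')
  # options are whatever sits between the parentheses
  _opts=$(printf '%s' "$_line" | sed -n 's/.*(\(.*\)).*/\1/p')
  case ",$_opts," in
    *,no_root_squash,*)
      echo "no_root_squash: root on the client is root on the export; use root_squash"
      _rc=1 ;;
  esac
  case ",$_opts," in
    *,insecure,*)
      echo "insecure: accepts mounts from unprivileged source ports; drop it (secure is the default)"
      _rc=1 ;;
  esac
  case "$_client" in
    */*)
      echo "client $_client is a whole subnet; export to the one host that mounts it"
      _rc=1 ;;
  esac
  return $_rc
}

# Stand-alone run: report on the post's export line, then on the tighter one
for _x in "$WEAK_EXPORT" "$TIGHT_EXPORT"; do
  echo "== $_x"
  lint_export "$_x" || true
done
```

The tighter line keeps rw and sync as the post has them; root_squash and secure are the Linux exports defaults, spelled out so the intent is visible in the file. Remember that while the share is mounted, any process running as your user on the Mac mini can read whatever that user can read on it, so the export should be no wider than the one host that needs it.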
Step 2: Copy Your Data to the NAS
# Copy everything except large Python venvs (they stay local for performance)
# -C ~ Claude archives ~/Claude with paths relative to the home dir, so the
# excludes match and --strip-components=1 on the NAS side drops the leading "Claude/"
tar --exclude='Claude/chatterbox-env' \
--exclude='Claude/sadtalker-env' \
-czf - -C ~ Claude \
| ssh user@nas-ip "cd /volume1/docker/ECHO && tar -xzf - --strip-components=1"
Step 3: Mount NFS on the Mac
mkdir -p ~/mnt/NAS
sudo mount_nfs -o resvport,rw 192.168.68.52:/volume1/docker ~/mnt/NAS
Make it auto-mount at boot. Create /Library/LaunchDaemons/com.nas-mount.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "...">
<plist version="1.0">
<dict>
<key>Label</key><string>com.nas-mount</string>
<key>ProgramArguments</key>
<array>
<string>/sbin/mount_nfs</string>
<string>-o</string><string>resvport,rw</string>
<string>192.168.68.52:/volume1/docker</string>
<string>/Users/youruser/mnt/NAS</string>
</array>
<key>RunAtLoad</key><true/>
</dict>
</plist>
sudo launchctl load /Library/LaunchDaemons/com.nas-mount.plist
Step 4: Symlink Your Data Folder
# Move the local folder, create symlink
mv ~/Claude ~/Claude-local-backup
ln -s ~/mnt/NAS/ECHO/claude-workspace ~/Claude
Everything that was at ~/Claude/ still works at the same path — scripts, configs, tools. Nothing needs to know the data moved.
Handling Large Local Dependencies
My setup uses Chatterbox TTS (a voice cloning model — ~1.4GB) and SadTalker (~4GB). These need fast local disk access for inference. Moving them to NAS would add latency.
Solution: keep them local, symlink them back in:
# Move venvs to local storage
mv ~/Claude/chatterbox-env ~/Claude-chatterbox-env
# Symlink back so all paths still work
ln -s ~/Claude-chatterbox-env ~/Claude/chatterbox-env
The AI model weights live locally. The credentials, context, and scripts live on the NAS. Both are accessible at the same paths.
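After the symlink steps, the workspace is a mix: most entries resolve onto the NFS mount, while the venvs deliberately resolve back to local disk. It is easy to lose track of which is which, and a credentials file that still lives locally defeats the point. The audit script below is an added example, not code from the post; it walks the top-level entries of a workspace, follows every symlink to its physical directory, and reports whether each one lands under the mount or on local disk. The demo_layout function builds a constructed stand-in for ~/mnt/NAS and ~/Claude in a temp directory so the script runs without a NAS.

```shell
#!/bin/sh
# Added example (not from the post): report, per top-level entry of a workspace like ~/Claude,
# whether it physically lives under the NAS mount or on local disk.
# Assumed layout (constructed in a temp dir for the demo): WORKSPACE -> MOUNT/ECHO/claude-workspace,
# with WORKSPACE/chatterbox-env symlinked back to a local directory, as in the post.

# physical_dir PATH -> symlink-free absolute directory for PATH (the path itself if a dir, else its parent)
physical_dir() {
  if [ -d "$1" ]; then
    (cd -P -- "$1" 2>/dev/null && pwd -P)
  else
    (cd -P -- "$(dirname -- "$1")" 2>/dev/null && pwd -P)
  fi
}

# location_of PATH MOUNT -> prints "nas" if PATH resolves under MOUNT, otherwise "local"
location_of() {
  _phys=$(physical_dir "$1")
  _mount=$(physical_dir "$2")
  case "$_phys/" in
    "$_mount"/*) echo nas ;;
    *) echo local ;;
  esac
}

# audit_workspace WORKSPACE MOUNT -> one "<entry> <nas|local>" line per top-level entry
audit_workspace() {
  for _e in "$1"/* "$1"/.[!.]*; do
    [ -e "$_e" ] || continue
    printf '%s %s\n' "$(basename -- "$_e")" "$(location_of "$_e" "$2")"
  done
}

# demo_layout -> creates a constructed stand-in layout in a temp dir and prints its path
demo_layout() {
  _t=$(mktemp -d)
  mkdir -p "$_t/mnt/NAS/ECHO/claude-workspace/context" "$_t/Claude-chatterbox-env"
  printf 'placeholder, not a real credential\n' > "$_t/mnt/NAS/ECHO/claude-workspace/context/credentials.md"
  ln -s "$_t/mnt/NAS/ECHO/claude-workspace" "$_t/Claude"          # Step 4 symlink
  ln -s "$_t/Claude-chatterbox-env" "$_t/Claude/chatterbox-env"   # venv symlinked back local
  echo "$_t"
}

# Stand-alone run against the constructed layout; on a real machine call
# audit_workspace ~/Claude ~/mnt/NAS instead
_T=$(demo_layout)
audit_workspace "$_T/Claude" "$_T/mnt/NAS"
rm -rf "$_T"
```

Anything reported as local that is not a model venv is worth a second look; on a live system, run audit_workspace against ~/Claude and ~/mnt/NAS and expect only the venvs to show up as local.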
The Security Benefits
Threat model: An attacker gains code execution on the Mac mini (via the Telegram bot, a vulnerability in a dependency, or anything else the machine is exposed to).
Before this change: They immediately have all your API keys, credentials, voice clones, and project context.
After this change: They have... a Python virtual environment and some model weights. The NAS is unreachable from the internet — no open ports, no exposed services on the WAN side.
Additional benefits:
- RAID redundancy — NAS data survives a drive failure
- Snapshots — Synology can snapshot the workspace daily, rolling back any accidental deletion
- Multi-device — any machine on your home network can mount the same workspace and pick up where you left off
- Disposable compute — the Mac mini can be wiped and reinstalled at any time without losing data
Bonus: NFS vs SMB
I tried SMB first. It works but had two issues:
- Special characters in passwords need URL encoding in the mount command
- Slightly higher overhead for frequent small file access (scripts, config reads)
NFS is cleaner for this use case — it's designed for Unix filesystem access over a network, authentication is handled at the network level (IP-based), and macOS has native NFS support.
What This Doesn't Protect Against
- An attacker who can already reach your LAN (compromised router, someone on your WiFi)
- Credentials stored in environment variables on the Mac mini itself
For the first: normal home network hygiene (WPA3, guest network for IoT, VLANs if your router supports it).
For the second: if your AI tools need credentials at runtime, they'll need to either fetch them from the NAS at startup or use a secrets manager. My setup reads credentials from ~/Claude/context/credentials.md at the start of each session — that file now lives on the NAS.
The 20-Minute Version
- Enable NFS on your NAS (DSM checkbox)
- Add NFS export via SSH (one command)
- rsync or tar your data folder to the NAS
- Mount NFS on the Mac, create a LaunchDaemon for auto-mount
- Symlink your data folder to the NAS mount
That's it. Your AI assistant now runs on an internet-facing machine with nothing sensitive on it.