Over the past few weeks, I’ve started experimenting with self-hosting to give my personal projects a proper home. It’s still a learning process with plenty of room for improvement, but I’m enjoying the challenge and the sense of control it brings. In this post, I’ll share an overview of how I set up my server on a Raspberry Pi, complete with GitHub-powered CI/CD and secure internet access enabled through a Cloudflare Tunnel.
Below, I’ll begin with a high-level overview of the architecture, followed by a detailed walkthrough of the configuration process. You can explore the full example implementation and configuration in the GitHub repository here. The repository contains a simple application with a backend and a frontend; its only purpose is to demonstrate the configuration.
Key components
My home server setup is designed to balance simplicity with flexibility. It runs on a Raspberry Pi 5 (8 GB) and uses Docker to manage all services, with the exception of the GitHub Actions runner, which is installed directly on the host system. Source code is maintained on GitHub, with Docker Hub serving as the image registry. Continuous integration and deployment are handled through GitHub Actions. I rely on PostgreSQL as the database and Caddy as the reverse proxy. External access to services is securely provided via a Cloudflare Tunnel.
So the key components are:
- Raspberry Pi
- Docker + Docker Hub
- GitHub + Actions
- PostgreSQL
- Caddy
- Cloudflare Tunnel
How it works
Services access
The services deployed on the Raspberry Pi are accessed through a Cloudflare Tunnel, which ensures that the server itself is not directly exposed to the internet. Instead, a secure connection is established with Cloudflare, and all client requests are routed through it before reaching the server. The tunnel is maintained by the Cloudflared service, running in Docker, which forwards incoming traffic to the Caddy reverse proxy—also containerized. Caddy then applies routing rules to direct requests to the appropriate service.
Data persistence
For data storage, I use a PostgreSQL instance running in Docker with volumes configured for persistence. While this setup is sufficient for my personal projects, it is not suitable for production environments due to limitations in scalability, disaster recovery, data security, and other factors. In cases where reliability is critical, a more advanced solution - such as a managed database service - would be advisable. However, for my current use case, this lightweight approach works well.
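If you still want a basic safety net while keeping things lightweight, an occasional dump of the database to a file is easy to script. This is only a sketch, assuming the postgres-17 container name used later in this post; the database name is a placeholder you would replace with your own:

# Dump a database from the containerized PostgreSQL instance to a dated file on the host
docker exec postgres-17 pg_dump -U <YOUR_DB_USER_NAME> <YOUR_DB_NAME> > backup_$(date +%F).sql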
Code delivery
Since my project is still evolving, I wanted a way to apply changes quickly and efficiently. This led me to set up a CI/CD pipeline with GitHub Actions. With this in place, I can deliver new versions of the application seamlessly, without unnecessary overhead. While there are many ways to organise such a workflow, my current process follows these steps:
- Change the codebase and push changes to the repository.
- A GitHub Actions workflow kicks off and runs the following jobs:
- Run tests (on GitHub)
- Build Docker images (on GitHub)
- Push the images to the Docker Hub (on GitHub)
- Pull the images from the Docker Hub (on the server)
- Deploy containers from the images (on the server)
The initial three jobs run within GitHub’s infrastructure, while the final two execute directly on the Raspberry Pi. Images are stored in Docker Hub.
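To make this concrete, here is a minimal sketch of such a workflow. It is not the exact ci.yml from the example repository: the job names, image name, test command, and the compose file path in the deploy job are placeholders, and the deploy job assumes the compose file already exists on the server.

name: ci

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder for your actual test command
      - run: ./scripts/run-tests.sh

  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - run: |
          docker build -t <DOCKERHUB_USER>/backend:latest ./backend
          docker push <DOCKERHUB_USER>/backend:latest

  deploy:
    needs: build-and-push
    runs-on: self-hosted   # the runner registered on the Raspberry Pi
    steps:
      - run: |
          docker pull <DOCKERHUB_USER>/backend:latest
          # Hypothetical compose file for the application services on the server
          docker compose -f ~/apps/backend-compose.yaml up --detach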
I currently use a free Docker account, which means all images are publicly accessible. For improved security, consider switching to a paid Docker Hub plan with private repositories or adopting an alternative image registry that supports private image storage.
For GitHub Actions executed on the Raspberry Pi, a self-hosted runner is configured. This runner is installed as a service on the host machine and registered with GitHub. It runs directly on the system, rather than inside Docker, to enable it to manage Docker operations such as pulling images and starting containers.
One side effect of using self-hosted runners was the need to create a GitHub organization where I am the only member, which, fortunately, is free. This step was necessary because personal GitHub accounts do not support shared self-hosted runners. With a personal account, each repository that needs to deploy to the same server would require its own runner registration and a dedicated instance of that runner on the server. In contrast, organization accounts support organization-level runners, which can be configured once and reused across multiple projects. While this adds a small layer of complexity, it also helps me separate projects: the organization contains only those intended to be deployed as services, which in my case are relatively few. An alternative approach is to maintain a single repository containing all the projects you intend to deploy. This way, a single runner could handle deployments for all services, avoiding the need for an organization-level setup (as in the demo project).
Setting up
With the overview in place, we can move on to the setup. I will not provide step-by-step instructions for every detail; in some cases, I will reference the relevant documentation. This approach accounts for variations in individual systems, which may require different installation steps.
It is assumed that the operating system is already installed on your machine. In my case, I use Raspberry Pi OS, an ARM64-based distribution built on Debian.
Some steps will require a user with sudo privileges. Once ready, connect to your machine via SSH or direct access; both methods work equally well for this setup.
Docker Engine (sudo user)
First, we will install Docker Engine, which will serve as the isolated environment for running all services. Additionally, a dedicated Linux user will be created to handle deployment operations.
The Docker Engine installation process may vary depending on your system. The recommended approach is to follow the official Docker installation guides: Docker Engine Installation.
Most installation instructions assume the use of the root user; however, you can also enhance security by installing Docker as a non-privileged user or in rootless mode. In this post, Docker Engine will be installed using the root user, while a separate user will be created to manage CI/CD tasks and interact with Docker.
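For reference, on Debian-based systems such as Raspberry Pi OS, one documented path is Docker’s convenience script. Treat this as a rough sketch and review the downloaded script before running it; the official guide remains the authoritative source:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh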
Once Docker Engine is installed, we can create a dedicated Linux user to manage CI/CD operations. For this guide, we will name the user ci-user.
Create a new user, set a password, and add it to the Docker group:
sudo useradd -m ci-user
sudo passwd ci-user
sudo usermod -aG docker ci-user
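To confirm that ci-user can actually talk to the Docker daemon, you can switch to it and run a quick check (the new group membership only takes effect on a fresh login):

su - ci-user
docker ps   # should print a (possibly empty) container table without a permission error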
GitHub Actions (sudo user)
Below is an overview of the steps to configure GitHub Actions. If any issues arise, refer to the official documentation for guidance: GitHub Self-Hosted Runners.
- Go to your GitHub organization's or project’s Settings
- On the left side, hit Actions → Runners
- On the right side, click New runner → New self-hosted runner
- Choose your OS and Architecture (Linux and ARM64 for me)
A list of commands that you need to run on your server will appear.
One key difference is that instead of running ./run.sh, which only starts the runner for the current session, you need to configure it as a service that runs continuously in the background.
The detailed configuration is described in the official documentation: Configure the Application. In essence, the setup requires the following steps:
- From the runner folder, install and start the service, then optionally check its status:
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status
Before deploying, you will likely need to configure environment variables for your GitHub Actions workflows. To do this, navigate to Settings → Secrets and variables → Actions in your repository. From there, you can define the necessary secrets and variables that your CI/CD pipelines will use during build and deployment. This ensures that sensitive information, such as API keys or credentials, is securely managed and accessible only within the workflow environment.
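As a small illustration, a step inside a workflow job can read those values through the secrets and vars contexts. The names below are just examples, not values from the demo project:

- name: Deploy
  env:
    DB_PASSWORD: ${{ secrets.DB_PASSWORD }}   # a repository secret
    PUBLIC_URL: ${{ vars.PUBLIC_URL }}        # a repository variable
  run: ./scripts/deploy.sh                    # hypothetical script that consumes these values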
To make your project use GitHub Actions, you need a corresponding workflow configuration in your project’s source code. You can find one in the GitHub example for this article (see the .github/workflows/ci.yml file).
Now that we don’t need a sudo user anymore, let’s switch the user to ci-user.
Cloudflare configuration
As mentioned earlier, external access to the services is provided through a Cloudflare Tunnel, so let’s configure it. To use this feature, you’ll need a registered domain. I purchased mine directly from Cloudflare, though it’s also possible to use an external domain provider; however, I don’t have personal experience with that setup.
Once you have your domain:
- Navigate to Cloudflare Zero Trust → Networks → Tunnels → Create a tunnel.
- Select Cloudflared as the tunnel type and provide a descriptive name.
- Click Save tunnel.
- Choose the Docker option, where you’ll see a Docker run command. Do not execute it; copy and save the token, as it will be needed later.
- Click Next to proceed.
Before continuing, I would like to point out that I use multiple subdomains, which I distinguish on my server side. For example, if the domain is selfhostingdemo.com, I configure subdomains like kitchen.selfhostingdemo.com and livingroom.selfhostingdemo.com. To simplify configuration, I use a wildcard (*) subdomain in Cloudflare. If you have a static or single domain setup, you can adjust this configuration accordingly.
- On the next page, put the following values in the configuration:
| Property name | Value |
|---|---|
| Hostname | |
| Subdomain | *. |
| Domain | <DOMAIN>.com |
| Service | |
| Type | HTTPS |
| URL | caddy [1] |
| Additional application settings | |
| TLS | |
| Origin Server Name | *.<DOMAIN>.com |
| No TLS Verify | ON [2] |
[1] This is the address of our reverse proxy inside the Docker network.
[2] This might sound insecure, but keep in mind that it only affects traffic between two services running in the same Docker network (the Cloudflare Tunnel container and Caddy). The tunnel connection between the Cloudflared container and Cloudflare itself remains secure. If you need a stricter setup, you can change this configuration.
- Click Complete setup
At this point, your newly created tunnel should appear in the list of tunnels with a status of inactive. This is expected, as the tunnel service has not yet been started on your server.
Infrastructure services (ci-user)
As mentioned earlier, all services (besides the GitHub runner) - including infrastructure components - are deployed as Docker containers. I use Docker Compose along with several environment files to manage configuration. The following folder structure (also available in the accompanying GitHub example project) defines the layout and configuration of the infrastructure. Detailed explanations of each file are provided in the sections below and within the example project’s infrastructure directory.
The files in the example project differ slightly from those described here and include a few additional ones to make the setup runnable and testable on localhost.
This configuration could also be integrated into a GitHub Actions workflow to automate deployment and follow an Infrastructure-as-Code approach (with the exception of the environment files). I haven’t implemented this yet, as the infrastructure is relatively stable and not expected to change soon - but it would certainly be a worthwhile enhancement for future iterations.
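If I ever do automate it, a rough sketch could be another job running on the self-hosted runner (shown here as a fragment of a workflow; the file paths are hypothetical and would need to match your layout):

deploy-infra:
  runs-on: self-hosted
  steps:
    - uses: actions/checkout@v4
    - run: |
        # Copy the infrastructure files into ci-user's home (illustrative paths)
        cp -r infrastructure/. ~/apps/
        docker compose -f ~/apps/apps-compose.yaml up --detach --force-recreate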
The folder structure:
ci-user home
|
|__apps
   |
   |__config
   |  |__caddy
   |     |__Caddyfile
   |
   |__env
   |  |__cloudflared.env
   |  |__postgres.env
   |
   |__apps-compose.yaml
The file contents:
Caddyfile
*.<YOUR_DOMAIN>.org {
    tls internal

    handle /api/* {
        reverse_proxy backend:8080
    }

    handle {
        reverse_proxy frontend:80
    }
}
The Caddyfile is the configuration file for Caddy, which in this setup functions as a reverse proxy. It is configured to serve two routes - one for the backend and another for the frontend. Naturally, you’ll need to update this configuration to reflect your own domain, and adjust it as needed if you have a different number of services.
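For example, if you later add a service on its own subdomain - say a hypothetical kitchen-app container behind kitchen.<YOUR_DOMAIN>.org - a host matcher inside the existing site block is one way to route it:

    # Added inside the existing *.<YOUR_DOMAIN>.org block, alongside the other handle directives
    @kitchen host kitchen.<YOUR_DOMAIN>.org
    handle @kitchen {
        reverse_proxy kitchen-app:8080
    }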
cloudflared.env
TUNNEL_TOKEN=<CLOUDFLARE_TUNNEL_TOKEN>
The cloudflared.env file is one of the configuration files used to store environment variables for the Cloudflared service (the tunnel component). It is a simple text file containing key-value pairs that define the service’s environment variables. This file is linked to the service through the Docker Compose configuration, which will be described later.
Update the value of the TUNNEL_TOKEN variable with the Cloudflare token you saved earlier.
For security reasons, the file’s permissions should be restricted so that it is readable and writable only by the owner (or a sudo user). This can be done with the following command:
chmod 600 cloudflared.env
You can verify the permissions by running ls -la, which should display output similar to the following:
-rw------- 1 ci-user group 14 Oct 5 22:24 cloudflared.env
There are several ways to manage secrets; this approach is a simple and straightforward one.
postgres.env
POSTGRES_USER=<YOUR_DB_USER_NAME>
POSTGRES_PASSWORD=<YOUR_DB_USER_PASSWORD>
The postgres.env file is similar to cloudflared.env and contains the environment variables used by the PostgreSQL service. In this file, you define the master database credentials. These values must be set before the database is initialized for the first time. After initialization, you can connect to the database and add additional users as needed (see the sketch after this section). Replace the placeholder values with your own credentials - ideally strong, unique ones.
Remember to restrict file permissions by running:
chmod 600 postgres.env
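Once the container is running (after the compose step below), you can open psql inside it and create per-application users. A minimal sketch; the user and database names here are only examples:

docker exec -it postgres-17 psql -U <YOUR_DB_USER_NAME>
-- then, inside psql:
CREATE USER demo_app WITH PASSWORD 'change-me';
CREATE DATABASE demo_db OWNER demo_app;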
apps-compose.yaml
name: apps

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    restart: unless-stopped
    command: tunnel run
    env_file: "./env/cloudflared.env"
    networks:
      - apps_net

  caddy:
    image: caddy:2.10.0-alpine
    container_name: caddy
    restart: unless-stopped
    networks:
      - apps_net
    volumes:
      - ./config/caddy:/etc/caddy
      - caddy_data:/data

  postgres:
    image: postgres:17
    container_name: postgres-17
    restart: unless-stopped
    networks:
      - apps_net
    volumes:
      - pg_data:/var/lib/postgresql/data
    env_file: "./env/postgres.env"

networks:
  apps_net:
    name: apps_net

volumes:
  pg_data:
  caddy_data:
The apps-compose.yaml file defines the Docker Compose configuration for the infrastructure services. It includes references to configuration and environment variable files. When the Docker Compose command is executed, these definitions will be used to create and start the infrastructure services required by the applications.
To start and update the services (including the Cloudflare tunnel), run:
docker compose -f apps-compose.yaml up --detach --force-recreate
Afterward, verify that all containers are running by executing:
docker ps
Ensure that the postgres, cloudflared, and caddy containers are listed and active.
Finalising Cloudflare configuration
Next, return to the Cloudflare Tunnels dashboard and verify that the tunnel status is Healthy.
Verifying the configuration with Docker
If you use the example project, you can also run the backend and frontend services to verify that your configuration is functioning correctly. Once Cloudflare is configured, you can optionally expose these services to the internet through the Cloudflare Tunnel, which provides an additional way to validate that your setup is working as intended.
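For a quick external check, a couple of requests against one of your subdomains should exercise both routes. The kitchen subdomain and the /api path below are just examples and depend on what your backend actually serves:

curl -I https://kitchen.<YOUR_DOMAIN>.com/        # handled by the frontend route
curl -I https://kitchen.<YOUR_DOMAIN>.com/api/    # handled by the backend route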
Conclusion
In summary, this setup illustrates a practical approach to self-hosting personal projects on a Raspberry Pi. While this configuration meets my current needs, it is not intended as a production-grade solution; rather, it serves as a foundation for experimentation, learning, and gradual improvement.
This article does not go into detail about the specific applications being deployed. Each project can vary significantly in architecture, requirements, and configuration, making it challenging to cover every possible scenario in a single guide. Instead, the focus has been on setting up the underlying infrastructure and deployment workflow, which can be adapted to a wide range of projects.
Self-hosting in this way provides both a hands-on learning experience and greater control over your projects. The setup can be further expanded or refined over time, offering opportunities to experiment with more advanced features, improve security, and optimize performance as your needs evolve.

