Aashish Chaurasiya for Vultr

Posted on • Originally published at docs.vultr.com

Deploying Hermes Agent on Ubuntu 26.04

Hermes Agent is an open-source self-hosted AI agent framework by Nous Research that runs continuously on a server with persistent memory and tool access. It integrates with Telegram, Discord, Slack, and WhatsApp, and supports any OpenAI-compatible LLM provider including Vultr Serverless Inference. This guide deploys Hermes Agent using Docker Compose with Traefik handling automatic HTTPS, and configures an LLM backend. By the end, you'll have a self-hosted AI agent running with a secured web dashboard and verified LLM connectivity.


Install Docker

Ubuntu's default repositories often lag behind; the official Docker repository provides the most current Docker Engine builds.

1. Update the package index and install dependency packages:

$ sudo apt update
$ sudo apt install apt-transport-https ca-certificates curl -y

2. Create the APT keyrings directory and add Docker's GPG key:

$ sudo install -m 0755 -d /etc/apt/keyrings
$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
$ sudo chmod a+r /etc/apt/keyrings/docker.asc

3. Register the Docker repository:

$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

4. Refresh APT and install Docker:

$ sudo apt update
$ sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

5. Add your user to the Docker group:

$ sudo usermod -aG docker $USER
$ newgrp docker
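Note that the group change only takes effect in new login sessions (`newgrp docker` applies it to the current shell). A quick check, using only standard shell tools, tells you whether the current session already has it:

```shell
# Check whether this shell session is in the docker group.
# `id -nG` prints the session's group names separated by spaces.
if id -nG | tr ' ' '\n' | grep -qx docker; then
    msg="docker group active"
else
    msg="re-login (or run newgrp docker) before using docker without sudo"
fi
echo "$msg"
```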

Set Up the Project Directory

1. Create the project directory:

$ mkdir -p ~/hermes/data
$ cd ~/hermes

2. Create the environment file:

$ nano .env

Add the following, substituting your own domain and Let's Encrypt contact email:

DOMAIN=hermes.example.com
LETSENCRYPT_EMAIL=admin@example.com

3. Generate dashboard authentication credentials:

$ docker run --rm httpd:2.4-alpine htpasswd -nbB admin 'your_dashboard_password' > .htpasswd
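The generated file contains a single `username:hash` line, where the `-B` flag produces a bcrypt hash. A small sketch of the format, using a sample line rather than a real credential:

```shell
# .htpasswd entries have the form "username:hash"; with -B the hash
# is bcrypt and starts with $2y$. Sample line (not a real credential):
line='admin:$2y$05$abcdefghijklmnopqrstuv'
user="${line%%:*}"   # text before the first colon
hash="${line#*:}"    # text after the first colon
echo "user=$user"
case "$hash" in
    '$2y$'*) echo "bcrypt hash detected" ;;
    *)       echo "unexpected hash format" ;;
esac
```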

Deploy with Docker Compose

1. Create the Docker Compose file:

$ nano docker-compose.yml

Add the following configuration:

services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.letsencrypt.acme.email=${LETSENCRYPT_EMAIL}
      - --certificatesresolvers.letsencrypt.acme.storage=/data/acme.json
      - --certificatesresolvers.letsencrypt.acme.tlschallenge=true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data:/data
      - ./.htpasswd:/etc/traefik/.htpasswd:ro

  hermes:
    image: nousresearch/hermes-agent:latest
    volumes:
      - ./data:/opt/data
    labels:
      - traefik.enable=true
      - traefik.http.routers.hermes.rule=Host(`${DOMAIN}`)
      - traefik.http.routers.hermes.entrypoints=websecure
      - traefik.http.routers.hermes.tls.certresolver=letsencrypt

  dashboard:
    image: nginx:alpine
    labels:
      - traefik.enable=true
      - traefik.http.routers.dashboard.rule=Host(`${DOMAIN}`) && PathPrefix(`/dashboard`)
      - traefik.http.routers.dashboard.entrypoints=websecure
      - traefik.http.routers.dashboard.tls.certresolver=letsencrypt
      - traefik.http.middlewares.dashboard-auth.basicauth.usersfile=/etc/traefik/.htpasswd
      - traefik.http.routers.dashboard.middlewares=dashboard-auth

Note that the .htpasswd file is mounted into the traefik container: Traefik itself enforces the basicauth middleware, so the usersfile path must be readable from inside Traefik, and the dashboard router needs the same websecure entrypoint and certificate resolver as the hermes router.
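As written, the containers will not come back after a crash or host reboot. If you want that behavior, a restart policy can be added per service; a minimal sketch (repeat for each of the three services):

```yaml
services:
  traefik:
    restart: unless-stopped    # also add to hermes and dashboard
```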

2. Open the firewall:

$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp

3. Start the services:

$ docker compose up -d

4. Verify the services are running:

$ docker compose ps

Configure the LLM

1. Run the model configuration command:

$ docker run -it --rm \
    -v ~/hermes/data:/opt/data \
    nousresearch/hermes-agent:latest model

2. Follow the prompts to select a provider and model.

The configuration supports OpenAI, Anthropic, OpenRouter, and any OpenAI-compatible endpoint. To use Vultr Serverless Inference, select the OpenAI-compatible option and provide your Vultr API key and the inference endpoint URL.
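To sanity-check an OpenAI-compatible backend outside Hermes, you can post a minimal chat-completion request to it directly. The request body below follows the OpenAI chat-completions schema; the model name and endpoint are placeholders — substitute the values from your provider (for Vultr Serverless Inference, the endpoint URL and model names shown in your customer portal):

```shell
# Minimal OpenAI-compatible chat-completion request body.
payload='{
  "model": "your-model-name",
  "messages": [{"role": "user", "content": "Say hello"}]
}'
echo "$payload"

# With a real key and endpoint, send it with curl, e.g.:
#   curl -s "$ENDPOINT_URL/chat/completions" \
#       -H "Authorization: Bearer $API_KEY" \
#       -H "Content-Type: application/json" \
#       -d "$payload"
```

A JSON response containing a `choices` array confirms the backend accepts OpenAI-style requests.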


Verify the Deployment

1. Check the service logs:

$ docker compose logs hermes

2. Open the dashboard in a browser:

Visit https://hermes.example.com/dashboard (substituting your own domain) and log in with the username admin and the password you chose when generating the .htpasswd file.

3. Test LLM connectivity:

Send a simple query through the dashboard. A response from the model confirms the LLM connection is working.
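If the browser check fails, curl can isolate whether TLS and basic auth are working. HTTP basic auth is just base64 of `user:password` sent in the Authorization header; a sketch with placeholder credentials and domain:

```shell
# The basic-auth header value is base64("user:password").
auth=$(printf '%s' 'admin:your_dashboard_password' | base64)
echo "Authorization: Basic $auth"

# curl builds this header for you; with your real domain and password:
#   curl -u admin:your_dashboard_password -I https://hermes.example.com/dashboard
```

A `200 OK` response with correct credentials (and a `401 Unauthorized` without them) confirms the middleware is active.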


Next Steps

Hermes Agent is running and connected to a language model. From here you can:

  • Connect messaging integrations for Telegram, Discord, Slack, or WhatsApp through the dashboard
  • Enable persistent memory so the agent retains context across sessions
  • Switch to Vultr Serverless Inference for a cost-effective, OpenAI-compatible LLM backend

For the full guide with additional tips, visit the original article on Vultr Docs.
