DEV Community

augustine Egbuna

Posted on • Originally published at fivenineslab.com

Streaming Rugby Through a Self-Hosted RTMP Proxy with Docker and OBS

Last March, our office wanted to stream a rugby match — Highlanders vs Brumbies — to multiple monitors without juggling browser tabs or relying on flaky third-party streams. The problem: we needed one reliable ingestion point, the ability to record the stream, and the flexibility to push it to multiple destinations (local screens, recording storage, backup relay). No commercial streaming service gave us that level of control.

We solved this by running our own RTMP proxy using nginx-rtmp-module in Docker, pulling the source stream with ffmpeg, and distributing it across our internal network. This isn't about piracy — it's about understanding media streaming infrastructure at the protocol level. You can use the same pattern for security camera feeds, internal presentations, or any scenario where you need to ingest, transcode, and redistribute live video.

Why RTMP Still Matters

RTMP (Real-Time Messaging Protocol) remains the workhorse protocol for live video ingestion. While HLS and DASH dominate delivery to browsers, RTMP handles low-latency, persistent connections between encoders and servers. OBS, ffmpeg, and most professional broadcast tools speak RTMP natively.

The stack we built:

  • nginx with rtmp module: accepts incoming RTMP streams, handles restreaming
  • ffmpeg: pulls external streams (HLS, RTSP, etc.), transcodes, pushes to nginx
  • Docker Compose: orchestrates everything, handles restarts
  • Prometheus node-exporter (optional): monitors bitrate, dropped frames

Containerized RTMP Server

First, we built a Docker image for nginx with the RTMP module. The official nginx image doesn't include it, so we compile it in.

FROM alpine:3.18 AS builder

RUN apk add --no-cache \
    build-base \
    git \
    pcre-dev \
    openssl-dev \
    zlib-dev

WORKDIR /tmp
RUN git clone https://github.com/arut/nginx-rtmp-module.git && \
    wget http://nginx.org/download/nginx-1.24.0.tar.gz && \
    tar -xzf nginx-1.24.0.tar.gz

WORKDIR /tmp/nginx-1.24.0
RUN ./configure \
    --with-http_ssl_module \
    --add-module=../nginx-rtmp-module \
    --prefix=/usr/local/nginx && \
    make && make install

FROM alpine:3.18
RUN apk add --no-cache pcre openssl
COPY --from=builder /usr/local/nginx /usr/local/nginx
COPY nginx.conf /usr/local/nginx/conf/nginx.conf
EXPOSE 1935 8080
CMD ["/usr/local/nginx/sbin/nginx", "-g", "daemon off;"]

The nginx configuration handles stream ingestion on port 1935 and serves an HLS endpoint on 8080. Only the rtmp and http blocks are shown; the usual top-level events {} block still has to be present for nginx to start:

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;

            # Enable HLS
            hls on;
            hls_path /tmp/hls;
            hls_fragment 2s;
            hls_playlist_length 6s;

            # Allow publishing from local network only
            allow publish 10.0.0.0/8;
            allow publish 172.16.0.0/12;
            allow publish 192.168.0.0/16;
            deny publish all;
        }
    }
}

http {
    server {
        listen 8080;

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
            add_header Cache-Control no-cache;
            add_header Access-Control-Allow-Origin *;
        }

        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
        }
    }
}
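One detail worth noting: the styled /stat page needs stat.xsl, which ships in the nginx-rtmp-module repository but isn't copied by the Dockerfile above. A minimal sketch, assuming you copy the stylesheet into nginx's html directory during the build:

```nginx
# In the Dockerfile's final stage, something like:
#   COPY --from=builder /tmp/nginx-rtmp-module/stat.xsl /usr/local/nginx/html/stat.xsl
# Then serve it next to /stat:
location /stat.xsl {
    root /usr/local/nginx/html;
}
```

Without the stylesheet, /stat still returns raw XML, which is all the watchdog script later in this post needs anyway.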

Ingesting the External Stream

Most live sports streams are delivered via HLS (.m3u8 playlists). We use ffmpeg to pull that HLS stream and push it to our RTMP server:

#!/bin/bash
SOURCE_URL="https://example.com/stream/playlist.m3u8"
RTMP_DEST="rtmp://localhost:1935/live/rugby"

ffmpeg -i "$SOURCE_URL" \
  -c:v copy \
  -c:a aac -b:a 128k \
  -f flv "$RTMP_DEST"

This script runs in a separate container (or systemd service). The -c:v copy flag avoids re-encoding video — we're just remuxing from HLS to RTMP. If the source codec isn't FLV-compatible (FLV carries H.264, but not HEVC or VP9), replace -c:v copy with -c:v libx264 -preset veryfast.
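That copy-vs-transcode decision can be automated. A small helper — hypothetical, not part of the stack above — that picks the ffmpeg video arguments from a codec name (as reported by ffprobe, for example):

```python
def video_args(source_codec: str) -> list[str]:
    """Choose ffmpeg video options for an FLV (RTMP) output.

    FLV carries H.264, so a compatible source can be remuxed with no
    re-encoding; anything else gets transcoded to H.264.
    """
    if source_codec.lower() in ("h264", "avc1"):
        return ["-c:v", "copy"]
    return ["-c:v", "libx264", "-preset", "veryfast"]
```

The returned list slots straight into a subprocess ffmpeg invocation in place of the hard-coded -c:v copy.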

Docker Compose Stack

Here's the complete docker-compose.yml:

version: '3.8'

services:
  rtmp-server:
    build: ./nginx-rtmp
    ports:
      - "1935:1935"
      - "8080:8080"
    volumes:
      - ./hls:/tmp/hls
    restart: unless-stopped

  stream-ingester:
    image: jrottenberg/ffmpeg:4.4-alpine
    depends_on:
      - rtmp-server
    environment:
      SOURCE_URL: ${SOURCE_URL}
      RTMP_DEST: rtmp://rtmp-server:1935/live/rugby
    command: >
      -i ${SOURCE_URL}
      -c:v copy
      -c:a aac -b:a 128k
      -f flv rtmp://rtmp-server:1935/live/rugby
    restart: unless-stopped

Launch with docker-compose up -d. The ingester container pulls the external stream and feeds it into the nginx RTMP server.

Connecting Clients

Now you have three access methods:

  1. RTMP direct (VLC, ffplay, OBS): rtmp://your-server:1935/live/rugby
  2. HLS browser playback: http://your-server:8080/hls/rugby.m3u8
  3. Statistics dashboard: http://your-server:8080/stat

For office monitors, we used VLC with this command:

vlc rtmp://10.0.1.50:1935/live/rugby --fullscreen

RTMP latency is typically 2-4 seconds. HLS adds another 6-10 seconds due to segment buffering.
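The HLS side of that follows directly from the config above: with 2-second fragments and a 6-second playlist, a player that buffers the whole playlist starts roughly 6 seconds behind the RTMP edge, and real players often buffer a bit more. A quick back-of-envelope:

```python
# Values from the nginx config above
hls_fragment = 2        # hls_fragment 2s
playlist_length = 6     # hls_playlist_length 6s

segments_buffered = playlist_length // hls_fragment   # segments a player holds
hls_added_latency = segments_buffered * hls_fragment  # seconds behind RTMP

rtmp_latency = (2, 4)   # typical RTMP end-to-end delay, seconds
total = (rtmp_latency[0] + hls_added_latency,
         rtmp_latency[1] + hls_added_latency)
```

Shorter fragments lower latency but increase playlist churn and request overhead; 2-second fragments are a reasonable middle ground.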

Handling Stream Failures

Live streams fail. Networks hiccup, source servers restart, uplinks saturate. We added a watchdog script that monitors the ffmpeg process and restarts it on failure:

import subprocess
import time
import requests

RTMP_STAT_URL = "http://localhost:8080/stat"
RTMP_STREAM = "rugby"
RESTART_THRESHOLD = 15  # seconds without data

def check_stream_alive():
    try:
        resp = requests.get(RTMP_STAT_URL, timeout=5)
        # Parse XML, check if stream is active
        return RTMP_STREAM in resp.text
    except requests.RequestException:
        return False

last_alive = time.time()
while True:
    if check_stream_alive():
        last_alive = time.time()
    elif time.time() - last_alive > RESTART_THRESHOLD:
        print("Stream dead, restarting ingester...")
        subprocess.run(["docker-compose", "restart", "stream-ingester"])
        last_alive = time.time()  # give the restart time to take effect
    time.sleep(10)

This runs as a sidecar container or systemd service. In production, you'd use proper XML parsing and integrate with your monitoring stack (Prometheus, Grafana).
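That "proper XML parsing" can stay in the standard library. A sketch using xml.etree, assuming the layout nginx-rtmp's /stat endpoint emits (one <stream> element with a <name> child per active publisher):

```python
import xml.etree.ElementTree as ET

def live_streams(stat_xml: str) -> set[str]:
    """Return the names of streams currently listed in nginx-rtmp's /stat XML."""
    root = ET.fromstring(stat_xml)
    # nginx-rtmp nests streams as rtmp > server > application > live > stream;
    # iter() finds every <stream> element regardless of depth.
    return {s.findtext("name") for s in root.iter("stream")}

# check_stream_alive() built on this replaces the substring test above:
#   return RTMP_STREAM in live_streams(requests.get(RTMP_STAT_URL, timeout=5).text)
```

This avoids the false positive where the stream name merely appears somewhere in the page (for example, in a stale entry) without an active publisher.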

Bitrate and Transcoding Considerations

If you're streaming over a constrained network, you may need to transcode down to a lower bitrate. Replace the -c:v copy in the ffmpeg command with:

-c:v libx264 -preset veryfast -b:v 2500k -maxrate 2500k -bufsize 5000k

This caps the video at 2.5 Mbps. For multiple quality levels (adaptive bitrate), you'd configure nginx-rtmp to output multiple HLS variants. That's beyond scope here, but the hls_variant directive handles it.
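For the curious, a minimal sketch of that variant approach — the bitrates and suffix names here are illustrative, not from our deployment. One application fans the incoming stream out into renditions via exec, and a second application maps name suffixes to bandwidth tags in the master playlist:

```nginx
application live {
    live on;
    # Fan out: one transcoded low-bitrate rendition plus a passthrough copy
    exec ffmpeg -i rtmp://localhost/live/$name
        -c:v libx264 -b:v 800k -c:a aac -f flv rtmp://localhost/abr/$name_low
        -c:v copy -c:a copy -f flv rtmp://localhost/abr/$name_src;
}

application abr {
    live on;
    hls on;
    hls_path /tmp/hls;
    hls_variant _low BANDWIDTH=800000;
    hls_variant _src BANDWIDTH=4000000;
}
```

Players then fetch the master playlist for the stream from the abr application and switch renditions based on measured throughput.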

Recording for Later Playback

To record the stream as it arrives, enable recording in the nginx config:

application live {
    live on;
    record all;
    # Directory must exist and be writable by the nginx worker
    record_path /tmp/recordings;
    record_suffix -%Y%m%d-%H%M%S.flv;
}

Mount /tmp/recordings to a Docker volume. Each stream session gets saved as an FLV file named after the stream plus the timestamp suffix. Convert to MP4 later with:

ffmpeg -i rugby-20260315-193000.flv -c copy match.mp4
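If a whole directory of sessions piles up, a small helper — hypothetical, not part of the stack above — can build the conversion commands. It only constructs argument lists, so running them stays an explicit subprocess call:

```python
from pathlib import Path

def convert_cmd(flv: Path) -> list[str]:
    """Build an ffmpeg command that remuxes an FLV recording to MP4.

    -c copy remuxes both streams without re-encoding, so this is fast
    and lossless as long as the codecs (H.264/AAC) fit in MP4.
    """
    return ["ffmpeg", "-i", str(flv), "-c", "copy", str(flv.with_suffix(".mp4"))]

# Typical use: for f in Path("recordings").glob("*.flv"): subprocess.run(convert_cmd(f))
```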

What We Learned

Running your own RTMP infrastructure isn't overkill if you need control. We deployed this for rugby, but the same stack handles security cameras, webinar recordings, and internal broadcasts. The latency is lower than most third-party services, and you avoid their bandwidth throttling.

Key takeaways:

  • RTMP is still the best protocol for ingestion, despite being "old"
  • Docker makes nginx-rtmp trivial to deploy and version
  • Always monitor stream health — live video fails in creative ways
  • HLS adds latency but gives you browser compatibility

The entire stack runs on a $20/month VPS with 2 vCPUs and 4GB RAM. For a single 1080p stream, that's more than enough.


This post is an excerpt from Practical AI Infrastructure Engineering — a production handbook covering Docker, GPU infrastructure, vector databases, and LLM APIs. Full book with 4 hands-on capstone projects available at https://activ8ted.gumroad.com/l/ssmfkx


