We're going to Las Vegas 🎰
NAB Show 2026 is almost here — April 19 to 22 at the Las Vegas Convention Center — and the Ant Media team will be there.
If you are attending and working on anything related to live streaming, low latency video, IP camera infrastructure, broadcast workflows, or real-time applications — come find us at our booth. We would genuinely love to talk shop, no sales pitch required.
But before that, let me share some of what we have been working on and thinking about — because NAB is not just a conference, it is a moment to reflect on where the industry is heading.
The problem nobody talks about enough: latency vs scale
Most streaming solutions make you choose. You either get low latency or you get scale. WebRTC gives you sub-half-second latency but has historically been brutal to scale beyond a few hundred viewers. HLS scales beautifully, but its typical 8 to 10 seconds of delay makes it useless for anything interactive — auctions, live sports betting, game shows, real-time monitoring.
The thing we have been obsessing over at Ant Media is collapsing that trade-off.
Here is the architecture pattern that actually works at scale:
```
Publishers (OBS / hardware encoders / WebRTC)
        ↓
Origin Cluster (ingest + stream metadata)
        ↓
Edge Cluster (WebRTC delivery to viewers)
        ↓
Viewers (sub-500ms latency, thousands concurrent)
```
The key insight is separating ingest from delivery. Origins handle publishers. Edges handle viewers. They talk to each other via a shared MongoDB cluster for stream metadata and routing. Horizontal scaling becomes trivial — add Edge nodes when viewer count grows, add Origins when publisher count grows. They never compete for the same resources.
On a c5.9xlarge (36 vCPU), a single Edge node handles roughly 800 to 830 concurrent WebRTC viewers at 720p before hitting limits. Scale math becomes predictable.
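To make that "predictable scale math" concrete, here is a minimal capacity-planning sketch. The ~800 viewers-per-node figure comes from the c5.9xlarge test above; the 20% headroom factor is an illustrative assumption (to absorb join spikes and failover), not a product default.

```python
import math

# Ballpark from our c5.9xlarge tests: ~800-830 concurrent 720p WebRTC viewers per Edge.
VIEWERS_PER_EDGE = 800
HEADROOM = 0.8  # assumption: plan for 80% utilization, not 100%

def edge_nodes_needed(peak_viewers: int) -> int:
    """Return how many Edge nodes to provision for a given peak viewer count."""
    usable_per_node = VIEWERS_PER_EDGE * HEADROOM
    return math.ceil(peak_viewers / usable_per_node)

if __name__ == "__main__":
    for viewers in (500, 5_000, 50_000):
        print(f"{viewers} viewers -> {edge_nodes_needed(viewers)} edge nodes")
```

Because ingest and delivery are decoupled, the same arithmetic applies independently to Origins (sized by publisher count) and Edges (sized by viewer count).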
RTSP ingestion — the unsexy backbone of enterprise video
WebRTC gets all the attention. But a huge chunk of real-world video infrastructure runs on RTSP. IP cameras. VMS platforms. Security feeds. Industrial monitoring. Every camera in every warehouse, factory, hospital, and data center is almost certainly pushing RTSP streams somewhere.
We have been doing a lot of work on high-volume RTSP ingestion — pulling streams from cameras, transcoding or passthrough routing, and distributing to AI clusters or human viewers downstream.
A pattern we see a lot:
```
200 x IP Cameras (RTSP, 4K, H.264)
        ↓
Ant Media (ingest + transcode)
   ↓         ↓         ↓
AI Cluster 1 (4K @ 15fps)
AI Cluster 2 (1080p @ 15fps)
AI Cluster 3 (4K @ 1fps for snapshots)
```
The 1fps output is the one people always underestimate. For computer vision workloads that just need periodic frame analysis rather than full video — dropping to 1fps cuts GPU load to almost nothing compared to full rate output. Small detail, big impact on server count.
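The arithmetic behind that claim is worth spelling out. This is a back-of-envelope sketch using the 200-camera example above; the frame rates are the ones from the diagram, and "GPU work" is approximated as frames analyzed per second:

```python
# How many frames per second does each downstream AI cluster have to analyze?
CAMERAS = 200

def cluster_fps(cameras: int, fps_per_camera: float) -> float:
    """Aggregate frames/second arriving at a cluster from all cameras."""
    return cameras * fps_per_camera

full_rate = cluster_fps(CAMERAS, 15)  # 200 cameras x 15fps = 3000 frames/s
snapshot = cluster_fps(CAMERAS, 1)    # 200 cameras x 1fps  = 200 frames/s

print(f"Full rate: {full_rate:.0f} fps, snapshot: {snapshot:.0f} fps "
      f"({full_rate / snapshot:.0f}x less inference work at 1fps)")
```

A 15x reduction in frames to process translates roughly into a 15x smaller GPU fleet for that cluster — which is why the 1fps output pays for itself.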
SRT is quietly becoming the protocol of choice for contribution
If you work in broadcast or live production, you already know this. SRT (Secure Reliable Transport) has become the go-to for contribution links — getting video from the field into your ingest point reliably over unpredictable networks.
We support SRT ingest natively. One thing that bit us recently in a Kubernetes deployment: the default Helm chart exposed only RTMP port 1935 through the load balancer, and UDP port 4200 for SRT was missing. If you are deploying Ant Media on AKS or any Kubernetes cluster and wondering why your SRT streams are not reaching the server, check your load balancer config and make sure 4200/UDP is exposed.
```yaml
# Make sure this is in your service config
- port: 4200
  protocol: UDP
  name: srt
```
Small thing, but it has caught a few teams out.
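For context, here is what a full Service exposing both ingest ports might look like. This is an illustrative sketch, not the chart's actual template — the names and selector are hypothetical, and depending on your Kubernetes version and cloud provider, mixing TCP and UDP in a single LoadBalancer Service may not be supported, in which case you need one Service per protocol.

```yaml
# Hypothetical Service exposing RTMP (TCP) and SRT (UDP) ingest.
# Adapt name/selector to your Helm chart's values.
apiVersion: v1
kind: Service
metadata:
  name: ant-media-ingest
spec:
  type: LoadBalancer
  selector:
    app: ant-media-server
  ports:
    - name: rtmp
      port: 1935
      protocol: TCP
    - name: srt
      port: 4200
      protocol: UDP
```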
Kubernetes deployments — the IP assignment question
Since we are talking about Kubernetes — this is something that comes up every single time someone deploys Ant Media on AKS in a private enterprise network.
The question is always: which components consume VNet IPs vs which ones use overlay IPs?
Here is the short answer for Azure CNI Overlay deployments:
| Component | IP Type |
|---|---|
| Origin Pods | CNI Overlay IP |
| Edge Pods | CNI Overlay IP |
| MongoDB Pod | CNI Overlay IP |
| Ingress Controller | VNet IP |
| Azure Load Balancer | VNet IP |
| AKS Nodes | VNet IP |
With hostNetwork set to false on your AMS pods, all application pods get Overlay IPs. Only the external-facing entry points consume VNet subnet IPs. This matters a lot in enterprise environments where VNet IP space is limited and carefully managed.
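In manifest terms, the setting that controls this is a standard pod-spec field. A minimal fragment (the surrounding Deployment structure is abbreviated):

```yaml
# Deployment pod template fragment: keep hostNetwork off so AMS pods
# draw from the CNI overlay range instead of the VNet subnet.
spec:
  template:
    spec:
      hostNetwork: false
```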
Also — if you are not using WebRTC (pure RTMP/SRT/HLS deployments), disable Coturn entirely. It is not needed and it adds unnecessary complexity to the IP routing.
Come talk to us at NAB
We will be at NAB Show 2026, April 19 to 22 in Las Vegas.
If you are working on:
- Live streaming infrastructure at scale
- Low latency WebRTC delivery
- RTSP camera ingestion and distribution
- AKS / cloud-native streaming deployments
- Broadcast contribution workflows with SRT
- AI video analytics pipelines
...come find us. We are happy to talk architecture, share what we have learned, and hear what you are building.
No forced demos. No sales scripts. Just streaming engineers talking about streaming problems. Which honestly is the best kind of conversation.
See you in Vegas 🎲
Ant Media Server is an open-source and enterprise live streaming solution supporting WebRTC, RTMP, HLS, SRT, RTSP, and more. antmedia.io