DEV Community

Alan
Building Zero-Config LAN Discovery in Node.js (mDNS + UDP Broadcast)

I was building a small daemon that runs on multiple machines in a LAN — think a couple of dev servers, a NAS, maybe a Raspberry Pi. I wanted them to discover each other automatically. No central server, no config files listing IP addresses, no "go edit this YAML and restart."

Just start it and it should find its peers.

I ended up layering three approaches: mDNS, a custom UDP broadcast protocol I basically stole from 1990s Novell NetWare, and a brute-force subnet scanner as the last resort. The whole thing is ~300 lines of TypeScript.

mDNS: great on paper

First attempt was pure mDNS. Typical setup:

Laptop (Wi-Fi) → office AP → corporate router → server running the daemon

In theory mDNS handles this fine. In practice it worked on my MacBook at home, worked in a small office, then fell apart on real networks. Routers silently dropping multicast. Access points with client isolation. One corporate network had multicast disabled and nobody in IT could tell me why.

Node.js mDNS annoyances:

  • No built-in support — you need the multicast-dns npm package
  • macOS handles it natively (Bonjour), Linux needs avahi, Windows is a coin flip
  • Port 5353 conflicts if avahi-daemon is already running
  • Fails silently. Your queries just go into the void.

Stealing from 1990s NetWare

Novell NetWare had SAP — Service Advertising Protocol. Every server periodically broadcasts "hey, I exist, here's what I do" to the entire subnet. No multicast groups, no special infrastructure.

I built the same thing on UDP/IP. Two message types:

interface DiscoverMessage {
  type: "discover";
  version: 1;
}

interface AnnounceMessage {
  type: "announce";
  version: 1;
  instance_id: string;
  display_name: string;
  port: number;
  tls: boolean;
}

Node starts up, broadcasts discover. Running instances reply with announce. Periodic announce on a timer so late joiners get found too.

Socket setup

import * as dgram from "node:dgram";
import * as os from "node:os";

const DISCOVERY_PORT = 17891;

async function startDiscovery(): Promise<dgram.Socket> {
  return new Promise((resolve, reject) => {
    const socket = dgram.createSocket("udp4");
    let bound = false;

    socket.on("error", (err) => {
      // socket.address() throws before bind, so track bind state with a flag
      if (!bound) reject(err); // failed during bind
    });

    socket.bind(DISCOVERY_PORT, () => {
      bound = true;
      socket.setBroadcast(true); // DO NOT FORGET THIS
      resolve(socket);
    });
  });
}

I wasted two hours on this. Ran tcpdump, stared at Wireshark captures, swapped ports, blamed the firewall.

Turns out I forgot socket.setBroadcast(true). Node.js won't send to broadcast addresses without it — doesn't throw, doesn't warn. The packets just vanish.

Broadcast addresses

On a machine with multiple NICs you need the per-subnet broadcast address, not just 255.255.255.255:

function getBroadcastTargets(): string[] {
  const targets: string[] = [];
  const ifaces = os.networkInterfaces();

  for (const [name, addrs] of Object.entries(ifaces)) {
    if (!addrs || isVirtualInterface(name)) continue;
    for (const addr of addrs) {
      if (addr.family === "IPv4" && !addr.internal) {
        targets.push(calcBroadcast(addr.address, addr.netmask));
      }
    }
  }
  return targets;
}

// broadcast = IP | ~mask, nothing fancy
function calcBroadcast(ip: string, mask: string): string {
  const ipParts = ip.split(".").map(Number);
  const maskParts = mask.split(".").map(Number);
  return ipParts
    .map((b, i) => (b | (~maskParts[i]! & 0xff)))
    .join(".");
}

192.168.1.42 with mask 255.255.255.0 → 192.168.1.255. Bitwise OR with inverted netmask.

Filtering virtual interfaces

Docker running + WireGuard VPN = my daemon "discovering" phantom peers. Broadcasting over Docker bridges and VPN tunnels, getting garbage back.

function isVirtualInterface(name: string): boolean {
  const n = name.toLowerCase();
  return (
    n.startsWith("wg") ||       // WireGuard
    n.startsWith("tun") ||      // TUN (OpenVPN)
    n.startsWith("tap") ||      // TAP
    n.startsWith("docker") ||   // Docker
    n.startsWith("br-") ||      // Docker bridge
    n.startsWith("veth") ||     // Docker veth
    n.startsWith("virbr") ||    // KVM
    n.startsWith("vmnet") ||    // VMware
    n.startsWith("vboxnet")     // VirtualBox
  );
}

Hardcoded prefix list. os.networkInterfaces() gives you zero info about physical vs virtual. On Linux you could parse sysfs but that doesn't help on macOS or Windows.

TCP probe before registering

UDP is unverified. Before registering a peer:

async function handleAnnounce(
  data: AnnounceMessage,
  rinfo: dgram.RemoteInfo,
): Promise<void> {
  // own broadcast echoing back — yes this happens
  if (isSelfIp(rinfo.address)) return;

  // not on our subnet? reject
  if (!isInLocalSubnet(rinfo.address)) return;

  // actually try to connect before believing it
  const alive = await tcpProbe(rinfo.address, data.port);
  if (!alive) return;

  registerPeer({
    id: data.instance_id,
    address: rinfo.address,
    port: data.port,
    source: "broadcast",
  });
}

You might get an announce from a service that crashed 30 seconds ago. TCP health check with 2-second timeout filters out the ghosts.

Your own broadcasts also echo back. Found that when my daemon kept discovering itself.

Announce jitter

Without randomization, 10 simultaneous instances all broadcast at the exact same second forever:

const ANNOUNCE_INTERVAL = 60_000;
const ANNOUNCE_JITTER = 10_000;

function scheduleNextAnnounce(socket: dgram.Socket): void {
  const delay = ANNOUNCE_INTERVAL + Math.random() * ANNOUNCE_JITTER;
  setTimeout(() => {
    broadcastAnnounce(socket);
    scheduleNextAnnounce(socket);
  }, delay);
}

60s base + 0-10s random jitter. Same thing NTP does.

I still use mDNS though

Despite everything above, I run mDNS alongside the broadcast protocol. On home networks it works fine, and other programs can discover your service too.

const mDNS = require("multicast-dns");
const mdns = mDNS();

const SERVICE_TYPE = "_my-service._tcp.local";

setInterval(() => {
  mdns.query({
    questions: [{ name: SERVICE_TYPE, type: "PTR" }],
  });
}, 30_000);

mdns.on("response", (response, rinfo) => {
  const allRecords = [
    ...response.answers,
    ...response.additionals,
  ];

  const srvRecords = allRecords.filter((r) => r.type === "SRV");
  const aRecords = allRecords.filter((r) => r.type === "A");

  for (const srv of srvRecords) {
    const port = srv.data.port;
    const target = srv.data.target;

    const aRecord = aRecords.find((r) => r.name === target);
    const ip = aRecord ? String(aRecord.data) : rinfo.address;

    console.log(`Found peer: ${ip}:${port}`);
  }
});

One gotcha: lots of services bind to 127.0.0.1 and advertise that via mDNS. You connect to the A record and hit yourself.

Fix — use the UDP packet's source IP when the A record is loopback:

const aRecordIp = aRecord ? String(aRecord.data) : "";
const isLoopback =
  aRecordIp === "127.0.0.1" ||
  aRecordIp === "::1" ||
  aRecordIp.startsWith("127.");

const address = aRecordIp && !isLoopback
  ? aRecordIp
  : rinfo.address;

I made mDNS a soft dependency — if multicast-dns isn't installed, broadcast-only mode:

let mdns = null;
try {
  const mDNS = require("multicast-dns");
  mdns = mDNS();
} catch {
  console.warn("mDNS unavailable, broadcast-only mode");
}

Last resort: subnet scan

When passive discovery doesn't cut it:

const CONCURRENCY = 50;
const TIMEOUT = 2_000;

// probeHost is the same TCP connect check as tcpProbe, with TIMEOUT applied
async function scanSubnet(subnet: string, port: number) {
  const targets: { host: string; port: number }[] = [];
  for (let i = 1; i <= 254; i++) {
    targets.push({ host: `${subnet}.${i}`, port });
  }

  const queue = [...targets];

  const worker = async () => {
    while (queue.length > 0) {
      const target = queue.shift()!;
      const alive = await probeHost(target.host, target.port, TIMEOUT);
      if (alive) console.log(`Found: ${target.host}:${target.port}`);
    }
  };

  // 50 workers chewing through the queue
  const workers = Array.from(
    { length: Math.min(CONCURRENCY, targets.length) },
    () => worker(),
  );
  await Promise.allSettled(workers);
}

50 concurrent probes, 2-second timeout. Full /24 in about 10 seconds. I expose it as an on-demand API endpoint.

What I'd change

Should have filtered virtual interfaces on day one. That one wasted an entire evening — I kept thinking my broadcast protocol was broken until I noticed Docker bridges were happily replying to my packets.

Should have used a binary header for UDP instead of raw JSON. Would have prevented the JSON.parse crash when a broken mDNS responder sent invalid data. I knew it could throw. Didn't wrap it. Classic.

Test on Windows earlier. The firewall silently blocks UDP on custom ports. Ended up adding netsh advfirewall rule creation.

Only dependencies are dgram and os from Node, plus multicast-dns if you want it.


Fought with mDNS on corporate networks? What ended up working?
