XDP: The Kernel-Level Powerhouse Behind Modern Network Defence

Introduction

Traditional packet processing in Linux has always had one problem: latency, just like your Nan.

Packets climb an almost endless ladder through kernel subsystems before reaching user space, by which time your firewall has probably missed the critical window to act. Shame on you and your Nan.

eXpress Data Path (XDP) changes that completely. It's a fast-path hook that runs inside the kernel's network driver layer: before sockets, before Netfilter, before the kernel allocates a socket buffer (skb).

This means you can inspect, modify, drop, or redirect packets as they arrive on the NIC, with nanosecond-level performance.

It's like knowing who's going to turn up at the pub before they've left the house.

The Core Idea

XDP extends the Linux kernel with programmable packet handling at the driver level, using eBPF (extended Berkeley Packet Filter) programs compiled into bytecode.

Instead of pushing packets up the stack, XDP lets you attach logic that decides what happens next, directly in the NIC's receive path.

Execution flow:

  1. NIC receives a packet.
  2. XDP hook triggers before skb allocation.
  3. eBPF program runs in the kernel VM.
  4. Program returns one of several actions:
    • XDP_PASS: let the packet continue to the normal stack
    • XDP_DROP: discard it immediately
    • XDP_TX: bounce it back out the same interface
    • XDP_REDIRECT: forward it to another interface, CPU, or AF_XDP socket
    • XDP_ABORTED: signal a program error; the packet is dropped and a tracepoint fires
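
In practice that decision logic can be tiny. Here's a minimal aya-based sketch (same Rust setup as the full example later in this post) that unconditionally drops everything:

use aya_bpf::{bindings::xdp_action, macros::xdp, programs::XdpContext};

// The smallest useful XDP program: every packet that reaches this hook
// is discarded before the kernel ever allocates an skb for it.
#[xdp(name = "drop_all")]
pub fn drop_all(_ctx: XdpContext) -> u32 {
    xdp_action::XDP_DROP
}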

That's it.

Why It Matters

1. Performance

XDP can process millions of packets per second per core.

The Cilium project has benchmarked XDP at over 20 million packets per second on commodity hardware.

That's like Mo Farah racing your Nan in an ultra marathon and finishing it 20 million times before she's even put her jeggings on.

2. Programmability

Unlike fixed-function firewalls or DPDK pipelines, XDP programs are just eBPF bytecode.

You can dynamically load and unload filters at runtime, without recompiling the kernel or restarting services.
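
For example, with iproute2 you can attach and detach a compiled program on a live interface (eth0 and prog.o are placeholders; the section name must match your program):

# Attach an XDP program to eth0
ip link set dev eth0 xdp obj prog.o sec xdp

# Detach it again
ip link set dev eth0 xdp off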

3. Security

You can build kernel-resident security controls:

  • DDoS mitigation: drop floods at line rate
  • Port knocking or protocol filtering: block unwanted ports before the TCP handshake even starts (see the sketch after this list)
  • Inline IDS signatures: detect or throttle known attack patterns
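
To give a flavour of the filtering case: once the headers are parsed (as in the full example later), the check itself is a couple of lines. BLOCKED_PORT is a hypothetical constant and tcp a bounds-checked TcpHdr pointer:

// Drop traffic aimed at a port we never serve, before any handshake begins.
const BLOCKED_PORT: u16 = 23; // telnet
if u16::from_be(unsafe { (*tcp).dest }) == BLOCKED_PORT {
    return Ok(xdp_action::XDP_DROP);
}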

4. Observability

Because XDP operates before skb allocation, it's ideal for high-fidelity telemetry.

You can capture packet metadata (MACs, IPs, ports, timestamps) and push structured events to user space with ring buffers — no packet copies, no pcap overhead.

XDP in the Wild

Meta (Facebook)

Meta was one of the earliest large-scale adopters of XDP.

They use it in Katran, their in-kernel load balancer, to handle tens of millions of connections per second while maintaining microsecond-level latency.

XDP replaced parts of their older DPDK-based stack, cutting CPU load and enabling dynamic policy updates through eBPF maps.

The same foundation powers Cilium’s kernel datapath and underpins parts of Meta’s edge networking infrastructure.

Cloudflare

Cloudflare also uses XDP to defend its global edge network against DDoS attacks.

By placing mitigation logic directly inside the kernel, they can absorb massive floods — up to hundreds of millions of packets per second — without userspace overhead.

Their engineers have written extensively about how XDP allows per-interface rate limiting, SYN flood filtering, and on-the-fly rules pushed from Go and Rust control planes.

It's effectively their last-line kernel shield before packets ever reach the proxy layer.

Together, Meta and Cloudflare have proven that XDP holds up in real-world, hyperscale production workloads, not just lab benchmarks.

Typical Use Cases

  • DDoS Protection: drop or rate-limit SYN floods directly in the driver (e.g. XDP_DROP TCP SYNs past a threshold)
  • Load Balancing: redirect packets to backend queues or CPUs (XDP_REDIRECT to AF_XDP sockets)
  • Firewalling: kernel-level ACLs filtering by IP, port, or protocol
  • Telemetry: stream header data to user space (XDP + perf ring buffer)
  • Inline Remediation: block C2 connections before userspace (combine XDP with LSM hooks)

Writing an XDP Program (Example)

I've been writing these in Rust recently with the aya crates, but you can do the same in C, Go, etc. Rust is the best, IMHO. (The kernel-side example below also pulls in the network-types crate for Ethernet/IP/TCP header definitions.)

// src/xdp.rs
use core::mem;

use aya_bpf::{
    bindings::xdp_action,
    macros::{map, xdp},
    maps::HashMap,
    programs::XdpContext,
};
use network_types::{
    eth::{EthHdr, EtherType},
    ip::{IpProto, Ipv4Hdr},
    tcp::TcpHdr,
};

#[map(name = "SYN_COUNTER")]
static mut SYN_COUNTER: HashMap<u32, u64> = HashMap::<u32, u64>::with_max_entries(1024, 0);

// Bounds-checked access to packet data; the verifier rejects any read
// that could run past the end of the packet.
#[inline(always)]
fn ptr_at<T>(ctx: &XdpContext, offset: usize) -> Result<*const T, ()> {
    let start = ctx.data();
    let end = ctx.data_end();
    if start + offset + mem::size_of::<T>() > end {
        return Err(());
    }
    Ok((start + offset) as *const T)
}

#[xdp(name = "count_syns")]
pub fn count_syns(ctx: XdpContext) -> u32 {
    match try_count_syns(ctx) {
        Ok(ret) => ret,
        Err(_) => xdp_action::XDP_ABORTED,
    }
}

fn try_count_syns(ctx: XdpContext) -> Result<u32, ()> {
    // Walk Ethernet -> IPv4 -> TCP; anything else passes straight through.
    let eth: *const EthHdr = ptr_at(&ctx, 0)?;
    if unsafe { (*eth).ether_type } != EtherType::Ipv4 {
        return Ok(xdp_action::XDP_PASS);
    }

    let ip: *const Ipv4Hdr = ptr_at(&ctx, EthHdr::LEN)?;
    if unsafe { (*ip).proto } != IpProto::Tcp {
        return Ok(xdp_action::XDP_PASS);
    }

    // Assumes no IPv4 options, for brevity.
    let tcp: *const TcpHdr = ptr_at(&ctx, EthHdr::LEN + Ipv4Hdr::LEN)?;

    // Count initial SYNs (not SYN-ACKs), keyed by source IP.
    if unsafe { (*tcp).syn() != 0 && (*tcp).ack() == 0 } {
        let key = u32::from_be(unsafe { (*ip).src_addr });
        unsafe {
            let counter = SYN_COUNTER.get(&key).copied().unwrap_or(0);
            let _ = SYN_COUNTER.insert(&key, &(counter + 1), 0);
        }
    }

    Ok(xdp_action::XDP_PASS)
}

And a userspace loader:

// src/main.rs
use aya::{
    programs::{Xdp, XdpFlags},
    Bpf,
};
use std::{env, process};

fn main() -> Result<(), anyhow::Error> {
    let iface = env::args().nth(1).unwrap_or_else(|| {
        eprintln!("Usage: cargo run -- <iface>");
        process::exit(1);
    });

    // Load the compiled eBPF object, then load and attach the XDP program.
    let mut bpf = Bpf::load_file("target/bpfel-unknown-none/release/xdp-example")?;
    let program: &mut Xdp = bpf.program_mut("count_syns").unwrap().try_into()?;
    program.load()?;
    program.attach(&iface, XdpFlags::default())?;

    println!("XDP program attached to {}", iface);

    // Keep the process alive; the program detaches when it exits.
    loop {
        std::thread::sleep(std::time::Duration::from_secs(60));
    }
}

As you can see, it doesn't take much to get the basics going.

Advanced Scenarios

1. Dynamic Remediation

Combine XDP with userspace controllers.

For example, an agent monitors traffic patterns and pushes new entries into eBPF maps to block malicious IPs dynamically.
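
A sketch of the userspace half with aya, assuming the XDP program consults a hypothetical map named BLOCKLIST and drops packets from any source IP it finds there:

use aya::{maps::HashMap, Bpf};

// Push a newly flagged IPv4 source address into the kernel-side map.
// The XDP program checks BLOCKLIST and returns XDP_DROP on a hit.
fn block_ip(bpf: &mut Bpf, addr: u32) -> Result<(), anyhow::Error> {
    let map = bpf
        .map_mut("BLOCKLIST")
        .ok_or_else(|| anyhow::anyhow!("BLOCKLIST map not found"))?;
    let mut blocklist: HashMap<_, u32, u32> = HashMap::try_from(map)?;
    blocklist.insert(addr, 1, 0)?;
    Ok(())
}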

2. Programmable Rate Limiting

Use per-source counters in eBPF maps:

  • Count packets per IP
  • Apply backoff or redirect decisions
  • Synchronize with userspace via shared maps
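
Building on the SYN counter from the example above, a crude limiter is a one-branch change. SYN_LIMIT is a hypothetical threshold; a production limiter would also reset counters on a time window:

// Inside try_count_syns, after reading the per-source counter:
const SYN_LIMIT: u64 = 1_000;
if counter + 1 > SYN_LIMIT {
    // Past the threshold: drop in the driver, before an skb even exists.
    return Ok(xdp_action::XDP_DROP);
}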

3. Hybrid Visibility

Send metadata to userspace without full payloads:

bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &pkt_meta, sizeof(pkt_meta));
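
That's the raw C helper; in aya the same idea is exposed as a PerfEventArray map. A sketch, with PktMeta as a hypothetical plain-old-data struct shared with the userspace reader:

use aya_bpf::{macros::map, maps::PerfEventArray, programs::XdpContext};

// Hypothetical fixed-size metadata record; no payload bytes are copied.
#[repr(C)]
pub struct PktMeta {
    pub src_ip: u32,
    pub dst_ip: u32,
    pub dst_port: u16,
}

#[map(name = "EVENTS")]
static mut EVENTS: PerfEventArray<PktMeta> = PerfEventArray::with_max_entries(1024, 0);

// Push one metadata record to the per-CPU perf buffer for userspace to read.
fn emit(ctx: &XdpContext, meta: &PktMeta) {
    unsafe { EVENTS.output(ctx, meta, 0) };
}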

Challenges and Limitations

  • Hardware support varies by NIC driver.
  • Verifier constraints: programs must be bounded and safe.
  • Debugging can be non-trivial: bpftool prog tracelog helps, but it's still kernel space.
  • Portability: kernel versions differ in helper function availability.
  • Knowledge: you still gotta know what you're doing fu*king around in there; this ain't no lovable prompt party.

Still, XDP is maturing fast, and the eBPF ecosystem around it (bpftool, libbpf, Cilium, Katran) makes development significantly easier.

Conclusion

XDP represents the most radical shift in Linux networking since Netfilter.

It lets you run programmable logic where it matters most — at the point of ingress — turning your kernel into a programmable network processor.

Whether you're building autonomous defenses, ultra-low-latency telemetry, or custom in-kernel routing, XDP gives you the foundation for it.

The kernel is no longer a bottleneck - it's a battlefield, and XDP is the armour keeping your loved ones (your Nan etc.) safe.

Why Now?

I'm working on XDP applications for a couple of projects that are going on right now. More news on this soon.
