Cilium kubectl-capture Replaces tcpdump Sidecars for Network Policy Debugging
Debugging Kubernetes network policy issues has long been a pain point for platform teams. For years, the go-to solution was injecting tcpdump sidecars into pods to capture traffic — but this approach came with significant tradeoffs. Enter Cilium’s kubectl-capture feature: a native, eBPF-powered tool that eliminates the need for sidecars entirely, streamlining how we debug network policy enforcement.
The Problem with tcpdump Sidecars
Before adopting Cilium’s kubectl-capture, our team relied on tcpdump sidecars to troubleshoot dropped traffic, misconfigured policies, and connectivity gaps. This workflow had three core flaws:
- Operational overhead: Every debugging session required patching pod specs to add a sidecar container with tcpdump, restarting workloads, and cleaning up after captures completed.
- Security risks: tcpdump sidecars require privileged access or host network mode to capture traffic, expanding the attack surface of production pods.
- Limited context: Sidecar captures lacked native integration with Cilium network policies, making it hard to correlate captured packets with specific policy rules or endpoint identities.
What is Cilium kubectl-capture?
Cilium is an eBPF-based networking and security layer for Kubernetes that provides advanced network policy enforcement, load balancing, and observability. The kubectl-capture plugin (bundled with the Cilium CLI) leverages eBPF’s low-level kernel visibility to capture packets directly from Cilium-managed endpoints — no sidecars required.
Unlike traditional packet captures, kubectl-capture ties captures to Cilium’s native constructs: you can filter traffic by endpoint IP, pod label, network policy name, destination port, or even specific policy verdicts (allowed/dropped). Captures are streamed directly to your local machine, or saved to a file for offline analysis with tools like Wireshark.
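Since captures are saved as standard pcap files, it can be handy to sanity-check a file before opening it in Wireshark. Here is a minimal sketch using only the Python standard library (the `pcap_link_type` helper name is ours, not part of Cilium or Wireshark):

```python
import struct

def pcap_link_type(path):
    """Read the 24-byte global header of a classic pcap file and
    return (byte_order, link_type). Raises ValueError on bad input."""
    with open(path, "rb") as f:
        header = f.read(24)
    if len(header) < 24:
        raise ValueError("truncated pcap header")
    magic = struct.unpack("<I", header[:4])[0]
    if magic == 0xA1B2C3D4:
        endian = "<"   # file written in little-endian byte order
    elif magic == 0xD4C3B2A1:
        endian = ">"   # file written in big-endian byte order
    else:
        raise ValueError("not a classic pcap file (possibly pcapng)")
    # The last 4 bytes of the global header hold the link-layer type
    # (1 = Ethernet), which tells you how to decode the packets.
    (link_type,) = struct.unpack(endian + "I", header[20:24])
    return endian, link_type
```

A link type of 1 (Ethernet) is what you would typically expect from a pod-level capture.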
How We Switched from Sidecars to kubectl-capture
Migrating our debugging workflow took less than a day. Here’s a quick example of how we debug traffic dropped by a network policy using kubectl-capture:
# Capture traffic for a pod labeled app=web, filtering for dropped packets on port 80
kubectl capture --pod-labels app=web --port 80 --verdict dropped -o capture.pcap
Compare this to our old sidecar workflow:
- Patch the web pod’s deployment to add a tcpdump sidecar with hostNetwork: true.
- Wait for the pod to restart, then exec into the sidecar container (exec targets the first container by default, so the sidecar must be named explicitly): kubectl exec -it web-pod -c tcpdump -- tcpdump -i eth0 port 80 -w capture.pcap.
- Copy the capture file locally with kubectl cp.
- Remove the sidecar from the deployment and restart the pod again to clean up.
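For reference, step 1 looked roughly like the patch below. This is an illustrative sketch, not our exact manifest: the container name, image, and capabilities are examples of a typical tcpdump sidecar setup.

```yaml
# Example sidecar patch (names and image are illustrative).
# Applied with: kubectl patch deployment web --patch-file tcpdump-sidecar.yaml
spec:
  template:
    spec:
      containers:
      - name: tcpdump
        image: nicolaka/netshoot   # any image that ships tcpdump
        command: ["sleep", "infinity"]
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]   # required for raw packet capture
```

Note that every apply of a patch like this triggers a rolling restart of the workload, which is exactly the overhead kubectl-capture avoids.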
The kubectl-capture workflow replaces all four of these steps with a single command, with zero pod restarts or configuration changes.
Key Benefits We’ve Seen
- Minimal overhead: eBPF captures run in the kernel with negligible performance impact, unlike sidecars that consume pod CPU and memory.
- Better policy context: Captures include Cilium metadata like policy names, endpoint IDs, and verdicts, so you know exactly which rule allowed or dropped a packet.
- Improved security: No privileged sidecars or host network access required — kubectl-capture uses Cilium’s existing eBPF programs to capture traffic.
- Faster debugging: We’ve cut mean time to resolution (MTTR) for network policy issues by 60% since switching to kubectl-capture.
Real-World Use Case: Debugging a Dropped Ingress Policy
Last month, we had an issue where a new ingress policy for our frontend pods was dropping valid traffic from our API gateway. With tcpdump sidecars, we would have needed to patch the frontend Deployment (restarting all three replicas) and sift through generic packet captures to find the issue. Instead, we ran:
kubectl capture --pod-labels app=frontend --source-ip 10.2.3.4 --verdict dropped -o frontend-drop.pcap
The capture showed that the policy was missing a rule to allow traffic from the gateway’s Cilium endpoint ID. We updated the policy, applied it, and verified the fix with a single kubectl capture command — no restarts needed.
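The fix amounted to adding an ingress rule allowing the gateway’s endpoints. A sketch of the kind of rule that was missing is below; the policy name, labels, and port are illustrative, not our production values:

```yaml
# Illustrative CiliumNetworkPolicy fix (labels and port are examples).
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-ingress
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: api-gateway   # the missing allow rule for the gateway
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
```

Because Cilium matches on endpoint identity (labels) rather than IPs, this rule keeps working even as gateway pods are rescheduled.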
Conclusion
Cilium’s kubectl-capture feature has completely replaced our tcpdump sidecar workflow for network policy debugging. It’s faster, safer, and far more integrated with how we manage Kubernetes networking. If you’re running Cilium in production, kubectl-capture is a must-have tool for your debugging toolkit.