Falco 0.43 Deep Dive — How Legacy eBPF, gVisor, and gRPC Output Deprecation, Cosign v3 Bundles, and Drop-Enter Are Redefining 2026 Kubernetes Runtime Security
On January 26, 2026, the CNCF Graduated project Falco shipped 0.43.0, followed by patch release 0.43.1 on April 9. The previous minor 0.42.0 had already landed two of the largest signature pipeline changes in eight years — the Drop-Enter initiative and Capture Recording, which automatically dumps a .scap whenever a rule triggers. While 0.43 is publicly framed as a "stabilization release," it actually rewires Falco's operational surface in three places at once: simultaneous deprecation of Legacy eBPF, gVisor, and gRPC outputs; mandatory Cosign v3 bundle verification; and a zero-allocation rewrite of the Container plugin 0.6.1. If you don't realign your environment before 0.44, today's warnings will become tomorrow's hard errors. This article is what ManoIT learned while rolling Falco 0.43.1 onto an EKS 1.32 cluster and bare-metal IDC nodes — a falco.yaml migration, falcoctl Cosign v3 verification, the move to Falcosidekick, rule impact after Drop-Enter, the kernel ≥ 3.10 floor in drivers 9.1.0, Falco Operator 0.2 alignment, and a 4-week operational checklist.
1. Why 0.43 Is the Tipping Point — The New Default Born From 0.42 Drop-Enter
Falco has historically generated two events per syscall — an enter event when kernel processing starts, and an exit event when it completes. The Drop-Enter initiative, shipped in 0.42, completely removed enter events from the pipeline and consolidated the metadata into exit. The total number of events drops by roughly half, and kernel instrumentation latency drops with it. 0.43 stabilizes this change while shipping a regression fix that re-introduces the filename argument of execve/execveat into exit events (libs 0.23.0). Rule authors get back evt.arg.filename — meaning the distinction between the symlink path the user passed and the resolved binary path the kernel executed is preserved again.
What 0.43 layers on top of 0.42 is three deprecation tracks, each on the same schedule: warning in 0.43, removable any time after 0.44. From an operations standpoint the implication is sharp. If you don't realign falco.yaml and your output pipeline this quarter, a single minor upgrade can break runtime alerting.
| Area | Before (≤ 0.41) | 0.42 | 0.43 (current) | After 0.44 |
|---|---|---|---|---|
| Event model | enter + exit | exit-only (Drop-Enter) | exit-only stabilized + filename restored | exit-only locked |
| Capture Recording | none | sandbox | sandbox stabilized | promotion candidate |
| Legacy eBPF (engine.kind=ebpf) | supported | supported | warns | removable |
| gVisor engine (engine.kind=gvisor) | supported | supported | warns | removable |
| gRPC output / server | supported | supported | warns | removable |
| Modern eBPF (engine.kind=modern_ebpf) | stable | recommended | only recommended eBPF path | maintained |
| kmod minimum kernel | 2.6 series compatible | 3.0 series | 3.10 (drivers 9.1.0 enforces) | maintained |
| Cosign bundle format | v2 .sig tag | v2 .sig tag | v3 bundle (v2 still works) | v3 default |
| falcoctl rule polling | 6h | 6h | 1 week | 1 week |
The most important row is event model. exit-only is no longer just a performance story — it is an operational signal that custom rule field dependencies must be re-checked. If you have in-house rules that relied on enter-time arguments, run regression tests before the upgrade.
2. Falco 0.43 Runtime Architecture — Four-Axis Alignment
┌──────────────────────── Linux Kernel (>= 3.10 for kmod) ────────────────────────┐
│ ┌──────────────────────────────┐ ┌──────────────────────────────┐ │
│ │ Modern eBPF probe (CO-RE) │ │ kmod driver (9.1.0+driver) │ │
│ │ engine.kind=modern_ebpf │ │ engine.kind=kmod │ │
│ │ drop-enter exit-only │ │ drop-enter exit-only │ │
│ │ bpf_loop, sendmmsg/recvmmsg │ │ legacy fallback only │ │
│ └──────────────┬───────────────┘ └──────────────┬───────────────┘ │
│ │ │ │
│ ┌──────────▼────────────────────────────────────▼────────────┐ │
│ │ libscap 0.23.1 / drivers 9.1.0+driver │ │
│ │ evt.arg.filename re-introduced • proc.aargs ancestor args │ │
│ └──────────────────────────┬─────────────────────────────────┘ │
└──────────────────────────────────┼───────────────────────────────────────────────┘
│
┌──────────────────────────────────▼───────────────────────────────────────────────┐
│ Falco userspace │
│ ┌────────────────────┐ ┌──────────────────┐ ┌────────────────────────────┐ │
│ │ Rule engine (yaml) │ │ container plugin │ │ k8smeta plugin │ │
│ │ (.yml/.yaml only) │ │ 0.6.1 zero-alloc │ │ 0.4.1 race-fix │ │
│ └─────────┬──────────┘ └──────┬───────────┘ └──────────┬─────────────────┘ │
│ │ │ │ │
│ ┌─────────▼────────────────────▼──────────────────────────▼─────────────────┐ │
│ │ Outputs: stdout / file / syslog / HTTP (gRPC output → DEPRECATED) │ │
│ │ Capture Recording sink → /var/lib/falco/captures/*.scap (sandbox) │ │
│ └─────────┬─────────────────────────────────────────────────────────────────┘ │
└─────────────┼────────────────────────────────────────────────────────────────────┘
│ HTTP POST
▼
┌────────────────────────────────────────────────────────────────────────┐
│ Falcosidekick (50+ destinations: Slack/Loki/SIEM/SOAR/Webhook) │
└────────────────────────────────────────────────────────────────────────┘
2.1 Axis ① Kernel Driver — Modern eBPF Is the Only Recommended eBPF Path
0.43 attaches an explicit deprecation warning to the legacy eBPF probe (engine.kind=ebpf). The legacy path required a kernel-version-specific module compiled at boot via falco-driver-loader. Modern eBPF leverages CO-RE (Compile Once, Run Everywhere) — a single BPF object runs on every compatible kernel. From the 0.42 cycle onward, the modern probe loads multiple BPF programs per event and uses the bpf_loop helper for batch syscalls like sendmmsg/recvmmsg, reducing processing cost. Security-sensitive settings moved out of the .bss mmapable segment into dedicated BPF maps, eliminating a tampering vector for privileged neighbor processes. The decision is simple: stay on Modern eBPF if you're already there, switch off Legacy before 0.44 if you aren't.
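Before flipping engine.kind, it is worth confirming that each node actually exposes the BTF type information CO-RE relocation depends on. A minimal sketch — the helper name and the fallback message are ours, not Falco's:

```shell
# core_ready: true when the kernel exposes BTF type info, which the single
# CO-RE-compiled Modern eBPF object relocates against at load time.
core_ready() {
  local btf="${1:-/sys/kernel/btf/vmlinux}"
  [ -r "$btf" ]
}

if core_ready; then
  echo "modern_ebpf: BTF present, CO-RE probe can load"
else
  echo "modern_ebpf: no BTF on this kernel — keep kmod until the node OS ships BTF"
fi
```

Run it as a DaemonSet one-shot or node-shell check during Week 1 inventory; any node that falls into the second branch stays on kmod until its OS is upgraded.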
2.2 Axis ② Libraries / Drivers — Kernel 3.10 Floor
Drivers 9.1.0+driver bumped the kmod minimum kernel to 3.10. Released in 2013 and EOL'd in 2017, kernel 3.10 is thirteen years old. Modern eBPF clusters are unaffected, but operations groups still running kmod on pre-3.10 kernels (CentOS 6 / RHEL 6-era fragments) must plan node OS upgrades alongside the Falco upgrade. The same cycle bumps libscap to 0.23.1 with the evt.arg.filename regression fix and a new proc.aargs indexed accessor.
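Auditing the fleet against the floor reduces to a version-sort comparison per node. The kubectl columns shown are standard nodeInfo fields; the helper itself is just a sketch:

```shell
# kernel_at_least VERSION FLOOR — true when VERSION >= FLOOR under GNU
# version ordering (handles strings like 3.10.0-1160.el7.x86_64).
kernel_at_least() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Fleet-wide inventory (run against the cluster):
#   kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion

kernel_at_least "$(uname -r)" "3.10" \
  && echo "kmod floor OK" \
  || echo "below 3.10 — plan a node OS upgrade before Falco 0.43"
```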
2.3 Axis ③ Rule Engine — Only .yml/.yaml Loaded
0.43 ignores files in rule directories without a .yml or .yaml extension. Accidental parsing errors caused by leftover backup files or READMEs are gone. From an operator perspective, mount your ConfigMaps with subPath so meta files don't end up next to rules, or split rule ConfigMaps into a dedicated directory.
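The cleanup is easy to make auditable before the upgrade. This helper (our naming) lists exactly the files 0.43 will start ignoring:

```shell
# list_ignored_rules DIR — regular files directly under DIR that Falco 0.43
# will skip because they lack a .yml/.yaml extension (READMEs, *.bak, etc.).
list_ignored_rules() {
  find "$1" -maxdepth 1 -type f ! -name '*.yml' ! -name '*.yaml'
}

# On a node, or in an initContainer that mounts the rules ConfigMap:
#   list_ignored_rules /etc/falco/rules.d
```

An empty result means no silent skips; anything printed should be deleted or renamed during Week 3.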
2.4 Axis ④ Outputs — Drop gRPC, Standardize on HTTP / Falcosidekick
0.43 emits warnings when grpc_output.enabled=true or grpc.enabled=true. The reasoning is twofold. First, the gRPC and protobuf dependencies inflated build-time cost in both core and libs. Second, real-world usage has converged on HTTP and Falcosidekick. Falcosidekick is a lightweight proxy that fans alerts out to 50+ destinations — Slack, PagerDuty, Loki, OpenSearch, Kafka, generic webhooks — and ships in the official Helm chart. If you have a gRPC consumer anywhere, move it to HTTP push or Falcosidekick routing before 0.44.
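For teams coming off gRPC, the switch can be as small as a values override on the official Helm chart. A hedged sketch — the key names follow the falco chart's falcosidekick subchart as we understand it, so verify them against your chart version, and the webhook URL is a placeholder:

```yaml
# helm upgrade falco falcosecurity/falco -n falco -f values-sidekick.yaml
falcosidekick:
  enabled: true      # deploys the Service that http_output.url points at
  config:
    slack:
      webhookurl: "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
      minimumpriority: warning
    loki:
      hostport: "http://loki.monitoring.svc.cluster.local:3100"
```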
3. falco.yaml Migration — Aligning On Modern eBPF + HTTP
Below is the falco.yaml diff ManoIT adopted moving from 0.41 to 0.43. The Legacy eBPF and gRPC lines disappear in one pass; Modern eBPF + HTTP becomes the standard.
```yaml
# falco.yaml — Falco 0.43 recommended baseline (Modern eBPF + HTTP)
engine:
  kind: modern_ebpf            # ❶ flip ebpf → modern_ebpf before 0.44
  modern_ebpf:
    cpus_for_each_buffer: 2
    buf_size_preset: 4         # 8MB per ring buffer
    drop_failed_exit: true     # exit-only model alignment

load_plugins: [container, k8smeta]   # ❷ container plugin 0.6.1

plugins:
  - name: container
    library_path: libcontainer.so
    init_config:
      label_max_len: 100
      hooks: [create, start, remove]
  - name: k8smeta
    library_path: libk8smeta.so
    init_config:
      collectorPort: 45000
      nodeName: ${FALCO_K8S_NODE_NAME}
      verbosity: warning

rules_files:
  - /etc/falco/falco_rules.yaml
  - /etc/falco/falco_rules.local.yaml
  - /etc/falco/rules.d                 # ❸ only .yml/.yaml loaded (0.43)

stdout_output:
  enabled: true
  keep_alive: false

http_output:
  enabled: true
  url: http://falcosidekick.falco.svc.cluster.local:2801
  user_agent: falco-0.43
  ca_bundle: /etc/falco/ca.crt
  insecure: false

# ❹ Deprecated in 0.43 (warns now, removable any time after 0.44)
# grpc:
#   enabled: false
# grpc_output:
#   enabled: false
```
A few practical notes. First, Helm chart v7.0.2+ passes these keys through unchanged. Second, with Falco Operator 0.2 you can declare the same configuration as a FalcoCluster CR — one kubectl apply -k aligns every cluster in your fleet. Third, point http_output.url at the in-cluster falcosidekick Service, and let Falcosidekick handle external SIEM fan-out from its destinations.
4. Capture Recording — Auto .scap Dumps On Alert
Capture Recording, introduced at sandbox maturity in 0.42, was polished further in 0.43. The capability is straightforward — automatically write a syscall trace around the moment a rule triggers. The output is a standard .scap file you can open in Stratoshark (or the Wireshark .scap dissector) for host-level forensics.
```yaml
# falco.yaml — Capture Recording example
captures:
  enabled: true
  output_dir: /var/lib/falco/captures
  duration_seconds: 30       # 30 seconds around the trigger
  max_size_mb: 64            # cap per file at 64MB
  triggers:
    - rule: "Terminal shell in container"
    - rule: "Write below etc"
    - rule_priority: ">=warning"   # priority-based matching also works
  retention:
    max_age_hours: 72
    max_total_mb: 4096
```
Three operational guidelines. ❶ Move output_dir onto a dedicated PVC (or use emptyDir + a sidecar uploader) so node disks don't take pressure. ❷ For S3 upload, attach a scap-uploader sidecar that watches the directory with inotify and pushes new files. ❸ If you keep the trigger scope wide, disks fill quickly — always pair it with a priority threshold and retention.
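The ❷ sidecar can stay tiny. A sketch of the uploader loop — the image contents (inotify-tools, aws CLI), the bucket name, and the helper naming are all assumptions, not part of Falco:

```shell
CAPTURE_DIR=/var/lib/falco/captures
BUCKET=s3://example-falco-captures     # placeholder bucket name

# is_scap NAME — only push finished capture files, never partials or metadata
is_scap() { case "$1" in *.scap) true ;; *) false ;; esac; }

# Main loop (requires inotify-tools + aws CLI in the sidecar image):
#   inotifywait -m -e close_write --format '%f' "$CAPTURE_DIR" |
#   while read -r f; do
#     is_scap "$f" && aws s3 cp "$CAPTURE_DIR/$f" "$BUCKET/$(hostname)/$f"
#   done
```

Watching close_write rather than create avoids racing Falco mid-dump; retention on the node side still applies, since the sidecar copies rather than moves.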
5. falcoctl + Cosign v3 — Verification Across the Whole Dependency Chain
0.43 adds first-class support in falcoctl for the Cosign v3 bundle format. Backwards compatibility with v2 .sig tags is preserved, but new rule and plugin artifacts ship as v3 bundles. Two changes matter even more.
- Full registry references (e.g. ghcr.io/falcosecurity/plugins/plugin/container:0.6.1) are now signature-verified — previously verification was silently skipped for full refs.
- Signature verification now applies across the entire dependency chain. When a ruleset references other plugins, those dependencies are verified too, after dedup logic and reference resolution were rewritten.
Authenticated private registries also work end to end. Basic Auth (Docker creds), OAuth2 client credentials, and GKE Workload Identity are all passed through to cosign. Common falcoctl flows we use:
```shell
# 1) Refresh rule index (1-week polling default since 0.43)
falcoctl artifact follow falco-rules --interval=168h

# 2) Install a specific plugin (full refs are now v3-verified)
falcoctl artifact install \
  ghcr.io/falcosecurity/plugins/plugin/container:0.6.1 \
  --plain-http=false

# 3) Dry-run dependency resolution
falcoctl artifact install \
  ghcr.io/falcosecurity/plugins/ruleset/falco:1.5.0 \
  --resolve-deps=true \
  --dry-run

# 4) Force signature verification (Cosign v3 bundle preferred)
falcoctl artifact install ruleset:1.5.0 \
  --verify=true \
  --bundle-format=v3
```
Three operational rules. First, make --verify=true the default for every falcoctl call. Second, in GitOps pipelines wire the falcoctl refresh step ahead of rule ConfigMap apply via Argo CD/Flux sync waves. Third, if you mirror artifacts internally, copy the Cosign v3 bundle alongside via oras copy or cosign copy so the referrer travels with the artifact.
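For the third rule, the mirror step reduces to rewriting the upstream ref onto your registry and copying with referrers included. The internal hostname and helper function here are illustrative only:

```shell
# mirror_ref REF — map a ghcr.io ref onto an internal registry (example host)
mirror_ref() {
  printf '%s\n' "registry.internal.example/${1#ghcr.io/}"
}

src=ghcr.io/falcosecurity/plugins/plugin/container:0.6.1
echo "would mirror: $src -> $(mirror_ref "$src")"

# oras copy -r "$src" "$(mirror_ref "$src")"   # -r copies referrers too, so
#                                              # the Cosign v3 bundle travels
#                                              # alongside the artifact
```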
6. Rule Impact After Drop-Enter — evt.arg.filename and proc.aargs
Drop-Enter cost some arguments that only existed on enter. The biggest regression was the filename argument of execve/execveat. Whether the user passed a symlink path, and what the kernel actually resolved (resolved_path), are different things from a security perspective. 0.43 (libs 0.23.0) restores filename on exit events so rules can use evt.arg.filename again.
```yaml
# local rules — pattern that survives Drop-Enter
- rule: Symlink Trick on Sensitive Binary
  desc: Detect symlink-based execution of sensitive binaries
  condition: >
    spawned_process and
    evt.arg.filename startswith "/tmp/" and
    proc.exe startswith "/usr/bin/" and
    proc.exe in (sensitive_binaries)
  output: >
    Suspicious symlink-based exec
    (sym=%evt.arg.filename resolved=%proc.exe parent=%proc.pname
    ancestors=%proc.aargs[1..3] container=%container.id)
  priority: WARNING
  tags: [process, mitre_execution]

- list: sensitive_binaries
  items: [/usr/bin/sudo, /usr/bin/su, /usr/bin/passwd, /usr/bin/chsh]
```
Two new fields stand out. proc.aargs indexes ancestor args, so you can dump "ancestors 1 through 3" inline. proc.args also gained indexed access, which lets you check a single argument concisely. The net effect: 0.43 rules are fewer events but richer context.
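As a sketch of what indexed access buys — assuming the bracket syntax described above — a condition can pin a single argument instead of substring-matching the whole command line. This rule is hypothetical, written by us to illustrate the field, not shipped by Falco:

```yaml
# Hypothetical local rule: flag `curl -k …` style TLS-verification bypass by
# checking the first argument directly instead of `proc.cmdline contains`.
- rule: Curl Insecure Flag In Container
  desc: curl invoked with -k/--insecure as its first argument
  condition: >
    spawned_process and container and
    proc.name = curl and
    (proc.args[1] = "-k" or proc.args[1] = "--insecure")
  output: curl TLS verification disabled (cmd=%proc.cmdline container=%container.id)
  priority: NOTICE
  tags: [network, process]
```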
7. Container Plugin 0.6.1 — Table API Expansion + Zero-Allocation
Container plugin 0.6.1 brings two changes. First, container.id, container.image, container.name, and container.type are now exposed via the table API, so other plugins can read container metadata directly. Alignment with k8smeta improves and any in-house plugin pulling container context no longer needs an extra RPC. Second, std::string_view and reflex matcher allocation avoidance push hot-path memory allocations near zero. On multi-tenant clusters with thousands of containers per node, Falco's P99 CPU and memory curves flatten together. The k8smeta plugin 0.4.1 ships a race condition fix in the same cycle.
8. Falco Operator 0.2 — Multi-Artifact Alignment
Falco Operator 0.2, released alongside 0.43, ties together four CRs — FalcoCluster, FalcoRuleSource, FalcoOutput, FalcoCapture — into a coherent declaration. For multi-cluster, multi-tenant operators the biggest shift is being able to declare rule sets, Falcosidekick routing, capture-recording policy, and plugin versions in a single CR tree. Below is the FalcoCluster ManoIT uses to enforce a shared policy across dev/staging/prod.
```yaml
apiVersion: falco.security/v1alpha1
kind: FalcoCluster
metadata:
  name: prod
  namespace: falco
spec:
  version: 0.43.1
  driver:
    kind: modern_ebpf
    bufSizePreset: 4
  rules:
    sources:
      - name: falco-rules
        artifact: ghcr.io/falcosecurity/rules/falco-rules:1.5.0
        verify: { enabled: true, bundleFormat: v3 }
      - name: manoit-local
        configMap: { name: falco-rules-local }
  plugins:
    - name: container
      version: 0.6.1
    - name: k8smeta
      version: 0.4.1
  outputs:
    http:
      url: http://falcosidekick.falco.svc.cluster.local:2801
  captures:
    enabled: true
    durationSeconds: 30
    maxSizeMB: 64
    retention: { maxAgeHours: 72, maxTotalMB: 4096 }
```
9. ManoIT 4-Week Migration Checklist
| Week | Task | Done When |
|---|---|---|
| 1 | Inventory — engine.kind, gRPC consumers, kernel versions, falcoctl creds, in-house rule field dependencies | Zero nodes on Legacy eBPF/gVisor/gRPC; remaining kmod nodes confirmed on kernel 3.10+ |
| 2 | Deploy Falco Operator 0.2 + 0.43.1 to dev; align falco.yaml on Modern eBPF + HTTP | Alerts received, Falcosidekick destination(s) live, capture .scap files writing |
| 3 | Rule regression — adopt evt.arg.filename + proc.aargs, clean non-.yml/.yaml files, audit ConfigMap subPaths | Zero event drops; trigger rate within ±5% of 0.41 baseline |
| 4 | Prod cutover — falcoctl polls weekly, Cosign v3 verify enforced, gRPC output retired, Operator rolls config across the fleet | SIEM ingestion healthy, every node on Modern eBPF, falcoctl --verify=true default |
9.1 Recommended Observability / SLOs
- Falco userspace CPU: P95 < 0.5 vCPU per core (multi-tenant baseline)
- Event-to-alert latency: P99 < 200ms after exit-event processing
- HTTP output 5xx rate: < 0.01% per minute (Falcosidekick backpressure alarm)
- Capture .scap disk usage: zero nodes exceeding the 4GB cap
- falcoctl signature failures: zero (Critical alarm)
9.2 Five Common Pitfalls
- Implicit engine.kind default — some in-house charts omit engine.kind, silently falling back to Legacy eBPF. Always set it explicitly.
- README/.bak files in rule dirs — 0.43 ignores them, but ConfigMap permissions and naming patterns should be cleaned up first.
- Missing gRPC consumer migration — even one stray consumer means missed alerts after 0.44. Route via Falcosidekick.
- Capture Recording disk runaway — turning it on without a priority threshold fills disks fast. Pair retention with max_size_mb.
- Pre-3.10 kernels still on kmod — the drivers 9.1.0 build will fail outright. Inventory any RHEL 6 or earlier holdouts.
10. Conclusion — The 2026 Default For Runtime Security Has Shifted
By May 2026, the picture Falco 0.43 leaves behind is unambiguous. First, Modern eBPF is the kernel-instrumentation default. The era when Legacy eBPF and kmod fragmented the operational surface is over — a single CO-RE BPF object runs on every compatible kernel. Second, HTTP + Falcosidekick is the alerting pipeline default. With gRPC retiring, Falco core and libs get lighter, and 50+ destinations consolidate behind a single proxy. Third, Cosign v3 bundles are the supply-chain default. With dependency-chain verification mandatory, the cost of trusting a rule or plugin's provenance has moved from the operator to falcoctl.
ManoIT's next-quarter roadmap is three threads. First, align EKS 1.32 and bare-metal IDC nodes on a single FalcoCluster CR running 0.43.1, and merge a sweep across our ~50 in-house rules to standardize on evt.arg.filename and proc.aargs. Second, wire Splunk/OpenSearch + Tines into Falcosidekick destinations and lock the P99 alert-delivery SLO at 200ms. Third, enforce --verify=true on every falcoctl call across the GitOps pipeline and mirror Cosign v3 bundles internally to cut the external dependency. When that's done, ManoIT's runtime security exits the "multi-path era" of ≤ 0.41 and enters the "single standard era" of 0.43+.
This article was written with assistance from AI (Claude) and reviewed for technical accuracy.
© 2026 ManoIT | www.manoit.co.kr
Originally published at ManoIT Tech Blog.