
ANKUSH CHOUDHARY JOHAL

Posted on • Originally published at johal.in

Step-by-Step: Set Up a SIEM Pipeline with Elasticsearch 8.0 and Filebeat 8.0 for K8s 1.36 Logs

Security Information and Event Management (SIEM) pipelines are critical for monitoring Kubernetes cluster activity, detecting anomalies, and meeting compliance requirements. This guide walks through deploying a production-ready SIEM pipeline using Elasticsearch 8.0 for log storage and search, Filebeat 8.0 for log collection, and Kubernetes 1.36 as the target cluster.

Prerequisites

Before starting, ensure you have:

  • A running Kubernetes 1.36 cluster with kubectl configured
  • Helm 3.10+ installed locally
  • At least 4 vCPUs and 8GB RAM available for Elasticsearch workloads
  • Basic familiarity with K8s manifests and Helm charts
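As a quick sanity check on the first prerequisite, the cluster version can be verified by parsing `kubectl version -o json`. The sketch below runs against a hardcoded, abbreviated sample of that output (illustrative only; in practice you would feed it the real command's stdout):

```python
import json

# Abbreviated sample of `kubectl version -o json` output (illustrative).
sample = '{"serverVersion": {"major": "1", "minor": "36"}}'

v = json.loads(sample)["serverVersion"]
# Some distributions report minor versions like "36+", so strip the suffix.
ok = (int(v["major"]), int(v["minor"].rstrip("+"))) >= (1, 36)
print(ok)
```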

Step 1: Deploy Elasticsearch 8.0

We will use the official Elastic Helm chart to deploy Elasticsearch 8.0. First, add the Elastic Helm repository:

helm repo add elastic https://helm.elastic.co
helm repo update

Create a custom values file for Elasticsearch to optimize for K8s 1.36 and SIEM workloads:

# es-values.yaml
clusterName: "siem-es-cluster"
imageTag: "8.0.0"
replicas: 2
resources:
  requests:
    cpu: "1"
    memory: "2Gi"
  limits:
    cpu: "2"
    memory: "4Gi"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
ingress:
  enabled: false

Deploy Elasticsearch using the custom values:

helm install elasticsearch elastic/elasticsearch -f es-values.yaml --namespace siem --create-namespace

Wait for the pods to start: kubectl get pods -n siem. You should see 2 Elasticsearch pods in Running state.

Step 2: Deploy Filebeat 8.0 as DaemonSet

Filebeat will run as a DaemonSet to collect logs from all K8s nodes. Create a custom values file for Filebeat:

# filebeat-values.yaml
imageTag: "8.0.0"
daemonset:
  extraEnvs:
    - name: ELASTICSEARCH_HOST
      value: "elasticsearch-master.siem.svc.cluster.local"
    - name: ELASTICSEARCH_PORT
      value: "9200"
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              in_cluster: true
    output.elasticsearch:
      hosts: ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
      protocol: "https"
      # Elasticsearch 8.0 enables TLS by default with self-signed certs;
      # mount and verify the CA certificate instead of this in production.
      ssl.verification_mode: "none"
      username: "elastic"
      password: "${ELASTICSEARCH_PASSWORD}"
      index: "k8s-logs-%{+yyyy.MM.dd}"
    # Writing to a custom index requires disabling Filebeat's default ILM setup.
    setup.ilm.enabled: false
    setup.template.name: "k8s-logs"
    setup.template.pattern: "k8s-logs-*"
    setup.dashboards.enabled: true
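The add_kubernetes_metadata processor enriches each event with pod, namespace, and container names. When an API lookup is unavailable, those fields can also be recovered from the kubelet's log-path convention, /var/log/containers/&lt;pod&gt;_&lt;namespace&gt;_&lt;container&gt;-&lt;container-id&gt;.log. A minimal Python sketch of that path-based extraction (the sample path is illustrative):

```python
import re

# Sample kubelet container-log symlink name (illustrative values).
path = ("/var/log/containers/nginx-7c5ddbdf54-abcde_default_nginx-"
        "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef.log")

# pod and namespace cannot contain underscores; the trailing 64-hex-char
# container ID disambiguates the greedy container-name match.
m = re.match(
    r".*/(?P<pod>[^_]+)_(?P<namespace>[^_]+)_(?P<container>.+)-[0-9a-f]{64}\.log$",
    path,
)
print(m.group("pod"), m.group("namespace"), m.group("container"))
```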

Retrieve the Elasticsearch elastic user password first:

kubectl get secret -n siem elasticsearch-master-credentials -o jsonpath="{.data.password}" | base64 --decode
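The kubectl command above prints the base64-encoded password field from the Secret and decodes it. The same decode step, sketched in Python with a placeholder value (not a real credential):

```python
import base64

# Kubernetes stores Secret values base64-encoded; simulate one here.
encoded = base64.b64encode(b"example-password").decode()

# This mirrors the `| base64 --decode` step of the kubectl pipeline.
password = base64.b64decode(encoded).decode()
print(password)
```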

Set the password as an environment variable, then deploy Filebeat:

export ES_PASSWORD=$(kubectl get secret -n siem elasticsearch-master-credentials -o jsonpath="{.data.password}" | base64 --decode)
helm install filebeat elastic/filebeat -f filebeat-values.yaml --namespace siem --set "daemonset.extraEnvs[2].name=ELASTICSEARCH_PASSWORD" --set "daemonset.extraEnvs[2].value=$ES_PASSWORD"

Step 3: Configure Log Parsing and Indexing

Filebeat 8.0 ships a Kubernetes module that parses logs from cluster components such as the kubelet and API server. Enable it on a running Filebeat pod (note that changes made via kubectl exec are lost when the pod restarts; for a durable setup, enable the module in filebeat-values.yaml instead):

kubectl exec -n siem $(kubectl get pods -n siem -l app=filebeat -o jsonpath="{.items[0].metadata.name}") -- filebeat modules enable kubernetes

Create an index lifecycle management (ILM) policy to enforce K8s log retention for SIEM compliance. Elasticsearch 8.0 enables TLS and authentication by default, so run the following from a pod inside the cluster (or through kubectl port-forward), supplying the elastic credentials:

curl -k -u "elastic:$ES_PASSWORD" -X PUT "https://elasticsearch-master.siem.svc.cluster.local:9200/_ilm/policy/k8s-logs-policy" -H 'Content-Type: application/json' -d '{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}'
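If you prefer to manage the policy from code, e.g. via the elasticsearch-py client (client usage assumed, not shown), the same policy can be built as a plain dict and serialized with json.dumps. A sketch:

```python
import json

# Same ILM policy as the curl request above: roll over hot indices at
# 50 GB or 7 days, delete indices 30 days after rollover.
policy = {
    "policy": {
        "phases": {
            "hot": {
                "min_age": "0ms",
                "actions": {"rollover": {"max_size": "50gb", "max_age": "7d"}},
            },
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}
body = json.dumps(policy)
print(sorted(policy["policy"]["phases"]))
```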

Step 4: Validate the Pipeline

Check if Filebeat is shipping logs to Elasticsearch:

curl -k -u "elastic:$ES_PASSWORD" -X GET "https://elasticsearch-master.siem.svc.cluster.local:9200/_cat/indices/k8s-logs-*?v"
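The _cat/indices output is whitespace-aligned text. A small Python sketch of how you might check it programmatically, run here against a hardcoded, abbreviated sample (index names and counts are illustrative):

```python
# Abbreviated sample of `_cat/indices?v` output (columns trimmed, illustrative).
sample = """health status index               docs.count
green  open   k8s-logs-2024.01.01 12345
green  open   k8s-logs-2024.01.02 6789"""

# Skip the header row, then map index name -> document count.
rows = [line.split() for line in sample.splitlines()[1:]]
indices = {r[2]: int(r[3]) for r in rows}

# The pipeline is healthy if k8s-logs-* indices exist and hold documents.
healthy = bool(indices) and all(
    name.startswith("k8s-logs-") and count > 0 for name, count in indices.items()
)
print(healthy)
```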

You should see indices created for K8s logs. Next, query for recent logs:

curl -k -u "elastic:$ES_PASSWORD" -X GET "https://elasticsearch-master.siem.svc.cluster.local:9200/k8s-logs-*/_search?pretty" -H 'Content-Type: application/json' -d '{
  "query": {
    "match_all": {}
  },
  "size": 5
}'
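The search response is JSON; the hit count and log messages are what matter for validation. A sketch of extracting them, using a hypothetical, heavily abbreviated response body in place of a live query:

```python
import json

# Hypothetical, abbreviated `_search` response (illustrative only).
sample = """{
  "hits": {
    "total": {"value": 2, "relation": "eq"},
    "hits": [
      {"_source": {"message": "pod started"}},
      {"_source": {"message": "liveness probe failed"}}
    ]
  }
}"""

resp = json.loads(sample)
# Total matching documents and the shipped log messages.
total = resp["hits"]["total"]["value"]
messages = [h["_source"]["message"] for h in resp["hits"]["hits"]]
print(total, messages)
```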

Step 5: Deploy Kibana and Configure SIEM

Deploy Kibana 8.0 to visualize logs and access SIEM features:

helm install kibana elastic/kibana --namespace siem --set imageTag="8.0.0" --set service.type=NodePort

Access Kibana via the NodePort, log in with the elastic user credentials, and open the Security app, Kibana 8.0's home for SIEM features. Enable the Kubernetes dashboards to monitor cluster logs and pod activity and to surface suspicious events.

Conclusion

This pipeline provides a scalable foundation for K8s 1.36 log monitoring with Elasticsearch 8.0 and Filebeat 8.0. You can extend it by adding detection rules, integrating with alerting tools like ElastAlert, or hardening transport security with properly verified TLS certificates.
