Joseph D. Marhee
Rancher Logging Operator with in-cluster Syslog

The Rancher Logging Operator supports a number of backends, but a very common one is logging to a syslog endpoint. If syslog is a new piece of your environment, rather than maintaining separate logging infrastructure, you can deploy it to your Kubernetes clusters directly, either through the Rancher UI or via the Helm chart.

If you plan to deploy charts via Fleet (which I've written about before) or some other GitOps flow, you would pull the following charts from the Rancher chart repo:

```shell
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm fetch rancher-charts/rancher-logging-crd
helm fetch rancher-charts/rancher-logging
```
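If you do go the Fleet route, a minimal `fleet.yaml` for the main chart might look something like this sketch (the chart version shown and the bundle layout are assumptions to adapt to your own Fleet setup):

```yaml
# fleet.yaml -- a sketch; version and namespace values here are
# assumptions, adjust for your environment
defaultNamespace: cattle-logging-system
helm:
  repo: https://charts.rancher.io
  chart: rancher-logging
  version: 100.1.2+up3.17.4
```

The `rancher-logging-crd` chart would get its own bundle in the same way, so the CRDs land before the operator itself.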

but before we apply them, we want to set up the syslog endpoint in the cluster for the logging operator to log to:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rsyslog-deployment
  labels:
    app: rsyslog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rsyslog
  template:
    metadata:
      labels:
        app: rsyslog
    spec:
      containers:
      - name: rsyslog
        image: sudheerc1190/rsyslog:latest
        ports:
        - containerPort: 514
        resources:
          requests:
            cpu: 250m
            memory: 524Mi
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
```

Here we're just creating a syslog Deployment. For production, I would recommend creating a PersistentVolume and mounting it at /var/log/remote, so that when this Pod is recreated, past logging data for this or other clusters is preserved.
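As a sketch, the persistence piece might look like the following claim (the PVC name, size, and access mode are assumptions; adjust for your storage classes):

```yaml
# Sketch: a PVC to back /var/log/remote -- the name, size, and access
# mode are assumptions, adjust for your environment
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rsyslog-logs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

Then, in the Deployment's Pod spec, you would add a `volumes` entry referencing `rsyslog-logs` via `persistentVolumeClaim.claimName`, and a matching `volumeMounts` entry on the rsyslog container with `mountPath: /var/log/remote`.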

Because we want this available only internally to the cluster in this example, we'll just create the following Service objects to create cluster DNS entries for this endpoint:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: "syslog-service-tcp"
spec:
  ports:
    - port: 514
      targetPort: 514
      protocol: TCP
  type: ClusterIP
  selector:
    app: "rsyslog"
---
apiVersion: v1
kind: Service
metadata:
  name: "syslog-service-udp"
spec:
  ports:
    - port: 514
      targetPort: 514
      protocol: UDP
  type: ClusterIP
  selector:
    app: "rsyslog"
```

Depending upon which protocol you'd like to use for logging, you can then use a hostname like syslog-service-udp.default.svc.cluster.local.
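To sanity-check the endpoint before wiring up the operator, you can hand-build a syslog line and send it over UDP from any pod in the cluster (a sketch; the availability of `nc` in your test pod's image is an assumption):

```shell
# Build an RFC 3164-style syslog line: priority = facility*8 + severity
# (facility 1 = user, severity 6 = info -> <14>)
PRI=$((1*8 + 6))
MSG="<${PRI}>$(date '+%b %d %H:%M:%S') test-host test-app: hello from the cluster"
echo "$MSG"

# From inside the cluster, you could then send it to the Service
# (assumes nc/netcat exists in the image you exec into):
#   echo "$MSG" | nc -u -w1 syslog-service-udp.default.svc.cluster.local 514
```

If the message lands, it will show up under the rsyslog pod's /var/log/remote hierarchy.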

Now, you can create the CRDs and Logging Operator:

```shell
helm install rancher-logging-crd ./rancher-logging-crd-100.1.2+up3.17.4.tgz -n cattle-logging-system --create-namespace

helm install rancher-logging ./rancher-logging-100.1.2+up3.17.4.tgz -n cattle-logging-system
```

Note that the versions of these two charts should match, and that both should be deployed to the cattle-logging-system namespace.

In the logging operator, there are two pieces: Flow (and its cluster-wide counterpart, ClusterFlow) resources, and Output (and ClusterOutput) resources. This example will just use cluster-scoped examples that span namespaces, but the matching rules and output formatting work similarly in both cases.
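For comparison, a namespace-scoped pair might look like this sketch (the resource names and the target namespace are assumptions):

```yaml
# Sketch of the namespace-scoped equivalents; names and the target
# namespace are assumptions
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: app-syslog
  namespace: applications
spec:
  syslog:
    host: syslog-service-udp.default.svc.cluster.local
    port: 514
    transport: udp
    insecure: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: app-flow
  namespace: applications
spec:
  localOutputRefs:
    - app-syslog
```

The key difference is that a Flow only matches logs from its own namespace and references Outputs in that namespace via `localOutputRefs`.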

Our ClusterOutput will be what connects to our Syslog service, using host: syslog-service-udp.default.svc.cluster.local on port 514, in this case using transport: udp:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: testing
  namespace: cattle-logging-system
spec:
  syslog:
    buffer:
      total_limit_size: 2GB
      flush_thread_count: 8
      timekey: 10m
      timekey_use_utc: true
      timekey_wait: 1m
    format:
      app_name_field: kubernetes.pod_name
      hostname_field: custom-cluster-name
      log_field: message
      rfc6587_message_size: false
    host: syslog-service-udp.default.svc.cluster.local
    insecure: true
    port: 514
    transport: udp
```

and you can verify the status of the output (deployment status, problems, etc.) with kubectl get clusteroutput/testing -n cattle-logging-system. The ClusterOutput must be created in the same namespace as the operator deployment. In the above spec, under format, you can refer to the syslog RFC (RFC 5424) for other logging field options.

In our ClusterFlow, we're going to use just a basic rule: exclude logs from the kube-system namespace, and then select everything from the applications namespace:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: global-ns-clusterflow
spec:
  match:
    - exclude:
        namespaces:
        - kube-system
    - select:
        namespaces:
        - applications
  globalOutputRefs:
    - testing
```

and you'll see we use the testing ClusterOutput as the global endpoint for sending logs. You can check the status of the flow using kubectl get clusterflow/global-ns-clusterflow -n cattle-logging-system.

I usually test after this point using a deployment like this:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```

Deploy it once to an excluded namespace and once to an included one, then make a few requests (curl -s http://{LB_IP}). If the output and flow report no issues, you'll see the logs start to populate after a few minutes. You can then check the logs manually (if nothing is scraping syslog externally) with something like kubectl exec -it {rsyslog_pod} -- tail -f /var/log/remote/{date hierarchy}/{hour}.log.

More about the Banzai Cloud Logging operator can be found in its documentation, and more on the logging features this operator provides in Rancher can be found in the Rancher docs.
