<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Amit Saha</title>
    <description>The latest articles on DEV Community by Amit Saha (@amitsaha).</description>
    <link>https://dev.to/amitsaha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F238972%2Fa93d539d-ba2a-4d6a-a851-bb204487bf13.png</url>
      <title>DEV Community: Amit Saha</title>
      <link>https://dev.to/amitsaha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/amitsaha"/>
    <language>en</language>
    <item>
      <title>How to Set up Log Forwarding in a Kubernetes Cluster Using Fluent Bit</title>
      <dc:creator>Amit Saha</dc:creator>
      <pubDate>Thu, 07 May 2020 01:06:17 +0000</pubDate>
      <link>https://dev.to/amitsaha/how-to-set-up-log-forwarding-in-a-kubernetes-cluster-using-fluent-bit-3bgk</link>
      <guid>https://dev.to/amitsaha/how-to-set-up-log-forwarding-in-a-kubernetes-cluster-using-fluent-bit-3bgk</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is a repost from my &lt;a href="https://echorand.me/posts/fluentbit-kubernetes/" rel="noopener noreferrer"&gt;blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;Introduction&lt;/h1&gt;

&lt;p&gt;Log forwarding is an essential ingredient of a production logging pipeline in any organization. As an application author, you don't want to be burdened with ensuring that your application's logs are processed a certain way and then stored in central log storage. As an operations person, you don't want to hack your way around different applications to process and ship logs. Essentially, log forwarding decouples the application emitting logs from whatever needs to be done with those logs. This decoupling only works, of course, if the logs emitted by the application are in a format (JSON, for example) understood by the log forwarder. The job of the log forwarder is thus to read logs from one or multiple sources, perform any processing on them and then forward them to a log storage system or another log forwarder.&lt;/p&gt;

&lt;p&gt;Setting up log forwarding in a Kubernetes cluster allows all applications and system services that are deployed in the cluster to automatically get their logs processed and stored in a preconfigured central log storage. The application authors only need to ensure that their logs are being emitted to the standard output and error streams.&lt;/p&gt;

&lt;p&gt;There are various options when it comes to selecting log forwarding software. Two of the most popular are &lt;a href="https://www.fluentd.org" rel="noopener noreferrer"&gt;fluentd&lt;/a&gt; and &lt;a href="https://www.elastic.co/products/logstash" rel="noopener noreferrer"&gt;logstash&lt;/a&gt;. A relatively new contender is &lt;a href="https://docs.fluentbit.io/manual/" rel="noopener noreferrer"&gt;fluent bit&lt;/a&gt;. It is written in C, which makes it very lightweight in terms of resource consumption compared to both &lt;code&gt;fluentd&lt;/code&gt; and &lt;code&gt;logstash&lt;/code&gt;, and therefore an excellent alternative. Fluent bit has a pluggable architecture and supports a large collection of input sources, multiple ways to process logs and a wide variety of output targets.&lt;/p&gt;

&lt;p&gt;The following figure depicts the logging architecture we will set up and the role of fluent bit in it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FMWs2Ggr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FMWs2Ggr.png" alt="Fluent bit in a logging pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial, we will set up fluent bit (release 1.3.8) as a Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noopener noreferrer"&gt;daemonset&lt;/a&gt;, which ensures that a fluent bit instance runs on every node of the cluster. Each instance will be configured to automatically read the logs of all the pods running on its node as well as the system logs from the systemd journal. Fluent bit will read these logs one line at a time, process them as per the configuration we specify and then forward them to the configured output, Elasticsearch. After setting up fluent bit, we will deploy a Python web application and demonstrate how its logs are automatically parsed, filtered and forwarded to be searched and analyzed.&lt;/p&gt;

&lt;h1&gt;Prerequisites&lt;/h1&gt;

&lt;p&gt;The article assumes that you have the following setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed locally and a free Docker hub account to push docker image&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster with RBAC enabled

&lt;ul&gt;
&lt;li&gt;One node with 4vCPUs, 8 GB RAM and 160 GB disk should be sufficient to work through this tutorial&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; installed locally and configured to connect to the cluster&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;If you do not have an existing Elasticsearch cluster reachable from the Kubernetes cluster, you can follow steps 1-3 of &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; to run your own Elasticsearch cluster in Kubernetes.&lt;/p&gt;

&lt;h1&gt;Step 0 — Checking Your Kibana and Elasticsearch Setup&lt;/h1&gt;

&lt;p&gt;If you are using an already available Elasticsearch cluster, you can skip this step.&lt;/p&gt;

&lt;p&gt;If you followed the above guide to set up Kibana and Elasticsearch, let's check that all the pods related to Elasticsearch and Kibana are running in the &lt;code&gt;kube-logging&lt;/code&gt; namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get pods -n kube-logging


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You'll see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NAME                      READY   STATUS    RESTARTS   AGE
es-cluster-0              1/1     Running   0          2d23h
es-cluster-1              1/1     Running   0          2d23h
es-cluster-2              1/1     Running   0          2d23h
kibana-7946bc7b94-9gq47   1/1     Running   0          2d22h


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before we move on, let's set up access to Kibana from our local workstation using port forwarding:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl -n kube-logging port-forward pod/kibana-7946bc7b94-9gq47 5601:5601


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Forwarding from 127.0.0.1:5601 -&amp;gt; 5601
Forwarding from [::1]:5601 -&amp;gt; 5601


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Kibana pod name will be different in your case, so make sure to use the correct pod name. Once the port forwarding is set up, go to &lt;code&gt;http://127.0.0.1:5601/&lt;/code&gt; to access Kibana.&lt;/p&gt;

&lt;p&gt;We have successfully set up Elasticsearch and Kibana in the cluster. At this stage, there is no data in Elasticsearch as nothing is sending logs to it. Let's fix that and set up fluent bit.&lt;/p&gt;
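
&lt;p&gt;As a quick sanity check, you can also query Kibana's status API through the port forward from a new terminal. This is an illustrative sketch; it assumes the port forward from the previous step is still running and that your Kibana version exposes the standard &lt;code&gt;/api/status&lt;/code&gt; endpoint:&lt;/p&gt;

```shell
# Query Kibana's status API through the running port forward;
# a healthy instance returns HTTP status code 200.
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:5601/api/status
```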

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Keep the above port forward running in a terminal session and use a new terminal session for running the commands in the rest of the tutorial.&lt;/p&gt;

&lt;h1&gt;Step 1 — Setting Up Fluent Bit Service Account and Permissions&lt;/h1&gt;

&lt;p&gt;In Kubernetes, it is considered a best practice to use a dedicated &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noopener noreferrer"&gt;service account&lt;/a&gt; to run pods. Hence, we will set up a new service account for the fluent bit daemonset.&lt;/p&gt;

&lt;p&gt;First create a new logging directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mkdir logging


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now inside that directory make a &lt;code&gt;fluent-bit&lt;/code&gt; directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mkdir logging/fluent-bit


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Within the &lt;code&gt;fluent-bit&lt;/code&gt; directory create and open a &lt;code&gt;service-account.yaml&lt;/code&gt; file to create a dedicated service account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano logging/fluent-bit/service-account.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following content to the file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-logging&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;The two key bits of information here are under the &lt;code&gt;metadata&lt;/code&gt; field:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt;: The service account will be called &lt;code&gt;fluent-bit&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;namespace&lt;/code&gt;: The service account will be created in the &lt;code&gt;kube-logging&lt;/code&gt; namespace created as part of the last prerequisite&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create the service account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f logging/fluent-bit/service-account.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

serviceaccount/fluent-bit created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
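
&lt;p&gt;If you want to double-check the result, you can list the service account you just created (a quick verification step, not part of the original setup):&lt;/p&gt;

```shell
# Confirm the fluent-bit service account exists in kube-logging
kubectl get serviceaccount fluent-bit -n kube-logging
```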

&lt;p&gt;One of the useful features of fluent bit is that it automatically associates various Kubernetes metadata with the logs before sending them to the configured destination. To allow the fluent bit service account to read this metadata by making calls to the Kubernetes API server, we will associate the service account with a set of permissions. This will be implemented by creating a cluster role and a cluster role binding.&lt;/p&gt;

&lt;p&gt;Within the &lt;code&gt;logging/fluent-bit&lt;/code&gt; directory create and open a &lt;code&gt;role.yaml&lt;/code&gt; file to create a cluster role:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano logging/fluent-bit/role.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following content to the file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-read&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;namespaces&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pods&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;ClusterRole&lt;/code&gt; is a specification of the permissions for the API operations that we want to grant to the &lt;code&gt;fluent-bit&lt;/code&gt; service account. The role is called &lt;code&gt;fluent-bit-read&lt;/code&gt;, specified by the &lt;code&gt;name&lt;/code&gt; field inside &lt;code&gt;metadata&lt;/code&gt;. Inside &lt;code&gt;rules&lt;/code&gt;, we allow the &lt;code&gt;get&lt;/code&gt;, &lt;code&gt;list&lt;/code&gt; and &lt;code&gt;watch&lt;/code&gt; verbs on &lt;code&gt;pods&lt;/code&gt; and &lt;code&gt;namespaces&lt;/code&gt; across the core API group.&lt;/p&gt;

&lt;p&gt;To create the &lt;code&gt;ClusterRole&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f logging/fluent-bit/role.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

clusterrole.rbac.authorization.k8s.io/fluent-bit-read created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The second and final step in granting the &lt;code&gt;fluent-bit&lt;/code&gt; service account the necessary permissions is to create a cluster role binding to associate the &lt;code&gt;fluent-bit-read&lt;/code&gt; cluster role we created above with the &lt;code&gt;fluent-bit&lt;/code&gt; service account in the &lt;code&gt;kube-logging&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;Within the &lt;code&gt;logging/fluent-bit&lt;/code&gt; directory create and open a &lt;code&gt;role-binding.yaml&lt;/code&gt; file to create a cluster role binding:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano logging/fluent-bit/role-binding.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following content to the file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-read&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-read&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-logging&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;We are creating a &lt;code&gt;ClusterRoleBinding&lt;/code&gt; named &lt;code&gt;fluent-bit-read&lt;/code&gt;, specified via the &lt;code&gt;name&lt;/code&gt; field inside &lt;code&gt;metadata&lt;/code&gt;. We specify the cluster role we are binding to via the &lt;code&gt;roleRef&lt;/code&gt; field: the &lt;code&gt;apiGroup&lt;/code&gt; refers to the API group for Kubernetes RBAC resources, &lt;code&gt;rbac.authorization.k8s.io&lt;/code&gt;; the &lt;code&gt;kind&lt;/code&gt; of role we are binding to is a &lt;code&gt;ClusterRole&lt;/code&gt;; and the &lt;code&gt;name&lt;/code&gt; of the role we are binding to is &lt;code&gt;fluent-bit-read&lt;/code&gt;. The service account we are creating the binding for is specified in &lt;code&gt;subjects&lt;/code&gt;: the &lt;code&gt;fluent-bit&lt;/code&gt; service account in the &lt;code&gt;kube-logging&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;Run the following command to create the role binding:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f logging/fluent-bit/role-binding.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

clusterrolebinding.rbac.authorization.k8s.io/fluent-bit-read created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
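
&lt;p&gt;You can verify that the binding grants the intended permissions by impersonating the service account with &lt;code&gt;kubectl auth can-i&lt;/code&gt; (an optional check, not part of the original setup); each command should print &lt;code&gt;yes&lt;/code&gt;:&lt;/p&gt;

```shell
# Check the granted verbs while impersonating the fluent-bit service account
kubectl auth can-i list pods \
  --as=system:serviceaccount:kube-logging:fluent-bit
kubectl auth can-i get namespaces \
  --as=system:serviceaccount:kube-logging:fluent-bit
```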

&lt;p&gt;In this step, we have created a &lt;code&gt;fluent-bit&lt;/code&gt; service account in the &lt;code&gt;kube-logging&lt;/code&gt; namespace and given it permissions to read various metadata about the pods and namespaces in the cluster. Fluent bit needs these permissions to associate metadata, such as pod labels and the originating namespace, with the logs.&lt;/p&gt;

&lt;p&gt;Next, we will create a &lt;code&gt;ConfigMap&lt;/code&gt; resource to specify configuration for fluent bit.&lt;/p&gt;

&lt;h1&gt;Step 2 — Creating a ConfigMap for Fluent Bit&lt;/h1&gt;

&lt;p&gt;To configure fluent bit we will create a configmap specifying various configuration sections and attributes. When we create the daemonset, Kubernetes will make this config map available as files to fluent bit at startup. We will create three versions of this &lt;code&gt;ConfigMap&lt;/code&gt; as we progress through this tutorial. We will create the first version now.&lt;/p&gt;

&lt;p&gt;Within the &lt;code&gt;logging/fluent-bit&lt;/code&gt; directory create and open a &lt;code&gt;configmap-1.yaml&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano logging/fluent-bit/configmap-1.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following content to the file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf

    @INCLUDE input-kubernetes.conf
    @INCLUDE output-elasticsearch.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The manifest specifies that we are creating a &lt;code&gt;ConfigMap&lt;/code&gt; named &lt;code&gt;fluent-bit-config&lt;/code&gt; in the &lt;code&gt;kube-logging&lt;/code&gt; namespace. &lt;code&gt;data&lt;/code&gt; specifies the actual contents of the &lt;code&gt;ConfigMap&lt;/code&gt;, which will eventually be composed of four files - &lt;code&gt;fluent-bit.conf&lt;/code&gt;, &lt;code&gt;input-kubernetes.conf&lt;/code&gt;, &lt;code&gt;output-elasticsearch.conf&lt;/code&gt; and &lt;code&gt;parsers.conf&lt;/code&gt;. The &lt;code&gt;fluent-bit.conf&lt;/code&gt; file is the primary configuration file read by fluent bit at startup. It uses the &lt;code&gt;@INCLUDE&lt;/code&gt; specifier to include other configuration files - &lt;code&gt;input-kubernetes.conf&lt;/code&gt; and &lt;code&gt;output-elasticsearch.conf&lt;/code&gt; in this case. The &lt;code&gt;parsers.conf&lt;/code&gt; file is referred to in the &lt;code&gt;fluent-bit.conf&lt;/code&gt; file and is expected to be in the same directory as &lt;code&gt;fluent-bit.conf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's look at the &lt;code&gt;fluent-bit.conf&lt;/code&gt; file contents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

[SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;[SERVICE]&lt;/code&gt; section of a fluent bit configuration &lt;a href="https://docs.fluentbit.io/manual/service" rel="noopener noreferrer"&gt;specifies configuration&lt;/a&gt; for the fluent bit engine itself. Here we specify the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Flush&lt;/code&gt;: This specifies how often (in seconds) fluent bit will flush the output&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Log_Level&lt;/code&gt;: This specifies the verbosity of the logs fluent bit itself emits. Other possible values are &lt;code&gt;error&lt;/code&gt;, &lt;code&gt;warning&lt;/code&gt;, &lt;code&gt;debug&lt;/code&gt; and &lt;code&gt;trace&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Daemon&lt;/code&gt;: If this is set to &lt;code&gt;true&lt;/code&gt;, &lt;code&gt;fluent-bit&lt;/code&gt; will go to the background on start.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Parsers_File&lt;/code&gt;: This specifies the file in which fluent bit will look up any specified parsers (discussed later)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, add the following contents to the file at the same nesting level as &lt;code&gt;fluent-bit.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

 &lt;span class="s"&gt;...&lt;/span&gt;
 &lt;span class="s"&gt;input-kubernetes.conf&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[INPUT]&lt;/span&gt;
        &lt;span class="s"&gt;Name              tail&lt;/span&gt;
        &lt;span class="s"&gt;Tag               kube.*&lt;/span&gt;
        &lt;span class="s"&gt;Path              /var/log/containers/*.log&lt;/span&gt;
        &lt;span class="s"&gt;Parser            docker&lt;/span&gt;
        &lt;span class="s"&gt;DB                /var/log/flb_kube.db&lt;/span&gt;
        &lt;span class="s"&gt;Mem_Buf_Limit     5MB&lt;/span&gt;
        &lt;span class="s"&gt;Skip_Long_Lines   On&lt;/span&gt;
        &lt;span class="s"&gt;Refresh_Interval  10&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;input-kubernetes.conf&lt;/code&gt; file uses the &lt;code&gt;tail&lt;/code&gt; input plugin (specified via &lt;code&gt;Name&lt;/code&gt;) to read all files matching the pattern &lt;code&gt;/var/log/containers/*.log&lt;/code&gt; (specified via &lt;code&gt;Path&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Let's look at the other fields in the configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Tag&lt;/code&gt;: All logs read via this input configuration will be tagged with &lt;code&gt;kube.*&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Parser&lt;/code&gt;: We specify that each line that fluent bit reads from the files should be parsed via a parser named &lt;code&gt;docker&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DB&lt;/code&gt;: This is the path to a local SQLite database that fluent bit will use to keep records related to the files it's reading&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Mem_Buf_Limit&lt;/code&gt;: Set a maximum memory limit that fluent bit will allow the buffer to grow to before it flushes the output&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Skip_Long_Lines&lt;/code&gt;: Setting this to &lt;code&gt;On&lt;/code&gt; ensures that if a line in a monitored file exceeds the configured maximum buffer size, fluent bit will skip that line and continue reading the file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Refresh_Interval&lt;/code&gt;: The interval, in seconds, at which fluent bit refreshes the list of files matching the specified pattern.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can learn more about the tail input plugin &lt;a href="https://docs.fluentbit.io/manual/input/tail" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, add the following contents to the file at the same nesting level as &lt;code&gt;input-kubernetes.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="s"&gt;output-elasticsearch.conf&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[OUTPUT]&lt;/span&gt;
        &lt;span class="s"&gt;Name            es&lt;/span&gt;
        &lt;span class="s"&gt;Match           *&lt;/span&gt;
        &lt;span class="s"&gt;Host            ${FLUENT_ELASTICSEARCH_HOST}&lt;/span&gt;
        &lt;span class="s"&gt;Port            ${FLUENT_ELASTICSEARCH_PORT}&lt;/span&gt;
        &lt;span class="s"&gt;Logstash_Format On&lt;/span&gt;
        &lt;span class="s"&gt;Logstash_Prefix fluent-bit&lt;/span&gt;
        &lt;span class="s"&gt;Retry_Limit     False&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above configuration will create the output configuration in the file &lt;code&gt;output-elasticsearch.conf&lt;/code&gt;. We specify that we want to use the &lt;a href="https://docs.fluentbit.io/manual/output/elasticsearch" rel="noopener noreferrer"&gt;es&lt;/a&gt; output plugin in the &lt;code&gt;Name&lt;/code&gt; field. The &lt;code&gt;Match&lt;/code&gt; field specifies the tag pattern of log messages that will be sent to the output being configured - the &lt;code&gt;*&lt;/code&gt; pattern matches all logs. Next, we specify the hostname and port of the Elasticsearch cluster via the &lt;code&gt;Host&lt;/code&gt; and &lt;code&gt;Port&lt;/code&gt; fields respectively. Note how we can use the &lt;code&gt;FLUENT_ELASTICSEARCH_HOST&lt;/code&gt; and &lt;code&gt;FLUENT_ELASTICSEARCH_PORT&lt;/code&gt; environment variables, which we will specify in the &lt;code&gt;DaemonSet&lt;/code&gt;, in the fluent bit configuration. Being able to use environment variables as values in configuration files is a feature of fluent bit's configuration system. We then turn on &lt;code&gt;Logstash_Format&lt;/code&gt; for the Elasticsearch indexes that fluent bit will create. This creates indexes named &lt;code&gt;logstash-YYYY.MM.DD&lt;/code&gt;, where &lt;code&gt;YYYY.MM.DD&lt;/code&gt; is the date when the index is created. The &lt;code&gt;Logstash_Prefix&lt;/code&gt; field changes the default &lt;code&gt;logstash&lt;/code&gt; prefix to something else - here, &lt;code&gt;fluent-bit&lt;/code&gt;. The logstash format is useful when you are using a tool like Elasticsearch &lt;a href="https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html" rel="noopener noreferrer"&gt;curator&lt;/a&gt; to manage cleanup of your Elasticsearch indices.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Retry_Limit&lt;/code&gt; is a generic output configuration option that specifies &lt;a href="https://docs.fluentbit.io/manual/configuration/scheduler#configuring-retries" rel="noopener noreferrer"&gt;fluent bit&lt;/a&gt;'s retry behavior if there is a failure in sending logs to the output destination.&lt;/p&gt;

&lt;p&gt;Finally, add the following contents to the file at the same nesting level as &lt;code&gt;output-elasticsearch.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="s"&gt;parsers.conf&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[PARSER]&lt;/span&gt;
        &lt;span class="s"&gt;Name        docker&lt;/span&gt;
        &lt;span class="s"&gt;Format      json&lt;/span&gt;
        &lt;span class="s"&gt;Time_Key    time&lt;/span&gt;
        &lt;span class="s"&gt;Time_Format %Y-%m-%dT%H:%M:%S.%L&lt;/span&gt;
        &lt;span class="s"&gt;Time_Keep   On&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When a parser name is specified in the input section, fluent bit will look up the parser in the specified &lt;code&gt;parsers.conf&lt;/code&gt; file. Above, we define a parser named &lt;code&gt;docker&lt;/code&gt; (via the &lt;code&gt;Name&lt;/code&gt; field) which we want to use to parse a docker container's logs, which are JSON formatted (specified via the &lt;code&gt;Format&lt;/code&gt; field). &lt;code&gt;Time_Key&lt;/code&gt; specifies the field in the JSON log that holds the timestamp, &lt;code&gt;Time_Format&lt;/code&gt; specifies the format that field's value should be parsed as, and &lt;code&gt;Time_Keep&lt;/code&gt; specifies whether the original field should be preserved in the log. The fluent bit documentation has &lt;a href="https://docs.fluentbit.io/manual/parser" rel="noopener noreferrer"&gt;more information&lt;/a&gt; on these fields.&lt;/p&gt;
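
&lt;p&gt;To see what the &lt;code&gt;docker&lt;/code&gt; parser is doing conceptually, here is an illustrative Python sketch (not fluent bit's actual implementation) that decodes one docker JSON log line and parses its &lt;code&gt;time&lt;/code&gt; field. The sample line is hypothetical; note that fluent bit's &lt;code&gt;%L&lt;/code&gt; placeholder for fractional seconds roughly corresponds to Python's &lt;code&gt;%f&lt;/code&gt; here:&lt;/p&gt;

```python
import json
from datetime import datetime

# A hypothetical docker JSON log line, as found in /var/log/containers/*.log
line = '{"log":"GET / 200\\n","stream":"stdout","time":"2020-05-07T01:06:17.123456789Z"}'

# Format: json -- decode the whole line as a JSON object
record = json.loads(line)

# Time_Key is "time"; trim the nanoseconds to microseconds so %f can
# parse it, mirroring Time_Format %Y-%m-%dT%H:%M:%S.%L
ts = record["time"].rstrip("Z")[:26]
parsed = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f")

print(record["log"].rstrip())  # GET / 200
print(parsed.year, parsed.month, parsed.day)  # 2020 5 7
```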

&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;The final content of the file should look as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-logging&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;fluent-bit.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[SERVICE]&lt;/span&gt;
        &lt;span class="s"&gt;Flush         1&lt;/span&gt;
        &lt;span class="s"&gt;Log_Level     info&lt;/span&gt;
        &lt;span class="s"&gt;Daemon        off&lt;/span&gt;
        &lt;span class="s"&gt;Parsers_File  parsers.conf&lt;/span&gt;

    &lt;span class="s"&gt;@INCLUDE input-kubernetes.conf&lt;/span&gt;
    &lt;span class="s"&gt;@INCLUDE output-elasticsearch.conf&lt;/span&gt;
  &lt;span class="na"&gt;input-kubernetes.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[INPUT]&lt;/span&gt;
        &lt;span class="s"&gt;Name              tail&lt;/span&gt;
        &lt;span class="s"&gt;Tag               kube.*&lt;/span&gt;
        &lt;span class="s"&gt;Path              /var/log/containers/*.log&lt;/span&gt;
        &lt;span class="s"&gt;Parser            docker&lt;/span&gt;
        &lt;span class="s"&gt;DB                /var/log/flb_kube.db&lt;/span&gt;
        &lt;span class="s"&gt;Mem_Buf_Limit     5MB&lt;/span&gt;
        &lt;span class="s"&gt;Skip_Long_Lines   On&lt;/span&gt;
        &lt;span class="s"&gt;Refresh_Interval  10&lt;/span&gt;

  &lt;span class="na"&gt;output-elasticsearch.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[OUTPUT]&lt;/span&gt;
        &lt;span class="s"&gt;Name            es&lt;/span&gt;
        &lt;span class="s"&gt;Match           *&lt;/span&gt;
        &lt;span class="s"&gt;Host            ${FLUENT_ELASTICSEARCH_HOST}&lt;/span&gt;
        &lt;span class="s"&gt;Port            ${FLUENT_ELASTICSEARCH_PORT}&lt;/span&gt;
        &lt;span class="s"&gt;Logstash_Format On&lt;/span&gt;
        &lt;span class="s"&gt;Logstash_Prefix fluent-bit&lt;/span&gt;
        &lt;span class="s"&gt;Retry_Limit     False&lt;/span&gt;

  &lt;span class="na"&gt;parsers.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[PARSER]&lt;/span&gt;
        &lt;span class="s"&gt;Name        docker&lt;/span&gt;
        &lt;span class="s"&gt;Format      json&lt;/span&gt;
        &lt;span class="s"&gt;Time_Key    time&lt;/span&gt;
        &lt;span class="s"&gt;Time_Format %Y-%m-%dT%H:%M:%S.%L&lt;/span&gt;
        &lt;span class="s"&gt;Time_Keep   On&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's now create the first version of the &lt;code&gt;ConfigMap&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f logging/fluent-bit/configmap-1.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

configmap/fluent-bit-config created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this step, we have created a &lt;code&gt;fluent-bit-config&lt;/code&gt; config map in the &lt;code&gt;kube-logging&lt;/code&gt; namespace. It specifies where we want the logs to be read from, how we want to process them and where to send them after processing. We will use it to configure the fluent bit daemonset, which we look at next.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 3 — Creating the Fluent Bit Daemonset
&lt;/h1&gt;

&lt;p&gt;A &lt;code&gt;DaemonSet&lt;/code&gt; will be used to run one fluent bit pod on each node of the cluster. Within the &lt;code&gt;logging/fluent-bit&lt;/code&gt; directory, create and open a &lt;code&gt;daemonset.yaml&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano logging/fluent-bit/daemonset.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following content to the file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-logging&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-logging&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using the above declaration, we are going to configure a &lt;code&gt;DaemonSet&lt;/code&gt; named &lt;code&gt;fluent-bit&lt;/code&gt; in the &lt;code&gt;kube-logging&lt;/code&gt; namespace. In the &lt;code&gt;spec&lt;/code&gt; section, we declare that the daemonset will manage pods that have the &lt;code&gt;k8s-app&lt;/code&gt; label set to &lt;code&gt;fluent-bit-logging&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, add the following contents at the same nesting level as &lt;code&gt;selector&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="s"&gt;template&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-logging&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent/fluent-bit:1.3.8&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FLUENT_ELASTICSEARCH_HOST&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elasticsearch"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FLUENT_ELASTICSEARCH_PORT&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9200"&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlibdockercontainers&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/containers&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;journal&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/journal&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/fluent-bit/etc/&lt;/span&gt;
      &lt;span class="na"&gt;terminationGracePeriodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
        &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;journal&lt;/span&gt;
        &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log/journal&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlibdockercontainers&lt;/span&gt;
        &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/containers&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
        &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
      &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-role.kubernetes.io/master&lt;/span&gt;
        &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Exists&lt;/span&gt;
        &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The most important bits of the above specification are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elasticsearch configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We specify the Elasticsearch host and port via the &lt;code&gt;FLUENT_ELASTICSEARCH_HOST&lt;/code&gt; and &lt;code&gt;FLUENT_ELASTICSEARCH_PORT&lt;/code&gt; environment variables. These are referred to in the fluent bit output configuration (discussed later on). If you are using an existing Elasticsearch cluster, this is where you would specify its DNS name.&lt;/p&gt;
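&lt;p&gt;As a rough illustration of how the &lt;code&gt;${FLUENT_ELASTICSEARCH_HOST}&lt;/code&gt; and &lt;code&gt;${FLUENT_ELASTICSEARCH_PORT}&lt;/code&gt; placeholders in the output configuration pick up these values, here is a Python sketch that mimics fluent bit's environment variable substitution (the real substitution happens inside fluent bit; &lt;code&gt;string.Template&lt;/code&gt; just happens to use the same &lt;code&gt;${VAR}&lt;/code&gt; syntax):&lt;/p&gt;

```python
import os
from string import Template

# The daemonset sets these environment variables on the container;
# the values here mirror the manifest above.
os.environ["FLUENT_ELASTICSEARCH_HOST"] = "elasticsearch"
os.environ["FLUENT_ELASTICSEARCH_PORT"] = "9200"

# Fluent bit replaces ${VAR} in its configuration files with the value
# of the VAR environment variable; string.Template works the same way.
snippet = Template("Host ${FLUENT_ELASTICSEARCH_HOST}\nPort ${FLUENT_ELASTICSEARCH_PORT}")
rendered = snippet.substitute(os.environ)
print(rendered)
```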

&lt;p&gt;&lt;strong&gt;Volume mounts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We mount three host filesystem paths inside the fluent bit pods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/var/log/&lt;/code&gt;: The standard output and error of all the pods are stored as files with a &lt;code&gt;.log&lt;/code&gt; extension in the &lt;code&gt;/var/log/containers&lt;/code&gt; directory. However, these files are symbolic links to the actual files in the &lt;code&gt;/var/lib/docker/containers&lt;/code&gt; directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/var/lib/docker/containers&lt;/code&gt;: This directory is mounted since we need access to the individual containers' log files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/var/log/journal&lt;/code&gt;: For Linux systems running &lt;code&gt;systemd&lt;/code&gt;, systemd journal stores logs related to the systemd services in this directory. Kubernetes system components also log to the systemd journal.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
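&lt;p&gt;The relationship between the first two mounts can be demonstrated with a short Python sketch that recreates the symlink layout in a temporary directory (the file and directory names below are invented; on a real node they contain the pod name and container ID):&lt;/p&gt;

```python
import os
import tempfile

# Recreate the node's log layout under a temp directory: the real log
# file lives under var/lib/docker/containers, while var/log/containers
# only holds a symlink to it.
root = tempfile.mkdtemp()
real_dir = os.path.join(root, "var/lib/docker/containers/abc123")
link_dir = os.path.join(root, "var/log/containers")
os.makedirs(real_dir)
os.makedirs(link_dir)

real_log = os.path.join(real_dir, "abc123-json.log")
with open(real_log, "w") as f:
    f.write('{"log":"hi\\n","stream":"stdout"}\n')

link = os.path.join(link_dir, "mypod_default_app-abc123.log")
os.symlink(real_log, link)

# Reading through the symlink only works if the target directory is also
# visible, which is why the daemonset mounts both host paths.
print(os.path.realpath(link))
```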

&lt;p&gt;The fourth volume mount inside the pod is from the &lt;code&gt;ConfigMap&lt;/code&gt; resource &lt;code&gt;fluent-bit-config&lt;/code&gt;, which is mounted at &lt;code&gt;/fluent-bit/etc&lt;/code&gt;, the default location where the fluent bit docker image looks for configuration files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We specify the service account we want to run the daemonset as using the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Run on the master node too&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since we want to run fluent bit on every node of the cluster, including the master, we add a toleration for the master node's &lt;code&gt;NoSchedule&lt;/code&gt; taint so that fluent bit pods can also be scheduled there:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

      &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-role.kubernetes.io/master&lt;/span&gt;
        &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Exists&lt;/span&gt;
        &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The entire contents of the file is as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;


&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-logging&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-logging&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-logging&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent/fluent-bit:1.3.8&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FLUENT_ELASTICSEARCH_HOST&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elasticsearch"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FLUENT_ELASTICSEARCH_PORT&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9200"&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlibdockercontainers&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/containers&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;journal&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/journal&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/fluent-bit/etc/&lt;/span&gt;
      &lt;span class="na"&gt;terminationGracePeriodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlog&lt;/span&gt;
        &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;journal&lt;/span&gt;
        &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log/journal&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;varlibdockercontainers&lt;/span&gt;
        &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/containers&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
        &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit&lt;/span&gt;
      &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node-role.kubernetes.io/master&lt;/span&gt;
        &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's now create the &lt;code&gt;DaemonSet&lt;/code&gt; that will deploy fluent bit to the Kubernetes cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f logging/fluent-bit/daemonset.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

daemonset.apps/fluent-bit created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's wait for the daemonset to be rolled out:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl rollout status daemonset/fluent-bit -n kube-logging


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output when the command exits:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

daemon set "fluent-bit" successfully rolled out


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, in our browser, we will go to the Kibana URL &lt;code&gt;http://127.0.0.1:5601/app/kibana#/management/kibana/index_pattern?_g=()&lt;/code&gt; to create an index pattern. You will see that an elasticsearch index is present, but no Kibana index pattern matching it exists:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FoWf1FVS.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FoWf1FVS.png" alt="Kibana index management"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Type in &lt;code&gt;fluent-bit*&lt;/code&gt; as the index pattern:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6uPzyuQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6uPzyuQ.png" alt="Kibana index pattern setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on "Next Step", select &lt;code&gt;@timestamp&lt;/code&gt; as the Time&lt;br&gt;
filter field name and click on "Create index pattern":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FRLM5HAS.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FRLM5HAS.png" alt="Kibana index creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the index creation has been completed, if you visit the URL &lt;code&gt;http://127.0.0.1:5601/app/kibana#/discover&lt;/code&gt; you should see logs from the currently running containers: elasticsearch and kibana, as well as fluent bit itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FZSn5MZu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FZSn5MZu.png" alt="Logs from the running containers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this step, we successfully deployed fluent bit in the cluster, configured it to read logs emitted by various pods running in the cluster and then send those logs to Elasticsearch.&lt;/p&gt;

&lt;p&gt;It is worth noting here that there was no need to configure fluent bit specifically for reading logs emitted by Elasticsearch, Kibana or fluent bit itself. Similarly, we didn't need to configure the applications to send their logs to Elasticsearch. This decoupling is a major benefit of setting up log forwarding. If we expand any of the log entries and look at the JSON version, we will see an entry that looks similar to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "_index": "fluent-bit-2020.03.25",
  "_type": "flb_type",
  "_id": "zsksEHEBxb--hvB5vgJi",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2020-03-25T05:31:39.122Z",
    "log": "{\"type\":\"response\",\"@timestamp\":\"2020-03-25T05:31:39Z\",\"tags\":[],\"pid\":1,\"method\":\"get\",\"statusCode\":200,\"req\":{\"url\":\"/ui/fonts/roboto_mono/RobotoMono-Bold.ttf\",\"method\":\"get\",\"headers\":{\"host\":\"127.0.0.1:5601\",\"user-agent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0\",\"accept\":\"application/font-woff2;q=1.0,application/font-woff;q=0.9,*/*;q=0.8\",\"accept-language\":\"en-US,en;q=0.5\",\"accept-encoding\":\"identity\",\"connection\":\"keep-alive\",\"referer\":\"http://127.0.0.1:5601/app/kibana\"},\"remoteAddress\":\"127.0.0.1\",\"userAgent\":\"127.0.0.1\",\"referer\":\"http://127.0.0.1:5601/app/kibana\"},\"res\":{\"statusCode\":200,\"responseTime\":10,\"contentLength\":9},\"message\":\"GET /ui/fonts/roboto_mono/RobotoMono-Bold.ttf 200 10ms - 9.0B\"}\n",
    "stream": "stdout",
    "time": "2020-03-25T05:31:39.122786572Z"
  },
  "fields": {
    "@timestamp": [
      "2020-03-25T05:31:39.122Z"
    ],
    "time": [
      "2020-03-25T05:31:39.122Z"
    ]
  },
  "sort": [
    1585114299122
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The most important field in the above JSON object is &lt;code&gt;_source&lt;/code&gt;. The value of this field corresponds to a line read by fluent bit from the configured input source. It has the following fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;log&lt;/code&gt;: The value of this field corresponds to a single line emitted by the application to standard output (&lt;code&gt;stdout&lt;/code&gt;) or standard error (&lt;code&gt;stderr&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stream&lt;/code&gt;: The value of this field identifies the output stream (&lt;code&gt;stdout&lt;/code&gt; or &lt;code&gt;stderr&lt;/code&gt;). It is added by the Kubernetes runtime.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;time&lt;/code&gt;: This field contains the UTC timestamp at which the log was read by the Kubernetes runtime&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@timestamp&lt;/code&gt;: This field is derived by fluent bit by parsing the value of the field specified via &lt;code&gt;Time_Key&lt;/code&gt; using the format specified in &lt;code&gt;Time_Format&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
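&lt;p&gt;Since the application in this example (Kibana) itself logs JSON, the &lt;code&gt;log&lt;/code&gt; field holds a JSON document encoded as a string, which can simply be decoded a second time. A Python sketch, using an abridged, hypothetical &lt;code&gt;_source&lt;/code&gt; value:&lt;/p&gt;

```python
import json

# Abridged _source object, in the shape forwarded by fluent bit
# (values shortened from the example entry above).
source = {
    "@timestamp": "2020-03-25T05:31:39.122Z",
    "log": '{"type":"response","statusCode":200,"message":"GET /ui/... 200 10ms"}\n',
    "stream": "stdout",
    "time": "2020-03-25T05:31:39.122786572Z",
}

# The application's own JSON log line is a string inside "log";
# decode it again to get at the structured fields.
inner = json.loads(source["log"])
print(inner["type"], inner["statusCode"])
```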

&lt;p&gt;Next, we will deploy a web application inside the cluster which emits JSON formatted logs to the standard output and error streams. We will see that fluent bit automatically forwards these logs to Elasticsearch without requiring any additional configuration, either in fluent bit or on the application side.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 4 — Writing and Deploying a Web Application on Kubernetes
&lt;/h1&gt;

&lt;p&gt;We will use the Python programming language to write a basic web application using &lt;a href="https://palletsprojects.com/p/flask/" rel="noopener noreferrer"&gt;Flask&lt;/a&gt;. To deploy the application in the Kubernetes cluster, we will build a docker image containing the application source code and publish it to Docker Hub. First, create a new directory, &lt;code&gt;application&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mkdir application


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now inside that directory, create and open a file called &lt;code&gt;app.py&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano application/app.py


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following contents to it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/test/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rest&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/honeypot/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test1&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;lol&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first two statements import the &lt;code&gt;Flask&lt;/code&gt; class from the &lt;code&gt;flask&lt;/code&gt; package and create the application using the current module name (via the special &lt;code&gt;__name__&lt;/code&gt; variable). Then, we define two endpoints - &lt;code&gt;/test/&lt;/code&gt; and &lt;code&gt;/honeypot/&lt;/code&gt; - using the &lt;code&gt;app.route&lt;/code&gt; decorator. The &lt;code&gt;/test/&lt;/code&gt; endpoint returns the text &lt;code&gt;rest&lt;/code&gt; as its response, and the &lt;code&gt;/honeypot/&lt;/code&gt; endpoint raises an exception when called due to the &lt;code&gt;1/0&lt;/code&gt; statement.&lt;/p&gt;

&lt;p&gt;To run the application, we will use &lt;a href="https://gunicorn.org/" rel="noopener noreferrer"&gt;gunicorn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Inside the application directory, create and open a new file, &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano application/Dockerfile


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following contents to the file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM python:3.7-alpine
ADD app.py /app.py

RUN set -e; \
    apk add --no-cache --virtual .build-deps \
        gcc \
        libc-dev \
        linux-headers \
    ; \
    pip install flask gunicorn ; \
    apk del .build-deps;

WORKDIR /
CMD ["gunicorn", "--workers", "5", "--bind", "0.0.0.0:8000", "app:app"]



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above &lt;code&gt;Dockerfile&lt;/code&gt; will create a docker image containing the application code, install Flask and gunicorn, and configure the image to start &lt;code&gt;gunicorn&lt;/code&gt; on startup. We run 5 worker processes listening on port 8000 for HTTP requests, with the WSGI application entrypoint object&lt;br&gt;
as &lt;code&gt;app&lt;/code&gt; inside the &lt;code&gt;app&lt;/code&gt; Python module.&lt;/p&gt;

&lt;p&gt;From within the &lt;code&gt;application&lt;/code&gt; directory, let's build the docker image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd application
docker build -t sammy/do-webapp .
..
Successfully built ec7bd4635bc7
Successfully tagged sammy/do-webapp:latest



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's run a docker container using the above image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker run -p 8000:8000 -ti sammy/do-webapp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, if we visit the URL &lt;code&gt;http://127.0.0.1:8000/honeypot/&lt;/code&gt; from the browser, we will see logs such as these on the terminal we ran the container from:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


[2019-09-05 04:42:53 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-09-05 04:42:53 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2019-09-05 04:42:53 +0000] [1] [INFO] Using worker: sync
[2019-09-05 04:42:53 +0000] [9] [INFO] Booting worker with pid: 9
[2019-09-05 04:42:53 +0000] [10] [INFO] Booting worker with pid: 10
[2019-09-05 04:42:53 +0000] [11] [INFO] Booting worker with pid: 11
[2019-09-05 04:42:53 +0000] [12] [INFO] Booting worker with pid: 12
[2019-09-05 04:42:53 +0000] [13] [INFO] Booting worker with pid: 13
[2019-09-05 04:43:05,289] ERROR in app: Exception on /honeypot/ [GET]
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/app.py", line 15, in test1
    1/0
ZeroDivisionError: division by zero


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first few lines are startup messages from &lt;code&gt;gunicorn&lt;/code&gt;. Then, we see the exception that occurs when we make a request to the &lt;code&gt;/honeypot/&lt;/code&gt; endpoint. Tracebacks like these present a problem for logging since they are spread over multiple lines. We want the entire traceback as a single&lt;br&gt;
log message. One way to achieve that is to log messages in a JSON format. Press &lt;code&gt;CTRL + C&lt;/code&gt; to terminate the container.&lt;/p&gt;
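&lt;p&gt;To see why JSON helps here, the following is a minimal stdlib-only sketch (a hand-rolled formatter standing in for the &lt;code&gt;python-json-logger&lt;/code&gt; package used in this tutorial) showing an entire traceback collapsed into a single one-line log record:&lt;/p&gt;

```python
# Minimal stdlib-only sketch: a formatter that renders each log record as one
# JSON object per line, so a multi-line traceback becomes a single log event.
# (The tutorial itself uses the python-json-logger package instead.)
import io
import json
import logging


class JsonLineFormatter(logging.Formatter):
    def format(self, record):
        payload = {"message": record.getMessage()}
        if record.exc_info:
            # formatException renders the traceback; json.dumps escapes the
            # embedded newlines, so the whole record stays on one line
            payload["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(payload)


stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonLineFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("Exception on /honeypot/ [GET]")

line = stream.getvalue().strip()
record = json.loads(line)
print(record["message"])
```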

&lt;p&gt;Let's now configure &lt;code&gt;gunicorn&lt;/code&gt; to emit the logs in a JSON format. In the &lt;code&gt;application&lt;/code&gt; directory, create and open a new file, &lt;code&gt;gunicorn_logging.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano application/gunicorn_logging.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following contents to it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

[loggers]
keys=root, gunicorn.error, gunicorn.access

[handlers]
keys=console

[formatters]
keys=json

[logger_root]
level=INFO
handlers=console

[logger_gunicorn.error]
level=DEBUG
handlers=console
propagate=0
qualname=gunicorn.error

[logger_gunicorn.access]
level=INFO
handlers=console
propagate=0
qualname=gunicorn.access

[handler_console]
class=StreamHandler
formatter=json
args=(sys.stdout, )

[formatter_json]
class=pythonjsonlogger.jsonlogger.JsonFormatter



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To understand the above logging configuration completely, please refer to the &lt;a href="https://docs.python.org/3/library/logging.config.html" rel="noopener noreferrer"&gt;Python logging module documentation&lt;/a&gt;. The most relevant part for us is the &lt;code&gt;formatter_json&lt;/code&gt; section, where we set the logging formatter class to the &lt;code&gt;JsonFormatter&lt;/code&gt; class, which is part of the &lt;a href="https://github.com/madzak/python-json-logger" rel="noopener noreferrer"&gt;python-json-logger&lt;/a&gt; package.&lt;/p&gt;
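&lt;p&gt;The following is a runnable sketch of how an INI-style config like this is loaded, using only the standard library: the stdlib &lt;code&gt;logging.Formatter&lt;/code&gt; stands in for &lt;code&gt;JsonFormatter&lt;/code&gt; (which lives in the third-party package), and the &lt;code&gt;qualname&lt;/code&gt; entry binds a section to the named logger that gunicorn itself writes to:&lt;/p&gt;

```python
# Sketch of loading an INI-style logging config with logging.config.fileConfig.
# logging.Formatter stands in for python-json-logger's JsonFormatter here;
# everything else mirrors the structure of the tutorial's config file.
import io
import logging.config
import sys
import tempfile

CONFIG = """\
[loggers]
keys=root, gunicorn.error

[handlers]
keys=console

[formatters]
keys=plain

[logger_root]
level=INFO
handlers=console

[logger_gunicorn.error]
level=DEBUG
handlers=console
propagate=0
qualname=gunicorn.error

[handler_console]
class=StreamHandler
formatter=plain
args=(sys.stdout, )

[formatter_plain]
format=%(name)s %(levelname)s %(message)s
"""

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write(CONFIG)
    conf_path = f.name

buf = io.StringIO()
sys.stdout = buf  # capture the handler's stream (it is bound at config time)
logging.config.fileConfig(conf_path, disable_existing_loggers=False)
# qualname=gunicorn.error means this is the logger gunicorn itself uses:
logging.getLogger("gunicorn.error").error("worker crashed")
sys.stdout = sys.__stdout__
print(buf.getvalue().strip())
```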

&lt;p&gt;To use the above logging configuration, we will update the &lt;code&gt;application/Dockerfile&lt;/code&gt; as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM python:3.7-alpine

ADD app.py /app.py
ADD gunicorn_logging.conf /gunicorn_logging.conf

RUN set -e; \
    apk add --no-cache --virtual .build-deps \
        gcc \
        libc-dev \
        linux-headers \
    ; \
    pip install flask python-json-logger gunicorn ; \
    apk del .build-deps;
EXPOSE 8000
WORKDIR /
CMD ["gunicorn", "--log-config", "gunicorn_logging.conf", "--workers", "5", "--bind", "0.0.0.0:8000", "app:app"]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The key changes in the above Dockerfile are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since we are using a custom gunicorn logging configuration, we copy the gunicorn logging configuration using: &lt;code&gt;ADD gunicorn_logging.conf /gunicorn_logging.conf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;We are now using a new Python package for JSON logging, so we add &lt;code&gt;python-json-logger&lt;/code&gt; to the list of packages being installed using &lt;code&gt;pip install&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;To specify the custom logging configuration file to &lt;code&gt;gunicorn&lt;/code&gt;, we specify the configuration file path to the  &lt;code&gt;--log-config&lt;/code&gt; option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rebuild the  image using the updated Dockerfile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker build -t sammy/do-webapp .
..
Successfully built ec7bd4635bc7
Successfully tagged sammy/do-webapp:latest



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's run the image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker run -p 8000:8000 -ti sammy/do-webapp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, if we revisit the URL &lt;code&gt;http://127.0.0.1:8000/honeypot/&lt;/code&gt;, we will see that logs are now emitted in a JSON format&lt;br&gt;
in the terminal running the container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Starting gunicorn 19.9.0"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Arbiter booted"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Listening at: http://0.0.0.0:8000 (1)"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Using worker: sync"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Booting worker with pid: 8"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5 workers"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"metric"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gunicorn.workers"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"mtype"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gauge"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET /honeypot/"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Exception on /honeypot/ [GET]"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"exc_info"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Traceback (most recent call last):&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/lib/python3.7/site-packages/flask/app.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 2446, in wsgi_app&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    response = self.full_dispatch_request()&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/lib/python3.7/site-packages/flask/app.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 1951, in full_dispatch_request&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    rv = self.handle_user_exception(e)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/lib/python3.7/site-packages/flask/app.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 1820, in handle_user_exception&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    reraise(exc_type, exc_value, tb)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/lib/python3.7/site-packages/flask/_compat.py&lt;/span&gt;&lt;span 
class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 39, in reraise&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    raise value&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/lib/python3.7/site-packages/flask/app.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 1949, in full_dispatch_request&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    rv = self.dispatch_request()&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/lib/python3.7/site-packages/flask/app.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 1935, in dispatch_request&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    return self.view_functions[rule.endpoint](**req.view_args)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;  File &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;/app.py&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, line 13, in test1&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;    1/0&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;ZeroDivisionError: division by zero"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.17.0.1 - - [05/Sep/2019:04:47:33 +0000] &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;GET /honeypot/ HTTP/1.1&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; 500 290 &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET /test/"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"172.17.0.1 - - [05/Sep/2019:04:47:47 +0000] &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;GET /test/ HTTP/1.1&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; 200 4 &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are building your own docker image, please replace &lt;code&gt;sammy&lt;/code&gt; with your own docker hub username when you login&lt;br&gt;
as well as when you build and push the images. In addition, substitute any reference to &lt;code&gt;sammy/do-webapp&lt;/code&gt; with your own image name for the rest of this article.&lt;/p&gt;

&lt;p&gt;Now that we have our application emitting logs as JSON formatted strings, let's push the docker image to docker hub. You will need to login first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker login


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: sammy
Password: 
Login Succeeded


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let's push the docker image to docker hub:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker push sammy/do-webapp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


The push refers to repository [docker.io/sammy/do-webapp]
..
latest: digest: sha256:ba7719343e3430e88dc5257b8839c721d8865498603beb2e3161d57b50a72cbe size: 1993


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that we have pushed our docker image to docker hub, we will deploy it to our Kubernetes cluster using a &lt;code&gt;Deployment&lt;/code&gt; in a new namespace, &lt;code&gt;demo&lt;/code&gt;. Create and open a new file &lt;code&gt;namespace.yaml&lt;/code&gt; inside the &lt;code&gt;application&lt;/code&gt; directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano application/namespace.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following contents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To create the namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f application/namespace.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

namespace/demo created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, create and open a new file &lt;code&gt;deployment.yaml&lt;/code&gt; inside the &lt;code&gt;application&lt;/code&gt; directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano application/deployment.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the following contents to it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: sammy/do-webapp
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /test/
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /test/
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 10

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The web application is deployed to the &lt;code&gt;demo&lt;/code&gt; namespace and runs a container from the &lt;code&gt;sammy/do-webapp&lt;/code&gt; docker image we just pushed. The container has HTTP liveness and readiness probes configured for port 8000 and the path &lt;code&gt;/test/&lt;/code&gt;. These allow Kubernetes to check whether the application is working as expected.&lt;/p&gt;
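&lt;p&gt;Conceptually, an &lt;code&gt;httpGet&lt;/code&gt; probe just issues a GET against the configured path and port and treats a successful status code as healthy. Here is a rough Python sketch of that check; the tiny local server is a stand-in for the webapp container, not part of the tutorial's setup:&lt;/p&gt;

```python
# Rough sketch of an httpGet probe: GET the configured path and treat a
# 2xx/3xx response as healthy. The local server below is a stand-in that
# behaves like the tutorial's /test/ and /honeypot/ endpoints.
import http.server
import threading
import urllib.error
import urllib.request


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # /test/ succeeds; any other path fails like /honeypot/ would
        status = 200 if self.path == "/test/" else 500
        self.send_response(status)
        self.end_headers()
        self.wfile.write(b"rest")

    def log_message(self, *args):  # keep the demo output quiet
        pass


server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]


def probe(path):
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}{path}") as resp:
            return 200 <= resp.status < 400
    except urllib.error.URLError:
        return False


healthy = probe("/test/")
unhealthy = probe("/honeypot/")
server.shutdown()
print(healthy, unhealthy)
```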

&lt;p&gt;Next, let's create the deployment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f application/deployment.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

deployment.apps/webapp created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's wait for the deployment rollout to complete:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl rollout status deployment/webapp -n demo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Waiting for deployment "webapp" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "webapp" rollout to finish: 1 of 2 updated replicas are available...
deployment "webapp" successfully rolled out

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's see if the pods are up and running successfully:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl -n demo get pods

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NAME                      READY   STATUS    RESTARTS   AGE
webapp-65f6798978-8phqg   1/1     Running   0          76s
webapp-65f6798978-f2jl9   1/1     Running   0          76s

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To access the web application from our local workstation, we will use port&lt;br&gt;
forwarding specifying a pod name of one of the two pods above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl -n demo port-forward pod/webapp-65f6798978-8phqg 8000:8000

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Forwarding from 127.0.0.1:8000 -&amp;gt; 8000
Forwarding from [::1]:8000 -&amp;gt; 8000

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, visit &lt;code&gt;http://127.0.0.1:8000/honeypot/&lt;/code&gt; in the browser a few times.&lt;/p&gt;

&lt;p&gt;Now, if we go to Kibana and use "honeypot" as the search query, we will see log documents emitted by our web application. The document's &lt;code&gt;log&lt;/code&gt; field contains the entire log line emitted by the application as a string:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
...
"log": {"message": "Exception on /honeypot/ [GET]", "exc_info": "Traceback (most recent call last):\n  File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 2446, in wsgi_app\n    response = self.full_dispatch_request()\n  File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 1951, in full_dispatch_request\n    rv = self.handle_user_exception(e)\n  File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 1820, in handle_user_exception\n    reraise(exc_type, exc_value, tb)\n  File \"/usr/local/lib/python3.7/site-packages/flask/_compat.py\", line 39, in reraise\n    raise value\n  File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 1949, in full_dispatch_request\n    rv = self.dispatch_request()\n  File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 1935, in dispatch_request\n    return self.view_functions[rule.endpoint](**req.view_args)\n  File \"/app.py\", line 13, in test1\n    1/0\nZeroDivisionError: division by zero"}
...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this step, we have seen how without any further work on the logging setup, we can view and search the application logs in Elasticsearch. Next, we will improve this in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Parse the &lt;code&gt;log&lt;/code&gt; field as JSON and add the JSON keys as top-level fields in the Elasticsearch document&lt;/li&gt;
&lt;li&gt;Add Kubernetes metadata to each log document so that we can have identifying information about where the log is being forwarded from&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(1) will allow us to search for logs with specific fields and (2) will give us information about the specific pod a log is being emitted from. Let's see how we can do both.&lt;/p&gt;
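&lt;p&gt;For intuition, here is a rough Python approximation of what (1) amounts to - a sketch of the behaviour, not Fluent Bit's actual implementation:&lt;/p&gt;

```python
# Rough approximation of Fluent Bit's Merge_Log behaviour (sketch only, not
# the real implementation): if the "log" field holds a JSON string, lift its
# keys to the top level of the record; with Keep_Log off, drop the raw string.
import json


def merge_log(record, keep_log=False):
    try:
        parsed = json.loads(record.get("log", ""))
    except (json.JSONDecodeError, TypeError):
        return record  # not JSON: leave the record untouched
    if not isinstance(parsed, dict):
        return record
    merged = {**record, **parsed}
    if not keep_log:
        merged.pop("log", None)
    return merged


raw = {"stream": "stdout", "log": '{"message": "GET /test/", "status": 200}'}
merged = merge_log(raw)
print(merged)
```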

&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Keep the above port forward running in a terminal session and use a new terminal session for running the commands in the rest of the tutorial.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 5 — Update fluent bit configuration to apply kubernetes filter
&lt;/h1&gt;

&lt;p&gt;In fluent bit, a filter is used to alter the incoming log data in some way. The built-in &lt;a href="https://docs.fluentbit.io/manual/filter/kubernetes" rel="noopener noreferrer"&gt;kubernetes filter&lt;/a&gt; enriches logs with kubernetes metadata. In addition, it can read the incoming data in the &lt;code&gt;log&lt;/code&gt; field and, if it is JSON, "scoop out" the keys and add them as top-level fields on the log entry.&lt;/p&gt;

&lt;p&gt;The second version of the fluent bit &lt;code&gt;ConfigMap&lt;/code&gt; resource adds this filter. Create a copy of the existing &lt;code&gt;configmap-1.yaml&lt;/code&gt; and name it &lt;code&gt;configmap-2.yaml&lt;/code&gt; in the same directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cp logging/fluent-bit/configmap-1.yaml logging/fluent-bit/configmap-2.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Open the &lt;code&gt;logging/fluent-bit/configmap-2.yaml&lt;/code&gt; file and update the &lt;code&gt;fluent-bit.conf&lt;/code&gt; file definition to include:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; @INCLUDE filter-kubernetes.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In addition, add a new file declaration &lt;code&gt;filter-kubernetes.conf&lt;/code&gt; after &lt;code&gt;input-kubernetes.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filter-kubernetes.conf: |
   [FILTER]
     Name                kubernetes
     Match               kube.*
     Keep_Log            Off
     Merge_Log           On
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;p&gt;We specify a section, &lt;code&gt;[FILTER]&lt;/code&gt; in this file and refer to the &lt;code&gt;kubernetes&lt;/code&gt; filter using the &lt;code&gt;Name&lt;/code&gt; attribute. We use &lt;code&gt;Match&lt;/code&gt; in &lt;code&gt;filter-kubernetes.conf&lt;/code&gt; to only apply this filter to logs tagged with &lt;code&gt;kube.*&lt;/code&gt;.&lt;br&gt;
Note that when we configured the input above in &lt;code&gt;input-kubernetes.conf&lt;/code&gt;, we tagged all messages with &lt;code&gt;kube.*&lt;/code&gt;.&lt;/p&gt;
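&lt;p&gt;The tag matching is glob-style, which a short sketch makes concrete. Fluent Bit's matcher is its own implementation; Python's &lt;code&gt;fnmatch&lt;/code&gt; is used here only to illustrate the idea:&lt;/p&gt;

```python
# Illustration of glob-style tag matching: the tail input tags each record
# with kube.<file path>, and Match kube.* selects exactly those records.
# (fnmatch stands in for Fluent Bit's own matcher here.)
from fnmatch import fnmatchcase

tags = [
    "kube.var.log.containers.webapp-65f6798978-8phqg.log",
    "kube.var.log.containers.fluent-bit-abcde.log",
    "host.syslog",
]
selected = [t for t in tags if fnmatchcase(t, "kube.*")]
print(selected)
```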

&lt;p&gt;The entire file contents should be as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Keep_Log            Off
        Merge_Log           On

  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format On
        Logstash_Prefix fluent-bit
        Retry_Limit     False

  parsers.conf: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
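&lt;p&gt;As a rough sketch of what the &lt;code&gt;tail&lt;/code&gt; input and the &lt;code&gt;docker&lt;/code&gt; parser do with a container log line, the following Python snippet decodes one such line (the sample record and its field values are made up for illustration; fluent bit does all of this internally):&lt;/p&gt;

```python
import json
from datetime import datetime

# Hypothetical sample of a line the docker runtime writes under
# /var/log/containers/; the tail input reads it, the docker parser decodes it.
line = json.dumps({
    "log": '{"message": "handling request", "exc_info": null}\n',
    "stream": "stderr",
    "time": "2020-05-07T01:06:17.123456789Z",
})

# Format json: the whole line is a JSON object.
record = json.loads(line)

# Time_Key points at "time"; %L matches the fractional seconds. Python's %f
# accepts at most six digits, so trim the nanoseconds for this sketch.
timestamp = datetime.strptime(record["time"][:26], "%Y-%m-%dT%H:%M:%S.%f")

# With Merge_Log On, the kubernetes filter additionally parses the JSON
# payload in "log", lifting "message" into a top-level searchable field.
payload = json.loads(record["log"])
print(timestamp.isoformat(), record["stream"], payload["message"])
# prints: 2020-05-07T01:06:17.123456 stderr handling request
```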

&lt;p&gt;Let's delete the existing &lt;code&gt;ConfigMap&lt;/code&gt; and &lt;code&gt;DaemonSet&lt;/code&gt; resources first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete -f logging/fluent-bit/configmap-1.yaml -f logging/fluent-bit/daemonset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;configmap "fluent-bit-config" deleted
daemonset.apps "fluent-bit" deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we recreate the new version of the &lt;code&gt;ConfigMap&lt;/code&gt; and the fluent bit &lt;code&gt;DaemonSet&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f logging/fluent-bit/configmap-2.yaml -f logging/fluent-bit/daemonset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;configmap/fluent-bit-config created
daemonset.apps/fluent-bit created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, visit the URL &lt;code&gt;http://127.0.0.1:8080/honeypot/&lt;/code&gt; in your browser once more and then, in Kibana, use the following search query: &lt;code&gt;kubernetes.labels.app: "webapp"&lt;/code&gt;. You will see a few documents show up&lt;br&gt;
in the results. Each log document has kubernetes metadata&lt;br&gt;
associated with it - we used one in our search query under the &lt;code&gt;kubernetes&lt;/code&gt; object. In addition, &lt;code&gt;message&lt;/code&gt; and &lt;code&gt;exc_info&lt;/code&gt; are now searchable fields. For example, the Kibana query &lt;code&gt;kubernetes.labels.app: "webapp"  AND exc_info: "ZeroDivisionError"&lt;/code&gt; will return log documents&lt;br&gt;
containing &lt;code&gt;ZeroDivisionError&lt;/code&gt; in the &lt;code&gt;exc_info&lt;/code&gt; field.&lt;/p&gt;

&lt;p&gt;Here's an example log document with the exception info logged in a separate field:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FIH0OIBO.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FIH0OIBO.png" alt="Kibana index creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this step, we improved the application logging by making use of fluent bit's features. We used the Kubernetes filter to add Kubernetes metadata to the log messages and parsed the JSON log emitted by the application to make the individual fields searchable in Elasticsearch.&lt;/p&gt;
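&lt;p&gt;The webapp's actual logging setup is not reproduced here, but a minimal stdlib-only formatter along the following lines produces the kind of JSON log line, with &lt;code&gt;message&lt;/code&gt; and &lt;code&gt;exc_info&lt;/code&gt; fields, that &lt;code&gt;Merge_Log&lt;/code&gt; turns into individually searchable fields (the &lt;code&gt;JsonFormatter&lt;/code&gt; class and logger name are illustrative, not taken from the demo app):&lt;/p&gt;

```python
import json
import logging
import sys

# A sketch of a formatter that emits one JSON object per log line, so the
# docker parser plus Merge_Log can lift each field into Elasticsearch.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        entry = {"message": record.getMessage()}
        if record.exc_info:
            # Render the traceback into a dedicated, searchable field.
            entry["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(entry)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("webapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

try:
    1 / 0
except ZeroDivisionError:
    # Emits one JSON line whose exc_info field contains the traceback.
    logger.exception("division failed")
```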

&lt;p&gt;In the next step, we see how we can forward system logs via fluent bit.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 6 — Update fluent bit configuration to forward system logs
&lt;/h1&gt;

&lt;p&gt;In addition to application logs, it is a good idea to also forward logs from system services. These logs are useful when we want to debug the behavior of services such as the &lt;code&gt;ssh&lt;/code&gt; daemon, Kubernetes node management&lt;br&gt;
services such as &lt;code&gt;kubelet&lt;/code&gt;, and the docker daemon. Most Linux systems available today run &lt;code&gt;systemd&lt;/code&gt;, and logs from these services go to the systemd journal. We already mount the &lt;code&gt;/var/log/journal&lt;/code&gt; directory inside the fluent bit container. However, we have a couple of additional steps to perform before we can see our journal logs being forwarded by fluent bit.&lt;/p&gt;

&lt;p&gt;The final version of the fluent bit &lt;code&gt;ConfigMap&lt;/code&gt; resource adds this filter. Create a copy of the existing &lt;code&gt;configmap-2.yaml&lt;/code&gt; and name it &lt;code&gt;configmap-final.yaml&lt;/code&gt; in the same directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp logging/fluent-bit/configmap-2.yaml logging/fluent-bit/configmap-final.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first step is to add an additional &lt;code&gt;[INPUT]&lt;/code&gt; section to the fluent bit configuration. fluent bit has a dedicated &lt;code&gt;input&lt;/code&gt; &lt;a href="https://docs.fluentbit.io/manual/input/systemd" rel="noopener noreferrer"&gt;plugin&lt;/a&gt; for systemd, which can be specified as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [INPUT]
      Name            systemd
      Path            /journal
      Tag             systemd.*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Logs read by the systemd input plugin will be tagged with &lt;code&gt;systemd.*&lt;/code&gt;. This tag is then used to define two filters. Add the following to the &lt;code&gt;configmap-final.yaml&lt;/code&gt; file at the same nesting level as&lt;br&gt;
&lt;code&gt;filter-kubernetes.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filter-systemd.conf: |
    [FILTER]
        Name modify
        Match systemd.*
        Rename _SYSTEMD_UNIT systemd_unit
        Rename _HOSTNAME hostname

    [FILTER]
        Name record_modifier
        Match systemd.*
        Remove_Key _CURSOR
        Remove_Key _REALTIME_TIMESTAMP
        Remove_Key _MONOTONIC_TIMESTAMP
        Remove_Key _BOOT_ID
        Remove_Key _MACHINE_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first is a &lt;code&gt;modify&lt;/code&gt; filter, which renames the &lt;code&gt;_SYSTEMD_UNIT&lt;/code&gt; field to &lt;code&gt;systemd_unit&lt;/code&gt; and &lt;code&gt;_HOSTNAME&lt;/code&gt; to &lt;code&gt;hostname&lt;/code&gt;. The second is a &lt;code&gt;record_modifier&lt;/code&gt; filter, which removes keys we may not care to log. We can also&lt;br&gt;
use this filter to add fields to the log via its &lt;code&gt;Add_Key&lt;/code&gt; option. For both filters, we use &lt;code&gt;Match&lt;/code&gt; to apply them only to logs tagged &lt;code&gt;systemd.*&lt;/code&gt;.&lt;/p&gt;
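&lt;p&gt;To make the effect of these two filters concrete, here is a small Python simulation of the rename-then-remove transformation (the sample journal record is hypothetical, and fluent bit of course does this internally, not via Python):&lt;/p&gt;

```python
# Fields targeted by the modify filter's Rename rules.
RENAMES = {"_SYSTEMD_UNIT": "systemd_unit", "_HOSTNAME": "hostname"}
# Fields dropped by the record_modifier filter's Remove_Key rules.
REMOVED = {"_CURSOR", "_REALTIME_TIMESTAMP", "_MONOTONIC_TIMESTAMP",
           "_BOOT_ID", "_MACHINE_ID"}

def apply_filters(record):
    """Rename first (modify filter), then drop keys (record_modifier)."""
    renamed = {RENAMES.get(key, key): value for key, value in record.items()}
    return {key: value for key, value in renamed.items() if key not in REMOVED}

# Hypothetical journal record before filtering.
journal_record = {
    "_SYSTEMD_UNIT": "kubelet.service",
    "_HOSTNAME": "node-1",
    "_CURSOR": "s=abc123",
    "MESSAGE": "Started kubelet",
}
print(apply_filters(journal_record))
# {'systemd_unit': 'kubelet.service', 'hostname': 'node-1', 'MESSAGE': 'Started kubelet'}
```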

&lt;p&gt;The final contents of the file will look as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf

    @INCLUDE input-systemd.conf
    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE filter-systemd.conf
    @INCLUDE output-elasticsearch.conf

  input-systemd.conf: |
    [INPUT]
        Name            systemd
        Path            /journal
        Tag             systemd.*

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Keep_Log            Off
        Merge_Log           On

  filter-systemd.conf: |
    [FILTER]
        Name modify
        Match systemd.*
        Rename _SYSTEMD_UNIT systemd_unit
        Rename _HOSTNAME hostname

    [FILTER]
        Name record_modifier
        Match systemd.*
        Remove_Key _CURSOR
        Remove_Key _REALTIME_TIMESTAMP
        Remove_Key _MONOTONIC_TIMESTAMP
        Remove_Key _BOOT_ID
        Remove_Key _MACHINE_ID

  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format On
        Logstash_Prefix fluent-bit
        Retry_Limit     False

  parsers.conf: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's delete the existing &lt;code&gt;ConfigMap&lt;/code&gt; and &lt;code&gt;DaemonSet&lt;/code&gt; resources first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete -f logging/fluent-bit/configmap-2.yaml -f logging/fluent-bit/daemonset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;configmap "fluent-bit-config" deleted
daemonset.apps "fluent-bit" deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let's recreate the configmap using the final manifest and the fluent bit&lt;br&gt;
&lt;code&gt;DaemonSet&lt;/code&gt; again:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f logging/fluent-bit/configmap-final.yaml -f logging/fluent-bit/daemonset.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the following output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;configmap/fluent-bit-config created
daemonset.apps/fluent-bit created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Fluent bit will now forward logs from various system services to Elasticsearch. However, before we can search for them in Kibana, we have to perform a "Refresh field list" operation in Kibana. Go to&lt;br&gt;
the page &lt;code&gt;http://127.0.0.1:5601/app/kibana#/management/kibana/index_patterns&lt;/code&gt; in your browser and click on &lt;code&gt;fluent-bit*&lt;/code&gt; - the index pattern we created earlier. This will take us to the page as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FHId42sj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FHId42sj.png" alt="fluent-bit* index pattern page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the "refresh" icon (the second icon on the top right) and in the pop up dialog box, click on "Refresh":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F3gWCOMH.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F3gWCOMH.png" alt="Refreshing the index pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, in addition to the logs of all the different pods, we will be able to search for logs related to the systemd units as well. For example, to view all logs related to the &lt;code&gt;kubelet&lt;/code&gt; service, we will use the&lt;br&gt;
kibana query &lt;code&gt;systemd_unit: kubelet.service&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FikBPxvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FikBPxvo.png" alt="Example logs from kubelet.service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this post we discussed how to set up log forwarding in a Kubernetes cluster using fluent bit. We learned how to read logs, parse them, modify them, and forward them to an Elasticsearch cluster.&lt;br&gt;
Although this article should get you started, it only scratches the surface of fluent bit, and I encourage you to look at other fluent bit features such as &lt;a href="https://docs.fluentbit.io/manual/configuration/monitoring" rel="noopener noreferrer"&gt;monitoring fluent bit itself&lt;/a&gt;, &lt;a href="https://docs.fluentbit.io/manual/configuration/stream_processor" rel="noopener noreferrer"&gt;stream processing&lt;/a&gt;, and the variety of &lt;a href="https://docs.fluentbit.io/manual/input" rel="noopener noreferrer"&gt;inputs&lt;/a&gt; and &lt;a href="https://docs.fluentbit.io/manual/output" rel="noopener noreferrer"&gt;outputs&lt;/a&gt; it supports.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://groups.google.com/forum/#!forum/fluent-bit" rel="noopener noreferrer"&gt;fluent bit&lt;/a&gt; Google group is a great forum for seeking help if you get stuck.&lt;/p&gt;

&lt;h1&gt;
  
  
  Cleaning up
&lt;/h1&gt;

&lt;p&gt;If you want to delete all the resources we created as part of this article, you can delete the two namespaces we created:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete namespace kube-logging demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>logging</category>
      <category>infrastructure</category>
      <category>kubernetes</category>
      <category>fluentbit</category>
    </item>
  </channel>
</rss>
