<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Álvaro Bacelar</title>
    <description>The latest articles on DEV Community by Álvaro Bacelar (@alvarobacelar).</description>
    <link>https://dev.to/alvarobacelar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F264620%2Fcae7e3fc-472e-42cb-9d2e-6bdfebdeb567.jpeg</url>
      <title>DEV Community: Álvaro Bacelar</title>
      <link>https://dev.to/alvarobacelar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alvarobacelar"/>
    <language>en</language>
    <item>
      <title>Monitoring Apache Kafka Consumer Lag</title>
      <dc:creator>Álvaro Bacelar</dc:creator>
      <pubDate>Sun, 29 Mar 2020 20:07:05 +0000</pubDate>
      <link>https://dev.to/kafkabr/monitorando-consumer-lag-do-apache-kafka-2o1d</link>
      <guid>https://dev.to/kafkabr/monitorando-consumer-lag-do-apache-kafka-2o1d</guid>
      <description>&lt;p&gt;O Apache Kafka tem milhares de métricas que podem ser acessadas via interface JMX (&lt;a href="https://medium.com/@alvarobacelar/monitorando-um-cluster-kafka-com-ferramentas-open-source-a4032836dc79?source=friends_link&amp;amp;sk=2e2ae34d66935565a59932b80099dfc1" rel="noopener noreferrer"&gt;nesse artigo&lt;/a&gt; eu mostro como instrumentar tais métricas utilizando ferramentas Open Source). Contudo há uma métrica, não menos importante, que não está disponível via JMX: O Consumer Lag.&lt;/p&gt;

&lt;p&gt;Measured in number of messages, the lag is essentially the difference between the last message produced to a given partition and the last message processed (&lt;em&gt;committed&lt;/em&gt;) by the consumer.&lt;/p&gt;
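&lt;p&gt;The definition above can be sketched in a few lines of Python (a hypothetical helper, not part of any Kafka client API): for each partition, the lag is just the log-end offset minus the last committed offset.&lt;/p&gt;

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Per-partition lag: last produced offset minus last committed offset.

    Both arguments map partition number -> offset. A partition with no
    committed offset yet is reported as lagging by its full log-end offset.
    """
    lag = {}
    for partition, end_offset in log_end_offsets.items():
        committed = committed_offsets.get(partition, 0)
        lag[partition] = max(end_offset - committed, 0)
    return lag


# Partition 0 is fully caught up; partition 1 is 150 messages behind.
print(consumer_lag({0: 1000, 1: 500}, {0: 1000, 1: 350}))  # {0: 0, 1: 150}
```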

&lt;p&gt;According to the book &lt;a href="https://www.confluent.io/resources/kafka-the-definitive-guide/" rel="noopener noreferrer"&gt;&lt;em&gt;"Kafka: The Definitive Guide"&lt;/em&gt;&lt;/a&gt;, the preferred way to monitor lag is an external application that can observe the state of each partition on the broker, tracking the most recent offset produced and the last offset committed by the consumer through the internal control topic __consumer_offsets.&lt;/p&gt;

&lt;p&gt;With that in mind, LinkedIn (the company that created Apache Kafka) also built an application that tracks the partition offsets across the brokers: &lt;a href="https://github.com/linkedin/Burrow" rel="noopener noreferrer"&gt;Burrow&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Burrow is written in &lt;a href="https://golang.org/" rel="noopener noreferrer"&gt;Go&lt;/a&gt; and provides both Consumer Lag and assorted cluster information, exposing these values through a REST API. Burrow can also send notifications by email or webhook once a defined threshold is crossed.&lt;/p&gt;

&lt;p&gt;As mentioned, Burrow serves this information over a REST API, and our goal here is to get it into Prometheus and, from there, into Grafana.&lt;/p&gt;

&lt;p&gt;So I will show how to install and configure Burrow together with &lt;a href="https://github.com/alvarobacelar/burrow_exporter" rel="noopener noreferrer"&gt;burrow_exporter&lt;/a&gt; (also written in Go) to monitor the Consumer Lag of an Apache Kafka cluster and have all the lag information, among other things, in Prometheus.&lt;/p&gt;
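&lt;p&gt;To give a feel for what Burrow serves, here is a minimal sketch that pulls the aggregate lag out of a Burrow v3 consumer-status response. The endpoint path and the &lt;em&gt;status.totallag&lt;/em&gt; field follow Burrow's v3 HTTP API, but treat the exact URL (host, cluster, and group names are placeholders) and the payload shape as assumptions to check against your Burrow version.&lt;/p&gt;

```python
import json
import urllib.request

# Assumed Burrow v3 endpoint; adjust host, cluster, and group to your setup.
BURROW_LAG_URL = "http://localhost:8000/v3/kafka/zoom/consumer/my-group/lag"


def total_lag(status_response):
    """Extract the aggregate lag from a Burrow consumer-status payload."""
    if status_response.get("error"):
        raise RuntimeError(status_response.get("message", "Burrow error"))
    return status_response["status"]["totallag"]


def fetch_total_lag(url=BURROW_LAG_URL):
    """Query Burrow's REST API and return the group's total lag."""
    with urllib.request.urlopen(url) as resp:
        return total_lag(json.load(resp))


# Offline example with the assumed response shape:
sample = {"error": False, "message": "consumer status returned",
          "status": {"cluster": "zoom", "group": "my-group",
                     "status": "OK", "totallag": 42}}
print(total_lag(sample))  # 42
```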

&lt;blockquote&gt;
&lt;p&gt;For all the tests in this article I set up an Apache Kafka cluster with Ansible. &lt;a href="https://medium.com/@alvarobacelar/setup-de-cluster-apache-kafka-com-ansible-df62c8b1017b?source=friends_link&amp;amp;sk=5839522ba2bea31a39d527d523592f97" rel="noopener noreferrer"&gt;Click here&lt;/a&gt; for an article I wrote showing how to set up an Apache Kafka and Zookeeper cluster with Ansible. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  Installing and Configuring Prometheus
&lt;/h3&gt;

&lt;p&gt;To install and configure Prometheus, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the latest stable release of &lt;a href="https://prometheus.io/download/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://github.com/prometheus/prometheus/releases/download/v2.12.0/prometheus-2.12.0.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Descompactando no diretório /srv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  tar -zxvf prometheus-2.12.0.linux-amd64.tar.gz -C /srv/
&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Adicionar usuário de serviço do prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  useradd -s /usr/sbin/nologin prometheus
&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Mudar o dono da pasta do prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  chown prometheus:prometheus /srv/prometheus-2.12.0.linux-amd64/ -R
&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Adicionar o arquivo de service do prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  vim /etc/systemd/system/prometheus.service
&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Prometheus
After=network-online.target

[Service]
Type=simple
User=prometheus
Group=prometheus
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/srv/prometheus-2.12.0.linux-amd64/prometheus --config.file=/srv/prometheus-2.12.0.linux-amd64/prometheus.yml --storage.tsdb.path=/srv/prometheus-2.12.0.linux-amd64/data --web.listen-address=0.0.0.0:9090
LimitNOFILE=65000
LockPersonality=true
NoNewPrivileges=true
MemoryDenyWriteExecute=true
PrivateDevices=true
PrivateTmp=true
ProtectHome=true
RemoveIPC=true
RestrictSUIDSGID=true
ProtectSystem=full
SyslogIdentifier=prometheus
Restart=always

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Executar o daemon-reload e iniciar o serviço
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  systemctl daemon-reload
&lt;/h1&gt;
&lt;h1&gt;
  
  
  systemctl start prometheus
&lt;/h1&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Se tudo tiver dado certo o serviço subiu na porta 9090 e podemos acessar nessa porta: 
http://127.0.0.1:9090

### Instalando o Grafana

A instalação do Grafana é mais fácil, para instala-lo basta acessar o seguinte link https://grafana.com/grafana/download e seguir os passos descrito no link de acordo com seu S.O. 

Depois de instalado suba o serviço
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl start grafana-server
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
O Grafana por padrão sobe na porta 3000, então vamos acessa-lo:
http://127.0.0.1:3000

O usuário e senha default do Grafana é admin, no seu primeiro acesso é solicitado para trocar.

Após acessar o Grafana, você deve adicionar o datasource do Prometheus.

### Instalando e configurando o Burrow

O primeiro passo para instalar o Burrow é realizar o download do código fonte, para isso vá ao repositório oficial do Burrow no link abaixo:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/linkedin/Burrow.git
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Com o código fonte do Burrow, vamos entrar no diretório e *buildar* a imagem Docker: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cd ${PWD}/Burrow
docker build -t burrow-api .
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


&amp;gt;Nesse post irei focar na instalação do Burrow via [Docker](https://www.docker.com/). Para aqueles que preferirem executar o Burrow sem o uso do Docker, é só seguir os passos que estão descritos no [README.rd](https://github.com/linkedin/Burrow#build-and-install) do repositório do Burrow.

Com a imagem do Burrow *buildada* vamos agora realizar o mesmo procedimento com a imagem do burrow_exporter. Baixe o código fonte no link abaixo: 

&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/alvarobacelar" rel="noopener noreferrer"&gt;
        alvarobacelar
      &lt;/a&gt; / &lt;a href="https://github.com/alvarobacelar/burrow_exporter" rel="noopener noreferrer"&gt;
        burrow_exporter
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A Prometheus Exporter for gathering Kafka consumer group info from Burrow
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/alvarobacelar/burrow_exporter.git
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&amp;gt; O repositório acima é um fork do projeto [burrow_exporter](https://github.com/shamil/burrow_exporter) criado por Shamil. Eu solicitei um [Pull Request](https://github.com/shamil/burrow_exporter/pull/1) há um tempo para adicionar alguns recursos extras. Mas o dono do repositório nunca respondeu, então vamos seguir usando o projeto que *forkei*, pois ele retorna um valor a mais que a API do Burrow nos disponibiliza (iremos ver isso logo mais na frente).

Entrando na pasta do do burrow_exporter, que acabamos de baixar, vamos *buildar* a imagem Docker:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build -t burrow-exporter .
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Agora que temos as imagens do Burrow e burrow_exporter criadas, vamos 
criar o arquivo docker-compose.yml para subirmos as duas aplicações posteriormente. O arquivo deve conter o seguinte conteúdo:
```yaml


version: "2"
services:
  burrow:
    image: burrow-api
    volumes:
      - ${PWD}/config:/etc/burrow/
    container_name: burrow_api

  burrow_exporter:
    image: burrow-exporter
    container_name: burrow_exporter
    ports:
      - 8090:8237
    depends_on:
      - burrow
    command: --burrow.address http://burrow:8000


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before starting the containers we need to add the Kafka and Zookeeper servers to Burrow's configuration file. Inside the Burrow directory, edit the &lt;em&gt;burrow.toml&lt;/em&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;vim &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/Burrow/config/burrow.toml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this file we will change the broker and Zookeeper addresses, among other parameters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;

&lt;span class="nn"&gt;[general]&lt;/span&gt;
&lt;span class="py"&gt;pidfile&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"burrow.pid"&lt;/span&gt;
&lt;span class="py"&gt;stdout-logfile&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"burrow.out"&lt;/span&gt;
&lt;span class="py"&gt;access-control-allow-origin&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"mysite.example.com"&lt;/span&gt;

&lt;span class="nn"&gt;[logging]&lt;/span&gt;
&lt;span class="py"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"logs/burrow.log"&lt;/span&gt;
&lt;span class="py"&gt;level&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"info"&lt;/span&gt;
&lt;span class="py"&gt;maxsize&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="py"&gt;maxbackups&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;
&lt;span class="py"&gt;maxage&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="py"&gt;use-localtime&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="py"&gt;use-compression&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="nn"&gt;[zookeeper]&lt;/span&gt;
&lt;span class="py"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;=[&lt;/span&gt; &lt;span class="s"&gt;"zkhost01.example.com:2181"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"zkhost02.example.com:2181"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"zkhost03.example.com:2181"&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c"&gt;# altere para os edereços de seus zookeepers&lt;/span&gt;
&lt;span class="py"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;
&lt;span class="py"&gt;root-path&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/burrow"&lt;/span&gt;

&lt;span class="c"&gt;############################&lt;/span&gt;
&lt;span class="c"&gt;####### CLUSTER zoom #######&lt;/span&gt;
&lt;span class="c"&gt;############################&lt;/span&gt;
&lt;span class="c"&gt;# altere o nome do profile do cliente caso queira&lt;/span&gt;
&lt;span class="nn"&gt;[client-profile.post]&lt;/span&gt;
&lt;span class="py"&gt;client-id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"burrow-monitor"&lt;/span&gt;
&lt;span class="py"&gt;kafka-version&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"2.3.0"&lt;/span&gt; &lt;span class="c"&gt;# altere para a versão 2.3.0 do Kafka&lt;/span&gt;

&lt;span class="c"&gt;# Dê um nome para o seu cluster [cluster.nome]&lt;/span&gt;
&lt;span class="nn"&gt;[cluster.zoom]&lt;/span&gt;
&lt;span class="py"&gt;class-name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"kafka"&lt;/span&gt;
&lt;span class="py"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;=[&lt;/span&gt; &lt;span class="s"&gt;"kafka01.example.com:10251"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"kafka02.example.com:10251"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"kafka03.example.com:10251"&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c"&gt;# altere para os endereços dos seus brokers&lt;/span&gt;
&lt;span class="py"&gt;client-profile&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"post"&lt;/span&gt; &lt;span class="c"&gt;# coloque aqui o nome do client-profile definido acima &lt;/span&gt;
&lt;span class="py"&gt;topic-refresh&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;
&lt;span class="py"&gt;offset-refresh&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;

&lt;span class="c"&gt;# Dê um nome para o seu cluster [cluster.nome]&lt;/span&gt;
&lt;span class="nn"&gt;[consumer.zoom]&lt;/span&gt;
&lt;span class="py"&gt;class-name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"kafka"&lt;/span&gt;
&lt;span class="py"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"zoom"&lt;/span&gt; &lt;span class="c"&gt;# coloque aqui o nome do cluster definido&lt;/span&gt;
&lt;span class="py"&gt;servers&lt;/span&gt;&lt;span class="p"&gt;=[&lt;/span&gt; &lt;span class="s"&gt;"kafka01.example.com:10251"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"kafka02.example.com:10251"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"kafka03.example.com:10251"&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c"&gt;# altere para os endereços dos seus brokers&lt;/span&gt;
&lt;span class="py"&gt;client-profile&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"post"&lt;/span&gt; &lt;span class="c"&gt;# coloque aqui o nome do client-profile definido &lt;/span&gt;
&lt;span class="py"&gt;group-blacklist&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"^(console-consumer-|python-kafka-consumer-|quick-).*$"&lt;/span&gt; &lt;span class="c"&gt;# coloque aqui os nomes dos consumer groups que não quer que apareça&lt;/span&gt;
&lt;span class="py"&gt;group-whitelist&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;
&lt;span class="c"&gt;################################&lt;/span&gt;
&lt;span class="c"&gt;####### FIM CLUSTER zoom #######&lt;/span&gt;
&lt;span class="c"&gt;################################&lt;/span&gt;
&lt;span class="c"&gt;# Repita isso para quantos clusters tiver&lt;/span&gt;

&lt;span class="nn"&gt;[httpserver.default]&lt;/span&gt;
&lt;span class="py"&gt;address&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;":8000"&lt;/span&gt;

&lt;span class="nn"&gt;[storage.default]&lt;/span&gt;
&lt;span class="py"&gt;class-name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"inmemory"&lt;/span&gt;
&lt;span class="py"&gt;workers&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;
&lt;span class="py"&gt;intervals&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;
&lt;span class="py"&gt;expire-group&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;604800&lt;/span&gt;
&lt;span class="py"&gt;min-distance&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;

&lt;span class="c"&gt;## Vamos deixar comentado por enquanto&lt;/span&gt;
&lt;span class="c"&gt;# [notifier.default]&lt;/span&gt;
&lt;span class="c"&gt;# class-name="http"&lt;/span&gt;
&lt;span class="c"&gt;# url-open="http://someservice.example.com:1467/v1/event"&lt;/span&gt;
&lt;span class="c"&gt;# interval=60&lt;/span&gt;
&lt;span class="c"&gt;# timeout=5&lt;/span&gt;
&lt;span class="c"&gt;# keepalive=30&lt;/span&gt;
&lt;span class="c"&gt;# extras={ api_key="REDACTED", app="burrow", tier="STG", fabric="mydc" }&lt;/span&gt;
&lt;span class="c"&gt;# template-open="conf/default-http-post.tmpl"&lt;/span&gt;
&lt;span class="c"&gt;# template-close="conf/default-http-delete.tmpl"&lt;/span&gt;
&lt;span class="c"&gt;# method-close="DELETE"&lt;/span&gt;
&lt;span class="c"&gt;# send-close=true&lt;/span&gt;
&lt;span class="c"&gt;# threshold=1&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the docker-compose.yml file created and burrow.toml updated with your cluster addresses, we can start both applications:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that both containers are running, let's configure Prometheus to scrape the metrics.&lt;/p&gt;

&lt;p&gt;In the Prometheus configuration file (vim /srv/prometheus-2.12.0.linux-amd64/prometheus.yml), add the following lines under scrape_configs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;burrow'&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:8090'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Replace the IP &lt;em&gt;127.0.0.1&lt;/em&gt; with the IP of the server running the container we started above.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For Prometheus to pick up these settings we need to reload the service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# systemctl reload prometheus


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
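&lt;p&gt;Once Prometheus is scraping the exporter, a quick sanity check in the Prometheus UI (http://127.0.0.1:9090) is worthwhile. The &lt;em&gt;up&lt;/em&gt; metric below is guaranteed by Prometheus itself; the lag metric name shown second is only an assumption about what burrow_exporter exposes, so confirm it against the exporter's /metrics endpoint:&lt;/p&gt;

```
# 1 means the burrow job's target is being scraped successfully
up{job="burrow"}

# assumed exporter metric name -- verify it on the /metrics endpoint
sum by (group) (kafka_burrow_total_lag)
```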

&lt;p&gt;Now let's import the dashboard into Grafana. The link below is the dashboard built for these metrics:&lt;br&gt;
&lt;a href="https://grafana.com/grafana/dashboards/11963" rel="noopener noreferrer"&gt;https://grafana.com/grafana/dashboards/11963&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have imported the dashboard above and everything has gone well, you will see something like the image below:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ssukt9dwyzfwqfq1wsm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ssukt9dwyzfwqfq1wsm.png" alt="Grafana dashboard showing Kafka consumer lag"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember I mentioned above that we would use the repository I forked? Well, if you look at the dashboard we imported you will see a table column called &lt;em&gt;Consumer client&lt;/em&gt;. That is the field I added in the PR the repository owner never accepted. With it we know the IP of the server consuming from each specific partition.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The key to keeping your Apache Kafka cluster (or any other application) fully available and reliable is a mature monitoring stack. As this article shows, there is no need to spend a &lt;em&gt;fortune&lt;/em&gt; on monitoring software licenses expecting some product to work magic for you. With open-source tools we can take the best each one has to offer and build a very mature monitoring and alerting stack. &lt;/p&gt;

</description>
      <category>kafka</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
