<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thiago Nascimento Figueiredo</title>
    <description>The latest articles on DEV Community by Thiago Nascimento Figueiredo (@tnfigueiredo).</description>
    <link>https://dev.to/tnfigueiredo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F614229%2F8045581b-1f17-42e9-af66-4c9b1351c36b.jpeg</url>
      <title>DEV Community: Thiago Nascimento Figueiredo</title>
      <link>https://dev.to/tnfigueiredo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tnfigueiredo"/>
    <language>en</language>
    <item>
      <title>A brief example about message communication with Kafka</title>
      <dc:creator>Thiago Nascimento Figueiredo</dc:creator>
      <pubDate>Fri, 20 Jan 2023 01:24:24 +0000</pubDate>
      <link>https://dev.to/tnfigueiredo/a-brief-example-about-message-communication-with-kafka-li6</link>
      <guid>https://dev.to/tnfigueiredo/a-brief-example-about-message-communication-with-kafka-li6</guid>
      <description>&lt;p&gt;This article came up as an idea when I was reviewing some content I already had studied and I was working with. Since I needed to study a few more issues related to Kafka usage into application solutions I got the idea to share a few information related to that. It is a very entry level article. I'm not a Kafka specialist, but I already worked with solutions that needed to use asynchronous communication model and message systems. Based on this first information I'll try to approach the subject covering the following items:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some considerations on using synchronous and asynchronous communication in solutions;&lt;/li&gt;
&lt;li&gt;A sample of Kafka usage in an asynchronous communication scenario (a basic producer and consumer implementation).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The idea is &lt;strong&gt;not&lt;/strong&gt; to explore the trade-offs of Kafka as a technology or platform, so there will be no deep dive in that direction. This &lt;a href="https://aws.amazon.com/pt/msk/what-is-kafka/" rel="noopener noreferrer"&gt;AWS link&lt;/a&gt; about Amazon Managed Streaming for Apache Kafka brings a few comparisons between Kafka and RabbitMQ, which can give an idea of how Kafka differs from other messaging tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Synchronous and asynchronous communication
&lt;/h2&gt;

&lt;p&gt;Over the years I have worked in IT, I have often seen developers struggle to distinguish when to use synchronous and when to use asynchronous communication in a solution. Most of the time, some considerations are missing from the evaluation of the requirements that drive the design, or a mistaken judgment is made about which approach suits the problem. Understanding the main characteristics of both approaches is very helpful for this evaluation.&lt;br&gt;
&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Synchronous vs. Asynchronous integration
&lt;/h3&gt;

&lt;p&gt;In summary, the main point that distinguishes synchronous from asynchronous communication is whether a response is needed to keep processing. When a response is required to proceed, a synchronous approach is usually the better fit. When it is possible or necessary to detach the producer's behavior from the consumer's behavior, so that no response is needed to keep processing, an asynchronous approach will almost always fit better.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqujbemyn738xp5yjqi21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqujbemyn738xp5yjqi21.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is worth reinforcing that there is no silver bullet in solution design. Every choice comes with trade-offs to be analysed: the chosen option has strengths and problems to be handled, and some requirements may even rule out certain solution options. For example, it is possible to design an asynchronous communication with APIs if your corporation follows an API-driven approach only: the service answers the request with HTTP status code 202 to indicate that the work will be processed later, and a callback endpoint is invoked when the processing of that request finishes. But this approach has its &lt;a href="https://aws.amazon.com/pt/blogs/architecture/managing-asynchronous-workflows-with-a-rest-api/" rel="noopener noreferrer"&gt;trade-offs&lt;/a&gt;.&lt;/p&gt;
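&lt;p&gt;A minimal sketch of that accepted-then-callback shape, with a &lt;code&gt;CompletableFuture&lt;/code&gt; standing in for the deferred processing and a function parameter standing in for the client's callback endpoint (the names here are illustrative, not taken from any real service):&lt;/p&gt;

```kotlin
import java.util.UUID
import java.util.concurrent.CompletableFuture

// The immediate answer to the request: the HTTP 202 analogue plus a ticket id
// the client can hold on to while the work is still running.
data class Accepted(val status: Int, val ticketId: String)

fun submit(payload: String, callback: (String) -> Unit): Accepted {
    val ticket = UUID.randomUUID().toString()
    CompletableFuture.runAsync {
        Thread.sleep(50)                          // the long-running processing
        callback("ticket $ticket done: $payload") // "calls back" when it finishes
    }
    return Accepted(202, ticket)                  // acknowledge right away
}

fun main() {
    val ack = submit("generate-report") { result -> println(result) }
    println("accepted with status ${ack.status}, ticket ${ack.ticketId}")
    Thread.sleep(200)                             // demo only: wait for the callback
}
```

&lt;p&gt;The caller is never blocked on the processing itself, but it now has to manage tickets and expose a callback, which is part of the trade-off of forcing an asynchronous flow through a request/response protocol.&lt;/p&gt;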

&lt;p&gt;When considering Kafka as a component of your solution, in most cases its usage will fit an &lt;a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/index.html" rel="noopener noreferrer"&gt;Enterprise Integration Pattern related to Messaging Patterns&lt;/a&gt;. Kafka combines two messaging models (queuing and publish-subscribe) to provide consumers with the key benefits of each. More details about this can be found in this &lt;a href="https://aws.amazon.com/pt/msk/what-is-kafka/" rel="noopener noreferrer"&gt;AWS link&lt;/a&gt; about Amazon Managed Streaming for Apache Kafka.&lt;br&gt;
&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Producer and Consumer example using Kafka
&lt;/h2&gt;

&lt;p&gt;Apache Kafka is a real-time data streaming technology capable of handling trillions of events per day. It is commonly used to build real-time streaming data pipelines and event-driven applications. It provides a distributed, high-throughput, low-latency, fault-tolerant platform for handling real-time data feeds, known as events. More details can be found on the &lt;a href="https://kafka.apache.org/documentation/" rel="noopener noreferrer"&gt;Apache Kafka official page&lt;/a&gt;. The main idea here is to show a sample of Kafka usage to send and consume messages for asynchronous communication.&lt;/p&gt;

&lt;p&gt;To illustrate a producer and a consumer, two sample Kotlin applications were created: they consume tweets matching some parameters and save them in an ElasticSearch repository so that the tweets' content can be searched. In a real-world application this could be used to track subjects or themes that are trending topics for some specific reason. More details about the implementation itself can be checked &lt;a href="https://github.com/tnfigueiredo/kafka-twitter-dempo-app" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This sample does not deeply explore the set of &lt;a href="https://kafka.apache.org/documentation/#consumerconfigs" rel="noopener noreferrer"&gt;consumer configurations&lt;/a&gt; or &lt;a href="https://kafka.apache.org/documentation/#producerconfigs" rel="noopener noreferrer"&gt;producer configurations&lt;/a&gt;. The main idea is to highlight a simple implementation that produces messages/events to Kafka and consumes them. The source of the information is the filtered tweets. All the components were wired together with docker-compose to make the sample easier to handle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fwww.plantuml.com%2Fplantuml%2Fproxy%3Fsrc%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Ftnfigueiredo%2Fkafka-twitter-dempo-app%2Fmain%2Fkafka-twitter-producer-consumer-demo.puml" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fwww.plantuml.com%2Fplantuml%2Fproxy%3Fsrc%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Ftnfigueiredo%2Fkafka-twitter-dempo-app%2Fmain%2Fkafka-twitter-producer-consumer-demo.puml" alt="cached image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In both applications, configuration components were created to hold all the Kafka settings for the consumer and the producer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/tnfigueiredo/kafka-twitter-dempo-app/blob/main/kafka-twitter-consumer-app/src/main/kotlin/com/example/kafkatweeter/consumer/app/config/KafkaConfigurator.kt" rel="noopener noreferrer"&gt;Consumer configuration&lt;/a&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;   &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;consumerFactory&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;ConsumerFactory&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;?&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MutableMap&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HashMap&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ConsumerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BOOTSTRAP_SERVERS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;servers&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ConsumerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;KEY_DESERIALIZER_CLASS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StringDeserializer&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ConsumerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;VALUE_DESERIALIZER_CLASS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StringDeserializer&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;
        &lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ConsumerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AUTO_OFFSET_RESET_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"earliest"&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;DefaultKafkaConsumerFactory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;kafkaListenerContainerFactory&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;ConcurrentKafkaListenerContainerFactory&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;?&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;factory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ConcurrentKafkaListenerContainerFactory&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;()&lt;/span&gt;
        &lt;span class="n"&gt;factory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;consumerFactory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;consumerFactory&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;factory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;containerProperties&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ackMode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ContainerProperties&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AckMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MANUAL_IMMEDIATE&lt;/span&gt;
        &lt;span class="n"&gt;factory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;containerProperties&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;isSyncCommits&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;factory&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/tnfigueiredo/kafka-twitter-dempo-app/blob/main/kafka-twitter-producer-app/src/main/kotlin/com/tnfigueiredo/kafkatweeter/producer/app/config/KafkaConfigurator.kt" rel="noopener noreferrer"&gt;Producer configuration&lt;/a&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;producerFactory&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;ProducerFactory&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MutableMap&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HashMap&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BOOTSTRAP_SERVERS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;servers&lt;/span&gt;
        &lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ENABLE_IDEMPOTENCE_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;
        &lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ACKS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"all"&lt;/span&gt;
        &lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;COMPRESSION_TYPE_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"snappy"&lt;/span&gt;
        &lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;LINGER_MS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
        &lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;KEY_SERIALIZER_CLASS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StringSerializer&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;
        &lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ProducerConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;VALUE_SERIALIZER_CLASS_CONFIG&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StringSerializer&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;DefaultKafkaProducerFactory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;configProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;kafkaTemplate&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;KafkaTemplate&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;KafkaTemplate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;producerFactory&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the tweet message is a big JSON object, I did not try to map it to a typed object; it is handled as a JSON String on both sides of the communication. The details of the tweet consumption are in the mentioned &lt;a href="https://github.com/tnfigueiredo/kafka-twitter-dempo-app" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. After a tweet is read, it is published to the Kafka topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Component&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;KafkaTweetsProducer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;kafkaTemplate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;KafkaTemplate&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;companion&lt;/span&gt; &lt;span class="k"&gt;object&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;LOGGER&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LoggerFactory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;KafkaTweetsProducer&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;LOGGER&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Tweet message: {}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;kafkaTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tweets-message-topic"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The consumer reads the tweet from the Kafka topic as it arrives and saves it to the ElasticSearch repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Component&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;KafkaTweetsConsumer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;elasticSearchClient&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;RestHighLevelClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nd"&gt;@Value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"\${elasticsearch.index}"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;tweeterIndex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;companion&lt;/span&gt; &lt;span class="k"&gt;object&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;LOGGER&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LoggerFactory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;KafkaTweetsConsumer&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;java&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@KafkaListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topics&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"tweets-message-topic"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;groupId&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"simple-kotlin-tweets-message-consumer"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;consume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;LOGGER&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"got tweet: {}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;indexRequest&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;IndexRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tweeterIndex&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;source&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;XContentType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;indexResponse&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;elasticSearchClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;indexRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;RequestOptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DEFAULT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nc"&gt;LOGGER&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ElasticSearch id: {}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indexResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The README file in the repository has instructions to configure and run the sample. The result of the communication between the producer and the consumer can be seen in the Kibana interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Ftnfigueiredo%2Fkafka-twitter-dempo-app%2Fmain%2Fkibana-criteria-search.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Ftnfigueiredo%2Fkafka-twitter-dempo-app%2Fmain%2Fkibana-criteria-search.png" alt="cached image"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final considerations
&lt;/h2&gt;

&lt;p&gt;Kafka, as a technology and a tool, has many really powerful use cases. It is worth exploring for microservices, event-driven architectures, asynchronous communication, real-time processing, streaming data, and several other scenarios. I hope this article can be useful as an entry-level read on the subject.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.proofhub.com/articles/asynchronous-communication" rel="noopener noreferrer"&gt;&lt;strong&gt;Asynchronous Communication; The What, The Why, And The How&lt;/strong&gt;&lt;/a&gt; - Sandeep Kashyap&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/dhruvesh_patel/apache-kafka-fundamentals-use-cases-and-trade-offs-9e4"&gt;&lt;strong&gt;Apache Kafka - Fundamentals, Use cases and Trade-Offs&lt;/strong&gt;&lt;/a&gt; - Dhruvesh Patel&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kafka.apache.org/documentation/" rel="noopener noreferrer"&gt;&lt;strong&gt;Apache Kafka Documentation&lt;/strong&gt;&lt;/a&gt; - Apache Kafka&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.confluent.io/what-is-apache-kafka/" rel="noopener noreferrer"&gt;&lt;strong&gt;What is Kafka?&lt;/strong&gt;&lt;/a&gt; - Confluent&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/pt/msk/what-is-kafka/" rel="noopener noreferrer"&gt;&lt;strong&gt;What is Kafka?&lt;/strong&gt;&lt;/a&gt; - AWS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/pt/blogs/architecture/managing-asynchronous-workflows-with-a-rest-api/" rel="noopener noreferrer"&gt;&lt;strong&gt;Managing Asynchronous Workflows with a REST API&lt;/strong&gt;&lt;/a&gt; - AWS&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>beginners</category>
      <category>programming</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Web APIs - Different approaches and how to choose</title>
      <dc:creator>Thiago Nascimento Figueiredo</dc:creator>
      <pubDate>Fri, 21 May 2021 14:17:58 +0000</pubDate>
      <link>https://dev.to/tnfigueiredo/web-apis-different-approaches-and-how-to-choose-4l6b</link>
      <guid>https://dev.to/tnfigueiredo/web-apis-different-approaches-and-how-to-choose-4l6b</guid>
      <description>&lt;p&gt;Web APIs are one of the most common resources used for design several solutions related to integration scenarios, microservice applications, and other very common solutions that are built on top of the HTTP protocol.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24983y4bq48uvcuc0dx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24983y4bq48uvcuc0dx2.png" alt="API's"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we look at Web APIs as a solution alternative, we can see that they became an interesting option because they work over a widely used, interoperable protocol. This option mitigates several problems we might face when building a system's integrations and features. Since we are talking about APIs, it is important to understand some principles that should guide a solution approach: APIs (application programming interfaces) are most commonly expressed as a set of operations, associated data definitions, and the semantics of the operations on some underlying system (David Emery, Association for Computing Machinery).&lt;/p&gt;

&lt;p&gt;There are other similar definitions, but taking this one as a basis we can see that semantics are very important when designing an API. An API's semantics depend on the Web API design approach, its usage of HTTP verbs, the definitions of application operations, and other elements. At several moments, development teams will ask themselves which approach to use. Those choices come down to trade-off analysis, and the best tool to guide that analysis is understanding the Web API design approaches well enough to check which one best fits your problem requirements.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Web API Design approaches
&lt;/h1&gt;

&lt;p&gt;When evaluating possible Web API design approaches, most choices fall into a few options: RPC, REST, or a "query language" API style such as GraphQL. Basically, those are concepts of how to design your API, following specifications often drafted by various working groups. For example, SOAP is a W3C recommendation, gRPC was authored by Google, and REST is a paradigm with no organization or group responsible for it.&lt;/p&gt;

&lt;p&gt;By checking the characteristics of each design style, it becomes possible to start evaluating which approach matches your problem. For the RPC style, much depends on the specific API implementation; REST and GraphQL are more tightly bound to their standards and paradigms.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  RPC
&lt;/h2&gt;

&lt;p&gt;RPC APIs are about executing a block of code on another server, which, when exposed over HTTP or AMQP, becomes a Web API. It is possible to say that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They are action-based;&lt;/li&gt;
&lt;li&gt;The API specification depends on the implementation;&lt;/li&gt;
&lt;li&gt;They can be stateless or stateful;&lt;/li&gt;
&lt;li&gt;No specific semantic definition:

&lt;ul&gt;
&lt;li&gt;No HTTP verbs best practice recommendation;&lt;/li&gt;
&lt;li&gt;No HTTP status code usage recommendation;&lt;/li&gt;
&lt;li&gt;Best practices recommendations depend on the chosen RPC API.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Developing an RPC API is similar to creating programming libraries. The names of the actions appear in the URI and can be compared to function invocations; parameters and return values are defined according to the operation's needs, in the query string or body, and modern implementations can make simple interactions with high performance. There is no discoverability (How to start? What to call?). In general, RPC APIs can become hard to understand and maintain due to the tendency to create new endpoints whenever new functions are needed, which creates a tightly coupled solution.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
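
&lt;p&gt;As a rough sketch, not tied to any particular framework, the hypothetical JSON-RPC 2.0 request below shows the "remote function call" flavor of RPC: the action name travels with the request, and the &lt;code&gt;calculator.add&lt;/code&gt; procedure is an invented example:&lt;/p&gt;

```java
// A hypothetical JSON-RPC 2.0 request body: the remote "calculator.add"
// procedure is invoked much like a local function call would be.
public class RpcSketch {

    // Builds the request payload for a hypothetical remote procedure.
    static String buildRequest(String method, int a, int b, int id) {
        return String.format(
            "{\"jsonrpc\":\"2.0\",\"method\":\"%s\",\"params\":[%d,%d],\"id\":%d}",
            method, a, b, id);
    }

    public static void main(String[] args) {
        // POSTing this body to a single endpoint (e.g. /rpc) would execute
        // the corresponding block of code on the remote server.
        System.out.println(buildRequest("calculator.add", 2, 3, 1));
    }
}
```

&lt;p&gt;Note that nothing in the payload tells a new client which procedures exist or how to call them, which is exactly the discoverability gap mentioned above.&lt;/p&gt;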

&lt;h2&gt;
  
  
  REST
&lt;/h2&gt;

&lt;p&gt;REST is a paradigm described by Roy Fielding in his doctoral dissertation in 2000. It describes a client-server relationship where server-side data is made available through representations of data in simple formats, most commonly JSON or XML, though it could be anything. To be considered RESTful, an API must fit the constraints described in that dissertation. When describing a RESTful API we can say that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is resource-based;&lt;/li&gt;
&lt;li&gt;Nouns of the resources are used to define URIs;&lt;/li&gt;
&lt;li&gt;It is stateless;&lt;/li&gt;
&lt;li&gt;There is a strict semantic definition based on HTTP:

&lt;ul&gt;
&lt;li&gt;It has HTTP verbs and status code semantic usage;&lt;/li&gt;
&lt;li&gt;Resources and their relationships represented in the API’s URIs;&lt;/li&gt;
&lt;li&gt;Content negotiation for different message formats;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Service URIs are easily readable due to their strict semantics;&lt;/li&gt;

&lt;li&gt;Hypermedia allows actions and relationships to be made discoverable.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszm2clyx6se8lpueapry.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszm2clyx6se8lpueapry.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;RESTful APIs are considered a good option when the problem involves common workflows to be followed, caching of resource information, no very tight payload-size restrictions, content negotiation to serve responses to different clients, and similar concerns.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
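
&lt;p&gt;The verb/noun semantics above can be illustrated with a small sketch; the &lt;code&gt;/orders&lt;/code&gt; URIs are hypothetical, and the mapping simply spells out the conventional meaning of each HTTP verb applied to a resource:&lt;/p&gt;

```java
import java.util.Map;

// Illustrates REST semantics: resource nouns live in the URI and the HTTP
// verb carries the action. The /orders URIs are hypothetical examples.
public class RestSemantics {

    // Maps an HTTP verb applied to a resource URI to its conventional meaning.
    static String describe(String verb, String uri) {
        var semantics = Map.of(
            "GET", "read",
            "POST", "create",
            "PUT", "replace",
            "PATCH", "partially update",
            "DELETE", "remove");
        return semantics.getOrDefault(verb, "unknown") + " " + uri;
    }

    public static void main(String[] args) {
        System.out.println(describe("GET", "/orders/42"));     // read /orders/42
        System.out.println(describe("POST", "/orders"));       // create /orders
        System.out.println(describe("DELETE", "/orders/42"));  // remove /orders/42
    }
}
```

&lt;p&gt;Because the verbs already carry the action, the URIs can stay noun-based, which is what keeps RESTful services readable and predictable.&lt;/p&gt;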

&lt;h2&gt;
  
  
  GraphQL and "Query language" APIs
&lt;/h2&gt;

&lt;p&gt;GraphQL was built by Facebook. It is a data-driven API design approach that provides a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL can essentially be considered an RPC API style with a single default procedure, incorporating a lot of good ideas from the REST/HTTP community. Its main characteristics are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stateless;&lt;/li&gt;
&lt;li&gt;It is based on entity graphs and entities are not identified by URIs;&lt;/li&gt;
&lt;li&gt;Custom data recovery:

&lt;ul&gt;
&lt;li&gt;Ask for specific resources and specific fields;&lt;/li&gt;
&lt;li&gt;Operation parameters and return values are parameterized;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Reduced number of HTTP requests necessary to retrieve data for multiple resources;&lt;/li&gt;

&lt;li&gt;Low network overhead.


&lt;/li&gt;

&lt;/ul&gt;
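
&lt;p&gt;A minimal sketch of such a query, with invented entity and field names, shows how a client asks for exactly the fields it needs in a single request:&lt;/p&gt;

```java
// A single, hypothetical GraphQL query that gathers a user, their posts,
// and their followers in one round trip instead of several REST calls.
public class GraphQLSketch {

    // Builds the query document; the entity and field names are illustrative.
    static String userQuery(int id) {
        return "query {\n"
             + "  user(id: " + id + ") {\n"
             + "    name\n"
             + "    posts { title }\n"
             + "    followers { name }\n"
             + "  }\n"
             + "}";
    }

    public static void main(String[] args) {
        // The client names only the fields it needs, so the payload stays small.
        System.out.println(userQuery(42));
    }
}
```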

&lt;h1&gt;
  
  
  How to choose?
&lt;/h1&gt;

&lt;p&gt;It is important to look into the requirements the application needs to meet, so that the different API design approaches can be compared to find the one that best fits the problem to be solved. Based on the issues highlighted for each API style, it is possible to illustrate some scenarios.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0njpmridkyz3f9jeidmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0njpmridkyz3f9jeidmn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Among the approaches presented, evaluating RPC APIs depends on the specific implementation chosen. For example, gRPC can be used in scenarios where a system routinely requires a set amount of data or processing, and where the requester is either low-powered or resource-jealous (IoT is a great example of this). SOAP, in turn, fits interactions that need asynchronous processing and invocation, formal contracts, and stateful operations.&lt;/p&gt;

&lt;p&gt;One good scenario to explain where REST fits comes from microservice applications, where different bounded contexts must be decoupled. Things within a context can treat their own APIs as private, and changes can be made whenever needed without affecting other domains. It is even possible to use an RPC approach when pieces inside a bounded context need to communicate internally, but communication among different contexts should be loosely coupled. Another scenario is when the interaction between client and server requires a workflow over a set of well-defined resources (for example, a payment workflow).&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxon0l6o8eu2nya3yjgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxon0l6o8eu2nya3yjgc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a scenario where GraphQL would be a good fit, consider a client that needs several requests or iterations to recover the information it is after. For example, let's suppose it must call the “/users/” endpoint to fetch the initial user data, then a “/users//posts” endpoint to return all the posts for a user, and then a “/users//followers” endpoint to return the list of followers per user.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio1hqjcpc3nweskikkzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio1hqjcpc3nweskikkzx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Evaluated from the perspective of a data-driven API, this information could be recovered in a single interaction. GraphQL can also be considered when the relationships between resources are hard to handle through a set of interactions with RESTful resources, or when a customized set of fields from different resources must be recovered to compose a single response.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd7zbst9hjopvym90e2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd7zbst9hjopvym90e2z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One thing to keep in mind is that there is no silver bullet. Tackle the problem with the tool whose characteristics match it. Sometimes there will be more than one viable option, each with different trade-offs attached to the choice.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;p&gt;Standards, APIs, Interfaces and Bindings&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;David Emery - Association for Computing Machinery: &lt;a href="http://oldwww.acm.org/tsc/apis.html" rel="noopener noreferrer"&gt;http://oldwww.acm.org/tsc/apis.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding RPC, REST and GraphQL&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Phil Sturgeon: &lt;a href="https://apisyouwonthate.com/blog/understanding-rpc-rest-and-graphql" rel="noopener noreferrer"&gt;https://apisyouwonthate.com/blog/understanding-rpc-rest-and-graphql&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Picking the right API Paradigm&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Phil Sturgeon: &lt;a href="https://apisyouwonthate.com/blog/picking-the-right-api-paradigm" rel="noopener noreferrer"&gt;https://apisyouwonthate.com/blog/picking-the-right-api-paradigm&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GraphQL — The Query language for APIs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Arun Rajeevan: &lt;a href="https://arunrajeevan.medium.com/graphql-the-query-language-for-apis-4e79ae303100" rel="noopener noreferrer"&gt;https://arunrajeevan.medium.com/graphql-the-query-language-for-apis-4e79ae303100&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When to Use What: REST, GraphQL, Webhooks, &amp;amp; gRPC&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kristopher Sandoval: &lt;a href="https://nordicapis.com/when-to-use-what-rest-graphql-webhooks-grpc/" rel="noopener noreferrer"&gt;https://nordicapis.com/when-to-use-what-rest-graphql-webhooks-grpc/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SOAP vs REST 101: Understand The Differences&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SoapUI: &lt;a href="https://www.soapui.org/learn/api/soap-vs-rest-api/" rel="noopener noreferrer"&gt;https://www.soapui.org/learn/api/soap-vs-rest-api/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>design</category>
    </item>
    <item>
      <title>Logs - Why, good practices, and recommendations</title>
      <dc:creator>Thiago Nascimento Figueiredo</dc:creator>
      <pubDate>Wed, 14 Apr 2021 11:05:19 +0000</pubDate>
      <link>https://dev.to/tnfigueiredo/logs-why-good-practices-and-recommendations-ojd</link>
      <guid>https://dev.to/tnfigueiredo/logs-why-good-practices-and-recommendations-ojd</guid>
      <description>&lt;h1&gt;
  
  
  Why is logging important?
&lt;/h1&gt;

&lt;p&gt;Logging information is important for understanding the behavior of the application and for diagnosing problems in several different scenarios. During the application's lifetime, it will eventually crash, a server will go down, users may complain about a bug that “randomly” appears, or the client may realize that some data is missing with no clue when or why it got lost. These are examples of scenarios where it is necessary to understand application behavior throughout the development lifecycle, track problems during QA validation and incidents, and handle other possible situations. That is why the application's logs must be designed, implemented, and tested.&lt;/p&gt;

&lt;p&gt;Considering the importance of logs, it is necessary to define for the application what, how, and when to log, and even how to extract meaningful data from the logs. This brings the need to define a structure for log messages, so that the needed information can be extracted (most of the time with the support of tools).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pjx40st0dbh4kyr5i1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pjx40st0dbh4kyr5i1j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another issue worth mentioning is that log messages need to be emitted in every layer of the architecture, with meaningful and sufficient contextual information. Having at least one log entry per request/result in every layer of the application makes it possible to provide accurate context about what the user was doing when a specific error or situation happened.&lt;/p&gt;

&lt;p&gt;Once the structure of the log messages is defined, the next step is to be clear about what and how to log. The application's log messages need to report errors (for troubleshooting purposes), but also useful information about successful requests, to give a clear idea of how users work with the application. This is where log levels come in, relating each message to the purpose of the information it conveys.&lt;br&gt;
 &lt;/p&gt;

&lt;h1&gt;
  
  
  Logging levels
&lt;/h1&gt;

&lt;p&gt;A good logging strategy needs to take into consideration the proper usage of log levels for its messages. This is essential to avoid “noise” when looking for information and tracking possible evidence to evaluate a problem or analyze the application behavior. Done properly, it means that when a problem comes up, you are likely to have the proper contextual information to analyze.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aydlw8ixqvc3tdt782g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aydlw8ixqvc3tdt782g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are frameworks to help handle log messages in the different programming languages, and the definition of the log levels can vary slightly among framework implementations, but they follow the same semantic purpose. The following example shows this for a few Java frameworks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhnbyb2wbdef3i2vk4ds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhnbyb2wbdef3i2vk4ds.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The log level information has well-defined usage recommendations for each situation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatd49gel95khv4dyt7kn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatd49gel95khv4dyt7kn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
 &lt;/p&gt;
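
&lt;p&gt;As a minimal illustration using the JDK's built-in &lt;code&gt;java.util.logging&lt;/code&gt; (where FINE and FINEST roughly correspond to the Debug and Trace levels above), each message is emitted at the level that matches its purpose; the messages themselves are invented examples:&lt;/p&gt;

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of level usage with the JDK's built-in java.util.logging.
// JUL's FINE/FINEST roughly map to the Debug/Trace levels discussed above.
public class LogLevelsDemo {
    private static final Logger LOG = Logger.getLogger(LogLevelsDemo.class.getName());

    public static void main(String[] args) {
        LOG.severe("Unrecoverable failure during start-up");  // ~ Fatal/Critical
        LOG.warning("Retrying after a transient timeout");    // abnormal, not an error
        LOG.info("Order 42 processed successfully");          // important normal event
        LOG.fine("Entering processOrder(42)");                // ~ Debug
        LOG.finest("orderTotal=19.90, currency=EUR");         // ~ Trace

        // Levels are ordered by an integer value, so a single configured
        // threshold decides which messages are written and which are dropped.
        System.out.println(Level.FINE.intValue());
        System.out.println(Level.INFO.intValue());
    }
}
```

&lt;p&gt;Under the JDK's default configuration the threshold is INFO, so the FINE and FINEST messages above would be suppressed until the level is lowered.&lt;/p&gt;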

&lt;h1&gt;
  
  
  Good practice recommendations
&lt;/h1&gt;

&lt;p&gt;Given the reasons presented about application logging, there are a few good practices and recommendations to follow in order to produce log messages that are meaningful and useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  General good practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set the current log level via external configuration: changing the current log level should always be an operational task performed at runtime. This allows fast troubleshooting and situation analysis without stopping the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Default log level configuration by environment: for the production environment it is interesting to have the default log level set to “Information”. This allows Fatal, Error, Warning, and Information log entries to be written, while Debug and Trace entries are suppressed. Environments that need more detailed information can have the default log level set to Debug or Trace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The proper log level is important to avoid noise: log application information with the proper log level avoids mistaken interpretation between data that represent problems and data that represents application behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the Warning and Information Log Levels properly: Warning should be for logs that are not errors, but are not normal behavior. Information should be for application normal execution when something important is to be communicated.&lt;/li&gt;
&lt;li&gt;Log catastrophic failures using Critical/Fatal: when there is an unrecoverable error during the application start-up or execution log it using the Critical log level. Tools can use this log level for emitting alerts.&lt;/li&gt;
&lt;li&gt;Step through code using the Debug log level: when your application is misbehaving, you need enhanced visibility into what is going on. Good use of the Debug log level can give you more detail about what is occurring.&lt;/li&gt;
&lt;li&gt;Inspect variables using the Trace log level: The use of trace log level to write the values of variables, parameters, and settings to the logs for inspection can help you when the debug log level isn’t enough. This mimics the behavior of inspecting the contents of variables with the debugger attached.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Never log personally identifying information or secrets: this is important because of GDPR, CCPA, and other privacy laws, and for security reasons.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
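
&lt;p&gt;The first practice above, changing the log level via external configuration at runtime, can be sketched with &lt;code&gt;java.util.logging&lt;/code&gt;; the &lt;code&gt;app.orders&lt;/code&gt; logger name and the source of the configured value are assumptions for illustration:&lt;/p&gt;

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of adjusting a logger's level at runtime from an externally supplied
// value. The "app.orders" logger name is a hypothetical example.
public class RuntimeLogLevel {

    // In a real system, configuredLevel would come from an environment
    // variable, a config service, or an admin endpoint, not be hard-coded.
    static Logger applyConfiguredLevel(String loggerName, String configuredLevel) {
        Logger logger = Logger.getLogger(loggerName);
        logger.setLevel(Level.parse(configuredLevel));
        return logger;
    }

    public static void main(String[] args) {
        // Enable debug-style output for one subsystem without a restart.
        Logger orders = applyConfiguredLevel("app.orders", "FINE");
        System.out.println(orders.getLevel());  // FINE
    }
}
```

&lt;p&gt;Most logging frameworks offer an equivalent mechanism, which is what makes runtime level changes an operational rather than a development task.&lt;/p&gt;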

&lt;h2&gt;
  
  
  Events log good practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use clear key-value pairs: tools extract fields from events when you search, creating structure out of unstructured data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create events that humans can read: avoid complex encodings that would require lookups to make event information intelligible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use timestamps for every event: The correct time is critical to understanding the proper sequence of events, for debugging, analytics, and deriving transactions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use correlation IDs for tracking operations: Correlation ID (also known as a Transit ID) is a unique identifier value that is attached to requests and messages that allow reference to a particular transaction or event chain. They are helpful when debugging and tracking transactions through the system and follow them across machines, networks, and services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid logging binary information: It is not good for reading what is happening through the log events and also does not work well when tools are indexing the information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use structured data formats: they are readable by humans and machines and can be easily parsed by most programming languages. This helps tools and log searches when evaluating log events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify the source (class, function, or filename): Useful to understand where the problem is.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
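
&lt;p&gt;Several of these practices (timestamps, correlation IDs, key-value pairs, and source identification) can be combined in a single event line. A minimal sketch, where the field names and the &lt;code&gt;PaymentService&lt;/code&gt; source are illustrative assumptions:&lt;/p&gt;

```java
import java.time.Instant;
import java.util.UUID;

// Sketch of a structured, human-readable log event: timestamp, correlation
// id, source, and clear key=value pairs. Field names are illustrative.
public class StructuredEvent {

    static String event(String correlationId, String source, String action, String status) {
        return "ts=" + Instant.now()
             + " correlationId=" + correlationId
             + " source=" + source
             + " action=" + action
             + " status=" + status;
    }

    public static void main(String[] args) {
        // The same correlation id would be attached to every event in the
        // request chain, across services, machines, and networks.
        String correlationId = UUID.randomUUID().toString();
        System.out.println(event(correlationId, "PaymentService", "payment.authorize", "success"));
    }
}
```

&lt;p&gt;Because each field is an explicit key-value pair, indexing tools can extract them without guessing, and a human can still read the line directly.&lt;/p&gt;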

&lt;h2&gt;
  
  
  Operational good practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Log locally to files: If you log to a local file, it provides a local buffer and you aren't blocked if the network goes down.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use tools for streaming and monitoring logs: tools can collect logging data and send it to the indexers. They can cope well with the large amount of information generated by logging activity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use rotation and retention policies: logs can take up a lot of space. Good rotation strategies help decide when to destroy or back up your logs (if needed). This is especially essential in cloud solutions, where storage cost matters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Collect events from all possible sources: the more data you capture, the more visibility you have. Some examples of possible sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application logs;&lt;/li&gt;
&lt;li&gt;Database logs;&lt;/li&gt;
&lt;li&gt;Network logs;&lt;/li&gt;
&lt;li&gt;Configuration files;&lt;/li&gt;
&lt;li&gt;Performance data.
 &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Tools for dealing with log information
&lt;/h1&gt;

&lt;p&gt;Without tools for dealing with log information, the amount of data created by logging activity can be meaningless. Besides that, in a scenario with several instances of distributed applications and several services they integrate with, those tools help track information that comes from different sources. Each of those sources formats and stores logs in its own way, which makes it really difficult to find useful data.&lt;/p&gt;

&lt;p&gt;The logging tools available stream log messages and centralize access to them, allowing search and aggregation to turn all this generated data into useful information. There are several logging tools nowadays, but here follows the structure of two of them. The first image shows an example using Splunk, and the following image an example using the Elastic Stack:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3u8z5zkaytnaoofrebx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3u8z5zkaytnaoofrebx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6gy88j6a5f3syoonjwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6gy88j6a5f3syoonjwy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those tools receive log messages through collectors and store the information in repositories where it is indexed. When the collectors receive the information, they analyze the log structure to process its data before storage. Once the log messages are stored and indexed, it is possible to search information, create charts, and create alerts through the tools' interfaces. Here follow a few examples of the log search UIs from Splunk, Graylog, and Kibana.&lt;/p&gt;

&lt;p&gt;Splunk:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq4b9tkj9yhw0z542989.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq4b9tkj9yhw0z542989.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Graylog:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbtpiap7m6jgmxrm6c5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbtpiap7m6jgmxrm6c5d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kibana:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F257dp7pj6ola826o0enk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F257dp7pj6ola826o0enk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, those tools provide very interesting features for logging activity. That is a good reason to produce messages that follow a well-defined structure and the good practices and recommendations above: it makes the log information possible to process, track, and handle with those tools, so it becomes truly useful.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;h5&gt;
  
  
  Logging Best Practices:
&lt;/h5&gt;

&lt;p&gt;Ray Saltrelli - &lt;a href="https://dev.to/raysaltrelli/logging-best-practices-obo"&gt;https://dev.to/raysaltrelli/logging-best-practices-obo&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Logging best practices in an app or add-on for Splunk Enterprise:
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://dev.splunk.com/enterprise/docs/developapps/addsupport/logging/loggingbestpractices/" rel="noopener noreferrer"&gt;https://dev.splunk.com/enterprise/docs/developapps/addsupport/logging/loggingbestpractices/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  The Importance of logging: introducing Elastic Stack
&lt;/h5&gt;

&lt;p&gt;Christian Claudio Bohm - &lt;a href="https://www.hexacta.com/importance-logging-introducing-elastic-stack/" rel="noopener noreferrer"&gt;https://www.hexacta.com/importance-logging-introducing-elastic-stack/&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  9 Logging Best Practices Based on Hands-on Experience
&lt;/h5&gt;

&lt;p&gt;Liron Tal - &lt;a href="https://www.loomsystems.com/blog/single-post/2017/01/26/9-logging-best-practices-based-on-hands-on-experience" rel="noopener noreferrer"&gt;https://www.loomsystems.com/blog/single-post/2017/01/26/9-logging-best-practices-based-on-hands-on-experience&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
