<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matt Ghafouri</title>
    <description>The latest articles on DEV Community by Matt Ghafouri (@mattqafouri).</description>
    <link>https://dev.to/mattqafouri</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F558498%2Ff7b4989b-9ff6-40bf-8a49-ed1c7b0847bf.jpg</url>
      <title>DEV Community: Matt Ghafouri</title>
      <link>https://dev.to/mattqafouri</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mattqafouri"/>
    <language>en</language>
    <item>
      <title>Observability or Monitoring: Which one do you need?</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Fri, 16 Feb 2024 11:34:57 +0000</pubDate>
      <link>https://dev.to/mattqafouri/observability-or-monitoring-which-one-do-you-need-3d13</link>
      <guid>https://dev.to/mattqafouri/observability-or-monitoring-which-one-do-you-need-3d13</guid>
      <description>&lt;h2&gt;
  
  
  Observability or Monitoring: Which one do you need?
&lt;/h2&gt;

&lt;p&gt;As software engineers, we have most likely heard a lot about monitoring, but what about observability? What is the difference between the two, and why do we need them in the first place? These are the topics we are going to discuss in this article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table of Contents&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Monitoring Definition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observability Definition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring vs Observability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observability main components&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Visualization and Dashboard&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Incident Automation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Challenges and Pitfalls&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Benchmarking Observability Tools&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AHIflmNycvXVPdy4-w-WyDA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AHIflmNycvXVPdy4-w-WyDA.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also watch this article’s video on &lt;strong&gt;YouTube&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/BntUkh0dptQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Monitoring?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;🔖Short Definition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monitoring involves the systematic collection of data about a system’s health, performance, and other key metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Long Definition&lt;/strong&gt;&lt;br&gt;
In a broader sense, monitoring means identifying the specific areas within the application that require attention and strategically placing logs and metrics there, so that we can observe and assess the system’s behavior. The essence of monitoring is the proactive observation of anticipated issues: we decide in advance what could go wrong and verify that the system behaves as expected in those places.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Purpose&lt;/strong&gt;&lt;br&gt;
Detect and alert on deviations from expected behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Examples&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Monitoring the CPU usage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tracking the response time of a web application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Logging errors or exceptions in an application for later analysis&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
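&lt;p&gt;For instance, the CPU example above is often expressed as an alerting rule in the monitoring system. A hypothetical Prometheus-style rule might look roughly like this (the metric name, threshold, and labels are illustrative assumptions, not part of the article):&lt;/p&gt;

```yaml
# Hypothetical alerting rule: fire when non-idle CPU usage stays
# above 90% for five minutes. Metric name and labels are assumed.
groups:
  - name: cpu-alerts
    rules:
      - alert: HighCpuUsage
        expr: avg(rate(node_cpu_seconds_total{mode!="idle"}[5m])) > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "CPU usage above 90% for 5 minutes"
```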

&lt;p&gt;&lt;strong&gt;🔖Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.elastic.co/kibana" rel="noopener noreferrer"&gt;Kibana&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AV6AHmIrCnZFbNiYEDqPvxQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AV6AHmIrCnZFbNiYEDqPvxQ.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is Observability?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;🔖Short Definition&lt;/strong&gt;&lt;br&gt;
The measure of how well you can understand the internal state of a system based on its external outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Long Definition&lt;/strong&gt;&lt;br&gt;
Observability encompasses a broader spectrum of application performance, extending beyond internal status. Its primary focus lies in addressing the larger picture of the entire system rather than concentrating on a single service.&lt;/p&gt;

&lt;p&gt;Its significance becomes evident when applied to complex systems that require monitoring from various aspects, such as the relationships between services, internal service statuses, communication statuses, and more.&lt;/p&gt;

&lt;p&gt;In essence, during debugging and troubleshooting, observability empowers developers or system administrators to comprehensively assess the system’s overall status by scrutinizing each component.&lt;/p&gt;

&lt;p&gt;Observability platforms prove especially valuable when dealing with unknown issues, such as an incident reporting slow response times from the order service. In such cases, these platforms facilitate the examination of all services and their interrelationships.&lt;/p&gt;

&lt;p&gt;It’s not just about reading logs, as system performance issues may not always manifest as errors in the logs, but rather as improper system functioning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Purpose&lt;/strong&gt;&lt;br&gt;
Provide insights into the system’s internal workings and facilitate debugging and troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Examples&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Instrumenting code to capture detailed traces, logs, and events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using distributed tracing to follow the flow of a request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collecting and analyzing logs, and metrics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🔖Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.dynatrace.com/" rel="noopener noreferrer"&gt;Dynatrace&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.datadoghq.com/" rel="noopener noreferrer"&gt;DataDog&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.splunk.com/" rel="noopener noreferrer"&gt;Splunk&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://logz.io/" rel="noopener noreferrer"&gt;Logz&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.sumologic.com/" rel="noopener noreferrer"&gt;SumoLogic&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloud observability tools (&lt;a href="https://aws.amazon.com/cloudops/monitoring-and-observability/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/manage/monitor/observability" rel="noopener noreferrer"&gt;Azure&lt;/a&gt;, &lt;a href="https://cloud.google.com/blog/products/management-tools/observability-on-google-cloud" rel="noopener noreferrer"&gt;GCP&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2988%2F1%2ARCjHgpcAv2FWjRMsHb34GQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2988%2F1%2ARCjHgpcAv2FWjRMsHb34GQ.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Components of Observability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Logs:&lt;/strong&gt; Including system and server logs, network system logs, and application logs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Metrics:&lt;/strong&gt; Monitoring CPU and memory usage, and infrastructure metrics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traces:&lt;/strong&gt; Tracking the performance of requests across microservices&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitoring vs Observability
&lt;/h3&gt;

&lt;p&gt;Monitoring and observability can be differentiated based on various perspectives, with the primary ones being:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Focus, Purpose, Data Collection, Alerting, Use Case, Scope, Tools&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2382%2F1%2ADG-NH0ua6KdhbSO1VRvFwQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2382%2F1%2ADG-NH0ua6KdhbSO1VRvFwQ.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Collection and Instrumentation
&lt;/h3&gt;

&lt;p&gt;Data collection and instrumentation are integral components of observability, providing the means to gather information about the internal workings and performance of a software system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AmkSh89rDiDNaxIgcsXmDgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AmkSh89rDiDNaxIgcsXmDgw.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Collection
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;🔖Definition&lt;/strong&gt;&lt;br&gt;
Involves gathering relevant information and metrics from various components within a software system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Purpose&lt;/strong&gt;&lt;br&gt;
The collected data helps in monitoring, analyzing, and understanding the system’s behavior, performance, and health.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Example&lt;/strong&gt;&lt;br&gt;
The image below depicts a configuration file for the Prometheus monitoring platform. The objective is to extract metrics from an application hosted at &lt;em&gt;localhost:5000&lt;/em&gt;, with a scraping interval of &lt;em&gt;15 seconds&lt;/em&gt;. This means the Prometheus service will, at regular &lt;em&gt;15-second intervals&lt;/em&gt;, retrieve metrics such as &lt;em&gt;CPU usage&lt;/em&gt;, &lt;em&gt;memory usage&lt;/em&gt;, and more from the specified endpoint.&lt;/p&gt;

&lt;p&gt;Later on, we’ll delve into the internal components of the Prometheus server, exploring how metrics are collected and persistently stored.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AxnZFdDQzXxpHQZXVhnSOxA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AxnZFdDQzXxpHQZXVhnSOxA.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Instrumentation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;🔖Definition&lt;/strong&gt;&lt;br&gt;
Involves adding code or agents to a software system to gather specific data and metrics at runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Purpose&lt;/strong&gt;&lt;br&gt;
Instrumentation provides a way to gather fine-grained insights into the behavior of the application, allowing for detailed analysis and troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Example&lt;/strong&gt;&lt;br&gt;
In the code snippet below, using the Prometheus SDK, we explicitly gather specific metrics within our application. Each time a request is received, the request counter increments by one. By instrumenting your code this way, you can expose the various types of metrics essential for monitoring your application’s performance and health.&lt;/p&gt;

&lt;p&gt;To expose custom metrics, you configure the Prometheus server to scrape your service and then use the SDK for your programming language to expose the desired metrics from within your code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AU4DVdJDGOG4Iy8Aaks4aHw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AU4DVdJDGOG4Iy8Aaks4aHw.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Quality Matters
&lt;/h3&gt;

&lt;p&gt;To maintain the quality of collected data, data sources from various monitoring systems should be standardized to &lt;strong&gt;prevent redundancy&lt;/strong&gt;, &lt;strong&gt;reduce clutter&lt;/strong&gt;, and &lt;strong&gt;minimize noise&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a large and intricate system, we gather diverse metrics such as application heartbeat, application logs (Information, Debug, Exception, etc.), as well as metrics related to application performance and resource usage.&lt;/p&gt;

&lt;p&gt;Given the substantial volume of data collected, it becomes crucial to implement a strategy for filtering only the essential metrics.&lt;/p&gt;

&lt;p&gt;One effective approach is &lt;em&gt;sampling metrics&lt;/em&gt; in conjunction with the &lt;em&gt;telemetry framework&lt;/em&gt;, which lets us selectively keep only the necessary metrics. For example, by sampling metrics associated with server responses whose status code is greater than 300, we focus specifically on unsuccessful responses and exclude the successful ones.&lt;/p&gt;
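&lt;p&gt;The filtering idea can be sketched in a few lines of Python (the record shape is an assumption for illustration):&lt;/p&gt;

```python
# Keep only responses whose status code is greater than 300, so that
# successful responses are dropped before metrics are shipped.
def sample_failed_responses(responses):
    return [r for r in responses if r["status"] > 300]
```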

&lt;h3&gt;
  
  
  Visualization and Dashboards
&lt;/h3&gt;

&lt;p&gt;Visualization and dashboards provide a user-friendly interface to analyze and interpret complex data from diverse sources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AGPL-VSlPeX7DSE3NCZ-1uw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AGPL-VSlPeX7DSE3NCZ-1uw.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we’ve collected metrics, the next step is to derive insights from them. However, each team may have unique requirements, necessitating their specialized dashboards and graphs.&lt;/p&gt;

&lt;p&gt;Some teams may require real-time data on application health status, while others may need information on the rate of messages received from a Kafka topic, and so forth. Consequently, choosing a platform capable of accommodating these varied needs becomes crucial when adopting observability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖 &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;&lt;/strong&gt;specializes in customizing dashboards and visualizing metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖&lt;a href="https://www.elastic.co/kibana" rel="noopener noreferrer"&gt; Kibana&lt;/a&gt;&lt;/strong&gt;, when paired with the &lt;a href="https://www.elastic.co/elastic-stack" rel="noopener noreferrer"&gt;ELK stack&lt;/a&gt;, facilitates the collection and querying of logs through a user-friendly dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖 &lt;a href="https://www.dynatrace.com/" rel="noopener noreferrer"&gt;Dynatrace&lt;/a&gt;&lt;/strong&gt;, on the other hand, stands out as an enterprise-level observability tool, encompassing all the necessary features for visualization and dashboard creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2182%2F1%2AX_EPTozs0TUgb_ZytE1N0Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2182%2F1%2AX_EPTozs0TUgb_ZytE1N0Q.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Framework for data collection
&lt;/h3&gt;

&lt;p&gt;These frameworks are used to instrument code, capture telemetry, and enable monitoring, tracing, and logging.&lt;/p&gt;

&lt;p&gt;Having discussed code instrumentation earlier, it’s important to note that Prometheus is not the sole framework available for this purpose. There are several other notable frameworks for &lt;em&gt;data collection&lt;/em&gt; and &lt;em&gt;code instrumentation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2462%2F1%2A8aq_ZnPy8K9FjVWSgYk-lA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2462%2F1%2A8aq_ZnPy8K9FjVWSgYk-lA.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://opentelemetry.io/docs/concepts/components/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://zipkin.io/" rel="noopener noreferrer"&gt;Zipkin&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://prometheus.io/docs/instrumenting/clientlibs/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://newrelic.com/" rel="noopener noreferrer"&gt;new relic&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prometheus Overview
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2384%2F1%2AglfRvp4KfjzY34vtiDKaLw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2384%2F1%2AglfRvp4KfjzY34vtiDKaLw.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exploring how the &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; server collects metrics, notifies other services, and exposes and visualizes the collected data reveals three main components at its core:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retrieval:&lt;/strong&gt; This component enables the collection of metrics from various applications, short-lived jobs, and services. Operating on a pulling mechanism, it retrieves a list of targets from service discovery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TSDB&lt;/strong&gt;(&lt;a href="https://prometheus.io/docs/prometheus/latest/storage/" rel="noopener noreferrer"&gt;Time-series Database&lt;/a&gt;): Prometheus provides a Time-series database that can be hosted on a separate node. It supports vertical scaling and federation for scenarios requiring multiple instances of the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP Server&lt;/strong&gt;: Prometheus includes an endpoint through which it exposes its metrics to the external environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, Prometheus incorporates the following components to enhance its functionality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Push Gateway:&lt;/strong&gt; Some short-lived jobs and services are unable to expose their metrics for scraping directly. Instead, they push their metrics to the Push Gateway, which Prometheus then scrapes like any other target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Alert Manager:&lt;/strong&gt; This component enables Prometheus to send various types of notifications (email, SMS, Slack, Microsoft Teams, etc.) to relevant teams, enhancing communication and alerting capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prometheus Web UI:&lt;/strong&gt; Prometheus features a web UI that visualizes metrics by pulling them from the HTTP server endpoint. Leveraging PromQL (the &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="noopener noreferrer"&gt;Prometheus Query Language&lt;/a&gt;), users can also query metrics and visualize them in other tools such as Grafana.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
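&lt;p&gt;As a small illustration, a typical PromQL query (assuming a counter named &lt;em&gt;http_requests_total&lt;/em&gt; is exposed) computes the per-second request rate over the last five minutes:&lt;/p&gt;

```
# Per-second rate of HTTP requests, averaged over the last 5 minutes
rate(http_requests_total[5m])
```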

&lt;h3&gt;
  
  
  OpenTelemetry Overview
&lt;/h3&gt;

&lt;p&gt;Similar to Prometheus, the &lt;a href="https://opentelemetry.io/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; framework provides an alternative for collecting metrics from various services. In this diagram, the central component is the &lt;a href="https://opentelemetry.io/docs/collector/" rel="noopener noreferrer"&gt;OTel Collector&lt;/a&gt;, responsible for scraping or collecting metrics from diverse microservices and shared infrastructure services such as Kubernetes, cloud services, and more. Once the metrics are collected, they can be persisted in a time-series database.&lt;/p&gt;

&lt;p&gt;Notably, OpenTelemetry is observability framework-agnostic. The structure of the collected data is standardized, and since &lt;a href="https://opentelemetry.io/docs/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; is under the umbrella of the &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;CNCF&lt;/a&gt; (Cloud Native Computing Foundation), users have the flexibility to employ any observability framework and database of their choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Adc6Sk-t0zzJ6lsv8pno8dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Adc6Sk-t0zzJ6lsv8pno8dg.png" alt="[OTel Document](https://opentelemetry.io/docs/)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Incident Automation and Remediation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;🔖 Definition&lt;/strong&gt;&lt;br&gt;
Involve the use of automated processes and workflows to respond to and resolve issues detected through monitoring, logging, and tracing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖 Purpose&lt;/strong&gt;&lt;br&gt;
To reduce manual intervention, minimize downtime, and enhance the overall reliability of the software system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔖Example&lt;/strong&gt;&lt;br&gt;
There are plenty of incident-management tools, but &lt;a href="https://www.servicenow.com/" rel="noopener noreferrer"&gt;ServiceNow&lt;/a&gt; and &lt;a href="https://www.pagerduty.com/" rel="noopener noreferrer"&gt;PagerDuty&lt;/a&gt; are consistently at the top of the list.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ServiceNow Incident Automation:&lt;/em&gt; ServiceNow automates incident resolution through predefined workflows and scripts, reducing manual effort and accelerating response times.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ServiceNow Remediation:&lt;/em&gt; In ServiceNow, remediation involves corrective actions taken to resolve incidents swiftly, leveraging automated solutions and predefined processes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Aq7tS2bXuhRQOnG1bXszyRg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Aq7tS2bXuhRQOnG1bXszyRg.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Incident Automation and Remediation Real-World Example
&lt;/h3&gt;

&lt;p&gt;Incident response in observability refers to the systematic approach of detecting, managing, and resolving unexpected issues or disruptions in a software system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2164%2F1%2AYqW9b2ZsSOmcsEBtDfZDXA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2164%2F1%2AYqW9b2ZsSOmcsEBtDfZDXA.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s consider this real-world scenario&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A sudden surge in website traffic leads to a spike in server CPU usage beyond normal thresholds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Detection:&lt;/strong&gt; Anomaly detection algorithms identify a sudden spike in server CPU usage, signaling a potential issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Alerting:&lt;/strong&gt; The system triggers alerts to the operations team, notifying them of the abnormal server load and providing initial details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Automated Response:&lt;/strong&gt; Automated scripts kick in to scale up additional server instances, redistributing the load to prevent performance degradation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Verification:&lt;/strong&gt; The operations team reviews system metrics to confirm if the automated response effectively mitigated the issue, ensuring stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Notification:&lt;/strong&gt; Upon successful resolution, notifications are sent to relevant stakeholders, updating them on the incident’s detection, response, and resolution.&lt;/p&gt;
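&lt;p&gt;The detection, alerting, and automated-response steps above can be sketched as follows (the threshold and one-instance scaling step are illustrative assumptions):&lt;/p&gt;

```python
# Toy sketch of steps 1-3: detect a CPU spike, raise an alert,
# and scale out by adding one server instance.
CPU_THRESHOLD = 0.9

def respond_to_load(cpu_usage, instance_count, alerts):
    if cpu_usage > CPU_THRESHOLD:             # 1. detection
        alerts.append("CPU above threshold")  # 2. alerting
        return instance_count + 1             # 3. automated response
    return instance_count
```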

&lt;h3&gt;
  
  
  Challenges and Pitfalls in Observability
&lt;/h3&gt;

&lt;p&gt;Like any other practice, observability has its own set of challenges and pitfalls. The aspects shown in the picture below are largely self-explanatory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4F_A_QI153xQcHa8YaDZhQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4F_A_QI153xQcHa8YaDZhQ.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Observability Tools
&lt;/h3&gt;

&lt;p&gt;When selecting an observability tool, numerous factors come into play, influencing our choices. Considerations such as&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Budget constraints&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scale of services&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data volume&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature updates&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open-source nature&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visualization&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Alerting&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Up-to-date SDKs (code instrumentation)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;all play pivotal roles. Below, I have compiled a list of some of the top observability tools and frameworks that can be chosen based on your specific requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2136%2F1%2ABt1sUwpwHtmqBT6It-YH8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2136%2F1%2ABt1sUwpwHtmqBT6It-YH8g.png" alt="obervability and monitoring in software"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thought
&lt;/h3&gt;

&lt;p&gt;To sum up, good observability and monitoring are important to making software successful. Quick insights help fix problems fast and keep the application working well. Because complex observability tools can be tricky to operate, it is crucial to know how to compare the available tools and pick the platform that best suits your needs, and a solid grasp of the basic observability concepts makes that choice much easier.&lt;/p&gt;

&lt;p&gt;Follow me on &lt;strong&gt;Medium&lt;/strong&gt; or &lt;strong&gt;YouTube&lt;/strong&gt; if you are interested in back-end development subjects 🤞&lt;/p&gt;

&lt;p&gt;🎬 &lt;a href="https://www.youtube.com/@codeboulevard" rel="noopener noreferrer"&gt;Youtube&lt;/a&gt;&lt;br&gt;
📖 &lt;a href="https://medium.com/@matt-ghafouri" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;br&gt;
💻 &lt;a href="https://www.linkedin.com/in/matt-ghafouri/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cheers, &lt;strong&gt;Matt Ghafouri&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>observability</category>
      <category>monitoring</category>
      <category>backenddevelopment</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>A short review of “Building Evolutionary Architectures” book</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Sun, 24 Sep 2023 21:36:22 +0000</pubDate>
      <link>https://dev.to/mattqafouri/a-short-review-of-building-evolutionary-architectures-book-22n9</link>
      <guid>https://dev.to/mattqafouri/a-short-review-of-building-evolutionary-architectures-book-22n9</guid>
      <description>&lt;p&gt;I’ll summarize my takeaways from this book in a 5-minute discussion&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5K_XFttS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/9720/1%2AaoErszLm3FT13fuIuRJi4Q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5K_XFttS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/9720/1%2AaoErszLm3FT13fuIuRJi4Q.jpeg" alt="Building Evolutionary Architectures" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Book Name: &lt;strong&gt;Building Evolutionary Architectures&lt;/strong&gt;&lt;br&gt;
Authors: &lt;strong&gt;Neal Ford&lt;/strong&gt;, &lt;strong&gt;Rebecca Parsons&lt;/strong&gt;, &lt;strong&gt;Patrick Kua&lt;/strong&gt; &lt;br&gt;
Publisher: &lt;strong&gt;O'Reilly Media&lt;/strong&gt;&lt;br&gt;
Publication Date: &lt;strong&gt;November 22, 2022 (2nd Edition)&lt;/strong&gt;&lt;br&gt;
Paperback: &lt;strong&gt;262 Pages&lt;/strong&gt;&lt;br&gt;
Language: &lt;strong&gt;English&lt;/strong&gt;&lt;br&gt;
Dimensions: &lt;strong&gt;7 x 0.5 x 9.25&lt;/strong&gt; inches&lt;br&gt;
Purchase Link: &lt;a href="https://www.amazon.com/Building-Evolutionary-Architectures-Neal-Ford-ebook/dp/B0BN4T1P27?ref=d6k_applink_bb_dls&amp;amp;dplnkId=90509212-a269-49e2-9bcd-84fa51d43521&amp;amp;_encoding=UTF8&amp;amp;tag=mattghafouri-20&amp;amp;linkCode=ur2&amp;amp;linkId=d33ae5e83653a84c81ca313a8ea415b8&amp;amp;camp=1789&amp;amp;creative=9325"&gt;Book available on Amazon&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Audiences 👂🏻🔉
&lt;/h3&gt;

&lt;p&gt;Software developers at every level beyond junior positions.&lt;/p&gt;

&lt;h3&gt;
  
  
  My understanding of this book 📖🔖
&lt;/h3&gt;

&lt;p&gt;The authors attempt to list all the important aspects of software architecture, such as scalability, ease of maintenance, security, resilience, monitoring, etc., to help you choose an architecture that matches your business requirements. In essence, they aim to show what it takes to create and maintain architectures that can adapt to constant change.&lt;/p&gt;

&lt;p&gt;Next, the authors discuss various architectural approaches, including monolithic, modular monolithic, SOA, microservice, and serverless architectures. They evaluate these options based on &lt;strong&gt;fitness functions&lt;/strong&gt;, &lt;strong&gt;incremental changes&lt;/strong&gt;, and &lt;strong&gt;coupling&lt;/strong&gt; to determine the most suitable architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Should I Read this book?
&lt;/h3&gt;

&lt;p&gt;I genuinely believe that even if you take away just a single page’s worth of insight from a book, it’s worth the read. If you find the time, I highly recommend giving it a go. If you’re unable to do so, I hope the information gathered here serves as a valuable takeaway. Embrace it.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://codeboulevard.com/2023/11/a-short-review-of-building-evolutionary-architectures-book/"&gt;This article was originally published on my personal blog&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Cheers, &lt;strong&gt;Matt Ghafouri&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>books</category>
      <category>softwaredevelopment</category>
      <category>architecture</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>I’m a RICH software engineer who Creates and Sells.</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Tue, 12 Sep 2023 08:12:28 +0000</pubDate>
      <link>https://dev.to/mattqafouri/im-a-rich-software-engineer-who-creates-and-sells-fp5</link>
      <guid>https://dev.to/mattqafouri/im-a-rich-software-engineer-who-creates-and-sells-fp5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k0q9mtuz61wtn3cp6kb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k0q9mtuz61wtn3cp6kb.jpg" alt="Photo by Ian Schneider on Unsplash" width="640" height="427"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;[Article.Preface]&lt;/strong&gt;&lt;br&gt;
Success in the future will be determined by one’s ability to create and sell. As software engineers, we can create digital products and sell them to our audience. A digital product can be an article like the one you are reading now, a YouTube video, or anything else on the Internet. I will share my experience of finding a way to make passive income as a software engineer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;[Article.Authorize]&lt;/strong&gt;&lt;br&gt;
I am a software engineer, but I’m financially illiterate.&lt;br&gt;
I yearn for freedom, whether it’s financial or in terms of time.&lt;br&gt;
I don’t know the secret to making money even while I sleep.&lt;br&gt;
If (you can’t answer YES to at least one of these statements) then&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;throw a new IrrelevantArticleException(“This article is not for you”)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;[Article.Reason]&lt;/strong&gt;&lt;br&gt;
I’m a software engineer, always trying to keep up with new tech and programming languages. Finance topics like the stock market and crypto trading don’t really interest me, and I don’t want to work extra on weekends on another part-time project for more money.&lt;/p&gt;

&lt;p&gt;But, is there a way to make money with my interests? BIG Yes, definitely!&lt;/p&gt;

&lt;p&gt;Read the full version on &lt;a href="https://medium.com/@m-qafouri/im-a-rich-software-engineer-who-creates-and-sells-c4875e4700e2"&gt;Medium&lt;/a&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>freelance</category>
      <category>contentwriting</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>How does Base64 work?</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Mon, 04 Jul 2022 07:49:26 +0000</pubDate>
      <link>https://dev.to/mattqafouri/how-does-base64-works-3m52</link>
      <guid>https://dev.to/mattqafouri/how-does-base64-works-3m52</guid>
      <description>&lt;p&gt;How does #base64 works?&lt;br&gt;
ASCII characters where each Base64 character contains 6 bits of binary information.&lt;br&gt;
It's very useful for storing image/audio information in Strings of information. What Base64 isn't is an encryption algorithm.&lt;/p&gt;
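To make the 6-bit grouping concrete, here is a tiny sketch (Python is used purely for illustration):

```python
import base64

data = b"Man"                          # 3 bytes = 24 bits
bits = "".join(f"{byte:08b}" for byte in data)
groups = [bits[i:i + 6] for i in range(0, len(bits), 6)]
print(groups)                          # ['010011', '010110', '000101', '101110']
print(base64.b64encode(data))          # b'TWFu' -- each character maps to one 6-bit group
```

Three 8-bit bytes become four 6-bit groups, which is why Base64 output is roughly a third larger than its input.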

&lt;p&gt;There is an amazing and simple explanation of this on YouTube. Check it out.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/yqlpYY_xljI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>base64</category>
      <category>encoding</category>
      <category>developer</category>
      <category>interviewquestion</category>
    </item>
    <item>
      <title>Automated dead-letter queue handling for EasyNetQ [RabbitMQ]</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Tue, 19 Apr 2022 06:23:12 +0000</pubDate>
      <link>https://dev.to/mattqafouri/automated-dead-letter-queue-handling-for-easynetq-rabbitmq-1cgk</link>
      <guid>https://dev.to/mattqafouri/automated-dead-letter-queue-handling-for-easynetq-rabbitmq-1cgk</guid>
      <description>&lt;p&gt;&lt;strong&gt;📃What is EasyNetQ?&lt;/strong&gt;&lt;br&gt;
EasyNetQ is designed to make publishing and subscribing with RabbitMQ as easy as possible. of course, you can always use the RabbitMQ client to do this, but it brings lots of complexity of maintenance cumbersome to your application.&lt;br&gt;
Digging into this library is out of the scope of this article, you can read more about this incredibly simple library &lt;a href="https://easynetq.com/" rel="noopener noreferrer"&gt;Here&lt;/a&gt;.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;📃What is the dead-letter queue?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw52eqrdcl16u6aafk6wb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw52eqrdcl16u6aafk6wb.png" alt="Dead-letter rabbitmq and easyNetQ"&gt;&lt;/a&gt;&lt;br&gt;
Messages from a queue can be "dead-lettered"; that is, republished to an exchange when any of the following events occur:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The message is negatively acknowledged by a consumer using basic.reject or basic.nack with requeue parameter set to false.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The message expires due to per-message TTL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The message is dropped because its queue exceeded a length limit.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;&lt;strong&gt;🔔EasyNetQ weakness!!&lt;/strong&gt;&lt;br&gt;
Honestly, it's not a weakness. EasyNetQ tries to keep its implementation as simple as possible, and because of this principle many features that already exist in the RabbitMQ client are missing from this library.&lt;/p&gt;

&lt;p&gt;But one of the biggest issues that a developer perhaps faces during the implementation of RabbitMQ functionality is dead-letter management.&lt;/p&gt;

&lt;p&gt;EasyNetQ does not support automatic dead-letter definition. This means that if any of the conditions mentioned above occurs, the failed message is moved to RabbitMQ's default error queue.&lt;/p&gt;
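For comparison, with the raw client a dead-letter target is declared through queue arguments. The helper below is a hypothetical illustration in Python (the article's package is .NET); only the `x-dead-letter-*` argument names are real, coming from RabbitMQ's dead-lettering extension:

```python
def dead_letter_args(dlx_exchange, dlq_routing_key):
    """Build the queue arguments that tell RabbitMQ where to republish
    dead-lettered messages (RabbitMQ's dead-lettering extension)."""
    return {
        "x-dead-letter-exchange": dlx_exchange,
        "x-dead-letter-routing-key": dlq_routing_key,
    }

# With a live broker and a RabbitMQ client you would pass these arguments
# when declaring the queue -- sketch only, the queue names are made up:
#   channel.queue_declare(queue="orders", arguments=dead_letter_args("dlx", "orders.dead"))
```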



&lt;p&gt;&lt;strong&gt;❓What's wrong with one Error queue?&lt;/strong&gt;&lt;br&gt;
Good question! With a single error queue, you cannot handle different types of failed messages separately or monitor the failed messages per queue. That is why there should be a dedicated dead-letter queue for each discrete queue.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;🌱Solution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pjusgn8f59uap1u3vz3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pjusgn8f59uap1u3vz3.png" alt="EasyDeadLetter"&gt;&lt;/a&gt;&lt;br&gt;
Here, I have implemented a NuGet package (EasyDeadLetter) for this purpose, which can be adopted with minimal changes in any project.&lt;/p&gt;

&lt;p&gt;All you need to do is follow these four steps:&lt;br&gt;
1- First of all, decorate your class object with the &lt;strong&gt;QueueAttribute&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;2- The second step is to define your dead-letter queue with the same QueueAttribute, and also inherit the dead-letter class from the main object class.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;3- Now, it's time to decorate your main queue object with the EasyDeadLetter attribute and set the type of dead-letter queue.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;4- In the final step, you need to register &lt;strong&gt;EasyDeadLetterStrategy&lt;/strong&gt; as the default error handler (&lt;strong&gt;IConsumerErrorStrategy&lt;/strong&gt;).&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;That's all! From now on, any failed message will be moved to its related dead-letter queue.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;📑Resources&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.nuget.org/packages/EasyDeadLetterStrategy/" rel="noopener noreferrer"&gt;NuGet Package&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/MattGhafouri" rel="noopener noreferrer"&gt;
        MattGhafouri
      &lt;/a&gt; / &lt;a href="https://github.com/MattGhafouri/EasyDeadLetter" rel="noopener noreferrer"&gt;
        EasyDeadLetter
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      RabbitMQ automatic DeadLetter handler for dotnet | EasyNetQ
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🌱Easy Dead-Letter 🌱&lt;/h1&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;📕Handling dead-letter queues in EasyNetQ (RabbitMQ)&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Messages from a queue can be "dead-lettered"; that is, republished to an exchange when any of the following events occur:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The message is negatively acknowledged by a consumer using basic.reject or basic.nack with requeue parameter set to false.&lt;/li&gt;
&lt;li&gt;The message expires due to per-message TTL;&lt;/li&gt;
&lt;li&gt;The message is dropped because its queue exceeded a length limit&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🔔 Problem&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;If you are using RabbitMQ client for dotnet there is no problem, you can define dead-letter queues during a queue declaration, But using RabbitMQ client brings lots of implementation complexity to the application, because of that so many developers prefer to use the &lt;strong&gt;&lt;a href="https://github.com/EasyNetQ/EasyNetQ" rel="noopener noreferrer"&gt;EasyNetQ&lt;/a&gt;&lt;/strong&gt; library which is a wrapper on RabbitMQ client(&lt;strong&gt;Amazing and simple&lt;/strong&gt;)
There is no implementation for &lt;strong&gt;dead-letter queues&lt;/strong&gt; in EasyNetQ to keep the library easy to use, It means in case of any exception in the message consumers,…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/MattGhafouri/EasyDeadLetter" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;🤞If you want to support me, please &lt;strong&gt;share&lt;/strong&gt; this article with your community.&lt;/p&gt;

&lt;p&gt;Follow me on &lt;a href="https://www.linkedin.com/in/majid-qafouri" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; &lt;/p&gt;

</description>
      <category>rabbitmq</category>
      <category>easynetq</category>
      <category>deadletter</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Control concurrency for shared resources in distributed systems with DLM (Distributed Lock Manager)</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Fri, 24 Sep 2021 15:45:03 +0000</pubDate>
      <link>https://dev.to/mattqafouri/serialize-access-to-a-shared-resource-in-distributed-systems-with-dlm-distributed-lock-manager-5g7e</link>
      <guid>https://dev.to/mattqafouri/serialize-access-to-a-shared-resource-in-distributed-systems-with-dlm-distributed-lock-manager-5g7e</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuj7jl6fbtxlmhdy5sog.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feuj7jl6fbtxlmhdy5sog.jpg" alt="DLM" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario&lt;/strong&gt;&lt;br&gt;
Imagine that in our application we need to update a value that is shared between all requests. This means we should control concurrent requests that manipulate this shared resource; somehow we should serialize access to it. In our example, we keep this value in the Redis cache to reduce latency.&lt;br&gt;
To make the example clearer, imagine we have a contribution amount that is updated by each request: each incoming request’s contribution amount should be added to this value.&lt;br&gt;
Let’s demonstrate our scenario with graphic symbols for better understanding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6kiiv3oxnq5q4zwrg2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6kiiv3oxnq5q4zwrg2a.png" alt="singleInstanceApplication" width="431" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have an application that has subscribed to a RabbitMQ queue. In the simplest model, we receive a message from the queue and update our contribution amount in Redis in synchronized mode, meaning the application processes requests one by one. Everything looks fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We may encounter two problems :&lt;br&gt;
&lt;strong&gt;1- What if the application uses async processing to process messages from the queue?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsweas4doi4yb2luwyl81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsweas4doi4yb2luwyl81.png" alt="async processing" width="627" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2- What if we horizontally scale the application, i.e., run several instances of it?&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhdgvkm1lwbqj5im4f2d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhdgvkm1lwbqj5im4f2d.png" alt="application scaling" width="569" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In these two cases, we will receive more than one request at a time, and the concurrency must somehow be controlled; otherwise, we will face race conditions in Redis and the contribution value will no longer be valid.&lt;/p&gt;
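A minimal sketch of the lost-update race (plain Python, no Redis; it just spells out the read-modify-write interleaving):

```python
# Shared value, standing in for the contribution amount kept in Redis.
shared = {"contribution": 100}

# Two handlers run concurrently; each wants to add 10.
read_a = shared["contribution"]        # handler A reads 100
read_b = shared["contribution"]        # handler B also reads 100, before A writes

shared["contribution"] = read_a + 10   # A writes 110
shared["contribution"] = read_b + 10   # B overwrites with 110 -- A's update is lost

print(shared["contribution"])          # 110, not the expected 120
```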

&lt;p&gt;The first solution that comes to mind is to use an internal lock object to protect the critical section.&lt;br&gt;
This solution works well for the first problem. But what about the second one?&lt;/p&gt;

&lt;p&gt;In the case of horizontal scaling, the internal lock just works in the scope of one instance, we should find a control mechanism outside of the application scope.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions&lt;/strong&gt;&lt;br&gt;
1- Actor systems like Akka or Orleans(.Net)&lt;br&gt;
2- Distributed lock manager (DLM)&lt;br&gt;
3- Add your solutions here&lt;/p&gt;

&lt;p&gt;In my point of view, if your application does not have to deal with lots of concurrent scenarios like the one I’ve mentioned, using a DLM is better for the sake of simplicity, because implementing actor systems brings more complexity to your projects.&lt;/p&gt;

&lt;p&gt;But if you have an application with lots of critical points where concurrency should be controlled, I prefer actor systems. The point is, in the example I’ve mentioned, we just need to control access to a single shared value, so I prefer to use a DLM instead of bringing extra complexity into my application code.&lt;/p&gt;

&lt;p&gt;Ok, forget the actor systems, and let’s talk about DLM and how it can help with our problem 😊&lt;br&gt;
There are several options for implementing a DLM; for example, we could use ZooKeeper, Redis, or other tools. As I have experience with the Redis DLM, I’ve decided to explain it in this article.&lt;/p&gt;

&lt;p&gt;I don’t want to talk about Redis DLM in-depth, because it’s already been explained well on &lt;a href="https://redis.io/topics/distlock"&gt;Redis Official website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’ve just tried to define a scenario and solve it with the help of Redis DLM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Redis Enterprise cluster node can include between zero and a few hundred Redis databases in one of the following types:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;· A simple database, i.e. a single master shard&lt;br&gt;
· A highly available (HA) database, i.e. a pair of master and slave shards&lt;br&gt;
· A clustered database, which contains multiple master shards, each managing a subset of the dataset (or in Redis terms, a different range of “hash-slots”)&lt;br&gt;
· An HA clustered database, i.e. multiple pairs of master/slave shards&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ytwtoybd9hl6nqtm7fg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ytwtoybd9hl6nqtm7fg.png" alt="Redis cluster" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I contacted the Redis development team and asked about Redis DLM best practices. They recommended using a clustered Redis with more than two master nodes, although you can also use the RedLock algorithm with one master and several slaves (leader-follower model).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before starting to Implement a Redis DLM, it’s better to notice these points:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1- The DLM in Redis uses the RedLock algorithm to control lock management.&lt;/p&gt;

&lt;p&gt;2- There are several implementations of the RedLock algorithm in different languages, listed on the &lt;a href="https://redis.io/topics/distlock"&gt;Redis official website&lt;/a&gt;, so you do not need to implement it again.&lt;/p&gt;

&lt;p&gt;3- Our lock manager should also be configured with an expiration time for locks, because without an expiry time you may face a deadlock. After the expiry time, the lock is released automatically by Redis.&lt;/p&gt;

&lt;p&gt;4- We should also implement a retry mechanism for acquiring the lock. If a request cannot get the lock, we should treat it as a failed request and handle it accordingly (for example, we can republish the failed request to the queue to receive it again, or use an outbox pattern to process it in a background worker).&lt;/p&gt;

&lt;p&gt;5- I highly recommend finding an implementation of RedLock in your language that supports a retry pattern as well as lock expiry.&lt;/p&gt;
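To show the acquire/expiry/token-release semantics from points 3–5 in one place, here is a toy, in-memory stand-in for a DLM (illustration only; all names are made up, and a real deployment would use a RedLock implementation against Redis):

```python
import time
import uuid

class ToyLockManager:
    """In-memory stand-in for a distributed lock manager (illustration only)."""

    def __init__(self):
        self._locks = {}  # resource -> (owner_token, expiry_timestamp)

    def acquire(self, resource, ttl_seconds):
        """Return an owner token, or None if the lock is held and not expired."""
        now = time.monotonic()
        held = self._locks.get(resource)
        if held and held[1] > now:
            return None                          # lock taken, TTL not yet reached
        token = uuid.uuid4().hex                 # unique token identifies the owner
        self._locks[resource] = (token, now + ttl_seconds)
        return token

    def release(self, resource, token):
        """Release only if the token matches: an expired owner must not free
        a lock that someone else has since acquired."""
        held = self._locks.get(resource)
        if held and held[0] == token:
            del self._locks[resource]
            return True
        return False

def acquire_with_retry(dlm, resource, ttl, retries=3, delay=0.01):
    """Retry pattern from point 4: back off and retry, else report failure."""
    for _ in range(retries):
        token = dlm.acquire(resource, ttl)
        if token:
            return token
        time.sleep(delay)
    return None  # caller treats this as a failed request (e.g. republish to the queue)
```

The token check on release matters: without it, a handler whose lock already expired could release a lock now held by another handler.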

&lt;p&gt;&lt;strong&gt;So let’s define our scenario step by step with the help of Redis DLM.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvill36fxms03agbrnpye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvill36fxms03agbrnpye.png" alt="redis distributed lock manager implementation" width="637" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1- We have an application that has subscribed to a queue.&lt;/p&gt;

&lt;p&gt;2- We use async processing and we also want to scale the application horizontally, which means there will be several handlers for a specific queue, and we will receive several messages simultaneously.&lt;/p&gt;

&lt;p&gt;3- All the requests want to update a contribution value which is located inside the Redis cache. It means our shared resource is the contribution value and concurrency should be controlled for updating this value in Redis.&lt;/p&gt;

&lt;p&gt;4- We use Redis distributed lock manager (DLM) as a coordinator to serialize the access to this contribution amount.&lt;/p&gt;

&lt;p&gt;5- Each request that wants to update the contribution amount must first acquire a lock from the DLM; only after acquiring the lock can it update the contribution amount in Redis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In modern applications, where async processing and horizontal scaling are a must-have, we will face concurrent requests, and if we have a shared resource, this concurrency must somehow be controlled.&lt;/p&gt;

&lt;p&gt;To control concurrency in applications we can implement different approaches, but you should always take care to choose the best one, because a good developer avoids bringing unnecessary complexity to an application through over-design or overthinking.&lt;/p&gt;

&lt;p&gt;So, for concurrency management in an application, first you must understand the problem, then determine your critical points, and finally, based on the collected data and consultation with a domain expert, choose your approach. Distributed lock managers can handle your problems in most cases, but sometimes you have to choose another approach, like actor systems.&lt;/p&gt;

&lt;p&gt;That’s all! Hope you’ve enjoyed the article; feel free to contact me and send me your comments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://codeboulevard.com/2023/11/how-to-control-concurrency-in-distributed-systems-with-distributed-lock-manager/"&gt;This article was originally published on my personal blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cheers, &lt;strong&gt;Matt Ghafouri&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>concurrency</category>
      <category>redis</category>
      <category>dlm</category>
    </item>
    <item>
      <title>Microservice Roadmap</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Tue, 31 Aug 2021 20:00:02 +0000</pubDate>
      <link>https://dev.to/mattqafouri/microservice-roadmap-4mci</link>
      <guid>https://dev.to/mattqafouri/microservice-roadmap-4mci</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnuv3f814ydb2k98vz9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnuv3f814ydb2k98vz9u.png" alt="microservice roadmap"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Microservice architecture?
&lt;/h2&gt;

&lt;p&gt;Nowadays, with the rise of social media, fast internet, and so on, applications are used more and more. As a result of these behavior changes, monolithic applications need to deal with a tremendous number of changes.&lt;br&gt;
Most businesses constantly face new features that should be added to their application; on the other side, the amount of data to be processed keeps growing, and in many cases monolithic applications, because of drawbacks such as slow development velocity and slower deployment, barely support the agile methodology, which is the main idea for dealing with fast changes in the software industry. &lt;br&gt;
If you want to create a software project for a big or complex business, it’s better to start with microservice architecture (this is not true for startup projects or projects in the funding stage, because they have to be submissive to time-to-market limitations). &lt;br&gt;
So, one of the ways to boost an application’s flexibility, scalability, and many other qualities is to follow a flexible architecture like a microservice-based architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need a roadmap?
&lt;/h2&gt;

&lt;p&gt;As I understand it, many developers want to know how they should start this journey. Obviously, there are thousands of resources that can be used, but the problem is the dispersion of those resources. I decided to make this journey clearer by defining a roadmap for this learning curve.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does this roadmap work?
&lt;/h2&gt;

&lt;p&gt;A microservice-based architecture consists of several discrete units that work together to receive and process requests from various sources. Some of these units can be plug-ins, which means that in case of need you can either plug in a new unit or unplug one without disturbing the overall operation of the application. &lt;/p&gt;

&lt;p&gt;For example, if you have decided to implement a microservice architecture, you should be familiar with the various concerns in the life cycle of an application, such as persistence, logging, monitoring, load balancing, caching, etc. Besides, you should know which tools or which stack are most suitable for your application.&lt;/p&gt;

&lt;p&gt;Well, this is the main approach of this article: I will explain each concern, then introduce some of the best tools for satisfying the need.&lt;/p&gt;

&lt;p&gt;I explain each topic to answer these questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; What is it?&lt;/li&gt;
&lt;li&gt; Why should I use it?&lt;/li&gt;
&lt;li&gt; Which tools are better?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Notice that for the tools part I have mentioned just two or three tools per concern. Of course, there are many other tools; the criteria for choosing these were popularity, performance, being open source, and update frequency. &lt;br&gt;
Notice also that there are cloud-based services, which are out of the scope of this article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9dfknhxe40el04dirfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9dfknhxe40el04dirfs.png" alt="microservice diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have used the above diagram as the sample architecture diagram because it includes most of the microservice architecture components and has been recognized as a standard model. Of course, this diagram is not fully comprehensive, but I have tried to keep this article as simple as possible; maybe in the future I will revise the article and write a new version. I think the best mindset when explaining any concept is to always follow the KISS principle. &lt;br&gt;
The concepts explained in this article are:&lt;/p&gt;

&lt;p&gt;• Docker&lt;br&gt;
• Container orchestration&lt;br&gt;
• Docker container management&lt;br&gt;
• API gateway&lt;br&gt;
• Load Balancing&lt;br&gt;
• Service discovery&lt;br&gt;
• Event Bus&lt;br&gt;
• Logging&lt;br&gt;
• Monitoring And Alerting&lt;br&gt;
• Distributed tracing&lt;br&gt;
• Data Persistence&lt;br&gt;
• Caching&lt;br&gt;
• Cloud Provider&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                            Docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
Docker is an open-source platform for containerizing your applications together with the libraries and dependencies they need to run in various environments. With the help of Docker, development teams are able to package applications into containers.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
Docker is only one of the tools for containerizing applications, which means you are also able to create containers without it. The real benefit of Docker is that it makes this process easier, safer, and much simpler.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Docker Inc&lt;/code&gt;.&lt;/p&gt;
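&lt;p&gt;As a quick sketch of what containerizing looks like in practice, a Dockerfile describes the image for a service; the base image tag, file layout, and project name below are placeholders, not from a real project:&lt;/p&gt;

```dockerfile
# Illustrative Dockerfile for a small .NET service; the image tag,
# the ./publish folder, and MyService.dll are placeholder names.
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY ./publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyService.dll"]
```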

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                   Container Orchestration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
After you containerize your application, you will need a tool for managing the containers and performing manual and automated operations like horizontal scaling.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt; &lt;br&gt;
These tools provide services such as automated load balancing and high availability for your application.&lt;br&gt;
High availability is achieved by defining several manager nodes; if one manager node fails, the other managers keep the application services available.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Kubernetes or K8s&lt;/code&gt; ,&lt;code&gt;Docker Swarm&lt;/code&gt; &lt;/p&gt;
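&lt;p&gt;For illustration, a Kubernetes Deployment manifest like this sketch asks the orchestrator to keep three replicas of a service running, rescheduling containers when a node fails; the service name and image are made up:&lt;/p&gt;

```yaml
# Hypothetical Deployment: "orders-service" and the image tag are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3            # horizontal scaling: keep three instances running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: example/orders-service:1.0
```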

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                   Docker Container Management 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
Managing the Docker environment, configuration management, securing the environment, and similar concerns can be centralized and automated by a Docker container management tool.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
A Docker container management tool provides a GUI, so users do not have to deal with the CLI, which is not comfortable for everyone. These tools give developers a rich UI to build and publish their images, and their simplified interface makes operational tasks such as horizontally scaling services much easier.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Portainer&lt;/code&gt;, &lt;code&gt;DockStation&lt;/code&gt;, &lt;code&gt;Kitematic&lt;/code&gt;, &lt;code&gt;Rancher&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                           API Gateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
An API gateway can be considered an API management tool that works as middleware between your application services and different clients. An API gateway can manage many things, such as: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;Routing&lt;/code&gt;: the gateway receives all API requests and forwards them to the destination services.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Logging&lt;/code&gt;: you are able to log all requests in one place.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Authorization&lt;/code&gt;: the gateway checks whether the user is eligible to access the service; if not, the request can be short-circuited.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Performance profiling&lt;/code&gt;: you can measure the execution time of every request and find your application's bottlenecks.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Caching&lt;/code&gt;: by handling caching at the gateway level, you eliminate a lot of traffic to your services.&lt;/p&gt;

&lt;p&gt;In fact, it works as a reverse proxy: clients only need to know about your gateway, and the application services can be hidden from the outside.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
Without an API gateway, you may need to implement cross-cutting concerns in every service, for example logging the request and response of each service.&lt;br&gt;
Besides, if your application consists of several services, your clients need to know each service's address, and if a service address changes, several places have to be updated.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Kong&lt;/code&gt;, &lt;code&gt;Ocelot&lt;/code&gt;&lt;/p&gt;
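&lt;p&gt;The routing, authorization, and logging responsibilities above can be sketched in a few lines of Python; the routes, upstream addresses, and token check are hypothetical stand-ins for what a real gateway such as Kong or Ocelot configures:&lt;/p&gt;

```python
# Toy API gateway: centralized logging, an authorization short-circuit,
# and prefix-based routing. All names and addresses are illustrative.

ROUTES = {
    "/orders": "http://orders-service:8080",  # hypothetical upstreams
    "/users": "http://users-service:8080",
}

access_log = []

def handle(path, token):
    """Return (status, upstream) for an incoming request."""
    access_log.append(path)            # every request is logged in one place
    if token != "valid-token":         # authorization check
        return 401, None               # short-circuit: never reaches a service
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return 200, upstream       # forward to the destination service
    return 404, None

status, upstream = handle("/orders/42", "valid-token")
```

&lt;p&gt;Clients talk only to &lt;code&gt;handle&lt;/code&gt;; the upstream services stay hidden behind the gateway.&lt;/p&gt;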

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                      Load Balancing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
One of the most important reasons for choosing a microservice architecture is scalability, which means we can handle more requests by running more instances of our services. But which instance should receive a request, and how do clients know which instance of a service should handle it?&lt;br&gt;
The answer to these questions is load balancing: distributing incoming traffic among the instances of a service.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
In order to scale your independent services, you need to run several instances of them. With a load balancer, clients do not need to know which instance is serving their request.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Traefik&lt;/code&gt;, &lt;code&gt;NGINX&lt;/code&gt;, &lt;code&gt;Seesaw&lt;/code&gt;&lt;/p&gt;
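&lt;p&gt;A minimal sketch of round-robin, one of the simplest load-balancing strategies (balancers like NGINX also offer weighted and least-connections modes); the instance addresses are illustrative:&lt;/p&gt;

```python
# Round-robin load balancing: each call hands out the next instance in
# the list, so traffic is spread evenly across instances.

class RoundRobinBalancer:
    def __init__(self, targets):
        self.targets = list(targets)
        self.next_index = 0

    def pick(self):
        """Return the instance that should serve the next request."""
        target = self.targets[self.next_index % len(self.targets)]
        self.next_index += 1
        return target

balancer = RoundRobinBalancer(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
first_three = [balancer.pick() for _ in range(3)]
```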

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                        Service Discovery
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt; &lt;br&gt;
As the number of services in your application grows, services need to know each other's instance addresses, but in large applications with lots of services this cannot be handled by hand. So we need service discovery, which is responsible for providing the addresses of all components in your application: services can simply send a request to the service discovery service and get the addresses of the available instances.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
When you have several services in your application, service discovery is a must-have.&lt;br&gt;
Your services do not need to know each other's instance addresses; service discovery paves this way for you.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Consul&lt;/code&gt;, &lt;code&gt;Zookeeper&lt;/code&gt;, &lt;code&gt;Eureka&lt;/code&gt;, &lt;code&gt;etcd&lt;/code&gt; and &lt;code&gt;Keepalived&lt;/code&gt;&lt;/p&gt;
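&lt;p&gt;The idea can be sketched as a tiny in-memory registry; real tools such as Consul or etcd add health checks, TTLs, and replication on top of this pattern. The service names and addresses below are made up:&lt;/p&gt;

```python
# Minimal service registry: services register their instance addresses,
# and callers look them up by name instead of hard-coding addresses.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, address):
        """Announce an instance of a service."""
        self._services.setdefault(name, set()).add(address)

    def deregister(self, name, address):
        """Remove an instance, e.g. when it shuts down."""
        self._services.get(name, set()).discard(address)

    def lookup(self, name):
        """Return the currently known addresses for a service."""
        return sorted(self._services.get(name, set()))

registry = ServiceRegistry()
registry.register("orders", "10.0.0.1:80")
registry.register("orders", "10.0.0.2:80")
```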

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                          Event Bus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
In the microservice architecture pattern, you would use two different types of communication: &lt;code&gt;Sync&lt;/code&gt; and &lt;code&gt;Async&lt;/code&gt;.&lt;br&gt;
&lt;code&gt;Sync&lt;/code&gt; communication means services call each other directly through HTTP or gRPC calls.&lt;br&gt;
&lt;code&gt;Async&lt;/code&gt; communication means services interact with each other via a message bus or an event bus; there is no direct connection between services.&lt;br&gt;
Your architecture can use both communication styles simultaneously. For instance, in an online shop, you can publish a message whenever an order is registered, and the services subscribed to that channel will receive it. But sometimes you need a real-time inquiry, for instance the quantity of an item in stock, and then a gRPC or HTTP call between services gets you the response.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
If you want a scalable application with several services, one of the principles to follow is to create loosely coupled services that interact with each other through an event bus. An event bus also lets you plug a new service into the application simply by subscribing it to a series of specific messages.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;RabbitMQ&lt;/code&gt;, &lt;code&gt;Kafka&lt;/code&gt;&lt;/p&gt;
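&lt;p&gt;A toy in-process event bus shows the decoupling described above: the publisher never knows who consumes the event, and a new subscriber can be plugged in without changing the publisher. Brokers like RabbitMQ or Kafka provide the same model across processes and machines; the channel name here is made up:&lt;/p&gt;

```python
from collections import defaultdict

# Minimal publish/subscribe event bus: handlers subscribe to a channel,
# and publishers emit events on that channel without knowing the handlers.

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, handler):
        self._subscribers[channel].append(handler)

    def publish(self, channel, event):
        for handler in self._subscribers[channel]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("order.registered", received.append)  # e.g. a shipping service
bus.publish("order.registered", {"order_id": 42})   # the order service publishes
```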

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                         Logging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
When using the microservice architecture pattern, it is better to centralize your services' logs. These logs are used for debugging problems, or aggregated by type for analytics.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
If you ever need to debug a request, you will face difficulty if the services' logs are not gathered in one place. Centralized logging also lets you correlate the logs related to a specific request with a unique correlation id: all logs for that request, across different services, become accessible through this correlation id.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Elastic Logstash&lt;/code&gt;&lt;/p&gt;
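&lt;p&gt;A small Python sketch of the correlation-id idea: a &lt;code&gt;LoggerAdapter&lt;/code&gt; stamps every log line with the same id, so centralized logs can be filtered per request. In a real system the id would travel between services in an HTTP header (the logger name and format are assumptions for this sketch):&lt;/p&gt;

```python
import logging
import uuid

# Attach one correlation id to every log record of a request, so all
# lines for that request can be found together in centralized logs.

logging.basicConfig(format="%(correlation_id)s %(levelname)s %(message)s")

def request_logger(correlation_id=None):
    """Build a logger whose records all carry the same correlation id."""
    base = logging.getLogger("myservice")
    cid = correlation_id or str(uuid.uuid4())   # reuse the incoming id if any
    return logging.LoggerAdapter(base, {"correlation_id": cid}), cid

log, cid = request_logger()
log.warning("payment failed")   # this line carries cid in centralized logs
```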

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                    Monitoring And Alerting
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
In a microservice architecture, if you want a reliable application or service, you have to monitor the functionality, performance, communication, and every other aspect of your application.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
You need to monitor the overall functionality and health of your services, find performance bottlenecks, and prepare a plan to fix them. You can optimize the user experience by decreasing your service's downtime, which starts with defining early alerts on critical points. You also need to monitor the overall resource consumption of services and detect when they are under heavy load.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Prometheus&lt;/code&gt;, &lt;code&gt;Kibana&lt;/code&gt;, &lt;code&gt;Grafana&lt;/code&gt;&lt;/p&gt;
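&lt;p&gt;A sketch of the counting-and-alerting idea behind tools like Prometheus: count requests and errors, and fire an alert when the error ratio crosses a threshold. The metric names and the 10% threshold are illustrative choices, not from any real configuration:&lt;/p&gt;

```python
from collections import Counter

# Count requests and errors, and raise an "early alert" once the error
# ratio exceeds 10% after a minimum number of observations.

metrics = Counter()
alerts = []

def record_request(status_code):
    metrics["http_requests_total"] += 1
    if status_code >= 500:
        metrics["http_errors_total"] += 1
    total = metrics["http_requests_total"]
    errors = metrics["http_errors_total"]
    if total >= 10 and errors / total > 0.10:   # illustrative threshold
        alerts.append("high error rate")

for code in [200] * 9 + [500, 500]:
    record_request(code)
```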

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                    Distributed Tracing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
Debugging is always one of a developer's concerns. Tracing or debugging a monolith is straightforward, but in a microservice architecture a request may pass through several services, which makes it difficult to debug and trace because the codebase is not in one place. This is where a distributed tracing tool is helpful.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
Without a distributed tracing tool, it is frustrating, and sometimes impossible, to trace a request through different services. With these tools you can easily trace requests and events, and a rich UI demonstrates the flow of each request.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;OpenTelemetry&lt;/code&gt;, &lt;code&gt;Jaeger&lt;/code&gt;, &lt;code&gt;Zipkin&lt;/code&gt;&lt;/p&gt;
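&lt;p&gt;The core idea can be sketched as follows: one trace id follows the request across services, and each hop records a span that points at its parent. Real systems such as OpenTelemetry propagate these ids in request headers and ship spans, with timing data, to a collector; the service names here are made up:&lt;/p&gt;

```python
import uuid

# Each span shares the request's trace_id and references its parent span,
# which is what lets a tracing UI reconstruct the request's path.

spans = []

def start_span(name, trace_id=None, parent_id=None):
    span = {
        "name": name,
        "trace_id": trace_id or uuid.uuid4().hex,  # same id across all services
        "span_id": uuid.uuid4().hex,
        "parent_id": parent_id,
    }
    spans.append(span)
    return span

gateway = start_span("gateway")                    # request enters the system
orders = start_span("orders-service",              # downstream hop
                    trace_id=gateway["trace_id"],
                    parent_id=gateway["span_id"])
```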

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                     Data Persistence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
In most systems we need to persist data, because we will need it later for further processing, reporting, and so on. In order to persist data, we need a tool that writes our application's data to physical storage in some structure.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
In monolithic applications we used to have one or two persistence types, and most monolithic applications use relational databases like SQL Server, Oracle, or MySQL. In a microservice architecture we should follow the database-per-service pattern: keep each microservice's persistent data private to that service and accessible only via its API.&lt;br&gt;
You would then have different databases for different usages and scenarios. For instance, a reporting service might use a NoSQL database like Elasticsearch or MongoDB: these use a document-based structure, different from relational databases, which suits services with a high rate of reads.&lt;br&gt;
On the other hand, some microservices may need relational databases like Oracle or SQL Server, and others may need databases that support a graph or key-value structure.&lt;br&gt;
So, in a microservice architecture, you would need different types of databases according to each service's mission.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
Relational databases (RDBMS): &lt;code&gt;PostgreSQL&lt;/code&gt;, &lt;code&gt;MySQL&lt;/code&gt;, &lt;code&gt;SQL Server&lt;/code&gt;, &lt;code&gt;Oracle&lt;/code&gt;&lt;br&gt;
NoSQL databases: &lt;code&gt;MongoDB&lt;/code&gt;, &lt;code&gt;Cassandra&lt;/code&gt;, &lt;code&gt;Elasticsearch&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                      Caching 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
Caching reduces latency in the service-to-service communication of a microservice architecture. A cache is a high-speed data storage layer that stores a subset of data; when data is requested from a cache, it is delivered faster than from the data's primary storage location.&lt;br&gt;
&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
In a microservice architecture there are several strategies for implementing caching, such as:&lt;/p&gt;

&lt;p&gt;1- Embedded Cache (distributed and non-distributed)&lt;br&gt;
2- Client-Server Cache (distributed)&lt;br&gt;
3- Reverse Proxy Cache (Sidecar)&lt;/p&gt;

&lt;p&gt;In order to reduce latency, caching can be implemented in different layers; the result is faster response times. You can also implement distributed caching, which is accessible to several microservices and is sometimes used as a placeholder for shared resources, with usages such as:&lt;br&gt;
Rate limiting: the most common reason for rate limiting is to improve the availability of API-based services by avoiding resource starvation.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Redis (REmote DIctionary Server)&lt;/code&gt;, &lt;code&gt;Apache Ignite&lt;/code&gt;, &lt;code&gt;Hazelcast IMDG&lt;/code&gt; &lt;/p&gt;
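&lt;p&gt;A minimal cache-aside sketch with a TTL: look in the cache first and fall back to the primary store on a miss. The product lookup and TTL value are illustrative; a shared store like Redis would play the cache role across several services:&lt;/p&gt;

```python
import time

# Cache-aside with a time-to-live: entries expire so readers never see
# data that is too stale, and misses fall back to the primary store.

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:   # stale: evict and report a miss
            del self._data[key]
            return None
        return value

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=30)

def get_product(product_id, db):
    cached = cache.get(product_id)
    if cached is not None:
        return cached                        # cache hit: skip the database
    value = db[product_id]                   # primary storage lookup
    cache.set(product_id, value)
    return value
```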

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                     Cloud Provider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt;&lt;br&gt;
A cloud service provider is a third-party company offering a cloud-based platform, infrastructure, application, or storage services. Much like a homeowner would pay for a utility such as electricity or gas, companies typically have to pay only for the amount of cloud services they use, as business demands require.&lt;a href="https://azure.microsoft.com/en-us/overview/what-is-a-cloud-provider/" rel="noopener noreferrer"&gt;Microsoft&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most important categories of cloud providers are:&lt;br&gt;
1- Software as a Service (SaaS)&lt;br&gt;
2- Platform as a Service (PaaS)&lt;br&gt;
3- Infrastructure as a Service (IaaS)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why&lt;/strong&gt;&lt;br&gt;
One benefit of using cloud computing services is that firms can avoid the upfront cost and complexity of owning and maintaining their own IT infrastructure, and instead simply pay for what they use, when they use it. Today, rather than owning their own computing infrastructure or data centers, companies can rent access to anything from applications to storage.&lt;br&gt;
&lt;strong&gt;Tools&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Amazon Web Services (AWS)&lt;/code&gt; , &lt;code&gt;Microsoft Azure&lt;/code&gt;, &lt;code&gt;Google Cloud&lt;/code&gt;, &lt;code&gt;Alibaba Cloud&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, I tried to present a roadmap for the microservice architecture pattern. If you want to implement a microservice architecture from scratch, or migrate a monolith to microservices, you will need to know these concepts.&lt;br&gt;
Besides these concepts, there are others, such as service mesh, that could be part of this roadmap, but I have intentionally left them out for the sake of simplicity and brevity.&lt;br&gt;
Feel free to send me your comments and suggest other concepts to this roadmap.&lt;/p&gt;

&lt;p&gt;Follow me on &lt;a href="https://m-qafouri.medium.com/" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Related Article:&lt;br&gt;
&lt;a href="https://dev.to/majidqafouri/serialize-access-to-a-shared-resource-in-distributed-systems-with-dlm-distributed-lock-manager-5g7e"&gt;Control concurrency for shared resources in distributed systems with DLM&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>architecture</category>
      <category>programming</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Writing logs into Elastic with NLog , ELK and .Net 5.0 (Part 2)</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Thu, 12 Aug 2021 18:38:50 +0000</pubDate>
      <link>https://dev.to/mattqafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-part-2-3m43</link>
      <guid>https://dev.to/mattqafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-part-2-3m43</guid>
      <description>&lt;p&gt;In the previous article I demonstrated how we can write our log in our dotnet applications into &lt;code&gt;Elastic&lt;/code&gt; with &lt;code&gt;NLog&lt;/code&gt;, if you haven't read that &lt;a href="https://medium.com/@m.qafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-8d8898d1d50d"&gt;article&lt;/a&gt;,  I highly recommend reading it first.&lt;/p&gt;

&lt;p&gt;After I published the first part, I received some comments saying that when readers tried to write logs into Elastic, they faced some problems.&lt;/p&gt;

&lt;p&gt;I checked some of the cases and realized that they had enabled &lt;code&gt;Basic authentication&lt;/code&gt; in &lt;code&gt;Elastic&lt;/code&gt;. So, whenever &lt;code&gt;NLog&lt;/code&gt; tried to write logs into Elastic by calling the Elastic endpoint, they got an &lt;code&gt;authentication error&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
OriginalException: Elasticsearch.Net.ElasticsearchClientException: Failed to ping the specified node.. Call: Status code 401 from: HEAD /
---&amp;gt; Elasticsearch.Net.PipelineException: Failed to ping the specified node.
---&amp;gt; Elasticsearch.Net.PipelineException: Could not authenticate with the specified node. Try verifying your credentials or check your Shield configuration.
at Elasticsearch.Net.RequestPipeline.ThrowBadAuthPipelineExceptionWhenNeeded(IApiCallDetails details, IElasticsearchResponse response)
at Elasticsearch.Net.RequestPipeline.Ping(Node node)
--- End of inner exception stack trace ---
at Elasticsearch.Net.RequestPipeline.Ping(Node node)
at Elasticsearch.Net.Transport`1.Ping(IRequestPipeline pipeline, Node node)
at Elasticsearch.Net.Transport`1.Request[TResponse](HttpMethod method, String path, PostData data, IRequestParameters requestParameters)
--- End of inner exception stack trace ---
# Audit exception in step 1 PingFailure:
Elasticsearch.Net.PipelineException: Could not authenticate with the specified node. Try verifying your credentials or check your Shield configuration.
at Elasticsearch.Net.RequestPipeline.ThrowBadAuthPipelineExceptionWhenNeeded(IApiCallDetails details, IElasticsearchResponse response)
at Elasticsearch.Net.RequestPipeline.Ping(Node node).

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To solve the problem you should set credentials in your &lt;code&gt;nlog.config&lt;/code&gt; file. so change the &lt;code&gt;nlog.config&lt;/code&gt; file like this.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;requireAuth="true"&lt;/code&gt; &lt;br&gt;
&lt;code&gt;username="*******"&lt;/code&gt;&lt;br&gt;
&lt;code&gt;password="********"&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;After I added these attributes to my target everything was working well. But the other issue reported was the &lt;code&gt;SSL connection&lt;/code&gt; problem. If your Elastic endpoint has SSL, you may also face this error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Elasticsearch.Net.ElasticsearchClientException: Failed to ping the specified node.. Call: Status code unknown from: HEAD /
---&amp;gt; Elasticsearch.Net.PipelineException: Failed to ping the specified node.
---&amp;gt; Elasticsearch.Net.PipelineException: An error occurred trying to write the request data to the specified node.
---&amp;gt; System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
---&amp;gt; System.Security.Authentication.AuthenticationException: The remote certificate is invalid because of errors in the certificate chain: PartialChain
at System.Net.Security.SslStream.SendAuthResetSignal(ProtocolToken message, ExceptionDispatchInfo exception)
at System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirst, Byte[] reAuthenticationData, Boolean isApm)
at System.Net.Http.ConnectHelper.EstablishSslConnectionAsyncCore(Boolean async, Stream stream, SslClientAuthenticationOptions sslOptions, CancellationToken cancellationToken).

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One solution to fix this issue is &lt;code&gt;disabling certificate validation&lt;/code&gt; on Elastic endpoint. To achieve this, you need to add another attribute to your &lt;code&gt;Elastic target&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;DisableCertificateValidation="true"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now your Elastic target in &lt;code&gt;nlog.config&lt;/code&gt; file should be something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&amp;lt;target xsi:type="ElasticSearch" 
        name="elastic"               
        index="MyService-${date:format=yyyy.MM.dd}" 
        layout ="MyService-|${longdate}|${event-properties:item=EventId_Id}|${uppercase:${level}}|${logger}|${message} ${exception:format=tostring}|url: ${aspnet-request-url}|action: ${aspnet-mvc-action}" 
        includeAllProperties="true"
        requireAuth="true" 
        username="*******"
        password="*********" 
        uri="https://elasticSampleaddress.com:9200" /&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all. I hope you find this article useful; let me know if you have any comments.&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://dev.to/majidqafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-246c"&gt;Writing logs into Elastic with NLog , ELK and .Net 5.0 (Part 1)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow me on &lt;a href="https://m-qafouri.medium.com/"&gt;Medium&lt;/a&gt;&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>dotnet</category>
      <category>microservices</category>
      <category>nlog</category>
    </item>
    <item>
      <title>Projects Folder Structures Best Practices</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Sat, 07 Aug 2021 16:13:45 +0000</pubDate>
      <link>https://dev.to/mattqafouri/projects-folder-structures-best-practices-g9d</link>
      <guid>https://dev.to/mattqafouri/projects-folder-structures-best-practices-g9d</guid>
      <description>&lt;p&gt;Structuring a solution’s folder is one of the first steps toward starting any software project. If you want to have a good structure with a good separation of tasks, you should be familiar with best practices.&lt;br&gt;
For this purpose, one of the good references which can be used is &lt;strong&gt;Github's good projects’ repositories&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why does folder structure matter?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1-&lt;/strong&gt; It makes it easier to understand the structure of the project, especially for new developers when they join a new team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2-&lt;/strong&gt; It helps to separate &lt;strong&gt;deployment&lt;/strong&gt; code from &lt;strong&gt;application&lt;/strong&gt; code (test code vs application code).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3-&lt;/strong&gt; It defines different categories for different objects (DTOs, contracts, filters, etc.).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4-&lt;/strong&gt; It defines categories for various assets, on which access permissions or other concerns can be applied for better management.&lt;/p&gt;

&lt;p&gt;Well, so far we have seen why the folder structure of a software solution matters.&lt;br&gt;
Keep in mind that in a microservice architecture we may see different folder structures, depending on the size of the project and other concerns (framework, language, etc.), but in most cases you can use this structure for your projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s check an example&lt;/strong&gt;&lt;br&gt;
Here We are going to define a folder structure from scratch. In this example, I am going to use &lt;strong&gt;Microsoft Visual Studio&lt;/strong&gt; as IDE, but the concept is the same if you want to use different IDE or languages.&lt;/p&gt;

&lt;p&gt;First of all, create a repository on GitHub with the properties shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lu350cj8zmo0qjv2iq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lu350cj8zmo0qjv2iq2.png" alt="github repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have created your repository, the next step is to clone the project to your local system. Open Visual Studio, select &lt;strong&gt;Clone repository&lt;/strong&gt; from the File menu, paste the Git URI in the window that opens, and click the &lt;strong&gt;Clone&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;You will see the existing items in the Solution Explorer panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrgsy27bty2urrujyc81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrgsy27bty2urrujyc81.png" alt="solution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image below, select &lt;strong&gt;Project…&lt;/strong&gt; from the &lt;strong&gt;File&lt;/strong&gt; menu, then select the &lt;strong&gt;ASP.NET Core Empty&lt;/strong&gt; project type (obviously, you can select your own project type).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx1hmu3g7we4h6febpqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx1hmu3g7we4h6febpqt.png" alt="folder structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Configuration&lt;/strong&gt; window, set the properties and click on the &lt;strong&gt;Next&lt;/strong&gt; button&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmykszfbpcxtzvp197whd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmykszfbpcxtzvp197whd.png" alt="visual studio"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, I want to use a &lt;strong&gt;Git&lt;/strong&gt; command to create a &lt;strong&gt;.sln&lt;/strong&gt; file for our solution, if you haven’t installed &lt;strong&gt;GitBash&lt;/strong&gt; on your local system, use this &lt;a href="https://git-scm.com/download/win" rel="noopener noreferrer"&gt;link&lt;/a&gt; to install.&lt;/p&gt;

&lt;p&gt;Git is a set of command-line utility programs that are designed to execute in a Unix-style command-line environment. &lt;a href="https://www.atlassian.com/git/tutorials/git-bash#:~:text=At%20its%20core%2C%20Git%20is,in%20Unix%20command%20line%20terminals.&amp;amp;text=In%20Windows%20environments%2C%20Git%20is,of%20higher%20level%20GUI%20applications." rel="noopener noreferrer"&gt;ReadMore&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the cloned folder in File Explorer.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7cvsin5ctvzmpmj8l8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7cvsin5ctvzmpmj8l8l.png" alt="ww"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in the window that opens, highlight the address bar, type CMD, and hit Enter. The &lt;strong&gt;Command Prompt&lt;/strong&gt; will appear; type the following command to create a &lt;strong&gt;.sln&lt;/strong&gt; file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dotnet new sln&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks8a1grhr2cdo9pwyoru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks8a1grhr2cdo9pwyoru.png" alt="gitbash"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To open the solution in Visual Studio, double-click the newly created solution file. If you have followed the previous steps, you will have a solution with this structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5sanilub4uk4wyhwm5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5sanilub4uk4wyhwm5d.png" alt="ssss"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the root folder and create two folders beside the solution file (&lt;strong&gt;src&lt;/strong&gt; and &lt;strong&gt;test&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8ti04hwq3kmjhjr0f2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8ti04hwq3kmjhjr0f2j.png" alt="structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From now on, every project that belongs to this solution should be located inside the &lt;strong&gt;src&lt;/strong&gt; folder, and all test projects should be located inside the &lt;strong&gt;test&lt;/strong&gt; folder.&lt;/p&gt;

&lt;p&gt;Here you can add all the other projects you need (&lt;em&gt;if you use Visual Studio as your IDE, keep in mind that creating a solution folder in Visual Studio won’t create a physical folder; you need to create it manually&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8g9ckotdql2xunup5u8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8g9ckotdql2xunup5u8.png" alt="visual project"&gt;&lt;/a&gt;&lt;/p&gt;
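&lt;p&gt;If you prefer to stay in the terminal, the same layout can be scaffolded with the .NET CLI. The sketch below is only illustrative: the project names (&lt;code&gt;MyApp.Api&lt;/code&gt;, &lt;code&gt;MyApp.Tests&lt;/code&gt;) and templates are placeholders, not part of this article’s solution, and it assumes the solution file created above already exists in the current folder.&lt;/p&gt;

```shell
# Create the physical folders beside the solution file
mkdir -p src test

# Scaffold an example project and an example test project
# (project names and templates are illustrative placeholders)
dotnet new webapi -o src/MyApp.Api
dotnet new xunit -o test/MyApp.Tests

# Register both projects in the existing solution file
dotnet sln add src/MyApp.Api/MyApp.Api.csproj
dotnet sln add test/MyApp.Tests/MyApp.Tests.csproj
```

&lt;p&gt;After reloading the solution in Visual Studio, the new projects should appear under it.&lt;/p&gt;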

&lt;p&gt;You may also need to add other folders beside your projects, such as:&lt;br&gt;
&lt;strong&gt;docs&lt;/strong&gt;, &lt;strong&gt;tools&lt;/strong&gt;, &lt;strong&gt;releases&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36b6e1c76mpk2rke2k3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36b6e1c76mpk2rke2k3b.png" alt="conclusion"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Structuring the folders of a solution is one of the most important steps when setting up a software project.&lt;br&gt;
&lt;em&gt;A good beginning makes a good ending&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Follow me on &lt;a href="https://m-qafouri.medium.com/" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may also like:&lt;br&gt;
&lt;a href="https://dev.to/majidqafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-246c"&gt;Writing logs into Elastic with NLog , ELK and .Net 5.0&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cleancode</category>
      <category>softwareengineering</category>
      <category>programming</category>
      <category>coding</category>
    </item>
    <item>
      <title>Writing logs into Elastic with NLog , ELK and .Net 5.0 (part 1)</title>
      <dc:creator>Matt Ghafouri</dc:creator>
      <pubDate>Wed, 04 Aug 2021 10:25:58 +0000</pubDate>
      <link>https://dev.to/mattqafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-246c</link>
      <guid>https://dev.to/mattqafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-246c</guid>
<description>&lt;p&gt;If you are using a Microservice-based architecture, one of the challenges is integrating and monitoring the application logs from different services, and being able to search this data by message text, source, and so on. You can also read the &lt;a href="https://dev.to/majidqafouri/microservice-roadmap-4mci"&gt;microservice roadmap&lt;/a&gt; here.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post assumes you already have a general idea of Elastic and Dotnet core and what it's used for, so if it's new to you, take a look at the Elastic official website.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, what is the ELK Stack?&lt;br&gt;
“ELK” is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch.&lt;/p&gt;

&lt;p&gt;On the other hand, NLog is a flexible and free logging platform for various .NET platforms, including .NET Standard. NLog makes it easy to write to several targets (database, file, console) and to change the logging configuration on the fly.&lt;/p&gt;

&lt;p&gt;If you combine these two powerful tools, you get a good, easy way to process and persist your application’s logs.&lt;br&gt;
In this article, I’m going to use NLog to write my .NET 5.0 web application logs into Elastic. After that, I will show you how to monitor and search your logs with different filters in Kibana.&lt;br&gt;
First of all, open Visual Studio and select a new project (ASP.NET Core Web API). (If you want to know the best practices for the folder structure of your project, read my &lt;a href="https://medium.com/@m.qafouri/projects-folder-structures-best-practices-706e4136aaca" rel="noopener noreferrer"&gt;article&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkowygw86fo6sj1sanw3n.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkowygw86fo6sj1sanw3n.PNG" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the next page, set the target framework to &lt;strong&gt;.NET 5.0 (Current)&lt;/strong&gt; and also check the &lt;strong&gt;Enable OpenAPI support&lt;/strong&gt; option to use Swagger in the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfjpzgoqajgh7fx1pbb6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfjpzgoqajgh7fx1pbb6.PNG" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need some NuGet packages. To install them, open the &lt;strong&gt;Tools&lt;/strong&gt; menu, select &lt;strong&gt;NuGet Package Manager&lt;/strong&gt;, and then &lt;strong&gt;Manage NuGet Packages for Solution&lt;/strong&gt;.&lt;br&gt;
In the opened panel, search for the packages below and install them one by one:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;NLog.Web.AspNetCore&lt;/code&gt;&lt;br&gt;
&lt;code&gt;NLog.Targets.ElasticSearch&lt;/code&gt;&lt;/p&gt;
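&lt;p&gt;Alternatively, if you prefer the command line over the NuGet UI, the same packages can be added with the standard &lt;code&gt;dotnet add package&lt;/code&gt; command, run from the project folder (shown here only as an alternative, not a step from the original walkthrough):&lt;/p&gt;

```shell
# Run inside the Web API project folder
dotnet add package NLog.Web.AspNetCore
dotnet add package NLog.Targets.ElasticSearch
```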

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr03os5v5jdtpvxruffv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr03os5v5jdtpvxruffv.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you have installed the packages, you need to add a config file for NLog. To add the file, right-click the current project in the solution and select &lt;strong&gt;Add&lt;/strong&gt; =&amp;gt; &lt;strong&gt;New Item&lt;/strong&gt;, then add a &lt;code&gt;web configuration&lt;/code&gt; file and name it &lt;code&gt;nlog.config&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx88x7oaeqz1m0i3shish.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx88x7oaeqz1m0i3shish.PNG" alt="test"&gt;&lt;/a&gt;&lt;br&gt;
Open the newly added file and paste the code below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="p"&gt;&amp;lt;?&lt;/span&gt;&lt;span class="n"&gt;xml&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"1.0"&lt;/span&gt; &lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"utf-8"&lt;/span&gt; &lt;span class="p"&gt;?&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;nlog&lt;/span&gt; &lt;span class="n"&gt;xmlns&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"http://www.nlog-project.org/schemas/NLog.xsd"&lt;/span&gt;
      &lt;span class="n"&gt;xmlns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;xsi&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"http://www.w3.org/2001/XMLSchema-instance"&lt;/span&gt;
      &lt;span class="n"&gt;autoReload&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;add&lt;/span&gt; &lt;span class="n"&gt;assembly&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"NLog.Web.AspNetCore"&lt;/span&gt;&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;add&lt;/span&gt; &lt;span class="n"&gt;assembly&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"NLog.Targets.ElasticSearch"&lt;/span&gt;&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;


    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;targets&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"elastic"&lt;/span&gt; &lt;span class="n"&gt;xsi&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"ElasticSearch"&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;" MyServiceName-${date:format=yyyy.MM.dd}"&lt;/span&gt;
                &lt;span class="n"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"http://localhost:9200"&lt;/span&gt;
                &lt;span class="n"&gt;layout&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"API:MyServiceName|${longdate}|${event-properties:item=EventId_Id}|${uppercase:${level}}|${logger}|${message} ${exception:format=tostring}|url: ${aspnet-request-url}|action: ${aspnet-mvc-action}"&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;targets&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;rules&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"*"&lt;/span&gt; &lt;span class="n"&gt;minlevel&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Debug"&lt;/span&gt; &lt;span class="n"&gt;writeTo&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"elastic"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;rules&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;nlog&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the target tag, we define our configuration, such as the Elastic service URI, the layout pattern, the Elastic index name, and so on.&lt;br&gt;
Let’s take a look at these tags and their attributes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Index&lt;/strong&gt;: &lt;code&gt;MyServiceName-${date:format=yyyy.MM.dd}&lt;/code&gt;: this means we get a separate index for each day’s logs. For example, &lt;code&gt;MyServiceName-2021.08.03&lt;/code&gt; contains all the logs from the third of August.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uri&lt;/strong&gt;: the address of your Elasticsearch instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layout&lt;/strong&gt;: the pattern of your log messages; here you determine which information each log entry should contain.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;rules&lt;/strong&gt; tag, the &lt;code&gt;minlevel&lt;/code&gt; attribute defines the minimum level at which logs are written.&lt;/p&gt;

&lt;p&gt;You can read more details about the NLog config tags &lt;a href="https://github.com/NLog/NLog/wiki/Getting-started-with-ASP.NET-Core-5" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
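&lt;p&gt;As an example of tuning the rules: if the Debug threshold produces too much framework noise, a common pattern is to add a filtering rule before the catch-all one. This fragment is not part of the original config above, but &lt;code&gt;maxlevel&lt;/code&gt; and &lt;code&gt;final&lt;/code&gt; are standard NLog rule attributes:&lt;/p&gt;

```xml
&lt;rules&gt;
    &lt;!-- Skip Microsoft.* framework logs below Warning; final="true" stops further processing --&gt;
    &lt;logger name="Microsoft.*" maxlevel="Info" final="true" /&gt;
    &lt;!-- Everything else from Debug upward goes to the Elastic target, as in the config above --&gt;
    &lt;logger name="*" minlevel="Debug" writeTo="elastic" /&gt;
&lt;/rules&gt;
```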

&lt;p&gt;Now that we understand the structure and meaning of the config file, the next step is to set up the NLog logger instead of the &lt;strong&gt;.NET built-in logger&lt;/strong&gt;. To achieve this, open &lt;strong&gt;Program.cs&lt;/strong&gt; and change the Main method like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;NLogBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ConfigureNLog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"nlog.config"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;GetCurrentClassLogger&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="k"&gt;try&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"init main"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="nf"&gt;CreateHostBuilder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exception&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Stopped program because of exception"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="k"&gt;throw&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;finally&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;NLog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LogManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Shutdown&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Change the &lt;strong&gt;CreateHostBuilder&lt;/strong&gt; method in &lt;strong&gt;Program.cs&lt;/strong&gt; as well:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="n"&gt;IHostBuilder&lt;/span&gt; &lt;span class="nf"&gt;CreateHostBuilder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
            &lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateDefaultBuilder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ConfigureWebHostDefaults&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;webBuilder&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;webBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UseStartup&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Startup&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
                &lt;span class="p"&gt;})&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ConfigureLogging&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ClearProviders&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
                    &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SetMinimumLevel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LogLevel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Trace&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="p"&gt;})&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;UseNLog&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;  &lt;span class="c1"&gt;// NLog: Setup NLog for Dependency injection&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the next step, open &lt;strong&gt;Startup.cs&lt;/strong&gt; and add the code below to the &lt;strong&gt;ConfigureServices&lt;/strong&gt; method:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="n"&gt;services&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddLogging&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now you are able to write your logs into Elastic by injecting &lt;strong&gt;ILogger&lt;/strong&gt; into any class constructor.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;ApiController&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;Route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"[controller]"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;WeatherForecastController&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ControllerBase&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;ILogger&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;WeatherForecastController&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;WeatherForecastController&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ILogger&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;WeatherForecastController&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;HttpGet&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;IActionResult&lt;/span&gt; &lt;span class="nf"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;   
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogDebug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Debug message"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogTrace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Trace message"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error message"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogWarning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Warning message"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogCritical&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Critical message"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Information message"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From now on, if you run your application and send a request with Postman, all types of logs will be written into Elastic.&lt;/p&gt;
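&lt;p&gt;One detail worth knowing before you search these logs: if you log with message templates instead of string concatenation, NLog keeps the values as named properties on the log event, and depending on target options such as &lt;code&gt;includeAllProperties&lt;/code&gt; they can be indexed as separate fields in Elastic. A small illustrative sketch (the action and the &lt;code&gt;city&lt;/code&gt; parameter are made up for this example, not part of the controller above):&lt;/p&gt;

```csharp
[HttpGet("{city}")]
public IActionResult GetByCity(string city)
{
    // "{City}" is captured as a named property on the log event,
    // not just interpolated into the message text
    _logger.LogInformation("Weather requested for {City}", city);
    return Ok();
}
```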

&lt;p&gt;Well, we have successfully configured our application to write its logs into Elastic, but how can we monitor and search those logs? For this purpose, Elastic provides Kibana, a tool that lets users visualize Elasticsearch data with charts and graphs.&lt;br&gt;
After you install the Elastic stack, you can access &lt;a href="http://localhost:5601/app/kibana/" rel="noopener noreferrer"&gt;Kibana&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwjhm515fy4fkyf8o1ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwjhm515fy4fkyf8o1ja.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First of all, we need to create an index pattern through which our application’s logs will be fetched. To do this, select the &lt;code&gt;Management&lt;/code&gt; option in the left panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frarp5ql2qcvjbelqk7ml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frarp5ql2qcvjbelqk7ml.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the management panel, select the &lt;code&gt;Index Patterns&lt;/code&gt; link on the left side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6n8ogfb71lkaaoq1sp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6n8ogfb71lkaaoq1sp3.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the panel that appears, click the &lt;code&gt;Create index pattern&lt;/code&gt; button. In the &lt;code&gt;Create index pattern&lt;/code&gt; panel there is an &lt;code&gt;Index pattern&lt;/code&gt; input box, where you should enter your index name (in our example we set the index name in &lt;strong&gt;nlog.config&lt;/strong&gt; to &lt;code&gt;MyServiceName-${date:format=yyyy.MM.dd}&lt;/code&gt;).&lt;br&gt;
If you want your pattern to use a wildcard instead of the date, enter &lt;code&gt;myservicename-*&lt;/code&gt; (this creates a pattern that matches all indices with the &lt;code&gt;myservicename-&lt;/code&gt; prefix; note that Elasticsearch stores index names in lowercase).&lt;/p&gt;

&lt;p&gt;If any logs match this pattern, a success message will be shown beneath the input box. You can then select the pattern and click Next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqds6rn0dlrany5l6215g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqds6rn0dlrany5l6215g.png" alt="test"&gt;&lt;/a&gt;&lt;br&gt;
In the next panel, you can add a &lt;code&gt;Time filter&lt;/code&gt; to your pattern, which lets you filter logs by insertion date. After selecting &lt;code&gt;@timestamp&lt;/code&gt;, click the &lt;code&gt;Create index pattern&lt;/code&gt; button to complete the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsp8hbezchd2hngipz2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsp8hbezchd2hngipz2h.png" alt="test"&gt;&lt;/a&gt;&lt;br&gt;
Now click the &lt;code&gt;Discover&lt;/code&gt; link in the left panel to go to the log dashboard.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0lfdobxunpxvj38qaxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0lfdobxunpxvj38qaxj.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we have several logs of different types from our application. If you do not see any logs, make sure you have selected your newly defined pattern on the left side of the dashboard.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i5htvc6dulcr0qvede4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i5htvc6dulcr0qvede4.png" alt="test"&gt;&lt;/a&gt;&lt;br&gt;
To filter logs by message text or insertion date, you can use the filter bar located beneath the menu bar.&lt;/p&gt;

&lt;p&gt;For example, in this picture I have searched for logs that contain the word &lt;strong&gt;critical&lt;/strong&gt; in their log message.&lt;br&gt;
After you set your filter, click the green &lt;code&gt;Update&lt;/code&gt; button on the right side.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4bn4mopn3e5741hh665.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4bn4mopn3e5741hh665.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it. Now you can easily filter your application's logs with different patterns. &lt;/p&gt;

&lt;p&gt;For more information about Kibana’s features, you can use this &lt;a href="https://www.elastic.co/guide/en/kibana/current/get-started.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Aggregating application logs is one of the main benefits of combining the ELK stack with NLog. With this approach you can also easily export your application logs in any format and build aggregations over them, which is very helpful for analyzing your logs by type.&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://dev.to/majidqafouri/writing-logs-into-elastic-with-nlog-elk-and-net-5-0-246c"&gt;Writing logs into Elastic with NLog , ELK and .Net 5.0 (Part 1)&lt;/a&gt;&lt;br&gt;
Follow me on &lt;a href="https://m-qafouri.medium.com/" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may also like:&lt;br&gt;
&lt;a href="https://dev.to/majidqafouri/projects-folder-structures-best-practices-g9d"&gt;Projects Folder Structures Best Practices&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>elasticsearch</category>
      <category>csharp</category>
      <category>logstash</category>
    </item>
  </channel>
</rss>
