<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rayan Dasoriya</title>
    <description>The latest articles on DEV Community by Rayan Dasoriya (@rayandasoriya).</description>
    <link>https://dev.to/rayandasoriya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F201187%2F03ec2fa2-d8b7-4c26-968a-1fdb0916a2c2.jpg</url>
      <title>DEV Community: Rayan Dasoriya</title>
      <link>https://dev.to/rayandasoriya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rayandasoriya"/>
    <language>en</language>
    <item>
      <title>Serverless error-handling pipeline using Redis</title>
      <dc:creator>Rayan Dasoriya</dc:creator>
      <pubDate>Sat, 05 Sep 2020 20:39:44 +0000</pubDate>
      <link>https://dev.to/rayandasoriya/serverless-error-handling-pipeline-using-redis-54em</link>
      <guid>https://dev.to/rayandasoriya/serverless-error-handling-pipeline-using-redis-54em</guid>
      <description>&lt;p&gt;Errors are an essential part of an application. These are the thing that you don't want to occur, but if they do, then there should be some mechanism to detect them. These errors can be due to some service unavailability, bugs, or any other unforeseen issue. In this article, I am going to highlight how we used IBM Cloud Functions, a serverless offering by IBM, and Redis to log the errors in our Slack channel.&lt;/p&gt;

&lt;h1&gt;Background&lt;/h1&gt;

&lt;p&gt;We have a Node.js application that interacts with Salesforce CRM offerings and performs certain operations. From time to time, the application encounters an error. Whenever that happens, we want to detect the error and log it to our Slack channel so that appropriate action can be taken.&lt;/p&gt;

&lt;h1&gt;Technology used&lt;/h1&gt;

&lt;p&gt;We used &lt;a href="https://cloud.ibm.com/functions/"&gt;IBM Cloud Functions&lt;/a&gt;, a FaaS (Functions as a Service) platform based on Apache OpenWhisk. It allows us to run the application without worrying about server configuration. &lt;a href="https://redis.io/"&gt;Redis&lt;/a&gt; was used to store the error logs within our system. It is an in-memory key-value store with built-in replication and LRU eviction features.&lt;/p&gt;

&lt;h1&gt;Working&lt;/h1&gt;

&lt;p&gt;Our Node.js application runs in our IBM Cloud Kubernetes cluster and interacts with Salesforce CRM systems. Whenever it encounters an error, it triggers IBM Cloud Functions. At this point, the Node.js application continues its work in the Kubernetes cluster without worrying about the error; control of the error handling is transferred to IBM Cloud Functions.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q-pSDBAb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n1yu9sznl8dwx6suqb9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q-pSDBAb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n1yu9sznl8dwx6suqb9v.png" alt="Application Workflow"&gt;&lt;/a&gt;&lt;br&gt;
IBM Cloud Functions establishes a connection with the Redis cluster and stores the log there. We categorize the type of error and then check whether the error is eligible to be sent to the Slack channel. If it is, we send the message using Slack webhooks. Once the message is sent, we remove the log from Redis.&lt;/p&gt;
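As a rough sketch of the categorization step, the logic might look something like this. All category names, regex patterns, and eligibility rules here are illustrative assumptions, not the actual implementation:

```javascript
// Illustrative sketch only: the categories, patterns, and eligibility
// rules below are assumptions, not the article's real implementation.
function categorizeError(err) {
  if (/timeout|ECONNREFUSED|unavailable/i.test(err.message)) return 'service';
  if (/validation|bad request/i.test(err.message)) return 'input';
  return 'unknown';
}

// Decide whether this category of error should be forwarded to Slack.
function isSlackEligible(category) {
  return category === 'service' || category === 'unknown';
}

// Build the JSON body that would be POSTed to a Slack incoming webhook.
function buildSlackPayload(err, category) {
  return { text: `[${category}] ${err.message}` };
}
```

In the real pipeline, the payload would then be POSTed to the Slack webhook URL, and the corresponding Redis entry deleted on success.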

&lt;p&gt;We use Redis to make sure that our logs actually get processed. The next time we get an error, we check whether the Redis cluster still contains any error messages that need to be processed and retry sending them to Slack, which helps ensure we don't miss any errors.&lt;/p&gt;
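The retry idea can be sketched as follows, with a plain Map standing in for Redis (the store API and key names are assumptions for illustration):

```javascript
// A Map stands in for the Redis store in this sketch.
const pending = new Map(); // errorId -> error message

// Called when a new error arrives: persist it before trying to send.
function recordError(id, message) {
  pending.set(id, message);
}

// On each trigger, try to send every pending log; entries are removed
// only after a successful send, so failed sends are retried next time.
function flushPending(sendFn) {
  for (const [id, message] of pending) {
    try {
      sendFn(message); // plays the role of the Slack webhook call
      pending.delete(id);
    } catch (e) {
      // Keep the entry; it will be retried on the next invocation.
    }
  }
  return pending.size; // number of logs still awaiting delivery
}
```

Because delivery and deletion happen per entry, a failed webhook call leaves the log in the store for the next trigger instead of losing it.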

</description>
    </item>
    <item>
      <title>Kubernetes monitoring with Prometheus</title>
      <dc:creator>Rayan Dasoriya</dc:creator>
      <pubDate>Thu, 25 Jul 2019 16:49:11 +0000</pubDate>
      <link>https://dev.to/rayandasoriya/kubernetes-monitoring-with-prometheus-2l7k</link>
      <guid>https://dev.to/rayandasoriya/kubernetes-monitoring-with-prometheus-2l7k</guid>
      <description>&lt;p&gt;Microservices architecture is going to be one of the essential features in software development in the coming years. Packing a large monolithic application into small containers poses various advantages. One of the key advantages I can think of is that if certain things fail, then a part of the application will be down and it can auto-heal rather than crashing the whole application. This is one of the reasons that when Instagram, Facebook, and Whatsapp crashes, only certain functionalities stop working. The application, as a whole, is still working. This serves a great advantage. Also, when the demand for a particular service or component increases, the number of components can be increased or decreased easily. The small containers which I am referring here are the Docker containers and the application is deployed on Kubernetes. Kubernetes is a container orchestration software which manages the containers at various levels and enables the management of connections and endpoints of these containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/?source=post_page" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; or K8s is an open-source self-healing application which manages the deployment, scaling, and operation of these containers. It was originally developed by Google but later on, donated to CNCF. Since Kubernetes provides tons of services, there needs to be an easier way to monitor the activities in the Kubernetes cluster. This is possible by &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;. It is an open-source solution to monitor the metrics and manage the alerts in the system. It was developed by SoundCloud, but later joined CNCF as the second hosted project after Kubernetes. Prometheus provided a rich set of monitoring metrics and alert management system which helps the developers to monitor and get notified about any unusual activity or consumption.&lt;/p&gt;

&lt;h2&gt;Prometheus Operator&lt;/h2&gt;

&lt;p&gt;CoreOS launched the &lt;a href="https://github.com/coreos/prometheus-operator" rel="noopener noreferrer"&gt;Prometheus operator&lt;/a&gt; to ease the process of integrating K8s with Prometheus. It preserves the configuration of both K8s and Prometheus while installing and configuring the cluster. It provides easy monitoring for K8s services and deployments, along with managing the Prometheus, Grafana, and Alertmanager configuration.&lt;br&gt;
When a new version of the application is deployed, K8s manages the creation of the new pod and deletes the old one. Prometheus, on the other hand, constantly watches the K8s API and creates a new Prometheus configuration whenever it detects a change in the services or pods. It uses a ServiceMonitor, a CRD (Custom Resource Definition), to abstract the target configuration.&lt;/p&gt;
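As a rough illustration of the ServiceMonitor idea, a manifest might look like this (every name, label, and port below is a placeholder, not taken from the article):

```yaml
# Hypothetical ServiceMonitor; all names and labels are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitor
  labels:
    release: prometheus-operator   # label the operator's Prometheus selects on
spec:
  selector:
    matchLabels:
      app: example-app             # selects the Service to scrape
  endpoints:
    - port: metrics                # named port on the Service
      interval: 30s
```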
&lt;h2&gt;Installation&lt;/h2&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes&lt;/li&gt;
&lt;li&gt;Helm (Package installer for K8s)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Steps&lt;/h3&gt;

&lt;p&gt;Install the Prometheus operator in a different namespace; it is preferable to keep your monitoring containers in a separate namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm install stable/prometheus-operator --name prometheus-operator --namespace monitor

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything installed correctly, you will see these pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n monitor
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-operator-alertmanager-0          2/2     Running   0          13d
prometheus-operator-grafana-749b598b6c-t4r48             2/2     Running   0          13d
prometheus-operator-kube-state-metrics-d7b8b7666-zfqg5   1/1     Running   0          13d
prometheus-operator-operator-667dd7cbb7-hjbl6            1/1     Running   0          13d
prometheus-operator-prometheus-node-exporter-mgsqb       1/1     Running   0          13d
prometheus-prometheus-operator-prometheus-0              3/3     Running   1          13d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the dashboard, enter the following command and go to &lt;code&gt;http://localhost:9090&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward -n monitor prometheus-prometheus-operator-prometheus-0 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Prometheus Dashboard&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fztrejj1mdkknfsto0664.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fztrejj1mdkknfsto0664.png"&gt;&lt;/a&gt;&lt;br&gt;
You can enter your query to get the results about any particular instance or even a graph of it as shown in the figure above.&lt;br&gt;
To see the visual representation at each level, we use Grafana. It provides some great visual insights regarding the usage, health and other metrics. We can also add more custom metrics. We will get real-time analysis of the data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward $(kubectl get  pods --selector=app=grafana -n  monitor --output=jsonpath="{.items..metadata.name}") -n monitor  3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to &lt;code&gt;http://localhost:3000&lt;/code&gt; and enter ‘admin’ as the username and ‘prom-operator’ as the password. These are the available options:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fiih0cf7ss3bhy7ohgl0k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fiih0cf7ss3bhy7ohgl0k.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can get visual graphs by selecting any one of the options. Node-level metrics are shown here:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F66znm7td8s6tvx2n8jce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F66znm7td8s6tvx2n8jce.png"&gt;&lt;/a&gt;&lt;br&gt;
We can configure alerts in many ways. We can access the dashboard to configure Alertmanager by going to &lt;code&gt;http://localhost:9093&lt;/code&gt; after executing this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward -n monitor alertmanager-prometheus-operator-alertmanager-0 9093
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F9x1hllzbntj78nrjaruw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F9x1hllzbntj78nrjaruw.png"&gt;&lt;/a&gt;&lt;br&gt;
It’ll look like this. Here you can add more alerts and see the Slack API URL under the Status tab. We can send notifications to Slack, HipChat, or even email. Some templates are available &lt;a href="https://prometheus.io/docs/alerting/notification_examples/?source=post_page" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
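For instance, a minimal Alertmanager snippet for Slack notifications might look like this (the webhook URL and channel are placeholders):

```yaml
# Hypothetical Alertmanager configuration fragment; the api_url and
# channel values are placeholders.
route:
  receiver: slack-notifications
receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ
        channel: '#alerts'
        send_resolved: true
```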

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The use of microservices is only going to increase, and monitoring metrics and alert notifications are going to be an essential part of it. Prometheus provides optimal monitoring with an easy installation process.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
