<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ajit Chelat</title>
    <description>The latest articles on DEV Community by Ajit Chelat (@ajitchelat).</description>
    <link>https://dev.to/ajitchelat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F597543%2Fa36fa033-231f-4425-affd-cd3c6ba37e8a.jpeg</url>
      <title>DEV Community: Ajit Chelat</title>
      <link>https://dev.to/ajitchelat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ajitchelat"/>
    <language>en</language>
    <item>
      <title>How AIOps Helps in Application Monitoring</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Wed, 07 Jul 2021 16:10:53 +0000</pubDate>
      <link>https://dev.to/logiq/how-aiops-helps-in-application-monitoring-2h6i</link>
      <guid>https://dev.to/logiq/how-aiops-helps-in-application-monitoring-2h6i</guid>
      <description>&lt;p&gt;There’s no one-size-fits-all approach regarding application monitoring, especially for companies using applications in various cloud environments. Companies are rapidly investing in microservices, mobile apps, data science programs, data ops, etc. Subsequently, they’re also integrating monitoring tools to improve domain-centric monitoring abilities.&lt;/p&gt;

&lt;p&gt;AIOps tools help streamline application monitoring. They allow companies that depend on highly available applications to efficiently manage the complexities of IT workflows and monitoring tools. AIOps extends machine learning and automation abilities to IT operations. These robust technologies aim to detect vulnerabilities and issues and resolve them, identify operational trends, and simplify the remediation of problems that affect application performance and availability.&lt;/p&gt;

&lt;h2&gt;What Exactly Is AIOps?&lt;/h2&gt;

&lt;p&gt;AIOps is short for Artificial Intelligence for IT Operations. AIOps combines machine learning, data analytics, and many other AI technologies to automate the identification and remediation of common and recurring IT operations issues. AIOps leverages data from logs and event recordings to monitor assets and obtain visibility into dependencies without interfering with IT systems.&lt;/p&gt;

&lt;h2&gt;Capabilities of AIOps Platforms&lt;/h2&gt;

&lt;p&gt;AIOps platforms provide the following capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machine learning capabilities to help in identifying patterns in the collected data.&lt;/li&gt;
&lt;li&gt;A dedicated data platform for aggregating raw data and logs from various monitoring tools and data sources across your applications and infrastructure. &lt;/li&gt;
&lt;li&gt;Dashboards, analytics, and console integrations that help IT operations teams gain a single-pane view over their applications and infrastructure.&lt;/li&gt;
&lt;li&gt;Out-of-the-box integrations with tools used for IT service management, monitoring, agile development, collaboration, and log data collection, parsing, and ingestion.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How Does AIOps Work?&lt;/h2&gt;

&lt;p&gt;AIOps platforms are powered by algorithms that automate and simplify prominent aspects of IT operations and application monitoring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Selection&lt;/strong&gt;: The platform collects all the data generated by applications and infrastructure in the form of logs and events and analyzes it. Post-analysis, AIOps platforms highlight the data that indicates an issue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pattern Discovery&lt;/strong&gt;: AIOps platforms correlate and find relationships between different data elements in the form of patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inference&lt;/strong&gt;: AIOps determines the root causes of new and recurring issues, allowing companies to take proactive action to mitigate their implications. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration&lt;/strong&gt;: AIOps platforms simplify and promote collaboration across IT teams through unified dashboards and intelligent notification systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: AIOps works towards automating responses to issues and threats as much as possible, thereby making issue and threat remediation quick and straightforward. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Improved Application Monitoring with AIOps&lt;/h2&gt;

&lt;p&gt;The adoption of AIOps has numerous benefits – right from processing data from multiple sources faster and using that data to make data-driven decisions, to making IT operations more proactive by predicting and remediating performance issues across applications and deployments. Let’s take a closer look at how AIOps is helpful in improving your application monitoring efforts.&lt;/p&gt;

&lt;h3&gt;Detect Hidden Relationships&lt;/h3&gt;

&lt;p&gt;IT operations and monitoring form an extensive web of interdependencies; no system works independently. However, with so much data present, it is challenging to understand the relationships between systems. AIOps allows you to quickly evaluate performance metrics across different types of systems. This can help identify the impact of IT applications on the company’s overall performance and customer satisfaction. &lt;/p&gt;

&lt;p&gt;This is accomplished by initially working with the business to determine mission-critical activities for such applications. The next step is to gather data produced during the day-to-day tasks like orders, cancellations, transactions, etc. AIOps algorithms can be leveraged to identify patterns or clusters in the collected data, allowing businesses to understand the relationships better.&lt;/p&gt;

&lt;h3&gt;Optimizing The Use of Customer And Transaction Data&lt;/h3&gt;

&lt;p&gt;AIOps capabilities help with pattern identification, anomaly detection, categorization, and extrapolation. These are essential aspects of the big data analytics operations that organizations apply to transaction and customer data. Leveraging AIOps can help in understanding user behavior across broad IT systems. &lt;/p&gt;

&lt;p&gt;This makes it easier to monitor how modifications to applications will affect business operations. By harnessing internal application monitoring data, AIOps can bring together customer and transaction data effectively. When this information is readily available, a business can efficiently choose the right path for the application.&lt;/p&gt;

&lt;h3&gt;Forecasting The Issues&lt;/h3&gt;

&lt;p&gt;An essential role of AIOps is improving predictive analytics. It closely studies the current and past behavior of applications. This allows the technology to predict future scenarios, enabling the business to adjust its strategies. This proactive approach helps improve application performance and also confers competitive advantages. &lt;/p&gt;

&lt;p&gt;For instance, companies can identify changing trends in how users interact with apps, giving them a clear idea of the areas they need to focus on. Moreover, AIOps allows businesses to perform a deep analysis of a problem’s cause. Not just that, it can also take the necessary steps to eliminate the issue before it impacts performance.&lt;/p&gt;

&lt;h3&gt;Decrease The Response Time&lt;/h3&gt;

&lt;p&gt;By leveraging AIOps, companies can reduce the time it takes to respond to errors and outages. Experts believe that AIOps can reduce the cost of events like errors and &lt;a href="https://thenewstack.io/the-current-state-of-aiops/"&gt;outages by 30% to 40%&lt;/a&gt;. This signifies a massive saving, considering that the average cost a company bears during a service disruption is approximately $300,000 per hour. &lt;/p&gt;

&lt;p&gt;This is due to the technology’s ability to detect where data originates. Every system that a business uses produces a lot of data, making it harder to track the source of information. But AIOps manages this massive amount of data from a central location, allowing for better process and application security.&lt;/p&gt;

&lt;h3&gt;Bringing Together Silos&lt;/h3&gt;

&lt;p&gt;One of the hurdles in improving application performance is how siloed organizations can be. More than 90% of IT professionals say that most monitoring tools only provide them with information related to their areas of responsibility.&lt;/p&gt;

&lt;p&gt;But AIOps can deal with this issue by leveraging data analytics and machine learning. These technologies allow monitoring tools to watch tons of information streams. Such extensive monitoring makes it easier to spot problems that would otherwise go unnoticed under a siloed approach.&lt;/p&gt;

&lt;h2&gt;Final Thoughts&lt;/h2&gt;

&lt;p&gt;IT teams leverage a lot of application monitoring tools to maintain operational efficiency. However, each of these tools collects a massive amount of data that needs to be maintained. Teams can fail to detect vulnerabilities and issues in this complex web of data, leading to security threats. By harnessing the potential of AIOps, IT teams can automate and improve their application monitoring processes by leaps and bounds.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>apm</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>How to stream AWS CloudWatch logs to LOGIQ</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Sat, 26 Jun 2021 10:51:06 +0000</pubDate>
      <link>https://dev.to/logiq/how-to-stream-aws-cloudwatch-logs-to-logiq-388e</link>
      <guid>https://dev.to/logiq/how-to-stream-aws-cloudwatch-logs-to-logiq-388e</guid>
      <description>&lt;p&gt;AWS CloudWatch is an observability and monitoring service that provides you with actionable insights to monitor your applications, stay on top of performance changes, and optimize resource utilization while providing a centralized view of operational health. AWS CloudWatch collects operational data of your AWS resources, applications, and services running on AWS and on-prem servers in the form of logs, metrics, and events. CloudWatch then uses this data to help detect and troubleshoot issues and errors in your environments, visualize logs and metrics, set up and take automated actions, and uncover insights that help keep your applications and deployments running smoothly. &lt;/p&gt;

&lt;p&gt;AWS CloudWatch provides excellent observability for your applications and infrastructure hosted on AWS. But what about your applications and resources hosted with other service providers? While you can stream their logs into CloudWatch using proxies and exporters, it isn’t that straightforward. You’d have to monitor them separately using your service provider’s own monitoring tool, or perhaps build something in-house using Prometheus or Grafana. Why train your eyes to watch multiple monitoring tools when you can centralize monitoring and observability across your on-premise servers and cloud providers with LOGIQ? LOGIQ plugs into numerous data sources to centralize your logs and visualize them in a single pane regardless of the service provider. &lt;/p&gt;

&lt;p&gt;You can easily stream your AWS CloudWatch logs into LOGIQ, letting you monitor your AWS resources and applications along with everything else you’re watching with LOGIQ. You can also &lt;a href="https://logiq.ai/integrated-ui/"&gt;visualize and analyze&lt;/a&gt; your AWS CloudWatch logs in real-time and gain powerful insights into their performance and security.&lt;/p&gt;

&lt;p&gt;This guide will show you how you can stream your AWS CloudWatch logs into LOGIQ in no time. You can get yourself a free-forever instance of the &lt;a href="https://docs.logiq.ai/logiq-server/logiq-paas-community-edition"&gt;LOGIQ PaaS Community Edition&lt;/a&gt; and try out the steps listed in this article to stream your AWS CloudWatch logs to LOGIQ.&lt;/p&gt;

&lt;h2&gt;LOGIQ’s AWS CloudWatch Exporter Lambda function&lt;/h2&gt;

&lt;p&gt;Since we love keeping it simple at LOGIQ, we’ve built an AWS Lambda function that enables you to export your CloudWatch logs to your LOGIQ instance. This AWS Lambda function acts as a trigger for a CloudWatch log stream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kAxDH38---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15cntzx0itujudujpd1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kAxDH38---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15cntzx0itujudujpd1a.png" alt="How the LOGIQ CloudWatch Exporter Lambda function works"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Creating the LOGIQ CloudWatch Exporter Lambda Function&lt;/h2&gt;

&lt;p&gt;You can create the LOGIQ CloudWatch Exporter Lambda Function using the CloudFormation template available at &lt;a href="https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/cf.yaml"&gt;https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/cf.yaml&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Alternatively, you can also use the code available in our client integrations Bitbucket repository to create the Lambda function. &lt;/p&gt;

&lt;p&gt;This CloudFormation template creates a Lambda function along with the permissions it needs. Before using this template, you’ll need to configure the following attributes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;APPNAME&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A readable application name for LOGIQ to partition logs by.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CLUSTERID&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A Cluster ID for LOGIQ to partition logs by.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;NAMESPACE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A namespace for LOGIQ to partition logs by.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;LOGIQHOST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;IP address or hostname of your LOGIQ instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;INGESTTOKEN&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JWT token to securely ingest logs into LOGIQ.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
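
&lt;p&gt;If you prefer the AWS CLI to the console, you can create a stack from the same template along the following lines. The stack name and parameter values below are placeholders, so substitute your own. Since the template creates IAM permissions for the Lambda function, you’ll likely need to acknowledge that with the &lt;code&gt;--capabilities&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation create-stack \
  --stack-name logiq-cloudwatch-exporter \
  --template-url https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/cf.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=APPNAME,ParameterValue=my-app \
    ParameterKey=CLUSTERID,ParameterValue=prod-cluster-01 \
    ParameterKey=NAMESPACE,ParameterValue=production \
    ParameterKey=LOGIQHOST,ParameterValue=logiq.example.com \
    ParameterKey=INGESTTOKEN,ParameterValue=&amp;lt;your-jwt-token&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;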

&lt;h2&gt;Creating and configuring the CloudWatch trigger&lt;/h2&gt;

&lt;p&gt;Once you’ve created the AWS Lambda function, it’s time to create and configure the CloudWatch trigger. On your AWS dashboard, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the AWS Lambda function you just created (logiq-cloudwatch-exporter).&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add Trigger&lt;/strong&gt;. &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ePNa72t0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r0phn9mo7lsmpz262pfk.png" alt="Adding a CloudWatch trigger"&gt;
&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Add Trigger&lt;/strong&gt; page, select &lt;strong&gt;CloudWatch Logs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Next, select the &lt;strong&gt;Log group&lt;/strong&gt; you’d like to stream to LOGIQ.&lt;/li&gt;
&lt;li&gt;Enter a &lt;strong&gt;Filter name&lt;/strong&gt; and optionally add a &lt;strong&gt;Filter pattern&lt;/strong&gt;. &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GpmM8lwx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67hby8yty67fofn7kjk9.png" alt="Configuring the CloudWatch trigger"&gt;
&lt;/li&gt;
&lt;/ol&gt;
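
&lt;p&gt;If you’d rather script this step than click through the console, the equivalent trigger can be set up with the AWS CLI: first grant CloudWatch Logs permission to invoke the function, then create a subscription filter on the log group. The log group name, region, account ID, and ARNs below are placeholders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Allow CloudWatch Logs to invoke the exporter Lambda function
aws lambda add-permission \
  --function-name logiq-cloudwatch-exporter \
  --statement-id cloudwatch-logs-trigger \
  --principal logs.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn "arn:aws:logs:us-east-1:123456789012:log-group:/my/log-group:*"

# Subscribe the log group to the Lambda function (empty pattern = all events)
aws logs put-subscription-filter \
  --log-group-name /my/log-group \
  --filter-name logiq-exporter \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:logiq-cloudwatch-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;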

&lt;p&gt;And that’s it! All new logs from the CloudWatch log group you configured are streamed directly to your LOGIQ instance.&lt;/p&gt;

&lt;p&gt;From here, you can easily view, query, visualize, and analyze your CloudWatch logs while detecting anomalies in real time, helping you keep your AWS applications and resources always on and performing at their best.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JVfKXrp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cevx0zpnpobtswhn6bnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JVfKXrp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cevx0zpnpobtswhn6bnx.png" alt="The LOGIQ dashboard streaming logs from AWS CloudWatch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you enjoyed trying out this guide and the Community Edition of LOGIQ PaaS, let us know in the comments. You can also reach out to us if you'd like a detailed demo of the LOGIQ Observability platform and witness first-hand how LOGIQ can help you derive more value from your log data.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>Shipping and Visualizing Jenkins Logs with LOGIQ</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Fri, 25 Jun 2021 14:54:23 +0000</pubDate>
      <link>https://dev.to/logiq/shipping-and-visualizing-jenkins-logs-with-logiq-2ccf</link>
      <guid>https://dev.to/logiq/shipping-and-visualizing-jenkins-logs-with-logiq-2ccf</guid>
      <description>&lt;p&gt;Jenkins is by far the leading open-source automation platform. A majority of developers turn to Jenkins to automate processes in their development, test, and deployment pipelines. Jenkins’ support for plugins helps automate nearly every task and set up robust continuous integration and continuous delivery pipelines. &lt;/p&gt;

&lt;p&gt;Jenkins provides logs for every Job it executes. These logs offer detailed records related to a Job, such as a build name and number, time for completion, build status, and other information that help analyze the results of running the Job. A typical large-scale implementation of Jenkins in a multi-node environment with multiple pipelines generates tons of logs, making it challenging to identify errors and analyze their root cause(s) whenever there’s a failure. Setting up centralized observability for your Jenkins setup can help overcome these challenges by providing a single pane to log, visualize, and analyze your Jenkins logs. A robust observability platform enables you to debug pipeline failures, optimize resource allocation, and identify bottlenecks in your pipeline that hamper faster delivery. &lt;/p&gt;

&lt;p&gt;We’ve all come across numerous articles that discuss using the popular ELK stack to track and analyze Jenkins logs. While the ELK stack is a popular choice for logging and monitoring, its &lt;a href="https://logiq.ai/major-challenges-in-elk-stack-logging/" rel="noopener noreferrer"&gt;use can be a little challenging&lt;/a&gt;. It performs brilliantly in simple, single-use scenarios but struggles with manageability and scalability in large-scale deployments. Additionally, its associated costs (and changes in Elastic licensing) might raise a few eyebrows. LOGIQ, on the other hand, is a true-blue observability PaaS that helps you ingest log data from &lt;a href="https://logiq.ai/k8s/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, &lt;a href="https://logiq.ai/monitoring/" rel="noopener noreferrer"&gt;on-prem servers or cloud VMs, applications&lt;/a&gt;, and &lt;a href="https://logiq.ai/integrations/" rel="noopener noreferrer"&gt;several other data sources&lt;/a&gt; without a price shock. As LOGIQ uses S3 as its primary storage layer, you get better control and ownership over your data and cost reductions of as much as 10X in large-scale deployments. In this article, the first of a two-part series, we’ll demonstrate how you can get started with Jenkins log analysis using LOGIQ. We’ll walk you through installing Logstash, setting up your Jenkins instance, and ingesting log data into LOGIQ to visualize and analyze your Jenkins logs. &lt;/p&gt;

&lt;h2&gt;Before you begin&lt;/h2&gt;

&lt;p&gt;Before we dive into the demo, here’s what you’d need in case you’d like to follow along and try integrating your Jenkins logs with LOGIQ:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A LOGIQ instance&lt;/strong&gt;: If you don’t have access to a LOGIQ instance, you can quickly spin up the &lt;a href="https://docs.logiq.ai/logiq-server/logiq-paas-community-edition"&gt;free-forever Community Edition of LOGIQ PaaS&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A Jenkins instance&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Installing Logstash&lt;/h2&gt;

&lt;p&gt;Logstash is a free server-side data processing pipeline that ingests data from many sources, transforms it, and then sends it to your favourite stash. We’ll use Logstash as an intermediary between Jenkins and LOGIQ that grooms your Jenkins log data before it’s ingested by LOGIQ. &lt;/p&gt;

&lt;p&gt;To install Logstash on your local (Ubuntu) machine, run the following commands in succession:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install apt-transport-https
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For detailed instructions on installing Logstash on other OSs, refer to the &lt;a href="https://www.elastic.co/guide/en/logstash/current/installing-logstash.html" rel="noopener noreferrer"&gt;official Logstash documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Now that we’ve installed Logstash, download the flatten configuration and place it in your desired directory. The &lt;a href="https://github.com/hegdesandesh25/Logstashconfig/blob/main/FlattenJSON.rb" rel="noopener noreferrer"&gt;flatten configuration&lt;/a&gt; helps structure data before ingestion into LOGIQ. Once you’ve downloaded the flatten configuration, use the following Logstash configuration to push your Jenkins logs to LOGIQ:&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
  tcp {
    port =&amp;gt; 12345
    codec =&amp;gt; json
  }
}
output { stdout { codec =&amp;gt; rubydebug } }
filter {
    split {
        field =&amp;gt; "message"
    }
  mutate {
    add_field =&amp;gt; { "cluster_id" =&amp;gt; "JENKINS-LOGSTASH" }
    add_field =&amp;gt; { "namespace" =&amp;gt; "jenkins-ci-cd-1" }
    add_field =&amp;gt; { "application" =&amp;gt; "%{[data][fullProjectName]}" }
    add_field =&amp;gt; { "proc_id" =&amp;gt; "%{[data][displayName]}" }
  }
ruby {
        path =&amp;gt; "/home/yourpath/flattenJSON.rb"
        script_params =&amp;gt; { "field" =&amp;gt; "data" }
    }
}
output {
  http {
        url =&amp;gt; "http://&amp;lt;logiq-instance&amp;gt;/v1/json_batch"
        http_method =&amp;gt; "post"
        format =&amp;gt; "json_batch"
        content_type =&amp;gt; "application/json"
        pool_max =&amp;gt; 300
        pool_max_per_route =&amp;gt; 100
       }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Make sure you change the path in the configuration to the path where you downloaded the flatten configuration file. Also, remember to replace the LOGIQ endpoint with the endpoint of your LOGIQ instance. If you haven’t provisioned LOGIQ yet, you can do so by following one of our &lt;a href="https://docs.logiq.ai/logiq-server/quickstart-guide" rel="noopener noreferrer"&gt;quickstart guides&lt;/a&gt;.&lt;/p&gt;
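
&lt;p&gt;Before starting the pipeline, it’s worth verifying that the configuration parses cleanly. Logstash can validate a configuration file and exit without running it; the paths below assume a default package install and that you saved the configuration as &lt;code&gt;/etc/logstash/logstash-sample.conf&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/logstash/bin/logstash -f /etc/logstash/logstash-sample.conf --config.test_and_exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;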

&lt;h2&gt;Setting up Jenkins&lt;/h2&gt;

&lt;p&gt;Now that we’ve got Logstash ready to go, let’s go ahead and configure Jenkins to use Logstash. For this demo, we’ve created two Jenkins pipeline jobs whose logs we’ll push to Logstash. You can use your own Jenkins logs when following along. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhvmfkikffwnejb7z62m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhvmfkikffwnejb7z62m.png" alt="The Jenkins dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To push Jenkins logs to Logstash, we first need to install the Logstash plugin on Jenkins. To install Logstash, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log on to your Jenkins instance. &lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Manage Jenkins&lt;/strong&gt; &amp;gt; &lt;strong&gt;Manage Plugins&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Search for &lt;strong&gt;Logstash&lt;/strong&gt; under &lt;strong&gt;Available&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Once Logstash shows up, click &lt;strong&gt;Install without restart&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwscvnj3w6poth2fusqw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwscvnj3w6poth2fusqw3.png" alt="Installing the Logstash plugin on Jenkins"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing Logstash, we’ll go ahead and configure and enable Jenkins to push logs to Logstash. To configure Jenkins, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Manage Jenkins&lt;/strong&gt; &amp;gt; &lt;strong&gt;Configure System&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Scroll down until you see &lt;strong&gt;Logstash&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Enter the &lt;strong&gt;Host name&lt;/strong&gt; and &lt;strong&gt;Port&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flocbxqw07a2urraou56y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flocbxqw07a2urraou56y.png" alt="Configuring the Logstash plugin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: In this example, we’ve entered the IP address and port number of the local Ubuntu machine on which we installed Logstash. Ensure that you provide the IP address and port number of the machine where you’ve installed Logstash. &lt;/p&gt;

&lt;p&gt;Your Jenkins instance is now ready to push logs to Logstash.&lt;/p&gt;

&lt;h2&gt;Shipping logs to LOGIQ&lt;/h2&gt;

&lt;p&gt;We’ve got Jenkins ready to ship logs to Logstash and Logstash prepared to pick them up and groom them for ingestion into LOGIQ. Let’s go ahead and start Logstash from the installation folder (&lt;code&gt;/usr/share/logstash&lt;/code&gt;) and pass the custom configuration file we prepared above using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/logstash# bin/logstash -f /etc/logstash/logstash-sample.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! Your logging pipeline is up and running. Now when you head over to the Logs page on your LOGIQ dashboard, you’ll see all of your Jenkins logs that Logstash pushed to LOGIQ. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnjh3upht6wy9r2g1wtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnjh3upht6wy9r2g1wtz.png" alt="The Logs page on your LOGIQ dashboard with Jenkins logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, you can create custom metrics from your logs, create events and alerts, and set up powerful dashboards that help visualize your log data. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobd9f9veil7a38m9khzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobd9f9veil7a38m9khzy.png" alt="Visualising your Jenkins log data using LOGIQ"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This completes our overview of shipping and visualizing your Jenkins logs with LOGIQ. In a future article, we'll show you exactly how you can create powerful visualizations from your Jenkins logs. In the meantime, do drop a comment or &lt;a href="https://logiq.ai/" rel="noopener noreferrer"&gt;reach out&lt;/a&gt; to us in case you have any questions or would like to know more about how LOGIQ can bring multi-dimensional observability to your applications and infrastructure and bring your log data to life.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>operations</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>Getting Started with the LOGIQ PaaS Community Edition</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Thu, 24 Jun 2021 14:30:30 +0000</pubDate>
      <link>https://dev.to/logiq/getting-started-with-the-logiq-paas-community-edition-1a88</link>
      <guid>https://dev.to/logiq/getting-started-with-the-logiq-paas-community-edition-1a88</guid>
      <description>&lt;p&gt;If you’ve been looking for an inexpensive way to run your own observability stack while maintaining complete control over your data and its security, look no further. The LOGIQ PaaS Community Edition is officially live!&lt;/p&gt;

&lt;p&gt;With the LOGIQ PaaS Community Edition, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Self-host your observability stack on a cloud provider of your choice – public or private &lt;/li&gt;
&lt;li&gt;Ingest up to &lt;strong&gt;50GB&lt;/strong&gt; of log data &lt;strong&gt;per day&lt;/strong&gt; with &lt;strong&gt;unlimited data retention&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Store your log data on any S3-compatible cloud provider via the built-in Minio S3 service&lt;/li&gt;
&lt;li&gt;Ingest logs from Syslog, RSyslog, Logstash, Fluent, AWS Firelens, JSON, and &lt;strong&gt;plenty more&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Run up to &lt;strong&gt;4 ingest worker&lt;/strong&gt; processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll also get access to all of the LOGIQ Enterprise Edition’s features along with Community Support, &lt;strong&gt;free forever&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5bqcmn5z8zyjygvm0vb.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5bqcmn5z8zyjygvm0vb.gif" alt="The LOGIQ UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What’s more? Deploying LOGIQ PaaS is ridiculously easy! This article will show you exactly how you can deploy the LOGIQ PaaS Community Edition on your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;p&gt;To get you up and running with the LOGIQ PaaS Community Edition quickly, we’ve made LOGIQ PaaS’ Kubernetes components available as &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; Charts. To deploy LOGIQ PaaS, you’ll need access to a Kubernetes cluster and Helm 3.&lt;/p&gt;

&lt;p&gt;Before you start deploying LOGIQ PaaS, let’s run through a few quick steps to set up your environment correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the LOGIQ Helm repository
&lt;/h3&gt;

&lt;p&gt;Add LOGIQ’s Helm repository to your Helm repositories by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add logiq-repo https://logiqai.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Helm repository you just added is named &lt;code&gt;logiq-repo&lt;/code&gt;. Whenever you install charts from this repository, ensure that you use the repository name as the prefix in your install command, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install &amp;lt;deployment_name&amp;gt; logiq-repo/&amp;lt;chart_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now search for the Helm charts available in the repository by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm search repo logiq-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this command displays a list of the available Helm charts along with their details, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm search repo logiq-repo
NAME                CHART VERSION    APP VERSION    DESCRIPTION
logiq-repo/logiq    2.2.11           2.1.11         LOGIQ Observability HELM chart for Kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’ve already added LOGIQ’s Helm repository in the past, you can update the repository by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a namespace to deploy LOGIQ PaaS
&lt;/h3&gt;

&lt;p&gt;Create a namespace where we’ll deploy LOGIQ PaaS by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace logiq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the command shown above creates a namespace named &lt;code&gt;logiq&lt;/code&gt;. You can also name your namespace differently by replacing &lt;code&gt;logiq&lt;/code&gt; with a name of your choice in the command above. If you do, remember to use the same namespace for the rest of the instructions in this guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Ensure that the name of the namespace is not more than 15 characters in length.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare your Values file
&lt;/h3&gt;

&lt;p&gt;As with any other package deployed via Helm charts, you can configure your LOGIQ PaaS deployment using a Values file. The Values file acts as the Helm chart’s API, supplying the values that populate the Helm chart’s templates.&lt;/p&gt;

&lt;p&gt;To give you a head start with configuring your LOGIQ deployment, we’ve provided sample &lt;code&gt;values.yaml&lt;/code&gt; files for small, medium, and large clusters. You can use these files as a base for configuring your LOGIQ deployment. You can download these files from the following links. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://firebasestorage.googleapis.com/v0/b/gitbook-28427.appspot.com/o/assets%2F-LmzGprckLqwd5v6bs6m%2F-MOSfp6X1_SPwV_8AGhv%2F-MOSh7NloEncIi1LjUyh%2Fvalues.small.yaml?alt=media&amp;amp;token=83d76953-0854-4a48-a3a8-0591aded0bc6" rel="noopener noreferrer"&gt;&lt;code&gt;values.small.yaml&lt;/code&gt;&lt;/a&gt; for small clusters.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://firebasestorage.googleapis.com/v0/b/gitbook-28427.appspot.com/o/assets%2F-LmzGprckLqwd5v6bs6m%2F-MQ3BQwto2mGZmAgEveP%2F-MQ3BW2mk4SRtFYNkQ2B%2Fvalues.medium.yaml?alt=media&amp;amp;token=95ffa9d0-a736-4213-9425-1b5ff7fa3178" rel="noopener noreferrer"&gt;&lt;code&gt;values.medium.yaml&lt;/code&gt;&lt;/a&gt; for medium clusters.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://firebasestorage.googleapis.com/v0/b/gitbook-28427.appspot.com/o/assets%2F-LmzGprckLqwd5v6bs6m%2F-MQ3BQwto2mGZmAgEveP%2F-MQ3BXv1S-DqlVCWRpOw%2Fvalues.large.yaml?alt=media&amp;amp;token=7d4772bf-39e0-4030-8620-1de1a64aed99" rel="noopener noreferrer"&gt;&lt;code&gt;values.large.yaml&lt;/code&gt;&lt;/a&gt; for large clusters.&lt;/li&gt;
&lt;/ul&gt;
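&lt;p&gt;As a rough illustration – the only key shown here is the one this guide’s install commands set, so treat it as a minimal sketch and refer to the sample files above for the full schema – a trimmed-down &lt;code&gt;values.yaml&lt;/code&gt; would contain something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  persistence:
    storageClass: &amp;lt;storage_class_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;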

&lt;p&gt;You can pass the &lt;code&gt;values.yaml&lt;/code&gt; file to the &lt;code&gt;helm install&lt;/code&gt; command using the &lt;code&gt;-f&lt;/code&gt; flag, as shown in the following example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install logiq --namespace logiq --set global.persistence.storageClass=&amp;lt;storage_class_name&amp;gt; logiq-repo/logiq -f values.small.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Read and accept the EULA
&lt;/h3&gt;

&lt;p&gt;Finally, before you proceed with deploying LOGIQ PaaS, read our &lt;a href="https://docs.logiq.ai/eula/eula" rel="noopener noreferrer"&gt;End User License Agreement&lt;/a&gt; and accept its terms. &lt;/p&gt;

&lt;h3&gt;
  
  
  Latest LOGIQ PaaS component versions
&lt;/h3&gt;

&lt;p&gt;The following table lists the latest version tags for all LOGIQ components.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;logiq-flash&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2.1.11.27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;coffee&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2.1.17.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;logiq&lt;/code&gt; Helm chart&lt;/td&gt;
&lt;td&gt;2.2.11&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Install LOGIQ PaaS
&lt;/h3&gt;

&lt;p&gt;Now that your environment is ready, you can proceed with installing LOGIQ PaaS in it. To install LOGIQ PaaS, run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install logiq --namespace logiq --set global.persistence.storageClass=&amp;lt;storage class name&amp;gt; logiq-repo/logiq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the above command installs LOGIQ PaaS and exposes its services and UI on the ingress’ IP address. Accessing the ingress’ IP address in a web browser of your choice takes you to the LOGIQ PaaS login screen, as shown in the following image. &lt;/p&gt;
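&lt;p&gt;If you’re unsure what your ingress’ IP address is, you can look it up with a standard &lt;code&gt;kubectl&lt;/code&gt; query against the namespace you deployed into – the service name in your cluster may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service -n logiq | grep -i loadbalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;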

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn31ild6u6k6oijbxivn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn31ild6u6k6oijbxivn.png" alt="The LOGIQ login screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you haven’t changed any of the admin settings in the &lt;code&gt;values.yaml&lt;/code&gt; file you used during deployment, you can log into the LOGIQ PaaS UI using the following default credentials. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Username&lt;/strong&gt;: &lt;code&gt;flash-admin@foo.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password&lt;/strong&gt;: &lt;code&gt;flash-password&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You can change the default login credentials after you’ve logged into the UI.&lt;/p&gt;

&lt;p&gt;Your LOGIQ PaaS instance is now deployed and ready for use. It lets you ingest and tail logs, index and query log data, and search across it. Along with the LOGIQ UI, you can also access these features via LOGIQ’s CLI, &lt;a href="https://docs.logiq.ai/logiq-cli" rel="noopener noreferrer"&gt;logiqctl&lt;/a&gt;. &lt;/p&gt;
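&lt;p&gt;For instance, once you’ve installed &lt;code&gt;logiqctl&lt;/code&gt;, pointing it at your cluster and tailing logs takes just a couple of commands – the commands below are illustrative, so check the logiqctl documentation for the exact, current syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logiqctl config set-cluster &amp;lt;your_logiq_endpoint&amp;gt;
logiqctl tail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;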

&lt;p&gt;Now that you have full access to your very own LOGIQ PaaS instance, you should try using it to amplify your observability practices. You can use LOGIQ to &lt;a href="https://logiq.ai/k8s/" rel="noopener noreferrer"&gt;observe your Kubernetes clusters&lt;/a&gt;, &lt;a href="https://logiq.ai/jenkins-log-analysis-with-logiq/" rel="noopener noreferrer"&gt;set up centralised observability for your CI/CD pipelines&lt;/a&gt;, &lt;a href="https://logiq.ai/monitoring/" rel="noopener noreferrer"&gt;monitor your applications and infrastructure&lt;/a&gt;, or even tail and analyse logs from &lt;a href="https://logiq.ai/how-to-stream-aws-cloudwatch-logs-to-logiq/" rel="noopener noreferrer"&gt;AWS CloudWatch&lt;/a&gt; or other data sources – all without the pricing shock that the usual log management and analysis solutions provide.&lt;/p&gt;

&lt;p&gt;Do drop a comment or &lt;a href="https://logiq.ai/contact" rel="noopener noreferrer"&gt;reach out to us&lt;/a&gt; if you’d like to know more about how LOGIQ PaaS can help you deliver always-on applications and infrastructure at scale through efficient log management and analysis. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>analytics</category>
      <category>monitoring</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Run your favorite Helm Chart using MicroK8s in 5 minutes</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Wed, 23 Jun 2021 13:35:02 +0000</pubDate>
      <link>https://dev.to/logiq/run-your-favorite-helm-chart-using-microk8s-in-5-minutes-3ii</link>
      <guid>https://dev.to/logiq/run-your-favorite-helm-chart-using-microk8s-in-5-minutes-3ii</guid>
      <description>&lt;p&gt;&lt;a href="http://helm.sh/"&gt;Helm&lt;/a&gt; is a Kubernetes package manager that helps you find, share, and use software built for Kubernetes. With Helm Charts, you can bundle Kubernetes deployments into a single package you can install by running a single command. At LOGIQ, we use Helm Charts on the regular. One of our most commonly used Helm Charts is &lt;a href="https://artifacthub.io/packages/helm/logiqai/logiq"&gt;&lt;code&gt;logiq&lt;/code&gt;&lt;/a&gt; – the same Helm Chart we use for quick deployments of the LOGIQ observability platform for customers, prospects, and folks who’d love to know more about what we’re building. &lt;/p&gt;

&lt;p&gt;This article will explain how you can deploy your favorite Helm Chart on MicroK8s in under 5 minutes. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is MicroK8s?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://microk8s.io/"&gt;MicroK8s&lt;/a&gt; is a lightweight, pure-upstream Kubernetes distribution that aims to lower the barrier to entry for K8s and cloud-native application development. It comes in a single package that installs a single-node (standalone) K8s cluster in under 60 seconds. While MicroK8s includes all the Kubernetes core components, it is also opinionated: many of the add-ons you would typically look for in Kubernetes, such as DNS, Helm, a registry, and storage, are just a single command away.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LOGIQ?
&lt;/h2&gt;

&lt;p&gt;LOGIQ is a complete observability platform for monitoring, log aggregation, and analytics with infinite storage scale, aiming to bring simple and powerful logging to the masses. LOGIQ uses AWS S3 (or S3-compatible storage) for data at rest and lets you send logs from Kubernetes, on-prem servers, or cloud VMs with ease. &lt;/p&gt;

&lt;p&gt;The LOGIQ platform includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A User Interface (UI)&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://logiqctl.logiq.ai/"&gt;command-line toolkit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A monitoring stack for time-series metrics, and&lt;/li&gt;
&lt;li&gt;A log analytics stack for log data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you’re acquainted with MicroK8s and LOGIQ and their awesomeness, why don’t we jump right into the integration?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;In this demo, we’ll use the &lt;a href="https://artifacthub.io/packages/helm/logiqai/logiq"&gt;&lt;code&gt;logiq&lt;/code&gt;&lt;/a&gt; Helm Chart, but you can substitute any other Helm Chart you’d like to try out. This guide also assumes you’re working on a Linux machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing MicroK8s
&lt;/h2&gt;

&lt;p&gt;As a first step, let’s install MicroK8s on your machine by running the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get -y update
sudo snap install core
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
sudo microk8s config &amp;gt; ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let’s check whether MicroK8s is up and running with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo microk8s status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Enabling add-ons
&lt;/h2&gt;

&lt;p&gt;Now that we have MicroK8s up and running, let’s set up your cluster and enable the add-ons that MicroK8s readily provides, like Helm, DNS, ingress, storage, and private registry. These add-ons can be enabled and disabled at any time, and most are pre-configured to work without any additional setup.&lt;/p&gt;

&lt;p&gt;Run the following commands to enable add-ons:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s enable helm
microk8s enable storage
microk8s enable dns
microk8s enable ingress
microk8s enable registry
microk8s.kubectl config view &amp;gt; $HOME/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Provisioning an IP address
&lt;/h2&gt;

&lt;p&gt;We need an endpoint or an IP address to access the application we’re spinning up. This endpoint can either be within or outside our cluster. For this, let’s leverage &lt;a href="https://ubuntu.com/kubernetes/docs/metallb"&gt;MetalLB&lt;/a&gt; – a Kubernetes-aware solution that can monitor for services with the type &lt;code&gt;LoadBalancer&lt;/code&gt; and assign them an IP address. Alternatively, you can also set an IP address while enabling add-ons. &lt;/p&gt;

&lt;p&gt;While provisioning an IP address, you can use your local machine’s IP address, which pulls up the stack at &lt;code&gt;IP-address:80&lt;/code&gt;. If you do not know your local machine’s IP address, run the &lt;code&gt;ifconfig&lt;/code&gt; command as shown below and use its output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ifconfig
wlp60s0: flags=4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt; mtu 1500
        inet 192.168.1.27 netmask 255.255.255.0 broadcast 192.168.1.255
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, enable MetalLB by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s enable metallb
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 192.168.1.27-192.168.1.27
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you’re spinning up an EC2 instance from AWS, MetalLB might not work due to private/public IP configuration. We’ll take a closer look at and resolve this issue in another article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bring in the Helm Chart
&lt;/h2&gt;

&lt;p&gt;Now that the configuration bits are in place, it’s time to bring in your Helm Chart. As mentioned above, we’re using the &lt;a href="https://artifacthub.io/packages/helm/logiqai/logiq"&gt;&lt;code&gt;logiq&lt;/code&gt;&lt;/a&gt; Helm Chart and Helm 3 in the following commands. You can replace the Helm Chart repo URL in the following command with your own Helm Chart’s repo URL if you’re trying another chart.&lt;/p&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add logiq-repo https://logiqai.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bringing up LOGIQ
&lt;/h2&gt;

&lt;p&gt;Next, let’s create a namespace called &lt;code&gt;logiq&lt;/code&gt; where the stack will spin up and run, with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl create namespace logiq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then run &lt;code&gt;helm install&lt;/code&gt; with the storage class set to &lt;code&gt;microk8s-hostpath&lt;/code&gt; as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install logiq -n logiq --set global.persistence.storageClass=microk8s-hostpath logiq-repo/logiq -f values.yaml  --debug --timeout 10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;values.yaml&lt;/code&gt; file used in the command above is customized to suit our cluster’s configuration. You can download the &lt;code&gt;values.yaml&lt;/code&gt; file from &lt;a href="https://docs.logiq.ai/logiq-server/k8s-quickstart-guide"&gt;docs.logiq.ai&lt;/a&gt;, edit it to suit your cluster’s needs, and then run the above command. &lt;/p&gt;

&lt;p&gt;LOGIQ is now ready to go! Before we get to the UI, let’s inspect the pods in your cluster by running the following command in the &lt;code&gt;logiq&lt;/code&gt; namespace you created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl get pod -n logiq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now access the LOGIQ UI by hitting the MetalLB endpoint we defined earlier in this article. To find the endpoint, let’s look up the LoadBalancer service, which shows the IP address MicroK8s exposes. &lt;/p&gt;

&lt;p&gt;Run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ microk8s kubectl get service -n logiq | grep -i loadbalancer
logiq-kubernetes-ingress  LoadBalancer   10.152.183.45  192.168.1.27

80:30537/TCP,20514:30222/TCP,24224:30909/TCP,24225:31991/TCP,2514:30800/TCP,3000:32680/TCP,514:32450/TCP,7514:30267/TCP,8081:30984/TCP,9998:31425/TCP     18m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, using a web browser of your choice, navigate to the IP address shown by the LoadBalancer service above: &lt;code&gt;http://192.168.1.27:80&lt;/code&gt;&lt;/p&gt;
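&lt;p&gt;You can also quickly confirm from your terminal that the UI is being served at that endpoint – substitute the IP address MetalLB assigned in your own setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -I http://192.168.1.27:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;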

&lt;p&gt;And voila! Our LOGIQ deployment using a Helm Chart is up and running!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QHt5hudY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j90ijtlbv2667cqf64qf.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QHt5hudY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j90ijtlbv2667cqf64qf.jpeg" alt="The LOGIQ login screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On a side note, we wanted to talk about how cumbersome logging can be. Most of the logging solutions out there hold data in proprietary databases that use disks and volumes to store their data. What’s wrong with that, you ask? Well, disks and volumes need to be monitored, managed, replicated, and sized. Throw clustering into the mix, and your log analytics project now depends on someone else’s software and becomes a storage nightmare. For these reasons, we built the LOGIQ observability platform – to bring easy logging to everyone who needs it. Using LOGIQ, you can ingest log data from &lt;a href="https://logiq.ai/k8s/"&gt;Kubernetes&lt;/a&gt;, &lt;a href="https://logiq.ai/monitoring/"&gt;on-prem servers or cloud VMs, applications&lt;/a&gt;, and &lt;a href="https://logiq.ai/integrations/"&gt;several other data sources&lt;/a&gt;, helping you gain complete visibility over your infrastructure and applications without burning a massive hole in your pocket.&lt;/p&gt;

&lt;p&gt;We love making logging easy. To show you how easy it is to get going with the LOGIQ platform, our next article will show you how to automate the LOGIQ platform’s deployment on AWS using Helm Charts on MicroK8s using a CloudFormation template. Meanwhile, if you have questions about LOGIQ and would like to know more, please do leave a comment below or &lt;a href="https://logiq.ai/"&gt;visit our website&lt;/a&gt; and reach out!&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>Shift from API Monitoring to API Observability with LOGIQ</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Tue, 22 Jun 2021 15:08:16 +0000</pubDate>
      <link>https://dev.to/logiq/shift-from-api-monitoring-to-api-observability-with-logiq-3je9</link>
      <guid>https://dev.to/logiq/shift-from-api-monitoring-to-api-observability-with-logiq-3je9</guid>
      <description>&lt;p&gt;APIs – by now, we’re all familiar with the term. Every service or software we use or build today either uses or is an API. If APIs are a central pillar in your building and delivery of software and services, you’ll know that the success of your software or services depends on the integrity, availability, and performance of your APIs. Traditional API monitoring sure does help you stay on top of uptime, security, and performance. Still, it is limited to being a black box form of monitoring – you’re only testing API behavior that you’d only see externally. You’d still have to guess what’s causing issues with your APIs as you’re only testing, measuring, and monitoring them against metrics you already know – like request rates, error rates, or status codes.&lt;/p&gt;

&lt;p&gt;Traditional API monitoring lets you monitor system health and performance but can’t help you identify and troubleshoot what’s causing issues. And the more your system’s dependency on APIs increases, the more you’ll find yourself modifying the metrics you track, guessing which parts of your API may cause problems, and not moving any closer to root causes. &lt;/p&gt;

&lt;p&gt;At this stage, establishing a proper API Observability strategy starts making more sense. API Observability is all about making your APIs more observable. Instead of relying on predetermined metrics and monitoring and waiting for failure, API Observability lets you dive into the unknown unknowns of your APIs by observing how they work internally. With API Observability, you can analyze data exposed by the internals of your API system and identify patterns and behavior that help you prevent threats, identify and troubleshoot issues, and understand API usage. &lt;/p&gt;

&lt;p&gt;What’s the easiest way to make your APIs observable, you ask? Use LOGIQ. &lt;/p&gt;

&lt;p&gt;LOGIQ is a complete observability platform for monitoring, log aggregation, and analytics with an infinite storage scale. The LOGIQ platform includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A User Interface (UI)&lt;/li&gt;
&lt;li&gt;A command-line toolkit&lt;/li&gt;
&lt;li&gt;A monitoring stack for time-series metrics, and&lt;/li&gt;
&lt;li&gt;A log analytics stack for log data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But how does LOGIQ help with API Observability? Here’s how. &lt;/p&gt;

&lt;h2&gt;
  
  
  Support for popular API Gateways
&lt;/h2&gt;

&lt;p&gt;LOGIQ can aggregate logs from popular API gateways like Istio, NGINX, HAProxy, Apache, and more. Integrating LOGIQ with your API gateways gives you complete visibility into API usage in your environment. &lt;/p&gt;

&lt;h2&gt;
  
  
  Automated extraction for API attributes
&lt;/h2&gt;

&lt;p&gt;LOGIQ includes full support for GROK expressions. You can write powerful rules in GROK to extract API attributes from standard API log formats such as Common Log and Apache. With GROK expressions, you can comb through API log traces to extract information like API methods, response times, URLs, sender information, payload length, request rates, and error rates. &lt;/p&gt;
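&lt;p&gt;For example, a GROK expression that extracts fields from the Apache Common Log format could look like the following – the pattern names come from the standard GROK pattern library, so adapt them to your own log format and LOGIQ’s pattern set as needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;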

&lt;h2&gt;
  
  
  Visualizing API data with LOG2Metrics
&lt;/h2&gt;

&lt;p&gt;With LOG2Metrics, you can transform ingested API log traces into multiple time-series visualizations using powerful attribute or pattern-based group-by expressions. For example, you can easily query for &lt;code&gt;GET&lt;/code&gt; requests made to the endpoint &lt;code&gt;/v1/resource/&amp;lt;id&amp;gt;&lt;/code&gt; and generate time-series visualizations by ID. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rlBhPYt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqjk6jl5286iprkkuwil.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rlBhPYt4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqjk6jl5286iprkkuwil.jpeg" alt="Time-series visualisation for HTTP status codes generated by an API"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Visualization &amp;amp; Alerting
&lt;/h2&gt;

&lt;p&gt;Using LOGIQ, you can plot API response times extracted from your API logs and gain total visibility over how your APIs perform. Moreover, LOGIQ’s alerting capabilities let you build alerts that notify you instantly when an API begins to underperform. You can also eliminate false positives by creating alerting rules that only trigger after being validated against frequency thresholds in intervals that you can customize. For example, you could choose to get alerted if an event occurs more than 10 times in a 5-minute interval. &lt;/p&gt;

&lt;h2&gt;
  
  
  Historical Reporting
&lt;/h2&gt;

&lt;p&gt;You can generate insightful reports on historical API data, either ad hoc or on a schedule via a built-in CRON job. For example, with a few clicks, you can create a report that shows you all client IP addresses that generated 4xx errors, grouped by HTTP status code and IP address. &lt;/p&gt;

&lt;h2&gt;
  
  
  Extract Business Intelligence from APIs
&lt;/h2&gt;

&lt;p&gt;Empower your Support and Services organizations by using LOGIQ to extract insightful business-level metrics around API usage, avenues to harden security, and product analytics from log data. For example, if your software or service offers public-facing APIs and you’d like to know how your partner integrations are faring, you can monitor the health of your APIs’ usage by partners and notify your support and services teams when you start seeing increased error rates. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O0Kticj0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jnfy6o57ilcmskp5hlgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O0Kticj0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jnfy6o57ilcmskp5hlgn.png" alt="Graph plotting cloud latency events by application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;An API Observability platform is more than an accessory to your overall API strategy. The right approach to API Observability provides invaluable insights into API usage while identifying security loopholes and maintaining the overall health, performance, and availability of your APIs. Now’s the time to adopt API Observability and optimize how your APIs ship and perform by establishing visibility into the inner workings of your API system. To get started, install the free-forever &lt;a href="https://docs.logiq.ai/logiq-server/logiq-paas-community-edition"&gt;LOGIQ PaaS Community Edition&lt;/a&gt; and integrate it with your API server.&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>devops</category>
      <category>api</category>
      <category>observability</category>
    </item>
  </channel>
</rss>
