<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: LOGIQ.AI</title>
    <description>The latest articles on DEV Community by LOGIQ.AI (@logiq).</description>
    <link>https://dev.to/logiq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4332%2F56f738a7-b7a7-48e1-88b6-e8726f41a895.png</url>
      <title>DEV Community: LOGIQ.AI</title>
      <link>https://dev.to/logiq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/logiq"/>
    <language>en</language>
    <item>
      <title>How to Debug Microservices in the Cloud</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Wed, 02 Feb 2022 11:49:28 +0000</pubDate>
      <link>https://dev.to/logiq/how-to-debug-microservices-in-the-cloud-1n38</link>
      <guid>https://dev.to/logiq/how-to-debug-microservices-in-the-cloud-1n38</guid>
      <description>&lt;p&gt;The growth in information architecture has urged many IT technologies to adopt cloud services and grow over time. Microservices have been the frontrunner in this regard and have grown exponentially in their popularity for designing diverse applications to be independently deployable services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trivia: In a survey by O’Reilly, over 50% of respondents said that more than 50% of new development in their organization utilizes microservices.
&lt;/h2&gt;

&lt;p&gt;Built from isolated modules, microservices in the Cloud move away from monolithic systems, where an entire application could fail due to a single error in one module. This gives developers much broader flexibility to edit and deploy code without worrying about affecting separate modules.&lt;/p&gt;

&lt;p&gt;However, this approach brings along unique challenges when there is an accidental introduction of bugs. Debugging microservices in the Cloud can be a daunting task due to the complexity of the information architecture and the transition from the development phase to the production phase.&lt;/p&gt;

&lt;p&gt;Let’s explore what these challenges are and how you can seamlessly navigate around them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in Debugging Microservices
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Inadequacy in Tracing and Observability
&lt;/h2&gt;

&lt;p&gt;The growing demand for microservices brings along complex infrastructures. Cloud components, modules, and serverless calls often conceal the infrastructure’s actual intricacy, making it difficult for DevOps and operations teams to trace and observe a microservice’s internal state based on its outputs. Because microservices run independently, it is especially difficult to track user requests through asynchronous modules, where a single failure might cause a chain reaction of errors, and services interacting with a failing component become susceptible to those errors too. These factors make pinpointing the root cause of any error or bug a daunting task for developers.&lt;/p&gt;
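
&lt;p&gt;One common mitigation, sketched below with hypothetical service names, is to attach a correlation ID to every incoming request and propagate it through each downstream call, so log lines emitted by independent services can be stitched back into a single trace:&lt;/p&gt;

```python
import uuid

def log(correlation_id, service, message):
    # Every log line carries the ID, so a search for one ID
    # reconstructs the full cross-service path of a request.
    print(f"[{correlation_id}] {service}: {message}")

def payment_service(headers):
    log(headers["X-Correlation-ID"], "payment_service", "charging card")

def handle_request(headers):
    # Reuse the caller's correlation ID, or mint one at the edge.
    correlation_id = headers.get("X-Correlation-ID") or str(uuid.uuid4())
    log(correlation_id, "order_service", "received request")
    # Propagate the same ID on every downstream call.
    payment_service({"X-Correlation-ID": correlation_id})
    return correlation_id
```

&lt;p&gt;Searching the aggregated logs for one ID then recovers the request’s path across services, even when the calls are asynchronous.&lt;/p&gt;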

&lt;h2&gt;
  
  
  Monitoring State in a Sophisticated Environment
&lt;/h2&gt;

&lt;p&gt;Since many microservices come together to build a system, it becomes complicated to monitor its state. As more microservice components are added to the system, a complex mesh of services develops, with each module running independently. This also brings forth the possibility that any module can fail at any time without affecting other modules.&lt;/p&gt;

&lt;p&gt;Developers can find it extremely hard to debug errors in particular microservices. Each may be written in a different programming language, use its own logging functions, and run largely independently of other components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development to Production Can Be Irregular
&lt;/h2&gt;

&lt;p&gt;It is also hard for developers to predict performance and state errors when moving code from the development phase to the production phase. Even after unit and integration testing, we can’t predict how the code will perform when it processes hundreds of thousands of requests on distributed servers. If the code scales inadequately, or if the database can’t keep up with the requests, the system’s underlying error becomes almost cryptic for developers to detect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Methods for Debugging Microservices in the Cloud
&lt;/h2&gt;

&lt;p&gt;Here are some microservices-specific debugging methods that can help you navigate the challenges mentioned above:&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-Intrusive Debugging Options
&lt;/h2&gt;

&lt;p&gt;Unlike traditional debugging methods, third-party tools can help DevOps teams set breakpoints that don’t halt or pause the service during execution. These methods are non-intrusive and allow developers to view global variables and stack traces, which helps them monitor and detect bugs more efficiently. They also let developers test hypotheses about where issues might arise without halting the code or redeploying the codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observability Enhancing Tools
&lt;/h2&gt;

&lt;p&gt;Any system with a multitude of microservices makes it extremely difficult to track requests. While you might think that building a customized platform for observability might be the answer to this issue, it would consume a lot of time and resources in its development. &lt;/p&gt;

&lt;p&gt;Fortunately, many modern, third-party tools are designed to track requests and provide extensive observability for microservices. These tools come packed with many other benefits, such as distributed and serverless computing capabilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5E3j4iTh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pphslcq8btfap62qp8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5E3j4iTh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pphslcq8btfap62qp8f.png" alt="Image description" width="880" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools like LOGIQ enable complete observability for your microservices.&lt;br&gt;
For instance, tools like Thundra can help you monitor user requests moving through your infrastructure during production, giving developers a holistic overview of the environment so they can pinpoint the source of a bug and debug it quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Self-Governed Exception Tracking
&lt;/h2&gt;

&lt;p&gt;It’s an uphill battle for a system to realize that there is an error or bug in the first place. The system must automatically track exceptions as they occur, helping it identify repetitive patterns or destructive behaviors such as leap-year errors, errors in a specific browser version, odd stack overflows, and much more.&lt;/p&gt;

&lt;p&gt;However, capturing these errors is only half the battle won. The system also needs to track variables and logs for pinpointing the time and conditions under which the error occurred. This helps the developers in replicating the situation and finding the most effective solution to remove the error. Comprehensive monitoring can significantly simplify the process of debugging in production.&lt;/p&gt;
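
&lt;p&gt;As a minimal sketch of the idea (the decorator and field names here are invented, not any particular tool’s API), exception tracking amounts to recording what failed, when, and with which inputs, so the conditions can be replicated later:&lt;/p&gt;

```python
import datetime
import traceback

captured_errors = []  # stand-in for shipping records to an error tracker

def track_exceptions(func):
    # Decorator that records what failed, when, and with which inputs,
    # so the failure conditions can be replicated during debugging.
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            captured_errors.append({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "function": func.__name__,
                "args": args,
                "error": repr(exc),
                "trace": traceback.format_exc(),
            })
            raise
    return wrapper

@track_exceptions
def parse_age(raw):
    return int(raw)
```

&lt;p&gt;Grouping the captured records by function and error type is what surfaces the repetitive patterns mentioned above.&lt;/p&gt;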

&lt;h2&gt;
  
  
  Debugging in the Cloud Doesn’t Have to be Hard
&lt;/h2&gt;

&lt;p&gt;With modern microservices, debugging can be a very complex process. Tracing user requests and predicting how well code will scale are complicated tasks. However, modern tools make it easier for developers to monitor, detect, and resolve errors. LOGIQ is a one-stop shop for microservices monitoring and observability that lets you leverage the power of machine data analytics for infrastructures and applications on a single platform.&lt;/p&gt;

&lt;p&gt;Microservice architectures are designed to be quickly deployable, and with the right set of tools, debugging becomes much simpler for the developers. &lt;/p&gt;

</description>
      <category>database</category>
      <category>datascience</category>
      <category>data</category>
      <category>programming</category>
    </item>
    <item>
      <title>Combining The Powerful Forces of Compliance and Observability</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Tue, 01 Feb 2022 04:25:55 +0000</pubDate>
      <link>https://dev.to/logiq/combining-the-powerful-forces-of-compliance-and-observability-13f6</link>
      <guid>https://dev.to/logiq/combining-the-powerful-forces-of-compliance-and-observability-13f6</guid>
      <description>&lt;p&gt;Containers, services, and cloud-based apps have changed the way companies produce and deliver products and services and do business worldwide. This has altered the attack surface, necessitating highly different security techniques and technologies to prevent the disclosure of sensitive data and other cyber threats. Regulatory compliance has also changed, making it even more critical for businesses to adapt to this new paradigm. IT and regulatory compliance are required to guarantee that your corporation fulfills the data privacy and security requirements related to your industry, location, and business processes. But how can you enhance the power of compliance?&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the role of Observability in compliance?
&lt;/h2&gt;

&lt;p&gt;With each passing day, software becomes more and more sophisticated. Microservices and containers are examples of infrastructure patterns that continue to break down more extensive systems into sophisticated, smaller systems.&lt;/p&gt;

&lt;p&gt;At the same time, the number of available platforms and tools is increasing, giving businesses several ways to accomplish new and unique things. Environments are becoming more complicated, and not every company is prepared to deal with the expanding number of difficulties. Without an observable system, the source of issues is unclear, and there is no common starting point.&lt;/p&gt;

&lt;p&gt;The total Observability of a system should not be considered a goal but rather an essential step in achieving critical business goals. Observability development aims to help security analysts, IT operators, and management recognize and handle system faults that might harm the company. &lt;/p&gt;

&lt;p&gt;The development of Observability with compliance has four main objectives:&lt;/p&gt;

&lt;h2&gt;
  
  
  Reliability
&lt;/h2&gt;

&lt;p&gt;One of the fundamental aims of Observability is reliability. We must measure the performance of our IT infrastructure if we are to design a system that is dependable and meets the expectations of our customers. Using an observability platform, we can monitor user behavior, network speed, system availability, capacity, and other metrics to guarantee that the system is working as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;For enterprises with legal or compliance obligations to protect sensitive data from unauthorized disclosure, Observability is critical. Organizations can discover possible intrusions, security risks, and attempted brute force or DDoS assaults before the attacker completes the attack and steals data by having full visibility into the cloud computing environment via event logs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reduce the cost of penalties
&lt;/h2&gt;

&lt;p&gt;Observability helps businesses increase income and saves a considerable sum of money by reducing penalties. Depending on your sector, rules and requirements may carry hefty non-compliance fees that significantly affect firms. Are the long-term costs of investing in the correct procedures, tools, and overhead worth it compared to the dangers of not being compliant? The answer is yes!&lt;/p&gt;

&lt;p&gt;With settlement agreements and civil money penalties, the Health Insurance Portability and Accountability Act (HIPAA) expenses have risen dramatically in recent years. Fines under the General Data Protection Regulation (GDPR) are also increasing, rising by 20% from 2020 to 2021. It’s more critical than ever to stay on top of cybersecurity regulatory compliance obligations, and with Observability, companies can do that effectively and efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation Saves Time and Money
&lt;/h2&gt;

&lt;p&gt;Data protection should include more than just ticking boxes to ensure that the company avoids fines and penalties. This is where Observability plays a massive role in securing all vital data, not just what is regulated. Since automation expands efficiency and creativity across essential areas of your company and enhances ROI, you can more easily convince stakeholders, prospects, customers, partners, and others involved, regardless of whether your firm has a mature or immature compliance program. As the number of necessary compliance requirements grows, automation will minimize management expense and analyst labor by removing duplicate work, thereby saving time and money.&lt;/p&gt;

&lt;p&gt;Here are a few quantitative and qualitative variables that you can track with the combined power of Compliance and Observability:&lt;/p&gt;

&lt;h2&gt;
  
  
  Qualitative Measurements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enhanced brand value (lack of data breaches, consistency of external audit opinions on security, number of compliance certifications achieved)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Possibility of pursuing new business ventures (some certifications will increase your credibility and attract customers)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The severity of post-audit findings and the degree of effort required to correct them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Increased customer trust in your products and services&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Quantitative Measurements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Increased profits (customer trust = more sales)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost-cutting (cost of non-compliance)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Number of closed compliance concerns over the number of identified issues&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mean Time to Detect &amp;amp; Respond&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Total post-audit risk exposure analysis&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;LOGIQ is an all-in-one solution for complete observability data pipeline control and storage. Your IT department can use LOGIQ to aggregate log files, metrics, and traces, assess network performance against the most important KPIs, and acquire the insights and network visibility required to fulfill your business’s system dependability, security, and customer satisfaction goals – all backed by robust observability data pipelines that ship the right data to the right targets. With LOGIQ, you can enable your teams with total observability data pipeline control, enhanced data value, reduced data complexity, quick insights, and zero data loss.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>datascience</category>
      <category>data</category>
    </item>
    <item>
      <title>5 Best Practices of Data Masking</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Mon, 31 Jan 2022 07:27:40 +0000</pubDate>
      <link>https://dev.to/logiq/5-best-practices-of-data-masking-i34</link>
      <guid>https://dev.to/logiq/5-best-practices-of-data-masking-i34</guid>
      <description>&lt;p&gt;Data breaches are on the increase; it’s no secret. Almost every day brings news of a large corporation disclosing the loss of personal information, along with officials asking for a full investigation and a renewed commitment to securing consumer data.&lt;/p&gt;

&lt;p&gt;What’s particularly perplexing about these circumstances is that current technologies and data protection best practices may enable firms to neutralize attempted breaches thoroughly. Data masking tactics that use next-generation techniques, in particular, have been shown to halt hackers and attackers in their tracks. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is data masking?
&lt;/h2&gt;

&lt;p&gt;Data obfuscation, also known as data masking, substitutes sensitive information with fake but plausible values. Confidential information such as names, addresses, credit card numbers, or patient health information is rendered inactive, but the masked data remains useful for application development, testing, and analytics. The masked version may then be used for user training or software testing. The primary goal is to generate a functioning substitute that hides the original data.&lt;/p&gt;
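
&lt;p&gt;As a minimal illustration (the field names and stand-in values below are invented), a masking pass replaces each sensitive field with a fake but plausible value while letting non-sensitive fields pass through:&lt;/p&gt;

```python
import random

def mask_record(record):
    # Replace sensitive values with fake but plausible stand-ins;
    # non-sensitive fields pass through unchanged.
    masked = dict(record)
    masked["name"] = random.choice(["Alex Doe", "Sam Roe", "Pat Poe"])
    masked["card_number"] = "4000-0000-0000-" + str(random.randint(1000, 9999))
    return masked

original = {"name": "Maria Garcia",
            "card_number": "4532-7712-0048-1951",
            "plan": "premium"}
masked = mask_record(original)
```

&lt;p&gt;The masked record keeps the shape real applications expect (a name, a 16-digit card number), which is what makes it usable for testing and analytics.&lt;/p&gt;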

&lt;h2&gt;
  
  
  Why Is Data Masking Necessary?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data masking eliminates several significant dangers, including data loss, data exfiltration, insider threats or account breach, and insecure connections with third-party systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduces the data-related risks connected with cloud adoption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data is rendered unusable by an attacker while retaining many of its basic functional qualities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allows authorized users, such as testers and developers, to share data without exposing production data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can be used for data sanitization — whereas standard file deletion leaves data traces on storage media, sanitization replaces the original values with disguised ones.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many types of sensitive information may be protected with data masking, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Personally identifiable information (PII)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Protected health information (PHI)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Payment card information (subject to PCI-DSS regulation)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Intellectual property (subject to ITAR and EAR regulations)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Health and financial data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IP addresses and passwords, particularly when combined with personally identifying information&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s crucial to examine your data thoroughly to establish what is sensitive (this is a significant component of many compliance programs). Consider how much difficulty your organization would face if you had to reveal that you had leaked this information. Would your business go bankrupt as a result of penalties or a loss of client confidence? With the help of your security expert or privacy team, document which data is deemed sensitive, what systems handle that data, and how access is maintained.&lt;/p&gt;

&lt;h2&gt;
  
  
  5 Best practices for Data Masking
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Determine which data is sensitive
&lt;/h2&gt;

&lt;p&gt;Identify and categorize the following items before masking any data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Location of sensitive data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Groups of people that have been given permission to view the data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Application of the data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Masking is not required for every element of the company. Instead, in both production and non-production situations, properly identify any existing sensitive data. This might take a long time, depending on the intricacy of the data and the organizational structure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Define your data masking technique stack
&lt;/h2&gt;

&lt;p&gt;Because data vary so widely, large enterprises can’t employ a single masking method across the board. Furthermore, the method you use may require you to adhere to certain internal security regulations or fulfill budgetary constraints. You may need to refine your masking approach in some circumstances, so take all of these criteria into account when selecting the proper collection of tactics. Keep them in sync to guarantee that the same type of data uses the same referential integrity approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make sure your data masking procedures are secure
&lt;/h2&gt;

&lt;p&gt;Masking techniques are just as important as sensitive data. A lookup file, for example, can be used in the replacement strategy. If this lookup file gets into the wrong hands, the original data set may be revealed. Only authorized people should access the masking algorithms; thus, organizations should develop the necessary standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make the masking process reproducible
&lt;/h2&gt;

&lt;p&gt;Changes to an organization, a specific project, or a product might cause data to alter over time. Whenever possible, avoid starting from the beginning. Instead, make masking a repeatable, simple, and automated procedure so that you may use it whenever sensitive data changes.&lt;/p&gt;

&lt;p&gt;Define a data masking procedure that works from beginning to end. Organizations must have an end-to-end procedure in place, which includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detecting confidential information&lt;/li&gt;
&lt;li&gt;Using an approach that is appropriate&lt;/li&gt;
&lt;li&gt;Auditing regularly to ensure your chosen technique is operating properly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Maintain Referential Integrity
&lt;/h2&gt;

&lt;p&gt;Referential integrity requires that all data coming from a given business application be masked using the same methodology. In big enterprises, a single technique isn’t practicable: each business line may need its own data masking approach owing to budget and business considerations, IT administration practices, or security and regulatory requirements. When working with the same kind of data, ensure that the various data masking technologies and processes are synced. This will help later when data is needed across business divisions.&lt;/p&gt;
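
&lt;p&gt;One common way to keep masked values consistent across systems is deterministic masking, for example deriving the mask from a keyed hash so the same input always produces the same output. A sketch, with a hypothetical salt value:&lt;/p&gt;

```python
import hashlib

SECRET_SALT = "rotate-me"  # hypothetical; keep it stored apart from the masked data

def mask_customer_id(customer_id):
    # Keyed hash: the same input always yields the same masked value,
    # so joins across separately masked data sets still line up.
    digest = hashlib.sha256((SECRET_SALT + customer_id).encode()).hexdigest()
    return "CUST-" + digest[:10]
```

&lt;p&gt;Because the mapping is deterministic, two data sets masked separately by different business lines can still be joined on the masked key.&lt;/p&gt;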

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;An efficient data masking plan is an apparent gain for the organization, mainly because the cost of a data breach can be measured in millions of dollars. Using a solution like Logiq.AI for implementing data masking will help developers, testers, analysts, and other data consumers spend less time figuring out the right ways to secure data and more time working.&lt;/p&gt;

</description>
      <category>database</category>
      <category>datascience</category>
      <category>data</category>
      <category>observability</category>
    </item>
    <item>
      <title>6 Dimensions Of Data Quality</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Mon, 20 Dec 2021 07:49:00 +0000</pubDate>
      <link>https://dev.to/logiq/6-dimensions-of-data-quality-476e</link>
      <guid>https://dev.to/logiq/6-dimensions-of-data-quality-476e</guid>
      <description>&lt;p&gt;Have you ever questioned what it takes to be a truly data-driven company? To make important decisions, you must have faith in the accuracy and reliability of your data.&lt;/p&gt;

&lt;p&gt;Many firms discover that the data they collect is not adequately reliable. According to Experian’s 2021 Global data management research survey, 74% of respondents think they need to improve their data management to thrive. That means that more than half of corporate leaders are unable to make confident decisions based on the data they collect.&lt;/p&gt;

&lt;p&gt;Let’s look at why data quality is crucial to a company and how it can benefit your bottom line.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the significance of data quality?
&lt;/h2&gt;

&lt;p&gt;Data quality is crucial because it allows you to make informed decisions that benefit your customers. A positive customer experience leads to happy customers, brand loyalty, and improved revenue. With low-quality data, you’re just guessing what people want; worse, you might be doing things your clients hate. Collecting credible data and updating existing records helps you get a better picture of your clientele and provides verified email addresses, postal addresses, and phone numbers. This data helps you sell more successfully and efficiently.&lt;/p&gt;

&lt;p&gt;Maintaining data quality helps you stay ahead of the competition. Reliable data keeps your firm agile: you’ll be able to spot new opportunities and conquer challenges before your competitors.&lt;/p&gt;

&lt;p&gt;To gain the greatest outcomes, you must regularly manage data quality. Data quality is crucial as data is used more extensively for more complex use cases. &lt;/p&gt;

&lt;p&gt;Personalization, accurate marketing attribution, predictive analytics, machine learning, and AI applications all rely on high-quality data. Working with low-quality data takes a long time and requires a lot of resources. Poor data quality, according to Gartner, can cost an extra $15 million per year on average. It isn’t only about money loss, though. &lt;/p&gt;

&lt;h2&gt;
  
  
  Poor data quality has a number of consequences for your company, including:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Bad data leads to incomplete or erroneous insights and erodes faith in the data team’s work inside the team as well as the enterprise.&lt;/li&gt;
&lt;li&gt;Companies’ data analytics efforts don’t pay off.&lt;/li&gt;
&lt;li&gt;To confidently use business data in operational and analytical applications, you must understand data quality. Only credible data can allow accurate analysis and thus reliable business decisions.&lt;/li&gt;
&lt;li&gt;The rule of ten states that processing faulty data costs 10 times more than processing the right data.&lt;/li&gt;
&lt;li&gt;Unreliable analyses: Managing the bottom line is difficult when reporting and analysis are distrusted.&lt;/li&gt;
&lt;li&gt;Poor governance and noncompliance risks: Compliance is no longer optional; it is essential for corporate survival.&lt;/li&gt;
&lt;li&gt;Brand depreciation: Businesses whose judgments and processes are regularly incorrect lose a lot of brand value.&lt;/li&gt;
&lt;li&gt;Poor data impacts a company’s growth and innovation strategy. The immediate concern is how to increase data quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What criteria are used to assess data quality?
&lt;/h2&gt;

&lt;p&gt;Data quality is easy to detect but hard to measure. Numerous data attributes can be evaluated to gain context and assessment for data quality. To be effective, customer data must be unique, accurate, and consistent across all engagement channels. Data quality dimensions capture context-specific features.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the definition of a data quality dimension?
&lt;/h2&gt;

&lt;p&gt;Data quality dimensions are measurement attributes that you can examine, interpret, and improve individually. The aggregated scores of several dimensions represent data quality in your specific context and indicate the data’s fitness for use.&lt;/p&gt;

&lt;p&gt;On average, only 3% of DQ scores are graded acceptable (a score of &amp;gt;97%), indicating that high-quality data is the exception.&lt;/p&gt;

&lt;p&gt;Data quality dimension scores are usually expressed as percentages, which serve as a benchmark for the intended purpose. A customer data set that is 52% complete, for example, indicates a lower level of confidence that the planned campaign will reach the proper target segment. To increase data trust, you can specify the acceptable score levels.&lt;/p&gt;
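
&lt;p&gt;As an illustration of how such a score might be computed (the records and required fields here are invented), completeness can be expressed as the share of required cells that are actually populated:&lt;/p&gt;

```python
def completeness_score(records, required_fields):
    # Share of required cells that are actually populated, as a percentage.
    total = len(records) * len(required_fields)
    filled = sum(1 for r in records for f in required_fields if r.get(f))
    return 100.0 * filled / total

customers = [
    {"name": "Ada", "email": "ada@example.com", "phone": ""},
    {"name": "Grace", "email": "", "phone": "555-0100"},
]
score = completeness_score(customers, ["name", "email", "phone"])
```

&lt;p&gt;Comparing the resulting percentage against an agreed acceptability threshold is what turns the raw measurement into a quality decision.&lt;/p&gt;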

&lt;h2&gt;
  
  
  What are data quality dimensions?
&lt;/h2&gt;

&lt;p&gt;The following six major dimensions are commonly used to gauge data quality, with equal or variable weights.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accuracy
&lt;/h2&gt;

&lt;p&gt;The degree to which information accurately reflects the event or entity it represents is referred to as “accuracy.” Accurate data matches a real-world scenario, can be verified, and ensures real-world entities can participate as anticipated. A correct employee phone number ensures that the person is always reachable; an incorrect birth date, on the other hand, can result in lost benefits. Verifying data accuracy requires legitimate references, such as a birth certificate or the actual entity. Testing can sometimes ensure data accuracy: you can check customer bank details against a bank certificate or perform a transaction. Accurate data supports factual reporting and reliable business outcomes, and highly regulated businesses like healthcare and finance require it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Completeness
&lt;/h2&gt;

&lt;p&gt;When data meets the requirements for comprehensiveness, it is deemed “complete.” For customers, it contains the bare minimum required for effective interaction. Data can be considered complete even if a customer’s address lacks an optional landmark component. Completeness helps customers compare and pick products and services: a product description is incomplete without a delivery estimate, and customers can use historical performance data to analyze a financial product’s suitability. Completeness assesses whether the data is sufficient to make valid judgments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Consistency
&lt;/h2&gt;

&lt;p&gt;The same information may be maintained in multiple locations across a business. It’s termed “consistent” if the information matches. For instance, if your human resources information system indicates that an employee no longer works there, but your payroll system indicates that he is still receiving a paycheck, that is an inconsistency. Consistent data enables analytics to appropriately gather and utilize data. Testing for consistency across numerous data sets is tough. If one enterprise system stores a customer phone number with the international code and another does not, that formatting mismatch can be swiftly remedied; if the underlying data conflict, resolution may require a second source. Data consistency is generally linked to data correctness, so any data set that has both is likely to be high-quality.&lt;/p&gt;

&lt;p&gt;Review your data sets to determine if they’re the same in every instance to resolve inconsistency issues. Is there any evidence that the information contradicts itself?&lt;/p&gt;

&lt;h2&gt;
  
  
  Timeliness
&lt;/h2&gt;

&lt;p&gt;Is your data readily available when you need it? “Timeliness” is one of the data quality dimensions. Let’s say you need financial data every quarter; if the data is available when you need it, it’s timely.&lt;/p&gt;

&lt;p&gt;The timeliness dimension of data quality is a user expectation. It doesn’t satisfy that dimension if your information isn’t available when you need it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Validity
&lt;/h2&gt;

&lt;p&gt;Validity is a data quality attribute that refers to whether information conforms to a specified format and meets business rules. For example, ZIP codes are valid if they contain the appropriate characters, and months in a calendar are valid if they match the accepted names. Using business rules to validate data is a methodical strategy.&lt;/p&gt;

&lt;p&gt;To achieve this data quality criterion, make sure that all of your data adhere to a certain format or set of business standards.&lt;/p&gt;
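
&lt;p&gt;A validity check is typically just a business rule applied field by field. For example, a sketch of a US ZIP code rule (the exact pattern is an assumption for illustration):&lt;/p&gt;

```python
import re

ZIP_RULE = re.compile(r"^\d{5}(-\d{4})?$")  # sample US ZIP format rule

def is_valid_zip(value):
    # Valid only if the value conforms to the expected format.
    return bool(ZIP_RULE.match(str(value)))
```

&lt;p&gt;Running such rules over every record yields a validity score in the same percentage form as the other dimensions.&lt;/p&gt;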

&lt;h2&gt;
  
  
  Uniqueness
&lt;/h2&gt;

&lt;p&gt;The term “unique” refers to information that appears just once in a database. Data duplication is a common occurrence, as we all know: two records for “George A. Robertson” may well be the same person. This data quality dimension necessitates a thorough examination of your data to guarantee that none of it is duplicated.&lt;/p&gt;

&lt;p&gt;Uniqueness is crucial to avoid duplication and overlap. Data uniqueness is assessed across all records in a data set. With low duplication and overlap, high uniqueness builds trust in data and analysis.&lt;/p&gt;

&lt;p&gt;Finding overlaps can help keep records unique, while data cleansing and deduplication can remove duplicates. Unique client profiles help offensive and defensive consumer engagement initiatives. This increases data governance and compliance.&lt;/p&gt;
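&lt;p&gt;One simple way to sketch deduplication is to normalize each name into a matching key and keep the first record per key; this is a deliberate simplification of real record-linkage techniques, with invented sample records:&lt;/p&gt;

```python
import re

# Deduplicate customer records by a normalized key: lowercased name with
# punctuation and extra whitespace collapsed (illustrative records).
records = [
    {"id": 1, "name": "George A. Robertson"},
    {"id": 2, "name": "george a robertson"},
    {"id": 3, "name": "Jane Doe"},
]

def normalize(name: str) -> str:
    return re.sub(r"[^a-z0-9]+", " ", name.lower()).strip()

def deduplicate(rows):
    """Keep the first record seen for each normalized name."""
    seen, unique = set(), []
    for row in rows:
        key = normalize(row["name"])
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

print([r["id"] for r in deduplicate(records)])  # [1, 3]
```

&lt;p&gt;Production deduplication would use fuzzier matching (edit distance, phonetic keys), but the principle of comparing on a normalized key is the same.&lt;/p&gt;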

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The fundamental goal of identifying essential data quality dimensions is to provide universal metrics for measuring data quality in various operational or analytical contexts. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define data quality rules and expectations&lt;/li&gt;
&lt;li&gt;Determine minimum thresholds for acceptability&lt;/li&gt;
&lt;li&gt;Assess acceptability thresholds&lt;/li&gt;
&lt;/ul&gt;
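&lt;p&gt;These steps can be sketched as a small check of measured quality scores against agreed thresholds; the metric names and threshold values below are illustrative, not prescribed by the article:&lt;/p&gt;

```python
# Measure simple quality metrics for a data set and compare each against
# an agreed minimum acceptability threshold (all values illustrative).
rows = [
    {"email": "a@example.com", "zip": "30301"},
    {"email": "", "zip": "99501"},
    {"email": "c@example.com", "zip": "bad"},
]

def completeness(rows, field):
    return sum(1 for r in rows if r[field]) / len(rows)

def validity(rows, field, predicate):
    return sum(1 for r in rows if predicate(r[field])) / len(rows)

thresholds = {"email_completeness": 0.9, "zip_validity": 0.9}
scores = {
    "email_completeness": completeness(rows, "email"),
    "zip_validity": validity(rows, "zip", str.isdigit),
}
failures = {k: v for k, v in scores.items() if v < thresholds[k]}
print(failures)  # both metrics fall below 0.9 for this sample
```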

&lt;p&gt;In other words, the claims that correlate to these thresholds can be utilized to monitor how well-measured quality levels fulfill agreed-upon business objectives. Consequently, metrics that match these conformance measures help identify core problems that hinder quality levels from achieving expectations.&lt;/p&gt;

&lt;p&gt;Originally published at &lt;a href="https://logiq.ai"&gt;https://logiq.ai&lt;/a&gt; on October 28, 2021.&lt;/p&gt;

</description>
      <category>data</category>
      <category>database</category>
      <category>dataquality</category>
      <category>datascience</category>
    </item>
    <item>
      <title>How to Reduce TCO and Infrastructure Costs for your Business?</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Mon, 20 Dec 2021 07:38:57 +0000</pubDate>
      <link>https://dev.to/logiq/how-to-reduce-tco-and-infrastructure-costs-for-your-business-2pep</link>
      <guid>https://dev.to/logiq/how-to-reduce-tco-and-infrastructure-costs-for-your-business-2pep</guid>
      <description>&lt;p&gt;A large percentage of organizations today tend to spend way too much on compute resources and storage. For instance, investing in high capacity on-premise data centers, to meet the ever-growing demand when the cloud has a more inexpensive alternative. Statistically speaking, on average, small businesses spend approximately 6.9% of their revenue on IT. So, there is no denying that technology is expensive, and for some, IT might feel like a financial black hole. To keep your IT expenditures from skyrocketing, you must absolutely find ways to reduce your overall TCO.&lt;/p&gt;

&lt;p&gt;As your competitors invest heavily in infrastructure and new technologies to raise productivity, it is natural that you follow suit. But what if there were ways to not only reduce your IT spending significantly but also help your teams unlock optimal infrastructure and application performance?&lt;/p&gt;

&lt;p&gt;While it is easier said than done, there are a few tried and tested strategies that can come in handy in reducing your TCO and infrastructure costs. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Standardize your IT Infrastructure
&lt;/h2&gt;

&lt;p&gt;Technology standardization, simply put, is positioning your applications and IT infrastructure to a set of standards that best fit your strategy, security policies, and goals. Standardized technology negates complexity and has scores of benefits such as cost savings through economies of scale, easy-to-integrate systems, enhanced efficiency, and better overall IT support. Standardizing technology across the board leads to simplified IT management. &lt;/p&gt;

&lt;p&gt;The first step in standardizing technology is to adopt a streamlined, template-based approach that leads to operation-wide consistency. Doing so, in turn, reduces the cost and the complexity of IT processes in the long run. We know this might be difficult to implement for many companies. However, if you manage to reduce the number of variations, you ultimately reduce the TCO of your systems. For instance, a company that provides a standard set of devices to employees across the board finds it easier and less expensive to provide support when compared to an organization whose employees use a mix of Apple and Windows-based devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Have a check on your existing investment
&lt;/h2&gt;

&lt;p&gt;When considering integrating new technology or processes into your IT infrastructure, it is always a good idea to keep tabs on your existing investments. The goal here is to focus on adopting solutions that have maximum agility. Analyze all of your existing equipment and determine which will minimize your future costs and which will hinder your company’s growth. This analysis, albeit time-consuming, is a necessary step to reduce your spending in the future. Our suggestion is to hold on only to the investments that positively impact your organization’s growth.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Adapt to Cloud Storage and Optimize it
&lt;/h2&gt;

&lt;p&gt;When it comes to storage, the cloud is a boon. Switching your storage to the cloud to keep up with ever-evolving storage needs is a great way to reduce on-premise hardware usage. Optimization helps you gain control over the ever-increasing volume of data coming in from different resources. It is prudent to create multiple data pipelines and store the incoming data within its respective pipeline for hassle-free access when needed.&lt;/p&gt;

&lt;p&gt;Additionally, it is also absolutely essential that you distribute workloads evenly between spinning disks and flash to further balance data storage and control. &lt;/p&gt;

&lt;h2&gt;
  
  
  4. Automate it
&lt;/h2&gt;

&lt;p&gt;The longer you leave cloud misconfigurations unattended, the higher your expenses will be. Make use of automated features (such as a cost-optimization tool) not just to set up immediate responses to disarray in your configurations but also to mitigate issues as soon as they occur. Furthermore, these features keep your expenses at a minimum and reduce overall TCO without tedious manual intervention. &lt;/p&gt;

&lt;h2&gt;
  
  
  5. Reducing your TCO with your Observability and Monitoring Platform
&lt;/h2&gt;

&lt;p&gt;An observability and monitoring platform like Splunk can help streamline all your data streams. However, your TCO can shoot through the roof if you don’t optimize your spending. The good news is that it is possible to keep a check on your expenses and make sure that you don’t cross your allocated budget with these few tips and tricks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Leverage Usage-based Licensing
&lt;/h2&gt;

&lt;p&gt;Most observability and monitoring platforms charge you based on the peak daily data volume ingested into the platform, stored in either a database or a flat file, depending on your choice. Although there are no explicit charges for the accumulation of log data, customers are usually expected to bear the cost of hardware for storing log data, including (but not limited to) any high-availability and backup solutions.&lt;/p&gt;

&lt;p&gt;You can cleverly reduce the TCO involved here by carefully planning the data inflow and managing data volume. For instance, you may choose to turn on Splunk for a few hours and then turn off data ingestion to save big on your licensing spends. However, be warned that this could expose your servers and systems to potential business risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Retention
&lt;/h2&gt;

&lt;p&gt;A data retention policy is something every organization must have, as it provides a set of guidelines for securely archiving data and establishes how long data must be retained. While the process seems pretty straightforward on the surface, there is more to it than meets the eye. This is especially true when you need to retain your data for longer durations: increasing data retention periods involves cumbersome, complex workflows that gradually pave the way for an increased TCO over time. &lt;/p&gt;

&lt;p&gt;What started as data ponds in the 90s, after transitioning into data lakes, has now evolved into data oceans. We are currently dealing with exabytes and zettabytes of data, and the outdated scale-out colocation model might not be the best way to handle them.  &lt;/p&gt;

&lt;p&gt;Modern observability and monitoring solutions often provide smart storage options. One example is Splunk’s SmartStore, which is architected for massive scale and high data availability, coupled with remote storage tiers. It is also well-known for performance at scale thanks to cached active data sets. With compute and storage scaling independently and a reduced indexer footprint, you can leverage SmartStore for a phenomenal reduction in your organization’s TCO.&lt;/p&gt;

&lt;h2&gt;
  
  
  Take Complete Control
&lt;/h2&gt;

&lt;p&gt;With Splunk or any other observability and monitoring platform, you have quite limited control over data flow pipelines. To exercise complete control over your data, you have to invest in an expensive additional tool that governs the volume of data and when it gets sent to Splunk.  &lt;/p&gt;

&lt;p&gt;However, this perennial issue has a straightforward solution with LOGIQ.AI’s LogFlow. With LogFlow, you can gain complete visibility into what is affecting your data volume with an AI-powered log flow controller that lets you customize your data pipelines and solve volume challenges. LogFlow can also scrutinize and identify high-volume log patterns and make your data pipelines fully observable. It processes only the essential log data, thereby helping you significantly reduce the volume of unnecessary data ingested to your Splunk environment, ultimately decreasing your licensing and infrastructure costs.&lt;/p&gt;

&lt;p&gt;LogFlow helps you streamline and store all of your incoming data seamlessly without manual intervention and enables you to exercise total data pipeline observability at far lesser costs. LogFlow also eliminates the need for “smart” storage with InstaStore that provides infinite retention of all data (old/new, hot/cold) with indexing at Zero Storage Tax. &lt;/p&gt;

&lt;p&gt;If you’re interested in knowing more about how LOGIQ.AI can help reduce your TCO, &lt;a href="https://logiq.ai/get-started-logiq/"&gt;book a free trial&lt;/a&gt; with us today!&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://logiq.ai/how-to-reduce-tco-and-infrastructure-costs-for-your-business/"&gt;https://logiq.ai/how-to-reduce-tco-and-infrastructure-costs-for-your-business/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tco</category>
      <category>business</category>
      <category>database</category>
      <category>datascience</category>
    </item>
    <item>
      <title>A Beginner’s Guide to SIEM</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Mon, 20 Dec 2021 07:32:20 +0000</pubDate>
      <link>https://dev.to/logiq/a-beginners-guide-to-siem-39hd</link>
      <guid>https://dev.to/logiq/a-beginners-guide-to-siem-39hd</guid>
      <description>&lt;p&gt;IT environments of any organization around the world are constantly under threats of cyberattacks. To stay safe and miles ahead of potential attacks, organizations continually tighten security regulations and focus on reducing their attack surfaces. Constantly improving security is no easy feat and is very challenging. What could help security teams is including SIEM software in their security arsenal. But what is SIEM, and what’s in it for security teams? &lt;/p&gt;

&lt;h2&gt;
  
  
  What is SIEM?
&lt;/h2&gt;

&lt;p&gt;SIEM or Security Information and Event Management systems are security and auditing systems with multiple analysis and monitoring components. When deployed correctly, these components can help an organization detect and remediate threats. A well-rounded SIEM system consists of the following elements. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Log Management System (LMS): Tools for log aggregation, unification, and storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security Information Management (SIM): Systems that focus on collecting, analyzing, and managing data related to security from various data sources. DNS servers, firewalls, antivirus apps, and routers are a few of such data sources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security Event Management (SEM): Proactive monitoring and analysis-based systems that include data visualization, event correlation, and alert generation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A SIEM solution merges all of these components to automatically collect and process information, store it in a centralized location, compare various events, and generate reports and alerts. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why is it important?
&lt;/h2&gt;

&lt;p&gt;Cyber-attacks and threats to our IT environments and computer systems are not going away any time soon. From good old phishing and malware attacks to the latest coin mining, ransomware, and zero-day attacks, threats to our applications, infrastructure, and data are frequent and constantly on the rise. Attackers are getting smarter by the day, and as a result, most of these attacks go unnoticed – often for several months. What can prove very successful against such attacks is an effective threat detection system and thorough network monitoring. Aggregating data from different data sources and correlating between events is now crucial in helping us keep fighting the good fight.&lt;/p&gt;

&lt;p&gt;Additionally, governments worldwide are tightening compliance requirements to protect their citizens’ data, leaving the onus on developers to build a super-secure solution and maintain strict compliance. Only a comprehensive set of security controls with proper monitoring, threat detection and remediation, auditing, and reporting can meet all these requirements. A SIEM system facilitates all of that. &lt;/p&gt;

&lt;h2&gt;
  
  
  How does a SIEM solution work?
&lt;/h2&gt;

&lt;p&gt;At the outset, a SIEM solution collects event and log data from host systems, security devices, and applications across an IT environment and consolidates data from these multiple data points in one location. Post consolidation, the data is sampled against preset security rules, analyzed in real-time, and sorted into categories such as malware activity, successful and failed logins, and other potentially malicious activities. When the system detects any potential security problems, it creates alerts. Organizations can prioritize these alerts using preset rules. For example, a user account generating 100 failed attempts across two minutes of login-related activity would be flagged and alerted as a high-priority event. Alternatively, you could categorize another account with ten failed attempts in ten minutes as suspicious but set to a lower priority. The first scenario could be a brute-force attack in progress, while the second one could just be a forgetful user. &lt;/p&gt;
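&lt;p&gt;A rough sketch of such prioritization rules, using the thresholds from the example above (the event shape and function names are hypothetical, not from any particular SIEM product):&lt;/p&gt;

```python
# Prioritize failed-login alerts with simple sliding-window rules:
# >= 100 failures in 2 minutes -> high priority, >= 10 in 10 minutes -> low.
def classify(failure_times, now):
    """failure_times: timestamps (in seconds) of failed logins for one account."""
    in_2m = sum(1 for t in failure_times if 0 <= now - t <= 120)
    in_10m = sum(1 for t in failure_times if 0 <= now - t <= 600)
    if in_2m >= 100:
        return "high"   # possible brute-force attack in progress
    if in_10m >= 10:
        return "low"    # suspicious, but could just be a forgetful user
    return "none"

brute_force = list(range(100))         # 100 failures within 100 seconds
print(classify(brute_force, now=110))  # high
print(classify([0, 60, 120], now=130))
```

&lt;p&gt;Real SIEM rules would also correlate across accounts and sources, but the windowed-count idea is the core of this kind of prioritization.&lt;/p&gt;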

&lt;h2&gt;
  
  
  Benefits of SIEM
&lt;/h2&gt;

&lt;p&gt;A well-rounded SIEM solution has plenty of benefits that help strengthen an organization’s security posture. Some of these benefits commonly seen across different solutions include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A holistic view of an organization’s information and technology security&lt;/li&gt;
&lt;li&gt;Data convergence from disparate sources of security and log data&lt;/li&gt;
&lt;li&gt;Standardization of log data generated in different formats&lt;/li&gt;
&lt;li&gt;Augmentation of log data with additional attributes by sampling them against security rules&lt;/li&gt;
&lt;li&gt;Making your machine data indexable, searchable, and easily accessible&lt;/li&gt;
&lt;li&gt;Continuous, real-time visibility that helps you stay compliant&lt;/li&gt;
&lt;li&gt;Faster detection and remediation times&lt;/li&gt;
&lt;li&gt;Visualization of raw log data to quickly identify threats, vulnerabilities, and patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What to look for in a SIEM solution
&lt;/h2&gt;

&lt;p&gt;A SIEM solution can accelerate threat detection and responses to threats while enabling SecOps to reduce attack surfaces and mitigate risks to IT environments. Although a good SIEM solution provides plenty of benefits, you need to be tactful while picking one. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;First, assess your security and business objectives. If your business requires that you maintain compliance with several regulations while staying secure, be sure to pick a solution that helps you do both with relative ease. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understand the real TCO (total cost of ownership) of the SIEM solution you’re evaluating. Depending on the vendor’s licensing model, you might end up paying a lot of storage tax for something as essential as storing your data for longer durations. Read through the fine print and see if your vendor lets you retain data for as long as you wish to, without costing you a fortune. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Evaluate the data analytics capabilities of the SIEM solution. A SIEM solution is no good if it cannot identify, correlate, and analyze the knowns and unknowns of your environments and data. Bonus points if the solution has machine learning and AI capabilities. Although machine learning and AI are relatively new, they are essential in helping the solution learn to identify threat patterns automatically and adjust to new data without human input. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Evaluate the ease of integration and automation of the SIEM solution. Your SIEM solution should be easy to integrate with all your existing data sources and incident management systems, no matter how disparate or distributed they are. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;See how resource-intensive the solution could be. Avoid solutions that require trained staff to set up, operate, and manage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Assess the solution’s reporting capabilities. Your SIEM solution should be able to display security-related information and events in a human-readable format. The more dashboarding, visualization, graphing, and textual reporting capabilities the solution possesses, the better your team comprehends and uses that information. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When it comes to information and IT infrastructure security, no amount of preparedness, planning, tools, or measures is ever enough. The numerous benefits of a SIEM solution make it worthy of investment and inclusion in your security arsenal. It helps you automate log monitoring, correlate log and event data, identify patterns, generate alerts, and provide data for compliance. If you’re considering investing in a SIEM solution, look for &lt;a href="https://logiq.ai/siem-soar/"&gt;tools that help you perform all of these functions through a single interface&lt;/a&gt; rather than taking a fragmented approach.&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://logiq.ai/a-beginners-guide-to-siem/"&gt;https://logiq.ai/a-beginners-guide-to-siem/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>data</category>
      <category>database</category>
      <category>datascience</category>
      <category>observability</category>
    </item>
    <item>
      <title>3 Common Challenges Faced When Deploying Splunk</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Tue, 14 Dec 2021 07:28:03 +0000</pubDate>
      <link>https://dev.to/logiq/3-common-challenges-faced-when-deploying-splunk-1ni7</link>
      <guid>https://dev.to/logiq/3-common-challenges-faced-when-deploying-splunk-1ni7</guid>
      <description>&lt;p&gt;Deploying Splunk doesn’t come without challenges. It is common knowledge that Splunk is quite a fantastic tool for monitoring and searching through big data. In simplest terms, it indexes and correlates information generated in an IT environment, makes it searchable, and facilitates generating alerts, reports, and visualizations that aid proactive monitoring, threat remediation, and process improvements. However, there is more to it than meets the eye. It is an understatement to say that only highly skilled and professional technical experts with years of hands-on expertise can maneuver the ins and outs of Splunk. &lt;/p&gt;

&lt;p&gt;In this article, we have collated the most common issues faced when deploying Splunk in an IT environment. The good news is that we also describe how you can maneuver through and mitigate these common issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. High Licensing Cost
&lt;/h2&gt;

&lt;p&gt;Splunk environments are expensive – how much you pay for them is directly proportional to the volume of data ingested: the higher the volume of data, the higher your licensing cost. One of the most common challenges customers face when deploying Splunk is failing to create structured data pipelines, thereby ingesting unnecessary data into the system. Doing so, in turn, results in higher licensing costs. &lt;/p&gt;

&lt;p&gt;As a workaround, teams often switch Splunk off for a few hours to reduce licensing costs. However, periods of zero data ingestion compromise the infrastructure’s security. &lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing Splunk Licensing Cost
&lt;/h2&gt;

&lt;p&gt;At LOGIQ.AI, we recognize the common issues faced with Splunk. We are on a mission to provide XOps teams with complete control over their observability data pipelines without breaking the bank.  &lt;/p&gt;

&lt;p&gt;Our AI- and ML-powered data processing module admits only necessary, high-quality data into your Splunk environment, thereby lowering the volume of data ingested. Lower data volumes naturally mean a significantly lower licensing cost. Furthermore, ingesting only the highest-quality data enhances Splunk performance by avoiding clutter and processing only data with real value. &lt;/p&gt;

&lt;h2&gt;
  
  
  2. Data Retention
&lt;/h2&gt;

&lt;p&gt;Data retention poses a significant challenge in the Splunk environment. Although Splunk is backed by a data retirement and archiving policy, it is still difficult to single out and archive exactly the data you deem unnecessary. In addition, owing to Splunk’s high storage infrastructure costs, there is a growing need to tier storage with Splunk. Even though Splunk SmartStore may seem like a great option in terms of retention, it isn’t necessarily your best friend when it comes to querying historical data regularly. Although your data is structured in your SmartStore, performance takes a massive hit due to the need for rehydration. It also takes immense time and effort to conduct frequent lookback searches with SmartStore deployed. &lt;/p&gt;

&lt;h2&gt;
  
  
  Overcoming Data Retention Woes with LogFlow
&lt;/h2&gt;

&lt;p&gt;LogFlow’s InstaStore decouples storage from compute – not just on paper. InstaStore uses object storage as the primary and only storage tier. All data stored is indexed and searchable in real time, without the need for archival or rehydration. &lt;/p&gt;

&lt;p&gt;InstaStore comes with a plethora of advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero Storage Tax&lt;/li&gt;
&lt;li&gt;Zero Rehydration&lt;/li&gt;
&lt;li&gt;Zero Reindexing&lt;/li&gt;
&lt;li&gt;Zero Reprocessing&lt;/li&gt;
&lt;li&gt;Zero Reanalysis&lt;/li&gt;
&lt;li&gt;Zero Operation Delays&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, you can compare months or even years of data with the most recent data in real time with InstaStore while maintaining 100% compliance and infinite retention.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Limited Control
&lt;/h2&gt;

&lt;p&gt;Although Splunk is a Data-to-Everything platform, another major challenge users face is limited access to and control over their data pipelines. Without observability data pipeline control built in, you must invest in a whole other separate tool to control the volume of data and when it gets sent to Splunk. &lt;/p&gt;

&lt;p&gt;With LogFlow in place, you don’t just have 100% control of upstream data flow into Splunk, but you can also shape, transform, and enhance the data you’re shipping to Splunk. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While Splunk is a great platform for using data to power analytics, security, IT, and DevOps, getting a Splunk deployment to control and derive real value from all the data in your IT environment is no easy task. You’d often find yourself either depending on third-party tools to exercise greater control over data flow and quality or footing the bill for additional infrastructure and services to control and support data volumes. &lt;/p&gt;

&lt;p&gt;At LOGIQ.AI, we understand the pain points of a Splunk user and have engineered LogFlow to mitigate the shortcomings of Splunk and the other observability and monitoring platforms in the market and give your teams total control over the data they need. All of this with extreme cost-effectiveness. In short, LOGIQ.AI makes all observability and monitoring platforms perform better, be more efficient, and be more productive. &lt;/p&gt;

&lt;p&gt;If you’d like to try out LogFlow or get a demo on how LogFlow can improve observability, drop us a line.&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://logiq.ai/3-common-challenges-faced-when-deploying-splunk/"&gt;https://logiq.ai/3-common-challenges-faced-when-deploying-splunk/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>data</category>
      <category>splunk</category>
      <category>database</category>
      <category>observability</category>
    </item>
    <item>
      <title>The difference between monitoring and observability</title>
      <dc:creator>Vinodh</dc:creator>
      <pubDate>Tue, 14 Dec 2021 07:11:21 +0000</pubDate>
      <link>https://dev.to/logiq/the-difference-between-monitoring-and-observability-j6j</link>
      <guid>https://dev.to/logiq/the-difference-between-monitoring-and-observability-j6j</guid>
      <description>&lt;p&gt;We live in a complicated world of Enterprise IT and software-driven consumer product design. The internet offers IT infrastructure services from remote data centers. Companies use these services as microservices and containers spread across infrastructure and platform services. Consumers anticipate frequent feature updates over the internet.&lt;/p&gt;

&lt;p&gt;To fulfill these end-user demands, IT service providers and business organizations must increase the reliability and predictability of backend IT infrastructure operations. To enhance system dependability, we regularly monitor infrastructure performance indicators and statistics.&lt;/p&gt;

&lt;p&gt;Though observability might seem like a buzzword, it is a long-standing principle that underpins monitoring procedures. System observability and monitoring are both important components of system dependability, but they’re not the same, and many wonder how the two differ. Let’s examine the relationship between observability and monitoring in cloud-based business IT operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Observability in software?
&lt;/h2&gt;

&lt;p&gt;Observability in software is the ability to deduce a system’s internal states from external outputs. Its counterpart in control theory, controllability, is the ability to drive a system’s internal states by altering external inputs. Controllability is difficult to assess quantitatively; therefore, system observability is used to evaluate outputs and draw meaningful inferences about system states.&lt;/p&gt;

&lt;p&gt;In business IT, dispersed infrastructure components are virtualized and run on various abstraction levels. This setting makes analyzing and computing system controllability difficult.&lt;/p&gt;

&lt;p&gt;Instead, most people use infrastructure performance logs and metrics to analyze specific hardware components’ and systems’ performance. Analyzing log data with AI (AIOps) helps detect future system failures. Then your IT staff may take proactive steps to minimize end-user impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observability has three fundamental pillars:
&lt;/h2&gt;

&lt;p&gt;Logs: An event log is a permanent record of discrete occurrences that may uncover unexpected behavior in a system and reveal what changed when things went wrong. It’s best to ingest logs in structured JSON format so log visualization tools can auto-index and query them.&lt;/p&gt;

&lt;p&gt;Metrics: Metrics are the cornerstones of monitoring. They are measures or counts accumulated over time. Metrics inform you how much memory a function uses or how many requests a service handles per second.&lt;/p&gt;

&lt;p&gt;Traces: A single trace shows a particular transaction or request moving from one node to another in a distributed system. Traces let you dive into specific requests to determine which components cause system problems, track module flow, and identify performance bottlenecks.&lt;/p&gt;
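&lt;p&gt;As a minimal illustration of the logs pillar above, a service might emit structured JSON log lines that visualization tools can auto-index and query (the field names here are invented for the example):&lt;/p&gt;

```python
import json
import time

def log_event(level, message, **fields):
    """Emit one structured JSON log line (easy to auto-index and query)."""
    record = {"ts": time.time(), "level": level, "message": message, **fields}
    print(json.dumps(record, sort_keys=True))
    return record

# Hypothetical event from a checkout service.
event = log_event("error", "payment failed", service="checkout", order_id=42)
```

&lt;p&gt;Because every line is valid JSON with consistent keys, a log platform can index fields like &lt;code&gt;service&lt;/code&gt; or &lt;code&gt;level&lt;/code&gt; without custom parsing rules.&lt;/p&gt;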

&lt;h2&gt;
  
  
  What is Monitoring?
&lt;/h2&gt;

&lt;p&gt;Being observable means a system’s internal status can be known. Monitoring comprises the activities that build on observability: observing the quality of system performance over time. Monitoring describes the performance, health, and other critical features of a system’s internal states. In corporate IT, monitoring refers to the practice of turning infrastructure log information into actionable insights.&lt;/p&gt;

&lt;p&gt;The observability of a system involves how effectively infrastructure log metrics can infer individual component performance. Monitoring tools use infrastructure log metrics to provide actionable data and insights.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring vs. Observability
&lt;/h2&gt;

&lt;p&gt;Consider a vast, complicated data center whose infrastructure is monitored by log analysis and ITSM technologies. Analyzing too much data generates needless alarms and false flags. Without assessing the right measurements and thoroughly filtering out what’s unnecessary from all the information the system generates, the infrastructure cannot be considered observable.&lt;/p&gt;

&lt;p&gt;Single server machines can be readily monitored for hardware energy consumption, temperature, data transmission rates, and processor performance. These variables are highly linked with system health. So the system is observable. Performance, life expectancy, and risk of possible performance issues may be examined proactively using simple monitoring tools like energy and temperature measurement equipment.&lt;/p&gt;
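&lt;p&gt;A toy version of such threshold-based monitoring, with made-up readings and alert thresholds (the metric names and limits are illustrative only):&lt;/p&gt;

```python
# Compare server readings against alert thresholds (illustrative values).
THRESHOLDS = {"temperature_c": 80, "power_watts": 450, "cpu_util_pct": 95}

def check_health(readings):
    """Return the metrics that exceed their thresholds, sorted by name."""
    return sorted(
        name for name, value in readings.items()
        if value > THRESHOLDS.get(name, float("inf"))
    )

readings = {"temperature_c": 84, "power_watts": 300, "cpu_util_pct": 60}
print(check_health(readings))  # ['temperature_c']
```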

&lt;p&gt;The observability of a system depends on its simplicity, the metric representation, and the monitoring tools’ ability to recognize them. Despite a system’s intrinsic complexity, this combination provides essential insights.&lt;/p&gt;

&lt;p&gt;Your teams should have the following to monitor and observe effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System health reporting (Do my systems work? Do my systems have enough resources?).&lt;/li&gt;
&lt;li&gt;Reporting on customer-experienced system condition (Do my customers know if my system is down?).&lt;/li&gt;
&lt;li&gt;Key business and system metrics monitoring&lt;/li&gt;
&lt;li&gt;Tools to understand and debug production systems.&lt;/li&gt;
&lt;li&gt;Tooling to find information about things you did not previously know (that is, you can identify unknown unknowns).&lt;/li&gt;
&lt;li&gt;Tools and data to trace, analyze and diagnose production infrastructure issues, including service interactions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Observability and monitoring implementation
&lt;/h2&gt;

&lt;p&gt;Monitoring and observability solutions are intended to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide early warning signs of service breakdown.&lt;/li&gt;
&lt;li&gt;Detect outages, bugs, and unauthorized activity.&lt;/li&gt;
&lt;li&gt;Assist in the investigation of service disruptions.&lt;/li&gt;
&lt;li&gt;Identify long-term patterns for business and capacity planning.&lt;/li&gt;
&lt;li&gt;Expose unforeseen impacts of modifications or new features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Installing a tool is not enough to fulfill DevOps goals, although tools can help or impede the endeavor. Monitoring methods should not be limited to a single person or team. Empowering all developers to use monitoring reduces outages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Combining the forces of Monitoring and Observability
&lt;/h2&gt;

&lt;p&gt;Though Observability and Monitoring are distinct tasks, they are linked. Both monitoring and observability technologies can help you identify issues. Monitoring and Observability go hand in hand since not all concerns deserve further investigation. Maybe your monitoring tools report a server offline, but it was part of a planned shutdown. You don’t need to collect and evaluate various data types. Just log the alert and go.&lt;/p&gt;

&lt;p&gt;Observability data is essential when dealing with serious situations. Manually gathering the same data that observability technologies provide would be time-consuming. Observability tools always have data to understand a challenging scenario. Several solutions also provide ideas or automated assessments to help teams navigate complex observability data and identify fundamental causes.  &lt;/p&gt;

&lt;p&gt;With LOGIQ, you can gather, process, and analyze behavioral data and use patterns from business systems to help you make better business choices and provide better user experiences. AI can evaluate operational data across apps and infrastructure to provide actionable insights that allow you to scale effectively. Sign up for a free trial today to take your business to the next level.&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://logiq.ai/the-difference-between-monitoring-and-observability/"&gt;https://logiq.ai/the-difference-between-monitoring-and-observability/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>data</category>
      <category>dataengineering</category>
      <category>observability</category>
      <category>database</category>
    </item>
    <item>
      <title>How AIOps Helps in Application Monitoring</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Wed, 07 Jul 2021 16:10:53 +0000</pubDate>
      <link>https://dev.to/logiq/how-aiops-helps-in-application-monitoring-2h6i</link>
      <guid>https://dev.to/logiq/how-aiops-helps-in-application-monitoring-2h6i</guid>
      <description>&lt;p&gt;There’s no one-size-fits-all approach regarding application monitoring, especially for companies using applications in various cloud environments. Companies are rapidly investing in microservices, mobile apps, data science programs, data ops, etc. Subsequently, they’re also integrating monitoring tools to improve domain-centric monitoring abilities.&lt;/p&gt;

&lt;p&gt;AIOps tools help streamline the use of monitoring applications. It allows companies that need high application services to efficiently manage the complexities of IT workflows and monitoring tools. AIOps extends machine learning and automation abilities to IT operations. These robust technologies aim to detect vulnerabilities and issues to resolve them, determine operational trends, and simplify the remediation of the problems that affect their applications’ performance and availability.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly Is AIOps?
&lt;/h2&gt;

&lt;p&gt;AIOps is short for Artificial Intelligence for IT Operations. AIOps combines machine learning, data analytics, and many other AI technologies to automate the identification and remediation of common and recurring IT operations issues. AIOps leverages data from logs and event recordings to monitor assets and obtain visibility into dependencies without interfering with IT systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Capabilities of AIOps Platforms
&lt;/h2&gt;

&lt;p&gt;AIOps platforms provide the following capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machine learning capabilities to help in identifying patterns in the collected data.&lt;/li&gt;
&lt;li&gt;A dedicated data platform for aggregating raw data and logs from various monitoring tools and data sources across your applications and infrastructure. &lt;/li&gt;
&lt;li&gt;Dashboards, analytics, and console integration help IT operations gain a single-pane view over their applications and infrastructure.&lt;/li&gt;
&lt;li&gt;Out-of-the-box integrations with tools used for IT service management, monitoring, agile development, collaboration, and log data collection, parsing, and ingestion tools. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Does AIOps Work?
&lt;/h2&gt;

&lt;p&gt;AIOps platforms are powered by algorithms that automate and simplify prominent aspects of IT operations and application monitoring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Selection&lt;/strong&gt;: AIOps platforms collect all the data generated by applications and infrastructure in the form of logs and events and analyze it. Post analysis, they highlight the data that indicates an issue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pattern Discovery&lt;/strong&gt;: AIOps platforms correlate and find relationships between different data elements in the form of patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inference&lt;/strong&gt;: AIOps determines the root causes of new and recurring issues, allowing companies to take proactive actions to mitigate the implications of these issues. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration&lt;/strong&gt;: AIOps platforms simplify and promote collaboration across IT teams through unified dashboards and intelligent notification systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: AIOps works towards automating responses to issues and threats as much as possible, thereby making issue and threat remediation quick and straightforward. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Improved Application Monitoring with AIOps
&lt;/h2&gt;

&lt;p&gt;The adoption of AIOps has numerous benefits – right from processing data from multiple sources faster and using that data to make data-driven decisions, to making IT operations more proactive by predicting and remediating performance issues across applications and deployments. Let’s take a closer look at how AIOps is helpful in improving your application monitoring efforts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Detect Hidden Relationships
&lt;/h3&gt;

&lt;p&gt;IT operations and monitoring are an extensive web of interdependencies; no system works independently. However, with so much data present, it is challenging to understand the relationships between systems. AIOps allows you to evaluate performance metrics across different types of systems quickly. This can help identify the impact of IT applications on the overall company’s performance and customer satisfaction. &lt;/p&gt;

&lt;p&gt;This is accomplished by initially working with the business to determine mission-critical activities for such applications. The next step is to gather data produced during the day-to-day tasks like orders, cancellations, transactions, etc. AIOps algorithms can be leveraged to identify patterns or clusters in the collected data, allowing businesses to understand the relationships better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimizing The Use of Customer And Transaction Data
&lt;/h3&gt;

&lt;p&gt;AIOps capabilities include pattern identification, anomaly recognition, categorization, and extrapolation. These are essential aspects of the big data analytics operations that organizations apply to transaction and customer data. Leveraging AIOps can help in understanding user behavior across broad IT systems. &lt;/p&gt;

&lt;p&gt;This will make it easier to monitor how any modifications on the applications will affect the business operations. By harnessing internal application monitoring data, AIOps can bring together customer and transaction data effectively. When the information is readily available, a business can efficiently choose the right path for the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Forecasting The Issues
&lt;/h3&gt;

&lt;p&gt;An essential role of AIOps is improving predictive analytics. It closely studies the current and past behavior of applications, allowing the technology to predict future scenarios and enabling the business to adjust its strategies. This proactive approach helps improve application performance and gain competitive advantages. &lt;/p&gt;

&lt;p&gt;For instance, companies can identify changing trends in how users interact with apps, giving them a clear idea of the areas they need to focus on. Moreover, AIOps allows businesses to perform a deep analysis of a problem’s root cause and take the necessary steps to eliminate the issue before it impacts performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Decrease The Response Time
&lt;/h3&gt;

&lt;p&gt;By leveraging AIOps, companies can reduce the response time for dealing with errors and outages. Experts believe that AIOps can reduce the cost of events like errors and &lt;a href="https://thenewstack.io/the-current-state-of-aiops/"&gt;outages by 30% to 40%&lt;/a&gt;. This signifies a massive saving, considering that the average cost a company bears during a service disruption is approximately $300,000 per hour. &lt;/p&gt;

&lt;p&gt;This is due to the ability of this powerful technology to detect where the data originates. Every system that a business uses produces a lot of data, making it harder to track the source of information. But AIOps manages this massive amount of data from a central location, allowing for better process and application security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bringing Together Silos
&lt;/h3&gt;

&lt;p&gt;One of the hurdles in improving application performance is how siloed organizations can be. More than 90% of IT professionals say that most monitoring tools only provide them with information related to their areas of responsibility.&lt;/p&gt;

&lt;p&gt;But AIOps can deal with this issue by leveraging data analytics and machine learning. These technologies allow the tools to monitor tons of information streams. Such extensive monitoring makes it easier to spot problems that would otherwise be difficult to spot with a siloed approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;IT leverages a lot of application monitoring tools to maintain operational efficiency. However, each of these tools collects a massive amount of data that needs to be maintained. Teams fail to detect vulnerabilities and issues in this complex web of data, leading to security threats. By harnessing the potential of AIOps, IT teams can automate and improve their application monitoring processes by leaps and bounds.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>apm</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>How to stream AWS CloudWatch logs to LOGIQ</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Sat, 26 Jun 2021 10:51:06 +0000</pubDate>
      <link>https://dev.to/logiq/how-to-stream-aws-cloudwatch-logs-to-logiq-388e</link>
      <guid>https://dev.to/logiq/how-to-stream-aws-cloudwatch-logs-to-logiq-388e</guid>
      <description>&lt;p&gt;AWS CloudWatch is an observability and monitoring service that provides you with actionable insights to monitor your applications, stay on top of performance changes, and optimize resource utilization while providing a centralized view of operational health. AWS CloudWatch collects operational data of your AWS resources, applications, and services running on AWS and on-prem servers in the form of logs, metrics, and events. CloudWatch then uses this data to help detect and troubleshoot issues and errors in your environments, visualize logs and metrics, set up and take automated actions, and uncover insights that help keep your applications and deployments running smoothly. &lt;/p&gt;

&lt;p&gt;AWS CloudWatch provides excellent observability for your applications and infrastructure hosted on AWS. But what about your applications and resources hosted on other service providers? While you can stream their logs into CloudWatch using proxies and exporters, it isn’t that straightforward. You’d have to monitor them separately using your service provider’s own monitoring tool or build something in-house using Prometheus or Grafana, maybe. Why train your eyes to watch multiple monitoring tools when you can centralize monitoring and observability across your on-premise servers and cloud providers with LOGIQ? LOGIQ plugs into numerous data sources to centralize your logs and visualize them in a single pane regardless of the service provider. &lt;/p&gt;

&lt;p&gt;You can easily stream your AWS CloudWatch logs into LOGIQ, letting you monitor your AWS resources and applications along with everything else you’re watching with LOGIQ. You can also &lt;a href="https://logiq.ai/integrated-ui/"&gt;visualize and analyze&lt;/a&gt; your AWS CloudWatch logs in real-time and gain powerful insights into their performance and security.&lt;/p&gt;

&lt;p&gt;This guide will show you how you can stream your AWS CloudWatch logs into LOGIQ in no time. You can get yourself a free-forever instance of the &lt;a href="https://docs.logiq.ai/logiq-server/logiq-paas-community-edition"&gt;LOGIQ PaaS Community Edition&lt;/a&gt; and try out the steps listed in this article to stream your AWS CloudWatch logs to LOGIQ.&lt;/p&gt;

&lt;h2&gt;
  
  
  LOGIQ’s AWS CloudWatch Exporter Lambda function
&lt;/h2&gt;

&lt;p&gt;Since we love keeping it simple at LOGIQ, we’ve built an AWS Lambda function that enables you to export your CloudWatch logs to your LOGIQ instance. This AWS Lambda function acts as a trigger for a CloudWatch log stream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kAxDH38---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15cntzx0itujudujpd1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kAxDH38---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15cntzx0itujudujpd1a.png" alt="How the LOGIQ CloudWatch Exporter Lambda function works"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the LOGIQ CloudWatch Exporter Lambda Function
&lt;/h2&gt;

&lt;p&gt;You can create the LOGIQ CloudWatch Exporter Lambda Function using the CloudFormation template available at &lt;a href="https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/cf.yaml"&gt;https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/cf.yaml&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Alternatively, you can also use the code available in our client integrations Bitbucket repository to create the Lambda function. &lt;/p&gt;

&lt;p&gt;This CloudFormation template creates a Lambda function along with the permissions it needs. Before using this template, you’ll need to configure the following attributes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;APPNAME&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A readable application name for LOGIQ to partition logs by.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CLUSTERID&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A Cluster ID for LOGIQ to partition logs by.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;NAMESPACE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A namespace for LOGIQ to partition logs by.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;LOGIQHOST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;IP address or hostname of your LOGIQ instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;INGESTTOKEN&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JWT token to securely ingest logs into LOGIQ&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
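&lt;p&gt;If you prefer the command line, a stack can be created from the same template with the AWS CLI. This is a sketch, assuming the AWS CLI is configured with permissions to create IAM roles and Lambda functions; every parameter value and the stack name below are placeholders to substitute with your own.&lt;/p&gt;

```shell
# Deploy the exporter stack from the CLI; all values are illustrative.
aws cloudformation create-stack \
  --stack-name logiq-cloudwatch-exporter \
  --template-url https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/cf.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=APPNAME,ParameterValue=my-app \
    ParameterKey=CLUSTERID,ParameterValue=cluster-01 \
    ParameterKey=NAMESPACE,ParameterValue=production \
    ParameterKey=LOGIQHOST,ParameterValue=logiq.example.com \
    ParameterKey=INGESTTOKEN,ParameterValue=YOUR_JWT_TOKEN
```

&lt;p&gt;The &lt;code&gt;--capabilities CAPABILITY_IAM&lt;/code&gt; flag is needed because the template creates the permissions the Lambda function requires.&lt;/p&gt;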

&lt;h2&gt;
  
  
  Creating and configuring the CloudWatch trigger
&lt;/h2&gt;

&lt;p&gt;Once you’ve created the AWS Lambda function, it’s time to create and configure the CloudWatch trigger. On your AWS dashboard, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the AWS Lambda function you just created (logiq-cloudwatch-exporter).&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add Trigger&lt;/strong&gt;. &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ePNa72t0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r0phn9mo7lsmpz262pfk.png" alt="Adding a CloudWatch trigger"&gt;
&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Add Trigger&lt;/strong&gt; page, select &lt;strong&gt;CloudWatch Logs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Next, select the &lt;strong&gt;Log group&lt;/strong&gt; you’d like to stream to LOGIQ.&lt;/li&gt;
&lt;li&gt;Enter a &lt;strong&gt;Filter name&lt;/strong&gt; and optionally add a &lt;strong&gt;Filter pattern&lt;/strong&gt;. &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GpmM8lwx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67hby8yty67fofn7kjk9.png" alt="Configuring the CloudWatch trigger"&gt;
&lt;/li&gt;
&lt;/ol&gt;
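&lt;p&gt;The console steps above can equivalently be scripted with the AWS CLI by creating a subscription filter on the log group. This is a sketch; the log group name and Lambda ARN are placeholders. Note that the console grants CloudWatch Logs permission to invoke the function automatically, so from the CLI you add that permission first.&lt;/p&gt;

```shell
# Allow CloudWatch Logs to invoke the exporter function (the console does
# this for you automatically).
aws lambda add-permission \
  --function-name logiq-cloudwatch-exporter \
  --statement-id logiq-cw-trigger \
  --action lambda:InvokeFunction \
  --principal logs.amazonaws.com

# Subscribe the log group to the function; an empty filter pattern
# forwards every log event.
aws logs put-subscription-filter \
  --log-group-name /aws/containerinsights/my-cluster/application \
  --filter-name logiq-export \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:logiq-cloudwatch-exporter
```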

&lt;p&gt;And that’s it! All new logs from the CloudWatch log group you configured are streamed directly to your LOGIQ instance.&lt;/p&gt;

&lt;p&gt;From here, you can easily view, query, visualize, and analyze your CloudWatch logs while detecting anomalies in real time, helping you keep your AWS applications and resources always on and performing at their best.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JVfKXrp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cevx0zpnpobtswhn6bnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JVfKXrp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cevx0zpnpobtswhn6bnx.png" alt="The LOGIQ dashboard streaming logs from AWS CloudWatch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you enjoyed trying out this guide and the Community Edition of LOGIQ PaaS, let us know in the comments. You can also reach out to us if you'd like a detailed demo of the LOGIQ Observability platform and witness first-hand how LOGIQ can help you derive more value from your log data.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>Shipping and Visualizing Jenkins Logs with LOGIQ</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Fri, 25 Jun 2021 14:54:23 +0000</pubDate>
      <link>https://dev.to/logiq/shipping-and-visualizing-jenkins-logs-with-logiq-2ccf</link>
      <guid>https://dev.to/logiq/shipping-and-visualizing-jenkins-logs-with-logiq-2ccf</guid>
      <description>&lt;p&gt;Jenkins is by far the leading open-source automation platform. A majority of developers turn to Jenkins to automate processes in their development, test, and deployment pipelines. Jenkins’ support for plugins helps automate nearly every task and set up robust continuous integration and continuous delivery pipelines. &lt;/p&gt;

&lt;p&gt;Jenkins provides logs for every Job it executes. These logs offer detailed records related to a Job, such as a build name and number, time for completion, build status, and other information that help analyze the results of running the Job. A typical large-scale implementation of Jenkins in a multi-node environment with multiple pipelines generates tons of logs, making it challenging to identify errors and analyze their root cause(s) whenever there’s a failure. Setting up centralized observability for your Jenkins setup can help overcome these challenges by providing a single pane to log, visualize, and analyze your Jenkins logs. A robust observability platform enables you to debug pipeline failures, optimize resource allocation, and identify bottlenecks in your pipeline that hamper faster delivery. &lt;/p&gt;

&lt;p&gt;We’ve all come across numerous articles that discuss using the popular ELK stack to track and analyze Jenkins logs. While the ELK stack is a popular service for logging and monitoring, its &lt;a href="https://logiq.ai/major-challenges-in-elk-stack-logging/" rel="noopener noreferrer"&gt;use can be a little challenging&lt;/a&gt;. While the ELK stack performs brilliantly in simple, single-use scenarios, it struggles with manageability and scalability in large-scale deployments. Additionally, their associated costs (and changes in Elastic Licensing) might raise a few eyebrows. LOGIQ, on the other hand, is a true-blue observability PaaS that helps you ingest log data from &lt;a href="https://logiq.ai/k8s/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, &lt;a href="https://logiq.ai/monitoring/" rel="noopener noreferrer"&gt;on-prem servers or cloud VMs, applications&lt;/a&gt;, and &lt;a href="https://logiq.ai/integrations/" rel="noopener noreferrer"&gt;several other data sources&lt;/a&gt; without a price shock. As LOGIQ uses S3 as the primary storage layer, you get better control and ownership over your data and as much as 10X reductions in costs in large-scale deployments. In this article that’s part of a two-article series, we’ll demonstrate how you can get started with Jenkins log analysis using LOGIQ. We’ll walk you through installing Logstash, setting up your Jenkins instance, and ingesting log data into LOGIQ to visualize and analyze your Jenkins logs. &lt;/p&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;p&gt;Before we dive into the demo, here’s what you’d need in case you’d like to follow along and try integrating your Jenkins logs with LOGIQ:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A LOGIQ instance&lt;/strong&gt;: If you don’t have access to a LOGIQ instance, you can quickly spin up the &lt;a href=""&gt;free-forever Community Edition of LOGIQ PaaS&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A Jenkins instance&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing Logstash
&lt;/h2&gt;

&lt;p&gt;Logstash is a free server-side data processing pipeline that ingests data from many sources, transforms it, and then sends it to your favourite stash. We’ll use Logstash as an intermediary between Jenkins and LOGIQ that grooms your Jenkins log data before being ingested by LOGIQ. &lt;/p&gt;

&lt;p&gt;To install Logstash on your local (Ubuntu) machine, run the following commands in succession:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install apt-transport-https
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For detailed instructions on installing Logstash on other OSs, refer to the &lt;a href="https://www.elastic.co/guide/en/logstash/current/installing-logstash.html" rel="noopener noreferrer"&gt;official Logstash documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Now that we’ve installed Logstash, download the flatten configuration and place it in your desired directory. The &lt;a href="https://github.com/hegdesandesh25/Logstashconfig/blob/main/FlattenJSON.rb" rel="noopener noreferrer"&gt;flatten configuration&lt;/a&gt; helps structure data before ingestion into LOGIQ. Once you’ve downloaded the flatten configuration, use the following Logstash configuration to push your Jenkins logs to LOGIQ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input {
  tcp {
    port =&amp;gt; 12345
    codec =&amp;gt; json
  }
}
output { stdout { codec =&amp;gt; rubydebug } }
filter {
    split {
        field =&amp;gt; "message"
    }
  mutate {
    add_field =&amp;gt; { "cluster_id" =&amp;gt; "JENKINS-LOGSTASH" }
    add_field =&amp;gt; { "namespace" =&amp;gt; "jenkins-ci-cd-1" }
    add_field =&amp;gt; { "application" =&amp;gt; "%{[data][fullProjectName]}" }
    add_field =&amp;gt; { "proc_id" =&amp;gt; "%{[data][displayName]}" }
  }
ruby {
        path =&amp;gt; "/home/yourpath/flattenJSON.rb"
        script_params =&amp;gt; { "field" =&amp;gt; "data" }
    }
}
output {
  http {
        url =&amp;gt; "http://&amp;lt;logiq-instance&amp;gt;/v1/json_batch"
        http_method =&amp;gt; "post"
        format =&amp;gt; "json_batch"
        content_type =&amp;gt; "application/json"
        pool_max =&amp;gt; 300
        pool_max_per_route =&amp;gt; 100
       }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Make sure you change the path in the configuration to the path where you downloaded the flatten configuration file. Also, remember to replace the LOGIQ endpoint with the endpoint of your LOGIQ instance. If you haven’t provisioned LOGIQ yet, you can do so by following one of our &lt;a href="https://docs.logiq.ai/logiq-server/quickstart-guide" rel="noopener noreferrer"&gt;quickstart guides&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Jenkins
&lt;/h2&gt;

&lt;p&gt;Now that we’ve got Logstash ready to go, let’s go ahead and configure Jenkins to use Logstash. For this demo, we’ve created two Jenkins pipeline jobs whose logs we’ll push to Logstash. You can use your own Jenkins logs when following along. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhvmfkikffwnejb7z62m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhvmfkikffwnejb7z62m.png" alt="The Jenkins dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To push Jenkins logs to Logstash, we first need to install the Logstash plugin on Jenkins. To install Logstash, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log on to your Jenkins instance. &lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Manage Jenkins&lt;/strong&gt; &amp;gt; &lt;strong&gt;Manage Plugins&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Search for &lt;strong&gt;Logstash&lt;/strong&gt; under &lt;strong&gt;Available&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Once Logstash shows up, click &lt;strong&gt;Install without restart&lt;/strong&gt;. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwscvnj3w6poth2fusqw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwscvnj3w6poth2fusqw3.png" alt="Installing the Logstash plugin on Jenkins"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing Logstash, we’ll go ahead and configure and enable Jenkins to push logs to Logstash. To configure Jenkins, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Manage Jenkins&lt;/strong&gt; &amp;gt; &lt;strong&gt;Configure System&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Scroll down until you see &lt;strong&gt;Logstash&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Enter the &lt;strong&gt;Host name&lt;/strong&gt; and &lt;strong&gt;Port&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flocbxqw07a2urraou56y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flocbxqw07a2urraou56y.png" alt="Configuring the Logstash plugin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: In this example, we’ve entered the IP address and port number of the local Ubuntu machine on which we installed Logstash. Ensure that you provide the IP address and port number of the machine where you’ve installed Logstash. &lt;/p&gt;

&lt;p&gt;Your Jenkins instance is now ready to push logs to Logstash.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shipping logs to LOGIQ
&lt;/h2&gt;

&lt;p&gt;We’ve got Jenkins ready to ship logs to Logstash and Logstash prepared to pick them up and groom them for ingestion into LOGIQ. Let’s go ahead and start Logstash from the installation folder (&lt;code&gt;/usr/share/logstash&lt;/code&gt;) and pass the custom configuration file we prepared above using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/logstash# bin/logstash -f /etc/logstash/logstash-sample.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! Your logging pipeline is up and running. Now when you head over to the Logs page on your LOGIQ dashboard, you’ll see all of your Jenkins logs that Logstash pushed to LOGIQ. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnjh3upht6wy9r2g1wtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnjh3upht6wy9r2g1wtz.png" alt="The Logs page on your LOGIQ dashboard with Jenkins logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, you can create custom metrics from your logs, create events and alerts, and set up powerful dashboards that help visualize your log data. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobd9f9veil7a38m9khzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobd9f9veil7a38m9khzy.png" alt="Visualising your Jenkins log data using LOGIQ"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This completes our overview of shipping and visualizing your Jenkins logs with LOGIQ. In a future article, we'll show you exactly how you can create powerful visualizations from your Jenkins logs. In the meanwhile, do drop a comment or &lt;a href="https://logiq.ai/" rel="noopener noreferrer"&gt;reach out&lt;/a&gt; to us in case you have any questions or would like to know more about how LOGIQ can bring multi-dimensional observability to your applications and infrastructure and bring your log data to life.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>operations</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>Getting Started with the LOGIQ PaaS Community Edition</title>
      <dc:creator>Ajit Chelat</dc:creator>
      <pubDate>Thu, 24 Jun 2021 14:30:30 +0000</pubDate>
      <link>https://dev.to/logiq/getting-started-with-the-logiq-paas-community-edition-1a88</link>
      <guid>https://dev.to/logiq/getting-started-with-the-logiq-paas-community-edition-1a88</guid>
      <description>&lt;p&gt;If you’ve been looking for an inexpensive way to run your own observability stack while maintaining complete control over your data and its security, look no further. The LOGIQ PaaS Community Edition is officially live!&lt;/p&gt;

&lt;p&gt;With the LOGIQ PaaS Community Edition, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Self-host your observability stack on a cloud provider of your choice – public or private &lt;/li&gt;
&lt;li&gt;Ingest up to &lt;strong&gt;50GB&lt;/strong&gt; of log data &lt;strong&gt;per day&lt;/strong&gt; with &lt;strong&gt;unlimited data retention&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Store your log data on any S3-compatible cloud provider via the built-in Minio S3 service&lt;/li&gt;
&lt;li&gt;Ingest logs from Syslog, RSyslog, Logstash, Fluent, AWS Firelens, JSON, and &lt;strong&gt;plenty more&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Run up to &lt;strong&gt;4 ingest worker&lt;/strong&gt; processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll also get access to all of the LOGIQ Enterprise Edition’s features along with Community Support, &lt;strong&gt;free forever&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5bqcmn5z8zyjygvm0vb.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5bqcmn5z8zyjygvm0vb.gif" alt="The LOGIQ UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What’s more? Deploying LOGIQ PaaS is ridiculously easy! This article will show you exactly how you can deploy the LOGIQ PaaS Community Edition on your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;p&gt;To get you up and running with the LOGIQ PaaS Community Edition quickly, we’ve made LOGIQ PaaS’ Kubernetes components available as &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; Charts. To deploy LOGIQ PaaS, you’ll need access to a Kubernetes cluster and Helm 3.&lt;/p&gt;
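
&lt;p&gt;If you'd like to verify your environment before proceeding, the following quick checks confirm that Helm 3 is installed and that &lt;code&gt;kubectl&lt;/code&gt; can reach your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Should report a v3.x version
helm version --short

# Confirms that your kubeconfig points at the right cluster
kubectl cluster-info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
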

&lt;p&gt;Before you start deploying LOGIQ PaaS, let’s run through a few quick steps to set up your environment correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the LOGIQ Helm repository
&lt;/h3&gt;

&lt;p&gt;Add LOGIQ’s Helm repository to your Helm repositories by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add logiq-repo https://logiqai.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Helm repository you just added is named &lt;code&gt;logiq-repo&lt;/code&gt;. Whenever you install charts from this repository, ensure that you use the repository name as the prefix in your install command, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install &amp;lt;deployment_name&amp;gt; logiq-repo/&amp;lt;chart_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now search for the Helm charts available in the repository by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm search repo logiq-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this command displays a list of the available Helm charts along with their details, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo update
$ helm search repo logiq-repo
NAME                CHART VERSION    APP VERSION    DESCRIPTION
logiq-repo/logiq    2.2.11           2.1.11         LOGIQ Observability HELM chart for Kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’ve already added LOGIQ’s Helm repository in the past, you can update the repository by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a namespace to deploy LOGIQ PaaS
&lt;/h3&gt;

&lt;p&gt;Create a namespace where we’ll deploy LOGIQ PaaS by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace logiq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the command shown above creates a namespace named &lt;code&gt;logiq&lt;/code&gt;. You can also name your namespace differently by replacing &lt;code&gt;logiq&lt;/code&gt; with the name of your choice in the command above. If you do, remember to use the same namespace for the rest of the instructions listed in this guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Ensure that the name of the namespace is not more than 15 characters in length.&lt;/p&gt;
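
&lt;p&gt;For example, if you'd rather use a custom namespace name (here, the hypothetical &lt;code&gt;logiq-prod&lt;/code&gt;, which is well within the 15-character limit), create it and then reference it consistently in every later command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# "logiq-prod" is just an example name (10 characters)
kubectl create namespace logiq-prod

# Use the same namespace in all subsequent commands, e.g.:
#   helm install logiq --namespace logiq-prod ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
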

&lt;h3&gt;
  
  
  Prepare your Values file
&lt;/h3&gt;

&lt;p&gt;As with any other package deployed via Helm charts, you can configure your LOGIQ PaaS deployment using a Values file. The Values file acts as the Helm chart’s API, giving it access to values that populate the Helm chart’s templates.&lt;/p&gt;

&lt;p&gt;To give you a head start with configuring your LOGIQ deployment, we’ve provided sample &lt;code&gt;values.yaml&lt;/code&gt; files for small, medium, and large clusters. You can use these files as a base for configuring your LOGIQ deployment. You can download these files from the following links. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://firebasestorage.googleapis.com/v0/b/gitbook-28427.appspot.com/o/assets%2F-LmzGprckLqwd5v6bs6m%2F-MOSfp6X1_SPwV_8AGhv%2F-MOSh7NloEncIi1LjUyh%2Fvalues.small.yaml?alt=media&amp;amp;token=83d76953-0854-4a48-a3a8-0591aded0bc6" rel="noopener noreferrer"&gt;&lt;code&gt;values.small.yaml&lt;/code&gt;&lt;/a&gt; for small clusters.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://firebasestorage.googleapis.com/v0/b/gitbook-28427.appspot.com/o/assets%2F-LmzGprckLqwd5v6bs6m%2F-MQ3BQwto2mGZmAgEveP%2F-MQ3BW2mk4SRtFYNkQ2B%2Fvalues.medium.yaml?alt=media&amp;amp;token=95ffa9d0-a736-4213-9425-1b5ff7fa3178" rel="noopener noreferrer"&gt;&lt;code&gt;values.medium.yaml&lt;/code&gt;&lt;/a&gt; for medium clusters.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://firebasestorage.googleapis.com/v0/b/gitbook-28427.appspot.com/o/assets%2F-LmzGprckLqwd5v6bs6m%2F-MQ3BQwto2mGZmAgEveP%2F-MQ3BXv1S-DqlVCWRpOw%2Fvalues.large.yaml?alt=media&amp;amp;token=7d4772bf-39e0-4030-8620-1de1a64aed99" rel="noopener noreferrer"&gt;&lt;code&gt;values.large.yaml&lt;/code&gt;&lt;/a&gt; for large clusters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can pass the &lt;code&gt;values.yaml&lt;/code&gt; file with the helm install command using the &lt;code&gt;-f&lt;/code&gt; flag, as shown in the following example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install logiq --namespace logiq --set global.persistence.storageClass=&amp;lt;storage_class_name&amp;gt; logiq-repo/logiq -f values.small.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Read and accept the EULA
&lt;/h3&gt;

&lt;p&gt;As a final step, you should read our &lt;a href="https://docs.logiq.ai/eula/eula" rel="noopener noreferrer"&gt;End User’s License Agreement&lt;/a&gt; and accept its terms before you proceed with deploying LOGIQ PaaS. &lt;/p&gt;

&lt;h3&gt;
  
  
  Latest LOGIQ PaaS component versions
&lt;/h3&gt;

&lt;p&gt;The following table lists the latest version tags for all LOGIQ components.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;logiq-flash&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2.1.11.27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;coffee&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2.1.17.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;logiq&lt;/code&gt; Helm chart&lt;/td&gt;
&lt;td&gt;2.2.11&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Install LOGIQ PaaS
&lt;/h3&gt;

&lt;p&gt;Now that your environment is ready, you can proceed with installing LOGIQ PaaS in it. To install LOGIQ PaaS, run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install logiq --namespace logiq --set global.persistence.storageClass=&amp;lt;storage class name&amp;gt; logiq-repo/logiq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the above command installs LOGIQ PaaS and exposes its services and UI on the ingress’ IP address. Accessing the ingress’ IP address in a web browser of your choice takes you to the LOGIQ PaaS login screen, as shown in the following image. &lt;/p&gt;
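
&lt;p&gt;If you're not sure what your ingress' IP address is, you can look it up with &lt;code&gt;kubectl&lt;/code&gt;. The exact resource names depend on your cluster and the chart's defaults, so treat the following as a sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Optionally, wait for all pods in the namespace to come up first
kubectl get pods -n logiq --watch

# List ingresses in the logiq namespace and note the ADDRESS column
kubectl get ingress -n logiq

# If the UI is exposed via a LoadBalancer service instead, check:
kubectl get services -n logiq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
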

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn31ild6u6k6oijbxivn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn31ild6u6k6oijbxivn.png" alt="The LOGIQ login screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you haven’t changed any of the admin settings in the &lt;code&gt;values.yaml&lt;/code&gt; file you used during deployment, you can log into the LOGIQ PaaS UI using the following default credentials. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Username&lt;/strong&gt;: &lt;code&gt;flash-admin@foo.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password&lt;/strong&gt;: &lt;code&gt;flash-password&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You can change the default login credentials after you’ve logged into the UI.&lt;/p&gt;

&lt;p&gt;Your LOGIQ PaaS instance is now deployed and ready for use. Your LOGIQ instance lets you ingest and tail logs, index and query log data, and search through your logs. Along with the LOGIQ UI, you can also access these features via LOGIQ’s CLI, &lt;a href="https://docs.logiq.ai/logiq-cli" rel="noopener noreferrer"&gt;logiqctl&lt;/a&gt;. &lt;/p&gt;
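
&lt;p&gt;As a rough sketch of what working with &lt;code&gt;logiqctl&lt;/code&gt; looks like, you'd point the CLI at your instance and then query your log data. The exact subcommands and flags are documented in the logiqctl docs linked above, so treat these commands and placeholders as illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Point logiqctl at your LOGIQ instance (illustrative; see the logiqctl docs)
logiqctl config set-cluster &amp;lt;your_logiq_endpoint&amp;gt;
logiqctl config set-token &amp;lt;your_api_token&amp;gt;

# List namespaces, then fetch logs from an application
logiqctl get namespaces
logiqctl logs &amp;lt;application_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
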

&lt;p&gt;Now that you have full access to your very own LOGIQ PaaS instance, you should try using it to amplify your observability practices. You can use LOGIQ to &lt;a href="https://logiq.ai/k8s/" rel="noopener noreferrer"&gt;observe your Kubernetes clusters&lt;/a&gt;, &lt;a href="https://logiq.ai/jenkins-log-analysis-with-logiq/" rel="noopener noreferrer"&gt;set up centralised observability for your CI/CD pipelines&lt;/a&gt;, &lt;a href="https://logiq.ai/monitoring/" rel="noopener noreferrer"&gt;monitor your applications and infrastructure&lt;/a&gt;, or even tail and analyse logs from &lt;a href="https://logiq.ai/how-to-stream-aws-cloudwatch-logs-to-logiq/" rel="noopener noreferrer"&gt;AWS CloudWatch&lt;/a&gt; or other data sources – all without the pricing shock that the usual log management and analysis solutions provide.&lt;/p&gt;

&lt;p&gt;Do drop a comment or &lt;a href="https://logiq.ai/contact" rel="noopener noreferrer"&gt;reach out to us&lt;/a&gt; if you’d like to know more about how LOGIQ PaaS can help you deliver always-on applications and infrastructure at scale through efficient log management and analysis. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>analytics</category>
      <category>monitoring</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
