<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ricky Arora</title>
    <description>The latest articles on DEV Community by Ricky Arora (@rickysarora).</description>
    <link>https://dev.to/rickysarora</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1272711%2F5da9b538-b0dd-4c88-90b3-73d5fc8352fa.png</url>
      <title>DEV Community: Ricky Arora</title>
      <link>https://dev.to/rickysarora</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rickysarora"/>
    <language>en</language>
    <item>
      <title>Advanced Metrics Optimization: Filter, Reduce, and Aggregate</title>
      <dc:creator>Ricky Arora</dc:creator>
      <pubDate>Tue, 15 Oct 2024 04:13:46 +0000</pubDate>
      <link>https://dev.to/rickysarora/advanced-metrics-optimization-filter-reduce-and-aggregate-37f</link>
      <guid>https://dev.to/rickysarora/advanced-metrics-optimization-filter-reduce-and-aggregate-37f</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The massive growth of observability data isn’t limited to logs. Metrics are growing just as fast, or faster. Making matters worse, DevOps and Engineering teams aren’t just dealing with rising metric volumes driving up egress, storage, and compute costs; many tools also charge by the number of custom metrics they track. Because metric tags and tag values count toward many tools' custom metrics tally, all of this growth in metrics can cripple budgets.&lt;/p&gt;

&lt;p&gt;Observo AI has several ways to help DevOps and Engineering teams control their metrics usage. In this article, we will review three different metrics use cases that show how &lt;a href="https://www.observo.ai/product" rel="noopener noreferrer"&gt;Observo AI's Observability Pipeline&lt;/a&gt; can massively reduce metrics volumes and custom metric counts so your teams can analyze all of the data that matters without impacting your budget. Optimizing metrics can also improve query performance and help DevOps and Engineering teams address the most important areas for potential improvement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filter out high cardinality, unqueried metrics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Metrics can add a huge amount of volume to your telemetry data, depending on how many metrics an organization collects and how much cardinality those metrics have. Some organizations collect and store more than a hundred million metrics daily. Compounding the issue, many if not most metrics are never queried by DevOps and Engineering teams. Observo AI can integrate directly with data feeds from metric stores like Datadog and Elasticsearch to identify metrics that have not been queried over a period of time and use these insights to filter out unused metrics. This reduces ingestion costs and improves query performance in downstream metric stores. Observo AI can also provide insights into the cardinality of metrics and help you identify and filter out the ones causing a cardinality explosion. There is no need to change agents and collectors across thousands of endpoints - just a massive reduction in data volume that isn’t being used. Observo AI can also rehydrate any metrics from your low-cost data lake to analyze previously filtered-out metrics.&lt;/p&gt;
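&lt;p&gt;As a rough sketch of the filtering idea (not Observo AI's implementation - the function, metric names, and queried-name set below are invented for illustration; in practice the set of recently queried names would come from a metric store's usage reporting), dropping unqueried metrics from a stream might look like this in Python:&lt;/p&gt;

```python
# Hypothetical sketch: drop metric events whose names no one has queried
# recently. All names here are invented for illustration.

queried_recently = {"http.request.latency", "db.connections.active"}

def filter_unqueried(events, allowed_names):
    """Yield only metric events whose name was queried in the lookback window."""
    for event in events:
        if event["name"] in allowed_names:
            yield event

stream = [
    {"name": "http.request.latency", "value": 120},
    {"name": "jvm.gc.minor.count", "value": 3},      # never queried, dropped
    {"name": "db.connections.active", "value": 42},
]

kept = list(filter_unqueried(stream, queried_recently))
print([e["name"] for e in kept])
```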

&lt;p&gt;&lt;strong&gt;Reduce custom metrics by filtering out tags&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Not all metrics are created equal, and those with extremely high cardinality can multiply the number of custom metrics you pay for with tools like Datadog, Elasticsearch, Splunk, and others. High cardinality also severely impacts the performance of metrics stores. Every tag and tag value creates a new custom time series that can quickly sink your observability budget, even when those tag values offer very little analytical value - for example, user IDs, email addresses, or other unbounded sets of values. Observo AI can help tame the cardinality explosion with bespoke data transforms that limit or altogether filter out tags with a high number of metric tag values. Using the Observo metric limit enforcer, you can choose how many tag values to ingest before filtering them. You can also optionally define an allow-list and deny-list of metric tags and tag values. This eliminates the high cost of ingesting and indexing these tag values so you can focus on the custom metrics that provide the insights your DevOps and Engineering teams need to optimize your enterprise.&lt;/p&gt;
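&lt;p&gt;The tag-limiting idea can be sketched in a few lines of Python. This is an illustrative approximation, not the actual metric limit enforcer: a deny-list removes unbounded tags outright, and a cap stops new tag values - and therefore new time series - from being created:&lt;/p&gt;

```python
from collections import defaultdict

# Invented configuration names for illustration; the real limit enforcer
# is configured inside the Observo AI pipeline, not via this API.
DENY_TAGS = {"user_id", "email"}   # unbounded tags with little analytical value
MAX_VALUES_PER_TAG = 100           # cap on distinct values ingested per tag key

seen_values = defaultdict(set)

def limit_tags(event):
    """Drop deny-listed tags and ignore new tag values once a key hits its cap."""
    kept = {}
    for key, value in event["tags"].items():
        if key in DENY_TAGS:
            continue
        values = seen_values[key]
        if value not in values and len(values) >= MAX_VALUES_PER_TAG:
            continue   # cap reached: stop creating new time series
        values.add(value)
        kept[key] = value
    event["tags"] = kept
    return event
```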

&lt;p&gt;&lt;strong&gt;Aggregate high-frequency metrics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Metrics provide deep insights into the observability of your IT environment, but not all metrics give you new information, especially when they are collected at high frequency and there isn’t much variation from one value to the next. Sampling can reduce the number of metrics you send to your tools, but it risks discarding an interesting value along with all of the low-signal, normal ones. A better approach is to aggregate high-frequency metrics by summarizing all of the events across a specific time frame into a single event. Based on the specific metric, Observo AI's ML model summarizes the data set with the max/min value, median, cumulative values, or another summarization suited to the characteristics of the data set. The single event shows you the time range summarized and the important value during that period. Aggregating high-frequency metrics can significantly reduce data volume without losing any of the insights your team needs.&lt;/p&gt;
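&lt;p&gt;A minimal sketch of window-based aggregation, assuming simple fixed time windows and basic summary statistics (the model-driven summarization described above chooses the statistic per metric; everything below is illustrative):&lt;/p&gt;

```python
def aggregate(samples, window_seconds=60):
    """Collapse (timestamp, value) samples into one summary event per window."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts // window_seconds, []).append(value)
    return [
        {
            "window_start": window * window_seconds,
            "min": min(vals),
            "max": max(vals),
            "mean": sum(vals) / len(vals),
            "count": len(vals),
        }
        for window, vals in sorted(buckets.items())
    ]
```

Many high-frequency samples collapse into one event per window, which is where the volume reduction comes from.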

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Observo AI is the right observability pipeline to address the rapid growth of metrics data. By filtering out metrics that are never queried, reducing the cardinality of metrics with low-value tag values, and aggregating high-frequency metrics, you can dramatically reduce costs and improve the performance of the tools used by your Engineering and DevOps teams.&lt;/p&gt;

&lt;p&gt;If you are analyzing metrics using Datadog, Splunk, Elasticsearch, Dynatrace, or any other tools, &lt;a href="//www.observo.ai"&gt;Observo AI&lt;/a&gt; can help you optimize this data using the techniques described above. Don’t just take our word for it: schedule a demo today to see how we can help you save money and improve the performance and effectiveness of your DevOps and Engineering teams’ efforts.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>metrics</category>
      <category>telemetry</category>
    </item>
    <item>
      <title>The Modern SOC Platform</title>
      <dc:creator>Ricky Arora</dc:creator>
      <pubDate>Mon, 01 Jul 2024 01:11:36 +0000</pubDate>
      <link>https://dev.to/rickysarora/the-modern-soc-platform-586d</link>
      <guid>https://dev.to/rickysarora/the-modern-soc-platform-586d</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On April 24, 2024, Francis Odum released his research report, “&lt;a href="https://softwareanalyst.substack.com/p/the-evolution-of-the-modern-security?r=414hy"&gt;The Evolution of the Modern Security Data Platform&lt;/a&gt;,” in The Software Analyst Newsletter. The report examines modern security operations, tracing the shift from a reactive approach to a proactive one. It highlights automation, threat intelligence integration, and controlling the costs of ingesting and storing data as crucial elements in enhancing cyber defense strategies. We were excited to see Observo.ai included as a key player in the emerging landscape for modern Security Operations Centers.&lt;/p&gt;

&lt;p&gt;In this article, we will highlight some of the findings of this report and share how Observo.ai is addressing some of the biggest trends in security with our &lt;a href="//www.observo.ai/product"&gt;AI-Powered Telemetry Pipeline&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Source: Francis Odum "The Evolution of the Modern Security Data Platform”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Executive Summary: The Evolution of the Modern Security Data Platform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This comprehensive research report delves into the dynamic evolution of security analysis, tracing its trajectory from conventional methods to contemporary paradigms. It explores the transition from reactive measures to proactive strategies, driven by the burgeoning complexity of digital threats and technological ecosystems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Sections&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Introduction to Security Analysis Evolution: A historical overview of security analysis, highlighting its origins in reactive practices like antivirus software and firewalls. It sets the stage for understanding the need for modernization in response to evolving cyber threats.&lt;/p&gt;

&lt;p&gt;Emergence of Modern Techniques: Explores the rise of advanced methodologies such as threat intelligence, machine learning, and behavioral analytics, showcasing their pivotal role in proactive threat detection and mitigation. This section discusses how these techniques augment traditional security measures.&lt;/p&gt;

&lt;p&gt;Challenges of the Digital Landscape: Examines the challenges posed by the expanding digital landscape, including the proliferation of connected devices, cloud computing, and the Internet of Things (IoT). It underscores the need for adaptable and scalable security solutions.&lt;/p&gt;

&lt;p&gt;Collaborative Paradigm: Emphasizes the importance of collaborative efforts among security analysts, developers, and stakeholders. It illustrates how cross-functional teamwork enhances the implementation of robust security measures and fosters a culture of vigilance within organizations.&lt;/p&gt;

&lt;p&gt;Continuous Adaptation in Security Practices: Stresses the necessity for security analysts to continuously adapt their strategies and tools in response to evolving threats. It advocates for staying abreast of emerging technologies and threat vectors, alongside investing in ongoing training and skill development.&lt;/p&gt;

&lt;p&gt;Future Perspectives: Envisions forthcoming advancements in security analysis driven by artificial intelligence, automation, and decentralized technologies. It also cautions against the challenges posed by increasingly sophisticated adversaries and regulatory landscapes.&lt;/p&gt;

&lt;p&gt;In conclusion, the report underscores the imperative for security analysts to evolve alongside the ever-changing threat landscape, advocating for the adoption of modern techniques and collaborative partnerships to effectively safeguard digital assets in today's dynamic cybersecurity milieu.&lt;/p&gt;

&lt;p&gt;Observo.ai has worked with several customers who are implementing a modern SOC platform like the one described in this report. They have experienced stronger security, reduced manual workarounds, and significantly better cost control using this approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explosion in Data Volume&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“Legacy SIEM costs are largely indexed to data volume - meaning, the more stuff you ingest and index, the more you linearly pay. It’s now common knowledge that enterprises are accumulating data at record-setting speeds, meaning that SIEM costs are unfortunately also growing proportionally. In response, IT and security leaders have spent much of the last few years finding clever methods and tools to pre-process, reduce, and prioritize the data that they feed into these expensive systems.”&lt;br&gt;
Francis Odum, “The Evolution of the Modern Security Data Platform”&lt;br&gt;
Observo.ai was created to combat this meteoric rise in data volume. In fact, the idea for the company came when our founders were faced with escalating costs in their own SIEM renewal contract. When the proposal from their SIEM vendor came back in eight figures - a budget they could not get approved - they knew they had to find a better solution.&lt;/p&gt;

&lt;p&gt;The idea for Observo.ai was conceived to help organizations control the growth of this data without losing any of the important signals contained within it. In a study of enterprise log and security event data, our team concluded that as much as 80% of log and security event data has zero analytical value. Sending all of that unusable data to your SIEM is a budget killer.&lt;/p&gt;

&lt;p&gt;Observo.ai uses AI models to optimize this data in the stream before it hits the SIEM index and starts racking up numbers against your daily ingest limits. We can reduce the volume sent to your SIEM by 80% or more by summarizing normal events and separating out redundant or low-value data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alert Fatigue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“The primary problem has been the cost of ingesting and storing data on these platforms. Secondly, the rising volume of alerts generated from these solutions.”&lt;br&gt;
Francis Odum, “The Evolution of the Modern Security Data Platform”&lt;br&gt;
Not all alerts are created equal, but they can all clog up your security team’s inbox, leaving analysts to wonder which alerts need attention now and which can be addressed later. Observo.ai uses machine learning to understand what is normal for each data type. The Observo.ai Sentiment Engine identifies anomalies and can assign sentiment values to events. By enriching events in the stream with positive or negative sentiment values, teams can better prioritize which alerts must be dealt with immediately. This helps teams identify and resolve critical incidents 40% faster. Helping your security teams be more productive and focus on the most meaningful alerts is all part of the modern SOC.&lt;/p&gt;
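&lt;p&gt;Conceptually, sentiment enrichment can be sketched as scoring each event against a learned baseline. The field names, threshold, and z-score approach below are invented for illustration; the actual Sentiment Engine learns per-data-type baselines with ML:&lt;/p&gt;

```python
# Toy illustration of sentiment enrichment against a learned baseline.

def enrich_with_sentiment(events, baseline_mean, baseline_stdev, key="latency_ms"):
    """Tag each event as negative (anomalous) or neutral based on how far
    its value deviates from a previously learned baseline."""
    for event in events:
        z = abs(event[key] - baseline_mean) / baseline_stdev
        event["sentiment"] = "negative" if z > 3.0 else "neutral"
    return events

events = [{"latency_ms": 95}, {"latency_ms": 104}, {"latency_ms": 2500}]
enrich_with_sentiment(events, baseline_mean=100.0, baseline_stdev=10.0)
```

Downstream alerting can then prioritize events tagged negative instead of treating every alert equally.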

&lt;p&gt;&lt;strong&gt;SIEM Vendor Lock-in&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“In general, this SIEM vendor lock-in intensifies data management issues, it creates a lack of correlation among siloed sources, and necessitates data rehydration for investigations.”&lt;br&gt;&lt;br&gt;
Francis Odum&lt;br&gt;
Legacy SIEM vendors are incentivized to be the single destination for security data. The more data ingested into their index, the more they can charge their customers. But modern security teams are trying to balance sharply rising data volumes, and the corresponding increase in SIEM license and infrastructure costs, against flat or only modestly increasing budgets. Ripping and replacing is very difficult - installing agents and collectors across thousands of endpoints, applications, databases, and firewalls can take months and pull your team away from daily security tasks. Only when the prospect of massive increases in license costs and daily ingest overage fees becomes high enough do security teams actually consider a switch.&lt;/p&gt;

&lt;p&gt;Observo.ai gives you a much simpler way to balance the challenges of increasing data volumes against flat budgets. With Observo.ai, you can route security data to multiple tools, and you don’t need to recollect data in order to do so. Observo.ai takes security data in the format you have, transforms it to any schema, and routes it to the tools you want in the right formats. This helps you route the most important data to more expensive tools and choose less expensive tools, including new SIEMs, for other classes of data. Having multiple SIEMs doesn’t mean you need to collect entire data sets multiple times - by transforming the data you have, you can collect once, optimize it, store a full-fidelity copy in a low-cost data lake (see below), and route relevant sections to whatever tools make the most sense. Route data where it has the most value.&lt;/p&gt;

&lt;p&gt;Because we also reduce data volume by 80% or more, you don’t have to choose between analyzing only the bare minimum and analyzing all of the data that gives insight into your security stance. This flexibility lets you onboard new data types that may have been considered too expensive to analyze in your legacy SIEM, including notoriously verbose sources like firewall logs and VPC Flow Logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In-House Cloud DIY Data Lake&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“Security operations teams will increasingly adopt security data lakes without needing to replace existing SIEM solutions, allowing for better cost management and scalability.”&lt;br&gt;
Francis Odum&lt;br&gt;
The vast majority of SIEM queries are performed on data generated within the last two days. Still, many organizations keep months of data in their SIEM index. This can be a huge drag on performance and rack up large storage costs. A better practice is to create a Security Data Lake for longer-term retention. Observo.ai makes it easy to create a data lake in low-cost cloud storage like AWS S3, Azure Blob, or Google Cloud Storage that is fully searchable with natural language queries. We store data in the highly compressible Parquet format to further control costs. Data can be stored in an Observo.ai data lake for about 1% of the cost of storing it in the SIEM index.&lt;/p&gt;

&lt;p&gt;Observo.ai can rehydrate (send in the telemetry stream) data from the lake on-demand, transform and optimize it, and re-route to any SIEM tool in the right format for further analysis. Because of the ability to perform natural language queries on data stored in the lake, you don’t need a team of data scientists and engineers to pull the right data for an investigation. By separating the system of analysis (your SIEM) from your system of retention (Observo.ai data lake), you can reduce the total cost of operating a SIEM by 50% or more and retain data for much longer timeframes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rise of Data ETL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“Companies like Observo have come in as data storage and management intermediaries. They act as an intelligent policy layer, absorbing filtering, and cleansing data (logs and events) before routing them into these large SIEMs. These players integrate with various apps, data management, and storage systems by intelligently filtering and managing data flow. This reduces unnecessary data replication, and managing data storage costs.”&lt;br&gt;
Francis Odum&lt;br&gt;
As we have discussed, the rise in security data brings the risk of a corresponding rise in total SIEM costs. Security teams are being tasked with keeping their spending within tight budgets. Without tools like Observo.ai, these teams are left with mundane, manual workarounds to try to harness the value of security data. Some of these include random sampling, excluding whole classes of data, or turning off data feeds when volumes approach daily ingest limits. All of these are time-consuming and labor-intensive and introduce blind spots into your security mission.&lt;/p&gt;

&lt;p&gt;Observo.ai summarizes and samples data based on AI-based analysis in the stream. This helps ensure all of the data that matters gets into the best tools for analysis. We can automate this process to free up your teams to address security incidents instead of spending time worrying about ingest overage fees. &lt;/p&gt;

&lt;p&gt;Observo.ai can also route data to multiple tools. Many companies are trying to wean off legacy SIEM tools or at a minimum control the growth of data ingested into them. Observo.ai gives our customers the choice to send different classes of data to a different tool and to route anomalous data to more expensive tools and more normal data to lower cost tools or to an Observo.ai Data Lake. This is a huge protection against vendor lock-in and helps teams pick their optimal mix of tools and storage options without being held hostage by incumbent vendors or budget concerns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“The Evolution of the Modern Security Data Platform” highlights many important trends and best practices for security teams to consider. Observo.ai is a key part of implementing several of these recommendations. Observo.ai is the AI-Powered Pipeline for security data. To learn more about how Observo.ai can help you achieve a more modern approach to security, schedule a demo with us. You can also read our white paper, “Elevating Observability: Intelligent AI-Powered Pipelines.”&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is an Observability Pipeline?</title>
      <dc:creator>Ricky Arora</dc:creator>
      <pubDate>Sun, 30 Jun 2024 02:56:05 +0000</pubDate>
      <link>https://dev.to/rickysarora/what-is-an-observability-pipeline-2n56</link>
      <guid>https://dev.to/rickysarora/what-is-an-observability-pipeline-2n56</guid>
      <description>&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Observability pipelines are essential for managing the ever-growing volume of telemetry data (logs, metrics, traces) efficiently, enabling optimal security, performance, and stability within budget constraints.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They address challenges such as data overload, legacy architectures, rising costs, compliance and security risks, noisy data, and the need for dedicated resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observo.ai's AI-powered Observability Pipeline offers solutions like data optimization and reduction, smart routing, anomaly detection, data enrichment, a searchable observability data lake, and sensitive data discovery, significantly reducing costs and improving incident resolution times.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="//www.observo.ai"&gt;An Observability Pipeline&lt;/a&gt; is a transformative tool for managing complex system data, optimizing security, and enhancing performance within budget. Observo.ai revolutionizes this with AI, slashing costs and streamlining data analysis to empower Security and DevOps teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: What is an Observability Pipeline?&lt;/strong&gt;&lt;br&gt;
‍&lt;br&gt;
A: An Observability Pipeline, sometimes called a Telemetry Pipeline, is a sophisticated system designed to manage, optimize, and analyze telemetry data (like logs, metrics, traces) from various sources. It helps Security and DevOps teams efficiently parse, route, and enrich data, enabling them to make informed decisions, improve system performance, and maintain security within budgetary constraints. Observo.ai elevates this concept with AI-driven enhancements that significantly reduce costs and improve operational efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Observability is the practice of asking questions about the inner workings of a system or application based on the data it produces. It involves collecting, monitoring, and analyzing various data sources (logs, metrics, traces, etc.) to comprehensively understand how the system behaves, its performance, and potential security threats. Another important practice is telemetry: the collection and transmission of data from remote sources to a central location for monitoring, analysis, and decision making. Logs, metrics, events, and traces are known as the four pillars of Observability.&lt;/p&gt;

&lt;p&gt;The telemetry data collected and analyzed by Security and DevOps teams in their observability efforts is growing at an unrelenting pace – for some organizations, as much as 35% year over year. That means that costs to store, index, and process data are doubling in a little more than 2 years. Some of our larger customers spend tens of millions of dollars a year just to store and process this data.&lt;/p&gt;

&lt;p&gt;An observability pipeline (or telemetry pipeline) can help Security and DevOps teams get control over telemetry data such as security event logs, application logs, metrics, and traces. It allows them to choose the best tools to analyze and store this data for optimal security, performance, and stability within budget requirements. Observability pipelines parse and shape data into the right format, route it to the right SIEM and Observability tools, optimize it by reducing low-value data and enriching it with more context, and empower these teams to make optimal choices while dramatically reducing costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges of Observability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data Overload: Security and DevOps teams need help to keep pace with the growth of telemetry data used for observability efforts. This leads them to make sub-optimal choices about what data to analyze, how heavily to sample, and how long to retain data for later security investigations and compliance. Budget is often the culprit driving decisions about how much data to analyze, but it can impair enterprise security, performance, and stability if these teams lack a complete view of their environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Legacy Architectures: Traditional architectures, relying on static, rule-based methods and indexing for telemetry data processing and querying, struggle to adapt to the soaring data volumes and dynamic nature of modern systems. As the scale of data expands, these static methods fail to keep pace, endangering real-time analysis and troubleshooting. Log data constantly changes with new releases and services. Static systems need constant tuning to stay on top of this change which can be time-consuming and difficult without seasoned professionals at the helm.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rising Costs: As telemetry data volumes grow, so do the costs of storing, processing, and indexing this data. Many customers report that storage and compute costs are the same or more than their SIEM and log analytics license costs. Because budgets remain flat or decrease, rising costs force decisions about which data can be analyzed and stored - jeopardizing security, stability, and compliance goals. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compliance and Security Risks: As telemetry data grows, it becomes increasingly challenging to keep personally identifiable information (PII) secure. Recent reports suggest that data breaches from observability systems have increased by 48% in just the last two years. Manual efforts to mask this data rely on in-depth knowledge of data schemas to try to protect PII. Unfortunately, those efforts fall short. PII like social security numbers, credit card numbers, and personal contact information is often found in open text fields, not just the fields you would expect. This leaves organizations vulnerable to data breaches in new and troubling ways and makes compliance efforts even more challenging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Noisy Data Overwhelming Useful Signal: About 80% of log data has zero analytical value, yet most teams are paying to analyze all of it. This adds to the cost challenges we’ve mentioned and limits the flexibility of getting a comprehensive, holistic view of observability into core systems. All of this noise also makes SIEM and Observability systems work much harder. It’s easy to find a needle in a really small haystack; if that haystack gets really big, you might need more people to help you find that one important needle. The same is true for SIEM and log management tools. Too much data requires far more CPU power to index and search through, and it costs 80% more than it should to store.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lack of Dedicated Resources: Most of our customers deploy large teams to tackle these challenges before using Observo. They develop an intimate knowledge of the telemetry data and tools designed to optimize observability. This draws them away from working on proactive efforts to improve security, performance, reliability, and stability and other projects that bring a lot of value to their organization. The most skilled and knowledgeable of these teams also leave over time. If the systems are heavily reliant on their expertise, this puts the strength of observability in jeopardy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Observo.ai Observability Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Observo.ai has developed an AI-powered Observability pipeline to address these challenges. Our customers have reduced observability costs by 50% or more by optimizing telemetry data such as Security event logs, application logs, metrics, and others, and by routing data to the most cost-effective destination for storage and analysis. By optimizing and enriching data with AI-generated sentiment analysis, our customers have cut the time to identify and resolve incidents by more than 40%. Built in Rust, Observo.ai's Observability pipelines are extremely fast and designed to handle the most demanding workloads. Here are some of the key ways our solution addresses your biggest observability challenges.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data Optimization and Reduction: We have found that only 20% of log data has value. Our Smart Summarizer can reduce the volume of data types such as VPC Flow Logs, Firewall logs, OS, CDN, DNS, Network devices, Cloud infrastructure and Application logs by more than 80%. Teams can ingest more quality data while reducing their overall ingest and reduce storage and compute costs. Many customers reduce their total observability costs by 50% or more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Smart Routing: Observo.ai’s observability pipeline transforms data from any source to any destination, giving you complete control over your data. We deliver data in the right format to the tool or storage location that makes the most sense. This helps customers avoid vendor lock-in by giving them choices about how to store, index, and analyze their data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Anomaly Detection: The Observo.ai observability pipeline learns what is normal for any given data type. The Observo.ai Sentiment Engine identifies anomalies and can integrate with common alert/ticketing systems like ServiceNow, PagerDuty, and Jira for real-time alerting. Customers have lowered mean time to resolve (MTTR) incidents by 40% or more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data Enrichment: Observo enriches data to add context. Observo.ai’s models assign “sentiment” based on pattern recognition, or add 3rd party data like Geo-IP and threat intel. Sentiment dashboards add valuable insights and help reduce alert fatigue. By adding context, teams achieve faster, more precise searches and eliminate false alarms that can mask real ones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Searchable, Full-Fidelity, Low-cost Observability Data Lake: The Observo.ai observability pipeline helps you create a full-fidelity observability data lake in low-cost cloud storage. We store data in Parquet file format making it highly compressed and searchable. You can use natural language queries, so you don’t need to be a data scientist to retrieve insights from your observability stack. Storing this data in your SIEM or log management tool can cost as much as a hundred times more than in an Observo.ai data lake.  This helps you retain more data, for longer periods of time, spend less money, and be a lot more flexible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sensitive Data Discovery: Observo.ai proactively detects sensitive and classified information in telemetry data flowing through the Observability pipeline, allowing you to secure it through obfuscation or hashing wherever it sits. Observo.ai uses pattern recognition to discover all sensitive data, even if it’s not where you’d expect it to be or in fields designated for PII.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
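&lt;p&gt;The sensitive data discovery idea - scanning every field rather than only the fields designated for PII - can be sketched with simple pattern matching. The regexes and field names below are illustrative only; the pipeline's actual pattern recognition is broader than any fixed regex list:&lt;/p&gt;

```python
import re

# A few illustrative patterns; a production system would use many more.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask_pii(record):
    """Scan every string field, including free text, and redact matches."""
    masked = {}
    for field, value in record.items():
        if isinstance(value, str):
            for name, pattern in PII_PATTERNS.items():
                value = pattern.sub("[REDACTED-" + name.upper() + "]", value)
        masked[field] = value
    return masked
```

Because every field is scanned, PII hiding in an open text message field is caught, not just values in an expected column.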

&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are numerous use cases for observability pipelines. Most address a combination of the challenges described above. Here are some examples we have seen with organizations of various sizes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get data from a Splunk forwarder, optimize it, and send it to Splunk. Route the raw data, in an optimized Parquet schema, to a data lake on AWS S3&lt;/li&gt;
&lt;li&gt;Ingest Cisco Firewall events and Windows event logs from a Kafka topic. Send the optimized data to Azure Sentinel and full-fidelity data to a Snowflake data lake&lt;/li&gt;
&lt;li&gt;Collect logs from an OpenTelemetry agent, reduce the noise, and send the optimized data to Datadog&lt;/li&gt;
&lt;li&gt;Receive data from Cribl LogStream, reduce the data volume, mask PII, and route it to Exabeam. A full-fidelity copy in JSON format is sent to an Azure Blob Storage data lake&lt;/li&gt;
&lt;li&gt;Ingest VPC Flow Logs and CloudTrail events from AWS Kinesis, reduce the noise, and send the optimized data to Elasticsearch&lt;/li&gt;
&lt;/ul&gt;
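&lt;p&gt;To make the collect-once, route-everywhere idea concrete, the first use case above might be modeled like this. The configuration structure and field names are invented for illustration, not Observo.ai's actual pipeline format:&lt;/p&gt;

```python
# Illustrative pipeline definition mirroring the first use case above.
pipeline = {
    "source": {"type": "splunk_forwarder", "port": 9997},
    "transforms": [{"type": "reduce", "strategy": "summarize_normal_events"}],
    "destinations": [
        {"type": "splunk", "data": "optimized"},
        {"type": "s3_data_lake", "format": "parquet", "data": "full_fidelity"},
    ],
}

def route(event, pipeline):
    """Fan one collected event out to every configured destination,
    so the data never has to be collected twice."""
    return [
        {**event, "destination": d["type"], "copy": d["data"]}
        for d in pipeline["destinations"]
    ]
```

Each event is collected once and copied to every destination in the configuration, which is what removes the need to re-instrument endpoints per tool.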

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An observability pipeline is a critical tool for cutting costs, managing data growth, and giving you choices about what data to analyze, which tools to use, and how to store it. An AI-powered observability pipeline elevates observability with much deeper data optimization and automated pipeline building, and makes it much easier for anyone in your organization to derive value without being an expert in the underlying analytics tools and data types. Observo.ai helps you break free from static, rules-based pipelines that fail to keep pace with the ever-changing nature of your data, and automates observability with a pipeline that constantly learns and evolves with your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn More&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For more information on how you can save 50% or more on your SIEM and observability costs with the AI-powered Observability Pipeline, read the Observo.ai white paper, &lt;a href="https://www.observo.ai/whitepaper-elevating-observability-ai-powered-pipelines"&gt;Elevating Observability with AI&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
