<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ankit Anand ✨</title>
    <description>The latest articles on DEV Community by Ankit Anand ✨ (@ankit01oss).</description>
    <link>https://dev.to/ankit01oss</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F642348%2Fc10c17aa-2556-49b8-9ffa-1c31cc9675e4.png</url>
      <title>DEV Community: Ankit Anand ✨</title>
      <link>https://dev.to/ankit01oss</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ankit01oss"/>
    <language>en</language>
    <item>
      <title>8 Best Free &amp; Open Source Log Management Tools (2026)</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Thu, 26 Mar 2026 07:02:21 +0000</pubDate>
      <link>https://dev.to/ankit01oss/8-best-free-open-source-log-management-tools-2026-562n</link>
      <guid>https://dev.to/ankit01oss/8-best-free-open-source-log-management-tools-2026-562n</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; The best free and open source log management tools in 2026 are SigNoz (best for unified observability with logs, traces, and metrics), Grafana Loki (best for dashboarding and visualization), ELK Stack (best for full-text search at scale), and Graylog (best for security and compliance). OpenSearch, FluentBit/FluentD, Logstash, and Syslog-ng round out the list for specific use cases. Here is how they compare across features, performance, and ease of deployment.&lt;/p&gt;

&lt;p&gt;Choosing the right &lt;strong&gt;open source log management&lt;/strong&gt; tool is critical for modern engineering teams. It can save you hundreds of hours in debugging and prevent critical outages. While there are many commercial options, many organizations prefer &lt;strong&gt;free, open source log analysis tools&lt;/strong&gt; that provide the flexibility and control growing engineering teams need, without data privacy concerns.&lt;/p&gt;

&lt;p&gt;Start sending logs to SigNoz Cloud in minutes — migrate to self-hosted anytime. Same open-source codebase, zero lock-in.&lt;/p&gt;

&lt;p&gt;In this guide, we compare the 8 best &lt;strong&gt;open source log management tools&lt;/strong&gt; for 2026. We have filtered out the noise and have evaluated each tool based on hands-on usage across the following criteria:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage efficiency:&lt;/strong&gt; How well does the tool handle massive log volumes without costs spiralling?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query performance:&lt;/strong&gt; How fast can you search, filter, and run aggregations on high-cardinality log data?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ease of setup:&lt;/strong&gt; How quickly can a team go from zero to ingesting and querying logs?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community and ecosystem:&lt;/strong&gt; Is the project actively maintained? How large is the contributor base?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Correlation capabilities:&lt;/strong&gt; Can the tool natively connect logs, metrics, and traces for root cause analysis?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tools below fall into two categories. &lt;strong&gt;Log management platforms&lt;/strong&gt; collect, store, analyze, and visualize logs. &lt;strong&gt;Log collectors&lt;/strong&gt; collect logs from various sources and forward them to a central location. You may need a combination of both to build a complete logging stack.&lt;/p&gt;

&lt;h2&gt;Top 8 Free &amp;amp;amp; Self-Hosted Open Source Log Management Tools&lt;/h2&gt;

&lt;h3&gt;1. SigNoz&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhsfo7phar1vhrqr7ige.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhsfo7phar1vhrqr7ige.webp" alt="Log Management in SigNoz" width="800" height="431"&gt;&lt;/a&gt;&lt;br&gt;SigNoz Logs Explorer showing real-time filtering and analysis.
  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; can be a great open source logging tool that also combines traces and metrics in a single application. It uses a columnar datastore, making log queries fast and cost-efficient. It includes built-in pipelines to parse unstructured logs into structured fields, allowing you to filter by attributes and run powerful aggregations to find patterns instead of just searching raw text.&lt;/p&gt;

&lt;p&gt;Beyond just logging, SigNoz stands out as a unified observability platform that also handles metrics and traces. Consolidating all your telemetry signals in a single pane of glass significantly reduces operational overhead and speeds up troubleshooting, eliminating the need to context-switch between fragmented tools. Even if your current focus is solely on logs, opting for a comprehensive solution like SigNoz is a wiser long-term choice.&lt;/p&gt;

&lt;p&gt;Additionally, if you are using &lt;a href="https://signoz.io/opentelemetry/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; for instrumentation, you unlock powerful correlation capabilities, allowing you to seamlessly link logs with traces and metrics to pinpoint root causes faster.&lt;/p&gt;
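&lt;p&gt;The mechanics behind log-trace correlation are simple: every log record carries the active trace ID, so the backend can join logs to the matching trace. A minimal sketch using only the standard library (the trace ID is hard-coded for illustration; a real OpenTelemetry SDK injects it from the active span context):&lt;/p&gt;

```python
import json
import logging
import sys

# Hypothetical stand-in for the active span's trace ID; an OpenTelemetry
# SDK would inject the real one automatically.
CURRENT_TRACE_ID = "4bf92f3577b34da6a3ce929d0e0e4736"

class TraceContextFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = CURRENT_TRACE_ID  # attach the correlation key
        return True

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    '{"level": "%(levelname)s", "msg": "%(message)s", "trace_id": "%(trace_id)s"}'
))
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.addFilter(TraceContextFilter())
logger.setLevel(logging.INFO)

logger.info("payment declined")  # emits JSON with trace_id attached
```

&lt;p&gt;Because the same ID appears in both the log line and the trace, a backend that stores both signals can jump from an error log straight to the request that produced it.&lt;/p&gt;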

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Columnar storage for fast aggregations at scale&lt;/li&gt;
&lt;li&gt;Built-in log pipelines for parsing, transforming, and enriching logs before storage&lt;/li&gt;
&lt;li&gt;Native OpenTelemetry support for vendor-neutral instrumentation&lt;/li&gt;
&lt;li&gt;Log-trace-metric correlation with a single query builder&lt;/li&gt;
&lt;li&gt;Flexible deployment: self-hosted or managed cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; The only tool on this list that natively combines log management with distributed tracing and metrics in a single open source platform, eliminating the need to stitch together multiple tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Self-hosting at scale may require some in-house operational expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Teams that want a single observability platform for logs, metrics, and traces without managing separate tools for each.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; MIT (except for the enterprise folder). Details of the license are available in the &lt;a href="https://github.com/SigNoz/signoz/blob/main/LICENSE" rel="noopener noreferrer"&gt;SigNoz GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Follow the &lt;a href="https://signoz.io/docs/install/self-host/" rel="noopener noreferrer"&gt;self-host installation guide&lt;/a&gt; to set up SigNoz locally. For a quick start, you can &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;sign up for SigNoz Cloud&lt;/a&gt; with a 30-day free trial.&lt;/p&gt;

&lt;h3&gt;2. Grafana Loki&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgic5nrh45q45kujayth.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgic5nrh45q45kujayth.webp" alt="Log viewing in Grafana Loki" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;Log monitoring and visualization in Grafana Loki with LogQL queries
  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;&lt;strong&gt;Grafana Loki&lt;/strong&gt;&lt;/a&gt; is a log aggregation system built around the idea of only indexing metadata about your logs (labels, similar to Prometheus labels). Log data is compressed and stored in object stores like S3 or GCS.&lt;/p&gt;

&lt;p&gt;This design makes it cost-effective and easy to operate since it does not index the full content of the logs. It integrates natively with Grafana, allowing users already in that ecosystem to switch between metrics and logs using shared labels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metadata-only indexing for significantly reduced storage costs&lt;/li&gt;
&lt;li&gt;Kubernetes-native with automatic labelling of pod and container logs&lt;/li&gt;
&lt;li&gt;Live tail to stream logs in real time via CLI or GUI&lt;/li&gt;
&lt;li&gt;Prometheus-compatible alerting based on log patterns&lt;/li&gt;
&lt;li&gt;LogQL query language modelled after PromQL&lt;/li&gt;
&lt;/ul&gt;
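&lt;p&gt;LogQL queries select log streams by label and then filter or aggregate the results. Two illustrative queries (label names like &lt;code&gt;app&lt;/code&gt;, &lt;code&gt;namespace&lt;/code&gt;, and &lt;code&gt;pod&lt;/code&gt; are assumptions that depend on your setup):&lt;/p&gt;

```logql
# Grep the prod checkout streams for error lines
{app="checkout", namespace="prod"} |= "error"

# Turn matching lines into a per-pod error rate (usable for alerting)
sum(rate({app="checkout", namespace="prod"} |= "error" [5m])) by (pod)
```

&lt;p&gt;The first form is a stream filter; the second is a metric query over logs, which is what powers Loki's Prometheus-compatible alerting.&lt;/p&gt;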

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; Best for teams deeply invested in the Grafana/Prometheus ecosystem who need a simple, cost-effective solution without full-text search requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Does not support high cardinality well. Labelling logs with unique identifiers like &lt;code&gt;user_id&lt;/code&gt; or &lt;code&gt;ip_address&lt;/code&gt; causes the index size to explode and performance to degrade. Because it does not index the full log content, complex aggregations on raw log data can be slower compared to columnar stores. There is no built-in UI outside of Grafana.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Kubernetes-native teams already running Grafana and Prometheus who want to add logging without a large storage budget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; GNU AGPL v3. Self-hosted is free. Grafana Cloud offers a managed Loki service with a free tier (50 GB/month).&lt;/p&gt;

&lt;p&gt;Refer to the &lt;a href="https://grafana.com/docs/loki/latest/setup/" rel="noopener noreferrer"&gt;&lt;strong&gt;Grafana Loki setup guide&lt;/strong&gt;&lt;/a&gt; for self-hosting instructions.&lt;/p&gt;

&lt;h3&gt;3. ELK Stack (Elasticsearch, Logstash, Kibana)&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flho1erz86hzrbuk49lnb.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flho1erz86hzrbuk49lnb.webp" alt="A sample ELK stack dashboard on website logs" width="800" height="324"&gt;&lt;/a&gt;&lt;br&gt;A sample ELK stack dashboard on website logs (Source: elastic website)
  &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.elastic.co/elastic-stack" rel="noopener noreferrer"&gt;&lt;strong&gt;ELK Stack&lt;/strong&gt;&lt;/a&gt; is one of the most widely used solutions for log analytics. Elasticsearch acts as the search and analytics engine, Logstash handles server-side data processing, and Kibana provides the dashboarding interface. Its full-text search capabilities are among the strongest available.&lt;/p&gt;

&lt;p&gt;The ecosystem is extensive, with Beats for lightweight data shipping and a wide range of plugins. It is well-suited for complex analysis where you need to search through large volumes of unstructured text data.&lt;/p&gt;
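&lt;p&gt;For a flavour of the query experience, here are roughly equivalent searches in Lucene query-string syntax and in KQL (the field names &lt;code&gt;status&lt;/code&gt;, &lt;code&gt;message&lt;/code&gt;, and &lt;code&gt;client.ip&lt;/code&gt; are hypothetical and depend on your index mapping):&lt;/p&gt;

```text
# Lucene query string
status:500 AND message:"connection timeout" AND NOT client.ip:10.0.0.1

# KQL (Kibana Query Language)
status >= 500 and message : "connection timeout"
```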

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Powerful full-text search with Lucene/KQL query syntax&lt;/li&gt;
&lt;li&gt;Kibana dashboards for visualizing trends and anomalies&lt;/li&gt;
&lt;li&gt;Beats agents for lightweight log forwarding from edge systems&lt;/li&gt;
&lt;li&gt;Machine learning features for anomaly detection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; Powerful full-text search across massive datasets, backed by a mature ecosystem and community.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Resource-intensive. Elasticsearch clusters require significant memory and CPU, especially at scale. Operational complexity is high as managing shards, replicas, and index lifecycle policies requires dedicated expertise. The Elastic License 2.0 restricts some commercial use cases (offering Elasticsearch as a managed service to third parties). Costs can escalate quickly with data growth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Teams that need powerful full-text search and complex querying capabilities across large, diverse log datasets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; Elastic License 2.0 (free tier available, but not OSI-approved open source). Basic features are free. Commercial features (security, ML, alerting) require a paid subscription.&lt;/p&gt;

&lt;p&gt;The Elastic Stack is available on the &lt;a href="https://www.elastic.co/downloads" rel="noopener noreferrer"&gt;&lt;strong&gt;official downloads page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;4. OpenSearch&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l8umelxf9vpr2y00te5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l8umelxf9vpr2y00te5.webp" alt="A sample OpenSearch dashboard" width="800" height="524"&gt;&lt;/a&gt;&lt;br&gt;A sample OpenSearch dashboard (Source: ovhcloud)
  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensearch.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;OpenSearch&lt;/strong&gt;&lt;/a&gt; began as a community-driven, open source fork of Elasticsearch and Kibana, largely led by began as a community-driven, open source fork of Elasticsearch and Kibana, led by AWS to ensure a fully open (Apache 2.0) future for the technology. What started as a direct clone has now diverged significantly.&lt;/p&gt;

&lt;p&gt;The UI (OpenSearch Dashboards) is evolving separately from Kibana. It is introducing new features like a "workspace" concept to organize views and heavily investing in Piped Processing Language (PPL) for a more intuitive, query-based analysis experience.&lt;/p&gt;
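&lt;p&gt;PPL reads like a Unix pipeline, which many engineers find more approachable than the query DSL. An illustrative query (the index and field names are hypothetical):&lt;/p&gt;

```text
source=web_logs | where status >= 500 | stats count() by host
```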

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apache 2.0 licensed with no commercial use restrictions&lt;/li&gt;
&lt;li&gt;Full-text search engine compatible with Elasticsearch APIs (with some divergence)&lt;/li&gt;
&lt;li&gt;Built-in security (encryption, authentication, RBAC) without a paid tier&lt;/li&gt;
&lt;li&gt;Piped Processing Language (PPL) for intuitive query building&lt;/li&gt;
&lt;li&gt;Strong backing from AWS and a growing contributor community&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; The true open-source alternative to ELK for teams that need Elasticsearch-like capabilities under a permissive license with built-in security at no extra cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; OpenSearch was forked from Elasticsearch OSS 7.10.2, and the two projects have since diverged. Existing Elasticsearch clients, plugins, and tooling may work partially, but often require version pinning, compatibility checks, or migration adjustments when moving to OpenSearch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Teams currently on ELK who want a genuinely open source alternative with built-in security, or organizations building on AWS that want tight cloud integration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; Apache 2.0. Fully free and open source. AWS OpenSearch Service provides a managed option.&lt;/p&gt;

&lt;h3&gt;5. Graylog&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyjz0vneujt45tttnnxd.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyjz0vneujt45tttnnxd.webp" alt="Log search in Graylog" width="800" height="412"&gt;&lt;/a&gt;&lt;br&gt;Log search and analysis in Graylog dashboard showing comprehensive filtering options
  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.graylog.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Graylog&lt;/strong&gt;&lt;/a&gt; is a centralized log management solution built on OpenSearch (via its own Graylog Data Node). Older versions supported Elasticsearch, but that backend is deprecated as of Graylog 7 and removed in Graylog 8. While it serves as a general-purpose log manager, it has increasingly pivoted towards security and compliance use cases (SIEM).&lt;/p&gt;

&lt;p&gt;It aims to simplify the operational experience by providing a packaged solution with built-in features for user access control, log parsing, and alerting. This makes it a popular choice for teams that need strict governance over their log data without having to manage complex, piecemeal architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security-focused with threat detection rules for common attack patterns&lt;/li&gt;
&lt;li&gt;Built-in user access control with LDAP/Active Directory integration&lt;/li&gt;
&lt;li&gt;Configurable data retention and archiving policies&lt;/li&gt;
&lt;li&gt;Documented audit logging and access control capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; A cohesive, security-oriented platform suited for teams needing compliance features, audit logging, and role-based access control out of the box.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; The recent license change from open source to Server Side Public License (SSPL) has pushed some users to explore alternatives. The free "Open" tier has feature restrictions compared to the commercial offerings. It relies on OpenSearch (via Graylog Data Node) under the hood, so you inherit some of that operational overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Security and compliance-focused teams that need built-in SIEM-like capabilities, audit logging, and enterprise user management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; SSPL (Server Side Public License) for the open edition. Enterprise features require a commercial license. The "Open" tier is free but limited.&lt;/p&gt;

&lt;p&gt;Graylog can be downloaded from its &lt;a href="https://graylog.org/downloads/" rel="noopener noreferrer"&gt;&lt;strong&gt;official downloads page&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;6. Fluent Bit &amp;amp;amp; Fluentd&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.fluentd.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Fluentd&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://fluentbit.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Fluent Bit&lt;/strong&gt;&lt;/a&gt; are open source data collectors used to unify data collection and consumption. They serve similar roles as vendor-neutral "pipes" for your logs but differ in their resource footprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fluent Bit&lt;/strong&gt; is lightweight and is the preferred choice for collecting logs at the edge (like Kubernetes nodes) due to its high performance. &lt;strong&gt;Fluentd&lt;/strong&gt; is historically used as a heavier aggregator for complex transformations.&lt;/p&gt;

&lt;p&gt;In modern architectures, a common pattern is to use Fluent Bit as a DaemonSet to collect container logs, which are then forwarded to a central &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt; for processing before being sent to a log management platform.&lt;/p&gt;
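&lt;p&gt;A sketch of that pattern in Fluent Bit's classic configuration format (the paths, tags, and the collector endpoint are assumptions; check the Fluent Bit docs for the output plugins available in your version):&lt;/p&gt;

```ini
[SERVICE]
    Flush        1

[INPUT]
    Name         tail
    Path         /var/log/containers/*.log
    Tag          kube.*

[OUTPUT]
    Name         opentelemetry
    Match        kube.*
    Host         otel-collector.observability.svc
    Port         4318
```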

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tiny binary optimized for sidecars and ARM/IoT devices&lt;/li&gt;
&lt;li&gt;Extensive plugin ecosystem&lt;/li&gt;
&lt;li&gt;Multiline log parsing to handle stack traces&lt;/li&gt;
&lt;li&gt;Dynamic tagging for routing logs by Kubernetes labels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; The standard for vendor-neutral log collection, offering flexibility to route data to multiple backends simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; These are collectors, not analysis platforms. You still need a separate tool for storage, search, and visualization. Configuring complex routing rules with many plugins can become hard to maintain. Fluentd is written in Ruby (with performance-critical parts in C), which makes it heavier than Fluent Bit's pure C design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Any team that needs a reliable, vendor-neutral log collection layer, especially in Kubernetes environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; Apache 2.0. Fully free and open source.&lt;/p&gt;

&lt;h3&gt;7. Logstash&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.elastic.co/logstash/" rel="noopener noreferrer"&gt;&lt;strong&gt;Logstash&lt;/strong&gt;&lt;/a&gt; is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and sends it to your chosen destination. It is the original "L" in the ELK stack, but it works as a standalone tool for log parsing and transformation.&lt;/p&gt;

&lt;p&gt;It is known for its vast library of filters and its capabilities to normalize varying data schemas. Whether you need to parse complex patterns, scrub sensitive data, or geo-locate IP addresses, Logstash allows you to create sophisticated pipelines.&lt;/p&gt;
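&lt;p&gt;A minimal pipeline sketch showing all three stages (the Beats port and Elasticsearch host are assumptions; &lt;code&gt;COMBINEDAPACHELOG&lt;/code&gt; is one of Logstash's bundled grok patterns):&lt;/p&gt;

```text
input { beats { port => 5044 } }

filter {
  # Parse Apache/Nginx combined access logs into named fields
  grok  { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  # Geo-locate the client IP extracted by the grok pattern
  geoip { source => "clientip" }
}

output { elasticsearch { hosts => ["localhost:9200"] } }
```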

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Large plugin ecosystem for inputs, filters, and outputs (Elastic maintains an &lt;a href="https://www.elastic.co/guide/en/logstash/current/input-plugins.html" rel="noopener noreferrer"&gt;&lt;strong&gt;official plugin directory&lt;/strong&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Grok patterns for parsing unstructured log formats&lt;/li&gt;
&lt;li&gt;Conditional logic for complex routing and enrichment&lt;/li&gt;
&lt;li&gt;Filter plugins available for masking and mutating fields before output&lt;/li&gt;
&lt;li&gt;Codec support for multiline events and custom formats&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; Extensible data processing and transformation capabilities for complex log pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Resource-heavy compared to Fluent Bit or Vector. JVM-based, so it requires meaningful heap allocation (Elastic's docs recommend tuning JVM heap based on pipeline complexity). Startup times are slower than those of lightweight collectors. For simple collection tasks, it is overkill. Many teams building new stacks are opting for the OpenTelemetry Collector instead, given its lighter footprint and vendor-neutral design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Teams with complex ETL requirements that need to parse, transform, and enrich log data from many diverse sources before sending it to a backend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; Elastic License 2.0 / SSPL. Free to use but with commercial restrictions.&lt;/p&gt;

&lt;h3&gt;8. Syslog-ng&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.syslog-ng.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Syslog-ng&lt;/strong&gt;&lt;/a&gt; is a powerful open source syslog server capable of collecting logs from any source, processing them in near real time, and delivering them to a wide variety of destinations. It builds upon the basic syslog protocol, adding content-based filtering, rich parsing, and authentication capabilities.&lt;/p&gt;

&lt;p&gt;It is widely trusted in the Unix/Linux world for its reliability and performance. It allows for flexible log management, including the ability to classify, tag, and correlate log messages, making it a staple for infrastructure and network device logging.&lt;/p&gt;
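&lt;p&gt;Its configuration is built from named source, filter, and destination blocks wired together by &lt;code&gt;log&lt;/code&gt; statements. A sketch (the port and file path are assumptions, and a real TLS setup would also need certificate options):&lt;/p&gt;

```text
source s_network {
  network(transport("tls") port(6514));
};

filter f_errors { level(err..emerg); };

destination d_errors { file("/var/log/errors.log"); };

log { source(s_network); filter(f_errors); destination(d_errors); };
```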

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High-performance syslog collection with TLS encryption&lt;/li&gt;
&lt;li&gt;Content-based filtering and routing&lt;/li&gt;
&lt;li&gt;Log classification and pattern-based parsing&lt;/li&gt;
&lt;li&gt;Support for structured logging formats (JSON, key-value)&lt;/li&gt;
&lt;li&gt;Disk-based buffering for reliable delivery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strength:&lt;/strong&gt; High-performance, reliable log collection and processing with deep roots in Unix/Linux infrastructure and network device logging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Primarily a collector and forwarder, not a full analytics platform. The configuration syntax has a steeper learning curve than YAML-based tools. Less community activity and plugin development compared to Fluent Bit or the OTel Collector. Not Kubernetes-native, so additional configuration is needed for container environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Infrastructure teams managing traditional Unix/Linux servers and network devices that need reliable syslog collection with advanced parsing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; LGPL 2.1 (core) / GPL 2.0 (some modules). Free and open source.&lt;/p&gt;

&lt;p&gt;Refer to the &lt;a href="https://www.syslog-ng.com/products/open-source-log-management/" rel="noopener noreferrer"&gt;&lt;strong&gt;Syslog-ng product page&lt;/strong&gt;&lt;/a&gt; for self-hosting instructions.&lt;/p&gt;

&lt;h2&gt;Open Source Logging Tools at a Glance: Comparison Table&lt;/h2&gt;

&lt;p&gt;Here is how the eight tools compare at a glance:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Storage Backend&lt;/th&gt;
&lt;th&gt;Query Language&lt;/th&gt;
&lt;th&gt;Learning Curve&lt;/th&gt;
&lt;th&gt;Cloud Option&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SigNoz&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Unified observability (logs + traces + metrics)&lt;/td&gt;
&lt;td&gt;Platform&lt;/td&gt;
&lt;td&gt;ClickHouse (columnar)&lt;/td&gt;
&lt;td&gt;Easy-to-use Query Builder&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;SigNoz Cloud ($0.3/GB logs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Grafana Loki&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes cost efficiency&lt;/td&gt;
&lt;td&gt;Platform&lt;/td&gt;
&lt;td&gt;Object storage (S3, GCS)&lt;/td&gt;
&lt;td&gt;LogQL&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Grafana Cloud (free tier: 50 GB/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ELK Stack&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full-text search at scale&lt;/td&gt;
&lt;td&gt;Platform&lt;/td&gt;
&lt;td&gt;Elasticsearch (inverted index)&lt;/td&gt;
&lt;td&gt;Lucene / KQL&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Elastic Cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenSearch&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Apache 2.0 ELK alternative&lt;/td&gt;
&lt;td&gt;Platform&lt;/td&gt;
&lt;td&gt;OpenSearch (inverted index)&lt;/td&gt;
&lt;td&gt;Lucene / PPL / SQL&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;AWS OpenSearch Service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Graylog&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Security and compliance&lt;/td&gt;
&lt;td&gt;Platform&lt;/td&gt;
&lt;td&gt;Graylog Data Node / OpenSearch&lt;/td&gt;
&lt;td&gt;Lucene&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Graylog Cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Fluent Bit / Fluentd&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Vendor-neutral log collection&lt;/td&gt;
&lt;td&gt;Collector&lt;/td&gt;
&lt;td&gt;N/A (forwards to backends)&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Logstash&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Complex ETL&lt;/td&gt;
&lt;td&gt;Collector&lt;/td&gt;
&lt;td&gt;N/A (forwards to backends)&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Syslog-ng&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Unix/Linux syslog collection&lt;/td&gt;
&lt;td&gt;Collector&lt;/td&gt;
&lt;td&gt;N/A (forwards to backends)&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Choosing the Right Open Source Logging Tool&lt;/h2&gt;

&lt;p&gt;The right tool depends on your team's specific constraints. Here is a framework for making the decision:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you need a single platform for logs, metrics, and traces:&lt;/strong&gt; SigNoz provides unified observability without stitching together separate tools. This is particularly valuable if you are already using or planning to adopt OpenTelemetry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you are already running Grafana and Prometheus:&lt;/strong&gt; Loki is the natural fit. It uses the same label-based model as Prometheus and integrates natively with Grafana dashboards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If full-text search is your primary requirement:&lt;/strong&gt; ELK Stack (or OpenSearch for a permissive license) provides the most powerful search capabilities across unstructured log data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If security and compliance drive your logging needs:&lt;/strong&gt; Graylog's built-in SIEM features, access controls, and user management make it a strong fit for compliance-heavy environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you just need a collection layer:&lt;/strong&gt; FluentBit for lightweight Kubernetes/edge collection, Logstash for complex ETL transformations, or Syslog-ng for traditional Unix/Linux infrastructure.&lt;/p&gt;

&lt;p&gt;For a deeper comparison of &lt;a href="https://signoz.io/comparisons/log-analysis-tools/" rel="noopener noreferrer"&gt;log analysis tools&lt;/a&gt; or &lt;a href="https://signoz.io/blog/log-monitoring/" rel="noopener noreferrer"&gt;log monitoring approaches&lt;/a&gt;, we have dedicated guides that cover these topics.&lt;/p&gt;

&lt;h2&gt;FAQ&lt;/h2&gt;

&lt;h3&gt;What is the best open source log management tool?&lt;/h3&gt;

&lt;p&gt;It depends on your use case. For unified observability (logs + traces + metrics), SigNoz is the strongest option. For cost-efficient Kubernetes logging, Grafana Loki is hard to beat. For full-text search power, ELK Stack remains the industry standard. For security and compliance, Graylog is purpose-built.&lt;/p&gt;

&lt;h3&gt;What is the difference between log management and log analysis?&lt;/h3&gt;

&lt;p&gt;Log management covers the full lifecycle: collecting, transporting, storing, retaining, and organizing logs. Log analysis is one part of that lifecycle, focused on searching, querying, and extracting insights from stored log data. Most modern platforms handle both.&lt;/p&gt;

&lt;h3&gt;Which open source tool is the best alternative to Splunk?&lt;/h3&gt;

&lt;p&gt;SigNoz is a strong alternative if you also want tracing and metrics alongside logs at a fraction of the cost. ELK Stack is also a good alternative given its comparable full-text search capabilities. Both can significantly reduce costs compared to Splunk's pricing (which includes both ingest-based and workload-based models).&lt;/p&gt;

&lt;h3&gt;
  
  
  How do open source log management tools compare to commercial ones like Datadog?
&lt;/h3&gt;

&lt;p&gt;Open source tools offer more control, data privacy (self-hosting), and lower costs, but require more operational effort for setup and maintenance. Commercial tools like Datadog provide a managed experience with less operational overhead but come with higher costs, potential vendor lock-in, and data residency constraints. The gap is narrowing as tools like SigNoz offer managed cloud options alongside their open source editions.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the easiest log management tool to self-host?
&lt;/h3&gt;

&lt;p&gt;Grafana Loki has one of the simplest self-hosted setups since it supports a monolithic single-binary deployment mode with minimal configuration. SigNoz provides Docker Compose and Helm chart-based installation for a quick start. ELK Stack and Graylog have more involved setup processes due to their multi-component architectures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can open source log management tools handle enterprise scale?
&lt;/h3&gt;

&lt;p&gt;Yes. ELK Stack and OpenSearch are deployed at petabyte scale across major enterprises. Grafana Loki is designed for horizontal scalability with object storage backends. SigNoz is built on ClickHouse, a columnar database also used by companies like Uber and Cloudflare for high-volume data workloads. The key is proper capacity planning and operational expertise for self-hosted deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is OpenTelemetry, and how does it relate to log management?
&lt;/h3&gt;

&lt;p&gt;OpenTelemetry is a CNCF project that provides vendor-neutral APIs, SDKs, and a Collector to generate and export telemetry data (logs, metrics, and traces). It is becoming the standard way to instrument cloud-native applications. Using OpenTelemetry for log collection means your data is not locked into any specific vendor or backend. You can send the same log data to SigNoz, Grafana Loki, ELK, or any compatible backend without changing your instrumentation code.&lt;/p&gt;
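As an illustration of that portability, here is a minimal OpenTelemetry Collector config sketch that fans the same log stream out to two backends at once; the endpoint values and the names after the slash are placeholders, not real deployments:

```yaml
# Sketch: one OTLP log pipeline, two destinations.
receivers:
  otlp:
    protocols:
      grpc:            # applications export logs via OTLP/gRPC

exporters:
  otlp/signoz:         # the suffix after "/" is just a label; endpoint is a placeholder
    endpoint: "signoz-otel-collector:4317"
  otlphttp/other-backend:
    endpoint: "http://other-backend:4318"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlp/signoz, otlphttp/other-backend]
```

Swapping backends is then a configuration change on the Collector; the application instrumentation stays untouched.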

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>opensource</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Maximizing Scalability - Apache Kafka and OpenTelemetry</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 05:06:34 +0000</pubDate>
      <link>https://dev.to/ankit01oss/maximizing-scalability-apache-kafka-and-opentelemetry-23m6</link>
      <guid>https://dev.to/ankit01oss/maximizing-scalability-apache-kafka-and-opentelemetry-23m6</guid>
      <description>&lt;p&gt;On a recent thread on the &lt;a href="https://slack.cncf.io/" rel="noopener noreferrer"&gt;CNCF Slack’s OTel Collector channel&lt;/a&gt;, a user asked a question that shone a light on a topic I don't think has been effectively discussed elsewhere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk693rs5afh1b0piab1kl.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk693rs5afh1b0piab1kl.webp" alt="Cover Image" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Why You May Want to Run More Than One OpenTelemetry Collector Inside Your Architecture
&lt;/h2&gt;

&lt;p&gt;This article will discuss both multi-collector architecture and how Apache Kafka can be useful here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwum1swn4p2wcl2781g0.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwum1swn4p2wcl2781g0.webp" alt="A user asks: Does it make sense to think about using an intermediate transport service like Kafka to get event data from the apps to move it forward to the otel-collector? If so, is there a reference implementation/article to read about it? Martin responds: What would be the goal in that as opposed to Collector -&amp;gt; collector tiering?" width="800" height="304"&gt;&lt;/a&gt;&lt;br&gt;Doesn't collector tiering make more sense, generally, than using a queue?
  &lt;/p&gt;

&lt;p&gt;(Big thank you to &lt;a href="https://www.linkedin.com/in/martin-thwaites-ab445120" rel="noopener noreferrer"&gt;Martin&lt;/a&gt; for always being there with a helpful answer.)&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Collector Does
&lt;/h2&gt;

&lt;p&gt;To review, the OpenTelemetry Collector is an optional but strongly recommended part of an &lt;a href="https://signoz.io/blog/opentelemetry-use-cases/" rel="noopener noreferrer"&gt;OpenTelemetry observability&lt;/a&gt; deployment. The collector can gather, compress, manage, and filter data sent by OpenTelemetry instrumentation before data gets sent to your observability backend. If sending data to the &lt;a href="https://signoz.io/docs/introduction/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; backend, the system will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss36g57pjfcz6345jlan.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss36g57pjfcz6345jlan.webp" alt="Graph of a simple distribution with a single collector" width="800" height="373"&gt;&lt;/a&gt;&lt;br&gt;Calls from OpenTelemetry auto-instrumentation, API calls, and other code instrumented with the OpenTelemetry SDK all go to the collector running on a host.
  &lt;/p&gt;

&lt;p&gt;In more advanced cases, however, this may not be sufficient. Imagine an edge service that handles high-frequency requests, sending telemetry to a fairly distant collector on the same network. Whenever the app fails to reach the collector, it raises an error, once for every single request.&lt;/p&gt;

&lt;p&gt;Again, the whole benefit of the collector is that it should be able to cache, batch, and compress data for sending, no matter how high-frequency its data ingest.&lt;/p&gt;
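That batching and compression is configured on the collector itself. A minimal sketch, with illustrative values rather than tuned recommendations:

```yaml
processors:
  batch:
    send_batch_size: 8192   # accumulate telemetry before export
    timeout: 5s             # flush at least this often

exporters:
  otlp:
    endpoint: "central-collector:4317"   # placeholder address
    compression: gzip                    # compress batches on the wire
    retry_on_failure:
      enabled: true                      # buffer and retry instead of dropping data
```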

&lt;h2&gt;
  
  
  Multi-Collector Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng8h0pa0pi12fu8u4tof.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng8h0pa0pi12fu8u4tof.webp" alt="Diagram of a multi-collector architecture" width="800" height="483"&gt;&lt;/a&gt;&lt;br&gt;A collector, B, running close to the service being instrumented could collect data reliably and batch it before sending to a second, central collector, C. The C collector could gather data from multiple other 'front-line' collectors before sending to a data backend.
  &lt;/p&gt;

&lt;p&gt;This has a number of advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: In large-scale distributed systems, a single OpenTelemetry collector might not be sufficient to handle the volume of telemetry data generated by all the services and applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Network Traffic&lt;/strong&gt;: For every additional step of filtering that happens within your network, you reduce the total amount of network bandwidth used for observability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Filtering and Sampling&lt;/strong&gt;: With a multi-tiered approach, you can perform data filtering, transformation, or sampling at the intermediate collector before forwarding the data to the central collector. This can be done by teams who know the microservices under instrumentation and what data is important to highlight. Alternatively, if you have an issue like PII showing up from multiple services, you can set filtering on the central collector to make sure the rules are followed everywhere.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
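As a sketch of the central-collector PII case, the Collector's attributes processor can delete or hash fields before export; the attribute keys here are hypothetical:

```yaml
processors:
  attributes/scrub-pii:
    actions:
      - key: user.email     # hypothetical PII attribute
        action: delete      # remove it entirely
      - key: user.id
        action: hash        # keep a correlation value without the raw PII
```

Because this runs on the central collector, the rule applies to every service that reports through it.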

&lt;h3&gt;
  
  
  Using a Kafka Queue for OTLP Data
&lt;/h3&gt;

&lt;p&gt;In the Slack thread above, the proposed solution was to use something like a Kafka queue. This has the advantage of ingesting events reliably while almost never raising errors. Both an internal queue and a collector-to-collector architecture are ways to improve the reliability of your observability data. The two scenarios where a Kafka queue makes the most sense for your data are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensuring data collection during database outages - even reliable databases fail, and Kafka can ingest and store data during the outage. When the DB is up again, the consumer can start taking in data again.&lt;/li&gt;
&lt;li&gt;Handling traffic bursts - observability data can spike during a usage spike, and if you're doing deep tracing, the spike can be even larger in scale than the increase in traffic. If you scale your database to handle this spike without any queuing, the DB will be over-provisioned for normal traffic. A queue will buffer the data spike so that the database can handle it when it's ready.&lt;/li&gt;
&lt;/ol&gt;
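A minimal sketch of wiring this up with the Collector's Kafka exporter and receiver; the broker address and topic name are placeholders:

```yaml
# Edge collector (B): publish trace data to Kafka instead of a backend.
exporters:
  kafka:
    brokers: ["kafka-broker:9092"]
    topic: otlp_spans

# Central collector (C): drain the same topic at its own pace.
receivers:
  kafka:
    brokers: ["kafka-broker:9092"]
    topic: otlp_spans
```

If the backend goes down or traffic spikes, data simply accumulates in the topic until the consumer can resume draining it.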

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakzq5faze7k66y9ib41l.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakzq5faze7k66y9ib41l.webp" alt="Diagram with multiple collectors with an added Kafka queue" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;In this new version, a Kafka queue receives data from collectors near the edge. Services could also publish directly to Kafka. Collector C reads the data from the queue, using the OTel Kafka Receiver.
  &lt;/p&gt;

&lt;p&gt;To learn more about the options for receiving data from Kafka, see the Kafka receiver in the OpenTelemetry Collector Contrib repository, or follow the &lt;a href="https://cloud-native.slack.com/archives/C01N6P7KR6W/p1690203259803679" rel="noopener noreferrer"&gt;original CNCF Slack discussion&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  YAML Configuration for Intermediate Collectors
&lt;/h2&gt;

&lt;p&gt;The process for implementing multiple collectors should be straightforward. If doing this from scratch, it would require the following config for Service A, Intermediate Collector B, and Central Collector C:&lt;/p&gt;

&lt;h3&gt;
  
  
  Service YAML Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your-intermediate-collector-image:latest&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;55678&lt;/span&gt; &lt;span class="c1"&gt;# Replace with the appropriate port number&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Intermediate Collector YAML Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;intermediate-collector&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your-intermediate-collector-image:latest&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;55678&lt;/span&gt; &lt;span class="c1"&gt;# Replace with the appropriate port number&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Central Collector YAML Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;central-collector&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;central-collector&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;central-collector&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;central-col&lt;/span&gt;

&lt;span class="s"&gt;lector&lt;/span&gt;
    &lt;span class="s"&gt;spec&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;central-collector&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your-central-collector-image:latest&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;55678&lt;/span&gt; &lt;span class="c1"&gt;# Replace with the appropriate port number&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The intermediate collector will send telemetry data to the central collector using OTLP.&lt;/p&gt;
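The Deployments above only schedule the pods; the forwarding itself lives in each collector's own configuration. Here is a minimal sketch for the intermediate collector, where `central-collector` is an assumed in-cluster Service name:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # accept OTLP from nearby services

processors:
  batch: {}                      # batch before forwarding upstream

exporters:
  otlp:
    endpoint: "central-collector:4317"   # assumed Service name for collector C
    tls:
      insecure: true                     # sketch only; use real TLS in production

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```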

&lt;h2&gt;
  
  
  Conclusions: Kafka and the OpenTelemetry Collector Work Better Together
&lt;/h2&gt;

&lt;p&gt;The choice between &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt; and Apache Kafka isn't a zero-sum game. Each has its unique strengths and can even complement each other in certain architectures. The OpenTelemetry Collector excels in data gathering, compression, and filtering, making it a strong candidate for reducing in-system latency and improving data quality before it reaches your backend.&lt;/p&gt;

&lt;p&gt;Apache Kafka shines in scenarios where high reliability and data buffering are critical, such as during database outages or traffic spikes. Kafka's robust queuing mechanism can act as a valuable intermediary, ensuring that no data is lost and that databases are not over-provisioned.&lt;/p&gt;

&lt;p&gt;The multi-collector architecture discussed offers a scalable and efficient way to handle large volumes of telemetry data. By positioning collectors closer to the services being monitored, you can reduce network traffic and enable more effective data filtering. This architecture can be further enhanced by integrating a Kafka queue, which adds another layer of reliability and scalability.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Troubleshooting Python with OpenTelemetry Tracing</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 05:05:58 +0000</pubDate>
      <link>https://dev.to/ankit01oss/troubleshooting-python-with-opentelemetry-tracing-28j</link>
      <guid>https://dev.to/ankit01oss/troubleshooting-python-with-opentelemetry-tracing-28j</guid>
      <description>&lt;p&gt;In the classic definition, Observability is something one step beyond monitoring; it’s how easy our system is to understand with the architecture and monitoring we have. The problem is a familiar one: we have monitoring tools but they’re not answering our question. This article shows how a Python developer can go from having traces but not answers, to fully understanding the root cause of a latency issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy17enb9gox3dcxg3zxr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy17enb9gox3dcxg3zxr.webp" alt="Cover Image" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Python Monitoring without Observability
&lt;/h2&gt;

&lt;p&gt;Let’s lay out our scenario: we’re using a Python application with automatic monitoring from the OpenTelemetry Python SDK. It’s correctly monitoring the number of requests we’re getting, and it’s able to tell us that there are some extreme spikes in latency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l2eg5sw1b8y6ek00r7h.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l2eg5sw1b8y6ek00r7h.webp" alt="a screenshot of the SigNoz top level dashboard" width="800" height="598"&gt;&lt;/a&gt;&lt;br&gt;This service loads a web page for users, so response times greater than 7 seconds are unacceptably slow!
  &lt;/p&gt;

&lt;p&gt;So, we definitely have monitoring working, in that it’s showing us there’s a problem and it’s frequent enough to be a concern for all users. Let’s dive in further and we’ll see the current setup lacks Observability. First let’s take a look at traces and sort them by duration. Sure enough we’ve got traces for some requests that took too long to handle:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q61p6t9oetyzqgaof6f.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2q61p6t9oetyzqgaof6f.webp" alt="a screenshot of the SigNoz top level dashboard" width="800" height="525"&gt;&lt;/a&gt;&lt;br&gt;Sure enough we do have traces for the problematic requests.
  &lt;/p&gt;

&lt;p&gt;When we dive into these traces we get another clue: there’s almost 13 seconds spent in a database request, &lt;code&gt;authenticate_check_db&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn24v633jij0u15vo5qbs.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn24v633jij0u15vo5qbs.webp" alt="a screenshot of the SigNoz top level dashboard" width="800" height="529"&gt;&lt;/a&gt;&lt;br&gt;Automatic instrumentation is getting us nice values, thanks to our cleanly labeled functions
  &lt;/p&gt;

&lt;p&gt;Here we get to one of the interesting facts about Observability: &lt;em&gt;systems are more observable the more knowledge and experience you have with the system.&lt;/em&gt; It’s possible that a senior developer who’s worked with this codebase for years will know one of two things right away:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why &lt;code&gt;authenticate_check_db&lt;/code&gt; can intermittently perform slowly&lt;/li&gt;
&lt;li&gt;Failing that, &lt;a href="https://signoz.io/guides/windows-logs/" rel="noopener noreferrer"&gt;how to check logs&lt;/a&gt; within the database and connect the log lines with this trace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In those cases, the ‘clues’ we have from OpenTelemetry will be enough to point the way to the cause. But we have to admit that we don’t really have great observability implemented. A junior developer would be totally lost at this point, not least because there are no identifying attributes on this request. We’ll be hard-pressed to connect this trace to another logged request elsewhere, unless our OpenTelemetry setup has already linked the logs at the time the trace was measured.&lt;/p&gt;

&lt;p&gt;This guide will show you how to go from this state to one where any developer who works with this service should be able to diagnose the problem very quickly from within their &lt;a href="https://signoz.io/docs/introduction/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; &lt;a href="https://signoz.io/blog/opentelemetry-visualization/" rel="noopener noreferrer"&gt;OpenTelemetry dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Improve OpenTelemetry Observability
&lt;/h2&gt;

&lt;p&gt;To add depth to this tracing information, we’ll start by checking for a config-level fix, then add attributes to our trace, next we’ll add full-fledged events, and finally add child spans to the trace that give a complete view to anyone looking at OpenTelemetry data from this Python application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can Ops fix this problem without my doing anything?
&lt;/h3&gt;

&lt;p&gt;One question we should ask before going any further: is it possible that our operations team can help us before we go much deeper? When we first set up OpenTelemetry monitoring with automatic instrumentation, the developers would have had little to no involvement: all the setup was done in configuration, making sure the &lt;a href="https://signoz.io/comparisons/opentelemetry-api-vs-sdk/" rel="noopener noreferrer"&gt;OpenTelemetry SDK&lt;/a&gt; effectively wrapped the application code. Before diving in with custom markers for our traces, we should check in with whoever implemented tracing in the first place to ask if anything can change at the configuration level.&lt;/p&gt;

&lt;p&gt;For example, the &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;OpenTelemetry collector&lt;/a&gt; may be removing attributes or dropping important traces, and a change to these settings may shed light on the problem. Admittedly, in this case it’s not likely, but it’s worth a check before we start adding calls to our application code.&lt;/p&gt;
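For instance, collector configuration like the following sketch (illustrative, not taken from this deployment) would silently thin out exactly the data we are looking for:

```yaml
processors:
  probabilistic_sampler:
    sampling_percentage: 10   # keeps ~10% of traces; the rest never reach the backend
  attributes/drop:
    actions:
      - key: user             # deleting this would explain missing identifiers
        action: delete
```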

&lt;h3&gt;
  
  
  Add attributes to identify requests
&lt;/h3&gt;

&lt;p&gt;The first thing we’d like to do is to differentiate these requests slightly. Currently our user authentication request doesn’t show which user is logging in. We can fix this by &lt;a href="https://opentelemetry.io/docs/instrumentation/python/manual/" rel="noopener noreferrer"&gt;adding an attribute&lt;/a&gt; to our span.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;
&lt;span class="p"&gt;[...]&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;authenticate_check_db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;current_span&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_current_span&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;current_span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will send our user as a span attribute. Attributes are key-value pairs only, and we shouldn’t add long serialized strings as attributes. That’s a better use for events, listed below.&lt;/p&gt;

&lt;p&gt;Now when we look at traces, we can see a user for each &lt;code&gt;authenticate_check_db&lt;/code&gt; span.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyidvkenwolj4dryrstu.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyidvkenwolj4dryrstu.webp" alt="a screenshot of the SigNoz top level dashboard" width="800" height="496"&gt;&lt;/a&gt;&lt;br&gt;From these spans we get our first clue, only certain users are affected by this latency issue
  &lt;/p&gt;

&lt;h3&gt;
  
  
  Add semantic attributes
&lt;/h3&gt;

&lt;p&gt;In the &lt;a href="https://opentelemetry.io/docs/specs/semconv/database/database-spans/" rel="noopener noreferrer"&gt;OpenTelemetry semantic conventions for database calls&lt;/a&gt;, there are specific attributes that can provide valuable context for tracing and monitoring. Here are some attributes that might help diagnose a problem with intermittent latency:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;db.operation&lt;/strong&gt;: describes the type of database operation being performed, such as SELECT, INSERT, UPDATE, or DELETE.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;db.cached&lt;/strong&gt;: indicates whether the query result was fetched from a cache; caching is often the culprit in intermittent latency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;db.rows_affected&lt;/strong&gt;: gives the number of rows affected by the database operation, which can be crucial for monitoring the impact of write operations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Adding these attributes requires that we start a new span from the tracer object. To follow the standard your span kind should be &lt;code&gt;client&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;db_operation_span&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Add attributes for db.operation, db.cached, and db.rows_affected
&lt;/span&gt;    &lt;span class="n"&gt;span&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_current_span&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;db.operation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;db.cached&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;db_return_cached&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;db.rows_affected&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;db_return_rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add events to traces for unusual occurrences
&lt;/h3&gt;

&lt;p&gt;Events are even simpler than attributes: they record a string at a particular moment during span execution. Events mark discrete points of interest within the span’s duration, such as method calls, network requests, or any other significant activity, and they capture context, timestamps, and optional attributes. In my first attempt at this example I added events for &lt;code&gt;db_query_start&lt;/code&gt; and &lt;code&gt;db_query_end&lt;/code&gt;, but that isn’t the right fit. Events capture points in time; if you’re measuring the time between a start and an end, you really want a span.&lt;/p&gt;

&lt;p&gt;A better strategy with events is to find points of interest that exist at a single moment in time. In my case, I noticed that the database authentication call includes an optional verification step. Therefore I added the event &lt;code&gt;current_span.add_event("username matches blocklist, adding verification")&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Sure enough, when checking the traces for slowest transactions, they all include this event:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsoz5jtnvedeh1rnb97x.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsoz5jtnvedeh1rnb97x.webp" alt="a screenshot of the SigNoz top level dashboard" width="800" height="395"&gt;&lt;/a&gt;&lt;br&gt;Events will show up in the ‘Events’ tab for span details in the SigNoz dashboard
  &lt;/p&gt;

&lt;h3&gt;
  
  
  Add child spans to track sub-functions
&lt;/h3&gt;

&lt;p&gt;Above I mentioned that spans are the solution when we want to measure something with a start and an end time. We want to add a span for user verification. In our case it’s a child span, since the work has to happen synchronously within the main DB request. Our final method looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authenticate_check_db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                    &lt;span class="n"&gt;current_span&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_current_span&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                    &lt;span class="n"&gt;current_span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="c1"&gt;# do work
&lt;/span&gt;                    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_verification&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;eddy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                            &lt;span class="n"&gt;current_span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;username matches blocklist, adding verification&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                            &lt;span class="c1"&gt;# do work (slower)
&lt;/span&gt;                        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                            &lt;span class="c1"&gt;# do work (faster)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you finally see the code that, in our contrived example, slowed things down when the user was named ‘eddy.’ With this child span in place, the problem becomes a &lt;em&gt;lot&lt;/em&gt; clearer.&lt;/p&gt;

&lt;p&gt;&lt;br&gt;
  &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhst9c9pqa21rtlu24lwc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhst9c9pqa21rtlu24lwc.webp" alt="a screenshot of the SigNoz top level dashboard" width="800" height="377"&gt;&lt;/a&gt;&lt;br&gt;We can see in the final version of our SigNoz dashboard that the user verification step is taking up the majority of the request time for certain users&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions: Observability is every developer’s responsibility
&lt;/h2&gt;

&lt;p&gt;We wouldn’t think much of a Python backend engineer who never wrote tests. Yet observability, which is just as crucial for running reliable production software, has remained the domain of operations engineers and SREs. In modern, complex distributed systems, scalability and reliability are critical, and developers who help SREs add observability can contribute to building systems that scale effectively and remain reliable under various conditions.&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>performance</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>SigNoz + Tracetest: OpenTelemetry-Native Observability Meets Testing</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 05:04:06 +0000</pubDate>
      <link>https://dev.to/ankit01oss/signoz-tracetest-opentelemetry-native-observability-meets-testing-504l</link>
      <guid>https://dev.to/ankit01oss/signoz-tracetest-opentelemetry-native-observability-meets-testing-504l</guid>
      <description>&lt;p&gt;What is the hidden potential of &lt;a href="https://opentelemetry.io/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt;? It goes a lot further than the (awesome) application of tracing and monitoring your software. The &lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;OpenTelemetry project&lt;/a&gt; is an attempt to standardize how performance is reported &lt;strong&gt;and&lt;/strong&gt; how trace data is passed around your microservice architecture. This &lt;a href="https://signoz.io/blog/opentelemetry-context-propagation/" rel="noopener noreferrer"&gt;context propagation&lt;/a&gt; is a superpower for those who adopt OpenTelemetry tracing. Tracetest promises to make this deep tracing a huge new asset in your testing landscape, and SigNoz helps all engineers get insight into what OpenTelemetry can see.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie5g38eksq4np0oicobn.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie5g38eksq4np0oicobn.webp" alt="Cover Image" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Check out this &lt;a href="https://github.com/kubeshop/tracetest/tree/main/examples/tracetest-signoz-pokeshop" rel="noopener noreferrer"&gt;hands-on Demo example&lt;/a&gt; of how Tracetest works with SigNoz! Or, if you like watching videos more, view a &lt;a href="https://www.youtube.com/watch?v=a4OpEPoQTaE" rel="noopener noreferrer"&gt;demo of Tracetest in the SigNoz Community call&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is SigNoz?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;​​SigNoz&lt;/a&gt; is an open-source, OpenTelemetry-native observability tool that helps you monitor your applications and troubleshoot problems. It provides &lt;a href="https://signoz.io/docs/instrumentation/overview/" rel="noopener noreferrer"&gt;traces&lt;/a&gt;, &lt;a href="https://signoz.io/docs/userguide/navigate-user-interface/" rel="noopener noreferrer"&gt;metrics&lt;/a&gt;, and &lt;a href="https://signoz.io/docs/userguide/logs/" rel="noopener noreferrer"&gt;logs&lt;/a&gt; under a single pane of glass.&lt;/p&gt;

&lt;p&gt;It collects data using OpenTelemetry, an open-source observability solution. OpenTelemetry is backed by the Cloud Native Computing Foundation. The project aims to standardize how we instrument our applications for generating telemetry data (traces, metrics, and logs).&lt;/p&gt;

&lt;p&gt;With SigNoz, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visualize Traces, Metrics, and Logs in a single pane of glass.&lt;/li&gt;
&lt;li&gt;Monitor application metrics like p99 latency, error rates for your services, external API calls, and individual endpoints.&lt;/li&gt;
&lt;li&gt;Find the root cause of the problem by going to the exact traces that are causing the problem and see detailed flame graphs of individual request traces.&lt;/li&gt;
&lt;li&gt;Run aggregates on trace data to get business-relevant metrics.&lt;/li&gt;
&lt;li&gt;Filter and query logs, build dashboards and alerts based on attributes in logs.&lt;/li&gt;
&lt;li&gt;Monitor infrastructure metrics such as CPU utilization or memory usage.&lt;/li&gt;
&lt;li&gt;Record exceptions automatically in Python, Java, Ruby, and Javascript.&lt;/li&gt;
&lt;li&gt;Easily set alerts with DIY &lt;a href="https://signoz.io/blog/query-builder-v5/" rel="noopener noreferrer"&gt;query builder&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Tracetest?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://tracetest.io/" rel="noopener noreferrer"&gt;Tracetest&lt;/a&gt; is a tool for trace-based testing. It’s &lt;a href="https://github.com/kubeshop/tracetest" rel="noopener noreferrer"&gt;open source&lt;/a&gt; and part of the CNCF landscape.&lt;/p&gt;

&lt;p&gt;Tracetest uses your existing &lt;a href="https://opentelemetry.io/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; traces to power trace-based testing with assertions against your trace data at every point of the request transaction. You only need to point Tracetest to your existing trace data source or send traces to Tracetest directly!&lt;/p&gt;

&lt;p&gt;Tracetest makes it possible to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.tracetest.io/concepts/assertions" rel="noopener noreferrer"&gt;Define tests and assertions&lt;/a&gt; against every single microservice a trace goes through.&lt;/li&gt;
&lt;li&gt;Build tests based on your already instrumented system.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.tracetest.io/analyzer/concepts" rel="noopener noreferrer"&gt;Improve your OpenTelemetry instrumentation&lt;/a&gt; by ensuring rules and semantic convention standards are met.&lt;/li&gt;
&lt;li&gt;Define multiple transaction triggers, such as a GET against an API endpoint, a GRPC request, a &lt;a href="https://docs.tracetest.io/examples-tutorials/recipes/testing-kafka-go-api-with-opentelemetry-tracetest" rel="noopener noreferrer"&gt;Kafka message queue&lt;/a&gt;, etc.&lt;/li&gt;
&lt;li&gt;Define assertions against both the response and trace data, ensuring both your response and the underlying processes worked as intended.&lt;/li&gt;
&lt;li&gt;Save and run the tests manually or via CI build jobs with the Tracetest CLI.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tracetest Now Works with SigNoz!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.tracetest.io/configuration/connecting-to-data-stores/signoz" rel="noopener noreferrer"&gt;Tracetest now works with SigNoz&lt;/a&gt;, allowing you to bring the combined power of Tracetest and SigNoz to your developer workflows, and write trace-based tests with Tracetest!&lt;/p&gt;

&lt;p&gt;If you already have OpenTelemetry instrumentation configured in your code and are using an OpenTelemetry Collector with SigNoz, adding Tracetest to your infrastructure can enable you to write detailed trace-based tests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40htnhkcdww74rw916jy.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40htnhkcdww74rw916jy.webp" alt="architecture" width="800" height="637"&gt;&lt;/a&gt;&lt;br&gt;Image 1: Application architecture.
  &lt;/p&gt;

&lt;p&gt;When running integration tests, it's hard to pinpoint where an HTTP transaction fails in a network of microservices. Tracetest solves this by letting you run tests with assertions using existing trace data across all services. These tests can then be seamlessly integrated into your CI/CD process to ensure your system works well and to catch any regressions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri1ikq9t1siwijlus4n5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri1ikq9t1siwijlus4n5.webp" alt="test spec sample" width="800" height="635"&gt;&lt;/a&gt;&lt;br&gt;Image 2: In this example, within the Tracetest UI you can see that test assertions for trace spans succeeded.
  &lt;/p&gt;

&lt;p&gt;Elevate your testing approach by harnessing Tracetest for test creation and SigNoz for analyzing test results. SigNoz empowers you to monitor test executions, establish connections between relevant services across different time frames, and gain valuable perspective on system performance. This combination helps you understand system behavior and highlights the impact of changes on performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj72gfog0vinorxclrsv0.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj72gfog0vinorxclrsv0.webp" alt="tracetest tests triggered and visualized in signoz" width="800" height="560"&gt;&lt;/a&gt;&lt;br&gt;Image 3: Traces triggered by Tracetest surfaced in SigNoz.
  &lt;/p&gt;

&lt;p&gt;When using Tracetest, you can find problems by checking trace data over time in SigNoz. Any problems you encounter can become new tests or points to check in Tracetest. This gives you a quick feedback loop for continuous improvement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftma90ngxmq924emkdx72.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftma90ngxmq924emkdx72.webp" alt="Image 4: Here you see a trace drilldown of a test in the SigNoz front end." width="800" height="560"&gt;&lt;/a&gt;&lt;br&gt;Image 4: Here you see a trace drilldown of a test in the SigNoz front end.
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Try Tracetest with SigNoz
&lt;/h2&gt;

&lt;p&gt;Install SigNoz on-prem or sign up for a &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;free trial&lt;/a&gt;, then configure your OpenTelemetry collector to send traces to SigNoz (see below).&lt;/p&gt;

&lt;p&gt;Tracetest is open-source and easy to install. Start by installing the Tracetest CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;kubeshop/tracetest/tracetest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here, follow the &lt;a href="https://docs.tracetest.io/getting-started/installation" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; to install the Tracetest server. Once the server is installed, open the Tracetest Web UI in the browser and follow the instructions for connecting the &lt;a href="https://opentelemetry.io/docs/collector/" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt; with Tracetest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F407yux0ri2x1o9kavvmw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F407yux0ri2x1o9kavvmw.webp" alt="Image 5: Selecting SigNoz in the Tracetest settings." width="800" height="463"&gt;&lt;/a&gt;&lt;br&gt;Image 5: Selecting SigNoz in the Tracetest settings.
  &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://opentelemetry.io/docs/collector/" rel="noopener noreferrer"&gt;Collector&lt;/a&gt; is the recommended way to send OpenTelemetry data to an observability back-end. It is a highly configurable binary that allows you to ingest, process, and export OpenTelemetry data.&lt;/p&gt;

&lt;p&gt;Enabling the SigNoz integration in Tracetest is as simple as configuring your OpenTelemetry collector to send spans to both Tracetest and SigNoz.&lt;/p&gt;

&lt;p&gt;Copy this OpenTelemetry Collector configuration and paste it into your own configuration file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# collector.config.yaml&lt;/span&gt;

&lt;span class="c1"&gt;# If you already have receivers declared, you can just ignore&lt;/span&gt;
&lt;span class="c1"&gt;# this one and still use yours instead.&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100ms&lt;/span&gt;

&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;verbosity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;detailed&lt;/span&gt;
  &lt;span class="c1"&gt;# OTLP for Tracetest&lt;/span&gt;
  &lt;span class="na"&gt;otlp/tracetest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tracetest:4317&lt;/span&gt; &lt;span class="c1"&gt;# Send traces to Tracetest.&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;insecure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="c1"&gt;# OTLP for Signoz&lt;/span&gt;
  &lt;span class="na"&gt;otlp/signoz&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;address-to-your-signoz-server:4317&lt;/span&gt;
    &lt;span class="c1"&gt;# Send traces to Signoz.&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;insecure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;traces/tracetest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Pipeline to send data to Tracetest&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;otlp/tracetest&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;traces/signoz&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Pipeline to send data to Signoz&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;otlp/signoz&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, edit the config to add your SigNoz endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Trace-based Test in Tracetest
&lt;/h2&gt;

&lt;p&gt;For this example, we’ll use the official example app for Tracetest and SigNoz. To quickly access the example, run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/kubeshop/tracetest.git
&lt;span class="nb"&gt;cd &lt;/span&gt;tracetest/examples/tracetest-signoz-pokeshop/
docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To create a test in Tracetest, click Create &amp;gt; Create New Test &amp;gt; HTTP Request &amp;gt; Next, add a name for your test, and click Next. Set the URL field to &lt;code&gt;http://demo-api:8081/pokemon/import&lt;/code&gt;, choose a POST request with a body of &lt;code&gt;{"id":6}&lt;/code&gt;, then click Create and Run.&lt;/p&gt;

&lt;p&gt;This will trigger the test and display a distributed trace in the Trace tab. You’ll also see the results of the Trace Analyzer. These results show rules and conventions to adhere to while writing code instrumentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28trqzeeshoa9m6vp32i.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28trqzeeshoa9m6vp32i.webp" alt="Image 6: Trace Analyzer in the Tracetest Web UI. Validate the quality of the code instrumentation." width="800" height="560"&gt;&lt;/a&gt;&lt;br&gt;Image 6: Trace Analyzer in the Tracetest Web UI. Validate the quality of the code instrumentation.
  &lt;/p&gt;

&lt;p&gt;Next, add a test spec to assert that all HTTP requests return status code 200. Click the Test tab, then click the Add Test Spec button.&lt;/p&gt;

&lt;p&gt;In the span selector, add this selector:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;tracetest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will select the HTTP spans.&lt;/p&gt;

&lt;p&gt;In the assertion field, add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;attr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status_code&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the test spec and publish the test.&lt;/p&gt;
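&lt;p&gt;Tests don’t have to live only in the Web UI. As a rough sketch (the exact schema varies between Tracetest versions, so treat every field here as an assumption and check the Tracetest documentation), the same test could be expressed as a YAML definition for use with the Tracetest CLI:&lt;/p&gt;

```yaml
# Hypothetical sketch of the test above as a Tracetest YAML definition.
# Field names and structure are assumptions; verify against the docs.
type: Test
spec:
  name: Pokeshop import returns 200
  trigger:
    type: http
    httpRequest:
      url: http://demo-api:8081/pokemon/import
      method: POST
      headers:
        - key: Content-Type
          value: application/json
      body: '{"id":6}'
  specs:
    - selector: span[tracetest.span.type="http"]
      assertions:
        - attr:http.status_code = 200
```

&lt;p&gt;A definition like this could then be run manually or from a CI job with the Tracetest CLI (the exact command syntax also varies by version).&lt;/p&gt;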

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3woasd61jsg97a7ztf7.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3woasd61jsg97a7ztf7.webp" alt="Image 7: Adding assertions to a test in the Tracetest Web UI." width="800" height="560"&gt;&lt;/a&gt;&lt;br&gt;Image 7: Adding assertions to a test in the Tracetest Web UI.
  &lt;/p&gt;

&lt;p&gt;If an HTTP span returns anything other than a 200 status code, it will be labeled in red. This is an example of a trace-based test that can assert against every single part of an HTTP transaction, including Kafka streams and external API calls. (See Image 2.)&lt;/p&gt;

&lt;p&gt;However, Tracetest cannot give you a historical overview of all test runs and distributed traces. Let’s look at how SigNoz makes that possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitor Trace-based Tests Over Time with SigNoz
&lt;/h2&gt;

&lt;p&gt;Because you are using two pipelines in the OpenTelemetry Collector, all distributed traces generated will be stored in SigNoz. Additionally, if you configure the Tracetest server with &lt;a href="https://docs.tracetest.io/configuration/telemetry" rel="noopener noreferrer"&gt;Internal Telemetry&lt;/a&gt;, you will see the traces the Tracetest server generates in SigNoz. Using the example above, traces from the services in the Pokeshop API will be stored in SigNoz with a defined &lt;strong&gt;Service Name&lt;/strong&gt; property, while the traces from Tracetest will be stored with the “tracetest” &lt;strong&gt;Service Name&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Data in the Tracetest service will give you insight into every test run. Start by running this query in the Tracetest service to filter all Tracetest test runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwageyh6c4w59c0jj8qfe.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwageyh6c4w59c0jj8qfe.webp" alt="Image 8: Filter “Tracetest trigger” traces." width="800" height="193"&gt;&lt;/a&gt;&lt;br&gt;Image 8: Filter “Tracetest trigger” traces.
  &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://signoz.io/blog/distributed-tracing/" rel="noopener noreferrer"&gt;distributed traces&lt;/a&gt; chart will be filtered and display performance over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7mbaputac3coplelw1o.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7mbaputac3coplelw1o.webp" alt="Image 9: Show the results of the filter above. View the chart to see performance and select a distinct trace to drill down." width="800" height="560"&gt;&lt;/a&gt;&lt;br&gt;Image 9: Show the results of the filter above. View the chart to see performance and select a distinct trace to drill down.
  &lt;/p&gt;

&lt;p&gt;From here, you can drill down into the specific trace to troubleshoot. Open the &lt;strong&gt;Tracetest trigger&lt;/strong&gt; trace. Choose a trace that is slow. Once open, the trace waterfall within SigNoz can help you pinpoint exactly which span is causing an issue. (Shown in Image 4, above)&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;Would you like to learn more about Tracetest and what it brings to the table? Check out the &lt;a href="https://docs.tracetest.io/examples-tutorials/recipes/running-tracetest-with-dynatrace" rel="noopener noreferrer"&gt;docs&lt;/a&gt; and try it out by &lt;a href="https://tracetest.io/download" rel="noopener noreferrer"&gt;downloading&lt;/a&gt; it today!&lt;/p&gt;

&lt;p&gt;Also, please feel free to support &lt;a href="https://github.com/signoz/signoz" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; and &lt;a href="https://github.com/kubeshop/tracetest" rel="noopener noreferrer"&gt;Tracetest&lt;/a&gt; by giving both a star on GitHub. ⭐&lt;/p&gt;

</description>
      <category>devops</category>
      <category>microservices</category>
      <category>monitoring</category>
      <category>testing</category>
    </item>
    <item>
      <title>How To Monitor RabbitMQ Metrics With OpenTelemetry</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 05:03:27 +0000</pubDate>
      <link>https://dev.to/ankit01oss/how-to-monitor-rabbitmq-metrics-with-opentelemetry-5hcb</link>
      <guid>https://dev.to/ankit01oss/how-to-monitor-rabbitmq-metrics-with-opentelemetry-5hcb</guid>
      <description>&lt;p&gt;RabbitMQ metrics monitoring is important to ensure that RabbitMQ is performing as expected and to identify and resolve problems quickly. In this tutorial, you will install OpenTelemetry Collector to collect RabbitMQ metrics and then send the collected data to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flji51lxxqm6k4r1siiz3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flji51lxxqm6k4r1siiz3.webp" alt="Cover Image" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;In this tutorial, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#a-brief-overview-of-rabbitmq" rel="noopener noreferrer"&gt;A Brief Overview of RabbitMQ&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#a-brief-overview-of-opentelemetry" rel="noopener noreferrer"&gt;A Brief Overview of OpenTelemetry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#what-is-opentelemetry-collector" rel="noopener noreferrer"&gt;What is OpenTelemetry Collector?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#how-does-opentelemetry-collector-collect-data" rel="noopener noreferrer"&gt;How does OpenTelemetry Collector collect data?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#rabbitmq-metrics-and-attributes-that-you-can-collect-with-opentelemetry" rel="noopener noreferrer"&gt;RabbitMQ Metrics and attributes that you can collect with OpenTelemetry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#prerequisites" rel="noopener noreferrer"&gt;Prerequisites&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#setting-up-signoz" rel="noopener noreferrer"&gt;Setting up SigNoz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#setting-up-opentelemetry-collector" rel="noopener noreferrer"&gt;Setting Up OpenTelemetry Collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#monitoring-with-signoz-dashboard" rel="noopener noreferrer"&gt;Monitoring with SigNoz Dashboard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-rabbitmq-metrics-monitoring/#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A Brief Overview of RabbitMQ
&lt;/h2&gt;

&lt;p&gt;RabbitMQ is a critical component of many modern architectures. It is used to decouple applications and provide reliable messaging between them. In this &lt;a href="https://signoz.io/blog/rabbitmq-monitoring/" rel="noopener noreferrer"&gt;article&lt;/a&gt;, we’ve discussed in detail how RabbitMQ is monitored using the built-in tools. We recommend reading that article to understand the basics of RabbitMQ monitoring, why it is so important, and how to do basic monitoring using built-in tools.&lt;/p&gt;

&lt;p&gt;In a nutshell, RabbitMQ is a message broker. It acts as an intermediary between applications that send messages and applications that receive messages. It ensures that messages are delivered reliably and efficiently.&lt;/p&gt;
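&lt;p&gt;As a toy, in-process analogy of that decoupling (this is plain Python, not RabbitMQ: the shared queue plays the role of the broker, and the producer and consumer never reference each other directly):&lt;/p&gt;

```python
import queue
import threading

# The queue stands in for the broker: producers and consumers only
# ever talk to it, never to each other.
broker = queue.Queue()

def producer():
    for i in range(3):
        broker.put(f"order-{i}")   # publish a message
    broker.put(None)               # sentinel: no more messages

received = []

def consumer():
    while (msg := broker.get()) is not None:
        received.append(msg)       # process the delivered message

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(received)  # ['order-0', 'order-1', 'order-2']
```

RabbitMQ provides the same contract across processes and machines, with persistence, acknowledgements, and routing on top.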

&lt;h2&gt;
  
  
  A Brief Overview of OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is a set of APIs, SDKs, libraries, and integrations aiming to standardize the generation, collection, and management of telemetry data (logs, metrics, and traces). It is backed by the Cloud Native Computing Foundation and is the leading open-source project in the observability domain.&lt;/p&gt;

&lt;p&gt;The data you collect with OpenTelemetry is vendor-agnostic and can be exported in many formats. Telemetry data has become critical in observing the state of distributed systems. With microservices and polyglot architectures, there was a need to have a global standard. OpenTelemetry aims to fill that space and is doing a great job at it thus far.&lt;/p&gt;

&lt;p&gt;In this tutorial, you will use OpenTelemetry Collector to collect RabbitMQ metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is OpenTelemetry Collector?
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry Collector is a stand-alone service provided by OpenTelemetry. It can be used as a telemetry-processing system with flexible configuration options to collect and manage telemetry data.&lt;/p&gt;

&lt;p&gt;It can understand different data formats and send them to different backends, making it a versatile tool for building observability solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;Read our complete guide on OpenTelemetry Collector&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does OpenTelemetry Collector collect data?
&lt;/h2&gt;

&lt;p&gt;A receiver is how data gets into the OpenTelemetry Collector. Receivers are configured via YAML under the top-level &lt;code&gt;receivers&lt;/code&gt; tag. There must be at least one enabled receiver for a configuration to be considered valid.&lt;/p&gt;

&lt;p&gt;Here’s an example of an &lt;code&gt;otlp&lt;/code&gt; receiver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An OTLP receiver can receive data via gRPC or HTTP using the &lt;a href="https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md" rel="noopener noreferrer"&gt;OTLP&lt;/a&gt; format. There are advanced configurations that you can enable via the YAML file.&lt;/p&gt;

&lt;p&gt;Here’s a sample configuration for an otlp receiver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:4318"&lt;/span&gt;
        &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;allowed_origins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http://test.com&lt;/span&gt;
            &lt;span class="c1"&gt;# Origins can have wildcards with *, use * by itself to match any origin.&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://*.example.com&lt;/span&gt;
          &lt;span class="na"&gt;allowed_headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Example-Header&lt;/span&gt;
          &lt;span class="na"&gt;max_age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;7200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more details on advanced configurations &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After configuring a receiver, &lt;strong&gt;you must enable it&lt;/strong&gt;. Receivers are enabled via pipelines within the service section. A pipeline consists of a set of receivers, processors, and exporters.&lt;/p&gt;

&lt;p&gt;The following is an example pipeline configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;traces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;jaeger&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;zipkin&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you understand how the &lt;a href="https://signoz.io/guides/opentelemetry-collector-vs-exporter/" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt; collects data, let’s see which RabbitMQ metrics can be collected.&lt;/p&gt;

&lt;h2&gt;
  
  
  RabbitMQ Metrics and attributes that you can collect with OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;The following metrics and resource attributes for RabbitMQ can be collected by the OpenTelemetry Collector.&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics
&lt;/h3&gt;

&lt;p&gt;Collectors provide many metrics that you can use to monitor how your RabbitMQ server is performing. Currently, all metric types are ‘Sum’ with value type ‘Integer’, which means each value is a cumulative count since collection began rather than a separate data point per message.&lt;/p&gt;
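&lt;p&gt;Because the values are cumulative, dashboards usually plot their rate of change rather than the raw counters. A minimal sketch of that conversion, using made-up sample values rather than real receiver output:&lt;/p&gt;

```python
def rate_per_second(samples):
    """Convert cumulative (timestamp, value) samples into per-second rates.

    samples: list of (unix_seconds, cumulative_count) tuples, oldest first.
    Returns one rate for each adjacent pair of samples.
    """
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((v1 - v0) / (t1 - t0))
    return rates

# Three hypothetical scrapes of rabbitmq.message.published, 30 s apart:
print(rate_per_second([(0, 100), (30, 400), (60, 1000)]))  # [10.0, 20.0]
```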

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Metric Name&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Consumer Count&lt;/td&gt;
&lt;td&gt;The number of consumers currently reading from the queue.&lt;/td&gt;
&lt;td&gt;rabbitmq.consumer.count&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Messages Published&lt;/td&gt;
&lt;td&gt;The number of messages published to a queue.&lt;/td&gt;
&lt;td&gt;rabbitmq.message.published&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Messages Delivered&lt;/td&gt;
&lt;td&gt;The number of messages delivered to consumers.&lt;/td&gt;
&lt;td&gt;rabbitmq.message.delivered&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Messages Dropped&lt;/td&gt;
&lt;td&gt;The number of messages dropped as unroutable.&lt;/td&gt;
&lt;td&gt;rabbitmq.message.dropped&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Messages Acknowledged&lt;/td&gt;
&lt;td&gt;The number of messages acknowledged by consumers.&lt;/td&gt;
&lt;td&gt;rabbitmq.message.acknowledged&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Messages Current&lt;/td&gt;
&lt;td&gt;The total number of messages currently in the queue.&lt;/td&gt;
&lt;td&gt;rabbitmq.message.current&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can find the list of supported metrics &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/rabbitmqreceiver/documentation.md#rabbitmqconsumercount" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource Attributes
&lt;/h3&gt;

&lt;p&gt;Resource attributes help you identify and filter RabbitMQ metrics based on a particular node, queue, etc.&lt;/p&gt;

&lt;p&gt;These resource attributes are enabled by default:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Values&lt;/th&gt;
&lt;th&gt;Enabled&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;rabbitmq.node.name&lt;/td&gt;
&lt;td&gt;The name of the RabbitMQ node.&lt;/td&gt;
&lt;td&gt;Any Str&lt;/td&gt;
&lt;td&gt;true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;rabbitmq.queue.name&lt;/td&gt;
&lt;td&gt;The name of the RabbitMQ queue.&lt;/td&gt;
&lt;td&gt;Any Str&lt;/td&gt;
&lt;td&gt;true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;rabbitmq.vhost.name&lt;/td&gt;
&lt;td&gt;The name of the RabbitMQ vHost.&lt;/td&gt;
&lt;td&gt;Any Str&lt;/td&gt;
&lt;td&gt;true&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
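&lt;p&gt;Receivers built with the Collector's metadata tooling generally let you toggle individual metrics and resource attributes in the receiver configuration. A hedged sketch of what disabling one of each might look like (check the receiver's documentation for the exact keys):&lt;/p&gt;

```yaml
receivers:
  rabbitmq:
    endpoint: http://localhost:15672
    collection_interval: 30s
    resource_attributes:
      rabbitmq.vhost.name:
        enabled: false        # stop attaching the vhost name to metrics
    metrics:
      rabbitmq.message.dropped:
        enabled: false        # skip collecting this metric entirely
```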

&lt;p&gt;Now, let’s go through the steps for collecting these metrics with OpenTelemetry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;This tutorial assumes that the OpenTelemetry Collector is installed on the same host as the RabbitMQ instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preparing RabbitMQ Servers for OpenTelemetry
&lt;/h3&gt;

&lt;p&gt;Before you can install and run OpenTelemetry Collector on your RabbitMQ servers, you need to prepare them by performing the following steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; &lt;strong&gt;Enable management plugin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The OpenTelemetry Collector communicates with the RabbitMQ management plugin to collect telemetry data. To enable the management plugin, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rabbitmq-plugins &lt;span class="nb"&gt;enable &lt;/span&gt;rabbitmq_management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; &lt;strong&gt;Add a user for monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The OpenTelemetry Collector needs permission to access the RabbitMQ management plugin. You can create a dedicated user for OpenTelemetry and grant it the necessary permissions based on your requirements. To create a user for OpenTelemetry, you can use the RabbitMQ management UI at &lt;code&gt;http://{node-hostname}:15672/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fre99g693qkilf31vxr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fre99g693qkilf31vxr.webp" alt="Add a user for monitoring" width="749" height="263"&gt;&lt;/a&gt;&lt;br&gt;Add a user for monitoring
  &lt;/p&gt;

&lt;p&gt;You will need to add this user's credentials to the OpenTelemetry Collector configuration file later.&lt;/p&gt;
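&lt;p&gt;If you prefer scripting over the UI, the management plugin also exposes an HTTP API: a &lt;code&gt;PUT&lt;/code&gt; to &lt;code&gt;/api/users/{name}&lt;/code&gt; creates a user. A minimal Python sketch that only builds the request; the host, admin credentials, and user name are placeholders, and sending the request requires a reachable broker:&lt;/p&gt;

```python
import base64
import json
import urllib.request

def build_create_user_request(host, admin_user, admin_pass, new_user, new_pass):
    """Build a PUT /api/users/{name} request for the RabbitMQ management API."""
    url = f"http://{host}:15672/api/users/{new_user}"
    # "monitoring" is a built-in RabbitMQ user tag suitable for metrics collection.
    body = json.dumps({"password": new_pass, "tags": "monitoring"}).encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    token = base64.b64encode(f"{admin_user}:{admin_pass}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    return req

# urllib.request.urlopen(build_create_user_request(...)) would send it;
# that needs a live broker, so it is not executed here.
```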

&lt;h3&gt;
  
  
  If RabbitMQ is not on the same server as OpenTelemetry Collector
&lt;/h3&gt;

&lt;p&gt;If RabbitMQ is not on the same server as the OpenTelemetry Collector, you need to perform an extra step: opening port 15672.&lt;/p&gt;

&lt;p&gt;⚠️ Warning&lt;/p&gt;

&lt;p&gt;It is strongly advised not to open this port to the public. You can open it for specific IPs or private cloud only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open TCP port 15672&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step is needed if your RabbitMQ and OpenTelemetry Collector services are not on the same server. The &lt;a href="https://signoz.io/comparisons/opentelemetry-api-vs-sdk/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; Collector communicates with the RabbitMQ management plugin over TCP port 15672. Therefore, you need to open this port on all of the RabbitMQ nodes in your cluster.&lt;/p&gt;

&lt;p&gt;To open TCP port 15672 on a Linux server, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;ufw allow 15672/tc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you use a firewall that blocks inbound traffic to your VMs, you will also need to allow port &lt;code&gt;15672&lt;/code&gt; there.&lt;/p&gt;
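&lt;p&gt;After opening the port, you can check from the Collector host that the management port is actually reachable. A small sketch; the hostname below is a placeholder:&lt;/p&gt;

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the Collector host, e.g.:
# is_port_open("rabbitmq-host.internal", 15672)
```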

&lt;p&gt;Once you have completed these preparations, you can install and run the OpenTelemetry Collector.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up SigNoz
&lt;/h2&gt;

&lt;p&gt;You need a backend to which you can send the collected data for monitoring and visualization. &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; is an OpenTelemetry-native APM that is well-suited for visualizing OpenTelemetry data.&lt;/p&gt;

&lt;p&gt;SigNoz cloud is the easiest way to run SigNoz. You can sign up &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;here&lt;/a&gt; for a free account and get 30 days of unlimited access to all features.&lt;/p&gt;

&lt;p&gt;You can also install and self-host SigNoz yourself. Check out the &lt;a href="https://signoz.io/docs/install/" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for installing self-hosted SigNoz.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up OpenTelemetry Collector
&lt;/h2&gt;

&lt;p&gt;The OpenTelemetry Collector offers various deployment options to suit different environments and preferences. It can be deployed using Docker, Kubernetes, Nomad, or directly on Linux systems. You can find all the installation options &lt;a href="https://opentelemetry.io/docs/collector/installation" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We are going to discuss the manual installation here and resolve any hiccups that come up along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 - Downloading OpenTelemetry Collector
&lt;/h3&gt;

&lt;p&gt;Download the appropriate binary package for your Linux or macOS distribution from the OpenTelemetry Collector &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-releases/releases" rel="noopener noreferrer"&gt;releases&lt;/a&gt; page. We are using the latest version available at the time of writing this tutorial.&lt;/p&gt;

&lt;p&gt;For macOS (arm64):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--proto&lt;/span&gt; &lt;span class="s1"&gt;'=https'&lt;/span&gt; &lt;span class="nt"&gt;--tlsv1&lt;/span&gt;.2 &lt;span class="nt"&gt;-fOL&lt;/span&gt; https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.116.0/otelcol-contrib_0.116.0_darwin_arm64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 - Extracting the package
&lt;/h3&gt;

&lt;p&gt;Create a new directory named &lt;code&gt;otelcol-contrib&lt;/code&gt; and then extract the contents of the &lt;code&gt;otelcol-contrib_0.116.0_darwin_arm64.tar.gz&lt;/code&gt; archive into this newly created directory with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;otelcol-contrib &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tar &lt;/span&gt;xvzf otelcol-contrib_0.116.0_darwin_arm64.tar.gz &lt;span class="nt"&gt;-C&lt;/span&gt; otelcol-contrib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3 - Setting up the Configuration file
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;config.yaml&lt;/code&gt; file in the &lt;code&gt;otelcol-contrib&lt;/code&gt; folder. This configuration file enables the collector to connect to RabbitMQ and holds other settings, such as the frequency at which you want to monitor the instance.&lt;/p&gt;

&lt;p&gt;Go into the directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;otelcol-contrib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create &lt;code&gt;config.yaml&lt;/code&gt; in the &lt;code&gt;otelcol-contrib&lt;/code&gt; folder with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317
      http:
        endpoint: localhost:4318
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: &lt;span class="o"&gt;{}&lt;/span&gt;
      disk: &lt;span class="o"&gt;{}&lt;/span&gt;
      load: &lt;span class="o"&gt;{}&lt;/span&gt;
      filesystem: &lt;span class="o"&gt;{}&lt;/span&gt;
      memory: &lt;span class="o"&gt;{}&lt;/span&gt;
      network: &lt;span class="o"&gt;{}&lt;/span&gt;
      paging: &lt;span class="o"&gt;{}&lt;/span&gt;
      process:
        mute_process_name_error: &lt;span class="nb"&gt;true
        &lt;/span&gt;mute_process_exe_error: &lt;span class="nb"&gt;true
        &lt;/span&gt;mute_process_io_error: &lt;span class="nb"&gt;true
      &lt;/span&gt;processes: &lt;span class="o"&gt;{}&lt;/span&gt;
  rabbitmq:
    endpoint: http://localhost:15672
    username: &amp;lt;RABBITMQ_USERNAME&amp;gt;
    password: &amp;lt;RABBITMQ_PASSWORD&amp;gt;
    collection_interval: 30s
processors:
  batch:
    send_batch_size: 1000
    &lt;span class="nb"&gt;timeout&lt;/span&gt;: 10s
  &lt;span class="c"&gt;# Ref: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/resourcedetectionprocessor/README.md&lt;/span&gt;
  resourcedetection:
    detectors: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;, system, ec2] &lt;span class="c"&gt;# include ec2 for AWS, gcp for GCP and azure for Azure.&lt;/span&gt;
    &lt;span class="c"&gt;# Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.&lt;/span&gt;
    &lt;span class="nb"&gt;timeout&lt;/span&gt;: 2s
    override: &lt;span class="nb"&gt;false
    &lt;/span&gt;system:
      hostname_sources: &lt;span class="o"&gt;[&lt;/span&gt;os] &lt;span class="c"&gt;# alternatively, use [dns,os] for setting FQDN as host.name and os as fallback&lt;/span&gt;
exporters:
  otlp:
    endpoint: &lt;span class="s2"&gt;"ingest.{region}.signoz.cloud:443"&lt;/span&gt; &lt;span class="c"&gt;# replace {region} with your region&lt;/span&gt;
    tls:
      insecure: &lt;span class="nb"&gt;false
    &lt;/span&gt;headers:
      &lt;span class="s2"&gt;"signoz-ingestion-key"&lt;/span&gt;: &lt;span class="s2"&gt;"&amp;lt;SIGNOZ_INGESTION_KEY&amp;gt;"&lt;/span&gt;
  debug:
    verbosity: detailed
service:
  telemetry:
    metrics:
      address: localhost:8888
  pipelines:
    metrics:
      receivers: &lt;span class="o"&gt;[&lt;/span&gt;otlp, rabbitmq]
      processors: &lt;span class="o"&gt;[&lt;/span&gt;batch]
      exporters: &lt;span class="o"&gt;[&lt;/span&gt;otlp]
    metrics/hostmetrics:
      receivers: &lt;span class="o"&gt;[&lt;/span&gt;hostmetrics]
      processors: &lt;span class="o"&gt;[&lt;/span&gt;resourcedetection, batch]
      exporters: &lt;span class="o"&gt;[&lt;/span&gt;otlp]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You would need to replace the following details for the config to work properly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Endpoint for your RabbitMQ instance. For this tutorial: endpoint: &lt;a href="http://localhost:15672/" rel="noopener noreferrer"&gt;http://localhost:15672&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The credentials of the monitoring user created in the prerequisites section: username: &lt;code&gt;&amp;lt;RABBITMQ_USERNAME&amp;gt;&lt;/code&gt;, password: &lt;code&gt;&amp;lt;RABBITMQ_PASSWORD&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Under exporters, configure the &lt;code&gt;endpoint&lt;/code&gt; for SigNoz cloud along with the ingestion key under &lt;code&gt;signoz-ingestion-key&lt;/code&gt;. You can find these settings in the SigNoz dashboard.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also note how we have set up the pipeline in the &lt;code&gt;service&lt;/code&gt; section of the config. We have added &lt;code&gt;rabbitmq&lt;/code&gt; in the receiver section of metrics and set the exporter to &lt;code&gt;otlp&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4 - Running the collector service
&lt;/h3&gt;

&lt;p&gt;Every Collector release includes an &lt;code&gt;otelcol&lt;/code&gt; executable that you can run. Since we’re done with configurations, we can now run the collector service with the following command.&lt;/p&gt;

&lt;p&gt;From the &lt;code&gt;otelcol-contrib&lt;/code&gt; directory, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./otelcol-contrib &lt;span class="nt"&gt;--config&lt;/span&gt; ./config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to run it in the background:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./otelcol-contrib &lt;span class="nt"&gt;--config&lt;/span&gt; ./config.yaml &amp;amp;&amp;gt; otelcol-output.log &amp;amp; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$\&lt;/span&gt;&lt;span class="s2"&gt;!"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; otel-pid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5 - Debugging the output
&lt;/h3&gt;

&lt;p&gt;If you want to see the logs of the background process we just set up, you can view them with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 50 otelcol-output.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;tail -f -n 50&lt;/code&gt; prints the last 50 lines of &lt;code&gt;otelcol-output.log&lt;/code&gt; and keeps following the file as new output is written.&lt;/p&gt;

&lt;p&gt;You can stop the collector service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&amp;lt; otel-pid&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Monitoring with SigNoz Dashboard
&lt;/h2&gt;

&lt;p&gt;Once the above setup is done, you will be able to access the metrics in the SigNoz dashboard. You can go to the &lt;code&gt;Dashboards&lt;/code&gt; tab and try adding a new panel. You can learn how to create dashboards in SigNoz &lt;a href="https://signoz.io/docs/userguide/manage-dashboards-and-panels/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7jk61rzno22rtffoutv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7jk61rzno22rtffoutv.webp" alt="RabbitMQ metrics collected by OTel Collector and sent to SigNoz" width="800" height="310"&gt;&lt;/a&gt;&lt;br&gt;RabbitMQ metrics collected by OTel Collector and sent to SigNoz
  &lt;/p&gt;

&lt;p&gt;You can easily create charts with &lt;a href="https://signoz.io/docs/userguide/create-a-custom-query/#sample-examples-to-create-custom-query" rel="noopener noreferrer"&gt;query builder&lt;/a&gt; in SigNoz. Here are the &lt;a href="https://signoz.io/docs/userguide/manage-panels/#steps-to-add-a-panel-to-a-dashboard" rel="noopener noreferrer"&gt;steps&lt;/a&gt; to add a new panel to the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9oxamjedvfrxkouyjci.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9oxamjedvfrxkouyjci.webp" alt="Building a chart for published messages with Query Builder in SigNoz" width="800" height="413"&gt;&lt;/a&gt;&lt;br&gt;Building a chart for published messages with Query Builder in SigNoz
  &lt;/p&gt;

&lt;p&gt;You can write queries to create charts on the RabbitMQ metrics data and add it to a dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnth7n5o0y6vnlmkadbk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnth7n5o0y6vnlmkadbk.webp" alt="RabbitMQ dashboard in SigNoz" width="800" height="417"&gt;&lt;/a&gt;&lt;br&gt;RabbitMQ dashboard in SigNoz
  &lt;/p&gt;

&lt;p&gt;You can also create alerts on any metric. Learn how to create alerts &lt;a href="https://signoz.io/docs/userguide/alerts-management/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh0g0bk9432w19pvq5ta.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh0g0bk9432w19pvq5ta.webp" width="610" height="342"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;If you want to get started quickly with RabbitMQ monitoring, you can load this &lt;a href="https://github.com/SigNoz/dashboards/blob/main/rabbitmq/rabbitmq.json" rel="noopener noreferrer"&gt;RabbitMQ dashboard JSON&lt;/a&gt; into SigNoz.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you installed an OpenTelemetry Collector to collect RabbitMQ metrics and sent the collected data to SigNoz for monitoring and alerts.&lt;/p&gt;
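&lt;p&gt;For reference, the pipeline described in this tutorial boils down to a Collector config along these lines. This is a minimal sketch: the endpoint, credentials, and SigNoz address are placeholders, and the exact field names should be checked against the rabbitmq receiver documentation in opentelemetry-collector-contrib.&lt;/p&gt;

```yaml
receivers:
  rabbitmq:
    endpoint: http://localhost:15672   # RabbitMQ management API (assumed local)
    username: otelu                    # hypothetical monitoring user
    password: ${env:RABBITMQ_PASSWORD}
    collection_interval: 10s

exporters:
  otlp:
    endpoint: signoz-otel-collector:4317   # placeholder SigNoz address

service:
  pipelines:
    metrics:
      receivers: [rabbitmq]
      exporters: [otlp]
```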

&lt;p&gt;Visit our &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;complete guide&lt;/a&gt; on OpenTelemetry Collector to learn more about it. OpenTelemetry is quietly becoming the world standard for open-source observability, and by using it, you can have advantages like a single standard for all telemetry signals, no vendor lock-in, etc.&lt;/p&gt;

&lt;p&gt;SigNoz is an open-source &lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;OpenTelemetry-native APM&lt;/a&gt; that can be used as a single backend for all your observability needs.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Further Reading&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;Complete Guide on OpenTelemetry Collector&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;An OpenTelemetry-native APM&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>OpenTelemetry Webinars - Apache Kafka and OTLP data</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 05:02:51 +0000</pubDate>
      <link>https://dev.to/ankit01oss/opentelemetry-webinars-apache-kafka-and-otlp-data-55bm</link>
      <guid>https://dev.to/ankit01oss/opentelemetry-webinars-apache-kafka-and-otlp-data-55bm</guid>
<description>&lt;p&gt;Join &lt;a href="https://github.com/serverless-mom" rel="noopener noreferrer"&gt;Nočnica Mellifera&lt;/a&gt; and &lt;a href="https://github.com/ankitnayan" rel="noopener noreferrer"&gt;Ankit&lt;/a&gt; as they discuss the relationship between OpenTelemetry and Apache Kafka.&lt;/p&gt;

&lt;p&gt;Below is the recording and an edited transcript of the conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary of the Talk
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/ncOu44NtAec"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Find the conversation transcript below.👇&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Folks, hard times are part of anyone's life. I mean, I think everybody has a time when they feel worse than they did before. So, I have some simple recommendations for that. I think one big one when you have a weird interaction with someone is to go out and try to be nice. Go out and get coffee, leave a big tip at the coffee shop, message that friend who you know is halfway a friend of yours, and say something nice. Ask them how work's been going and be kind. That's a real thing.&lt;/p&gt;

&lt;p&gt;My second strategy, which is less common, is that I read a book about Arctic explorers. By the way, there is always a new one to read. You can always find one. Don't worry about that; they are available. There's something about feeling bad. In my case, I was dealing with some home improvement, and I sort of ran out of money on the project. I was all bummed about that. Then I just read about somebody eating shoes for two months, and it cheered me right up. I wasn't eating shoes. Great.&lt;/p&gt;

&lt;p&gt;All right, folks, it's time. We're going to do our OpenTelemetry webinar. Thank you so much for joining us. There's a little bumble and bobble with the LinkedIn invite, so we may not have a ton of people seeing this LinkedIn live. Sorry about that. If you were registered for the other event, I will send you a message with the canned version of this video.&lt;/p&gt;

&lt;p&gt;Thank you so much to the people who watch us on YouTube and the people who check us out after the fact. So many, like hundreds and actually like a thousand of you on the last video. That was cool. Thank you so much.&lt;/p&gt;

&lt;p&gt;This week, I have the tech head at SigNoz - Ankit - to talk about this question that I sort of thought I had answers to, but then I realized, oh, there's more subtlety here than I realized. Well, first, let's introduce our guest. Oh my gosh, Ankit, say hi to the people. Oh no, you're in that direction. Say hi to the people and tell us what you do at SigNoz and what you're interested in. Introduce yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; Sure, yeah. Thanks for the call, the events, and all. Thank you, folks, for coming here. I am Ankit. I handle tech and some products at SigNoz. I'm one of the co-founders, and I love the domain of observability. I'm always up for a brainstorming discussion on how to debug your infrastructure services and distributed systems. These are my areas of interest. Feel free to reach out to me on community Slack, or I'm available on Twitter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Something I love about being part of this community is that there is a discussion happening all the time on the CNCF Slack. If you want to have a college minor in OpenTelemetry and spend a month or two just reading the questions that come through the OpenTelemetry Slack, people, I try to help out, but people like Martin and staff are incredibly helpful in there. Some of our team are helpful. It's cool. This whole show starts with a thread from that on the CNCF Slack and from, I think, the OpenTelemetry collector channel. I don't have the thread in front of me at the moment, but that's all right.&lt;/p&gt;

&lt;p&gt;The question that came in on that Slack was, "Hey, I want to send my OTLP data via an Apache Kafka queue within my architecture." And maybe I should have had an architectural diagram. So, this is the most complex architectural thought that's going to be in this one.&lt;/p&gt;

&lt;p&gt;You know, I have my collector somewhere in my cloud, and I want to enter the data into an Apache Kafka queue. Then have a collector read out of that queue and send it on to whatever my data back end is.&lt;/p&gt;

&lt;p&gt;On that thread, most people were like, "Why are we doing this? Maybe let's not do this."&lt;/p&gt;

&lt;p&gt;So, I want to kind of start there by sorting out why would somebody want to use their Kafka queue within their system to transmit their telemetry data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; Right. So, like, it's one of the very interesting takes, and I always see some conversations going around about tiering of OpenTelemetry collectors versus using a queue.&lt;/p&gt;

&lt;p&gt;So, a point that I thought may be useful to discuss with all of us is tiering of OpenTelemetry collectors. It's always fine, but there are a few pros and cons that we need to keep in mind so that we enable better monitoring and better transmission of data to different backend sources.&lt;/p&gt;

&lt;p&gt;First, a Kafka-like WAL (write-ahead log) or queue, whatever you call it, is very helpful at scale and near the backend systems that deal with the observability data, right? So, if you have to handle scale like 100,000 spans per second or maybe up to a million spans per second of observability data, it's perfectly fine to have tiering of &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;OpenTelemetry collectors&lt;/a&gt; even before the data goes into Kafka.&lt;/p&gt;

&lt;p&gt;The tiering of collectors is very useful near the clients, right? The OpenTelemetry collector has the advantage that it is very lightweight and can hold a configurable amount of data in memory. Okay, so the nearer it is to the client, the less scale it has to deal with, and it can keep things in memory for a short duration of time.&lt;/p&gt;

&lt;p&gt;To provide robustness to this system, you need to have some failover mechanism. What if the remote OpenTelemetry collector it is communicating with fails, or the backend or the next tier of OpenTelemetry collectors that it communicates with is down for some time? How much of the telemetry data does the client OpenTelemetry collector have to keep in memory?&lt;/p&gt;

&lt;p&gt;Right, so that is very complex capacity planning you would have to do, or you use the default settings. Let's say my tier-one &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;OpenTelemetry collector&lt;/a&gt; just wants to hold 10,000 spans overall. And if the downstream collector is down for a longer period of time, the client OpenTelemetry collector overflows those 10,000 spans, and data is dropped. Right? So, second tiers, first tiers, or even third tiers of OpenTelemetry collectors are very helpful when there are, I would say, flaky networks or edge devices where the network call might fail. A small amount of data, or some buffer amount of data, can be held in memory in the OpenTelemetry collectors.&lt;/p&gt;
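&lt;p&gt;For context, this buffering and retry behavior is configurable per exporter in the OpenTelemetry Collector. A minimal sketch - the endpoint and sizes here are illustrative, not recommendations:&lt;/p&gt;

```yaml
exporters:
  otlp:
    endpoint: gateway-collector:4317   # hypothetical next-tier collector
    sending_queue:
      enabled: true
      queue_size: 5000          # batches held in memory before data is dropped
    retry_on_failure:
      enabled: true
      max_elapsed_time: 300s    # give up retrying after 5 minutes of downtime
```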

&lt;p&gt;Kafka is like a huge bus; it can ingest a huge amount of data. The advantage is that it provides durability: you can persist the data directly to disk. The OpenTelemetry collectors do not have that provision; they prefer to keep things in memory. So, when things come at scale, let's say you have 500 GB of telemetry data that accumulated during 5 minutes of downtime. Then what? Do you want to keep it in memory, or do you want to write it to Kafka, directly on disk? It saves a lot of money there. That's first, and second, keep in mind that Kafka scales very well with the amount of data.&lt;/p&gt;

&lt;p&gt;So, it can handle the surge of traffic very well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; So I want to start back at the beginning with my questions there, which is, we'll talk about collector to collector in just a minute, as this idea that you can put a collector closer to your service. But I want to start with, and I don't disagree that any of this is probably what makes sense, but at a really basic level, what is the problem with losing some data from the collector? So, we're trying to send it, we have our service running right now.&lt;/p&gt;

&lt;p&gt;These should be like asynchronous outbound requests. Right? So, we have one collector. It's way away from, on the network or, God forbid, it's run on some separate service, some other service someplace. So, it's just making requests via the internet, via network out to the collector with its data, and that's faulty. Right? That sometimes drops some data.&lt;/p&gt;

&lt;p&gt;Now, it seemed like when a couple of people were discussing this, they were worried about their service's performance as a result. So, is that a concern that you could have, like, okay, because you're getting timeouts and because you're sending this a huge number of requests, you're affecting how well this service works for users because it's seeing all this trouble trying to send its open telemetry data, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; It does, to some extent. It can be minimized as much as possible, like by turning off the log exporter. If you have not configured it properly, it will try to keep a lot of data in memory, and it will restrict the application's access to the memory. Hence, applications will slow down, and the logs that you print will also have a lot of noise about the retries of the OpenTelemetry collector to the upstream, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Okay. All right. And that makes sense. Yeah. I was having a failure of imagination there where I was like, okay, you know, when you have some side effect like web requests, normally you don't see, hey, that's massively affecting your performance. But you're right. One of the things is - this is a hot take, folks - to make your application work, you shouldn't have to fine-tune exactly how much telemetry data you're sending, right? That's where we start to get to the point where it's like, okay, now I don't want this tool anymore. Right? I don't want to add it to my next microservice because I have to fine-tune it to keep the service running.&lt;/p&gt;

&lt;p&gt;So, I get that it's like, okay, we want to have something to grab the data close to where the service is running. But yeah, so the next point is a very reasonable way to do that. And the way that was recommended by sort of the collector community was they said, "Hey, just run another collector closer to the service. Have it make decisions about what it's going to be retaining, and what it's going to be sending. Have it in some way partitioned off from the actual service."&lt;/p&gt;

&lt;p&gt;So, for example, if it's blowing up and using a ton of memory, it's not the same memory allocation for the application. But you're saying that, yes, that's a way to do it. But it totally can make sense to use Kafka.&lt;/p&gt;

&lt;p&gt;Okay, here was the third question that I had in that sequence, which was, you know, I always understood Kafka as really handling the multiplicity well between subscribers and publishers. And I was like, that superpower does not seem applicable here. I wouldn't probably want to have, like, oh yeah, everyone can publish to an observability topic, and you can have multiple other services that are subscribed to it. I mean, I can imagine Netflix or something needing that kind of architecture. But nobody else can I imagine needing that kind of vendor- or system-neutral system. Am I wrong in understanding that that's probably not what we're thinking about with Kafka here, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; So, I would not say it's just about being at a Netflix scale. Even at a medium scale, it is a very good practice to have some queue in place. Okay, what it helps with is this: I have seen it happen very frequently at medium-scale companies that there is a surge of traffic - you have the usual day traffic, and some days, with some marketing event or other event, the data spikes to around 10x.&lt;/p&gt;

&lt;p&gt;And you don't want to wait for your backend to scale up while your services are throwing errors. So, that is one thing that Kafka or any queue does very well: handle the surge in traffic. And the next thing is, it's just like a write-ahead log. Just dump it out into Kafka, do any sort of processing you want to, and pick it up from where you left off. So, let's say you're using a SaaS or in-house backend; if the SaaS or your in-house backend needs some time to scale up, the consumers at Kafka can limit the rate. And it can help you have a consistent load to your backend, whether it be a database or any rate-limited SaaS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Right, this is where the power comes in. With, for example, a collector running close to our services, you're still going to need to do a bit of thinking about, hey, how much memory does this thing have, how is it batching its data sends? Because, no, it can't just accept any completely arbitrary amount of data and run super reliably. If you already have a Kafka, probably at the center of your architecture, it is super reliable in terms of how much it can take. It will take this huge burst of data and handle it just fine. And so you are adding to the load on that queue, but yeah, that's something it should be able to handle pretty centrally.&lt;/p&gt;

&lt;p&gt;So, are you recommending that people think about queuing even kind of outside of this Kafka space? They just think hey, even with multiple collectors, you might want to think about a queue at the center of your system?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; Yeah. If you're talking about a few gigabytes of data, then it's fine. If you're talking about 100 gigabytes of data, you need to have some disk you can write to. Right, you cannot keep things in memory and wait for the system to get restarted.&lt;/p&gt;

&lt;p&gt;Downtimes also become very important at that point in time. So, it's perfectly fine to have tiers of OpenTelemetry collectors nearer to the application. One at the infra and like a layer of a queue or Kafka before sending it to the backend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; I'll also say that running multiple collectors has one enormous advantage. I'll send a link to an architecture diagram for this idea down below. But it has one enormous advantage, which is, I think for most of us realistically, you're not going to be talking to the application developers or even the people who do direct operations with those applications, to modify your telemetry data.&lt;/p&gt;

&lt;p&gt;Ideally, we certainly don't want to be saying, "Hey, can everyone make a PR to their service today because I want one more attribute on my traces?" We want to be able to do the configuration of our observability system on one side and have the application code people worry about their things, and we don't need them to be able to make a change. So, if you have multiple collectors connected, your ability to say, "Hey, this service is overproducing or this group of services is overproducing. I'm going to do some clever stuff. No, I'm just going to clamp it. I'm just going to say, 'Hey, you can only send this many bytes of data every single cycle' and stuff."&lt;/p&gt;

&lt;p&gt;If you have that collector very close to each of these services, then you're able to do that in a very fine-grained way. That's a big advantage, totally outside of these considerations. It's just nice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; It's a very good angle to look into things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Okay, let me look at my big sheet of questions. So, I had misunderstood and didn't get when I was first looking at this that it was super easy to serialize into a queue with the OpenTelemetry protocol, the OTLP. Can you talk a little bit about that? I know you left some comments on that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; All right. So, to start with, the Kafka exporters and the Kafka receivers both support JSON and proto serialization formats today. And Avro is yet to come. I see a PR already open, and they would be supporting that Avro format. And the schemas are almost stable. They are marked as stable and should not be changing much. So, it should be good. There is no need to worry much there. And I looked into the issues also that they are looking to support schema registries as options to share schemas between the different producers and consumers. So, that is also going to help attain much more stability and robust distribution.&lt;/p&gt;
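&lt;p&gt;As a rough sketch, the serialization format is selected with the &lt;code&gt;encoding&lt;/code&gt; field on the Collector's Kafka components; broker addresses and the topic name below are placeholders, and exact fields should be checked against the kafka exporter/receiver docs in opentelemetry-collector-contrib:&lt;/p&gt;

```yaml
exporters:
  kafka:
    brokers: ["kafka-1:9092"]   # placeholder broker list
    topic: otlp_spans
    encoding: otlp_proto        # or otlp_json; Avro support was still in progress

receivers:
  kafka:
    brokers: ["kafka-1:9092"]
    topic: otlp_spans
    encoding: otlp_proto        # must match what the producer side writes
```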

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Awesome. So, let's dive into some specific Kafka configs to consider when we start talking about the options that we have. So, I talked a little bit about using multiple collectors, and a collector very close to the service, and saying, "Okay, there I could configure how I want it to handle my OpenTelemetry data." But Kafka is not completely neutral here either, right? You can configure it to modify how you're collecting data. Can you talk a little bit about that?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; Sure, yeah. So basically, there are a few angles. An example that is very relevant to our OpenTelemetry data, in general, would be if we need to enable tail-based sampling. Tail-based sampling needs all the spans under the same &lt;a href="https://signoz.io/comparisons/opentelemetry-trace-id-vs-span-id/" rel="noopener noreferrer"&gt;Trace ID&lt;/a&gt; to be present in the same otel collector. That is a very big limitation that we had to deal with.&lt;/p&gt;

&lt;p&gt;And there have been many architectures around it. But yeah, if you can partition by Trace ID and write it to a topic, there can be multiple consumers reading from the different partitions of that topic, and all those consumers can run in parallel with the Trace ID as the partition key. It helps a lot in managing tail-based sampling.&lt;/p&gt;
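&lt;p&gt;The partition-key idea can be sketched in a few lines. The hashing scheme and span list below are illustrative, not the actual Collector implementation:&lt;/p&gt;

```python
import hashlib

def partition_for(trace_id: str, num_partitions: int) -> int:
    # Hash the trace ID so every span of a trace maps to the same partition.
    digest = hashlib.sha256(trace_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Hypothetical spans: (trace_id, span_id) pairs.
spans = [("trace-a", "span-1"), ("trace-a", "span-2"), ("trace-b", "span-9")]
placements = [(t, s, partition_for(t, 8)) for t, s in spans]

# Both spans of trace-a land on the same partition, so a single consumer
# sees the whole trace and can make a tail-based sampling decision on it.
assert placements[0][2] == placements[1][2]
```

&lt;p&gt;Because the assignment is deterministic, adding more consumers parallelizes the work without ever splitting a trace across consumers.&lt;/p&gt;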

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; So, let's pause to just talk about the idea of our partition, right? It's like a section of resource allocation within the Kafka queue, right? It says, "Hey, this is how much we're going to devote to..." And this makes sense, right? Like, "Hey, customer payment data is being handled by the queue. Get a bigger partition than, you know, logging user behavior and which was the shoe color that they clicked on the most."&lt;/p&gt;

&lt;p&gt;So, this is partitioning. And then, tail-based sampling - that one probably people are a little more comfortable with, but it's this idea that we want to make decisions about what traces to send and how many of them after the trace has occurred. So, head-based and tail-based sampling: we talk about it a lot, but I always want to get us back to the basics. What the heck is it?&lt;/p&gt;

&lt;p&gt;Head-based sampling is the thing that is sort of always available to us, which is just when a trace starts, saying, "Save that one or don't save that one" - which, you know, maybe by service or something we could be kind of smart about, but it's making a random decision about what we're going to save and what we're going to lose.&lt;/p&gt;

&lt;p&gt;Tail-based sampling is some decision about what's in the trace. It's sort of been this Holy Grail for seven years now in all of the APM observability space. So, one of the things that can get us a little closer there would be at least saying, "Hey, where are all these traces coming from?" And so there is a PR out there for handling partitioning by Trace ID. It's not part of the mainstream Kafka components yet, but it's worth thinking about that it may be possible to say, "Hey, let's go ahead and see what's going on in these traces. Link them together by Trace ID."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; That's true. And regarding how much data Kafka can handle, there are a few ways to increase the throughput. Like changing the batch size - increase it a little bit. There is a linger.ms setting: how much time should a producer wait before sending out to Kafka, right? And that setting is by default 0 milliseconds. It means that as soon as the producer produces a message, it has to be sent to Kafka.&lt;/p&gt;

&lt;p&gt;So, you can configure that to five or 10 milliseconds. A very big improvement can be achieved by changing the acknowledgment setting in Kafka. The producer has a setting for this; there are three types of acknowledgments.&lt;/p&gt;

&lt;p&gt;Acknowledgments like fire-and-forget; then wait for the leader to respond with an acknowledgment; and the third one is "all" - all the replicas should respond that they have received the data, right?&lt;/p&gt;

&lt;p&gt;So the second one - if just the leader responds that it has received the data, it should be fine. This helps a lot with retries and reduces the time it takes for Kafka to receive the data. Those are a few, and: always use compression.&lt;/p&gt;
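&lt;p&gt;In Kafka producer terms, the tuning knobs mentioned above look roughly like this; the values are illustrative, not recommendations:&lt;/p&gt;

```properties
# Throughput-oriented Kafka producer settings (illustrative values)
batch.size=65536          # bytes per batch; the default is 16 KB
linger.ms=10              # wait up to 10 ms to fill a batch; the default is 0
acks=1                    # leader-only acknowledgment; "all" waits for replicas, "0" is fire-and-forget
compression.type=snappy   # compress batches on the wire
```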

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; I love that, and I think this is great advice for people. I wanted to step to our last question, which I think is critical because we're taking up our full time here. This is sort of a footnote, but it's also what you find, folks, when you try to Google &lt;a href="https://signoz.io/blog/maximizing-scalability-apache-kafka-and-opentelemetry/" rel="noopener noreferrer"&gt;OpenTelemetry Kafka&lt;/a&gt;: you get a lot of answers about monitoring your Kafka queue with OpenTelemetry. So, I just want to cover that very briefly - what are the critical things that you should be monitoring when you're monitoring the health of your Kafka queue?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; Kafka has a lot of metrics. It has brokers; it has producers; it has consumers. You can figure out the usual stuff like heap memory and all these things. Apart from that, one of the few really relevant things that you should monitor in Kafka is consumer lag.&lt;/p&gt;

&lt;p&gt;That is very important. What is the offset that the producer is writing to, and how far is the consumer from reading that data? If that lag is increasing, then you should check your consumer services to see why they are not able to process that amount of data, or what is happening. That's number one.&lt;/p&gt;

&lt;p&gt;If you're talking about the complete pipeline where an otel collector writes to Kafka and an otel collector reads from Kafka, you must monitor how much data is being written to Kafka in each partition of each topic, and how much data is being read by the consumers. Is the &lt;a href="https://signoz.io/guides/opentelemetry-collector-vs-exporter/" rel="noopener noreferrer"&gt;OpenTelemetry exporter&lt;/a&gt; failing to write data to Kafka? Is Kafka going through a rebalancing because the number of consumer clients changed, which can happen for a lot of reasons - restarting of pods, scaling up or scaling down of the consumers, and so on? These are the few basic operational things you must monitor apart from the general health of the brokers and the producers.&lt;/p&gt;
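&lt;p&gt;Consumer lag itself is simple arithmetic: for each partition, it is the broker's latest offset minus the offset the consumer group has committed. A toy sketch with made-up offsets for a hypothetical "otlp_spans" topic:&lt;/p&gt;

```python
def consumer_lag(log_end_offsets: dict, committed_offsets: dict) -> dict:
    # Lag per partition: how far the consumer group trails the producers.
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}

# Hypothetical offsets for three partitions of an "otlp_spans" topic.
lag = consumer_lag({0: 1200, 1: 980, 2: 1500},
                   {0: 1200, 1: 950, 2: 600})
# lag == {0: 0, 1: 30, 2: 900}: partition 2 is falling badly behind.
total_lag = sum(lag.values())  # 930
```

&lt;p&gt;An alerting rule would typically watch this total, or the per-partition maximum, trending upward over time rather than any single snapshot.&lt;/p&gt;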

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Yeah, I think it's so critical because, right, Kafka is meant to be monitored and meant to be an operational tool. So, of course, it generates a ton of signals, but you're so right that if you look at the default importer for this stuff, you see 30-plus different signals that show up. And, you know, when I start thinking about making a dashboard in something like &lt;a href="https://signoz.io/docs/introduction/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; that has 30 different things being charted, and you ask someone, "Hey, I think something's wrong with our Kafka queue. What's up?" - and you're looking at 30 things - it starts to get pretty tough.&lt;/p&gt;

&lt;p&gt;So, yeah, you want to try to keep it to about four, right? So that you say, "Hey, I need to see these top-level signals; otherwise, we are healthy or we're not. And I'll fool with the time window or something to see if there's a pattern." You know, you want to try to get this as a very ground-level thing about observability.&lt;/p&gt;

&lt;p&gt;First is the question of "do we have an incident," and 30 metrics are not going to cut it. You have to be kind of focused, and I think consumer lag has got to be a major one. How closely consumption is keeping up with production is pretty key, and any slowdown in actually reading the data off the queue is a real question.&lt;/p&gt;

&lt;p&gt;Alright, now, I know there are going to be follow-up questions and comments, so please do feel free to drop comments under this video, and we will reply to them via text after the fact of the video.&lt;/p&gt;

&lt;p&gt;I will pause; we just have a couple of viewers, so I will pause if you want to drop any questions into chat. I will take a look here.&lt;/p&gt;

&lt;p&gt;You'll see a few more links down in the chat, especially to The OpenTelemetry Collector contrib repository with the Kafka components that you should check out. But Ankit, are there other things that you want people to know, places you'd like them to look you up or get more information about?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ANKIT:&lt;/strong&gt; Sure, I am always available in the SigNoz community, so you can hit me up there if you want to start a discussion. If you want to start a discussion on the GitHub itself, that is also fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Yeah, if you have questions about collecting data with OpenTelemetry and using the OpenTelemetry Collector, you should join the CNCF Slack and take a look there. If you're curious about charting all of the OpenTelemetry signals of your metrics, logs, and traces all in one place with an open-source tool, go and join the SigNoz Slack and talk to us about it. You will not get kicked out for wanting to talk about Grafana or wanting to compare tools or anything.&lt;/p&gt;

&lt;p&gt;SigNoz is part of your solution for observability and is especially effective. Oh, there was a Reddit thread this week that was like, hey, I just want to get everything charted together, and I understand that this tool is like the perfect thing to do logs from Java or something, but my problem is that it's way too many tools and no one's going to check them all. So, I want one thing.&lt;/p&gt;

&lt;p&gt;So, I was like SigNoz, baby. I didn't know how to say SigNoz. I love it. One of our community members came in and said, hey, the thing you're describing is SigNoz. So, that was cool.&lt;/p&gt;

&lt;p&gt;Anyway, we will be back next week with a new topic. I think we're going to dive deeper into the &lt;a href="https://signoz.io/comparisons/opentelemetry-api-vs-sdk/" rel="noopener noreferrer"&gt;OpenTelemetry API&lt;/a&gt;, maybe we're going to do something. It's going to be so fun. But I'm not going to promise which topic is going to be. I'll get that listed later today. Thank you so much for joining, everybody. We will see you soon. Bye-bye.&lt;/p&gt;




&lt;p&gt;Thank you for taking the time to read this transcript :) If you have any feedback or want any changes to the format, please create an &lt;a href="https://github.com/SigNoz/signoz/issues" rel="noopener noreferrer"&gt;issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to join our Slack community and say hi! 👋&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/slack" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falb6jj148l2mta73yust.webp" alt="SigNoz Slack community"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>OpenTelemetry for AI - OpenLLMetry with SigNoz and Traceloop</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 05:02:16 +0000</pubDate>
      <link>https://dev.to/ankit01oss/opentelemetry-for-ai-openllmetry-with-signoz-and-traceloop-5h3h</link>
      <guid>https://dev.to/ankit01oss/opentelemetry-for-ai-openllmetry-with-signoz-and-traceloop-5h3h</guid>
      <description>&lt;p&gt;Join &lt;a href="https://github.com/serverless-mom" rel="noopener noreferrer"&gt;Nočnica Mellifera&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/nirga/" rel="noopener noreferrer"&gt;Nir&lt;/a&gt; to discuss how machine learning can be monitored with OpenTelemetry. We'll see how the SigNoz dashboards can help you monitor resource use, performance, and find problems before your infra budget goes haywire.&lt;/p&gt;

&lt;p&gt;Below is the recording and an edited transcript of the conversation.&lt;/p&gt;

&lt;h2&gt;Summary of the Talk&lt;/h2&gt;

&lt;p&gt;
  &lt;iframe src="https://www.youtube.com/embed/feKopGAlKtc"&gt;&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Find the conversation transcript below.👇&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; In my first role as a developer, I believe I was trying to do a day-one commit, and I ran my unit tests. I got back that 223 tests were failing. I'll always remember that number because I went to my senior developer, my mentor, and I said I broke a couple of the unit tests with my commit and she said “How many did you break?” I was like, "Oh, 223."&lt;/p&gt;

&lt;p&gt;What I actually said was "a couple hundred." She said, "How many exactly did you break?" I thought, oh boy, she's more upset with me than I thought, and said, "223."&lt;/p&gt;

&lt;p&gt;She says, "Oh, that's fine. 223. Those are always broken. If it had been 224 or 220, that would have been a problem. But 223 is fine. You're ready to go and commit."&lt;/p&gt;

&lt;p&gt;That is always a better thing to start with than, "I think we're live." But that doesn't have much to do with what we're talking about today. We're talking about OpenTelemetry and machine learning with my guest. We have Nir here today. Nir, say hi to the people. Tell us something about Traceloop. Yeah, thank you so much for joining us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Thank you. I'm Nir, the CEO of Traceloop. We're a YC company. We did YC Winter '23, and we are focusing on building a tool for monitoring and evaluation of large language models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; I love that. I think this is such a great topic because OpenTelemetry is this tool to have a very standard way to monitor how our software is running in production. And one of the things that it has always been able to do is tell you how much your resource usage is. But for web applications, resource usage rarely becomes a concern even with massive growth in users.&lt;/p&gt;

&lt;p&gt;Yeah, they use the bandwidth they use, and they're usually doing okay. But in the machine learning space, this is a huge concern. Is stuff growing totally out of proportion?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Yeah, I think specifically around evaluation, this is the most complex part and it still hasn't been resolved for large language models or generative AI in general. So if you look at traditional machine learning, you always had this set of metrics that were sort of guiding you towards what's right and wrong.&lt;/p&gt;

&lt;p&gt;But then came GenAI and you build a prompt where you're working with this model, and you get a response, and you have no idea whether this response was better than the previous one other than just looking at it with your human eyes and saying, "Okay, this looks better."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Yeah, and a massive thing is that when you sit down and realize, "Oh, I am just manually examining my AWS billing to see if I'm doing well," that's a concern. And what's the quality that we're getting out as well is another big piece.&lt;/p&gt;

&lt;p&gt;I love this topic, and I wrote a whole bunch of questions because I was very curious. We were gonna have a great demo of using SigNoz to do this monitoring. We've had a little bit of a technical limitation today, so we are going to do that as a separate segment that will go live probably next week or shortly before KubeCon. We'll try to make that happen, but this is going to be the Q&amp;amp;A portion. We'll do our demo portion very soon. But I wanted to start with my questions here.&lt;/p&gt;

&lt;p&gt;You know you can read them if you're a visual learner, but talk to me about the decision to use OpenTelemetry as the foundation for these tools. I'm curious about that because I think a lot of people do associate OpenTelemetry with a web request, event-based framework, which is not what we're working with here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; We're huge fans of OpenTelemetry. We've been using it for years. I used to be Chief Architect with Fiverr, and we integrated OpenTelemetry into Fiverr.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; That's right, I forgot that Fiverr was like an early OpenTelemetry shop. That's cool. Sorry, go ahead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; So we're huge fans of OpenTelemetry, and I think when we started working with large language models, building these pipelines where you have a vector database, you retrieve data, and then you use it to build a prompt that you send to OpenAI, it makes sense. It feels like a trace that you have in OpenTelemetry. So it's natural to just use OpenTelemetry to export this data and visualize it. For us, it was the easiest decision to just use OpenTelemetry, because it's easier to build a tracing dashboard with OpenTelemetry than to start from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Yeah, I love that. On our community Slack for SigNoz, we have conversations where someone's like, "Is your dashboard compatible with this specific library that generates metrics?" I understand where they're coming from because I've been working in observability for a very long time.&lt;/p&gt;

&lt;p&gt;I remember when that was the case. It was like, "Oh yeah, you have this one library to do this one thing." For example, database indexing. That's an example of something not very event-based or request-based. "Is it compatible with my observability dashboard?" Back then, of course, you'd find out no, it won't handle those kinds of metrics.&lt;/p&gt;

&lt;p&gt;In the OpenTelemetry space, it's like, "Will it report to the &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;OpenTelemetry collector&lt;/a&gt;?" If yes, then yes. If no, then no. But generally, the answer is yes. You know there's some kind of ingestion tool for it. But you're right that the actual flow of giving a prompt, adding a single vector, and getting back a response does look like a trace, with a request-response pattern to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Yeah, exactly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; And I apologize, this is my ignorance. You know, I think of using monitoring for LLMs as monitoring for the construction phase, the learning phase. But we're also talking about monitoring how well it is performing at answering questions. How well is that model functioning?&lt;/p&gt;

&lt;p&gt;All right, so talk to me a little bit about the limitations of existing tools. We're not trying to embarrass anybody, but this is new technology, so of course, there are limitations to what's out there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; I think the thing I like the most about OpenTelemetry or one of the things I like the most, is auto-instrumentation. So you don't have to do anything. You just initialize the standard &lt;a href="https://signoz.io/comparisons/opentelemetry-api-vs-sdk/" rel="noopener noreferrer"&gt;OpenTelemetry SDK&lt;/a&gt;, and you immediately get visibility into HTTP calls, database calls, whatever you want. So we wanted to have the same experience with LLMs.&lt;/p&gt;

&lt;p&gt;Right now, the existing tools for observability with LLMs require you to manually log and specify in your code where you call OpenAI or replace the OpenAI API with some other proprietary API, which is a lot of work if you have a significant code base already using LLMs. What's great about OpenTelemetry is that you get a great infrastructure for auto-instrumenting these calls. So basically, you just add one line of code, and you get instant visibility into your vector database calls, LLM calls, and anything you want.&lt;/p&gt;
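&lt;p&gt;The one-line auto-instrumentation Nir describes works by patching library calls so user code stays untouched. As a rough illustration of the idea (not Traceloop's actual implementation), here is a minimal, stdlib-only Python sketch, using a hypothetical &lt;code&gt;FakeLLMClient&lt;/code&gt; as a stand-in for a real SDK:&lt;/p&gt;

```python
import functools
import time

# Hypothetical stand-in for a third-party client; real auto-instrumentation
# patches libraries such as the OpenAI SDK in a similar way.
class FakeLLMClient:
    def complete(self, prompt):
        return f"answer to: {prompt}"

captured_spans = []  # stand-in for a span exporter

def instrument(cls, method_name):
    """Wrap a method so every call records a span-like dict,
    without the calling code changing at all."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        result = original(self, *args, **kwargs)
        captured_spans.append({
            "name": f"{cls.__name__}.{method_name}",
            "duration_s": time.perf_counter() - start,
        })
        return result

    setattr(cls, method_name, wrapper)

# One "init" call patches the client; the user code below is unchanged.
instrument(FakeLLMClient, "complete")

client = FakeLLMClient()
print(client.complete("How do I build an agent?"))
print(captured_spans[0]["name"])
```

&lt;p&gt;Real instrumentations also record attributes such as the prompt, model name, and token counts on each span; the mechanism of wrapping the call is the same.&lt;/p&gt;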

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; This is such a big piece with OpenTelemetry. This &lt;a href="https://signoz.io/blog/opentelemetry-context-propagation/" rel="noopener noreferrer"&gt;context propagation&lt;/a&gt; and automatic instrumentation is fantastic. A lot of the early tools for adding observability to IoT, for example, tend to be very single-use tools. It's a specific dashboard, a specific call. But yeah, also you'll see where it's like, "Oh, we're fully bought into this technology, so we have 500 calls for it across the code base." That's going to be a tough sell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Definitely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Now, this gets us to how we handle tracing the entire system execution, not just the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Because we're using OpenTelemetry, we need to instrument and log calls to Pinecone and OpenAI. But if you also use some database in your system as part of the LLM chain or workflow, then you get it out of the box. OpenTelemetry already instruments a lot of standard libraries. So if you're using requests in Python, you get this visibility out of the box. You don't need to do anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; That's been my experience as well. Even if you're using something more obscure, you can add a couple of lines of code to add it to your traces or add metrics if you're bought into that context.&lt;/p&gt;

&lt;p&gt;One of the things we have seen people dealing with is the issue of extreme vendor lock-in. They got locked into an observability tool or a DB tool and were completely stuck on that model. You mentioned that you are seeing some growth with people saying, "Hey, I want to move beyond OpenAI," or, "I want to integrate this other tool," and lock-in has been more limited. How do you feel like that's a concern that you're addressing with this tool?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; I think, and this was one of the points I made when we did a Show HN post a few weeks ago, is that OpenTelemetry did a great thing for the traditional microservice cloud observability world. Five or ten years ago, you needed to install a Datadog agent, and then you were locked into Datadog for life. Switching to another observability platform was a lot of work. But then came OpenTelemetry, and it gave you the freedom to choose. You can use Datadog, you can use New Relic, you can use Sentry, whatever you want.&lt;/p&gt;

&lt;p&gt;So, we can use the same technology and the same idea to do the same with large language models. Just use OpenTelemetry as a standard, and then you can connect it to those traditional observability platforms, but also to the new ones that specialize in evaluation and monitoring for large language models.&lt;/p&gt;

&lt;p&gt;And when you talk to a lot of companies, many companies, when they're just starting, they still want to use their existing observability tools. They don't want to switch to a new one. Only when they start using LLMs in a more advanced way do they look for more sophisticated, specific solutions for LLMs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; By the way, I want to thank you so much. We drafted a few questions, and I said, "Here's what I want to ask you." Thank you so much for rolling with me, and completely changing the sequence on you. I feel so bad about this. Two or three years ago, I had a great guest, very knowledgeable, who had rehearsed the questions in order and was super disappointed that I had to change the sequence.&lt;/p&gt;

&lt;p&gt;I want to talk about some of the technical challenges that you faced. I think that the very basic request modeling and the concept of tracing can make sense here. But I'm sure there was a bit of a challenge doing that translation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; I think OpenTelemetry is a great technology as a consumer, someone who's just using it. But if you're developing around it, it's fairly complex. We were already knowledgeable of observability, but there was a steep learning curve in understanding how to build custom instrumentations in OpenTelemetry. How do you connect it to the LLM ecosystem? It's a complex technology, evolving all the time, and it's still very young. There's a steep learning curve to work with it properly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; One of the things I was looking at this month is that you can't connect trace spans after the fact on the collector side. It's a perfect example where a lot of people using this are experts at it, they've been working in large enterprises, but it's not the first thing that occurs to you to document.&lt;/p&gt;

&lt;p&gt;It's something I'm trying to work on with the project, finding places where we can put something out, either changing the documentation or writing guides to help people get more into it. But once you're trying to develop against it, it's an advanced tool.&lt;/p&gt;

&lt;p&gt;By the way, I shared the link to the Show HN post. That's a great place to get a primer and also to see what's on people's minds and what they're interested in. Hacker News always has an interesting crowd that shows up. It can be very hard to get their interest, but once you do, they get excited. The number of people who moved from clicking on it, and reading the README, to downloading it and deploying it is quite impressive.&lt;/p&gt;

&lt;p&gt;So right now, it (here, Traceloop) provides instrumentation for specific models like OpenAI, Anthropic, and Cohere. Are there plans to extend into other machine learning models in the future?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Yes, we already support Hugging Face Transformers models. We're adding more every day. If you go by this list, I got it from the comments; I didn't try out every integration to start with. We are constantly adding more and more instrumentations. We plan to support everything in the ecosystem. We already support Pinecone and Chroma as vector databases. We'll have others coming soon. We support LangChain as a framework for LLMs. We basically will support everything in this ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; This makes sense. If you're working with APIs and you have expected request or response structures, it shouldn't come to a point where you're like, "No, we can't instrument that API." That should be doable.&lt;/p&gt;

&lt;p&gt;Lastly, before we go into a demo, do you have advice for engineers who are considering a lot of this tooling for people who are making use of these models within their products but want to start asking questions about security, compliance, observability, the second-order concerns?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; I think you should always aspire to use as many open-source tools as you can because they can be easily deployed on-prem if you need to. Especially around large language models, we see a lot of larger companies that see privacy as a major concern for them because the prompts within the traces contain very sensitive data. They can't trust this information with anyone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Yeah, it makes sense, especially when you're dealing with sensitive data like sales relationships on the enterprise side or therapy questions on the personal side. It's the kind of data you don't want to see spread about or shared. Talking about self-hosting and open-source tools as alternatives for how you're modeling certain data makes a lot of sense. Telemetry's strength is that you can route data to multiple places, so you can have the same collector doing that kind of routing.&lt;/p&gt;

&lt;p&gt;This has been a fantastic talk, and I can't wait for us to dive into a little demo of implementing it on your own set of tools and reporting some of the data over to &lt;a href="https://signoz.io/docs/introduction/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; One of the questions people keep asking is, "Hey, why is this separate from OpenTelemetry? Are you planning on integrating it back into OpenTelemetry, giving it to the bigger project?" I've been contributing to the &lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;OpenTelemetry project&lt;/a&gt; for a long time, and I'd love to become part of OpenTelemetry at some point. But you have to understand that OpenTelemetry is a big project.&lt;/p&gt;

&lt;p&gt;This is a huge project with a lot of stakeholders, and a lot of users are using it outside of the LLM world. It's hard to commit the semantic conventions we're defining for the instrumentations today into the main OpenTelemetry repo. This is still evolving, and we're trying to figure out the right way to report the different prompts and responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; I'll say on that, when you talk to the TAGs or the SIGs within the group, they're like, "We want to see people doing instrumentation as separate projects." Instrumentation is mainly for the core language projects. Anybody could do a collector, but this is new instrumentation. So, it's early days, and how it's going to be folded in and when it will be folded in could be years down the line. I think that makes sense.&lt;/p&gt;

&lt;p&gt;Yeah, it may surprise some people, but that's how the project works. It's easy to contribute to the collector, but instrumentation is generally done as separate projects. I think that makes a ton of sense.&lt;/p&gt;

&lt;p&gt;Thank you for having such complete answers to these questions. I love that you're coming in with so much background and empathy for the users who are trying to understand what's going on with these models and track their actual usage.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;DEMO STARTS NOW&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;📝 Note&lt;/p&gt;

&lt;p&gt;The conversation ends here, as it transitions into a demonstration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Today, I'm going to showcase OpenTelemetry with a simple demo using Pinecone, a recommendation engine. I've set up a RAG pipeline based on a Pinecone demo. I loaded a dataset named 'langchain-python-docs', containing all the LangChain documentation up to around May or June 2023, into the Pinecone database, and now it's ready to use. If we go here, we can see the set of documentation, all the docs you have for LangChain.&lt;/p&gt;

&lt;p&gt;On top of that, I built a really simple RAG pipeline that allows you to ask questions about the LangChain documentation. Whenever we get a question, we go to Pinecone, get the five most relevant documents from the LangChain documentation, and then call OpenAI with a prompt that contains all this documentation along with the question the user asked. That's it, and we print the answer.&lt;/p&gt;

&lt;p&gt;We can run it here and see the answer. It's doing the retrieval; it takes a while because OpenAI is slow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; This is probably the number one use case for OpenTelemetry: "Hey, why has our service gotten so much slower?" Well, I think it might be the API, but I can't prove it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; So now you might ask, what was the slow part? Was it the querying? Pinecone? OpenAI? The embedding? I have no idea, right? But yeah, you can see the answer here; it gave me a pretty good answer to the question, "How do I build an agent with LangChain?", a really good answer, straight from the documentation. And yeah, it took like 10 seconds, and now we can see all the telemetry for our service. This is helpful!&lt;/p&gt;

&lt;p&gt;Let's see how long it takes to install OpenLLMetry. It should only take a few seconds. If it takes longer, we'll need to figure out why. To install OpenLLMetry, go to the documentation. It's pretty easy to follow. If you're a keyboard person, you can use the command line instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; We'll also share links to the documentation in the chat, so you don't have to type everything you see in the video.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Okay, so to install OpenLLMetry, we'll install traceloop-sdk, which wraps OpenLLMetry. Then, we'll initialize it. Since we're running locally, we'll disable batch as well. Let's see how long it takes...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; We are going to see right now how to disable batching when trying out instrumentation. If you start with a production-style setup with batched responses, you will sit there sending requests for five minutes and see nothing. You will think it is broken and go back and change your code, so you want batching disabled to get that data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Right, so let's do it here. I am using poetry to install the traceloop-sdk. After it is installed, we are going to go to the pipeline and import the traceloop-sdk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Sorry, I have a funny story to tell you. I had not used a modern code completion tool until this week, because a friend asked me to try out their tool. I was like, "Oh, this is very nice!" I was doing my imports and it said, "Do you want to import this other thing that you use later in your code?" I was like, "Uh, why are you telling me to import this random component?" Then I saw, "Oh, right, because I use it and I forgot to import it." It was marked in the parser and stuff, but I was like, "Oh, that's pretty nice!" Thank you for that.&lt;/p&gt;

&lt;p&gt;When you are first implementing &lt;a href="https://signoz.io/blog/what-is-opentelemetry/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; instrumentation or when using instrumentation, it is important to disable batching so that you can see each request traced in your dashboard. This is because you want to be able to see the results of your changes as soon as possible.&lt;/p&gt;
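&lt;p&gt;To make the batching point concrete, here is a hedged, stdlib-only sketch (not the real OpenTelemetry &lt;code&gt;BatchSpanProcessor&lt;/code&gt;) of why disabling batch makes spans visible immediately while a batched processor holds them back:&lt;/p&gt;

```python
class TinyExporter:
    """Collects finished spans; a stand-in for an OTLP exporter."""
    def __init__(self):
        self.exported = []

    def export(self, spans):
        self.exported.extend(spans)

class SpanProcessor:
    """Minimal sketch: with disable_batch=True every span is exported
    immediately; otherwise spans sit in a buffer until it fills."""
    def __init__(self, exporter, disable_batch=False, batch_size=100):
        self.exporter = exporter
        self.disable_batch = disable_batch
        self.batch_size = batch_size
        self.buffer = []

    def on_end(self, span):
        if self.disable_batch:
            self.exporter.export([span])  # span reaches the backend at once
        else:
            self.buffer.append(span)      # span waits for a full batch
            if len(self.buffer) >= self.batch_size:
                self.exporter.export(self.buffer)
                self.buffer = []

exporter = TinyExporter()
immediate = SpanProcessor(exporter, disable_batch=True)
immediate.on_end({"name": "openai.chat"})
print(len(exporter.exported))  # 1: visible right away

buffered = SpanProcessor(TinyExporter(), disable_batch=False)
buffered.on_end({"name": "openai.chat"})
print(len(buffered.exporter.exported))  # 0: still waiting on a full batch
```

&lt;p&gt;In production you want the batched behavior for efficiency; while experimenting, the immediate mode is what lets you confirm your instrumentation works at all.&lt;/p&gt;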

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Absolutely. So, I'll do that now. We're using Poetry to install the SDK.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Installation process is performed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once it's installed, we can use OpenTelemetry to trace our requests. We need to import it, and we're good to go.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;OpenTelemetry is imported.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, now, you can instrument your code and monitor the performance in detail.&lt;/p&gt;

&lt;p&gt;We'll re-run the demo with OpenTelemetry, and this time, we'll see the results displayed as traces.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The demo is rerun with OpenTelemetry.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can see that the traces are exported. This allows you to see the different steps and response times within your application.&lt;/p&gt;

&lt;p&gt;This level of visibility is essential for understanding how each part of your system contributes to the overall response time.&lt;/p&gt;

&lt;p&gt;However, what you'll notice is that these are individual spans, and they're not connected to form a complete trace.&lt;/p&gt;

&lt;p&gt;A complete trace, with connected spans, would provide a better picture of the entire workflow.&lt;/p&gt;

&lt;p&gt;To create connected traces, we can use annotations. OpenTelemetry recommends annotating your workflow. You can use decorators to annotate different tasks and link them together.&lt;/p&gt;

&lt;p&gt;So, you'll have a comprehensive view of how data flows through your system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Precisely. We'll annotate our workflow and different tasks within our program. We've annotated the main method, which is the "ask_question" function. It's the core of our workflow.&lt;/p&gt;

&lt;p&gt;That's the part that calls other methods and orchestrates the whole process.&lt;/p&gt;

&lt;p&gt;We've also annotated the "query_lm" task, which corresponds to the call to OpenAI, and the "retrieve_docs" task, which retrieves documents from Pinecone.&lt;/p&gt;

&lt;p&gt;This helps us see how different parts of our system are performing and if any are causing bottlenecks. After annotating our code, we can re-run it and check the traces to see the complete workflow.&lt;/p&gt;
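&lt;p&gt;The annotation idea can be sketched with plain Python decorators. This is an illustrative, stdlib-only analogue of OpenLLMetry-style workflow/task decorators; the function names (&lt;code&gt;ask_question&lt;/code&gt;, &lt;code&gt;retrieve_docs&lt;/code&gt;, &lt;code&gt;query_llm&lt;/code&gt;) mirror the demo's structure rather than being copied from its code:&lt;/p&gt;

```python
import functools

trace = []  # finished span events in call order, a stand-in for a backend

def task(name):
    """Sketch of a @task/@workflow decorator: records a span around the
    wrapped function so nested calls show up as one connected trace."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace.append(("start", name))
            try:
                return fn(*args, **kwargs)
            finally:
                trace.append(("end", name))
        return wrapper
    return decorate

# Hypothetical pipeline mirroring the demo's structure.
@task("retrieve_docs")
def retrieve_docs(q):
    return [f"doc about {q}"]

@task("query_llm")
def query_llm(q, docs):
    return f"answer using {len(docs)} docs"

@task("ask_question")  # the workflow that orchestrates the tasks
def ask_question(q):
    return query_llm(q, retrieve_docs(q))

print(ask_question("agents"))
print([name for evt, name in trace if evt == "start"])
```

&lt;p&gt;Because the outer &lt;code&gt;ask_question&lt;/code&gt; span is still open when the inner tasks run, the retrieval and LLM spans nest under it, which is exactly the connected view the disconnected spans were missing.&lt;/p&gt;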

&lt;p&gt;&lt;em&gt;The program is rerun with annotations.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Let’s go to the Traceloop website and create an account.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to your GitHub account and create an organization.&lt;/li&gt;
&lt;li&gt;Generate an API key.&lt;/li&gt;
&lt;li&gt;Add the API key as an environment variable.&lt;/li&gt;
&lt;li&gt;Run your program again.&lt;/li&gt;
&lt;/ol&gt;
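&lt;p&gt;In shell form, the steps above look roughly like this. The package and variable names follow Traceloop's documented conventions, but treat them as assumptions and check the current docs before relying on them:&lt;/p&gt;

```shell
# Install the SDK that wraps OpenLLMetry
pip install traceloop-sdk

# The API key generated in the Traceloop dashboard (placeholder value)
export TRACELOOP_API_KEY="<your-api-key>"

# Re-run the app; traces should now flow to Traceloop
python app.py
```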

&lt;p&gt;Once you have disabled batching, you can see the traces of your requests in the Traceloop dashboard. You can also see how long each request took and which services were called.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They demonstrate how to enable Traceloop, sign up, and create an API key to export traces to the &lt;a href="https://signoz.io/blog/opentelemetry-visualization/" rel="noopener noreferrer"&gt;OpenTelemetry dashboard&lt;/a&gt;. However, they realize that the generated traces are individual and not a connected sequence.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To see a complete connected trace, annotations are needed. OpenTelemetry recommends annotating your workflow to view a unified trace. You can use decorators to connect and visualize the overall process.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They proceed to add annotations to connect the spans and show the workflow as a comprehensive trace.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Great, now let's run the program again. You can see it exporting traces. Go back to the tracing screen, and you can see the traces for the call to OpenAI and the call to Pinecone. But they're disconnected spans, so we want to connect them.&lt;/p&gt;

&lt;p&gt;Now, when we view the traces, we can see the entire workflow. With annotations, you can follow the sequence of tasks and get insights into which parts might need optimization. This provides valuable information for improving the performance of your application.&lt;/p&gt;

&lt;p&gt;It's an essential tool for anyone building AI-integrated services and wanting to understand what's happening under the hood.&lt;/p&gt;

&lt;p&gt;OpenTelemetry's integration capabilities allow you to connect it to various components and gather comprehensive traces of your system's operation.&lt;/p&gt;

&lt;p&gt;Now, if you go back to the Traceloop dashboard, you can see the workflow that connects everything. It shows you how much time was spent on each task, offering real insight into what's happening.&lt;/p&gt;

&lt;p&gt;Exactly, and because we're using OpenTelemetry, you can connect it to anything you want. In our documentation, we have a section on integrations, like how to integrate with SigNoz and send trace data to it. It's easy, and you can set environment variables to configure it.&lt;/p&gt;

&lt;p&gt;[The demonstration ends here, with a transition into further discussion about integrations.]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Alright, so I want to show you how easy it is to integrate OpenTelemetry with your existing services. First, make sure you've set up your service name to ensure that everything is well-defined. Setting the service name is a crucial step for proper tracing and monitoring of your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; Once you have your service name in place, you can go to your settings, which I already have here in my SigNoz instance. And please, bear with us for a moment as we adjust the text size to make it look just right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; It's always interesting when text sizing becomes a part of your workflow, isn't it?&lt;/p&gt;

&lt;p&gt;Oh, you have no idea! It's a common struggle, especially when you zoom in and out of your interface. So, let's go to the ingestion keys now. But remember folks, don't send stuff to the Traceloop account; that would be weird. Right, and we want to avoid any odd data ending up in the Traceloop account.&lt;/p&gt;

&lt;p&gt;Now, can we take a look at a couple of traces in the SigNoz dashboard? Let's explore and report it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; To report it, we need to set up Traceloop and add the correct base URL. We also need to export the Traceloop headers. These details are all well-documented. Now, let's copy the ingestion key and add it as an environment variable.&lt;/p&gt;
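&lt;p&gt;Concretely, routing OpenLLMetry's traces to SigNoz instead of Traceloop's backend comes down to environment variables along these lines. The endpoint and header values below are illustrative assumptions; use the ones from your own SigNoz setup:&lt;/p&gt;

```shell
# Point the SDK at a SigNoz OTLP endpoint (placeholder region/host)
export TRACELOOP_BASE_URL="https://ingest.<region>.signoz.cloud:443"

# Attach the SigNoz ingestion key as a header (placeholder value)
export TRACELOOP_HEADERS="signoz-access-token=<your-ingestion-key>"

# Run the app again; traces go to SigNoz
python app.py
```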

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; By the way, typing keys from memory is quite an acrobatic feat, especially when you're doing it during a live stream. It's indeed a skill, but it's all part of the job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; And that's it; now we're all set. Let's run the app again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; I'd undoubtedly misspell "access" if I were doing it manually. Automation can be a lifesaver in these situations.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The setup is complete, and they start sending traces.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; So now we're sending traces. Let's go back to SigNoz, navigate to the traces section, and wait for a few seconds to see the traces.&lt;/p&gt;

&lt;p&gt;We should be able to see the traces from our application within SigNoz.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They wait for a moment and start seeing the traces.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We've got a couple of traces. Let's explore them. This is where we can dive into the details of what our application is doing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They analyze the traces in the SigNoz dashboard.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; These are the same traces we saw in Traceloop, and you can observe the workflow, tasks, and everything here. The ability to have a complete trace of your application's operation is invaluable for monitoring and debugging.&lt;/p&gt;

&lt;p&gt;But what's impressive is how well-structured the attributes are. The key naming is logical and makes sense. It's organized in a way that you can query and analyze the data effectively. And check out the prompt attributes – they're nicely labeled and provide meaningful data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; This is the kind of visibility that's essential for understanding your application's behavior.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They discuss the service name and attributes in the traces.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can set your service name explicitly if needed, but the default service name is quite reasonable.&lt;/p&gt;

&lt;p&gt;Customizing the service name gives you the flexibility to label services in a way that suits your business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; That's true. And here's one more thing I want to show. Some people might be hesitant to annotate their code. But if you're using structured frameworks like Langchain, you don't need to annotate anything. Frameworks often have built-in structures that OpenTelemetry can utilize for tracing. I've rewritten the example you saw earlier, this time using Langchain. And you can see that you don't need to annotate anything. We're still calling Trace Loop to initialize it. It's essentially two lines of code to integrate with OpenTelemetry.&lt;/p&gt;

&lt;p&gt;It's a very clean approach, and it saves you the effort of manually annotating your code.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They run the Langchain example without manual annotations.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, with Langchain, we don't have to annotate tasks manually. OpenTelemetry can figure out the program structure and tasks automatically.&lt;/p&gt;

&lt;p&gt;That's a significant advantage for those who prefer a more hands-off approach to tracing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They analyze the traces generated by Langchain and see the same high-quality attributes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we have the same level of detail in the traces without manual annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; It's an excellent option for people who want to streamline their tracing without investing in extensive annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NIR:&lt;/strong&gt; And it's good to remember that not all LLM providers, databases, or frameworks are supported yet. We encourage anyone interested to contribute and help us build OpenTelemetry integration for their preferred frameworks and tools.&lt;/p&gt;

&lt;p&gt;Collaboration and contributions can make OpenTelemetry even more versatile and comprehensive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Absolutely. Start with what you can integrate automatically, and then build on that. Your observability will benefit from the additional context. It's a practical approach to gain better insights into your applications.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They wrap up their discussion.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I want people to check this out, and they can find links to the documentation in the description below. If you have questions, please feel free to ask in the comments. And I'll reach out to you on the CNCF Slack.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The demonstration ends with a call to action for the audience.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NICA:&lt;/strong&gt; Thank you for joining us. We'll be back next week with more insights.&lt;/p&gt;




&lt;p&gt;Thank you for taking the time to read this transcript :) If you have any feedback or want any changes to the format, please create an &lt;a href="https://github.com/SigNoz/signoz/issues" rel="noopener noreferrer"&gt;issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to join our Slack community and say hi! 👋&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/slack" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falb6jj148l2mta73yust.webp" alt="SigNoz Slack community"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Monitor MySQL Metrics with OpenTelemetry</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 05:00:28 +0000</pubDate>
      <link>https://dev.to/ankit01oss/how-to-monitor-mysql-metrics-with-opentelemetry-5ga1</link>
      <guid>https://dev.to/ankit01oss/how-to-monitor-mysql-metrics-with-opentelemetry-5ga1</guid>
      <description>&lt;p&gt;Database monitoring is an important aspect to look at for a high-volume or high-traffic system. The database performance drastically impacts the response times for the application. In this tutorial, you will install OpenTelemetry Collector to collect MySQL metrics and then send the collected data to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9birxcjaq0uqljz3ybc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9birxcjaq0uqljz3ybc.webp" alt="Cover Image" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;In this tutorial, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#a-brief-overview-of-mysql-database" rel="noopener noreferrer"&gt;A brief overview of MySQL Database&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#a-brief-overview-of-opentelemetry" rel="noopener noreferrer"&gt;A Brief Overview of OpenTelemetry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#how-does-opentelemetry-collector-collect-data" rel="noopener noreferrer"&gt;How does OpenTelemetry Collector collect data?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#pre-requisites" rel="noopener noreferrer"&gt;Pre-requisites&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#setting-up-signoz" rel="noopener noreferrer"&gt;Setting up SigNoz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#setting-up-opentelemetry-collector" rel="noopener noreferrer"&gt;Setting up OpenTelemetry collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#monitoring-with-signoz-dashboard" rel="noopener noreferrer"&gt;Monitoring with Signoz Dashboard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#reference-mysql-metrics-and-labels-collected-by-opentelemetry" rel="noopener noreferrer"&gt;Reference: MySQL metrics and labels collected by OpenTelemetry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#further-reading" rel="noopener noreferrer"&gt;Further Reading&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to jump straight into implementation, start with this &lt;a href="https://signoz.io/blog/opentelemetry-mysql-metrics-monitoring/#pre-requisites" rel="noopener noreferrer"&gt;pre-requisites&lt;/a&gt; section.&lt;/p&gt;

&lt;h2&gt;
  
  
  A brief overview of MySQL Database
&lt;/h2&gt;

&lt;p&gt;MySQL is an open-source relational database used by several popular companies around the world. Over the years, it has matured quite well and provides excellent performance even at large scale. Despite this, the tooling the MySQL community provides for monitoring the database is limited. With a metrics collector like the OpenTelemetry Collector, you can easily fetch the metrics and publish them to a remote destination like SigNoz for visualization.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will build an end-to-end monitoring solution for MySQL using an &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/mysqlreceiver/README.md" rel="noopener noreferrer"&gt;OpenTelemetry MySQL receiver&lt;/a&gt; to collect the metrics and &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; to visualize the collected metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Brief Overview of OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is a set of APIs, SDKs, libraries, and integrations aiming to standardize the generation, collection, and management of telemetry data (logs, metrics, and traces). It is backed by the Cloud Native Computing Foundation and is the leading open-source project in the observability domain.&lt;/p&gt;

&lt;p&gt;The data you collect with OpenTelemetry is vendor-agnostic and can be exported in many formats. Telemetry data has become critical in observing the state of distributed systems. With microservices and polyglot architectures, there was a need to have a global standard. OpenTelemetry aims to fill that space and is doing a great job at it thus far.&lt;/p&gt;

&lt;p&gt;In this tutorial, you will use OpenTelemetry Collector to collect MySQL metrics for performance monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is OpenTelemetry Collector?
&lt;/h3&gt;

&lt;p&gt;OpenTelemetry Collector is a stand-alone service provided by OpenTelemetry. It can be used as a telemetry-processing system with a lot of flexible configurations to collect and manage telemetry data.&lt;/p&gt;

&lt;p&gt;It can understand different data formats and send it to different backends, making it a versatile tool for building observability solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;Read our complete guide on OpenTelemetry Collector&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does OpenTelemetry Collector collect data?
&lt;/h2&gt;

&lt;p&gt;Data collection in OpenTelemetry Collector is facilitated through receivers. Receivers are configured via YAML under the top-level &lt;code&gt;receivers&lt;/code&gt; tag. To ensure a valid configuration, at least one receiver must be enabled.&lt;/p&gt;

&lt;p&gt;Below is an example of an &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; receiver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The OTLP receiver accepts data through gRPC or HTTP in the &lt;a href="https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md" rel="noopener noreferrer"&gt;OTLP&lt;/a&gt; format. There are advanced configurations that you can enable via the YAML file.&lt;/p&gt;

&lt;p&gt;Here’s a sample configuration for an OTLP receiver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:4318"&lt;/span&gt;
        &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;allowed_origins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http://test.com&lt;/span&gt;
            &lt;span class="c1"&gt;# Origins can have wildcards with *, use * by itself to match any origin.&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://*.example.com&lt;/span&gt;
          &lt;span class="na"&gt;allowed_headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Example-Header&lt;/span&gt;
          &lt;span class="na"&gt;max_age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;7200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more details on advanced configurations &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
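&lt;p&gt;Once parsed, the YAML above is just nested mappings, which makes the "at least one receiver" rule easy to picture. Here is a minimal sketch in Python; the dict mirrors the OTLP receiver config after parsing and is purely illustrative, not a real Collector API.&lt;/p&gt;

```python
# The parsed equivalent of the YAML receiver block shown earlier.
config = {
    "receivers": {
        "otlp": {"protocols": {"grpc": None, "http": None}},
    }
}

def has_enabled_receiver(cfg):
    """A config with an empty (or missing) receivers section is invalid."""
    return bool(cfg.get("receivers"))

print(has_enabled_receiver(config))              # True
print(has_enabled_receiver({"receivers": {}}))   # False
```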

&lt;p&gt;Once a receiver is configured, it needs to be &lt;strong&gt;enabled&lt;/strong&gt; to start the data flow. This involves setting up &lt;strong&gt;pipelines&lt;/strong&gt; within a &lt;strong&gt;&lt;code&gt;service&lt;/code&gt;&lt;/strong&gt;. A &lt;strong&gt;pipeline&lt;/strong&gt; acts as a streamlined pathway for data, outlining how it should be processed and where it should go. A pipeline comprises the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Receivers:&lt;/strong&gt; These are entry points for data into the OpenTelemetry Collector, responsible for collecting data from various sources and feeding it into the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processors:&lt;/strong&gt; After data is received, processors manipulate, filter, or enhance the data as needed before it proceeds further in the pipeline. They provide a way to customize the data according to specific requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exporters:&lt;/strong&gt; After processing, the data is ready for export. Exporters define the destination for the data, whether it's an external monitoring system, storage, or another service. They format the data appropriately for the chosen output.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below is an example pipeline configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a breakdown of the above metrics pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Receivers:&lt;/strong&gt; This pipeline is configured to receive metrics data from two sources: OTLP and Prometheus. The &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; receiver collects metrics using both gRPC and HTTP protocols, while the &lt;strong&gt;&lt;code&gt;prometheus&lt;/code&gt;&lt;/strong&gt; receiver gathers metrics from Prometheus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processors:&lt;/strong&gt; Metrics data is processed using the &lt;strong&gt;&lt;code&gt;batch&lt;/code&gt;&lt;/strong&gt; processor. This processor batches metrics before exporting them, reducing the number of outgoing requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exporters:&lt;/strong&gt; Metrics processed through this pipeline can be exported to both OTLP and Prometheus destinations. The &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; exporter sends data to an endpoint specified in the configuration, and the &lt;strong&gt;&lt;code&gt;prometheus&lt;/code&gt;&lt;/strong&gt; exporter handles the export of metrics to a Prometheus-compatible destination.&lt;/li&gt;
&lt;/ul&gt;
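&lt;p&gt;A common failure mode with pipelines is referencing a component that was never defined at the top level. The following sketch is a sanity check of that rule, not part of the Collector itself; the dict mirrors the example metrics pipeline above.&lt;/p&gt;

```python
# Parsed equivalent of the example pipeline configuration above.
config = {
    "receivers": {"otlp": {}, "prometheus": {}},
    "processors": {"batch": {}},
    "exporters": {"otlp": {}, "prometheus": {}},
    "service": {
        "pipelines": {
            "metrics": {
                "receivers": ["otlp", "prometheus"],
                "processors": ["batch"],
                "exporters": ["otlp", "prometheus"],
            }
        }
    },
}

def undefined_components(cfg):
    """Return (pipeline, section, name) tuples with no top-level definition."""
    missing = []
    for pipe_name, pipeline in cfg["service"]["pipelines"].items():
        for section in ("receivers", "processors", "exporters"):
            for ref in pipeline.get(section, []):
                if ref not in cfg.get(section, {}):
                    missing.append((pipe_name, section, ref))
    return missing

print(undefined_components(config))  # [] -- every reference is defined
```

&lt;p&gt;The real Collector performs this validation at startup and refuses to run on a dangling reference, so catching typos like this early saves a restart cycle.&lt;/p&gt;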

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;This tutorial assumes that the OpenTelemetry Collector is installed on the same host as the MySQL setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preparing MySQL database setup
&lt;/h3&gt;

&lt;p&gt;For this tutorial, you can use a local MySQL setup if you already have one installed. If you do not, you can follow the guide below to run MySQL locally using &lt;a href="https://docs.docker.com/engine/install" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and &lt;a href="https://docs.docker.com/compose" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This helps you avoid any technical challenges related to setting up the agent or database locally. The links below can help you with the Docker installation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/desktop/install/linux-install/" rel="noopener noreferrer"&gt;Docker Desktop for Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/desktop/install/mac-install" rel="noopener noreferrer"&gt;Docker Desktop for Mac (macOS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/desktop/install/windows-install" rel="noopener noreferrer"&gt;Docker Desktop for Windows&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once your Docker installation is ready, create the below &lt;code&gt;docker-compose.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.3"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mysqldb&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;myoteldb"&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password123"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3306:3306"&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Opens port 3306 on the container&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3306"&lt;/span&gt;
      &lt;span class="c1"&gt;# Where our data will be persisted&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-db:/var/lib/mysql&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;my-db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once done, execute the below command from the same folder to get the MySQL database server up and running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;docker-compose up -d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple Docker Compose file spins up a MySQL database in no time. It persists the MySQL data in a named volume mounted into the container and makes the database accessible locally on port 3306.&lt;/p&gt;
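&lt;p&gt;Before pointing the Collector's &lt;code&gt;mysql&lt;/code&gt; receiver at the endpoint, it can be useful to confirm the container is actually listening. A quick pre-flight sketch, assuming the &lt;code&gt;docker-compose&lt;/code&gt; setup above mapped MySQL to &lt;code&gt;localhost:3306&lt;/code&gt;:&lt;/p&gt;

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Will print False until the docker-compose stack is up.
    print("mysql reachable:", port_open("localhost", 3306))
```

&lt;p&gt;This only checks TCP reachability, not credentials; the receiver will still fail later if the username or password in &lt;code&gt;config.yaml&lt;/code&gt; is wrong.&lt;/p&gt;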

&lt;h2&gt;
  
  
  Setting up SigNoz
&lt;/h2&gt;

&lt;p&gt;You need a backend to which you can send the collected data for monitoring and visualization. &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; is an OpenTelemetry-native APM that is well-suited for visualizing OpenTelemetry data.&lt;/p&gt;

&lt;p&gt;SigNoz cloud is the easiest way to run SigNoz. You can sign up &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;here&lt;/a&gt; for a free account and get 30 days of unlimited access to all features.&lt;/p&gt;

&lt;p&gt;You can also install and self-host SigNoz yourself. Check out the &lt;a href="https://signoz.io/docs/install/" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for installing self-host SigNoz.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up OpenTelemetry collector
&lt;/h2&gt;

&lt;p&gt;The OpenTelemetry Collector offers various deployment options to suit different environments and preferences. It can be deployed using Docker, Kubernetes, Nomad, or directly on Linux systems. You can find all the installation options &lt;a href="https://opentelemetry.io/docs/collector/installation" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We are going to walk through the manual installation here and resolve any hiccups along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 - Downloading OpenTelemetry Collector
&lt;/h3&gt;

&lt;p&gt;Download the appropriate binary package for your Linux or macOS distribution from the OpenTelemetry Collector &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-releases/releases" rel="noopener noreferrer"&gt;releases&lt;/a&gt; page. We are using the latest version available at the time of writing this tutorial.&lt;/p&gt;

&lt;p&gt;For macOS (arm64):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--proto&lt;/span&gt; &lt;span class="s1"&gt;'=https'&lt;/span&gt; &lt;span class="nt"&gt;--tlsv1&lt;/span&gt;.2 &lt;span class="nt"&gt;-fOL&lt;/span&gt; https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.116.0/otelcol-contrib_0.116.0_darwin_arm64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2 - Extracting the package
&lt;/h3&gt;

&lt;p&gt;Create a new directory named &lt;code&gt;otelcol-contrib&lt;/code&gt; and then extract the contents of the &lt;code&gt;otelcol-contrib_0.116.0_darwin_arm64.tar.gz&lt;/code&gt; archive into this newly created directory with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;otelcol-contrib &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tar &lt;/span&gt;xvzf otelcol-contrib_0.116.0_darwin_arm64.tar.gz &lt;span class="nt"&gt;-C&lt;/span&gt; otelcol-contrib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3 - Setting up the configuration file
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;config.yaml&lt;/code&gt; file in the &lt;code&gt;otelcol-contrib&lt;/code&gt; folder. This configuration file will enable the collector to connect with MySQL and have other settings, such as the frequency at which you want to monitor the instance.&lt;/p&gt;

&lt;p&gt;📝 Note&lt;/p&gt;

&lt;p&gt;The configuration file should be created in the same directory where you unpacked the &lt;code&gt;otelcol-contrib&lt;/code&gt; binary. If you have installed the binary globally, you can create it on any path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:4317&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:4318&lt;/span&gt;
  &lt;span class="na"&gt;mysql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:3306&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your-root-username&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your-root-password&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;collection_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
    &lt;span class="na"&gt;initial_delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resource/env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;attributes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deployment.environment&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;staging&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;upsert&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;send_batch_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1000&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingest.{region}.signoz.cloud:443"&lt;/span&gt; &lt;span class="c1"&gt;# replace {region} with your region&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;insecure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;signoz-ingestion-key"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{signoz-token}"&lt;/span&gt; &lt;span class="c1"&gt;# Obtain from https://{your-signoz-url}/settings/ingestion-settings&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;verbosity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;detailed&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;telemetry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:8888&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;mysql&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;resource/env&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You would need to replace &lt;code&gt;region&lt;/code&gt; and &lt;code&gt;signoz-token&lt;/code&gt; in the above file with your region (for SigNoz Cloud) and the token obtained from SigNoz Cloud → Settings → Integration Settings. The ingestion key details are also available in the SigNoz Cloud dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9iloteucogy6uc9y18q.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9iloteucogy6uc9y18q.webp" alt="You can find ingestion details in the SigNoz dashboard" width="800" height="376"&gt;&lt;/a&gt;&lt;br&gt;You can find ingestion details in the SigNoz dashboard
  &lt;/p&gt;

&lt;p&gt;Additionally, replace the MySQL username and password as well. In case you are using the &lt;code&gt;docker-compose&lt;/code&gt;-based setup, the username will be &lt;code&gt;root&lt;/code&gt;, and the password will be &lt;code&gt;password123&lt;/code&gt;.&lt;/p&gt;
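&lt;p&gt;One way to avoid committing the token to version control is to keep the &lt;code&gt;{region}&lt;/code&gt; and &lt;code&gt;{signoz-token}&lt;/code&gt; placeholders in the file and fill them in at deploy time. A small, purely illustrative sketch; the &lt;code&gt;SIGNOZ_REGION&lt;/code&gt; and &lt;code&gt;SIGNOZ_INGESTION_KEY&lt;/code&gt; variable names are assumptions, not something SigNoz requires.&lt;/p&gt;

```python
import os

# A fragment of the config above, with the placeholders left in place.
template = (
    'endpoint: "ingest.{region}.signoz.cloud:443"\n'
    '"signoz-ingestion-key": "{signoz-token}"'
)

def render(text, values):
    """Substitute {name} placeholders with the given values."""
    for key, value in values.items():
        text = text.replace("{%s}" % key, value)
    return text

values = {
    "region": os.environ.get("SIGNOZ_REGION", "us"),            # assumption
    "signoz-token": os.environ.get("SIGNOZ_INGESTION_KEY", "<your-key>"),
}
print(render(template, values))
```

&lt;p&gt;In practice you would read the whole &lt;code&gt;config.yaml&lt;/code&gt;, render it, and write the result to the path you pass to the Collector.&lt;/p&gt;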

&lt;p&gt;The above configuration is quite simple. Whenever you wish to monitor a different remote database, all you need to change is the &lt;code&gt;endpoint&lt;/code&gt; URL for the &lt;code&gt;mysql&lt;/code&gt; receiver. You can also monitor multiple MySQL databases by adding multiple receivers, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:4317&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:4318&lt;/span&gt;
  &lt;span class="na"&gt;mysql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysqldb:3306&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password123&lt;/span&gt;
    &lt;span class="na"&gt;collection_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
    &lt;span class="na"&gt;initial_delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
  &lt;span class="na"&gt;mysql/2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;some-remote-database-url:3306&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;remote-username&lt;/span&gt;
    &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-password&lt;/span&gt;
    &lt;span class="na"&gt;collection_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
    &lt;span class="na"&gt;initial_delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;resource/env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;attributes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deployment.environment&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;staging&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;upsert&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;send_batch_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1000&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingest.{region}.signoz.cloud:443"&lt;/span&gt; &lt;span class="c1"&gt;# replace {region} with your region&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;insecure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;signoz-ingestion-key"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{signoz-token}"&lt;/span&gt; &lt;span class="c1"&gt;# Obtain from https://{your-signoz-url}/settings/ingestion-settings&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;verbosity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;detailed&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;telemetry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost:8888&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;mysql&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;mysql/2&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;resource/env&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 - Running the collector service
&lt;/h3&gt;

&lt;p&gt;Every Collector release includes an &lt;code&gt;otelcol&lt;/code&gt; executable that you can run. Now that the configuration is done, we can run the collector service.&lt;/p&gt;

&lt;p&gt;From the &lt;code&gt;otelcol-contrib&lt;/code&gt; directory, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./otelcol-contrib &lt;span class="nt"&gt;--config&lt;/span&gt; ./config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to run it in the background:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./otelcol-contrib &lt;span class="nt"&gt;--config&lt;/span&gt; ./config.yaml &amp;amp;&amp;gt; otelcol-output.log &amp;amp; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$\&lt;/span&gt;&lt;span class="s2"&gt;!"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; otel-pid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
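&lt;p&gt;Since the background command writes the collector’s process ID to &lt;code&gt;otel-pid&lt;/code&gt;, you can also use that file to check whether the collector is still alive before tailing its logs. A minimal sketch (the &lt;code&gt;otel-pid&lt;/code&gt; filename matches the command above):&lt;/p&gt;

```shell
# Probe the PID recorded in otel-pid; kill -0 checks that the process
# exists without actually sending it a signal.
if kill -0 "$(cat otel-pid 2>/dev/null)" 2>/dev/null; then
  echo "collector running"
else
  echo "collector not running"
fi
```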



&lt;h3&gt;
  
  
  Step 5 - Debugging the output
&lt;/h3&gt;

&lt;p&gt;If you want to see the output of the logs we’ve just set up for the background process, you can look it up with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 50 otelcol-output.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;tail -f -n 50&lt;/code&gt; follows the file and prints the last 50 lines of &lt;code&gt;otelcol-output.log&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can stop the collector service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kill "$(&amp;lt; otel-pid)"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should start seeing the metrics on your SigNoz Cloud UI in about 30 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring with SigNoz Dashboard
&lt;/h2&gt;

&lt;p&gt;Once the above setup is done, you will be able to access the metrics in the SigNoz dashboard. You can go to the Dashboards tab and try adding a new panel. You can learn how to create dashboards in SigNoz &lt;a href="https://signoz.io/docs/userguide/manage-dashboards-and-panels/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k7e5sv5l89fsnovkyny.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k7e5sv5l89fsnovkyny.webp" alt="MySQL metrics collected by OTel Collector and sent to Signoz" width="800" height="367"&gt;&lt;/a&gt;&lt;br&gt;MySQL metrics collected by OTel Collector and sent to Signoz
  &lt;/p&gt;

&lt;p&gt;You can easily create charts with &lt;a href="https://signoz.io/docs/userguide/create-a-custom-query/#sample-examples-to-create-custom-query" rel="noopener noreferrer"&gt;query builder&lt;/a&gt; in SigNoz. Here are the &lt;a href="https://signoz.io/docs/userguide/manage-panels/#steps-to-add-a-panel-to-a-dashboard" rel="noopener noreferrer"&gt;steps&lt;/a&gt; to add a new panel to the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0r1hl8uha9bc0rhoueh.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0r1hl8uha9bc0rhoueh.webp" alt="Building a chart to monitor the number of queries executed for each operation type" width="800" height="490"&gt;&lt;/a&gt;&lt;br&gt;Building a chart to monitor the number of queries executed for each operation type
  &lt;/p&gt;

&lt;p&gt;You can build a complete dashboard around various metrics emitted. Here’s a look at a sample dashboard we built out using the metrics collected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cnomusgqbmzkjx6oy26.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cnomusgqbmzkjx6oy26.webp" width="800" height="322"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;You can also create alerts on any metric. Learn how to create alerts &lt;a href="https://signoz.io/docs/userguide/alerts-management/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkla3fu6do9vpvvmmz7d.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkla3fu6do9vpvvmmz7d.webp" alt="Creating alerts using a metric panel" width="424" height="335"&gt;&lt;/a&gt;&lt;br&gt;Creating alerts using a metric panel
  &lt;/p&gt;

&lt;p&gt;For instance, as shown above, you could create an uptime alert that quickly updates you if there is any downtime in the database. If you want to get started quickly with &lt;a href="https://signoz.io/blog/mysql-monitoring-tools/" rel="noopener noreferrer"&gt;MySQL monitoring&lt;/a&gt;, you can load this &lt;a href="https://github.com/SigNoz/dashboards/tree/main/mysql" rel="noopener noreferrer"&gt;MySQL JSON&lt;/a&gt; in the SigNoz dashboard and get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference: MySQL metrics and labels collected by OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;The OpenTelemetry collector receiver connects with your desired MySQL database and gathers the below set of metrics:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Attributes&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;mysql.buffer_pool.data_pages&lt;/td&gt;
&lt;td&gt;Number of data pages for an InnoDB buffer pool&lt;/td&gt;
&lt;td&gt;status&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.buffer_pool.limit&lt;/td&gt;
&lt;td&gt;Configured size of the InnoDB buffer pool&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.buffer_pool.operations&lt;/td&gt;
&lt;td&gt;Number of operations on the InnoDB buffer pool&lt;/td&gt;
&lt;td&gt;operation&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.buffer_pool.page_flushes&lt;/td&gt;
&lt;td&gt;Number of requests to flush pages for the InnoDB buffer pool&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.buffer_pool.pages&lt;/td&gt;
&lt;td&gt;Sum of pages in the InnoDB buffer pool&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.buffer_pool.usage&lt;/td&gt;
&lt;td&gt;Number of bytes in the InnoDB buffer pool&lt;/td&gt;
&lt;td&gt;status&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.double_writes&lt;/td&gt;
&lt;td&gt;Number of writes to the InnoDB doublewrite buffer pool&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.handlers&lt;/td&gt;
&lt;td&gt;Number of requests to various MySQL handlers&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.index.io.wait.count&lt;/td&gt;
&lt;td&gt;Sum of I/O wait events for a particular index&lt;/td&gt;
&lt;td&gt;operation, table, schema, index&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.index.io.wait.time&lt;/td&gt;
&lt;td&gt;Total time of I/O wait events for a particular index&lt;/td&gt;
&lt;td&gt;operation, table, schema, index&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.locks&lt;/td&gt;
&lt;td&gt;Total MySQL locks&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.log_operations&lt;/td&gt;
&lt;td&gt;Number of InnoDB log operations&lt;/td&gt;
&lt;td&gt;operation&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.mysqlx_connections&lt;/td&gt;
&lt;td&gt;Total MySQLx connections&lt;/td&gt;
&lt;td&gt;status&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.opened_resources&lt;/td&gt;
&lt;td&gt;Total opened resources&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.operations&lt;/td&gt;
&lt;td&gt;Total operations including fsync, reads and writes&lt;/td&gt;
&lt;td&gt;operation&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.page_operations&lt;/td&gt;
&lt;td&gt;Total operations on InnoDB pages&lt;/td&gt;
&lt;td&gt;operation&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.prepared_statements&lt;/td&gt;
&lt;td&gt;Number of times each type of prepared statement command was issued&lt;/td&gt;
&lt;td&gt;command&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.row_locks&lt;/td&gt;
&lt;td&gt;Total InnoDB row locks present&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.row_operations&lt;/td&gt;
&lt;td&gt;Total row operations executed&lt;/td&gt;
&lt;td&gt;operation&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.sorts&lt;/td&gt;
&lt;td&gt;Total MySQL sort executions&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.table.io.wait.count&lt;/td&gt;
&lt;td&gt;Total I/O wait events for a specific table&lt;/td&gt;
&lt;td&gt;operation, table, schema&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.table.io.wait.time&lt;/td&gt;
&lt;td&gt;Total wait time for I/O events for a table&lt;/td&gt;
&lt;td&gt;operation, table, schema&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.threads&lt;/td&gt;
&lt;td&gt;Current state of MySQL threads&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.tmp_resources&lt;/td&gt;
&lt;td&gt;Number of temporary resources created&lt;/td&gt;
&lt;td&gt;resource&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.uptime&lt;/td&gt;
&lt;td&gt;Number of seconds since the server has been up&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.client.network.io&lt;/td&gt;
&lt;td&gt;Number of transmitted bytes between server and clients&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.commands&lt;/td&gt;
&lt;td&gt;Total number of executions for each type of command&lt;/td&gt;
&lt;td&gt;command&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.connection.count&lt;/td&gt;
&lt;td&gt;Total connection attempts (including successful and failed)&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.connection.errors&lt;/td&gt;
&lt;td&gt;Errors that occurred during connections&lt;/td&gt;
&lt;td&gt;error&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.joins&lt;/td&gt;
&lt;td&gt;Number of joins that performed table scans&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.mysqlx_worker_threads&lt;/td&gt;
&lt;td&gt;Number of available worker threads&lt;/td&gt;
&lt;td&gt;kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.query.client.count&lt;/td&gt;
&lt;td&gt;Number of statements executed by the server and sent by a client&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.query.count&lt;/td&gt;
&lt;td&gt;Number of statements executed, including statements run by the server&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.query.slow.count&lt;/td&gt;
&lt;td&gt;Number of slow queries&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.replica.sql_delay&lt;/td&gt;
&lt;td&gt;Lag in seconds for the replica compared to the source&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.replica.time_behind_source&lt;/td&gt;
&lt;td&gt;Delay in replication&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.statement_event.count&lt;/td&gt;
&lt;td&gt;Summary of current and recent events&lt;/td&gt;
&lt;td&gt;schema, digest, digest_text, kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.statement_event.wait.time&lt;/td&gt;
&lt;td&gt;Total wait time for the summarized timed events&lt;/td&gt;
&lt;td&gt;schema, digest, digest_text&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.table.lock_wait.read.count&lt;/td&gt;
&lt;td&gt;Total table lock wait read events&lt;/td&gt;
&lt;td&gt;schema, table, kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.table.lock_wait.read.time&lt;/td&gt;
&lt;td&gt;Total table lock wait time for read events&lt;/td&gt;
&lt;td&gt;schema, table, kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.table.lock_wait.write.count&lt;/td&gt;
&lt;td&gt;Total table lock wait write events&lt;/td&gt;
&lt;td&gt;schema, table, kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.table.lock_wait.write.time&lt;/td&gt;
&lt;td&gt;Total table lock wait time for write events&lt;/td&gt;
&lt;td&gt;schema, table, kind&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mysql.table_open_cache&lt;/td&gt;
&lt;td&gt;Number of hits, misses or overflows for open tables cache lookups&lt;/td&gt;
&lt;td&gt;status&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;📝 Note&lt;/p&gt;

&lt;p&gt;Some of the above metrics are specific to the Enterprise edition of MySQL and would not be available in this exercise.&lt;/p&gt;
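&lt;p&gt;If you only need a subset of these metrics, individual metrics can usually be toggled in the receiver configuration. Below is a hedged sketch; exact metric names and defaults vary by collector version, so check the &lt;code&gt;mysqlreceiver&lt;/code&gt; documentation for your release:&lt;/p&gt;

```yaml
receivers:
  mysql:
    endpoint: mysqldb:3306
    username: root
    password: password123
    collection_interval: 10s
    metrics:
      mysql.joins:
        enabled: false   # turn off a metric you don't need
      mysql.query.slow.count:
        enabled: true    # explicitly keep a metric you rely on
```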

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you configured an OpenTelemetry collector to fetch metrics from the MySQL database and visualize them using SigNoz Cloud. You also learned about the variety of MySQL metrics that are available for monitoring.&lt;/p&gt;

&lt;p&gt;Visit our &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;complete guide&lt;/a&gt; on OpenTelemetry Collector to learn more about it. OpenTelemetry is quietly becoming the world standard for open-source observability, and using it gives you advantages such as a single standard for all telemetry signals and freedom from vendor lock-in.&lt;/p&gt;

&lt;p&gt;SigNoz is an open-source &lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;OpenTelemetry-native APM&lt;/a&gt; that can be used as a single backend for all your observability needs.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Further Reading&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;Complete Guide on OpenTelemetry Collector&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;An OpenTelemetry-native APM&lt;/a&gt;&lt;/p&gt;




</description>
      <category>database</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Monitor Prometheus Metrics with OpenTelemetry Collector?</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 04:59:52 +0000</pubDate>
      <link>https://dev.to/ankit01oss/how-to-monitor-prometheus-metrics-with-opentelemetry-collector-36d3</link>
      <guid>https://dev.to/ankit01oss/how-to-monitor-prometheus-metrics-with-opentelemetry-collector-36d3</guid>
      <description>&lt;p&gt;OpenTelemetry provides a component called OpenTelemetry Collector, which can be used to collect data from multiple sources. Prometheus is a popular metrics monitoring tool that has a wide adoption. If you’re using Prometheus SDKs to generate metrics, you can collect them via OpenTelemetry collector and send them to a backend of your choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuewfoyod8bee1zwuwby5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuewfoyod8bee1zwuwby5.webp" alt="Cover Image" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;In this tutorial, you will configure an OpenTelemetry Collector to scrape Prometheus metrics from a sample Flask application and send them to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;We cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#what-is-prometheus" rel="noopener noreferrer"&gt;What is Prometheus?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#what-is-opentelemetry" rel="noopener noreferrer"&gt;What is OpenTelemetry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#what-is-opentelemetry-collector" rel="noopener noreferrer"&gt;What is OpenTelemetry Collector?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#how-does-opentelemetry-collector-collect-data" rel="noopener noreferrer"&gt;How does OpenTelemetry Collector collect data?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#flask-metrics-that-you-can-collect-with-opentelemetry-in-prometheus-format" rel="noopener noreferrer"&gt;Flask Metrics that you can collect with OpenTelemetry in Prometheus format&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#collecting-prometheus-metrics-with-opentelemetry-collector" rel="noopener noreferrer"&gt;Collecting Prometheus Metrics with OpenTelemetry Collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#prerequisites" rel="noopener noreferrer"&gt;Prerequisites&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#set-up-the-flask-application" rel="noopener noreferrer"&gt;Set up the Flask application&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#set-up-signoz" rel="noopener noreferrer"&gt;Set up SigNoz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#set-up-opentelemetry-collector" rel="noopener noreferrer"&gt;Set up OpenTelemetry Collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#monitor-prometheus-metrics-in-signoz" rel="noopener noreferrer"&gt;Monitor Prometheus Metrics in SigNoz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#further-reading" rel="noopener noreferrer"&gt;Further Reading&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to jump straight into implementation, start with the &lt;a href="https://signoz.io/blog/opentelemetry-collector-prometheus-receiver/#prerequisites" rel="noopener noreferrer"&gt;prerequisites&lt;/a&gt; section.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Prometheus?
&lt;/h2&gt;

&lt;p&gt;Prometheus is an open-source metrics monitoring tool. It collects and stores metrics as time-series data (metrics that change over time).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What are metrics?&lt;/p&gt;

&lt;p&gt;Metrics are measurements taken from an application or IT infrastructure that change over time. Examples could be error responses, service requests, response latency, CPU usage, memory usage, etc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Prometheus is a great fit for generating and collecting time-series data, but it is limited to metrics, whereas OpenTelemetry can help generate logs, metrics, and traces, providing a one-stop solution for all your observability data needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is OpenTelemetry?
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is a set of APIs, SDKs, libraries, and integrations that aims to standardize the generation, collection, and management of telemetry data (logs, metrics, and traces).&lt;/p&gt;

&lt;p&gt;OpenTelemetry provides a unified and vendor-agnostic way of collecting telemetry data. Telemetry data has become critical in observing the state of distributed systems. With microservices and polyglot architectures, there was a need to have a global standard. OpenTelemetry aims to fill that space and is doing a great job at it thus far.&lt;/p&gt;

&lt;p&gt;In comparison to Prometheus, OpenTelemetry extends beyond the sole collection of metrics, offering a more comprehensive approach to observability. Using Prometheus to collect metrics from your application involves instrumenting applications with the Prometheus SDK and deploying a Prometheus agent to collect, aggregate, and make these metrics available for monitoring and analysis. But instead of a Prometheus agent, you can use OpenTelemetry Collector to scrape &lt;a href="https://signoz.io/guides/what-are-the-4-types-of-metrics-in-prometheus/" rel="noopener noreferrer"&gt;Prometheus metrics&lt;/a&gt;. OpenTelemetry collector is more advanced and can help you collect logs and traces, too. You can also process the collected data and send it to any backend of your choice that supports OpenTelemetry data.&lt;/p&gt;
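&lt;p&gt;To make this concrete, the Collector’s &lt;code&gt;prometheus&lt;/code&gt; receiver embeds a standard Prometheus &lt;code&gt;scrape_configs&lt;/code&gt; block. A minimal sketch, where the job name and target port are placeholders for your own application:&lt;/p&gt;

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "flask-app"           # placeholder job name
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:5000"] # your app's /metrics endpoint
```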

&lt;h2&gt;
  
  
  What is OpenTelemetry Collector?
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry Collector is a stand-alone service provided by OpenTelemetry. It can be used as a telemetry-processing system with flexible configuration that gathers and processes observability data, such as traces, metrics, and logs, from different parts of a software system. It then sends this data to chosen destinations, allowing for centralized analysis and monitoring. The collector simplifies the task of collecting and exporting telemetry data in cloud-native environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does OpenTelemetry Collector collect data?
&lt;/h2&gt;

&lt;p&gt;Data collection in OpenTelemetry Collector is facilitated through receivers. Receivers are configured via YAML under the top-level &lt;code&gt;receivers&lt;/code&gt; tag. To ensure a valid configuration, at least one receiver must be enabled.&lt;/p&gt;

&lt;p&gt;Below is an example of an &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; receiver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The OTLP receiver accepts data through gRPC or HTTP in the &lt;a href="https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md" rel="noopener noreferrer"&gt;OTLP&lt;/a&gt; format. There are advanced configurations that you can enable via the YAML file.&lt;/p&gt;

&lt;p&gt;Here’s a sample configuration for an OTLP receiver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:4318"&lt;/span&gt;
        &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;allowed_origins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http://test.com&lt;/span&gt;
            &lt;span class="c1"&gt;# Origins can have wildcards with *, use * by itself to match any origin.&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://*.example.com&lt;/span&gt;
          &lt;span class="na"&gt;allowed_headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Example-Header&lt;/span&gt;
          &lt;span class="na"&gt;max_age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;7200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more details on advanced configurations &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once a receiver is configured, it needs to be &lt;strong&gt;enabled&lt;/strong&gt; to start the data flow. This involves setting up &lt;strong&gt;pipelines&lt;/strong&gt; within a &lt;strong&gt;&lt;code&gt;service&lt;/code&gt;&lt;/strong&gt;. A &lt;strong&gt;pipeline&lt;/strong&gt; acts as a streamlined pathway for data, outlining how it should be processed and where it should go. A pipeline comprises the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Receivers:&lt;/strong&gt; These are entry points for data into the OpenTelemetry Collector, responsible for collecting data from various sources and feeding it into the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processors:&lt;/strong&gt; After data is received, processors manipulate, filter, or enhance the data as needed before it proceeds further in the pipeline. They provide a way to customize the data according to specific requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exporters:&lt;/strong&gt; After processing, the data is ready for export. Exporters define the destination for the data, whether it's an external monitoring system, storage, or another service. They format the data appropriately for the chosen output.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below is an example pipeline configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a breakdown of the above metrics pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Receivers:&lt;/strong&gt; This pipeline is configured to receive metrics data from two sources: OTLP and Prometheus. The &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; receiver collects metrics over both gRPC and HTTP, while the &lt;strong&gt;&lt;code&gt;prometheus&lt;/code&gt;&lt;/strong&gt; receiver scrapes metrics from Prometheus-compatible endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processors:&lt;/strong&gt; Metrics data is processed using the &lt;strong&gt;&lt;code&gt;batch&lt;/code&gt;&lt;/strong&gt; processor, which groups metrics into batches before export, optimizing the data flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exporters:&lt;/strong&gt; Metrics processed through this pipeline can be exported to both OTLP and Prometheus destinations. The &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; exporter sends data to an endpoint specified in the configuration, and the &lt;strong&gt;&lt;code&gt;prometheus&lt;/code&gt;&lt;/strong&gt; exporter handles the export of metrics to a Prometheus-compatible destination.&lt;/li&gt;
&lt;/ul&gt;
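&lt;p&gt;The Collector refuses to start if a pipeline references a receiver, processor, or exporter that is not defined at the top level. As an illustration of that rule (not part of the Collector itself), the sketch below validates those references over a plain Python dict standing in for the parsed YAML:&lt;/p&gt;

```python
# Sanity-check that every component referenced in a pipeline is defined
# at the top level of the config. The dict mirrors the YAML structure.
config = {
    "receivers": {"otlp": {}, "prometheus": {}},
    "processors": {"batch": {}},
    "exporters": {"otlp": {}, "prometheus": {}},
    "service": {
        "pipelines": {
            "metrics": {
                "receivers": ["otlp", "prometheus"],
                "processors": ["batch"],
                "exporters": ["otlp", "prometheus"],
            }
        }
    },
}

def undefined_components(config):
    """Return (pipeline, section, name) for every dangling reference."""
    problems = []
    for pipeline, sections in config["service"]["pipelines"].items():
        for section in ("receivers", "processors", "exporters"):
            for name in sections.get(section, []):
                if name not in config.get(section, {}):
                    problems.append((pipeline, section, name))
    return problems

print(undefined_components(config))  # [] means every reference resolves
```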

&lt;h2&gt;
  
  
  Flask Metrics that you can collect with OpenTelemetry in Prometheus format
&lt;/h2&gt;

&lt;p&gt;Monitoring metrics from your Flask applications is crucial for gaining insights into their performance, health, and behavior. In this section, we will look at some Flask metrics and their significance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics
&lt;/h3&gt;

&lt;p&gt;Below are some of the metrics that can be collected or monitored from your Flask applications in Prometheus format.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Metric Name&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;HTTP request duration count&lt;/td&gt;
&lt;td&gt;Counts the number of HTTP requests hitting your Flask application; together with the sum, it lets you derive average request latency.&lt;/td&gt;
&lt;td&gt;flask_http_request_duration_seconds_count&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTTP request duration sum&lt;/td&gt;
&lt;td&gt;Tracks the total time spent processing all HTTP requests, giving an aggregate measure of the server's workload.&lt;/td&gt;
&lt;td&gt;flask_http_request_duration_seconds_sum&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTTP request duration buckets&lt;/td&gt;
&lt;td&gt;Categorizes request durations into specific time ranges or buckets. Useful for identifying performance outliers and understanding the spread of request durations.&lt;/td&gt;
&lt;td&gt;flask_http_request_duration_seconds_bucket&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can find more information &lt;a href="https://github.com/rycus86/prometheus_flask_exporter/tree/master/examples/sample-signals#requests-per-second" rel="noopener noreferrer"&gt;here&lt;/a&gt; on the type of Flask metrics in Prometheus format that can be collected.&lt;/p&gt;
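&lt;p&gt;Because &lt;code&gt;flask_http_request_duration_seconds&lt;/code&gt; is a histogram, dividing the &lt;code&gt;_sum&lt;/code&gt; series by the &lt;code&gt;_count&lt;/code&gt; series gives average request latency. A minimal, stdlib-only sketch of that arithmetic over exposition text (the sample payload below is fabricated for illustration; in practice you would scrape it from the app's &lt;code&gt;/metrics&lt;/code&gt; endpoint):&lt;/p&gt;

```python
# Compute average request latency from Prometheus exposition text.
# The sample payload is made up for illustration only.
sample = """\
flask_http_request_duration_seconds_count{method="GET",path="/",status="200"} 40.0
flask_http_request_duration_seconds_sum{method="GET",path="/",status="200"} 2.5
"""

def histogram_average(text, metric):
    """Average = total observed seconds / number of observations."""
    total = count = 0.0
    for line in text.splitlines():
        if line.startswith(metric + "_sum"):
            total += float(line.rsplit(" ", 1)[1])
        elif line.startswith(metric + "_count"):
            count += float(line.rsplit(" ", 1)[1])
    return total / count if count else 0.0

avg = histogram_average(sample, "flask_http_request_duration_seconds")
print(f"average latency: {avg:.4f}s")  # 2.5s over 40 requests = 0.0625s
```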

&lt;h2&gt;
  
  
  Collecting Prometheus Metrics with OpenTelemetry Collector
&lt;/h2&gt;

&lt;p&gt;In this section, you will set up the &lt;a href="https://signoz.io/guides/opentelemetry-collector-vs-exporter/" rel="noopener noreferrer"&gt;OpenTelemetry collector&lt;/a&gt; to collect metrics from a Flask application in Prometheus format and send the collected data to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python and Flask installed&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;SigNoz cloud&lt;/a&gt; account&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Set up the Flask application
&lt;/h2&gt;

&lt;p&gt;A simple Flask application has been provided; you can access it &lt;a href="https://github.com/SigNoz/opentelemetry-collector-prometheus-receiver-example" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It is configured to export metrics in Prometheus format using the &lt;code&gt;prometheus_flask_exporter&lt;/code&gt; library.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/SigNoz/opentelemetry-collector-prometheus-receiver-example" rel="noopener noreferrer"&gt;Sample Flask Application&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are using your own Flask application instead, integrate Prometheus metrics export into it with the &lt;code&gt;prometheus_flask_exporter&lt;/code&gt; library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;prometheus_flask_exporter&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PrometheusMetrics&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nc"&gt;PrometheusMetrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set up SigNoz
&lt;/h2&gt;

&lt;p&gt;You need a backend to which you can send the collected data for monitoring and visualization. &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; is an OpenTelemetry-native APM that is well-suited for visualizing OpenTelemetry data.&lt;/p&gt;

&lt;p&gt;SigNoz cloud is the easiest way to run SigNoz. You can sign up &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;here&lt;/a&gt; for a free account and get 30 days of unlimited access to all features.&lt;/p&gt;

&lt;p&gt;You can also install and self-host SigNoz yourself. Check out the &lt;a href="https://signoz.io/docs/install/" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for self-hosting SigNoz.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up OpenTelemetry Collector
&lt;/h2&gt;

&lt;p&gt;The OpenTelemetry Collector offers various deployment options to suit different environments and preferences. It can be deployed using Docker, Kubernetes, Nomad, or directly on Linux systems. You can find all the installation options &lt;a href="https://opentelemetry.io/docs/collector/installation" rel="noopener noreferrer"&gt;here&lt;/a&gt;. For the purpose of this article, the OpenTelemetry Collector will be installed manually.&lt;/p&gt;

&lt;h3&gt;
  
  
  Download the OpenTelemetry Collector
&lt;/h3&gt;

&lt;p&gt;Download the appropriate binary package for your Linux or macOS distribution from the OpenTelemetry Collector &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-releases/releases" rel="noopener noreferrer"&gt;releases&lt;/a&gt; page. This tutorial uses v0.89.0, the latest version available at the time of writing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;curl&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;proto&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;=https&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;tlsv1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;fOL&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.89.0/otelcol-contrib_0.89.0_darwin_arm64.tar.gz&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📝 Note: macOS users should download the binary package matching their chip architecture.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Build&lt;/th&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;M1 Chip&lt;/td&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Intel&lt;/td&gt;
&lt;td&gt;amd64 (x86-64)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
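&lt;p&gt;If you are unsure which build applies to your machine, &lt;code&gt;uname&lt;/code&gt; reports the architecture directly:&lt;/p&gt;

```shell
# Prints arm64 on Apple Silicon Macs and x86_64 on Intel machines,
# which map to the arm64 and amd64 collector builds respectively.
uname -m
```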

&lt;h3&gt;
  
  
  Extract the package
&lt;/h3&gt;

&lt;p&gt;Create a new directory named &lt;code&gt;otelcol-contrib&lt;/code&gt; and change into it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;mkdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Extract the contents of the binary package into that directory (the archive was downloaded into the parent directory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;tar&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;xvzf&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib_&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;_darwin_arm&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="err"&gt;.tar.gz&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-C&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up the Configuration file
&lt;/h3&gt;

&lt;p&gt;In the same &lt;code&gt;otelcol-contrib&lt;/code&gt; directory, create a config.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;touch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;config.yaml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the below config into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;receivers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;protocols&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;grpc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
      &lt;span class="nl"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="nl"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nl"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nl"&gt;global&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nl"&gt;scrape_interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Adjust&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt; &lt;span class="nx"&gt;interval&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;needed&lt;/span&gt;
      &lt;span class="nx"&gt;scrape_configs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;job_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flask&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
          &lt;span class="nx"&gt;static_configs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;targets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;127.0.0.1:5000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Adjust&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;Prometheus&lt;/span&gt; &lt;span class="nx"&gt;address&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt;

&lt;span class="nx"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;send_batch_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;
    &lt;span class="nx"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;

&lt;span class="nx"&gt;exporters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ingest.{region}.signoz.cloud:443&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;tls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;insecure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="nx"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Adjust&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;timeout&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;needed&lt;/span&gt;
    &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;signoz-ingestion-key&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;SIGNOZ_INGESTION_KEY&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;verbosity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;detailed&lt;/span&gt;

&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;telemetry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8888&lt;/span&gt;
  &lt;span class="nx"&gt;pipelines&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;receivers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prometheus&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;exporters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;{region}&lt;/code&gt; with the region for your SigNoz cloud account and &lt;code&gt;&amp;lt;SIGNOZ_INGESTION_KEY&amp;gt;&lt;/code&gt; with the ingestion key for your account. You can find these settings in the SigNoz dashboard under &lt;code&gt;Settings &amp;gt; Ingestion Settings&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd67bm1stzf2yknvuxuk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd67bm1stzf2yknvuxuk.webp" alt="You can find ingestion key details under settings tab of SigNoz" width="800" height="376"&gt;&lt;/a&gt;&lt;br&gt;You can find ingestion key details under settings tab of SigNoz
  &lt;/p&gt;
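&lt;p&gt;If you manage several environments, you can keep the placeholders in a template and fill them in per environment. A small stdlib-only sketch (the file layout, the &lt;code&gt;us&lt;/code&gt; region, and the key value here are illustrative, not your real credentials):&lt;/p&gt;

```python
from string import Template

# A fragment of the exporter config with $-style placeholders standing
# in for the {region} and <SIGNOZ_INGESTION_KEY> markers above.
template = Template(
    'endpoint: "ingest.$region.signoz.cloud:443"\n'
    '"signoz-ingestion-key": "$ingestion_key"\n'
)

# Substitute per-environment values (illustrative placeholders).
rendered = template.substitute(region="us", ingestion_key="example-key")
print(rendered)
```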

&lt;p&gt;You can find more information on the Prometheus receiver &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Run the collector service
&lt;/h3&gt;

&lt;p&gt;In the same &lt;code&gt;otelcol-contrib&lt;/code&gt; directory, run the below command to start the collector service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;./otelcol-contrib&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;--config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;./config.yaml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the following, showing the collector has started successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.002&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;service@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/telemetry.go:&lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Setting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;up&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;own&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;telemetry...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.004&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;service@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/telemetry.go:&lt;/span&gt;&lt;span class="mi"&gt;202&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="err"&gt;Serving&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Prometheus&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;metrics&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"address"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost:8888"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Basic"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.005&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;service@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/service.go:&lt;/span&gt;&lt;span class="mi"&gt;143&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;Starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib...&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.89.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"NumCPU"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.005&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;extensions/extensions.go:&lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="err"&gt;Starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;extensions...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.010&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;warn&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;internal@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/warning.go:&lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;Using&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;address&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;exposes&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;server&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;every&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;network&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;interface,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;which&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;may&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;facilitate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span 
class="err"&gt;Denial&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Service&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;attacks&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"receiver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"otlp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"documentation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.011&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;otlpreceiver@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/otlp.go:&lt;/span&gt;&lt;span class="mi"&gt;83&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GRPC&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;server&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"receiver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"otlp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0:4317"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.012&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;warn&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;internal@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/warning.go:&lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;Using&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;address&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;exposes&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;server&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;every&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;network&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;interface,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;which&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;may&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;facilitate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span 
class="err"&gt;Denial&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Service&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;attacks&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"receiver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"otlp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"documentation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.012&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;otlpreceiver@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/otlp.go:&lt;/span&gt;&lt;span class="mi"&gt;101&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="err"&gt;Starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;HTTP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;server&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"receiver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"otlp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0:4318"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.013&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;prometheusreceiver@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/metrics_receiver.go:&lt;/span&gt;&lt;span class="mi"&gt;239&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="err"&gt;Starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;discovery&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;manager&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"receiver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"prometheus"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.072&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;prometheusreceiver@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/metrics_receiver.go:&lt;/span&gt;&lt;span class="mi"&gt;230&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="err"&gt;Scrape&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;job&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;added&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"receiver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"prometheus"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"jobName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"prometheus"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.072&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;service@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/service.go:&lt;/span&gt;&lt;span class="mi"&gt;169&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="err"&gt;Everything&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ready.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Begin&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;running&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;processing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;data.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="mi"&gt;2023-11-21&lt;/span&gt;&lt;span class="err"&gt;T&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;51.072&lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;info&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="err"&gt;prometheusreceiver@v&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/metrics_receiver.go:&lt;/span&gt;&lt;span class="mi"&gt;281&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="err"&gt;Starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;scrape&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;manager&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"kind"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"receiver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"prometheus"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Monitor Prometheus Metrics in SigNoz
&lt;/h2&gt;

&lt;p&gt;Once the collector service has been started successfully, navigate to your SigNoz Cloud account and access the “Dashboard” tab. Click on the “New Dashboard” button to create a new dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcl1bqgzba7iuuzwjkdr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcl1bqgzba7iuuzwjkdr.webp" width="800" height="246"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;To give the dashboard a name, click on “Configure.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezy7wf02rflao4bebjim.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezy7wf02rflao4bebjim.webp" width="800" height="153"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Enter your preferred name in the "Name" input box and save the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gkxfdl34hm0xjzqs6kq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gkxfdl34hm0xjzqs6kq.webp" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Now, you can create various panels for your dashboard. There are three visualization options to display your data: Time Series, Value, and Table formats. Choose the format that best suits the metric you want to monitor. For the first metric, select the “Time Series” visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feinja3kq640oyh6rcvuv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feinja3kq640oyh6rcvuv.webp" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;In the "Query Builder" tab, enter "flask," and you should see various Flask metrics. This confirms that the OpenTelemetry Collector is successfully collecting the Flask metrics and forwarding them to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvo83okzhb4o0xp5nll8.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvo83okzhb4o0xp5nll8.webp" width="800" height="359"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Using the &lt;a href="https://signoz.io/blog/query-builder-v5/" rel="noopener noreferrer"&gt;Query Builder&lt;/a&gt;, you can run queries against your metrics. For the first query, set the below values in the Query Builder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;Telemetry&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;data:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Metrics&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Aggregation:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Rate&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Query:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;flask_http_request_duration_seconds_count&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Filter:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;status=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should look like the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbcrrybi5vinl73r5gr7.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbcrrybi5vinl73r5gr7.webp" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Click on the “Stage and Run Query” button to run the query, then scroll up and click the “Save” button to save your changes.&lt;/p&gt;

&lt;p&gt;You can also query metrics using PromQL. Add a new panel, switch to the PromQL tab, and set the following values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;PromQL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Query:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;histogram_quantile(&lt;/span&gt;&lt;span class="mf"&gt;0.9&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;rate(flask_http_request_duration_seconds_bucket&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;status=&lt;/span&gt;&lt;span class="s2"&gt;"200"&lt;/span&gt;&lt;span class="p"&gt;}[&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="err"&gt;s&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;))&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Legend&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Format:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="err"&gt;path&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should look like the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38e03mbhvs9e9ud6m5kw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38e03mbhvs9e9ud6m5kw.webp" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;This query uses the &lt;code&gt;histogram_quantile&lt;/code&gt; function to compute the 90th percentile of request duration for successful (status code 200) Flask HTTP requests over the last 30 seconds. The “Legend Format” labels each data series with its corresponding path.&lt;/p&gt;
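&lt;p&gt;Other percentiles and derived values follow the same pattern. The queries below are illustrative sketches; they assume the standard &lt;code&gt;_sum&lt;/code&gt; series that histogram-type metrics emit alongside &lt;code&gt;_count&lt;/code&gt; and &lt;code&gt;_bucket&lt;/code&gt;:&lt;/p&gt;

```promql
# Average request duration (seconds) over the last 5 minutes:
# total observed duration divided by the number of requests.
rate(flask_http_request_duration_seconds_sum[5m])
  / rate(flask_http_request_duration_seconds_count[5m])

# 99th percentile across all label combinations, aggregating the
# buckets while keeping only the "le" label that histogram_quantile needs.
histogram_quantile(0.99, sum by (le) (rate(flask_http_request_duration_seconds_bucket[5m])))
```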

&lt;p&gt;Click on the "Stage and Run Query" button, and then save the panel.&lt;/p&gt;

&lt;p&gt;You can repeat the same steps for different metrics you would like to visualize. After creating different panels for various metrics, click on the "Save Layout" button, which enables you to save the current arrangement and configuration of your panels. This ensures that your customized dashboard layout and visualizations are saved for future reference and monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn66hg6e4hqkbjvqlbwg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn66hg6e4hqkbjvqlbwg.webp" alt="Flask Monitoring Dashboard built in SigNoz" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;Flask Monitoring Dashboard built in SigNoz
  &lt;/p&gt;

&lt;p&gt;If you would like to replicate the above dashboard, you can easily do so by copying the JSON file available in this &lt;a href="https://github.com/SigNoz/dashboards/tree/main/flask-monitoring" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;. Import the copied JSON file into a new dashboard, and it will recreate the same layout and configurations for your convenience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you configured an OpenTelemetry Collector to scrape Prometheus metrics from a sample Flask application. You then sent the data to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;Visit our &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;complete guide&lt;/a&gt; on OpenTelemetry Collector to learn more about it.&lt;/p&gt;

&lt;p&gt;OpenTelemetry is becoming a global standard for open-source observability, offering advantages such as a unified standard for all telemetry signals and avoiding vendor lock-in. With OpenTelemetry, instrumenting your applications to collect logs, metrics, and traces becomes seamless, and you can monitor and visualize your telemetry data with SigNoz.&lt;/p&gt;

&lt;p&gt;As SigNoz is a full-stack observability tool, you don't have to use multiple tools for your monitoring needs.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Further Reading&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;Complete Guide on OpenTelemetry Collector&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;An OpenTelemetry-native APM&lt;/a&gt;&lt;/p&gt;




</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>tooling</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Monitor Apache Server Metrics with OpenTelemetry</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 04:59:16 +0000</pubDate>
      <link>https://dev.to/ankit01oss/how-to-monitor-apache-server-metrics-with-opentelemetry-1n0g</link>
      <guid>https://dev.to/ankit01oss/how-to-monitor-apache-server-metrics-with-opentelemetry-1n0g</guid>
      <description>&lt;p&gt;Monitoring Apache web server metrics ensures your web server performs efficiently, securely, and reliably. In this tutorial, you will configure OpenTelemetry Collector to collect Apache metrics and send them to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wc3na3op0jzuln2c5g5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wc3na3op0jzuln2c5g5.webp" alt="Cover Image" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;We cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#what-is-apache" rel="noopener noreferrer"&gt;What is Apache?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#what-is-opentelemetry" rel="noopener noreferrer"&gt;What is OpenTelemetry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#what-is-opentelemetry-collector" rel="noopener noreferrer"&gt;What is OpenTelemetry Collector?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#how-does-opentelemetry-collector-collect-data" rel="noopener noreferrer"&gt;How does OpenTelemetry Collector collect data?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#collecting-apache-metrics-with-opentelemetry-collector" rel="noopener noreferrer"&gt;Collecting Apache Metrics with OpenTelemetry Collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#prerequisites" rel="noopener noreferrer"&gt;Prerequisites&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#setting-up-signoz" rel="noopener noreferrer"&gt;Setting up SigNoz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#setting-up-apache" rel="noopener noreferrer"&gt;Setting up Apache&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#setting-up-opentelemetry-collector" rel="noopener noreferrer"&gt;Setting up OpenTelemetry Collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#monitoring-apache-metrics-with-signoz-dashboard" rel="noopener noreferrer"&gt;Monitoring Apache metrics with SigNoz dashboard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#metrics-and-resource-attributes-for-apache-supported-by-opentelemetry" rel="noopener noreferrer"&gt;Metrics and Resource Attributes for Apache supported by OpenTelemetry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#further-reading" rel="noopener noreferrer"&gt;Further Reading&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to jump straight into implementation, start with the &lt;a href="https://signoz.io/blog/opentelemetry-apache-server-metrics-monitoring/#prerequisites" rel="noopener noreferrer"&gt;prerequisites&lt;/a&gt; section.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Apache?
&lt;/h2&gt;

&lt;p&gt;Apache HTTP Server is an open-source web server that delivers web pages to clients over the internet. It accepts HTTP requests from clients and responds with the requested information in the form of web pages.&lt;/p&gt;

&lt;p&gt;Apache's core function is the delivery of static web pages, but its capabilities extend far beyond this. Through the incorporation of modules, Apache can effortlessly handle dynamic content. Some of the most widely used modules include SSL for establishing secure connections, Server-Side Programming Support (such as PHP) for dynamic content generation, and load-balancing configurations to ensure optimal performance under heavy traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is OpenTelemetry?
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is a set of APIs, SDKs, libraries, and integrations that aims to standardize the generation, collection, and management of telemetry data (logs, metrics, and traces). It is backed by the Cloud Native Computing Foundation and is the leading open-source project in the observability domain.&lt;/p&gt;

&lt;p&gt;The data you collect with OpenTelemetry is vendor-agnostic and can be exported in many formats. Telemetry data has become critical in observing the state of distributed systems. With microservices and polyglot architectures, there was a need to have a global standard. OpenTelemetry aims to fill that space and is doing a great job at it thus far.&lt;/p&gt;
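&lt;p&gt;In practice, vendor neutrality means switching backends usually only requires changing the exporter section of a Collector configuration, while application instrumentation stays untouched. A minimal sketch (the endpoint values below are placeholders, not real destinations):&lt;/p&gt;

```yaml
exporters:
  # Send data to any OTLP-compatible backend (SigNoz, or another vendor)
  # by swapping the endpoint; instrumentation code does not change.
  otlp:
    endpoint: "ingest.example.com:4317"  # placeholder endpoint
  # Or expose the same metrics for a Prometheus server to scrape.
  prometheus:
    endpoint: "0.0.0.0:8889"
```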

&lt;h2&gt;
  
  
  What is OpenTelemetry Collector?
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry Collector is a stand-alone service provided by OpenTelemetry. It is a flexible, configurable telemetry-processing system that gathers observability data, such as traces, metrics, and logs, from different parts of a software system and sends it to chosen destinations for centralized analysis and monitoring. The Collector simplifies collecting and exporting telemetry data in cloud-native environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does OpenTelemetry Collector collect data?
&lt;/h2&gt;

&lt;p&gt;Data collection in OpenTelemetry Collector is facilitated through receivers. Receivers are configured via YAML under the top-level &lt;code&gt;receivers&lt;/code&gt; tag. To ensure a valid configuration, at least one receiver must be enabled.&lt;/p&gt;

&lt;p&gt;Below is an example of an &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; receiver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The OTLP receiver accepts data through gRPC or HTTP in the &lt;a href="https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md" rel="noopener noreferrer"&gt;OTLP&lt;/a&gt; format. There are advanced configurations that you can enable via the YAML file.&lt;/p&gt;

&lt;p&gt;Here’s a sample configuration for an otlp receiver:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:4318"&lt;/span&gt;
        &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;allowed_origins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;http://test.com&lt;/span&gt;
            &lt;span class="c1"&gt;# Origins can have wildcards with *, use * by itself to match any origin.&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://*.example.com&lt;/span&gt;
          &lt;span class="na"&gt;allowed_headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Example-Header&lt;/span&gt;
          &lt;span class="na"&gt;max_age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;7200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more details on advanced configurations &lt;a href="https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once a receiver is configured, it needs to be &lt;strong&gt;enabled&lt;/strong&gt; to start the data flow. This involves setting up &lt;strong&gt;pipelines&lt;/strong&gt; within a &lt;strong&gt;&lt;code&gt;service&lt;/code&gt;&lt;/strong&gt;. A &lt;strong&gt;pipeline&lt;/strong&gt; acts as a streamlined pathway for data, outlining how it should be processed and where it should go. A pipeline comprises the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Receivers:&lt;/strong&gt; These are entry points for data into the OpenTelemetry Collector, responsible for collecting data from various sources and feeding it into the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processors:&lt;/strong&gt; After data is received, processors manipulate, filter, or enhance the data as needed before it proceeds further in the pipeline. They provide a way to customize the data according to specific requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exporters:&lt;/strong&gt; After processing, the data is ready for export. Exporters define the destination for the data, whether it's an external monitoring system, storage, or another service. They format the data appropriately for the chosen output.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below is an example pipeline configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
            &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a breakdown of the above metrics pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Receivers:&lt;/strong&gt; This pipeline is configured to receive metrics data from two sources: OTLP and Prometheus. The &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; receiver collects metrics using both gRPC and HTTP protocols, while the &lt;strong&gt;&lt;code&gt;prometheus&lt;/code&gt;&lt;/strong&gt; receiver gathers metrics from Prometheus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processors:&lt;/strong&gt; Metrics data is processed using the &lt;strong&gt;&lt;code&gt;batch&lt;/code&gt;&lt;/strong&gt; processor. This processor groups metrics into batches before export, reducing the number of outgoing requests and optimizing the data flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exporters:&lt;/strong&gt; Metrics processed through this pipeline are exported to both OTLP and Prometheus destinations. The &lt;strong&gt;&lt;code&gt;otlp&lt;/code&gt;&lt;/strong&gt; exporter sends data to an endpoint specified in the configuration, and the &lt;strong&gt;&lt;code&gt;prometheus&lt;/code&gt;&lt;/strong&gt; exporter handles the export of metrics to a Prometheus-compatible destination.&lt;/li&gt;
&lt;/ul&gt;
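
&lt;p&gt;To make the pipeline above concrete, each component it references must also be declared in its own top-level section of the Collector config. Below is a minimal sketch of such declarations; the scrape job, targets, and endpoints are illustrative placeholders, not values from this tutorial:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}
  prometheus:
    config:
      scrape_configs:
        - job_name: "example-app"            # placeholder scrape job
          static_configs:
            - targets: ["localhost:9090"]    # placeholder target

processors:
  batch: {}

exporters:
  otlp:
    endpoint: "collector.example.com:4317"   # placeholder destination
  prometheus:
    endpoint: "0.0.0.0:8889"                 # placeholder listen address

service:
  pipelines:
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [otlp, prometheus]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A component declared under &lt;code&gt;receivers&lt;/code&gt;, &lt;code&gt;processors&lt;/code&gt;, or &lt;code&gt;exporters&lt;/code&gt; stays inactive until it is listed in a pipeline under &lt;code&gt;service&lt;/code&gt;.&lt;/p&gt;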

&lt;h2&gt;
  
  
  Collecting Apache Metrics with OpenTelemetry Collector
&lt;/h2&gt;

&lt;p&gt;In this section, you will learn how Apache metrics can be collected with the &lt;a href="https://signoz.io/guides/opentelemetry-collector-vs-exporter/" rel="noopener noreferrer"&gt;OpenTelemetry Collector&lt;/a&gt; and how to visualize the collected metrics in SigNoz.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://httpd.apache.org/download.cgi" rel="noopener noreferrer"&gt;Apache Installed&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;SigNoz cloud&lt;/a&gt; account&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📝 Note&lt;/p&gt;

&lt;p&gt;For macOS users, Apache comes preinstalled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up SigNoz
&lt;/h2&gt;

&lt;p&gt;You need a backend to which you can send the collected data for monitoring and visualization. &lt;a href="https://signoz.io/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; is an OpenTelemetry-native APM that is well-suited for visualizing OpenTelemetry data.&lt;/p&gt;

&lt;p&gt;SigNoz cloud is the easiest way to run SigNoz. You can sign up &lt;a href="https://signoz.io/teams/" rel="noopener noreferrer"&gt;here&lt;/a&gt; for a free account and get 30 days of unlimited access to all features.&lt;/p&gt;

&lt;p&gt;You can also install and self-host SigNoz yourself. Check out the &lt;a href="https://signoz.io/docs/install/" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for installing self-hosted SigNoz.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Apache
&lt;/h2&gt;

&lt;p&gt;Once you have installed the Apache web server, confirm it is running on the assigned port:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7emyilb2buuzuoq90dr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7emyilb2buuzuoq90dr.webp" alt="Apache running" width="800" height="149"&gt;&lt;/a&gt;&lt;br&gt;Apache running
  &lt;/p&gt;

&lt;p&gt;Since Apache will be monitored, it has to be configured to expose its metrics. The endpoint at which Apache exposes metrics is &lt;a href="http://localhost/server-status?auto" rel="noopener noreferrer"&gt;http://localhost:80/server-status?auto&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To expose the metrics, open the Apache configuration file for editing, depending on the environment you are in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For macOS:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;nano&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;apache2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;httpd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify that the below config line is enabled and not commented out
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;LoadModule&lt;/span&gt; &lt;span class="nx"&gt;status_module&lt;/span&gt; &lt;span class="nx"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;httpd&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;modules&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mod_status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;so&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;&lt;code&gt;mod_status&lt;/code&gt;&lt;/strong&gt; module is an Apache module that provides a web-based view of server statistics, including information about server performance, current connections, and other relevant details. When enabled, it creates a web page accessible through a browser that displays real-time information about the Apache server's status. It's a useful tool for monitoring and troubleshooting server performance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll to the end of the file and paste the below
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Location&lt;/span&gt; &lt;span class="s"&gt;"/server-status"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      SetHandler server-status
      Require host example.com
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Location&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration block enables the &lt;strong&gt;&lt;code&gt;/server-status&lt;/code&gt;&lt;/strong&gt; endpoint and restricts access to requests from the specified host. This is a security measure to ensure that sensitive server status information is not exposed to the public internet. This is where Apache’s server statistics will be displayed.&lt;/p&gt;

&lt;p&gt;Replace &lt;a href="http://example.com/" rel="noopener noreferrer"&gt;example.com&lt;/a&gt; with the correct domain name where you have Apache running. For instance, if you have Apache running on localhost, it will be as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Location&lt;/span&gt; &lt;span class="s"&gt;"/server-status"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    SetHandler server-status
    Require host localhost
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Location&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check for syntax errors
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apachectl&lt;/span&gt; &lt;span class="nx"&gt;configtest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Restart the server to apply changes
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;sudo&lt;/span&gt; &lt;span class="nx"&gt;apachectl&lt;/span&gt; &lt;span class="nx"&gt;restart&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view Apache server statistics in your browser, navigate to the domain where Apache is running and add the "/server-status" endpoint to the URL. In this example, since Apache is running on localhost, you can access the statistics by visiting &lt;a href="http://localhost/server-status" rel="noopener noreferrer"&gt;localhost/server-status&lt;/a&gt; in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funxy60dzm89l0i12d1ri.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funxy60dzm89l0i12d1ri.webp" alt="Apache Server Status" width="800" height="479"&gt;&lt;/a&gt;&lt;br&gt;Apache Server Status
  &lt;/p&gt;
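
&lt;p&gt;Appending &lt;code&gt;?auto&lt;/code&gt; to the endpoint returns the machine-readable variant that the OpenTelemetry Apache receiver parses. The exact fields and values depend on your Apache version and traffic; the response typically looks roughly like the following (illustrative values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Total Accesses: 128
Total kBytes: 72
Uptime: 3600
ReqPerSec: .0355556
BytesPerSec: 20.48
BusyWorkers: 1
IdleWorkers: 9
Scoreboard: _W________........
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;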

&lt;h2&gt;
  
  
  Setting up OpenTelemetry Collector
&lt;/h2&gt;

&lt;p&gt;The OpenTelemetry Collector offers various deployment options to suit different environments and preferences. It can be deployed using Docker, Kubernetes, Nomad, or directly on Linux systems. You can find all the installation options &lt;a href="https://opentelemetry.io/docs/collector/installation" rel="noopener noreferrer"&gt;here&lt;/a&gt;. For the purpose of this article, the OpenTelemetry Collector will be installed manually.&lt;/p&gt;

&lt;h3&gt;
  
  
  Download the OpenTelemetry Collector
&lt;/h3&gt;

&lt;p&gt;Download the appropriate binary package for your Linux or macOS distribution from the OpenTelemetry Collector &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-releases/releases" rel="noopener noreferrer"&gt;releases&lt;/a&gt; page. We are using the latest version available at the time of writing this tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;curl&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;proto&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;=https&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;tlsv1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;fOL&lt;/span&gt; &lt;span class="nx"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.89.0/otelcol-contrib_0.89.0_darwin_arm64.tar.gz&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📝 Note&lt;/p&gt;

&lt;p&gt;For macOS users, download the binary package specific to your system.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Build&lt;/th&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;M1 Chip&lt;/td&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Intel&lt;/td&gt;
&lt;td&gt;amd64 (x86-64)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Extract the package
&lt;/h3&gt;

&lt;p&gt;Create a new directory named &lt;code&gt;otelcol-contrib&lt;/code&gt; and switch into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;mkdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Extract the contents of the downloaded archive (sitting in the parent directory) into the current directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;tar&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;xvzf&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib_&lt;/span&gt;&lt;span class="mf"&gt;0.89&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;_darwin_arm&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="err"&gt;.tar.gz&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-C&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;otelcol-contrib&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up the Configuration file
&lt;/h3&gt;

&lt;p&gt;In the same &lt;code&gt;otelcol-contrib&lt;/code&gt; directory, create a config.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;touch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;config.yaml&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the below config into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;receivers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;protocols&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;grpc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
      &lt;span class="nl"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="nl"&gt;apache&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nl"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:80/server-status?auto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;collection_interval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;

&lt;span class="nx"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;send_batch_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;
    &lt;span class="nx"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;

&lt;span class="nx"&gt;exporters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ingest.{region}.signoz.cloud:443&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;tls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;insecure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="nx"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Adjust&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;timeout&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;needed&lt;/span&gt;
    &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;signoz-ingestion-key&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;SIGNOZ_INGESTION_KEY&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;verbosity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;detailed&lt;/span&gt;

&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="nx"&gt;telemetry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;address&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;8888&lt;/span&gt;
  &lt;span class="nx"&gt;pipelines&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nx"&gt;receivers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;apache&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;processors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;exporters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;a href="http://localhost/" rel="noopener noreferrer"&gt;http://localhost:80&lt;/a&gt; with the correct endpoint where you have Apache running. Also, replace &lt;code&gt;{region}&lt;/code&gt; with the region for your SigNoz cloud account and &lt;code&gt;&amp;lt;SIGNOZ_INGESTION_KEY&amp;gt;&lt;/code&gt; with the ingestion key for your account. You can find these settings in the SigNoz dashboard under &lt;code&gt;Settings &amp;gt; Ingestion Settings&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd67bm1stzf2yknvuxuk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd67bm1stzf2yknvuxuk.webp" alt="You can find ingestion key details under settings tab of SigNoz" width="800" height="376"&gt;&lt;/a&gt;&lt;br&gt;You can find ingestion key details under settings tab of SigNoz
  &lt;/p&gt;
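
&lt;p&gt;For example, for an account in the US region, the exporter section would look like the following (the region and key below are illustrative placeholders; use the values from your own ingestion settings):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;exporters:
  otlp:
    endpoint: "ingest.us.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-ingestion-key": "your-ingestion-key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;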

&lt;p&gt;You can find more information on OpenTelemetry Apache receiver &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/apachereceiver" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Run the collector service
&lt;/h3&gt;

&lt;p&gt;In the same &lt;code&gt;otelcol-contrib&lt;/code&gt; directory, run the below command to start the collector service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;otelcol&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;contrib&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should receive a similar output to show it has started successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;58.999&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;info&lt;/span&gt;    &lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;telemetry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;     &lt;span class="nx"&gt;Setting&lt;/span&gt; &lt;span class="nx"&gt;up&lt;/span&gt; &lt;span class="nx"&gt;own&lt;/span&gt; &lt;span class="nx"&gt;telemetry&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.000&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;info&lt;/span&gt;    &lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;telemetry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;202&lt;/span&gt;    &lt;span class="nx"&gt;Serving&lt;/span&gt; &lt;span class="nx"&gt;Prometheus&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;address&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;localhost:8888&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;level&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Basic&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.000&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;info&lt;/span&gt;    &lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;143&lt;/span&gt;      &lt;span class="nx"&gt;Starting&lt;/span&gt; &lt;span class="nx"&gt;otelcol&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;contrib&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;     &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Version&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.89.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;NumCPU&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.000&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;info&lt;/span&gt;    &lt;span class="nx"&gt;extensions&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;34&lt;/span&gt;         &lt;span class="nx"&gt;Starting&lt;/span&gt; &lt;span class="nx"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.001&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;warn&lt;/span&gt;    &lt;span class="nx"&gt;internal&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;      &lt;span class="nx"&gt;Using&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt; &lt;span class="nx"&gt;address&lt;/span&gt; &lt;span class="nx"&gt;exposes&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;every&lt;/span&gt; &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;which&lt;/span&gt; &lt;span class="nx"&gt;may&lt;/span&gt; &lt;span class="nx"&gt;facilitate&lt;/span&gt; &lt;span class="nx"&gt;Denial&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;Service&lt;/span&gt; &lt;span class="nx"&gt;attacks&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kind&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; 
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;receiver&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;otlp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;data_type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;metrics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;documentation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.001&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;info&lt;/span&gt;    &lt;span class="nx"&gt;otlpreceiver&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;83&lt;/span&gt;     &lt;span class="nx"&gt;Starting&lt;/span&gt; &lt;span class="nx"&gt;GRPC&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kind&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;receiver&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;otlp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;data_type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;metrics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span 
class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;endpoint&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.0.0.0:4317&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.002&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;warn&lt;/span&gt;    &lt;span class="nx"&gt;internal&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;      &lt;span class="nx"&gt;Using&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt; &lt;span class="nx"&gt;address&lt;/span&gt; &lt;span class="nx"&gt;exposes&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;every&lt;/span&gt; &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;which&lt;/span&gt; &lt;span class="nx"&gt;may&lt;/span&gt; &lt;span class="nx"&gt;facilitate&lt;/span&gt; &lt;span class="nx"&gt;Denial&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;Service&lt;/span&gt; &lt;span class="nx"&gt;attacks&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kind&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; 
&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;receiver&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;otlp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;data_type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;metrics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;documentation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.002&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;info&lt;/span&gt;    &lt;span class="nx"&gt;otlpreceiver&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;otlp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;101&lt;/span&gt;    &lt;span class="nx"&gt;Starting&lt;/span&gt; &lt;span class="nx"&gt;HTTP&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kind&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;receiver&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;otlp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;data_type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;metrics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span 
class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;endpoint&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.0.0.0:4318&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="mi"&gt;2023&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="nx"&gt;T14&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;59.002&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;0100&lt;/span&gt;    &lt;span class="nx"&gt;info&lt;/span&gt;    &lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;v0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;89.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;go&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;169&lt;/span&gt;      &lt;span class="nx"&gt;Everything&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;ready&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Begin&lt;/span&gt; &lt;span class="nx"&gt;running&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;processing&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Monitoring Apache metrics with SigNoz dashboard
&lt;/h2&gt;

&lt;p&gt;Once the collector service has started successfully, navigate to your SigNoz Cloud account and open the “Dashboards” tab. Click the “New Dashboard” button to create a new dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcl1bqgzba7iuuzwjkdr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcl1bqgzba7iuuzwjkdr.webp" width="800" height="246"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;To give the dashboard a name, click on “Configure.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezy7wf02rflao4bebjim.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezy7wf02rflao4bebjim.webp" width="800" height="153"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Enter your preferred dashboard name in the "Name" input box and save the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa97egf4c8i0248xwx37.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxa97egf4c8i0248xwx37.webp" alt="Dashboard Naming" width="800" height="444"&gt;&lt;/a&gt;&lt;br&gt;Dashboard Naming
  &lt;/p&gt;

&lt;p&gt;Now you can create panels for your dashboard. There are three visualization options for displaying your data: Time Series, Value, and Table. Choose the format that best suits the metric you want to monitor. For the first panel, you can opt for the "Time Series" visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtmw39ts8p3e3or2n19z.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtmw39ts8p3e3or2n19z.webp" alt="Dashboard visualization options" width="800" height="444"&gt;&lt;/a&gt;&lt;br&gt;Dashboard visualization options
  &lt;/p&gt;

&lt;p&gt;In the "Query Builder" tab, enter "Apache" and you should see various Apache metrics. This confirms that the OpenTelemetry Collector is successfully collecting the Apache metrics and forwarding them to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5gx8xpzy299c843vy5m.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5gx8xpzy299c843vy5m.webp" alt="Collected Apache metrics for visualization" width="800" height="321"&gt;&lt;/a&gt;&lt;br&gt;Collected Apache metrics for visualization
  &lt;/p&gt;

&lt;p&gt;You can query the collected metrics using the &lt;a href="https://signoz.io/blog/query-builder-v5/" rel="noopener noreferrer"&gt;query builder&lt;/a&gt; and create panels for your dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69jyqfutrkyiabysw07z.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69jyqfutrkyiabysw07z.webp" alt="Monitoring dashboard for Apache" width="800" height="824"&gt;&lt;/a&gt;&lt;br&gt;Monitoring dashboard for Apache
  &lt;/p&gt;

&lt;p&gt;Visit the SigNoz &lt;a href="https://signoz.io/docs/userguide/manage-dashboards-and-panels/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; to learn more about creating dashboards and running queries. To replicate the dashboard shown above, use the JSON configuration &lt;a href="https://github.com/SigNoz/dashboards/tree/main/apache-web-server" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Import the copied JSON file into a new dashboard, and it will recreate the same layout and configuration.&lt;/p&gt;

&lt;p&gt;Besides setting up dashboards to monitor your Apache metrics, you can create alerts for the metrics you query. Click the drop-down on a panel in your dashboard, then click “Create Alerts.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ntlzgdh9hwn30nhekdv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ntlzgdh9hwn30nhekdv.webp" alt="Create alerts on important Apache metrics" width="800" height="344"&gt;&lt;/a&gt;&lt;br&gt;Create alerts on important Apache metrics
  &lt;/p&gt;

&lt;p&gt;This takes you to the alerts page, where you can configure your alerts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics and Resource Attributes for Apache supported by OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;The following metrics and resource attributes for Apache can be collected by the OpenTelemetry Collector.&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics
&lt;/h3&gt;

&lt;p&gt;These metrics are enabled by default. The collector provides many metrics that you can use to monitor how your Apache server is performing and to spot when something is wrong.&lt;/p&gt;
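For reference, a minimal collector configuration that scrapes these metrics with the Apache receiver might look like the following sketch. The endpoint assumes Apache's mod_status module is enabled and served at localhost:8080; adjust both for your setup.

```yaml
receivers:
  apache:
    # mod_status must be enabled and its machine-readable output reachable here
    endpoint: "http://localhost:8080/server-status?auto"
    # How often the receiver scrapes the status page
    collection_interval: 10s
```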

&lt;p&gt;&lt;strong&gt;Key Terms for Metrics &amp;amp; Attributes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Metric Name:&lt;/strong&gt; The name of the metric is a unique identifier that distinguishes it from other metrics. It is also how you reference and organize metrics in SigNoz.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metric Type:&lt;/strong&gt; The metric type defines the kind of data the metric represents. Some common &lt;a href="https://signoz.io/docs/metrics-management/types-and-aggregation/" rel="noopener noreferrer"&gt;metric types&lt;/a&gt; include gauge, counter, sum, and histogram.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value Type:&lt;/strong&gt; The value type indicates the type of data that is used to represent the value of the metric. Some common value types are integer and double.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unit:&lt;/strong&gt; The unit specifies the measurement unit associated with the metric, such as bytes or seconds (or none for dimensionless values). It helps in interpreting and comparing metric values.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporality:&lt;/strong&gt; Temporality indicates how a metric's values relate to time: cumulative metrics report running totals since a fixed start point, while delta metrics report the change since the previous measurement. Knowing the temporality is crucial for aggregating values correctly and analyzing trends over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monotonic:&lt;/strong&gt; The monotonic flag indicates whether the metric value can only increase (it never decreases, except on reset). A monotonic metric is useful for tracking running totals, such as the total count of events or requests.&lt;/li&gt;
&lt;/ul&gt;
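To make temporality and monotonicity concrete, here is a small standalone Python sketch (not part of any OpenTelemetry SDK) that converts cumulative readings of a monotonic sum, such as apache.requests, into per-interval deltas:

```python
def cumulative_to_delta(samples):
    """Convert cumulative readings (running totals since server start)
    into per-interval deltas. For a monotonic sum such as apache.requests,
    every delta is >= 0 unless the counter was reset (e.g. a restart)."""
    deltas = []
    previous = None
    for value in samples:
        if previous is not None:
            deltas.append(value - previous)
        previous = value
    return deltas

# Four successive scrapes of a cumulative request counter:
print(cumulative_to_delta([100, 150, 150, 230]))  # [50, 0, 80]
```

The same arithmetic is what a backend performs internally when it renders a rate panel from a cumulative counter.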

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metrics&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Metrics Name&lt;/th&gt;
&lt;th&gt;Metric Type&lt;/th&gt;
&lt;th&gt;Value Type&lt;/th&gt;
&lt;th&gt;Unit&lt;/th&gt;
&lt;th&gt;Aggregation Temporality&lt;/th&gt;
&lt;th&gt;Monotonic&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU Load&lt;/td&gt;
&lt;td&gt;CPU load on the Apache server&lt;/td&gt;
&lt;td&gt;apache.cpu.load&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Double&lt;/td&gt;
&lt;td&gt;%&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU Time&lt;/td&gt;
&lt;td&gt;Cumulative CPU time consumed by Apache processes&lt;/td&gt;
&lt;td&gt;apache.cpu.time&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Double&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{jiff}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;True&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Current Connections&lt;/td&gt;
&lt;td&gt;Total current connections to the Apache server&lt;/td&gt;
&lt;td&gt;apache.current_connections&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{connections}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;False&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Server Load&lt;/td&gt;
&lt;td&gt;Load on the Apache server&lt;/td&gt;
&lt;td&gt;apache.load&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Double&lt;/td&gt;
&lt;td&gt;%&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Request Time&lt;/td&gt;
&lt;td&gt;Cumulative time taken to process Apache requests&lt;/td&gt;
&lt;td&gt;apache.request.time&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;ms&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;True&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total Requests&lt;/td&gt;
&lt;td&gt;Total number of requests handled by the Apache server&lt;/td&gt;
&lt;td&gt;apache.requests&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{requests}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;True&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scoreboard Metrics&lt;/td&gt;
&lt;td&gt;Cumulative count of workers in different states&lt;/td&gt;
&lt;td&gt;apache.scoreboard&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{workers}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;False&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traffic Metrics&lt;/td&gt;
&lt;td&gt;Cumulative traffic handled by the Apache server&lt;/td&gt;
&lt;td&gt;apache.traffic&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;By&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;True&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uptime&lt;/td&gt;
&lt;td&gt;Total uptime of the Apache server&lt;/td&gt;
&lt;td&gt;apache.uptime&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;s&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;True&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Workers&lt;/td&gt;
&lt;td&gt;Total count of Apache server workers&lt;/td&gt;
&lt;td&gt;apache.workers&lt;/td&gt;
&lt;td&gt;Sum&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;&lt;code&gt;{workers}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cumulative&lt;/td&gt;
&lt;td&gt;False&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can visit the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/apachereceiver/documentation.md#apache" rel="noopener noreferrer"&gt;Apache receiver&lt;/a&gt; GitHub repo to learn more about these metrics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource Attributes
&lt;/h3&gt;

&lt;p&gt;Resource attributes are a set of key-value pairs that provide additional context about the source of a metric. They are used to identify and classify metrics, and to associate them with specific resources or entities within a system.&lt;/p&gt;

&lt;p&gt;The attributes below are enabled by default for Apache.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Values&lt;/th&gt;
&lt;th&gt;Enabled&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;apache.server.name&lt;/td&gt;
&lt;td&gt;The name of the Apache HTTP server.&lt;/td&gt;
&lt;td&gt;Any Str&lt;/td&gt;
&lt;td&gt;true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;apache.server.port&lt;/td&gt;
&lt;td&gt;The port of the Apache HTTP server.&lt;/td&gt;
&lt;td&gt;Any Str&lt;/td&gt;
&lt;td&gt;true&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can see these resource attributes in the &lt;a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/apachereceiver/documentation.md#resource-attributes" rel="noopener noreferrer"&gt;OpenTelemetry Collector Contrib&lt;/a&gt; repo for the Apache receiver.&lt;/p&gt;
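Like the metrics, resource attributes can typically be toggled in the receiver configuration. A sketch that disables the port attribute, assuming the standard metadata-driven configuration used by contrib receivers, might look like this:

```yaml
receivers:
  apache:
    endpoint: "http://localhost:8080/server-status?auto"
    resource_attributes:
      apache.server.port:
        enabled: false  # drop the port attribute from emitted metrics
```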

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, you configured an OpenTelemetry collector to export metrics data from an Apache web server. You then sent the data to SigNoz for monitoring and visualization.&lt;/p&gt;

&lt;p&gt;Visit our &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;complete guide&lt;/a&gt; on OpenTelemetry Collector to learn more about it.&lt;/p&gt;

&lt;p&gt;OpenTelemetry is becoming a global standard for open-source observability, offering advantages such as a unified standard for all telemetry signals and avoiding vendor lock-in. With OpenTelemetry, instrumenting your applications to collect logs, metrics, and traces becomes seamless, and you can monitor and visualize your telemetry data with SigNoz.&lt;/p&gt;

&lt;p&gt;SigNoz is an open-source &lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;OpenTelemetry-native APM&lt;/a&gt; that can be used as a single backend for all your observability needs.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Further Reading&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;Complete Guide on OpenTelemetry Collector&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/blog/opentelemetry-apm/" rel="noopener noreferrer"&gt;An OpenTelemetry-native APM&lt;/a&gt;&lt;/p&gt;




</description>
      <category>devops</category>
      <category>monitoring</category>
      <category>performance</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>OpenTelemetry Webinars - The Trace API</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 04:58:40 +0000</pubDate>
      <link>https://dev.to/ankit01oss/opentelemetry-webinars-the-trace-api-3jf9</link>
      <guid>https://dev.to/ankit01oss/opentelemetry-webinars-the-trace-api-3jf9</guid>
      <description>&lt;p&gt;Join &lt;a href="https://github.com/serverless-mom" rel="noopener noreferrer"&gt;Nočnica Mellifera&lt;/a&gt; and &lt;a href="https://github.com/srikanthccv/" rel="noopener noreferrer"&gt;Srikanth&lt;/a&gt; to talk in detail about the OpenTelemetry Trace API. We'll talk about adding spans, events, attributes and other extra info, whether it's really possible to replace logs with traces, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary of the Talk
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/wnmyXAMqoJk"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Find the conversation transcript below.👇&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; I've been working with my dog lately; it's a great dog at the house. But it was a pandemic dog, one of those dogs you get during the pandemic, so it's not great with new people and new situations. Very loving and sweet around the house, but incredibly nervous outside.&lt;/p&gt;

&lt;p&gt;Doesn't bark at anybody but just wants to go home. We're trying to get it to associate new situations with nice things, with treats, with hearing a nice sound, and being like, "Hey, come over here." If you hear a bang or see a car drive towards you, come over here, and you can get a nice treat. Come nibble on this. I just wish someone would do that for me. I wish someone would follow me around and be like, "Hey, do you have a rough day? Here, have a little piece of candy. It'll cheer you up." And I'll start to associate difficult situations with good things happening.&lt;/p&gt;

&lt;p&gt;It's time for the OpenTelemetry webinar. Thank you so much for joining us. I've started telling a little anecdote at the beginning of the show instead of saying, "I think we're streaming," because it gives me time to check that we are live, and we are live. Let me hide this for a second. I promise that we are going to talk about the &lt;a href="https://signoz.io/blog/opentelemetry-spans/" rel="noopener noreferrer"&gt;OpenTelemetry Trace&lt;/a&gt; API.&lt;/p&gt;

&lt;p&gt;We have Srikanth as our guest. Srikanth, introduce yourself and tell them what you do for the &lt;a href="https://signoz.io/docs/introduction/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Hey, everyone. I'm Srikanth. I work at SigNoz. I mostly work on the metrics and alerting parts of the SigNoz system. I also spend quite an amount of time working on OpenTelemetry. I used to be a maintainer for the &lt;a href="https://signoz.io/docs/instrumentation/opentelemetry-python/" rel="noopener noreferrer"&gt;OpenTelemetry Python&lt;/a&gt; SDK and currently work on OpAMP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; That's awesome. You're here today because we had an episode about the API in general and the division between the API and the SDK. It became obvious that we wanted to talk a little bit more in-depth about multiple aspects of this API.&lt;/p&gt;

&lt;p&gt;I wanted to talk today a little bit about some of the questions that I had about tracing. As you're joining this call, please feel free to pop into the comment section and add your questions. We'll ask some of those here, but these are the ones that I wrote. We have a little walkthrough through it. How do you want to go through this? Do you want to just start with your slides first, or however you'd like to do it?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuvo1mebqg7mrj2sgxwl.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuvo1mebqg7mrj2sgxwl.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Let's start with the slides first.&lt;/p&gt;

&lt;p&gt;I will give a brief overview of this, like what the agenda is. In the last session, we looked into what are the different aspects of OpenTelemetry, the major parts of OpenTelemetry, what is API, what is SDK, what the collector does, and who cares about what. Today, we want to talk about the Trace API in detail.&lt;/p&gt;

&lt;p&gt;This is particularly relevant for the folks who get some Telemetry data out of the auto instrumentation but to make full use of OpenTelemetry's potential, you need to start instrumenting your code. Auto instrumentation is great to get started, but at some point, you need to start manual instrumentation. That's where having some idea about the trace API gets into the picture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; One question I had was, do you think everybody is going to end up doing some amount of manual instrumentation? My understanding is that it probably makes sense to do some kind of hybrid, where it's not that you're saying, "Oh, we use auto instrumentation, and then you pull it out and start all your traces manually." It usually makes more sense to say, "Hey, you have automatic instrumentation running, maybe kicking off most of your traces, and then you're adding to it with manual calls." Is that generally what you would expect to see?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; That's correct. With manual instrumentation, you probably won't start the trace yourself because auto instrumentation is at the forefront; it already starts the trace somewhere within the framework. You would generally add a span to certain parts of your codebase or application not covered by auto instrumentation. Let's say you have some auto instrumentation spans at the start of the framework request, but there's nothing beyond that. You have certain functionality within your application that is not traced, but you would like it to be, because that's where the most contextual information is available. There you generally start a span, and since there is already a trace started upstream, the span you're generating within your application is tied to that existing trace. You generally don't create a new trace on your own unless there's a very specific requirement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; This makes some sense. There are always going to be chunks of your codebase that aren't hit by auto-instrumentation at all. For example, if you're handling web requests in some kind of web server, kicking off a request is pretty straightforward. There should be a million little hints to the people who created this automatic instrumentation library that, "Yeah, I think a request is starting now. I think we're handling something that we want to call a new trace."&lt;/p&gt;

&lt;p&gt;So, you're very often adding spans, not saying, "Hey, this trace is just getting missed." Obviously, in my time, I've seen examples where it's like, "Oh, this one type is not showing up," and I'm sorry. We can hop back to the presentation. I didn't mean to get us off track with my questions already, but yeah, we can alternate between the questions on the slides.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Nice. All right, next slide, please.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytuwhts4mqcuk2ozm51r.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytuwhts4mqcuk2ozm51r.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Okay. I want to give a brief idea about when you want to start manual instrumentation. Having some understanding of the concepts helps you work with manual instrumentation well, especially when it comes to the tracing API of OpenTelemetry. These are the main parts: a tracer provider, a context, a tracer, and a span.&lt;/p&gt;

&lt;p&gt;We'll go through them to give a brief idea about what each one does. The tracer provider gives you access to the tracer, and the tracer gives you access to the span. It's a hierarchy where you have a provider that gives you tracers, and then the tracer object enables you to create spans.&lt;/p&gt;
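The hierarchy described here can be sketched with plain Python classes. This is a conceptual illustration of the provider → tracer → span relationship, not the real OpenTelemetry SDK; the names mirror the API but the classes are simplified stand-ins:

```python
class Span:
    """A single traced operation; child spans point at their parent."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

class Tracer:
    """Scoped to an instrumentation name/version, as in the OTel API."""
    def __init__(self, name, version=None, schema_url=None):
        self.name = name
        self.version = version
        self.schema_url = schema_url

    def start_span(self, name, parent=None):
        return Span(name, parent)

class TracerProvider:
    """The entry point: hands out (possibly scoped) tracers."""
    def get_tracer(self, name, version=None, schema_url=None):
        return Tracer(name, version, schema_url)

# Provider -> tracer -> spans, mirroring the hierarchy in the talk:
provider = TracerProvider()
tracer = provider.get_tracer(
    "checkout", "1.0.0",
    schema_url="https://opentelemetry.io/schemas/1.14.0")
root = tracer.start_span("handle-request")
child = tracer.start_span("charge-card", parent=root)
print(child.parent.name)  # handle-request
```

In the real SDK the provider also carries configuration such as the resource and span processors, which is another reason the API starts from a provider rather than letting you create spans directly.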

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqv0o756dn13qa2x06ao.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqv0o756dn13qa2x06ao.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Let's talk about the tracer provider. The tracer provider, as I mentioned, is the object that helps you get tracers. You might wonder why there's a need for a provider. Why not just start the span? Why have this hierarchy structure where you obtain the tracer object first and then try to create it later?&lt;/p&gt;

&lt;p&gt;This gives you an even more granular level of telemetry scoping. Let's say you have numerous modules within your system, and you want to scope each tracer. Each tracer that you create has a name associated with it, along with a version, a schema URL, and a set of attributes. The provider allows you to have one global tracer object or multiple scoped tracer objects.&lt;/p&gt;

&lt;p&gt;For instance, if you have code in one module following old semantic conventions and another following new semantic conventions, you may want to create two different tracer objects. The schema URL represents a semantic convention.&lt;/p&gt;

&lt;p&gt;For example, if you provide the schema URL for semantic conventions version 1.14.0, any data sent to the backend will have this information associated with it. This helps distinguish the data produced by the scoped tracer following semantic conventions 1.14.0. But in practice, unless you have these requirements, you would generally use one global tracer object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; This might be like having a single service but feeling like there are two separate services, say, an internal and an external version of some account detail service. We don't want those traces mixed; they don't make sense together.&lt;/p&gt;

&lt;p&gt;This gets us to the basic question of how much work you should put into making sure your code exposes observability correctly within your codebase. If it makes sense for your organization and for how your data travels through your system, then that's something to think about during custom instrumentation or when setting up instrumentation.&lt;/p&gt;

&lt;p&gt;If it's like, "Oh, this doesn't make sense, and I want to add explanatory details," maybe there should be a better way to do that than editing your codebase. But it makes sense to set up your observability at first. A perfect example of having scoped tracers to define what we consider an actual single service here, even though it's part of a larger system getting tracers along with something else.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyvydsn4x33xqvo32ioo.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyvydsn4x33xqvo32ioo.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;In general, you may not need this many scoped tracers in practice, but it is possible to have different tracers for different purposes. For example, within a certain package, you might want to attribute spans to a specific scope, like com.mycompany.package.&lt;/p&gt;

&lt;p&gt;If you want even further granularity, you could use com.mycompany.module. It's up to you whether, from a business or code perspective, you want to use scope separation between the tracer and the associated spans. You can certainly use it if it makes sense, but if not, you can have one global tracer used everywhere within your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk87v24r7l196n9e1rx30.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk87v24r7l196n9e1rx30.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Next, I want to talk about the context. This is important for you to know because tracing is based on spans connected by parent-child relationships. This relationship is preserved based on context, known as context propagation.&lt;/p&gt;

&lt;p&gt;There are two types: in-process and out-of-process context propagation. Let's focus on in-process context propagation, which needs to be managed manually. Out-of-process context propagation is automatically taken care of by the client libraries.&lt;/p&gt;

&lt;p&gt;In in-process context propagation, when you start a span in function foo() and another in the function bar(), you want to establish a parent-child relationship.&lt;/p&gt;

&lt;p&gt;In languages like Go, explicit &lt;a href="https://signoz.io/blog/opentelemetry-context-propagation/" rel="noopener noreferrer"&gt;context propagation&lt;/a&gt; is required. In languages like Python, JavaScript, and Java, implicit context propagation happens via thread locals, and you don't need to interact with the context directly.&lt;/p&gt;

&lt;p&gt;In Go, you need to interact with the context. There are two functions to be aware of: SpanFromContext and ContextWithSpan. SpanFromContext gives you the span within a given context object; if the span doesn't exist, it returns nil. You use this object to add attributes, events, etc. ContextWithSpan is used to create a new context object with a span for in-process context propagation.&lt;/p&gt;

&lt;p&gt;If you have three functions, for example, and you want to start a span in functions one and three but not in two, you need to propagate the parent from function one to function three. This is done by creating a new context object with the span and passing it down the function calls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; If you get back just a span from SpanFromContext, can you add a child to that span object?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yes. When you start a new span with the received context object, it automatically adds it as a child, so you don't need to do it manually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8zm5vm87d77xbxetfo9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8zm5vm87d77xbxetfo9.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;If you want to add a child, just start a new span with the given context. If you want to add certain attributes or perform certain operations on the existing span object, you will use SpanFromContext; otherwise, you just start a span with the given context object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; So, yeah, the reason you'd be loading the span would be to ask some questions like, "Hey, what span am I currently in? Let me find that value."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yes. There can be one span as part of the context object. If you send it down to some function and start a new span with that context, the parent-child relationship is automatically formed.&lt;/p&gt;

&lt;p&gt;If you don't use that context, that's where broken traces come from, and you need to start looking into it. You need to use the context that has been given to you; otherwise, what should be a single trace becomes a broken trace in your application. So, you get a context with the span, pass it down to the functions where you want a parent-child relationship, and use the given context object so that the trace continues without any broken spans.&lt;/p&gt;

&lt;p&gt;This slide is another example where you don't want to create a new span but rather want to pass the context down, add some more attributes or metadata, and then end the span in that function. The context you receive here holds the parent span, and what you do is call SpanFromContext. It extracts the span object out of the context given to this function, and then you can set attributes and end the span there, or continue passing the context object down to another set of functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz79lbgfv5qv32a8rw60.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz79lbgfv5qv32a8rw60.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Yeah. So, that was about the context, having proper context propagation to avoid broken traces. If you have broken traces or need to connect spans that aren't connected, the go-to place is to look at the context and ensure that it's propagated properly. In many languages, you don't need to worry about it, but in languages without implicit context propagation, you should take care of that detail. Next up, we have the Tracer.&lt;/p&gt;

&lt;p&gt;We've seen that there's a provider that gives you access to the Tracer, and the Tracer is the object responsible for creating spans. You cannot instantiate a span directly in OpenTelemetry; it has to happen via the Tracer object.&lt;/p&gt;

&lt;p&gt;The Tracer object has two primary operations: StartAsCurrentSpan and StartSpan. StartAsCurrentSpan starts the span and sets it in the context. It's a convenient method to avoid manually setting the span inside the context. The StartSpan operation creates a span and gives you an instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3kxb3p7bmk9u3p0btgu.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3kxb3p7bmk9u3p0btgu.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;In the slides, I used multiple languages just to give the audience a brief idea, and I put together some examples. If you look at the first span in this example, you have &lt;code&gt;tracer.start_as_current_span&lt;/code&gt;. By using StartAsCurrentSpan, you don't need to manually work with the context or worry about whether the parent-child relationship will be preserved. The convenience wrapper does the work for you: it creates the span and sets it into the context. Anytime a child span starts, the parent-child relationship is preserved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; I was name-checking this in Hazel Weekly's talk four weeks ago; I'll share a link to the video once it's posted. I didn't think you could create these parent-child relationships or connect spans after the fact, like you can't do it with a collector setting in &lt;a href="https://signoz.io/blog/what-is-opentelemetry/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're seeing orphan spans or broken traces, you need to go fix it within your code instrumentation. It's not currently practical to try and fix it afterward.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwdp0jee3q3t5y87b9v1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwdp0jee3q3t5y87b9v1.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; You cannot do that at the collector level because the collector does not know how to have this parent-child relationship. It doesn't know how to connect these spans. Any broken relationship has to be fixed in the application where you have instrumented.&lt;/p&gt;

&lt;p&gt;This is another example where we are not using StartAsCurrentSpan; we are just using StartSpan. If you use StartSpan, it is your responsibility to make sure that the context is managed properly.&lt;/p&gt;

&lt;p&gt;So, let's say you get the parent and set it as the span in the context. You have to pass the context object to the child span.&lt;/p&gt;

&lt;p&gt;If you remember, we did not have this line of &lt;code&gt;context = set_span_in_context&lt;/code&gt; in the previous code. That is because that convenience method does the work for you. But if there are certain use cases where you don't want to start a span as the current span, this is how you would have to do it.&lt;/p&gt;

&lt;p&gt;It could be because when you use StartAsCurrentSpan, the start time is automatically set to that point in time. But if you use &lt;code&gt;StartSpan&lt;/code&gt;, you have the ability to pass the start timestamp.&lt;/p&gt;

&lt;p&gt;So, when did this span start? You are not forced to use the current timestamp; it gives you the ability to say exactly when this span started. That's an advantage. There can be certain use cases where you don't want to start a span immediately but with some custom start time that is not the current timestamp. In those use cases, make sure the context is still handled properly; otherwise, you would get a broken trace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Yeah, that's interesting. I didn't actually realize that that was possible. I've seen that, like, it's an asynchronous process. There's a network between you and the database, but we don't really care about that. And so it's like, "Oh, I wish I could start it when the database actually starts to handle it," you know? So it's like not the current time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; I've seen users with use cases where they had to pass custom timestamps, and they used this pretty extensively because they get the data from somewhere else. They don't immediately have the tracing information, but they still want to have the traces. So they get the start time from the database or some other means.&lt;/p&gt;

&lt;p&gt;They ask, "When should I start this operation?" They get that start time from somewhere and use that information to start spans. And in the end, when you look at the data in your tracing backend, you cannot even tell whether the start time was customized or not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; So, you get a DB return that says, "Hey, this was the time I received this request, and here's the response," and you want to add that span in the handler for the return. So it's like, no, don't start it now, because I'm just finishing now. Don't always make the span length a millisecond; let's show what the actual return time was. So, yeah, that's pretty cool. I didn't realize that was an option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8erk6amqbwcyxr338npm.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8erk6amqbwcyxr338npm.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Now we have seen the Tracer object. I'm just repeating myself: there's a provider that gives you access to the Tracer, and then there's a Tracer object that helps you start spans. The entire work of manual instrumentation revolves around the span. Anything that you do will be on the span object, and there are several operations you can perform on it. I've listed them, and we'll go through them one by one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpt75bgftzp9yngmj2x7.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpt75bgftzp9yngmj2x7.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Now, we have GetContext. So, what does it give you? A span has a context associated with it. What does that SpanContext contain? It contains a TraceID, a SpanID, a TraceState, trace flags, and an IsRemote flag. Every span you create has this information: is this span remote or not?&lt;/p&gt;

&lt;p&gt;When a span is a remote span, it means its context was propagated from elsewhere. Let's say you have two services, and service B is not creating a trace on its own; it is just continuing the trace that originated in service A. So, in service B, how do you know that it should not start a new trace but should just continue the existing one?&lt;/p&gt;

&lt;p&gt;What happens is, under the hood, service B creates a dummy span that has the IsRemote set to true. That indicates, "Okay, someone has already started a trace journey. I will create a dummy span with IsRemote set to true, and it also has the parent TraceID and SpanID, and it sets that span in the context and starts the span so that the trace journey continues."&lt;/p&gt;

&lt;p&gt;This GetContext helps you get the TraceID, &lt;a href="https://signoz.io/comparisons/opentelemetry-trace-id-vs-span-id/" rel="noopener noreferrer"&gt;SpanID&lt;/a&gt;, and IsRemote information. For some business use cases, where you want to take this TraceID, SpanID, let's say, you started a span, and you want to give this SpanID to some external system or attach this SpanID to one of your business intelligence reports, you would use this &lt;code&gt;GetContext&lt;/code&gt;. You call &lt;code&gt;GetContext&lt;/code&gt; on the span object, and you get the span context. This span context has the TraceID, SpanID, TraceState, and metadata.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuogsecqkgwus52o9a60.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuogsecqkgwus52o9a60.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Here, in the example, the first line is &lt;code&gt;span.get_span_context()&lt;/code&gt;, which gives you the span context object. We have a span_id that gets the SpanID from the context. One common use case is when you want to include trace contextual information, such as TraceID and SpanID, as part of your logs. You use &lt;code&gt;GetContext&lt;/code&gt; to access the SpanID and TraceID, and you can include them in your logs. This helps correlate the traces generated with the logs sent to the log backend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; I hadn't realized that you could set your start time on a span, but the reason I always knew to get your current span was to connect and get out your TraceID and SpanID. So, when I'm logging this, I'll be able to connect it to this trace easily.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvrlujeve8rf94shhcl.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnvrlujeve8rf94shhcl.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Now, we have another operation that is not very well known. There's something called &lt;code&gt;IsRecording&lt;/code&gt;. It tells you whether the span is recording data or not. If &lt;code&gt;IsRecording&lt;/code&gt; is false, any operation on the span object will be ignored.&lt;/p&gt;

&lt;p&gt;Let's say you call &lt;code&gt;SetAttributes&lt;/code&gt; or add an event on the span; the data is only captured as long as the span is recording. This &lt;code&gt;IsRecording&lt;/code&gt; is decided by something called the sampler at the SDK level. It's important to note that you can have use cases where you are recording but not sampling: you are recording and processing the span, but you don't want it to go to the destination system. In those cases, &lt;code&gt;IsRecording&lt;/code&gt; becomes very handy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Could this also be for something where maybe you're using the baggage for some other purpose within your engineering, like for testing or something? So, you'd be passing this trace around, but you're not going to record it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; It's not related to baggage. With baggage, you're passing around metadata. But &lt;code&gt;IsRecording&lt;/code&gt; is entirely related to sampling, where you don't want to sample but still want to record.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; So, if you're using baggage for something interesting, like the trace-based testing folks, you could just be passing the context around without passing around a trace. Okay, I got it. Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiml0xt398vzoly6jsfr3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiml0xt398vzoly6jsfr3.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Now, we have &lt;code&gt;SetAttributes&lt;/code&gt;. This will be the most common operation you perform on a span when you are doing manual instrumentation. Anytime you create a span, the real value comes when you attach rich contextual information to it. Just starting a span alone doesn't give you any meaningful information. It's only useful when you attach contextual information, like when it started, where it ran, or whatever else is associated with it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SetAttributes&lt;/code&gt; is the operation on the span object that helps you attach that contextual information. For example, you could attach a customer ID or any other context that will help you debug issues later. This is really important: there can be requests that are specific to certain customers, and without that customer ID on the span, you won't be able to debug them.&lt;/p&gt;

&lt;p&gt;When you start a span, make sure that you add the necessary information that will help you debug issues later. Otherwise, just creating a span and not having essential attributes set means it's not useful. It's only as useful as the data that you attach to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhqgpx7quq7bo2n2ds9z.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhqgpx7quq7bo2n2ds9z.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Next, we have &lt;code&gt;AddEvents&lt;/code&gt;. During a span's lifetime, many events can happen. For example, you could send a request to an external system and want to record whether it succeeded or not.&lt;/p&gt;

&lt;p&gt;For example, if it failed, you would want to add an "external call failed" event to the span. When you look back at your tracing, it has this information about the things it tried and whether there was a failure. An event can be more useful than attributes because it carries more information: it gives you the time when the event happened and the specific attributes that caused the failure. Let's say you add an "external call failed" event with a timestamp and the attributes that caused the failure; &lt;code&gt;AddEvent&lt;/code&gt; is helpful in such cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; This is always my understanding like an event is just a richer object. It's not just a single key-value pair that you're adding. So, if you have a whole event readout, like from a failure lookup or something, don't add nine attributes to that span. Just add an event. Right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8hbed3ie4orxx2267g5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8hbed3ie4orxx2267g5.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Next, we have &lt;code&gt;AddLinks&lt;/code&gt;. It's useful when you want to link different traces together.&lt;/p&gt;

&lt;p&gt;For instance, in a regular HTTP request flow, there might be a branch-out logic where a request goes to a queue. Within the regular request flow, it's synchronous, but when you send an event to the queue, you don't know when it will end.&lt;/p&gt;

&lt;p&gt;In those cases, how do you connect whatever the queue has received, like whatever message was processed by the queue? How do you know which request triggered the queue processing? This is where &lt;code&gt;AddLinks&lt;/code&gt; helps. When you start a new trace in the queue processing system, you link this trace to the existing trace. This helps in tracing back to see which request triggered the message processing. &lt;code&gt;AddLinks&lt;/code&gt; is not very common but very helpful in messaging systems. Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfztw7sl6yrn7evtry83.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfztw7sl6yrn7evtry83.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Next, we have &lt;code&gt;SetStatus&lt;/code&gt;. Any operation a span represents has a status associated with it: either the operation was successful or it was a failure. A span represents a unit of operation, and the &lt;code&gt;SetStatus&lt;/code&gt; method lets you set the status of that operation. By default, it is unset. Let's say you make a call to the database and the call fails for some reason; you use &lt;code&gt;SetStatus&lt;/code&gt; to indicate that the operation failed.&lt;/p&gt;

&lt;p&gt;Here in this example, we are making a request to an external system, and if the request fails, we set the span status to error. When the span reaches your tracing backend, you know that it failed, and it is highlighted there. This makes it easy to see at a glance which operations failed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Yeah, and if it's failing with a third-party API or something, right? Like if it's failing to look up a user, that's not going to show up automatically in tracing or automatically track as a failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; You need to take care of that detail again in manual instrumentation. Anytime you suspect there can be a possibility of an operation failure, make sure that you set the status so that you don't miss whether this operation was successful or a failure when it reaches you back.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoymn8bdvxn3nlob2bmv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoymn8bdvxn3nlob2bmv.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;Next, we have &lt;code&gt;UpdateName&lt;/code&gt;. When you create a span, you generally give it a name, but if for some reason you don't want to keep the original name, &lt;code&gt;UpdateName&lt;/code&gt; gives you the ability to change it afterward. I haven't seen many people use this in practice, but it is possible. Let's say you start with some name because you don't yet have enough information to give a proper span name, but you'll get it sometime later. Then it's perfectly fine to start with a placeholder name that is not very useful and update it later on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Yeah, all I can think is like you've bought into OpenTelemetry, and you have a constant annoyance around some span that's not being labeled correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4ek0a1z9rvont7afzqh.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4ek0a1z9rvont7afzqh.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; And then we discussed that there's a start span that accepts the timestamp. Similarly, there is an &lt;code&gt;End&lt;/code&gt; method which also accepts a timestamp. If you do not provide any timestamp when you perform &lt;code&gt;span.end&lt;/code&gt;, it takes the current timestamp as the end timestamp. But it gives you the flexibility to give the custom timestamp.&lt;/p&gt;

&lt;p&gt;So, you can start a span at your own time and end a span with your own timestamp. You are not bound to the current time as the end timestamp. This helps in use cases where you have a start time and end time for certain operations that you get from somewhere else, maybe a database or a file, and you want to create a trace from that raw data.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;End&lt;/code&gt; method also has this optional timestamp parameter. If you don't provide it, it will use the current timestamp. But if you do, the span's end timestamp will be the timestamp you have given as an argument.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4zl6cif4w7c4qo3khtc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4zl6cif4w7c4qo3khtc.webp"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;/p&gt;

&lt;p&gt;And next, we have &lt;code&gt;RecordException&lt;/code&gt;. This is a convenience method on top of &lt;code&gt;AddEvent&lt;/code&gt;. When you call &lt;code&gt;RecordException&lt;/code&gt;, it adds an event with certain attributes, like the stack trace, exception type, and exception message, which it neatly formats for you. But it is just a simple convenience around an event.&lt;/p&gt;

&lt;p&gt;So that's all. We have seen the provider; we have seen the Tracer object, and we have discussed the span, which is where you work mostly. We have also briefly discussed the context, which is very important so that you don't get broken traces in your system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; This is so cool, because I was writing this week about doing some of this stuff in Python, about adding some manual calls to add details to your traces. But there were a couple that I'd seen but hadn't thought about how they would be used, like &lt;code&gt;IsRecording&lt;/code&gt;, and yeah, that's pretty nice to see.&lt;/p&gt;

&lt;p&gt;So, you know, this fills our time. I think we have to come back for another week to ask a couple more of my questions about the tracing strategy. But this was so great. I want to thank you for taking the time. Let's give a few minutes to see if we have some questions coming in, just keeping an eye here on the event.&lt;/p&gt;

&lt;p&gt;Let me see if we're getting questions here in the chat. One sec. And yeah, I'll take a look on YouTube as well. Thank you so much, guys. Usually, you do ask some great questions. Looks like chat's quiet today, so we're free. We don't have to stay any later today, so that's the good news.&lt;/p&gt;

&lt;p&gt;Well, folks, this will go up as a blog post on the SigNoz blog as well. You can see us almost every week with our OpenTelemetry webinars. We'll be doing a call soon on using OpenTelemetry to monitor large language models, which we're super excited about. This was great. Of course, we have to have you back again very soon.&lt;/p&gt;

&lt;p&gt;Thank you so much for joining us, folks. If you have questions, either go ahead and drop them under the video as comments or join our community Slack and ask them there. Thank you so much, everybody. Thank you.&lt;/p&gt;




&lt;p&gt;Thank you for taking the time to read this transcript :) If you have any feedback or want any changes to the format, please create an &lt;a href="https://github.com/SigNoz/signoz/issues" rel="noopener noreferrer"&gt;issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to join our Slack community and say hi! 👋&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/slack" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falb6jj148l2mta73yust.webp" alt="SigNoz Slack community"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>opensource</category>
    </item>
    <item>
      <title>OpenTelemetry Webinars - The Open Agent Management Protocol</title>
      <dc:creator>Ankit Anand ✨</dc:creator>
      <pubDate>Tue, 03 Mar 2026 04:57:59 +0000</pubDate>
      <link>https://dev.to/ankit01oss/opentelemetry-webinars-the-open-agent-management-protocol-a09</link>
      <guid>https://dev.to/ankit01oss/opentelemetry-webinars-the-open-agent-management-protocol-a09</guid>
      <description>&lt;p&gt;Open Agent Management Protocol (OpAMP) is the emerging open standard to manage a fleet of telemetry agents at scale.&lt;/p&gt;

&lt;p&gt;Take a look at the conversation between &lt;a href="https://github.com/serverless-mom" rel="noopener noreferrer"&gt;Nočnica Mellifera&lt;/a&gt; and &lt;a href="https://github.com/srikanthccv/" rel="noopener noreferrer"&gt;Srikanth&lt;/a&gt; as we discuss recent updates to the standard and how you can remotely manage the &lt;a href="https://signoz.io/blog/opentelemetry-collector-complete-guide/" rel="noopener noreferrer"&gt;OpenTelemetry collector&lt;/a&gt; with OpAMP.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary of the Talk
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/Givt10eJcy8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Find the conversation transcript below.👇&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Folks, the transition between seasons in Oregon is pretty distinct. You go from this beautiful autumn weather, cold but kind of clear, to absolutely bucketing down rain. Not quite cold; it never freezes here in Oregon, not really. But just enough so you can't grow avocados. Oh boy, it was cold. We spent a little time at the beach this weekend, and that is work only for very committed people, I'll tell you that.&lt;/p&gt;

&lt;p&gt;But that's not what we're here to talk about. We're going to talk about OpenTelemetry and the Open Agent Management Protocol. And we have Srikanth, a contributor to the project, here to point us in the right direction today. Say hi to the people, and tell them what you do at SigNoz.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Hey, everyone. I'm Srikanth. I work for SigNoz. I mostly worked on the metrics and alerts, and I was a maintainer for the &lt;a href="https://signoz.io/docs/instrumentation/opentelemetry-python/" rel="noopener noreferrer"&gt;OpenTelemetry Python&lt;/a&gt; SDK. Currently, I work mostly on the Open Agent Management Protocol (OpAMP).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Very cool. This is something that a lot of people have been curious about, and I'm excited to talk more about it. We were just having this tail-based sampling conversation, and I was like, "Man, isn't it possible to do something like that with the agent management protocol?" We'll get to that.&lt;/p&gt;

&lt;p&gt;But it seems as we start to ask these big, large-scale, and highly sophisticated feature questions, this could be part of the answer. I wrote up some questions. Some of them were a little silly, but I was just getting started on this topic area. Let me go ahead and drop the spec document that we're talking about. I think this is one thing I might contribute to the project at some point. I'd like to see a more high-level doc get released rather than just the spec, which is what's currently out there. Anyway, here's the link if anybody wants to follow those OpenTelemetry docs for the spec.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent vs Collector
&lt;/h2&gt;

&lt;p&gt;I want to start with, when looking at the spec, one of the things is you hear the term "agent" used over and over again. Now, I believe from reading it that we're talking generally about a collector. But is that wrong, or are we talking about an instrumentation agent? Help me with this very basic piece of knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; So the spec is rightly worded. It is not just the collector: while the current focus is around the collector, the protocol and the way it's designed are not specific to the collector. It could be an instrumentation agent; it could be any other program, as long as it speaks the protocol. Today, the focus is mostly on the collector, but eventually even the SDKs could have OpAMP implementations. Then it's not just the collector configuration that you could dynamically change; it could be the sampling rate at the SDK level as well. The wording is deliberately general, so it could be anything.&lt;/p&gt;

&lt;p&gt;It's any Telemetry agent that we are talking about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Yeah, so the idea is that this is supposed to be a standard way, again not collector-specific, to let us do configuration on the OpenTelemetry data transmission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; And that remains the focus, right? The idea of, like, hey, we want to control how this data is being sent. Do I have that part right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yeah, correct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Okay. So, again, back to the basic questions. This is right now marked as beta, and I'm curious how you feel about it getting used in production. What pieces do you think are working well, and what pieces do you think are still to be defined?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yeah. So, for an end user, I don't think there is much that they can use today. The protocol is still in beta, and parts of the specification are in beta. Ultimately, for an end user, the implementation is what they are concerned about: is there support in something like the operator? How do they go about controlling things today?&lt;/p&gt;

&lt;p&gt;So, that implementation is still very much under development. For an end user, there's still a lot of work to do before they can even try it out, but there's a lot of effort going into getting those things out. For a vendor, though, I would hope to see everybody do something around it. If you are an observability vendor, try it out, do some POCs, get feedback from your users, and then bring those use cases, the problems your users want to solve, so that the implementation can go in that direction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Yeah. And that's been kind of my understanding too, that like, this promises quite a bit. And I think it's worth looking at. As an end user, what might you see working next year? Right? For live collector config, it's worth thinking about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tail-based sampling
&lt;/h2&gt;

&lt;p&gt;For example, I'll start this one now; I didn't even put in a banner for it. The one that has been on my mind is the idea of tail-based sampling. When you get into a conversation about how traces are selected, you realize that the vast majority of traces you send are not useful, so you'd like to be a little more focused. And rather than making decisions when a trace starts, wouldn't it be cool if you could decide which traces to transmit after they've been captured? This raises some really basic Computer Science 202 problems, right? Like, how do you save resources if you have to process every single trace anyway?&lt;/p&gt;

&lt;p&gt;The basic one is, “Hey if you decide at a particular point, let's say at a collector, about, "Oh, wow, this kind of trace is really interesting. I want to see a bunch more of those." How do you synchronize across multiple points?”&lt;/p&gt;

&lt;p&gt;Presumably, you are not using one collector to say, "Hey, okay, go ahead and change the sample rate on this trace smartly." And if you don't have it distributed, then your tail-based sampling becomes a massive bottleneck. If you do have it distributed, then you have the synchronization problem. So, today, I would not say, "Oh, you can do this with OpAMP and the collector as it currently stands." But I would say, hey, if you know this is something you're going to want further down the road, I think you may see a solution for this in, say, 12 months or two years.&lt;/p&gt;

&lt;p&gt;This is a defined system to say, "Hey, let's communicate about how we want to communicate these." And so, you may see this in a little while. Sorry, that was me talking a lot. But what do you want to say about tail-based sampling and the usability thereof?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yeah, so agent management has several parts, and config management is one of them. By giving you a chance to dynamically change the config, it enables a lot of these use cases. But the issue with tail-based sampling specifically is that it requires a particular kind of setup. When you do tail-based sampling, you need to have the load-balancing exporter in place so that the spans of the same trace go to the same collector.&lt;/p&gt;

&lt;p&gt;So, when the sampling decision is made, it's made on the full trace. That requires a particular kind of setup, and those implementations are not ready; there's no easy way to do this today. Although you can change the configuration dynamically, the setup that tail-based sampling requires cannot itself be controlled dynamically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; And you very correctly say, "Even if the feature was there to do remote config, you would need a very particular architecture to support that correctly," because the act of saying, "Hey, I want to sample a bunch of these at this rate," even in the very rough version of, like, we'll take half of them or something, there's not an easy way to do that with a filtering system that is supposed to be stateless.&lt;/p&gt;

&lt;p&gt;It's supposed to get the config and nothing else. But yeah, I saw some discussion at KubeCon about how you could introduce a stateful plug-in or processor that goes and checks some database, file, or something else. But even so, you still need to be handling your traces in a very particular way for them to be useful. I don't mean to pour cold water on everybody's tail-based sampling dreams.&lt;/p&gt;

&lt;p&gt;I think maybe there are smarter ways to configure your head-based sampling, and smarter ways to configure how you're storing and managing your data overall, such that you stop being quite so stuck on this "Hey, what if I only saved the interesting traces somehow?" idea. But anyway, that was on our minds.&lt;/p&gt;

&lt;p&gt;So let's talk about some of the things that are supposed to be handled with this in the future. Let's talk about auto-updates, where right now, maybe you're seeing some implementations with the collector that require these relatively manual processes like restarting the collector, obviously for new config, but also updating being a little bit effortful currently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yeah. So, the protocol has something called packages. These are top-level packages or add-ons, and they can be downloadable. The way auto-updates could work is that the server offers a new downloadable version. Say there's a new version, something like v88: the server offers v88 along with the URL where you can go and fetch the package.&lt;/p&gt;

&lt;p&gt;The message that the server offers contains the download URL and a checksum that you can verify against, all those details. It's then the job of the agent, whether it's a supervisor or an integrated client, to download the package and restart the collector process with the newly downloaded version. And if issues are detected with the newly updated package, the server can offer a downgraded version; if one exists, you download it and run with that downgraded version instead.&lt;/p&gt;
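&lt;p&gt;The protocol only carries the offer; the download-and-verify step is up to the agent. A rough stdlib-Python sketch of that check on the agent side, with every name invented for illustration, might look like:&lt;/p&gt;

```python
import hashlib
import shutil
import tempfile
from pathlib import Path


def apply_package_offer(downloaded: bytes, expected_sha256: str, install_path: Path) -> bool:
    """Verify the checksum from the server's package offer before
    swapping the downloaded collector binary into place."""
    if hashlib.sha256(downloaded).hexdigest() != expected_sha256:
        return False  # refuse the offer and keep the current version
    # Write to a temporary file first so a failed download never
    # clobbers the running version, then move it into place.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(downloaded)
    shutil.move(tmp.name, install_path)
    return True
```

&lt;p&gt;After a successful swap, the supervisor would restart the collector with the new binary and report the package status back to the server.&lt;/p&gt;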

&lt;h2&gt;
  
  
  Credential Management
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; I see. So, predictably, a big part of this is going to be applying a certain package to your system. Right? There's also something that was news to me when I was reading up on this for this week. I hadn't thought about it, since again, I don't think a lot about security, but there's this thing about credentials management. There's some stuff in there about connection credentials management, for things like client-side TLS and certificate rotation, and I hadn't realized that was implemented as part of this protocol.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yeah. Agent management, as I mentioned earlier, has several parts: one is config, one is credentials, and then there are packages. If you look at the protocol specification, you will find them. The way credential management is structured, there are two flows: client-initiated requests and server-initiated requests.&lt;/p&gt;

&lt;p&gt;In the server-initiated flow, there's something called OpAMP connection settings. As soon as the client connects to the OpAMP server, the server can make an offer: "Here are the settings you need to use from the next time you call me." So the client connects, and the server offers settings, including which TLS certificates should be used, and also an endpoint.&lt;/p&gt;

&lt;p&gt;Say the agent, like the collector, has to send data to some endpoint that is secured. The server says: "Here's the endpoint, and here are the TLS credentials you should use when you connect to send the data."&lt;/p&gt;

&lt;p&gt;And the thing with the server-initiated flow is that any time the server decides it has to rotate the certificates, it just offers the new certificates. When the client sees that it has received new connection settings, it follows them and rotates the certificates.&lt;/p&gt;

&lt;p&gt;In the client-initiated flow, the idea is much the same, except the client generates the certificates and sends them, saying, "Here are my credentials," and the server accepts them. So that's how certificate management is done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; I see. So definitely seeing some possible improvements for how we're able to manage secure certificates with those connections.&lt;/p&gt;

&lt;p&gt;All right, and let's give a moment here if you want to add any questions to the chat. Looks like we have a few people watching, which is fantastic. So, I see a few models for how the communication works. Here's how I had it written down, and maybe this isn't great, but can you describe the relationship and communication model between the client and the agent, and how this model supports different implementation scenarios, like whether there's a sidecar, a plugin, or something integrated directly into our code?&lt;/p&gt;

&lt;h2&gt;
  
  
  Communication Model
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; So, when you want to dynamically change certain things, like the configuration, one way to go about it is integrated. By integrated, I mean there's a server that can offer dynamic configuration, and then there's a client, and you make this client part of the same code base that the collector runs. The client implementation is part of the same collector binary; it runs along with the collector, with no separate process involved, and it communicates with the server.&lt;/p&gt;

&lt;p&gt;Whenever it receives a new configuration from the server, it just reinitializes the whole pipeline. Let's say your collector is running. Without restarting the process, it stops whatever is happening, says, "Hey, I received a new configuration," flushes the pipelines, and then starts all over again.&lt;/p&gt;

&lt;p&gt;That is integrated, where the client is part of the same binary that the collector runs in. That's one way to do it. The other way is not to make it integrated, but to keep the client as a separate process. One major issue with the integrated approach is: if the collector process crashes, who is responsible for bringing it back up?&lt;/p&gt;

&lt;p&gt;Because the client is also part of the collector process, the client and the collector go away at the same time, so there is nobody left to keep a check on the collector process and make sure it exists. That's one issue.&lt;/p&gt;

&lt;p&gt;To avoid that issue, you have another model where the client is running as a separate process. This is called the supervisor approach. You have the collector, you have the supervisor. This supervisor process is what communicates with the server. It gets the configuration from the server and runs alongside the collector process.&lt;/p&gt;

&lt;p&gt;Anytime it receives a new configuration, it writes it to the configuration file and restarts the collector process. If the collector process for some reason fails to start, which could be due to an invalid configuration or anything else, it tries to revert, rolling back to the earlier configuration that was working, and sends the status to the server, saying, "Hey, I tried to give the collector the config you sent, but it failed to come up," along with the error message.&lt;/p&gt;

&lt;p&gt;In this model, even if the collector crashes for any reason, you still have the supervisor process, which monitors the collector process and will make sure that there is always a collector process, either with the new configuration or with the old configuration.&lt;/p&gt;
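&lt;p&gt;The write-restart-rollback loop Srikanth describes can be sketched in a few lines of Python. Everything here, the function name and the &lt;code&gt;start_collector&lt;/code&gt; callable, is a stand-in for illustration, not the actual supervisor implementation:&lt;/p&gt;

```python
from pathlib import Path


def apply_remote_config(new_config: str, config_path: Path, start_collector) -> bool:
    """Write the config offered by the OpAMP server, try to start the
    collector with it, and roll back to the last good config on failure.
    start_collector is a callable returning True once the process is up."""
    previous = config_path.read_text() if config_path.exists() else None
    config_path.write_text(new_config)
    if start_collector(config_path):
        return True  # supervisor reports success back to the server
    if previous is not None:
        config_path.write_text(previous)  # roll back to the last working config
        start_collector(config_path)
    return False  # supervisor reports the failure and the error message
```

&lt;p&gt;The key design point is that the supervisor, not the collector, owns the config file and the restart decision, so a bad config never leaves you without a running collector.&lt;/p&gt;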

&lt;p&gt;So, these are the two major models. As for how this is implemented in a Kubernetes-native environment: if you take the supervisor approach in Kubernetes, you have the supervisor as a sidecar and the main collector as part of the same pod. There are two containers, the supervisor and the collector, in the same pod, and the supervisor does the same work inside Kubernetes that it does anywhere else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; This is a concern we've talked about a lot recently: knowing that there's a process there to pick up your data. There are two reasons for that. One, we don't want gaps in our durability; we don't want weird blackouts. But also, it may not be healthy for your process to be trying to report to a collector that's not there on the system.&lt;/p&gt;

&lt;p&gt;Good to worry about. You don't want to sit there trying to write failover directly into your implementation, so just making sure the process is running is pretty solid. I was going to talk about WebSocket and HTTP transport. I wasn't sure that this was insightful. This is the transmission of config data. I think we're pretty hip to that. I don't want to turn this into a long discussion of WebSocket versus HTTP in general.&lt;/p&gt;

&lt;p&gt;I've noticed there's just talk about the supervisor process. So what is the role of that supervisor process? This is kind of what we were talking about, keeping the agent up and seeing that the collector process is running at all times. Is there more to that that we want to make sure that people understand as they're exploring this?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yeah, I think that's mostly it. It's going to ensure that the collector process is running, and this is the main process that's communicating with the server. It fetches the updates, handles the process of downloading the new version, and then restarts the whole thing. It also manages the connection work. Essentially, the supervisor does the whole management work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Got it. So, my last question is just about what you see as the immediate future of the next couple of quarters. Are you looking to see maybe some implementations of a toolkit to do remote collector config with this, or do you think it's going to be a little longer for that? Or are we kind of waiting on some vendors to create some more complex demonstrations of using the tool?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; What I believe is that there has been a lot of effort going on into this, both in the operator and in the general collector side. There is good progress that has been made. What I hope to see in the next couple of quarters is that as an end user, you will get to see some sort of implementation where you can dynamically change the collector configuration. You will get to see a UI where the current configuration of the collectors is given, and then you can modify and trigger the config update. It goes to the collector, and as an end user, you can dynamically control that. You will see it happen in the operator and the regular collector, in both. That's what I see going on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Yeah, no, I think that's sort of what we're all hoping for, right? Within your telemetry tool, within your APM tool, you're able to say, "Oh, I want to see a little more of this, I want to filter out things like this," and you're able to just feed it down as config. So, we'll see exciting developments next year. Maybe by the next KubeCon, we'll be talking about practical demonstrations of it.&lt;/p&gt;

&lt;p&gt;Well, you know, we have covered our half-hour. We've got a lot of good information here. I'll be doing a write-up on this, so hopefully you'll give me a little help with that when the time comes.&lt;/p&gt;

&lt;p&gt;I think especially I want to talk about this in terms of the tail-based sampling world, kind of where we're at and where we see the next year of development going. I don't want to give the impression that tail-based sampling, or config that is automatically shared across a whole swarm of agents, is something you need to have working to be able to do &lt;a href="https://signoz.io/blog/what-is-opentelemetry/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; at scale. So, we will be continuing that conversation.&lt;/p&gt;

&lt;p&gt;Any final thoughts? Did you want people to get involved? There is a SIG, I believe, that people can join.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Srikanth:&lt;/strong&gt; Yeah, there's a SIG, and there's a meeting that happens bi-weekly. It's actually today in Pacific time; it starts in a little while. There's a lot of work to do. You can join the SIG, and there's also a Slack channel where you can come and say hi. If you're interested in contributing, any sort of contribution is welcome; it does not have to be code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nica:&lt;/strong&gt; Yeah, I encourage people to check out the SIG calendar for OpenTelemetry. There's some great stuff there. Oh yeah, it's in like two hours. If you're interested, check it out. I will share the link to that calendar system because everything needs help.&lt;/p&gt;

&lt;p&gt;You do not have to be a master hacker to be able to contribute a lot. See me tomorrow about this time at the collector SIG for sure, where I'll be trying to get a few more filter functions working and a little more general-purpose.&lt;/p&gt;

&lt;p&gt;All right, folks, that's been our time. We will be back next week with more stuff about OpenTelemetry. We are the &lt;a href="https://signoz.io/docs/introduction/" rel="noopener noreferrer"&gt;SigNoz&lt;/a&gt; team; check out SigNoz if you're interested in a dashboard for your OpenTelemetry data. But in general, do try out OpenTelemetry and the open world of monitoring. It's got a lot to offer you.&lt;/p&gt;

&lt;p&gt;Thank you so much. We will be back next week.&lt;/p&gt;




&lt;p&gt;Thank you for taking the time to read this transcript :) If you have any feedback or want any changes to the format, please create an &lt;a href="https://github.com/SigNoz/signoz/issues" rel="noopener noreferrer"&gt;issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to join our Slack community and say hi! 👋&lt;/p&gt;

&lt;p&gt;&lt;a href="https://signoz.io/slack" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falb6jj148l2mta73yust.webp" alt="SigNoz Slack community"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
