
OpenTelemetry Collector on Kubernetes with Helm Chart – Part 3

Written by Yoav Danieli @ Team Aspecto

In this article, you will learn how to use the OpenTelemetry Helm chart to deploy a collector as a gateway in a Kubernetes cluster.

In previous tutorials, I demonstrated how to deploy the OpenTelemetry Collector as both Gateway and Agent on a Kubernetes cluster. It took a lot of work and hours of research. Luckily there is an easier way. Enter Helm Charts.

What is Helm?

Before we start using Helm, let's give it a quick introduction.

Helm is an open-source package manager for Kubernetes. It lets software built for Kubernetes be packaged, shared, and installed. It is also a graduated project of the Cloud Native Computing Foundation (CNCF), like Kubernetes and Jaeger (OpenTelemetry is still incubating).

Helm packages software using a format called charts. A chart is a collection of files that describe a set of Kubernetes resources. A single chart can deploy something as simple as a single-pod hello-world program or as complex as a full-stack web app with HTTP servers, databases, caches, and more.
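For a sense of what that collection of files looks like, a minimal chart typically has a layout along these lines (the chart name here is just an example):

my-chart/
  Chart.yaml        # chart metadata: name, version, description
  values.yaml       # default configuration values
  templates/        # Kubernetes manifests rendered with the values
    deployment.yaml
    service.yaml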

For more information about Helm, visit Helm Docs.

Now let's dive right in and figure out how to use the Helm chart provided by OpenTelemetry.

Using OpenTelemetry Helm Chart

Installing the OpenTelemetry Collector Helm Chart

1. Install Helm using Homebrew:

brew install helm

2. Add the OpenTelemetry Helm repository. A Helm repository is an HTTP server from which we can find and install Helm charts. The following command registers the repository so we can search it and install its charts:

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
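If you added the repository a while ago, you can refresh the local chart index before searching:

helm repo update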

3. Search the repo and find out which charts it offers:

helm search repo open-telemetry

This produces output similar to the following (the chart and app versions will vary):

NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
open-telemetry/opentelemetry-collector  0.34.0          0.61.0          OpenTelemetry Collector Helm chart for Kubernetes
open-telemetry/opentelemetry-demo       0.3.1           0.4.0-alpha     opentelemetry demo helm chart
open-telemetry/opentelemetry-operator   0.13.0          0.60.0          OpenTelemetry Operator Helm chart for Kubernetes

The repository offers three charts:

  • OpenTelemetry-collector -- The chart that we will use to create the collector.

  • OpenTelemetry-demo -- A demo chart that deploys a complete microservice-based system written in many programming languages. It's a good reference for almost any use case you might need.

  • OpenTelemetry-operator -- This chart sets up the OpenTelemetry Operator in our cluster. The operator manages OpenTelemetry Collector instances and injects auto-instrumentation into the services running inside the cluster. I will demonstrate its usage in future articles.

When using a Helm chart, we are provided with a pre-defined set of parameters. These parameters and their default values live in a file called values.yaml. To show them, we can use the command:

helm show values open-telemetry/opentelemetry-collector
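The output is long, so it can help to save it to a file and keep it around as a reference (the file name is just an example):

helm show values open-telemetry/opentelemetry-collector > collector-default-values.yml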

But it is easier to jump to the chart's source code and look at the files in the browser. I will not elaborate on each of the values; instead, I will demonstrate how to create the same Gateway Collector deployment we built in the previous parts. Hopefully, it will be much simpler.

First, clean up your cluster and remove all the resources from the previous parts, so we start with a clean slate.

Let's install the chart as is, with its default values:

helm install opentelemetry-collector open-telemetry/opentelemetry-collector

We get an error:

[ERROR] 'mode' must be set.

As stated in the chart's source code (linked above), the default value of mode is an empty string, while the valid values are 'deployment', 'daemonset', or 'statefulset'.

Let's add a mode value. 

Create a collector-values.yml file and set the mode to deployment, the same as our Gateway:
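For now the file contains a single value:

# collector-values.yml
mode: deployment

Then install the chart, pointing it at the values file: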

helm install opentelemetry-collector open-telemetry/opentelemetry-collector -f ./collector-values.yml

So now we have an up-and-running collector. 

💡 Pro Tip -- We can replace the 'install' command with 'template' to see the exact Kubernetes resources that get rendered and deployed to the cluster. Looking at the resources generated by the Helm chart, we see they are very similar to the ones we defined ourselves in the previous parts. And all we did was install the chart with a specific mode!
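For example, with the same release name and values file:

helm template opentelemetry-collector open-telemetry/opentelemetry-collector -f ./collector-values.yml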

OpenTelemetry-Collector Helm Chart Configuration

To configure it the same way as the Gateway Collector we deployed in the previous parts, there are a few more values we need to set. Let's start with the configuration taken from the chart's examples, under 'deployment-otlp-traces':

mode: deployment

ports:
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false

config:
  receivers:
    jaeger: null
    prometheus: null
    zipkin: null
  service:
    pipelines:
      traces:
        receivers:
          - otlp
      metrics: null
      logs: null

Here we disable the ports we are not using and enable only the traces pipeline, with otlp as its only receiver. This configuration installs a collector that receives telemetry data in the OTLP format over gRPC and HTTP.

There are no exporters defined, so the chart falls back to its default, the logging exporter. Let's add the OTLP exporter for Aspecto, our distributed tracing platform, and a Jaeger exporter as well:

config:
 receivers:
   jaeger: null
   prometheus: null
   zipkin: null
 exporters:
   otlp/aspecto:
     endpoint: otelcol.aspecto.io:4317
     headers:
       Authorization: ${ASPECTO_API_KEY}
   jaeger:
     endpoint: jaeger-all-in-one:14250
     tls:
       insecure: true
   logging:
     loglevel: debug
 service:
   telemetry:
     logs:
       level: debug
   pipelines:
     traces:
       receivers:
         - otlp
       exporters:
         - otlp/aspecto
         - jaeger
         - logging
     metrics: null
     logs: null


Populate the ASPECTO_API_KEY environment variable from a secret, the same as in the previous parts. The OpenTelemetry Collector chart supports this through the extraEnvs property:

extraEnvs:
 - name: ASPECTO_API_KEY
   valueFrom:
     secretKeyRef:
       name: aspecto
       key: api-key
       optional: false
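If the secret does not exist yet, it can be created with a command along these lines (the secret name and key must match the secretKeyRef above; replace the placeholder with your actual API key):

kubectl create secret generic aspecto --from-literal=api-key=<YOUR_ASPECTO_API_KEY>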

We also need to expose our collector to the internet using a LoadBalancer service:

service:
 type: LoadBalancer
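After the chart is upgraded with these values, the external IP assigned by the load balancer can be checked with the following command (the service name here assumes the release name we used above):

kubectl get svc opentelemetry-collector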

The last thing we should do is tell the Helm chart which OpenTelemetry Collector image to use and how to run it. By default, it uses the latest image from the otel/opentelemetry-collector-contrib Docker repository.

We want to change it to the core otel/opentelemetry-collector image, with a tag of our choosing:

image:
 repository: otel/opentelemetry-collector
 tag: "latest"

command:
 name: otelcol

Upgrade the deployment:

helm upgrade opentelemetry-collector open-telemetry/opentelemetry-collector -f ./collector-values.yml

And also deploy Jaeger UI, so we can visualize our traces:

kubectl apply -f jaeger.yml
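The jaeger.yml manifest is the same all-in-one setup used in the previous parts. If you no longer have it around, a minimal sketch looks roughly like this (the service name and gRPC port must match the jaeger exporter endpoint, jaeger-all-in-one:14250, configured above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger-all-in-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger-all-in-one
  template:
    metadata:
      labels:
        app: jaeger-all-in-one
    spec:
      containers:
        - name: jaeger
          image: jaegertracing/all-in-one:latest
          ports:
            - containerPort: 14250   # gRPC port used by the collector's jaeger exporter
            - containerPort: 16686   # Jaeger UI
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger-all-in-one   # must match the exporter endpoint above
spec:
  selector:
    app: jaeger-all-in-one
  ports:
    - name: grpc
      port: 14250
      targetPort: 14250
    - name: ui
      port: 16686
      targetPort: 16686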

After checking the logs and status of our Collector and Jaeger deployments and verifying everything is ready, we can start our test service to generate traces. 
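A few commands that help with this check (the deployment name assumes the release name used above; the port-forward makes the Jaeger UI available at http://localhost:16686):

kubectl get pods
kubectl logs deployment/opentelemetry-collector
kubectl port-forward svc/jaeger-all-in-one 16686:16686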

Once the service runs, you should see traces reaching the destinations configured in the exporters section.

Final Thoughts and Aspecto Helm Chart

Using the OpenTelemetry-Collector Helm chart was simple and quick compared to manually defining each resource. I highly recommend using it in your clusters. 

If you reach a point where you need to extend those resources or add properties that the Helm chart does not expose, you can render them with the helm template command, adjust them, and apply them yourself. Easy peasy.

In addition, some distributed tracing vendors offer their own Helm charts that add vendor-specific functionality to the Collector configuration.

Check Aspecto Helm charts here.

Thanks for reading, and if you have any questions, I'm happy to connect and help.

P.S. You can find a lot of the concepts covered here in our latest live workshop on deploying the OpenTelemetry Collector on Kubernetes.
