In the last couple of weeks, I spent a ton of time looking at different ways to send OpenTelemetry (OTel) data to Lightstep.
In case the super-obvious title didn’t tip you off already, there are three different ways to do so:
- Direct from application
- OpenTelemetry Collector
- Launchers (via Collector or Direct from application)
In this post, I will dig into each of these three approaches in detail, with code snippets which explain how to get data into Lightstep Observability. Let’s do this!
Note: If you’re looking for full code listings, don’t panic! You can see them in the Lightstep OTel examples repository.
Prerequisites
Before we continue, here are some things that you’ll need:
- A basic understanding of Golang
- A basic understanding of the OpenTelemetry Collector
If you’d like to run the full code examples, you’ll also need:
- A Lightstep Observability account
- A Lightstep Access Token to tell Lightstep what project to send your traces to
- A working local Golang development environment
- Docker (we'll need it to run the OTel Collector locally)
OpenTelemetry & Lightstep
Lightstep Observability supports the native OpenTelemetry Protocol (OTLP). It can receive data in the OTLP format either via HTTP or gRPC. You will need to specify which method you wish to use in your code, as we’ll see in the upcoming code snippets.
If you're curious about using gRPC vs HTTP for OpenTelemetry, check out these docs.
Note: Other Observability tools that support OTLP include Honeycomb, Grafana, and Jaeger.
Direct from Application
If you’re getting started with instrumenting your application with OpenTelemetry, this is probably the most common route for beginners. As the name suggests, we are sending data to a given Observability back-end directly from our application code.
To do this, we must do the following:
- Install the required OpenTelemetry packages, and import them
- Configure an Exporter
- Configure a TracerProvider
- Initialize the Exporter and TracerProvider to send data to Lightstep
Don’t panic if you don’t know what all this means. We’ll be digging in shortly.
Note: You can see the full example of sending OTel data to Lightstep directly via OTLP over gRPC here. The HTTP version can be found here.
How it Works
1- Install the required OTel libraries
These are the libraries that are required to send data to an Observability back-end (e.g., Lightstep).
go get go.opentelemetry.io/otel \
go.opentelemetry.io/otel/exporters/otlp/otlptrace \
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc \
go.opentelemetry.io/otel/propagation \
go.opentelemetry.io/otel/sdk/resource \
go.opentelemetry.io/otel/sdk/trace \
go.opentelemetry.io/otel/semconv/v1.10.0 \
go.opentelemetry.io/otel/trace
In our application code, we’ll need to import the same libraries:
import (
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.10.0"
"go.opentelemetry.io/otel/trace"
)
If you wish to use HTTP instead of gRPC, replace `otlptracegrpc` with `otlptracehttp`.
2- Configure the Exporter
An Exporter is how OpenTelemetry sends data to an Observability back-end. As I mentioned earlier, Lightstep accepts data in the OTLP format, so we need to define an OTLP Exporter.
Note: Some vendors don’t accept data in OTLP format, which means that you will need to use a vendor-specific exporter to send data to them.
We configure our Exporter like this:
var (
tracer trace.Tracer
endpoint = "ingest.lightstep.com:443"
lsToken = "<LS_ACCESS_TOKEN>"
)
func newExporter(ctx context.Context) (*otlptrace.Exporter, error) {
var headers = map[string]string{
"lightstep-access-token": lsToken,
}
client := otlptracegrpc.NewClient(
otlptracegrpc.WithHeaders(headers),
otlptracegrpc.WithEndpoint(endpoint),
)
return otlptrace.New(ctx, client)
}
Some noteworthy items:
- The `endpoint` is set to `ingest.lightstep.com:443`, which points to Lightstep’s public Microsatellite pool. If you are using an on-premise satellite pool, then check out these docs.
- You must replace `<LS_ACCESS_TOKEN>` with your own Lightstep Access Token.
- We are sending data to Lightstep via gRPC. If you wish to use HTTP instead of gRPC, your client connection will look like this:
client := otlptracehttp.NewClient(
otlptracehttp.WithHeaders(headers),
otlptracehttp.WithEndpoint(endpoint),
otlptracehttp.WithURLPath("traces/otlp/v0.9"),
)
Notice how we have to add an extra configuration option, `WithURLPath`. This option allows us to override the default URL path for sending traces. The default value is `/v1/traces`; however, for HTTP connections, Lightstep expects this value to be `traces/otlp/v0.9`.
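Putting the HTTP pieces together, the full HTTP version of `newExporter` might look something like the sketch below. This is just the snippets above assembled into one function (it assumes the `otlptracehttp` import plus the same `endpoint` and `lsToken` variables); the HTTP example in the repo is the authoritative version.
func newExporter(ctx context.Context) (*otlptrace.Exporter, error) {
    // Lightstep still expects the access token as a header, same as with gRPC.
    var headers = map[string]string{
        "lightstep-access-token": lsToken,
    }
    client := otlptracehttp.NewClient(
        otlptracehttp.WithHeaders(headers),
        otlptracehttp.WithEndpoint(endpoint),
        // Override the default /v1/traces path with the one Lightstep expects.
        otlptracehttp.WithURLPath("traces/otlp/v0.9"),
    )
    return otlptrace.New(ctx, client)
}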
3- Configure the TracerProvider
A `TracerProvider` serves as the entry point of the OpenTelemetry API. It provides access to `Tracer`s. A `Tracer` is responsible for creating a Span to trace a given operation.
We configure our TracerProvider like this:
var (
tracer trace.Tracer
serviceName = "test-go-server-grpc"
serviceVersion = "0.1.0"
lsEnvironment = "dev"
)
func newTraceProvider(exp *otlptrace.Exporter) *sdktrace.TracerProvider {
resource, rErr :=
resource.Merge(
resource.Default(),
resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
semconv.ServiceVersionKey.String(serviceVersion),
attribute.String("environment", lsEnvironment),
),
)
if rErr != nil {
panic(rErr)
}
return sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exp),
sdktrace.WithResource(resource),
)
}
A few noteworthy items:
- We define a Resource to provide OpenTelemetry with a bunch of information that identifies our service. This includes things like `serviceName` and `serviceVersion`, which Lightstep requires to be set. As the name implies, `serviceName` is the name of the microservice that you are instrumenting.
- `sdktrace.WithBatcher` tells OpenTelemetry to use the BatchSpanProcessor. That is, it tells the SDK to export data in batches. For the purposes of this example, we’re not doing anything fancy with this, though the batcher can be tuned, as shown in the sketch after this list.
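If you ever do need to tune the batching behaviour, `WithBatcher` accepts options. As a minimal sketch, the return statement in `newTraceProvider` would become something like the following (the values shown are illustrative and close to the SDK defaults, not recommendations):
// Requires the "time" import in addition to the packages above.
return sdktrace.NewTracerProvider(
    sdktrace.WithBatcher(exp,
        sdktrace.WithBatchTimeout(5*time.Second),   // flush at least every 5 seconds
        sdktrace.WithMaxExportBatchSize(512),       // max spans per export request
        sdktrace.WithMaxQueueSize(2048),            // spans beyond this are dropped
    ),
    sdktrace.WithResource(resource),
)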
4- Initialize the Exporter and TracerProvider to send data to Lightstep
We’re finally ready to send data to Lightstep! We do this by calling the `newExporter` and `newTraceProvider` functions above from our `main` function:
func main() {
ctx := context.Background()
exp, err := newExporter(ctx)
if err != nil {
log.Fatalf("failed to initialize exporter: %v", err)
}
tp := newTraceProvider(exp)
defer func() { _ = tp.Shutdown(ctx) }()
otel.SetTracerProvider(tp)
// More code here
...
}
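The `// More code here` portion is where the HTTP server and its handlers live. As a rough sketch of what creating a span looks like (this assumes `tracer` has been assigned, e.g. `tracer = tp.Tracer(serviceName)`, that `net/http` and the `attribute` package are imported, and the handler name is just for illustration):
func ping(w http.ResponseWriter, r *http.Request) {
    // Start a span for this request. Pass the returned context to any
    // downstream calls so they become children of this span.
    _, span := tracer.Start(r.Context(), "ping")
    defer span.End()

    // Purely illustrative attribute.
    span.SetAttributes(attribute.String("http.target", r.URL.Path))

    w.Write([]byte("pong"))
}

// And in main, after otel.SetTracerProvider(tp):
//   tracer = tp.Tracer(serviceName)
//   http.HandleFunc("/ping", ping)
//   log.Fatal(http.ListenAndServe(":8081", nil))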
Try it!
Let's see the code example in action. In this example, we will run a server with a `/ping` endpoint. The server will send OTel data to Lightstep directly via OTLP over gRPC. We will hit the endpoint using `curl`.
1- Clone the repo
git clone git@github.com:lightstep/opentelemetry-examples.git
2- Open a terminal window and run the server program
cd opentelemetry-examples/go/opentelemetry/otlp/server
export LS_ACCESS_TOKEN=<your_access_token>
go run server.go
Be sure to replace `<your_access_token>` with your own Lightstep Access Token.
3- Open a new terminal window and hit the endpoint
curl http://localhost:8081/ping
Side-by-side sample output from the server and the `curl` command:
4- See it in Lightstep
Note: Want to run the HTTP version? Replace `go run server.go` in Step 2 with `go run server-http.go`.
OpenTelemetry Collector
The next approach to sending data to an Observability back-end is by way of the OpenTelemetry Collector. For non-development setups, this is the recommended approach to send OpenTelemetry data to your Observability back-end.
To send your instrumented data to your Observability back-end via the Collector, we must do the following:
- Have an OpenTelemetry Collector instance running somewhere (running it locally is easiest)
- Install the required OpenTelemetry packages, and import them
- Configure an Exporter
- Configure a TracerProvider
- Initialize the Exporter and TracerProvider
Note: You can see the full code listing here.
Looks almost the same as the Direct approach, doesn’t it? Almost...
We’ll get into the differences shortly.
How it Works
1- Install the required OTel libraries
These are the libraries that are required to send data to an Observability back-end (e.g., Lightstep).
go get go.opentelemetry.io/otel \
go.opentelemetry.io/otel/exporters/otlp/otlptrace \
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc \
go.opentelemetry.io/otel/propagation \
go.opentelemetry.io/otel/sdk/resource \
go.opentelemetry.io/otel/sdk/trace \
go.opentelemetry.io/otel/semconv/v1.10.0 \
go.opentelemetry.io/otel/trace
In our application code, we’ll need to import the same libraries:
import (
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.10.0"
"go.opentelemetry.io/otel/trace"
)
If you wish to use HTTP instead of gRPC, replace `otlptracegrpc` with `otlptracehttp`.
2- Configure the Exporter
As we saw in the Direct example, we are exporting our data via OTLP (notice that the return type is still `otlptrace.Exporter`). The difference is that instead of exporting our data directly to Lightstep, we’re exporting it to the OTel Collector, which also happens to ingest OTel data from our application in OTLP format.
In our Direct example, before we could create a new Exporter, we first needed to create a new Trace Client (`otlptracegrpc.NewClient`) so that we could tell OpenTelemetry how to send data to Lightstep. We don’t need to do this explicitly when we use the Collector: the `otlptracegrpc.New` convenience function creates a Trace Client for us behind the scenes, and the Lightstep-specific details (like the access token) live in the Collector’s config YAML instead.
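Under the hood, `otlptracegrpc.New` is just shorthand for creating a Trace Client and wrapping it in an Exporter. The two forms in this quick sketch are equivalent:
// Explicit form, as in the Direct example (here pointed at a local Collector):
client := otlptracegrpc.NewClient(
    otlptracegrpc.WithInsecure(),
    otlptracegrpc.WithEndpoint("localhost:4317"),
)
exp, err := otlptrace.New(ctx, client)

// Shorthand form, used in this example:
exp, err = otlptracegrpc.New(ctx,
    otlptracegrpc.WithInsecure(),
    otlptracegrpc.WithEndpoint("localhost:4317"),
)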
We configure our Exporter like this:
var (
endpoint = "localhost:4317"
)
func newExporter(ctx context.Context) (*otlptrace.Exporter, error) {
exporter, err :=
otlptracegrpc.New(ctx,
otlptracegrpc.WithInsecure(),
otlptracegrpc.WithEndpoint(endpoint),
)
return exporter, err
}
Some noteworthy items:
- The `endpoint` is your Collector’s URL. Here, the Collector `endpoint` is set to `localhost:4317`, which means that the OpenTelemetry Collector is running locally (via Docker), listening on gRPC port `4317`.
- You do not need to provide a Lightstep Access Token as part of this configuration, as that value is set in the OTel Collector’s configuration YAML file.
- Note that the `WithInsecure` option is set. This is required when the Collector you’re using doesn’t have a certificate configured. (That’s a blog post for another day. 😜)
- We are sending data to the Collector via gRPC. If you wish to use HTTP instead of gRPC, simply replace `otlptracegrpc` with `otlptracehttp`, like this:
exporter, err :=
otlptracehttp.New(ctx,
otlptracehttp.WithInsecure(),
otlptracehttp.WithEndpoint(endpoint),
)
3- Configure the TracerProvider
Our `TracerProvider` is identical to the one we configured in the Direct example:
var (
tracer trace.Tracer
serviceName = "test-go-server-grpc"
serviceVersion = "0.1.0"
lsEnvironment = "dev"
)
func newTraceProvider(exp *otlptrace.Exporter) *sdktrace.TracerProvider {
resource, rErr :=
resource.Merge(
resource.Default(),
resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
semconv.ServiceVersionKey.String(serviceVersion),
attribute.String("environment", lsEnvironment),
),
)
if rErr != nil {
panic(rErr)
}
return sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exp),
sdktrace.WithResource(resource),
)
}
4- Initialize the Exporter and TracerProvider to send data to Lightstep
We’re finally ready to send data to Lightstep! We do this by calling the `newExporter` and `newTraceProvider` functions above from our `main` function:
func main() {
ctx := context.Background()
exp, err := newExporter(ctx)
if err != nil {
log.Fatalf("failed to initialize exporter: %v", err)
}
tp := newTraceProvider(exp)
defer func() { _ = tp.Shutdown(ctx) }()
otel.SetTracerProvider(tp)
otel.SetTextMapPropagator(
propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
),
)
tracer = tp.Tracer(serviceName, trace.WithInstrumentationVersion(serviceVersion))
// More code here
...
}
Note that this is the same as what we saw in the Direct example. Only the underlying code in the `newExporter` function is different.
Try it!
Let's see the code example in action. In this example, we will run a server with a `/ping` endpoint. The server will send OTel data to Lightstep through the Collector, over gRPC. We will hit the endpoint using `curl`.
1- Clone the repo
git clone git@github.com:lightstep/opentelemetry-examples.git
2- Run the Collector
Open up a new terminal window. First, you'll need to edit the collector.yaml file. Be sure to replace `${LIGHTSTEP_ACCESS_TOKEN}` with your own Lightstep Access Token.
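If you're curious what that file contains, a minimal Collector config for this setup looks roughly like the sketch below (the actual collector.yaml in the repo is the source of truth). It receives OTLP from the application and forwards it to Lightstep over OTLP/gRPC, attaching the access token as a header:
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]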
Now you can start up the Collector:
cd opentelemetry-examples/collector/vanilla
docker run -it --rm -p 4317:4317 -p 4318:4318 \
-v $(pwd)/collector.yaml:/otel-config.yaml \
--name otelcol otel/opentelemetry-collector-contrib:0.53.0 \
"/otelcol-contrib" \
"--config=otel-config.yaml"
Note: This may take a little while if it's the first time you're pulling the Collector image.
Sample output:
3- Open up a new terminal window and run the server program
cd opentelemetry-examples/go/opentelemetry/collector/server
go run server.go
4- Open a third terminal window and hit the endpoint
curl http://localhost:8081/ping
Side-by-side sample output from the server and the `curl` command:
And your Collector output should look something like this:
5- See it in Lightstep
Launcher
The final approach that we’ll be exploring today is the Launcher. If you’ve perused the OpenTelemetry docs and haven’t seen any mention of a Launcher anywhere, it’s because they’re not part of OTel per se.
You can think of Launchers as wrappers around the OTel SDKs. Launchers were originally created by some of the talented engineers here at Lightstep, to provide a way to encapsulate OpenTelemetry setup and configuration. Put simply, the launchers were born out of them being tired of duplicating the SDK setup code. Once again, Developer Laziness for the win! (For the record, I am a firm believer that Developer Laziness is what makes for great software. We just hate repetition!) Launchers also add a layer of validation to give users a better understanding of all the required parameters. For more on Launchers, check out this article by Ted Young.
We currently have Launchers for Go, Python, Java, and Node.js.
Okay...now that we understand why Launchers exist, let’s find out how to use them to send OTel data to Lightstep.
To do this, we must do the following:
- Install the required OpenTelemetry and Launcher packages, and import them
- Configure the Launcher
- Initialize the Launcher
Looks a bit different from the other two examples, doesn’t it? As you can see, the Launcher takes care of configuring and initializing the Exporter and TracerProvider.
Let’s dig in.
Note: You can see the full example of sending OTel data to Lightstep using the Go Launcher through the Collector over gRPC here. The direct (via Launcher) version can be found here.
How it Works
1- Install the required OTel libraries
go get github.com/lightstep/otel-launcher-go/launcher \
go.opentelemetry.io/otel \
go.opentelemetry.io/otel/semconv/v1.10.0 \
go.opentelemetry.io/otel/trace
In our application code, we’ll need to import the same libraries:
import (
"github.com/lightstep/otel-launcher-go/launcher"
"go.opentelemetry.io/otel"
semconv "go.opentelemetry.io/otel/semconv/v1.10.0"
"go.opentelemetry.io/otel/trace"
)
Huh...fewer packages to install and import!
2- Configure the Launcher
Here, we’re configuring the Launcher, similar to what we did when we configured our Exporter and TracerProvider. Except it’s all encapsulated in this lovely `launcher.ConfigureOpentelemetry`! Super cool. 😎
var (
tracer trace.Tracer
serviceName = "test-go-server-launcher"
serviceVersion = "0.1.0"
endpoint = "ingest.lightstep.com:443"
lsToken = "<LS_ACCESS_TOKEN>"
)
func newLauncher() launcher.Launcher {
otelLauncher := launcher.ConfigureOpentelemetry(
launcher.WithServiceName(serviceName),
launcher.WithServiceVersion(serviceVersion),
launcher.WithAccessToken(lsToken),
launcher.WithSpanExporterEndpoint(endpoint),
launcher.WithMetricExporterEndpoint(endpoint),
launcher.WithPropagators([]string{"tracecontext", "baggage"}),
launcher.WithResourceAttributes(map[string]string{
string(semconv.ContainerNameKey): "my-container-name",
}),
)
return otelLauncher
}
Some noteworthy items:
- The `endpoint` is set to `ingest.lightstep.com:443`, which points to Lightstep’s public Microsatellite pool. If you are using an on-premise satellite pool, then check out these docs.
- You must replace `<LS_ACCESS_TOKEN>` with your own Lightstep Access Token.
- Launchers use gRPC only. Not a deal-breaker, to be honest.
Ugh...that’s all well and good, but what if you wanted to use a Collector? Didn’t I say that that’s the preferred method for non-development setups? Yes, I sure did! And not to worry, because you can use Launchers to send OTel data to the Collector instead of directly to Lightstep. To do that, you just need to:
- Change the `endpoint` value to `localhost:4317`
- Set `WithSpanExporterInsecure` to `true`
- Set `WithMetricExporterInsecure` to `true`
- Remove the `WithAccessToken` setting (since this is handled by the OTel Collector’s configuration YAML file)
Which means that your code would look like this:
var (
tracer trace.Tracer
serviceName = "test-go-server-launcher"
serviceVersion = "0.1.0"
endpoint = "ingest.lightstep.com:443"
)
func newLauncher() launcher.Launcher {
otelLauncher := launcher.ConfigureOpentelemetry(
launcher.WithServiceName(serviceName),
launcher.WithServiceVersion(serviceVersion),
launcher.WithSpanExporterInsecure(true),
launcher.WithSpanExporterEndpoint(endpoint),
launcher.WithMetricExporterEndpoint(endpoint),
launcher.WithMetricExporterInsecure(true),
launcher.WithPropagators([]string{"tracecontext", "baggage"}),
launcher.WithResourceAttributes(map[string]string{
string(semconv.ContainerNameKey): "my-container-name",
}),
)
return otelLauncher
}
3- Initialize the Launcher
All we need to do is call our `newLauncher` function, and we’re done!
func main() {
otelLauncher := newLauncher()
defer otelLauncher.Shutdown()
tracer = otel.Tracer(serviceName)
// More code here
...
}
Overall, the Launcher approach requires less code, compared to the other two sans-Launcher approaches.
Try it!
Let's see the code example in action. In this example, we will run a server with a `/ping` endpoint. The server will send OTel data to Lightstep using the Go Launcher through the Collector, over gRPC. We will hit the endpoint using `curl`.
1- Clone the repo
git clone git@github.com:lightstep/opentelemetry-examples.git
2- Run the Collector
Open up a new terminal window. First, you'll need to edit the collector.yaml file. Be sure to replace `${LIGHTSTEP_ACCESS_TOKEN}` with your own Lightstep Access Token.
Now you can start up the Collector:
cd opentelemetry-examples/collector/vanilla
docker run -it --rm -p 4317:4317 -p 4318:4318 \
-v $(pwd)/collector.yaml:/otel-config.yaml \
--name otelcol otel/opentelemetry-collector-contrib:0.53.0 \
"/otelcol-contrib" \
"--config=otel-config.yaml"
Note: This may take a little while if it's the first time you're pulling the Collector image.
Sample output:
3- Open up a new terminal window and run the server program
cd opentelemetry-examples/go/launcher/server
go run server.go
4- Open a third terminal window and hit the endpoint
curl http://localhost:8081/ping
Side-by-side sample output from the server and the `curl` command:
And your Collector output should look something like this:
5- See it in Lightstep
Note: Want to run the direct version using the Launcher? Simply skip Step 2. In Step 3, set the `LS_ACCESS_TOKEN` environment variable (`export LS_ACCESS_TOKEN=<your_access_token>`, where `<your_access_token>` is your own Lightstep Access Token), and replace `go run server.go` with `go run server-otlp.go`.
Gotchas
While I was messing around with each of the 3 approaches, I encountered a few gotchas, so I thought I’d share them here.
1- gRPC Debugging
gRPC is the bane of my existence, especially when I see that lovely `context deadline exceeded` message. It makes my blood boil. Fortunately, my OTel friends at Lightstep told me about two nice little flags that make gRPC debugging a little easier:
export GRPC_GO_LOG_VERBOSITY_LEVEL=99
export GRPC_GO_LOG_SEVERITY_LEVEL=info
Set these beauties, and you’ll know relatively quickly if you can’t connect to your gRPC endpoint. This is what a successful connection looks like:
2022/07/26 16:28:36 Using default LS endpoint ingest.lightstep.com:443
2022/07/26 16:28:36 INFO: [core] [Channel #1] Channel created
2022/07/26 16:28:36 INFO: [core] [Channel #1] original dial target is: "ingest.lightstep.com:443"
2022/07/26 16:28:36 INFO: [core] [Channel #1] parsed dial target is: {Scheme:ingest.lightstep.com Authority: Endpoint:443 URL:{Scheme:ingest.lightstep.com Opaque:443 User: Host: Path: RawPath: ForceQuery:false RawQuery: Fragment: RawFragment:}}
2022/07/26 16:28:36 INFO: [core] [Channel #1] fallback to scheme "passthrough"
2022/07/26 16:28:36 INFO: [core] [Channel #1] parsed dial target is: {Scheme:passthrough Authority: Endpoint:ingest.lightstep.com:443 URL:{Scheme:passthrough Opaque: User: Host: Path:/ingest.lightstep.com:443 RawPath: ForceQuery:false RawQuery: Fragment: RawFragment:}}
2022/07/26 16:28:36 INFO: [core] [Channel #1] Channel authority set to "ingest.lightstep.com:443"
2022/07/26 16:28:36 INFO: [core] [Channel #1] Resolver state updated: {
"Addresses": [
{
"Addr": "ingest.lightstep.com:443",
"ServerName": "",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}
],
"ServiceConfig": null,
"Attributes": null
} (resolver returned new addresses)
2022/07/26 16:28:36 INFO: [core] [Channel #1] Channel switches to new LB policy "pick_first"
2022/07/26 16:28:36 INFO: [core] [Channel #1 SubChannel #2] Subchannel created
2022/07/26 16:28:36 Using default service name test-go-client-grpc
2022/07/26 16:28:36 Using default service version 0.1.0
2022/07/26 16:28:36 Using default environment dev
2022/07/26 16:28:36 INFO: [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING
2022/07/26 16:28:36 INFO: [core] [Channel #1 SubChannel #2] Subchannel picks a new address "ingest.lightstep.com:443" to connect
2022/07/26 16:28:36 INFO: [core] pickfirstBalancer: UpdateSubConnState: 0x14000380100, {CONNECTING <nil>}
2022/07/26 16:28:36 INFO: [core] [Channel #1] Channel Connectivity change to CONNECTING
Get "http://localhost:8081/ping": dial tcp [::1]:8081: connect: connection refused
2022/07/26 16:28:37 INFO: [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY
2022/07/26 16:28:37 INFO: [core] pickfirstBalancer: UpdateSubConnState: 0x14000380100, {READY <nil>}
2022/07/26 16:28:37 INFO: [core] [Channel #1] Channel Connectivity change to READY
2- Debug Spans (Launchers only)
If you’re using a Launcher and your Spans are not showing up in Lightstep, you can set the `OTEL_LOG_LEVEL` environment variable before running your code:
export OTEL_LOG_LEVEL=debug
go run <your_app>.go
Your debug output looks something like this:
2022/07/26 15:39:10 debug logging enabled
2022/07/26 15:39:10 configuration
2022/07/26 15:39:10 {
"SpanExporterEndpoint": "localhost:4317",
"SpanExporterEndpointInsecure": true,
"ServiceName": "test-go-client-launcher",
"ServiceVersion": "0.1.0",
"Headers": null,
"MetricExporterEndpoint": "localhost:4317",
"MetricExporterEndpointInsecure": true,
"MetricExporterTemporalityPreference": "cumulative",
"MetricsEnabled": true,
"LogLevel": "debug",
"Propagators": [
"tracecontext",
"baggage"
],
...
}
Which approach is best?
When I first started on my OTel journey in 2021 (in my pre-Lightstep days), I sent OTel data to my Observability back-end by way of the OTel Collector. To me, this was a no-brainer, because the Collector can:
- Ingest data from multiple sources (including applications and infrastructure metrics)
- Tack on/remove metadata
- Mask data
- Sample data
- Send data to multiple back-ends at the same time (great if you were evaluating different vendors or transitioning from one vendor to another)
I’m personally a HUGE fan of the Collector, and I stand by my statement that it is good practice to run an OTel Collector in Pre-Prod/Prod environments to send your OpenTelemetry data to an Observability back-end.
BUT...I have to admit that I was thinking about this problem from more of an operational perspective, rather than from a developer’s perspective.
The thing is, when you’re getting started with OTel, chances are, you’re starting from zero. Which means that you’re already having to figure out this whole instrumentation bit. That’s already stressful enough. Add trying to stand up a Collector on top of it all, and you’ve already got too many moving parts and a likely very overwhelmed developer...even if you run it with the simplest configuration (i.e. locally, via Docker). That, and, do you really need to run a Collector when you’re just doing local development? It’s probably more effort than it’s worth.
BUT...I also learned from personal experience that connecting to an Observability back-end through the Direct approach was a royal pain in the arse. Documentation was veeeery sparse. Examples were incomplete. Needless to say, it was a very trying journey. And I had difficulties with using both HTTP and gRPC.
So this all begs the question: what’s a good, easy way to instrument your code and send the resulting data to an Observability back-end? Well, this is where the Launchers come into play! They give you the best of both worlds: you can connect directly to your Observability back-end, OR you can connect via the OTel Collector. In addition, the Launchers don’t restrict you to using Lightstep as your Observability back-end, because:
- If you connect to a Collector from the Launcher, the Collector automagically gives you the ability to send to multiple Observability back-ends
- You can also connect directly from the Launcher to any non-Lightstep Observability back-end that accepts OTel data in OTLP format
I have to admit that before I used the Go Launcher, I was quite skeptical about it. After all, it’s not vanilla OTel, which made me think..."Uh-oh...vendor lock-in! Isn’t that what OTel is trying to avoid?"
But two things changed my mind about it. First, the fact that you’re not locked into a specific vendor (see above). Secondly, our friends at Honeycomb have been working to bring Launchers to the community, as per work done here, so chances are, launchers may be (vanilla) OTel’s future!
My conclusion: the Launcher wins, due to its flexibility and overall simplicity compared to its counterparts.
Final Thoughts
We’ve learned about how we can send our OTel data to Lightstep in three different ways:
- Direct from our application
- Via the OTel Collector
- Using Launchers, which can send data directly to Lightstep or by way of the Collector
In non-dev setups, using a Collector is the preferred way to send data to your Observability back-end; however, if you’re just getting started with OTel, sending OTel data directly to your Observability back-end makes the most sense, because you have to deal with fewer moving parts.
That said, using vanilla OTel to do either of the above can be a bit overwhelming, which is where Launchers come in: they abstract away a bunch of that connectivity stuff, thereby making it easier to send data to your Observability back-end, whether it’s direct or by way of a Collector.
Whew! That was a lot to think about and take in! Give yourself a pat on the back, because we’ve covered a LOT! Now, please enjoy this picture of some goats.
Peace, love, and code. 🦄 🌈 💫
Got questions about OTel instrumentation with Golang? Contact us! Connect with us through e-mail or hit me up on Twitter. Hope to hear from y’all!