This post is intended to be a guide to understanding and working on Prometheus instrumentation using the Go client. I did not find any existing content on this comprehensive enough, so I decided to pen down my lessons from recent trial-and-error development.
End Goal
Developing an exporter for weather data fetched using the tomorrow.io API.
All of the code referred to in the post is from this repo.
Prerequisites
- A working installation of Prometheus running on port 9090
- Go 1.15+
- An API key from tomorrow.io
- A basic understanding of metrics and exporters - this is an article I recommend. I will be covering the practical aspects of developing and using them.
Querying the weather API
Our GET request will query the API for the temperature in metric units at specific coordinates, with a 1-hour interval between data points.
The GET request can be made using NewRequest from net/http. Replace the "APIKEY" string with your API key.
url := fmt.Sprintf("https://api.tomorrow.io/v4/timelines?location=%f,%f&fields=temperature&timesteps=%s&units=%s", 73.98529171943665, 40.75872069597532, "1h", "metric")
req, err := http.NewRequest("GET", url, nil)
if err != nil {
    return Response{}, errors.New("error in GET request")
}
req.Header.Add("apikey", "APIKEY")
res, err := http.DefaultClient.Do(req)
if err != nil {
    return Response{}, err
}
defer res.Body.Close()
Since the response data is structured JSON, we create structs with specific fields to unmarshal the response. This is done using the encoding/json package.
For the struct fields, look at this part of the repo. In the interests of brevity, I will not be adding those snippets here.
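For context, here is a minimal sketch of what those structs might look like, inferred from the field accesses in the snippets below; the JSON tags assume the tomorrow.io v4 timelines response shape and may differ from the repo:

type Response struct {
    Data struct {
        Timestep []struct {
            TempVal []struct {
                Values struct {
                    Temp float64 `json:"temperature"`
                } `json:"values"`
            } `json:"intervals"`
        } `json:"timelines"`
    } `json:"data"`
}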
body, err := ioutil.ReadAll(res.Body)
if err != nil {
    return Response{}, errors.New("error reading response")
}
var dat Response
if err := json.Unmarshal(body, &dat); err != nil {
    return Response{}, errors.New("error unmarshalling JSON")
}
Developing a custom exporter
I am going to implement the exporter in two ways: asynchronous and synchronous.
Jobs and Registries
Many target endpoints serving the same purpose combine to form a job. A registry holds the set of metrics to be exposed; the Go client provides a default registry, and you can also create your own, as we will in the synchronous exporter.
Serving Metrics
Metrics can be 'served', i.e. exposed on a port of your choice. Here, I have chosen port 2112, with '/metrics' as the endpoint.
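A minimal sketch of what that looks like with the promhttp handler (this serves the default registry; the synchronous exporter below swaps in a custom one):

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // Expose the registered metrics at /metrics on port 2112.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":2112", nil))
}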
- Asynchronous
In this type of exporter, the function that collects the metrics runs asynchronously in its own goroutine, independent of scrapes (a fuller sketch of it appears at the end of this subsection).
go recordMetrics()
The two metrics we will implement are gauges, opsProcessed and tempCelsius. Here is the initialisation for tempCelsius:
var tempCelsius = promauto.NewGauge(
    prometheus.GaugeOpts{
        Name: "current_temperature_api_celsius",
        Help: "Current temperature",
    },
)
Name should be a fully qualified Prometheus metric name, and Help is the description that appears on hovering over the metric in the UI.
Since we are using promauto to initialise them, we do not need to explicitly add these to the registry.
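For comparison, a sketch of the equivalent without promauto, where registration on the default registry is explicit:

var tempCelsius = prometheus.NewGauge(
    prometheus.GaugeOpts{
        Name: "current_temperature_api_celsius",
        Help: "Current temperature",
    },
)

func init() {
    // This is the step promauto performs for us automatically.
    prometheus.MustRegister(tempCelsius)
}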
Each call to the API returns multiple temperatures, each of which will be treated as a separate data point. To add these to the gauge, we will use gauge.Set(dataPoint) as shown (note that Set overwrites the current value, so after the loop the gauge holds the most recent data point):
for _, interval := range dat.Data.Timestep[0].TempVal {
    tempCelsius.Set(interval.Values.Temp)
}
The above code snippet is an example of 'direct instrumentation', where the same metric objects are updated in place rather than being re-created on every scrape.
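Putting the pieces together, recordMetrics might look something like this sketch; getWeather is an assumed helper wrapping the API call from earlier, and the 30-second polling interval is arbitrary:

func recordMetrics() {
    for {
        // getWeather is assumed to wrap the GET request shown earlier.
        dat, err := getWeather()
        if err == nil {
            for _, interval := range dat.Data.Timestep[0].TempVal {
                tempCelsius.Set(interval.Values.Temp)
            }
            // Track how many API calls have been processed.
            opsProcessed.Inc()
        }
        time.Sleep(30 * time.Second)
    }
}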
- Synchronous
In this type of exporter, the metrics are collected at scrape time, when Prometheus hits the '/metrics' endpoint. We first create a registry and register the standard process and Go runtime metrics.
reg := prometheus.NewRegistry()
reg.MustRegister(
    collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
    collectors.NewGoCollector(),
)
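Because we are now using a custom registry rather than the default one, the metrics are served with promhttp.HandlerFor instead of promhttp.Handler(); a minimal sketch:

// Serve only the metrics registered on reg.
http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
log.Fatal(http.ListenAndServe(":2112", nil))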
The metrics we will be monitoring are the temperature fetched using the weather API, the time taken for those requests, and the humidity percentage.
This manner of metric exporting involves 'collectors'. A collector simply represents a set of metrics. In contrast to 'direct instrumentation', it involves creating new metric instances on every scrape with MustNewConstMetric.
Let's first describe the temperature and humidity metrics. This is the description for temperature:
tempDesc = prometheus.NewDesc(
    "temperature_city_fahrenheit",
    "temperature of a city in fahrenheit",
    []string{"city"}, nil,
)
The first parameter is the metric name; this is the name used when querying the metric from the UI. The second is the help string, a brief description of the metric. The third lists the variable label names (here, "city"), and the last holds constant labels (nil here).
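The humidity description is analogous; this sketch assumes the name humidityDesc and an illustrative metric name (see the repo for the actual definition):

humidityDesc = prometheus.NewDesc(
    "humidity_city_percent",
    "humidity percentage of a city",
    []string{"city"}, nil,
)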
A struct CityStats is defined. Based on this struct, a CityStatsCollector type and a NewCityStats constructor are defined; CityStatsCollector satisfies the prometheus.Collector interface by implementing the Describe and Collect methods.
func (cc CityStatsCollector) Collect(ch chan<- prometheus.Metric) {
    tempByCity, humidityByCity := cc.CityStats.TemperatureAndHumidity()
    for city, temp := range tempByCity {
        ch <- prometheus.MustNewConstMetric(
            tempDesc,
            prometheus.GaugeValue, // a temperature can go down, so GaugeValue rather than CounterValue
            float64(temp),
            city,
        )
    }
    // The analogous loop for humidity, assuming the humidityDesc sketched above.
    for city, humidity := range humidityByCity {
        ch <- prometheus.MustNewConstMetric(
            humidityDesc,
            prometheus.GaugeValue,
            float64(humidity),
            city,
        )
    }
}
Here, the TemperatureAndHumidity function returns the results of the API call. Each of those is sent on the metric channel ch using MustNewConstMetric.
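For completeness, the type must also implement Describe to satisfy prometheus.Collector; a minimal sketch reusing the descriptions from above:

func (cc CityStatsCollector) Describe(ch chan<- *prometheus.Desc) {
    ch <- tempDesc
    ch <- humidityDesc
}

The collector is then registered on the custom registry with reg.MustRegister, just like the process and Go collectors earlier.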
Setup
Let's understand the structure of the Prometheus config file used with such exporters.
We will be working with a very basic version of the file.
First, we have the global rules. Here, we will focus on 'scrape_interval'.
Scrape interval refers to the time period between two consecutive scrapes. Scraping is the process of gathering data for all the metrics from specific endpoints, or 'targets'.
global:
  scrape_interval: 15s
Next up, we have the scrape configs. Each job here should be assigned a unique 'job_name'.
Job configs can be specified in two ways: statically, or dynamically using one of the available service discovery mechanisms.
For now, we will be focusing on the static configs.
scrape_configs:
  - job_name: myapp
    static_configs:
      - targets:
          - localhost:2112
The 'targets' mentioned above are the monitored URL endpoints assigned to a job; Prometheus scrapes metrics from each of them.
Run Prometheus as usual with the config file specs discussed above.
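For example, assuming the config above is saved as prometheus.yml:

./prometheus --config.file=prometheus.yml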
To run either the synchronous or asynchronous exporter, use go run sync.go temp.go or go run async.go temp.go.
Hopefully this was a good primer on getting started with exporting custom metrics. Would love to discuss and hear feedback in the comments.