
Brian Kopp

Use External Metrics to Scale Workloads in Kubernetes

It's a pretty common requirement to scale workers based on some external value; the classic example is scaling queue consumers based on the depth of a queue. If you're on Kubernetes, you're almost certainly using Prometheus, and if you're using custom metrics, it's a good guess that you're using prometheus-adapter. This article lays out how to wire those pieces together to do exactly that.

TLDR - here's the repo.

Acquire a Kubernetes Cluster

I'm using minikube, so a quick minikube start does the trick.

Install Things

I'm using Helm 3.

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
kubectl create namespace monitoring
helm upgrade -i prom-op stable/prometheus-operator -n monitoring -f ./manifests/prom-op.yaml
helm upgrade -i prom-adap stable/prometheus-adapter -n monitoring -f ./manifests/prom-adapt.yaml

The prom-op.yaml and prom-adapt.yaml values files are pretty lightweight.

# prom-op.yaml
prometheus:
  service:
    type: NodePort # Because I want to access from my computer
# prom-adapt.yaml
prometheus:
  # they need to be friends
  url: http://prom-op-prometheus-operato-prometheus.monitoring.svc

You Need Applications

I'm making two really basic Node.js apps: one that produces an external metric, and one that literally does nothing. Here's the metric producer. You can look at the src/noop folder in the repo to see how it does nothing, in case you're interested in that sort of thing!

const express = require('express');

const app = express();

// serve a static Prometheus counter named foobar in the text exposition format
app.get('/metrics', (_, res) => {
    const metricsText = '# HELP foobar This is just an example of a metric' +
        '\n# TYPE foobar counter' +
        '\nfoobar 9000';
    res.send(metricsText);
});

console.log('starting server...');
const server = app.listen(3000, (err) => {
    if (err) {
        console.error('error starting server on 3000', err);
        process.exit(1);
    }
    console.log('started server on 3000');
});

const stop = () => {
    console.log('received stop signal, stopping...');
    server.close((err) => {
        if (err) {
            console.error('error stopping server', err);
            process.exit(1);
        }
        console.log('successfully stopped server');
        process.exit(0);
    });
};
process.on('SIGTERM', stop);
process.on('SIGINT', stop);

Throw these up in a registry so your Kubernetes cluster can pull them.

cd src/producer
docker build . -t briankopp/metrics-producer:0.0.1
docker push briankopp/metrics-producer:0.0.1

Do similar things for the noop server.
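For the curious, a do-nothing worker really is as small as it sounds. Here's a minimal sketch of the idea; the actual src/noop code in the repo may differ:

// a minimal sketch of a do-nothing worker; the real src/noop may look different
console.log('noop server starting, settling in to do nothing...');

// keep the process alive so kubernetes has something to run
const interval = setInterval(() => {}, 60 * 1000);

const stop = () => {
    console.log('received stop signal, stopping...');
    clearInterval(interval);
    process.exit(0);
};
process.on('SIGTERM', stop);
process.on('SIGINT', stop);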

Deploy to Kubernetes

In the repo, there are some basic yaml files which deploy the producer and consumer apps in their own namespaces.

kubectl apply -f ./manifests/producer/namespace.yaml
kubectl apply -f ./manifests/producer/app.yaml
kubectl apply -f ./manifests/consumer/namespace.yaml
kubectl apply -f ./manifests/consumer/app.yaml

The producer/app.yaml includes a ServiceMonitor, which tells prometheus-operator to scrape the producer's /metrics endpoint. The consumer/app.yaml includes a horizontal pod autoscaler that scales on the external foobar metric; sketches of both follow.
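If you want to write these yourself rather than copy them from the repo, they look roughly like this. Treat the names, labels, port names, and target values here as assumptions based on this walkthrough rather than exact copies of the repo's manifests.

# producer/app.yaml (excerpt) - ServiceMonitor so prometheus-operator scrapes the producer
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: producer
  namespace: producer
  labels:
    release: prom-op        # must match whatever the operator's serviceMonitorSelector expects
spec:
  selector:
    matchLabels:
      app: producer
  endpoints:
  - port: http              # name of the Service port exposing the app
    path: /metrics
# consumer/app.yaml (excerpt) - HPA driven by the external foobar metric
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: consumer
  namespace: consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: foobar        # the external metric served by prometheus-adapter
      target:
        type: AverageValue
        averageValue: "1k"  # scale so each pod averages 1000

The AverageValue target divides the metric across replicas, which is why the HPA output further down settles at nine pods and reports 1k/1k (avg): 9000 split across 9 replicas is 1000 per pod.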

Plumb the metrics

External metrics don't automatically come through in prometheus-adapter. You have to provide a config telling prometheus-adapter which Prometheus series to expose and how to query them: seriesQuery controls which series get discovered, and metricsQuery is the PromQL that runs when the metric is requested. Add this to the prom-adapt.yaml file.

rules:
  external:
  - seriesQuery: 'foobar'
    resources:
      overrides:
        namespace: { resource: namespace }
        service: { resource: service }
        pod: { resource: pod }
    metricsQuery: 'foobar'

Ship it!

Upgrade the prometheus-adapter Helm release with the updated values, wait a few minutes, and then check that your metrics are coming through.

> k get --raw /apis/external.metrics.k8s.io/v1beta1/
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"external.metrics.k8s.io/v1beta1","resources":[{"name":"foobar","singularName":"","namespaced":true,"kind":"ExternalMetricValueList","verbs":["get"]}]}

Check out the horizontal pod autoscaler to make sure it's working for the consumer.

> k get hpa -n consumer
NAME       REFERENCE             TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
consumer   Deployment/consumer   1k/1k (avg)   1         10        9          47m

Wrapping Up

Sometimes it's a big pain to get these metrics plumbed through to your horizontal pod autoscalers. Hopefully this helps get you up and running!
