Adriana Villela

How to Convert Kubernetes Manifests into Nomad Jobspecs

Ever since I started exploring Nomad, one of the things that I’ve enjoyed doing is taking Docker Compose files and Kubernetes manifests and translating them into HashiCorp Nomad jobspecs. I did it for Temporal back in March 2022, and also for an early version of Tracetest in the summer of 2022.

In my latest Nomadification Project (TM), I got the OpenTelemetry Demo App to run on Nomad (with HashiQube, of course). To do this, I used the OpenTelemetry Demo App Helm Chart as my guide. In doing this, and other Nomadifications, I realized that I’ve never actually explained how to convert Kubernetes manifests into Nomad jobspecs.

So, as you may have guessed, today I will go through the process of converting Kubernetes manifests to Nomad jobspecs, so that if you ever find yourself thinking, “Gee, it would be nice to see this Kubernetes stuff running on Nomad,” you now have a process!

I’ll use examples from the work I did recently in converting the OpenTelemetry Demo App Helm Charts into Nomad jobspecs to illustrate the process.

Are you ready? Let’s do this!

Manifests and Helm Charts and Jobspecs…oh my!

While I like working with Kubernetes and Nomad alike, there is one thing that I find exceedingly irritating in Kubernetes Land: a Kubernetes manifest for an app deployment is a scavenger hunt of YAML definitions for various Kubernetes objects. Nomad takes a different approach, using a single HashiCorp Configuration Language (HCL) jobspec file as a one-stop shop for defining your app. I personally find Nomad HCL a lot easier to manage, since there are fewer moving parts, and when it comes to converting Kubernetes manifests to Nomad jobspecs, having a single file to work with makes things a lot simpler.

In order to convert a Kubernetes manifest into a Nomad jobspec, we first need to start with a basic Nomad jobspec. This will serve as a template for deploying our application in Nomad.

Let’s start with our template jobspec below. Please bear in mind that this is a starting point for our conversion. After all, some services are more complex than others: some need every component below in their jobspec, while others end up with a more pared-down version.

job "<service_name>" {
  type        = "service"
  datacenters = ["dc1"]

  group "<service_name>" {
    count = 1

    network {
      mode = "host"

      port "<port_name>" {
        to = <port_number>
      }
    }

    service {
      name = "<service_name>"
      port = "<port_name>"
      tags = [<tags_here>]

      check {
        <service_check_here>
      }
    }


    task "<service_name>" {
      driver = "docker"

      config {
        image = "<image_name>"
        image_pull_timeout = "25m"
        args = [<args_go_here>]
        ports = ["<port_name>"]
      }

      restart {
        attempts = 10
        delay    = "15s"
        interval = "2m"
        mode     = "delay"
      }

      env {
          <env_vars_here>
      }      

      template {
        data = <<EOF
<env_vars_derived_from_consul>
EOF
        destination = "local/env"
        env         = true
      }

      resources {
        cpu    = 60
        memory = 650
      }

    }
  }
}

Great…so now we’ve got our jobspec template. Yay! But we need to fill in the blanks, don’t we? So...where do we start?

Since we’re going from Kubernetes to Nomad, we need to look at the application’s Kubernetes manifest. Fortunately, we can grab this info easily from the OTel Helm Charts Repo, which, as you may have guessed, has a Helm Chart for the OTel Demo App. It also contains the rendered YAML manifests available to us here.

The OpenTelemetry Demo App is made up of a number of services. The process of converting the Kubernetes manifest of each service to its corresponding Nomad jobspec is very similar, so in the interest of not boring you to death, I’ll be choosing one service to illustrate the conversion: the featureflagservice.

Conversion Process

With the Nomad jobspec template and Kubernetes manifest in hand, we are ready to begin the conversion!

NOTE: You can find the repo with all of the OpenTelemetry Demo App jobspec files here.

1- Grab the Kubernetes manifests

As I mentioned earlier, the rendered YAML manifests for the OpenTelemetry Demo App are available to us here. Since, for the purposes of this tutorial, we only care about the featureflagservice’s Kubernetes manifest, I’ve gone ahead and grabbed the manifest pertaining to the featureflagservice, which is made up of a Deployment and a Service, as shown below.

Here is the Deployment YAML:

---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-featureflagservice
  labels:
    helm.sh/chart: opentelemetry-demo-0.14.3
    app.kubernetes.io/name: example
    app.kubernetes.io/instance: example
    app.kubernetes.io/component: featureflagservice
    app.kubernetes.io/version: "1.2.1"
    app.kubernetes.io/part-of: opentelemetry-demo
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: example
      app.kubernetes.io/instance: example
      app.kubernetes.io/component: featureflagservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example
        app.kubernetes.io/instance: example
        app.kubernetes.io/component: featureflagservice
    spec:
      containers:
        - name: featureflagservice
          image: 'ghcr.io/open-telemetry/demo:v1.2.1-featureflagservice'
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 50053
            name: grpc
          - containerPort: 8081
            name: http
          env:
          - name: OTEL_SERVICE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.labels['app.kubernetes.io/component']
          - name: OTEL_K8S_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: OTEL_K8S_NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          - name: OTEL_K8S_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: FEATURE_FLAG_GRPC_SERVICE_PORT
            value: "50053"
          - name: FEATURE_FLAG_SERVICE_PORT
            value: "8081"
          - name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
            value: grpc
          - name: DATABASE_URL
            value: ecto://ffs:ffs@example-ffspostgres:5432/ffs
          - name: OTEL_EXPORTER_OTLP_ENDPOINT
            value: http://example-otelcol:4317
          - name: OTEL_RESOURCE_ATTRIBUTES
            value: service.name=$(OTEL_SERVICE_NAME),k8s.namespace.name=$(OTEL_K8S_NAMESPACE),k8s.node.name=$(OTEL_K8S_NODE_NAME),k8s.pod.name=$(OTEL_K8S_POD_NAME)
          resources:
            limits:
              memory: 175Mi

Here is the Service YAML:

---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
  name: example-featureflagservice
  labels:
    helm.sh/chart: opentelemetry-demo-0.14.3
    app.kubernetes.io/name: example
    app.kubernetes.io/instance: example
    app.kubernetes.io/component: featureflagservice
    app.kubernetes.io/version: "1.2.1"
    app.kubernetes.io/part-of: opentelemetry-demo
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 50053
      name: grpc
      targetPort: 50053
    - port: 8081
      name: http
      targetPort: 8081
  selector:
    app.kubernetes.io/name: example
    app.kubernetes.io/instance: example
    app.kubernetes.io/component: featureflagservice

Yikes! This all looks pretty overwhelming, doesn’t it? Fortunately, it’s not as scary as it looks. Don’t worry…I’ll guide you along. Let’s keep going.

2- Prepare the jobspec

With our Kubernetes YAMLs in hand, let’s go back to our jobspec template from earlier and fill in some blanks. Since we know that we’re working with the featureflagservice, I’ve gone ahead and replaced <service_name> with featureflagservice, which means that our template now looks like this:

job "featureflagservice" {
  type        = "service"
  datacenters = ["dc1"]

  group "featureflagservice" {
    count = 1

    network {
      mode = "host"

      port "<port_name>" {
        to = <port_number>
      }
    }

    service {
      name = "<service_name>"
      port = "<port_name>"
      tags = [<tags_here>]

      check {
        <service_check_here>
      }
    }


    task "featureflagservice" {
      driver = "docker"

      config {
        image = "<image_name>"
        image_pull_timeout = "25m"
        args = [<args_go_here>]
        ports = ["<port_name>"]
      }

      restart {
        attempts = 10
        delay    = "15s"
        interval = "2m"
        mode     = "delay"
      }

      env {
          <env_vars_here>
      }      

      template {
        data = <<EOF
<env_vars_derived_from_consul>
EOF
        destination = "local/env"
        env         = true
      }

      resources {
        cpu    = 60
        memory = 650
      }

    }
  }
}

NOTE: You could technically give different names to your job, task and group, such as featureflagservice-job, featureflagservice-task and featureflagservice-group (or really anything you want), but for the sake of simplicity (with a sprinkling of lack of originality), I decided to give them all the same name: featureflagservice.

Some useful terminology:

  • job is the unit of control. The job is the thing that you start, stop, and update. 
  • group is the unit of scale. The group defines how many instances you are running. 
  • task is the unit of work. The task is what you actually want to run.
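
To make that hierarchy concrete, here’s a bare-bones sketch of how the three stanzas nest (the names here are just placeholders):

job "example" {
  # The job is the unit of control: you start, stop, and update at this level
  group "example" {
    # The group is the unit of scale: count controls how many instances run
    count = 1

    task "example" {
      # The task is the unit of work: the thing that actually runs
      driver = "docker"
    }
  }
}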

3- Port definitions

The next set of blanks that we need to fill in is in the network stanza. More specifically, the <port_name> and <port_number> values in the port stanza.

If we look at the featureflagservice’s Service YAML above, you’ll notice that it exposes two ports: 50053 (gRPC) and 8081 (HTTP), per spec -> ports -> targetPort. Let’s plug these into our jobspec:

network {
  mode = "host"

  port "http" {
    to = 8081
  }
  port "grpc" {
    to = 50053
  }
}

As you can see in the snippet above, we labeled (named) our ports http and grpc. These labels allow us to refer to the ports by a human-friendly name rather than by number, which means that if one or both of the port numbers change, we only need to make the change in one place. And spoiler alert: we will be referring to those ports elsewhere in the jobspec.

NOTE: Feel free to label your ports anything you want–just make sure that it’s reasonably descriptive.
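
To preview where those labels come back into play, here is a condensed sketch of the stanzas we’ll build out in the next steps (the full versions follow below):

service {
  port = "http"                                        # referenced by label, not by number
}

task "featureflagservice" {
  config {
    ports = ["http", "grpc"]                           # the Docker driver maps the labeled ports
  }
  env {
    FEATURE_FLAG_SERVICE_PORT = "${NOMAD_PORT_http}"   # runtime variable derived from the label
  }
}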

4- Service Definition

Now that we’ve defined our ports, we need to register our services, which is done by way of the service stanza. Since we have two ports in the network stanza above, we need to define two services: one per port.

The service definition for the http port looks like this:

service {
  name = "featureflagservice-http"
  port = "http"
  tags = [
    "traefik.http.routers.otel-collector-http.rule=Host(`featureflag.localhost`)",
    "traefik.http.routers.otel-collector-http.entrypoints=web",
    "traefik.http.routers.otel-collector-http.tls=false",
    "traefik.enable=true",
  ]

  check {
    type     = "tcp"
    interval = "10s"
    timeout  = "5s"
  }
}

Noteworthy items:

  • The service name has -http appended to it, to distinguish it from the grpc service that we’ll define next.
  • The port attribute refers to the http port that we defined in the network stanza earlier.
  • The tags attribute holds the Traefik routing configuration, since this port is exposed outside of the cluster.
  • The check stanza defines a simple TCP health check against the port.

The service for the grpc port looks like this:

service {
  name = "featureflagservice-grpc"
  port = "grpc"

  check {
    type     = "tcp"
    interval = "10s"
    timeout  = "5s"
  }
}

Noteworthy items:

  • Since we’re not exposing any outside services, we don’t need the tags attribute with the Traefik configurations.
  • The port attribute refers to the grpc port that we defined in the network stanza earlier.
  • We’re doing the same health check that we did for the http port.

For additional examples of health checks, check out the Nomad documentation on service checks.
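
As an illustration, if the service exposed an HTTP health endpoint, the check could be HTTP-based instead of TCP-based. This is just a sketch; the /health path below is hypothetical and isn’t part of the featureflagservice:

check {
  type     = "http"
  path     = "/health"   # hypothetical health endpoint, for illustration only
  interval = "10s"
  timeout  = "5s"
}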

5- Task Definition

Okay…now we’re ready to define our task. Since we’re running a containerized workload, our task uses the Docker driver.

Config Stanza

Since we’re using the Docker driver, we need to tell Nomad which image to run (and which ports to expose) via the config stanza. We can grab the image name from the Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-featureflagservice
...
spec:
...
    spec:
      containers:
        - name: featureflagservice
          image: 'ghcr.io/open-telemetry/demo:v1.2.1-featureflagservice'
...

This translates to the config stanza of the featureflagservice task looking like this:

config {
  image = "ghcr.io/open-telemetry/demo:v1.2.1-featureflagservice"
  image_pull_timeout = "25m"
  ports = ["http", "grpc"]
}

A few noteworthy items:

  • The image name comes straight from the Deployment YAML’s container spec.
  • image_pull_timeout is bumped up to give Nomad extra time to pull the image, which comes in handy on slower connections.
  • ports refers to the http and grpc port labels that we defined in the network stanza earlier.
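
One more thing worth noting: as the template hinted, the config stanza also accepts args and entrypoint for images that need them. The featureflagservice doesn’t, so the overrides below are purely a hypothetical sketch:

config {
  image              = "ghcr.io/open-telemetry/demo:v1.2.1-featureflagservice"
  image_pull_timeout = "25m"
  ports              = ["http", "grpc"]

  # Hypothetical overrides, shown for illustration only
  entrypoint = ["/bin/sh", "-c"]
  args       = ["exec /app/featureflagservice"]
}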

Env Stanza

We’re not quite done with configuring our featureflagservice task. If you look at the Deployment YAML, you’ll notice that there are a number of environment variables under the env tag:

env:
  - name: OTEL_SERVICE_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels['app.kubernetes.io/component']
  - name: OTEL_K8S_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: OTEL_K8S_NODE_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: spec.nodeName
  - name: OTEL_K8S_POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: FEATURE_FLAG_GRPC_SERVICE_PORT
    value: "50053"
  - name: FEATURE_FLAG_SERVICE_PORT
    value: "8081"
  - name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
    value: grpc
  - name: DATABASE_URL
    value: ecto://ffs:ffs@example-ffspostgres:5432/ffs
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://example-otelcol:4317
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: service.name=$(OTEL_SERVICE_NAME),k8s.namespace.name=$(OTEL_K8S_NAMESPACE),k8s.node.name=$(OTEL_K8S_NODE_NAME),k8s.pod.name=$(OTEL_K8S_POD_NAME)

You can ignore the ones that start with OTEL_K8S_, as they are Kubernetes-specific; however, we do care about these:

  • OTEL_SERVICE_NAME
  • FEATURE_FLAG_GRPC_SERVICE_PORT
  • FEATURE_FLAG_SERVICE_PORT
  • OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
  • DATABASE_URL
  • OTEL_EXPORTER_OTLP_ENDPOINT
  • OTEL_RESOURCE_ATTRIBUTES

So how do we configure these in Nomad? Through the task’s env stanza, which means that our environment variables look like this:

env {
  FEATURE_FLAG_GRPC_SERVICE_PORT = "${NOMAD_PORT_grpc}"
  FEATURE_FLAG_SERVICE_PATH_ROOT = "\"/feature\""
  FEATURE_FLAG_SERVICE_PORT = "${NOMAD_PORT_http}"
  OTEL_EXPORTER_OTLP_TRACES_PROTOCOL = "grpc"
  OTEL_RESOURCE_ATTRIBUTES = "service.name=featureflagservice"
}

A few noteworthy items:

  • Rather than hard-coding the values of FEATURE_FLAG_GRPC_SERVICE_PORT and FEATURE_FLAG_SERVICE_PORT to 50053 and 8081, we’re using NOMAD_PORT_grpc and NOMAD_PORT_http. These are runtime environment variables, which follow the NOMAD_PORT_<label> naming convention. This prevents you from hard-coding the port numbers, which comes in handy if a port number changes in the network stanza for whatever reason, since you only need to change it in one spot. (See the sketch after this list for a few related runtime variables.)
  • If you look at the Deployment YAML, you’ll notice that OTEL_RESOURCE_ATTRIBUTES is set to service.name=$(OTEL_SERVICE_NAME),k8s.namespace.name=$(OTEL_K8S_NAMESPACE),k8s.node.name=$(OTEL_K8S_NODE_NAME),k8s.pod.name=$(OTEL_K8S_POD_NAME). But I only set OTEL_RESOURCE_ATTRIBUTES to service.name=featureflagservice. Why? Well, because the other attributes in the Deployment YAML were Kubernetes-related, so I left them out.
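
Here’s the sketch I mentioned above. Besides NOMAD_PORT_<label>, Nomad exposes a few related runtime variables that can come in handy; the SELF_ADDR variable below is made up, purely for illustration:

env {
  # NOMAD_PORT_<label> -> the port number for the labeled port (e.g., 8081)
  # NOMAD_IP_<label>   -> the IP address that the labeled port is bound to
  # NOMAD_ADDR_<label> -> the combined <ip>:<port> address for the labeled port
  FEATURE_FLAG_SERVICE_PORT = "${NOMAD_PORT_http}"
  SELF_ADDR                 = "${NOMAD_ADDR_http}"   # made-up variable name, for illustration only
}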

Template Stanza

Wait…but why are DATABASE_URL and OTEL_EXPORTER_OTLP_ENDPOINT missing?? Well, if you look at the Deployment YAML, you’ll notice that the values for the above two environment variables are ecto://ffs:ffs@example-ffspostgres:5432/ffs and http://example-otelcol:4317, respectively.

Which begs the question...how does this translate to Nomad-speak? example-ffspostgres and example-otelcol are the service names in Kubernetes for PostgreSQL and the OpenTelemetry Collector, respectively, so if we tried to use those same names in our jobspec definition, we’d get a big ‘ole nasty error from Nomad.

We could use the IP addresses of those services, but that’s not such a great idea: service IP addresses are bound to change, and if and when an address changes, your jobspec will fail to deploy.

What we need is a way to dynamically get a service’s IP address, given the service’s name. This is where Consul comes in. Among other things, Consul offers service discovery, which does exactly what we need.

To use Consul service-discovery, we need the following:

  1. The name of the service that we’re referencing
  2. The Nomad template stanza

The Nomad template stanza is very reminiscent of a Kubernetes ConfigMap. Per the Nomad docs, templates let you “ship configuration files that are populated from environment variables, Consul data, Vault secrets, or just general configurations within a Nomad task.” In our case, we’re using a template to query Consul for these services’ IP addresses and port numbers, so that we can populate our two remaining environment variables, DATABASE_URL and OTEL_EXPORTER_OTLP_ENDPOINT. The code for that looks like this:

template {
  data = <<EOF
{{ range service "ffspostgres-service" }}
DATABASE_URL = "ecto://ffs:ffs@{{ .Address }}:{{ .Port }}/ffs"
{{ end }}

{{ range service "otelcol-grpc" }}
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT = "http://{{ .Address }}:{{ .Port }}"
{{ end }}
EOF
  destination = "local/env"
  env         = true
}

Noteworthy items:

  • The template stanza is defined inside the task stanza.
  • The lines destination = "local/env" and env = true tell Nomad that these are environment variables.
  • The line {{ range service "ffspostgres-service" }} tells Nomad to look for a service in Consul called ffspostgres-service. Once it finds the service name, we can pull the service’s IP address and port number using {{ .Address }} and {{ .Port }}, respectively.
  • Similarly, the line {{ range service "otelcol-grpc" }} tells Nomad to look for a service called otelcol-grpc. Once it finds the service name, we can pull the service’s IP address and port number using {{ .Address }} and {{ .Port }}, respectively.

But wait...where the heck do these service names come from?? Well, remember how, when we defined our services in step 4 above, we gave each of them a name?

ffspostgres-service is the name of the PostgreSQL service. You can check out the Nomad service definition here. (Aside: Take note of the service’s command-based health check to check database connectivity.)
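
To give you a flavor of what a command-based check looks like, here’s a minimal sketch of a script check for a Postgres task. The task name, port label, and pg_isready arguments are assumptions for illustration, not necessarily what the actual jobspec uses:

service {
  name = "ffspostgres-service"
  port = "postgres"              # assumed port label

  check {
    type     = "script"
    task     = "ffspostgres"     # assumed task name; script checks run inside this task's container
    command  = "pg_isready"
    args     = ["-U", "ffs", "-d", "ffs", "-h", "localhost"]
    interval = "10s"
    timeout  = "5s"
  }
}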

Similarly, otelcol-grpc is the name of the gRPC service of the OpenTelemetry Collector. You can check out the service definition here.

For more info on Consul service discovery, check out this HashiCorp discussion forum. In addition, Nomad now has native service discovery sans Consul. For more info, check out docs here.
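
If you go the native service discovery route, the pattern looks very similar. Here’s a hedged sketch, assuming the upstream services were registered with provider = "nomad" and the same service names:

service {
  name     = "ffspostgres-service"
  port     = "postgres"
  provider = "nomad"   # register with Nomad's built-in catalog instead of Consul
}

template {
  data = <<EOF
{{ range nomadService "ffspostgres-service" }}
DATABASE_URL = "ecto://ffs:ffs@{{ .Address }}:{{ .Port }}/ffs"
{{ end }}
EOF
  destination = "local/env"
  env         = true
}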

For an example of using the template stanza for configuration files, check out the OpenTelemetry Collector’s jobspec here.
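
The main difference when rendering a configuration file instead of environment variables is that you drop env = true and point destination at the file the task should read. The snippet and path below are made up for illustration; check the actual jobspec linked above for the real thing:

template {
  data = <<EOF
receivers:
  otlp:
    protocols:
      grpc:
EOF
  destination = "local/config/otel-collector-config.yaml"  # rendered into the task's local directory
  change_mode = "restart"                                  # restart the task when the rendered file changes
}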

Restart Rules

Unlike Docker Compose, Nomad doesn’t let you specify dependencies between services. So, to make sure that Service X doesn’t die on you because it depends on Service Y, which hasn’t started yet, you can put a restart policy in place. Below is the restart policy that I configured for the featureflagservice:

restart {
  attempts = 10
  delay    = "15s"
  interval = "2m"
  mode     = "delay"
}

The above restart policy states that Nomad will attempt up to 10 restarts within a 2-minute interval, waiting 15 seconds between attempts. What happens after those 10 attempts is dictated by the mode attribute. By default, mode is set to fail, which means that if the job still hasn’t started successfully, Nomad fails the deployment and the job is dead. That’s not what we want, so instead we set mode to delay. This tells Nomad to wait out the interval and then try another 10 restarts, and the cycle continues until the job finally starts up successfully.

Resource Allocations

If you follow my writings on Nomad, you’ll know that I am a HUGE fan of using HashiQube for running a Hashi environment on my local machine. This, of course, means that I have way less computing power than if I were running Nomad in a datacenter, which in turn means that I have to be very mindful of the resources that I use, both CPU and memory.

To get the right values for CPU and memory, I had to play around a little. I started by deploying the jobspecs without any resource allocations, and then checked the jobs in Nomad to see whether I had over- or under-allocated resources.

For memory utilization, I looked at the resources consumed under the service’s allocation dashboard:

Screen capture of the Nomad UI, showing CPU and memory utilization for a given allocation in Nomad. The graphs for CPU and memory utilization have a red box around them.

If you look at the above screen capture for the featureflagservice, you can see that I’m using about 60% of the memory that I allocated to this jobspec, which is pretty decent. If I deploy a service and see that it’s getting close to 100% memory usage (anything at 80% or above), I bump up the amount of memory allocated to it.

If you prefer the command line, you can run:

export ALLOCATION_ID=$(nomad job allocs -json featureflagservice | jq -r '.[0].ID')
nomad alloc status -stats $ALLOCATION_ID

Sample output:

...
Task "featureflagservice" is "running"
Task Resources:
CPU       Memory           Disk     Addresses
0/55 MHz  151 MiB/250 MiB  300 MiB  

Memory Stats
Cache  Swap  Usage
0 B    0 B   151 MiB

CPU Stats
Percent  Throttled Periods  Throttled Time
2.89%    0                  0
...

As you can see from the printout above, CPU utilization is at 0 out of 55 MHz, and memory utilization is at 151 MiB out of 250 MiB.

For CPU utilization, I look at Nomad’s Topology dashboard.

Screen capture of the Nomad UI showing the Topology view. This shows 75% of the 9.28 GB of RAM allocated to Nomad in use, and 60% of the 2 GHz of compute power allocated to Nomad in use.

I can see that across all of my services (all of the OTel Demo App jobspecs), I am using a grand total of 1.21 GHz of CPU out of my allotted 2 GHz (if you’re curious, I configured this setting here in HashiQube). By looking at a service’s CPU utilization in the allocation’s Resource Utilization dashboard, and at how much compute power I have in the Topology dashboard, I can tune each service’s cpu value so that I don’t exhaust my allocated resources. As a general rule of thumb, I like to make sure that all of my services together use 60-75% of the allotted resources.

So, with all that in mind, below are my resources settings for the featureflagservice, where cpu is measured in MHz, and memory is measured in MiB (mebibytes).

resources {
  cpu    = 55
  memory = 250
}

6- The Final Product!

Now that we’ve got all of our pieces in place, our final jobspec looks like this:

job "featureflagservice" {
  type        = "service"
  datacenters = ["dc1"]

  group "featureflagservice" {
    count = 1

    network {
      mode = "host"

      port "http" {
        to = 8081
      }
      port "grpc" {
        to = 50053
      }
    }

    service {
      name = "featureflagservice-http"
      port = "http"
      tags = [
        "traefik.http.routers.featureflagservice.rule=Host(`feature.localhost`)",
        "traefik.http.routers.featureflagservice.entrypoints=web",
        "traefik.http.routers.featureflagservice.tls=false",
        "traefik.enable=true",
      ]

      check {
        type     = "tcp"
        interval = "10s"
        timeout  = "5s"
      }
    }

    service {
      name = "featureflagservice-grpc"
      port = "grpc"

      check {
        type     = "tcp"
        interval = "10s"
        timeout  = "5s"
      }
    }

    task "featureflagservice" {
      driver = "docker"

      config {
        image = "otel/demo:v1.1.0-featureflagservice"
        image_pull_timeout = "10m"
        ports = ["http", "grpc"]
      }

      restart {
        attempts = 10
        delay    = "15s"
        interval = "2m"
        mode     = "delay"
      }

      env {
        FEATURE_FLAG_GRPC_SERVICE_PORT = "${NOMAD_PORT_grpc}"
        FEATURE_FLAG_SERVICE_PATH_ROOT = "\"/feature\""
        FEATURE_FLAG_SERVICE_PORT = "${NOMAD_PORT_http}"
        OTEL_EXPORTER_OTLP_TRACES_PROTOCOL = "grpc"
        OTEL_RESOURCE_ATTRIBUTES = "service.name=featureflagservice"
      }

      template {
        data = <<EOF
{{ range service "ffspostgres-service" }}
DATABASE_URL = "ecto://ffs:ffs@{{ .Address }}:{{ .Port }}/ffs"
{{ end }}

{{ range service "otelcol-grpc" }}
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT = "http://{{ .Address }}:{{ .Port }}"
{{ end }}
EOF
        destination = "local/env"
        env         = true
      }

      resources {
        cpu    = 55
        memory = 250
      }

    }
  }
}

Ta-da!! 🎉

Final Thoughts

Whew! We covered a lot today! At the end of the day, I hope that this shows you that converting a Kubernetes manifest to a Nomad jobspec is not rocket science! It just takes a little bit of knowledge and patience.

Although this was by no means an exhaustive conversion, I hope that this little tutorial has given you the confidence to go from, “I wish that there was an example of how to run this on Nomad,” to, “I can get this to run in Nomad myself!”

I shall now reward you with a picture of Phoebe and our dearly departed Bunny, peering out of their cage.

Two rats peering out of a cage: a white rat on the left, and a light brown and white rat on the right.

Peace, love, and code. 🦄 🌈 💫

Peace sign, heart, and terminal

Got questions about Observability and/or OpenTelemetry? Want to collaborate on the OTel Demo App for Nomad? Talk to me! Feel free to connect through e-mail, or hit me up on Mastodon or LinkedIn. Hope to hear from y’all!
