Alex Spinov
HashiCorp Nomad Has a Free API: Simple and Flexible Workload Orchestrator

Nomad is a simple and flexible workload orchestrator that deploys containers, VMs, binaries, and batch jobs. Unlike Kubernetes, it handles non-containerized workloads natively and is much simpler to operate.

What Is Nomad?

Nomad by HashiCorp is a single binary that schedules and orchestrates workloads across a fleet of machines. It supports Docker containers, Java applications, raw binaries, and batch jobs with a unified workflow.

Key Features:

  • Multi-workload: containers, VMs, binaries, batch
  • Single binary, no external dependencies
  • Multi-datacenter and multi-region
  • Bin packing and spread scheduling
  • Service discovery (built-in + Consul)
  • Canary deployments and rollbacks
  • CSI volume support
  • REST API and CLI

Quick Start

# Install
brew install nomad

# Start dev agent
nomad agent -dev

# Web UI at http://localhost:4646
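Once the dev agent is up, you can confirm it is healthy over the same API the rest of this post uses. A minimal sketch (the helper name and the `session` parameter are mine, added as a test seam, not part of the `requests` API):

```python
import requests

NOMAD = "http://localhost:4646/v1"

def list_ready_nodes(session=requests):
    """Return the names of client nodes reporting 'ready'."""
    nodes = session.get(f"{NOMAD}/nodes").json()
    return [n["Name"] for n in nodes if n["Status"] == "ready"]

# With `nomad agent -dev` running, this should list one node: the agent itself.
# print(list_ready_nodes())
```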

Nomad Job File

# webapp.nomad.hcl
job "webapp" {
  datacenters = ["dc1"]
  type = "service"

  group "web" {
    count = 3

    network {
      port "http" { to = 8080 }
    }

    service {
      name = "webapp"
      port = "http"
      check {
        type     = "http"
        path     = "/health"
        interval = "10s"
        timeout  = "3s"
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "myapp:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }

      env {
        DB_HOST = "prod-db.example.com"
      }
    }
  }

  update {
    max_parallel     = 1
    canary           = 1
    min_healthy_time = "30s"
    healthy_deadline = "5m"
    auto_revert      = true
  }
}
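The HTTP API expects jobs in Nomad's JSON format, not HCL, so a file like the one above has to be converted first. The agent can do the conversion for you via the `/v1/jobs/parse` endpoint. A sketch (the helper names are mine, not Nomad's):

```python
import requests

NOMAD = "http://localhost:4646/v1"

def parse_payload(hcl_text, canonicalize=True):
    """Request body for Nomad's /v1/jobs/parse endpoint."""
    return {"JobHCL": hcl_text, "Canonicalize": canonicalize}

def hcl_to_job_json(hcl_text, session=requests):
    """Ask the agent to parse HCL into the JSON job format."""
    resp = session.post(f"{NOMAD}/jobs/parse", json=parse_payload(hcl_text))
    resp.raise_for_status()
    return resp.json()

# with open("webapp.nomad.hcl") as f:
#     job_spec = hcl_to_job_json(f.read())
```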

Nomad API: Programmatic Job Management

import requests

NOMAD = "http://localhost:4646/v1"

# List all jobs
jobs = requests.get(f"{NOMAD}/jobs").json()
for job in jobs:
    print(f"Job: {job['Name']}, Status: {job['Status']}, Type: {job['Type']}")

# Get job details
job = requests.get(f"{NOMAD}/job/webapp").json()
print(f"Job: {job['Name']}, Version: {job['Version']}")

# Get allocations (running instances)
allocs = requests.get(f"{NOMAD}/job/webapp/allocations").json()
for alloc in allocs:
    print(f"Alloc: {alloc['ID'][:8]}, Node: {alloc['NodeID'][:8]}, Status: {alloc['ClientStatus']}")
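Given an allocation ID, you can also pull task logs through the client filesystem API. A sketch; the URL builder is my helper, and `alloc_id` would come from the allocation listing above:

```python
import requests

NOMAD = "http://localhost:4646/v1"

def logs_url(alloc_id, task, log_type="stdout"):
    """URL for Nomad's client log endpoint, returning plain text."""
    return (f"{NOMAD}/client/fs/logs/{alloc_id}"
            f"?task={task}&type={log_type}&plain=true")

# alloc_id = allocs[0]["ID"]
# print(requests.get(logs_url(alloc_id, "server")).text)
```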

Deploy and Scale

import json
import requests

# Submit a job. The API expects Nomad's JSON job format, so convert
# HCL first with `nomad job run -output webapp.nomad.hcl` or the
# /v1/jobs/parse endpoint.
with open("webapp.nomad.json") as f:
    job_spec = json.load(f)

result = requests.post(f"{NOMAD}/jobs", json={"Job": job_spec}).json()
print(f"Eval ID: {result['EvalID']}")

# Scale job
requests.post(f"{NOMAD}/job/webapp/scale", json={
    "Count": 5,
    "Target": {"Group": "web"}
})

# Force periodic job run
requests.post(f"{NOMAD}/job/batch-job/periodic/force")

# Stop a job
requests.delete(f"{NOMAD}/job/webapp")
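Submitting a job returns an evaluation ID, and you can poll `/v1/evaluation/:id` until the scheduler has finished placing allocations. A sketch (the polling helper is mine; the `session` and `sleep` parameters are test seams):

```python
import time
import requests

NOMAD = "http://localhost:4646/v1"

def wait_for_eval(eval_id, timeout=60.0, session=requests, sleep=time.sleep):
    """Poll an evaluation until its Status is 'complete'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ev = session.get(f"{NOMAD}/evaluation/{eval_id}").json()
        if ev["Status"] == "complete":
            return ev
        sleep(1)
    raise TimeoutError(f"evaluation {eval_id} did not complete in {timeout}s")

# ev = wait_for_eval(result["EvalID"])
```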

Canary Deployment

# Promote canary deployment
deployment = requests.get(f"{NOMAD}/job/webapp/deployment").json()
requests.post(f"{NOMAD}/deployment/promote/{deployment['ID']}", json={
    "All": True
})

# Or rollback
requests.post(f"{NOMAD}/job/webapp/revert", json={
    "JobVersion": job["Version"] - 1
})
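Before promoting, you would normally verify that the canaries are actually healthy. A sketch of that check against the deployment JSON (the field names follow the deployments API; the helper itself is mine):

```python
def canaries_healthy(deployment):
    """True when every task group's canaries are placed and healthy."""
    for group in deployment["TaskGroups"].values():
        desired = group["DesiredCanaries"]
        placed = group.get("PlacedCanaries") or []
        if len(placed) < desired or group["HealthyAllocs"] < desired:
            return False
    return True

# deployment = requests.get(f"{NOMAD}/job/webapp/deployment").json()
# if canaries_healthy(deployment):
#     requests.post(f"{NOMAD}/deployment/promote/{deployment['ID']}",
#                   json={"All": True})
```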

Batch Job Example

job "data-pipeline" {
  type = "batch"

  periodic {
    crons            = ["0 */6 * * *"]
    prohibit_overlap = true
  }

  group "etl" {
    task "transform" {
      driver = "docker"
      config {
        image = "etl-pipeline:latest"
        args  = ["--date", "${NOMAD_JOB_NAME}"]
      }
      resources {
        cpu    = 2000
        memory = 1024
      }
    }
  }
}
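Besides periodic runs, batch jobs can be made parameterized and triggered on demand through `/v1/job/:id/dispatch`. A sketch, assuming the job declares a `parameterized` block with a `date` meta key (my example, not part of the job file above); the payload helper is mine:

```python
import base64
import requests

NOMAD = "http://localhost:4646/v1"

def dispatch_payload(meta=None, payload_bytes=None):
    """Request body for Nomad's job dispatch endpoint."""
    body = {"Meta": meta or {}}
    if payload_bytes is not None:
        # The optional input payload must be base64-encoded
        body["Payload"] = base64.b64encode(payload_bytes).decode("ascii")
    return body

# requests.post(f"{NOMAD}/job/data-pipeline/dispatch",
#               json=dispatch_payload({"date": "2024-06-01"}))
```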

Nomad vs Kubernetes

| Feature        | Nomad               | Kubernetes      |
| -------------- | ------------------- | --------------- |
| Complexity     | Simple              | Complex         |
| Binary         | Single              | Multiple        |
| Non-container  | Native              | Limited         |
| Learning curve | Days                | Weeks to months |
| Scaling        | Manual + autoscaler | HPA/VPA         |
| Ecosystem      | HashiCorp           | CNCF (huge)     |

Need to scrape web data for your workloads? Check out my web scraping tools on Apify — production-ready actors for Reddit, Google Maps, and more. Questions? Email me at spinov001@gmail.com
