
Alexander Leonhard

Your Industry Runs Like Infrastructure in 2010

If you've spent any time in infrastructure engineering, you've seen this movie before.

There's a domain running on manual processes. Everyone knows it's fragile. The people inside it have built elaborate workarounds — spreadsheets, tribal knowledge, "just ask Sarah, she knows how it works." Then something changes — scale, regulation, or both — and the workarounds stop working. Slowly at first. Then all at once.

That's hiring right now.

For the people who already know
If you lived through the shift from click-ops to IaC, the rest of this will feel familiar. Maybe uncomfortably so.
Remember what infrastructure looked like before Terraform? You logged into the console. You clicked through wizards. You configured things by hand. The state of production lived in someone's head — or worse, in a wiki page last updated eight months ago. When an auditor asked "what's running and why," the answer involved a lot of screen sharing and creative reconstruction.

Then declarative specs happened. Version control. Drift detection. Immutable state. Provider abstraction. And somewhere along the way, the industry collectively decided: yeah, we're never going back.

If you were part of that shift, you'll recognize every pattern below. If you weren't — if you're in a domain that skipped it, dismissed it, or figured it didn't apply to you — well. Keep reading. This might sting a little.

The part where hiring looks exactly like infrastructure in 2010
Every major recruiting platform is shipping AI agents. LinkedIn, SeekOut, hireEZ, half the ATS market. Agents that source, screen, schedule, and — increasingly — make recommendations that determine who gets hired.

Here's how they operate:

  • Manual configuration. Every job posting is hand-crafted free text. Every resume is unstructured. Every screening call is a snowflake process that lives in someone's head.
  • No shared state. The recruiter's agent and the employer's agent have no common data format. They can't exchange structured information. They can't compare constraints programmatically.
  • No audit trail. When the AI rejects someone, nobody logs what the model saw, what version was running, what the constraints were, or whether a human had the chance to override.
  • Point-to-point integrations everywhere. Every ATS builds its own agent. Every agent is its own walled garden. Connecting them requires custom work. N platforms × M agencies = N×M integrations that nobody wants to maintain.

Anyone who's untangled a web of hand-configured servers is nodding right now.

The mapping that writes itself

| IaC | Hiring | What actually exists today |
| --- | --- | --- |
| `.tf` file — declarative, typed, versioned | Talent profiles + job requirements as structured schemas | Free-text resumes and job descriptions written for humans, not machines |
| `terraform plan` — preview, no side effects | Match scoring — structured overlap between supply and demand | Black-box AI screening with no explainability |
| `terraform apply` — execute desired state | Settlement — hire confirmed, contract terms locked, both sides notified | Email chains, handshake deals, and "I'll send over the contract by Friday" |
| `.tfstate` — immutable record | Compliance vault — every prompt, decision, and override logged | An ATS activity log if you're lucky. Nothing if you're not. |
| Idempotency | Same constraints in → same scores out | LLM temperature randomness on every single call |
| Drift detection | Profile or job changes trigger re-evaluation | Full re-screen from scratch. Every time. |
| Provider abstraction | Same engine, any LLM underneath | Vendor lock-in per tool |
| Modules — reusable across projects | Same engine, different domains: hiring today; logistics, procurement, consulting tomorrow | Every vertical builds from scratch and learns the same lessons the hard way |

IaC people will look at this table and think: "Obviously." People in recruiting will look at it and think: "Wait, that's possible?"
That gap is the opportunity.

The three forces that made IaC inevitable (and are now converging on hiring)
1. Scale broke manual processes.
Nobody adopted Terraform because it was trendy. They adopted it because you can't click-ops 500 servers. The hiring equivalent is arriving: when AI agents source from every job board simultaneously, the inbound volume breaks human screening. You need structured data, machine-readable constraints, and deterministic evaluation — or you drown.

2. Regulation demanded auditability.
SOC 2, HIPAA, PCI — IaC became mandatory not because engineers loved YAML, but because auditors said "show me the state." The EU AI Act is doing exactly this for hiring. AI in recruitment is explicitly classified as high-risk. Article 26 requires documentation, human oversight, bias auditing, and immutable logging. Penalties go up to €35M or 7% of global revenue. Enforcement expected late 2027.
Every domain that thought "compliance doesn't apply to us" eventually learned otherwise. Hiring is next.

3. Interoperability required open standards.
Terraform didn't win because HashiCorp built the best engine. It won because HCL became the shared language across providers. AWS, GCP, Azure — one spec, multiple backends.
The hiring equivalent doesn't exist yet. There's no shared schema that defines what a talent profile or a job requirement looks like in machine-readable terms. Without it, every AI agent is a snowflake server. With it, N×M integrations collapse to N+M, and agents across platforms can actually negotiate on structured data.
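The integration arithmetic is easy to check. With illustrative numbers, say ten platforms and twenty agencies:

```python
platforms, agencies = 10, 20              # illustrative sizes, not real market counts
point_to_point = platforms * agencies     # every platform-agency pair needs custom work
via_shared_schema = platforms + agencies  # each side writes one adapter to the shared spec
print(point_to_point, via_shared_schema)  # 200 integrations vs 30 adapters
```

The gap widens quadratically as either side grows, which is exactly why shared schemas win once a market passes a certain size.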

What this looks like when you build it
Two open protocols, MIT-licensed:
OTP (Open Talent Protocol) — typed fields for talent profiles. Skills as structured arrays, not keyword strings. Seniority as an enum. Salary expectations as a range. Location and availability as machine-readable constraints. Think of it as the schema definition for what an agent says about a person.

OJP (Open Job Protocol) — typed constraint sets for job requirements. Hard requirements vs. preferences as distinct categories. Budget as a range. Compliance flags per jurisdiction. The schema for what an agent says about a role.

These are the .tf files for hiring.
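To make the "typed fields" idea concrete, here's a minimal sketch of what such records could look like. The field names are illustrative guesses, not the actual OTP/OJP spec:

```python
from dataclasses import dataclass
from enum import Enum

class Seniority(Enum):            # seniority as an enum, not free text
    JUNIOR = 1
    MID = 2
    SENIOR = 3

@dataclass(frozen=True)
class TalentProfile:              # illustrative OTP-style record
    skills: frozenset             # structured set, not a keyword string
    seniority: Seniority
    salary_min: int               # expectations as a range, not prose
    salary_max: int
    remote_ok: bool

@dataclass(frozen=True)
class JobRequirement:             # illustrative OJP-style record
    required_skills: frozenset    # hard requirements...
    preferred_skills: frozenset   # ...kept distinct from preferences
    min_seniority: Seniority
    budget_max: int
    remote: bool

candidate = TalentProfile(
    skills=frozenset({"python", "terraform"}),
    seniority=Seniority.SENIOR,
    salary_min=90_000,
    salary_max=120_000,
    remote_ok=True,
)
```

The point isn't the specific fields; it's that two agents exchanging these records can compare constraints programmatically instead of parsing prose.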

On top of them, an engine handles matching (deterministic overlap — same input, same output), negotiation (multi-round, bilateral, async), settlement (confirmed hire, signed webhooks, idempotent), and a compliance vault (immutable, append-only, every decision logged with model version and inputs).
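A toy illustration of "deterministic overlap": a pure function over structured fields, so identical inputs always produce identical scores. The field names and weights are invented for the sketch, not taken from the actual engine:

```python
def match_score(profile: dict, job: dict) -> float:
    """Pure, deterministic scoring: no model call, no randomness."""
    # Hard requirements are a gate: any miss scores zero.
    if not set(job["required_skills"]) <= set(profile["skills"]):
        return 0.0
    # Preferences contribute fractionally.
    prefs = set(job["preferred_skills"])
    pref_hit = len(prefs & set(profile["skills"])) / len(prefs) if prefs else 1.0
    # Budget must at least cover the candidate's minimum expectation.
    budget_ok = 1.0 if job["budget_max"] >= profile["salary_min"] else 0.0
    return round(0.6 * budget_ok + 0.4 * pref_hit, 3)

profile = {"skills": {"python", "terraform", "aws"}, "salary_min": 90_000}
job = {"required_skills": {"terraform"},
       "preferred_skills": {"aws", "gcp"},
       "budget_max": 120_000}

print(match_score(profile, job))  # 0.8 — and 0.8 every single time
```

Contrast with an LLM screening call at nonzero temperature: the same resume can score differently on consecutive runs, which is exactly what an auditor will ask about.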

The cost thing infrastructure engineers will quietly appreciate
Free-text resumes force frontier-model inference on every evaluation. The LLM has to parse, extract entities, infer structure, then reason about fit. Expensive. Unreliable. Slow.

Structured schemas flip this. When a profile arrives as typed fields, the LLM compares data against constraints. Shorter prompts. Less reasoning. Cheaper models for the bulk of the work.

80% of evaluations run on the smallest models available. Frontier models are reserved for the cases where nuanced judgment actually matters. Same idea as progressive disclosure in MCP servers — don't dump the full tool inventory into the context window when metadata is enough for the first pass.
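The routing logic can be sketched as a pre-filter: structured comparison settles the clear cases, and only ambiguous ones escalate. Model names and thresholds here are invented for illustration:

```python
def pick_model(profile: dict, job: dict) -> str:
    """Route evaluations by how much the typed fields settle up front.
    Tier names and thresholds are invented for this sketch."""
    required = set(job["required_skills"])
    if not required:
        return "small-model"
    overlap = len(required & set(profile["skills"])) / len(required)
    if overlap == 0.0:
        return "none"            # clear reject: no model call at all
    if overlap == 1.0:
        return "small-model"     # typed fields match; a cheap model confirms
    return "frontier-model"      # partial match: nuanced judgment needed

job = {"required_skills": {"terraform", "aws"}}
print(pick_model({"skills": {"terraform", "aws"}}, job))  # small-model
print(pick_model({"skills": {"terraform"}}, job))         # frontier-model
```

Because the gate runs on plain data, it costs nothing per evaluation, and the expensive model only sees the cases where it adds value.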

Where this sits

| Layer | What | Who |
| --- | --- | --- |
| Domain | Schemas, negotiation, settlement, audit | adnx.ai |
| Transport | Agent-to-agent coordination | A2A (Google / Linux Foundation) |
| Knowledge | Organizational procedures | Agent Skills (Anthropic) |
| Connectivity | Tool access | MCP (Anthropic / Linux Foundation) |
| Systems | ATS, job boards, HRIS | Your existing stack |

MCP connects agents to tools. Skills teach agents your processes. A2A lets agents talk to each other. None of them define what agents say about talent and jobs, how a hire gets settled, or how the decision gets audited.

That's the domain layer. The part of the stack that transport protocols always leave unfinished. TCP/IP didn't ship HTTP. HTTP didn't ship Stripe. And A2A won't ship hiring infrastructure. Someone has to build it.

Every industry figures this out eventually. Some figured it out early and built on top. Some waited and spent the next decade catching up. If you're in a domain that still runs on unstructured text, manual processes, and "ask Sarah" — you know which group you're in.

Open protocols, open contribution
OTP and OJP are MIT-licensed. The specs are public. If you've ever contributed to an OpenAPI spec, a Terraform provider, or a Kubernetes CRD — same process.

Specs: opentalentprotocol.org | openjobprotocol.org
Docs: docs.adnx.ai

You wouldn't provision infrastructure by hand anymore.
