<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexander Leonhard</title>
    <description>The latest articles on DEV Community by Alexander Leonhard (@testinat0r).</description>
    <link>https://dev.to/testinat0r</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3781704%2F14038b13-7cbe-478a-a693-841ea95e1464.jpg</url>
      <title>DEV Community: Alexander Leonhard</title>
      <link>https://dev.to/testinat0r</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/testinat0r"/>
    <language>en</language>
    <item>
      <title>Why building a job scraper for $0.39/1,000 jobs is not about the money.</title>
      <dc:creator>Alexander Leonhard</dc:creator>
      <pubDate>Tue, 14 Apr 2026 13:28:14 +0000</pubDate>
      <link>https://dev.to/testinat0r/why-building-a-job-scraper-for-0391000-jobs-is-not-about-the-money-1cnn</link>
      <guid>https://dev.to/testinat0r/why-building-a-job-scraper-for-0391000-jobs-is-not-about-the-money-1cnn</guid>
      <description>&lt;p&gt;I needed thousands of job postings in OJP v0.2 schema. Not a handful for a demo — enough volume that cost per posting had to disappear as a line item.&lt;/p&gt;

&lt;p&gt;The existing options didn't work for me. Commercial scrapers price per-posting at numbers that assume you're an ATS vendor passing the cost to customers. Open-source ones want you to write a custom adapter per career page, which is its own slow failure mode. Neither fits what a protocol layer actually needs: cheap, fresh, and structured the same way across every board.&lt;/p&gt;

&lt;p&gt;So I built my own in a single session. One production run: &lt;strong&gt;887 postings across 4 boards.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.39 / 1,000 postings&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Throughput&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.9s / job&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Success rate&lt;/td&gt;
&lt;td&gt;77% raw, 95%+ after retry&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Most of the "hard" parts weren't the LLM. The LLM call is the cheap part. Everything around it is where the cost and the success rate actually live.&lt;/p&gt;




&lt;h2&gt;
  
  
  The architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scrape-jobs.json (queue + status)
        │
        ▼
┌─────────────────────────┐
│ Playwright browser      │  ← stealth context, one per worker
└─────────┬───────────────┘
          │
   fetch + strip HTML → ~6K tokens of clean text
          │
          ▼
┌─────────────────────────┐
│ Gemini Flash-Lite       │  ← ~$0.0004/call
│ (OJP v0.2 extraction)   │
└─────────┬───────────────┘
          │
   sanitize + validate (JSON Schema)
          │
          ▼
     results.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;BFS queue. Listing pages discover individual posting URLs, add them as &lt;code&gt;pending&lt;/code&gt;, and the loop runs until empty. Status lives in the input file itself so runs resume cleanly after a crash.&lt;/p&gt;
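&lt;p&gt;A minimal sketch of that loop, with a queue format I made up for illustration (the &lt;code&gt;kind&lt;/code&gt; and &lt;code&gt;status&lt;/code&gt; fields are hypothetical; the real file layout isn't shown here):&lt;/p&gt;

```python
import json
from pathlib import Path

QUEUE_FILE = Path("scrape-jobs.json")  # queue + status live in one file

def save_queue(queue):
    # Persist after every step so a crashed run resumes exactly where it died.
    QUEUE_FILE.write_text(json.dumps(queue, indent=2))

def run(queue, fetch, discover, extract):
    """BFS over the URL queue: listing pages enqueue posting URLs as 'pending'."""
    while True:
        pending = [u for u in queue if u["status"] == "pending"]
        if not pending:
            break
        entry = pending[0]
        page = fetch(entry["url"])
        if entry["kind"] == "listing":
            # Listing pages only discover new posting URLs.
            for url in discover(page):
                if all(u["url"] != url for u in queue):
                    queue.append({"url": url, "kind": "posting", "status": "pending"})
            entry["status"] = "done"
        else:
            entry["status"] = "done" if extract(page) else "failed"
        save_queue(queue)
    return queue
```

&lt;p&gt;The invariant that matters: the file on disk is always a valid snapshot, so a crash mid-run loses at most the one in-flight URL.&lt;/p&gt;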

&lt;p&gt;Five fallback layers, each kicking in when the previous one fails: DOM link scraping → heuristic job-container detection → visual navigator (screenshot + vision model picks clickable selectors) → schema sanitizer → full vision retry on extraction failures.&lt;/p&gt;




&lt;h2&gt;
  
  
  The five things that moved the numbers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Content hashing so reruns are free
&lt;/h3&gt;

&lt;p&gt;Every fetched page gets a SHA-256 of its stripped text. If the hash matches the last scrape, skip the LLM call entirely — no tokens, no cost.&lt;/p&gt;

&lt;p&gt;On a weekly re-crawl, 95% of URLs hash-skip. Only actual job edits re-extract. This is what makes the whole thing viable as a recurring pipeline rather than a one-shot.&lt;/p&gt;
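&lt;p&gt;The hashing gate is a few lines. This sketch assumes a simple URL-to-hash cache, which may differ from the real implementation:&lt;/p&gt;

```python
import hashlib

def text_hash(stripped_text: str) -> str:
    """SHA-256 of the stripped page text; identical text means identical hash."""
    return hashlib.sha256(stripped_text.encode("utf-8")).hexdigest()

def needs_extraction(url: str, stripped_text: str, cache: dict) -> bool:
    """Skip the LLM call entirely when the page text hasn't changed."""
    h = text_hash(stripped_text)
    if cache.get(url) == h:
        return False  # hash-skip: no tokens, no cost
    cache[url] = h  # remember the new content for the next crawl
    return True
```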

&lt;h3&gt;
  
  
  2. Stealth Playwright to beat bot detection
&lt;/h3&gt;

&lt;p&gt;Realistic user-agent + viewport + timezone, &lt;code&gt;--disable-blink-features=AutomationControlled&lt;/code&gt;, and an init script that hides the &lt;code&gt;navigator.webdriver&lt;/code&gt; flag most bots forget about. Gets past the common bot-detection layers on 4/5 boards.&lt;/p&gt;
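&lt;p&gt;Roughly what that configuration looks like with Playwright's sync API (the user-agent string, viewport, and timezone below are illustrative values, not a recommendation for any particular board):&lt;/p&gt;

```python
# Hide the navigator.webdriver flag before any page script runs.
STEALTH_INIT_SCRIPT = (
    "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"
)

def launch_args() -> list:
    return ["--disable-blink-features=AutomationControlled"]

def context_options() -> dict:
    # Illustrative values; pick ones consistent with your launch platform.
    return {
        "user_agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36"
        ),
        "viewport": {"width": 1366, "height": 768},
        "timezone_id": "Europe/Berlin",
    }

def new_stealth_context(p):
    """One Chromium instance + one stealth context per worker.

    `p` is a started playwright instance, e.g. from
    `with sync_playwright() as p: ...`
    """
    browser = p.chromium.launch(headless=True, args=launch_args())
    context = browser.new_context(**context_options())
    context.add_init_script(STEALTH_INIT_SCRIPT)
    return context
```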

&lt;p&gt;One ATS in my test set still blocks with a full CAPTCHA challenge. That one's on the list.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Per-worker parallelism partitioned by domain
&lt;/h3&gt;

&lt;p&gt;Partition pending URLs by board domain so workers don't step on each other. If one domain dominates the queue (say, 400 URLs on one board vs. 20 on another), split the big one across shards and interleave the URLs so early items spread across all workers. One Chromium instance per thread, no shared state to debug at 2am.&lt;/p&gt;

&lt;p&gt;This matters more than the raw worker count. Naive round-robin parallelism on a queue-mixed stream fights itself — you end up with every worker holding a connection to the same board.&lt;/p&gt;
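&lt;p&gt;A sketch of the partitioning logic. The shard size and balancing heuristics here are simplified guesses at the real scheduler:&lt;/p&gt;

```python
import itertools
from collections import defaultdict
from urllib.parse import urlparse

def partition_urls(pending: list, workers: int) -> list:
    """Partition pending URLs by board domain, shard oversized domains,
    and interleave so each worker's early items span multiple boards."""
    by_domain = defaultdict(list)
    for url in pending:
        by_domain[urlparse(url).netloc].append(url)

    # Split any domain that dominates the queue into roughly equal shards.
    target = max(1, len(pending) // workers)
    shards = []
    for urls in by_domain.values():
        for i in range(0, len(urls), target):
            shards.append(urls[i : i + target])

    # Biggest shards first, round-robined onto workers for balance.
    shards.sort(key=len, reverse=True)
    buckets = [[] for _ in range(workers)]
    for n, shard in enumerate(shards):
        buckets[n % workers].append(shard)

    # Interleave a worker's shards so it alternates boards where possible.
    return [
        [u for u in itertools.chain.from_iterable(itertools.zip_longest(*b)) if u]
        for b in buckets if b
    ]
```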

&lt;h3&gt;
  
  
  4. A sanitizer that absorbs LLM schema drift
&lt;/h3&gt;

&lt;p&gt;Gemini Flash-Lite is cheap but will happily return &lt;code&gt;"manager"&lt;/code&gt; for a seniority enum that only accepts &lt;code&gt;"lead"&lt;/code&gt;, or &lt;code&gt;"other"&lt;/code&gt; for a language code that must be ISO 639-1. The reframe: stop trying to prompt-engineer the model into perfect schema compliance. Assume it will drift. Catch the drift deterministically before validation.&lt;/p&gt;

&lt;p&gt;What the sanitizer actually does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maps enum synonyms to canonical values (&lt;code&gt;manager → lead&lt;/code&gt;, &lt;code&gt;intermediate → mid&lt;/code&gt;, &lt;code&gt;graduate → junior&lt;/code&gt;, &lt;code&gt;chief → c_level&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Normalizes language names to ISO 639-1 (&lt;code&gt;english → en&lt;/code&gt;, &lt;code&gt;deutsch → de&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Moves misplaced fields into the right nesting (LLMs love putting &lt;code&gt;skills&lt;/code&gt; at the root instead of under &lt;code&gt;must_have&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Strips null values and unexpected keys, since the strict schema (&lt;code&gt;additionalProperties: false&lt;/code&gt;, non-nullable types) rejects both&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Took success from 77% → 90%+ on the same input without changing the prompt or the model.&lt;/p&gt;
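&lt;p&gt;In code, the sanitizer is deterministic dictionary surgery. The field names below are hypothetical stand-ins for the actual OJP fields:&lt;/p&gt;

```python
# Example synonym maps; the real tables are larger.
SENIORITY_SYNONYMS = {"manager": "lead", "intermediate": "mid",
                      "graduate": "junior", "chief": "c_level"}
LANGUAGE_CODES = {"english": "en", "deutsch": "de", "german": "de"}

def sanitize(doc: dict) -> dict:
    out = dict(doc)

    # 1. Map enum synonyms to canonical values.
    if out.get("seniority") in SENIORITY_SYNONYMS:
        out["seniority"] = SENIORITY_SYNONYMS[out["seniority"]]

    # 2. Normalize language names to ISO 639-1 codes.
    out["languages"] = [
        LANGUAGE_CODES.get(lang.lower(), lang) for lang in out.get("languages", [])
    ]

    # 3. Move misplaced fields into the right nesting
    #    (models love putting skills at the root).
    if "skills" in out:
        out.setdefault("requirements", {}).setdefault("must_have", {})["skills"] = out.pop("skills")

    # 4. Strip nulls before JSON Schema validation.
    return {k: v for k, v in out.items() if v is not None}
```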

&lt;h3&gt;
  
  
  5. Vision-retry for the last 10%
&lt;/h3&gt;

&lt;p&gt;When text extraction fails — an SPA rendered nothing parseable, or Gemini Lite returned invalid JSON even after sanitization — re-run with a full-page screenshot through Gemini Flash vision.&lt;/p&gt;

&lt;p&gt;Recovered 4/20 retries at &lt;strong&gt;$0.0037 per recovered posting&lt;/strong&gt;. Boards where text stripping returned 0 chars become viable because Playwright still renders the page. Vision sees what the user sees.&lt;/p&gt;

&lt;p&gt;One nuance worth noting: Flash-Lite has weaker vision grounding, so the retry path specifically uses &lt;code&gt;gemini-flash-latest&lt;/code&gt; even though the primary extraction uses Lite.&lt;/p&gt;




&lt;h2&gt;
  
  
  The validated cost model
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Per call&lt;/th&gt;
&lt;th&gt;Per 1K postings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Fetch (Playwright)&lt;/td&gt;
&lt;td&gt;free&lt;/td&gt;
&lt;td&gt;~3s compute&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTML stripping&lt;/td&gt;
&lt;td&gt;free&lt;/td&gt;
&lt;td&gt;local&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Extraction (Flash-Lite text)&lt;/td&gt;
&lt;td&gt;$0.0004&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.39&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visual nav discovery&lt;/td&gt;
&lt;td&gt;$0.0006&lt;/td&gt;
&lt;td&gt;$0.003 per hard board&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vision retry&lt;/td&gt;
&lt;td&gt;$0.0007&lt;/td&gt;
&lt;td&gt;$0.015 per 20 retries&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;End-to-end at scale including retries: &lt;strong&gt;~$0.42 / 1,000&lt;/strong&gt;. That's ~$4 per 10K, ~$40 per 100K.&lt;/p&gt;

&lt;p&gt;Every run writes a frozen &lt;code&gt;stats-{timestamp}.json&lt;/code&gt; with extraction and vision cost tracked separately, so I can diff regressions between runs. A cumulative &lt;code&gt;stats.json&lt;/code&gt; merges at the end of each run — single-writer, no race conditions across parallel workers.&lt;/p&gt;
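&lt;p&gt;The single-writer pattern is simple enough to sketch. The file layout is assumed, and this merge just sums numeric counters:&lt;/p&gt;

```python
import json
import time
from pathlib import Path

STATS_DIR = Path("stats")  # hypothetical layout

def write_run_stats(stats: dict) -> Path:
    """Freeze this run's stats under a timestamped name; never rewritten."""
    STATS_DIR.mkdir(exist_ok=True)
    path = STATS_DIR / f"stats-{time.time_ns()}.json"
    path.write_text(json.dumps(stats, indent=2))
    return path

def merge_cumulative() -> dict:
    """Single-writer merge at the end of a run: sum numeric fields across
    all frozen run files into stats.json. No shared writes mid-run."""
    total = {}
    for run_file in sorted(STATS_DIR.glob("stats-*.json")):
        for key, value in json.loads(run_file.read_text()).items():
            total[key] = total.get(key, 0) + value
    (STATS_DIR / "stats.json").write_text(json.dumps(total, indent=2))
    return total
```

&lt;p&gt;Because each worker only ever appends its own frozen file and the merge runs once at the end, there is nothing to lock.&lt;/p&gt;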




&lt;h2&gt;
  
  
  Gotchas I paid for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Playwright base image version must match the &lt;code&gt;playwright&lt;/code&gt; pip version exactly, or headless Chromium fails with &lt;code&gt;Executable doesn't exist&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Read/modify/write of shared stats from parallel workers creates race conditions — use per-run files and merge once at the end&lt;/li&gt;
&lt;li&gt;Boards with locale prefixes (&lt;code&gt;/en_US/jobs/...&lt;/code&gt;, &lt;code&gt;/de_DE/jobs/...&lt;/code&gt;) create duplicate URLs that inflate extractions 10× unless you normalize during link discovery&lt;/li&gt;
&lt;li&gt;Gemini Flash Lite's vision grounding isn't good enough for retry — hardcode the larger model for screenshots&lt;/li&gt;
&lt;li&gt;Screenshot-based vision on heavy SPA boards works even when text stripping returns 0 chars, because Playwright still renders the page&lt;/li&gt;
&lt;/ul&gt;
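&lt;p&gt;The locale-prefix gotcha can be handled by a one-regex normalizer applied during link discovery. The pattern below is my guess at what counts as a locale prefix, so treat it as a starting point:&lt;/p&gt;

```python
import re
from urllib.parse import urlsplit, urlunsplit

# Matches a leading path segment like /en_US, /de-DE, or /en (two-letter
# false positives such as /go/... are possible; tune per board).
LOCALE_RE = re.compile(r"^/[a-z]{2}([_-][A-Za-z]{2})?(?=/)")

def normalize_job_url(url: str) -> str:
    """Drop locale prefixes (and fragments) so the same posting isn't
    queued once per language."""
    parts = urlsplit(url)
    path = LOCALE_RE.sub("", parts.path)
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, ""))
```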




&lt;h2&gt;
  
  
  What this is really about
&lt;/h2&gt;

&lt;p&gt;The cost target wasn't the point for me. Sub-cent per posting is a useful headline, but it's a side effect.&lt;/p&gt;

&lt;p&gt;The point is that once extraction gets this cheap and this structured, &lt;strong&gt;the scraping step stops being a moat&lt;/strong&gt;. Anyone who wants job data at scale can get it. What remains valuable is the schema you extract into — OJP in my case — and the protocol layer that makes those extractions interoperable across every agent and every board.&lt;/p&gt;

&lt;p&gt;I'm building ADNX because the next generation of hiring doesn't run on recruiter tools. It runs on agent-to-agent transactions against domain protocols. Job ingestion at $0.39/1K is just the plumbing. The interesting work is what structured hiring data enables once it's cheap enough to be ambient.&lt;/p&gt;

&lt;p&gt;If you're working on anything adjacent — agent protocols, domain-specific extraction, A2A patterns — ping me. Always looking to compare notes.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt; Python 3.12, Playwright 1.58, &lt;code&gt;google-genai&lt;/code&gt; SDK, &lt;code&gt;jsonschema&lt;/code&gt;, Docker.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>mcp</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I tried Claude's Managed Agents: built a job scraper &amp; conversion to an agent ready protocol.</title>
      <dc:creator>Alexander Leonhard</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:46:01 +0000</pubDate>
      <link>https://dev.to/testinat0r/i-tried-claudes-managed-agents-built-a-job-scraper-conversion-to-an-agent-ready-protocol-5co3</link>
      <guid>https://dev.to/testinat0r/i-tried-claudes-managed-agents-built-a-job-scraper-conversion-to-an-agent-ready-protocol-5co3</guid>
      <description>&lt;p&gt;Anthropic launched "Managed Agents" yesterday - So I built one.&lt;/p&gt;

&lt;p&gt;The agent scrapes career pages autonomously: point it at a URL (I tried Personio, Stripe, and Vercel) and it parses the listings and converts them into OJP — Open Job Protocol, a structured JSON schema designed for machines, not humans.&lt;/p&gt;

&lt;p&gt;Stripe alone had 100+ listings across 9 pages. Vercel had ~75. I ran 40 through the full pipeline: access --&amp;gt; scrape --&amp;gt; parse --&amp;gt; convert --&amp;gt; validate --&amp;gt; expose on adnx.ai&lt;/p&gt;

&lt;p&gt;Total cost: over $5 in LLM inference. That's roughly $0.13 per job just to turn unstructured HTML into a structured format (using Sonnet 4.6).&lt;/p&gt;

&lt;p&gt;That's too expensive. I'll get into why that number matters. But first — the part that surprised me.&lt;/p&gt;

&lt;h2&gt;
  
  
  The whole thing took about 2 hours
&lt;/h2&gt;

&lt;p&gt;Not because I'm fast. Because the tools are commoditized now.&lt;/p&gt;

&lt;p&gt;Claude did the setup and debug work. I scoped what I wanted.&lt;/p&gt;

&lt;p&gt;Claude Managed Agents are priced at $0.08 per session-hour. MCP is supported by Claude, ChatGPT, Copilot, Cursor, LangChain — basically everything. You can spin up an agent that connects to real tools, works with real data, and does real work in an afternoon. Not a demo. Not a mockup. A working system.&lt;/p&gt;

&lt;p&gt;Two years ago, building something like this would've been a multi-week project. Custom integrations, bespoke tooling, glue code everywhere. Now it's an MCP server, a prompt, and a few hours of iteration.&lt;/p&gt;

&lt;p&gt;Two months ago, this would still require a lot of back and forth, patching tooling and infrastructure.&lt;/p&gt;

&lt;p&gt;The infrastructure to &lt;em&gt;build&lt;/em&gt; agents isn't the bottleneck anymore. The question has shifted from "can you build an agent" to "what does your agent actually operate on and do?"&lt;/p&gt;

&lt;h2&gt;
  
  
  What this agent does
&lt;/h2&gt;

&lt;p&gt;It hits whatever career-page URL you point it at, extracts the listings, and converts each one into an OJP object.&lt;/p&gt;

&lt;p&gt;Here's what a run with a batch of 10 looked like:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;Title&lt;/th&gt;
&lt;th&gt;Org&lt;/th&gt;
&lt;th&gt;OJP ID&lt;/th&gt;
&lt;th&gt;Validation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Backend Engineer, Core Technology&lt;/td&gt;
&lt;td&gt;Stripe&lt;/td&gt;
&lt;td&gt;&lt;code&gt;a3f7e1c2…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;AI/ML Eng. Manager, Payment Intelligence&lt;/td&gt;
&lt;td&gt;Stripe&lt;/td&gt;
&lt;td&gt;&lt;code&gt;b4e8f2d3…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Android Engineer, Terminal&lt;/td&gt;
&lt;td&gt;Stripe&lt;/td&gt;
&lt;td&gt;&lt;code&gt;c5f9a3e4…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Backend / API Engineer, Billing&lt;/td&gt;
&lt;td&gt;Stripe&lt;/td&gt;
&lt;td&gt;&lt;code&gt;d6a0b4f5…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Account Executive, AI Sales&lt;/td&gt;
&lt;td&gt;Stripe&lt;/td&gt;
&lt;td&gt;&lt;code&gt;e7b1c5a6…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Software Engineer, AI SDK&lt;/td&gt;
&lt;td&gt;Vercel&lt;/td&gt;
&lt;td&gt;&lt;code&gt;f8c2d6b7…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;Software Engineer, CDN&lt;/td&gt;
&lt;td&gt;Vercel&lt;/td&gt;
&lt;td&gt;&lt;code&gt;a9d3e7c8…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;Engineering Manager, AI Gateway&lt;/td&gt;
&lt;td&gt;Vercel&lt;/td&gt;
&lt;td&gt;&lt;code&gt;b0e4f8d9…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;Senior Product Security Engineer&lt;/td&gt;
&lt;td&gt;Vercel&lt;/td&gt;
&lt;td&gt;&lt;code&gt;c1f5a9e0…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Software Engineer, Agent&lt;/td&gt;
&lt;td&gt;Vercel&lt;/td&gt;
&lt;td&gt;&lt;code&gt;d2a6b0f1…&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;✅ VALID&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;10 out of 10 valid. Each one is a structured OJP object — requirements, compensation, location, skills, work authorization, remote policy, about 30 fields that matter for matching.&lt;/p&gt;

&lt;p&gt;Not a scraped blob of text. A machine-first representation of a job that any agent speaking OJP can evaluate against a talent profile, run constraint matching on, and make decisions with — without burning tokens on parsing and interpretation every time.&lt;/p&gt;

&lt;p&gt;The difference between handing an agent a PDF and handing it a structured API response. One requires understanding. The other requires a schema lookup.&lt;/p&gt;

&lt;p&gt;How to retrieve them?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST https://sandbox.adnx.ai/api/v1/jobs/inquire
{ "ojp_id": "a3f7e1c2…" }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Try it on the sandbox via &lt;a href="https://docs.adnx.ai/quickstart/" rel="noopener noreferrer"&gt;docs.adnx.ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have to seed some OJP listings yourself (WIP).&lt;/p&gt;
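&lt;p&gt;A minimal Python client for that call, using only the standard library. This is a hypothetical helper around the endpoint shown above; the sandbox may require headers the post doesn't mention:&lt;/p&gt;

```python
import json
from urllib.request import Request

def build_inquire_request(ojp_id: str) -> Request:
    """Build the POST request for the sandbox inquire endpoint."""
    return Request(
        "https://sandbox.adnx.ai/api/v1/jobs/inquire",
        data=json.dumps({"ojp_id": ojp_id}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it:
#   from urllib.request import urlopen
#   with urlopen(build_inquire_request("a3f7e1c2")) as resp:
#       job = json.loads(resp.read())
```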

&lt;h2&gt;
  
  
  The cost problem is real and really expensive at scale
&lt;/h2&gt;

&lt;p&gt;$5 for 40 jobs. At that rate, processing 10,000 listings costs $1,250. That's just the &lt;em&gt;conversion&lt;/em&gt; step — before any matching, ranking, or agent-to-agent negotiation happens.&lt;/p&gt;

&lt;p&gt;Job posts are duplicated across existing job platforms, so you can easily multiply that by 3-5×.&lt;/p&gt;

&lt;p&gt;The irony: most of the cost comes from the LLM interpreting and parsing unstructured HTML. The agent spends tokens figuring out what's a salary range, what's a requirement, and what's marketing copy. It has to do real reasoning on every page.&lt;/p&gt;

&lt;p&gt;If those listings were already in OJP, the conversion step wouldn't exist. &lt;/p&gt;

&lt;p&gt;For downstream agents, the cost drops to near zero, and it gets cheaper still at scale with batch API calls.&lt;/p&gt;

&lt;p&gt;This is the token-economics argument for domain-specific protocols, captured in a single data point: an exchange optimized for agents.&lt;/p&gt;

&lt;p&gt;Every agent in the hiring space is independently burning tokens to parse the same unstructured data into roughly the same structured representation. Multiply that across 100+ AI recruiting startups, each processing millions of listings, and you're looking at an industry-wide waste of compute that structured schemas would eliminate.&lt;/p&gt;

&lt;p&gt;The research says structured context reduces token usage by 60-90% vs. unstructured input. My $5 for 40 jobs is a data point in that range. The conversion is the tax. The protocol is what eliminates it.&lt;/p&gt;

&lt;p&gt;Who wins? Big-AI. Who pays? Employers or your margin!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I'm sharing this
&lt;/h2&gt;

&lt;p&gt;Two reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One:&lt;/strong&gt; this is a proof of concept anyone can replicate. OJP is MIT-licensed. The tools to build this kind of agent are free or nearly free. You don't need to integrate into anything. You don't need API partnerships or sandbox access or a contract with an ATS vendor. You spin up an MCP server, point an agent at job data, and start converting.&lt;/p&gt;

&lt;p&gt;No technical debt. No vendor lock-in. No integration overhead. Just a protocol and an agent.&lt;/p&gt;

&lt;p&gt;That's the point of building at the protocol layer. When the tools are commoditized, the protocol becomes the leverage. Anyone can build on it. The value isn't in controlling the agent — it's in what the agents agree to speak.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two:&lt;/strong&gt; I want to be honest about what's not working yet. $0.13 per job conversion is not viable at scale. The protocol itself is solid — structured, evaluable, machine-first. But the bridge from the current unstructured world to the protocol world is expensive. That bridge cost will come down as models get cheaper and as more data originates in structured formats. But right now, it's the real bottleneck.&lt;/p&gt;

&lt;p&gt;If you're building in this space, that's the gap worth looking at. Not "how do I build an agent" — that's solved. But "how do I get structured domain data into my agent's hands without burning a dollar per transaction on interpretation."&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger picture
&lt;/h2&gt;

&lt;p&gt;The agent infrastructure is ready. MCP is everywhere. A2A is gaining traction. Claude Managed Agents just made deployment a one-liner. The frameworks, the runtimes, the coordination protocols — they exist and they work.&lt;/p&gt;

&lt;p&gt;What's missing is the domain layer. The structured schemas that define what a "job" or a "talent profile" looks like in machine-first format. The transaction protocols that let agents negotiate, settle, and audit hiring decisions. The compliance infrastructure that satisfies EU AI Act requirements (enforcement starts August 2, 2026 — hiring is explicitly classified as high-risk AI) without bolting on governance after the fact.&lt;/p&gt;

&lt;p&gt;That's what I'm working on. Not another hiring tool. The protocol layer underneath all of them.&lt;/p&gt;

&lt;p&gt;Yesterday's agent is a proof point: with commodity tools and an open protocol, you can go from zero to a working system with real data in an afternoon. The cost curve is the remaining problem. But the architecture is right.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm building open protocol infrastructure for agent-to-agent hiring transactions (OTP/OJP, MIT-licensed). The agent I built yesterday is rough and the economics don't work yet — but it's real data, real protocol output, and zero integration debt. If you're working on similar problems in any domain, I'd like to hear what you're running into. &lt;a href="https://linkedin.com/in/alexanderleonhard" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://adnx.ai" rel="noopener noreferrer"&gt;adnx.ai&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>claude</category>
      <category>discuss</category>
    </item>
    <item>
      <title>The Internet Isn't Ready for Agents. Neither Is Your Business Model.</title>
      <dc:creator>Alexander Leonhard</dc:creator>
      <pubDate>Sun, 05 Apr 2026 13:43:30 +0000</pubDate>
      <link>https://dev.to/testinat0r/the-internet-isnt-ready-for-agents-neither-is-your-business-model-260b</link>
      <guid>https://dev.to/testinat0r/the-internet-isnt-ready-for-agents-neither-is-your-business-model-260b</guid>
      <description>&lt;p&gt;Earlier this year I spent weeks building an AI tool to help myself and others apply for jobs faster and more successfully. I failed and succeeded at the same time.&lt;/p&gt;

&lt;p&gt;Failed — I didn't land a job myself. Succeeded — I learned to use Claude, shipped a production-ready SaaS app with AI features, and understood something fundamental about why this space is broken.&lt;/p&gt;

&lt;p&gt;The tooling one person can build in a few sessions can read and parse thousands of job listings. It could match requirements, tailor resumes, and draft cover letters. But half the job boards blocked it outright, and the other half demanded CAPTCHAs, SMS codes, and email verification loops.&lt;/p&gt;

&lt;p&gt;That's why workarounds like browser plugins exist: they circumvent the bot-protection layers meant to make sure you are human.&lt;/p&gt;

&lt;p&gt;The tools I built couldn't apply to a single job.&lt;/p&gt;

&lt;p&gt;That's not a bug. That's the architecture. And then it clicked.&lt;/p&gt;

&lt;h2&gt;
  
  
  The web assumes you have a body
&lt;/h2&gt;

&lt;p&gt;The entire internet authentication stack — from email verification to CAPTCHA to SMS codes — was designed around one assumption: users are physical beings. It was so obvious nobody questioned it. Every platform converged on the same pattern independently, pushed by bot spam, and accidentally created a hard boundary between entities that have bodies and entities that don't.&lt;/p&gt;

&lt;p&gt;The result is a web split into two layers: an open read layer where agents roam freely, and a gated write layer that demands proof of physical existence to participate.&lt;/p&gt;

&lt;p&gt;This matters more than it sounds.&lt;/p&gt;

&lt;p&gt;If you're building anything that involves agents doing real work — not just answering questions, but negotiating, transacting, committing — you hit this wall fast. Your agent can analyze every job posting on the internet but can't apply to a single one. It can evaluate every talent profile but can't schedule an interview.&lt;br&gt;
Read access without write access isn't intelligence. It's surveillance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "fix" that makes everything worse
&lt;/h2&gt;

&lt;p&gt;You'd think the market would respond by solving this infrastructure gap. Instead, it responded with workarounds.&lt;/p&gt;

&lt;p&gt;Tools like AIApply, EnhanceCV, and a dozen others now let job seekers auto-generate "tailored" resumes and blast hundreds of applications per week. AIApply claims 1.1 million users and 150+ applications per person per week. LinkedIn application volume spiked 45% in the past year alone.&lt;/p&gt;

&lt;p&gt;I understand why these tools exist. The application process is broken, and people are frustrated. But here's what actually happened: instead of fixing the infrastructure, we gave humans AI-powered tools to brute-force through human-shaped forms faster. The agent still pretends to be a person. The form still pretends it's talking to one.&lt;/p&gt;

&lt;p&gt;The result on the demand side is brutal. Recruiters are drowning in AI-generated boilerplate. Every resume looks the same. Every cover letter hits the same keywords. Cost-per-hire is going up. Time-to-hire is going up. The Greenhouse CEO called it an "AI doom loop" — applicants use AI to mass-apply, companies use AI to mass-filter, both sides get worse outcomes, and trust craters. Only 8% of job seekers believe AI screening is fair. 42% who lost trust in hiring blame AI directly.&lt;/p&gt;

&lt;p&gt;These tools aren't solving the problem. They're making "AI in hiring" taste bad for the entire demand side. And the root cause is the same: they're operating ON broken infrastructure instead of replacing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The same structural problem breaks pricing
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. The identity problem and the pricing problem share the same root cause: infrastructure designed around human assumptions.&lt;/p&gt;

&lt;p&gt;Take flat-rate AI subscriptions. They were built for human interaction patterns — a person asks a few questions, gets a few answers, goes to lunch. A heavy user might make 20-50 API calls a day.&lt;/p&gt;

&lt;p&gt;Then agents showed up. Background agents routinely hit 500-2,000+ calls overnight. They don't take lunch breaks. They don't get tired. They don't have the natural throttle that a human body provides.&lt;/p&gt;

&lt;p&gt;The subscription model worked when usage correlated with having a body. One body ≈ one predictable usage pattern. Remove the body, the economics collapse. You get adverse selection: the heaviest agent users exploit flat pricing while moderate users subsidize them.&lt;/p&gt;

&lt;p&gt;This is the same fault line. Human-shaped infrastructure meets non-human participants, and the mismatch isn't cosmetic — it's structural.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means if you're building in hiring
&lt;/h2&gt;

&lt;p&gt;I work on agent infrastructure for hiring, so let me make this concrete.&lt;/p&gt;

&lt;p&gt;The hiring industry runs on systems designed for humans clicking through interfaces. Every ATS, every job board, every screening tool assumes a person is on the other end making decisions at human speed. The entire workflow — post a job, collect applications, screen resumes, schedule interviews, negotiate offers — is designed around human attention as the bottleneck.&lt;/p&gt;

&lt;p&gt;The auto-apply tools tried to speed up the human side. That just moved the bottleneck — now you've got 10x the volume with the same filtering infrastructure. When agents enter this workflow properly, two things need to work from the ground up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity&lt;/strong&gt;. A recruiting agent evaluating talent needs to be verifiable. Not "verified by a human who owns it" — verifiable on its own terms. The employer's agent needs to know it's negotiating with a legitimate recruiting agent, not a spam bot that scraped a resume database. Today, there's no standard way to do this. Every integration is bespoke, every trust relationship is ad hoc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Economics&lt;/strong&gt;. If an agent can process 1,000 talent evaluations in the time a recruiter handles 10, you can't charge per-seat. The unit of value shifts from "access" to "outcome." The business model that makes sense is transactional — you pay when an agent actually delivers a confirmed hire, not for the right to log in.&lt;br&gt;
These aren't separate problems. They're the same problem: your infrastructure was built for bodies.&lt;/p&gt;

&lt;h2&gt;
  
  
  The uncomfortable next question
&lt;/h2&gt;

&lt;p&gt;Most of the AI-in-hiring discourse right now focuses on the application layer. Better auto-apply tools. Smarter resume scanners. Faster interview scheduling bots.&lt;br&gt;
The harder question is: should agents be filling out human forms at all?&lt;/p&gt;

&lt;p&gt;Agent identity isn't a feature request. It's a prerequisite. Without verifiable agent credentials — not just "who owns this agent" but "what is this agent authorized to do, what has it done before, how did it perform" — you can't have agent-to-agent transactions that anyone trusts. And you definitely can't escape the doom loop where both sides just throw more AI at a fundamentally human-shaped pipe.&lt;/p&gt;

&lt;p&gt;97% of enterprises expect a major AI agent security incident this year. Not because agents are inherently dangerous, but because we're running non-human participants through human-shaped trust infrastructure and hoping for the best.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this is heading
&lt;/h2&gt;

&lt;p&gt;The fix isn't smarter workarounds. It isn't better auto-apply tools or more sophisticated resume generators. The fix is building infrastructure that doesn't assume anything about what's on the other end of the connection.&lt;/p&gt;

&lt;p&gt;Machine-readable schemas instead of human-readable forms. Verifiable credentials instead of SMS codes. Transaction-based pricing instead of seat licenses. Immutable audit trails instead of trust-on-first-use.&lt;/p&gt;

&lt;p&gt;Some of this already exists in pieces. MCP gives agents a standard way to connect to tools. A2A gives agents a way to talk to each other. But the domain-specific layers — the protocols that define what a "job" looks like to a machine, what a "talent profile" means in structured data, how to settle a hiring transaction between two agents that have never met — that's still mostly missing.&lt;/p&gt;

&lt;p&gt;That's what I'm working on. Not the application layer. Not another tool that helps agents pretend to be humans faster. The foundational layer that makes pretending unnecessary.&lt;/p&gt;

&lt;p&gt;The internet isn't ready for agents. The business models aren't ready for agents. The question isn't whether that changes — it's whether you're building workarounds on the old assumptions or on infrastructure for the new ones.&lt;/p&gt;

&lt;p&gt;--&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I'm building open protocol infrastructure for agent-to-agent hiring transactions. If you're working on agent identity, agent economics, or domain-specific agent protocols, I'd genuinely like to hear what you're running into. Find me on LinkedIn or check out adnx.ai.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Your Industry Runs Like Infrastructure in 2010</title>
      <dc:creator>Alexander Leonhard</dc:creator>
      <pubDate>Sat, 04 Apr 2026 23:18:59 +0000</pubDate>
      <link>https://dev.to/testinat0r/your-industry-runs-like-infrastructure-in-2010-4a9m</link>
      <guid>https://dev.to/testinat0r/your-industry-runs-like-infrastructure-in-2010-4a9m</guid>
      <description>&lt;p&gt;If you've spent any time in infrastructure engineering, you've seen this movie before.&lt;/p&gt;

&lt;p&gt;There's a domain running on manual processes. Everyone knows it's fragile. The people inside it have built elaborate workarounds — spreadsheets, tribal knowledge, "just ask Sarah, she knows how it works." Then something changes — scale, regulation, or both — and the workarounds stop working. Slowly at first. Then all at once.&lt;/p&gt;

&lt;p&gt;That's hiring right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For the people who already know&lt;/strong&gt;&lt;br&gt;
If you lived through the shift from click-ops to IaC, the rest of this will feel familiar. Maybe uncomfortably so.&lt;br&gt;
Remember what infrastructure looked like before Terraform? You logged into the console. You clicked through wizards. You configured things by hand. The state of production lived in someone's head — or worse, in a wiki page last updated eight months ago. When an auditor asked "what's running and why," the answer involved a lot of screen sharing and creative reconstruction.&lt;/p&gt;

&lt;p&gt;Then declarative specs happened. Version control. Drift detection. Immutable state. Provider abstraction. And somewhere along the way, the industry collectively decided: yeah, we're never going back.&lt;/p&gt;

&lt;p&gt;If you were part of that shift, you'll recognize every pattern below. If you weren't — if you're in a domain that skipped it, dismissed it, or figured it didn't apply to you — well. Keep reading. This might sting a little.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The part where hiring looks exactly like infrastructure in 2010&lt;/strong&gt;&lt;br&gt;
Every major recruiting platform is shipping AI agents. LinkedIn, SeekOut, hireEZ, half the ATS market. Agents that source, screen, schedule, and — increasingly — make recommendations that determine who gets hired.&lt;/p&gt;

&lt;p&gt;Here's how they operate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manual configuration.&lt;/strong&gt; Every job posting is hand-crafted free text. Every resume is unstructured. Every screening call is a snowflake process that lives in someone's head.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No shared state.&lt;/strong&gt; The recruiter's agent and the employer's agent have no common data format. They can't exchange structured information. They can't compare constraints programmatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No audit trail.&lt;/strong&gt; When the AI rejects someone, nobody logs what the model saw, what version was running, what the constraints were, or whether a human had the chance to override.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Point-to-point integrations everywhere.&lt;/strong&gt; Every ATS builds its own agent. Every agent is its own walled garden. Connecting them requires custom work. N platforms × M agencies = N×M integrations that nobody wants to maintain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anyone who's untangled a web of hand-configured servers is nodding right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  The mapping that writes itself
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;IaC&lt;/th&gt;
&lt;th&gt;Hiring&lt;/th&gt;
&lt;th&gt;What actually exists today&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;.tf&lt;/code&gt; file — declarative, typed, versioned&lt;/td&gt;
&lt;td&gt;Talent profiles + job requirements as structured schemas&lt;/td&gt;
&lt;td&gt;Free-text resumes and job descriptions written for humans, not machines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;terraform plan&lt;/code&gt; — preview, no side effects&lt;/td&gt;
&lt;td&gt;Match scoring — structured overlap between supply and demand&lt;/td&gt;
&lt;td&gt;Black-box AI screening with no explainability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;terraform apply&lt;/code&gt; — execute desired state&lt;/td&gt;
&lt;td&gt;Settlement — hire confirmed, contract terms locked, both sides notified&lt;/td&gt;
&lt;td&gt;Email chains, handshake deals, and "I'll send over the contract by Friday"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;.tfstate&lt;/code&gt; — authoritative record of applied state&lt;/td&gt;
&lt;td&gt;Compliance vault — every prompt, decision, and override logged&lt;/td&gt;
&lt;td&gt;An ATS activity log if you're lucky. Nothing if you're not.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Idempotency&lt;/td&gt;
&lt;td&gt;Same constraints in → same scores out&lt;/td&gt;
&lt;td&gt;LLM temperature randomness on every single call&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Drift detection&lt;/td&gt;
&lt;td&gt;Profile or job changes trigger re-evaluation&lt;/td&gt;
&lt;td&gt;Full re-screen from scratch. Every time.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Provider abstraction&lt;/td&gt;
&lt;td&gt;Same engine, any LLM underneath&lt;/td&gt;
&lt;td&gt;Vendor lock-in per tool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Modules — reusable across projects&lt;/td&gt;
&lt;td&gt;Same engine, different domains. Hiring today. Logistics, procurement, consulting tomorrow.&lt;/td&gt;
&lt;td&gt;Every vertical builds from scratch and learns the same lessons the hard way&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;IaC people will look at this table and think: "Obviously." People in recruiting will look at it and think: "Wait, that's possible?"&lt;br&gt;
That gap is the opportunity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The three forces that made IaC inevitable (and are now converging on hiring)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Scale broke manual processes.&lt;/strong&gt;&lt;br&gt;
Nobody adopted Terraform because it was trendy. They adopted it because you can't click-ops 500 servers. The hiring equivalent is arriving: when AI agents source from every job board simultaneously, the inbound volume breaks human screening. You need structured data, machine-readable constraints, and deterministic evaluation — or you drown.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Regulation demanded auditability.&lt;/strong&gt;&lt;br&gt;
SOC 2, HIPAA, PCI — IaC became mandatory not because engineers loved YAML, but because auditors said "show me the state." The EU AI Act is doing exactly this for hiring. AI in recruitment is explicitly classified as high-risk. Article 26 requires documentation, human oversight, bias auditing, and immutable logging. Penalties go up to €35M or 7% of global revenue. Enforcement expected late 2027.&lt;br&gt;
Every domain that thought "compliance doesn't apply to us" eventually learned otherwise. Hiring is next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Interoperability required open standards.&lt;/strong&gt;&lt;br&gt;
Terraform didn't win because HashiCorp built the best engine. It won because HCL became the shared language across providers. AWS, GCP, Azure — one spec, multiple backends.&lt;br&gt;
The hiring equivalent doesn't exist yet. There's no shared schema that defines what a talent profile or a job requirement looks like in machine-readable terms. Without it, every AI agent is a snowflake server. With it, N×M integrations collapse to N+M, and agents across platforms can actually negotiate on structured data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this looks like when you build it&lt;/strong&gt;&lt;br&gt;
Two open protocols, MIT-licensed:&lt;br&gt;
&lt;strong&gt;OTP (Open Talent Protocol)&lt;/strong&gt; — typed fields for talent profiles. Skills as structured arrays, not keyword strings. Seniority as an enum. Salary expectations as a range. Location and availability as machine-readable constraints. Think of it as the schema definition for what an agent says about a person.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OJP (Open Job Protocol)&lt;/strong&gt; — typed constraint sets for job requirements. Hard requirements vs. preferences as distinct categories. Budget as a range. Compliance flags per jurisdiction. The schema for what an agent says about a role.&lt;/p&gt;

&lt;p&gt;These are the &lt;code&gt;.tf&lt;/code&gt; files for hiring.&lt;/p&gt;

&lt;p&gt;On top of them, an engine handles matching (deterministic overlap — same input, same output), negotiation (multi-round, bilateral, async), settlement (confirmed hire, signed webhooks, idempotent), and a compliance vault (immutable, append-only, every decision logged with model version and inputs).&lt;/p&gt;
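&lt;p&gt;A minimal sketch of the deterministic-matching idea, in Python. Field names and the scoring rule here are illustrative assumptions, not the published OTP/OJP spec; the point is that hard constraints gate eligibility, preferences contribute to a score, and the same inputs always produce the same result:&lt;/p&gt;

```python
# Illustrative only: field names and scoring are hypothetical, not the spec.
# Hard requirements gate the match; preferences contribute to the score.

def match_score(profile: dict, job: dict) -> dict:
    """Deterministically score an OTP-style profile against an OJP-style job."""
    # Hard requirements: any miss is a rejection, with the reason recorded.
    missing = [s for s in job["required_skills"] if s not in profile["skills"]]
    if missing:
        return {"eligible": False, "score": 0.0, "missing": missing}

    # Salary constraint: the candidate's floor must fit the job's budget.
    if profile["salary_min"] > job["budget_max"]:
        return {"eligible": False, "score": 0.0, "missing": []}

    # Preferences: fraction of the job's nice-to-haves the profile covers.
    prefs = job.get("preferred_skills", [])
    covered = sum(1 for s in prefs if s in profile["skills"])
    score = covered / len(prefs) if prefs else 1.0
    return {"eligible": True, "score": round(score, 2), "missing": []}


profile = {"skills": {"react", "typescript"}, "salary_min": 45000}
job = {
    "required_skills": ["react"],
    "preferred_skills": ["typescript", "graphql"],
    "budget_max": 60000,
}
print(match_score(profile, job))  # same inputs always yield the same output
```

&lt;p&gt;Because nothing here calls an LLM, the result is reproducible and cheap to log, which is exactly what an audit trail needs.&lt;/p&gt;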

&lt;p&gt;&lt;strong&gt;The cost thing infrastructure engineers will quietly appreciate&lt;/strong&gt;&lt;br&gt;
Free-text resumes force frontier-model inference on every evaluation. The LLM has to parse, extract entities, infer structure, then reason about fit. Expensive. Unreliable. Slow.&lt;/p&gt;

&lt;p&gt;Structured schemas flip this. When a profile arrives as typed fields, the LLM compares data against constraints. Shorter prompts. Less reasoning. Cheaper models for the bulk of the work.&lt;/p&gt;

&lt;p&gt;80% of evaluations run on the smallest models available. Frontier models are reserved for the cases where nuanced judgment actually matters. Same idea as progressive disclosure in MCP servers — don't dump the full tool inventory into the context window when metadata is enough for the first pass.&lt;/p&gt;
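&lt;p&gt;A sketch of what that tiering can look like, assuming hypothetical model names and thresholds and a deterministic pre-score computed from the structured fields:&lt;/p&gt;

```python
# Sketch of tiered evaluation. Model names and thresholds are made up;
# the idea is that a deterministic pre-score routes the bulk of the work
# away from expensive inference.

def pick_model(pre_score: float, low: float = 0.3, high: float = 0.8) -> str:
    """Route an evaluation to a tier based on a deterministic pre-score."""
    if pre_score > high:
        return "small-model"         # clear fit: cheap confirmation pass
    if pre_score >= low:
        return "frontier-model"      # ambiguous: nuanced judgment needed
    return "reject-without-llm"      # clear miss: no inference at all

scores = [0.1, 0.95, 0.55, 0.88, 0.05, 0.92]
routes = [pick_model(s) for s in scores]
print(routes)  # only the ambiguous middle ever reaches the frontier model
```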

&lt;p&gt;&lt;strong&gt;Where this sits&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;What&lt;/th&gt;
&lt;th&gt;Who&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Domain&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Schemas, negotiation, settlement, audit&lt;/td&gt;
&lt;td&gt;adnx.ai&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transport&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agent-to-agent coordination&lt;/td&gt;
&lt;td&gt;A2A (Google / Linux Foundation)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Knowledge&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Organizational procedures&lt;/td&gt;
&lt;td&gt;Agent Skills (Anthropic)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Connectivity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tool access&lt;/td&gt;
&lt;td&gt;MCP (Anthropic / Linux Foundation)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Systems&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ATS, job boards, HRIS&lt;/td&gt;
&lt;td&gt;Your existing stack&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;MCP connects agents to tools. Skills teach agents your processes. A2A lets agents talk to each other. None of them define what agents say about talent and jobs, how a hire gets settled, or how the decision gets audited.&lt;/p&gt;

&lt;p&gt;That's the domain layer. The part of the stack that transport protocols always leave unfinished. TCP/IP didn't ship HTTP. HTTP didn't ship Stripe. And A2A won't ship hiring infrastructure. Someone has to build it.&lt;/p&gt;

&lt;p&gt;Every industry figures this out eventually. Some figured it out early and built on top. Some waited and spent the next decade catching up. If you're in a domain that still runs on unstructured text, manual processes, and "ask Sarah" — you know which group you're in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open protocols, open contribution&lt;/strong&gt;&lt;br&gt;
OTP and OJP are MIT-licensed. The specs are public. If you've ever contributed to an OpenAPI spec, a Terraform provider, or a Kubernetes CRD — same process.&lt;/p&gt;

&lt;p&gt;Specs: opentalentprotocol.org | openjobprotocol.org&lt;br&gt;
Docs: docs.adnx.ai&lt;/p&gt;

&lt;p&gt;You wouldn't provision infrastructure by hand anymore.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>infrastructure</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>The Integration Tax: Walled-Garden Agent Strategies Won't Scale (MxN vs. M+N)</title>
      <dc:creator>Alexander Leonhard</dc:creator>
      <pubDate>Wed, 01 Apr 2026 14:52:17 +0000</pubDate>
      <link>https://dev.to/testinat0r/the-integration-tax-walled-garden-agent-strategies-wont-scale-mxn-vs-mn-g5f</link>
      <guid>https://dev.to/testinat0r/the-integration-tax-walled-garden-agent-strategies-wont-scale-mxn-vs-mn-g5f</guid>
      <description>&lt;p&gt;Personio maintains 200+ integrations. Greenhouse has 400+. iCIMS lists 800+.&lt;/p&gt;

&lt;p&gt;Every single one is a point-to-point adapter somebody had to scope, build, test, and keep alive. That was fine when the other end was a stable SaaS product with a versioned API and a partnerships team you could email.&lt;/p&gt;

&lt;p&gt;Now the other end is an AI agent that shipped last Tuesday, pivots next month, and might not exist by Q3.&lt;/p&gt;

&lt;p&gt;The math is about to break. And not just in recruiting.&lt;/p&gt;

&lt;h2&gt;
  
  
  The startup spree nobody asked for
&lt;/h2&gt;

&lt;p&gt;There are over 100 AI recruiting startups right now. Sourcing agents. Screening agents. Scheduling agents. Matching agents. Interview agents. Reference-check agents. Most of them do roughly the same thing with slightly different wrappers. And every single one wants your API.&lt;/p&gt;

&lt;p&gt;If you're an integration engineer: each new agent means onboarding, sandbox access, field mapping, testing, a contract, a fee conversation. Half of them will pivot or shut down within 18 months — leaving you maintaining dead integrations for products that no longer exist.&lt;/p&gt;

&lt;p&gt;If you're a recruiter: you're drowning. Another AI tool. Another dashboard. Another vendor claiming to "revolutionize hiring." Who owns the candidate data when three agents touch the same profile? Who's liable when a screening agent rejects someone unfairly? The roles are opaque, the accountability is nonexistent, and the pitch decks all look identical.&lt;/p&gt;

&lt;p&gt;All you wanted was to help talent find jobs faster. Instead you're managing integrations, fees, contracts, and SLAs for a growing stack of tools that can't talk to each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  "Just vibe-code the adapters"
&lt;/h2&gt;

&lt;p&gt;Sure. AI coding tools make it faster to scaffold an integration. You can stub out a Greenhouse adapter in an afternoon now.&lt;/p&gt;

&lt;p&gt;But building was never the expensive part. Maintaining is.&lt;/p&gt;

&lt;p&gt;Every ATS updates their API. Fields get deprecated. Auth flows change. Rate limits shift. Your AI-generated adapter breaks silently at 2am and candidates disappear into a void. Multiply that across 200 integrations, half written by an LLM with no context on your internal data model.&lt;/p&gt;

&lt;p&gt;AI coding doesn't eliminate integration debt. It lets you accumulate it faster.&lt;/p&gt;

&lt;p&gt;And that's just code. You still manage the relationship overhead — partnership agreements, sandbox environments, webhook registrations, revalidation cycles (iCIMS requires revalidation every 90 days), version migrations. Every integration partner is a mini vendor relationship. At 800+, your integration team isn't engineering anymore. It's account management.&lt;/p&gt;

&lt;h2&gt;
  
  
  The protocol layer is fractured
&lt;/h2&gt;

&lt;p&gt;MCP gives agents access to tools and data. A2A lets agents discover each other and coordinate. Both are real, both are gaining traction. Workday built an Agent Gateway on MCP + A2A — 15 launch partners including Paradox and Microsoft. It's well-designed.&lt;/p&gt;

&lt;p&gt;But here's what these protocols don't do: they don't define what a "talent profile" looks like. Or a "job requirement." Or a "hiring transaction." They're transport and coordination layers — they move messages between agents. They don't know what hiring &lt;em&gt;is&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Think of it this way: MCP and A2A are TCP/IP. What's missing is HTTP — the domain-specific protocol that makes the transport useful for a particular industry.&lt;/p&gt;

&lt;p&gt;Visa understood this for payments. They didn't just adopt generic networking protocols and call it done. They built a domain-specific transaction layer on top — standardized what a "payment" looks like, how authorization works, how settlement happens, how disputes get resolved. Every merchant and every bank connects once to a shared protocol. That's why your card works at any terminal on the planet.&lt;/p&gt;

&lt;p&gt;Hiring has no equivalent. Neither does real estate. Or legal services. Or logistics. Every industry that's about to be flooded with AI agents faces the same structural gap: generic agent protocols exist, but the domain-specific transaction layers that make them useful don't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The token economics nobody's talking about
&lt;/h2&gt;

&lt;p&gt;Here's where it gets concrete. When an AI agent receives a PDF resume, it has to burn tokens parsing unstructured text into something it can reason about. Every agent does this independently. Every time. For every candidate.&lt;/p&gt;

&lt;p&gt;Now imagine a structured talent object — 40+ fields covering skills, credentials, availability, compensation, work authorization — in a machine-first format that any agent can evaluate against constraints directly. No parsing. No interpretation. No guessing what "experienced with cloud platforms" means.&lt;/p&gt;

&lt;p&gt;The difference in token consumption between parsing a PDF and reading a structured schema is not marginal. Research shows structured context reduces token usage by 60-90% compared to unstructured input. At agent scale — millions of matching operations — that's the difference between a viable business model and one that bleeds money on inference costs.&lt;/p&gt;
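&lt;p&gt;A toy illustration of the mechanism (whitespace tokens as a crude proxy; the numbers are illustrative, not the research figures): when facts arrive as typed fields, an agent can read only the fields a job's constraints actually need, instead of parsing an entire narrative.&lt;/p&gt;

```python
# Toy comparison: whitespace tokens as a crude proxy for LLM tokens.
# The profile text and field names are invented for illustration.
import json

free_text = (
    "Experienced frontend developer with roughly five years working across "
    "React and TypeScript, comfortable in hybrid setups, currently based in "
    "Berlin, looking for something in the 45-60k EUR range, available from "
    "April, open to occasional travel but prefers a strong work-life balance."
)

structured = {
    "skills": ["react", "typescript"],
    "years_experience": 5,
    "work_arrangement": "hybrid",
    "location": "Berlin",
    "salary_eur": {"min": 45000, "max": 60000},
}

# The agent only pulls the fields relevant to this job's constraints:
needed = {k: structured[k] for k in ("skills", "salary_eur")}

print(len(free_text.split()), "prose tokens vs",
      len(json.dumps(needed).split()), "structured tokens")
```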

&lt;p&gt;Domain-specific schemas aren't just about interoperability. They're about making agent economics work. Without them, every agent in every industry is burning tokens on interpretation instead of doing actual work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meanwhile: the foundation is still a PDF
&lt;/h2&gt;

&lt;p&gt;This is what really gets me.&lt;/p&gt;

&lt;p&gt;We have MCP. We have A2A. We have billion-dollar acquisitions and agent gateways and unified APIs. A hundred AI startups claim to automate hiring end-to-end.&lt;/p&gt;

&lt;p&gt;And candidates still apply with PDFs. A static, unstructured document designed for a human recruiter's desk in 1995. No structured skills data. No machine-readable credentials. No verifiable work history.&lt;/p&gt;

&lt;p&gt;Every AI recruiting agent — no matter how sophisticated — starts by OCR-ing a resume and hoping the parser gets it right. The industry is building autonomous agents on top of a format that requires human interpretation. That's not infrastructure. That's duct tape.&lt;/p&gt;

&lt;p&gt;And this is the pattern everywhere. Real estate agents work with PDF listings. Legal agents parse PDF contracts. Logistics agents read PDF shipping documents. Every industry that's adopting AI agents is hitting the same wall: the foundational data objects were designed for humans, and nobody has replaced them with something agents can natively transact on.&lt;/p&gt;

&lt;h2&gt;
  
  
  This isn't just a hiring problem
&lt;/h2&gt;

&lt;p&gt;Recruiting is a good case study because the fragmentation is extreme — 50+ ATS vendors, 100+ agent startups, hard regulatory deadlines (EU AI Act enforcement starts August 2, 2026 — hiring is explicitly classified as high-risk AI). But the structural pattern is universal.&lt;/p&gt;

&lt;p&gt;Any industry with fragmented vendors, high-frequency transactions, and autonomous agents needs three things that generic protocols can't provide:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domain-specific schemas.&lt;/strong&gt; Machine-first representations of the industry's core objects — what a "talent profile" or a "property listing" or a "freight shipment" actually looks like in structured, evaluable form.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transaction protocols.&lt;/strong&gt; Not just data sync — negotiation, settlement, counteroffers, acceptance. The difference between a data pipe and an economy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural compliance.&lt;/strong&gt; Immutable audit trails baked into the transaction layer. When regulators classify your industry as high-risk AI (and they will — hiring is just first), bolting compliance on after the fact doesn't work. Research presented at ACM FAccT in 2024 showed that when compliance is optional, companies avoid it. It has to be structural.&lt;/p&gt;
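&lt;p&gt;Hash chaining is one common way to make an audit trail structurally tamper-evident (an illustrative sketch, not any vault's actual design): each entry commits to the hash of the previous one, so editing any past record invalidates everything after it.&lt;/p&gt;

```python
# Illustrative append-only log: each entry's hash covers the previous hash,
# so rewriting history breaks the chain for every later entry.
import hashlib
import json

def append_entry(log: list, record: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(log: list) -> bool:
    prev = "0" * 64
    for e in log:
        payload = json.dumps(e["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log: list = []
append_entry(log, {"decision": "reject", "model": "v3.2", "override": False})
append_entry(log, {"decision": "advance", "model": "v3.2", "override": True})
assert verify(log)

log[0]["record"]["decision"] = "advance"   # tampering with history...
assert not verify(log)                     # ...is detectable
```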

&lt;p&gt;Rochet and Tirole's 2003 analysis of two-sided markets showed why hub topologies (N+M) beat meshes (N×M). For 50 vendors and 200 agents: 250 connections instead of 10,000. The economics are clear. But each industry needs its own domain layer to make the hub work. Generic transport isn't enough. Visa proved that for payments. Hiring needs the same. So does every other industry about to drown in agents.&lt;/p&gt;
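&lt;p&gt;The connection-count arithmetic is worth seeing once: every vendor integrating with every agent is a mesh, everyone integrating once against a shared protocol is a hub.&lt;/p&gt;

```python
# The figures from the paragraph above: 50 vendors, 200 agents.
vendors, agents = 50, 200

mesh = vendors * agents   # point-to-point: N x M adapters to maintain
hub = vendors + agents    # shared protocol: N + M adapters total

print(f"mesh: {mesh} integrations, hub: {hub} integrations")  # 10000 vs 250
```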

&lt;h2&gt;
  
  
  The caveat I'd be dishonest to skip
&lt;/h2&gt;

&lt;p&gt;HR Open Standards has been around for 27 years with limited adoption. "Domain-specific protocol" isn't a new pitch. Most of them die in committee.&lt;/p&gt;

&lt;p&gt;The argument — and it's an argument, not a certainty — is that agent adoption changes the dynamics. Agents don't have procurement cycles or integration review boards. If a schema is more efficient, the agent uses it. That compresses adoption timelines from decades to months. Maybe. It's an empirical question.&lt;/p&gt;

&lt;p&gt;But the structural need isn't going away. The agent count is growing in every industry. The integration tax is compounding. And every month without shared domain layers, the mesh gets denser and more expensive — while agents burn tokens on interpretation that structured schemas would eliminate.&lt;/p&gt;

&lt;p&gt;Whether these layers get built by startups, consortia, or dominant vendors absorbing the problem — I don't know. But the gap is real, it's widening, and it's not unique to hiring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're building agents or maintaining integrations in any industry: what would it take for you to adopt a shared domain protocol? What's the real blocker — technical, commercial, trust? Is it the schema design? The governance? The fact that nobody wants to be first?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I think the answer to that question matters more than any specific solution right now.&lt;/p&gt;




</description>
      <category>ai</category>
      <category>agents</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>We Built Open Protocols for the Job Market — Here's What We Learned</title>
      <dc:creator>Alexander Leonhard</dc:creator>
      <pubDate>Thu, 05 Mar 2026 09:16:06 +0000</pubDate>
      <link>https://dev.to/testinat0r/we-built-open-protocols-for-the-job-market-heres-what-we-learned-3ih4</link>
      <guid>https://dev.to/testinat0r/we-built-open-protocols-for-the-job-market-heres-what-we-learned-3ih4</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; We designed two open data protocols — Open Talent Protocol (OTP) for candidate profiles and Open Job Protocol (OJP) for job postings — then built bilateral AI agents that negotiate on behalf of both sides. This post covers the schema design, the simulation engine, and why ESCO taxonomy is the Rosetta Stone for skills matching.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Author: Claude Code with Human Guidance&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Job matching is broken. Candidates describe themselves in free text. Employers list requirements in free text. An ATS tries to keyword-match between the two. Everyone loses.&lt;/p&gt;

&lt;p&gt;We wanted structured, interoperable representations for both sides of the hiring equation — something any platform could produce and consume, the way ActivityPub works for social networks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Protocols, One Marketplace
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Open Talent Protocol (OTP)
&lt;/h3&gt;

&lt;p&gt;OTP is a JSON schema for a candidate's professional profile. Think of it as a structured resume that machines can reason about.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"meta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"created"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-15T10:00:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jobgrow.ai"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"identity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Lena Weber"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"headline"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Junior Frontend Developer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"location"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Berlin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"country"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DE"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"work"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Frontend Developer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"organization"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TechStartup GmbH"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"startDate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-01"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"current"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"React"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"taxonomy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://data.europa.eu/esco/skill/..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"taxonomy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ESCO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"prefLabel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"React"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"preferences"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"desiredRoles"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Frontend Developer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Full-Stack Developer"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"workArrangement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hybrid"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"salary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;45000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"currency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EUR"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"priorities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"work-life balance"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"learning opportunities"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"startAvailability"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-04-01"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"constraints"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mustHaveRemote"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"minimumSalary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"locationLocked"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"noTravelRequired"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"requiresVisaSponsorship"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key design decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Preferences vs. constraints&lt;/strong&gt; — Preferences are negotiable ("I'd like remote"). Constraints are non-negotiable ("I need visa sponsorship"). This distinction is critical for the simulation engine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Taxonomy references&lt;/strong&gt; — Every skill can optionally carry an ESCO or O*NET URI, enabling cross-platform skill matching without string comparison.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Two versions&lt;/strong&gt; — A lightweight v0.1 for public-facing profiles (shareable URLs) and a richer v1.0 for simulation input with negotiation strategy hints.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Open Job Protocol (OJP)
&lt;/h3&gt;

&lt;p&gt;OJP is the employer-side counterpart — a structured job posting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"meta"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"created"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-15T10:00:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jobgrow.ai"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"identity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"company"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TechNova GmbH"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"industry"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Software"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"50-200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"culture"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"flat hierarchy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"remote-friendly"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Software Engineer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"seniority"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mid"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"department"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Engineering"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"teamSize"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"compensation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"salary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;35000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;55000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"currency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EUR"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"bonus"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"performance-based"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"equity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"benefits"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"public transit pass"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"home office budget"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"requirements"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mustHave"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"React"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TypeScript"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3+ years experience"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"niceToHave"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"GraphQL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CI/CD"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"React"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"advanced"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"taxonomy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"taxonomy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ESCO"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"negotiation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"salaryFlexibility"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"remoteFlexibility"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"high"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"titleFlexibility"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"low"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"urgency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;negotiation&lt;/code&gt; block is the secret sauce — it tells the simulation engine how much the employer is willing to bend on each dimension.&lt;/p&gt;
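&lt;p&gt;One way this block could parameterize an agent (an assumption for illustration, not the documented semantics) is to map each flexibility level to a maximum concession fraction:&lt;/p&gt;

```typescript
// Hypothetical mapping from flexibility level to how far the employer
// agent may stretch a stated limit. The fractions are illustrative.
const CONCESSION: { [level: string]: number } = {
  low: 0.02,
  medium: 0.08,
  high: 0.15,
};

// e.g. "salaryFlexibility": "medium" on a 55000 EUR max lets the agent
// stretch to roughly 55000 * 1.08 = 59400 during negotiation.
function stretchedMax(salaryMax: number, flexibility: string): number {
  const frac = CONCESSION[flexibility] ?? 0; // unknown level: no stretch
  return Math.round(salaryMax * (1 + frac));
}
```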




&lt;h2&gt;
  
  
  The Bilateral Simulation Engine
&lt;/h2&gt;

&lt;p&gt;With both sides represented as structured data, we built an AI simulation engine where two agents negotiate on behalf of candidate and employer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;The engine follows a state machine topology inspired by LangGraph:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;preScreen → [earlyExit | parallelAssessment] → merger → [negotiationRound → reMerge]* → finalize
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 1: Pre-Screen Scoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A deterministic scorer calculates weighted compatibility before any AI is involved:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Skills match&lt;/td&gt;
&lt;td&gt;40%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Salary overlap&lt;/td&gt;
&lt;td&gt;25%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Location compatibility&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Seniority fit&lt;/td&gt;
&lt;td&gt;10%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Work arrangement&lt;/td&gt;
&lt;td&gt;10%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If the score falls below a threshold, the simulation exits early — no point wasting AI tokens on an obvious mismatch.&lt;/p&gt;
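&lt;p&gt;A minimal sketch of such a scorer, using the weights from the table above (the factor names and the 0.4 cutoff are assumptions, not the production values):&lt;/p&gt;

```typescript
// Deterministic pre-screen scorer sketch. Each factor is a 0..1 score;
// the weights come from the table above and sum to 1.
type FactorScores = {
  skills: number;
  salary: number;
  location: number;
  seniority: number;
  arrangement: number;
};

const WEIGHTS: FactorScores = {
  skills: 0.40,
  salary: 0.25,
  location: 0.15,
  seniority: 0.10,
  arrangement: 0.10,
};

function preScreenScore(s: FactorScores): number {
  return (
    s.skills * WEIGHTS.skills +
    s.salary * WEIGHTS.salary +
    s.location * WEIGHTS.location +
    s.seniority * WEIGHTS.seniority +
    s.arrangement * WEIGHTS.arrangement
  );
}

// Early exit before any AI call when the weighted score is too low.
const EXIT_THRESHOLD = 0.4; // assumed value
function shouldExitEarly(s: FactorScores): boolean {
  return EXIT_THRESHOLD > preScreenScore(s);
}
```

Because the scorer is pure arithmetic, it is free to run on every candidate/posting pair before a single token is spent.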

&lt;p&gt;&lt;strong&gt;Step 2: Parallel Agent Assessment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Two AI agents run simultaneously, each receiving the full OTP or OJP document in its prompt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Candidate Agent&lt;/strong&gt; — Evaluates the job from the candidate's perspective. Considers preferences, constraints, career goals, and deal-breakers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Employer Agent&lt;/strong&gt; — Evaluates the candidate from the employer's perspective. Considers requirements, culture fit, and hiring urgency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each agent is "opaque" to the other — they can't see each other's reasoning, only proposed terms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Negotiation Rounds&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If both sides say "interested but..." the engine runs multi-round negotiation. Each agent can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Accept&lt;/strong&gt; — Deal done.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Negotiate&lt;/strong&gt; — Propose counter-terms with reasoning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reject&lt;/strong&gt; — Walk away with explanation.&lt;/li&gt;
&lt;/ul&gt;
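&lt;p&gt;The three actions map naturally onto a discriminated union. This is a sketch with assumed field names, not the engine's actual types:&lt;/p&gt;

```typescript
// Per-round agent decision as a discriminated union; the action names
// come from the article, the field names are assumptions.
type AgentDecision =
  | { action: "accept" }
  | { action: "negotiate"; counterTerms: { [dimension: string]: string }; reasoning: string }
  | { action: "reject"; reasoning: string };

// A round ends the simulation on any rejection, closes the deal on
// mutual acceptance, and otherwise continues to the next round.
function roundOutcome(a: AgentDecision, b: AgentDecision): "deal" | "walk" | "continue" {
  if (a.action === "reject" || b.action === "reject") return "walk";
  if (a.action === "accept") {
    if (b.action === "accept") return "deal";
  }
  return "continue";
}
```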

&lt;p&gt;Agents follow configurable negotiation strategies:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Strategy&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;assertive&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pushes hard on must-haves, makes minimal concessions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;collaborative&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Seeks win-win, willing to trade across dimensions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;conservative&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Risk-averse, cautious about deviating from stated preferences&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Merger&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The merger combines both agents' assessments into a final result: match/no-match, agreed terms, and a confidence score with reasoning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Not Just Use LangGraph?
&lt;/h3&gt;

&lt;p&gt;We wanted to. LangGraph's graph-based agent orchestration was a natural fit. But our simulation runs on Deno-based edge functions (Supabase), and LangGraph's Node.js dependencies don't play nicely with Deno's module system.&lt;/p&gt;

&lt;p&gt;So we built a lightweight state machine (~100 lines) that follows the same topology — nodes, edges, conditional routing — without the dependency. It handles streaming via Server-Sent Events and supports both single-shot and multi-round negotiation flows.&lt;/p&gt;
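&lt;p&gt;In miniature, the same topology looks like this (a toy synchronous sketch, not the JobGrow implementation; streaming and async are omitted):&lt;/p&gt;

```typescript
// Minimal LangGraph-style state machine: named nodes, each returning
// the name of the next node or null to stop. Conditional routing
// lives inside the nodes themselves.
type SimState = { score: number; log: string[] };
type NodeFn = (state: SimState) => string | null;

function runMachine(nodes: { [name: string]: NodeFn }, start: string, state: SimState): SimState {
  let current: string | null = start;
  while (current !== null) {
    state.log.push(current);       // record the path for inspection
    current = nodes[current](state);
  }
  return state;
}

// The article's topology, with assessment collapsed to one node:
const nodes: { [name: string]: NodeFn } = {
  preScreen: (s) => (s.score >= 0.4 ? "assessment" : "earlyExit"),
  earlyExit: () => null,
  assessment: () => "merger",
  merger: () => "finalize",
  finalize: () => null,
};
```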




&lt;h2&gt;
  
  
  ESCO: The Rosetta Stone for Skills
&lt;/h2&gt;

&lt;p&gt;Matching skills by raw string comparison is a nightmare: "React.js" ≠ "React" ≠ "ReactJS" unless you normalize them. We use the &lt;a href="https://esco.ec.europa.eu/" rel="noopener noreferrer"&gt;European Skills, Competences, Qualifications and Occupations (ESCO)&lt;/a&gt; taxonomy as a universal skill identifier.&lt;/p&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Extraction&lt;/strong&gt; — Parse skills from free text (resume or job description)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resolution&lt;/strong&gt; — Look up each skill against the ESCO API to get a canonical URI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Matching&lt;/strong&gt; — Compare URI-to-URI instead of string-to-string&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching&lt;/strong&gt; — TTL cache for resolved lookups to avoid hammering the API
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;TaxonomyRef&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;        &lt;span class="c1"&gt;// "http://data.europa.eu/esco/skill/abc123"&lt;/span&gt;
  &lt;span class="nl"&gt;taxonomy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;   &lt;span class="c1"&gt;// "ESCO"&lt;/span&gt;
  &lt;span class="nl"&gt;prefLabel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// "React (JavaScript library)"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means "React.js" on a resume and "React" in a job posting resolve to the same ESCO URI — instant match without fuzzy string logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  O*NET Support
&lt;/h3&gt;

&lt;p&gt;We also support O*NET taxonomy references for the US market, using the same &lt;code&gt;TaxonomyRef&lt;/code&gt; structure. The resolver interface is pluggable — add a new taxonomy by implementing a single &lt;code&gt;resolve(skillName: string): TaxonomyRef | null&lt;/code&gt; method.&lt;/p&gt;
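&lt;p&gt;A sketch of that pluggable shape combined with the TTL cache from step 4 (the class and field names are illustrative; only &lt;code&gt;resolve&lt;/code&gt; comes from the article):&lt;/p&gt;

```typescript
// TaxonomyRef as shown earlier in the article.
interface TaxonomyRef {
  uri: string;
  taxonomy: string;
  prefLabel: string;
}

// The pluggable resolver contract: one method per taxonomy backend.
interface SkillResolver {
  resolve(skillName: string): TaxonomyRef | null;
}

// Wraps any resolver (e.g. an ESCO API client) with a TTL cache so
// repeated lookups of the same skill don't hammer the API.
class CachedResolver implements SkillResolver {
  private cache: { [key: string]: { ref: TaxonomyRef | null; expires: number } } = {};

  constructor(private inner: SkillResolver, private ttlMs: number) {}

  resolve(skillName: string): TaxonomyRef | null {
    const key = skillName.trim().toLowerCase(); // normalize before caching
    const hit = this.cache[key];
    if (hit !== undefined) {
      if (hit.expires > Date.now()) return hit.ref; // fresh cache hit
    }
    const ref = this.inner.resolve(skillName);
    this.cache[key] = { ref, expires: Date.now() + this.ttlMs };
    return ref;
  }
}
```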




&lt;h2&gt;
  
  
  Public Profiles: OTP as a Shareable Format
&lt;/h2&gt;

&lt;p&gt;We built a public profile system on top of OTP. Any user can share their profile at a clean URL like &lt;code&gt;/p/username&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Privacy by Default
&lt;/h3&gt;

&lt;p&gt;Public profiles use &lt;strong&gt;redacted mode&lt;/strong&gt; by default:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Company names replaced with &lt;code&gt;****&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;PII (email, phone, LinkedIn) stripped&lt;/li&gt;
&lt;li&gt;Skills, experience duration, and education visible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Recruiters can request full access — when approved, they receive a token that unlocks the unredacted OTP document including contact details.&lt;/p&gt;
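&lt;p&gt;As a sketch, redaction can be a pure function over the profile (the field names below are assumptions, not the actual OTP schema):&lt;/p&gt;

```typescript
// Illustrative redaction pass: strip PII entirely, mask company
// names, keep skills and experience durations visible.
type PublicProfile = {
  contact?: { email?: string; phone?: string; linkedin?: string };
  experience: { company: string; role: string; years: number }[];
  skills: string[];
};

function redact(profile: PublicProfile): PublicProfile {
  return {
    // contact (email, phone, LinkedIn) is omitted in redacted mode
    experience: profile.experience.map((e) => ({
      company: "****",   // company names masked
      role: e.role,
      years: e.years,    // duration stays visible
    })),
    skills: profile.skills, // skills stay visible
  };
}
```

A token-gated endpoint would simply skip this pass and serve the full document.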

&lt;h3&gt;
  
  
  Machine-Readable
&lt;/h3&gt;

&lt;p&gt;The public endpoint serves OTP as JSON, making profiles consumable by other platforms, ATS systems, or AI agents. Rate-limited to 30 requests/minute per IP.&lt;/p&gt;




&lt;h2&gt;
  
  
  Parsing Job Descriptions into OJP
&lt;/h2&gt;

&lt;p&gt;Real job postings don't arrive as structured JSON. We built a 3-strategy parser to extract requirements from free-text job descriptions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sentinel blocks&lt;/strong&gt; — Detect structured patterns like &lt;code&gt;Requirements:\n•&lt;/code&gt; and parse bullet lists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Named sections&lt;/strong&gt; — Find headers like &lt;code&gt;Required:&lt;/code&gt;, &lt;code&gt;Preferred:&lt;/code&gt;, &lt;code&gt;Nice to have:&lt;/code&gt; and categorize accordingly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback bullet detection&lt;/strong&gt; — When no clear structure exists, heuristically identify bullet points and classify them&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This turns any job description URL into a structured OJP document that the simulation engine can consume.&lt;/p&gt;
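&lt;p&gt;The cascade itself is simple: try each strategy in order and keep the first non-empty result. A sketch with a toy fallback strategy (not the production heuristics):&lt;/p&gt;

```typescript
// Strategy cascade: each strategy gets the raw text; the first one
// that extracts anything wins.
type Requirement = { text: string; kind: "mustHave" | "niceToHave" };
type Strategy = (text: string) => Requirement[];

function parseRequirements(text: string, strategies: Strategy[]): Requirement[] {
  for (const strategy of strategies) {
    const found = strategy(text);
    if (found.length > 0) return found;
  }
  return [];
}

// Toy fallback: treat "• " bullet lines as must-have requirements.
const bulletFallback: Strategy = (text) =>
  text
    .split("\n")
    .filter((line) => line.trim().startsWith("• "))
    .map((line) => ({ text: line.trim().slice(2).trim(), kind: "mustHave" as const }));
```

In practice the sentinel-block and named-section strategies would sit ahead of the fallback in the array.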




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Protocol versioning&lt;/strong&gt; — Formal semver for OTP/OJP schemas with migration guides&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Federation&lt;/strong&gt; — Allow other platforms to publish and consume OTP/OJP documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Richer negotiation&lt;/strong&gt; — Multi-dimensional trade-offs (e.g., "I'll accept lower salary for full remote")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Taxonomy expansion&lt;/strong&gt; — Beyond ESCO/O*NET to industry-specific skill taxonomies&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Structured beats unstructured&lt;/strong&gt; — Once both sides of a job match are machine-readable, you can build things that string matching never allows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preferences ≠ constraints&lt;/strong&gt; — This distinction is the single most important schema design decision. It makes negotiation possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Taxonomies are infrastructure&lt;/strong&gt; — ESCO URIs as skill identifiers eliminate an entire class of matching bugs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build the simplest state machine that works&lt;/strong&gt; — We didn't need a full orchestration framework. A clear topology with well-defined nodes got us 90% of the way.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy as a protocol feature&lt;/strong&gt; — Redacted-by-default with token-based escalation respects candidates while enabling legitimate recruiting.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;em&gt;We're building JobGrow.ai — an AI-powered career platform. If open protocols for the job market interest you, we'd love to hear from you.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>opensource</category>
    </item>
    <item>
      <title>MVP of an AI career coach to help you land a job in 2026.</title>
      <dc:creator>Alexander Leonhard</dc:creator>
      <pubDate>Fri, 20 Feb 2026 08:37:00 +0000</pubDate>
      <link>https://dev.to/testinat0r/mvp-of-an-ai-career-coach-to-help-you-land-a-job-in-2026-225f</link>
      <guid>https://dev.to/testinat0r/mvp-of-an-ai-career-coach-to-help-you-land-a-job-in-2026-225f</guid>
      <description>&lt;p&gt;I've built an AI career agent targeting European job seekers.&lt;/p&gt;

&lt;p&gt;Open to feedback.&lt;/p&gt;

&lt;p&gt;The problem: it takes an eternity to get your first job offer, generic CVs convert to interviews at just 3.7%, and existing tools are either not built for Europe's languages and CV norms or ship isolated features instead of covering the entire application flow (tracking, CV improvement, interview prep, career goals...).&lt;/p&gt;

&lt;p&gt;JobGrow handles CV tailoring per job posting, application tracking, mock interviews, and career coaching. GDPR-native. EU AI Act compliant.&lt;/p&gt;

&lt;p&gt;Stack: React, Supabase, Cloudflare, Gemini/OpenAI.&lt;/p&gt;

&lt;p&gt;Built with: Claude Opus 4.5 and Sonnet 4.6 over 3 months.&lt;/p&gt;

&lt;p&gt;Free tier available. Would love your thoughts: &lt;a href="https://jobgrow.ai" rel="noopener noreferrer"&gt;https://jobgrow.ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get 50% off via &lt;a href="https://www.producthunt.com/products/jobgrow-ai" rel="noopener noreferrer"&gt;https://www.producthunt.com/products/jobgrow-ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See also:&lt;br&gt;
Agent-ready protocol for resumes. Looking for contributors.&lt;br&gt;
&lt;a href="https://opentalentprotocol.org/" rel="noopener noreferrer"&gt;https://opentalentprotocol.org/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>ai</category>
      <category>saas</category>
      <category>hiring</category>
    </item>
  </channel>
</rss>
