Most job descriptions are born from copy-paste. A recruiter grabs last year's JD, swaps out a few bullet points, adjusts the title, and posts it. The result: inconsistent leveling, missing skills, salary ranges pulled from gut feel, and qualifications lists that scare away half the qualified candidates.
I wanted to see what happens when you throw a team of specialized AI agents at this problem instead. Not one prompt — four agents, each focused on a different slice of job architecture, passing structured data to the next. Market research feeds into skill mapping, which feeds into competency frameworks, which all merge into a final JD that actually holds together.
By the end of this tutorial, you will have:
- A single-prompt baseline that shows what a raw LLM produces for job descriptions
- A 4-agent crewAI pipeline that chains market research, skill taxonomy, competency framework, and JD composition
- Pydantic schemas that enforce structured, parseable output from every agent
- A side-by-side comparison with real metrics — tokens, cost, latency
The whole thing runs on Amazon Bedrock with Nova Pro. Total cost for the full tutorial: under $0.15.
Prerequisites
- AWS account with Bedrock access in us-east-1 (Nova Pro model enabled)
- Python 3.12+
- AWS credentials configured (aws configure or environment variables)
- About $0.15 to run everything
Step 1 — Set Up crewAI with Amazon Bedrock
Install the dependencies:
pip install "crewai[tools]>=1.9.0" boto3 python-dotenv
Create a .env file in your project directory:
AWS_DEFAULT_REGION=us-east-1
MODEL=bedrock/amazon.nova-pro-v1:0
crewAI talks to Bedrock through litellm under the hood — the bedrock/ prefix in the model string handles the routing. No extra Bedrock SDK setup needed beyond having valid AWS credentials.
(If you're on an AISPL account like mine, stick with Nova models. Claude on Bedrock requires Marketplace billing sorted out first, and that's a separate adventure.)
Step 2 — Define the Output Schemas
This is the part that separates a demo from something you could actually plug into an ATS or HRIS. Instead of letting agents return free-form text, we define Pydantic models that force structured JSON output.
Create models.py:
"""Output schemas for the job architecture crew."""
from pydantic import BaseModel, Field
# --- Agent 1: Market Research ---
class SalaryRange(BaseModel):
currency: str = "USD"
min_annual: int
max_annual: int
median_annual: int
class MarketResearchOutput(BaseModel):
"""What the Market Research Analyst produces."""
job_title: str
alternative_titles: list[str] = Field(description="Common variations of this role title")
salary_range: SalaryRange
market_demand: str = Field(description="High / Medium / Low with brief explanation")
industry_context: str = Field(description="Where this role is most common and why")
typical_team_structure: str = Field(description="Who this role reports to and works alongside")
remote_prevalence: str = Field(description="Remote/hybrid/onsite trends for this role")
# --- Agent 2: Skills Taxonomy ---
class Skill(BaseModel):
name: str
importance: str = Field(description="required / preferred / nice-to-have")
context: str = Field(default="", description="Why this skill matters for the role")
class SkillTaxonomyOutput(BaseModel):
"""Structured skill breakdown from the Skills Analyst."""
technical_skills: list[Skill]
soft_skills: list[Skill]
domain_skills: list[Skill] = Field(description="Industry or function-specific knowledge")
tools_and_platforms: list[Skill] = Field(description="Specific tools, languages, or platforms")
certifications: list[Skill] = Field(description="Relevant professional certifications")
# --- Agent 3: Competency Framework ---
class ProficiencyLevel(BaseModel):
level: str = Field(description="e.g. Junior, Mid, Senior, Lead")
description: str
years_experience: str
class Competency(BaseModel):
name: str
description: str
proficiency_levels: list[ProficiencyLevel]
assessment_methods: list[str] = Field(description="How to evaluate this competency")
class CompetencyFrameworkOutput(BaseModel):
"""Competency framework from the Framework Designer."""
role_level: str = Field(description="Target seniority for this framework")
competencies: list[Competency]
experience_requirements: str
education_requirements: str
# --- Agent 4: Job Description ---
class JobDescriptionOutput(BaseModel):
"""Final structured JD from the Composer."""
title: str
department: str
summary: str = Field(description="2-3 sentence role overview")
responsibilities: list[str]
required_qualifications: list[str]
preferred_qualifications: list[str]
competency_requirements: list[str] = Field(
description="Key competencies with expected proficiency level"
)
salary_range: str
benefits_highlights: list[str]
growth_path: str = Field(description="Career progression from this role")
dei_statement: str
work_arrangement: str = Field(description="Remote / hybrid / onsite details")
The Field(description=...) annotations do double duty — they document the schema for us and they tell the LLM what to put in each field. crewAI serializes these descriptions into the prompt automatically.
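You can see exactly what the LLM receives by dumping a model's JSON schema: each Field description lands under that property's "description" key. A quick standalone check (redefining a minimal Skill here so the snippet is self-contained):

```python
from pydantic import BaseModel, Field

# Minimal mirror of the Skill model from models.py
class Skill(BaseModel):
    name: str
    importance: str = Field(description="required / preferred / nice-to-have")
    context: str = Field(default="", description="Why this skill matters for the role")

schema = Skill.model_json_schema()

# Each Field(description=...) shows up under the property's "description" key
print(schema["properties"]["importance"]["description"])
# -> required / preferred / nice-to-have
```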
The Skill model with its importance field is the one I keep coming back to. A flat list of "requirements" is useless for hiring. Tagging each skill as required, preferred, or nice-to-have forces the agent to make real decisions — and it gives recruiters something they can actually filter on.
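Because importance is machine-readable, a screening tool can filter on it directly. A small sketch with hypothetical sample data (same Skill shape as models.py):

```python
from pydantic import BaseModel, Field

class Skill(BaseModel):
    name: str
    importance: str = Field(description="required / preferred / nice-to-have")
    context: str = ""

# Hypothetical agent output, for illustration only
skills = [
    Skill(name="Python", importance="required"),
    Skill(name="PyTorch", importance="required"),
    Skill(name="Kubernetes", importance="preferred"),
    Skill(name="Rust", importance="nice-to-have"),
]

# Recruiters screen on must-haves; candidates self-assess the rest
must_haves = [s.name for s in skills if s.importance == "required"]
print(must_haves)  # -> ['Python', 'PyTorch']
```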
Step 3 — The Baseline: One Prompt, One JD
Before building the crew, we need a control. What does a single LLM call produce when you ask for a job description?
Create 01_baseline_single_prompt.py:
"""
Baseline: generate a job description with a single LLM call.
No agents, no structured output — just one prompt to Nova Pro.
"""
import time
import json
import boto3
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "amazon.nova-pro-v1:0"
role_title = "Senior ML Engineer"
prompt = f"""Create a complete job description for a {role_title} position.
Include: role summary, responsibilities, required qualifications,
preferred qualifications, salary range, and benefits."""
start = time.time()
resp = bedrock.converse(
modelId=MODEL_ID,
messages=[{"role": "user", "content": [{"text": prompt}]}],
inferenceConfig={"maxTokens": 2048, "temperature": 0.7}
)
elapsed = time.time() - start
output_text = resp["output"]["message"]["content"][0]["text"]
usage = resp["usage"]
input_tokens = usage["inputTokens"]
output_tokens = usage["outputTokens"]
# Nova Pro pricing: $0.0008/1K input, $0.0032/1K output
cost = (input_tokens / 1000 * 0.0008) + (output_tokens / 1000 * 0.0032)
print("=" * 60)
print(f"BASELINE — Single Prompt JD: {role_title}")
print("=" * 60)
print(output_text)
print("\n" + "-" * 60)
print(f"Latency: {elapsed:.1f}s")
print(f"Input tokens: {input_tokens}")
print(f"Output tokens: {output_tokens}")
print(f"Est. cost: ${cost:.4f}")
Run it:
python 01_baseline_single_prompt.py
============================================================
BASELINE — Single Prompt JD: Senior ML Engineer
============================================================
### Job Description: Senior ML Engineer
#### Role Summary:
We are seeking a highly skilled and experienced Senior Machine Learning Engineer to join our dynamic team. The ideal candidate will have a strong background in machine learning, data science, and software engineering. The Senior ML Engineer will be responsible for developing, implementing, and maintaining machine learning models and algorithms to drive business solutions and improve product performance. This role requires a blend of technical expertise, leadership skills, and the ability to collaborate across cross-functional teams.
#### Responsibilities:
- **Model Development:** Design, develop, and deploy machine learning models and algorithms to solve complex business problems.
- **Data Analysis:** Perform exploratory data analysis to identify patterns, trends, and insights. Preprocess and clean data for model training.
- **Model Evaluation:** Evaluate the performance of machine learning models using appropriate metrics and techniques. Optimize models for accuracy, scalability, and efficiency.
- **Deployment:** Implement machine learning models into production environments, ensuring they are robust, scalable, and maintainable.
- **Collaboration:** Work closely with data scientists, software engineers, product managers, and other stakeholders to integrate machine learning solutions into existing systems.
- **Research:** Stay up-to-date with the latest advancements in machine learning and AI. Conduct research to identify new techniques and tools that can be applied to business problems.
- **Mentorship:** Provide guidance and mentorship to junior team members, fostering a culture of learning and innovation.
- **Documentation:** Create and maintain comprehensive documentation for machine learning models, algorithms, and processes.
#### Required Qualifications:
- **Education:** Master's or Ph.D. in Computer Science, Data Science, Statistics, or a related field.
- **Experience:** Minimum of 5 years of experience in machine learning engineering or a related role.
- **Technical Skills:**
- Proficiency in programming languages such as Python, R, or Java.
- Strong understanding of machine learning algorithms and techniques (e.g., supervised and unsupervised learning, deep learning).
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Solid understanding of data structures, algorithms, and software design principles.
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
- **Data Handling:** Strong skills in data manipulation, preprocessing, and feature engineering.
- **Communication:** Excellent verbal and written communication skills.
#### Preferred Qualifications:
- **Industry Experience:** Experience in a relevant industry (e.g., finance, healthcare, e-commerce).
- **Advanced Degrees:** Ph.D. in a related field.
- **Publications:** Published research papers in machine learning or related conferences/journals.
- **Certifications:** Certifications in machine learning or data science (e.g., AWS Certified Machine Learning – Specialty).
- **Leadership Experience:** Previous experience in a leadership or mentorship role.
#### Salary Range:
- **Base Salary:** $120,000 - $160,000 per year, depending on experience and qualifications.
- **Bonus:** Performance-based annual bonus.
- **Stock Options:** Eligibility for company stock options.
#### Benefits:
- **Health Insurance:** Comprehensive medical, dental, and vision insurance plans.
- **Retirement:** 401(k) with company match.
- **Paid Time Off:** Generous paid time off, including vacation, sick leave, and holidays.
- **Professional Development:** Opportunities for continued learning and professional growth, including conference attendance and training programs.
- **Wellness Programs:** Access to wellness programs and resources.
- **Flexible Work Arrangements:** Options for remote work and flexible hours.
- **Employee Discounts:** Discounts on company products and services.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
------------------------------------------------------------
Latency: 5.8s
Input tokens: 32
Output tokens: 828
Est. cost: $0.0027
The output is a wall of markdown. Technically it covers the basics — title, responsibilities, qualifications. But look at what's missing: no skill taxonomy, no competency levels, no market-backed salary data, no separation between must-haves and nice-to-haves. The salary range ($120K–$160K) is a guess with no market justification. The qualifications are a flat dump where everything looks equally important.
32 input tokens, 828 output tokens, $0.0027. Fast and cheap. And you get what you pay for.
Step 4 — Build the 4-Agent Crew
Four agents, each with a specific role, wired in sequence so each one builds on what came before.
Save this as 02_job_architecture_crew.py:
"""
4-agent job architecture crew.
Takes a job title and produces a structured job description
backed by market research, a skill taxonomy, and a competency framework.
Each agent builds on the previous agent's output.
"""
import sys
import time
import json
from crewai import Agent, Task, Crew, Process, LLM
from models import (
MarketResearchOutput,
SkillTaxonomyOutput,
CompetencyFrameworkOutput,
JobDescriptionOutput,
)
# --- LLM setup ---
llm = LLM(model="bedrock/amazon.nova-pro-v1:0", temperature=0.7)
One LLM instance is shared across all four agents. Each agent gets a role, a goal, and a backstory that shapes how it approaches its task:
market_researcher = Agent(
    role="Market Research Analyst",
    goal="Research market positioning, salary benchmarks, and industry demand for a given job title",
    backstory=(
        "You spent a decade at Mercer and Radford running compensation surveys "
        "and labor market analyses for Fortune 500 companies. You know how to "
        "benchmark roles across industries and geographies."
    ),
    llm=llm,
    verbose=True,
)

skills_analyst = Agent(
    role="Skills Taxonomy Analyst",
    goal="Map all required skills into a structured taxonomy with importance levels",
    backstory=(
        "Former head of skills architecture at LinkedIn, you built the skill "
        "ontology that powers their talent intelligence platform. You think in "
        "taxonomies — technical, soft, domain, tools, certifications — and always "
        "tag each skill with how critical it really is."
    ),
    llm=llm,
    verbose=True,
)

competency_designer = Agent(
    role="Competency Framework Designer",
    goal="Define proficiency levels and assessment criteria for each core competency",
    backstory=(
        "You designed competency models for Deloitte's human capital practice. "
        "Your frameworks map every competency from junior to principal level with "
        "concrete behavioral indicators and practical assessment methods."
    ),
    llm=llm,
    verbose=True,
)

jd_composer = Agent(
    role="Job Description Composer",
    goal="Synthesize all research into a polished, structured, bias-aware job description",
    backstory=(
        "You write job descriptions for a living — currently at a top HR-tech "
        "startup. You know what makes candidates click Apply: clear language, "
        "no jargon walls, honest requirements, and inclusive framing. You always "
        "separate must-haves from nice-to-haves because inflated requirements "
        "drive away qualified candidates."
    ),
    llm=llm,
    verbose=True,
)
The backstories matter more than you'd think. crewAI injects them into the system prompt for each agent, and they shape how the LLM approaches the task. A "compensation survey veteran" writes differently than a generic assistant — more specific numbers, more market awareness, less hand-waving.
The task definitions are where context connects everything:
def build_tasks(role_title: str) -> list[Task]:
    t1 = Task(
        description=(
            f"Research the market landscape for the '{role_title}' role. "
            f"Provide salary benchmarks (USD), demand level, common alternative titles, "
            f"typical team structure, industry context, and remote work trends. "
            f"Base your analysis on current market conditions."
        ),
        expected_output="Market research report with salary data, demand analysis, and industry positioning",
        agent=market_researcher,
        output_pydantic=MarketResearchOutput,
    )

    t2 = Task(
        description=(
            f"Using the market research provided, build a complete skill taxonomy for "
            f"the '{role_title}' role. Categorize skills into: technical, soft, domain, "
            f"tools/platforms, and certifications. Mark each as required, preferred, "
            f"or nice-to-have. Add brief context for why each skill matters."
        ),
        expected_output="Structured skill taxonomy with importance levels and context",
        agent=skills_analyst,
        output_pydantic=SkillTaxonomyOutput,
        context=[t1],
    )

    t3 = Task(
        description=(
            f"Using the market research and skill taxonomy, design a competency "
            f"framework for the '{role_title}' role. Define 4-6 core competencies, "
            f"each with proficiency levels from junior to lead/principal. Include "
            f"concrete assessment methods for each competency."
        ),
        expected_output="Competency framework with proficiency levels and assessment criteria",
        agent=competency_designer,
        output_pydantic=CompetencyFrameworkOutput,
        context=[t1, t2],
    )

    t4 = Task(
        description=(
            f"Synthesize all the research, skills, and competency data into a final "
            f"job description for the '{role_title}' role. The JD must be:\n"
            f"- Clear and jargon-free\n"
            f"- Bias-aware (avoid gendered language, unnecessary requirements)\n"
            f"- Structured with separate required vs preferred qualifications\n"
            f"- Include salary range from the market research\n"
            f"- Include a growth path and DEI commitment statement"
        ),
        expected_output="Complete, structured job description ready for posting",
        agent=jd_composer,
        output_pydantic=JobDescriptionOutput,
        context=[t1, t2, t3],
    )

    return [t1, t2, t3, t4]
crewAI serializes each task's Pydantic output and feeds it into the next agent's prompt. By task 4, the Composer sees all three prior outputs — market data, skills, and competency frameworks.
output_pydantic on each task tells crewAI to parse the LLM's response into the specified model. If the JSON doesn't match the schema, crewAI retries automatically. During my runs, it never needed a retry — Nova Pro handled the structured output on the first attempt every time.
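To see what that retry loop guards against, you can validate a payload by hand with the same models. A sketch using Pydantic directly (the malformed payload is made up for illustration):

```python
from pydantic import BaseModel, ValidationError

# Minimal mirror of the SalaryRange model from models.py
class SalaryRange(BaseModel):
    currency: str = "USD"
    min_annual: int
    max_annual: int
    median_annual: int

good = '{"min_annual": 120000, "max_annual": 220000, "median_annual": 170000}'
bad = '{"min_annual": "competitive", "max_annual": 220000}'  # wrong type, missing field

parsed = SalaryRange.model_validate_json(good)
print(parsed.currency, parsed.median_annual)  # -> USD 170000

try:
    SalaryRange.model_validate_json(bad)
except ValidationError as exc:
    # crewAI feeds errors like these back into a retry prompt
    print(f"{exc.error_count()} validation errors")  # -> 2 validation errors
```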
The crew wires together with Process.sequential:
def run_crew(role_title: str):
    tasks = build_tasks(role_title)
    crew = Crew(
        agents=[market_researcher, skills_analyst, competency_designer, jd_composer],
        tasks=tasks,
        process=Process.sequential,
        verbose=True,
    )

    print(f"\n{'='*60}")
    print(f"  Job Architecture Crew — {role_title}")
    print(f"{'='*60}\n")

    start = time.time()
    result = crew.kickoff()
    elapsed = time.time() - start

    # Print structured output
    print(f"\n{'='*60}")
    print("  FINAL STRUCTURED OUTPUT")
    print(f"{'='*60}\n")
    if result.pydantic:
        print(json.dumps(result.pydantic.model_dump(), indent=2))
    else:
        print(result.raw)

    # Metrics
    usage = result.token_usage
    input_tokens = usage.prompt_tokens if usage else 0
    output_tokens = usage.completion_tokens if usage else 0
    total_tokens = usage.total_tokens if usage else 0
    cost = (input_tokens / 1000 * 0.0008) + (output_tokens / 1000 * 0.0032)

    print(f"\n{'-'*60}")
    print(f"Latency: {elapsed:.1f}s")
    print(f"Input tokens: {input_tokens:,}")
    print(f"Output tokens: {output_tokens:,}")
    print(f"Total tokens: {total_tokens:,}")
    print(f"Est. cost: ${cost:.4f}")
    print(f"{'-'*60}")

    return result


if __name__ == "__main__":
    title = sys.argv[1] if len(sys.argv) > 1 else "Senior ML Engineer"
    run_crew(title)
Run it on the same role we used for the baseline:
python 02_job_architecture_crew.py "Senior ML Engineer"
Watch the verbose trace — you can see each agent pick up the prior agent's output and build on it. The prompt sizes grow visibly with each step as context accumulates. Here's the first agent kicking off:
============================================================
Job Architecture Crew — Senior ML Engineer
============================================================
╭───────────────────────── 🚀 Crew Execution Started ──────────────────────────╮
│ │
│ Crew Execution Started │
│ Name: │
│ crew │
│ ID: │
│ 1acae4e9-a5e5-4bd7-bb28-c6c7e71b5949 │
│ │
│ │
╰──────────────────────────────────────────────────────────────────────────────╯
╭────────────────────────────── 📋 Task Started ───────────────────────────────╮
│ │
│ Task Started │
│ Name: Research the market landscape for the 'Senior ML Engineer' role. │
│ Provide salary benchmarks (USD), demand level, common alternative titles, │
│ typical team structure, industry context, and remote work trends. Base │
│ your analysis on current market conditions. │
│ ID: 5799e7d0-c79b-43a0-b40b-208ae84e1df0 │
│ │
│ │
╰──────────────────────────────────────────────────────────────────────────────╯
╭────────────────────────────── 🤖 Agent Started ──────────────────────────────╮
│ │
│ Agent: Market Research Analyst │
│ │
│ Task: Research the market landscape for the 'Senior ML Engineer' role. │
│ Provide salary benchmarks (USD), demand level, common alternative titles, │
│ typical team structure, industry context, and remote work trends. Base │
│ your analysis on current market conditions. │
│ │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────── ✅ Agent Final Answer ────────────────────────────╮
│ │
│ Agent: Market Research Analyst │
│ │
│ Final Answer: │
│ { │
│ "job_title": "Senior ML Engineer", │
│ "alternative_titles": [ │
│ "Principal Machine Learning Engineer", │
│ "Lead Machine Learning Engineer", │
│ "Staff Machine Learning Engineer", │
│ "Machine Learning Scientist", │
│ "Applied Machine Learning Engineer" │
│ ], │
│ "salary_range": { │
│ "currency": "USD", │
│ "min_annual": 120000, │
│ "max_annual": 220000, │
│ "median_annual": 170000 │
│ }, │
│ "market_demand": "High. The demand for Senior ML Engineers is high │
│ across various industries due to the increasing importance of data-driven │
│ decision-making and automation. Companies are investing heavily in machine │
│ learning capabilities to gain a competitive edge, driving up the demand │
│ for skilled professionals in this field.", │
│ ... │
│ } │
│ │
╰──────────────────────────────────────────────────────────────────────────────╯
The Skills Taxonomy Analyst, Competency Framework Designer, and Job Description Composer follow the same pattern — each picking up the previous agent's structured JSON and building on it. After all four finish, the final structured output:
============================================================
FINAL STRUCTURED OUTPUT
============================================================
{
"title": "Senior ML Engineer",
"department": "Engineering",
"summary": "We are seeking a Senior Machine Learning Engineer to join our team. You will design, implement, and optimize machine learning models to drive data-driven decision-making and automation across our organization. This role requires a blend of technical expertise, problem-solving skills, and the ability to collaborate with cross-functional teams.",
"responsibilities": [
"Design, implement, and optimize machine learning models.",
"Perform statistical analysis to understand data distributions, evaluate models, and test hypotheses.",
"Clean, transform, and prepare data for machine learning models.",
"Assess the performance and generalizability of machine learning models.",
"Collaborate with data scientists, software engineers, product managers, and domain experts.",
"Explain complex machine learning concepts to non-technical stakeholders.",
"Guide junior engineers and influence project direction.",
"Stay updated with the latest machine learning technologies and methodologies."
],
"required_qualifications": [
"6-8 years of experience in machine learning engineering roles.",
"Master's or Ph.D. in Computer Science, Data Science, or a related field.",
"Proficiency in Machine Learning Algorithms.",
"Strong skills in Statistical Analysis.",
"Expertise in Data Preprocessing.",
"Experience with Model Evaluation and Validation.",
"Proficiency in Python, TensorFlow, PyTorch, and Scikit-learn.",
"Excellent Problem-Solving skills.",
"Strong Communication skills.",
"Effective Collaboration skills."
],
"preferred_qualifications": [
"Experience with Deep Learning.",
"Knowledge of Natural Language Processing.",
"Familiarity with Reinforcement Learning.",
"Leadership experience.",
"Adaptability to new technologies and methodologies.",
"Industry-Specific Knowledge.",
"Business Acumen.",
"Experience with Apache Spark.",
"Experience with AWS/GCP/Azure.",
"Certified Machine Learning Engineer (CMLEng).",
"AWS Certified Machine Learning – Specialty.",
"Google Professional Machine Learning Engineer."
],
"competency_requirements": [
"Expert in Machine Learning Algorithms.",
"Expert in Statistical Analysis.",
"Expert in Data Preprocessing.",
"Expert in Model Evaluation and Validation.",
"Expert Problem-Solver.",
"Expert Communicator."
],
"salary_range": "USD 120,000 - USD 220,000",
"benefits_highlights": [
"Comprehensive health, dental, and vision insurance.",
"Retirement savings plan with company match.",
"Paid time off and holidays.",
"Professional development opportunities.",
"Flexible work arrangement (remote/hybrid)."
],
"growth_path": "This role offers a clear career progression path. Successful candidates can advance to Lead Machine Learning Engineer, Principal Machine Learning Engineer, or Director of Machine Learning, depending on their performance and career aspirations.",
"dei_statement": "We are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.",
"work_arrangement": "This position offers a flexible work arrangement, including remote and hybrid options. Some onsite presence may be required for certain projects or team meetings."
}
------------------------------------------------------------
Latency: 22.3s
Input tokens: 32,648
Output tokens: 14,328
Total tokens: 46,976
Est. cost: $0.0720
------------------------------------------------------------
That's the full pipeline, end to end: four structured outputs chained into one job description.
Step 5 — Compare the Results
Same role, same LLM, wildly different output.
The baseline gave us a generic markdown JD — responsibilities, qualifications, a guessed salary range. Flat text, no structure, nothing a system could parse.
The crew produced structured JSON. Salary data came with a market-backed range ($120,000–$220,000, $170,000 median) instead of a guess. Skills were categorized — 7 technical, 5 soft, 2 domain, 6 tools/platforms, 3 certifications — each tagged as required, preferred, or nice-to-have. The competency framework included proficiency levels from junior to lead with concrete assessment methods.
Qualifications were split into required vs. preferred, so candidates can self-assess honestly. Even a DEI statement and growth path showed up — fields the baseline didn't attempt at all.
To confirm this generalizes, I ran the crew on a completely different role:
python 02_job_architecture_crew.py "HR Business Partner"
Different universe. The salary range shifted to $70,000–$130,000. Technical skills gave way to employment law, HRIS platforms, and conflict resolution. Competencies like "change management" and "stakeholder management" replaced "deep learning" and "model evaluation." Same pipeline, completely different output shaped by the first agent's market research.
The numbers
| Metric | Baseline | 4-Agent Crew |
|---|---|---|
| Latency | 5.8s | 22.3s |
| Input tokens | 32 | 32,648 |
| Output tokens | 828 | 14,328 |
| Total tokens | 860 | 46,976 |
| Est. cost | $0.0027 | $0.0720 |
The crew is 27x more expensive and 3.8x slower. For $0.07 and 22 seconds, you get a structured, market-informed job architecture instead of a generic text dump. In a real HR workflow — where a bad JD means months of wrong candidates — that tradeoff isn't even close.
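The cost column follows directly from Nova Pro's token pricing. A tiny helper reproduces both numbers in the table (pricing as quoted earlier in this tutorial; worth re-checking against the current Bedrock pricing page):

```python
def nova_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost at $0.0008 per 1K input and $0.0032 per 1K output tokens."""
    return input_tokens / 1000 * 0.0008 + output_tokens / 1000 * 0.0032

print(f"Baseline: ${nova_pro_cost(32, 828):.4f}")         # -> Baseline: $0.0027
print(f"Crew:     ${nova_pro_cost(32_648, 14_328):.4f}")  # -> Crew:     $0.0720
print(f"Ratio:    {nova_pro_cost(32_648, 14_328) / nova_pro_cost(32, 828):.0f}x")  # -> Ratio:    27x
```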
Conclusion
Four agents, one pipeline, $0.07 per role. Market data grounds the whole thing. From there, skills get tagged into a real taxonomy instead of a flat list, and proficiency levels mean something during interviews. The final JD is structured JSON a system can parse — not just prose for a human to skim.
The whole tutorial ran for under $0.15 on Nova Pro, including the baseline and both crew runs.
Where to take this next:
- Add a web search tool to the Market Research agent so salary data comes from live sources instead of the LLM's training data
- Wire the output into an ATS API (Greenhouse, Lever) to post JDs directly
- Batch-process an entire department — feed in 10 role titles, get back a consistent job architecture with aligned leveling
All the code is on GitHub.