Recruiting looks simple on paper. Resume comes in. Someone reviews it. If it is a fit, an interview gets scheduled.
In practice, it is a lot of coordination. Resumes in inboxes, updates in trackers, calendar back and forth, and follow-ups that slip when volume spikes.
During an agent buildathon, I set out to build an agent that owns a clean slice of the recruitment process end-to-end.
What I wanted the agent to own
A recruiting agent that can:
- Parse an incoming resume
- Match it against a job description
- Produce a fit score with a short rationale
- If the candidate clears a threshold, schedule an interview
- Update the tracking sheet
- Send the email and calendar invite
The point was not to build a chatbot. The point was to reduce context switching across the inbox, sheets, and calendar.
The constraints I worked within
I built this entirely using DronaHQ’s agentic platform. No external agent frameworks. No custom orchestration stack.
That allowed me to be precise about three building blocks.
- Trigger. How does the agent start?
- Tools. What systems can it read and write to?
- Success. What does ‘done’ look like?
Step 1. Define the trigger and the entry data
I started by deciding where resumes should land.
In most teams, resumes arrive in one of three ways.
- A shared inbox such as jobs@company.com
- A careers form submission
- A recruiter forwarding resumes from their own inbox
I used the shared inbox pattern because it maps to how a lot of lean teams actually operate.
The trigger is simple.
When a new email lands in jobs@company.com with a resume attachment, the agent starts.
At that moment, the agent needs a minimum payload to do its job without chasing people for context.
- The resume file
- The role the candidate applied for, if available in the subject line or form metadata
- Candidate email and name from the incoming message
- Any recruiter notes if present
If the incoming email does not specify a role, the agent can still proceed, but it should switch to a safer mode.
It can either ask a clarifying question internally, or run a multi-JD match and suggest likely fits instead of assuming.
This is where recruiting automations usually break.
If the trigger payload is thin, the agent wastes time asking for basics. If it guesses, the risk goes up quickly.
So I treated trigger design as part of the product, not plumbing.
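The minimum payload above can be sketched as a small data structure. This is an illustrative sketch, not the platform's actual trigger API: the role-tag convention (a role name in square brackets in the subject line) and all names here are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Optional
import re

@dataclass
class TriggerPayload:
    resume_file: str
    candidate_email: str
    candidate_name: str
    role: Optional[str] = None   # None -> agent switches to the safer multi-JD mode
    recruiter_notes: str = ""

def build_payload(subject: str, sender: str, attachment: str, notes: str = "") -> TriggerPayload:
    """Assemble the minimum payload from an incoming email.

    Looks for a role tag like "[Backend Engineer]" in the subject line.
    If none is found, `role` stays None and the agent should run a
    multi-JD match instead of assuming a role.
    """
    match = re.search(r"\[(.+?)\]", subject)
    role = match.group(1) if match else None
    # Crude name guess from the address; a form submission would carry the real name.
    name = sender.split("@")[0].replace(".", " ").title()
    return TriggerPayload(
        resume_file=attachment,
        candidate_email=sender,
        candidate_name=name,
        role=role,
        recruiter_notes=notes,
    )
```

The point of making the payload explicit is that "thin trigger" becomes a checkable condition (`role is None`) rather than something the agent discovers halfway through.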
Step 2. Give the agent a job description and instructions it can rely on
A recruiting agent is only as consistent as the reference it uses. So I anchored it in two things.
- The job description itself. I stored the JD as a knowledge base item so the agent always evaluates against the same source of truth.
- The agent instructions. This is where I defined how the agent should behave, what it should extract, and what it should never do.
The instruction set included:
- Role identity. You are an HR recruiting assistant focused on first-pass screening and coordination.
- Scoring rubric. What counts as strong evidence for each requirement.
- Output format. A fit score plus a short rationale with evidence.
- Safety rules. Do not invent experience. Do not overstate certainty. Flag missing data instead of guessing.
- Scheduling rules. Only schedule if the score crosses a threshold and required fields are present.
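The instruction set above can be stored as plain structured data so it is versioned alongside the JD. This is a hypothetical shape, not DronaHQ's actual configuration format; the threshold and field names are example values.

```python
# Hypothetical instruction set for the recruiting agent, kept as a plain
# dict so it can be reviewed and versioned like any other config.
AGENT_INSTRUCTIONS = {
    "role_identity": (
        "You are an HR recruiting assistant focused on first-pass "
        "screening and coordination."
    ),
    "scoring_rubric": {
        "strong_evidence": "Named projects or roles that exercise the requirement",
        "weak_evidence": "Keyword mention with no supporting context",
    },
    "output_format": ["fit_score", "rationale", "evidence"],
    "safety_rules": [
        "Do not invent experience.",
        "Do not overstate certainty.",
        "Flag missing data instead of guessing.",
    ],
    "scheduling_rules": {
        "min_score": 70,  # example threshold, tune per role
        "required_fields": ["candidate_email", "role"],
    },
}
```

Keeping the rules in one reviewable object is what makes "stable instruction set" more than a phrase: a change to the rubric is a diff, not a vibe shift.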
The JD alone is not enough. It describes the role. The instructions describe the evaluation method.
Without that method, you get subjective scoring, inconsistent outputs, and a system that feels unreliable in week two.
With a stable JD and a stable instruction set, the agent behaves predictably even when resumes vary widely.
This recruitment agent is now available as a ready-to-use template.
Step 3. Parse the resume into usable structure
The agent uses a file parser tool to read the resume.
The practical requirement here is not to extract every detail.
It is to extract enough signals to evaluate fit against the JD.
For example:
- Core skills and technologies
- Relevant experience and seniority
- Domain exposure where it matters
- Evidence that maps to the JD requirements
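A minimal sketch of that "enough signals" idea, assuming the parser has already produced raw text. Real builds would use the platform's file parser and the JD from the knowledge base; the keyword approach and skill list here are stand-ins for illustration.

```python
import re

# Stand-in for requirements pulled from the knowledge-base JD.
JD_SKILLS = {"python", "sql", "airflow", "aws"}

def extract_signals(resume_text: str) -> dict:
    """Pull just enough structure from raw resume text to score fit.

    Deliberately does NOT try to extract every detail, only the signals
    the scoring step needs: matched skills and stated seniority.
    """
    words = set(re.findall(r"[a-z+#.]+", resume_text.lower()))
    skills = sorted(JD_SKILLS & words)
    years = re.search(r"(\d+)\+?\s*years", resume_text.lower())
    return {
        "skills": skills,
        "years_experience": int(years.group(1)) if years else None,
        # In practice you would also keep the sentences that mention each
        # skill, so the rationale can cite evidence rather than keywords.
    }
```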
Step 4. Match and score with a rubric mindset
The scoring step is where trust is won or lost.
I kept it structured.
- Fit score
- Short reasoning
- Key skills detected
- Any red flags or missing requirements
Even when the output is correct, teams want to know why.
So I treated the rationale as part of the product, not an optional extra.
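One way to make "rubric mindset" concrete: the score and the rationale come out of the same computation, so the why is never bolted on afterwards. This is a simplified sketch; the real agent scores against the full JD, not just a skill set.

```python
def score_fit(signals: dict, jd_requirements: set, min_years: int = 3) -> dict:
    """Rubric-style scoring: a number plus the evidence behind it."""
    found = set(signals["skills"])
    matched = sorted(jd_requirements & found)
    missing = sorted(jd_requirements - found)
    # Score is derived from matches, so the rationale can cite them directly.
    fit_score = 100 * len(matched) // max(len(jd_requirements), 1)

    red_flags = []
    years = signals.get("years_experience")
    if years is None:
        red_flags.append("Years of experience not stated")   # flag, don't guess
    elif years < min_years:
        red_flags.append(f"Only {years} years vs {min_years} required")
    if missing:
        red_flags.append("Missing JD skills: " + ", ".join(missing))

    return {
        "fit_score": fit_score,
        "rationale": f"Matched {len(matched)}/{len(jd_requirements)} JD skills",
        "key_skills": matched,
        "red_flags": red_flags,
    }
```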
Step 5. Make the agent update the tracker
Once the agent has a score, it writes the candidate record into a tracking sheet.
This includes:
- Candidate name
- Score
- Skills summary
- Status
- Interview time when scheduled
This sounds small, but it removes a constant source of drift in hiring operations.
When data is not updated reliably, the funnel becomes hard to manage.
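The tracker write is simple but worth pinning down, because drift usually comes from inconsistent columns. A CSV stand-in for the sheet integration, with the column names from the list above (the actual build writes to a tracking sheet, not a local file):

```python
import csv
import os

TRACKER_COLUMNS = ["candidate_name", "score", "skills_summary", "status", "interview_time"]

def append_to_tracker(path: str, candidate: dict) -> None:
    """Append one candidate row; write a header if the file is new.

    A local-CSV stand-in for the sheet integration. Fixing the column
    set in one place is what keeps the funnel data consistent.
    """
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=TRACKER_COLUMNS)
        if new_file:
            writer.writeheader()
        # Missing fields become empty cells rather than shifting columns.
        writer.writerow({col: candidate.get(col, "") for col in TRACKER_COLUMNS})
```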
Step 6. Schedule the interview and send the email
If the score clears the threshold, the agent:
- Books an interview slot using calendar integration
- Sends an email to the candidate using a templated format
- Sends the calendar invite
This is the most sensitive step, because a wrong schedule is worse than no schedule.
In the current version, the cleanest approach is to keep a human checkpoint for auto scheduling until the team is confident.
You can still save time by having the agent propose slots and draft the email, then send only after approval.
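That gating logic can be sketched as a single decision function. The threshold, field names, and approval flag are example values; the point is that "propose and wait for approval" is the default path, and direct scheduling has to be opted into.

```python
def plan_next_action(score: int, candidate: dict, threshold: int = 70,
                     require_approval: bool = True) -> dict:
    """Decide what the agent does after scoring.

    Below threshold or missing required fields: hold (tracker update only).
    Above threshold: either schedule directly, or (safer default) propose
    slots and a draft email, then wait for a human to approve.
    """
    required = ("candidate_email", "role")
    missing = [f for f in required if not candidate.get(f)]
    if score < threshold or missing:
        return {"action": "hold", "reason": missing or "below threshold"}
    if require_approval:
        return {"action": "propose", "draft_email": True, "propose_slots": True}
    return {"action": "schedule"}
```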
What worked well
The most useful outcome was not the score.
It was the coordination layer.
When the agent could read the resume, match it to the JD, update the tracker, and schedule the interview, it removed the tedious handoffs that usually slow recruiting down.
What I would improve next
This build also made the next obvious step clear.
One resume rarely maps cleanly to one JD.
So the expanded version we are exploring is:
- One incoming resume
- Evaluated against a library of JDs
- Best fit role suggested with reasons
- Routed to the right recruiter or hiring manager
That is useful for high volume hiring, internal mobility, and reducing misrouting early in the funnel.
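The multi-JD expansion is mostly a loop over the single-JD scorer. A sketch, assuming each JD in the library is represented by a required-skill set (a simplification of a real JD):

```python
def best_fit(signals: dict, jd_library: dict) -> list:
    """Score one resume against every JD and rank the roles.

    jd_library maps role name -> required skill set (assumed shape).
    Returns (role, score) pairs, best fit first, so the agent can
    suggest likely roles with reasons instead of assuming one.
    """
    skills = set(signals["skills"])
    ranked = [
        (role, 100 * len(reqs & skills) // max(len(reqs), 1))
        for role, reqs in jd_library.items()
    ]
    return sorted(ranked, key=lambda rs: rs[1], reverse=True)
```

Routing then becomes a lookup from the top-ranked role to its recruiter, with a confidence threshold below which the agent asks instead of routing.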
Beyond that, there are practical additions that matter in real hiring.
- Confidence thresholds and mandatory review points
- Clear logging of what the agent did and why
- Handling missing information without guessing
- Keeping candidate communication consistent and respectful
Closing thought
The biggest shift for me was this. Building an agent is not mainly about prompts. It is about ownership.
What does the agent fully own from start to finish, and what systems does it need access to in order to finish the job?
Once you treat it like that, recruiting becomes a very natural place to apply agents.
Not because hiring should be automated. Because coordination should be.
Want to build a custom AI agent as easily as I did? Check out the DronaHQ Agentic Platform.