Applicant Tracking Systems used to be boring. For most of the 2010s, an ATS was essentially a database with a careers page bolted on top: a place to dump resumes, push them through a pipeline of stages, and email rejections in bulk. The interesting work happened around it, not inside it. That has shifted in the last two years, and the shift is deeper than the marketing pages suggest.
I have spent a long time reading resumes, both as a hiring manager and as someone who reviews them for a living. The questions candidates ask me have changed. They are no longer asking what font to use or whether to include a photo. They are asking why a human never seems to see their application, what "AI matching" actually does to their resume, and whether the systems are legally allowed to reject them without explanation. Those are the right questions to be asking in 2026, and the answers are more nuanced than either the ATS vendors or the LinkedIn influencers tend to admit.
This piece is an attempt to map what is actually happening inside modern ATS platforms, where the technology is heading, and what that means for anyone applying to a job today.
From keyword filters to structured understanding
The oldest myth about ATS software is that it is a keyword filter. That was largely true a decade ago. Early systems performed shallow text extraction on a PDF or DOCX file, ran a Boolean match against a recruiter's saved search, and ranked candidates by how often the requested terms appeared. If you submitted a resume that said "Python" three times, you ranked above one that said it twice. The folklore around ATS optimization (cram the keywords, mirror the job description, strip the formatting) grew out of that era and has stuck around long after the underlying systems changed.
What is in production today at the larger vendors looks different. Workday, Greenhouse, iCIMS, SmartRecruiters, Lever, Ashby, and a wave of newer entrants like Eightfold and Paradox have moved their parsing and matching layers onto modern NLP and, increasingly, fine-tuned large language models. Instead of treating a resume as a bag of words, they extract a structured representation: roles, employers, dates, responsibilities, skills, education, certifications, and the relationships among them. Some vendors call this a "candidate graph" or "skills graph." The terminology is marketing, but the underlying shift is real. The system is no longer asking "does this document contain the word Kubernetes," it is asking "has this person operated Kubernetes in production, and for how long, and at what scale."
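The difference between the two questions can be sketched in a few lines. This is an illustrative toy, not any vendor's schema; the class and field names (`SkillEvidence`, `CandidateProfile`, `has_skill`) are hypothetical stand-ins for the far richer representations real parsers extract.

```python
# Toy sketch: a keyword filter checks for a token; a structured
# representation can answer questions about depth and context.
# All names here are hypothetical, not a real vendor schema.
from dataclasses import dataclass, field


@dataclass
class SkillEvidence:
    skill: str           # canonical skill name, e.g. "kubernetes"
    months: int          # inferred duration of hands-on use
    in_production: bool  # whether the work history supports production use


@dataclass
class CandidateProfile:
    roles: list[str] = field(default_factory=list)
    skills: list[SkillEvidence] = field(default_factory=list)

    def has_skill(self, name: str, min_months: int = 0) -> bool:
        """'Has this person operated X in production, and for how long' --
        not 'does the document contain the word X'."""
        return any(
            s.skill == name and s.months >= min_months and s.in_production
            for s in self.skills
        )


profile = CandidateProfile(
    roles=["Platform Engineer"],
    skills=[SkillEvidence("kubernetes", months=30, in_production=True)],
)

# The structured query passes only when duration and context check out.
assert profile.has_skill("kubernetes", min_months=24)
assert not profile.has_skill("kubernetes", min_months=48)
```

The point of the sketch is the query, not the storage: once the resume is a typed structure rather than a string, "mentions Kubernetes" and "ran Kubernetes in production for two years" become different questions with different answers.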
A September 2025 arXiv paper on fine-tuned LLMs for recruitment automation reported meaningful gains over classical parsers on entity extraction and job-resume matching, which lines up with what vendors have been quietly shipping. Springer published work in early 2026 on a Retrieval-Augmented Generation pipeline for resume refactoring that uses semantic similarity rather than lexical overlap. None of this is exotic research anymore. It is the new baseline.
The practical consequence for candidates is that the cheap tricks have stopped working, and in some cases have started actively hurting. White text stuffed into a resume to game keyword density is detected by most modern parsers, sometimes flagged, and occasionally treated as a negative signal. Skills sections padded with terms the candidate never demonstrates in their experience tend to get downweighted because the matcher is comparing claimed skills against the evidence in the work history. The systems are not infallible at this, but they are getting better at it, faster than the advice circulating online is updating.
The LLM layer, and what it is actually doing
Most large ATS vendors now sit in front of, or quietly call out to, a foundation model. Greenhouse rolled out generative-AI features for job description drafting and candidate summaries. Workday introduced its own "AI agents" framework for recruiting. iCIMS has been building agentic copilots into sourcing and screening. SmartRecruiters has talked openly about LLM-driven CRM and matching. Smaller players such as Ashby and Hirex lean even harder on AI matching as a core differentiator.
What these models actually do varies by vendor and by feature, but a few patterns are consistent.
The first is summarization. When a recruiter opens a candidate profile, what they see is increasingly a model-generated paragraph that synthesizes the resume against the job. This summary is what often determines whether the recruiter clicks into the full document. If your resume is structured well, the summary tends to read well. If it is a wall of vague bullet points, the summary tends to flatten you into a generic blob. The model is the first reader, and the recruiter is reading the model's notes.
The second is matching. Rather than scoring on keyword overlap, modern matchers embed both the job description and the resume into a vector space and compute semantic similarity, often with additional features layered on top: years of experience, seniority inference, location, work authorization signals, and so on. Vendors increasingly publish a "match score" or "fit score" alongside the candidate. Internally, that score is usually a weighted combination of several models, not a single number from a single classifier.
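The blending pattern described above can be sketched in a few lines of plain Python. The vectors, feature names, and weights here are invented for illustration; production systems use learned embeddings with hundreds or thousands of dimensions and calibrated weights, but the shape of the computation is the same.

```python
# Toy sketch of an ATS match score: semantic similarity between a job
# embedding and a resume embedding, blended with extra signals.
# Vectors, feature names, and weights are made up for illustration.
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def match_score(job_vec, resume_vec, features, weights):
    """Weighted blend of semantic similarity and extra signals.
    `features` maps signal names (e.g. years fit, location fit) to
    values in [0, 1]; `weights` holds their relative importance."""
    semantic = cosine(job_vec, resume_vec)
    extra = sum(weights[name] * value for name, value in features.items())
    return weights["semantic"] * semantic + extra


score = match_score(
    job_vec=[0.2, 0.7, 0.1],       # pretend embedding of the job posting
    resume_vec=[0.25, 0.6, 0.2],   # pretend embedding of the resume
    features={"years_fit": 0.8, "location_fit": 1.0},
    weights={"semantic": 0.6, "years_fit": 0.25, "location_fit": 0.15},
)
assert 0.0 <= score <= 1.0
```

This is why the single "fit score" a recruiter sees can move even when the resume text has not changed: any of the blended signals, not just the semantic match, can shift it.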
The third, still emerging, is agentic action. Some platforms now let the system take low-stakes actions on its own: scheduling screens, sending follow-ups, asking clarifying questions over chat, sometimes conducting an initial conversational screen. Paradox built a business around this. Workday and others are catching up. The line between "the ATS" and "the recruiter's assistant" is blurring in a way that matters, because the assistant has opinions, and those opinions feed back into the ranking.
The fourth is hallucination, which the vendors talk about less. LLMs invent details. A summary might assert that a candidate has five years of experience with a tool they used for one project, or downgrade someone whose resume the model misread. This is a known problem, and the better vendors are putting retrieval and citation layers in place to ground summaries in the actual document. The weaker implementations are not. If you have ever wondered why a recruiter's reaction to your resume seemed to reference something you never wrote, this is part of the answer.
Skills-first hiring and the slow death of the job title
A quieter shift is happening underneath the AI layer: the canonical unit of hiring is moving from the job title to the skill. Workday's Skills Cloud, Eightfold's talent intelligence platform, and several recent ATS launches all share the same premise: titles are noisy, inconsistent across companies, and a poor predictor of capability. Skills, when modeled as a graph with relationships and proficiency levels, are more portable.
In practice this means that when you apply to a role tagged with, say, "distributed systems, Go, observability, on-call experience," the ATS is no longer scanning for those literal phrases. It is checking whether the inferred skill graph from your resume overlaps with the inferred skill graph of the role, weighted by recency and depth. A candidate who never wrote the word "observability" but spent three years owning Datadog dashboards and PagerDuty rotations may match anyway. A candidate who lists "observability" in a skills section without any supporting experience may not.
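The recency-and-depth weighting described above can be made concrete with a small sketch. The decay constants, caps, and skill names below are illustrative assumptions, not any vendor's actual model; real systems also score related skills rather than requiring exact graph-node matches.

```python
# Hypothetical sketch of skill-graph overlap weighted by recency and
# depth, as opposed to literal phrase matching. The half-life, the
# 3-year depth cap, and the skill names are illustrative assumptions.
import math


def skill_weight(months_experience: int, years_since_used: float,
                 half_life: float = 3.0) -> float:
    """Depth grows with duration (capped at 3 years of use);
    recency decays exponentially with a configurable half-life."""
    depth = min(months_experience / 36.0, 1.0)
    recency = 0.5 ** (years_since_used / half_life)
    return depth * recency


def role_match(required_skills: list[str],
               candidate_skills: dict[str, tuple[int, float]]) -> float:
    """Average weight over the role's required skills. A required skill
    the candidate lacks contributes zero, shrinking the score."""
    total = sum(
        skill_weight(*candidate_skills[s]) if s in candidate_skills else 0.0
        for s in required_skills
    )
    return total / len(required_skills)


# The candidate never wrote the word "observability", but the parser
# inferred it from three years of recent dashboard/on-call ownership:
# (months of experience, years since last used).
candidate = {"observability": (36, 0.0), "go": (18, 1.0)}
score = role_match(["observability", "go", "distributed-systems"], candidate)
assert 0.0 <= score <= 1.0
```

Note how the missing "distributed-systems" skill drags the average down: this is the mechanical reason wishlist job descriptions shrink the candidate pool, a point picked up below.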
This has two consequences. For candidates, the value of writing concrete, evidence-backed bullet points has gone up, and the value of buzzword padding has gone down. For employers, the temptation to write job descriptions as wishlists has gotten more expensive, because the matcher will dutifully filter against every requirement, including the unrealistic ones, and shrink the candidate pool further than intended.
Regulation is finally arriving
For most of ATS history, the legal regime around automated hiring was thin. That is changing quickly, and the changes are reshaping how vendors design their products.
New York City's Local Law 144, in force since mid-2023, requires employers and employment agencies using an Automated Employment Decision Tool to commission an independent bias audit within the prior year, publish a summary of the results, and notify candidates that an AEDT is being used. The New York State Comptroller released a critical audit in late 2025 finding uneven enforcement, and DLA Piper published a January 2026 analysis warning of increased regulatory risk for employers who treat the law as a paperwork exercise. Whatever one thinks of the law's specifics, it has set a template that other US jurisdictions are copying, with bills proposed or passed in California, Illinois, Colorado, and at the federal level via EEOC guidance on algorithmic decision-making.
In Europe, the AI Act classifies AI systems used for recruitment, candidate evaluation, and employment decisions as high-risk under Annex III. The obligations that flow from that classification, including risk management, data governance, human oversight, transparency, and post-market monitoring, are non-trivial, and they apply to vendors and to deploying employers. Compliance work for HR tech vendors operating in the EU is a real cost center now, not a future one.
This matters to candidates more than it might appear. Regulation pushes vendors toward explainability, which means more candidates will start to see, in some form, why they were filtered or downranked. It also pushes vendors away from inference of protected characteristics, which had crept into some products through proxies. The visible behavior of ATS systems over the next two years will be shaped less by what the technology can do and more by what regulators will let it do.
What this means if you are the one applying
If the technology has changed, the advice has to change too. A few things hold up under the current generation of systems.
Resumes still need to parse cleanly. Tables, multi-column layouts, text inside images, and unusual fonts continue to break parsers more often than the vendors will admit, especially when a resume passes through an intermediate service like a careers page that re-extracts the file. A single-column layout in a standard PDF, with selectable text, remains the safest substrate. The visual minimalism that is fashionable on resume Twitter tends to align well with what parsers expect.
Evidence beats vocabulary. Because matchers now compare claimed skills against the work history, the bullet points that survive scoring well are the ones that describe scope, scale, and outcome. "Owned the migration of a 40-service monolith to Kubernetes, cutting deploy time from 45 minutes to 6" carries more signal than "experienced with Kubernetes, CI/CD, and cloud platforms."
Tailoring still matters, but the goal has shifted. The point is no longer to mirror keywords from the job description. It is to make sure the parts of your experience that are most relevant to the role are surfaced near the top, written concretely, and easy for a model to map to the role's skill graph. A generic resume sent to thirty postings will be summarized generically thirty times.
This is where targeted tooling has become genuinely useful, as long as it is honest about what it is doing. I have been using the Ajusta Chrome extension on top of my normal review workflow, mostly because it does the one thing I want a resume tool to do and refuses to do the things I do not. It reads the job posting in the tab you are already on, looks at the resume you upload, and produces a tailored version with one click, without inventing experience that is not there. That last part is the part I care about. Most AI resume tools will happily fabricate a credential or stretch a year of work into three to push the match score up; a hallucinated resume might rank well against a parser, but it falls apart the moment a human reads it or a reference call happens. Ajusta's pitch, which holds up in practice, is zero-hallucination tailoring, and the candidates I have seen use it tend to get more interviews because the resume that lands in the ATS is both honest and aligned with the role. If you want the longer version of how it works, the writeup at ajusta.ai is reasonable.
Whatever tool you use, the underlying principle is the same: shape the truth, do not invent it. The systems on the other end are getting better at catching the difference, and the humans behind them are getting more skeptical.
Where this is heading
The direction of travel is fairly clear, even if the timelines are not. ATS platforms are converging toward something that looks less like a database and more like a hiring agent: a system that ingests structured representations of candidates and roles, reasons over them with a foundation model, takes a growing share of routine actions on its own, and exposes its decisions to auditors and regulators in ways the previous generation never had to. Skills graphs will continue to displace job titles as the primary unit of matching. Conversational interfaces will absorb a larger share of early-stage screening. Bias audits and explainability requirements will continue to expand, and the vendors who treat compliance as a feature rather than a tax will pull ahead.
The part that is genuinely uncertain is what happens to the candidate experience. Optimistically, better matching means fewer people sending hundreds of applications into the void, and more recruiters reaching out to candidates who actually fit. Pessimistically, automated screening pushes more decisions further upstream, with less recourse, and the people who learn to work the systems pull away from the people who do not. Both futures are visible in the data right now. Which one wins depends less on the technology and more on the rules we end up writing around it.
For anyone applying to a job today, the practical takeaway is narrower than the hype suggests and more demanding than the old advice. Write a resume that a reasonable human would believe, structure it so a parser can read it, tailor it so the relevant evidence is near the surface, and use tools that help you do those things without lying on your behalf. The ATS has changed. The fundamentals of being a credible candidate have not.