DEV Community

Deeya Jain
How to Audit Your Own Job for AI Exposure (Before Someone Else Does It For You)

Anthropic published a study in March 2026 that measured actual AI usage data against 800 occupations. Programmers topped the list at 75% task coverage.
If you work in tech, this is worth understanding concretely - not as a news story, but as a framework you can apply to your own role.
This post breaks down the methodology, what it actually means for developers and tech workers, and gives you a practical way to assess your own exposure.

What the Anthropic study actually measured (and why it's different)

Most AI-and-jobs studies measure theoretical capability: they ask "could an AI do this task?" and aggregate by occupation. The problem is that theoretical capability is a poor proxy for actual displacement. AI could theoretically do many things that nobody actually uses it for.
Anthropic's study measured observed exposure — a composite of three things:

Theoretical capability: Could an LLM complete this task at ≥2x human speed?
Actual usage: Is this task appearing in Claude's real conversation data in professional contexts?
Automation depth: Is AI completing the task (automation) or assisting with it (augmentation)?

Tasks that scored high on all three, and especially on #3, drove the "observed exposure" score for each occupation.
The data source was millions of real Claude conversations matched against O*NET (the US government's occupational task database covering ~800 job types).
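The composite can be pictured as a simple scoring function. This is an illustrative sketch only, not Anthropic's actual methodology: the field names, weights, and thresholds below are invented to show how three signals per task might roll up into one occupation-level score.

```python
from dataclasses import dataclass

@dataclass
class TaskSignal:
    """Signals for one occupational task (fields are illustrative, not Anthropic's schema)."""
    capable: bool             # could an LLM do this at >=2x human speed?
    usage_share: float        # share of real conversations touching this task, 0-1
    automation_ratio: float   # 0 = pure assistance (augmentation), 1 = fully automated

def observed_exposure(tasks: list[TaskSignal]) -> float:
    """Toy composite over an occupation's tasks, weighting automation depth most heavily."""
    if not tasks:
        return 0.0
    scores = [
        0.25 * t.capable + 0.25 * t.usage_share + 0.5 * t.automation_ratio
        for t in tasks
    ]
    return sum(scores) / len(scores)
```

The point of the sketch is the shape, not the numbers: capability alone contributes little; a task only scores near the top when it is capable, actually used, and being automated rather than merely assisted.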
Full breakdown at: Aadhunik AI's analysis of the Anthropic labor market study

The occupations with the highest observed exposure

Two things worth noting here:

  1. Programmers are #1. Not because programming is easy - because the task composition of a programming job (writing code, debugging, reviewing PRs, documenting, writing tests) maps almost entirely onto what LLMs are actively being used for.
  2. High earners are most exposed. Workers in the most-exposed occupations earn on average 47% more than those in the least-exposed occupations. The assumption that AI threatens low-wage work first is not supported by this data.

The three-property test: apply it to your own role

The high-exposure occupations share three characteristics. Use this as a self-audit:
Property 1: Text / structured data output
→ Is the primary deliverable of your work text, code, or structured data?
→ If yes: high LLM applicability

Property 2: Screen-based, already digitised
→ Does your work happen entirely within digital tools?
→ If yes: no physical-to-digital translation barrier for AI

Property 3: Repetitive, rule-based tasks exist in your workflow
→ What proportion of your daily tasks follow predictable patterns?
→ Templates, standard reports, routine queries, boilerplate code?
→ If >30%: meaningful automation surface
If all three apply, your task exposure is high. That doesn't mean your job exposure is high - and that distinction is the important one.
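The three-property test above can be written down as a tiny function. The thresholds and verdict labels are my own rough choices for illustration, not anything from the study:

```python
def task_exposure_flag(
    text_output: bool,        # Property 1: primary deliverable is text/code/structured data
    fully_digital: bool,      # Property 2: work happens entirely within digital tools
    routine_fraction: float,  # Property 3: share of daily tasks that follow predictable patterns
) -> str:
    """Rough self-audit verdict from the three properties (illustrative thresholds)."""
    if text_output and fully_digital and routine_fraction > 0.30:
        return "high task exposure"
    if text_output and fully_digital:
        return "moderate task exposure"
    return "lower task exposure"
```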

Task exposure vs. job exposure: why the difference matters

Here's the thing most coverage of this study misses: observed exposure measures tasks, not jobs.

A programmer with 75% task coverage doesn't face 75% job elimination risk. They face a role that is changing shape — where the proportion of their value that comes from routine tasks (boilerplate, first drafts, standard debugging) is declining, and the proportion that needs to come from everything else is increasing.
Think of it as a surface area calculation:
Your role's surface area = {routine tasks} + {judgment tasks} + {relational tasks}

AI exposure = the portion of {routine tasks} that AI can handle

Your differentiated value = {judgment tasks} + {relational tasks} + how well you
direct AI on {routine tasks}
The practical implication: the risk isn't that you get replaced. The risk is that one person with strong AI skills can now cover the surface area that previously required three people — and hiring managers know this.
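The surface-area framing can be made concrete with a toy calculation. The hours and the 75% coverage figure below are hypothetical, chosen only to show how task exposure and differentiated value come out of the same role:

```python
# Hypothetical weekly task hours for one developer (numbers are made up)
role = {"routine": 20.0, "judgment": 12.0, "relational": 8.0}
ai_handles_routine = 0.75  # assume AI covers 75% of routine task hours

total = sum(role.values())
ai_exposure = role["routine"] * ai_handles_routine / total
differentiated = (role["judgment"] + role["relational"]) / total

print(f"AI exposure:          {ai_exposure:.0%} of the role's surface area")
print(f"Differentiated value: {differentiated:.0%} (judgment + relational work)")
```

Even with AI covering most routine hours, half of this hypothetical role's surface area remains judgment and relational work, which is exactly why task exposure overstates job exposure.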

What this looks like in practice for developers specifically

Developers are the #1 exposed occupation, so it's worth being specific.
High-exposure tasks in a typical dev role:

  • Writing boilerplate code and standard implementations
  • First-pass debugging of common error patterns
  • Writing unit tests for known logic
  • Documenting functions and modules
  • Code review of straightforward PRs
  • Drafting technical specs from requirements

Lower-exposure tasks (where human judgment remains the rate limiter):

  • Architecture decisions under ambiguity
  • Debugging novel, cross-system failures
  • Translating vague stakeholder requirements into technical specs
  • Performance tuning in production under constraints
  • Security decisions with real tradeoffs
  • Building and maintaining trust with non-technical stakeholders
  • Leading through technical disagreement

If you look at a junior developer's work allocation, it skews heavily toward the first list. This is why entry-level job postings in software are declining — not because junior developers aren't needed, but because AI has absorbed enough of the task load that a mid-senior engineer can now cover what used to require two people.

For senior and staff-level engineers, the shift is different: the expectation of what you own is expanding, not shrinking. You're expected to do more with AI, not to be protected from it.

A practical self-audit you can run in 20 minutes

Go through your last two weeks of work. List every task you completed. Then classify each one:
```markdown
## Task Audit Template

### Task list (last 2 weeks)

- [ ] Task 1: ___________________
- [ ] Task 2: ___________________
- ...

### Classification

For each task, answer:

1. Could an LLM do this with a good prompt? (Y/N)
2. Am I already using AI for this? (Y/N/Partially)
3. If AI did this, would anyone notice a quality difference? (Y/N)

### Score

- % of tasks where the answer to Q1 is Y = your theoretical exposure
- % of tasks where the answer to Q3 is N = your automation risk surface
- The gap between Q1 and Q2 = your personal productivity opportunity
```

The goal isn't to find out if you're at risk. It's to understand your task composition clearly enough to make intentional decisions about which skills to develop.
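If you capture the audit as data, the scoring is a few lines of code. The tasks and answers below are hypothetical, and Q2 is simplified to a boolean (the template allows "Partially"):

```python
# Score a completed task audit. Each entry mirrors the template's three questions.
audit = [
    # (task, Q1: LLM could do it?, Q2: already using AI?, Q3: quality gap noticed?)
    ("write unit tests for parser", True,  True,  False),
    ("debug cross-service timeout", False, False, True),
    ("draft API documentation",     True,  False, False),
    ("stakeholder scoping call",    False, False, True),
]

n = len(audit)
theoretical_exposure = sum(q1 for _, q1, _, _ in audit) / n          # Q1 = Y
automation_risk      = sum(not q3 for _, _, _, q3 in audit) / n      # Q3 = N
opportunity_gap      = sum(q1 and not q2 for _, q1, q2, _ in audit) / n  # Q1 = Y, Q2 = N

print(f"Theoretical exposure:     {theoretical_exposure:.0%}")
print(f"Automation risk surface:  {automation_risk:.0%}")
print(f"Productivity opportunity: {opportunity_gap:.0%}")
```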

What "quiet compression" means for hiring and what to do about it

The Anthropic research flagged something specifically worth paying attention to if you're earlier in your career: displacement is showing up in hiring data before unemployment data.

The mechanism: teams don't immediately shrink when AI tools improve. They stop replacing people who leave. Entry-level roles - the ones that used to exist as training grounds - get quietly deprecated. The same volume of work gets done by fewer people using better tools.
If you're a junior developer or recently graduated, the risk isn't that you'll be fired. It's that the on-ramp structure that previous generations used to build experience is narrower. The jobs that were the learning environment are fewer.

The response to this is not to avoid AI tools. It's the opposite: build genuine fluency with the tools, because fluency with AI is increasingly what separates the candidate who gets the narrower number of junior spots from the candidate who doesn't.

Three concrete things worth doing with this information

1. Audit your task mix and start shifting it intentionally.

If 60% of your current work is high-exposure routine tasks, spend the next quarter pushing into the judgment and relational work. Volunteer for the ambiguous project, not the defined one.

2. Get specific about your AI fluency.

"I use GitHub Copilot" is not differentiated. "I can architect a multi-step agent workflow, evaluate output quality across models, and integrate AI tooling into a production codebase" is. The latter is what compounds in value.

3. Pay attention to where your team is shrinking vs. growing.

If the data team that was ten people is now six, and the backfill isn't happening, that's a signal worth reading — not as a reason to leave, but as information about the direction of travel.

Further reading

The full occupational data, methodology breakdown, and the "quiet compression" analysis: Aadhunik AI — The Occupations Most at Risk from AI Right Now

The primary source: Anthropic, "Labor Market Impacts of AI: A New Measure and Early Evidence," March 2026, anthropic.com/research/labor-market-impacts

Discussion

Curious where others are landing on this. A few specific questions:

For senior/staff devs: has your expected scope changed meaningfully in the last 12 months because of AI tooling?
For anyone hiring: are you actually posting fewer entry-level roles, or does the data not match your experience?
Has anyone run a structured task audit on their own role? What did you find?
