Mykola Kondratiuk

Issue Tracking Is Dead - Here's What PMs Actually Manage Now

Linear's CEO just declared issue tracking dead. And honestly? The data's been saying this for months.

25% of new issues on Linear are created by AI agents. That number was 5x lower three months ago. 75% of enterprise workspaces have coding agents installed. The agent creates the issue, writes the code, opens the PR.

So what does the PM do when the entire delivery pipeline runs itself?

The Accountability Chain Problem

Here's what broke. The traditional workflow has an implicit accountability chain:

Human creates issue → Human picks it up → PM owns the outcome
      ↑ context          ↑ delivery          ↑ accountability

When agents enter the chain:

Agent creates issue → Agent writes code → Agent reviews PR → PM owns... what?
      ↑ ???                ↑ ???               ↑ ???              ↑ everything

The accountability chain fractures. Nobody validated the agent's issue was worth building. Nobody checked the agent's acceptance criteria. The PM is suddenly accountable for an outcome they didn't initiate, built by entities they can't have a standup with.

I hit this wall managing my own agent workflows about two months ago. Stopped assigning work, started auditing what agents decided to do. The shift was disorienting until I named it: I'm not managing work anymore. I'm governing outcomes.

Anthropic's Glasswing - A PM Framework in Disguise

Anthropic launched Project Glasswing yesterday. On the surface it's a safety program for their most powerful model. 12 vetted organizations. Scoped access. Usage audits. $100M in accountability infrastructure.

But look at the five principles they built:

  1. Scope access - not every agent gets access to everything
  2. Vet use cases - just because it can doesn't mean it should
  3. Audit outputs - systematic review of what agents produce
  4. Budget accountability - factor governance into the cost model
  5. Know when NOT to deploy - some workflows need humans

That's not a safety framework. That's a PM governance playbook.

If you're running agent workflows in your org, here's how those principles translate:

agent_governance:
  scope:
    - define which systems each agent can access
    - set boundaries BEFORE deployment, not after incidents
  vetting:
    - map which workflows benefit from agent-generated work
    - identify workflows that need human initiation
  audit:
    - build review cycles for agent-created issues
    - quality gates at each pipeline stage
  accountability:
    - define who owns outcomes when agents create the work
    - factor incident response into agent deployment costs
  boundaries:
    - identify workflows requiring human judgment
    - document why certain workflows stay manual
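To make the scope principle concrete, here's a minimal Python sketch of a deny-by-default gate that checks an agent's requested resource against a config shaped like the one above. Everything here is illustrative: the `AGENT_GOVERNANCE` dict, the agent names, and `is_action_allowed` are my own stand-ins, not any real platform's API.

```python
# Illustrative governance config mirroring the YAML above.
# Agent names and scopes are hypothetical examples.
AGENT_GOVERNANCE = {
    "issue-creator": {
        "scope": {"linear-workspace", "codebase:read"},
        "review_required": True,
    },
    "code-writer": {
        "scope": {"repo:payments", "staging"},
        "review_required": True,
    },
}


def is_action_allowed(agent: str, resource: str) -> bool:
    """Deny by default: an agent may only touch resources inside its scope."""
    policy = AGENT_GOVERNANCE.get(agent)
    return policy is not None and resource in policy["scope"]


print(is_action_allowed("code-writer", "staging"))     # True
print(is_action_allowed("code-writer", "production"))  # False: out of scope
print(is_action_allowed("rogue-agent", "staging"))     # False: unknown agent
```

The design choice worth copying isn't the dict, it's the default: an agent or resource that isn't explicitly listed gets denied, which is the "boundaries BEFORE deployment" principle as code.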

What I Learned Rebuilding 17 Agent Accountability Chains in One Afternoon

A few weeks back, a platform change forced me to migrate my entire agent setup in one afternoon. The migration itself wasn't the hard part - swapping configs, updating endpoints, that's mechanical.

The hard part was rebuilding "who owns what."

Every agent had implicit accountability chains I'd never written down. This agent creates issues but a human reviews them. That agent writes code but only in specific repos. Another agent can deploy to staging but never production.

When I had to rebuild from scratch, I realized none of those chains were documented. They lived in my head. That's fine for one person running a handful of agents. It's a disaster for a team.

The exercise took an hour. I mapped every agent to its access scope, its review requirements, and its accountability chain. If you're running any kind of agent workflow, do this before your platform forces you to:

## Agent: Issue Creator
- Access: Linear workspace, read-only on codebase
- Can create: Bug reports, feature suggestions
- Review required: Human validates before issue enters sprint
- Accountable: PM (me) for issue quality

## Agent: Code Writer  
- Access: Specific repos only, staging environment
- Can create: PRs, branch commits
- Review required: Human code review before merge
- Accountable: Tech lead for code quality, PM for scope

Simple? Yeah. But I guarantee most teams running agents haven't done it.
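If you want that map to be machine-checkable instead of living in a doc, here's a hedged sketch: keep each agent card as data and fail fast when a required field is missing. The schema mirrors the markdown cards above; the field names and the `missing_fields` helper are my own, not a real tool's.

```python
# Fields every agent card must declare, per the cards above.
REQUIRED_FIELDS = {"access", "can_create", "review_required", "accountable"}

# Hypothetical agent registry mirroring the markdown cards.
agents = {
    "issue-creator": {
        "access": ["Linear workspace", "codebase (read-only)"],
        "can_create": ["bug reports", "feature suggestions"],
        "review_required": "human validates before issue enters sprint",
        "accountable": "PM (issue quality)",
    },
    "code-writer": {
        "access": ["specific repos", "staging"],
        "can_create": ["PRs", "branch commits"],
        "review_required": "human code review before merge",
        # "accountable" deliberately omitted to show the check firing
    },
}


def missing_fields(registry: dict) -> dict:
    """Return {agent: sorted missing field names} for every incomplete card."""
    return {
        name: sorted(REQUIRED_FIELDS - card.keys())
        for name, card in registry.items()
        if REQUIRED_FIELDS - card.keys()
    }


print(missing_fields(agents))  # {'code-writer': ['accountable']}
```

Run it in CI and an agent can't ship without a documented owner, which is the whole point of the exercise.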

The 10x Employee Governance Gap

The "10x employee" narrative is everywhere right now. One person plus AI replaces five. Solo founder to $80M exit. The numbers are impressive.

Nobody's asking the governance question though.

The 10x employee makes judgment calls at 5x the rate. Runs agent stacks that drift incrementally. Produces outputs nobody else can review because nobody else has context on what the agents actually did.

I've seen agent drift firsthand. It's not dramatic. It's a half-percent quality shift per week. The agent interprets a requirement slightly off. Over a month, those tiny drifts compound into something you wouldn't have shipped manually.
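The compounding is easy to underestimate, so here's the back-of-envelope math. The half-percent-per-week figure is the anecdote above, not a measured benchmark; the model is simple multiplicative decay.

```python
def quality_after(weeks: int, weekly_drift: float = 0.005) -> float:
    """Each week keeps (1 - drift) of the previous week's quality."""
    return (1 - weekly_drift) ** weeks


print(round(quality_after(4), 3))   # ~0.980 -> roughly 2% lost in a month
print(round(quality_after(13), 3))  # ~0.937 -> roughly 6% lost in a quarter
```

Two percent over a month is invisible in any single PR review. Six percent over a quarter is a release you'd have blocked, which is why drift needs a measurement cadence, not spot checks.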

What Dies, What Survives

Dead:

  • Status updates (agent knows its status)
  • Task assignment (agent picks up work)
  • Progress tracking (pipeline is observable)
  • Manual triage (agent prioritizes on data)

Survives and gets harder:

  • Outcome definition
  • Quality gates
  • Accountability chains
  • Agent governance
  • Stakeholder alignment

What to Do Monday Morning

If you're a PM reading this and thinking "ok but what do I actually change" - here's the practical version:

  1. Audit your current agent usage. Which agents create work items? Which write code? Which have production access?

  2. Map accountability chains. For each agent, who owns the outcome of what it produces?

  3. Build review cycles. Not for everything - for the high-risk outputs. Agent-created issues that go to sprint. Agent-written code that touches production.

  4. Document boundaries. Which workflows stay manual? Why? Write it down before someone automates them without asking.

  5. Set quality gates. What's the bar for agent outputs? How do you measure drift over time?

Linear's CEO just told you tracking is dead. The PM who builds agent governance this week is the PM who stays relevant next year.

What does your agent governance setup look like? Curious how other teams are handling this.
