Your top engineer closed half the tickets the new grad did last quarter. The performance dashboard says they're underperforming. Your gut says the opposite.
I ran into this problem after moving from data engineering into management. One of my engineers spent three hours debugging why the nightly ETL job was timing out, which was blocking the entire team from running integration tests. They pair-programmed with our new hire to help them understand the data architecture. They wrote detailed documentation for the ETL migration because no one else would. They jumped into a Microsoft Teams thread at 9 PM to unblock a production deployment that couldn't wait until morning.
Then I pulled up the performance dashboard. They had closed half the tickets that our new grad closed that quarter.
The metrics said they were underperforming. My experience working with them said the opposite. They were the glue holding our team together.
This isn't a data engineering problem. It's an engineering management problem. Your best engineers spend their time on essential work that keeps the team functional, but none of it shows up in the systems that measure performance.
Why Your Metrics Are Lying to You
Jira and similar project management tools measure intent, not reality. They track what you planned to do, not what actually happened.
The problem compounds as engineers get more senior. Junior engineers can focus on tickets. Senior engineers spend most of their time on work that never gets a ticket: reviewing 40% of the team's PRs, helping others debug gnarly issues, fixing the CI pipeline that's been broken for weeks, sitting in architecture reviews to share context.
Creating a Jira ticket for "helped someone debug their environment setup" feels absurd. It took 20 minutes. But do that five times a week, and you've spent nearly two hours on invisible work. Multiply that across code reviews, production firefighting, mentoring conversations, and architectural guidance, and engineers can easily spend 50-70% of their time on work that has no corresponding ticket.
The result: visible output shrinks as actual value increases. The ticket dashboard shows your best engineer as underperforming while they're actually the team's MVP.
Tanya Reilly calls this "being glue": the technical leadership work that holds teams together but doesn't map cleanly to traditional promotion criteria.
Why Brag Documents Don't Work
The standard advice is to have engineers maintain brag documents. Julia Evans popularized this approach: engineers keep a running log of everything they do, then compile it for performance reviews.
This rarely works in practice.
It requires constant discipline. Engineers need to remember to update it after every contribution. Most forget until the week before their review, then try to reconstruct six months of work from memory.
It also feels self-aggrandizing. Engineers are trained to let their work speak for itself. Writing "I am great because I did X" feels uncomfortable, even when X is genuinely valuable.
And here's the real problem: without data backing it up, a brag document is just the engineer's word against the metrics. When you're under pressure to justify ratings, you'll default to the numbers.
Brag documents work for some people. But they put the burden on the wrong person. As a manager, you should have systems that capture your team's actual work, not force them to manually log it.
What the Data Actually Shows
Your engineers' work leaves digital traces everywhere. Every PR they review generates data in GitHub. Every commit they make has metadata. Every Teams conversation is timestamped. Every calendar event is logged.
This digital exhaust contains proof of the glue work. The challenge is assembling it into a coherent picture.
When an engineer reviews 127 PRs while their peers review 30, that's measurable. It explains why their ticket count is lower. They're multiplying everyone else's productivity.
Those 50-line PRs fixing type definitions, updating the Dockerfile, or adding error handling don't map to epics. But they're real work that keeps the system running.
When an engineer spends a week unblocking another team's API integration, that work might not even show up in your team's metrics. But it has real business impact.
The data exists. You just need to surface it.
How Tools Like Span Solve This Problem
I came across Span recently, and I wish I'd known about it when I was dealing with the invisible-work problem on my own team.
The approach is different from traditional productivity tools. Instead of trying to make engineers track their work better, Span analyzes the actual work in your codebase and other systems. It looks at your code, PRs, Jira tickets, incident management tools, and calendars to understand what's really happening.
The platform uses AI to automatically categorize work into different types. That random Tuesday afternoon an engineer spends fixing the ETL timeout? Span classifies it as infrastructure work without them needing to create a ticket. The three hours spent reviewing PRs? That shows up in the data.
One feature that particularly addresses the glue work problem is called Investment Mix. According to one of their customers, Chad Bayer (VP of Technology at The Helper Bees), "Span's Investment Mix is the best on the market. It now powers our executive updates and board discussions, replacing what used to be a time-consuming manual process."
Investment Mix shows how engineering time is distributed across different categories of work. Instead of just counting tickets, you can see the actual breakdown between feature development, code reviews, infrastructure improvements, and maintenance work. That paints a completely different picture than "closed 8 tickets this quarter."
The platform also tracks metrics like PR cycle time and team velocity, helping you spot patterns in how work flows through your team. You can see if code reviews are a bottleneck or if infrastructure work is eating into feature development time.
The key thing is that it uses the actual work as the source of truth. It doesn't matter if an engineer created a ticket. If they made substantial changes to the infrastructure, that work gets classified and measured automatically.
How to Identify Glue Work in Your Team
You need to actively look for this. Your best engineers won't complain that their glue work is invisible. They're too busy doing the work.
Here are the patterns to watch for:
The Review Burden: Pull up your team's PR activity. If one engineer is reviewing 2-3x more PRs than everyone else, that's glue work. They're acting as a quality gate for the entire team.
The Unblocking Pattern: Watch for engineers who are constantly mentioned in other people's PRs or Teams/Slack threads. If someone's name keeps coming up when people need help, they're doing glue work.
The Infrastructure Tax: Some engineers volunteer to fix the annoying things that everyone complains about but no one wants to own. The flaky tests. The slow CI pipeline. The missing documentation. This work is nearly invisible but high-impact.
The Onboarding Load: If you have an engineer who every new hire gets paired with for their first few weeks, that's glue work. They're transferring institutional knowledge that doesn't exist anywhere else.
The Cross-Team Coordination: Watch for engineers who spend time in meetings with other teams or helping unblock dependencies. This work often doesn't even show up in your team's metrics, but it has real business impact.
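The review-burden pattern is the easiest one to check with data you already have. Here's a minimal sketch in Python, assuming you've already fetched review data from GitHub's `GET /repos/{owner}/{repo}/pulls/{number}/reviews` endpoint. The field shapes mirror that API's response; the sample data itself is invented for illustration:

```python
from collections import Counter

def review_counts(reviews_by_pr):
    """Count how many distinct PRs each engineer reviewed.

    `reviews_by_pr` maps a PR number to the list of review objects
    returned by GitHub's pull-request reviews endpoint; we only rely
    on the reviewer's login field.
    """
    counts = Counter()
    for pr_number, reviews in reviews_by_pr.items():
        # A reviewer may leave several reviews on one PR; count the PR once.
        reviewers = {r["user"]["login"] for r in reviews}
        counts.update(reviewers)
    return counts

# Invented sample data shaped like the API response.
sample = {
    101: [{"user": {"login": "alice"}}, {"user": {"login": "bob"}}],
    102: [{"user": {"login": "alice"}}, {"user": {"login": "alice"}}],
    103: [{"user": {"login": "alice"}}],
}
print(review_counts(sample))  # alice reviewed 3 distinct PRs, bob reviewed 1
```

If one name dominates this tally quarter after quarter, that engineer is acting as the team's quality gate, and their ticket count will understate their impact.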
How to Credit Glue Work in Performance Reviews
Once you identify the glue work, you need to explicitly credit it in performance reviews and compensation decisions.
Instead of apologizing for an engineer's ticket count, reframe the conversation around leverage and impact. Present the review with data, not gut feelings.
Show that they spent 35% of their time on developer enablement. Then show how that translated into team velocity improvements. If the team shipped 23% more features in the second half of the quarter compared to the first half, and the engineer's increased review load preceded that improvement, connect those dots.
Quantify specific contributions:
- They reviewed 130 PRs that quarter, representing a significant portion of the team's total output. Their review feedback reduced the revision cycle time by an average of 8 hours per PR.
- They spent 15% of their time on infrastructure improvements that cut the CI pipeline time from 45 minutes to 12 minutes, saving the team approximately 180 hours in aggregate wait time.
- They pair-programmed with three junior engineers on complex features. All three shipped their work on schedule with minimal revision cycles.
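Figures like the CI savings above are easy to sanity-check with back-of-the-envelope arithmetic. A sketch, assuming roughly 330 pipeline runs per quarter (an invented figure; substitute the real run count from your CI provider's history):

```python
# Aggregate wait time saved by cutting CI pipeline duration.
old_minutes, new_minutes = 45, 12   # pipeline time before and after the fix
runs_per_quarter = 330              # assumed: pull this from your CI run history

saved_minutes = (old_minutes - new_minutes) * runs_per_quarter
saved_hours = saved_minutes / 60
print(f"~{saved_hours:.0f} hours of wait time saved per quarter")
```

Having the calculation behind the number makes it defensible in a review conversation: anyone who questions the "approximately 180 hours" claim can see exactly which inputs produced it.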
The data doesn't just prove they were busy. It proves they were strategic about where they invested time.
When you present this to your director, the conversation changes. It's far easier to make the case for a promotion with data like this than to defend a gut feeling against a "meets expectations" rating.
How to Change Your Team's Incentives
Measuring glue work isn't enough. You need to actively reward it, or engineers will optimize for what gets measured.
Change how you run performance reviews. Explicitly evaluate engineers on three dimensions:
Individual Contribution: Traditional feature development and bug fixes. This is what Jira measures.
Team Multiplication: Code reviews, mentoring, infrastructure work, and knowledge sharing. This is what tools like Span measure.
Cross-Team Impact: Work that helps other teams or the company, even if it doesn't show up in your team's metrics.
Weight all three dimensions equally. An engineer who excels at team multiplication but has lower individual contribution numbers can still get an "exceeds expectations" rating.
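One lightweight way to make the equal weighting concrete is a simple rubric score. A sketch, assuming each dimension is rated 1-5 during the review; the thresholds here are illustrative, not a standard:

```python
def overall_rating(individual, multiplication, cross_team):
    """Equal-weight average of the three review dimensions (each rated 1-5)."""
    score = (individual + multiplication + cross_team) / 3
    if score >= 4.0:
        return "exceeds expectations"
    if score >= 3.0:
        return "meets expectations"
    return "needs improvement"

# A strong team multiplier with modest ticket output can still exceed
# expectations: (3 + 5 + 5) / 3 is about 4.3.
print(overall_rating(individual=3, multiplication=5, cross_team=5))
```

The exact formula matters less than the commitment it encodes: team multiplication and cross-team impact carry the same weight as ticket output.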
Also change how you assign work. When planning sprints, explicitly allocate time for glue work. If an engineer is going to spend 30% of their time on code reviews, plan for them to close 30% fewer tickets. This prevents the trap where engineers get overloaded with both feature work and glue work, then get dinged for not completing enough tickets.
The Long-Term Impact
Give these changes six months and the results compound.
Team velocity can climb by 30% or more. Not because people work more hours, but because the glue work gets distributed more evenly. Junior engineers close more tickets because they get unblocked faster. Mid-level engineers contribute to code reviews and documentation. Senior engineers still do the most glue work, but they're no longer penalized for it.
Retention improves. Engineers who were considering leaving because they felt like their contributions weren't valued become your strongest advocates and refer excellent engineers to your team.
Hiring gets easier. When candidates ask how you evaluate performance, you can show them the actual data on how you measure and reward different types of contributions. Engineers who care about doing high-quality work are attracted to teams that recognize all forms of contribution, not just ticket velocity.
What You Should Do Tomorrow
Start by identifying who on your team is doing the most glue work. Pull up your GitHub data and look at PR review patterns. Talk to your junior engineers about who they go to when they're stuck.
Then look at how you're measuring performance. If you're only looking at ticket velocity, you're missing half the picture. Find ways to quantify the other half, whether that's through manual data gathering or tools like Span.
Finally, have explicit conversations with your team about glue work. Tell them you value it. Show them you're measuring it. Make it clear that doing glue work won't hurt their career trajectory.
Glue work is career-making if it's visible. It's career-ending if it stays invisible. As a manager, making it visible is your job, not your engineers'.
The engineers who succeed at senior levels aren't the ones who close the most tickets. They're the ones who multiply their team's effectiveness. Code review, mentoring, infrastructure work, and technical guidance are how they create that leverage.
But leverage without measurement looks like low productivity. You need systems that capture the full picture of contributions, not just the subset that maps cleanly to ticket boards.
If you have engineers spending time on glue work, make sure it's being captured and credited. They shouldn't have to choose between doing high-impact work and having a successful performance review.
How do you measure glue work on your team? I'd love to hear what's worked (or hasn't worked) for other engineering managers. Drop a comment below.