A few months ago, we had to change a small piece of logic.
It sounded simple.
A minor tweak.
“Shouldn’t take more than a couple of hours.”
Then s...
Really enjoyed this piece!
“Corporate amnesia” is one of those problems everyone has felt, but few articulate this clearly.
What stood out to me is how closely this maps to the broader concept of organizational memory loss. It’s not just a code problem—it’s a systems problem. When knowledge lives primarily in people’s heads (or worse, in Slack threads and half-written docs), it becomes fragile by default. As research on corporate amnesia shows, a significant portion of operational know-how is tacit and walks out the door with team churn, forcing teams to constantly “re-learn” things they already solved.
I also think there’s an interesting tension here:
Most teams oscillate between the two without ever designing a real “memory architecture.”
One idea I’ve seen work well is treating knowledge like code:
In other words, not just what the system does, but why it ended up that way. That “why” is usually the first thing to disappear—and the most expensive thing to rediscover.
Curious how you think AI tooling will affect this. A lot of teams are hoping it will solve knowledge gaps, but if anything, it seems like it might amplify them if the underlying context isn’t preserved.
Thank you @elenchen for the comment!
That’s a thoughtful read of the piece, especially the framing of “memory architecture”. I like that because it shifts the conversation from how much we document to how the system retains meaning over time.
Your tension is exactly the trap: teams treat documentation as volume instead of structure. More pages ≠ more memory. Less documentation ≠ more clarity. Without intentional design, both paths decay, just in different ways.
The “knowledge as code” analogy is spot on, but I’d push it one step further: most teams adopt the mechanics (versioning, PRs, etc.) without adopting the intent. Code review works because it encodes context, trade-offs, and constraints at the moment decisions are made. Documentation often happens after the fact, when that context is already diluted. So the real shift isn’t just tooling, it’s timing and proximity to decision-making.
On AI, I share your skepticism. It’s not that it will fail, it’s that it will faithfully reflect whatever memory system you already have.
If your organization has:
AI doesn’t fix that. It accelerates access to it. Which can actually increase confidence in incomplete or incorrect understanding.
In that sense, AI can amplify corporate amnesia in two ways:
But there’s a more interesting upside if teams are deliberate.
AI becomes powerful when:
Then AI shifts from being a crutch to being a memory interface, something that helps navigate and recombine knowledge, rather than invent it.
So I don’t think AI solves corporate amnesia. But it raises the stakes:
teams with weak memory systems will feel the pain faster,
teams with strong ones will compound their advantage.
If anything, AI forces a more uncomfortable question:
are we building systems that remember, or just systems that appear to know?
This is such a sharp distinction—especially the idea of AI as a “memory interface” rather than a source of truth.
Your point about timing resonates a lot. Most documentation is indeed written after the decision, almost like a post-rationalization. By then, the real constraints, debates, and uncertainties are already gone.
It makes me wonder whether the real failure isn’t documentation at all, but decision capture.
We’re very good at capturing artifacts (code, tickets, docs), but very bad at capturing reasoning in motion.
Maybe the deeper issue is that reasoning is expensive to externalize, so teams default to shortcuts.
Do you think this is solvable culturally, or does it require structural changes in how teams work (e.g., enforced ADRs, decision logs, etc.)?
I think you're right to separate documentation from decision capture; the two are often conflated, but they solve different problems.
On whether it’s cultural or structural, I’d say culture alone isn’t enough. Teams can agree that capturing reasoning is valuable, but under delivery pressure, it’s the first thing to be dropped.
So it has to be embedded into the workflow itself.
The key is reducing the “activation energy” required to capture decisions. If it feels like extra work, it won’t happen consistently.
That’s why I like lightweight approaches:
The goal isn’t completeness, it’s continuity.
You don’t need perfect memory, you need enough breadcrumbs to reconstruct intent later.
Without that, teams aren’t really scaling knowledge, they’re just scaling output.
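To make "breadcrumbs" concrete, here's a minimal sketch of what low-activation-energy decision capture could look like: a single append-only log with one function call per decision. The filename, field names, and sample decision are all my own invention for illustration, not something from the article.

```python
import datetime
import json
import pathlib

# Hypothetical location; in practice this would live in the repo next to the code.
LOG = pathlib.Path("decisions.jsonl")

def record_decision(what: str, why: str, alternatives: list[str]) -> None:
    """Append one breadcrumb. Cheap enough to do at decision time, not after."""
    entry = {
        "when": datetime.date.today().isoformat(),
        "what": what,
        "why": why,
        "alternatives_considered": alternatives,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example breadcrumb (fabricated): enough to reconstruct intent later,
# not a full design document.
record_decision(
    what="Cache auth tokens for 5 minutes",
    why="Upstream API rate-limits us; longer TTLs risk stale revocations",
    alternatives=["no caching", "long TTL with manual invalidation"],
)
```

The point isn't this exact format; it's that capture is one call at the moment the trade-off is still fresh, so it actually happens.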
“Enough breadcrumbs to reconstruct intent” is a great way to frame it.
It also highlights something subtle: memory systems don’t need to be exhaustive, they need to be navigable.
I’ve seen teams try to solve this with massive documentation hubs, but they become graveyards because retrieval is harder than writing.
Which brings us back to AI again—ironically, retrieval is the part AI is very good at.
But that creates an interesting dependency:
AI is only as good as the structure and signals it can retrieve from.
So maybe the real design challenge isn’t just capturing decisions, but making them queryable.
Do you think we should start designing documentation explicitly for machine consumption, not just human readers?
Yes, and I think that shift is already happening, even if implicitly.
Historically, documentation was written for humans reading linearly.
Now it’s increasingly consumed non-linearly by both humans (search-driven) and machines (embedding/retrieval-based).
That changes how we should think about structure.
Instead of long-form narratives, we need:
In a way, it’s similar to how we moved from monoliths to modular systems in software.
The risk, though, is over-optimizing for machine readability and losing human clarity.
So the balance becomes:
structured enough for machines to navigate,
meaningful enough for humans to understand.
If you get that right, AI doesn’t just retrieve knowledge, it preserves the relationships between ideas.
And that’s where it starts to feel less like a tool and more like an extension of organizational memory.
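Since this subthread is about making knowledge queryable, here's a toy sketch of the chunk-and-retrieve idea: split docs at headings and rank chunks by term overlap with a question. Term overlap is a crude stand-in for embedding search, and the function names and sample doc are invented for the example.

```python
import re
from collections import Counter

def chunk_by_heading(doc: str) -> list[tuple[str, str]]:
    """Split a markdown document into (heading, body) chunks."""
    chunks, heading, body = [], "", []
    for line in doc.splitlines():
        if line.startswith("#"):
            if heading or body:
                chunks.append((heading, "\n".join(body)))
            heading, body = line.lstrip("# ").strip(), []
        else:
            body.append(line)
    chunks.append((heading, "\n".join(body)))
    return chunks

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, chunks: list[tuple[str, str]], k: int = 1):
    """Rank chunks by token overlap with the query (stand-in for embeddings)."""
    q = tokens(query)
    scored = sorted(
        chunks,
        key=lambda c: sum((q & tokens(c[0] + " " + c[1])).values()),
        reverse=True,
    )
    return scored[:k]

doc = """# Caching decision
We cache auth tokens for 5 minutes because the upstream rate-limits us.

# Retry policy
Retries are capped at 3 to avoid amplifying outages.
"""
best = retrieve("why do we cache tokens?", chunk_by_heading(doc))
print(best[0][0])  # prints "Caching decision"
```

Notice the dependency the comment above describes: retrieval only works here because the doc was chunked around decisions in the first place. Unstructured prose gives the same ranking code nothing to grip.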
This hits very close to home.
I’ve seen cases where a “small change” turns into hours of reverse engineering — not because the system is complex, but because the context is missing.
The point about trust really stood out. Once a team stops trusting the codebase, everything slows down — even simple decisions feel risky.
I also like the idea of code being the source of truth. Clean structure and intent go a long way in preserving knowledge without relying on docs that go stale.
Curious — have you seen teams successfully maintain this over time, or does it always decay eventually?
This is exactly the kind of scenario that made me write the piece.
What you describe, hours of reverse engineering for a “small change”, is usually a symptom of missing decision context, not technical complexity. The system might be simple, but the reasoning behind it has evaporated.
And I completely agree on trust. Once that’s gone, the cost isn’t just time, it’s hesitation. Every change becomes a gamble, so teams either slow down or avoid touching things altogether. That’s when stagnation starts to look like stability.
On your question: I don’t think decay is inevitable, but it is the default.
Teams that manage to resist it tend to do a few things consistently:
What’s interesting is that none of this requires perfect discipline, just consistency at the edges.
So it’s less about “can this be maintained forever?” and more about “does the system actively fight forgetting, or passively allow it?”
Most systems decay because they’re passive.
The good ones have just enough friction in the right places to make remembering the path of least resistance.
That’s a really solid way to frame it — “missing decision context, not complexity.”
I think that’s exactly where most teams underestimate the problem.
The idea of systems being passive vs actively resisting forgetting really stands out. In my experience, most systems are passive by default, and by the time teams realize it, the cost is already showing up in slow delivery and hesitation.
Also agree on the “consistency at the edges” point — it’s rarely about big processes, more about small habits done repeatedly.
I’ve seen PR discussions actually become the best form of lightweight documentation when done well.
Curious — have you seen any teams use ADRs effectively without them becoming just another thing that gets ignored over time?
Great question @printo_tom, and honestly, that's where most ADR initiatives either work really well or quietly die.
Short answer: yes, I’ve seen teams use ADRs effectively, but only when they stop treating them as “documentation” and start treating them as part of decision-making itself.
Because the failure mode you’re hinting at is very real: ADRs become a graveyard the moment they’re written after decisions or stored somewhere detached from the workflow.
The teams where they actually work tend to share a few traits:
1. Written at decision time, not after: If you capture them while the trade-offs are still being debated, you get real context. If you write them later, you get a cleaned-up story.
And that difference is everything.
2. Lightweight by design: The moment ADRs feel like “writing a document”, adoption drops. The good ones are closer to a structured note than a report (1 page, clear context, decision, consequences).
3. Live with the code: If they’re in Confluence, they get ignored. If they’re in the repo, versioned and visible in PRs, they get used.
4. Triggered by change, not ceremony: The best teams don’t say “we should write ADRs”.
They say: “this decision is hard to reverse or affects multiple parts of the system → write one”.
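Putting those four traits together, a minimal repo-resident ADR can be little more than a structured note. The sketch below uses one common ADR shape (context / decision / consequences); the numbering, dates, and content are fabricated for illustration, not prescribed by the article:

```markdown
# ADR 0007: Cache auth tokens for 5 minutes

- Status: accepted
- Date: 2024-03-12

## Context
The upstream auth API rate-limits us at 100 req/min, and we were
hitting the limit during traffic spikes.

## Decision
Cache tokens in-process with a 5-minute TTL.

## Consequences
- Revoked tokens can stay valid for up to 5 minutes.
- Alternatives considered: no caching (rejected: rate limits),
  long TTL with manual invalidation (rejected: operational burden).
```

Kept in something like `docs/adr/` and linked from the PR that implements it, this stays visible exactly where the next change will happen.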
What’s interesting is that ADRs don’t really fail because they’re a bad idea.
They fail because they violate the same principle we talked about earlier: they add friction instead of embedding into the flow.
When they work, they feel like a byproduct of good engineering hygiene.
When they don’t, they feel like compliance.
And that's probably the litmus test: do ADRs feel like a natural part of the work, or like compliance?
Also +1 on PR discussions, honestly, I’ve seen exactly the same thing.
Well-done PR threads are often the highest-fidelity decision records teams have.
ADRs work best when they don’t replace that, but distill the parts you’ll wish you had 6 months later.
That framing is really helpful — especially the idea that ADRs should feel like finishing a decision, not documenting one after the fact.
I’ve seen exactly what you’re describing — when ADRs are written later, they almost become a “sanitized version” of reality, and you lose the actual trade-offs that mattered.
The point about keeping them close to the code also resonates. Anything outside the developer workflow tends to drift pretty quickly.
I also like how you connected it back to friction — it’s the same pattern everywhere. If something doesn’t fit naturally into how engineers already work, it just won’t stick long-term.
Feels like the real win is not ADRs themselves, but making decision-making visible while it’s happening.
Curious — have you seen any teams strike a good balance between ADRs and PR discussions without duplicating effort?
Yeah, that's the key tension: if ADRs and PRs overlap, one becomes noise.
The best balance I’ve seen is:
Keeping ADRs lightweight and written before merge seems to make the difference. That way they stay useful without becoming overhead.
Feels like the real win is exactly what you said: making decisions visible while they’re happening, not after.
One thing I’m really curious about after writing this: how many “black boxes” are you all carrying in your current codebase?
Not the obvious messy parts, I mean the ones that technically work, nobody touches anymore, and everyone is a bit afraid to break.
Have you ever had a moment where someone left (or you joined a new team) and suddenly a critical part of the system became… untouchable?
I’d love to hear real stories:
Curious to see how different teams deal with this 👀
Really enjoyed this — especially the framing of “corporate amnesia” as a business risk, not just a code quality issue.
The example you shared hits close to home: what should be a small change turning into an investigation is something most teams have experienced. And it perfectly illustrates that the real problem isn’t complexity, it’s lost context. As you point out, once knowledge leaves with people, teams are forced to “re-learn” their own systems — a pattern widely recognized as a major productivity drain in organizations.
One thing that stood out is your emphasis on readable code as memory. That’s a subtle but important shift from the usual “just write more docs” advice. Documentation often decays, but code that clearly expresses intent tends to age much better.
I’d maybe add one complementary angle: beyond readability and ownership, decision visibility (why something exists, not just how it works) is often the missing piece. Without that, even clean code can become misleading over time.
Overall, this is a great reminder that:
we’re not just maintaining systems — we’re maintaining understanding
Really appreciate this, you captured the core tension perfectly: the problem isn’t complexity itself, it’s the disappearance of context.
I like how you framed “decision visibility.” That’s exactly the layer that tends to evaporate first. Even when code is clean and readable, it can still tell the wrong story if the original intent is missing. At that point, teams aren’t just reading code, they’re interpreting it, and interpretation is where time (and mistakes) creep in.
That’s part of why I lean on readability as a form of memory: it doesn’t just document what the system does, it constrains how much can be misunderstood later. But you’re right, it’s not sufficient on its own.
If I had to extend the idea, I’d say:
And long-term understanding depends on both.
Thanks @paolozero for adding that angle, it fits really naturally into the “corporate amnesia” framing.
I loved it!
The “2-hour fix” that turns into a half-day investigation hit close to home. That silent moment of “does anyone actually know how this works?” is probably one of the most expensive questions a team can ask—and it happens more often than people admit.
What stood out to me most is the idea that documentation alone isn’t enough. A lot of teams react to knowledge loss by writing more docs, but as you point out, those drift quickly. The deeper issue is that knowledge isn’t embedded in the system itself—so it decays as people leave or context shifts.
I’d add that there’s also a cultural angle here:
teams often reward shipping fast over making things understandable. Over time, that creates a kind of “local optimization trap” where each change makes sense in isolation, but the system as a whole becomes opaque.
One thing I’ve seen help is treating clarity as a deliverable:
It’s interesting how this connects to the broader idea of organizational memory—when knowledge lives mostly in people’s heads, it literally “walks out the door” with turnover.
Curious: have you seen teams successfully balance speed and clarity without slowing delivery too much? That seems to be the hardest trade-off in practice.
This is such a good expansion of the idea, especially the “local optimization trap.” That’s exactly how these systems drift into opacity: every change is reasonable in isolation, but the aggregate becomes harder and harder to reason about.
I also really like your framing of clarity as a deliverable. That’s a mindset shift more than a process change, and it’s probably where most teams struggle. Speed is visible and rewarded immediately; clarity is invisible until something breaks.
On your question about balancing speed and clarity, I’ve seen it work, but only when teams stop treating it as a trade-off.
The teams that get closest tend to:
So it’s less “slow down to be clear” and more “optimize for not having to rediscover things later.”
The hard part is that the payoff is delayed, while the pressure to ship is immediate, which is why this often ends up being a cultural decision, not just a technical one.
Really thoughtful comment @lucaferri and I’m glad the example resonated (even if for slightly painful reasons).
I feel personally attacked by this article, Gavin. 😅 I came to Dev.to to procrastinate, not to have a mirror aggressively held up to my team's 4,000-line utils/do_not_delete.js file! You call it "Corporate Amnesia," I call it "Job Security through Obscurity." If I'm the only dev who knows why a load-bearing console.log('here 3') keeps the auth database from catching fire, they literally cannot fire me. It's just math.
That "It works, don't touch it" section triggered my fight-or-flight. We all have that one chunk of legacy code written by a guy named "Jared," who drank six Red Bulls, refused to write comments, and then vanished to become a goat sim farmer. We don't refactor Jared's code. We pray to it. We offer it sacrifices in the form of disabled ESLint warnings.
I'm officially stealing your handover checklist. Will anyone actually fill it out before they rage-quit? Absolutely not. But I'll feel significantly better pointing at it while the servers burn. Fantastic read, Gavin!
I respect the honesty, “job security through obscurity” might be the most accurate anti-pattern I didn’t include. 😄😄😄
The scary part is that it works… right up until it doesn’t. The moment you’re unavailable, everything that looked like control turns into fragility.
Also, every team has a “Jared.”
Not a person anymore, but a force of nature embedded in the codebase:
The “load-bearing console.log” is real. I’ve seen versions of that which people were genuinely afraid to delete, not because of what it does, but because of what might happen if it stops doing it.
And your point about the checklist is fair, most processes fail not because they’re wrong, but because they rely on the worst possible moment (handover, burnout, deadlines) to be followed properly.
That’s why I think the checklist only works if it’s not a one-time thing.
If it becomes something lightweight and continuous (captured in PRs, small notes, decisions as they happen) it stops being a chore and starts being a byproduct.
Otherwise yeah… it just becomes another well-intentioned document we point at while everything burns. 🔥