Most QMS systems are designed to document change.
But that’s not where the real challenge is.
In practice, the problem starts after a change is approved.
What else did that change affect?
In most MedTech teams, this is still handled manually.
A requirement gets updated. But the related risk is not revisited.
A design change goes through. But the verification plan stays the same.
A deviation is closed. But no one checks what it means for the Technical File.
Everything is documented. But the connections are not.
So QMS managers end up doing the same thing over and over again.
Jumping between modules.
Cross-checking documents.
Trying to reconstruct the logic behind decisions.
Not because the system is missing data, but because it doesn’t show how that data is connected.
This is usually where traceability starts to break.
Not in big, obvious ways. In small gaps that only become visible during an audit.
A missing update. An outdated assumption. A link that was never made.
Most systems are not built to handle this.
They store records. They track workflows.
But they don’t actively evaluate the impact of change across the system.
And that’s exactly what auditors are starting to focus on.
Not just whether something was recorded, but whether the logic holds together.
If one thing changed, how did you make sure everything affected by that change was updated?
That’s a very different question.
What’s becoming clear is that documenting change is not enough.
Understanding change impact is the actual work.
That means being able to see:
- which requirements are affected
- which risks need to be re-evaluated
- which tests are no longer valid
- what needs to be updated before anything is implemented
Not after.
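One way to picture that list is as a walk over a traceability graph: start from the changed item and collect everything downstream. The sketch below is purely illustrative; `TraceabilityGraph`, the item IDs, and the link structure are assumptions for the example, not any real QMS data model.

```python
# Hypothetical sketch: change impact as a breadth-first walk over trace links.
from collections import defaultdict, deque

class TraceabilityGraph:
    """Directed links between QMS items (requirements, risks, tests, TF sections)."""
    def __init__(self):
        self.links = defaultdict(set)  # item -> items that depend on it

    def link(self, source, dependent):
        self.links[source].add(dependent)

    def impacted_by(self, changed_item):
        """Everything downstream of the changed item, i.e. what must be reviewed."""
        impacted, queue = set(), deque([changed_item])
        while queue:
            item = queue.popleft()
            for dep in self.links[item]:
                if dep not in impacted:
                    impacted.add(dep)
                    queue.append(dep)
        return impacted

g = TraceabilityGraph()
g.link("REQ-012", "RISK-007")   # requirement feeds a risk analysis
g.link("REQ-012", "TEST-031")   # ...and a verification test
g.link("RISK-007", "TF-SEC-4")  # risk feeds a Technical File section

print(sorted(g.impacted_by("REQ-012")))  # → ['RISK-007', 'TEST-031', 'TF-SEC-4']
```

The point of the sketch: changing REQ-012 surfaces not only the directly linked risk and test, but also the Technical File section two hops away, which is exactly the kind of connection that gets missed in a manual review.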
We’ve been working on this problem from a system perspective.
What happens if change is not treated as an isolated action, but as something that propagates?
What happens if the relationships between requirements, risk, and verification are actually used?
The approach we ended up with is simple in principle.
An event happens.
Its impact is analyzed across the system.
The necessary updates become visible.
Traceability is maintained as part of the workflow.
Instead of investigating impact manually, the system surfaces it.
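The event-to-review-tasks flow above could be sketched roughly as follows. Everything here is an assumption for illustration: `ChangeEvent`, `RELATED`, and `review_tasks` are invented names, and a real system would derive the relationships from live traceability data rather than a static map.

```python
# Illustrative sketch of the event-driven flow: one change event becomes
# an explicit, visible set of review tasks before anything is implemented.
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    item_id: str
    description: str

# Assumed relationship map; stands in for real traceability links.
RELATED = {
    "REQ-012": ["RISK-007", "TEST-031"],
    "RISK-007": ["TF-SEC-4"],
}

def review_tasks(event: ChangeEvent) -> list[str]:
    """Turn one change event into review tasks for every transitively related item."""
    tasks, seen, stack = [], set(), [event.item_id]
    while stack:
        current = stack.pop()
        for related in RELATED.get(current, []):
            if related not in seen:
                seen.add(related)
                tasks.append(f"Review {related} for impact of {event.item_id}")
                stack.append(related)
    return tasks

for task in review_tasks(ChangeEvent("REQ-012", "tightened tolerance")):
    print(task)
```

The design point is that the tasks are generated at the moment the event is raised, so the impact review is part of the change itself rather than a later investigation.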
If you’re dealing with MDR or FDA expectations, this is where things usually get difficult.
Not documentation, but consistency.
Not records, but connections.
If you want a more detailed breakdown of how this works in practice, we put it together here:
https://qmswrapper.com/ai-qms-for-medical-devices/
And if you’re curious how this looks inside an actual QMS workflow:
https://qmswrapper.com/qms-software-demo/
Most QMS systems help you record what changed.
Very few help you understand what that change actually means.
Top comments (6)
I keep running into this across different teams, so I’m curious how others are handling it in reality.
When a change gets approved… who actually checks that everything impacted has been covered?
Not just “their part”, but the full picture: risk, verification, docs, all of it.
In a lot of cases I’ve seen, everyone assumes someone else looked at it.
That’s a good question, and honestly, that assumption is exactly where things tend to fall apart.
In most teams there isn’t a single clear owner of the full impact. You usually have QA looking at compliance, engineering looking at design, maybe regulatory reviewing documentation, but no one is really accountable for connecting all of it together.
So what happens is everyone checks their part, signs off, and the overall picture is assumed to be covered. It works until something sits between those areas and doesn’t clearly belong to anyone.
That’s why these gaps are rarely obvious during the change itself. They only show up later, when someone tries to follow the logic end to end and realizes a piece is missing.
Some teams try to solve it with checklists or cross-functional reviews, which helps to a point. But once the Technical File grows, it becomes very dependent on individual experience and memory.
The real shift is when impact is treated as a shared, structured step rather than something distributed across roles. Not just asking each function to review, but making the connections between requirements, risk, verification, and documentation visible during the change itself.
Curious how it works in your case. Is there a defined owner for impact analysis, or is it more of a shared responsibility across the team?
This hits close to home. We had a design change go through last quarter and nobody flagged that the verification plan was outdated until an internal audit caught it three weeks later. The traceability was technically there but the impact connections were not.
The manual cross-checking is what kills us. Every change request turns into a 2-hour investigation just to figure out what else it touches. And that is for a relatively simple Class II device. I can't imagine doing this at scale with a Class III Technical File.
The event-driven approach you describe makes sense. Treating change as something that propagates rather than something you just document is the right framing. Most of our nonconformities trace back to exactly this gap: the change was recorded, the downstream impact was not.
That’s a very familiar situation.
On paper everything looks fine because the traceability exists, but the system doesn’t really force anyone to revisit the connections once something changes. So the responsibility shifts to people, and that’s usually where things start to slip.
What you described with the verification plan is a perfect example. Nothing was missing, but the logic was no longer aligned. That’s exactly the kind of gap audits tend to expose. Not missing documents, but inconsistencies between them.
The manual investigation part is something we hear often as well. It works while the system is relatively small, but as the Technical File grows, it becomes harder to keep up. Especially for higher class devices where the number of dependencies increases quickly.
That shift from asking where something is documented to asking what it actually affects is a big one. Treating change as something that propagates across requirements, risk and verification makes a lot more sense in practice, but most systems still don’t support that way of working.
Out of curiosity, how are you handling those impact checks today? Is it mostly manual review across documents or do you have some structured way of tracking dependencies?
Coming at this from the post-market side: the impact analysis gap that kills us isn't the one going forward, it's the one going backward. A field complaint comes in, a CAPA gets opened, and root cause points to a supplier component that changed 14 months ago. The change went through change control cleanly: signed, dated, training records filed. But the link between "supplier component change" and "affected risk file sections" was never physically made. Just assumed. So now I'm reconstructing an impact assessment after the fact, for an auditor, on a Friday afternoon.
The 2-hour number James cited is generous for anything Class III. Last implantable I worked on, a single material change sent me into a 6-hour trace: biocompatibility testing in one folder, packaging validation in another, sterile barrier data in a third, all of it linked by a spreadsheet nobody had updated since the original design engineer left.
Honest question for the qmsWrapper team: when the event-driven model catches a missing link, does it flag at the time of the change, or only when something downstream queries it? That's the difference between preventing the gap and finding the gap faster. Both valuable, but they're different problems.
That backward trace is probably the hardest version of this problem.
When everything is clean from a change control perspective but the reasoning behind the impact was never fully connected, you end up reconstructing intent rather than reviewing it. And that’s exactly the situation auditors tend to focus on.
The supplier example you mentioned is a good illustration. The change itself is properly recorded, but the assumption about impact was never made explicit or linked. That’s what turns a normal review into a full investigation later on.
The Class III scenario you described is also very real. Once you have biocompatibility, packaging, sterilization, and validation all sitting in different places, the effort isn’t in finding the documents, it’s in proving how they relate to each other in the context of that one change.
To your question, the goal with the event-driven approach is to surface that impact at the time of the change, not only when something downstream forces the check.
When an event is evaluated and moves into impact analysis, the system looks at the existing relationships across requirements, risk, verification, and related elements in the Technical File. Based on that, it identifies what is likely affected and generates a structured set of items that need to be reviewed or confirmed.
So instead of assuming that a supplier component change has no effect on risk or validation, those areas are explicitly brought into the decision process while the change is still being assessed.
That doesn’t eliminate the need for judgment, but it reduces the chance that a connection is missed or left implicit.
If something was not linked at all in the system, then it becomes a different problem. In that case, you are closer to detection than prevention. But as the system builds those relationships over time, more of that impact can be surfaced earlier, when it is still manageable.
Out of curiosity, in your current setup, are those supplier changes typically linked anywhere outside the change control record, or is most of that context still sitting in separate documents?