Originally published on Medium. Cross-posted here for the dev.to community.
Quick context for devs:
Traditional docs and tests weren't built for AI-assisted development. When AI co-creates code, you need continuous, verifiable proof chains, not just documentation. This is the story behind D-POAF®, a proof-oriented framework we built to address that gap.
The Moment Everything Became Clear
It was a Thursday afternoon.
My husband, Azzeddine, and I were in a meeting with a client's compliance team. The client, a financial services company, was rolling out AI-assisted development to accelerate its engineering pipeline.
The head of compliance asked what sounded like a straightforward question:
"Can you show me this feature was delivered as specified?"
The engineering lead didn't hesitate. They pulled up the usual artifacts:
- Requirements doc
- Pull request and code review
- Test results
- Release notes
"This is all here," they said. "We've documented it."
The compliance officer paused.
"That's documentation," they said. "I'm asking for traceability."
Then the follow-up that changed the room:
"Can you walk me through the chain, from intent to implementation to outcome including what changed along the way, and why?"
Silence.
Not because the engineers were incompetent.
Not because they cut corners.
But because they were using tools and practices built for a world where humans write every line of code, make every decision, and own every artifact end-to-end.
That world doesn't exist anymore.
The Pattern We Couldn't Ignore
That meeting wasn't unique. Over the next months, we saw the same tension surface everywhere: regulated environments, critical systems, fast-moving product teams.

The same accountability question surfaces across industries when AI enters the engineering workflow
Different people said it differently:
- Healthcare teams building AI-assisted workflows: "How do we keep a clean trail for submissions?"
- Banking engineering leads modernizing legacy systems: "How do we show compliance when AI is in the loop?"
- Tech companies shipping AI-integrated features weekly: "How do we keep accountability when code is co-created?"
Different contexts. Same underlying problem:
Teams were moving faster than their ability to explain, justify, and own what they shipped.
And when you can't do that, it stops being a "process issue." It becomes an operational, legal, and reputational risk.
When AI Becomes Part of Engineering
AI doesn't just "help" anymore. It participates.
In modern teams, AI can:
- generate code from requirements
- propose architecture options
- refactor modules
- draft tests and documentation
- summarize reviews and decisions
- flag issues in production
This isn't "AI tools." This is AI-native software engineering where AI is woven into every phase of the lifecycle and influences both decisions and outputs alongside humans.
That's where our traditional practices start to break, in ways that feel familiar if you've lived through them.
1) The decision that "nobody" made
An engineer asks an assistant for implementation options. The assistant recommends a path. It's phrased confidently. It's fast. It works.
Two weeks later someone asks:
"Why did we choose this approach?"
And the honest answer is fuzzy:
- "It seemed reasonable at the time."
- "That's what the assistant suggested."
- "I don't remember the tradeoffs."
- "The discussion is somewhere."
Nobody acted irresponsibly. But the decision trail didn't survive the speed.
2) The PR that passed… without anyone truly owning it
The pull request looks clean. Tests are green. The summary is polished, because AI wrote the summary. Reviewers skim and approve.
Later, an incident happens and the question becomes:
"Who understood this change well enough to vouch for it?"
Everyone participated.
No one truly owned it.
3) The requirement that quietly drifted
The intent starts clear. But along the way:
- a prompt gets tweaked
- a generated implementation is "slightly adjusted"
- an edge case is rationalized away
- a test is rewritten to match the new behavior
In the end, everything looks aligned because all artifacts reflect the final state.
But the business asks:
"When did we decide to treat that edge case differently?"
No one can point to a moment. It just… happened.
4) The "same input, different output" problem
A team reruns a workflow that previously generated a stable result. Now the output changes.
Same repo. Same ticket. Same engineer. Different behavior, because the model updated, the system prompt changed, a tool version shifted, or the retrieval context evolved.
Now the question isn't only "what changed in the code?"
It's "what changed in the system that created the code?"
What Actually Broke
AI didn't break software engineering. It exposed what was already fragile:
our ability to connect intent → decisions → implementation → validation → outcomes in a reliable way.
Most teams assume accountability exists because they have artifacts:
- tickets
- docs
- PRs
- reviews
- test runs
- audit logs
But these were designed for a human-centric workflow, where decisions are made in meetings, code is written by people, and reasoning is implicit because the author can be asked later.
When AI participates materially, that assumption fails.
You can still produce documentation. But your ability to explain and defend the chain becomes inconsistent, especially at scale, across teams, across time.
In high-stakes environments, "we think it's right" isn't a durable posture.
The Realization We Couldn't Unsee
We kept hearing versions of the same demand:
- "Show me the chain."
- "Show me what changed and why."
- "Show me who decided."
- "Show me what validated this behavior."
Not more documentation.
Not more dashboards.
Not another checklist.
A system that preserves accountability as the work happens, even with AI in the loop.
That's when the idea crystallized into something simple (and uncomfortable):
In AI-native engineering, legitimacy can't rely on authority alone; it has to rely on evidence that stays connected end-to-end.
So instead of "add governance later," we flipped the order:
Proof-first engineering
Start with the question: what would we need to demonstrate this work is aligned, justified, and safe over time?
Then build the lifecycle so those demonstrations are generated continuously, not retroactively.
(Yes, "proof" is the word we ended up using for that standard of evidence. Not as a buzzword as a requirement.)
What We Tried (And Why It Didn't Work)
We looked for existing solutions.
- Agile? Great for collaboration. Mostly silent on AI participation and decision traceability.
- DevOps? Great for automation and monitoring. Not designed to preserve intent-to-outcome accountability.
- Compliance frameworks? Strong on controls and audits. Often episodic and external to daily engineering flow.
- AI governance guidelines? Frequently principles and risk taxonomies. Not an operational engineering model for teams shipping weekly.
We could bolt on documentation.
We could add approval gates.
We could create dashboards.
But none of that created what teams needed most:
continuous, verifiable linkage from intent → decisions → artifacts → outcomes, even with AI participating.
How D-POAF® Emerged
That's what became D-POAF®: the Decentralized Proof-Oriented AI Framework.
A reference model for AI-native software engineering built around five principles:
Proof Before Authority: decisions are legitimate when justified by verifiable evidence, not hierarchy or automation.
Decentralized Decision-Making: authority is distributed across humans and AI with explicit boundaries, not concentrated in a black box.
Evidence-Driven Living Governance: rules evolve based on observed outcomes, not static mandates.
Traceability as a First-Class Property: intent, decisions, actions, artifacts, and outcomes remain linkable and auditable.
Human Accountability Is Non-Transferable: even with AI autonomy, humans retain explicit responsibility for boundaries, escalations, and acceptance.

Five foundational principles that shift legitimacy from authority to verifiable evidence
We structured work into Waves: units of verifiable progress that move through three macro-phases:
Instruct & Scope → Shape & Align → Execute & Evolve
And we defined three continuous "proof streams" (the evidence a team can generate and refresh):
Proof of Delivery (PoD): what was built aligns with intent, with a traceable chain across decisions and artifacts
Proof of Value (PoV): what shipped produced measurable value (not just output, but outcome)
Proof of Reliability (PoR): the system continues to behave as intended as context changes over time

The three continuous proof streams that sustain accountability in AI-native engineering
The goal is simple to state:
You can walk the chain in any direction: from an outcome back to decisions, from decisions back to intent, from a change back to what validated it, from an artifact back to who accepted accountability.
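To make that concrete, here is a hypothetical sketch of a walkable proof chain. The record types, field names, and IDs are invented for illustration and are not the published D-POAF® schema; what matters is that every node keeps explicit upstream links and a named accountable human, so the chain can be traversed from an outcome all the way back to intent.

```python
from dataclasses import dataclass, field


@dataclass
class ProofNode:
    node_id: str
    kind: str                  # "intent" | "decision" | "artifact" | "outcome"
    summary: str
    accountable: str           # the human who accepted accountability
    evidence: list = field(default_factory=list)  # PoD / PoV / PoR entries
    upstream: list = field(default_factory=list)  # node_ids this derives from


def walk_back(node_id: str, graph: dict) -> list:
    """Follow upstream links from any node back to the originating intent."""
    chain, current = [], graph[node_id]
    while current is not None:
        chain.append(current)
        current = graph[current.upstream[0]] if current.upstream else None
    return chain


graph = {
    "intent-12": ProofNode("intent-12", "intent",
                           "Handle edge case X for payouts", "Product owner"),
    "dec-40": ProofNode("dec-40", "decision", "Chose approach B over A",
                        "Tech lead", evidence=["PoD: recorded tradeoffs"],
                        upstream=["intent-12"]),
    "pr-771": ProofNode("pr-771", "artifact", "Implementation of approach B",
                        "Reviewer", evidence=["PoD: review + passing tests"],
                        upstream=["dec-40"]),
    "rel-9": ProofNode("rel-9", "outcome", "Feature live, KPI moved",
                       "Engineering lead",
                       evidence=["PoV: KPI delta", "PoR: 30-day monitoring"],
                       upstream=["pr-771"]),
}

# From an outcome back to the intent that justified it:
for node in walk_back("rel-9", graph):
    print(f"{node.kind:<9}{node.node_id:<11}accountable: {node.accountable}")
```

Walking forward is the same idea with downstream links. The value is that "who decided, and what validated it?" becomes a query, not an archaeology project.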
Why We're Sharing It
Azzeddine and I spent three years developing D-POAF®. We validated it through research, formalized it with an ISBN-published specification (979-10-415-8736-0), deposited it with the National Library, and open-sourced it.
But this isn't "our framework."
The problem it addresses, building accountable, traceable, governed systems in AI-native environments, belongs to everyone:
- engineers trying to ship responsibly
- auditors trying to verify what "good" looks like now
- regulators translating laws into operational reality
- product teams trying to scale AI without losing control
Nobody asked for another framework.
But the world does need a reference model for proof-oriented AI-native engineering, because AI is already reshaping how software is built, whether our processes are ready or not.
So we're building it in public, with the community.
Because frameworks like this don't get stronger in isolation.
They get stronger when real teams pressure-test them.
Explore D-POAF®
📖 Canonical Specification (PDF)
🌐 Website
💻 GitHub
💬 Community: Discord | LinkedIn Group
Sara Ihsine is co-creator of D-POAF® alongside Azzeddine Ihsine. Both are research engineers specializing in AI-native software engineering, governance, and proof-oriented practices.
Connect: LinkedIn