
"AI is not trustworthy. Go back to coding by hand."
It was a short sentence. But it perfectly captured the contradiction I had been living in.
As a product engineer, I spent my time turning business priorities into shipped software. Speed mattered. Quality mattered. What became intolerable was watching an organization demand both while rejecting the engineering discipline required to make both possible.
The market asks for faster delivery every quarter. But users allow almost no defects. Any engineer knows this combination is nearly unsustainable if you keep relying on grinding human labor. That is exactly why I believed AI was no longer optional. Even with probabilistic tools, we can still enforce deterministic reliability by filtering output through explicit control procedures. The core issue is not whether we use tools, but whether we build control systems.
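To make "filtering output through explicit control procedures" concrete, here is a minimal sketch of a deterministic gate in Python. The gate, the function name, and the banned-pattern policy are all hypothetical illustrations, not a prescription: the point is only that a probabilistic generator's output passes or fails on explicit, repeatable checks.

```python
def passes_gate(candidate_source: str) -> bool:
    """Deterministic gate: accept AI-generated Python source only if it
    passes explicit checks. Hypothetical policy for illustration."""
    # Check 1: the candidate must be syntactically valid Python.
    try:
        compile(candidate_source, "<candidate>", "exec")
    except SyntaxError:
        return False
    # Check 2: the candidate must not contain banned constructs
    # (a stand-in for whatever policy the team actually enforces).
    banned = ("eval(", "exec(")
    if any(token in candidate_source for token in banned):
        return False
    return True
```

The same output run through the same gate always yields the same verdict, which is exactly the deterministic property the surrounding text argues for.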
My organization chose the opposite path.
Instead of building a control plane for AI usage, it framed AI itself as the problem. Disappointment from unskilled usage became evidence against the tool, not evidence of a missing system. The operating model became painfully clear:
- Keep aggressive output targets.
- Remove the highest-leverage tool.
- Fill the gap with overtime and manual repetition.
That is not engineering. It is deferred cost.
This is exactly where my anger came from. Probabilistic output is supposed to be noisy. That is why deterministic quality gates exist.
What I argued for was not ideology. It was process design:
- Pre-commit hooks to block policy violations before review
- Procedure gates combining static analysis and tests before merge
- Formal verification checks for critical intent-to-implementation consistency
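As a sketch of the first item, here is what a minimal pre-commit policy check could look like in Python. The rules shown (no hard-coded secrets, no stray debug prints) are hypothetical placeholders for whatever policy a team actually adopts, and the script is a simplification of what frameworks such as pre-commit provide.

```python
#!/usr/bin/env python3
"""Minimal pre-commit policy gate (hypothetical rules, for illustration)."""
import re
import sys
from pathlib import Path

# Hypothetical policy: flag hard-coded secrets and leftover debug prints.
POLICY = {
    "hard-coded secret": re.compile(r"(api_key|password)\s*=\s*['\"]"),
    "debug print": re.compile(r"^\s*print\("),
}

def violations(path: Path) -> list[str]:
    """Return a list of policy violations found in one file."""
    found = []
    for lineno, line in enumerate(path.read_text().splitlines(), 1):
        for rule, pattern in POLICY.items():
            if pattern.search(line):
                found.append(f"{path}:{lineno}: {rule}")
    return found

if __name__ == "__main__":
    # Git passes staged file paths as arguments in a typical hook setup.
    problems = [v for f in sys.argv[1:] for v in violations(Path(f))]
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the commit
```

Wired into `.git/hooks/pre-commit`, a check like this rejects the violation before a human reviewer ever sees it, which is the whole point of putting the gate ahead of review.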
Banning tools can look like control, but in practice it often means giving up control. If you refuse to build systems and rely on labor intensity instead, quality degrades and people burn out at the same time.
I could no longer treat that as normal.
My resignation was not an escape. It was a decision. I did not leave because AI is imperfect. I left because I could not align with a culture that refused to build deterministic safeguards around probabilistic intelligence.
That decision is what led to Archright.
I am not building just another app. I am rebuilding the production process itself:
- how probabilistic intelligence is disciplined into deterministic integrity
- how thought trajectory survives implementation without leaking intent
- how this becomes a repeatable system instead of a heroic individual effort
So this is not just a resignation story. As the AI tsunami approaches, this is the first page of a journey to rebuild a software engineering practice that has lost its direction.