The spec is the most important part of AI development. Any feature needs iteration on the spec until what it says is unambiguous. After the code is written, you can then ask: is it 100% aligned with the spec?
Totally agree - the spec is everything. But here's what bugs me: right now the spec and the code are two separate things. You write a spec, then AI writes code, then you manually check "does code match spec?"
What if they're NOT separate? What if the spec IS the code?
You write the spec in layers — from high-level intent down to precise logic. Each layer reads like English. And the system mathematically verifies that every layer is consistent. Then it generates Python/TS from the lowest layer.
No manual "is it aligned?" check needed. The structure makes misalignment impossible.
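To make the layered idea concrete, here is a minimal Python sketch of what "each layer checked against the one above" could mean. All the names (`layer1_allows`, `layer2_discount`, `layers_consistent`) are invented for illustration; no real tool or system is implied, and a real verifier would prove consistency for all inputs rather than spot-checking a sample.

```python
# Hypothetical sketch of a "layered spec": each layer narrows the one
# above it, and a checker verifies consistency before code is emitted.

# Layer 1 (high-level intent): a discount is never negative and never
# exceeds half the price.
def layer1_allows(price: float, discount: float) -> bool:
    return 0.0 <= discount <= 0.5 * price

# Layer 2 (precise logic): 10% off orders over 100, otherwise no discount.
def layer2_discount(price: float) -> float:
    return 0.10 * price if price > 100 else 0.0

def layers_consistent(prices) -> bool:
    """Check that every output of layer 2 is permitted by layer 1."""
    return all(layer1_allows(p, layer2_discount(p)) for p in prices)

# A real system would prove this for all inputs; here we spot-check a range.
sample = [float(p) for p in range(0, 1000, 7)]
assert layers_consistent(sample)

# Only after the check passes would code be generated from the lowest
# layer; here layer2_discount itself stands in for the "generated" code.
```

The point of the structure: if `layers_consistent` fails, the mismatch is caught between spec layers, before any implementation exists to review.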
We are saying the same thing. Define the layer architecture in the spec and the implementation will follow. People talk about LLMs producing hard-to-read code, but if you ask the LLM to "do a code quality review" and get yourself to an A grade, then it's not.
You're right that we overlap on the foundation - spec-first is king. But I think there's a key difference in how we handle the "is it aligned?" step.
Your approach: good spec -> LLM codes -> LLM reviews -> A grade. That works, but the review is still an LLM's opinion - it can miss things, just like a human reviewer can.
What I'm proposing: the spec itself has layers, and each layer is formally checked against the one above. So by the time you reach code, alignment is already guaranteed - no review step needed.
Simple analogy: your way is like proofreading an essay really carefully. My way is like using a calculator - the structure doesn't allow wrong answers in the first place.
Both are better than what most people do today. I just think we can push it further.