As a software developer I know only one thing: no matter how good a technical decision may seem, it will always look bad when you review it with the eyes of the present.
Exactly. I've been experimenting with AI through different approaches, from a "copilot" to spec-driven development, and in all of them I feel a lack of control.
It felt the same as when RAD frameworks like Ruby on Rails or Grails appeared: they did things "automagically", but in the end they were predictable. Outputs were always the same, and you could trust the plugins because they were implemented by humans.
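To make that concrete, here is a minimal sketch of why that kind of magic stays predictable (the naming rules are simplified for illustration, not Rails' real inflector): convention over configuration is just a pure function, so the same input always gives the same output.

```typescript
// Minimal sketch of convention over configuration: a pure, deterministic
// mapping from model name to table name. (Naive rules for illustration;
// Rails' real inflector handles many more cases.)
function tableNameFor(modelName: string): string {
  const snake = modelName
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2")
    .toLowerCase();
  return snake.endsWith("s") ? snake : `${snake}s`; // naive pluralization
}

console.log(tableNameFor("BlogPost")); // "blog_posts" -- the same on every run
```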
AI changes it all. The code becomes a black box and the specs become the new "code", which leads to a high cognitive load: describing every little detail to avoid AI hallucinations, and iterating on the same prompt until the AI does what you want it to do.
A human being just needs a paragraph or even a diagram to understand something; AI needs to know every single detail of the system, and I feel totally unable to provide that. Which, in the end, can be another flavour of "imposter syndrome".
The Rails analogy is exactly right and you found where it breaks. Rails was predictable magic — you could trace the convention, trust the output. The automagic had edges you could find.
AI doesn't. Same prompt, different output. Same spec, different architecture decision. You can't read the source of what it decided or why.
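A minimal sketch of what that means in practice, assuming a generic chat-completion endpoint (the URL, payload, and response shapes are placeholders, not any specific vendor's API): send the identical prompt twice and the outputs can still diverge.

```typescript
// Send the identical prompt twice and compare the outputs.
// The endpoint and payload/response shapes are hypothetical placeholders.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://llm.example.com/v1/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, temperature: 0 }),
  });
  const data = (await res.json()) as { text: string };
  return data.text;
}

async function main() {
  const prompt = "Design a REST endpoint for user signup.";
  const [a, b] = await Promise.all([complete(prompt), complete(prompt)]);
  // Even at temperature 0, many hosted models don't guarantee
  // byte-identical outputs across runs.
  console.log(a === b ? "identical" : "diverged");
}

main();
```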
I wrote about this and built a spec-writer skill for Claude Code precisely because of this problem. Not to eliminate the cognitive load but to move it earlier — before the agent touches the codebase. Inspect the decision before the output, not after.
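Mechanically, that ordering looks roughly like this (a sketch; the function names are illustrative placeholders, not the skill's actual API): the agent commits to its decisions in a spec you can read, and code generation only runs against an approved spec.

```typescript
// Illustrative sketch of "inspect the decision before the output":
// phase 1 produces a reviewable spec, phase 2 runs only after approval.
// generateSpec / generateCode / askHumanToApprove are placeholders for
// whatever your agent tooling provides, not the real skill's API.
interface Spec {
  goal: string;
  decisions: string[]; // e.g. "use Vectorize for embeddings"
  filesToTouch: string[];
}

async function specFirst(
  task: string,
  generateSpec: (task: string) => Promise<Spec>,
  generateCode: (spec: Spec) => Promise<string>,
  askHumanToApprove: (spec: Spec) => Promise<boolean>,
): Promise<string | null> {
  const spec = await generateSpec(task); // the agent commits to decisions here
  if (!(await askHumanToApprove(spec))) {
    return null; // steer the decision, not the diff
  }
  return generateCode(spec); // the codebase is only touched after review
}
```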
The control problem doesn't fully go away. But you can move from chasing output to steering decisions. That's at least legible.