I would personally never trust an LLM to answer this for me. Unless it's for something fundamental, in which case I either already understand how to answer that question myself, or I need to understand how to answer that question myself. I do run into these debates with my coding partner a lot, but I think there's a lot of very valuable (perhaps contextual) knowledge that comes from having these debates, and I would never feel comfortable not understanding why we made a certain choice (or again, trusting an LLM to make that choice for me).
Totally fair, and I really appreciate the thoughtful response 🙌
I 100% agree that understanding why a decision is made is crucial—especially when it comes to architecture or trade-offs. The goal wouldn’t be to replace that process or make decisions for you, but more to act like a third teammate who can surface considerations you might miss (like hidden tech debt, coupling risks, delivery impact) without being emotionally invested.
Think of it less like “the LLM tells you what to do” and more like “here’s a summary of trade-offs or risks based on your context—now you decide.”
Curious—if there were a way to retain full understanding and control but speed up the back-and-forth or bring more objectivity into it… would that feel more useful? Or still too risky?
Thanks for the response, and I do like that idea! Something that intentionally acts only as a third thought in the room. That's really what I've found AI tools to be useful for -- they don't do anything for me, but they point out potential downsides in the way I'm doing things. Half the time it has no idea what it's talking about, but the other half I go "oh, yeah, maybe I should tweak that".