This article (talk) is based on an idea I have been thinking about for a while: while everyone is talking about AI, either from a hype perspective ...
Let me know what you think, I think it would be an interesting talk and is not something I have heard anyone talk about before, but I am biased.
If it sucks, tell me why, if you like it, tell me why.
Any interesting thoughts that could make it even better? I definitely want to hear those!
Yes, I want to see the talk please!
Awesome insight, thank you!
That's not only true but well said in a poetic way.
That's still one of the problems, but not at a junior or manager level. When I tried to reproduce the "AI can code" edge case as a senior, I got erroneous code 90% of the time.
Glad to read one of your brilliant posts again, @grahamthedev !
Thanks Ingo, nice to see you, as it has been ages!
So nice to read long-form, well-thought-through, pragmatic realism in an uncertain world. It's time we moved beyond the should I, shouldn't I debate into practical processes that can underpin the challenges we'll face in 2026.
Definitely. As you know I am not good at thinking, but I am trying to find the edges on what our roles look like going forward and where, realistically, the issues lie that we are going to experience.
Or, the hype people are right, AGI is born and we all just retire early on UBI...haha
Great article.
I'm curious what the future holds, as I'm facing the same challenges... But what if, with time, we don't have to look at the code?
My approach right now (and going forward) is to instrument the tests with Allure reporting, create a visual representation of the new codebase (using a modular approach with hexagonal architecture), and check it visually.
If something goes off, we may not check the code, just prompt the AI again. But on the flip side, right now I know I can't accept any code that I wouldn't be able to understand. If it's a new technology for me, like a Redis database or Kafka, there's no way I would accept anything. We need to know these tools much more deeply to make sure that, with AI's cheap code generation, we can actually utilize them.
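The ports-and-adapters (hexagonal) idea mentioned above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual setup; all class and function names here are invented for the example:

```python
from abc import ABC, abstractmethod

# Port: the contract the core logic depends on (illustrative name).
class MessageQueuePort(ABC):
    @abstractmethod
    def publish(self, topic: str, payload: dict) -> None: ...

# Adapter: a stand-in you fully understand before swapping in
# something new-to-you like Kafka or Redis.
class InMemoryQueueAdapter(MessageQueuePort):
    def __init__(self):
        self.messages = []

    def publish(self, topic: str, payload: dict) -> None:
        self.messages.append((topic, payload))

# Core logic only sees the port, so an AI-generated adapter can be
# reviewed and tested in isolation at the "seam".
def place_order(queue: MessageQueuePort, order_id: str) -> None:
    queue.publish("orders", {"id": order_id})

queue = InMemoryQueueAdapter()
place_order(queue, "A-1")
print(queue.messages)  # [('orders', {'id': 'A-1'})]
```

The point of the boundary is that you can keep the core logic small and human-reviewed while treating generated adapters as replaceable parts.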
Absolutely.
Obviously there is balance and in a talk like that you want to lean into your convictions rather than saying that there is nuance.
For me the nuance is how much damage a thing can cause and how many things it touches.
Completely isolated single feature that doesn't touch any other code, ship it, but the more "edges" / "seams" it has, the more you need to be the guardian IMO.
Yeah. This is my experience as well: new features that depend on nothing existing are a clear AI winner, but when it has to untangle legacy code, it fails most times.
Good guidelines are king here as well.
Many execs forget it's not just translating plain English requirements into plain lines of code.
It's taking very generic concepts such as "add a payment system" and creating something out of it, whether it's a basic shopping cart that just works and anyone can hack, or a high-friction one with all sorts of 20-factor authentication, GDPR, KYC and AML hell plus bugs where nobody even wants to sign up... and then striking a balance in between, with trial, error and experience.
It's all about designing extremely complex, bloated real-life user journeys. Where 40 years ago there was a clerk in an office using her own brain power, with occasional manager escalation, to solve each individual client request, now everything has to be understood, planned and taken care of in advance.
Code matters little if a user journey is broken; the architecture is then broken too, so the code itself can't make much sense...
Then lawmakers come and mess up all the rules again, nothing makes sense logically anymore, and all tech, which is 100% logic, obviously breaks. Then AI comes: forget binary logic, a sigmoid function decides if it's approved or rejected, sometimes based on obscure ML "features" like last week's hair colour that nobody will be able to explain... So where does code sit in all this, and how much liability should it bear in this mess?
I prefer ReLU to make my decisions :-P haha
But seriously this is key, the real world is messy, code never travels in a straight path and it takes judgement, experience and instinct to strike the balance.
Maybe one day we will install judgement into a model, but it certainly won't be with the current transformer architecture!
Excellent take! The value of spec-driven development is what we've been pushing at ainativedev.io. Let me know if you're interested in writing more about how to steer agents in this new wild age of AI code gen - would be glad to feature you on there.
Hey
Have bookmarked the site and will check it out when I get a chance!
Great piece so far! I will read the whole article later.
Interesting, I had never looked at that situation from that perspective.
Best nice nice very goof
You obviously read my stuff often, I often make goofs :-P hahah
Seriously though, glad you found it interesting!
Wonderful post, it's the difference between clean code (AI) and wise code (you).
I like that, "wise code" is a great goal now :-)
Love this perspective! I'm going to have to re-read it more thoroughly, very interesting.
Thanks, the feedback is appreciated!
When you re-read it let me know if you have any thoughts or perspectives that might improve it.