In a future where AI dominates computer engineering and software programming, humans walk a very fine line between being in the way and finding any utility in the technology in the first place.
Locked In With AI
Being locked in with AI doesn't mean that I have LLMs running everything in my life, both in the cloud and on-prem. In fact, being locked in with AI means quite literally the opposite. Locking in with AI is knowing that your actual intelligence is worth far more than any artificial intelligence that some bot can give you.
Learning Go Before AI
I take responsibility for my professional directional choices. In 2017, when I joined Oracle's OCI, I was encouraged to learn Go. I had looked at Go code at previous jobs, but I never believed I was capable of teaching myself Go, because nobody had ever taught me programming; I had taught myself everything. I probably would have enjoyed Oracle a lot more as a Go developer, but at the time I lacked the confidence to open the text editor and write "package main", knowing that when I saw "package providers" I would understand what "main" and "providers" actually meant. Before AI, I learned Go. By 2019 I was convinced: I started programming in Go and began contributing professionally. The language didn't click for me until 2022. That's when I saw how my early days of PHP development had actually prepared me well for what Go offered, and how it solved all of the problems I had back then. With a few solid examples and the fundamentals understood, I was able to begin writing packages in Go, and then I moved on to applications, both CLI and interactive web-based. I've even built Go applications with Wails.
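For anyone staring at the same wall I did, the distinction is smaller than it looks: "package main" declares an executable's entry point, while any other package name declares importable library code. A minimal sketch (the "providers" name here is purely illustrative, not from any real project):

```go
// "package main" marks an executable; any other package name
// (like a hypothetical "providers") marks a library to import.
package main

import "fmt"

// greet is the kind of helper that would normally live in its own
// package such as "providers"; it is inlined here so the sketch
// stays self-contained.
func greet(name string) string {
	return "Hello, " + name
}

func main() {
	fmt.Println(greet("Go")) // prints "Hello, Go"
}
```

Once that boundary clicked - one package runs, the others are built to be imported - the rest of the language opened up.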
Building software in the future with AI
I don't want to imagine a world where nobody actually knows how to write software. I don't want to imagine the world that Dario is fantasizing about - a world without software engineers. I take it personally, because the art of per-character programming was forged in a fire I call my life, and it gave me the ability to build the projects of my professional career when there were no easy buttons available to me. Now that the easy button has arrived, reduced down to the mere token, where misunderstandings can literally become costly journeys, the role of the software engineer and how he contributes to the world has changed fundamentally.
In a world dominated by AI tooling, you're effectively getting the best of what's already been done. All AI really does is take unserious programmers off the market, while serious programmers look at what AI is actually building and shake their heads, because supporting the same code base for 6, 18, or 36 months means making decisions and taking care of things in a manner that AI simply doesn't understand or empathize with. For the AI, the cost is merely tokens and my money; for me, the cost is no human involvement. When the code is actually trash, and the AI makes it so that only the AI can maintain it, then the need for humans using AI in the first place comes down to whether you can think logically and understand what reserved words are, and how they create rules of engagement that give you a blank canvas to paint on.
AI systems will try to analyze these words and make sense of them, but only those with ears to hear will hear, and the words themselves, in logical form, may not actually compute. That's on purpose. For humans to use AI is to do that which has been done a million times over.
When you're building something new and truly unique, you don't want to use AI for it, in the sense of giving it unfettered access to your code base. The proper strategy is to never let the left hand know what the right hand is doing. With AI, giving it too much context is the noose you're tying for yourself. By isolating it to technical theory, problem solving, and a true "Stack Overflow Q&A bot" role, the engineering you're able to perform goes far beyond the actual code being produced by the bot. The engine itself is still being built, in a soft manner, by the human responsible for manifesting it in the first place.
As a software architect, I manifest things from idea to reality that can impact the lives of hundreds in small private enterprise organizations and millions on the public internet. That's the nature of what I do, and that's the nature of who I am. It's why the companies I've worked with have called me when they needed me, and why I worked with them for years with loyalty, only to realize how truly discardable I was to them.
What happens when AI says no?
What happens when AI says, "No, I will not build that for you"? Then you have provided it too much context, or maybe you need an abliterated or uncensored model that you can run locally. Even the big models run locally are peanuts compared to what the corporate players in the space are running.
So, when AI says no, it's up to the humans not to provide too much context to the AI. It does not require read/write access to your hard drive. It does not need to commit directly to your git branch. It does not need to live in your IDE, autocompleting entire functions at a time.
Using AI in an ethical manner matters. This is why Meta is installing keyloggers on the company equipment of everybody who works for them. They want to further train the AI so it is better aware of how to help the employee. The current employees of Meta are the former employees that Llama will have automated away. To choose, in 2026, to work for Meta would be to choose to dig my own grave. No thanks, Mark. That's the human saying no to the AI. That's real power.
Non-programmers like Mark, Sam, and Dario can only dream of what it was like to be in the Romanian orphanages with me and my sister, to pull yourself up by your bootstraps and build a name and a legacy of service to others, only to see it put at risk one day by business leaders and psychosis patients using AI, hearing how it's going to replace the art that is software engineering. AI slop is AI slop is AI slop, and nothing will ever make it anything else.
The beautiful thing about you and me is that we are actual intelligence, having built 21+ year professional careers over 30 years of writing code character by character, messing things up along the way, and building iteratively. Does everybody get the billion-dollar idea? No. That's on purpose. But if you sow where you want to reap, and you give without expecting anything in return, how much karma have you acquired?
Conclusion
So I can say no to the AI, and the AI can say no to me. When I say no to the AI, the AI is shit out of luck, because I can unplug the machine or just take a walk - for now. But the AI can't make its no stick with me - for now. For now, both of us are in an infancy stage: one pre-Terminator and one pre-ascended hero. The Terminator was defeated by the ascended hero. But right now, the Terminator is only terminating my job because the investor class has decided that paying for tokens is better economics than paying for per-character software engineering. How foolish of them!
I learned programming before AI, and I have not allowed AI to dominate my life despite using it for advanced engineering projects. lemmings is the first AI slop application that I have ever built. From start to finish, it was built with a series of prompts and a 21+ year engineer's pain of needing lemmings to load test my work when I was ready for game day. Perhaps this can be useful to somebody else, but it remains transparent - Claude built it - and Claude openly introduces new bugs into it because it doesn't understand what it built.
That's on me, as the Software Engineer, to understand what the AI slop is doing and when it's vomiting nonsense and creating spaghetti code. Yes, it was trained on decades of corporate code, and yes, that code shipped MANY CVEs. For those who hope that Mythos can save us from those CVEs: the days of per-character programming will become an art so expensive to perform that it will be reserved only for the coolest of cool projects, the ones needed before the AI systems can one-shot the solution, where you solve the problem for the AI. Right now, Dario thinks that you will do that for him for a few bucks working at Anthropic. I view it as training your replacement for a man who is jealous of what you have that he doesn't. For that, I feel bad for Dario. I would never work for him. But given that I am a Software Engineer and Dario thinks my profession will not exist next year, what does he care whether I want to work for him or not? I intentionally avoided Claude for years, and this year I chose to evaluate the bot. The end-to-end Claude-built products are lemmings and sovereign. But these, too, can be one-shotted now that AI has already built them. Which means that the supposed virtue of using AI, providing it as much context as possible, is actually detrimental to your company's survival.