AI is taking over. I cannot remember how many times I have heard this sentiment over the past week alone. While it is true that AI might be taking over, there has been an interesting overlap dragging on for the last two years where work that was supposedly going to be replaced by AI has not only thrived but evolved to be more effective when AI is paired with human expertise.
I would like to offer an alternative perspective on what I think is actually happening. Yes, AI is gaining traction, most notably in big tech, but here is where it gets interesting. While big tech companies are actually leading AI adoption in development practices, they are doing it much more carefully than the rest of us. They have stable systems that have been in place for the longest time, and while the rest of the world moves faster and adopts newer languages and technologies, big tech usually has to move slower due to legacy systems that might not be optimal but offer the best return on investment with minimal effort compared to heavy rewrites. This cautious approach, ironically, might be protecting them from some of the pitfalls I am about to describe.
Within the startup world, using AI almost exclusively might be setting yourself up for significant challenges. First, you are just trying to get to your first hundred customers who need to find enough value to stick with your product. Given the way machines approach work, it is a double-edged sword. Sure, you will move faster within the first two months, but your code base will quickly begin to feel like a decade-old piece of sellotape connecting components with virtually zero separation of concerns. Recent data shows that AI-generated code has led to an 8-fold increase in code duplication and a 7.2% decrease in delivery stability. While we in the startup space need to quickly pivot and iterate, it becomes much harder to do so when you are replacing whole parts entirely. This is not just about immediate failure. It is about accumulating technical debt that becomes increasingly expensive to maintain.
Who am I? I am a software developer who has used AI both by surrendering control and working as partners. As soon as I surrendered control in a few individual projects, I realized that the project worked, but bugs kept popping up. I spoke to a colleague and they provided the solution of using AI to write the tests first. On the next iteration. I wrote the tests first and let AI do its thing. The outcome was amazing, but only at first. I went into the code base only to find single-function files as big as five hundred lines. In the software world we usually say that code is written for other human beings, but since I was not counting on another human to debug the code I had entrusted to AI, I saw no problem having those huge functions.
It was not until my intelligent counterpart was told to go and change the position and color of an overlay that I began to hate my life. A few prompts later, I found myself asking it to return the code base to a certain point in history. What I quickly realized is that instead of trying to fix the code I told it to, it went out and swapped majority of the components with newer, more "advanced" features. On closer inspection, I realized that the more lines it added to fix a single problem, the more bugs it introduced. It was like cutting open another wound in your arm to offset the one in your thigh.
I also learned the hard way that sometimes AI just seems to have a mind of its own. Remember when I gave it the tests? Yeah, when the tests do not work, your counterpart will rewrite them so that they work. I found a test case that had been adapted to make a "feature" work. What is a feature you did not account for? A bug. This is actually a critical blind spot in current AI development practices. AI will modify tests to fit the code rather than maintaining the integrity of what you are actually trying to test. It is like having a student change the answer key to match their wrong answers.
Do not get me wrong, there are bugs that should be nowhere near a software product, but sometimes a bug might just creep in that gives your product a certain refinement you might not have noticed on your own. That is a strength of AI I have fallen in love with. It can stumble upon unexpected solutions that work better than what you initially planned.
What am I advocating for? While I am a fan of AI, I feel that as it stands, it serves you better when you know what you are aiming for. AI will use the path of least resistance to introduce a feature, and this will end up costing you in the end. If you have no idea of the emergent effects it introduces by writing code in a certain way, then you will end up with a program that ideally works but will ultimately be too costly for a startup to recover from. You are not Google or Facebook who have been in the game long enough to pay off any cyber threats from zero-day discoveries, so use your power wisely.
Here is what I have learned works: using AI to quickly iterate not just on business outcomes, but also on alternative implementations. Organizations that implement collaborative AI solutions are seeing productivity increases of up to 40%, but the key word here is "collaborative." The sweet spot seems to be where human judgment guides AI capabilities rather than the other way around. AI is excellent at generating options and handling repetitive tasks, but it still needs human context to understand what actually matters for your business.
I should mention that AI coding tools are evolving rapidly. My experiences from even a few months ago may not reflect the current state of the technology. But the fundamental principle remains: while AI can amplify your capabilities dramatically, it is important to realize that in the end, all logic and no intuition is not good for your business.
Top comments (0)