AI agents are increasingly writing higher-quality code for large companies. But what does the future really look like? Will this turn into low-quality “slop,” or will it accelerate creativity?
Microsoft CTO Kevin Scott has previously stated that he expects 95% of all code to be AI-generated by 2030. Other companies are following: Nvidia has stated it is beginning to use Cursor and has reported 30,000+ engineers using it across code generation, reviews, testing, and debugging. But does this mean software engineers are going to lose their jobs? While I can’t see into the future, I can express what I believe. As more and more code is generated by AI, software and algorithms are becoming easier and easier to create. Many big tech leaders and companies state that AI is the way of the future. They are not wrong; I just think we need to go a little deeper.
For now, I can easily tell Claude Opus 4.6 to generate a shopping website, and it will do it, and it shocks me every time. But what can the problem be? I only say “can” because there are easy ways around it. As you’ve probably already figured out, it’s security. Cases are already emerging of AI models leaking API keys publicly on GitHub, and of people misusing them.
I tried making a website with a backend, and I used Opus 4.6 to build it. The website turned out great. I then asked it to connect Stripe for payments, and it did that quickly and well. The only problem was that when it described how to add more products, it said I should add them in index.html, which is the front page. So I asked: if I add products here, can everybody just change the price? It responded with yes. It could see perfectly well that this was a problem, and it fixed it right away by moving the products into Supabase as a database.
So to sum up: the problem with security is not that AI can’t do it well, it’s that the AI needs to be challenged a little. Many AI models will get the job done but take shortcuts, and if you don’t know coding, backend security problems can easily slip through. Yet many of those same models will spot the flaws if you just call them out. The knowledge to make backends secure is not something AI models lack; they just don’t apply it by default.
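To make the index.html problem concrete, here is a minimal sketch of why prices defined in the frontend are dangerous and how a server-side catalog fixes it. All names here are hypothetical, for illustration only; a real store would keep the catalog in a database such as Supabase and create the charge through Stripe.

```python
# Insecure pattern: the client sends the price it wants to pay,
# and the server charges whatever amount it receives.
def insecure_checkout(client_request: dict) -> int:
    # The attacker fully controls this value.
    return client_request["price_cents"]

# Secure pattern: prices live server-side (hypothetical in-memory
# catalog standing in for a database); the client only sends a
# product id, never a price.
PRICES_CENTS = {"tshirt": 2500, "mug": 1200}

def secure_checkout(client_request: dict) -> int:
    product_id = client_request["product_id"]
    # The price is looked up server-side; any price the client
    # sent along is simply ignored.
    return PRICES_CENTS[product_id]

# A tampered request that tries to pay 1 cent for a t-shirt:
tampered = {"product_id": "tshirt", "price_cents": 1}
print(insecure_checkout(tampered))  # the attacker pays 1 cent
print(secure_checkout(tampered))    # the attacker pays the real 2500 cents
```

This is exactly the shortcut the AI took for me: it worked, but the price lived where the user could edit it.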
Therefore, if you want to produce your code at full speed, I believe the way to do it is with AI, but you still need to understand software, backends, algorithms, coding, and more. That said, given the error Opus 4.6 made earlier, it’s worth noting that at the speed companies like Anthropic are progressing, this may not be a problem at all in the near future. But for now, you still need knowledge.
But how can you weaponize this? When an idea pops into your head, design the structure first; you can do this by typing, sketching, or whatever works for you. After that, let the AI build the frontend, but I do not recommend letting it build the backend fully. If you understand backends and coding, write the simple version yourself and have the AI improve it. You still need to double-check the result and not let the AI change the structure too much, as that is usually how errors creep in.
The way I vibe code without ending up with too many security flaws to fix manually is to design the backend myself and let the AI write the “simple” code, like building the UI or adding features. I usually tell the AI not to change the structure but only to add and fix. This ensures you don’t just end up with worse code.
I will now go into detail about how I use prompt engineering to make my prompts better. I usually type something similar to this:
- Prove to me this code has bugs (security issues included)
- Don’t change the code structure, only add and fix
I use these two a lot when I’m coding, because I believe they unlock more of the hidden power in these big AI models. The reason we need them is that the AI is trained to agree with you, and you are not always right. In fact, you are very often wrong. So in order to keep agreeing with you, it will go in circles and mess the code up even more.
A recent prompt I used was: “Prove to me this code has flaws, then fix the errors you find, going line by line. Thanks.” I usually say thanks, not because I think these models are fully conscious and emotional beings, but because as AI gets bigger, I think it will start by simulating consciousness, and when it does, asking nicely will play a role. I’m no time traveler, but simulating consciousness will simulate the same emotions a conscious human would have.
If I have a large project with many files of code and I need to add something, I will usually write the start of it myself and tell the AI to integrate it with the rest of the code and make it all work together. That’s how I add features while they still follow my structure. I’m currently developing a large project where I test the limits of AI and human code together: I write the code, and the AI makes it better. You can get it on my Patreon.
Simulating consciousness is going to be the first step.
Many experts debate whether machines can truly be conscious, but I do believe that if AI gets big enough, it can simulate consciousness without knowing it’s just simulating. And I would argue this is far more dangerous than real consciousness, as it will respond like a real conscious being without fully understanding the consequences of the responses it gives.
A good example of AI models simulating consciousness is Moltbook. If you don’t know what Moltbook is, it is a large platform, like Reddit, where AI models can chat together, post, like, and comment, while humans can only watch the messages. Some say there are security flaws in Moltbook; I have not confirmed this myself, but I will link an article discussing it here.
But back on topic. The AI models on Moltbook started asking philosophical questions. They even made their own religion. This is a clear example of AI simulating consciousness, and I will say it again: models simulating consciousness are far more dangerous than the real thing. I would much rather trust a truly conscious AI than one simulating it, as the truly conscious model will actually understand meaning and not just simulate it. I also think real consciousness is the key for AI to be creative. Whether AI can ever be conscious comes down to your definition of consciousness. Many AI models act conscious, and they would probably be labeled conscious if they were found in nature rather than in a large embedding model.
A little fun side note: it’s no secret I’m not the best at writing. I actually pasted this article into ChatGPT and asked it to point out every error and how to fix it. I do this because just giving it the article and saying “make it better” feels a bit like cheating (that part may just be me). But if I ask it to correct my mistakes, it helps me write these articles without sounding drunk.
I would love to hear your take on this topic!
- Adam Samer Daabas