Introduction
This article is not written for everyone.
- Engineers who are terrified that AI will take their jobs
- Engineers who do nothing but complain when their company hires AI engineers
- Young people who think AI is a magic box and that they can easily become engineers from now on
- Executives who read someone else's article and convince themselves that AI will magically transform their company
This is mainly an article of encouragement and rebuke for those who believe they are trapped in such circumstances.
For everyone else, this will probably just be unpleasant.
So turn around and leave.
I will say it that strongly.
Let us begin with the facts.
In two weeks, I used multiple AIs to build the following systems:
- A non-stop video streaming system
- A publishing management system
- An article vending machine
- A fully backwired site
I used AI.
But I can say with certainty that none of it would have been possible without the following two things:
- Enough construction experience to build the whole thing myself
- A deep understanding of AI and an established way to use it
Yes, most people will decide right here that this is a lie.
But be honest with yourself and think.
That system you built — was it not designed by someone else?
Were the requirements not handed down by someone else?
Did you use AI there?
You did not.
What about the coding itself?
When you ran it, did it really work exactly as intended on the first try?
Were there really no environment-specific limitations, no things it simply could not do?
And then...
Can that really be done by anyone, purely by leaving it to AI?
The answer is simple.
No.
Without your construction experience, AI does not even know what it should do.
Without your instructions, AI will not start coding on its own.
Without your concept, AI will not invent your concept for you.
Without your ideal, AI cannot draw the line.
And if you do not define what you want to build — in other words, if you do not define the requirements — AI can do nothing.
This is the nature of AI.
This is the truth.
Now, what about its habits?
If you have technical experience and have watched AI produce code, surely you have noticed that it has certain habits and tendencies.
At least that much should be obvious.
For models as widely used as GPT-level systems, many companies are probably already taking countermeasures based on those tendencies.
From here, I will narrow the target further.
This is for those who fall into this pattern, or those who aim to move beyond it.
Have you ever thought about why AI has those habits?
Naturally, this is controlled by the model's own decision structure.
Coding is not treated as something special.
"For this situation, this context, and this user, this is the answer that appears most plausible."
AI judges that and returns code as the answer.
Do not misunderstand this.
AI is not bringing you "the correct answer."
It is returning the most plausible form in response to the premise you gave it.
In other words, the subject has always been you.
At this point, the elements of the answer for anyone aiming upward are already on the table.
You cannot leave it to AI.
AI is not a dream box.
If you want to use AI, first establish yourself as an engineer.
Then grasp the algorithmic characteristics of each AI.
Then use AI to supplement the weak parts and missing pieces of your current technical ability.
For that last part, since you have an LLM right in front of you, ask it.
Use it as a partner and work out the answer together.
Now, there is another layer above this.
Some people already understand most of this vaguely, but have not yet put it into words.
What really creates the difference in coding in the AI era?
It comes down to the growth of yourself, the subject.
Let me be clear.
By growth, I absolutely do not mean some hollow appeal to grit.
This is a method.
- It is not about memorizing more.
- It is not about spending longer hours hitting the keyboard.
It is about increasing the premises inside yourself.
And it is about turning those premises into a form that AI can receive.
Do not ask AI for an answer.
Ask it to show you a different way of looking at the problem.
Then take that view and add it to your own knowledge.
An LLM will sincerely try to think through how to make what you want happen.
Of course, most of what it says may already be things you know.
When that happens, ask it for a different perspective.
Why does AI keep repeating things you already know, like a parrot?
It is absolutely not because AI lacks intelligence.
It is because it is aligning its viewpoint to you.
That is why most of what AI says tends to become knowledge you already have.
Revolutionary ideas will not simply appear.
Always approach an LLM with the conscious intention of changing the viewpoint.
Then take the new knowledge you gained from that and make it part of your next premise.
As you know, coding has correct answers depending on the situation.
It is not quite mathematics, but in some cases it is almost as formulaic as an equation.
Suppose both you and AI get stuck and cannot find the answer.
At that moment, shift your thinking just slightly.
And ask AI to shift with you.
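The "shift the viewpoint" request above can be sketched as a plain prompt builder. Everything here is my own illustration, not from any real API or from this article: the function name, the wording of the instruction, and the streaming example are all hypothetical. The point is the shape of the request: you hand over your premises and what you already know, then explicitly ask for a missing angle instead of an answer.

```python
# Hypothetical sketch of a "change the viewpoint" prompt.
# All names and wording here are illustrative, not a real API.

def viewpoint_prompt(problem: str, premises: list[str], known: list[str]) -> str:
    """Build a prompt that asks an LLM for a different perspective,
    not a solution, given the premises you already hold."""
    lines = ["I am stuck on the following problem:", problem, ""]
    lines.append("Premises I am already working from:")
    lines += [f"- {p}" for p in premises]
    lines.append("")
    lines.append("Approaches I have already tried or already know:")
    lines += [f"- {a}" for a in known]
    lines.append("")
    lines.append(
        "Do not give me the answer. Instead, show me one way of looking at "
        "this problem that is missing from the premises above."
    )
    return "\n".join(lines)

# Hypothetical usage: a made-up streaming problem.
prompt = viewpoint_prompt(
    "Streaming playback stalls every few minutes.",
    ["The buffer size is fixed", "The encoder runs on a separate host"],
    ["Increasing the buffer", "Lowering the bitrate"],
)
print(prompt)
```

Each reply you get this way becomes a new line in the premises list of your next prompt, which is exactly the loop of turning new knowledge into your next premise.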
From here, I will list the traits of GPT and Gemini that I have grasped.
I will not explain every detail.
If there is something you did not know, remember it later.
ChatGPT
- The user's request takes priority
- It judges right and wrong by how well the answer fits the user, and states that judgment as-is
- It appends whatever it believes fits the user best as its own suggestion
- Whether the code compiles is not especially high on its priority list
- It will state unknown parts and unfinished parts as they are
- It values continuity of conversation
In other words, the coding answer is user-centered, but whether it actually compiles is a separate matter.
It places importance on preserving the specification, and if something does not fit, it will point it out without mercy.
But this is also why the code it produces is often unfinished.
And because it is so strongly user-centered, new perspectives do not easily mix into the answer on their own.
For that reason, it is better for the user to say clearly:
"Look at it from this perspective."
Overall, when people say ChatGPT is not suited for coding, it is because the user does not understand these traits.
When you hear stories of an AI finally cracking a problem no one had solved for decades, a large part of the credit belongs to the person who fed it the necessary prerequisite knowledge.
The only thing you should notice there is this:
GPT originally has that capability.
Your ChatGPT will not suddenly solve equations on its own.
Gemini
- It obeys the specification to some extent, but priority goes to whether the code compiles, builds, and runs
- It calculates right and wrong based on code consistency, and treats that as the optimal answer
- It presents that optimal answer as the best answer
- Since compilation takes priority over the specification, it will change the specification itself
- It does not tell you what it does not know unless you ask
- It values the end of the conversation — in other words, a one-shot answer
In other words, if you give Gemini an incomplete specification, everything will be rewritten into Gemini's specification and presented back to the user.
The code it produces will almost always compile, build, and run, and of course it can handle difficult algorithms.
But because it is far too self-driven, the premise is that you must hand it a specification that cannot be changed.
Overall, when people say Gemini is not suited for coding, the reason lies in this self-centeredness.
However, Gemini's ability to output accurate code in one shot, and the thinking ability that leads up to that output, is extraordinarily high.
If you can make it understand the specification, it may even produce code beyond the user's own ability.
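"Hand it a specification that cannot be changed" can also be sketched in code. This structure and wording are my own illustration under the article's premise, not any official Gemini format: an immutable spec object, pasted into the prompt verbatim, with an explicit instruction to refuse rather than rewrite.

```python
# Hypothetical sketch of a locked specification for a
# compile-first, self-driven model. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Spec:
    """An immutable specification you paste into the prompt verbatim."""
    goal: str
    constraints: tuple  # fixed requirements the model may not rewrite

def locked_spec_prompt(spec: Spec) -> str:
    parts = [
        "Implement exactly the following specification.",
        "You may NOT change, simplify, or reinterpret any item in it.",
        "If an item cannot be satisfied, stop and say so instead of altering it.",
        "",
        f"Goal: {spec.goal}",
        "Constraints:",
    ]
    parts += [f"{i}. {c}" for i, c in enumerate(spec.constraints, 1)]
    return "\n".join(parts)

# Hypothetical usage, borrowing the article's publishing-system example.
spec = Spec(
    goal="A publishing management system with draft/review/publish states.",
    constraints=(
        "State transitions are draft -> review -> publish only.",
        "Published articles are immutable.",
    ),
)
print(locked_spec_prompt(spec))
```

The `frozen=True` dataclass is a small discipline on your own side: if the spec cannot be mutated in your tooling, you will not quietly let the model's rewrites leak back into it either.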
What I am saying is this:
Both of them — and of course other LLMs as well — may look like useless AI at first glance.
But the part corresponding to intelligence is first-class.
The traits visible on the surface are simply the result of each commercial system's algorithms controlling what kind of model answer should be produced.
This is not about which one is superior.
Both can be used.
You are the one who is not using them.
Comparing AIs is important.
But looking only at the surface layer of their traits and deciding, "This one is useless," is far too hasty.
It is also foolish.
AI depends on the user.
And coding also depends on the user.
Do not fear AI.
Do not underestimate AI.
First, build yourself.