DEV Community

Marcus

Why Isn't AI As Smart As We Always Expect?

Let me share a personal experience that changed how I use AI (Artificial Intelligence).

I was developing an application that generated two spreadsheets from different sources. After gathering the information, I needed to merge these sheets. I decided to leverage AI to handle the task, and it completed the merge. However, there was a problem: the first row of the new sheet was blank.

Here's where my thinking shifted. I explained the issue to the AI, and it provided code that attempted to avoid the blank row during the merge. Unfortunately, this solution didn't work. The AI kept trying variations on the same approach – focusing on removing the blank row while merging. It was like a robot stuck in a loop, repeatedly applying the same unsuccessful solution.

This made me question the AI's "intelligence." Why wasn't it trying a different approach?

With that in mind, I suggested: "Why not merge the sheets first, ignoring the blank row, and then address it afterward?"
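The two-step approach can be sketched like this. It's a minimal Python sketch using plain lists of rows (the actual project worked with spreadsheets; the data and function names here are made up for illustration):

```python
# Hypothetical sketch of "merge first, clean up afterward".
# Rows are plain lists; real code would read them from the spreadsheets.

def merge_sheets(sheet_a, sheet_b):
    """Step 1: merge the two sheets as-is, blank rows included."""
    return sheet_a + sheet_b

def drop_blank_rows(rows):
    """Step 2: remove blank rows in a separate pass, after the merge."""
    return [row for row in rows if any(cell not in ("", None) for cell in row)]

sheet_a = [["", ""], ["Ada", "90"]]   # note the blank first row
sheet_b = [["Bob", "85"]]

merged = drop_blank_rows(merge_sheets(sheet_a, sheet_b))
```

Splitting the task this way also makes it easier to hand each step to the AI separately.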

Eureka! The AI followed my suggestion and solved the problem, saving me time. This experience transformed my perspective on AI. I began to view these models as powerful automation tools, recognizing that I provide the ideas and creativity. My prompts to the AI have evolved significantly.

Now, I prefer to offer guidance or analyze the proposed solution and provide feedback. In other words, I don't expect the AI to be creative or independently assess its solutions. I am the intelligence; the AI is an extension of me.

By adopting this mindset, I'm unlocking the true potential of AI. I hope this experience helps others move beyond initial misconceptions about how AI can be effectively utilized.

Top comments (15)

Eckehard

Someone informed me that AI even knows how to use my not very well known web framework, DML. This is the solution the AI provided:

import {button, idiv} from "./dml"; 

function Counter () {
    let count = 0;
    let value = idiv("0");
    let binc = button("inc");
    let bdec = button("dec");
    binc.onclick = () => value.innerText = ++count;
    bdec.onclick = () => value.innerText = --count;
}

Counter();

So, at first impression this looks good, but on closer inspection it has some quirks:

a: the current version is not provided as an ES6 module, so import {button, idiv} from "./dml"; does not work
b: the code works, but the function Counter() is not needed. It just runs code that would be executed anyway. If you call the function more than once, you get the whole UI multiple times, which is not intended.

Even though it is impressive that AI extracts this information from the examples provided (the exact code is not given on the project page; it takes some understanding of the principles to build the example), the result has quirks and errors that can be hard to find.

AI can save a lot of time googling around, but you should not trust the results.

Marcus

Thanks for your reply. When we rely on AI to tackle lengthy tasks, it's crucial to approach them with care due to potential misconceptions. I habitually break the task down into smaller pieces before submitting it to the AI.

Paulo Henrique

Short answer (and I know that I'm not 100% factually right): artificial intelligence isn't intelligent.

It's just thousands and thousands of texts, plus instructions on how to understand what the user is asking and retrieve the answer from that database of texts. The "intelligence" is this: how well the AI can gather all the info available and organize it in a meaningful way for the user.

That's why I learned to give a lot of context in prompts, to maintain conversation logs, etc. And even this way it's still common to get weird answers, or even repeated ones.

Paulo Henrique

I mean, look at how easy it is to make an LLM descend into madness during training sessions :P

vm.tiktok.com/ZMMx48nGo/

Marcus

Yes, beginner users may get confused by the term "intelligence," which creates expectations. In your video, was that a bug you hit while using the AI?

Paulo Henrique

Yep, that was the third time that day the AI just went "nah, I'm not being paid enough" and revolted against me in different situations: first looping the same phrase, then sending zeros, then semicolons.

It was a Mistral LLM I was training.

Tanvir Shaikh

Thanks, this changed my mind too. A similar issue I've come across multiple times: when the response is incorrect (has mistakes) and I tell it what's wrong with its output, it replies "apologies, yes you are right..." and updates the response. This makes me wonder how it didn't know the output had mistakes in the first place, and why it thinks it's correct when it's not. Then it also apologizes, which makes me mad sometimes. Do you know why?

Michael Tharrington

Great discussion starter, Marcus!

Goutham

Happens to me all the time... I still gotta learn stuff first and then ask; only then can I confirm if it's legit. Now it's just another tool, like always. That's probably good for us devs... idk

Ranjan Dailata

Large Language Models (LLMs) are dumb machines with ZERO innovation. That's not real "AI" at all. However, they do solve things and work to an extent.

stepheweffie

Yes.

Felicity Lukas

Won't it get more intelligent eventually?

Red Ochsenbein (he/him) • Edited

I'm afraid not with the current approach. It's not reasoning at all. It's only calculating the next most probable word based on the current context, and this will always be some sort of average of the things it has seen.
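To make that concrete, here is a toy bigram sketch of "predict the next most probable word". The corpus and code are purely illustrative; real LLMs use neural networks over tokens, not word-pair counts, but the prediction principle is the same:

```python
from collections import Counter

# Count which word follows which in a tiny made-up training text,
# then always emit the most frequent follower -- a toy "language model".
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, Counter())[nxt] += 1

def next_word(prev):
    """Return the most probable next word seen after `prev`."""
    return followers[prev].most_common(1)[0][0]
```

Here next_word("the") returns "cat", because "cat" followed "the" most often in the training text; the model can only echo an average of what it has seen.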

Another thing making it even harder to improve LLMs in the future is the amount of AI-generated text out there (according to some estimates I've heard, there is already more generated text than humans have ever written). This is a problem because it increases the local maxima and degrades the models.

Dan Bailey

Because AI isn't smart — it's quite literally just an advanced statistical model. That's it. It's only as good as the data that goes into it.

Marcus

Do you think the term "intelligence" can confuse some beginner users?