marius-ciclistu

Posted on • Originally published at marius-ciclistu.Medium on

AI almost broke laravel-crud-wizard-free and Maravel-Framework


laravel-crud-wizard-free

Usually I do NOT trust AI, but for web search it automatically summarizes the results, and it is tempting to read just that summary instead of clicking and scrolling through the results. That is what happened while I was working on improving the casts from Eloquent (see the related article). After one week of research and coding in search of the best possible solution, I made the mistake of trusting it for a simple question.

The question: “How to check in Eloquent if a model has any listeners registered?” received the following answer:

$this->getEventDispatcher()->hasListeners($this::class . '.updating')

I asked for confirmation, and today's hype machine confidently replied that this is the way.

The next day I moved on to testing in the sandbox and found myself debugging for a couple of hours. I was sure that my changes could not have been the culprit because the logic was rock solid, only to discover that the AI had f…. with me by presenting confident hallucinations as sure answers.

What happened was that the code always returned false, which led to caching of the dirty attributes; after the updating event, getDirtyForUpdate did not return the changes made in the updating event.

It turned out the correct answer was:

$this->getEventDispatcher()->hasListeners('eloquent.updating: ' . $this::class)
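The difference between the two answers is the event-name format. Eloquent composes its dispatcher keys as "eloquent.{event}: {FQCN}", not "{FQCN}.{event}". A minimal sketch in plain PHP (no Laravel required; `eloquentEventName()` and the `User` class are hypothetical stand-ins for illustration):

```php
<?php

// Hypothetical demo class standing in for an Eloquent model.
class User {}

// Mirrors the key format Eloquent uses when firing model events:
// "eloquent.{event}: {fully-qualified class name}".
function eloquentEventName(string $event, string $class): string
{
    return "eloquent.{$event}: {$class}";
}

// The key you would pass to the dispatcher's hasListeners() check.
echo eloquentEventName('updating', User::class) . PHP_EOL;
// prints "eloquent.updating: User"
```

Note the space after the colon: it is part of the key, so a check against `'User.updating'` (the AI's version) matches nothing and silently returns false.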

When I questioned it, it denied being wrong. I tried different question formulations, and after 3 or 4 tabs I managed to reproduce the initial wrong and confident answer. After more than 5 replies it admitted that it was wrong.

This was a happy case where real testing caught the corner-case issue and it did not end up in production.

I decided to write this to showcase how AI can ruin your world, not by being bright, but by being confidently stupid. This is the biggest danger: you expect it to be more intelligent than a human being, but you don’t expect it to lie with confidence.

I asked it afterwards, in a different tab, why AI hallucinates, and I was NOT surprised to read that this IS HOW IT WAS TRAINED in order to get rewards. It will almost never say “I don’t know”; it will always choose to hallucinate. This is what I call “MARKETING”.
