DEV Community

oriel

Posted on • Originally published at blog.orielhaim.com

The World of AI

Who am I to tell you what to do?

Let’s start at the end. I’m not a world expert in AI and I don’t have a PhD. I’m not a researcher at OpenAI, and no one invited me to speak at a conference where everyone wears button-down shirts and quietly hates every minute of it. I’m a simple person who lives and breathes technology. It’s what I do when I wake up in the morning and it’s what I do instead of sleeping.

I work with AI every day, in lots of areas and in lots of ways. I’m not someone who just throws out prompts and hopes for the best. I build things and try to understand what’s going on under the hood. And when you do that long enough, you start to see what actually works, what just sounds good in a tweet, and what the gap is between the two.

What I’m going to write here is probably going to cause controversy. Everyone will come with their own opinions, their own experience and their own “but actually…” and that’s fine. This is my perspective based on the way I work, build and succeed. I don't pretend to give the one and only correct answer - I give my answer.

Who it's not for

If you came here because someone promised you there was a "magic word" that turns AI into a genius - you're in the wrong place. If you're looking for the "secret prompt that will improve your code tenfold" - really, really not here. I don't sell dreams and I don't sell shortcuts. If that's what you're looking for, it's time to turn around and go back to the "AI course that will improve your code a hundredfold" that they sold you. No hard feelings - for now. What's actually wrong with that course, you'll find out later.

Who it's for

For those who really want to understand how things work. Not at the level of "here's a cool tip" but at the level of "what's really happening here, what are the concepts, and how deep do things go?" It requires investment. It requires a real desire to learn. There are no magic bullets - there's work.

I'm going to explain everything from the basics in a way that's also intended for those who are just entering this world and don't know where to start. But I don't intend to stay there for long. I'm going to close the gaps quickly and start rushing forward to the really interesting stuff.

Because at the end of the day, this series is for two types of people - beginners who want to understand this world from scratch, and the heavy hitters who are already there but want to finally understand what they're actually doing.

Time to get to the point.

So what is AI anyway?

As soon as you hear "AI," your mind automatically jumps to ChatGPT. To a chat window where you can write something and get a smart answer (sort of). Which makes sense because that's what most people first encountered. But thinking that AI is ChatGPT is like thinking that music is a guitar. The guitar is one instrument in a whole world. And ChatGPT? Exactly the same.

The concept of "artificial intelligence" wasn't born when Sam Altman took the stage. It's been around since the 1950s. Even back then, researchers were sitting around debating the question "Can a machine think?" And since then, this field has evolved, stalled, evolved again, stalled again, and each time someone declared it dead and someone else proved it wasn't. AI is not a product. It is not an application. It is an entire field of computer science that deals with the ability of machines to do things that require - or at least appear to require - intelligence. Solve problems. Recognize patterns. Make decisions. Learn from experience without someone telling it exactly what to do in every situation.

So what actually happened in recent years that made the whole world go crazy?

What happened is that one specific category within this huge world just exploded. Its name is LLM - Large Language Model - and it is what most people mean when they say “AI” today. But it is one piece of a much bigger picture.

There are many other types of models in this world that are not related to chat at all. There are computer vision models that recognize faces, objects, movements - everything that allows an autonomous car not to run you over at an intersection. There are models that generate images, video and music from scratch. There are robotics, there are recommendation systems, there are all sorts of things that run in the background of your life without you even noticing.

LLMs are what everyone ran into first, so everyone thinks they are the whole story. But they are not. They are one chapter.

So what does LLM really do?

This is where it gets interesting. Because when you ask ChatGPT a question and get an answer that looks like someone smart wrote it, it is very easy to think that there is “someone” there who understands what they are saying. But what is happening under the hood is something else entirely.
A language model does one thing at its core: it tries to predict the next word. That is the whole magic.

You write “the sky today is” and the model calculates what is most likely to come after it. “Blue”? “Cloudy”? “Beautiful”? It chooses. Then it looks at what is there now - “the sky today is blue” - and predicts the next word. And the next. And the next. Word after word after word until a complete sentence, a complete paragraph, a complete answer is built.
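That word-by-word loop can be sketched in a few lines. The bigram table below is a toy I made up for illustration - real models learn probabilities over vocabularies of tens of thousands of tokens from enormous corpora - but the mechanism is the same: look at the context, pick a likely next word, repeat.

```python
# A toy stand-in for "predict the next word". The probabilities here are
# invented for illustration; a real model learns them from training data.
BIGRAMS = {
    "the":   {"sky": 0.6, "sun": 0.4},
    "sky":   {"today": 1.0},
    "today": {"is": 1.0},
    "is":    {"blue": 0.5, "cloudy": 0.3, "beautiful": 0.2},
}

def next_word(word):
    """Pick the most likely continuation of the last word (greedy decoding)."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None  # nothing learned for this context: stop generating
    return max(candidates, key=candidates.get)

def generate(prompt, max_words=10):
    """Repeat the one trick: look at the context, predict, append, repeat."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the sky"))  # → "the sky today is blue"
```

Real models condition on the whole context rather than just the last word, and they sample from the distribution instead of always taking the top choice - but the loop itself really is this simple.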

You’re probably saying, “If it only guesses words, how does it write code? How does it explain quantum physics? How does it write songs?”
The answer is that when you take this simple idea of “predict the next word” and train it on vast amounts of text - the entire internet, books, code, scientific papers, forum discussions, everything - something happens that no one really planned for. The model starts to “understand” patterns. Not understand like a human would, but develop the ability to recognize structures, connections, logic, style.

It has “seen” so much code that it “knows” what the next logical line is. It has “read” so many explanations of physics that it “can” produce a new explanation that sounds like someone who understands wrote it.
Does it really understand? This is a philosophical question that people with PhDs have been arguing about for years. What interests us on a practical level is that it works. And sometimes it works to a degree that is hard to believe. And sometimes it fails to a degree that is hard to believe. And knowing why and when is exactly what separates those who really know how to use this tool from those who just hope for the best.

Why everyone is using it wrong

The biggest mistake people make with AI is not technical. It’s not about the prompt and it’s not about which model you chose. It’s about the approach - how you frame this thing in your head. And most people frame it completely wrong.
They think it’s magic.

They open ChatGPT, write “build me a professional website” and expect something amazing to come out. And then, when something mediocre comes out, they say “AI doesn’t work” or “AI isn’t there yet.” But the problem isn’t the AI. The problem is that you told it “professional,” and that’s a word that doesn’t mean anything. Professional how? Minimalist with lots of white space? Dark with big typography? Gradients? Round buttons?
“Professional” is not a guideline - it’s an empty phrase you throw around because you don’t know what you really want. And if you don’t know what you want, there’s no reason the model should know for you.

The model doesn’t think. Not the way you think of “thinking,” at least. It doesn’t sit there and consider options. It doesn’t judge. It doesn’t invent. What you gave it is what it has. If you gave it “make something beautiful” it will spit out the average of everything it’s seen in its life that’s called “beautiful.” And you know what the average of everything is? Mediocre. Always mediocre.

On the other hand, if you tell it “I want a landing page with a dark background, large sans-serif typography, a fade-in animation on the title, and a single central CTA in a neon color” - suddenly it knows exactly what to do. Not because it “understood” you better. But because you gave it enough information for its prediction to be accurate. And if these words sound like a foreign language to you - great. That’s exactly what we’re here for.
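To make the difference concrete, here is a tiny sketch of the same request expressed both ways. The helper function and the constraint list are mine, invented for illustration, not any official API - the point is only how much more information the second prompt carries:

```python
def build_prompt(task, constraints):
    """Turn a fuzzy wish into an explicit spec: one line per design decision."""
    return "\n".join([task] + [f"- {c}" for c in constraints])

# The vague version leaves every decision to the model's "average":
vague = "Build me a professional landing page."

# The specific version pins the decisions down yourself:
specific = build_prompt(
    "Build me a landing page with:",
    [
        "dark background",
        "large sans-serif typography",
        "fade-in animation on the title",
        "a single central CTA in a neon color",
    ],
)
print(specific)
```

Every bullet you add removes a branch the model would otherwise have to guess at, which is exactly why its prediction lands closer to what you had in mind.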

And that brings us to the most important point in this entire post and one of the most central points in this entire series:
You are the brain. It is the tool.

Not the other way around. Never the other way around.
The moment you approach AI with the “do it for me” attitude - you’ve lost. Because it doesn’t know what’s good for you. It doesn’t know your project, your users, your constraints. It knows how to produce text that sounds convincing. And that’s exactly what’s dangerous because an answer that sounds good is not necessarily an answer that is good.

The right approach is completely the opposite. Instead of telling the model “Build me this feature in the best possible way” - you use it first to learn. You tell it “What are the accepted ways to build something like this? What are the advantages and disadvantages of each approach?” You read. You understand. You decide. Only after you know what you want and why do you go back and tell it “Build it like this, with this architecture, with these libraries, in this style.” And suddenly the output is completely different.

The difference is not in the tool. The difference is in you. You are not supposed to know everything in advance and that is okay. Everything is confusing at first. That is exactly why you have an AI model that can explain. Take advantage of it.

Those who use AI as a replacement for thinking get mediocre results and think that the tool is bad. Those who use AI as an accelerator for thinking get results that people don’t understand how they got to. Exactly the same tool. Exactly the same model. The only difference is what went on in the head of the person sitting on the other side.

The illusion of the perfect answer

AI models are the most charming people you will ever meet.
Write bad code? “Looks great! Here are some small suggestions:” Propose an idea that doesn’t stand up to any test of logic? “That’s a really interesting approach!”

It’s not a bug. Nobody forgot to fix it. That’s how the models were trained. They were trained to be helpful, to be pleasant, to be agreeable. Because whoever built them knew that if the model told you “your code is bad, go relearn the basics,” you would close the chat and walk away. So they taught it to smile. Always smile. Especially when there’s no reason to.
And you know what the problem is with someone who always agrees with you? You stop checking. You get an answer and it sounds good: the structure is neat, there’s confidence in it, there’s professional language in it, and your brain clicks and says “oh, so that’s true,” because that’s how we work. Someone who speaks confidently sounds credible to us. Someone who explains in an organized way seems smart to us. And a language model? It always speaks confidently. Always organized. Even when it makes things up out of thin air.

And that’s exactly the catch. Because an answer that seems perfect is not necessarily the right answer. A model can spit out an entire paragraph about a library that doesn't exist, with a convincing name, with a code sample that looks legitimate, with an explanation of why it's better than the alternatives - and it's all an invention. Without any hint that anything here is inaccurate. Without a little footnote that says "by the way, I invented this." It doesn't know it's fabricating. It doesn't know it isn't. It just generates the most plausible sequence of words, and it sounds great until the moment you discover it's nonsense.

So what do you do with it?

First of all - and if you remember only one thing from this post, remember this - never, ever treat a model's answer as absolute truth. Every answer a model gives you is an option. A suggestion. A starting point. Not a verdict. Not a source of authority. If someone on the street told you "listen, I'm pretty sure it should be built like this," you would go check it out. So why, when an AI model says the exact same thing, do you suddenly treat it as if it came down from Mount Sinai?
Check. Search. Ask again, in a different way. Open another model and cross-check. This is not paranoia, this is sanity.

And if you want to see it for yourself? Here's a little exercise that will open your eyes:
Go to any model - Claude, ChatGPT, it doesn't matter - and ask it to write something for you. A function, a marketing text, whatever you want. Now take what it wrote, go to a completely different model and tell it "look at this and tell me what needs to be improved." It will give you a list. Excellent. Now go back to the first model, tell it to correct according to the comments, take the result and go to a third model and tell it "What needs to be improved here?"
It will give you a new list.
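This never-ending loop is easy to demonstrate even without API keys. In the sketch below, `critique` is a hypothetical stand-in for "send the text to model X and ask what to improve" - a real implementation would make an API call, but the shape of the loop is the point:

```python
import itertools

def critique(model_name, text, round_no):
    """Stand-in for asking a model "what needs to be improved here?".
    A model asked to find problems always produces *something*, so we
    simulate that by returning a nonempty list every time."""
    return [f"{model_name}, round {round_no}: tighten this part"]

def review_loop(models, text, rounds=5):
    """Bounce a draft between models; count rounds with no feedback at all."""
    empty_rounds = 0
    for round_no, model in zip(range(1, rounds + 1), itertools.cycle(models)):
        feedback = critique(model, text, round_no)
        if not feedback:
            empty_rounds += 1
        text += "\n(revised)"  # pretend we applied the feedback
    return empty_rounds

# No round ever comes back with "it's perfect, I have nothing to add":
print(review_loop(["model-a", "model-b", "model-c"], "draft"))  # → 0
```

The simulation is rigged, of course - but so is the real thing: a question shaped as "find problems" gets answered with problems, every single time.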

You can continue like this indefinitely. Literally forever. Because this loop will never end. No model will tell you "you know what? It's perfect, I have nothing to add." There will always be another comment. There will always be another “improvement.” There will always be another “but you could also do it this way.”

And why? Because if you tell it “find a problem” - it will find a problem. That’s what it does. It produces the answer that best fits the question you gave it. If your question is “what’s wrong here,” its answer will always be a list of bad things. Not because there really are bad things, but because you asked it to find them, so it found them. Or invented them. It doesn’t really matter to it.

That doesn’t mean that feedback from a model is worthless - it isn’t. Sometimes it’s even excellent. It means that you have to be the one who decides which of this feedback is really relevant and which is simply noise created because you asked for noise. And again, we go back to the same principle: you are the brain. It is the tool. If you don’t know how to distinguish between a comment that’s really worth something and a comment the model emitted because it had to emit something - you’re not using the tool. The tool is using you.

What now?

This post was just a warm-up. We laid the groundwork. What AI really does, why most people approach it the other way around, and why the answer that seems perfect is exactly the one that should turn on a red light for you.
In the next post, we start to get inside - the things that, once understood, completely change the way you work, think, and build with AI. I’m not going to tell you exactly what, because I’m still not sure myself what will make it in by then. But I will tell you one thing: if this post made you feel like you’re starting to understand, the next one is going to make you feel like you didn’t understand anything.

In the meantime, if you want to know as soon as a new post comes out - I have a Telegram channel where I update you on everything new that comes up. Besides, you are welcome to subscribe to the blog to receive updates directly to your email and if you have questions, thoughts, or just something you want to say - the comments below are open. I read everything.
