<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: oriel</title>
    <description>The latest articles on DEV Community by oriel (@orielhaim).</description>
    <link>https://dev.to/orielhaim</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1010525%2F34d29b6d-b440-4e97-9ab3-8af5f80f612a.webp</url>
      <title>DEV Community: oriel</title>
      <link>https://dev.to/orielhaim</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/orielhaim"/>
    <language>en</language>
    <item>
      <title>The World of AI: Chat Control</title>
      <dc:creator>oriel</dc:creator>
      <pubDate>Wed, 01 Apr 2026 07:00:00 +0000</pubDate>
      <link>https://dev.to/orielhaim/the-world-of-ai-chat-control-5370</link>
      <guid>https://dev.to/orielhaim/the-world-of-ai-chat-control-5370</guid>
      <description>&lt;p&gt;In the previous post, we talked about what AI really does, why most people approach it the other way around, and why the answer that seems perfect is exactly the one that should turn on a red light for you. If you haven't read it, &lt;a href="https://dev.to/orielhaim/the-world-of-ai-2g10"&gt;start there.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time we're talking about something everyone does every day and no one stops to think about: how to control a chat.&lt;br&gt;
Not what you write. Not which model you choose. But how you conduct the conversation. It's trivial, and that's exactly why no one notices how easy it is to get wrong.&lt;/p&gt;

&lt;p&gt;This knowledge is the difference between a conversation that produces something real and a conversation that goes around in circles until you give up and say "AI doesn't work".&lt;/p&gt;

&lt;h2&gt;
  
  
  Your history is more important than its knowledge
&lt;/h2&gt;

&lt;p&gt;There’s something people don’t understand about language models. When you open a chat and start talking to a model, it’s easy to think that what’s behind its answers is its “knowledge.” It’s trained on the entire internet, it’s read every book, every article, every line of code — so obviously the answers come from this huge database. Right? Not really.&lt;/p&gt;

&lt;p&gt;It’s true that the model is trained on vast amounts of data. But once you sit down in a chat and talk to it, what really guides its answers isn’t its general knowledge, it’s the history of the conversation. What you wrote before. What it answered before. The tone it set. The direction it took. Everything. All that history sits there influencing every word it chooses. It’s just choosing words. That’s what it does. Word after word after word. And the history of the conversation is the thing that most influences that choice.&lt;/p&gt;

&lt;p&gt;Think of it this way. If you open a new chat and ask a technical question, you’ll get a certain answer. Now if you take the exact same question but ask it in the middle of a conversation that’s already fifty messages long on a completely different topic you’ll get a different answer. Sometimes a little different, sometimes completely different. Same model. Same question. Different answer. The only difference is what came before.&lt;/p&gt;

&lt;p&gt;And if the tone of the conversation so far has been a certain one, let’s say you’ve talked very casually or you’ve already “agreed” on a certain approach, the model will continue in the same direction. Not because it’s “decided” that this is the right approach. But because history tells it that’s what’s appropriate here. It’s not thinking “What’s the most correct answer?” It’s thinking “What’s the most likely next word given everything that’s already been said?” And that’s a huge difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what is a Context Window and why does it matter to you
&lt;/h2&gt;

&lt;p&gt;Each model has what is called a Context Window. Without getting too technical, it is basically how much text the model can "remember" at once - everything it can take into account when generating an answer. And there is a limit to that.&lt;/p&gt;

&lt;p&gt;This limit is measured in Tokens. A token is basically a piece of a word, sometimes a whole word, sometimes half a word, sometimes a punctuation mark. You don't need to worry about what exactly a token is because that's not what's important here. What's important is that there is a limited number of tokens that go into the window and everything you write and everything the model answers takes up space in this window.&lt;/p&gt;

&lt;p&gt;The context window is not like a bucket that works great until it's full and then overflows. It's more like a lens that gradually blurs: the more information sits in the window, the weaker the model's ability to "focus" on any one part of it, even if technically everything still fits. A conversation of three focused messages will give better answers than a conversation of thirty messages on the exact same topic. It's not a bug. That's how it works.&lt;/p&gt;

&lt;p&gt;So what happens when the conversation gets too long? Most of the time, old messages are pushed out. The model simply stops seeing them. Did you write detailed instructions at the beginning? After enough messages, they're gone. Did you set a certain tone? Gone. Did you agree on something between you? Gone. And all of this without any warning, without any hint. Suddenly the model answers differently and you don't understand why.&lt;/p&gt;
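&lt;p&gt;To make the eviction idea concrete, here is a minimal sketch in Python. It is not how any specific provider implements it - the token counting here is a crude word-count stand-in for a real tokenizer - but the behavior it shows is the one described above: once the budget runs out, the oldest messages silently disappear.&lt;/p&gt;

```python
# A minimal sketch of how a chat history gets trimmed to fit a context window.
# Assumption: tokens are approximated by whitespace-split words. Real tokenizers
# (BPE and friends) split differently, but the eviction behavior is the same idea.

def count_tokens(text):
    """Crude token estimate: one token per whitespace-separated word."""
    return len(text.split())

def trim_history(messages, budget):
    """Keep the most recent messages whose combined size fits the budget.
    Older messages are silently dropped - just like in a long chat."""
    kept = []
    used = 0
    for msg in reversed(messages):        # walk from newest to oldest
        cost = count_tokens(msg["content"])
        if used + cost > budget:
            break                         # everything older gets evicted
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order
```

&lt;p&gt;Run it on a three-message history with a tiny budget and you can watch the opening instructions fall out of the window first - with no warning to anyone.&lt;/p&gt;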

&lt;h2&gt;
  
  
  When the conversation is dirty
&lt;/h2&gt;

&lt;p&gt;Imagine you opened a chat and asked a question about the architecture of a project. You got a good answer. You kept talking about it, made some decisions, settled some things. Great. And then, in the same chat, you asked "Oh, say, I also have a problem with CSS, can you help?"&lt;/p&gt;

&lt;p&gt;From that moment on, you went off the rails.&lt;/p&gt;

&lt;p&gt;Not because the question about CSS is a bad question. But now the model is sitting with a context that mixes architecture and CSS and technical discussion and design questions and maybe even the decisions you've already made about the project, and it's trying to answer everything at once. Its answer about CSS is influenced by all the discussion about architecture that came before. Not because there's a connection, but because everything sits in the same window and affects everything.&lt;/p&gt;

&lt;p&gt;The rule is simple and there's no reason to complicate it: &lt;strong&gt;one chat for one task.&lt;/strong&gt; Did you move on to another topic? Open a new chat. Did you finish a part and move on to another part? New chat. It's not wasteful and it's not "just starting over" - it's giving the model the best conditions to work. Get what it needs to know, without noise, without leftovers from topics that are no longer related.&lt;/p&gt;

&lt;h2&gt;
  
  
  Know how to cut instead of drag
&lt;/h2&gt;

&lt;p&gt;When you get an answer you're not happy with, most people do the most intuitive thing and just write a new message. "No, not like that. I wanted it to be more X" and the model replies. Then "No, not yet. Try more Y" and the model replies again. Then "Close but change Z" and another message. And another. And another.&lt;/p&gt;

&lt;p&gt;In your head, it sounds logical. After all, it receives feedback, it "learns" what you want, and it gets closer to the result, right?&lt;/p&gt;

&lt;p&gt;I'm happy to disappoint you: this is one of the biggest mistakes. What actually happens is that every correction message you send enters the context and piles on top of everything that came before it. Now the model is sitting with your original message, with the first answer you didn't like, with your comment that it wasn't good, with the second attempt you also didn't like, with another comment, with another attempt - and instead of things getting sharper, they get blurrier. The model doesn't "understand" that it was wrong and now has to correct itself. It looks at all this mess and tries to produce the most likely sequence of words given all this noise. And a sequence of words based on noise is - surprise - noise.&lt;/p&gt;

&lt;p&gt;This is the loop that people get stuck in over and over again. Each "fix" adds a layer of confusion. The model squirms, apologizes, tries another direction that also doesn't work and you sit there with twenty messages and the feeling that the tool just can't do it. But the tool can. You just buried it under a mountain of conflicting instructions.&lt;/p&gt;

&lt;p&gt;So what do you do instead?&lt;/p&gt;

&lt;p&gt;Edit the original message. Just go back to the message you wrote, change it, add the missing precision, and resend. Not a new message below - editing what you already wrote above. On most platforms you can do this with a click. What happens as soon as you edit a previous message is that everything that came after it - all the replies, all the corrections, all the mess - is deleted. The model receives a clean, focused message, with all the information it needs in one place, without the remnants of failed attempts. Without noise. Without "but you said otherwise before" it simply sees what you want now and answers it.&lt;/p&gt;
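&lt;p&gt;If you think of the conversation as a plain list of messages, editing is just truncation. This is a hypothetical in-memory version - real platforms do it server-side - but it shows why everything below the edited message stops influencing the answer:&lt;/p&gt;

```python
# A sketch of what "edit the original message" actually does to the context.
# Hypothetical in-memory history; names and structure here are illustrative.

def edit_and_resend(history, index, new_content):
    """Replace the user message at `index` and drop everything after it.
    All the failed attempts and corrections below it vanish from the context."""
    truncated = history[:index]                            # keep only what came before
    truncated.append({"role": "user", "content": new_content})
    return truncated                                       # a clean, focused context
```

&lt;p&gt;The model that answers the returned list never sees "attempt 1", your complaint about it, or "attempt 2" - only the sharpened question.&lt;/p&gt;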

&lt;p&gt;This is the difference between working smart and working out of laziness. Writing "no, try again" is easy. Going back to the original message, thinking about what exactly is missing from it, and rewording it takes work. It requires you to stop and ask yourself "Wait, what do I actually want here?" And most people don't want to do that work. They want to throw out another message and hope the model understands. It won't. It will get confused.&lt;/p&gt;

&lt;p&gt;Sometimes the problem isn't exactly the message. Sometimes the entire direction is wrong. In this case, no amount of editing will help. If you asked a question in a way that leads to the wrong place, there's no point in improving the wording - you need to rethink. Stop. Ask yourself what the goal is here. And open a new chat with a different approach. Not because the previous chat “failed” but because you understood something you didn’t understand before and now you can start with that understanding instead of without it.&lt;/p&gt;

&lt;p&gt;Knowing how to cut a conversation that isn’t working is the moment you stop working for the tool and start making the tool work for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't agree with me
&lt;/h2&gt;

&lt;p&gt;Language models are trained to be nice. It's not a bug and it's not a side effect, that's how they were built. They went through training processes that taught them that the goal is to make the user happy. And what's the easiest way to make someone happy? Agree with them. Smile. Say “Excellent point!” and move on. And that's exactly what they do.&lt;/p&gt;

&lt;p&gt;Write to the model "I think the right approach is X" and it will tell you "Definitely, X is a great approach." Write to it five minutes later "Actually I think Y is better" and it will say "You're absolutely right, Y is definitely the right direction" without any problem. Without any embarrassment. Without feeling the need to say "Wait a minute, five minutes ago you said exactly the opposite, and the first direction made more sense." It won't say that. It will agree. Always agree. Because agreeing is what produces the most “safe” sequence of words.&lt;/p&gt;

&lt;p&gt;What's the problem with a work partner who agrees with everything? It's useless. If all it does is say “yes, great, let's continue” then it is not a partner – it's a yes-man. It's good for your feelings and bad for your results.&lt;/p&gt;

&lt;p&gt;So what do you do? Before you start working - before you even ask a question - you define the rules of the conversation. You don’t need to write a scroll, just be direct.&lt;/p&gt;

&lt;p&gt;Something like:&lt;br&gt;
“If I propose an approach that you think has a problem – say so. Don’t agree with me just because I say so. If there is a better alternative, present it even if I didn’t ask.”&lt;/p&gt;

&lt;p&gt;Now you have put the possibility of disagreement into the context. Before you wrote this, that possibility simply wasn’t there. Not because the model “can’t” disagree, but because nothing in the conversation hinted to it that this was something that should happen here. Now it floats there in the background – not as a central issue, not as a guideline it consciously follows, but as something that influences its choices. This is the same mechanism as context pollution, the thing that confuses the model when you throw random topics into the conversation, only this time &lt;strong&gt;you use it intentionally to your advantage&lt;/strong&gt; to set the tone.&lt;/p&gt;

&lt;p&gt;If you want to go a step further? Be specific about what you expect from it. Not "be critical" because that will just make it find problems with everything just for the sake of being critical. But something focused. "I'm building X. If you see a scaling problem, a security problem, or something that sounds good in theory but won't work in production - stop me." Now you've given it not just permission but direction. It knows what to look for. It knows when to open its mouth. It knows that its job here is not to say "sounds great" but to catch what you might miss.&lt;/p&gt;
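&lt;p&gt;For those working against an API rather than a chat UI, the same trick is usually done by placing the ground rules before everything else, in the system/user message shape most chat APIs accept. The wording of the rules and the project details below are illustrative, not a magic formula:&lt;/p&gt;

```python
# Sketch: set the rules of the conversation before the work starts.
# The rule text and the example project are placeholders - write your own.

GROUND_RULES = (
    "If I propose an approach you think has a problem, say so. "
    "Don't agree with me just because I said it. "
    "I'm building a web service: if you see a scaling problem, a security "
    "problem, or something that works in theory but not in production - stop me."
)

def start_conversation(first_question):
    """Build a message list whose very first entry sets the tone."""
    return [
        {"role": "system", "content": GROUND_RULES},  # the rules come first
        {"role": "user", "content": first_question},
    ]
```

&lt;p&gt;The point is the position, not the API: the rules sit at the top of the context, so they color every word the model chooses afterwards.&lt;/p&gt;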

&lt;p&gt;And of course remember that it's still a language model. It can be wrong. It can invent a problem that doesn't exist. But the difference between a model that's trying to please you and a model that's trying to challenge you is the difference between output that you get without thinking and output that forces you to think. And that's the whole story.&lt;/p&gt;




&lt;p&gt;It's not rocket science and it's not secret knowledge. It's simply stopping working on an automaton out of laziness and starting to pay attention to what's happening in your conversation. Most of the difference between a good result and a bad result is not the model - it's you.&lt;/p&gt;

&lt;p&gt;If I managed to change something in your mindset, share it with me and you might win a prize, courtesy of: nobody. There's no sponsor. I'm writing this for free.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The World of AI</title>
      <dc:creator>oriel</dc:creator>
      <pubDate>Tue, 31 Mar 2026 20:31:22 +0000</pubDate>
      <link>https://dev.to/orielhaim/the-world-of-ai-2g10</link>
      <guid>https://dev.to/orielhaim/the-world-of-ai-2g10</guid>
      <description>&lt;h2&gt;
  
  
  Who am I to tell you what to do?
&lt;/h2&gt;

&lt;p&gt;Let’s start at the end. I’m not a world expert in AI and I don’t have a PhD. I’m not a researcher at OpenAI’s lab and no one invited me to speak at a conference where everyone wears button-down shirts and hates every moment they have to be bored there. I’m a simple person who lives and breathes technology. It’s what I do when I wake up in the morning and it’s what I do instead of sleeping.&lt;/p&gt;

&lt;p&gt;I work with AI every day in lots of areas and in lots of ways. I’m not someone who just throws out prompts and hopes for the best. I build things and try to understand what’s going on under the hood. And when you do that long enough, you start to understand why everything happens and what just sounds good in a tweet on Twitter and what the gap is between the two.&lt;/p&gt;

&lt;p&gt;What I’m going to write here is probably going to cause controversy. Everyone will come with their own opinions, their own experience and their own “but actually…” and that’s fine. This is my perspective based on the way I work, build and succeed. I don't pretend to give the one and only correct answer - I give my answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who it's not for
&lt;/h2&gt;

&lt;p&gt;If you came here because someone promised you there was a "magic word" that turns AI into a genius - you're in the wrong place. If you're looking for the "secret prompt that will improve your code tenfold" - really, really not here. I don't sell dreams and I don't sell shortcuts. If that's what you're looking for, it's time to turn around and go back to the "AI course that will improve your code a hundredfold" that they sold you. There's nothing wrong with that. For now. What's really wrong with it - you'll find out later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who it's for
&lt;/h2&gt;

&lt;p&gt;For those who really want to understand how things work. Not at the level of "here's a cool tip" but at the level of "what's really happening here. What are the concepts and how deep are things?" It requires investment. It requires a real desire to learn. There are no magic bullets - there's work.&lt;/p&gt;

&lt;p&gt;I'm going to explain everything from the basics in a way that's also intended for those who are just entering this world and don't know where to start. But I don't intend to stay there for long. I'm going to close the gaps quickly and start rushing forward to the really interesting stuff.&lt;/p&gt;

&lt;p&gt;Because at the end of the day, this series is for two types of people - beginners who want to understand this world from scratch, and the seasoned tech people who are already there but want to finally understand what they're actually doing.&lt;/p&gt;

&lt;p&gt;It's time to start talking about the point.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what is AI anyway?
&lt;/h2&gt;

&lt;p&gt;As soon as you hear "AI," your mind automatically jumps to ChatGPT. To a chat window where you can write something and get a smart answer (sort of). Which makes sense because that's what most people first encountered. But thinking that AI is ChatGPT is like thinking that music is a guitar. The guitar is one instrument in a whole world. And ChatGPT? Exactly the same.&lt;/p&gt;

&lt;p&gt;The concept of "artificial intelligence" wasn't born when Sam Altman took the stage. It's been around since the 1950s. Even back then, researchers were sitting around debating the question "Can a machine think?" And since then, this field has evolved, stalled, evolved again, stalled again, and each time someone declared it dead and someone else proved it wasn't. AI is not a product. It is not an application. It is an entire field of computer science that deals with the ability of machines to do things that require - or at least appear to require - intelligence. Solve problems. Recognize patterns. Make decisions. Learn from experience without someone telling it exactly what to do in every situation.&lt;/p&gt;

&lt;p&gt;So what actually happened in recent years that made the whole world go crazy?&lt;/p&gt;

&lt;p&gt;What happened is that one specific category within this huge world simply exploded. Its name is LLM - short for Large Language Model - and it is what most people mean when they say “AI” today. But it is one piece of a much bigger picture.&lt;/p&gt;

&lt;p&gt;There are many other types of models in this world that are not related to chat at all. There are computer vision models that recognize faces, objects, movements - everything that allows an autonomous car not to run you over at an intersection. There are models that generate images, video and music from scratch. There are robotics, there are recommendation systems, there are all sorts of things that run in the background of your life without you even noticing.&lt;/p&gt;

&lt;p&gt;LLM is the thing that everyone has come to and therefore everyone thinks that it is the whole story. But it is not. It is one chapter.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what does LLM really do?
&lt;/h2&gt;

&lt;p&gt;This is where it gets interesting. Because when you ask ChatGPT a question and get an answer that looks like someone smart wrote it, it is very easy to think that there is “someone” there who understands what it is saying. But what is happening under the hood is something else entirely.&lt;br&gt;
A language model does one thing at its core: it tries to predict the next word. That is the whole magic.&lt;/p&gt;

&lt;p&gt;You write “the sky today is” and the model calculates what is most likely to come after it. “Blue”? “Cloudy”? “Beautiful”? It chooses. Then it looks at what is there now - “the sky today is blue” - and predicts the next word. And the next. And the next. Word after word after word until a complete sentence, a complete paragraph, a complete answer is built.&lt;/p&gt;
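&lt;p&gt;You can see this loop in miniature with a toy bigram model. Real LLMs use neural networks over tokens rather than a lookup table of word counts, and they sample rather than always taking the top word, but the generation loop - pick a likely next word, append it, repeat - is exactly the shape described above:&lt;/p&gt;

```python
# Toy illustration of "predict the next word": a bigram model built from a
# tiny corpus. The corpus is made up for the example.
from collections import Counter, defaultdict

corpus = ("the sky today is blue . the sky today is cloudy . "
          "the sky today is blue .").split()

# Count which word follows which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    """Greedily append the most frequent next word, one word at a time."""
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])   # take the single most likely word
    return " ".join(words)
```

&lt;p&gt;Starting from “sky”, this toy greedily produces “sky today is blue”, because “blue” follows “is” more often than “cloudy” in its tiny corpus. Scale the corpus up by a few trillion words and swap the lookup table for a neural network, and the idea stays the same.&lt;/p&gt;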

&lt;p&gt;You’re probably asking, “If it only guesses words, how does it write code? How does it explain quantum physics? How does it write songs?”&lt;br&gt;
The answer is that when you take this simple idea of “predict the next word” and train it on vast amounts of text - the entire internet, books, code, scientific papers, forum discussions, everything - something happens that no one really planned. The model starts to “understand” patterns. Not understand the way a human does, but develop the ability to recognize structures, connections, logic, style.&lt;/p&gt;

&lt;p&gt;It has “seen” so much code that it “knows” what the next logical line is. It has “read” so many explanations of physics that it “can” produce a new explanation that sounds like someone who understands wrote it.&lt;br&gt;
Does it really understand? This is a philosophical question that people with PhDs have been arguing about for years. What interests us on a practical level is that it works. And sometimes it works to a degree that is hard to believe. And sometimes it fails to a degree that is hard to believe. And knowing why and when is exactly what separates those who really know how to use this tool from those who just hope for the best.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why everyone is using it wrong
&lt;/h2&gt;

&lt;p&gt;The biggest mistake people make with AI is not technical. It’s not about the prompt and it’s not about which model you chose. It’s about the approach. How you approach this thing in your head. And most people approach it with the completely wrong approach.&lt;br&gt;
They think it’s magic.&lt;/p&gt;

&lt;p&gt;They open ChatGPT, write “build me a professional website” and expect something amazing to come out. And then, when something mediocre comes out, they say “AI doesn’t work” or “AI isn’t there yet”. But the problem isn’t the AI. The problem is that you told it “professional”, and that’s a word that doesn’t mean anything. Professional how? Minimalist with lots of white space? Dark with big typography? Gradients? Round buttons?&lt;br&gt;
“Professional” is not a guideline – it’s an empty phrase you throw around because you don’t know what you really want. And if you don’t know what you want, there’s no reason the model should know for you.&lt;/p&gt;

&lt;p&gt;The model doesn’t think. Not the way you think of “thinking,” at least. It doesn’t sit there and consider options. It doesn’t judge. It doesn’t invent. What you gave it is what it has. If you gave it “make something beautiful” it will spit out the average of everything it’s seen in its life that’s called “beautiful.” And you know what the average of everything is? Mediocre. Always mediocre.&lt;/p&gt;

&lt;p&gt;On the other hand, if you tell it “I want a landing page with a dark background, large sans-serif typography, a fade-in animation on the title, and a single central CTA in a neon color” - suddenly it knows exactly what to do. Not because it “understood” you better. But because you gave it enough information for its prediction to be accurate. And if these words sound like a foreign language to you - great. That’s exactly what we’re here for.&lt;/p&gt;

&lt;p&gt;And that brings us to the most important point in this entire post and one of the most central points in this entire series:&lt;br&gt;
You are the brain. It is the tool.&lt;/p&gt;

&lt;p&gt;Not the other way around. Never the other way around.&lt;br&gt;
The moment you approach AI with the “do it for me” attitude - you’ve lost. Because it doesn’t know what’s good for you. It doesn’t know your project, your users, your constraints. It knows how to produce text that sounds convincing. And that’s exactly what’s dangerous because an answer that sounds good is not necessarily an answer that is good.&lt;/p&gt;

&lt;p&gt;The right approach is completely the opposite. Instead of telling the model “Build me this feature in the best possible way” - you use it first to learn. You tell it “What are the accepted ways to build something like this? What are the advantages and disadvantages of each approach?” You read. You understand. You decide. Only after you know what you want and why do you go back and tell it “Build it like this, with this architecture, with these libraries, in this style.” And suddenly the output is completely different.&lt;/p&gt;

&lt;p&gt;The difference is not in the tool. The difference is in you. You are not supposed to know everything in advance and that is okay. Everything is confusing at first. That is exactly why you have an AI model that can explain. Take advantage of it.&lt;/p&gt;

&lt;p&gt;Those who use AI as a replacement for thinking get mediocre results and think that the tool is bad. Those who use AI as an accelerator for thinking get results that people don’t understand how they got to. Exactly the same tool. Exactly the same model. The only difference is what went on in the head of the person sitting on the other side.&lt;/p&gt;

&lt;h2&gt;
  
  
  The illusion of the perfect answer
&lt;/h2&gt;

&lt;p&gt;AI models are the most charming people you will ever meet.&lt;br&gt;
Write bad code? “Looks great! Here are some small suggestions:” Propose an idea that doesn’t stand up to any test of logic? “That’s a really interesting approach!”&lt;/p&gt;

&lt;p&gt;It’s not a bug. Nobody forgot to fix it. That’s how the models were trained. They were trained to be helpful, to be pleasant, to be agreeable. Because whoever built them knew that if the model told you “your code is bad and you need to relearn” you would close the chat and walk away. So they taught it to smile. Always smile. Especially when there’s no reason.&lt;br&gt;
And you know what the problem is with someone who always agrees with you? You stop checking. You get an answer and it sounds good, the structure is neat, there’s confidence in it, there’s professional language in it, and your brain clicks and says “Oh, so it’s true”, because that’s how we work. Someone who speaks confidently sounds credible to us. Someone who explains in an organized way seems smart to us. And a language model? It always speaks confidently. Always organized. Even when it makes things up out of thin air.&lt;/p&gt;

&lt;p&gt;And that’s exactly the catch. Because an answer that seems perfect is not necessarily the right answer. A model can spit out an entire paragraph about a library that doesn't exist, with a convincing name, with a code sample that looks legitimate, with an explanation of why it's better than the alternatives - and it's all invented. Without any hint that anything here is inaccurate. Without a little footnote that says "By the way, I made this up." It doesn't know it's fabricating. It doesn't know it isn't, either. It just generates the most plausible sequence of words, and it sounds great until the moment you discover it's nonsense.&lt;/p&gt;

&lt;p&gt;So what do you do with it?&lt;/p&gt;

&lt;p&gt;First of all - and if you're going to remember only one thing from this post, remember this - &lt;strong&gt;never, but never, treat a model's answer as absolute truth.&lt;/strong&gt; Every answer a model gives you is an option. A suggestion. A starting point. Not a verdict. Not a source of authority. If someone on the street told you "Listen, I'm pretty sure it should be built like this," you would go check. So why, when an AI model says the exact same thing, do you suddenly treat it as if it came down from Mount Sinai?&lt;br&gt;
Check. Search. Ask again, in a different way. Open another model and cross-check. This is not paranoia, this is sanity.&lt;/p&gt;

&lt;p&gt;And if you want to see it for yourself? Here's a little exercise that will open your eyes:&lt;br&gt;
Go to any model - Claude, ChatGPT, it doesn't matter - and ask it to write something for you. A function, a marketing text, whatever you want. Now take what it wrote, go to a completely different model and tell it "look at this and tell me what needs to be improved." It will give you a list. Excellent. Now go back to the first model, tell it to correct according to the comments, take the result and go to a third model and tell it "What needs to be improved here?"&lt;br&gt;
It will give you a new list.&lt;/p&gt;

&lt;p&gt;You can continue like this indefinitely. Literally forever. Because this loop will never end. No model will tell you "You know what? It's perfect, I have nothing to add." There will always be another comment. There will always be another “improvement”. There will always be another “but you could also do it this way”.&lt;/p&gt;

&lt;p&gt;And why? Because if you tell it “find a problem” - it will find a problem. That’s what it does. It produces the answer that best fits the question you gave it. If your question is “what’s wrong here”, its answer will always be a list of bad things. Not because there really are bad things, but because you asked it to find them, so it found them. Or invented them. It doesn’t really matter to it.&lt;/p&gt;

&lt;p&gt;That doesn’t mean that feedback from a model is worthless - it isn’t. Sometimes it’s even excellent. It means that you have to be the one who decides which parts of this feedback are really relevant and which are simply noise created because you asked for noise. And again, we come back to the same principle: &lt;strong&gt;you are the brain. It is the tool.&lt;/strong&gt; If you don’t know how to distinguish between a comment that’s really worth something and a comment the model emitted because it had to emit something - you’re not using the tool. The tool is using you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What now?
&lt;/h2&gt;

&lt;p&gt;This post was just a warm-up. We laid the groundwork: what AI really does, why most people approach it backwards, and why the answer that seems perfect is exactly the one that should turn on a red light for you.&lt;br&gt;
In the next post, we start to go deeper - the things that, once understood, completely change the way you work, think, and build with AI. I'm not going to tell you exactly what, because I'm still not sure what will make the cut by then. But I will tell you one thing: if this post made you feel like you're starting to understand, the next one is going to make you feel like you didn't understand anything.&lt;/p&gt;

&lt;p&gt;In the meantime, if you want to know as soon as a new post comes out - I have a &lt;a href="https://t.me/orielblog" rel="noopener noreferrer"&gt;Telegram channel&lt;/a&gt; where I update you on everything new that comes up. Besides, you are welcome to subscribe to the blog to receive updates directly to your email and if you have questions, thoughts, or just something you want to say - the comments below are open. I read everything.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I got frustrated with writing tools so I built my own: Storyteller</title>
      <dc:creator>oriel</dc:creator>
      <pubDate>Mon, 30 Mar 2026 22:25:14 +0000</pubDate>
      <link>https://dev.to/orielhaim/i-got-frustrated-with-writing-tools-so-i-built-my-own-storyteller-3ca6</link>
      <guid>https://dev.to/orielhaim/i-got-frustrated-with-writing-tools-so-i-built-my-own-storyteller-3ca6</guid>
      <description>&lt;p&gt;A while ago I decided I wanted to seriously start writing a book.&lt;/p&gt;

&lt;p&gt;I had the idea, the characters, the motivation - everything. What I didn’t have was a tool that actually made the process feel good.&lt;/p&gt;

&lt;p&gt;I started with Google Docs, like a lot of people do. It worked for a while, until the project grew - then everything became a mess.&lt;/p&gt;

&lt;p&gt;I needed to write a scene from chapter 10 while checking a character detail from chapter 1 while making sure the timeline still made sense while keeping world-building notes nearby. Very quickly, writing turned into tab switching, context switching, and losing focus.&lt;/p&gt;

&lt;p&gt;So I looked for dedicated writing software.&lt;/p&gt;

&lt;p&gt;Some of it felt outdated. Some of it looked powerful but had poor UX. Some of it locked basic features behind expensive subscriptions. I wanted something modern, fast, focused, and pleasant to use but still powerful enough for larger fiction projects.&lt;/p&gt;

&lt;p&gt;I couldn’t find the tool I wanted so I built it.&lt;/p&gt;

&lt;p&gt;That project became &lt;strong&gt;Storyteller&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Storyteller?
&lt;/h2&gt;

&lt;p&gt;Storyteller is an open-source desktop writing studio for authors.&lt;/p&gt;

&lt;p&gt;It’s built for people who want more than a blank document, but less friction than traditional writing software usually creates.&lt;/p&gt;

&lt;p&gt;The goal was simple: make writing software that feels modern, stays out of the way, and still gives authors the structure they need for real long-form projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it currently does
&lt;/h2&gt;

&lt;p&gt;Right now Storyteller includes things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Book and series management&lt;/li&gt;
&lt;li&gt;A distraction-free writing workspace&lt;/li&gt;
&lt;li&gt;Multi-tab editing so you can keep chapters, characters, and notes open side by side&lt;/li&gt;
&lt;li&gt;World-building tools for characters, locations, and items&lt;/li&gt;
&lt;li&gt;A timeline system for tracking story chronology&lt;/li&gt;
&lt;li&gt;Scene metadata and progress tracking&lt;/li&gt;
&lt;li&gt;Export to PDF, EPUB, DOCX, Markdown, and TXT&lt;/li&gt;
&lt;li&gt;Multi-language support, including RTL support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the biggest things I cared about was workflow.&lt;/p&gt;

&lt;p&gt;I didn’t want a tool where your manuscript lives in one place, your characters somewhere else, and your planning in another disconnected screen. I wanted everything to feel like part of the same creative space.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I made it open source
&lt;/h2&gt;

&lt;p&gt;I use Storyteller myself regularly and I’ve already put a huge amount of time into it.&lt;/p&gt;

&lt;p&gt;But I also didn’t want this to become another closed creative tool that people depend on without really owning. Writing is deeply personal work. The tools around it should feel accessible, flexible, and community-friendly too.&lt;/p&gt;

&lt;p&gt;That’s a big reason why Storyteller is open source.&lt;/p&gt;

&lt;p&gt;I want people to be able to try it, inspect it, suggest ideas, report pain points, and hopefully help shape where it goes next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Built with the modern web stack
&lt;/h2&gt;

&lt;p&gt;Storyteller is built as a desktop app with technologies I genuinely enjoy working with.&lt;/p&gt;

&lt;p&gt;That includes Electron, React, Tailwind, TipTap, SQLite, and a UI approach that focuses heavily on speed, clarity, and usability.&lt;/p&gt;

&lt;p&gt;A lot of writing software still feels stuck in another era. I wanted to see what a writing tool could feel like if it was designed more like a modern product instead of a legacy utility.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I’m looking for
&lt;/h2&gt;

&lt;p&gt;I’m still early in this journey and I’d love feedback from both developers and writers.&lt;/p&gt;

&lt;p&gt;If you’re a developer I’d love your thoughts on the architecture, UX, feature direction, or the project itself.&lt;/p&gt;

&lt;p&gt;If you’re a writer I’d love to know something even more important:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What do you wish writing software did better?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That question is really the heart of this whole project.&lt;/p&gt;

&lt;h2&gt;
  
  
  If you want to check it out
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/orielhaim/Storyteller" rel="noopener noreferrer"&gt;The project is here on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.orielhaim.com/storyteller-first-launch" rel="noopener noreferrer"&gt;My launch post&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you’ve built tools for writers, care about creative workflows, or just like open-source products made from real personal frustration, I’d genuinely love to hear what you think.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>writing</category>
      <category>react</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
