<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Engineered Log</title>
    <description>The latest articles on DEV Community by Engineered Log (@engineered_log).</description>
    <link>https://dev.to/engineered_log</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3387143%2F439311ea-44ca-4d16-bd5b-f3cafb137f99.png</url>
      <title>DEV Community: Engineered Log</title>
      <link>https://dev.to/engineered_log</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/engineered_log"/>
    <language>en</language>
    <item>
      <title>LLM Act</title>
      <dc:creator>Engineered Log</dc:creator>
      <pubDate>Fri, 25 Jul 2025 08:49:31 +0000</pubDate>
      <link>https://dev.to/engineered_log/llm-act-520e</link>
      <guid>https://dev.to/engineered_log/llm-act-520e</guid>
      <description>&lt;p&gt;On 30 November 2022, ChatGPT, the product of OPENAI, saw the light of day, and it was my first “really conscious” technology drift of my life. I experienced both the rise of social networks and the spread of fast internet connections, but at the time, I was too young to have a before-and-after comparison.&lt;/p&gt;

&lt;p&gt;This time, it was different. I was completing my Master's degree, and my last few years were quite similar (learning, studying, trying, doing other stuff-ing, completing exams), so I have a clear comparison of tasks influenced by this tool. To be honest, at the beginning it was a great tool for writing, summarizing, and generating Q&amp;amp;A for my mock exams. It was not a good programmer, though. But it solved a lot of boring stuff about my student life.&lt;/p&gt;

&lt;p&gt;I started to use it. Recognizing this change in my behavior, I started to keep an eye on what other people were doing with it. At first, I noticed that ChatGPT appealed mostly to technical students, the ones interested not only in the outcome but also in new tech tools. Not much time passed before most of the screens on my university library PCs were showing a dark-background chat with an LLM. Meanwhile, peers with a different background did not show much interest in the topic. In less than a year, ChatGPT became the chatty version of Facebook. People of all ages and backgrounds now believe that most of the stuff in their lives can be shared with it to get feedback and suggestions. Some find it the only way to solve text-based tasks, while others believe it is just more bullshit from modern kids. Either way, it became an everyday topic and tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueegrcfy93z4wxxmacpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueegrcfy93z4wxxmacpv.png" alt=" " width="776" height="412"&gt;&lt;/a&gt;&lt;br&gt;
ChatGPT users chart from &lt;a href="https://www.demandsage.com/chatgpt-statistics/" rel="noopener noreferrer"&gt;demandsage.com&lt;/a&gt;, which has many other stats&lt;/p&gt;

&lt;p&gt;Back to my personal experience: I was not immune to this. Right now, I always have a tab open in my browser pointing to the same LLM, and it helps me with writing emails, writing code, generating images, finding cooking recipes, and so on… &lt;strong&gt;It is my personal assistant&lt;/strong&gt;. At the moment, the free version of most systems is good enough for me, and I don’t feel it’s necessary to subscribe to a more powerful one. I have very contrasting feelings about how this tool has influenced my productive life, and I would like to try to build a publicly committed &lt;strong&gt;"fair use" set of rules&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But before legislating on a topic (as expected from any act of regulation), a full understanding of the subject is needed.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ED. Note: In this text, I have used a "you"-based communication style, but everything said obviously reflects my own behavior and intentions; a "we"-based style simply didn’t read as well.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What is an LLM
&lt;/h1&gt;

&lt;p&gt;The core functionality of ChatGPT is based on Machine Learning. Machine Learning (ML) is a subfield of Artificial Intelligence that specifically focuses on pattern recognition in data. As you can imagine, &lt;em&gt;once you recognize a pattern, you can apply that pattern to new observations&lt;/em&gt;. That’s the essence of the idea. The most powerful tool in the ML field is deep learning, which exploits neural networks to find those patterns in large amounts of data.&lt;/p&gt;

&lt;p&gt;The use of these techniques in the field of &lt;strong&gt;Natural Language Processing (NLP)&lt;/strong&gt; allows us to create a Large Language Model (LLM), a particular kind of neural network model used to “predict” the next word in a sentence. By iterating over this process, we can build up complex texts starting from user input.&lt;/p&gt;
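&lt;p&gt;&lt;em&gt;To make the idea concrete, here is a toy sketch (in Python, with a made-up probability table, nothing like a real model) of how iterating next-word prediction builds up a text:&lt;/em&gt;&lt;/p&gt;

```python
import random

# Toy "language model": for each context word, a distribution over next words.
# These probabilities are invented for illustration; a real LLM learns billions
# of parameters instead of this tiny lookup table.
NEXT_WORD = {
    "the": {"cat": 0.5, "table": 0.5},
    "cat": {"is": 0.7, "sat": 0.3},
    "is": {"on": 1.0},
    "on": {"the": 1.0},
}

def predict_next(word):
    """Sample the next word from the model's probability distribution."""
    candidates = NEXT_WORD.get(word, {"the": 1.0})
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(prompt, steps=5):
    """Iterate next-word prediction to build up a longer text."""
    words = prompt.split()
    for _ in range(steps):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the cat"))
```

&lt;p&gt;&lt;em&gt;A real model conditions on the whole context, not just the last word, but the loop is the same: predict, append, repeat.&lt;/em&gt;&lt;/p&gt;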

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flazut7kkxwphrfrmp1p8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flazut7kkxwphrfrmp1p8.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
LLMs framing problem from this &lt;a href="https://medium.com/data-science-at-microsoft/how-large-language-models-work-91c362f5b78f" rel="noopener noreferrer"&gt;article&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Of course, the neural network is trained using human-generated text to learn the “most probable” word to fill the blank space. For a usable solution, we need this probability to be as precise as possible, and to do so we need loOoOoots of data, coming from any sort of human text artifact: books, blog posts (sigh!), movie scripts, social posts…&lt;/p&gt;

&lt;p&gt;At the end of the day, &lt;strong&gt;we have by design a system that has no thinking, learning, information or perception system, but “just” a word prediction system.&lt;/strong&gt; Even if this is just a word prediction system, it is incredibly effective in practice when we try to ask for some kind of help, and this is astonishing. But it leaves me with many doubts about the potential of this kind of system.&lt;/p&gt;

&lt;p&gt;I would like to explore those doubts and recognize that this tool modifies our behavior, but first, I would feel incomplete without a paragraph where I go a bit deeper into how these kinds of systems work.&lt;/p&gt;

&lt;h2&gt;
  
  
  A (hopefully brief) technical explanation
&lt;/h2&gt;

&lt;p&gt;Let’s be honest, this part is a bit out of scope compared to the topic of this note. But, in the end, I'm still an engineer, and this is my blog, so a seemingly useless technical part explaining how things work has to be there.&lt;/p&gt;

&lt;p&gt;Feel free to skip this part; it is not necessary to understand the point of the article. At the same time, if you are mildly interested in the topic, but not enough to read a few hundred keystrokes, I suggest this short video: &lt;a href="https://www.youtube.com/watch?v=LPZh9BOjkQs&amp;amp;list=WL&amp;amp;index=11" rel="noopener noreferrer"&gt;&lt;em&gt;Large Language Models Explained Briefly&lt;/em&gt;&lt;/a&gt;, for a better visual understanding of the topic.&lt;/p&gt;

&lt;p&gt;Anyway, a good way to explore and navigate the point is &lt;strong&gt;breaking down the GPT acronym&lt;/strong&gt;, which stands for &lt;strong&gt;G&lt;/strong&gt;enerative &lt;strong&gt;P&lt;/strong&gt;re-trained &lt;strong&gt;T&lt;/strong&gt;ransformer. We can go quickly over the meaning of &lt;em&gt;generative&lt;/em&gt; (used to generate stuff) and focus on the second term instead. As I already said, these chats are based on the ability of a model to predict the next words in an incomplete text. But how do they do it?&lt;/p&gt;

&lt;h3&gt;
  
  
  On Model and (Pre)Training
&lt;/h3&gt;

&lt;p&gt;It’s no secret that, at the end of the day, most of ChatGPT’s underlying system is based on a neural network. Deep neural networks are computational models formed by an interconnected group of nodes, inspired by a simplified model of biological neurons [&lt;a href="https://en.wikipedia.org/wiki/Neural_network" rel="noopener noreferrer"&gt;Wikipedia&lt;/a&gt;]. Each node typically transforms its input using a non-linear function called &lt;strong&gt;an activation function&lt;/strong&gt; (99% of the time, it’s &lt;a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)" rel="noopener noreferrer"&gt;ReLU&lt;/a&gt;), and each connection has a weight used to transform the output of a node before it is passed to the next layer of nodes.&lt;/p&gt;

&lt;p&gt;The training typically consists of “showing” the neural network an input, letting it predict the corresponding output, and then computing the error between the predicted and real output using a &lt;a href="https://en.wikipedia.org/wiki/Loss_function" rel="noopener noreferrer"&gt;loss function&lt;/a&gt;. The quantitative error is used to “correct” the weights of each connection to try to minimize the error between predicted and real output in the long run. This method is called &lt;a href="https://en.wikipedia.org/wiki/Backpropagation" rel="noopener noreferrer"&gt;&lt;strong&gt;backpropagation&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;By iterating over the dataset (lots of samples are needed), we are able to use the network to approximate the function that links input to output (any kind of function: e.g., one that maps an image to 1 or 0 depending on whether a dog is present in the image).&lt;/p&gt;
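&lt;p&gt;&lt;em&gt;A minimal sketch of this loop, fitting a single linear “neuron” to a known function with a mean-squared-error loss and gradient descent (the data, learning rate, and epoch count are illustrative):&lt;/em&gt;&lt;/p&gt;

```python
import numpy as np

# Show the model an input, compare its prediction with the real output via a
# loss function, and nudge the weights to reduce the error. Here the whole
# "network" is one linear neuron, so the gradients can be written by hand.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0                      # the "real" function to approximate

w, b = 0.0, 0.0                        # initial weights
lr = 0.1                               # learning rate

for epoch in range(200):
    y_pred = w * x + b                 # forward pass: predict the output
    error = y_pred - y
    loss = np.mean(error ** 2)         # MSE loss function
    grad_w = np.mean(2 * error * x)    # gradient of the loss w.r.t. each weight
    grad_b = np.mean(2 * error)
    w -= lr * grad_w                   # "correct" the weights
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```

&lt;p&gt;&lt;em&gt;In a deep network the gradients are computed layer by layer via backpropagation instead of by hand, but the update rule is the same idea.&lt;/em&gt;&lt;/p&gt;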

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrvizuonuo59jnbdkcs9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrvizuonuo59jnbdkcs9.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Universal Approximation Theorem states that a neural network with a single hidden layer containing a sufficient number of neurons can approximate any continuous function, given a suitable activation function.&lt;br&gt;&lt;br&gt;
-George Cybenko&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Roughly, the same process is used to produce our &lt;strong&gt;Large Language Model&lt;/strong&gt;, which is a model built with &lt;strong&gt;lots&lt;/strong&gt; of neurons and is used to predict the next words in a sentence.&lt;/p&gt;

&lt;p&gt;The composition of the dataset is quite straightforward: &lt;strong&gt;any sequence of text can serve as a valid sample&lt;/strong&gt; (e.g., &lt;em&gt;a cat is on the table&lt;/em&gt;). The sequence is truncated and given as input (&lt;em&gt;a cat is on the ...&lt;/em&gt;), and the missing &lt;strong&gt;word&lt;/strong&gt; is used as the output (&lt;em&gt;table&lt;/em&gt;).&lt;/p&gt;
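&lt;p&gt;&lt;em&gt;A quick sketch of how any sentence yields (truncated input, missing word) training pairs:&lt;/em&gt;&lt;/p&gt;

```python
# Any text yields training samples: truncate the sequence at each position and
# use the next word as the prediction target.
def make_samples(sentence):
    words = sentence.split()
    samples = []
    for i in range(1, len(words)):
        context = " ".join(words[:i])   # truncated input
        target = words[i]               # the word to predict
        samples.append((context, target))
    return samples

for context, target in make_samples("a cat is on the table"):
    print(f"{context!r} -> {target!r}")
```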

&lt;p&gt;At this point, a question naturally comes to mind: to follow the training step, we have to quantitatively compute the error between the network output (which, at the beginning, is a pseudo-random word) and the real missing word (&lt;em&gt;table&lt;/em&gt;). But how is this done? There is a branch of Natural Language Processing that studies how to map words to numbers, trying to &lt;strong&gt;capture the semantic and syntactic meaning of words in context&lt;/strong&gt; and to find a “mathematical distance” between words that is meaningful (e.g., the distance between &lt;em&gt;man&lt;/em&gt; and &lt;em&gt;woman&lt;/em&gt; should be similar to that between &lt;em&gt;king&lt;/em&gt; and &lt;em&gt;queen&lt;/em&gt;). This branch is called &lt;strong&gt;word embedding&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you're interested, &lt;a href="https://medium.com/@harsh.vardhan7695/a-comprehensive-guide-to-word-embeddings-in-nlp-ee3f9e4663ed" rel="noopener noreferrer"&gt;this article&lt;/a&gt; presents the most common approaches to do this, from the simplest to the most effective ones. But what’s important is that this step is indeed performed.&lt;/p&gt;
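&lt;p&gt;&lt;em&gt;A toy illustration of the distance idea, with made-up 3-dimensional embeddings (real models learn hundreds of dimensions from data):&lt;/em&gt;&lt;/p&gt;

```python
import numpy as np

# Invented 3-d vectors chosen so that the offset man -> woman matches the
# offset king -> queen, mimicking what learned embeddings exhibit.
emb = {
    "man":   np.array([0.9, 0.1, 0.2]),
    "woman": np.array([0.9, 0.8, 0.2]),
    "king":  np.array([0.3, 0.1, 0.9]),
    "queen": np.array([0.3, 0.8, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# king - man + woman should land near queen
analogy = emb["king"] - emb["man"] + emb["woman"]
print(cosine(analogy, emb["queen"]))   # very close to 1.0
```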

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk53uqmm9x667gozkj9p0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk53uqmm9x667gozkj9p0.jpeg" alt=" " width="800" height="311"&gt;&lt;/a&gt;&lt;br&gt;
A cropped image from &lt;a href="https://medium.com/@hari4om/word-embedding-d816f643140" rel="noopener noreferrer"&gt;this article&lt;/a&gt; to visually explain word embedding&lt;/p&gt;

&lt;p&gt;Of course, this method is used not only for loss computation but also to pre-process each sentence in the dataset, so that, in the end, the neural network has to deal only with numbers.&lt;/p&gt;

&lt;p&gt;Returning to our main thread, that is, the meaning of GPT: &lt;em&gt;P&lt;/em&gt; stands for Pre-trained (Neural Network). This is because the step described above is only the first part of the process of training a functional chat model, and typically this part is reused across different iterations or types of chatbot (it’s like the pizza base).&lt;/p&gt;

&lt;p&gt;Let’s list the complete recipe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-Supervised Learning:&lt;/strong&gt; this is the process where we take a &lt;strong&gt;HUGGGGE&lt;/strong&gt; amount of text and train the model to predict the next word in randomly selected text, as explained before. In this step, the “knowledge” of the model is embedded, because exposure to lots of content modifies the network’s weights, making the model more likely to produce the correct text if it has already seen the answer (just as taking the same test twice raises your chances of a better score).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-Tuning&lt;/strong&gt;: now the algorithm has an approximate working sentence-completing function, but there is a drawback in the previous stage. Because a very large amount of data is needed, quality wasn’t always guaranteed, so the algorithm could have learned something totally irrelevant (e.g., it could answer with a random disclaimer found at the bottom of many documents in the dataset). To solve this problem, a new round of learning is done, starting from the previous weights and with a lower impact on them (just a fine-tune), using an extremely high-quality (and expensive) dataset, validated by humans to remove corner cases and guide the model toward more human-like conversation. If needed, the dataset could contain specific data on which we want the model to be “an expert.”
&lt;/li&gt;
&lt;li&gt;(Optional) &lt;strong&gt;Reinforcement Learning from Human Feedback (RLHF):&lt;/strong&gt; an additional step used to better align the behavior of the model with human preferences. The method consists of &lt;strong&gt;giving a ranking to different answers generated by the model and adjusting the model to optimize for this ranking&lt;/strong&gt;. The scores for each answer are computed using a model, called &lt;strong&gt;a reward model&lt;/strong&gt;, that ranks the text based on human feedback on past responses. Of course, collecting this human feedback is quite expensive, and it should be validated to ensure alignment with human morals, which is another reason why this step is performed. The scoring system is quite similar to the Elo system used in various competitive ranking environments. This feedback is then used in &lt;a href="https://en.wikipedia.org/wiki/Reinforcement_learning" rel="noopener noreferrer"&gt;reinforcement learning&lt;/a&gt; as a reward signal to improve the model’s performance.
&lt;/li&gt;
&lt;/ul&gt;
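&lt;p&gt;&lt;em&gt;Since the scoring is compared to the Elo system, here is a minimal, purely illustrative Elo-style update: when a human prefers answer A over answer B, A’s rating rises and B’s falls, more sharply when the outcome was unexpected:&lt;/em&gt;&lt;/p&gt;

```python
def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_winner, rating_loser, k=32):
    """Shift rating points from the loser to the winner; the shift is larger
    when the win was unexpected (low expected score)."""
    expected = expected_score(rating_winner, rating_loser)
    delta = k * (1.0 - expected)
    return rating_winner + delta, rating_loser - delta

# Two candidate answers start equal; the human picks answer A three times.
a, b = 1000.0, 1000.0
for _ in range(3):
    a, b = update(a, b)
print(f"A={a:.0f}, B={b:.0f}")   # A pulls ahead of B
```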

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1jv1skvkd619bboyew6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1jv1skvkd619bboyew6.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
  Representation of the ranking and rewarding system from &lt;a href="https://towardsdatascience.com/explained-simply-reinforcement-learning-from-human-feedback/" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.   &lt;/p&gt;

&lt;p&gt;You have probably seen the data collection phase of this training step, when ChatGPT presents you with two different answers and you need to choose the most appropriate one.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasf61max5ofopwn6dxx4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasf61max5ofopwn6dxx4.png" alt=" " width="800" height="493"&gt;&lt;/a&gt;&lt;br&gt;
    An example of double choice to collect human feedback in ChatGPT&lt;/p&gt;

&lt;h3&gt;
  
  
  Transformer
&lt;/h3&gt;

&lt;p&gt;Ok, but we should go deeper into the concept of models. Of course, the model is not a simple neural network; it includes more complex components. One of the most impactful innovations was introduced by Google in the paper &lt;em&gt;"&lt;a href="https://arxiv.org/abs/1706.03762" rel="noopener noreferrer"&gt;Attention Is All You Need&lt;/a&gt;"&lt;/em&gt;. This paper must be cited for the impact it has had on the industry.&lt;/p&gt;

&lt;p&gt;Before the introduction of the &lt;strong&gt;attention mechanism&lt;/strong&gt;, when generating text, at some point the first part of the context used to generate the following words would be forgotten (e.g., only the last 400 characters of the text are used to generate the next words). Now, with attention, the model is able to retain all useful information, keeping only what it learns to be important, by focusing attention on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ybh8udgbc00elqha691.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ybh8udgbc00elqha691.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
A representation of “attention”: how impactful each input token is for output prediction. &lt;/p&gt;
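&lt;p&gt;&lt;em&gt;For the curious, a bare-bones sketch of scaled dot-product attention, the core operation from the paper (a single head, with random toy vectors):&lt;/em&gt;&lt;/p&gt;

```python
import numpy as np

# Each token (query) scores every token (keys), and the output is a weighted
# mix of values, so the model keeps whatever context it learned to consider
# important instead of a fixed-size window.
def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # relevance of each key to each query
    weights = softmax(scores)          # each row sums to 1: the "attention"
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))            # 4 tokens, 8-dimensional vectors
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

out, weights = attention(Q, K, V)
print(out.shape, weights.sum(axis=-1))  # (4, 8), rows summing to 1
```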

&lt;p&gt;The process used to train this mechanism is quite similar to the prediction process, but focused on optimizing the model’s understanding of the input to enhance the prediction. (Of course, this description is simplified for the reader, but grasping the concept is what matters most.)&lt;/p&gt;

&lt;h1&gt;
  
  
  On its use
&lt;/h1&gt;

&lt;p&gt;Here we come to the core of this note, which is meant to find a rationale, a &lt;strong&gt;philosophy for a fair and meaningful usage of these models.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I start this section with one assumption: whenever you start using any of the AI-powered chatbots, you are fascinated by their power and begin delegating the most boring tasks to them. The next step is to use them as a search engine, assuming they are correct. Then comes the tipping point: &lt;strong&gt;for everything you do, you first check if the LLM can solve it for you. If after some iterations, you are not satisfied with the result, you, tired and frustrated, finally start thinking about how to solve the problem yourself&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This way of acting could be a problem, not only for the task at hand, but also in the long run: it limits your ability to find solutions to problems, eliminates the search for different sources to compare, and reduces your exposure to new material and to the process of elaborating something useful that then has to be adapted to your situation. Skipping all of this leads the user to a sort of mental laziness that could be problematic in the long run.&lt;/p&gt;

&lt;p&gt;Don’t misunderstand me, I’m a supporter of these tools, or in general, any new technology. I’d like to integrate them into a typical knowledge worker’s workflow, confident that they can improve productivity and free people from low-value-added tasks. But I also believe this should be done in a conscious and effective way, especially in the long run.&lt;/p&gt;

&lt;p&gt;That’s why this note has a few more paragraphs!&lt;/p&gt;

&lt;h2&gt;
  
  
  Don’t assume they are right
&lt;/h2&gt;

&lt;p&gt;Especially if you have followed the “brief” technical explanation, ChatGPT-like models will no longer seem like intelligent beings, but rather a long and complex mathematical and statistical process. I hope this perspective helps you (and me) use them more rationally. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The most important thing that a user must keep in mind is that they are just a proxy for the next words; they don’t have reasoning power.&lt;/strong&gt; With this in mind, it’s clear that a user should never fully trust any information coming from the system, especially when the task or question requires logical reasoning rather than simple information retrieval. Of course, not all the information we use needs to be highly accurate, so it’s up to you to judge when a fact check is really valuable.&lt;/p&gt;

&lt;p&gt;Even when we just need to retrieve and manipulate information (e.g., summarizing a text), we cannot assume that chatbots have all the information we need, especially if that information decays over time (e.g., shop opening hours). &lt;strong&gt;You cannot assume the system is omniscient and up to date&lt;/strong&gt;. A lot of uncommon information is missing from the training set, or, even if present, it might not be properly ingested by the model. If we assume that the given answer is correct and complete we risk losing valuable information, which can lead to misunderstandings and missing key aspects of the topic. This can prevent us from gaining a broader understanding, and in the best case, result in a short, unchecked, and limited answer.  &lt;/p&gt;

&lt;p&gt;Each source of information, if taken alone, can give a partial view of reality. The danger here is that you typically do not find a source that directly answers your specific question, so you need to adapt and integrate multiple ones. This time is different: &lt;strong&gt;these tools try to generate an answer that feels as satisfying as possible&lt;/strong&gt;, with no aim of &lt;strong&gt;completeness or accuracy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To be honest, the idea that these systems know how to answer every question is also influenced by the fact that they are quite unable to respond “I don’t know”, or to admit when information is missing. Instead, they tend to invent a plausible-sounding solution that, when tested, proves to be completely incorrect. This phenomenon is known as &lt;strong&gt;hallucination&lt;/strong&gt;, and we have to learn how to deal with it. One of the most common reasons for hallucination is that, in the training dataset, you rarely find documents where the author says they don’t know something, or surveys where many responses are simply “I don’t know.” Who would ever write a document like that? And, keeping in mind that the engine behind these systems is essentially a &lt;strong&gt;probabilistic text completer&lt;/strong&gt;, based on knowledge acquired from previously analyzed text, it doesn’t have the capacity to simply say, &lt;em&gt;“Sorry, I don’t know the answer.”&lt;/em&gt; Not least because that kind of response would require a level of reasoning that goes well beyond predicting the next word.&lt;/p&gt;

&lt;p&gt;If you do not check the answer, you will start to trust a system that seems to know all the answers, without hesitation, until the test day comes…&lt;/p&gt;

&lt;h2&gt;
  
  
  Their impact on us
&lt;/h2&gt;

&lt;p&gt;So far, we’ve analyzed the limitations of the tool, but not those of the user, and, to be honest, many of the issues actually come from the users themselves (us). Starting from a more rational point of view, one of the main risks in using GPTs is the laziness users can develop by constantly relying on the system to solve textual tasks.&lt;/p&gt;

&lt;p&gt;Until now, technology has mostly impacted “muscle-driven” tasks, where humans are quite limited, freeing us to unleash our mental power in more impactful ways. First of all, physical tasks are not easily scalable (e.g., if moving one stone requires one person, then moving three stones typically requires three people), and an automatic system can dramatically multiply productivity in a way that human labor alone cannot. And when a task becomes a commodity, we often lose the ability to perform it ourselves, or the process changes so much that it no longer resembles how a human would do it, making it difficult to learn again. To be honest, I’ve personally lost the ability to walk long distances without shoes, build a shelter, or scavenge for food, and I’m fine with that. In general, I wouldn’t say that this process has reduced humanity’s well-being.&lt;/p&gt;

&lt;p&gt;The ability to generate text that is understandable to others, to acquire data and information, process it, find solutions to specific problems, be creative, or &lt;strong&gt;simply sustain prolonged mental effort are the very capabilities that allowed Homo sapiens to thrive on Earth.&lt;/strong&gt; These activities don’t suffer from issues of scalability, and they’re extremely hard to replace with something better. Mental capabilities can’t simply be replaced, nor can they be meaningfully trained in isolation: we cannot create a gym for mental training without putting something into practice in the real world.&lt;/p&gt;

&lt;p&gt;If you'll allow me a bit of hyperbole: these systems have so far used knowledge generated by human beings. If we stop exploring complex topics and generating new knowledge through deep thinking, can we really say we’re making progress? And even if progress continues, can we be sure it’s leading to a better version of humanity? After this dystopian philosophical digression, let’s come back down to earth and look at something more tangible.&lt;/p&gt;

&lt;p&gt;Another aspect of this cognitive limitation is the loss of mental fatigue, or more precisely, the &lt;strong&gt;loss of situations that produce it&lt;/strong&gt;, which is typically what drives learning, skill acquisition, or the improvement of existing abilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8k3pihfb63myjznk830h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8k3pihfb63myjznk830h.png" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
Effort spent to solve a task with and without LLMs &lt;/p&gt;

&lt;p&gt;The process of learning implies, after acquiring the necessary theoretical foundations, a period of &lt;strong&gt;deliberate practice&lt;/strong&gt;, something that helps the skill stick in our minds and makes it more tangible. But let’s be honest: doing something outside our comfort zone is hard, and it makes us feel like idiots. We only do it when we must, and we’d never do it if someone (or something) could do it for us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s the critical point: we’ve begun asking GPTs not to explain how to do things, but to do them for us, gradually losing control of the creation process.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even if we produce something new this way, we won’t be able to repeat it. In reality, all we did was delegate. This is especially problematic for juniors, who miss out on the struggle and the tools that experience builds on the path to becoming experts. These experiences are often traded away for the illusion of saving time and for the sake of laziness.&lt;/p&gt;

&lt;p&gt;Finally, “one more thing”: I would suggest &lt;a href="https://futurism.com/altman-please-thanks-chatgpt" rel="noopener noreferrer"&gt;not saying thanks to ChatGPT&lt;/a&gt;, not only from an energy consumption point of view, but also because it’s not a human being and doesn’t follow social rules like humans do. This isn’t a rant about &lt;a href="https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine" rel="noopener noreferrer"&gt;extreme edge cases&lt;/a&gt;, but a warning about humans’ tendency to anthropomorphize things, especially in this context. &lt;strong&gt;It shouldn’t be considered normal to assign morality, feelings, or intentions to a probability distribution&lt;/strong&gt;. When we do that, we begin to change our behavior in response to these systems, attributing to them abilities they simply do not possess. On the other hand, this phenomenon can even raise moral barriers around non-existent entities, potentially slowing down the development of new technologies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use GPT as a powerful tool, not as an expert human being.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  But they work
&lt;/h2&gt;

&lt;p&gt;I could write many more words about using GPTs carefully, but the reality is that whenever you give them some unstructured, badly described, mixed-language task with wrong grammar, they complete it, almost every time, and we cannot deny it. I’ve experienced it firsthand. And, to be honest, performing a task yourself, when you know it can be completed with little or no effort, is a hard choice to make. That is why I would like to formalize when I use them, how I use them, and how I would like to use them, to find the best balance between performance, laziness, and learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Proofreader
&lt;/h3&gt;

&lt;p&gt;Currently, I find myself writing text for different reasons and in different contexts: emails, study notes, todos, bureaucratic documents, and so on… and it is quite boring, especially when a corporate-style tone is required. In this case, to be honest, I typically write the core concept and forward it to the magic tool, sometimes with some additional metadata about how the text should sound and which parts to highlight, other times with just an “improve this text” prompt. Grammar and lexical errors get corrected, and I’m quite happy with it. Probably, if I kept doing this kind of job myself, I could improve my corporate-speak and formal writing, but to be honest, at least at this moment, I really don’t see the point of investing energy in it.&lt;/p&gt;

&lt;p&gt;What I pay attention to, and avoid doing, is letting GPTs enrich the text with other information, transform the meaning of my writing, or brainstorm for me what to do with it. That is my task, and it is where I believe I can add the most value and be impactful, especially in the long run. My intention is to avoid asking the tool to “respond to this mail” or “write this document for me”. That is meaningless; it keeps my writing “on the same path” as everything else and doesn’t let me diverge onto a new creative path. So even if it could solve the problem in the short term, it kills me over the long run. On my path, this feature is disallowed: it is not a valuable trade-off for my growth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding &amp;amp; Software
&lt;/h3&gt;

&lt;p&gt;I have contrasting feelings about coding with GPTs. I consider them the most skilled junior software engineer in the world. In my experience, if you ask for a defined, short task with little or no integration with anything else (like the development of a short function), it works — and it works f*cking well. Probably the training set is filled with LeetCode-like code and lots of coding best practices. At the same time, when you have to deal with uncommon bugs deriving from systems integration, or develop something that doesn’t have to be perfect in itself but has to work really well in a complex system, the pain starts — especially if you have to make it repeat the task due to some hallucination. The time and energy lost debugging, re-prompting, highlighting the bugs, and integrating the generated code into the codebase can increase exponentially.&lt;/p&gt;

&lt;p&gt;But we have to talk about the elephant in the room: how should we handle generated code? First of all, a review of the code is needed, and in theory, a plagiarism check is necessary… Anyway, if we didn’t study and design the system, how can we be sure that the code is optimal, or at least looks good? Exploring some of the possible solutions, with some deep thinking about the task, is still needed to judge the code, and that is typically the most time-consuming part of programming. So, if we want to really do the job, not just a monkey copy-paste, we still need a huge amount of mental energy and time. In practice, the trade-off is typically this: delegate the task to GPTs and cross your fingers that it works, like a black box, losing the learning, the mental gym, and the confidentiality part of the task (sometimes we can almost talk about the technical debt of generated code); or perform the task yourself, investing time, but staying on the computer science learning path.&lt;/p&gt;

&lt;p&gt;After this brief overview, my conclusion is that when you just need to complete a job in the most trivial way possible (e.g. some script that you will run once) and nothing can be learned from it, GPT can be one of your best friends. In other contexts, it is not worthwhile to use these tools intensively: the effort saved is not that valuable compared to the quality delivered or, from a personal point of view, the lost opportunity to sharpen your coding skills.&lt;/p&gt;

&lt;p&gt;An interesting way to use these tools is to let them analyze your code and give feedback on what you have produced: an “expert” opinion, or just a different point of view that can enrich your coding experience. An AI rubber duck 🦆.&lt;/p&gt;

&lt;p&gt;You can find many excellent articles on this topic on the &lt;a href="https://antirez.com" rel="noopener noreferrer"&gt;Antirez blog&lt;/a&gt;. If you're looking for a good starting point, or simply a reference to previous work, I recommend &lt;a href="https://antirez.com/news/154" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Brainstorming and Information Retrieval
&lt;/h3&gt;

&lt;p&gt;Linked to the last lines, I really like to use GPTs as a sort of brainstorming partner. It happens quite often that I need to retrieve information about a topic, or some technical detail about something I have to do. For complex stuff, the process is typically the following: start searching about the topic, highlight the important points, search about those, and repeat until satisfied. With this process in mind, asking GPTs for some information about the topic, some side questions I have, or just “this is what I know about X, am I missing something?” lets me double-check my assumptions and beliefs, or get another perspective on the topic. It is extremely useful for side questions: things related to a topic that are not really that topic, but a variation of it. Here I can retrieve information (to be verified, of course) and perspectives that cannot simply be searched on Google, where you can usually find a more “pure” analysis of the topic.&lt;/p&gt;

&lt;p&gt;From this perspective, I have no regrets about using it. I introduced a new tool into my search toolbox, one that not only gives me information but also rearranges it and improves how well it fits my goal. This gives me more confidence in the theme, and absorbing and analyzing a different way of approaching things is always valuable, without the effort of asking another human expert. Whenever I have a thought like “OK, so if this is how X works, if I perform Y I expect Z”, I can get a second (but always to be verified) opinion that, even if wrong, can enrich my mental model of that stuff. An idea doesn’t have to be correct to lead to something valuable!&lt;/p&gt;

&lt;p&gt;Of course, GPTs cannot be the only source of truth, but they can be a really good tool for summarizing topics and organizing information. And like any other tool, whenever you depend on it to continue your work, it can become dangerous.&lt;/p&gt;

&lt;h1&gt;
  
  
  On AI Agents
&lt;/h1&gt;

&lt;p&gt;AI agents are software systems that use artificial intelligence to autonomously perform tasks on behalf of users, with a focus on achieving specific goals. To be honest, in this snippet of text, I don’t want to go deep into how they work; just know that you can allow LLMs to call some programmatic function (e.g., to surf the internet) and then arrange their output in a machine-readable format (like JSON, XML, ...) so that something can be done with it programmatically.&lt;/p&gt;

&lt;p&gt;Honestly, they seem to work at the moment, but I don’t know how it is possible to rely on them always producing well-formatted JSON as output, or on them not hallucinating; and since they are not quite deterministic, even testing is not a sound process for them.&lt;/p&gt;
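&lt;p&gt;As a minimal sketch of the idea above (with hypothetical tool names, not any specific framework’s API), the core of an agent boils down to parsing the model’s structured output and dispatching it to real code, with guards for exactly the failure modes just mentioned: malformed JSON and hallucinated tool names.&lt;/p&gt;

```python
import json

# Hypothetical tool: a stand-in for a real web-search integration.
def search_web(query):
    return "results for: " + query

# Registry of functions the model is allowed to call.
TOOLS = {"search_web": search_web}

def dispatch(raw_model_output):
    """Parse a model's JSON 'tool call' and run it, returning None
    whenever the output is malformed or names an unknown tool."""
    try:
        call = json.loads(raw_model_output)
    except (json.JSONDecodeError, TypeError):
        return None  # the model did not produce well-formed JSON
    if not isinstance(call, dict):
        return None  # valid JSON, but not the expected object shape
    name = call.get("tool")
    args = call.get("arguments", {})
    if name not in TOOLS or not isinstance(args, dict):
        return None  # hallucinated tool name or wrong argument shape
    return TOOLS[name](**args)
```

&lt;p&gt;Every call site still has to handle the &lt;code&gt;None&lt;/code&gt; case, which is exactly the reliability tax the paragraph above complains about.&lt;/p&gt;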

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43e2k21b8nqidm5yegpk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43e2k21b8nqidm5yegpk.png" alt=" " width="399" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But still, they work, and whenever you don’t need pharma-grade reliability, they can do the job. This note is starting to take a bit too long for me to write, and this is not really its focus. But I really wanted to mention them as the “next step” in the world of LLMs for the sake of completeness; to be honest, they seem promising, in case any of you would like to delve deeper into the topic.&lt;/p&gt;

</description>
      <category>mindlog</category>
      <category>chatgpt</category>
      <category>productivity</category>
      <category>llm</category>
    </item>
    <item>
      <title>On the existence of this Space</title>
      <dc:creator>Engineered Log</dc:creator>
      <pubDate>Fri, 25 Jul 2025 08:35:42 +0000</pubDate>
      <link>https://dev.to/engineered_log/on-the-existence-of-this-space-11pn</link>
      <guid>https://dev.to/engineered_log/on-the-existence-of-this-space-11pn</guid>
      <description>&lt;p&gt;The existence of this website is quite clear to me (and it will be clear to you in the next chapter). The existence of the blog section, a bit less so (and it won’t be clear to you either). So I decided to write this note with a double goal in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Formalize the ideas in my mind for better clarity.
&lt;/li&gt;
&lt;li&gt;Establish some guidelines for future notes and concepts in this part of the web.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, to illustrate what (if I don’t get bored) you might find on these pages.&lt;/p&gt;

&lt;p&gt;I’ve split all the pieces of information into three main sections: &lt;strong&gt;Why&lt;/strong&gt;, &lt;strong&gt;How&lt;/strong&gt;, and &lt;strong&gt;When&lt;/strong&gt;, listed from most important to least. This split helps me focus on one element at a time, aiming for deeper reflection (and easier changes in the future). It also gives the impression that I “thought” before writing it [cit. il tigre r24], leading the reader to believe this is a serious and worthwhile way to spend their time.&lt;/p&gt;

&lt;p&gt;But first of all, I need to define the container for all this information: &lt;strong&gt;my Digital Garden&lt;/strong&gt;. To be honest, I don’t have a formal definition of a digital garden in mind, but I like the mood it evokes when I think about it. Googling the topic, I found &lt;a href="https://walterteng.com/digital-garden" rel="noopener noreferrer"&gt;this article&lt;/a&gt; to be a good starting point, and &lt;a href="https://maggieappleton.com/garden-history" rel="noopener noreferrer"&gt;this other one&lt;/a&gt; to be the most comprehensive take on the topic. From the latter, I’ve borrowed a definition of a digital garden that I’ll use as a reference:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s a different way of thinking about our online behaviour around information — one that accumulates personal knowledge over time in an explorable space. [...] Gardens present information in a richly linked landscape that grows slowly over time. Everything is arranged and connected in ways that allow you to explore.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are lots of definitions and variations on the theme, but in my opinion, after a while, we’re all just diverging in how we express the same core idea:  &lt;strong&gt;“I like to put random but reachable stuff on my webspace.”&lt;/strong&gt; Let’s get back on track.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why
&lt;/h1&gt;

&lt;h2&gt;
  
  
  For a bit of valuable narcissism
&lt;/h2&gt;

&lt;p&gt;It’s quite a habit of mine to write things down when they matter — also because, as you’ll learn, I have a terrible memory. So, I tend to record anything I find relevant, and I’m not at all jealous of my notes. I believe that if something I produce might be valuable, it would be a shame to be the only one who can use it (and admire its craftsmanship). This has been true for my school and university notes, and not only those.&lt;/p&gt;

&lt;p&gt;I love the idea of leaving something meaningful behind, especially my thoughts, mental models, and my way of shaping life. This seems like the best way to share them: the most durable, and the most in tune with who I am. At least, for as long as there’s money to pay for the domain and hosting. &lt;em&gt;[ed. note: At the moment, I’m using free domain and hosting.]&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I believe this could have value. I’m committed to sharing the best parts of myself and what I have done - ideally in a clear and useful format - with the hope that what I share might be useful to someone.&lt;/p&gt;

&lt;h2&gt;
  
  
  To Go Deep
&lt;/h2&gt;

&lt;p&gt;Producing an artifact and writing things down is time-consuming and typically a slow process. However, this allows me the time to reflect deeply on what I believe is valuable in a topic. &lt;/p&gt;

&lt;p&gt;Additionally, I don’t want to “just share my opinion.” In fact, my opinion, like anyone’s, is not particularly useful (or even totally useless) on its own. I would like to write with some sort of scientific approach, aiming to create something that could serve as a starting point for building a foundation of knowledge on a topic. &lt;/p&gt;

&lt;p&gt;To each note, I’ll try to add documentation, references, data insights, or at least some citations, so that I can dive deeper into the subject and, in turn, you don’t have to simply trust me if the topic matters to you. It’s a win-win situation!&lt;/p&gt;

&lt;h2&gt;
  
  
  To Push Myself
&lt;/h2&gt;

&lt;p&gt;Let’s be honest, at the moment I don’t have many relevant topics to expose to the internet, or at least not enough to justify the development of this website. But this is a great way to push myself to explore the world, learn amazing things, and try to be relevant (this could be linked to the first reason).&lt;/p&gt;

&lt;p&gt;Additionally, the idea of “I do it so I can write about it” is a card I can play in the battle against procrastination - a sort of delayed public commitment. On the other hand, these notes will also serve as a way to align all the steps in my life, making things clearer for me, and hopefully, have a line derivative greater than 0 (or at least I’ll try to push it upward).&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is “what” not present
&lt;/h2&gt;

&lt;p&gt;This is my digital garden, so what I plant here depends on the season of my life and on what interests me at the moment. I hope that rationality, logic, and computer science will remain evergreen in my life. &lt;/p&gt;

&lt;p&gt;At the same time, I hope that new themes will enrich my intellectual diet - maybe for limited periods, maybe not so deeply - but some form of exploration must always be present. That’s why I prefer not to define a specific main topic for this space.&lt;/p&gt;

&lt;h1&gt;
  
  
  How
&lt;/h1&gt;

&lt;h2&gt;
  
  
  With a not-so-serious tone
&lt;/h2&gt;

&lt;p&gt;I can’t stand people who take themselves too seriously, especially when they feel the need to say something in an overly complicated way just to make it sound important. To me, language is simply a tool for sharing knowledge and facts between humans, and the best way to do that is the simplest one. When someone expresses a concept in a needlessly fancy way, the discussion isn’t really about the topic anymore, it becomes a performance about the person speaking.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"By loading an article with a plethora of scientific citations, a humanist can make another believe that his material bears the stamp of scientific approval."&lt;br&gt;&lt;br&gt;
Fooled by Randomness - Nassim Taleb&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Of course, that doesn’t mean technical content should be dumbed down or stripped of its substance — it all depends on who’s &lt;del&gt;listening&lt;/del&gt; reading. If I write about technical stuff, I’ll assume a minimal shared background. That’s for three reasons: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First of all, these are my notes, they need to be useful for me above all.
&lt;/li&gt;
&lt;li&gt;If you’re reading something technical, I assume you’re interested in the technical parts.
&lt;/li&gt;
&lt;li&gt;It’s just way easier for me to write that way :)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Trying to give some value
&lt;/h2&gt;

&lt;p&gt;This is mostly linked to the first points in the "why" section. To keep it short, whenever I write an article, one of my guiding principles is, "How can this be useful to someone?". This is mainly because I believe this question is a good heuristic for determining whether something is valuable, helping me highlight the key points of the topic. Additionally, following the philosophy of "if you can’t teach it well, you don’t understand it well enough," I think it’s a useful tip for writing well-structured papers.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Theoretical Writing process
&lt;/h2&gt;

&lt;p&gt;I enjoy designing processes for tasks, especially when there's a high probability of repeating them. I believe it’s an interesting way to optimize and understand where the value lies. I hope to write enough notes to justify this formalization, but here we go. This process begins once I’ve found the idea (which, as you might imagine, is not a trivial task) and assumes that I have a sufficient level of familiarity with the topic.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once the theme is decided, I start by outlining the main points. This shouldn't be a "linear" process; I prefer to write my outline on my smartphone whenever I come across valuable ideas.
&lt;/li&gt;
&lt;li&gt;When I'm satisfied with the outline, I set aside a block of time to dive deeper into each point, writing the draft of the article, rearranging the points, and/or deleting any that no longer seem relevant.
&lt;/li&gt;
&lt;li&gt;The next step is to search for the information I identified as missing during the previous phase, as well as explore additional articles to gain other perspectives. I do this at this stage to avoid external influence before writing the first draft.
&lt;/li&gt;
&lt;li&gt;Finally, I add the information and insights I’ve gathered to the draft, improving the clarity and grammar of the notes so that they are ready to be posted online without the risk of getting a call from my primary school teacher.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I came across a &lt;a href="https://www.emgoto.com/my-writing-process/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; that I found quite interesting on this topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  On the use of LLM
&lt;/h2&gt;

&lt;p&gt;At the moment, I make extensive use of LLMs for tasks that I find particularly boring, but I don't believe this is a sustainable practice in the long run. My goal for these notes, and for everything I create in general, is to use them with a clear purpose and not allow them to influence my ideas or the way I express them. For my notes, I want to use LLMs strictly as a grammar consultant. In short, I only use LLMs during the fourth phase of my writing process.&lt;/p&gt;

&lt;p&gt;Since I’m not a native English writer (though, to be honest, I also need this kind of check in my native languages), my typical prompt would be something like, “Highlight any grammar or clarity errors in this text,” allowing the LLM to correct snippets of my notes. This helps me fix my text and, over time, improve my writing skills.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;When&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;The main goal of this section is to commit to at least one article every two months. While it feels like a challenging interval as I write this, I believe it’s a feasible time span. Feel free to ping me if I don’t respect my deadline!&lt;/p&gt;

&lt;h1&gt;
  
  
  Some References
&lt;/h1&gt;

&lt;p&gt;I want to end these notes with a reference to two pages that I found really interesting and inspiring.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When I landed on &lt;a href="https://www.taniarascia.com/" rel="noopener noreferrer"&gt;Tania’s webpage&lt;/a&gt;, I found a peaceful space in the chaotic world of the internet, a place where content is king and noise is reduced to the minimum. I browsed through her notes pages and found the philosophy behind her article &lt;em&gt;“&lt;a href="https://www.taniarascia.com/making-the-internet-a-better-place/" rel="noopener noreferrer"&gt;Making the Internet a Better Place&lt;/a&gt;”&lt;/em&gt; particularly compelling. I got quite inspired by her, and the framework of my own space is heavily influenced by hers.
&lt;/li&gt;
&lt;li&gt;I didn’t discover &lt;a href="https://retireinprogress.com/" rel="noopener noreferrer"&gt;Mr. RIP&lt;/a&gt; through his blog, which I found a bit chaotic at first, but rather through some visual content. That’s where his philosophy of a “conscious life,” the FIRE movement, and how he applies it to his personal journey really caught my attention. His engineer-like way of seeing the world is very aligned with my own.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope that throughout my life path, I won’t forget these two pillars of thinking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meanwhile, in My Head…&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
While writing this article, my very first one, I often smiled thinking about my old self in high school. I used to hate writing essays, whether as homework or during tests. Nowadays, that same frustration has shifted to writing code documentation 🙂 ... which somehow makes writing this article feel like a pleasure in comparison!&lt;/p&gt;

</description>
      <category>digitalgarden</category>
      <category>personaldevelopment</category>
      <category>knowledgemanagement</category>
      <category>mindlog</category>
    </item>
  </channel>
</rss>
