<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ênrell</title>
    <description>The latest articles on DEV Community by Ênrell (@enrell).</description>
    <link>https://dev.to/enrell</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1138522%2F7fb1e4ab-eb52-44cd-81da-af4cf37d5bcf.jpg</url>
      <title>DEV Community: Ênrell</title>
      <link>https://dev.to/enrell</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/enrell"/>
    <language>en</language>
    <item>
      <title>Defining the architecture decisions of Navi</title>
      <dc:creator>Ênrell</dc:creator>
      <pubDate>Thu, 26 Feb 2026 08:51:45 +0000</pubDate>
      <link>https://dev.to/enrell/defining-the-architecture-decisions-of-navi-4jga</link>
      <guid>https://dev.to/enrell/defining-the-architecture-decisions-of-navi-4jga</guid>
      <description>&lt;p&gt;It was 3 AM when I had the idea for Navi a few months ago. I was in bed thinking about the impact of LLMs on developers' hard skills. Before the LLM boom, I improved my coding skills by building projects for my own use. But when OpenAI launched GPT-3, I saw that this technology could be useful. I spent a lot of time playing with GPT-3's code generation, and I still remember the feeling I had when I used it to learn OOP. I was like, "What the f*! How the f* do these guys do that?" That was the spark that kicked my hyperfocus into gear and sent me off to study the field.&lt;/p&gt;

&lt;p&gt;I studied the fundamentals behind the main model architectures, and I recently completed the Information Retrieval and Artificial Intelligence course in college. That gave me a solid base for reasoning about the present and future of LLMs.&lt;/p&gt;

&lt;p&gt;In this article, I will present the architectural decisions behind Navi, and some useful insights about agent strengths and weaknesses based on my humble LLM knowledge.&lt;/p&gt;

&lt;h2&gt;The difference between agents and agency&lt;/h2&gt;

&lt;p&gt;Look, I've been deep in the AI space for a while now, and I can tell you one thing: &lt;strong&gt;everyone throws around the word "agent" like it's going out of style&lt;/strong&gt;. But here's the thing — an agent by itself? It's just a fancy function call with commitment issues.&lt;/p&gt;

&lt;p&gt;Let me break it down with something real. Imagine you have a single AI model that can answer questions. Cool, right? That's an &lt;strong&gt;agent&lt;/strong&gt;. It's reactive. You ask, it answers. You prompt, it responds. Nothing wrong with that — but it's not exactly... autonomous.&lt;/p&gt;

&lt;p&gt;Now, what if that same AI could &lt;strong&gt;decide&lt;/strong&gt; when to search the web, &lt;strong&gt;choose&lt;/strong&gt; to call a database, and &lt;strong&gt;opt&lt;/strong&gt; to save results somewhere? Still an agent. But the moment you connect multiple agents together, give them roles, responsibilities, and a way to communicate?&lt;/p&gt;

&lt;p&gt;That's when you have an &lt;strong&gt;agency&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;The Solo Agent Trap&lt;/h3&gt;

&lt;p&gt;I've seen this mistake too many times. Developers build a "super agent" that tries to do everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Don't do this&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;HandleEverything&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Check if it needs to search&lt;/span&gt;
    &lt;span class="c"&gt;// Check if it needs to calculate&lt;/span&gt;
    &lt;span class="c"&gt;// Check if it needs to save&lt;/span&gt;
    &lt;span class="c"&gt;// Check if it needs to call API&lt;/span&gt;
    &lt;span class="c"&gt;// ...you get the idea&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what I call the "god agent" pattern. It works for demos, but falls apart in production. Why? Because &lt;strong&gt;single agents lack perspective&lt;/strong&gt;. They're trying to be everything at once.&lt;/p&gt;

&lt;h3&gt;The Agency Approach&lt;/h3&gt;

&lt;p&gt;An agency is different. Think of it like a team:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// This is the way&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Agency&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Planner&lt;/span&gt;    &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Decides the steps&lt;/span&gt;
    &lt;span class="n"&gt;Researcher&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Searches and gathers info&lt;/span&gt;
    &lt;span class="n"&gt;Coder&lt;/span&gt;      &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Writes and reviews code&lt;/span&gt;
    &lt;span class="n"&gt;Executor&lt;/span&gt;   &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Runs tools and APIs&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each agent has a &lt;strong&gt;single responsibility&lt;/strong&gt;. The planner doesn't code. The coder doesn't execute. The executor doesn't plan. They specialize, communicate, and together they solve problems that would overwhelm any single agent.&lt;/p&gt;
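&lt;p&gt;Under the same assumptions as the struct above (plus a hypothetical &lt;code&gt;Run&lt;/code&gt; method and &lt;code&gt;Solve&lt;/code&gt; orchestrator standing in for real model calls), the hand-off between roles can be sketched as a plain pipeline:&lt;/p&gt;

```go
package main

import "fmt"

// Agent is a minimal stand-in: in a real system, Run would call an LLM.
type Agent struct {
	Role string
}

// Run is hypothetical; here it just tags the input with the agent's role.
func (a Agent) Run(input string) string {
	return fmt.Sprintf("[%s] %s", a.Role, input)
}

// Agency wires the specialized agents into one workflow.
type Agency struct {
	Planner    Agent
	Researcher Agent
	Coder      Agent
	Executor   Agent
}

// Solve passes the task through each role in order, so every
// agent only sees the scope it is responsible for.
func (ag Agency) Solve(task string) string {
	plan := ag.Planner.Run(task)
	info := ag.Researcher.Run(plan)
	code := ag.Coder.Run(info)
	return ag.Executor.Run(code)
}

func main() {
	ag := Agency{
		Planner:    Agent{Role: "planner"},
		Researcher: Agent{Role: "researcher"},
		Coder:      Agent{Role: "coder"},
		Executor:   Agent{Role: "executor"},
	}
	fmt.Println(ag.Solve("build feature X"))
}
```

&lt;p&gt;Each agent only ever sees the output of the previous role, which keeps every prompt inside a small, focused scope.&lt;/p&gt;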

&lt;h3&gt;Why This Matters for Navi&lt;/h3&gt;

&lt;p&gt;When I started designing Navi, I fell into the solo-agent trap first. I built a monolithic agent that tried to handle orchestration, tool execution, memory management, and response formatting all at once.&lt;/p&gt;

&lt;p&gt;It was a mess.&lt;/p&gt;

&lt;p&gt;The breakthrough came when I realized: &lt;strong&gt;Navi isn't an agent. Navi is an agency.&lt;/strong&gt; It's a system where specialized agents work together, each with clear boundaries and purposes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Agent&lt;/th&gt;
&lt;th&gt;Agency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Scope&lt;/td&gt;
&lt;td&gt;Single task&lt;/td&gt;
&lt;td&gt;Coordinated workflow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decision Making&lt;/td&gt;
&lt;td&gt;Reactive&lt;/td&gt;
&lt;td&gt;Strategic + Reactive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure Mode&lt;/td&gt;
&lt;td&gt;All or nothing&lt;/td&gt;
&lt;td&gt;Graceful degradation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Horizontal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintenance&lt;/td&gt;
&lt;td&gt;Hard to debug&lt;/td&gt;
&lt;td&gt;Clear responsibility&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The moment you understand this difference, your entire approach to AI orchestration changes. You stop asking "How do I make my agent smarter?" and start asking "How do I make my agents work better together?"&lt;/p&gt;

&lt;p&gt;That's the foundation Navi was built on.&lt;/p&gt;

&lt;h2&gt;LLM Weaknesses&lt;/h2&gt;

&lt;p&gt;I love this tech. I studied it. I built with it. I'm building &lt;strong&gt;Navi&lt;/strong&gt; on top of it. But here's the thing I learned the hard way: if you don't understand where LLMs break, you're not building infrastructure — you're building a house of cards.&lt;/p&gt;

&lt;p&gt;Let me share what I discovered after countless debugging sessions at 4 AM.&lt;/p&gt;

&lt;h3&gt;The Context Wall That Nobody Talks About&lt;/h3&gt;

&lt;p&gt;Everyone celebrates bigger context windows like we solved everything. Cool, but here's what actually happens: your model starts forgetting stuff before it even hits the limit.&lt;/p&gt;

&lt;p&gt;I was building this feature where Navi needed to remember conversation history plus tool results plus system instructions. Sounds simple, right? Well, around token 8,000 on a 32K model, things got weird. The model would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ignore instructions I placed at the beginning&lt;/li&gt;
&lt;li&gt;Start giving generic answers&lt;/li&gt;
&lt;li&gt;Hallucinate more frequently&lt;/li&gt;
&lt;li&gt;Lose track of constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's like when you study for 8 hours straight and by hour 7 you're just... reading words without absorbing anything.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// What I thought would work&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;BuildContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;instructions&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;instructions&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="c"&gt;// Simple concatenation, right?&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// What actually happens inside the model&lt;/span&gt;
&lt;span class="c"&gt;// Attention dilution = 📉&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The lesson? &lt;strong&gt;More context ≠ more intelligence&lt;/strong&gt;. Sometimes more context = more confusion.&lt;/p&gt;
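&lt;p&gt;One mitigation I've been experimenting with (a sketch, not Navi's actual implementation) is to give the prompt an explicit token budget and drop the oldest history turns first, so instructions and recent turns always survive:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// roughTokens is a deliberately crude estimate (about 4 chars per
// token); a real build would use the provider's tokenizer.
func roughTokens(s string) int {
	return len(s) / 4
}

// BuildContext keeps the instructions intact and walks the history
// newest-first, dropping the oldest turns once the budget is spent.
func BuildContext(instructions string, history []string, budget int) string {
	used := roughTokens(instructions)
	kept := []string{}
	for i := len(history) - 1; i >= 0; i-- {
		cost := roughTokens(history[i])
		if used+cost > budget {
			break // everything older than this turn is dropped
		}
		used += cost
		kept = append([]string{history[i]}, kept...)
	}
	return instructions + "\n" + strings.Join(kept, "\n")
}

func main() {
	history := []string{"turn one (old)", "turn two", "turn three (new)"}
	fmt.Println(BuildContext("system rules", history, 10))
}
```

&lt;p&gt;It's the opposite of the naive concatenation above: the budget is a hard design constraint, not something you discover when the model starts rambling.&lt;/p&gt;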

&lt;h3&gt;The Confidence Problem&lt;/h3&gt;

&lt;p&gt;LLMs have no idea when they're wrong. None. Zero. They'll tell you the most confidently incorrect answer with 99% certainty.&lt;/p&gt;

&lt;p&gt;I built a test where I asked the same question 100 times with slight variations. The model would give contradictory answers, each time sounding absolutely certain. That's when it hit me: &lt;strong&gt;confidence ≠ correctness&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;LLMResponse&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Answer&lt;/span&gt;     &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="n"&gt;Confidence&lt;/span&gt; &lt;span class="kt"&gt;float64&lt;/span&gt; &lt;span class="c"&gt;// This number means nothing&lt;/span&gt;
    &lt;span class="n"&gt;IsCorrect&lt;/span&gt;  &lt;span class="kt"&gt;bool&lt;/span&gt;    &lt;span class="c"&gt;// The model doesn't know this&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're building systems that execute code, handle money, or touch security — and you're trusting self-reported confidence — buddy, you're gonna have a bad time.&lt;/p&gt;
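&lt;p&gt;Since self-reported confidence means nothing, the cheap signal I fall back on is &lt;em&gt;agreement&lt;/em&gt;: ask the same question several times and only trust an answer that wins a clear majority. A minimal sketch (the &lt;code&gt;ask&lt;/code&gt; function is a stand-in for a real model call):&lt;/p&gt;

```go
package main

import "fmt"

// MajorityAnswer asks the same question n times and returns the most
// frequent answer plus its agreement ratio. A low ratio is a signal
// to escalate to a human, not to trust the model.
func MajorityAnswer(ask func(string) string, question string, n int) (string, float64) {
	counts := map[string]int{}
	for i := 0; i != n; i++ {
		counts[ask(question)]++
	}
	best, bestN := "", 0
	for ans, c := range counts {
		if c > bestN {
			best, bestN = ans, c
		}
	}
	return best, float64(bestN) / float64(n)
}

func main() {
	// a fake model that contradicts itself every third call
	calls := 0
	flaky := func(question string) string {
		calls++
		if calls%3 == 0 {
			return "B"
		}
		return "A"
	}
	ans, agreement := MajorityAnswer(flaky, "same question", 9)
	fmt.Println(ans, agreement)
}
```

&lt;p&gt;High agreement doesn't prove the majority answer is right, but low agreement is a reliable tripwire for anything that touches code, money, or security.&lt;/p&gt;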

&lt;h3&gt;The Memory That Isn't There&lt;/h3&gt;

&lt;p&gt;LLMs don't have memory. They don't learn from conversations. They don't update their knowledge. After every response, it's like they're born again — pure amnesia with swagger.&lt;/p&gt;

&lt;p&gt;Everything you think is "memory" is actually built by the developer. Without external memory architecture, you have a goldfish with a PhD. Brilliant, but forgets everything in 3 seconds.&lt;/p&gt;
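&lt;p&gt;To make that concrete, here is the kind of external memory the developer has to bolt on themselves. The model never keeps state; its "memory" is just stored turns being replayed into every prompt. (A toy sketch; a real system backs this with a vector store.)&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// Memory is the state the LLM itself will never keep.
type Memory struct {
	turns []string
}

// Remember persists a turn; without this, the next request starts blank.
func (m *Memory) Remember(role, text string) {
	m.turns = append(m.turns, role+": "+text)
}

// Recall rebuilds the "memory" the model appears to have: it is just
// the developer replaying stored turns into the next prompt.
func (m *Memory) Recall() string {
	return strings.Join(m.turns, "\n")
}

func main() {
	m := new(Memory)
	m.Remember("user", "my name is Enrell")
	m.Remember("assistant", "nice to meet you")
	// every request must carry the replayed history, or the model forgets
	prompt := m.Recall() + "\nuser: what is my name?"
	fmt.Println(prompt)
}
```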

&lt;h3&gt;The Generalist Trap&lt;/h3&gt;

&lt;p&gt;LLMs know everything and nothing at the same time. Ask about Kubernetes, quantum physics, anime plot lines, and medieval history in the same conversation? No problem. But dig deeper on any single topic, and you'll find the limits.&lt;/p&gt;

&lt;p&gt;They're statistical generalists. Pattern matchers on steroids. Which is incredible for breadth, but dangerous when you need depth.&lt;/p&gt;

&lt;p&gt;I learned this while vibe coding with Claude Opus models and actively reviewing the output. The model was generating code that looked perfect but had subtle bugs. It knew the syntax and understood the pattern, but missed the edge cases, because it doesn't actually &lt;em&gt;understand&lt;/em&gt; code — it predicts tokens based on patterns it has seen.&lt;/p&gt;

&lt;h3&gt;The Hallucination Reality&lt;/h3&gt;

&lt;p&gt;Let's be clear: &lt;strong&gt;hallucination isn't a bug, it's a feature&lt;/strong&gt;. Well, not really a feature, but an inevitable byproduct of how these models work. They predict tokens probabilistically. Sometimes that prediction is wrong. And they have zero idea when it happens.&lt;/p&gt;

&lt;p&gt;What makes it worse:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ambiguous instructions&lt;/li&gt;
&lt;li&gt;Conflicting context&lt;/li&gt;
&lt;li&gt;Too much information&lt;/li&gt;
&lt;li&gt;Questions about things that don't exist
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// The scary part&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Ask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Something that doesn't exist"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;              &lt;span class="c"&gt;// Returns something plausible&lt;/span&gt;
&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;KnowsItsWrong&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="c"&gt;// false - method doesn't exist&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Why Multi-Agent Systems Actually Matter&lt;/h3&gt;

&lt;p&gt;So why all this talk about weaknesses? Because understanding them changes everything about how you build.&lt;/p&gt;

&lt;p&gt;Most people think multi-agent systems are about speed. Parallelization. Getting things done faster. They're wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's about cognitive distribution.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I split Navi into specialized agents, something interesting happened:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Single Agent&lt;/th&gt;
&lt;th&gt;Agency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Context dilution&lt;/td&gt;
&lt;td&gt;Focused context per agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;High hallucination rate&lt;/td&gt;
&lt;td&gt;Lower per-agent hallucination&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hard to debug&lt;/td&gt;
&lt;td&gt;Clear failure boundaries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Generic responses&lt;/td&gt;
&lt;td&gt;Specialized outputs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Everything fails together&lt;/td&gt;
&lt;td&gt;Graceful degradation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each agent operates in a smaller cognitive scope. Smaller scope means less confusion, better instruction adherence, and easier debugging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Before: One agent trying to do everything&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;GodAgent&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Planning&lt;/span&gt;
    &lt;span class="c"&gt;// Research&lt;/span&gt;
    &lt;span class="c"&gt;// Coding&lt;/span&gt;
    &lt;span class="c"&gt;// Execution&lt;/span&gt;
    &lt;span class="c"&gt;// Verification&lt;/span&gt;
    &lt;span class="c"&gt;// Memory&lt;/span&gt;
    &lt;span class="c"&gt;// Everything...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// After: Specialized agents&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Agency&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Planner&lt;/span&gt;    &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Just plans&lt;/span&gt;
    &lt;span class="n"&gt;Researcher&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Just researches&lt;/span&gt;
    &lt;span class="n"&gt;Coder&lt;/span&gt;      &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Just codes&lt;/span&gt;
    &lt;span class="n"&gt;Executor&lt;/span&gt;   &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Just executes&lt;/span&gt;
    &lt;span class="n"&gt;Verifier&lt;/span&gt;   &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt; &lt;span class="c"&gt;// Just verifies&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An agency isn't necessarily faster by default. &lt;strong&gt;It's more stable.&lt;/strong&gt; Speed is a side effect. Stability is the goal.&lt;/p&gt;

&lt;h2&gt;The Hexagonal Architecture Decision&lt;/h2&gt;

&lt;p&gt;The AI landscape is in constant flux. New API versions, model capabilities, providers, and technologies emerge every month.&lt;/p&gt;

&lt;p&gt;So how does this connect to architecture? Simple: if LLMs are inherently volatile and the AI field evolves so rapidly, your architecture needs to be decoupled to protect your system from these shifts.&lt;/p&gt;

&lt;p&gt;Hexagonal architecture (ports and adapters) gives me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clear boundaries&lt;/strong&gt; between LLM logic and business logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testable interfaces&lt;/strong&gt; without calling actual models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swappable implementations&lt;/strong&gt;, so providers can be replaced without touching the core&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolation&lt;/strong&gt; of hallucination-prone components
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Port: What the LLM can do&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;LLMPort&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="n"&gt;Prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;Embed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Vector&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Adapter: How we actually do it (OpenAI, Anthropic, local, etc.)&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;OpenAIAdapter&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Client&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Core: Business logic that doesn't care which LLM&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Orchestrator&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="n"&gt;LLMPort&lt;/span&gt;
    &lt;span class="c"&gt;// Doesn't know or care about the implementation&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't just clean code — it's survival. Let me be real about what can change in the next 12 months:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Technology changes&lt;/strong&gt; — OpenAI changes their API. Anthropic launches a new protocol. Your architecture determines if this is a Tuesday afternoon config update or days of rewrites.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback scenarios&lt;/strong&gt; — OpenAI goes down. Rate limits hit. Your API key gets throttled. Hexagonal architecture means your orchestrator doesn't care — it just calls the port, and the adapter handles the failover.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid deployments&lt;/strong&gt; — Some agents run locally for privacy, others in the cloud for power. Some use OpenAI, others use Claude, others use open-source models you host yourself. Same interface, different adapters.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// The orchestrator doesn't know or care&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Orchestrator&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="n"&gt;LLMPort&lt;/span&gt;
    &lt;span class="c"&gt;// Could be OpenAI&lt;/span&gt;
    &lt;span class="c"&gt;// Could be Anthropic&lt;/span&gt;
    &lt;span class="c"&gt;// Could be your local Ollama instance&lt;/span&gt;
    &lt;span class="c"&gt;// Could be a load balancer across 5 providers&lt;/span&gt;
    &lt;span class="c"&gt;// Doesn't matter. Interface stays the same.&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is why hexagonal architecture isn't overengineering — it's acknowledging that &lt;strong&gt;the LLM landscape will change&lt;/strong&gt;. The question isn't if you'll need to adapt. It's whether your architecture lets you adapt in hours or months.&lt;/p&gt;

&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;LLMs are tools, not thinkers&lt;/strong&gt; — They're incredibly powerful pattern matchers, but they don't "understand" like we do.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture matters more than ever&lt;/strong&gt; — Bad architecture with LLMs doesn't just break, it breaks unpredictably.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialization beats generalization&lt;/strong&gt; — Both for agents and for the systems that contain them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human oversight is non-negotiable&lt;/strong&gt; — No matter how autonomous your system is, keep humans in the loop for critical decisions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The hype cycle is real&lt;/strong&gt; — Ignore the "AI will replace developers" noise. Build useful things. Solve real problems.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;What's Next for Navi&lt;/h2&gt;

&lt;p&gt;I'm still early in this journey. The architecture is stabilizing, but there's still so much code to write and decisions to make, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sandbox for tool and code execution&lt;/li&gt;
&lt;li&gt;Memory management with vector stores&lt;/li&gt;
&lt;li&gt;Security layers against prompt injection&lt;/li&gt;
&lt;li&gt;Observability for agent interactions&lt;/li&gt;
&lt;li&gt;Self-healing workflows&lt;/li&gt;
&lt;li&gt;Human-in-the-loop interfaces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the foundation is solid. And it's solid because I designed it around LLM weaknesses, not their strengths.&lt;/p&gt;




&lt;p&gt;If you're building with LLMs, I'd love to hear your war stories. What architectural mistakes did you make? What worked? What completely failed?&lt;/p&gt;

&lt;p&gt;Hit me up in the comments, on &lt;a href="https://x.com/enrellsan" rel="noopener noreferrer"&gt;X&lt;/a&gt; or &lt;a href="https://discord.gg/eNsMFGZU" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;. And if you're curious about Navi's progress, the &lt;a href="https://github.com/enrell/navi" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; is where all the messy experimentation happens in public (there's no code yet, but soon).&lt;/p&gt;

&lt;p&gt;Remember: infrastructure survives bubbles. Hype doesn't. Build the former.&lt;/p&gt;

</description>
      <category>go</category>
      <category>ai</category>
      <category>navi</category>
      <category>llm</category>
    </item>
    <item>
      <title>I'm building Navi: a truly secure and useful AI orchestrator | cry about it openclaw</title>
      <dc:creator>Ênrell</dc:creator>
      <pubDate>Thu, 26 Feb 2026 08:50:25 +0000</pubDate>
      <link>https://dev.to/enrell/im-building-navi-a-truly-secure-and-useful-ai-orchestrator-cry-about-it-openclaw-100i</link>
      <guid>https://dev.to/enrell/im-building-navi-a-truly-secure-and-useful-ai-orchestrator-cry-about-it-openclaw-100i</guid>
      <description>&lt;h1&gt;Hello world guys!&lt;/h1&gt;

&lt;p&gt;The TL;DR is: I've tested openclaw and other AI orchestrators, and they always follow the exact same pattern:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;They are built as products to be sold, not as open-source projects for the community. They are created by the hype and for the hype, pushing a generic idea of "agency"—a bloated product with a bunch of features and skills that, at the end of the day, aren't even that useful. That's because they aren't built to solve real problems; they're built for marketing and to sell big tech subscriptions. &lt;br&gt;
That's exactly why OpenAI hired Peter Steinberger—a classic acqui-hire just to have another avenue to sell their API keys and subscriptions, not to solve actual problems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I can name a few projects that follow this exact pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Windsurf&lt;/li&gt;
&lt;li&gt;Adept&lt;/li&gt;
&lt;li&gt;Covariant&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All hyped up as the "next generation of agents/coding/robotics". Google, Amazon, and Meta swoop in with licensing deals and poach the founders/team.&lt;/p&gt;

&lt;p&gt;The result? The startups turn into ghost companies or run on skeleton crews. Employees cry in all-hands meetings, and the product dies or fades into irrelevance while the team goes off to work on big tech clouds/models.&lt;/p&gt;

&lt;h1&gt;Navi's differentiator&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Navi's purpose is to be a useful, secure agent orchestrator built for the tech/dev community.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've pointed out some useless aspects of openclaw, and you might be wondering why I'm trashing openclaw specifically instead of other AI orchestrators. The answer is simple: because openclaw is the most popular, the most hyped, the most sold, and the most used right now. It’s the perfect example. Anyone in the tech bubble who hasn't heard of openclaw in the last few weeks has been living under a rock.&lt;/p&gt;

&lt;p&gt;I won't deny that I'm in a bubble, that LLMs form a bubble that could pop at any minute, and that an AI winter will arrive sooner or later. But my point is: even if it is a bubble, even with all the hype and massive big tech investments, some companies will survive and open-source models will stick around. That's because there &lt;em&gt;is&lt;/em&gt; real value and real demand; it's just not the kind of value and demand big tech is trying to sell. It's a more niche, specific, real, and useful demand. And that’s Navi's goal: to survive the bubble bursting.&lt;/p&gt;

&lt;p&gt;I believe the bubble will pop, and the companies that survive will be the ones actually delivering value, whether that's a product, a service, or foundation models.&lt;br&gt;
LLMs are highly useful technologies, especially in these areas (in no particular order):&lt;/p&gt;

&lt;h2&gt;The utility of agents&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Software Development:&lt;/strong&gt; It has never been easier to get a project off the ground. Testing ideas, learning a new framework, or just doing some "vibe coding" to see if a product holds up has become absurdly fast. The trap here is for those who just copy and paste without understanding what's going on under the hood. (If you want to avoid that and learn something useful, check out this post: &lt;em&gt;how to use LLMs the right way&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Headache-free Customer Support:&lt;/strong&gt; Forget those dumb bots from the past. A smart agent integrated into WhatsApp or internal systems handles 60% to 80% of the daily grind, 24/7. I have friends who have worked in support, and the reality is: 90% of issues are mundane things, often from the elderly or people with zero digital literacy. AI handles this without breaking a sweat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sales and Marketing at scale (without sounding like a robot):&lt;/strong&gt; You can automate everything from lead qualification (SDR agents) to abandoned cart recovery and call analysis. It’s about generating content and sending personalized mass emails that actually sound natural.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding things within the company (the famous RAG):&lt;/strong&gt; You know that massive legacy documentation or internal policies nobody knows where to find? Point an internal chat at it. If implemented well, documenting and querying company knowledge becomes trivial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cybersecurity and Blue/White Hats:&lt;/strong&gt; Parsing logs by hand is boring and repetitive. A well-tuned model can chew through logs, detect threats, run superficial pentests, and generate reports in no time. It's an absolute lifesaver for security folks who need to reduce incident response times.&lt;/p&gt;

&lt;p&gt;Despite all the benefits agents bring, most people have an exaggerated view of them and, more often than not, give them superpowers they shouldn't have. The catch is thinking that agents can solve &lt;em&gt;everything&lt;/em&gt;. That couldn't be further from the truth: they should be &lt;em&gt;part&lt;/em&gt; of the process, not take over the entire process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;One of the saddest things in the tech bubble right now is the sheer negligence regarding security. It doesn't need to be enterprise-grade, but it has to be solid, with a minimum level of responsibility. &lt;br&gt;
Don't be like certain devs: don't deploy a hobby project with open ports, plaintext credentials, and 1-click Remote Code Execution. Even explicitly warning people multiple times isn't enough; it's a recipe for disaster.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Navi project
&lt;/h2&gt;

&lt;p&gt;After that extensive rant, it's time to talk about the project that's been pulsing in my mind: Navi. The idea came to me years ago as a personal project (I'm clearly late to the party, right? XD). I've always looked for ways to automate my dev environment, whether it was self-hosted apps, CLI tools, or automation scripts, but I was never satisfied with &lt;em&gt;how&lt;/em&gt; the automation was done, and I figured out why.&lt;/p&gt;

&lt;p&gt;Task automation, especially on Linux, is extremely decentralized. This means we have plenty of automation tools, but they don't communicate natively or in a standardized way. You have a bash script for one thing, a cronjob for another, and several excellent CLI tools that require you to write ugly "glue code" just to make them talk to each other. The tools don't talk to each other, and they shouldn't, for security reasons.&lt;/p&gt;

&lt;p&gt;Navi was born exactly to be that link, the maestro of this orchestra, but with one golden rule: you are always in control, and security is non-negotiable. It's not a "magical autonomous agent" that will run &lt;code&gt;rm -rf /&lt;/code&gt; because it hallucinated mid-task or misinterpreted a loose prompt.&lt;/p&gt;

&lt;p&gt;The name comes from NAVI, a computer from the anime &lt;a href="https://anilist.co/anime/339/serial-experiments-lain/" rel="noopener noreferrer"&gt;Serial Experiments Lain (1998)&lt;/a&gt;. It's one of my favorite anime and I highly recommend it: it's dense, intellectual, and philosophical. Anyway, this computer is used by the protagonist, Lain, to access the Wired (the ultra-advanced global network in the anime, sort of like an internet that mixes virtual reality, collective consciousness, and much more). NAVI is the hardware + software bundle to access the Wired. It features both navigation-based and voice-based interfaces.&lt;/p&gt;

&lt;p&gt;NAVI: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnelrd3qgy2altvhdp5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnelrd3qgy2altvhdp5i.png" alt="navi screen" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;She gives her NAVI an insane upgrade:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F061epjg5mux4pe0bemvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F061epjg5mux4pe0bemvv.png" alt="navi full view" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Navi's Architecture
&lt;/h1&gt;

&lt;p&gt;To ensure the project is robust, scalable, and above all, testable, I chose to write Navi in Go. Besides delivering absurd performance and compiling everything into a single binary (which makes life on Linux so much easier), Go allows me to handle concurrency in a very elegant and straightforward way.&lt;/p&gt;

&lt;p&gt;The foundation of the project follows the Hexagonal Architecture (Ports and Adapters).&lt;/p&gt;

&lt;p&gt;This means Navi's core intelligence and orchestration are completely isolated from external tools and interfaces. If tomorrow I want to swap out the LLM provider, the local database, or how it executes a script on my OS, I just write a new "adapter". The business logic remains intact and isolated from side effects.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you interact with it?
&lt;/h2&gt;

&lt;p&gt;As I mentioned at the beginning, Navi doesn't lock you into a heavy, proprietary web UI that tries to shove a subscription down your throat, nor is it an agent that will expose your credentials. It was designed to have multiple entry points (the Ports in our architecture):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TUI (Terminal User Interface):&lt;/strong&gt; For terminal dwellers (and anyone who uses Neovim and a window manager like Sway knows the value of never having to take your hands off the keyboard), having a fast, beautiful, and responsive interface right in the console is essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;REST/gRPC API:&lt;/strong&gt; If you want to integrate Navi into another system, build your own frontend, or trigger webhooks from other applications, the door is open.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Messaging Bots:&lt;/strong&gt; Native integration with Discord and Telegram. You can trigger automations on your server or your home machine by sending a text from your phone, in a secure and authenticated way.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The roadmap is still being drawn up, and the core is currently under development. The goal is to build something open-source, focusing on solving real productivity and system orchestration problems, without selling our souls to the hype.&lt;/p&gt;

&lt;p&gt;I'll be dropping the GitHub repository soon for anyone who wants to take a look at the code (or chip in on the PRs). Until then, we keep coding!&lt;/p&gt;

&lt;p&gt;-- Present day, present time! hahahahaha&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>llm</category>
      <category>go</category>
    </item>
    <item>
      <title>AnimeSubs an LLM Subtitle Translator</title>
      <dc:creator>Ênrell</dc:creator>
      <pubDate>Tue, 02 Dec 2025 21:11:22 +0000</pubDate>
      <link>https://dev.to/enrell/animesubs-an-llm-subtitle-translator-4eek</link>
      <guid>https://dev.to/enrell/animesubs-an-llm-subtitle-translator-4eek</guid>
      <description>&lt;p&gt;Hey guys, I want to share with you this application that I build to solve one problem that is common on original anime blue-rays - The lack in the native subtitles (Brazilian Portuguese for my case).&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;Most media content on the internet is in English, anime episodes included. I consider myself proficient in English (I can write, read, listen, and speak), but it is not my native language: I'm from Brazil, and my native language is Portuguese. So if I can read anime subtitles in English, what's the problem? Comfort. I can read subtitles in English, but my vocabulary is short and skewed toward technical English, since that is the content I consume most. When a word I don't know appears in a subtitle, I have to pause the episode and translate it, which breaks my attention. Watching in my native language is simply more comfortable and fun.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ways to solve
&lt;/h2&gt;

&lt;p&gt;There are a few ways to solve this problem. The most common is to download subtitle files from the internet; many sites provide them in many languages. But this approach has some problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Not all animes have subtitles in all languages, some animes are very niche and don’t have subtitles in many languages.&lt;/li&gt;
&lt;li&gt;The quality of the subtitles is not guaranteed, some subtitles are poorly translated, or have many errors.&lt;/li&gt;
&lt;li&gt;The timing of the subtitles may not match the video, causing a bad experience.&lt;/li&gt;
&lt;li&gt;It requires manual effort to find, download and sync the subtitles with the video.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another way to solve this problem is to use machine translation services, like Google Translate, DeepL, etc. But this approach also has some problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The quality of the translation is not always good, especially for complex sentences or idioms.&lt;/li&gt;
&lt;li&gt;The context of the dialogue may be lost, leading to awkward or incorrect translations.&lt;/li&gt;
&lt;li&gt;It may require internet connection, which is not always available.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The solution: AnimeSubs
&lt;/h2&gt;

&lt;p&gt;To solve this problem, I decided to build an application that uses Large Language Models (LLMs) to translate anime subtitles from any language to any other language that the model supports. The application is called AnimeSubs, and it works as follows:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Requirements: mkvmerge (part of mkvtoolnix) and ffmpeg must be installed and available in the system PATH.&lt;/p&gt;

&lt;p&gt;Note: The application is in its early stages, so you may encounter some bugs (mainly on Windows and Mac, since I am not focusing on those platforms right now). This is my first desktop application using Tauri, so please report any issues on the GitHub repository.&lt;/p&gt;
&lt;/blockquote&gt;
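&lt;p&gt;A quick way to verify the requirements above is to ask the shell whether both binaries are reachable on the PATH:&lt;/p&gt;

```shell
# Check that the external tools AnimeSubs depends on are installed
# and visible on the PATH.
for tool in mkvmerge ffmpeg; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing (install mkvtoolnix / ffmpeg first)"
  fi
done
```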

&lt;p&gt;Download animesubs here: &lt;a href="https://github.com/enrell/animesubs/releases" rel="noopener noreferrer"&gt;AnimeSubs&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user provides the video file or folder containing the video files and the LLM provider.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jxnpes8184t8g7rt5z3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jxnpes8184t8g7rt5z3.png" alt="settings-api-configuration" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Select the source and target languages for the translation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxl4bmwm1ytijf4768b8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxl4bmwm1ytijf4768b8.png" alt="settings-translation" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Select the output type for subtitles (ass, srt or vtt)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6egmreps7cdiec2rs3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6egmreps7cdiec2rs3s.png" alt="settings-output-type" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;After configuring the settings, the user can start the translation process.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;It is very important to select mkvmerge for muxing the subtitles into the video file,&lt;br&gt;
because ffmpeg has some issues with subtitle muxing.&lt;br&gt;
If you do not select mkvmerge, the subtitles may not work.&lt;/p&gt;

&lt;p&gt;You can also add a custom prompt. For example, if I just keep the default English -&amp;gt; Portuguese prompt, the model can mix European Portuguese and Brazilian Portuguese; both are correct, but they have different nuances, so you may want to specify which one you prefer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk51l2o5zzd63j0qsic6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk51l2o5zzd63j0qsic6.png" alt="home-options" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want a specific subtitle track, select it manually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37s8ch8ottxqpjl61kv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37s8ch8ottxqpjl61kv4.png" alt="home-subtitle-track-selection" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the start translation button and wait for the process to finish (do not close the app during the muxing process).&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AnimeSubs is a powerful tool for translating anime subtitles using LLMs. It addresses the common issues faced by anime fans who struggle with language barriers and provides a more comfortable viewing experience. By leveraging the capabilities of modern AI, AnimeSubs can deliver high-quality translations that respect the nuances of different languages and cultures.&lt;br&gt;
I hope you find this application useful and enjoy watching anime or other media in your native language! If you have any questions or feedback, feel free to reach out to me on GitHub. Happy watching!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: AnimeSubs is licensed under the AGPL-3.0 License.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>showdev</category>
      <category>braziliandevs</category>
      <category>llm</category>
      <category>ai</category>
    </item>
    <item>
      <title>Git + GitHub para iniciantes</title>
      <dc:creator>Ênrell</dc:creator>
      <pubDate>Tue, 26 Dec 2023 13:22:37 +0000</pubDate>
      <link>https://dev.to/enrell/git-github-para-iniciantes-4n3h</link>
      <guid>https://dev.to/enrell/git-github-para-iniciantes-4n3h</guid>
      <description>&lt;h2&gt;
  
  
What is GitHub?
&lt;/h2&gt;

&lt;p&gt;GitHub is a development platform built on top of Git. In practice, it works as a big hub where developers store code, collaborate on projects, and automate processes related to software development. It is used by individuals as well as companies, on projects both small and large.&lt;/p&gt;

&lt;p&gt;Although it is the best-known platform, GitHub is not the only option available. Other platforms offer similar tools, such as SourceForge, Bitbucket, GitLab (which does not belong to Microsoft and is not an add-on to GitHub), and GNU Savannah, aimed mainly at free software projects.&lt;/p&gt;

&lt;h2&gt;
  
  
Tools
&lt;/h2&gt;

&lt;p&gt;One of GitHub's core features is code storage through so-called repositories. Repositories keep an application's source code in a safe, versioned, and distributed way.&lt;/p&gt;

&lt;p&gt;Because it is built on Git, GitHub inherits a distributed version control model. This allows several people to work on the same project at the same time, each with their own copy of the repository, without one person's work directly interfering with another's.&lt;/p&gt;

&lt;p&gt;Beyond versioning, GitHub makes collaboration between developers much easier. You can contribute to projects, propose code changes, review changes made by other people, and discuss technical decisions directly on the platform.&lt;/p&gt;

&lt;p&gt;Another important point is issue tracking. Issues are used to organize tasks, record bugs, plan improvements, and centralize project-related discussions. This helps a lot with communication and work organization, especially in teams.&lt;/p&gt;

&lt;p&gt;When someone proposes a code change, it is usually done through a pull request. This feature allows the code to be reviewed before being merged into the main project, reducing errors and improving software quality.&lt;/p&gt;

&lt;p&gt;GitHub also supports continuous integration. This means you can set up automated processes to test, validate, and build the code whenever a change is pushed to the repository. In this context, GitHub Actions stands out as a native tool for automating workflows such as tests, builds, and deploys directly on the platform.&lt;/p&gt;

&lt;p&gt;Another important detail is licensing. Most repositories include a LICENSE file, which defines how the code may be used, modified, and redistributed by other people. This is essential for open source and private projects alike.&lt;/p&gt;

&lt;h2&gt;
  
  
Is it safe?
&lt;/h2&gt;

&lt;p&gt;GitHub adopts several security practices to protect both users and the projects hosted on the platform.&lt;/p&gt;

&lt;p&gt;One of the most common forms of authentication is the use of SSH keys. This method uses a key pair: one public and one private. The public key is associated with your GitHub account, while the private key stays only on your computer. Authentication happens without you having to type your password for every operation.&lt;/p&gt;

&lt;p&gt;In addition, GitHub lets you configure access permissions on repositories. You can define who may only view the code, who may push changes, and who may administer the project. This ensures that only authorized people can perform certain actions.&lt;/p&gt;

&lt;p&gt;The platform also offers two-factor authentication, adding an extra layer of security by requiring a second form of verification beyond the password. On top of that, GitHub maintains active security policies, monitoring suspicious activity and applying measures to reduce risk.&lt;/p&gt;

&lt;p&gt;Another relevant point is regular backups, which help prevent data loss in case of infrastructure failures.&lt;/p&gt;

&lt;h2&gt;
  
  
How do you create a repository on GitHub?
&lt;/h2&gt;

&lt;p&gt;The recommended way to create a repository is directly on the GitHub website. The process is simple and guided.&lt;/p&gt;

&lt;p&gt;During creation, you define the repository name, which may contain only letters, numbers, and a few special characters such as period, hyphen, or underscore. You can also add a description, which helps other people quickly understand the project's purpose.&lt;/p&gt;

&lt;p&gt;It is highly recommended to create the repository with a README file right away. This file works as the project's front door, explaining what it does, how to use it and, in many cases, how to contribute.&lt;/p&gt;

&lt;p&gt;If you already have a project ready on your computer, the usual flow is to create the repository with the README, clone that repository locally, copy the project files into the cloned folder, and then push everything to GitHub.&lt;/p&gt;
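&lt;p&gt;Sketched as commands, that flow could look like the following. The SSH URL, user, and repository names are placeholders, and the default branch may be called something other than main on your setup:&lt;/p&gt;

```shell
# 1. Clone the repository you just created on GitHub (it already has the README).
git clone git@github.com:your-user/your-repo.git
cd your-repo

# 2. Copy your existing project files into the cloned folder, then
#    stage, record, and publish them.
git add .
git commit -m "Import existing project"
git push origin main
```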

&lt;h2&gt;
  
  
Importing a project and pushing it to GitHub
&lt;/h2&gt;

&lt;p&gt;Before pushing code to GitHub securely, the ideal is to set up SSH authentication first. To better understand the concepts involved, it is worth reading an introductory text on SSH.&lt;/p&gt;

&lt;p&gt;The first step is to generate an SSH key pair. The currently recommended algorithm is ed25519, as it is more modern and secure. This can be done with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; ed25519
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During the process, the system will ask where the key should be saved. In most cases, keeping the default path is the best choice. If you want, you can just change the file name. This file will be your private key and must not be shared with anyone.&lt;/p&gt;

&lt;p&gt;Next, you will set a passphrase to protect the key. This passphrase will be requested whenever the key is used, so it is important not to forget it. Using a password manager is highly recommended.&lt;/p&gt;

&lt;p&gt;After that, the system will automatically create the public key, which has the .pub extension and lives in the same directory as the private key.&lt;/p&gt;

&lt;p&gt;While the key is being generated, a fingerprint will also be displayed. The fingerprint is a short, unique representation of the SSH key, used to verify its authenticity. Since we are only creating a local key, this information does not require special attention right now.&lt;/p&gt;

&lt;p&gt;With the key created, just copy the entire contents of the public key and add it in your GitHub account settings, in the SSH keys section. There, you give the key a name and paste the copied contents into the corresponding field.&lt;/p&gt;
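&lt;p&gt;Assuming you kept the default file name suggested by ssh-keygen for an ed25519 key, the public key contents can be printed (and then copied) like this:&lt;/p&gt;

```shell
# Print the public half of the key; this is the text you paste into
# GitHub's SSH keys settings. The private file (without .pub) stays secret.
cat ~/.ssh/id_ed25519.pub
```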

&lt;p&gt;After that, your account will be set up for SSH authentication.&lt;/p&gt;

&lt;h2&gt;
  
  
Cloning repositories via SSH
&lt;/h2&gt;

&lt;p&gt;With the SSH key configured, you can clone repositories using the SSH URL provided by GitHub. From then on, you can push commits to the remote repository securely, without having to enter a username and password for every operation.&lt;/p&gt;

&lt;p&gt;This flow makes using Git more practical and aligned with the security best practices adopted today.&lt;/p&gt;

&lt;h2&gt;
  
  
Next chapters
&lt;/h2&gt;

&lt;p&gt;In the next chapter, we will step away from practice for a bit and dive into the theory of Git. The idea is to explain how version control works and how the concepts we use today emerged and evolved over time.&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>ssh</category>
      <category>beginners</category>
    </item>
    <item>
      <title>SSH para iniciantes</title>
      <dc:creator>Ênrell</dc:creator>
      <pubDate>Tue, 26 Dec 2023 13:21:03 +0000</pubDate>
      <link>https://dev.to/enrell/ssh-para-iniciantes-126</link>
      <guid>https://dev.to/enrell/ssh-para-iniciantes-126</guid>
      <description>&lt;h2&gt;
  
  
What is SSH?
&lt;/h2&gt;

&lt;p&gt;The official &lt;a href="https://datatracker.ietf.org/doc/html/rfc4253" rel="noopener noreferrer"&gt;SSH&lt;/a&gt; specification describes it as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The Secure Shell (SSH) Protocol is a protocol for secure remote login&lt;br&gt;
   and other secure network services over an insecure network."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In short, it is an encrypted network protocol used for secure communication over an insecure network. It lets users securely access and remotely control devices and servers over the internet.&lt;/p&gt;

&lt;p&gt;SSH provides a secure way to authenticate and exchange encrypted data between two systems, preventing potential interception or tampering with data during transmission. This is particularly crucial when accessing remote servers or performing administrative tasks on distributed systems.&lt;/p&gt;

&lt;p&gt;Since SSH is a network protocol, not a tool, we have to install a tool that gives us the features needed to use SSH.&lt;/p&gt;

&lt;p&gt;In our case we will use the &lt;a href="https://www.openssh.com/" rel="noopener noreferrer"&gt;OpenSSH&lt;/a&gt; suite, an open source implementation of the SSH protocol bundled with tools that are essential for us developers. Every developer should study SSH and cryptography; it is indispensable.&lt;/p&gt;

&lt;p&gt;Let's start by creating an SSH key pair with the "ssh-keygen" tool.&lt;/p&gt;

&lt;p&gt;To create an SSH key pair using a modern, fast algorithm like "ed25519", we only need to pass the "-t" parameter to the generator, and it will guide you through creating the key pair: &lt;code&gt;ssh-keygen -t ed25519&lt;/code&gt;.&lt;/p&gt;
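&lt;p&gt;For scripts or quick experiments, ssh-keygen can also run non-interactively. The file path and empty passphrase below are only for demonstration; when generating a real key, keep the interactive prompts and set a passphrase:&lt;/p&gt;

```shell
# Generate an ed25519 key pair without prompts:
#   -t  key type (ed25519)
#   -f  where to write the private key (the public key gets a .pub suffix)
#   -N  passphrase (empty here only for demonstration)
ssh-keygen -t ed25519 -f ./demo_key -N "" -q

# Two files are produced: demo_key (private) and demo_key.pub (public).
ls demo_key demo_key.pub
```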

&lt;h3&gt;
  
  
But what is this "ed25519" thing?
&lt;/h3&gt;

&lt;p&gt;O "ed25519" é um sistema de chaves públicas de curvas elípticas de utilizado para criptografia assimétrica, onde um par de chaves é gerado: uma chave privada e uma chave pública. Ele faz parte de uma família de curvas elípticas chamada "Curve25519", desenvolvida pelo renomado matemático Daniel J. Bernstein no ano de 2011. O algoritmo "ed25519" oferece vantagens significativas de segurança e desempenho sobre algoritmos mais antigos como o RSA e SHA-256.&lt;/p&gt;

&lt;h2&gt;
  
  
Asymmetric keys
&lt;/h2&gt;

&lt;p&gt;Asymmetric keys, also known as public-key cryptography, are a type of cryptographic system that uses a pair of related keys: a public key and a private key. The keys are mathematically related, but information encoded with one key can only be decoded with the other key of the pair. This means that what is encrypted with the public key can only be decrypted with the corresponding private key, and vice versa.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Public key:&lt;/strong&gt; This key is usually distributed freely and known to everyone. It is used to encrypt information that can only be decoded with the corresponding private key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Private key:&lt;/strong&gt; This key is kept secret and protected by its owner. It is used to decrypt information that was encrypted with the corresponding public key.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The use of asymmetric keys is fundamental to many aspects of digital security, such as authentication, digital signatures, and the secure exchange of session keys in protocols like SSH (Secure Shell) and SSL/TLS (used for secure communication on the web).&lt;/p&gt;

&lt;p&gt;A common example is accessing a secure website (over HTTPS). The web server holds a private key that corresponds to a public key embedded in the SSL certificate. When you send information to the server, the connection is encrypted using the public key in the certificate, and only the server, with the corresponding private key, can decrypt it. This ensures the confidentiality and integrity of the transmitted data.&lt;/p&gt;

&lt;h2&gt;
  
  
Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, SSH plays a fundamental role in securing network communications, especially for remote access to devices and servers. By using the SSH protocol and asymmetric keys, such as those from the "ed25519" algorithm, developers can establish secure connections and prevent interception or tampering with data in transit.&lt;/p&gt;

&lt;p&gt;The choice of "ed25519" highlights the importance of modern, efficient algorithms over older methods. Asymmetric keys are central to public-key cryptography, enabling secure authentication, digital signatures, and the safe exchange of information in many applications, including protocols such as SSH and SSL/TLS.&lt;/p&gt;

&lt;p&gt;In a development context, using the OpenSSH suite and understanding the principles of cryptography, including asymmetric keys, are indispensable. Creating an SSH key pair, as shown with the command "ssh-keygen -t ed25519", is an essential first step toward securing communications and remote access.&lt;/p&gt;

&lt;p&gt;In short, SSH and asymmetric keys are crucial to building a secure development environment, protecting the integrity and confidentiality of information transmitted over the network. Knowing these concepts and applying them properly is essential for any developer committed to the security of their systems and communications.&lt;/p&gt;

</description>
      <category>ssh</category>
      <category>beginners</category>
      <category>security</category>
    </item>
    <item>
      <title>Git para iniciantes</title>
      <dc:creator>Ênrell</dc:creator>
      <pubDate>Tue, 12 Dec 2023 21:28:50 +0000</pubDate>
      <link>https://dev.to/enrell/git-para-iniciantes-23hn</link>
      <guid>https://dev.to/enrell/git-para-iniciantes-23hn</guid>
      <description>&lt;h2&gt;
  
  
  Disclaimer
&lt;/h2&gt;

&lt;p&gt;Read everything in order; it is important not to skip steps. Some parts of the text provide the context needed to understand what follows.&lt;/p&gt;

&lt;h2&gt;
  
  
What is Git?
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://github.com/git/git" rel="noopener noreferrer"&gt;Git&lt;/a&gt; repository on GitHub, the Git developers define it as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Git is a fast, scalable, distributed revision control system with an unusually rich command set that provides both high-level operations and full access to internals."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Git is an open source project covered by the GNU General Public License version 2 (some parts are under different licenses, compatible with the GPLv2). It was originally written by Linus Torvalds, the creator of &lt;a href="https://github.com/torvalds/linux" rel="noopener noreferrer"&gt;Linux&lt;/a&gt;, with the help of a group of hackers around the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
Why use Git?
&lt;/h2&gt;

&lt;p&gt;If you have ever found yourself duplicating projects and renaming folders to create different versions, you know how complicated that can get. Working that way in a team only adds to the challenge.&lt;/p&gt;

&lt;p&gt;Git works as a change history for your project. Instead of modifying the main project directly, you create "branches" to experiment with ideas. When everything is ready, you "commit" those changes, creating a clear record of what was done.&lt;/p&gt;

&lt;p&gt;This not only keeps everything organized but also makes teamwork easier. If others are working on the project, Git helps merge all the changes smoothly.&lt;/p&gt;

&lt;p&gt;So the next time you think about making crazy copies and renames, consider adopting Git. It will make your project more organized, efficient, and collaborative, without all that mess.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic Git commands
&lt;/h2&gt;

&lt;p&gt;Note: some of the commands in this section have trade-offs in how they are used. I will not cover those here; that is a topic for another post.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;git init&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first basic command you will use is &lt;code&gt;git init&lt;/code&gt;. But what does it do? What is it for?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git init&lt;/code&gt; is used to start a new Git repository in an existing directory or in a new, empty one. By running &lt;code&gt;git init&lt;/code&gt;, you are essentially telling Git to start tracking changes to the files inside that directory.&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;git init&lt;/code&gt;, a few specific files and directories are created inside your project directory. The main one is a hidden directory called &lt;code&gt;.git&lt;/code&gt;, which contains the entire structure Git needs to version your files.&lt;/p&gt;

&lt;p&gt;DO NOT DELETE THE .git FOLDER UNDER ANY CIRCUMSTANCES!&lt;br&gt;
If you do, you will erase the entire change history of your local repository.&lt;/p&gt;
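
&lt;p&gt;A minimal sketch of the step above (the directory name is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Create a project folder and initialize a Git repository in it
  mkdir my-project
  cd my-project
  git init

  # The hidden .git directory now holds all of Git's internal data
  ls -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;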

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git config user.name&lt;/code&gt; e &lt;code&gt;git config user.email&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;git config user.name&lt;/code&gt; and &lt;code&gt;git config user.email&lt;/code&gt; commands set the user name and email address associated with the commits you make in a Git repository. This information is recorded in every commit to identify who made the changes. These commands are an important step for identifying the person behind each commit, which is fundamental for team communication. You use them like this: &lt;code&gt;git config user.name "your name here"&lt;/code&gt; (the quotes let you use a name with spaces); the same syntax applies to the email.&lt;/p&gt;
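
&lt;p&gt;A minimal sketch, assuming you are inside a repository (the name and email are placeholders); with the &lt;code&gt;--global&lt;/code&gt; flag, the setting applies to every repository for your user:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Set the identity recorded in your commits (quotes allow spaces)
  git config user.name "Ada Lovelace"
  git config user.email "ada@example.com"

  # Print the stored values to confirm
  git config user.name
  git config user.email
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;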

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;git add&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;git add&lt;/code&gt; command adds specific file changes to what is called the "index" (or "staging area"); I will explain these terms later. This step is required before making a commit, which records the changes in the repository history.&lt;/p&gt;

&lt;p&gt;When you change your files, Git needs to know which of those changes you want to include in the next commit. &lt;code&gt;git add&lt;/code&gt; lets you select the specific changes you want, preparing them for the next commit. You use it like this: &lt;code&gt;git add (your file name with its extension)&lt;/code&gt;.&lt;/p&gt;
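
&lt;p&gt;A short sketch, assuming an initialized repository (the file name is hypothetical); &lt;code&gt;git status&lt;/code&gt;, not covered in this post, is a handy companion that shows what is currently staged:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Stage a single file for the next commit
  git add index.html

  # Show which changes are staged and which are not
  git status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;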

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git add .&lt;/code&gt;
The &lt;code&gt;git add .&lt;/code&gt; command adds all the changes you have made to the "staging" area, that is, every change in your files is prepared for the commit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recommend being careful with &lt;code&gt;git add .&lt;/code&gt;. For example:&lt;/p&gt;

&lt;p&gt;Imagine you have finished the "sobre.html" (about) page of your project and also made some changes to the "home.html" page. However, you only want to commit the changes to the about page, to keep things organized.&lt;/p&gt;

&lt;p&gt;If you use &lt;code&gt;git add .&lt;/code&gt;, you will stage the changes to both "sobre.html" and "home.html" for the next commit, which may not be what you want. To avoid this, use &lt;code&gt;git add sobre.html&lt;/code&gt; instead. You can then commit only the changes to "sobre.html", keeping your version history precise and organized.&lt;/p&gt;
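
&lt;p&gt;The scenario above can be sketched like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Both files changed, but only sobre.html should go into the next commit
  git add sobre.html

  # Only sobre.html appears under "Changes to be committed";
  # home.html is left out, ready for its own commit later
  git status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;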

&lt;h2&gt;
  
  
  What is a Commit?
&lt;/h2&gt;

&lt;p&gt;And finally, the much-mentioned commit, a fundamental piece of working with Git.&lt;/p&gt;

&lt;p&gt;In simple terms, a commit in Git is a snapshot of your project at a given moment. It is like taking a picture of the current state of your files, including every change you added to the "staging area" with the &lt;code&gt;git add&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Each commit carries a descriptive message that explains the purpose of the change. These messages are crucial for understanding the history of your project and make team collaboration easier, since other developers can understand your changes just by reading the commit messages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to use &lt;code&gt;git commit -m "descriptive message"&lt;/code&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After using &lt;code&gt;git add&lt;/code&gt; to stage your changes, the next step is to use &lt;code&gt;git commit&lt;/code&gt; to create a commit.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Basic syntax:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"mensagem descritiva"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-m&lt;/code&gt;: Lets you provide the descriptive message directly on the command line.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tips for commit messages:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Be descriptive:&lt;/strong&gt; The message should explain what the change accomplishes. Avoid vague messages like "fixes" or "changes".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Be concise:&lt;/strong&gt; Keep the message short and to the point. Avoid overly detailed explanations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use the present tense:&lt;/strong&gt; Keep the verb tense consistent. Prefer the present tense, as in "Add feature" rather than "Added feature".&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are willing to learn a commit convention, I recommend &lt;a href="https://www.conventionalcommits.org/pt-br" rel="noopener noreferrer"&gt;Conventional Commits&lt;/a&gt;.&lt;/p&gt;
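
&lt;p&gt;Putting the commands from this post together, a first complete workflow might look like this (the identity, file name, and message are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Initialize the repository and set your identity
  git init
  git config user.name "Ada Lovelace"
  git config user.email "ada@example.com"

  # Create a file, stage it, and record the snapshot
  echo "Hello" &gt; index.html
  git add index.html
  git commit -m "Add initial home page"

  # Inspect the history you just created
  git log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;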

&lt;h2&gt;
  
  
  Next chapters
&lt;/h2&gt;

&lt;p&gt;In the next chapter, I will show how to use Git together with GitHub.&lt;/p&gt;

</description>
      <category>git</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
