<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Maksym Mosiura</title>
    <description>The latest articles on DEV Community by Maksym Mosiura (@maksym_mosiura_7dd1c98618).</description>
    <link>https://dev.to/maksym_mosiura_7dd1c98618</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3140474%2F5aa25ce7-0c88-406f-aa9f-19ecde50de1e.png</url>
      <title>DEV Community: Maksym Mosiura</title>
      <link>https://dev.to/maksym_mosiura_7dd1c98618</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maksym_mosiura_7dd1c98618"/>
    <language>en</language>
    <item>
      <title>AI and Human Will</title>
      <dc:creator>Maksym Mosiura</dc:creator>
      <pubDate>Tue, 10 Mar 2026 09:42:17 +0000</pubDate>
      <link>https://dev.to/maksym_mosiura_7dd1c98618/ai-and-human-will-9hi</link>
      <guid>https://dev.to/maksym_mosiura_7dd1c98618/ai-and-human-will-9hi</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc93hshy8p8bz5fv4gwuj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc93hshy8p8bz5fv4gwuj.png" alt="Human decides where to go with AI" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wherever you look today, in a news feed, a podcast, or a conference keynote, someone is telling you that AI will transform everything: your job, your community, your world, even your thoughts. The signal is genuine. The transformation is real. And the most important question goes largely unasked:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;how do we choose to think about it?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's what I will try to answer in this article.&lt;/p&gt;




&lt;p&gt;The idea is simple: how we think about it will determine how we live through it.&lt;/p&gt;




&lt;h2&gt;This Has Happened Before&lt;/h2&gt;

&lt;p&gt;History is reassuring, if you know where to look. The rise of AI is not the first time a technological leap made whole categories of human work feel suddenly obsolete.&lt;/p&gt;

&lt;p&gt;Consider the assembly line. Before its invention, producing complex goods (automobiles, metal structures, packaged food) required skilled workers at every stage: moving materials, inspecting quality, assembling components by hand. The process was slow, expensive, and deeply human. Then Ford and others reorganized production around continuous flow, and nearly everything changed. The assembly line transformed whole industries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;manufacturing became faster;&lt;/li&gt;
&lt;li&gt;products became affordable to the masses;&lt;/li&gt;
&lt;li&gt;and production became cheaper.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's when a wide swath of workers found their specialized knowledge replaced by repetitive, interchangeable tasks.&lt;/p&gt;

&lt;p&gt;The pattern that followed is instructive: industries were disrupted, but new industries emerged.&lt;/p&gt;

&lt;p&gt;Many skills became obsolete, and new skills took their place. Some survived for a while in specialized niches; many disappeared for good. The people who adapted, who understood the new tools, who found the human layer that automation could not replicate, were the ones who shaped what came next.&lt;/p&gt;

&lt;p&gt;That same pattern is unfolding again today. And like every time before, it is not reversible. The future has already begun.&lt;/p&gt;




&lt;h2&gt;The Race No One Wins by Standing Still&lt;/h2&gt;

&lt;p&gt;A programmer who was excellent last year may find that AI can now produce comparable code faster and cheaper. This is not a reflection of their talent. It is a reflection of the tool's capability. The uncomfortable truth is that being good at your craft is no longer sufficient protection. A machine can approximate that craft on demand. The result may not be ideal, it may even be buggy, but it is often good enough.&lt;/p&gt;

&lt;p&gt;So how do you think about a future that looks, at first glance, so threatening?&lt;/p&gt;

&lt;p&gt;The answer is straightforward, even if the path is not:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;improve yourself.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not in the generic, motivational sense, but in a very specific one. You have to ask: what can AI not do? What can no tool do?&lt;/p&gt;

&lt;p&gt;The answer is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;carry responsibility.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;The One Thing Machines Don't Have&lt;/h2&gt;

&lt;p&gt;For an AI system, a failed outcome is simply a failed output. It can be logged, retried, discarded, or revisited. There is no consequence felt, no lesson internalized, no stake in what happens next. Even with correct context, even with previous lessons learned and cached, the problem is the same: no responsibility.&lt;/p&gt;

&lt;p&gt;In the real world, consequences are not always recoverable. Decisions ripple outward, into people's lives, into ecosystems, into economies. When something goes wrong, someone must answer for it. Someone must explain why it happened and how the consequences will be corrected.&lt;/p&gt;

&lt;p&gt;Think about how AI-only decision-making might unfold inside an organization:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Company → request to AI → AI acts → wrong decision made → no accountability → reputational or financial damage&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now compare that to a process where a human is in the loop:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Company → decision maker → validated reasoning → AI executes → decision maker accountable → outcomes reviewed and refined&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The difference is not efficiency. The difference is ownership. The second process is slower in places — and that slowness is a feature, not a bug. &lt;strong&gt;It is where judgment lives.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Full automation may be appropriate in narrow, well-defined scenarios. But as a general model for consequential decisions, it fails the moment complexity enters the picture: hidden motivations, competing priorities, long-term goals, political context, ethical nuance, and many other things that cannot be fully explained to an AI or captured in its context. These are not edge cases. They are the substance of real decisions.&lt;/p&gt;

&lt;p&gt;AI is an extraordinary lens: it can surface options you hadn't considered, test your reasoning against scenarios you hadn't imagined, and identify blind spots you didn't know you had. But the lens does not look at itself. A person can. &lt;strong&gt;You do!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Again, this is about ownership and responsibility.&lt;/p&gt;




&lt;h2&gt;The Skills That Actually Matter Now&lt;/h2&gt;

&lt;p&gt;This reframing opens something important. If AI handles the execution layer — the generation, the computation, the pattern-matching — then the human layer moves upward. The skills that grow in value are not the ones that compete with AI. They are the ones that use it well.&lt;/p&gt;

&lt;p&gt;Systems thinking. Logical reasoning under uncertainty. The ability to hold a complex picture in mind and ask the right questions of it. Critical validation — not accepting an output because it sounds plausible, but interrogating it: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this accurate?&lt;/li&gt;
&lt;li&gt;Is this context-appropriate?&lt;/li&gt;
&lt;li&gt;Is this a hallucination, a misinterpretation, a confident-sounding error?&lt;/li&gt;
&lt;li&gt;Is this ...?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The shift is from using AI as a &lt;strong&gt;tool&lt;/strong&gt; that solves your problems, to using AI as a &lt;strong&gt;partner&lt;/strong&gt; that makes you sharper and smarter at solving them yourself. The former makes you dependent. The latter makes you stronger.&lt;/p&gt;




&lt;h2&gt;What We Can Teach the Next Generation&lt;/h2&gt;

&lt;p&gt;This question has a particular urgency when it comes to young people. Today's teenagers and children have grown up with instant answers. Ask ChatGPT. Craft an essay. Get a solution. The friction that builds capacity, the cognitive work itself, simply gets bypassed.&lt;/p&gt;

&lt;p&gt;The brain, like any muscle, develops through resistance. When young people outsource their thinking to the AI, they are not saving time. They are skipping the training that builds judgment, skepticism, and intellectual confidence.&lt;/p&gt;

&lt;p&gt;How the old system would try to solve this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;more homework;&lt;/li&gt;
&lt;li&gt;more lessons;&lt;/li&gt;
&lt;li&gt;more class hours, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach will not fix it. What can fix it is &lt;strong&gt;Will&lt;/strong&gt;: the deliberate choice to engage with hard problems rather than hand them off. To use AI as a scaffold for exploration rather than a substitute for thought. Instead of blindly trusting any answer from the AI, critical thinking should be triggered. Why? To ask: is this answer actually right? How do I know? What would change it? Is this a fact, or someone's joke or misinterpretation?&lt;/p&gt;

&lt;p&gt;Critical thinking is &lt;strong&gt;not a subject&lt;/strong&gt;. It is a &lt;strong&gt;habit&lt;/strong&gt;. And habits are built through practice, not policy.&lt;/p&gt;




&lt;h2&gt;Will Is the Differentiator&lt;/h2&gt;

&lt;p&gt;The people who thrive in the era of AI will not necessarily be the most technically skilled. They will be the ones who choose to remain active, rather than passive. They will be the ones who use these tools to &lt;strong&gt;extend their thinking&lt;/strong&gt; rather than replace it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;learning instead of consuming&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Will is the genuine, self-directed commitment to growth. Will is what separates a consumer from a creator and a user from a builder. Someone carried by the current will fall behind someone who learns to navigate it. That has always been the case; this is nothing new.&lt;/p&gt;

&lt;p&gt;The future does not belong to those who fear AI, nor to those who blindly trust it. It belongs to those who understand what it is: a powerful, irresponsible, context-blind instrument.&lt;br&gt;
The ones who bring their own compass to it, who refuse to outsource their judgment along with their tasks, are the ones who will define what comes next.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>psychology</category>
      <category>productivity</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Revolutionizing Wine Industry Technology: Why Micro Frontends Require Specialized Expertise</title>
      <dc:creator>Maksym Mosiura</dc:creator>
      <pubDate>Sat, 11 Oct 2025 23:15:36 +0000</pubDate>
      <link>https://dev.to/maksym_mosiura_7dd1c98618/revolutionizing-wine-industry-technology-why-micro-frontends-require-specialized-expertise-3gmp</link>
      <guid>https://dev.to/maksym_mosiura_7dd1c98618/revolutionizing-wine-industry-technology-why-micro-frontends-require-specialized-expertise-3gmp</guid>
      <description>&lt;p&gt;&lt;strong&gt;The Digital Transformation Imperative in Wine&lt;/strong&gt;&lt;br&gt;
The wine industry stands at a critical crossroads in 2025. As demographic shifts accelerate and consumer preferences evolve, wineries face unprecedented pressure to modernize their digital infrastructure. The consolidation of platforms like Commerce7's acquisition of WineDirect, the emergence of sophisticated DTC systems, and the explosive growth of the subscription economy signal a fundamental transformation in how wineries must engage with customers.&lt;/p&gt;

&lt;p&gt;Today's wine businesses require complex, multi-faceted digital ecosystems that seamlessly integrate e-commerce platforms, wine club management, tasting room experiences, inventory systems, customer relationship management, compliance tracking, and virtual engagement tools. The traditional monolithic approach to building these platforms has become a bottleneck, creating development friction, deployment delays, and scalability nightmares.&lt;br&gt;
This is where micro frontends represent not just an evolution, but a revolution for the wine industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Makes Micro Frontends with Module Federation Truly Innovative&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Micro frontends break the monolithic frontend architecture into smaller, independently deployable applications. When combined with Webpack 5's Module Federation, NX monorepos, and Zephyr Cloud deployment, this architecture creates a powerful paradigm that seems tailor-made for the wine industry's unique challenges.&lt;/p&gt;

&lt;p&gt;Module Federation introduces a groundbreaking capability - applications can share code and consume components from other applications at runtime without rebuilding or redeploying the entire system. For wineries, this means the e-commerce team can deploy new features to the shopping cart experience while the wine club team simultaneously updates membership management, all without coordination nightmares or system-wide deployments.&lt;/p&gt;

&lt;p&gt;NX monorepos provide the orchestration layer, offering intelligent caching, code generation, and build optimization that can reduce CI/CD pipeline times from minutes to seconds. Meanwhile, Zephyr Cloud revolutionizes deployment by taking snapshots of applications and deploying them to the edge in sub-seconds, making "testing in production" a viable reality rather than a dangerous gambit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Brand-New Challenges That Demand Expertise&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's the critical truth that many organizations overlook: &lt;strong&gt;&lt;em&gt;implementing micro frontends with Module Federation is deceptively complex&lt;/em&gt;&lt;/strong&gt;. The challenges facing development teams in 2025 are not merely technical—they represent entirely new problem spaces that require specialized knowledge and battle-tested expertise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foir21vtdumyd0x42pb1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foir21vtdumyd0x42pb1g.png" alt="Modules Dependencies" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;1. Dependency Version Management: The Distributed Nightmare&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
In traditional monolithic applications, managing dependencies is straightforward. In a micro frontend architecture with Module Federation, it becomes an intricate chess game. Consider this scenario: your host application uses React 18.2.0, but a remote wine club module depends on React 17.0.2. This version mismatch doesn't just cause warnings—it breaks fundamental features like &lt;code&gt;useState&lt;/code&gt;, &lt;code&gt;useEffect&lt;/code&gt;, and shared context, potentially crashing the entire user experience.&lt;/p&gt;

&lt;p&gt;Module Federation's shared API provides a solution through singleton enforcement (shared modules can also be created through factories). Even so, configuring it correctly requires a deep understanding of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic versioning and compatibility matrices&lt;/li&gt;
&lt;li&gt;The webpack Module Federation configuration (whose sharing decisions are resolved at runtime, not at build time)&lt;/li&gt;
&lt;li&gt;How to handle breaking changes across distributed teams&lt;/li&gt;
&lt;li&gt;The trade-offs between strict versioning and flexible integration&lt;/li&gt;
&lt;/ul&gt;
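&lt;p&gt;As a concrete illustration, here is a minimal sketch of the &lt;code&gt;shared&lt;/code&gt; section of a host's Module Federation config that enforces React as a singleton. The remote name, URL, and versions are illustrative, not taken from any real project.&lt;/p&gt;

```typescript
// Minimal sketch: host config enforcing one copy of React across all remotes.
// Names, URL, and versions are illustrative.
const moduleFederationConfig = {
  name: "host",
  remotes: {
    // Deployed independently by the wine club team.
    wineClub: "wineClub@https://cdn.example.com/wineClub/remoteEntry.js",
  },
  shared: {
    // singleton: only one instance is ever loaded, so hooks and context work.
    // strictVersion: an incompatible remote fails loudly instead of silently
    // falling back to its own copy.
    react: { singleton: true, strictVersion: true, requiredVersion: "^18.2.0" },
    "react-dom": { singleton: true, strictVersion: true, requiredVersion: "^18.2.0" },
  },
};
```

&lt;p&gt;With &lt;code&gt;singleton: true&lt;/code&gt;, a remote built against React 17 surfaces the mismatch at load time instead of crashing &lt;code&gt;useState&lt;/code&gt; deep inside the running application.&lt;/p&gt;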

&lt;p&gt;The complexity multiplies exponentially as you add more micro frontends. A wine business platform might have separate modules for inventory, e-commerce, club management, tasting room bookings, compliance tracking, and marketing automation. Each module potentially introduces its own dependency tree, and ensuring they all work harmoniously requires sophisticated dependency conflict resolution strategies that most developers have never encountered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xnpbyv3n3sag5m6m88u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xnpbyv3n3sag5m6m88u.png" alt="Versioning issue" width="540" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;2. Runtime Integration and Performance Optimization&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Module Federation enables runtime code sharing, but this introduces performance challenges that are fundamentally different from traditional bundling approaches. Key issues include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Payload Size&lt;/strong&gt;: Each micro frontend must include its own runtime and initialization code. Without careful optimization, users could download duplicate dependencies, bloating the application and degrading performance—a critical concern for customer-facing wine e-commerce experiences where every millisecond of load time impacts conversion rates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lazy Loading Orchestration&lt;/strong&gt;: Experts must implement sophisticated lazy loading strategies, determining which modules load on initial page render versus on-demand. For a winery's online store, this might mean instantly loading the product catalog while deferring the wine club signup module until needed.&lt;/p&gt;
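&lt;p&gt;The mechanics behind on-demand loading are simple to sketch. The following framework-agnostic snippet (one common approach, not Module Federation's internal code; names are illustrative) caches each dynamic import so a federated module is fetched at most once, no matter how many components request it:&lt;/p&gt;

```typescript
// Cache the in-flight promise per remote so repeated requests for the same
// federated module trigger only one network fetch.
type ModuleFactory = () => Promise<unknown>;

const remoteCache = new Map<string, Promise<unknown>>();

function loadRemote(name: string, factory: ModuleFactory): Promise<unknown> {
  const cached = remoteCache.get(name);
  if (cached) return cached;
  const loading = factory(); // e.g. () => import("wineClub/Signup")
  remoteCache.set(name, loading);
  return loading;
}
```

&lt;p&gt;A framework wrapper such as &lt;code&gt;React.lazy&lt;/code&gt; can then sit on top of this, rendering a fallback while the wine club module is still in flight.&lt;/p&gt;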

&lt;p&gt;&lt;strong&gt;Cache Management&lt;/strong&gt;: With multiple independently deployed modules, cache invalidation becomes a distributed systems problem. When the tasting room booking module updates, how do you ensure users get the latest version without forcing a full page reload or breaking the user experience?&lt;/p&gt;
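&lt;p&gt;One common answer, sketched below with assumed names and URLs, is to resolve each remote's entry URL at runtime from a small deployment manifest: every deploy publishes a new version, the URL changes, and stale cached copies are simply never requested again.&lt;/p&gt;

```typescript
// The manifest maps each remote to the version published by its last deploy.
type RemoteManifest = Record<string, string>;

function remoteEntryUrl(manifest: RemoteManifest, remote: string): string {
  const version = manifest[remote];
  if (!version) throw new Error(`unknown remote: ${remote}`);
  // A new deploy bumps the version, producing a fresh, cache-missing URL.
  return `https://cdn.example.com/${remote}/${version}/remoteEntry.js`;
}
```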

&lt;p&gt;&lt;em&gt;&lt;strong&gt;3. The NX Monorepo Mastery Requirement&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
While NX dramatically simplifies micro frontend development, mastering it requires significant expertise. Development teams must understand:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Generators and Schematics&lt;/strong&gt;: NX provides powerful code generation tools, but using them effectively requires understanding the underlying patterns and architectural decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency Graph Management&lt;/strong&gt;: NX visualizes and manages dependencies between apps and libraries within the monorepo. For a wine platform with dozens of shared libraries (authentication, design system, payment processing, compliance utilities), understanding and maintaining this graph is crucial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Incremental Builds and Computation Caching&lt;/strong&gt;: NX's intelligent caching can speed up builds by 10x or more, but only if configured correctly. This requires deep knowledge of task orchestration, affected project detection, and distributed caching strategies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Module Federation Configuration&lt;/strong&gt;: NX streamlines Module Federation setup, but developers still need to understand the underlying webpack configuration, remote entry points, and how to expose and consume federated modules correctly.&lt;/p&gt;
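&lt;p&gt;For orientation, an NX workspace typically describes each remote in a small &lt;code&gt;module-federation.config.ts&lt;/code&gt; that NX translates into the underlying webpack setup. A sketch, with illustrative project and module names:&lt;/p&gt;

```typescript
// Sketch of a module-federation.config.ts for a remote in an NX workspace.
// Project and exposed-module names are illustrative.
const config = {
  name: "wineClub",
  exposes: {
    // The public surface of this micro frontend; everything else stays private.
    "./Signup": "./src/remotes/signup.tsx",
    "./MembershipDashboard": "./src/remotes/membership-dashboard.tsx",
  },
};

export default config;
```

&lt;p&gt;The host then lists &lt;code&gt;wineClub&lt;/code&gt; among its remotes, and NX wires the remote entry points behind the scenes.&lt;/p&gt;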

&lt;p&gt;&lt;strong&gt;&lt;em&gt;4. Zephyr Cloud Deployment: The New Frontier&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Zephyr Cloud represents a paradigm shift in micro frontend deployment, but leveraging it effectively requires understanding concepts that didn't exist in traditional deployment workflows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sub-Second Deployments&lt;/strong&gt;: Zephyr can deploy to the edge in milliseconds, enabling true preview environments and rapid iteration. However, orchestrating multiple micro frontends with different deployment cadences requires sophisticated release management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version Management Across Distributed Frontends&lt;/strong&gt;: When you have five micro frontends deployed independently, managing version compatibility and rolling back problematic releases becomes exponentially more complex than traditional deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge Distribution&lt;/strong&gt;: Zephyr deploys to the edge for optimal performance, but this introduces new considerations around cache propagation, regional consistency, and debugging production issues that manifest only in specific geographic regions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;5. Testing in a Distributed Architecture&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Testing micro frontends represents an entirely new challenge domain:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration Testing Across Boundaries&lt;/strong&gt;: Each micro frontend may work perfectly in isolation, but how do you test their integration? You need comprehensive integration test suites that can load and test multiple federated modules together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version Compatibility Testing&lt;/strong&gt;: With independently versioned modules, you must test all possible version combinations—a combinatorial explosion that requires intelligent test strategies.&lt;/p&gt;
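&lt;p&gt;The arithmetic behind that explosion is worth making explicit: the number of combinations is the product of each module's supported version count.&lt;/p&gt;

```typescript
// Worst-case number of version combinations to cover: the product of how
// many versions each independently deployed module supports.
function combinationCount(versionsPerModule: number[]): number {
  return versionsPerModule.reduce((product, n) => product * n, 1);
}

// Five modules with three supported versions each means 3^5 = 243 combos,
// which is why exhaustive testing gives way to smarter selection strategies.
```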

&lt;p&gt;&lt;strong&gt;End-to-End Testing Complexity&lt;/strong&gt;: E2E tests must now account for modules loading asynchronously, potential network failures during module loading, and the complexity of multiple independently deployed frontends.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;6. Monitoring and Debugging: The Distributed Systems Challenge&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
When something goes wrong in a monolithic application, debugging is relatively straightforward. In a micro frontend architecture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distributed Tracing&lt;/strong&gt;: Errors can originate from any of dozens of federated modules. Implementing comprehensive distributed tracing to track user actions across module boundaries requires specialized tools and expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Monitoring&lt;/strong&gt;: You need to monitor not just overall application performance, but the load times and performance of individual federated modules, identifying bottlenecks in the distributed architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error Attribution&lt;/strong&gt;: When a production error occurs, determining which micro frontend, which version, and which team is responsible requires sophisticated logging and error tracking infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;7. Security and Authentication in a Distributed Context&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8yb2lxi3bztpbmanbe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8yb2lxi3bztpbmanbe3.png" alt="Auth state" width="784" height="512"&gt;&lt;/a&gt;&lt;br&gt;
Sharing authentication state and managing security across independently deployed micro frontends introduces challenges that don't exist in monolithic apps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared Authentication State&lt;/strong&gt;: How do you ensure all micro frontends share the same authentication token and user session without creating security vulnerabilities?&lt;/p&gt;
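&lt;p&gt;One widely used pattern, sketched here with illustrative names rather than as a prescription, is a tiny auth store published as a Module Federation shared singleton library, so every micro frontend observes the same session:&lt;/p&gt;

```typescript
// Because the library is shared as a singleton, every remote imports this
// same instance and therefore sees the same token and the same updates.
type Listener = (token: string | null) => void;

function createAuthStore() {
  let token: string | null = null;
  const listeners = new Set<Listener>();
  return {
    getToken: () => token,
    setToken(next: string | null) {
      token = next;
      listeners.forEach((listener) => listener(next)); // notify every module
    },
    subscribe(listener: Listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe handle
    },
  };
}

const authStore = createAuthStore();
```

&lt;p&gt;Keeping the token in one shared module also avoids scattering it across each remote's own storage, which narrows the surface for inconsistencies and leaks.&lt;/p&gt;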

&lt;p&gt;&lt;strong&gt;Authorization Consistency&lt;/strong&gt;: Each module may have different authorization requirements (e.g., regular customers vs. wine club members vs. tasting room staff). Maintaining consistent authorization logic across distributed modules is non-trivial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secure Module Loading&lt;/strong&gt;: Module Federation loads code from multiple sources at runtime. Ensuring this doesn't create security vulnerabilities (like code injection or man-in-the-middle attacks) requires careful configuration and security expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Why the Wine Industry Needs This Innovation Now&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Despite these challenges, micro frontends with Module Federation, NX, and Zephyr represent exactly what the wine industry needs in 2025:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team Autonomy&lt;/strong&gt;: Different teams can work on e-commerce, wine clubs, tasting rooms, and inventory independently, matching the organizational structure of modern wineries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rapid Innovation&lt;/strong&gt;: Deploy new features to specific customer touchpoints without risking the entire platform—critical in an industry racing to meet evolving consumer expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: As wineries grow, add new micro frontends for new business units or acquisition integration without architectural rewrites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;: Edge deployment with Zephyr ensures fast load times for customers worldwide, directly impacting conversion rates and customer satisfaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technology Flexibility&lt;/strong&gt;: Different teams can use different frameworks or versions as needed, future-proofing the architecture as technology evolves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The Bottom Line: Expert Implementation is Non-Negotiable&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
The convergence of Module Federation, NX monorepos, and Zephyr Cloud deployment represents the cutting edge of frontend architecture in 2025. For the wine industry, this technology stack offers transformative potential—but only if implemented correctly.&lt;/p&gt;

&lt;p&gt;The challenges outlined above aren't hypothetical. They're real problems that development teams encounter daily when building micro frontend architectures. These are fundamentally new problems that require specialized expertise. A team experienced in traditional monolithic frontends, or even microservices on the backend, will struggle without guidance from experts who have battle-tested knowledge of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Advanced webpack Module Federation configuration&lt;/li&gt;
&lt;li&gt;NX monorepo architecture and optimization&lt;/li&gt;
&lt;li&gt;Zephyr Cloud deployment strategies&lt;/li&gt;
&lt;li&gt;Distributed systems monitoring and debugging&lt;/li&gt;
&lt;li&gt;Micro frontend testing strategies&lt;/li&gt;
&lt;li&gt;Version management across independent modules&lt;/li&gt;
&lt;li&gt;Performance optimization in distributed architectures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feetaua0jop24vvn1rz5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feetaua0jop24vvn1rz5v.png" alt="Advanced Configurations" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For wineries investing in digital transformation, partnering with experts who understand these challenges isn't optional—it's essential. The technology is innovative and powerful, but the path is littered with pitfalls that can waste months of development time and millions in investment.&lt;/p&gt;

&lt;p&gt;The future of wine industry technology is modular, distributed, and sophisticated. Success requires not just adopting new tools, but embracing new paradigms—with the guidance of those who have already navigated the complexity.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>frontend</category>
      <category>productivity</category>
    </item>
    <item>
      <title>MCP? What is that?</title>
      <dc:creator>Maksym Mosiura</dc:creator>
      <pubDate>Tue, 07 Oct 2025 07:10:58 +0000</pubDate>
      <link>https://dev.to/maksym_mosiura_7dd1c98618/mcp-what-is-that-38ap</link>
      <guid>https://dev.to/maksym_mosiura_7dd1c98618/mcp-what-is-that-38ap</guid>
      <description>&lt;p&gt;MCP stands for Model Context Protocol. It’s like a special language or system that helps AI programs talk to other software and data easily. Think of it like a super helpful bridge or bridges.&lt;/p&gt;

&lt;p&gt;Today, AI models like ChatGPT or Claude need to get information from lots of places. Without MCP, this is hard. MCP makes it easy for AI to access data, connect to apps, and do tasks. This saves time and makes AI smarter.&lt;/p&gt;

&lt;p&gt;People use MCP because it solves their unique problems. Before, AI models couldn’t easily work with other tools or databases. MCP changes that. It lets AI not just read information but also take actions—like booking a ride or checking a calendar.&lt;/p&gt;

&lt;p&gt;MCP is mostly used by developers building apps today. It’s popular in coding tools and automation software. It helps AI work faster and do more things on its own.&lt;/p&gt;

&lt;p&gt;Also, an MCP server is easy to build and share with others, including with AI tools.&lt;/p&gt;

&lt;p&gt;We, as humans, can communicate with words. As developers, we can build interfaces to our applications. But an AI model cannot reliably build those interfaces or consume them correctly on its own. That is where MCP is needed: it helps with exactly this kind of communication. It is an adapter that an AI can use to work with an application or service.&lt;/p&gt;
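&lt;p&gt;Under the hood, MCP messages are JSON-RPC 2.0. As a rough sketch (the tool name and arguments below are made up for illustration), an AI client asking a server to run a tool sends something like this:&lt;/p&gt;

```typescript
// An MCP "tools/call" request as a plain object. The method name follows the
// MCP specification; the tool name and arguments are illustrative.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "check_calendar", // a tool the server advertised via "tools/list"
    arguments: { date: "2025-10-07" },
  },
};
```

&lt;p&gt;The server replies with a result (or an error) carrying the same &lt;code&gt;id&lt;/code&gt;, which is how the AI matches answers back to the questions it asked.&lt;/p&gt;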

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4blkb46qg0mhttng47o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4blkb46qg0mhttng47o.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How MCP Changes the AI Market&lt;/strong&gt;&lt;br&gt;
Today MCP is shaking up the AI world. It’s like giving every AI superpowers to connect and work with real-world data and tools.&lt;/p&gt;

&lt;p&gt;Because MCP makes connections standard and simple, many new AI apps and tools are popping up. More people can create AI-powered solutions without building everything from scratch.&lt;/p&gt;

&lt;p&gt;Companies see MCP as a way to win in AI. It lowers costs and speeds up projects. Also, MCP is inspiring new business ideas where developers can make money by sharing AI tools that talk through MCP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Much Is MCP Used? Here Are Some Numbers&lt;/strong&gt;&lt;br&gt;
A lot of MCP servers exist today. Some directories list between 5,000 and 16,000 different MCP servers. But here’s the catch: most of these servers are rarely or never used. Studies show that about 90% or more sit mostly untouched 🤯.&lt;/p&gt;

&lt;p&gt;Usage is really focused on just a few popular servers. The top 10 servers get almost half of all the attention and use. Most MCP servers are still in early stages or experimental.&lt;/p&gt;

&lt;p&gt;Even though many exist, a small few do most of the real work. This means the MCP ecosystem is still young and growing. Excitement and downloads are increasing quickly, with month-over-month growth sometimes above 30%.&lt;/p&gt;

&lt;p&gt;It reminds me of the early days of the crypto world: a lot of services went unused, and only a few won the race.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Could MCP Look Like in the Future?&lt;/strong&gt;&lt;br&gt;
MCP’s future looks bright and full of cool possibilities. Experts think MCP will become the universal way AI connects with everything: a global standard, similar to how HTTP standardized the internet.&lt;/p&gt;

&lt;p&gt;We might see AI work with real-time data from healthcare, finance, or education in ways never done before. Imagine AI that remembers all your health history or helps invest in stocks by accessing real market data live.&lt;/p&gt;

&lt;p&gt;MCP could also get smarter with new tech like quantum computing and better ways to keep data private and secure.&lt;/p&gt;

&lt;p&gt;In the years ahead, MCP might create a whole economy where developers build and sell special AI connections. It could unlock new jobs and tools that help everyone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt; &lt;br&gt;
MCP helps AI get smarter by giving it the power to connect, learn, and act in the real world. While many MCP servers exist, only a few see heavy use today, but the future holds big growth and new tech that could change how AI fits into everyday life.&lt;/p&gt;

&lt;p&gt;This exciting technology is still growing fast and could be a game changer in AI in a few years.&lt;/p&gt;

&lt;p&gt;Read my other articles.&lt;br&gt;
&lt;strong&gt;Other posts in this series:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/drill-down-ai-agents-part-1-38j"&gt;Part 1 - Drill Down AI Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/rag-smarter-ai-agents-4ej9"&gt;Part 2 - RAG: Smarter AI Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/ai-x-web-evolution-how-intelligent-systems-are-powering-the-future-of-the-internet-25l3"&gt;Part 3 - AI x Web Evolution: How Intelligent Systems Are Powering the Future of the Internet&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>computerscience</category>
      <category>learning</category>
    </item>
    <item>
      <title>AI x Web Evolution: How Intelligent Systems Are Powering the Future of the Internet</title>
      <dc:creator>Maksym Mosiura</dc:creator>
      <pubDate>Sun, 06 Jul 2025 15:51:20 +0000</pubDate>
      <link>https://dev.to/maksym_mosiura_7dd1c98618/ai-x-web-evolution-how-intelligent-systems-are-powering-the-future-of-the-internet-25l3</link>
      <guid>https://dev.to/maksym_mosiura_7dd1c98618/ai-x-web-evolution-how-intelligent-systems-are-powering-the-future-of-the-internet-25l3</guid>
      <description>&lt;p&gt;The internet has undergone profound shifts—from static HTML pages to dynamic social platforms to decentralized protocols. In parallel, &lt;strong&gt;Artificial Intelligence&lt;/strong&gt; has evolved from rule-based automation to deep neural networks and now to intelligent agents with memory and reasoning.&lt;/p&gt;

&lt;h2&gt;
  
  
  But what’s most exciting?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Web and AI&lt;/em&gt;&lt;/strong&gt; are now intertwining, reshaping how we access, filter, and understand information. These two evolutions have already happened, and we can use their fruits now.&lt;/p&gt;

&lt;p&gt;This article explores how AI is not just adapting to the next web but powering it, and how new architectures like &lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/rag-smarter-ai-agents-4ej9"&gt;Retrieval-Augmented Generation (RAG)&lt;/a&gt; make it possible to find and synthesize information faster than ever before.&lt;/p&gt;

&lt;p&gt;Let’s briefly revisit the phases of the web:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zt69vt00uch582afbzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zt69vt00uch582afbzo.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each generation of the web unlocks new possibilities. Each time AI evolves with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rise of AI That Thinks
&lt;/h2&gt;

&lt;p&gt;In early Web2, AI was about predicting your next click (mostly in ads and sales).&lt;/p&gt;

&lt;p&gt;Now? It's about answering your question even before you finish asking it, or automatically connecting data across platforms in real time.&lt;/p&gt;

&lt;p&gt;As AI agents are built with technologies like vector databases, semantic search, and long-term memory, they enable systems to act more like researchers, not just responders.&lt;/p&gt;

&lt;p&gt;A core technology behind this is &lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/rag-smarter-ai-agents-4ej9"&gt;RAG&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;RAG allows an AI to search through external and internal knowledge (structured or unstructured) and then generate responses based on both the user query and the retrieved context.&lt;/p&gt;

&lt;p&gt;In contrast to traditional AI pipelines, in this new era of the web RAG:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understands semantic meaning (not just keywords), which makes search faster and results more precise;&lt;/li&gt;
&lt;li&gt;Finds contextually relevant info across massive datasets, ranking results differently than Google, Yahoo, or Bing;&lt;/li&gt;
&lt;li&gt;Synthesizes personalized, high-quality answers, so there is no need to comb through the first five websites to find your answer or product;&lt;/li&gt;
&lt;li&gt;Enables modular and reusable AI components across Web3/Web4 apps, i.e. components or micro-frontends that can be reused in responses to the requester.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s imagine a user interacting with a Web3 dashboard for DAOs and tokens. They ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What are the trending governance proposals in DeFi this week?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A traditional system might &lt;strong&gt;return a list of links&lt;/strong&gt; - a list you must filter by hand, opening and researching each item yourself.&lt;/p&gt;

&lt;p&gt;A RAG-powered AI agent does this instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Searches vectorized DAO forum content from IPFS or Arweave&lt;/li&gt;
&lt;li&gt;Retrieves proposals semantically similar to &lt;strong&gt;governance&lt;/strong&gt; and &lt;strong&gt;DeFi&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Generates a summary using LLMs like GPT-4&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Outputs an up-to-date answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There are 3 key proposals active this week in Aave, MakerDAO, and Curve, all focused on yield delegation and cross-chain governance…&lt;/p&gt;
&lt;/blockquote&gt;
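
&lt;p&gt;The three steps above can be sketched in miniature. This toy Python snippet (all data and helper names are invented for illustration) replaces the vector search and the LLM with trivial stand-ins, but keeps the retrieve-then-generate shape:&lt;/p&gt;

```python
# Toy stand-ins for a RAG pipeline: a "corpus" of forum posts,
# a retrieval step, and a generation step. Real systems would use
# a vector database for retrieval and an LLM for generation.
FORUM_POSTS = [
    "Aave proposal: enable yield delegation for stakers",
    "MakerDAO proposal: cross-chain governance voting",
    "Meme coin airdrop announcement",
]

def retrieve(query_terms, posts):
    """Stand-in for semantic retrieval: keep posts sharing query terms."""
    return [p for p in posts
            if any(term in p.lower() for term in query_terms)]

def summarize(posts):
    """Stand-in for LLM generation: collapse the hits into one answer."""
    return f"{len(posts)} key proposals active this week: " + "; ".join(posts)

hits = retrieve(["governance", "yield", "proposal"], FORUM_POSTS)
answer = summarize(hits)
```

&lt;p&gt;The irrelevant announcement never reaches the answer; the user gets one synthesized response instead of a list of links.&lt;/p&gt;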

&lt;p&gt;The general idea is simplicity: the AI does the analysis for you, and you simply consume the final result. Isn't it cool?&lt;/p&gt;

&lt;h2&gt;
  
  
  Not that fast, rabbit...
&lt;/h2&gt;

&lt;p&gt;The most problematic part here hides in which data will be consumed by AI; how frequently it will be updated; and how AI will prioritize or rank that data.&lt;/p&gt;

&lt;p&gt;At first glance, this sounds like a technical detail. In reality it’s a fundamental risk for the next generation of intelligent systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does this mean for users?
&lt;/h2&gt;

&lt;p&gt;It means that if someone creates a new token that suddenly goes viral - even if it's a scam - AI may surface it as a top recommendation or trend. The reason is simple: the data signals (mentions, volume, velocity) suggest it's important.&lt;/p&gt;

&lt;p&gt;And here's the problem:&lt;/p&gt;

&lt;p&gt;AI doesn’t &lt;em&gt;understand&lt;/em&gt; truth. It understands patterns. That's it for now.&lt;/p&gt;

&lt;p&gt;If the training data shows massive engagement, rapid trading, or a flood of social mentions, the AI may well interpret that as relevance or value. Even if the token is malicious, unverified, or manipulative, for AI in its current iteration that is enough.&lt;/p&gt;

&lt;p&gt;This happens because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;RAG and vector search prioritize semantic relevance, not factual correctness.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Language models are non-opinionated unless explicitly tuned or filtered.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attention = priority, unless you introduce trust-weighted signals.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So a scam with good marketing can exploit AI's data ingestion the same way it manipulates human psychology.&lt;/p&gt;

&lt;p&gt;Moreover, it's not just about tokens in Web3 - it's about everything!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters More in Web3
&lt;/h2&gt;

&lt;p&gt;In Web2, scams are filtered by central gatekeepers: app stores, SEO penalties, community reporting (in social media, articles, etc.).&lt;/p&gt;

&lt;p&gt;But in Web3, data is decentralized, fast-moving, and often unverifiable. AI agents working across these environments must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Decide what to trust&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Evaluate how much weight to give a source&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Possibly cross-reference on-chain vs off-chain data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This becomes even more critical when these agents are acting on your behalf—recommending protocols, approving transactions, or giving financial summaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix: Explainability and Signal Hygiene
&lt;/h2&gt;

&lt;p&gt;To prevent AI from blindly promoting noise or scams, we must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Introduce source ranking layers in vector databases (reputation, historical accuracy, verification).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add metadata weighting to embeddings (e.g., verified contributor flag).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use counter-signals (blocklists, anomaly detection) to detect hype vs trust.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trust only verified smart contracts (which can be scanned and parsed to find even hidden scams).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
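
&lt;p&gt;A source ranking layer can be surprisingly simple at its core. Here is a minimal sketch (the weights, sources, and numbers are hypothetical) of trust-weighted scoring: the final rank blends semantic similarity with source reputation, so a viral but unverified source is demoted:&lt;/p&gt;

```python
# Minimal trust-weighted ranking sketch: blend semantic similarity
# (how relevant the content is) with source reputation (how much we
# trust where it came from). All numbers below are hypothetical.
def trust_weighted_score(similarity, reputation, alpha=0.6):
    """Blend similarity (0..1) with reputation (0..1); alpha sets the mix."""
    return alpha * similarity + (1 - alpha) * reputation

candidates = [
    {"source": "audited-protocol-forum", "similarity": 0.78, "reputation": 0.95},
    {"source": "anonymous-hype-feed",    "similarity": 0.91, "reputation": 0.10},
]

# The hype feed is MORE similar to the query, but its low reputation
# pushes it below the audited source.
ranked = sorted(
    candidates,
    key=lambda c: trust_weighted_score(c["similarity"], c["reputation"]),
    reverse=True,
)
```

&lt;p&gt;Real systems would learn these weights from historical accuracy and verification signals rather than hard-coding them, but the principle is the same: attention alone must not equal priority.&lt;/p&gt;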

&lt;p&gt;Eventually, we’ll need AI agents that explain their reasoning:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This token is trending due to a large volume of posts in the last 12 hours, but it lacks verified smart contract audits and is flagged by 3 DAO reputational feeds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Until then, users must remember that AI agents are only as good as the data they eat and the signals they learn to trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the next stage?
&lt;/h2&gt;

&lt;p&gt;We’re entering a phase where the web is no longer a place, but a conversation between you, your data, and intelligent agents that live across platforms.&lt;/p&gt;

&lt;p&gt;These agents will help us navigate decentralized worlds. They will also help extract meaning from fragmented ecosystems and act as companions, advisors, and co-builders.&lt;/p&gt;

&lt;p&gt;But this is just the beginning... It will take more iterations to become smarter and trusted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other posts in this series:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/drill-down-ai-agents-part-1-38j"&gt;Part 1 - Drill Down AI Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/rag-smarter-ai-agents-4ej9"&gt;Part 2 - RAG: Smarter AI Agents&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>learning</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>RAG: Smarter AI Agents</title>
      <dc:creator>Maksym Mosiura</dc:creator>
      <pubDate>Thu, 22 May 2025 03:54:50 +0000</pubDate>
      <link>https://dev.to/maksym_mosiura_7dd1c98618/rag-smarter-ai-agents-4ej9</link>
      <guid>https://dev.to/maksym_mosiura_7dd1c98618/rag-smarter-ai-agents-4ej9</guid>
      <description>&lt;p&gt;Most developers who works with AI eventually hit the same wall - &lt;strong&gt;context&lt;/strong&gt;. You can pipe tools together, chain AI prompts, or write clever workflows, but at some point, you realize your agent isn’t really thinking. It’s reacting. You need something different.&lt;/p&gt;

&lt;p&gt;You have probably used n8n, LangChain, or a similar tool, and created pipelines where each AI step feeds the next. That works for formatting data or guiding workflows, and it is fine for simple agents. But what if your agent needs to remember? What if it needs to learn across conversations? Or adapt to changes? ...or retrieve knowledge like a human?&lt;/p&gt;

&lt;p&gt;Before diving into the code, let’s break down AI memory into three simple categories:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stateless&lt;/strong&gt; (&lt;em&gt;No Memory&lt;/em&gt;):&lt;br&gt;
The agent processes each prompt independently. It's great for reformatting data, transforming it, or getting a quick answer. Let's call it "simple transformation".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short-Term Memory&lt;/strong&gt;:&lt;br&gt;
Think of a chatbot that remembers the last couple of interactions, usually the last 10-20 messages. Each chat is isolated. Context is limited to a session window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long-Term Memory&lt;/strong&gt;:&lt;br&gt;
This is more about intelligence. This agent builds an evolving knowledge base across all chats. It &lt;del&gt;try to remember&lt;/del&gt; remembers previous user interactions, and connects concepts. This is made possible by vector databases and semantic embeddings.&lt;/p&gt;
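
&lt;p&gt;The difference between these categories is easy to see in code. Here is a minimal sketch of short-term memory (the class name and window size are illustrative): only the last N messages survive, and anything older silently drops out of context:&lt;/p&gt;

```python
from collections import deque

# Minimal short-term memory sketch: a bounded session window.
# Messages beyond the window silently fall out of context,
# which is exactly why chatbots "forget" after a while.
class ShortTermMemory:
    def __init__(self, window=10):
        self.messages = deque(maxlen=window)  # bounded session window

    def add(self, role, text):
        self.messages.append({"role": role, "content": text})

    def context(self):
        """What gets sent to the model along with the next prompt."""
        return list(self.messages)

memory = ShortTermMemory(window=3)
for i in range(5):
    memory.add("user", f"message {i}")
# only messages 2, 3, and 4 survive the 3-message window
```

&lt;p&gt;Long-term memory replaces that bounded deque with a persistent vector store, which is exactly what we build below.&lt;/p&gt;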

&lt;p&gt;In this article we will explore what RAG is and how it works, how it differs from traditional AI pipelines, and how you can build your own local memory-enabled agent (using Python and FAISS: fully functional memory for offline use or for deployment to your own infrastructure).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;RAG stands for Retrieval-Augmented Generation. &lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  AI Pipelines vs. RAG
&lt;/h2&gt;

&lt;p&gt;Let’s start with a misconception:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I already have an AI agent that processes inputs through multiple steps. Isn’t that the same?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not quite.&lt;/p&gt;

&lt;p&gt;So what’s the difference?&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Traditional AI Pipeline&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;steps chained together (for example, summarize → extract → classify)&lt;/li&gt;
&lt;li&gt;each step operates on the output of the previous one, or waits for multiple outputs&lt;/li&gt;
&lt;li&gt;no persistent memory or knowledge base (data comes in, data goes out)&lt;/li&gt;
&lt;li&gt;repetition of work if context is lost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s what people usually see after 10-20 messages: the context has been lost and they need to remind the AI agent about it. It is sad... But that’s the cost of the simple approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RAG Architecture&lt;/strong&gt;, on the other hand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;uses a vector database as external memory&lt;/li&gt;
&lt;li&gt;retrieves semantically relevant information (instead of guessing from the prompt)&lt;/li&gt;
&lt;li&gt;uses both the current prompt and retrieved knowledge to generate a smarter response&lt;/li&gt;
&lt;li&gt;memory is structured, persistent, and scalable (&lt;strong&gt;&lt;em&gt;...and costly, hahah&lt;/em&gt;&lt;/strong&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So you might ask: how does memory work in RAG?&lt;br&gt;
RAG agents don’t store raw text - they store meanings, using embeddings. Think of it like associative memory: when you say “I want to automate tasks”, the system doesn’t look for exact matches; it looks for concepts that are semantically close. Every AI agent is just an LLM on steroids, and RAG is one of those steroids. The RAG layer consumes units of data - let's call each one a "memory entry".&lt;/p&gt;

&lt;p&gt;Each memory entry includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;original text&lt;/li&gt;
&lt;li&gt;vector (embedding)&lt;/li&gt;
&lt;li&gt;metadata (who, when, source, etc.)&lt;/li&gt;
&lt;/ul&gt;
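
&lt;p&gt;As a quick sketch, a memory entry can be modeled like this (the field names are illustrative; in practice the embedding vector comes from your embedding model):&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Minimal model of a memory entry as described above: the original
# text, its embedding vector, and metadata (who, when, source).
@dataclass
class MemoryEntry:
    text: str
    embedding: list              # vector produced by the embedding model
    metadata: dict = field(default_factory=dict)

entry = MemoryEntry(
    text="Client wants to automate invoice generation.",
    embedding=[0.0] * 1536,      # placeholder; a real model fills this in
    metadata={"who": "client", "source": "chat", "when": "2025-05-01"},
)
```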

&lt;p&gt;This allows fast, flexible search across tens of thousands of interactions without leaking sensitive data to the public. That matters a lot to users. Note that "&lt;em&gt;public&lt;/em&gt;" here means the group of users who share the RAG store - which may or may not be the actual public... but this article is not about philosophy anyway.&lt;/p&gt;

&lt;p&gt;Instead, let’s build a basic RAG memory system using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FAISS — Facebook’s local vector search engine&lt;/li&gt;
&lt;li&gt;OpenAI’s embedding API (or replace it with any public/local embedding model later)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# deps to install
pip install faiss-cpu openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now that we have the dependencies installed, let's store some memory on the local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import faiss
import numpy as np
from openai import OpenAI

# openai&gt;=1.0 uses a client object instead of the old module-level API
client = OpenAI(api_key="YOUR_API_KEY")

# Sample data
texts = [
    "Client wants to automate invoice generation.",
    "Client asks about CRM integration options.",
    "He discussed API for syncing customer data.",
]

# Convert text to an embedding vector
def get_embedding(text):
    response = client.embeddings.create(
        input=text,
        model="text-embedding-ada-002"
    )
    return np.array(response.data[0].embedding, dtype="float32")

embeddings = np.array([get_embedding(t) for t in texts])
index = faiss.IndexFlatL2(embeddings.shape[1])  # L2-distance index
index.add(embeddings)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a link to &lt;a href="https://openai.com/index/introducing-text-and-code-embeddings/" rel="noopener noreferrer"&gt;OpenAI embedding intro&lt;/a&gt; and &lt;a href="https://platform.openai.com/docs/guides/embeddings#embedding-models" rel="noopener noreferrer"&gt;OpenAI embedding models&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once data is stored, we should be able to retrieve it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How do I automate customer data sync?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;query_vec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_embedding&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;D&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;I&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;query_vec&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;I&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Relevant memory:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;texts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query returns only the relevant memories from the store. The output will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Relevant memory: He discussed API for syncing customer data.
Relevant memory: Client wants to automate invoice generation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the core of your system. You can now pick data by asking questions in human language.&lt;/p&gt;

&lt;p&gt;Now that we have this working, let's see why local RAG rocks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y1su4lvkzq70whzueg8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y1su4lvkzq70whzueg8.png" alt="Table comparison RAG with Pipeline" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can easily scale this to 100K+ entries, integrate it with a local LLM like Llama (find one on &lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;), or deploy it to your own infrastructure. No cloud dependencies required 💪&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other posts in this series:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/drill-down-ai-agents-part-1-38j"&gt;Part 1 - Drill Down AI Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/ai-x-web-evolution-how-intelligent-systems-are-powering-the-future-of-the-internet-25l3"&gt;Part 3 - AI x Web Evolution: How Intelligent Systems Are Powering the Future of the Internet&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Drill Down AI Agents</title>
      <dc:creator>Maksym Mosiura</dc:creator>
      <pubDate>Fri, 09 May 2025 08:31:54 +0000</pubDate>
      <link>https://dev.to/maksym_mosiura_7dd1c98618/drill-down-ai-agents-part-1-38j</link>
      <guid>https://dev.to/maksym_mosiura_7dd1c98618/drill-down-ai-agents-part-1-38j</guid>
      <description>&lt;p&gt;AI isn't some futuristic buzzword anymore. It's a game-changing technology. But how to use it? We need an agent. The most popular agents at the beginning of 2025 were: Chat GPT, Siri, Google Assistant, Perplexity, IBM Watson and a couple of more. And the number of such quickly growing with new names like DeepSeek and Manus.&lt;/p&gt;

&lt;p&gt;These agents serve general purposes, or “research” as it is now called. They solve general problems. Our goal is to build more specific agents - agents that can solve users’ issues in the environment of a specific company.&lt;br&gt;
These custom AI agents are like having a superhero team of digital assistants that can do things traditional automation can only dream about.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpkd4tvjzkuvslp7d9aa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpkd4tvjzkuvslp7d9aa.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Three Superpowers of AI Agents:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Adaptability: Smart Enough to Roll with the Punches&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Remember those old automation tools that would break the moment something unexpected happened? Forget that! AI agents are like digital chameleons. They can adapt and get smarter. Moreover, they can save the context of a specific user’s conversation and use it later for different purposes: nudging the user to act, providing personalized information, requesting details for suggestions, or even predicting behaviour.&lt;br&gt;
An agent actually learns what customers like, remembers their preferences, and becomes more helpful with every interaction. Can that be achieved with any other tool? Definitely not.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Scalability: Handling Massive Workloads Without Breaking a Sweat&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Like any modern software, AI agents can be scaled. The scaling depends on which kind of agent is built. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Humans have limits. AI agents laugh in the face of massive workloads. Their limits are easily extendable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Example:&lt;/em&gt; Take fraud detection. An AI agent can scan more financial transactions in a second than a team of accountants could review in a month. And it does this with laser-sharp accuracy, catching suspicious patterns that might slip past human eyes.&lt;br&gt;
This means our AI agents can assist us with provided data or pull it from provided resources. Scalability allows these agents to understand how complex a request is and run it in the appropriate environment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Autonomous Decision-Making: No Supervision Required&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Here's the really cool part: AI agents don't need constant babysitting. They can make independent decisions based on real-time data and past experience.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example:&lt;/em&gt; Stock-trading AI. It can watch the market 24/7 and make trading decisions. AI can track how good it is during a certain time and decide when to make trades and when to do nothing. While human traders are sleeping, grabbing coffee, or getting distracted, these AI agents are working non-stop, analyzing data, spotting opportunities, and taking actions.&lt;/p&gt;

&lt;p&gt;AI agents aren't just a technology upgrade. They're a complete rethinking of how we approach automation, decision-making, and problem-solving. Businesses that embrace these technologies aren't just staying current - they're staying ahead.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So, what are the core components of AI agents?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;An AI agent is a system that perceives its environment through its "sensors": user input, APIs, environment variables, cache, and its LLM. The agent processes this information and acts on it. &lt;/p&gt;

&lt;p&gt;To count as an intelligent agent, the system should be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;perceive ("understand") its environment&lt;/li&gt;
&lt;li&gt;consume, process, and save information&lt;/li&gt;
&lt;li&gt;act in this environment&lt;/li&gt;
&lt;li&gt;save and learn from experience&lt;/li&gt;
&lt;li&gt;analyse and improve performance&lt;/li&gt;
&lt;/ul&gt;
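&lt;p&gt;The abilities above fit a perceive → process → act → learn loop. A toy example makes the shape concrete; the thermostat scenario and its "learning" (just saved experience) are invented stand-ins:&lt;/p&gt;

```python
# Bare-bones perceive → process → act → learn loop.
# The thermostat environment and thresholds are toy stand-ins.

class ThermostatAgent:
    """Keeps a room near a target temperature and logs what it did."""

    def __init__(self, target=21.0):
        self.target = target
        self.experience = []  # saved history the agent can later analyse

    def step(self, room_temp):
        # perceive: read the environment
        error = self.target - room_temp
        # process + act: choose an action from what was perceived
        if error > 0.5:
            action = "heat"
        elif -error > 0.5:
            action = "cool"
        else:
            action = "idle"
        # save experience for later analysis and improvement
        self.experience.append((room_temp, action))
        return action

agent = ThermostatAgent()
print(agent.step(18.0))  # → heat
print(agent.step(24.0))  # → cool
```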

&lt;p&gt;Any AI agent relies on a set of helpers. Without them, the agent can't operate reliably or produce adequate results. Let's call them components; here they are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLMs&lt;/li&gt;
&lt;li&gt;APIs&lt;/li&gt;
&lt;li&gt;Tools&lt;/li&gt;
&lt;li&gt;Memory&lt;/li&gt;
&lt;li&gt;Cache&lt;/li&gt;
&lt;li&gt;Other Agents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh624oef38fcux38mha94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh624oef38fcux38mha94.png" alt="Core components of an AI agent" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In other words, these are the core components of an AI agent, where&lt;br&gt;
&lt;strong&gt;LLM&lt;/strong&gt; → the brain: it processes inputs, understands context, and generates responses. &lt;strong&gt;Memory/Cache&lt;/strong&gt; → stores tokens, context, past interactions, user data, and learned information. &lt;strong&gt;Tools/APIs&lt;/strong&gt; → external functionality the agent can call. &lt;strong&gt;Other Agents&lt;/strong&gt; → process or preprocess data that the main agent will consume.&lt;/p&gt;
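&lt;p&gt;Wired together, those components take a simple shape: the LLM decides, memory accumulates context, and tools do the external work. A minimal sketch with a fake keyword-matching function standing in for a real LLM; every class, function, and tool name here is invented:&lt;/p&gt;

```python
# Minimal agent skeleton wiring an LLM (faked here), memory, and tools.
# All names are invented for illustration — this is not a real framework.

def fake_llm(prompt):
    """Stand-in for a real model: picks a tool by keyword matching."""
    if "weather" in prompt:
        return "use_tool:weather"
    return "answer:I am not sure."

class Agent:
    def __init__(self, llm, tools):
        self.llm = llm
        self.tools = tools   # name → callable (the agent's hands)
        self.memory = []     # past interactions (the agent's context)

    def run(self, user_input):
        self.memory.append(user_input)
        decision = self.llm(user_input)
        if decision.startswith("use_tool:"):
            tool_name = decision.split(":", 1)[1]
            result = self.tools[tool_name](user_input)
        else:
            result = decision.split(":", 1)[1]
        self.memory.append(result)
        return result

agent = Agent(fake_llm, {"weather": lambda q: "Sunny, 21°C"})
print(agent.run("What is the weather today?"))  # → Sunny, 21°C
```

&lt;p&gt;Swap the fake function for a real model call and the lambda for a real API client, and the skeleton stays the same: the LLM routes, tools act, and memory carries context between turns.&lt;/p&gt;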

&lt;p&gt;&lt;strong&gt;Other posts in this series:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/rag-smarter-ai-agents-4ej9"&gt;Part 2 - RAG: Smarter AI Agents&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/maksym_mosiura_7dd1c98618/ai-x-web-evolution-how-intelligent-systems-are-powering-the-future-of-the-internet-25l3"&gt;Part 3 - AI x Web Evolution: How Intelligent Systems Are Powering the Future of the Internet&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
