<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bobur Umurzokov</title>
    <description>The latest articles on DEV Community by Bobur Umurzokov (@bobur).</description>
    <link>https://dev.to/bobur</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F782740%2F3ed49cdc-6105-4914-ae4b-3425e299b3cd.jpg</url>
      <title>DEV Community: Bobur Umurzokov</title>
      <link>https://dev.to/bobur</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bobur"/>
    <language>en</language>
    <item>
      <title>Our Migration Story: From Azure App Service to Container Apps</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Fri, 13 Feb 2026 12:04:09 +0000</pubDate>
      <link>https://dev.to/bobur/our-migration-story-from-azure-app-service-to-container-apps-53m3</link>
      <guid>https://dev.to/bobur/our-migration-story-from-azure-app-service-to-container-apps-53m3</guid>
      <description>&lt;h2&gt;
  
  
  Why Infrastructure Decisions Are Business Decisions
&lt;/h2&gt;

&lt;p&gt;As CTO and Co-founder of &lt;a href="https://datox.ai/" rel="noopener noreferrer"&gt;Datox&lt;/a&gt;, I’ve always believed that infrastructure decisions are business decisions.  When we started building our Datox platform, &lt;a href="https://azure.microsoft.com/en-us/products/app-service" rel="noopener noreferrer"&gt;Azure App Service&lt;/a&gt; was the fastest way to get to market. It allowed us to focus on product, customers, and regulatory workflows without over-engineering from day one. But as Datox evolved, serving more customers, supporting multi-tenant deployments, and expanding into different Azure subscriptions, our architecture needed to evolve too.&lt;/p&gt;

&lt;p&gt;This post is the story of how and why we migrated from Azure App Service to Azure Container Apps. It was a strategic move that reduced our non-production costs by 90%, enabled true independent scaling, simplified customer deployments, and positioned Datox for a cloud-native, enterprise-ready future.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Are Azure Container Apps?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/products/container-apps" rel="noopener noreferrer"&gt;Azure Container Apps&lt;/a&gt; is a fully managed serverless platform for running containerized applications. Built on &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, it abstracts away infrastructure complexity while providing powerful features for modern application development. Think of it as the sweet spot between &lt;a href="https://learn.microsoft.com/en-us/azure/architecture/guide/design-principles/managed-services" rel="noopener noreferrer"&gt;Platform as a Service (PaaS)&lt;/a&gt; simplicity and &lt;a href="https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-iaas" rel="noopener noreferrer"&gt;Infrastructure as a Service (IaaS)&lt;/a&gt; flexibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Characteristics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serverless container hosting&lt;/li&gt;
&lt;li&gt;Built-in autoscaling (scale to zero capability)&lt;/li&gt;
&lt;li&gt;Microservices-native architecture&lt;/li&gt;
&lt;li&gt;Event-driven scaling with KEDA (Kubernetes Event-Driven Autoscaling)&lt;/li&gt;
&lt;li&gt;Integrated ingress and service discovery&lt;/li&gt;
&lt;li&gt;Built-in traffic splitting for blue-green deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Business Drivers Behind the Migration
&lt;/h2&gt;

&lt;p&gt;As our product matured, we faced several strategic business requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-tenancy&lt;/strong&gt;: Customers wanted to deploy our solution in their own Azure subscriptions or on-premises&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud-native architecture&lt;/strong&gt;: Need for portable, containerized deployments that could run anywhere&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Independent scaling&lt;/strong&gt;: Frontend and backend services had different load patterns and needed to scale independently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost optimization&lt;/strong&gt;: Non-production environments (dev, staging, demo) were incurring costs 24/7 even when idle&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microservices evolution&lt;/strong&gt;: Growing need to break down the monolith into independently deployable services&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Constraints with Azure App Service
&lt;/h2&gt;

&lt;p&gt;While App Service helped us move fast early on, we started hitting structural limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always-on pricing meant we paid for idle development environments overnight and on weekends&lt;/li&gt;
&lt;li&gt;Difficult to package and deploy the entire stack to customer clouds&lt;/li&gt;
&lt;li&gt;Frontend (React SPA) and backend (Python FastAPI) had to scale together despite different traffic patterns&lt;/li&gt;
&lt;li&gt;Limited support for running multiple interconnected services efficiently&lt;/li&gt;
&lt;li&gt;Complex custom domain and SSL management across multiple environments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Migration Strategy
&lt;/h2&gt;

&lt;p&gt;We made the strategic decision to containerize our entire application stack and migrate to Azure Container Apps. The migration involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Containerization&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Created multi-stage Dockerfiles for frontend (React) and backend (Python FastAPI)&lt;/li&gt;
&lt;li&gt;Containerized &lt;a href="https://azure.microsoft.com/en-us/products/functions" rel="noopener noreferrer"&gt;Azure Functions&lt;/a&gt; for background processing and &lt;a href="https://azure.microsoft.com/en-us/products/signalr-service" rel="noopener noreferrer"&gt;SignalR&lt;/a&gt; real-time messaging&lt;/li&gt;
&lt;li&gt;Optimized images using Alpine Linux and multi-stage builds&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Automated deployment pipelines with &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Environment-specific configurations using build arguments and runtime environment variables&lt;/li&gt;
&lt;li&gt;Secret management via &lt;a href="https://azure.microsoft.com/en-us/products/key-vault" rel="noopener noreferrer"&gt;Azure Key Vault&lt;/a&gt; integration&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-environment strategy&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Development, Staging, and Production environments&lt;/li&gt;
&lt;li&gt;Each environment is independently scalable and configurable&lt;/li&gt;
&lt;li&gt;Branch-based deployment automation (dev branch → dev environment, main → production)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Measurable Results
&lt;/h2&gt;

&lt;p&gt;The migration delivered significant benefits across cost, operations, and business capabilities:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before (App Service)&lt;/th&gt;
&lt;th&gt;After (Container Apps)&lt;/th&gt;
&lt;th&gt;Improvement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monthly Cost (Non-Prod)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~$450 (3 environments, always-on)&lt;/td&gt;
&lt;td&gt;~$45 (scale to zero overnight/weekends)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;90% reduction&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deployment Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;8-12 minutes per environment&lt;/td&gt;
&lt;td&gt;3-5 minutes per environment&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;50% faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Frontend Scaling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Coupled with backend&lt;/td&gt;
&lt;td&gt;Independent (0-10 replicas)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Independent scaling&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Backend Scaling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Coupled with frontend&lt;/td&gt;
&lt;td&gt;Independent (0-30 replicas)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Independent scaling&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cold Start Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;N/A (always-on)&lt;/td&gt;
&lt;td&gt;&amp;lt;10 seconds from zero&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Acceptable trade-off&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Environment Spin-up&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2-3 hours (manual setup)&lt;/td&gt;
&lt;td&gt;15 minutes (automated)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;88% faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Customer Deployment&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1-2 days (complex process)&lt;/td&gt;
&lt;td&gt;2-4 hours (containerized)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;80% reduction&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SSL Certificate Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manual per environment&lt;/td&gt;
&lt;td&gt;Automated with Container Apps&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Zero manual effort&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Service Discovery&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manual configuration&lt;/td&gt;
&lt;td&gt;Built-in internal ingress&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Automatic&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Infrastructure Overhead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medium (App Service plans)&lt;/td&gt;
&lt;td&gt;Low (serverless)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Minimal management&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
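
&lt;p&gt;As a back-of-envelope check on the cost row above: the saving comes almost entirely from not billing idle hours. The sketch below uses a hypothetical hourly rate and utilization, not actual Azure pricing:&lt;/p&gt;

```python
# Back-of-envelope check of the scale-to-zero saving in the table above.
# The hourly rate and utilization are illustrative, not real Azure pricing.

HOURS_PER_MONTH = 730

def monthly_cost(rate_per_hour, billed_hours):
    """Cost of one environment for the hours it is actually billed."""
    return rate_per_hour * billed_hours

# Always-on App Service plan: billed for every hour of the month.
always_on = monthly_cost(0.205, HOURS_PER_MONTH)

# Scale to zero: a dev environment whose replicas run about 10% of the time.
scale_to_zero = monthly_cost(0.205, 73)

savings = 1 - scale_to_zero / always_on
print(f"always-on: ${always_on:.0f}/mo, scale-to-zero: ${scale_to_zero:.0f}/mo")
print(f"saving: {savings:.0%}")
```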

&lt;h2&gt;
  
  
  &lt;strong&gt;Key Benefits Realized&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Cost Optimization&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Development and demo environments now scale to zero during off-hours&lt;/li&gt;
&lt;li&gt;Production uses dedicated capacity for consistent performance&lt;/li&gt;
&lt;li&gt;Eliminated over-provisioning; each service is now sized appropriately&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Customer Deployments&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Entire platform packaged as container images&lt;/li&gt;
&lt;li&gt;Customers deploy to their Azure subscriptions in hours, not days&lt;/li&gt;
&lt;li&gt;Consistent experience across all customer environments&lt;/li&gt;
&lt;li&gt;Easy version upgrades via container image updates&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Independent Scaling&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Frontend scales based on HTTP requests (user traffic)&lt;/li&gt;
&lt;li&gt;Backend scales based on API load and queue depth&lt;/li&gt;
&lt;li&gt;Azure Functions scale based on message queue length&lt;/li&gt;
&lt;li&gt;Each service optimized for its workload pattern&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Developer Productivity&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Local development with Docker Compose mirrors production exactly&lt;/li&gt;
&lt;li&gt;CI/CD pipelines deploy to any environment with branch push&lt;/li&gt;
&lt;li&gt;No more "works on my machine" issues&lt;/li&gt;
&lt;li&gt;Faster iteration cycles for development team&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Operational Excellence&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Built-in health checks and auto-recovery&lt;/li&gt;
&lt;li&gt;Application Insights integration for centralized monitoring&lt;/li&gt;
&lt;li&gt;Log Analytics for troubleshooting across all services&lt;/li&gt;
&lt;li&gt;Blue-green deployments via traffic splitting&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;6. AI-Ready Architecture&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As Datox evolved, our AI workloads expanded from document processing to real-time compliance validation and background data enrichment.&lt;/p&gt;

&lt;p&gt;Azure Container Apps significantly improved how we run and scale our AI services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI workers scale independently based on queue depth and workload&lt;/li&gt;
&lt;li&gt;Background model execution runs in isolated containers&lt;/li&gt;
&lt;li&gt;No need to over-provision compute for sporadic AI spikes&lt;/li&gt;
&lt;li&gt;Easier integration with &lt;a href="https://azure.microsoft.com/en-us/products/ai-foundry/tools" rel="noopener noreferrer"&gt;Azure AI services&lt;/a&gt; (Azure AI Foundry) and GPU-enabled environments (when needed)&lt;/li&gt;
&lt;li&gt;Faster experimentation with new AI services without impacting core APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Previously, under App Service, AI workloads were tightly coupled with the main application runtime. Now, they are independently deployable and scalable microservices.&lt;/p&gt;

&lt;p&gt;This architectural shift allowed us to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy AI features faster&lt;/li&gt;
&lt;li&gt;Control AI infrastructure costs more precisely&lt;/li&gt;
&lt;li&gt;Improve reliability during heavy document-processing workloads&lt;/li&gt;
&lt;li&gt;Prepare for future GPU-backed and high-throughput AI services&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Lessons Learned&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What Worked Well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-stage Docker builds kept images small and secure&lt;/li&gt;
&lt;li&gt;Scale-to-zero for non-production saved significant costs&lt;/li&gt;
&lt;li&gt;GitHub Actions integration made CI/CD seamless&lt;/li&gt;
&lt;li&gt;Internal ingress simplified service-to-service communication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenges Overcome:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initial cold start optimization required tuning image sizes&lt;/li&gt;
&lt;li&gt;Secret management required Key Vault integration planning&lt;/li&gt;
&lt;li&gt;Team needed Docker knowledge (addressed with training)&lt;/li&gt;
&lt;li&gt;Database connection pooling configuration for scaled replicas&lt;/li&gt;
&lt;/ul&gt;
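
&lt;p&gt;The connection pooling point deserves a number: with scale-out, total connections grow with replica count, so each replica's pool must shrink. A hedged sketch of the sizing rule, with illustrative limits:&lt;/p&gt;

```python
# Sizing a per-replica connection pool so a scaled-out backend never
# exceeds the database connection limit. All numbers are illustrative.

DB_MAX_CONNECTIONS = 100   # hard limit on the database side
MAX_REPLICAS = 30          # the backend can scale out to 30 replicas
HEADROOM = 10              # reserve connections for migrations and admin tools

def pool_size_per_replica():
    """Connections each replica may hold at worst-case scale-out."""
    budget = DB_MAX_CONNECTIONS - HEADROOM
    return budget // MAX_REPLICAS

print(pool_size_per_replica())
```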

&lt;h2&gt;
  
  
  Would We Do It Again?
&lt;/h2&gt;

&lt;p&gt;Absolutely. The migration paid for itself in the first quarter through cost savings alone, and the business benefits, especially customer deployments, have been transformative for our go-to-market strategy. In the &lt;strong&gt;next post&lt;/strong&gt;, I’ll go hands-on and show you exactly how to carry out a similar migration, from containerization to infrastructure setup and CI/CD.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>azure</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Agent Knowledge vs Memories: Understanding the Difference</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Fri, 09 Jan 2026 11:41:54 +0000</pubDate>
      <link>https://dev.to/bobur/agent-knowledge-vs-memories-understanding-the-difference-4pgj</link>
      <guid>https://dev.to/bobur/agent-knowledge-vs-memories-understanding-the-difference-4pgj</guid>
      <description>&lt;p&gt;Most developers are still confused about what "memory" means in AI and why they should use it. Or they keep asking: &lt;em&gt;what’s the difference between knowledge and memory? How to use them together?&lt;/em&gt; Many of them treat memory as just cached conversations. Others try to build their own version by storing data in files. &lt;/p&gt;

&lt;p&gt;Knowledge and memories serve very different purposes inside an AI agent. When you clearly separate them and design for each intentionally, your agent stops behaving like a scripted chatbot, saves up to 80% of LLM tokens, and starts acting like a helpful assistant that actually remembers customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Knowledge: Your Agent's Reference Library
&lt;/h2&gt;

&lt;p&gt;Knowledge is everything that is true for all customers, regardless of who is asking. It represents your business facts: documentation, pricing, policies, shipping rules, FAQs, API references, and internal procedures. Think of it as your agent’s reference library: every customer reads from the same book, and that consistency is what makes your agent reliable.&lt;/p&gt;

&lt;p&gt;Knowledge is stable. It changes only when your business changes, not when the customer changes.&lt;/p&gt;

&lt;p&gt;When a customer asks about shipping rates, the agent doesn’t need personal context. It simply retrieves the correct information from the knowledge base and responds. The answer should be identical for every customer, every time.&lt;/p&gt;

&lt;p&gt;This consistency is the strength of knowledge, but it cuts both ways. If the knowledge is wrong, your agent confidently gives incorrect answers. If it’s missing, your agent starts guessing. That’s why knowledge must be curated and maintained carefully.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Knowledge Characteristics&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Static &amp;amp; Structured:&lt;/strong&gt; Contains business information that doesn't change frequently—product catalogs, FAQs, policies, procedures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Universally Shared:&lt;/strong&gt; All customers access the same knowledge base—what's true for one customer is true for all&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manually Curated:&lt;/strong&gt; You upload, organize, and maintain this content based on what your business offers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Provides accurate, consistent answers grounded in your business reality&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Real-World Knowledge Example&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Customer: "What are your shipping rates to Canada?"

Agent: [Searches knowledge base]

"We offer three shipping options to Canada: Standard (5-7 days) for $12.99, 
Express (2-3 days) for $24.99, and Overnight for $49.99. Free shipping on orders over $150."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent pulled this directly from your knowledge base, the same answer every customer gets, because it's factual business information.&lt;/p&gt;
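
&lt;p&gt;The lookup behind this can be sketched in a few lines. Below, naive word overlap stands in for the embedding search a real system would use; the data and function names are illustrative:&lt;/p&gt;

```python
# Toy knowledge base: one shared store, the same answer for every customer.
# Word overlap stands in for the embedding search a real system would use.

KNOWLEDGE = [
    {"topic": "shipping", "text": "Standard shipping to Canada is $12.99, 5 to 7 days."},
    {"topic": "returns", "text": "Returns are accepted within 30 days with a receipt."},
]

def answer_from_knowledge(question):
    """Return the entry whose text shares the most words with the question."""
    words = set(question.lower().split())
    def overlap(entry):
        return len(words.intersection(entry["text"].lower().split()))
    return max(KNOWLEDGE, key=overlap)["text"]

# Same question, same answer, whoever asks.
print(answer_from_knowledge("What are your shipping rates to Canada?"))
```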

&lt;h2&gt;
  
  
  &lt;strong&gt;Memories: Your Agent's Personal Journal for Each Customer&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Memories are the opposite of knowledge. They are personal, dynamic, and unique to each customer. Memory captures things like preferences, past purchases, previous issues, and important details the customer has already shared.&lt;/p&gt;

&lt;p&gt;Memory answers a different question: &lt;em&gt;what do we already know about this person?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If a customer says they prefer blue sneakers in size 10, that information should never live in your knowledge base. It belongs in memory, scoped only to that customer. When the same customer comes back weeks later on a different channel, the agent can continue the conversation naturally without asking again.&lt;/p&gt;

&lt;p&gt;This is what prevents the “AI amnesia” problem. Without memory, every interaction resets. Customers repeat themselves. Context disappears. Trust erodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Memory Characteristics&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic &amp;amp; Personal:&lt;/strong&gt; Captures conversation history, preferences, past issues, and context specific to each customer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Individually Isolated:&lt;/strong&gt; Each customer has their own memory space—what Sarah said never shows up in John's context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatically Captured:&lt;/strong&gt; AI extracts and stores important details from conversations without manual work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Channel:&lt;/strong&gt; Follows customers across WhatsApp, Telegram, web chat—one continuous memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Purpose:&lt;/strong&gt; Enables personalized, context-aware interactions that feel natural and continuous&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Real-World Memory Example&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Week 1 - WhatsApp:

Customer: "I need sneakers, size 10, prefer blue colors"

Agent: [Stores: prefers blue, size 10, interested in sneakers]

Week 3 - Telegram (same customer, different channel):

Customer: "Do you have new arrivals?"

Agent: "Yes! We just got new blue sneakers in size 10—based on your previous interest, you might love our Nike Runner collection. 
Want to see them?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how the agent remembered the customer's preferences across different channels and weeks. This is the power of memories. It's personal, persistent, and creates a seamless experience.&lt;/p&gt;
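
&lt;p&gt;The per-customer isolation behind this is simple to sketch: memory is keyed purely by customer id, so the channel does not matter. Names and fields below are illustrative:&lt;/p&gt;

```python
# Sketch of per-customer memory: each customer id maps to its own list of
# facts, so one customer's context never leaks into another's.
from collections import defaultdict

memories = defaultdict(list)

def remember(customer_id, fact):
    memories[customer_id].append(fact)

def recall(customer_id):
    return list(memories[customer_id])

# Week 1, WhatsApp:
remember("sarah", "prefers blue")
remember("sarah", "size 10")
# Week 3, Telegram: same id, different channel, same memory.
print(recall("sarah"))
print(recall("john"))  # isolated per customer
```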

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI memory for customer support chats&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Stop making customers repeat themselves. Add memory so AI remembers, learns, and supports like a human. 🔗 &lt;a href="https://www.chatmemory.ai" rel="noopener noreferrer"&gt;https://www.chatmemory.ai&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How Agent Knowledge and Memory Work Together
&lt;/h2&gt;

&lt;p&gt;The best AI agents don’t choose between knowledge or memory. They use both, in sequence.&lt;/p&gt;

&lt;p&gt;First, the agent checks memory to understand who it’s talking to and what context already exists. Then it checks knowledge to ensure the response follows business rules and factual accuracy. The final answer combines both into a response that is correct &lt;em&gt;and&lt;/em&gt; personal.&lt;/p&gt;

&lt;p&gt;For example, when a customer asks to return an order, memory tells the agent which order the customer placed and when. Knowledge tells the agent what the return policy allows. The response feels helpful because it references the specific order while correctly applying company rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Customer: "I want to return my order"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Agent's Process:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Check Memory:&lt;/strong&gt; "This is Sarah, she ordered blue sneakers (order #1234) 2 weeks ago via WhatsApp"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check Knowledge:&lt;/strong&gt; "Return policy allows 30 days, need receipt, items must be unworn"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Combine:&lt;/strong&gt; Personalized response with accurate policy
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Agent: "Hi Sarah! I can help with returning your blue sneakers (Order #1234). 
Our 30-day return policy applies. Since you ordered 2 weeks ago, you're well within the window. 
Just make sure they're unworn. Would you like me to generate a return label?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the difference? The agent combined &lt;strong&gt;knowledge&lt;/strong&gt; (return policy details) with &lt;strong&gt;memories&lt;/strong&gt; (Sarah's specific order, timeline, and preferences) to create a response that's both accurate and personal.&lt;/p&gt;
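
&lt;p&gt;The three-step process can be sketched directly: memory first, knowledge second, then one combined reply. The data, ids, and wording below are illustrative:&lt;/p&gt;

```python
# Memory first, knowledge second, one combined reply. Illustrative data.

MEMORY = {"sarah": {"order": "#1234", "item": "blue sneakers", "days_ago": 14}}
KNOWLEDGE = {"return_window_days": 30}

def handle_return_request(customer_id):
    ctx = MEMORY[customer_id]                  # step 1: check memory
    window = KNOWLEDGE["return_window_days"]   # step 2: check knowledge
    # step 3: combine, applying the shared policy to this customer's order
    if ctx["days_ago"] in range(window + 1):   # still inside the window
        return (
            f"Hi {customer_id.title()}! I can help with returning your "
            f"{ctx['item']} (Order {ctx['order']}). "
            f"Our {window}-day return policy applies."
        )
    return f"Sorry, the {window}-day return window has passed."

print(handle_return_request("sarah"))
```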

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Mixing Knowledge and Memory Breaks AI Agents&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Many teams make the mistake of storing personal conversations inside their knowledge base or passing entire chat histories with every request. This causes multiple problems at once.&lt;/p&gt;

&lt;p&gt;Answers become noisy because personal data pollutes shared facts. Token usage explodes because the agent is constantly reprocessing irrelevant context. Privacy becomes harder to manage because personal data is mixed with permanent knowledge.&lt;/p&gt;

&lt;p&gt;A clean separation fixes all of this. Knowledge stays global and stable. Memory stays personal and contextual. The agent retrieves only what it needs, when it needs it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Knowledge vs Memories: Side-by-Side Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Aspect&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Knowledge&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Memories&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Content Type&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Business facts &amp;amp; information&lt;/td&gt;
&lt;td&gt;Personal history &amp;amp; preferences&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Who Has Access&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;All customers (shared)&lt;/td&gt;
&lt;td&gt;Individual customer only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;How It's Created&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manually uploaded by you&lt;/td&gt;
&lt;td&gt;Auto-captured from conversations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Update Frequency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rarely (when business changes)&lt;/td&gt;
&lt;td&gt;Constantly (every conversation)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Persistence&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Permanent until you change it&lt;/td&gt;
&lt;td&gt;Configurable retention (7-90+ days)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Primary Purpose&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Provide accurate answers&lt;/td&gt;
&lt;td&gt;Enable personalization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Example Content&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Product specs, pricing, policies&lt;/td&gt;
&lt;td&gt;Order history, preferences, past issues&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;When to Use What&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use Knowledge For:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product catalogs and specifications&lt;/li&gt;
&lt;li&gt;Company policies and procedures&lt;/li&gt;
&lt;li&gt;FAQs and troubleshooting guides&lt;/li&gt;
&lt;li&gt;Pricing and shipping information&lt;/li&gt;
&lt;li&gt;Training materials and best practices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Memories For:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customer purchase history&lt;/li&gt;
&lt;li&gt;Personal preferences and interests&lt;/li&gt;
&lt;li&gt;Past support issues and resolutions&lt;/li&gt;
&lt;li&gt;Communication preferences&lt;/li&gt;
&lt;li&gt;Conversation context and continuity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Build It the Right Way
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with Knowledge:&lt;/strong&gt; Upload your docs, APIs, and FAQs. Make sure your agent can answer factual questions accurately and consistently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add Memory:&lt;/strong&gt; Turn on automatic context capture so the agent extracts and stores customer-specific details, learning about each user over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set Retention Policies:&lt;/strong&gt; Decide how long to keep memories. 7 days? 90 days? Forever? It depends on your use case and compliance requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor &amp;amp; Refine:&lt;/strong&gt; See which questions come up repeatedly and add them to knowledge. See which context matters and make sure memory captures it.&lt;/li&gt;
&lt;/ol&gt;
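
&lt;p&gt;A retention policy can be as simple as an age check at read time. A minimal sketch, assuming an illustrative 90-day window and Unix timestamps:&lt;/p&gt;

```python
# Sketch of a retention policy: memories expire after a configurable number
# of days. The 90-day window and the stored facts are illustrative.
import time

RETENTION_DAYS = 90
SECONDS_PER_DAY = 86400

def prune(memories, now=None):
    """Keep only memories stored within the retention window."""
    now = now or time.time()
    def age_days(m):
        return int((now - m["stored_at"]) // SECONDS_PER_DAY)
    return [m for m in memories if age_days(m) in range(RETENTION_DAYS)]

now = time.time()
mems = [
    {"fact": "prefers blue", "stored_at": now - 5 * SECONDS_PER_DAY},    # 5 days old
    {"fact": "old address", "stored_at": now - 200 * SECONDS_PER_DAY},   # 200 days old
]
print([m["fact"] for m in prune(mems, now=now)])
```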

&lt;h2&gt;
  
  
  &lt;strong&gt;Ready to Build Smarter Agents?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.chatmemory.ai/" rel="noopener noreferrer"&gt;ChatMemory&lt;/a&gt; gives you both. Knowledge bases and automatic memory capture. Works across WhatsApp, Telegram, web chat, wherever your users are.&lt;/p&gt;

&lt;p&gt;Get Started Free: &lt;a href="http://app.chatmemory.ai" rel="noopener noreferrer"&gt;app.chatmemory.ai&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>RAG vs Memory for AI Agents: What’s the Difference</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Wed, 22 Oct 2025 13:16:30 +0000</pubDate>
      <link>https://dev.to/bobur/rag-vs-memory-for-ai-agents-whats-the-difference-2ad0</link>
      <guid>https://dev.to/bobur/rag-vs-memory-for-ai-agents-whats-the-difference-2ad0</guid>
      <description>&lt;p&gt;AI agents are becoming more powerful every day. They can chat, write code, answer questions, and help with tasks that once required human reasoning. They all share one challenge: &lt;strong&gt;how to handle knowledge/context over time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Two architectural patterns have emerged to fill this gap: &lt;strong&gt;Retrieval-Augmented Generation (RAG)&lt;/strong&gt; and &lt;strong&gt;Memory&lt;/strong&gt;. Both aim to make large language models (LLMs) more capable, context-aware, and cost-efficient. Yet they solve different problems and fit different stages of an agent’s lifecycle. In this article, we’ll explore both in simple terms, show how they differ, and explain when to use each, or both together.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: LLMs Without Context
&lt;/h2&gt;

&lt;p&gt;LLMs are stateless by design. Each prompt is processed independently; once you send a new request, the model forgets everything that happened before unless you include it again in the input.&lt;/p&gt;

&lt;p&gt;This leads to three core limitations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No persistence&lt;/strong&gt; – The model doesn’t remember past sessions or user-specific data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High token cost&lt;/strong&gt; – To “remind” the model of context, you must keep appending long histories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited factual grounding&lt;/strong&gt; – Models can hallucinate or give outdated answers if information was not in their training set.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is RAG?
&lt;/h2&gt;

&lt;p&gt;RAG is a &lt;strong&gt;retrieval layer&lt;/strong&gt; built around an LLM. Instead of relying only on the model’s internal parameters, RAG injects external knowledge dynamically at query time.&lt;/p&gt;

&lt;p&gt;The architecture typically has three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Indexing pipeline&lt;/strong&gt; – Preprocesses and embeds documents into a vector database (e.g., Pinecone, Weaviate, Qdrant, pgvector).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval pipeline&lt;/strong&gt; – On each query, converts the user input into an embedding and finds semantically similar documents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generation step&lt;/strong&gt; – Combines the query with the retrieved context and sends it to the LLM for final answer generation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This pattern can be expressed as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Answer = LLM(prompt + top_k(retrieve(query)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example of RAG in action
&lt;/h3&gt;

&lt;p&gt;Consider an AI assistant for your company’s internal documentation. The model doesn’t know your private documents because they weren’t part of its training data. With RAG, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store all your company docs in a vector database (like Pinecone, Weaviate, or Qdrant).&lt;/li&gt;
&lt;li&gt;When a user asks, “How do I reset my password?”, the assistant retrieves similar text from those documents.&lt;/li&gt;
&lt;li&gt;The model then reads the retrieved text and generates an answer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this setup, the &lt;strong&gt;knowledge source is external&lt;/strong&gt; (e.g., a document corpus or database) and &lt;strong&gt;stateless&lt;/strong&gt; (each query starts fresh).&lt;/p&gt;
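
&lt;p&gt;The flow above can be sketched with a toy retriever. Here "embeddings" are just word sets and similarity is word overlap; a real system would use a neural embedding model and a vector database:&lt;/p&gt;

```python
import re

# Toy document corpus standing in for the company docs.
docs = [
    "To reset your password, open Settings and click Reset password.",
    "Expense reports are due on the first Monday of each month.",
    "The VPN client can be downloaded from the internal portal.",
]

def embed(text: str) -> set:
    # Stand-in for a real embedding model: a lowercase word set.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list, k: int = 1) -> list:
    # Rank documents by word overlap with the query (stand-in for cosine similarity).
    return sorted(corpus, key=lambda d: len(embed(query) & embed(d)), reverse=True)[:k]

query = "How do I reset my password?"
context = retrieve(query, docs)
prompt = f"Context: {context[0]}\nQuestion: {query}"  # what a real LLM would receive
print(context[0])
```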

&lt;h3&gt;
  
  
  Why RAG became popular
&lt;/h3&gt;

&lt;p&gt;RAG is powerful because it solves two big problems of LLMs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Out-of-date knowledge&lt;/strong&gt; – The model was trained months or years ago and doesn’t know the latest facts. With RAG, you can retrieve new information anytime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private data&lt;/strong&gt; – You can feed the model your own documents without retraining it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s why RAG became a standard method in enterprise AI systems and chatbots.&lt;/p&gt;

&lt;h3&gt;
  
  
  Limitations of RAG
&lt;/h3&gt;

&lt;p&gt;However, RAG has clear boundaries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No persistence&lt;/strong&gt; – It doesn’t learn from interactions; every query is independent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited personalization&lt;/strong&gt; – Retrieval is document-based, not user-based.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Noise in embeddings&lt;/strong&gt; – Semantic similarity can return irrelevant or redundant text.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational cost&lt;/strong&gt; – Vector databases require maintenance, tuning, and embedding updates.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From a user experience view, RAG feels like a smart search engine — informative, but not personal.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Memory in AI Agents?
&lt;/h2&gt;

&lt;p&gt;Memory refers to a &lt;strong&gt;persistent context store&lt;/strong&gt; that agents can &lt;strong&gt;read, write, and update&lt;/strong&gt; across interactions. Instead of only pulling facts from external sources, the agent records what it learns and reuses it later. Memory is not just a cache; it is part of the agent’s reasoning state.&lt;/p&gt;

&lt;p&gt;Memory allows an AI agent to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recall previous interactions,&lt;/li&gt;
&lt;li&gt;Learn from them,&lt;/li&gt;
&lt;li&gt;Update its knowledge,&lt;/li&gt;
&lt;li&gt;And behave consistently over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s not just about retrieval; it’s about &lt;strong&gt;experience&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example of memory in an agent
&lt;/h3&gt;

&lt;p&gt;Imagine you tell your AI assistant:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I don’t like coffee.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then tomorrow, you ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Can you recommend a drink for breakfast?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the agent replies “Espresso,” it clearly forgot what you said. But if it answers:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Maybe tea or juice — since you don’t like coffee,”&lt;/p&gt;

&lt;p&gt;then it remembered.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s what memory enables: continuity and context across multiple conversations or tasks. See also &lt;a href="https://www.gibsonai.com/use-cases/customer-support" rel="noopener noreferrer"&gt;an example use case for a customer support AI agent with memory&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Layers of Memory
&lt;/h3&gt;

&lt;p&gt;A typical memory system includes several layers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Typical Storage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Short-term memory&lt;/td&gt;
&lt;td&gt;Keeps recent conversation turns or active context&lt;/td&gt;
&lt;td&gt;In-memory buffer / prompt window&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Long-term memory&lt;/td&gt;
&lt;td&gt;Persists knowledge beyond a single session&lt;/td&gt;
&lt;td&gt;SQL DB, JSON store, or vector DB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Working memory&lt;/td&gt;
&lt;td&gt;Tracks intermediate steps in reasoning or planning&lt;/td&gt;
&lt;td&gt;In-process memory / scratchpad&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each layer serves a different purpose in balancing accuracy, context, and performance.&lt;/p&gt;
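
&lt;p&gt;The three layers can be sketched as plain Python structures (illustrative only; in practice the long-term layer would be a real database):&lt;/p&gt;

```python
from collections import deque

short_term = deque(maxlen=4)   # recent turns, bounded like a prompt window
long_term = {}                 # persistent keyed facts (stand-in for a SQL/vector DB)
working = []                   # scratchpad for the current reasoning task

# A new turn enters short-term memory immediately.
short_term.append(("user", "I don't like coffee"))

# Facts worth keeping are promoted to long-term memory.
long_term["drink_preference"] = "dislikes coffee"

# Intermediate reasoning lives in working memory and is discarded after the task.
working.append("plan: suggest non-coffee breakfast drinks")
working.clear()

print(long_term["drink_preference"])   # dislikes coffee
```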

&lt;h3&gt;
  
  
  Technical Implementations of Memory
&lt;/h3&gt;

&lt;p&gt;Memory can be implemented in multiple ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Vector Memory&lt;/strong&gt; – Summaries or key facts are embedded and retrieved by similarity (like RAG but for personal context).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key-Value Store&lt;/strong&gt; – Store structured entries like &lt;code&gt;{user_id: preferences}&lt;/code&gt; for fast lookup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL-based Memory&lt;/strong&gt; – Systems like &lt;a href="https://github.com/gibsonai/memori" rel="noopener noreferrer"&gt;Memori&lt;/a&gt; treat memories as relational data with timestamps, TTLs, and lineage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graph Memory&lt;/strong&gt; – Represents relationships between entities and concepts (useful for reasoning).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each approach has different strengths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vector memory captures semantics,&lt;/li&gt;
&lt;li&gt;SQL memory offers structure and governance,&lt;/li&gt;
&lt;li&gt;Graph memory supports reasoning,&lt;/li&gt;
&lt;li&gt;Key-value memory is simple and fast.&lt;/li&gt;
&lt;/ul&gt;
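
&lt;p&gt;As a concrete illustration of the simplest option, here is a minimal key-value memory (a hypothetical class; a production version would persist to Redis or SQL and handle eviction):&lt;/p&gt;

```python
import time

class KeyValueMemory:
    """Minimal per-user key-value memory store (in-process, illustrative only)."""

    def __init__(self):
        self._store = {}

    def write(self, user_id: str, key: str, value: str) -> None:
        # Timestamps make simple "last write wins" conflict resolution possible.
        self._store[(user_id, key)] = (value, time.time())

    def read(self, user_id: str, key: str, default=None):
        entry = self._store.get((user_id, key))
        return entry[0] if entry else default

memory = KeyValueMemory()
memory.write("user42", "drink_preference", "dislikes coffee")
print(memory.read("user42", "drink_preference"))    # dislikes coffee
print(memory.read("user42", "missing", "unknown"))  # unknown
```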

&lt;h3&gt;
  
  
  Limitations of Memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage complexity&lt;/strong&gt; – Managing and summarizing large histories is non-trivial.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forgetting and decay&lt;/strong&gt; – The system must decide what to retain or drop.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioning and conflict resolution&lt;/strong&gt; – Updating facts without duplication or contradiction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy and compliance&lt;/strong&gt; – Persistent data must be encrypted, access-controlled, and deletable on request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words: memory improves user experience but introduces data-management challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  RAG vs Memory: Architectural Comparison
&lt;/h2&gt;

&lt;p&gt;Let’s summarize the difference in technical terms.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;RAG&lt;/th&gt;
&lt;th&gt;Memory&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Retrieve external knowledge on demand&lt;/td&gt;
&lt;td&gt;Retain internal experiences over time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Source&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Document corpus / external data&lt;/td&gt;
&lt;td&gt;Conversation history / agent state&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Statefulness&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Stateless&lt;/td&gt;
&lt;td&gt;Stateful&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Retrieval method&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Embedding similarity&lt;/td&gt;
&lt;td&gt;Structured or contextual recall&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Update mechanism&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Update document index&lt;/td&gt;
&lt;td&gt;Write to memory store&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Common storage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Vector DB (Pinecone, Qdrant, etc.)&lt;/td&gt;
&lt;td&gt;SQL DB, KV store, hybrid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use case&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Q&amp;amp;A, search, knowledge grounding&lt;/td&gt;
&lt;td&gt;Personalization, reasoning, long-term continuity&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In simple terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RAG helps your agent &lt;em&gt;know more&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Memory helps your agent &lt;em&gt;remember better&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why RAG Alone Isn’t Enough
&lt;/h2&gt;

&lt;p&gt;Many production LLM solutions today rely purely on RAG. It works for document-heavy tasks but fails in long-running or adaptive contexts.&lt;/p&gt;

&lt;h3&gt;
  
  
  No Temporal Awareness
&lt;/h3&gt;

&lt;p&gt;RAG retrieves documents but doesn’t evolve. An agent can’t say, “Last week you told me…” unless you manually re-feed that conversation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inefficient Context Windows
&lt;/h3&gt;

&lt;p&gt;Without persistent memory, developers must send the full conversation each time — expensive and slow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lack of User Adaptation
&lt;/h3&gt;

&lt;p&gt;RAG can personalize results by user ID, but it doesn’t adapt from behavior. Memory enables “learning-by-interaction.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Memory Alone Isn’t Enough Either
&lt;/h2&gt;

&lt;p&gt;Memory stores experience but may lack external factual grounding.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sales assistant can remember your clients and notes,&lt;/li&gt;
&lt;li&gt;But it still needs to &lt;strong&gt;retrieve&lt;/strong&gt; the latest CRM records or pricing sheets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without RAG, memory-driven agents risk becoming &lt;strong&gt;contextually aware but factually outdated.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Thus, in modern architectures, &lt;strong&gt;RAG and Memory complement each other.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  RAG + Memory: The Hybrid Pattern
&lt;/h2&gt;

&lt;p&gt;The hybrid approach combines &lt;strong&gt;retrieval&lt;/strong&gt; (for facts) and &lt;strong&gt;memory&lt;/strong&gt; (for experiences).&lt;/p&gt;

&lt;p&gt;At runtime, the agent pipeline looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="err"&gt;→&lt;/span&gt; &lt;span class="n"&gt;Retrieve&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nb"&gt;long&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;term&lt;/span&gt; &lt;span class="nf"&gt;memory &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;personal&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;→&lt;/span&gt; &lt;span class="n"&gt;Retrieve&lt;/span&gt; &lt;span class="n"&gt;external&lt;/span&gt; &lt;span class="nf"&gt;documents &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;RAG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;→&lt;/span&gt; &lt;span class="n"&gt;Merge&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;
&lt;span class="err"&gt;→&lt;/span&gt; &lt;span class="n"&gt;Generate&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="n"&gt;via&lt;/span&gt; &lt;span class="n"&gt;LLM&lt;/span&gt;
&lt;span class="err"&gt;→&lt;/span&gt; &lt;span class="n"&gt;Write&lt;/span&gt; &lt;span class="n"&gt;back&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="n"&gt;knowledge&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This architecture mirrors how humans operate. We recall personal experience, look up external information, then act.&lt;/p&gt;
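
&lt;p&gt;Gluing the five steps together looks roughly like this (every component here, including the &lt;code&gt;llm&lt;/code&gt; function, is a toy stand-in):&lt;/p&gt;

```python
memory_store = {"user42": ["prefers concise answers"]}
documents = {"pricing": "The Pro plan costs $49/month."}

def recall(user_id):                          # step 1: read long-term memory
    return memory_store.get(user_id, [])

def retrieve_docs(query):                     # step 2: external retrieval (RAG)
    return [text for topic, text in documents.items() if topic in query.lower()]

def llm(prompt):                              # toy stand-in for a model call
    return f"ANSWER based on: {prompt}"

def answer(user_id, query):
    context = recall(user_id) + retrieve_docs(query)                # step 3: merge
    response = llm(f"{context} | {query}")                          # step 4: generate
    memory_store.setdefault(user_id, []).append(f"asked: {query}")  # step 5: write back
    return response

print(answer("user42", "What is the pricing?"))
```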

&lt;h2&gt;
  
  
  From RAG to Memory-First Architectures
&lt;/h2&gt;

&lt;p&gt;RAG was the first major step toward intelligent retrieval. But the future lies in &lt;strong&gt;memory-first architectures&lt;/strong&gt; where the agent starts from what it already knows and uses retrieval only when necessary.&lt;/p&gt;

&lt;p&gt;A memory-first agent workflow might look like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Query memory: “Do I already know this?”&lt;/li&gt;
&lt;li&gt;If missing, trigger RAG to retrieve external data.&lt;/li&gt;
&lt;li&gt;Merge results.&lt;/li&gt;
&lt;li&gt;Respond and store a summary for future use.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This dramatically reduces latency and API costs because retrieval is conditional, not constant.&lt;/p&gt;
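
&lt;p&gt;The four steps above can be sketched as a memory-first loop in which retrieval fires only on a miss (both stores are toy stand-ins):&lt;/p&gt;

```python
memory = {}
external_kb = {"capital of france": "Paris"}    # stand-in for a RAG pipeline

def memory_first_answer(query: str):
    key = query.lower().rstrip("?")
    if key in memory:                           # 1. Do I already know this?
        return memory[key], "memory"
    fact = external_kb.get(key, "unknown")      # 2. Miss: trigger retrieval (RAG)
    memory[key] = fact                          # 4. Store for future use
    return fact, "retrieval"                    # 3. Respond with merged result

print(memory_first_answer("Capital of France?"))   # ('Paris', 'retrieval')
print(memory_first_answer("Capital of France?"))   # ('Paris', 'memory')
```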

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;RAG was a breakthrough. It gave AI systems access to live information without retraining.&lt;/p&gt;

&lt;p&gt;But it was only the first step. &lt;strong&gt;Memory extends this foundation&lt;/strong&gt;, enabling agents to learn, adapt, and personalize across sessions.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Evolution&lt;/th&gt;
&lt;th&gt;Focus&lt;/th&gt;
&lt;th&gt;Analogy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;RAG&lt;/td&gt;
&lt;td&gt;Information retrieval&lt;/td&gt;
&lt;td&gt;Search engine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory&lt;/td&gt;
&lt;td&gt;Persistent learning&lt;/td&gt;
&lt;td&gt;Human cognition&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI memory for customer support chats&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Stop making customers repeat themselves. Add memory so AI remembers, learns, and supports like a human.&lt;br&gt;
🔗 &lt;a href="https://www.chatmemory.ai" rel="noopener noreferrer"&gt;https://www.chatmemory.ai&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>agents</category>
      <category>python</category>
    </item>
    <item>
      <title>AutoGen Multi-agent Conversations Memory</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Mon, 29 Sep 2025 07:57:52 +0000</pubDate>
      <link>https://dev.to/bobur/autogen-multi-agent-conversations-memory-1i90</link>
      <guid>https://dev.to/bobur/autogen-multi-agent-conversations-memory-1i90</guid>
      <description>&lt;p&gt;In this tutorial, you'll learn how to create &lt;a href="https://github.com/microsoft/autogen" rel="noopener noreferrer"&gt;AutoGen&lt;/a&gt; AI agents that can &lt;strong&gt;remember&lt;/strong&gt; conversations and use that memory in future discussions. We'll build a simple &lt;strong&gt;Software Development Consulting Team&lt;/strong&gt; with two agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Alex&lt;/strong&gt; - Technical Architect (designs systems)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sam&lt;/strong&gt; - Full-Stack Developer (builds applications)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They'll help a client build an e-commerce website by remembering everything discussed and providing more informed suggestions for the client.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Memori?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/GibsonAI/memori" rel="noopener noreferrer"&gt;Memori&lt;/a&gt;&lt;/strong&gt; is an open-source memory engine that provides persistent, intelligent memory for any LLM using standard SQL databases. Memori uses multiple agents working together to intelligently promote essential long-term memories to short-term storage for faster context injection.&lt;/p&gt;

&lt;p&gt;With a single line of code, &lt;code&gt;memori.enable()&lt;/code&gt;, any LLM gains the ability to remember conversations, learn from interactions, and maintain context across sessions. The entire memory system is stored in a standard SQLite database (or PostgreSQL/MySQL for enterprise deployments), making it fully portable, auditable, and owned by the user.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Auto-recording&lt;/strong&gt;: Automatically saves all conversations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Works with existing databases&lt;/strong&gt;: SQLite, PostgreSQL, MySQL, MongoDB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart memory&lt;/strong&gt;: AI decides what's important to remember&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-session&lt;/strong&gt;: Agents remember between different conversations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero setup&lt;/strong&gt;: Just initialize and enable - that's it!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Before we start, we need to install some packages and set up our environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;memorisdk autogen-agentchat &lt;span class="s2"&gt;"autogen-ext[openai]"&lt;/span&gt; python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up your OpenAI API key
&lt;/h3&gt;

&lt;p&gt;You'll need an OpenAI API key to run this example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-api-key-here&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Import Libraries
&lt;/h2&gt;

&lt;p&gt;Let's import everything we need for our multi-agent conversation system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="c1"&gt;# AutoGen imports - for creating AI agent teams
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_agentchat.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AssistantAgent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_agentchat.conditions&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MaxMessageTermination&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_agentchat.teams&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RoundRobinGroupChat&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_ext.models.openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIChatCompletionClient&lt;/span&gt;

&lt;span class="c1"&gt;# Memori import - for giving agents memory
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;memori&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Memori&lt;/span&gt;

&lt;span class="c1"&gt;# For loading environment variables
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;

&lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;All libraries imported successfully!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Initialize Memory System
&lt;/h2&gt;

&lt;p&gt;This is the magic step! We create a memory system that will automatically record and remember all conversations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create the memory system - this is where all conversations will be saved
&lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Memori&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;database_connect&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sqlite:///consulting_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Local database file
&lt;/span&gt;    &lt;span class="n"&gt;auto_ingest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="c1"&gt;# Automatically save all conversations
&lt;/span&gt;    &lt;span class="n"&gt;conscious_ingest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# AI decides what's important to remember
&lt;/span&gt;    &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="c1"&gt;# Set to True to see what's happening behind the scenes
&lt;/span&gt;    &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;consulting&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;   &lt;span class="c1"&gt;# Separate memory space for this project
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Enable the memory system
&lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory system initialized!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Database: consulting_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Auto-recording enabled - all conversations will be remembered!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Create AI Agents
&lt;/h2&gt;

&lt;p&gt;Now let's create our consulting team! We'll make two AI agents with different expertise.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Set up the AI model (OpenAI GPT-4o-mini)
&lt;/span&gt;&lt;span class="n"&gt;model_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIChatCompletionClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create Alex - Technical Architect
&lt;/span&gt;&lt;span class="n"&gt;alex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AssistantAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Alex&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are Alex, a Senior Technical Architect.

    You have persistent memory and remember:
    - Client requirements and constraints
    - Technical decisions made in past conversations
    - Budget and timeline discussions

    Always reference previous conversations when relevant.
    Keep your responses focused and practical.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create Sam - Full-Stack Developer
&lt;/span&gt;&lt;span class="n"&gt;sam&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AssistantAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sam&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are Sam, a Senior Full-Stack Developer.

    You have persistent memory and remember:
    - Client&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s technical preferences and team skills
    - Implementation decisions from past discussions
    - Development approaches we&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ve recommended

    Build upon previous conversations and maintain consistency.
    Focus on practical implementation advice.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Alex (Technical Architect) created&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sam (Full-Stack Developer) created&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Both agents have persistent memory enabled!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Create the Team of Agents
&lt;/h2&gt;

&lt;p&gt;Let's put our agents together in a team that can collaborate on client problems.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create a team where agents take turns (round-robin)
&lt;/span&gt;&lt;span class="n"&gt;consulting_team&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RoundRobinGroupChat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;participants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;alex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sam&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;  &lt;span class="c1"&gt;# Our two agents
&lt;/span&gt;    &lt;span class="n"&gt;termination_condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;MaxMessageTermination&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Stop after 6 messages
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Consulting team created!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Team members: Alex (Architect) + Sam (Developer)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;They&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ll take turns responding to client questions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: First Consultation - Setting Requirements
&lt;/h2&gt;

&lt;p&gt;Let's simulate our first client meeting where they share their project requirements.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# First client conversation - gathering requirements
&lt;/span&gt;&lt;span class="n"&gt;client_request_1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Hi team! I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m Sarah, and I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m building a new e-commerce platform for my retail business.

Here are my requirements:
- Need to handle 10,000+ products
- Process payments securely
- Manage inventory in real-time
- My budget is $50,000
- My team knows React and Python well
- We prefer modern, maintainable technology

What architecture would you recommend?
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CLIENT REQUEST 1: Initial Requirements&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client_request_1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Team Response:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run the team conversation
&lt;/span&gt;&lt;span class="n"&gt;result_1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;consulting_team&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;client_request_1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Show the team's response
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result_1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Follow-up Consultation - Database Decision
&lt;/h2&gt;

&lt;p&gt;Now let's see the magic of memory! The client asks a follow-up question, and our agents should remember the previous conversation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Second client conversation - building on previous discussion
&lt;/span&gt;&lt;span class="n"&gt;client_request_2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Great recommendations from our last meeting! 

Now I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m concerned about the database choice. Given our product catalog size 
and the budget constraints we discussed, what specific database solution 
would work best for our e-commerce platform?

Also, how should we handle the inventory tracking?
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CLIENT REQUEST 2: Database &amp;amp; Inventory (Notice: References previous meeting!)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client_request_2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Team Response (with memory of previous conversation):&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run the team conversation - they should remember the $50K budget and 10K+ products
&lt;/span&gt;&lt;span class="n"&gt;result_2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;consulting_team&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;client_request_2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Show the team's response
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result_2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Third Consultation - Development Approach
&lt;/h2&gt;

&lt;p&gt;Let's push the memory even further. This time the client mixes brand-new constraints (a three-developer team and a six-month launch window) with references back to the first meeting (their React and Python skills), and the agents need to keep both straight.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Third client conversation - development strategy
&lt;/span&gt;&lt;span class="n"&gt;client_request_3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Perfect! The database recommendations make sense.

Now for the development approach - should we build this as a monolith first 
or go straight to microservices? 

Remember, we have a small team (just 3 developers) and need to launch in 6 months.
Also, keep in mind our React and Python skills that I mentioned earlier.
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CLIENT REQUEST 3: Development Approach (References team skills from first meeting!)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client_request_3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Team Response (should remember React/Python skills + budget):&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run the team conversation - they should remember all previous context
&lt;/span&gt;&lt;span class="n"&gt;result_3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;consulting_team&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;client_request_3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Show the team's response
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result_3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 8: Check What's in Memory
&lt;/h2&gt;

&lt;p&gt;Let's peek behind the scenes and see what our memory system has learned!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Let's see what the memory system has learned
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MEMORY SYSTEM ANALYSIS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Get memory statistics
&lt;/span&gt;    &lt;span class="n"&gt;stats&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_memory_stats&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Total conversations recorded: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;total_conversations&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Total memories stored: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;total_memories&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Database location: consulting_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Namespace: consulting&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory stats not available: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;All conversations have been automatically saved!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;If you restart this notebook and run the agents again,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   they will remember everything from today&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s conversations.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 9: Test Memory Persistence
&lt;/h2&gt;

&lt;p&gt;Let's test if our agents truly remember by asking them directly what they learned about the client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Test memory recall
&lt;/span&gt;&lt;span class="n"&gt;memory_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Hey team, I want to make sure we&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;re all on the same page.

Can you remind me of my key project requirements and the decisions 
we&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ve made so far? I want to make sure nothing was missed.
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MEMORY TEST: What do you remember about our project?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;memory_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt; Team&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s Memory Recall:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run the memory test
&lt;/span&gt;&lt;span class="n"&gt;result_test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;consulting_team&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;memory_test&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Show what they remember
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result_test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Amazing! The agents remembered the key details from all our conversations!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Congratulations! You've Built Memory-Enhanced AI Agents!
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What you accomplished:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Created AI agents that work together as a team
&lt;/li&gt;
&lt;li&gt;Gave them persistent memory using Memori
&lt;/li&gt;
&lt;li&gt;Ran multiple conversations that build on each other
&lt;/li&gt;
&lt;li&gt;Saw how memory makes conversations more helpful
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key insights from this demo:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Memory makes agents smarter&lt;/strong&gt;: They remembered budget ($50K), team skills (React/Python), and project constraints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversations build naturally&lt;/strong&gt;: Each discussion referenced previous context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero manual work&lt;/strong&gt;: Memori automatically captured and used relevant information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent across sessions&lt;/strong&gt;: Restart the notebook and the agents will still remember!&lt;/li&gt;
&lt;/ol&gt;
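
&lt;p&gt;That last point is easy to sanity-check for yourself: the persistence ultimately comes down to a SQLite file on disk. Here is a minimal standard-library illustration (plain &lt;code&gt;sqlite3&lt;/code&gt;, not Memori's actual schema) of why restarting the process changes nothing:&lt;br&gt;
&lt;/p&gt;

```python
import sqlite3

# Illustration only: data committed through one connection survives after
# that connection -- or the whole process -- goes away. This is why the
# agents still "remember" after a notebook restart.
conn = sqlite3.connect("persistence_demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS memories (fact TEXT)")
conn.execute("INSERT INTO memories VALUES ('budget: $50,000')")
conn.commit()
conn.close()  # end of "session one"

# A brand-new connection (a "new session") still sees the stored fact.
conn = sqlite3.connect("persistence_demo.db")
facts = [row[0] for row in conn.execute("SELECT fact FROM memories")]
print(facts)
conn.close()
```

Memori layers structured capture and retrieval on top of this, but the durability guarantee is the database's.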

&lt;h3&gt;
  
  
  Real-world applications:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customer Support&lt;/strong&gt;: Remember customer history and preferences&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project Management&lt;/strong&gt;: Track decisions, requirements, and progress&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personal Assistant&lt;/strong&gt;: Remember your preferences and past conversations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Educational Tutoring&lt;/strong&gt;: Track student progress and learning style&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medical Consultation&lt;/strong&gt;: Remember patient history and treatment plans&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Running the Complete Example
&lt;/h2&gt;

&lt;p&gt;Here's the complete code that you can copy and run as a single Python script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_agentchat.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AssistantAgent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_agentchat.conditions&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MaxMessageTermination&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_agentchat.teams&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RoundRobinGroupChat&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;autogen_ext.models.openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIChatCompletionClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;memori&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Memori&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Initialize Memori
&lt;/span&gt;    &lt;span class="n"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Memori&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;database_connect&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sqlite:///consulting_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;auto_ingest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;conscious_ingest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;consulting&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Create model client
&lt;/span&gt;    &lt;span class="n"&gt;model_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIChatCompletionClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Create agents
&lt;/span&gt;    &lt;span class="n"&gt;alex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AssistantAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Alex&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;system_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are Alex, a Senior Technical Architect with persistent memory...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;sam&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AssistantAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sam&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model_client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;system_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are Sam, a Senior Full-Stack Developer with persistent memory...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Create team
&lt;/span&gt;    &lt;span class="n"&gt;consulting_team&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RoundRobinGroupChat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;participants&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;alex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sam&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;termination_condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;MaxMessageTermination&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Run conversations
&lt;/span&gt;    &lt;span class="n"&gt;client_requests&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hi team! I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m Sarah, building an e-commerce platform...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Great recommendations! Now about the database choice...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Perfect! Now for development approach - monolith or microservices?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client_requests&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;=== CLIENT REQUEST &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; ===&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;consulting_team&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this as a Python file and run it to see the memory-enhanced agents in action!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Why Use SQL Databases for AI Agent Memory</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Sat, 13 Sep 2025 09:26:25 +0000</pubDate>
      <link>https://dev.to/bobur/why-use-sql-databases-for-ai-agent-memory-2cl5</link>
      <guid>https://dev.to/bobur/why-use-sql-databases-for-ai-agent-memory-2cl5</guid>
      <description>&lt;p&gt;Why are SQL databases the best choice for AI agent memory? Because SQL is everywhere, it’s transparent, it’s cheap, and it just works.&lt;/p&gt;

&lt;p&gt;If you want AI Agents to remember past chats, user preferences, or important facts, you need to give them memory. So how do we store memory for AI agents?&lt;/p&gt;

&lt;p&gt;There are many options: some people use vector databases, some use JSON files, and others use custom storage. But one of the &lt;strong&gt;best&lt;/strong&gt; and &lt;strong&gt;simplest&lt;/strong&gt; options is using a &lt;strong&gt;SQL database&lt;/strong&gt;. Let's explore why in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Most Databases Are SQL-Based
&lt;/h2&gt;

&lt;p&gt;SQL has been powering the world's applications for 50+ years. From mobile apps to web platforms, almost everything runs on a SQL engine like &lt;strong&gt;SQLite, PostgreSQL, or MySQL&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SQLite&lt;/strong&gt; alone has over &lt;strong&gt;4 billion active deployments&lt;/strong&gt;, powering every iPhone, Android device, and web browser.&lt;/li&gt;
&lt;li&gt;SQL databases handle &lt;strong&gt;trillions of queries daily&lt;/strong&gt; across industries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; has been ranked the &lt;strong&gt;#1 most-loved database&lt;/strong&gt; in developer surveys for several years.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL&lt;/strong&gt; still drives over &lt;strong&gt;30% of the web's databases&lt;/strong&gt;, including some of the world's largest e-commerce platforms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite&lt;/strong&gt; processes &lt;strong&gt;trillions of transactions per day&lt;/strong&gt; inside browsers and mobile apps without dedicated servers.&lt;/li&gt;
&lt;li&gt;SQL is the foundation for mission-critical systems in &lt;strong&gt;finance (stock exchanges), healthcare (EHR systems), government (tax systems), and telecoms (billing platforms)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;SQL's ACID compliance ensures &lt;strong&gt;data integrity even during power failures or crashes&lt;/strong&gt;, a guarantee many vector databases are still working toward.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If SQL can run mission-critical apps at a global scale, why not use it for AI memory?&lt;/p&gt;

&lt;h2&gt;
  
  
  SQL Is Great for Storing Structured Memory
&lt;/h2&gt;

&lt;p&gt;AI agent memory typically stores and retrieves information like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who the user is&lt;/li&gt;
&lt;li&gt;What the agent and user talked about&lt;/li&gt;
&lt;li&gt;What tasks were done&lt;/li&gt;
&lt;li&gt;What was said, when, and why&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this is &lt;strong&gt;structured data:&lt;/strong&gt; facts, preferences, skills, rules, and relationships. And SQL is made for that. With SQL, you can easily store, search, and update this kind of information. This mirrors how humans store short-term and long-term memory, with rules and preferences kept permanently.&lt;/p&gt;
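&lt;p&gt;As a sketch of what that structured memory can look like, a few lines of Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; are enough. The table and column names here are illustrative, not any particular framework's actual schema:&lt;/p&gt;

```python
import sqlite3

# A minimal structured-memory table (illustrative schema, not a real
# framework's): each row records who said what, about which topic, and when.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memory (
        id        INTEGER PRIMARY KEY,
        user      TEXT NOT NULL,                  -- who the user is
        topic     TEXT,                           -- what was talked about
        content   TEXT NOT NULL,                  -- what was said
        timestamp TEXT DEFAULT CURRENT_TIMESTAMP  -- when it was said
    )
""")

# Storing and recalling a fact is plain SQL.
conn.execute(
    "INSERT INTO memory (user, topic, content) VALUES (?, ?, ?)",
    ("Alice", "project", "Asked about the Q3 migration plan"),
)
row = conn.execute(
    "SELECT content FROM memory WHERE user = ? AND topic = ?",
    ("Alice", "project"),
).fetchone()
print(row[0])  # Asked about the Q3 migration plan
```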

&lt;h2&gt;
  
  
  SQL Makes It Easy to Search and Filter
&lt;/h2&gt;

&lt;p&gt;With SQL, memory is transparent and queryable. Let's say your agent needs to answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"What did I ask you last week about my project?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you use a SQL database, you can run a simple query like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="k"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'Alice'&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'project'&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="nb"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;last_week&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom. An instant answer, with direct control over exactly what comes back. With SQL, you get precise queries, relationship handling via joins, and easy backups. One of the biggest pain points with vector databases is debugging and selective recall: you often don't know &lt;em&gt;why&lt;/em&gt; a memory was retrieved. &lt;/p&gt;

&lt;h2&gt;
  
  
  SQL Is Cheaper Than Vector Databases
&lt;/h2&gt;

&lt;p&gt;Vector databases are powerful, but they can also be &lt;strong&gt;very expensive to operate&lt;/strong&gt;. You're not just paying for storage; you're also paying for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Embedding generation costs&lt;/strong&gt; (converting text to vectors).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialized vector storage fees&lt;/strong&gt;, which are higher than SQL storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Similarity search queries&lt;/strong&gt;, which get more expensive as your memory grows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extra infrastructure&lt;/strong&gt; like Redis caches, orchestration layers, and sometimes even separate graph databases to handle relationships.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With SQL, all of that disappears. You just use a &lt;strong&gt;regular PostgreSQL, MySQL, or SQLite database&lt;/strong&gt;. Many are free, and managed versions on AWS, Azure, or Supabase/Postgres cost only a few dollars per month.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Comparison
&lt;/h3&gt;

&lt;p&gt;The numbers below come from benchmark tests of SQL-based memory vs. other popular vector DB solutions:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scale (Memories)&lt;/th&gt;
&lt;th&gt;Memori (SQL)&lt;/th&gt;
&lt;th&gt;Vector DB&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10K (Startup)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$45&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$250&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;82%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100K (Small)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$140&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$470&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;70%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1M (Medium)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$1,050&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$2,200&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;52%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10M (Large)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$8,500&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$15,000&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;43%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here's what these numbers mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At the &lt;strong&gt;startup level (10K memories)&lt;/strong&gt;, SQL-based memory costs about as much as a nice dinner out (&lt;strong&gt;$45/month&lt;/strong&gt;), while vector DBs cost more than &lt;strong&gt;5x as much ($250/month)&lt;/strong&gt;. That's a big difference for early-stage projects.&lt;/li&gt;
&lt;li&gt;At the &lt;strong&gt;enterprise level (10M memories)&lt;/strong&gt;, the savings still hold. SQL memory comes in at &lt;strong&gt;$8.5K/month&lt;/strong&gt;, while vector DBs hit &lt;strong&gt;$15K/month&lt;/strong&gt;. That's a savings of &lt;strong&gt;$6,500 every month&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Storage Efficiency
&lt;/h3&gt;

&lt;p&gt;Another hidden cost is storage. Each memory stored in SQL is compact because it's just structured JSON + metadata:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SQL memory&lt;/strong&gt; ≈ &lt;strong&gt;2.8 KB per entry&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector DB memory&lt;/strong&gt; ≈ &lt;strong&gt;9.2 KB per entry&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's about a &lt;strong&gt;70% smaller footprint&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At scale, this matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1M memories&lt;/strong&gt; in SQL ≈ &lt;strong&gt;2.8 GB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1M memories&lt;/strong&gt; in a Vector DB ≈ &lt;strong&gt;9.2 GB&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SQL doesn't just save disk space; because cloud providers charge for every gigabyte, it saves money too.&lt;/p&gt;
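&lt;p&gt;These footprint figures are easy to sanity-check. Using the per-entry sizes implied by the 1M totals (≈2.8 KB vs. ≈9.2 KB):&lt;/p&gt;

```python
# Back-of-envelope check of the storage figures above (decimal units:
# 1 GB = 1,000,000 KB). Per-entry sizes are taken from the article.
entries = 1_000_000
sql_kb_per_entry = 2.8
vector_kb_per_entry = 9.2

sql_gb = entries * sql_kb_per_entry / 1_000_000
vector_gb = entries * vector_kb_per_entry / 1_000_000
savings = 1 - sql_kb_per_entry / vector_kb_per_entry

print(f"SQL: {sql_gb:.1f} GB, Vector: {vector_gb:.1f} GB")
print(f"SQL footprint is about {savings:.0%} smaller")
```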

&lt;h2&gt;
  
  
  SQL Databases Are Easy to Deploy and Manage
&lt;/h2&gt;

&lt;p&gt;Another reason SQL works so well for AI memory is that the &lt;strong&gt;infrastructure is simple&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With SQLite, you don't even need a server, just a single file that works out of the box.&lt;/li&gt;
&lt;li&gt;With PostgreSQL or MySQL, you can spin up a managed instance on any major cloud provider in minutes.&lt;/li&gt;
&lt;li&gt;Scaling is straightforward: vertical scaling for smaller projects, sharding or replication for larger deployments.&lt;/li&gt;
&lt;li&gt;SQL databases come with &lt;strong&gt;decades of tooling&lt;/strong&gt;: monitoring dashboards, migration tools, backup systems, and admin interfaces.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compare this to vector databases, which often require running &lt;strong&gt;specialized clusters, caches, and custom APIs&lt;/strong&gt; just to keep things stable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Complexity Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;SQL Database&lt;/th&gt;
&lt;th&gt;Vector Database&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Setup Time&lt;/td&gt;
&lt;td&gt;Minutes (1–2 lines of code)&lt;/td&gt;
&lt;td&gt;Hours to days&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Services Required&lt;/td&gt;
&lt;td&gt;1 (DB only)&lt;/td&gt;
&lt;td&gt;3–5 (Vector + Cache + SQL + Orchestration)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Portability&lt;/td&gt;
&lt;td&gt;Single &lt;code&gt;.db&lt;/code&gt; file / serverless instance&lt;/td&gt;
&lt;td&gt;Complex export/import&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Options&lt;/td&gt;
&lt;td&gt;Universal (AWS, Azure, GCP, Supabase, Neon)&lt;/td&gt;
&lt;td&gt;Limited or proprietary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning Curve&lt;/td&gt;
&lt;td&gt;1 day (SQL is universal)&lt;/td&gt;
&lt;td&gt;1–2 weeks to learn APIs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This means you can go from &lt;strong&gt;idea → running an AI agent with memory&lt;/strong&gt; in a single afternoon, without wrestling with infrastructure. SQL keeps the boring parts boring, which is exactly what you want when deploying memory at scale.&lt;/p&gt;

&lt;p&gt;With GibsonAI, Neon, or Supabase, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;strong&gt;branch of your database&lt;/strong&gt; for testing without affecting production.&lt;/li&gt;
&lt;li&gt;Scale storage and compute independently, so you only pay for what you use.&lt;/li&gt;
&lt;li&gt;Integrate directly with AI frameworks using standard Postgres drivers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  SQL Is Easy to Debug and Maintain
&lt;/h2&gt;

&lt;p&gt;If something goes wrong in your AI agent's memory, it's easy to see why in a SQL table.&lt;/p&gt;

&lt;p&gt;You can open your database and &lt;strong&gt;read the memory yourself&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unlike black-box systems in vector DBs, SQL is transparent. Developers can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See what the agent remembered&lt;/li&gt;
&lt;li&gt;Check if memory is being saved correctly&lt;/li&gt;
&lt;li&gt;Fix mistakes easily&lt;/li&gt;
&lt;li&gt;Run a quick query to find &lt;strong&gt;exactly which conversations or facts were stored&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Export or back up the entire memory with a single command like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp &lt;/span&gt;memory.db backup.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
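&lt;p&gt;If the agent is still running while you back up, a raw file copy can catch the database mid-write. SQLite's online backup API, available from Python's standard library, copies a consistent snapshot instead; &lt;code&gt;memory.db&lt;/code&gt; here stands in for a hypothetical live database:&lt;/p&gt;

```python
import sqlite3

# Snapshot a (hypothetical) live memory database without stopping the agent.
# sqlite3.Connection.backup() copies a consistent view even during writes.
src = sqlite3.connect("memory.db")   # the live database (created if absent)
dst = sqlite3.connect("backup.db")   # the snapshot destination
with dst:
    src.backup(dst)
src.close()
dst.close()
print("backup complete")
```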



&lt;p&gt;This simplicity stands in sharp contrast to vector-based solutions, which often require &lt;strong&gt;multiple services (vector DB + cache + SQL)&lt;/strong&gt; just to function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maintenance Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task&lt;/th&gt;
&lt;th&gt;SQL Memory (Memori)&lt;/th&gt;
&lt;th&gt;Vector DB Solutions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Backup&lt;/td&gt;
&lt;td&gt;Copy DB file (&lt;code&gt;cp&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Proprietary export API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schema Update&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ALTER TABLE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Re-index embeddings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debugging&lt;/td&gt;
&lt;td&gt;Direct &lt;code&gt;SELECT&lt;/code&gt; query&lt;/td&gt;
&lt;td&gt;Opaque similarity search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment&lt;/td&gt;
&lt;td&gt;Single DB file&lt;/td&gt;
&lt;td&gt;Multi-service cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transparency&lt;/td&gt;
&lt;td&gt;Full audit trail&lt;/td&gt;
&lt;td&gt;Limited / none&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Additional benefits of SQL's simplicity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Portability&lt;/strong&gt;: You can move a SQLite file from one machine to another without setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditability&lt;/strong&gt;: Every decision the AI makes can be traced back to stored rows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance Ready&lt;/strong&gt;: Industries like finance and healthcare require audit logs and explainability. SQL provides both by default.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Universality&lt;/strong&gt;: Works anywhere from local development on your laptop to cloud-scale PostgreSQL clusters.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When Not to Use SQL
&lt;/h2&gt;

&lt;p&gt;Of course, SQL isn't perfect for every scenario. While it shines in structured memory, transparency, and cost efficiency, there are still cases where vector databases or hybrid approaches make sense.&lt;/p&gt;

&lt;p&gt;You might still want a &lt;strong&gt;vector DB&lt;/strong&gt; if you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pure semantic similarity search&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Example: "Find text passages that are most similar to this paragraph" without relying on keywords or entities.&lt;/li&gt;
&lt;li&gt;SQL can approximate this with full-text search (FTS), but it's not designed for cosine similarity across high-dimensional embeddings.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Real-time embeddings for multimedia&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;If your AI system needs to instantly process and compare images, audio, or video in vector form, SQL isn't the best tool.&lt;/li&gt;
&lt;li&gt;Vector DBs are optimized for these workloads and support multimodal embeddings.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Massive distributed scale&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;SQL databases handle millions, even tens of millions, of records well, but when you reach &lt;strong&gt;hundreds of millions (100M+) or billions&lt;/strong&gt;, distributed vector DBs can offer better performance across clusters.&lt;/li&gt;
&lt;li&gt;This scale is usually only required by global search engines, huge SaaS products, or social networks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Real-time similarity matching at high volume&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;If your system needs to process &lt;strong&gt;millions of similarity queries per second&lt;/strong&gt;, vector indexes like HNSW or IVF can outperform SQL in raw throughput.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
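&lt;p&gt;To make the first point concrete, here is SQL's keyword-level approximation using SQLite's FTS5 full-text index (assuming an SQLite build with FTS5 enabled, which most Python distributions ship). It matches by token, not by meaning:&lt;/p&gt;

```python
import sqlite3

# Keyword-level recall with SQLite FTS5: fine for "memories mentioning X",
# but not a substitute for embedding-based cosine similarity.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE mem_fts USING fts5(content)")
conn.executemany(
    "INSERT INTO mem_fts (content) VALUES (?)",
    [
        ("User prefers PostgreSQL for production deployments",),
        ("Discussed the e-commerce checkout flow last week",),
    ],
)

# MATCH is token-based (and case-insensitive for ASCII by default).
rows = conn.execute(
    "SELECT content FROM mem_fts WHERE mem_fts MATCH ? ORDER BY rank",
    ("postgresql",),
).fetchall()
print(rows[0][0])  # User prefers PostgreSQL for production deployments
```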

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;AI is moving fast, but sometimes the best technology is the one we already have. SQL databases have powered the world's apps for decades. Now, they can power AI memory too. By storing conversations, facts, and preferences in SQL, we make AI agents more useful, more trustworthy, and more affordable.&lt;/p&gt;

&lt;p&gt;By using SQL databases, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Proven technology&lt;/strong&gt; trusted everywhere.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured memory&lt;/strong&gt; with facts, rules, and preferences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy querying&lt;/strong&gt; with standard SQL.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower costs&lt;/strong&gt;, with savings of roughly 40–80% depending on scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transparency and auditability&lt;/strong&gt; for compliance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Need help getting started? Try &lt;a href="https://github.com/gibsonai/memori" rel="noopener noreferrer"&gt;&lt;strong&gt;Memori&lt;/strong&gt;&lt;/a&gt; — it's an open-source memory layer that works with any SQL database. Zero setup. Automatic memory. Smart agents.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>sql</category>
      <category>database</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to build an OpenAI Agent with persistent memory</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Wed, 27 Aug 2025 10:19:41 +0000</pubDate>
      <link>https://dev.to/bobur/how-to-build-an-openai-agent-with-persistent-memory-51kj</link>
      <guid>https://dev.to/bobur/how-to-build-an-openai-agent-with-persistent-memory-51kj</guid>
      <description>&lt;p&gt;This guide shows how to add &lt;strong&gt;persistent memory&lt;/strong&gt; to your &lt;a href="https://github.com/openai/openai-agents-python" rel="noopener noreferrer"&gt;OpenAI Agents&lt;/a&gt; using &lt;a href="https://github.com/GibsonAI/memori" rel="noopener noreferrer"&gt;Memori&lt;/a&gt;, an open-source memory engine that makes AI agents remember conversations and learn from past interactions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI memory for customer support chats&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Stop making customers repeat themselves. Add memory so AI remembers, learns, and supports like a human.&lt;br&gt;
🔗 &lt;a href="https://www.chatmemory.ai" rel="noopener noreferrer"&gt;https://www.chatmemory.ai&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;p&gt;In this example, we'll build a memory-enhanced assistant that can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Remember Past Conversations&lt;/strong&gt; - Keep track of what you've talked about before&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn Your Preferences&lt;/strong&gt; - Remember what you like and don't like
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search Memory&lt;/strong&gt; - Find relevant information from previous chats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store New Information&lt;/strong&gt; - Save important details for future use&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How Memori Works
&lt;/h2&gt;

&lt;p&gt;Memori gives your AI agents two types of memory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short-term Memory (Conscious Mode)&lt;/strong&gt; - Like keeping important info in your head that you use often&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term Memory (Auto Mode)&lt;/strong&gt; - Like searching through all your past conversations when you need specific information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it like having a personal assistant who never forgets anything you've told them!&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before running this cookbook, you need:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. OpenAI Account
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sign up at &lt;a href="https://openai.com" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; if you don't have an account&lt;/li&gt;
&lt;li&gt;Get your API key from the &lt;a href="https://platform.openai.com/api-keys" rel="noopener noreferrer"&gt;OpenAI API Keys page&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  2. Install Required Packages
&lt;/h4&gt;

&lt;p&gt;We'll install these packages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;memorisdk&lt;/code&gt; - The Memori memory engine&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;openai-agents&lt;/code&gt; - OpenAI's agent framework&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;python-dotenv&lt;/code&gt; - For environment variable management&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  3. Create a .env file with your OpenAI API Key
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=sk-your-openai-key-here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Environment Setup
&lt;/h2&gt;

&lt;p&gt;First, we will install the necessary packages and set up our environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Required Packages
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Install required packages
&lt;/span&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;memorisdk&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="n"&gt;python&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dotenv&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Import Libraries and Initialize
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;textwrap&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dedent&lt;/span&gt;

&lt;span class="c1"&gt;# Import Memori for memory capabilities
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;memori&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Memori&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;create_memory_tool&lt;/span&gt;

&lt;span class="c1"&gt;# Import OpenAI Agents SDK
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;function_tool&lt;/span&gt;

&lt;span class="c1"&gt;# Load environment variables
&lt;/span&gt;&lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;All packages imported successfully!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memori + OpenAI Agents integration ready!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Initialize Memori Memory System
&lt;/h2&gt;

&lt;p&gt;Next, we will set up Memori to give our agent persistent memory. We'll use both conscious and auto modes for the best experience.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Initializing Memori memory system...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize Memori with both conscious and auto memory modes
&lt;/span&gt;&lt;span class="n"&gt;memory_system&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Memori&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;database_connect&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sqlite:///cookbook_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Local SQLite database
&lt;/span&gt;    &lt;span class="n"&gt;conscious_ingest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Short-term working memory
&lt;/span&gt;    &lt;span class="n"&gt;auto_ingest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;# Dynamic memory search  
&lt;/span&gt;    &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;          &lt;span class="c1"&gt;# Less debug output
&lt;/span&gt;    &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cookbook_demo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Organize memories by project
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Enable the memory system
&lt;/span&gt;&lt;span class="n"&gt;memory_system&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory system initialized!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory database: cookbook_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Namespace: cookbook_demo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create Memory Tools
&lt;/h2&gt;

&lt;p&gt;Now we can create function tools that let our agent search and store memories. These tools give the agent the ability to remember and recall information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create the built-in memory search tool
&lt;/span&gt;&lt;span class="n"&gt;memory_tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_memory_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;memory_system&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@function_tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Search the agent&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s memory for past conversations, user preferences, and information.
    Use this to find relevant context from previous interactions.

    Args:
        query: What to search for in memory (e.g., &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;favorite color&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Python projects&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;)

    Returns:
        str: Search results from the agent&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s memory
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Please provide a search query&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Searching memory for: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Use Memori's memory tool to search
&lt;/span&gt;        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;memory_tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No relevant memories found&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory search error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


&lt;span class="nd"&gt;@function_tool&lt;/span&gt;  
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;remember_user_info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Remember important information about the user for future conversations.
    Use this when the user shares preferences, goals, or other important details.

    Args:
        info: The information to remember (e.g., &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s name is Alice&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Prefers Python over JavaScript&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;)

    Returns:
        str: Confirmation of what was remembered
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Storing user info: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Store in Memori's memory system
&lt;/span&gt;        &lt;span class="n"&gt;memory_system&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;record_conversation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User shared: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="n"&gt;ai_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ll remember that: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Remembered: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error storing information: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory tools created successfully!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Available tools:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - search_memory: Find past conversations and preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - remember_user_info: Store new information about the user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
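One practical note: functions decorated with `@function_tool` are meant to be invoked by the agent, not called directly, which makes them awkward to sanity-check. A common pattern is to keep the body in a plain helper and have the tool delegate to it. The sketch below is illustrative: `search_impl` is a hypothetical stand-in for the body of `search_memory`, with the memory backend injected so the validation logic runs without Memori or an API key.

```python
# A minimal sketch: keep tool logic in a plain, testable helper.
# `search_impl` stands in for the body of `search_memory`; the real
# version would pass memory_tool.execute instead of the stub default.
def search_impl(query: str, execute=lambda q: f"results for {q}") -> str:
    if not query.strip():
        return "Please provide a search query"
    result = execute(query.strip())
    return result if result else "No relevant memories found"

# The decorated tool then just delegates:
# @function_tool
# def search_memory(query: str) -> str:
#     return search_impl(query, execute=lambda q: memory_tool.execute(query=q))

print(search_impl("   "))        # validation path
print(search_impl("user name"))  # happy path
```

This keeps the agent-facing tool thin while the logic stays easy to exercise in isolation.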



&lt;h2&gt;
  
  
  Create the Memory-Enhanced Agent
&lt;/h2&gt;

&lt;p&gt;Now let's create an OpenAI Agent with access to persistent memory. Its instructions tell it when to search memory and when to store new information, so it can recall past conversations and personalize its responses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create the memory-enhanced OpenAI Agent
&lt;/span&gt;&lt;span class="n"&gt;memory_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory-Enhanced Assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;dedent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
        You are a helpful AI assistant with persistent memory capabilities. You can remember
        past conversations and user preferences across different chat sessions.

        Your memory abilities:
        1. search_memory: Search for relevant past conversations, user preferences, 
           and any information from previous interactions
        2. remember_user_info: Store important information about the user

        Guidelines for using memory:
        - ALWAYS start by searching your memory for relevant context before responding
        - When users share important information (name, preferences, goals, projects), 
          use remember_user_info to store it
        - Be conversational and personalize responses based on remembered information
        - Reference past conversations naturally when relevant
        - If this seems like a first conversation, introduce your memory capabilities

        Be helpful, friendly, and make good use of your memory to provide personalized assistance!
        &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;search_memory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;remember_user_info&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory-Enhanced Assistant created!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;This agent can:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Remember past conversations&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Learn your preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Search through memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Provide personalized responses&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Helper Function for Agent Interaction
&lt;/h2&gt;

&lt;p&gt;Next, we create a helper function that runs user input through the memory-enhanced agent and automatically records the exchange in Memori, so it is available in future sessions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;chat_with_memory_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Process user input through the memory-enhanced agent and store the conversation.

    Args:
        user_input: What the user said

    Returns:
        str: The agent&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s response
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Processing: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Run the agent with the user input
&lt;/span&gt;        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;memory_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Get the response content
&lt;/span&gt;        &lt;span class="n"&gt;response_content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;final_output&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;hasattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;final_output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Store the conversation in Memori's memory system
&lt;/span&gt;        &lt;span class="n"&gt;memory_system&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;record_conversation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="n"&gt;ai_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;response_content&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response_content&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;error_msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sorry, I encountered an error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;error_msg&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;error_msg&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Chat function ready!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
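The demo cells that follow use top-level `await`, which works in Jupyter/IPython because a loop is already running. In a plain Python script there is no running event loop, so you would wrap the calls with `asyncio.run`. A minimal sketch of that pattern, using a hypothetical stand-in coroutine so it runs without an OpenAI key (in the real script you would use the `chat_with_memory_agent` defined above):

```python
import asyncio

# Stand-in for chat_with_memory_agent so this sketch is self-contained;
# replace with the real coroutine in your script.
async def chat_with_memory_agent(user_input: str) -> str:
    return f"(agent reply to: {user_input})"

async def main() -> None:
    # Each call awaits the agent; the real helper also records the exchange.
    for message in ["Hi! I'm Alice.", "What do you remember about me?"]:
        response = await chat_with_memory_agent(message)
        print(f"Assistant: {response}")

if __name__ == "__main__":
    # asyncio.run creates the event loop that notebooks provide implicitly.
    asyncio.run(main())
```

If you copy the demos into a script verbatim, move each `await` into an `async` function like `main` above; otherwise Python raises `SyntaxError: 'await' outside function`.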



&lt;h2&gt;
  
  
  Demo 1: First Conversation - Building Memory
&lt;/h2&gt;

&lt;p&gt;We start with a first conversation, introducing ourselves and sharing a few preferences. The agent will store this information for future interactions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Demo 1: First Conversation - Building Memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# First conversation - introducing ourselves
&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hi! I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m Alice and I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m a Python developer. I love working with data science and I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m currently learning about AI agents.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Assistant (thinking...)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;chat_with_memory_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Assistant: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demo 2: Second Conversation - Memory in Action
&lt;/h2&gt;

&lt;p&gt;Now let's have another conversation and see how the agent uses the memory from our previous interaction to provide personalized responses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Demo 2: Second Conversation - Memory in Action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Second conversation - asking for help
&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Can you help me with a project? I want to build something cool but I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m not sure what.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Assistant (thinking...)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;chat_with_memory_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Assistant: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demo 3: Adding More Preferences
&lt;/h2&gt;

&lt;p&gt;Let's add more information to our memory by sharing additional preferences.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Demo 3: Adding More Preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Adding more preferences
&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I also love using Jupyter notebooks for my data analysis work, and I prefer using pandas and matplotlib for visualization. I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m working on a machine learning project about predicting house prices.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Assistant (thinking...)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;chat_with_memory_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Assistant: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demo 4: Testing Memory Recall
&lt;/h2&gt;

&lt;p&gt;Now we can test how well the agent remembers our previous conversations by asking about something we mentioned earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Demo 4: Testing Memory Recall&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Testing memory recall
&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What do you remember about my current projects and interests?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Assistant (thinking...)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;chat_with_memory_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Assistant: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demo 5: Memory-Based Recommendations
&lt;/h2&gt;

&lt;p&gt;Finally, let's see how the agent uses accumulated memory to provide personalized recommendations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Demo 5: Memory-Based Recommendations&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Getting personalized recommendations
&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I have some free time this weekend. What would you recommend I work on or learn about?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Assistant (thinking...)&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;chat_with_memory_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Assistant: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Interactive Chat Session
&lt;/h2&gt;

&lt;p&gt;Create an interactive chat session where you can talk with the memory-enhanced agent directly. The agent will remember everything from this conversation for future sessions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;interactive_chat&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Interactive chat session with the memory-enhanced agent.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Interactive Chat Session Started!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Start chatting with your memory-enhanced assistant!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Try asking about previous conversations or sharing new information.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Type &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;quit&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;exit&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, or &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;stop&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; to end the session.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;conversation_count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Get user input
&lt;/span&gt;            &lt;span class="n"&gt;user_input&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

            &lt;span class="c1"&gt;# Check for exit commands
&lt;/span&gt;            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;quit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stop&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bye&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Assistant: Goodbye! I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ll remember our conversation for next time.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;continue&lt;/span&gt;

            &lt;span class="n"&gt;conversation_count&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Assistant (thinking... #&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;conversation_count&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="c1"&gt;# Get response from memory-enhanced agent
&lt;/span&gt;            &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;chat_with_memory_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Assistant: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;KeyboardInterrupt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;Assistant: Goodbye! I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ll remember our conversation for next time.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Please try again.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Session Summary:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Conversations: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;conversation_count&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Memory database: cookbook_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Namespace: cookbook_demo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;All conversations are saved and will be available in future sessions!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Start the interactive chat (uncomment the line below to run)
# await interactive_chat()
&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Interactive chat function ready!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Uncomment the line above and run the cell to start chatting!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
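&lt;p&gt;One note on running this outside a notebook: top-level &lt;code&gt;await&lt;/code&gt; works in Jupyter, but a plain Python script needs &lt;code&gt;asyncio.run&lt;/code&gt; to drive the coroutine. A minimal sketch, with a stand-in coroutine in place of the real &lt;code&gt;interactive_chat&lt;/code&gt;:&lt;/p&gt;

```python
import asyncio

# Stand-in for the interactive_chat coroutine defined above (hypothetical body).
async def interactive_chat():
    return "session finished"

def main():
    # asyncio.run creates the event loop, awaits the coroutine, and closes the loop.
    result = asyncio.run(interactive_chat())
    print(result)

if __name__ == "__main__":
    main()
```

&lt;p&gt;In the actual script you would drop the stand-in and call &lt;code&gt;asyncio.run(interactive_chat())&lt;/code&gt; on the function defined earlier.&lt;/p&gt;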



&lt;h2&gt;
  
  
  Memory Statistics and Inspection
&lt;/h2&gt;

&lt;p&gt;Let's look at what's been stored in our memory system and get some statistics about our conversations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory System Statistics&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Get memory statistics
&lt;/span&gt;    &lt;span class="n"&gt;stats&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;memory_system&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_memory_stats&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory Statistics:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Memory stats not available: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Searching for stored preferences...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Search for user preferences
&lt;/span&gt;    &lt;span class="n"&gt;preferences&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;memory_tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user preferences python data science&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Found preferences:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;preferences&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error searching memory: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Memory Database Information:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Database file: cookbook_memory.db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Namespace: cookbook_demo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Conscious mode: Enabled (short-term memory)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;   - Auto mode: Enabled (dynamic search)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
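&lt;p&gt;Because the memory store is a plain SQLite file, you can also peek under the hood with Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; module. A quick sketch (table names vary by Memori version, so it simply lists whatever is there):&lt;/p&gt;

```python
import sqlite3

def inspect_sqlite(path):
    """Return {table_name: row_count} for every table in a SQLite database."""
    conn = sqlite3.connect(path)
    try:
        tables = [name for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        # Table names come from sqlite_master itself, so interpolation is safe here.
        return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in tables}
    finally:
        conn.close()

# Point this at the tutorial's database to see what has been stored:
print(inspect_sqlite("cookbook_memory.db"))
```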



&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;

&lt;p&gt;Here are some practical applications where Memori + OpenAI Agents can be powerful:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Personal Assistant&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Remember your daily routines, preferences, and goals&lt;/li&gt;
&lt;li&gt;Track ongoing projects and deadlines&lt;/li&gt;
&lt;li&gt;Provide personalized recommendations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Customer Support Agent&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Remember customer history and preferences&lt;/li&gt;
&lt;li&gt;Track previous issues and solutions&lt;/li&gt;
&lt;li&gt;Provide consistent, personalized support&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Learning Companion&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Remember what you've learned and what you're struggling with&lt;/li&gt;
&lt;li&gt;Track your learning progress over time&lt;/li&gt;
&lt;li&gt;Suggest next steps based on your learning journey&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Code Review Assistant&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Remember your coding style and preferences&lt;/li&gt;
&lt;li&gt;Track patterns in your code reviews&lt;/li&gt;
&lt;li&gt;Learn from past feedback to improve suggestions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. &lt;strong&gt;Research Assistant&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Remember your research topics and interests&lt;/li&gt;
&lt;li&gt;Track papers you've read and want to read&lt;/li&gt;
&lt;li&gt;Connect related research across different sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You now have a memory-enhanced AI agent that can remember conversations, learn preferences, and provide personalized assistance across sessions. This is just the beginning of what's possible with persistent memory in AI agents!&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explore More Examples&lt;/strong&gt;: Check out &lt;a href="https://github.com/gibsonai/memori" rel="noopener noreferrer"&gt;Memori's GitHub&lt;/a&gt; for more integration examples&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production Setup&lt;/strong&gt;: Use PostgreSQL or MySQL for production applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Tools&lt;/strong&gt;: Create specialized memory tools for your specific use case&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Agent Systems&lt;/strong&gt;: Share memory between multiple agents&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memori Documentation&lt;/strong&gt;: &lt;a href="https://gibsonai.github.io/memori/" rel="noopener noreferrer"&gt;gibsonai.github.io/memori&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Repository&lt;/strong&gt;: &lt;a href="https://github.com/gibsonai/memori" rel="noopener noreferrer"&gt;github.com/gibsonai/memori&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discord Community&lt;/strong&gt;: &lt;a href="https://www.gibsonai.com/discord" rel="noopener noreferrer"&gt;gibsonai.com/discord&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AI Apps with memory or without</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Wed, 13 Aug 2025 09:00:23 +0000</pubDate>
      <link>https://dev.to/bobur/ai-apps-with-memory-or-without-46k4</link>
      <guid>https://dev.to/bobur/ai-apps-with-memory-or-without-46k4</guid>
      <description>&lt;p&gt;How does memory change AI conversations? Why do agents need memory even more than assistants?&lt;/p&gt;

&lt;p&gt;When you chat with AI, for example, using ChatGPT, conversations are often stateless. Every interaction requires you to repeat context, preferences, and background information. You tell it your name, your preferences, your plans but next time you talk to it (or you initiate a new chat), it forgets everything that was in the past. That’s because most AI apps today don’t have &lt;strong&gt;memory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let’s explore why memory matters for AI, and how memory tools can make your AI apps (and agents!) smarter, more helpful, and more human-like.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Apps Need Memory
&lt;/h2&gt;

&lt;p&gt;When we say “memory,” we mean the app can &lt;strong&gt;save important facts from earlier&lt;/strong&gt; and &lt;strong&gt;use them later&lt;/strong&gt;. Just like people, AI apps need to &lt;em&gt;remember&lt;/em&gt; things — like your name, past conversations, preferences, or important facts. Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your coffee order: “latte, no sugar”&lt;/li&gt;
&lt;li&gt;Your timezone and language&lt;/li&gt;
&lt;li&gt;A project summary or running to‑do list&lt;/li&gt;
&lt;li&gt;Results from a tool call (e.g., “the API returned status=202”)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common AI Memory Types
&lt;/h3&gt;

&lt;p&gt;In common scenarios, AI remembers the following types of information:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Facts&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Objective information&lt;/td&gt;
&lt;td&gt;“I use PostgreSQL for databases”&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Preferences&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;User choices&lt;/td&gt;
&lt;td&gt;“I prefer clean, readable code”&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Skills&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Abilities &amp;amp; knowledge&lt;/td&gt;
&lt;td&gt;“Experienced with FastAPI”&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Rules&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Constraints &amp;amp; guidelines&lt;/td&gt;
&lt;td&gt;“Always write tests first”&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Session information&lt;/td&gt;
&lt;td&gt;“Working on e-commerce project”&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
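&lt;p&gt;To make the table concrete, here is a stdlib-only sketch of typed memories as a small data structure. The names are hypothetical, not Memori's internal schema; they just mirror the rows above:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    kind: str      # "fact", "preference", "skill", "rule", or "context"
    content: str

# Entries taken from the examples in the table above.
memories = [
    MemoryEntry("fact", "I use PostgreSQL for databases"),
    MemoryEntry("preference", "I prefer clean, readable code"),
    MemoryEntry("rule", "Always write tests first"),
]

def recall(kind):
    """Return all stored contents of a given memory type."""
    return [m.content for m in memories if m.kind == kind]

print(recall("rule"))  # ['Always write tests first']
```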

&lt;h3&gt;
  
  
  AI Apps Without Memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You have to repeat yourself every time. LLMs (Large Language Models like GPT-4 or GPT-5) don’t “remember” past chats on their own, so you waste tokens and pay more to re-send the same context.&lt;/li&gt;
&lt;li&gt;AI gives generic answers. Context windows are limited. If your history is long, parts get dropped.&lt;/li&gt;
&lt;li&gt;It can't build long-term relationships. A memory layer stores &lt;strong&gt;long‑term&lt;/strong&gt; facts and injects only the &lt;strong&gt;relevant&lt;/strong&gt; bits into the next prompt.&lt;/li&gt;
&lt;/ul&gt;
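&lt;p&gt;The second point is worth seeing in code: without a memory layer, the only way to stay inside the context window is to drop the oldest messages once a token budget is exceeded, and whatever facts they carried are lost. A rough sketch, using word count as a crude stand-in for tokens:&lt;/p&gt;

```python
def trim_history(messages, budget):
    """Keep only the most recent messages that fit in a token budget.
    Everything older is silently dropped, along with the facts it carried."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())  # crude token estimate: one word ~ one token
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "My name is Sam",
    "I live in Berlin",
    "I like hiking",
    "What's the weather?",
]
# With a small budget, the name and city are dropped before the question is asked.
print(trim_history(history, budget=8))
```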

&lt;h3&gt;
  
  
  AI Apps With Memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Every input context is automatically remembered.&lt;/li&gt;
&lt;li&gt;Reduces token usage and lowers costs. A significant share of tokens in LLM apps (often around 30%) is spent repeating the same context.&lt;/li&gt;
&lt;li&gt;AI apps feel more personal.&lt;/li&gt;
&lt;li&gt;Responses become more accurate, because the assistant accumulates context with every interaction.&lt;/li&gt;
&lt;/ul&gt;
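&lt;p&gt;The cost claim is simple arithmetic: if a fixed block of context is re-sent with every request, replacing it with a small injected memory shrinks every prompt. A back-of-envelope sketch (the prices and token counts are made-up placeholders, not real rates):&lt;/p&gt;

```python
def prompt_cost(requests, tokens_per_request, price_per_1k_tokens):
    """Total prompt spend for a batch of requests."""
    return requests * tokens_per_request * price_per_1k_tokens / 1000

REQUESTS = 10_000
PRICE = 0.01             # $ per 1K prompt tokens (placeholder rate)
QUESTION = 200           # tokens in the actual question
REPEATED_CONTEXT = 600   # tokens of background re-sent on every request
RELEVANT_MEMORY = 100    # tokens a memory layer injects instead

without_memory = prompt_cost(REQUESTS, QUESTION + REPEATED_CONTEXT, PRICE)
with_memory = prompt_cost(REQUESTS, QUESTION + RELEVANT_MEMORY, PRICE)
print(f"without memory: ${without_memory:.2f}")  # $80.00
print(f"with memory:    ${with_memory:.2f}")     # $30.00
```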

&lt;h2&gt;
  
  
  How AI answers improve with memory
&lt;/h2&gt;

&lt;p&gt;To understand the difference better, let’s have a look at a quick example (with and without memory). Say you’re building a personal assistant app.&lt;/p&gt;

&lt;p&gt;You say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"My name is Sam and I live in Berlin."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"What’s the weather like today?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Without Memory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI doesn’t remember anything from earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User: What's the weather like today?
AI: Could you tell me your city?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boring, right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Memory&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Now let’s add memory, and the response might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI: The weather in Berlin today is sunny with a high of 25°C. Have a great day, Sam!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom 💥 — now the AI feels personal and useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example Code: No memory vs With Memory
&lt;/h2&gt;

&lt;p&gt;Below are two tiny examples showing the difference in Python. For the LLM, use any provider you like (OpenAI, etc.). I’ll keep the snippets simple to stay focused on memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No Memory&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;

&lt;span class="c1"&gt;# Create OpenAI client
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Repeating context every single time
&lt;/span&gt;&lt;span class="n"&gt;response_no_memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a Python expert. I use FastAPI, PostgreSQL, prefer clean code...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Remember, I work on microservices, use Docker, my teammate is Mike...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Help me with authentication&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=== Without Memory ===&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response_no_memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;With Memory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your AI &lt;strong&gt;remembers important facts&lt;/strong&gt; across conversations. Once you’ve mentioned them, you don’t need to repeat them. Now, let's use &lt;a href="https://memori.gibsonai.com/" rel="noopener noreferrer"&gt;Memori&lt;/a&gt;, a simple open-source Python library, to give your AI persistent memory across sessions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;memori&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Memori&lt;/span&gt;

&lt;span class="c1"&gt;# Enable Memori to automatically record all conversations
&lt;/span&gt;&lt;span class="n"&gt;memori&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Memori&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conscious_ingest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;memori&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Context automatically injected from memory
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Help me with authentication&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;=== With Memory ===&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response_no_memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# ✨ Memori already knows:
# FastAPI, PostgreSQL, prefers clean code,
# works on microservices, uses Docker,
# teammate is Mike
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this change, the model &lt;strong&gt;remembers&lt;/strong&gt; your stack and preferences via Memori and jumps straight to the right solution. Because the AI knows you’re using FastAPI with PostgreSQL in a microservices setup and deploy via Docker, it recommends JWT auth with an OAuth2 password flow.&lt;/p&gt;

&lt;p&gt;You can run and try this example by following the &lt;strong&gt;🔗&lt;/strong&gt;&lt;a href="https://github.com/GibsonAI/memori/blob/main/examples/personal_assistant.py" rel="noopener noreferrer"&gt;Memori &lt;strong&gt;Personal Assistant&lt;/strong&gt; demo&lt;/a&gt;. It’s a great starting point for adapting the with-memory code above into your own AI assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Agents Need Memory Even More
&lt;/h2&gt;

&lt;p&gt;Agents especially need memory because they break work into steps and often call tools or other agents:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Plan the task&lt;/li&gt;
&lt;li&gt;Search the web&lt;/li&gt;
&lt;li&gt;Call an API&lt;/li&gt;
&lt;li&gt;Parse results&lt;/li&gt;
&lt;li&gt;Write a draft&lt;/li&gt;
&lt;li&gt;Review and fix&lt;/li&gt;
&lt;li&gt;Maybe more…&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Without memory, each step loses the previous step’s key facts, leading to repeated work and errors. Memory keeps those steps connected.  &lt;/p&gt;

&lt;p&gt;With memory, agents can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Share state across steps (“we already fetched page 2, skip it”)&lt;/li&gt;
&lt;li&gt;Learn preferences (“always use 24‑hour time”)&lt;/li&gt;
&lt;li&gt;Recover after failure (re‑use cached results instead of re‑scraping)&lt;/li&gt;
&lt;li&gt;Move faster (less repeated prompting and tool calls)&lt;/li&gt;
&lt;/ul&gt;
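&lt;p&gt;The shared-state idea above can be sketched in a few lines of plain Python. This is only an illustrative stand-in for an agent's memory store, not the Memori API:&lt;/p&gt;

```python
# Minimal sketch of shared agent memory across steps (illustrative,
# not the Memori API).

class AgentMemory:
    """Stores facts once so later steps can reuse them."""

    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        self.facts[key] = value

    def recall(self, key, default=None):
        return self.facts.get(key, default)


memory = AgentMemory()

# Step 2 (search the web) records which pages were already fetched...
memory.remember("fetched_pages", {1, 2})
# ...and a learned user preference.
memory.remember("time_format", "24-hour")

# A later step can skip repeated work instead of re-fetching.
page = 2
if page in memory.recall("fetched_pages", set()):
    action = "skip"  # already fetched; reuse the cached result
else:
    action = "fetch"

print(action)  # skip
print(memory.recall("time_format"))  # 24-hour
```

&lt;p&gt;Without the shared store, the "parse" step would have no way of knowing that page 2 was already fetched, and the agent would repeat the work.&lt;/p&gt;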

&lt;p&gt;To see this in action with agents, check out this minimal example: &lt;strong&gt;🔗 &lt;a href="https://github.com/GibsonAI/memori/blob/main/examples/integrations/agno_example.py" rel="noopener noreferrer"&gt;Memori + Agno integration example&lt;/a&gt;.&lt;/strong&gt; It shows how to integrate &lt;strong&gt;persistent memory&lt;/strong&gt; with &lt;strong&gt;Agno agents&lt;/strong&gt;, so your agents can remember facts, preferences, and task results across runs, making multi-step workflows more consistent and efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Examples: How AI Agents Use Memory Today
&lt;/h2&gt;

&lt;p&gt;AI agents with memory are already being used in the real world — from smart customer support to research assistants and personal AI companions. Companies like Intercom and Drift are now using &lt;strong&gt;AI bots with memory&lt;/strong&gt; to remember past tickets, know your account tier, and track unresolved issues. &lt;a href="https://www.rewind.ai/" rel="noopener noreferrer"&gt;Rewind.ai&lt;/a&gt;, meanwhile, is building a personal AI that remembers everything you’ve seen, said, or heard — by recording your screen and transcribing conversations.&lt;/p&gt;

&lt;p&gt;Want to see how memory can power real AI apps and agents? Check out &lt;a href="https://memori.gibsonai.com/use-cases/e-commerce" rel="noopener noreferrer"&gt;Memori’s Use Case Gallery&lt;/a&gt; — including examples for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smart shopping experiences for e-commerce&lt;/li&gt;
&lt;li&gt;Customer support bots&lt;/li&gt;
&lt;li&gt;AI research assistants&lt;/li&gt;
&lt;li&gt;Personalized chatbots&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;AI without memory is like starting from scratch every time — you must repeat the same context in each conversation, which wastes tokens, increases costs, and results in inconsistent answers. With memory, important details are automatically remembered, reducing token usage, lowering costs, and ensuring consistent, personalized experiences.&lt;/p&gt;

&lt;p&gt;For AI agents, memory is even more critical. Without it, agents can’t track progress, often repeat steps, and lose sight of important goals. With memory, agents coordinate better, learn from past mistakes, and steadily improve over time.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>llm</category>
      <category>learning</category>
    </item>
    <item>
      <title>What the AI Coding Experience Senior Software Engineers want</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Wed, 23 Jul 2025 09:37:00 +0000</pubDate>
      <link>https://dev.to/bobur/what-the-ai-coding-experience-senior-software-engineers-want-2cij</link>
      <guid>https://dev.to/bobur/what-the-ai-coding-experience-senior-software-engineers-want-2cij</guid>
      <description>&lt;p&gt;AI coding assistants or editors, such as &lt;a href="https://cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;, &lt;a href="http://windsurf.com/" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt;, &lt;a href="https://lovable.dev/" rel="noopener noreferrer"&gt;Lovable&lt;/a&gt;, and &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt;, are transforming how developers write code. You can now turn an idea into a working app in minutes just by typing a few prompts. That’s exciting but also risky. Many new developers can now build features without really understanding how the code works. Can you trust what the AI writes? Will you or your team understand it later? In some cases, the AI is making big decisions about how the software architecture is built, not the developer. &lt;/p&gt;

&lt;p&gt;Usually, senior engineers do not jump straight into coding without considering domain knowledge, architecture, or code reusability. They know when a piece of code fits and when it doesn’t. To be useful for real projects, AI tools need to give developers more structure, more control, and more ways to test and trust what gets built.&lt;/p&gt;

&lt;p&gt;In this article, I will explore existing problems with AI-assisted coding (or some people call it &lt;em&gt;vibe coding&lt;/em&gt;) and what the AI editor experience should look like for senior software engineers.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The first coding agent that plans, tests, and delivers reliable code like a senior engineer. Try out &lt;a href="https://www.verdent.ai/" rel="noopener noreferrer"&gt;Verdent&lt;/a&gt;!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Problem with Most AI Coding Tools Today
&lt;/h2&gt;

&lt;p&gt;Windsurf, Cursor, and others have shown us that language models can write code. Most of them today aim to save time and automate routine tasks. AI can automate up to 80% of the work, but achieving 99% or higher accuracy still &lt;strong&gt;depends on human input&lt;/strong&gt;, because in the end, the most valuable part of your codebase isn’t the code; it’s the thinking behind it. Let’s review some key AI coding problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The AI misunderstands your intent
&lt;/h3&gt;

&lt;p&gt;The AI never fully understands what you want to build. You type a prompt like “create an endpoint that returns active users.” The AI confidently writes some code. But what does “active” mean in this context? Last login? Session time? Subscription status? &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AI gives you a half-right solution without understanding the full intent&lt;/strong&gt;. If you try to provide highly detailed prompts, it is &lt;a href="https://www.reddit.com/r/cursor/comments/1lrlb6m/cursors_new_pricing_model_is_absolute_garbage/" rel="noopener noreferrer"&gt;too costly with a token-based pricing model&lt;/a&gt; and effort-intensive for the user. Or the AI gets into loops, forgets the prompt halfway. Now you spend more time debugging code you didn’t write.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf1itfaiplj59u8jatkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf1itfaiplj59u8jatkq.png" alt="The AI misunderstands your intent" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The AI doesn't explain its choices
&lt;/h3&gt;

&lt;p&gt;Where did this API call come from? What’s the structure of this function? Why did it choose this library? These questions go unanswered because &lt;strong&gt;most AI tools provide output without rationale&lt;/strong&gt;. As a result, senior developers are left auditing unfamiliar code with no insight into the assumptions or trade-offs behind it, which makes modifying the output risky. When nobody owns or understands the reasoning behind the code, the code loses its long-term maintainability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficbyt1qkbrtcxy1zvtro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficbyt1qkbrtcxy1zvtro.png" alt="The AI doesn't explain its choices" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. No Task Structure, No Planning
&lt;/h3&gt;

&lt;p&gt;Coding is not just typing. It’s decomposing a problem, making architectural decisions, and thinking through edge cases. Most vibe coding tools I tried generate code in a single block, without breaking the work down into logical steps or providing visibility into what’s been completed and what’s left.&lt;/p&gt;

&lt;p&gt;There’s no task progress dashboard or overview of completed versus pending actions. &lt;strong&gt;You blindly click “Next” without knowing how much is done or left&lt;/strong&gt;. It encourages a passive relationship with the AI, where the developer becomes a reviewer instead of a collaborator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoa2v4fpxxso1a23tx66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoa2v4fpxxso1a23tx66.png" alt="No Task Structure, No Planning" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Testing Comes Too Late (Or Not At All)
&lt;/h3&gt;

&lt;p&gt;AI tools rarely test what they write. If they do, it’s often surface-level. That means more bugs, more manual effort, and more risk. For senior developers shipping production code, these problems make AI feel more like a junior intern than a reliable teammate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh70wxn302unwp8w373j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh70wxn302unwp8w373j.png" alt="Testing Comes Too Late" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Senior Developers Really Need from AI
&lt;/h2&gt;

&lt;p&gt;AI tools shouldn’t just type fast. They should &lt;strong&gt;support the way experienced developers build and maintain software,&lt;/strong&gt; with structure, feedback loops, and domain awareness.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Plan: Align Before You Code
&lt;/h3&gt;

&lt;p&gt;Senior developers usually don’t jump straight into code—they clarify scope, break work into pieces, and align on what’s being built. AI tools should do the same by asking the right questions, clarifying the scope, and creating a task plan with subtasks. This &lt;strong&gt;Plan&lt;/strong&gt; phase helps solve one of the biggest pain points in AI coding: &lt;strong&gt;misalignment&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8ywswipjw3xqq85w0v7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8ywswipjw3xqq85w0v7.png" alt="Plan with AI: Align Before You Code" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Code and Verify: Don’t Just Generate, but Also Test, Fix, Repeat
&lt;/h3&gt;

&lt;p&gt;It’s not enough for code to compile. Every time the AI generates code, it should also verify that it works through unit tests and functional testing for different workflows.&lt;/p&gt;

&lt;p&gt;This process should be automatic and repeatable, like a &lt;strong&gt;Code–Verify Loop&lt;/strong&gt; (as depicted in the picture below):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Generate&lt;/strong&gt; a unit of code from a task&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify&lt;/strong&gt; it with tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug and rewrite&lt;/strong&gt; if it fails, until the code passes all tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summarize&lt;/strong&gt; the outcome and reasoning&lt;/li&gt;
&lt;/ol&gt;
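&lt;p&gt;The four steps above can be sketched as a small loop. The &lt;code&gt;generate&lt;/code&gt; and &lt;code&gt;run_tests&lt;/code&gt; helpers below are hypothetical stand-ins for the agent’s real generation and testing machinery:&lt;/p&gt;

```python
# A minimal sketch of the Code-Verify Loop: generate, test, rewrite on
# failure, and report the outcome. The helpers are hypothetical.

def code_verify_loop(task, generate, run_tests, max_attempts=3):
    """Generate code for a task and rewrite it until the tests pass."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        code = generate(task, feedback)      # 1. generate a unit of code
        passed, feedback = run_tests(code)   # 2. verify it with tests
        if passed:                           # 4. summarize the outcome
            return {"code": code, "attempts": attempt, "status": "passed"}
        # 3. on failure, loop back with the test feedback
    return {"code": code, "attempts": max_attempts, "status": "failed"}


# Toy example: the first draft is buggy, the second one passes.
drafts = iter(["def add(a, b): return a - b",
               "def add(a, b): return a + b"])

def generate(task, feedback):
    return next(drafts)

def run_tests(code):
    ns = {}
    exec(code, ns)                           # load the draft
    ok = ns["add"](2, 3) == 5                # a simple unit test
    return ok, None if ok else "add(2, 3) should be 5"

result = code_verify_loop("implement add", generate, run_tests)
print(result["status"], result["attempts"])  # passed 2
```

&lt;p&gt;The important design point is that the test feedback flows back into the next generation attempt, rather than the developer manually debugging code they didn’t write.&lt;/p&gt;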

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sw5nx6luuxf24v6ft5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1sw5nx6luuxf24v6ft5i.png" alt="Code and Verify: Don’t Just Generate but also Test, Fix, Repeat" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3.  Don’t Just Write Code — Own It
&lt;/h3&gt;

&lt;p&gt;Senior developers build software that evolves with the business. That requires aligning code with business intent, domain terms, and organizational standards. To help senior engineers, AI-generated content should come with context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What was generated, and how does it connect to the goal&lt;/li&gt;
&lt;li&gt;Why was this method or library chosen&lt;/li&gt;
&lt;li&gt;What changed compared to the existing implementation&lt;/li&gt;
&lt;li&gt;What trade-offs were made—performance vs clarity, speed vs flexibility, etc.&lt;/li&gt;
&lt;/ul&gt;
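&lt;p&gt;One way to picture that context is a small record attached to every generated change. The field names and example values here are illustrative, not an existing tool’s API:&lt;/p&gt;

```python
# Hypothetical sketch of the rationale that could accompany each
# AI-generated change; all names and values are illustrative.

from dataclasses import dataclass, field

@dataclass
class GenerationContext:
    what: str          # what was generated and how it connects to the goal
    why: str           # why this method or library was chosen
    changed: str       # what changed versus the existing implementation
    trade_offs: list = field(default_factory=list)

# Example values are hypothetical.
ctx = GenerationContext(
    what="JWT auth middleware for the /login endpoint",
    why="PyJWT chosen because it is already a project dependency",
    changed="replaces the session-cookie check in auth.py",
    trade_offs=["stateless tokens vs. revocation complexity"],
)
print(ctx.changed)
```

&lt;p&gt;Rendered as inline comments, a code diff, or a changelog entry, this record gives reviewers the reasoning they currently have to reverse-engineer.&lt;/p&gt;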

&lt;p&gt;This level of clarity is essential, especially if you want to maintain the code for a longer period. Inline comments, code diffs, and simple changelogs should be part of the output.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.  Secure, sandboxed environments
&lt;/h3&gt;

&lt;p&gt;For enterprise and professional developers, trust in tooling doesn’t come only from output quality; it also comes from how safely that output is generated.&lt;/p&gt;

&lt;p&gt;Yet many AI tools today default to cloud-based processing, often uploading a large part of the code to external servers. For teams handling sensitive data, proprietary code, or regulated environments, this is a dealbreaker.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code should run and test in a secure sandbox.&lt;/li&gt;
&lt;li&gt;Avoid uploading source code to external servers&lt;/li&gt;
&lt;li&gt;Testing should work in isolated environments, enabling experimentation without risk&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Future of AI Coding Is Collaborative
&lt;/h2&gt;

&lt;p&gt;What senior engineers need is not just a typing assistant. They need a &lt;strong&gt;reliable, explainable, and autonomous teammate&lt;/strong&gt;—an agent that plans before coding, tests every step, explains the decisions made, and adapts to the context of the project. This is how real development works. We’re not far from this future. But getting there means rethinking the entire AI coding experience and starting to build AI agents that earn their place on the team. I tried out &lt;a href="https://www.verdent.ai/" rel="noopener noreferrer"&gt;Verdent&lt;/a&gt; and really liked how it plans every single task.&lt;/p&gt;

&lt;h3&gt;
  
  
  I'd love to hear your thoughts:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What do you expect from an AI assistant in your coding workflow?&lt;/li&gt;
&lt;li&gt;Have you ever used AI-generated code that worked but didn’t make sense?&lt;/li&gt;
&lt;li&gt;How do you balance speed and understanding when using AI tools in real projects?&lt;/li&gt;
&lt;li&gt;What’s one feature you wish current AI coding tools had—but don’t?&lt;/li&gt;
&lt;li&gt;Do you think AI tools should take part in architectural decisions? Why or why not?&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>learning</category>
    </item>
    <item>
      <title>Automatic PR creation on GitHub for database schema change</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Tue, 22 Jul 2025 06:46:08 +0000</pubDate>
      <link>https://dev.to/bobur/automatic-pr-creation-on-github-for-database-schema-change-2cpn</link>
      <guid>https://dev.to/bobur/automatic-pr-creation-on-github-for-database-schema-change-2cpn</guid>
      <description>&lt;p&gt;Learn how to update database schemas using prompts with GitHub Copilot and create GitHub pull requests with matching Python model classes.&lt;/p&gt;

&lt;p&gt;Updating a database schema as part of your development process often feels more complicated than it should be. If you’ve ever worked with tools like &lt;a href="https://www.sqlalchemy.org/" rel="noopener noreferrer"&gt;SQLAlchemy&lt;/a&gt;, &lt;a href="https://pypi.org/project/alembic/" rel="noopener noreferrer"&gt;Alembic&lt;/a&gt;, or &lt;a href="https://learn.microsoft.com/en-us/ef/core/" rel="noopener noreferrer"&gt;EF Core&lt;/a&gt;, you probably know the drill: you first update your model classes in code, then generate a migration file, and finally apply those changes to your database. It's not a terrible process—but it's slow, easy to mess up with the correct migration order, and repetitive. You constantly have to switch contexts: from writing model code, to terminal commands, to reviewing raw SQL.&lt;/p&gt;

&lt;p&gt;Wouldn’t it be easier if you could just describe what you want in English and let your tools handle the rest?&lt;/p&gt;

&lt;p&gt;Let’s say we’re working on a simple &lt;a href="https://github.com/Boburmirzo/travel-agency-database-models" rel="noopener noreferrer"&gt;travel agency database model&lt;/a&gt;. We already have a &lt;code&gt;user&lt;/code&gt; table, and now we want to add an &lt;code&gt;address&lt;/code&gt; column to store where each user lives. Instead of manually changing the SQL or model file, imagine simply typing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Add an address field to the user table as a string”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And immediately, the model gets updated (e.g., &lt;code&gt;User&lt;/code&gt; class in Python, Java, C#, or JavaScript, depending on the language you use), the SQL schema is regenerated, and a pull request is opened in your GitHub repo—complete with changes and documentation.&lt;/p&gt;

&lt;p&gt;This is exactly what we’ll walk through using GitHub Copilot chat and GibsonAI, which automates schema changes, model generation, and PR creation.&lt;/p&gt;

&lt;p&gt;You can see the generated sample PR on &lt;a href="https://github.com/Boburmirzo/travel-agency-database-models/pull/1" rel="noopener noreferrer"&gt;this public repo&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;Here’s a short demo that shows the workflow in VS Code:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1103368984" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Database Schema to PR Agent Works
&lt;/h2&gt;

&lt;p&gt;The solution uses &lt;a href="https://docs.gibsonai.com/ai/mcp-server" rel="noopener noreferrer"&gt;GibsonAI MCP Server&lt;/a&gt; for database operations and GitHub CLI for repository management in GitHub Copilot chat.&lt;/p&gt;

&lt;p&gt;Here’s what happens step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Describe your schema change&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example: "Add an &lt;code&gt;address&lt;/code&gt; column to the &lt;code&gt;user&lt;/code&gt; table as a string"&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;GibsonAI applies the schema change&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It updates the database schema using the correct SQL syntax.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;GitHub Copilot generates the Python model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copilot clones the repository, opens a new branch, and commits the changed files behind the scenes. This includes Pydantic or SQLAlchemy files based on your stack, plus the response from the GibsonAI schema update, keeping your code in sync with database changes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;GitHub Copilot opens a GitHub Pull Request&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You get a PR with the updated model, schema changes, and docs.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
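&lt;p&gt;For the example prompt, the model change in the resulting PR amounts to adding one field. A plain-dataclass sketch (standing in for the generated Pydantic/SQLAlchemy model; names and types are illustrative) looks like this:&lt;/p&gt;

```python
# Illustrative stand-in for the generated model: the user model gains
# an address field stored as a string.

from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class User:
    id: int
    name: str
    address: Optional[str] = None  # new field from "add an address column"

field_names = [f.name for f in fields(User)]
print(field_names)  # ['id', 'name', 'address']
```

&lt;p&gt;In the real workflow, GibsonAI applies the matching column change to the database, so the model and the schema stay in sync without a hand-written migration.&lt;/p&gt;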

&lt;p&gt;Let's see an example of how you can make database schema changes and automatically generate corresponding Python model classes with a GitHub PR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Set Up Your Environment
&lt;/h2&gt;

&lt;p&gt;You’ll need the following tools installed before you begin:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visual Studio Code with &lt;a href="https://docs.github.com/en/copilot/about-github-copilot/what-is-github-copilot#getting-access-to-copilot" rel="noopener noreferrer"&gt;GitHub Copilot enabled&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;UV&lt;/a&gt; package manager installed&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cli.github.com/" rel="noopener noreferrer"&gt;GitHub CLI&lt;/a&gt; is installed to connect and manage your GitHub repositories from GitHub Copilot&lt;/li&gt;
&lt;li&gt;A GibsonAI account (free) to use the database MCP server&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;You can also use other editors like Cursor, Windsurf, and &lt;a href="https://docs.gibsonai.com/ai/mcp-server" rel="noopener noreferrer"&gt;connect to the MCP server&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 2: Install and Log In via CLI
&lt;/h2&gt;

&lt;p&gt;Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx &lt;span class="nt"&gt;--from&lt;/span&gt; gibson-cli@latest gibson auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This logs you into the &lt;a href="https://docs.gibsonai.com/reference/cli-quickstart" rel="noopener noreferrer"&gt;GibsonAI CLI&lt;/a&gt; so you can access all the features directly from your terminal and integrate it into your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Enable the MCP Server in VS Code
&lt;/h2&gt;

&lt;p&gt;To use the schema assistant inside VS Code, create a &lt;code&gt;.vscode/&lt;/code&gt; folder in your project and inside it, a file named &lt;code&gt;mcp.json&lt;/code&gt;. This file tells VS Code which MCP server to run.&lt;/p&gt;

&lt;p&gt;Paste the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"inputs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"servers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"gibson"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"--from"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gibson-cli@latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gibson"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"run"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup launches the server locally and connects it with Copilot Chat. If you're using other tools like Cursor or Claude Desktop, you can configure them similarly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Describe the Schema Change
&lt;/h2&gt;

&lt;p&gt;Now, inside Copilot Chat, just say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Add an address column to the user table as a string”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs-git-docs-update-pr-agent-content-gibsonai.vercel.app%2F_next%2Fimage%3Furl%3D%252Fdocs%252Fguides%252Fautomatic-pr-creation-for-database-schema-change%252Fdatabase_schema_change_prompt.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_3opbcJAEcnYNvKfzoTydqjYoeg2i" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs-git-docs-update-pr-agent-content-gibsonai.vercel.app%2F_next%2Fimage%3Furl%3D%252Fdocs%252Fguides%252Fautomatic-pr-creation-for-database-schema-change%252Fdatabase_schema_change_prompt.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_3opbcJAEcnYNvKfzoTydqjYoeg2i" alt="Database schema change prompt using GitHub copilot in VS Code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Behind the scenes, this does four things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Applies the schema change to a real (serverless) database using GibsonAI&lt;/li&gt;
&lt;li&gt;Generates the updated Python model using Pydantic/SQLAlchemy&lt;/li&gt;
&lt;li&gt;Prepares documentation for the changes&lt;/li&gt;
&lt;li&gt;Opens a GitHub Pull Request in your connected repo&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can see an example of this in action:&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/Boburmirzo/travel-agency-database-models/pull/1" rel="noopener noreferrer"&gt;Pull Request Preview&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs-git-docs-update-pr-agent-content-gibsonai.vercel.app%2F_next%2Fimage%3Furl%3D%252Fdocs%252Fguides%252Fautomatic-pr-creation-for-database-schema-change%252Fsample_pr_created_with_prompts.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_3opbcJAEcnYNvKfzoTydqjYoeg2i" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs-git-docs-update-pr-agent-content-gibsonai.vercel.app%2F_next%2Fimage%3Furl%3D%252Fdocs%252Fguides%252Fautomatic-pr-creation-for-database-schema-change%252Fsample_pr_created_with_prompts.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_3opbcJAEcnYNvKfzoTydqjYoeg2i" alt="Sample pull request created using prompts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes This Unique?
&lt;/h2&gt;

&lt;p&gt;Compared to other AI coding assistants, this introduces a &lt;strong&gt;full-stack AI workflow&lt;/strong&gt; that bridges coding, database management, and CI/CD:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Smart Migration Ordering:&lt;/strong&gt; Automatically resolves dependencies and applies changes in the right sequence (e.g., creating tables before adding constraints).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema Validation:&lt;/strong&gt; Test changes on an isolated development database before merging, preventing unexpected production issues.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema Diff &amp;amp; ERD Diagrams:&lt;/strong&gt; Automatically generates schema diffs and ERDs in PRs, making schema change reviews faster and more visual. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema-Only Database Environments:&lt;/strong&gt; Quickly spins up lightweight &lt;a href="https://docs.gibsonai.com/guides/create-schema-only-database-environments" rel="noopener noreferrer"&gt;database environments with just schema&lt;/a&gt; (no production data), perfect for testing with anonymized/mock data.
&lt;/li&gt;
&lt;/ul&gt;
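
&lt;p&gt;Conceptually, smart migration ordering is a topological sort over a dependency graph. Below is a minimal sketch of the idea in Python (the migration names and dependencies are hypothetical; this is not GibsonAI’s actual implementation):&lt;/p&gt;

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical migrations mapped to the migrations they depend on.
# The foreign-key constraint can only be added after both tables exist.
dependencies = {
    "create_users_table": set(),
    "create_orders_table": set(),
    "add_orders_user_fk": {"create_users_table", "create_orders_table"},
}

def migration_order(deps):
    """Return a safe order in which to apply the migrations."""
    return list(TopologicalSorter(deps).static_order())

order = migration_order(dependencies)
# Tables are always created before the constraint that references them.
assert order.index("add_orders_user_fk") > order.index("create_users_table")
```

&lt;p&gt;Cycles in the dependency graph would raise an error here, which is exactly the kind of mistake you want surfaced before any SQL runs.&lt;/p&gt;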

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This setup cuts my dev time by &lt;strong&gt;around 20%&lt;/strong&gt; when handling database schema changes. The approach doesn’t just speed up development; it also reduces errors in migrations, saves time switching between tools, and brings schema evolution closer to how we already think and communicate as developers. The biggest win? Each feature branch gets its own isolated database environment.&lt;/p&gt;

&lt;p&gt;Next time you need to update your database, try typing a prompt instead. Feel free to leave your comments on &lt;a href="https://www.gibsonai.com/discord" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; if you find this AI-assisted workflow useful.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Create schema-only database environments using AI Agents</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Wed, 16 Jul 2025 09:33:20 +0000</pubDate>
      <link>https://dev.to/bobur/create-schema-only-database-environments-using-ai-agents-e5n</link>
      <guid>https://dev.to/bobur/create-schema-only-database-environments-using-ai-agents-e5n</guid>
      <description>&lt;p&gt;Learn how to create schema-only database environments to work with sensitive data and make zero-risk schema changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need multiple databases for development?
&lt;/h2&gt;

&lt;p&gt;Working on a live production database during development is risky. Even a small mistake, like dropping a column or applying an incorrect migration, can lead to downtime, corrupted data, or data loss. That’s why modern teams isolate their environments: you might have separate &lt;strong&gt;dev&lt;/strong&gt;, &lt;strong&gt;staging&lt;/strong&gt;, and &lt;strong&gt;prod&lt;/strong&gt; databases to protect production while still iterating fast.&lt;/p&gt;

&lt;p&gt;By working in an isolated environment, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A safe space to develop new features&lt;/li&gt;
&lt;li&gt;No risk of affecting real user data&lt;/li&gt;
&lt;li&gt;The freedom to experiment with schema changes&lt;/li&gt;
&lt;li&gt;The ability to test integrations without breaking anything critical&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Benefits of syncing only the schema
&lt;/h2&gt;

&lt;p&gt;In most cases, though, when you create a new database environment, you want to sync only the schema, not the data, for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt;: You avoid sharing real customer data across environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safety&lt;/strong&gt;: Developers can’t accidentally query or mutate production data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt;: Schema deployments are lightweight and fast—no data copy or replication overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control&lt;/strong&gt;: You can generate and populate a test environment with randomized or anonymized data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach is ideal for GDPR-compliant workflows, regulated industries, and teams that care about velocity without compromising on security.&lt;/p&gt;
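
&lt;p&gt;Once a schema-only environment exists, you can seed it with synthetic rows instead of production data. A quick sketch of the idea (the &lt;code&gt;users&lt;/code&gt; table and its columns are hypothetical; any faker-style generator would work just as well):&lt;/p&gt;

```python
import random
import string

def mock_email():
    """Generate a clearly fake email address for test data."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{name}@example.com"

def mock_users(n):
    """Generate n synthetic rows for a hypothetical users table."""
    return [{"id": i, "email": mock_email()} for i in range(1, n + 1)]

rows = mock_users(5)
# No real customer data ever leaves production.
assert all(r["email"].endswith("@example.com") for r in rows)
```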

&lt;p&gt;See this short demo of how to create a schema-only database environment using &lt;a href="https://www.gibsonai.com/" rel="noopener noreferrer"&gt;GibsonAI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1101805438" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  What is GibsonAI?
&lt;/h2&gt;

&lt;p&gt;GibsonAI is your AI-powered “database engineer” that lets you design, build, deploy, and scale production-ready serverless databases using natural-language prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2k17w8x180m6lxjcwsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2k17w8x180m6lxjcwsx.png" alt="Create a database schema by chatting with AI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can create a database schema for your new project idea in the &lt;a href="https://app.gibsonai.com/" rel="noopener noreferrer"&gt;GibsonAI App&lt;/a&gt; by chatting with AI. Under the hood, GibsonAI spins up a real MySQL/PostgreSQL database. You can create multiple database environments and update the existing schema in the AI chat. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft85sgbk0p2wse9fb2tkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft85sgbk0p2wse9fb2tkm.png" alt="update the existing schema in AI chat"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How GibsonAI Schema First Approach Works
&lt;/h2&gt;

&lt;p&gt;When you first define your database schema in a GibsonAI project, that schema becomes the source of truth for ongoing development. It is called the &lt;strong&gt;Current Schema,&lt;/strong&gt; and it is a safe environment where you can make updates or run experiments by chatting with AI. Once you are confident, you can then &lt;strong&gt;provision that schema&lt;/strong&gt; to any number of databases, like &lt;code&gt;prod&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, or &lt;code&gt;feat-a&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This way, each database receives the &lt;strong&gt;same structure&lt;/strong&gt;, but &lt;strong&gt;not the same data,&lt;/strong&gt; allowing you to test, build, or ship new features using safe, synthetic, or anonymized datasets. GibsonAI makes working with multiple database environments &lt;strong&gt;safe&lt;/strong&gt;: you can see the difference between the current state of the schema and the deployed one in the same dashboard.&lt;/p&gt;
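
&lt;p&gt;At its core, the diff between the Current schema and a deployed database is a comparison of two table-to-columns mappings. A toy illustration in Python (GibsonAI computes this for you in the dashboard; the tables here are made up):&lt;/p&gt;

```python
def schema_diff(current, deployed):
    """Compare two {table: set(columns)} mappings."""
    added_tables = current.keys() - deployed.keys()
    removed_tables = deployed.keys() - current.keys()
    changed = {
        t: current[t] ^ deployed[t]  # columns present in only one side
        for t in current.keys() & deployed.keys()
        if current[t] != deployed[t]
    }
    return added_tables, removed_tables, changed

current = {"trip": {"id", "city"}, "trip_preferences": {"id", "trip_id", "budget"}}
deployed = {"trip": {"id", "city"}}

added, removed, changed = schema_diff(current, deployed)
# The new trip_preferences table exists only in the Current schema.
assert added == {"trip_preferences"} and not removed and not changed
```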

&lt;h2&gt;
  
  
  How to Create and Work with a Schema-only Environment in GibsonAI
&lt;/h2&gt;

&lt;p&gt;You can create a schema-only database environment in the GibsonAI App. Imagine you're a developer building a new feature for a &lt;strong&gt;travel agency&lt;/strong&gt; app. You're adding a new &lt;code&gt;trip_preferences&lt;/code&gt; table into the existing database, and you want to test this schema without touching the production database and its data.&lt;/p&gt;

&lt;p&gt;Here's how to do it with GibsonAI:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Select Your Project
&lt;/h3&gt;

&lt;p&gt;Open the &lt;a href="https://app.gibsonai.com/project" rel="noopener noreferrer"&gt;GibsonAI App&lt;/a&gt; and pick the project you're actively working on.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Go to Databases and Create a New Database
&lt;/h3&gt;

&lt;p&gt;Head over to the &lt;strong&gt;Databases&lt;/strong&gt; tab. Click &lt;strong&gt;Create Database&lt;/strong&gt;, and name it something like &lt;code&gt;feat-a&lt;/code&gt;. This becomes your feature database environment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A newly created database has no schema initially; you get an empty database. The &lt;strong&gt;Current&lt;/strong&gt; schema, however, keeps the current state of the schema you are working on.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzenqu9qfxu1ndxpb1me.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzenqu9qfxu1ndxpb1me.png" alt="Current and deployed schema diff view in GibsonAI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Update the Current Schema
&lt;/h3&gt;

&lt;p&gt;Use the &lt;strong&gt;Schema Editor&lt;/strong&gt; or chat with GibsonAI to add new tables, columns, or relationships—like &lt;code&gt;trip_preferences&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Your schema updates live inside your &lt;strong&gt;project&lt;/strong&gt;, not in the database itself. This ensures changes stay tracked and reproducible.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Deploy Schema to &lt;code&gt;feat-a&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Click &lt;strong&gt;Deploy to Database&lt;/strong&gt; and choose your newly created &lt;code&gt;feat-a&lt;/code&gt; database. GibsonAI will provision only the latest schema structure into the environment. You can then use the AI chat to generate sample SQL insert queries to populate it with data.&lt;/p&gt;
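
&lt;p&gt;The sample insert queries mentioned above can also be scripted. A hypothetical sketch for seeding the &lt;code&gt;trip_preferences&lt;/code&gt; table (the column names are assumptions; for real applications, prefer parameterized queries over string building):&lt;/p&gt;

```python
def insert_statement(table, row):
    """Build a simple INSERT statement for seeding a dev database."""
    cols = ", ".join(row)
    vals = ", ".join(repr(v) for v in row.values())
    return f"INSERT INTO {table} ({cols}) VALUES ({vals});"

sample = {"trip_id": 1, "seat_choice": "window", "budget": 1500}
sql = insert_statement("trip_preferences", sample)
assert sql == (
    "INSERT INTO trip_preferences (trip_id, seat_choice, budget) "
    "VALUES (1, 'window', 1500);"
)
```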

&lt;h3&gt;
  
  
  5. Deploy to Production When Ready
&lt;/h3&gt;

&lt;p&gt;Once you're confident, deploy the same schema to the &lt;strong&gt;production&lt;/strong&gt; database. GibsonAI doesn’t automatically sync changes between environments, so you control where and when to apply the schema. This gives you the ability to test fearlessly—and ship confidently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam85k9dqvlq9x4gco140.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam85k9dqvlq9x4gco140.png" alt="Deploy Database Schem Changes to Production"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Schema-only database environments in GibsonAI give you a &lt;strong&gt;production-like structure&lt;/strong&gt; without exposing &lt;strong&gt;production data&lt;/strong&gt;. Whether you're fixing a bug, testing a new feature, or validating complex changes, you can work in isolation and still deploy with confidence.&lt;/p&gt;

&lt;p&gt;By separating &lt;strong&gt;structure&lt;/strong&gt; from &lt;strong&gt;data&lt;/strong&gt;, GibsonAI empowers your team to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Move fast without breaking production&lt;/li&gt;
&lt;li&gt;Work safely in parallel&lt;/li&gt;
&lt;li&gt;Stay compliant and secure&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>database</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to convert Images, PDF, Excel sheets, or JSON to a relational database with AI</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Tue, 01 Jul 2025 11:09:56 +0000</pubDate>
      <link>https://dev.to/bobur/how-to-convert-images-pdf-excel-sheets-or-json-to-a-relational-database-with-ai-29a4</link>
      <guid>https://dev.to/bobur/how-to-convert-images-pdf-excel-sheets-or-json-to-a-relational-database-with-ai-29a4</guid>
      <description>&lt;p&gt;Creating a database usually means defining a database schema, setting up a database server, and writing SQL commands/queries. But what if you could skip all that?&lt;/p&gt;

&lt;p&gt;Recently, I needed to recreate a database from an old ER diagram in PNG format. Instead of writing everything manually in SQL, I tried something faster — using &lt;strong&gt;GitHub Copilot&lt;/strong&gt; inside &lt;strong&gt;VS Code&lt;/strong&gt;, along with &lt;a href="https://www.gibsonai.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;GibsonAI&lt;/strong&gt;&lt;/a&gt; to validate and deploy it. It worked surprisingly well. So AI is not just a hyped topic; it genuinely helps with certain tasks. Let me show you how to achieve this.&lt;/p&gt;

&lt;p&gt;Here’s a short demo showing the process:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1097816574" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;br&gt;
This video shows how you can go from a simple diagram or screenshot to a working, deployed database using just prompts and AI tools.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Databases Still Slow Us Down
&lt;/h2&gt;

&lt;p&gt;Many developers want to launch new apps, build MVPs, or add features to an existing product. But they hit friction when it comes to databases.&lt;/p&gt;

&lt;p&gt;It doesn’t matter which language or framework you’re using — eventually, you’ll need a working data backend. And that’s where time gets lost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up the database and designing a schema&lt;/li&gt;
&lt;li&gt;Adjusting the schema as your app changes&lt;/li&gt;
&lt;li&gt;Manually building APIs and ORMs&lt;/li&gt;
&lt;li&gt;No clean way to spin up test environments with real data&lt;/li&gt;
&lt;li&gt;Worrying about migrations and breaking changes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A New Workflow Using AI Tools
&lt;/h2&gt;

&lt;p&gt;In 2025, AI Agents can &lt;a href="https://www.databricks.com/company/newsroom/press-releases/databricks-agrees-acquire-neon-help-developers-deliver-ai-systems" rel="noopener noreferrer"&gt;create and modify databases on their own&lt;/a&gt;. Let's see how AI tools solve the following developer challenges so we can build apps faster:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Challenge&lt;/th&gt;
&lt;th&gt;Solution with AI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;"I don’t want to spend hours setting up DB &amp;amp; APIs"&lt;/td&gt;
&lt;td&gt;One prompt → working backend &amp;amp; API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"My data model keeps changing as I test ideas"&lt;/td&gt;
&lt;td&gt;Schema evolution handled automatically, with mapping data models generated for you&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"I want to connect my app quickly to my data"&lt;/td&gt;
&lt;td&gt;Apps can use live APIs with no extra infra.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"I need a testable environment with live data"&lt;/td&gt;
&lt;td&gt;Hosted database with built-in seed and test data options&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"I don’t want to manage migrations or versioning"&lt;/td&gt;
&lt;td&gt;AI handles that under the hood&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;AI is still NOT replacing software developers, but it is removing friction. You still make the decisions about your schema and relationships. You still write the logic. But you don’t waste time repeating boilerplate steps.&lt;/p&gt;

&lt;p&gt;Next, I will show you how I converted an existing ER diagram image into a working database. You can use the same approach with other data formats.&lt;/p&gt;

&lt;h2&gt;
  
  
  Convert an ER Diagram Image into a Working Database Using GibsonAI, GitHub Copilot in VS Code
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Get an ER Diagram
&lt;/h3&gt;

&lt;p&gt;If you already have an ER diagram as a &lt;code&gt;.png&lt;/code&gt;, you're good to go.&lt;/p&gt;

&lt;p&gt;If not, you can use tools like &lt;a href="https://drawsql.app/templates" rel="noopener noreferrer"&gt;drawsql.app&lt;/a&gt; to find templates for common use cases (like eCommerce, HR systems, or SaaS apps). You can quickly edit the schema, then export it as a PNG, JSON, or raw SQL. For example, let's use this &lt;a href="https://drawsql.app/templates/koel" rel="noopener noreferrer"&gt;music streaming app database diagram template&lt;/a&gt; in the demo.&lt;/p&gt;

&lt;p&gt;That way, you don’t even need to design the schema from scratch — just adapt an existing one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Gather the Prerequisites
&lt;/h3&gt;

&lt;p&gt;Here’s what you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VS Code (with &lt;a href="https://docs.github.com/en/copilot/about-github-copilot/what-is-github-copilot#getting-access-to-copilot" rel="noopener noreferrer"&gt;GitHub Copilot enabled&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;UV&lt;/a&gt; is installed.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.gibsonai.com/" rel="noopener noreferrer"&gt;GibsonAI&lt;/a&gt;: sign up for a free account&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This tool turns your prompt into a complete schema, deploys a serverless database, and gives you a live REST API for managing data.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Set Up GibsonAI CLI and Log In
&lt;/h3&gt;

&lt;p&gt;Before using the GibsonAI &lt;a href="https://github.com/GibsonAI/mcp" rel="noopener noreferrer"&gt;MCP server&lt;/a&gt;, install &lt;a href="https://docs.gibsonai.com/#cli" rel="noopener noreferrer"&gt;GibsonAI’s CLI&lt;/a&gt; and log in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx &lt;span class="nt"&gt;--from&lt;/span&gt; gibson-cli@latest gibson auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This logs you into your GibsonAI account so you can start using all CLI features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Enable MCP Server in VS Code
&lt;/h3&gt;

&lt;p&gt;To use the GibsonAI MCP server inside your VS Code project, you’ll need to add a configuration file. Create a file called &lt;code&gt;mcp.json&lt;/code&gt; inside a &lt;code&gt;.vscode/&lt;/code&gt; folder. This file defines which GibsonAI MCP server to use for this project.&lt;/p&gt;

&lt;p&gt;Copy and paste the following content into the &lt;code&gt;.vscode/mcp.json&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"inputs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"servers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"gibson"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"--from"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gibson-cli@latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gibson"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"run"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;You can also use the GibsonAI MCP server with other MCP clients such as &lt;strong&gt;Cursor&lt;/strong&gt;, &lt;strong&gt;Claude Desktop, Cline, and Windsurf&lt;/strong&gt;. See the &lt;a href="https://docs.gibsonai.com/ai/connect-mcp-clients-to-gibsonai" rel="noopener noreferrer"&gt;instructions for other tools&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 5: Describe the diagram in a GitHub Copilot chat prompt
&lt;/h3&gt;

&lt;p&gt;Open your ER diagram (the PNG file) in the same VS Code project. Open GitHub Copilot Chat in VS Code, &lt;a href="https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode" rel="noopener noreferrer"&gt;switch to Agent mode&lt;/a&gt;, and select an LLM model, such as GPT-4.1 or GPT-4o. You should see the available tools from GibsonAI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj4qrij6renv0ogrxirb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj4qrij6renv0ogrxirb.png" alt="See the available tools from GibsonAI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Write a prompt like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;Create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;new&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GibsonAI&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;database&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;diagram&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub Copilot reads the prompt and starts calling the relevant GibsonAI MCP server tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa05maen62mrc453xgsav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa05maen62mrc453xgsav.png" alt="Create a new GibsonAI database from this ER diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Inspect Your Database Schema in the GibsonAI Dashboard
&lt;/h3&gt;

&lt;p&gt;After the prompt runs successfully, inspect your new schema in the GibsonAI &lt;a href="http://app.gibsonai.com/" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt;. You’ll see everything laid out — tables, columns, relationships — just like in your diagram, with additional improvements, and now fully working and hosted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foclqbd72y1y7ttkbcqe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foclqbd72y1y7ttkbcqe3.png" alt="Inspect Your Database Schema in the GibsonAI Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From there, you can continue evolving your schema:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt further to customize the schema using natural language. Or switch to writing &lt;strong&gt;real SQL queries&lt;/strong&gt; if you prefer — &lt;strong&gt;Studio&lt;/strong&gt; lets you write and run queries directly in your browser. You’ll also see a live ERD diagram update with every change you make.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;In this workflow, I went from an &lt;strong&gt;ER diagram image&lt;/strong&gt; to a &lt;strong&gt;live serverless MySQL database&lt;/strong&gt; — all in just a few minutes.&lt;/p&gt;

&lt;p&gt;What surprised me is that the AI tool didn’t just create the schema — it also generated fully working &lt;strong&gt;CRUD APIs&lt;/strong&gt; for each model. These APIs include things like request validation and response schemas, so you can start using them immediately.&lt;/p&gt;

&lt;p&gt;This is really helpful if you want to interact with your database directly from your app, without having to build and manage all the data models yourself.&lt;/p&gt;
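
&lt;p&gt;To give a feel for what the request validation on such a generated CRUD endpoint does, here is a tiny sketch in Python (the &lt;code&gt;track&lt;/code&gt; model and its fields are hypothetical, not GibsonAI’s actual API):&lt;/p&gt;

```python
REQUIRED_FIELDS = {"title": str, "artist_id": int}  # hypothetical track model

def validate_payload(payload):
    """Mimic the request validation a generated CRUD endpoint performs."""
    errors = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"{field} must be {ftype.__name__}")
    return errors

# A well-formed create request passes; a malformed one is rejected early.
assert validate_payload({"title": "Intro", "artist_id": 7}) == []
assert validate_payload({"title": 42}) == ["title must be str", "missing field: artist_id"]
```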

</description>
      <category>programming</category>
      <category>ai</category>
      <category>database</category>
      <category>coding</category>
    </item>
    <item>
      <title>Create a Database Schema and REST APIs with a Single Prompt Using GitHub Copilot in VS Code</title>
      <dc:creator>Bobur Umurzokov</dc:creator>
      <pubDate>Wed, 11 Jun 2025 09:05:14 +0000</pubDate>
      <link>https://dev.to/bobur/create-a-database-schema-and-rest-apis-with-a-single-prompt-using-github-copilot-in-vs-code-41dj</link>
      <guid>https://dev.to/bobur/create-a-database-schema-and-rest-apis-with-a-single-prompt-using-github-copilot-in-vs-code-41dj</guid>
      <description>&lt;p&gt;Learn how to use &lt;strong&gt;GitHub Copilot&lt;/strong&gt; with one AI prompt to create a fully designed database schema, deploy a serverless MySQL database, and live CRUD APIs — in under 60 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Age of Prompt-Driven Development
&lt;/h2&gt;

&lt;p&gt;A significant shift is underway in the way we develop software. AI agents and prompt-based tools are shaping modern development. As a developer, you don’t want to miss this shift. Knowing how to use these tools puts you ahead. Instead of writing endless boilerplate, you can now describe what you want, and AI will generate code, create your database, connect APIs, and even deploy your app. New tools like &lt;strong&gt;Cursor&lt;/strong&gt;, &lt;strong&gt;Windsurf&lt;/strong&gt;, &lt;strong&gt;Lovable&lt;/strong&gt;, and &lt;strong&gt;Bolt&lt;/strong&gt; are rising fast. You can create stunning apps and websites by chatting with AI.&lt;/p&gt;

&lt;p&gt;Even with all these fancy tools, full-stack apps still need a solid backend, and that means data. Every application needs to work with data. Whether you’re building a blog, a booking platform, or an AI Agent, you’ll need to store and retrieve information. That usually means using a real database like &lt;strong&gt;PostgreSQL&lt;/strong&gt;, &lt;strong&gt;MySQL&lt;/strong&gt;, or &lt;strong&gt;MongoDB&lt;/strong&gt; (unless you’re treating Excel or Google Sheets like a backend, which… we’ve all done once).&lt;/p&gt;

&lt;p&gt;So schema design, database setup, and API generation can’t be skipped. I decided to experiment with automating the process of designing a database schema, running a database, and managing data using just prompts with GitHub Copilot in VS Code.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1092387717" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Every App Needs a Database — It’s Time to Simplify It
&lt;/h2&gt;

&lt;p&gt;Working with databases often involves repetitive work that slows developers down. These are the issues we typically face when setting up a database from scratch:&lt;/p&gt;

&lt;h3&gt;
  
  
  You start with a manual schema setup
&lt;/h3&gt;

&lt;p&gt;You have to create tables, think through relationships, indexes, data types, and naming. You map tables to objects using ORM libraries and build APIs to access that data. It’s easy to miss things or overcomplicate at an early stage.&lt;/p&gt;
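
&lt;p&gt;For a sense of scale, the hand-written mapping boilerplate for just one table looks something like this (an illustrative sketch; the &lt;code&gt;Booking&lt;/code&gt; model is made up), and it multiplies with every table and every schema change:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Booking:
    """Hand-written mapping for a single hypothetical table."""
    id: int
    customer_id: int
    destination: str

    @classmethod
    def from_row(cls, row):
        # One of many small row-to-object adapters you maintain by hand.
        return cls(id=row[0], customer_id=row[1], destination=row[2])

booking = Booking.from_row((1, 42, "Lisbon"))
assert booking.destination == "Lisbon"
```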

&lt;h3&gt;
  
  
  Schema changes are painful
&lt;/h3&gt;

&lt;p&gt;Your app evolves. You rename a column, split a table, or add a new relation. Now you need to write migrations. Update your ORM. Avoid downtime. And hope nothing breaks in staging or production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Every change triggers more boilerplate
&lt;/h3&gt;

&lt;p&gt;Once the schema changes, you usually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update model files&lt;/li&gt;
&lt;li&gt;Fix serializers or DTOs&lt;/li&gt;
&lt;li&gt;Rewrite REST API endpoints or GraphQL resolvers&lt;/li&gt;
&lt;li&gt;Modify test data and fixtures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s a lot of work for just one change.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team coordination becomes tricky
&lt;/h3&gt;

&lt;p&gt;In team projects, syncing schema changes between developers often leads to merge conflicts, broken migrations, or inconsistent environments.&lt;/p&gt;

&lt;p&gt;But now? With the rise of AI code generation tools like GitHub Copilot, you can extend &lt;a href="https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp?tool=vscode" rel="noopener noreferrer"&gt;Copilot Chat with the Model Context Protocol (MCP)&lt;/a&gt; from external providers, and you can create a fully working database schema &lt;strong&gt;with a single prompt&lt;/strong&gt; — right inside &lt;strong&gt;VS Code&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;And it’ll save you hours every week. Let me show you how you can achieve this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s Build: A Travel Agency App Schema
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What You Need
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;VS Code (with &lt;a href="https://docs.github.com/en/copilot/about-github-copilot/what-is-github-copilot#getting-access-to-copilot" rel="noopener noreferrer"&gt;GitHub Copilot enabled&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;UV&lt;/a&gt; is installed.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.gibsonai.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;GibsonAI&lt;/strong&gt;&lt;/a&gt;: sign up for a free account&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This tool turns your prompt into a complete schema, deploys a serverless database, and gives you a live REST API for managing data.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Set Up GibsonAI CLI and Log In
&lt;/h3&gt;

&lt;p&gt;Before using the GibsonAI &lt;a href="https://github.com/GibsonAI/mcp" rel="noopener noreferrer"&gt;MCP server&lt;/a&gt;, install &lt;a href="https://docs.gibsonai.com/#cli" rel="noopener noreferrer"&gt;GibsonAI’s CLI&lt;/a&gt; and log in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx &lt;span class="nt"&gt;--from&lt;/span&gt; gibson-cli@latest gibson auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This logs you into your GibsonAI account so you can start using all CLI features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Enable MCP Server in VS Code
&lt;/h3&gt;

&lt;p&gt;To use the GibsonAI MCP server inside your VS Code project, you’ll need to add a configuration file. Create a file called &lt;code&gt;mcp.json&lt;/code&gt; inside the &lt;code&gt;.vscode/&lt;/code&gt; folder of your project. This file defines which GibsonAI MCP server to use for this project.&lt;/p&gt;

&lt;p&gt;Copy and paste the following content into the &lt;code&gt;.vscode/mcp.json&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"inputs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"servers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"gibson"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"--from"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gibson-cli@latest"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gibson"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"run"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this file is added, GibsonAI tools inside VS Code will connect to the MCP server.&lt;/p&gt;
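&lt;p&gt;If Copilot doesn’t pick the server up, a quick way to rule out a typo is to parse the config yourself and reconstruct the launch command. The snippet below is just a sketch: it mirrors the file contents as a string and checks that the JSON is well-formed:&lt;/p&gt;

```python
import json

# The same configuration that goes into .vscode/mcp.json; parsing it here
# is a quick sanity check that the JSON is well-formed before VS Code loads it.
MCP_CONFIG = """
{
  "inputs": [],
  "servers": {
    "gibson": {
      "type": "stdio",
      "command": "uvx",
      "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"]
    }
  }
}
"""

server = json.loads(MCP_CONFIG)["servers"]["gibson"]
# Reconstruct the exact command VS Code will run to start the MCP server.
launch = " ".join([server["command"]] + server["args"])
print(launch)  # uvx --from gibson-cli@latest gibson mcp run
```

&lt;p&gt;You can also run the printed command in a terminal to confirm the MCP server starts outside VS Code.&lt;/p&gt;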

&lt;h3&gt;
  
  
  Step 3: Describe Your Travel App Schema in a Prompt
&lt;/h3&gt;

&lt;p&gt;Open GitHub Copilot Chat in VS Code, &lt;a href="https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode" rel="noopener noreferrer"&gt;switch to Agent mode&lt;/a&gt;, and select an LLM model such as GPT-4.1 or GPT-4o. You should see the available tools from GibsonAI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu45szyvcdc4esx3ply8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu45szyvcdc4esx3ply8i.png" alt="GibsonAI MCP server tools in VS Code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then enter a prompt like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Create a database for a travel agency. It should include tables for destinations, bookings, users, and reviews. Each user can make bookings and write reviews. Each destination has a name, description, price, and rating.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4u3difi1bumv1lhecjz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4u3difi1bumv1lhecjz.png" alt="Describe Your Travel App Schema in GitHub Copilot Prompt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GibsonAI reads your prompt, creates a new database project, and magically generates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A complete relational schema&lt;/li&gt;
&lt;li&gt;Visual Entity-Relationship Diagram (ERD)&lt;/li&gt;
&lt;li&gt;Proper foreign key constraints&lt;/li&gt;
&lt;li&gt;UUIDs, timestamps, and standard fields&lt;/li&gt;
&lt;li&gt;A clean MySQL or Postgres structure&lt;/li&gt;
&lt;/ul&gt;
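
&lt;p&gt;To make that concrete, here is a rough sketch of the kind of schema you can expect, written as SQLite DDL driven from Python. The table and column names below are my illustration, not the actual DDL GibsonAI emits:&lt;/p&gt;

```python
import sqlite3
import uuid
from datetime import datetime, timezone

# Illustrative only: a rough SQLite approximation of the kind of schema
# GibsonAI generates for the travel-agency prompt (UUID primary keys,
# timestamps, and foreign keys between users, destinations, bookings, reviews).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (
    uuid TEXT PRIMARY KEY,
    name TEXT NOT NULL,
    date_created TEXT NOT NULL
);
CREATE TABLE destination (
    uuid TEXT PRIMARY KEY,
    name TEXT NOT NULL,
    description TEXT,
    price REAL NOT NULL,
    rating REAL,
    date_created TEXT NOT NULL
);
CREATE TABLE booking (
    uuid TEXT PRIMARY KEY,
    user_uuid TEXT NOT NULL REFERENCES user(uuid),
    destination_uuid TEXT NOT NULL REFERENCES destination(uuid),
    date_created TEXT NOT NULL
);
CREATE TABLE review (
    uuid TEXT PRIMARY KEY,
    user_uuid TEXT NOT NULL REFERENCES user(uuid),
    destination_uuid TEXT NOT NULL REFERENCES destination(uuid),
    rating REAL NOT NULL,
    body TEXT,
    date_created TEXT NOT NULL
);
""")

# Insert one sample row the way the generated schema would store it.
now = datetime.now(timezone.utc).isoformat()
conn.execute("INSERT INTO destination VALUES (?, ?, ?, ?, ?, ?)",
             (str(uuid.uuid4()), "Lisbon", "Coastal capital", 899.0, 4.7, now))
```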

&lt;h3&gt;
  
  
  Step 4: Deploy Your Schema and Enable CRUD APIs
&lt;/h3&gt;

&lt;p&gt;Go to the &lt;a href="https://app.gibsonai.com/" rel="noopener noreferrer"&gt;GibsonAI app&lt;/a&gt;, log in, and open your newly created project. Review the generated schema, then click “&lt;strong&gt;Deploy&lt;/strong&gt;” to launch it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ginmoquoqwwidxl9pq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ginmoquoqwwidxl9pq2.png" alt="Deploy Your Schema in GibsonAI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you can deploy the database from Copilot Chat. GibsonAI hosts the serverless MySQL database for you, so you can grab the connection string and point an existing app at it, or use the live CRUD APIs directly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9mvkmuw5viwrksded22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9mvkmuw5viwrksded22.png" alt="Live CRUD APIs with GibsonAI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You now have a working backend without writing a single SQL query. You can plug these APIs directly into your frontend or backend — no need to write REST controllers for typical CRUD operations. GibsonAI also lets you share your database project schema with others.&lt;/p&gt;
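
&lt;p&gt;As a sketch of what calling one of those CRUD endpoints from Python might look like, here is a request built with only the standard library. The base URL and header name below are placeholders, not GibsonAI’s documented API; copy the real values for your project from the GibsonAI app:&lt;/p&gt;

```python
import json
import urllib.request

# Placeholders: replace with the endpoint URL and API key shown for your
# project in the GibsonAI app (the real paths and header names may differ).
API_BASE = "https://api.gibsonai.com/v1/-/destination"
API_KEY = "your-gibson-api-key"

def build_create_request(payload):
    """Build (but do not send) a POST request for a generated CRUD endpoint."""
    return urllib.request.Request(
        API_BASE,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Gibson-API-Key": API_KEY,
        },
        method="POST",
    )

req = build_create_request({"name": "Lisbon", "price": 899.0})
# Sending it would be: urllib.request.urlopen(req)
```

&lt;p&gt;Separating request construction from sending keeps the sketch testable; swap in your real endpoint and call &lt;code&gt;urlopen&lt;/code&gt; when you’re ready.&lt;/p&gt;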

&lt;p&gt;Feel free to clone the travel agency database I created for the demo: &lt;a href="https://app.gibsonai.com/clone/rRZ4wD9HDCdHO" rel="noopener noreferrer"&gt;https://app.gibsonai.com/clone/rRZ4wD9HDCdHO&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Let Copilot Help You Build Around the API
&lt;/h3&gt;

&lt;p&gt;Now that your schema and API are live, use &lt;strong&gt;GitHub Copilot&lt;/strong&gt; to build UI components with React or any other frontend framework. GitHub Copilot + GibsonAI MCP = the fastest way to go from prompt to full-featured app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;The future of development is not about generating more AI code. It’s about writing &lt;strong&gt;fewer, smarter prompts&lt;/strong&gt; and letting AI handle the slow, repetitive, or painful tasks so you can focus fully on innovation. You can already boost your development workflow with GitHub Copilot Agent Mode: paired with the GibsonAI MCP server, it gives agents a powerful set of tools to run SQL queries, create tables, design schemas, import CSV files, and more.&lt;/p&gt;

&lt;p&gt;Give it a try. The next time you start a project, open VS Code, write a prompt, and let the database build itself.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>ai</category>
      <category>database</category>
    </item>
  </channel>
</rss>
