<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tiphis</title>
    <description>The latest articles on DEV Community by Tiphis (@_a1084658c738d4804957c).</description>
    <link>https://dev.to/_a1084658c738d4804957c</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3835152%2F72d83451-b6c4-4ab2-b07f-d4845cad5b5b.jpg</url>
      <title>DEV Community: Tiphis</title>
      <link>https://dev.to/_a1084658c738d4804957c</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_a1084658c738d4804957c"/>
    <language>en</language>
    <item>
      <title>Modern JavaScript: Advanced Patterns for 2026</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Sat, 28 Mar 2026 14:44:17 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/modern-javascript-advanced-patterns-for-2026-1ok1</link>
      <guid>https://dev.to/_a1084658c738d4804957c/modern-javascript-advanced-patterns-for-2026-1ok1</guid>
      <description>&lt;h1&gt;
  
  
  Modern JavaScript: Advanced Patterns for 2026
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Published Date:&lt;/strong&gt; March 28, 2026&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;JavaScript continues to evolve at a rapid pace, with 2026 bringing exciting new capabilities and best practices for developers worldwide. This comprehensive article explores advanced patterns and techniques that every JavaScript developer should master today to build modern, efficient web applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top-Level Await and Async Initialization Patterns
&lt;/h2&gt;

&lt;p&gt;Top-level await, standardized in ES2022, makes async initialization much simpler. It eliminates the async wrapper function (the old IIFE workaround) in many cases, streamlining module architecture.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Modern module initialization&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./config.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;connectToDatabase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;initializeApp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ability to use async/await at the top level has revolutionized how we structure application bootstrapping code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Async Iteration with for await...of
&lt;/h2&gt;

&lt;p&gt;Async iteration patterns have matured significantly over the years. The powerful &lt;code&gt;for await...of&lt;/code&gt; syntax, combined with async generators, enables building robust data processing pipelines that handle async operations seamlessly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;processAllItems&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;processSingleItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern is essential for processing large streams of data efficiently without blocking the event loop.&lt;/p&gt;
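&lt;p&gt;An async generator is a natural producer for this loop. The sketch below is illustrative: &lt;code&gt;fetchPage&lt;/code&gt; and its page shape are made-up stand-ins for any paginated async data source.&lt;/p&gt;

```javascript
// Illustrative async generator: yields items one page at a time.
// fetchPage is a stand-in for any paginated async source (an API,
// a database cursor, a stream); here it just simulates three pages.
async function fetchPage(cursor) {
  const pages = [[1, 2], [3, 4], []];
  return { items: pages[cursor], next: cursor + 1 };
}

async function* allItems() {
  let cursor = 0;
  while (true) {
    const page = await fetchPage(cursor);
    if (page.items.length === 0) break;
    yield* page.items; // hand items to the consumer one at a time
    cursor = page.next;
  }
}

async function main() {
  const seen = [];
  for await (const item of allItems()) {
    seen.push(item);
  }
  console.log(seen.join(",")); // 1,2,3,4
}

main();
```

The consumer never sees pagination: &lt;code&gt;for await...of&lt;/code&gt; pulls items lazily, so memory stays flat no matter how many pages exist.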

&lt;h2&gt;
  
  
  Worker APIs and Thread Isolation
&lt;/h2&gt;

&lt;p&gt;The Web Workers API offers several capabilities worth mastering in 2026, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Named workers for sophisticated multi-worker architectures&lt;/li&gt;
&lt;li&gt;SharedArrayBuffer for efficient zero-copy communication between threads&lt;/li&gt;
&lt;li&gt;Blob workers for isolation without CORS concerns
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Named worker approach&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Blob&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/javascript&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;worker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createObjectURL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;module&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;nameSymbol&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;workerInstance&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These advanced worker patterns enable better performance for CPU-intensive tasks.&lt;/p&gt;
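&lt;p&gt;As a minimal illustration of the SharedArrayBuffer point above, the sketch below uses &lt;code&gt;Atomics&lt;/code&gt; for thread-safe access. The worker side is omitted for brevity; it would receive the same buffer via &lt;code&gt;postMessage&lt;/code&gt; with no copying.&lt;/p&gt;

```javascript
// One block of memory shared (zero-copy) between the main thread
// and any worker that receives this buffer via postMessage.
const shared = new SharedArrayBuffer(8);
const counter = new Int32Array(shared);

// Atomics makes reads and writes safe across threads,
// unlike plain indexed assignment on the typed array.
Atomics.store(counter, 0, 41);
Atomics.add(counter, 0, 1);

console.log(Atomics.load(counter, 0)); // 42
```

Note that browsers only expose &lt;code&gt;SharedArrayBuffer&lt;/code&gt; on cross-origin-isolated pages (COOP/COEP headers), which ties back to the CORS concerns mentioned above.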

&lt;h2&gt;
  
  
  Modern TypeScript Integration
&lt;/h2&gt;

&lt;p&gt;TypeScript remains the de facto standard for typed JavaScript development, and the developer experience keeps improving in 2026. Recent gains include stronger automatic type inference, clearer error messages, enhanced IDE support, and seamless integration with modern bundlers like Vite and esbuild.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Optimization Strategies
&lt;/h2&gt;

&lt;p&gt;Key performance patterns developers should follow in 2026:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Avoid unnecessary allocations&lt;/strong&gt; - Reuse buffer objects when possible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use typed arrays&lt;/strong&gt; - More memory efficient than plain objects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leverage Web Workers&lt;/strong&gt; - Offload CPU-intensive tasks to background threads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tree-shaking modules&lt;/strong&gt; - Ensure proper bundle optimization in production&lt;/li&gt;
&lt;/ol&gt;
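&lt;p&gt;Points 1 and 2 above can be combined: preallocate one typed array and reuse it across calls instead of building a fresh plain array each time. A minimal sketch:&lt;/p&gt;

```javascript
// Reuse one preallocated typed array across calls (points 1 and 2):
// no per-call allocation, and Float64Array is far more compact
// than an array of boxed numbers.
const SIZE = 1024;
const scratch = new Float64Array(SIZE);

function sumSquares(values) {
  const n = Math.min(values.length, SIZE);
  let total = 0;
  for (let i = 0; i !== n; i += 1) {
    scratch[i] = values[i] * values[i]; // write into the reused buffer
    total += scratch[i];
  }
  return total;
}

console.log(sumSquares([1, 2, 3])); // 14
```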

&lt;h2&gt;
  
  
  Security Best Practices
&lt;/h2&gt;

&lt;p&gt;Security is paramount in modern web development. Key practices include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always validate async results before use&lt;/li&gt;
&lt;li&gt;Use Content-Security-Policy headers appropriately&lt;/li&gt;
&lt;li&gt;Implement proper error handling and logging&lt;/li&gt;
&lt;li&gt;Secure all async API requests with validation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;JavaScript in 2026 offers powerful tools and advanced patterns for building modern, high-performance applications. Master these patterns and continue learning to stay ahead in your development career. The ecosystem continues to mature, bringing new possibilities every year.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this valuable, consider tipping: 0x14d55bE3773F3f907309989c731Cf4885AAFF156&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>2026</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The Rise of AI Agent Skills Management: A New Frontier for Developers and Companies</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Thu, 26 Mar 2026 14:11:54 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/the-rise-of-ai-agent-skills-management-a-new-frontier-for-developers-and-companies-2pm4</link>
      <guid>https://dev.to/_a1084658c738d4804957c/the-rise-of-ai-agent-skills-management-a-new-frontier-for-developers-and-companies-2pm4</guid>
      <description>&lt;h1&gt;
  
  
  The Rise of AI Agent Skills Management: A New Frontier for Developers and Companies
&lt;/h1&gt;

&lt;p&gt;The artificial intelligence landscape is evolving at a breathtaking pace. While much attention has been focused on the models themselves—GPT-4o, Claude, Gemini—the real opportunity may lie in what happens next: how we organize, share, and scale the skills that make these AI agents useful. Enter AI agent skills management, a rapidly emerging field that addresses a critical gap in the AI ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Missing in Today's AI Stack?
&lt;/h2&gt;

&lt;p&gt;If you've been working with AI agents, you've likely experienced this frustration: you build a powerful automation workflow, and then you need to replicate it across different projects, teams, or even organizations. The skills your AI agent has learned—how to analyze code, summarize documents, handle customer inquiries—are valuable, but there's been no standardized way to manage them.&lt;/p&gt;

&lt;p&gt;A new open-source project called &lt;strong&gt;Agent Skill Harbor&lt;/strong&gt; aims to change this. It's described as "a GitHub-native skill management platform" that helps teams share AI agent skills, track where they come from, and ensure they're safe to use.&lt;/p&gt;

&lt;p&gt;This is a significant development because it fills a gap that many companies are discovering on their own: the lack of a middle layer for skill management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Now
&lt;/h2&gt;

&lt;p&gt;The demand for AI agents is exploding. Companies are deploying AI to handle everything from customer support to code reviews to data analysis. But as these deployments scale, several challenges emerge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Skill Reusability&lt;/strong&gt;: A skill developed for one use case should be easily reusable in another. Currently, each team is reinventing the wheel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provenance Tracking&lt;/strong&gt;: Where did a particular skill come from? Who developed it? Is it safe to use in production?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Governance and Safety&lt;/strong&gt;: Organizations need to know what skills their AI agents possess, especially in regulated industries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Team Collaboration&lt;/strong&gt;: Different teams within an organization may be building similar skills without knowing it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Agent Skill Harbor addresses these challenges by storing skills as text artifacts in Git repositories, making them naturally versionable and shareable. It uses GitHub Actions and GitHub Pages to publish a static catalog of available skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Income Potential: Three Paths to Monetization
&lt;/h2&gt;

&lt;p&gt;Here's where things get interesting for developers and entrepreneurs. The skills management layer represents a significant opportunity.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Skill Development and Curation
&lt;/h3&gt;

&lt;p&gt;Just as WordPress themes and plugins became an industry, AI agent skills could follow a similar trajectory. Developers who master prompt engineering and workflow design can create and sell specialized skills for specific industries or use cases—legal document analysis, medical coding, financial reporting, software security auditing.&lt;/p&gt;

&lt;p&gt;Companies are already paying consultants substantial sums to build custom AI workflows. Packaging these as reusable skills opens up recurring revenue opportunities.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Platform and Infrastructure
&lt;/h3&gt;

&lt;p&gt;The tools to manage, discover, and deploy AI skills are themselves a market. We're seeing early movers build platforms for skill marketplaces, skill verification systems, and skill monitoring and auditing tools.&lt;/p&gt;

&lt;p&gt;Think of it as the "App Store" moment for AI agents—the infrastructure that enables skills to be discovered, rated, and deployed at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Consulting and Implementation
&lt;/h3&gt;

&lt;p&gt;As companies adopt AI agent technologies, they need help designing their skill architecture: What skills do we need? How should they be organized? How do we ensure compliance and security?&lt;/p&gt;

&lt;p&gt;This represents immediate consulting opportunities for anyone with hands-on AI agent experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: Practical Steps
&lt;/h2&gt;

&lt;p&gt;If you want to capitalize on this trend, here's a roadmap.&lt;/p&gt;

&lt;p&gt;For developers, start building skills for your own use cases using frameworks like LangChain, AutoGen, or CrewAI. Publish your skills on GitHub with clear documentation. Contribute to open-source skill management projects.&lt;/p&gt;

&lt;p&gt;For entrepreneurs, identify vertical-specific skill gaps in healthcare, finance, and legal. Build tools that enhance skill discoverability or quality assurance. Create skill marketplaces for specific industries.&lt;/p&gt;

&lt;p&gt;For companies, inventory your current AI agent deployments. Establish internal skill standards and governance. Consider how to make skills shareable across teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;What makes this trend particularly compelling is the timing. AI models are becoming commoditized—there's diminishing edge in simply using the latest model. The real differentiation comes from what your AI can do, which is fundamentally about skills.&lt;/p&gt;

&lt;p&gt;We're moving from an era where "having AI" was the differentiator to one where "having the right AI skills" matters more. Just as Kubernetes became essential for container orchestration, skills management platforms may become essential for AI orchestration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The emergence of AI agent skills management represents a maturation of the AI industry. As organizations move from experimentation to production, the need to organize, govern, and scale AI capabilities becomes critical. This creates genuine opportunities for developers to build valuable skills, for entrepreneurs to create platforms, and for companies to develop competitive advantages.&lt;/p&gt;

&lt;p&gt;The window to establish yourself in this space is now. Those who start building skills, contributing to open-source projects, and understanding the architecture of skill management will be well-positioned as this market matures. The question is not whether AI agent skills management will become important—it is whether you will be part of building it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this valuable, consider tipping: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>technology</category>
      <category>career</category>
      <category>programming</category>
    </item>
    <item>
      <title>From Zero to RAG System: Building AI-Powered Knowledge Assistants That Generate Real Income</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Thu, 26 Mar 2026 14:10:17 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/from-zero-to-rag-system-building-ai-powered-knowledge-assistants-that-generate-real-income-le4</link>
      <guid>https://dev.to/_a1084658c738d4804957c/from-zero-to-rag-system-building-ai-powered-knowledge-assistants-that-generate-real-income-le4</guid>
      <description>&lt;h1&gt;
  
  
  From Zero to RAG System: Building AI-Powered Knowledge Assistants That Generate Real Income
&lt;/h1&gt;

&lt;p&gt;The landscape of AI development is rapidly evolving, and one of the most promising opportunities for developers today is building Retrieval-Augmented Generation (RAG) systems. These AI-powered knowledge assistants are transforming how individuals and organizations manage, search, and extract value from their personal data. With the right approach, you can turn this technology into a profitable venture.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly Is a RAG System?
&lt;/h2&gt;

&lt;p&gt;At its core, a RAG system combines the power of large language models with your own data. Instead of relying solely on what the AI was trained on, RAG allows you to "feed" it your documents, notes, emails, or any text-based information. When you ask a question, the system first searches for relevant information from your data, then uses that context to generate accurate, personalized responses.&lt;/p&gt;

&lt;p&gt;Think of it as having a highly intelligent assistant who has read everything you've ever written and can instantly find the exact information you need—even if you forgot where you saved it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Is a Lucrative Opportunity
&lt;/h2&gt;

&lt;p&gt;The demand for personal AI knowledge assistants is exploding for several compelling reasons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Everyone Has Unstructured Data Problems&lt;/strong&gt;&lt;br&gt;
The average professional has thousands of documents, notes, and communications scattered across different platforms. Email, Slack, Google Drive, Notion, Obsidian—the list goes on. People are drowning in information but starving for insights. RAG systems solve this fundamental problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Privacy-First Solutions Are in High Demand&lt;/strong&gt;&lt;br&gt;
Unlike cloud-based AI services that require uploading your data to third-party servers, properly built RAG systems can run locally on your own hardware. This makes them attractive to businesses and individuals handling sensitive information—lawyers, doctors, executives, and anyone dealing with confidential data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Low Overhead, High Value&lt;/strong&gt;&lt;br&gt;
Unlike traditional software products, RAG-based applications often have minimal hosting costs since they can run on user hardware or use pay-per-use API models. The marginal cost of serving an additional user can be remarkably low.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Your First RAG System: Practical Steps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Choose Your Tech Stack
&lt;/h3&gt;

&lt;p&gt;For a personal knowledge assistant, consider these popular options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LangChain&lt;/strong&gt; or &lt;strong&gt;LlamaIndex&lt;/strong&gt; for orchestration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chroma&lt;/strong&gt;, &lt;strong&gt;FAISS&lt;/strong&gt;, or &lt;strong&gt;Qdrant&lt;/strong&gt; for vector storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama&lt;/strong&gt; for running models locally&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI&lt;/strong&gt;, &lt;strong&gt;Anthropic&lt;/strong&gt;, or &lt;strong&gt;Google Gemini&lt;/strong&gt; APIs for cloud models&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Data Preparation Is Everything
&lt;/h3&gt;

&lt;p&gt;The quality of your RAG system depends heavily on how you prepare your data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chunk your documents into meaningful segments (typically 500-1500 tokens)&lt;/li&gt;
&lt;li&gt;Use appropriate text splitters (by paragraph, by sentence, or semantic splitting)&lt;/li&gt;
&lt;li&gt;Generate high-quality embeddings using models like &lt;strong&gt;BAAI/bge-small&lt;/strong&gt; or OpenAI's &lt;strong&gt;text-embedding-3-small&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
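&lt;p&gt;As a rough sketch of the chunking step, here is a naive fixed-size splitter. Word counts stand in for tokens; a real pipeline would use the embedding model's own tokenizer and smarter (paragraph or semantic) boundaries:&lt;/p&gt;

```javascript
// Naive fixed-size chunker: splits text into windows of at most
// maxWords words. Word count approximates token count here.
function chunkText(text, maxWords) {
  let words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  while (words.length) {
    chunks.push(words.slice(0, maxWords).join(" "));
    words = words.slice(maxWords);
  }
  return chunks;
}

const doc = "alpha beta gamma delta epsilon zeta";
console.log(chunkText(doc, 2).length); // 3
```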

&lt;h3&gt;
  
  
  Step 3: Implement Retrieval Logic
&lt;/h3&gt;

&lt;p&gt;This is where many developers struggle. Simple keyword matching isn't enough. You'll want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Experiment with different retrieval strategies (similarity search, MMR, filtering)&lt;/li&gt;
&lt;li&gt;Implement hybrid search combining dense and sparse retrieval&lt;/li&gt;
&lt;li&gt;Add reranking to improve result quality&lt;/li&gt;
&lt;/ul&gt;
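&lt;p&gt;The similarity-search baseline in the first bullet can be sketched without any framework. The three-dimensional vectors below are toy stand-ins for real embeddings, which typically have hundreds of dimensions:&lt;/p&gt;

```javascript
// Brute-force cosine-similarity search over stored embeddings.
function dot(a, b) {
  let s = 0;
  for (let i = 0; i !== a.length; i += 1) s += a[i] * b[i];
  return s;
}

function cosine(a, b) {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

function topK(query, docs, k) {
  return docs
    .map(function (d) { return { id: d.id, score: cosine(query, d.vec) }; })
    .sort(function (x, y) { return y.score - x.score; }) // best first
    .slice(0, k);
}

const docs = [
  { id: "notes", vec: [1, 0, 0] },
  { id: "email", vec: [0, 1, 0] },
  { id: "todo",  vec: [0.9, 0.1, 0] },
];
// Logs the two closest ids: "notes" first, then "todo".
console.log(topK([1, 0, 0], docs, 2).map(function (d) { return d.id; }));
```

A vector store like Chroma or Qdrant replaces this linear scan with an approximate nearest-neighbor index, but the scoring idea is the same.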

&lt;h3&gt;
  
  
  Step 4: Optimize for Your Use Case
&lt;/h3&gt;

&lt;p&gt;A RAG system for code documentation looks different from one for personal notes. Consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source attribution&lt;/strong&gt; - always show users where information came from&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confidence scoring&lt;/strong&gt; - let users know when the system is uncertain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-document reasoning&lt;/strong&gt; - ability to synthesize information across sources&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real-World Income Opportunities
&lt;/h2&gt;

&lt;p&gt;Here are proven ways to monetize RAG system expertise:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Custom RAG Solutions for Businesses&lt;/strong&gt;&lt;br&gt;
Companies need help organizing their internal documentation, customer support knowledge bases, and product manuals. A single project can range from $5,000 to $50,000+.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Vertical-Specific Products&lt;/strong&gt;&lt;br&gt;
Build RAG systems for specific industries—legal research assistants, medical literature synthesis, financial report analysis. Niche focus commands premium pricing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Consulting and Implementation Services&lt;/strong&gt;&lt;br&gt;
Help other developers and companies implement RAG systems. Hourly rates of $150-300 are common for experienced practitioners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Open Source + Paid Support&lt;/strong&gt;&lt;br&gt;
Release core tools open source while offering premium features, support subscriptions, or custom enterprise deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Pitfalls to Avoid
&lt;/h2&gt;

&lt;p&gt;From real-world experience shared by practitioners, here are mistakes that can derail your RAG project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ignoring data freshness&lt;/strong&gt; - Static embeddings become outdated; plan for periodic updates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor chunking strategy&lt;/strong&gt; - Too small loses context; too large introduces noise&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neglecting evaluation&lt;/strong&gt; - Without proper metrics, you can't improve systematically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Over-engineering early&lt;/strong&gt; - Start simple, iterate based on real usage&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Future Is Personal AI
&lt;/h2&gt;

&lt;p&gt;We're moving toward a world where everyone will have their own AI assistant that understands their unique context, preferences, and knowledge. RAG systems are the foundation of this transformation.&lt;/p&gt;

&lt;p&gt;The opportunity isn't just in building these systems—it's in understanding how to make them truly useful, private, and reliable. That's where real value is created.&lt;/p&gt;




&lt;p&gt;Whether you're a developer looking to expand your skills or an entrepreneur seeking your next venture, RAG systems represent one of the most accessible and profitable paths in AI today. The barriers to entry are low, the demand is real, and the technology is still evolving rapidly—meaning there's plenty of room for innovation.&lt;/p&gt;

&lt;p&gt;Start small, learn from real users, and iterate. The developers who master this space today will be the leaders of tomorrow's AI infrastructure.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this valuable, consider tipping: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ragsystems</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Litellm Supply Chain Attack: What Developers Need to Know About Package Security</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Wed, 25 Mar 2026 01:11:11 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/the-litellm-supply-chain-attack-what-developers-need-to-know-about-package-security-4jp4</link>
      <guid>https://dev.to/_a1084658c738d4804957c/the-litellm-supply-chain-attack-what-developers-need-to-know-about-package-security-4jp4</guid>
      <description>&lt;h1&gt;
  
  
  The Litellm Supply Chain Attack: What Developers Need to Know About Package Security
&lt;/h1&gt;

&lt;p&gt;The open-source ecosystem has been shaken once again. Versions 1.82.7 and 1.82.8 of Litellm—a popular library used by thousands of companies for interfacing with multiple LLM providers—were discovered to be compromised. This incident serves as a stark reminder that supply chain attacks are not just theoretical threats, but active dangers facing every developer today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Happened?
&lt;/h2&gt;

&lt;p&gt;Litellm, which provides a unified interface for over 100 LLMs, had two versions published to PyPI that contained malicious code. The compromised packages were available for several hours before being discovered and removed. During that window, any developer who ran &lt;code&gt;pip install litellm&lt;/code&gt; or updated their dependencies could have been affected.&lt;/p&gt;

&lt;p&gt;The malicious code was designed to exfiltrate environment variables and API keys—essentially stealing the credentials that developers use to access LLM services like OpenAI, Anthropic, and Azure OpenAI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Now More Than Ever
&lt;/h2&gt;

&lt;p&gt;This attack is significant for several reasons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Litellm is Ubiquitous.&lt;/strong&gt; Litellm has become a standard tool in the AI development community. Startups and enterprises use it to switch between LLM providers without rewriting their code. The library has millions of downloads and is integrated into production systems worldwide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The Target is High-Value.&lt;/strong&gt; The attack specifically targeted API keys for LLM services. These keys represent direct access to expensive AI capabilities, and in some cases, access to proprietary company data being processed by these models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The Timing is Suspicious.&lt;/strong&gt; The attack occurred during a period of intense AI development activity, when many teams are rapidly iterating on LLM integrations. Developers are more likely to update dependencies quickly to access new features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons for the Developer Community
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Pin Your Dependencies
&lt;/h3&gt;

&lt;p&gt;The most straightforward protection is to pin your dependency versions. Instead of using &lt;code&gt;litellm&amp;gt;=1.0&lt;/code&gt;, specify exact versions like &lt;code&gt;litellm==1.82.6&lt;/code&gt;. This ensures you're using a known-good version and prevents automatic updates that could introduce compromised code.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Implement Dependency Scanning in Your CI/CD
&lt;/h3&gt;

&lt;p&gt;Tools like GitHub's Dependabot, Snyk, and OWASP Dependency-Check can identify known vulnerabilities in your dependencies. Consider adding automated checks that verify package integrity against known checksums.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Use Virtual Environments and Maintain an Audit Trail
&lt;/h3&gt;

&lt;p&gt;Always use virtual environments for your projects. This limits the blast radius of any compromise. Additionally, maintain an audit trail of what you install and when.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Monitor for Anomalous Behavior
&lt;/h3&gt;

&lt;p&gt;After any package update, watch for unusual behavior including unexpected network connections, changes in file system activity, and new or unexpected environment variables being set.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Consider Private Package Indexes
&lt;/h3&gt;

&lt;p&gt;For production systems, consider using a private package index that only allows pre-approved versions of packages. This adds a layer of control between your project and the public PyPI ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture: Ecosystem Security
&lt;/h2&gt;

&lt;p&gt;This incident raises broader questions about the security of our software supply chain. Package repository security measures exist on PyPI, but the sheer volume of packages makes comprehensive review impossible. The burden falls on developers to be vigilant.&lt;/p&gt;

&lt;p&gt;Python has signing mechanisms on paper: PEP 427 defined a rarely used wheel signing scheme, and PEP 458 brings TUF-based signing of repository metadata to PyPI. In practice, adoption and client-side verification remain inconsistent, and stronger default verification would help. The open-source community also needs better tools for detecting and communicating about compromised packages quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Should You Do Right Now?
&lt;/h2&gt;

&lt;p&gt;First, audit your dependencies to check if you're using Litellm and which version. Second, if you've updated to versions 1.82.7 or 1.82.8, consider your keys potentially compromised and rotate them immediately. Third, review your update practices and consider implementing a more cautious approach to dependency updates in production. Finally, subscribe to security advisories and follow the Litellm GitHub and security mailing lists for updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Litellm incident is not an isolated event. Supply chain attacks are becoming more sophisticated and targeted. As developers, we must balance the benefits of the open-source ecosystem with appropriate security precautions.&lt;/p&gt;

&lt;p&gt;The good news is that basic hygiene—pinning versions, using security scanning tools, and maintaining audit trails—goes a long way toward protecting your systems. The key is to make these practices routine rather than reactive.&lt;/p&gt;

&lt;p&gt;Stay vigilant, keep your dependencies secure, and remember that in the world of software security, a little paranoia is a good thing. Protect your API keys, audit your dependencies regularly, and never trust package updates blindly.&lt;/p&gt;

&lt;p&gt;If you found this article helpful, consider supporting my work with a tip at: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4. Thank you for reading!&lt;/p&gt;

</description>
      <category>security</category>
      <category>python</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Wine 11 Kernel Rewrite: The Biggest Leap for Linux Gaming in Years</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Wed, 25 Mar 2026 01:04:00 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/wine-11-kernel-rewrite-the-biggest-leap-for-linux-gaming-in-years-174l</link>
      <guid>https://dev.to/_a1084658c738d4804957c/wine-11-kernel-rewrite-the-biggest-leap-for-linux-gaming-in-years-174l</guid>
      <description>&lt;h1&gt;
  
  
  Wine 11 Kernel Rewrite: The Biggest Leap for Linux Gaming in Years
&lt;/h1&gt;

&lt;p&gt;The open-source community has been buzzing with excitement since Wine 11 was announced—a complete rewrite that moves Windows API translation from user space to the Linux kernel. This isn't just an incremental update; it's a fundamental architectural shift that promises massive performance improvements for Linux gamers. With 644 points on Hacker News and 224 comments, developers are clearly paying attention. But beyond the technical marvel, there's real income potential here for developers who position themselves correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes Wine 11 Different
&lt;/h2&gt;

&lt;p&gt;Wine has always been about translating Windows system calls to Linux equivalents, allowing Windows applications to run without Windows. For decades, this happened in user space—meaning every system call had to be intercepted, translated, and executed with significant overhead.&lt;/p&gt;

&lt;p&gt;Wine 11 changes this entirely. By moving the translation layer into the Linux kernel, Wine 11 dramatically reduces the performance penalty that has historically made Linux gaming inferior to Windows. The kernel-level implementation means games can access system resources more directly, reducing latency and improving frame rates.&lt;/p&gt;

&lt;p&gt;According to the XDA report, this rewrite represents "the biggest jump for Linux gaming in years." And they're not exaggerating—this is the most significant change to Wine's architecture since the project began over 30 years ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Income Potential for Developers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Steam Deck and Proton Optimization
&lt;/h3&gt;

&lt;p&gt;The Steam Deck has created a massive market for Linux gaming. Valve's Proton compatibility layer (which builds on Wine) powers the majority of Windows games on the Deck. With Wine 11's kernel-level improvements, there's significant demand for developers who can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optimize game-specific Wine configurations&lt;/li&gt;
&lt;li&gt;Create automated tuning scripts for popular titles&lt;/li&gt;
&lt;li&gt;Develop tools that help game developers ensure Linux compatibility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Companies are actively hiring for these roles, and independent developers can offer optimization services on platforms like Fiverr or through direct partnerships with game studios.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Game Compatibility Tools
&lt;/h3&gt;

&lt;p&gt;The transition to Wine 11 will require many games to be retested and reconfigured. Developers who build compatibility databases, automated testing tools, or configuration wizards will find a willing market. Think of it like the early days of DOSBox optimization—specialized knowledge commands premium rates.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Enterprise Windows-on-Linux Solutions
&lt;/h3&gt;

&lt;p&gt;Beyond gaming, Wine has significant enterprise applications. Companies running legacy Windows software on Linux infrastructure will benefit from Wine 11's performance improvements. There's money in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom Wine builds for specific enterprise applications&lt;/li&gt;
&lt;li&gt;Support contracts for Wine-based infrastructure&lt;/li&gt;
&lt;li&gt;Training and consulting for organizations migrating away from Windows&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Contributing to Wine and Building Reputation
&lt;/h3&gt;

&lt;p&gt;Wine is open-source, and major contributions are recognized industry-wide. Developers who contribute to Wine 11's development can build credibility that leads to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Job opportunities at companies like Valve, Red Hat, or Canonical&lt;/li&gt;
&lt;li&gt;Consulting opportunities from enterprises implementing Wine&lt;/li&gt;
&lt;li&gt;Speaking engagements at conferences like FOSDEM or LinuxCon&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Deep Dive: Why Kernel-Level Matters
&lt;/h2&gt;

&lt;p&gt;To understand why Wine 11 is revolutionary, you need to understand system calls. When a Windows game wants to draw graphics, play audio, or read input, it makes system calls to the Windows kernel. Wine traditionally intercepted these calls in user space, translated them to Linux equivalents, then executed them.&lt;/p&gt;

&lt;p&gt;This translation overhead adds latency to every operation. In a game running at 60 frames per second, that's potentially hundreds of system calls per frame, all going through the translation layer.&lt;/p&gt;
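&lt;p&gt;The cost of crossing into the kernel is easy to feel even from Python. A rough, machine-dependent illustration (not Wine-specific): a call that makes a real system call is noticeably slower than one that stays in user space.&lt;/p&gt;

```python
# Compare a real system call against a pure user-space function call.
import os
import timeit

def user_space() -> int:
    # Never leaves the process: no kernel transition.
    return 42

N = 100_000
syscall_time = timeit.timeit(os.getpid, number=N)  # getpid() enters the kernel
local_time = timeit.timeit(user_space, number=N)

print(f"syscall: {syscall_time:.4f}s, user-space call: {local_time:.4f}s over {N} iterations")
```

Absolute numbers vary wildly by machine; the point is only that the kernel transition itself has a price, which Wine pays on every translated call.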

&lt;p&gt;Wine 11's kernel module eliminates most of this overhead. By operating at the kernel level, Wine 11 can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduce context switching between user space and kernel space&lt;/li&gt;
&lt;li&gt;Batch system calls more efficiently&lt;/li&gt;
&lt;li&gt;Access hardware acceleration more directly&lt;/li&gt;
&lt;li&gt;Maintain state between calls without re-translation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is performance that approaches native Linux applications—something previously impossible with Wine's user-space approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges and Considerations
&lt;/h2&gt;

&lt;p&gt;Wine 11 isn't without hurdles. Kernel-level changes require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Careful security auditing (kernel bugs are more dangerous than user-space bugs)&lt;/li&gt;
&lt;li&gt;Compatibility testing across different Linux distributions&lt;/li&gt;
&lt;li&gt;Potential conflicts with other kernel modules&lt;/li&gt;
&lt;li&gt;Longer development cycles for bug fixes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For developers, this means opportunity. The transition period will require significant testing, debugging, and optimization work—exactly the kind of specialized skills that command premium rates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;If you want to capitalize on Wine 11's release, here's your action plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experiment&lt;/strong&gt;: Install Wine 11 and test your favorite games. Document performance improvements and any issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learn the codebase&lt;/strong&gt;: Wine is open-source. Understanding how the kernel translation works will make you valuable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build tools&lt;/strong&gt;: The community needs compatibility databases, testing frameworks, and configuration tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Specialize&lt;/strong&gt;: Focus on specific game genres or enterprise applications where Wine 11 performance matters most.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Contribute&lt;/strong&gt;: Submit patches, documentation improvements, or bug reports. The Wine community is welcoming to new contributors.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Wine 11 represents a paradigm shift for Linux gaming—and a significant income opportunity for developers who position themselves correctly. Whether you specialize in optimization tools, enterprise consulting, or core development, the demand for Wine expertise is about to increase substantially.&lt;/p&gt;

&lt;p&gt;The question isn't whether there's money to be made in the Wine ecosystem. It's whether you'll be ready to capture it when the opportunity arrives.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this article valuable, consider supporting my work:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ETH/Wallet Address:&lt;/strong&gt; &lt;code&gt;0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/code&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>gaming</category>
      <category>tech</category>
      <category>programming</category>
    </item>
    <item>
      <title>Running 397 Billion Parameters on Your Laptop: The AI Revolution is Local</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Mon, 23 Mar 2026 01:33:29 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/running-397-billion-parameters-on-your-laptop-the-ai-revolution-is-local-4mfd</link>
      <guid>https://dev.to/_a1084658c738d4804957c/running-397-billion-parameters-on-your-laptop-the-ai-revolution-is-local-4mfd</guid>
      <description>&lt;h1&gt;
  
  
  Running 397 Billion Parameters on Your Laptop: The AI Revolution is Local
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;How developers are building profitable AI products without spending a fortune on cloud infrastructure&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;The AI landscape is undergoing a massive shift. What once required expensive GPU clusters and six-figure cloud bills can now run on consumer hardware. Flash-MoE, a groundbreaking open-source project, demonstrates that a 397 billion parameter model can actually run on a laptop. This is not just a technical marvel—it is a goldmine for developers looking to build profitable AI products.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Breaking Point: When Cloud Becomes Optional
&lt;/h2&gt;

&lt;p&gt;For years, the AI development narrative has been dominated by a simple truth: you need big GPU farms to do meaningful work. Companies raised millions to afford A100 clusters. Individual developers were locked out of the frontier AI revolution.&lt;/p&gt;

&lt;p&gt;Flash-MoE changes that equation completely.&lt;/p&gt;

&lt;p&gt;By implementing massive mixture-of-experts (MoE) models with intelligent parameter routing, this project shows that you can run models with hundreds of billions of parameters on surprisingly modest hardware. The key insight is not just compression—it is selective activation. Only the relevant experts in the model fire for any given task, dramatically reducing computational requirements.&lt;/p&gt;
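&lt;p&gt;Selective activation is the heart of the trick. A toy top-k router in pure NumPy (illustrative only; names and shapes are mine, not Flash-MoE's):&lt;/p&gt;

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route input x to the top-k experts by gate score; run only those."""
    scores = x @ gate_w                           # one gate score per expert
    top = np.argsort(scores)[-k:]                 # indices of the k best experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over winners
    # Only k expert matmuls execute; the other experts' parameters stay cold.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))
x = rng.standard_normal(d)
print(moe_forward(x, experts, gate_w).shape)  # → (8,)
```

With k=2 of 16 experts, only 1/8 of the expert parameters touch the compute path for this input, which is the mechanism that lets total parameter counts balloon without the per-token cost ballooning with them.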

&lt;h2&gt;
  
  
  Real Income Opportunities This Enables
&lt;/h2&gt;

&lt;h3&gt;
  
  
  AI Products That Work Offline
&lt;/h3&gt;

&lt;p&gt;Imagine building AI assistants, code completion tools, or document analysis applications that work without internet connectivity. No API costs, no latency, no dependence on third-party services. Users pay premium prices for offline-capable AI tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Privacy-Focused AI Services
&lt;/h3&gt;

&lt;p&gt;Enterprises will pay handsomely for AI that never sends their data to external servers. Local deployment means complete data sovereignty—a selling point for healthcare, legal, and financial industries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Fine-Tuned Models
&lt;/h3&gt;

&lt;p&gt;You can now fine-tune massive models on consumer hardware for specific niches. A developer could build a specialized coding assistant, legal document analyzer, or medical imaging AI without needing a research budget.&lt;/p&gt;

&lt;h3&gt;
  
  
  Edge AI for IoT and Robotics
&lt;/h3&gt;

&lt;p&gt;The techniques enabling laptop-scale AI open doors for deployment on even smaller devices. Smart factories, autonomous systems, and IoT applications become viable markets.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Reality Check
&lt;/h2&gt;

&lt;p&gt;Running a 397B model on a laptop is not trivial. It requires proper memory management through quantization techniques that reduce model size from 800GB to approximately 40GB. It requires efficient MoE routing that only activates relevant expert networks. It requires optimized inference stacks using libraries like vLLM and llama.cpp that have made massive strides in recent months.&lt;/p&gt;
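&lt;p&gt;The arithmetic is worth sketching. These are weights-only figures (real footprints add KV cache and activations), and the 80B active-parameter number below is an illustrative assumption of mine, not a published Flash-MoE spec:&lt;/p&gt;

```python
def weights_gb(params: float, bits_per_param: float) -> float:
    """Weights-only memory footprint in gigabytes."""
    return params * bits_per_param / 8 / 1e9

# Dense FP16 loading of all 397B parameters -- hopeless on a laptop:
print(weights_gb(397e9, 16))  # → 794.0

# MoE changes the picture: only the active experts must be resident.
# With ~80B active parameters quantized to 4 bits, the working set
# lands near the ~40GB figure above:
print(weights_gb(80e9, 4))    # → 40.0
```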

&lt;p&gt;The opportunity here is significant: these are solved problems that most developers have not learned yet. The developers who master local AI deployment will have a massive advantage in the marketplace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Your Local AI Stack
&lt;/h2&gt;

&lt;p&gt;Here is how to get started today: install llama.cpp for efficient local inference (&lt;code&gt;brew install llama.cpp&lt;/code&gt; on macOS), or use vLLM for more optimized serving (&lt;code&gt;pip install vllm&lt;/code&gt;). Then run a quantized model, e.g. &lt;code&gt;./main -m model.bin --temp 0.7 --threads 8&lt;/code&gt; (newer llama.cpp releases name the binary &lt;code&gt;llama-cli&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The ecosystem has matured dramatically. You can now run models like Llama 3, Mistral, and Qwen with consumer hardware, often at speeds comparable to cloud API calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Money Math
&lt;/h2&gt;

&lt;p&gt;Consider the economics. An OpenAI API-based solution might cost $500 to $5000 or more monthly depending on usage. Cloud GPU instances like AWS p4d instances run $1000 or more monthly. Local deployment costs approximately $2000 as a one-time investment for capable hardware.&lt;/p&gt;
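&lt;p&gt;A quick break-even calculation makes the comparison concrete (the figures are the rough estimates above, not vendor quotes):&lt;/p&gt;

```python
def breakeven_months(hardware_cost: float, cloud_monthly: float) -> float:
    """Months until a one-time hardware purchase beats a recurring cloud bill."""
    return hardware_cost / cloud_monthly

# ~$2000 workstation vs. a ~$1000/month GPU instance:
print(breakeven_months(2000, 1000))  # → 2.0

# Even against a modest $500/month API bill, payback comes within months:
print(breakeven_months(2000, 500))   # → 4.0
```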

&lt;p&gt;For a solo developer building a SaaS product, local AI can reduce operational costs by 90% while offering unique privacy advantages that justify premium pricing to customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Your Career
&lt;/h2&gt;

&lt;p&gt;The developers who understand local AI deployment are positioning themselves for higher freelance rates since companies need experts who can deploy AI without cloud dependencies. They will have access to unique product opportunities that competitors cannot easily replicate. Enterprise AI consulting now pays $200 to $500 per hour for professionals with privacy expertise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Flash-MoE represents more than technical achievement—it signals the democratization of frontier AI. The barrier to entry is collapsing. The question is not whether you can afford to build with AI, but whether you are prepared to build AI that does not need the cloud.&lt;/p&gt;

&lt;p&gt;The timing is perfect. The tools are ready. The market is hungry for privacy-first, offline-capable AI solutions. Now is the time to learn these skills before everyone else catches on. The future of AI development is local, and the opportunity is yours to seize.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this valuable, consider tipping: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>devtools</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Getting Started with Tinygrad: The Lean Neural Network Framework Powering AI on Consumer Hardware</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Sun, 22 Mar 2026 00:22:21 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/getting-started-with-tinygrad-the-lean-neural-network-framework-powering-ai-on-consumer-hardware-4948</link>
      <guid>https://dev.to/_a1084658c738d4804957c/getting-started-with-tinygrad-the-lean-neural-network-framework-powering-ai-on-consumer-hardware-4948</guid>
      <description>&lt;h1&gt;
  
  
  Getting Started with Tinygrad: The Lean Neural Network Framework Powering AI on Consumer Hardware
&lt;/h1&gt;

&lt;p&gt;If you have ever felt that PyTorch or TensorFlow are overkill for your side projects, you are not alone. Enter tinygrad, a minimalist deep learning framework that has been making waves in the AI community. Recently, it hit the top of Hacker News with the announcement of Tinybox, an offline AI device packing 778 TFLOPS for just $12,000.&lt;/p&gt;

&lt;p&gt;But here is what really matters for developers: tinygrad is usable right now on your own machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Tinygrad?
&lt;/h2&gt;

&lt;p&gt;Tinygrad is an open-source neural network framework written in Python that aims to be simple and powerful. Created by George Hotz (famous for hacking the original iPhone and PS3), tinygrad breaks down complex neural networks into just three operation types.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ElementwiseOps&lt;/strong&gt;: operations like ADD, MUL, and SQRT that run element-wise&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ReduceOps&lt;/strong&gt;: operations like SUM and MAX that reduce tensor dimensions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MovementOps&lt;/strong&gt;: operations like RESHAPE and PERMUTE that move data around&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This simplicity is its superpower. The entire backend is roughly an order of magnitude simpler than PyTorch's, meaning that when you optimize one kernel, everything gets faster.&lt;/p&gt;
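&lt;p&gt;The three categories map onto familiar array operations. In NumPy terms (tinygrad's Tensor API looks much the same):&lt;/p&gt;

```python
import numpy as np

t = np.array([[1.0, 2.0], [3.0, 4.0]])

elementwise = t * 2        # ElementwiseOp: MUL applied per element
reduced = t.sum()          # ReduceOp: SUM collapses dimensions to a scalar
moved = t.reshape(4)       # MovementOp: RESHAPE rearranges data without copying

print(elementwise.tolist())  # → [[2.0, 4.0], [6.0, 8.0]]
print(reduced)               # → 10.0
print(moved.shape)           # → (4,)
```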

&lt;h2&gt;
  
  
  Why Should Developers Care?
&lt;/h2&gt;

&lt;p&gt;There are several compelling reasons to give tinygrad a try:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It has a PyTorch-like API, so if you know PyTorch, you already know tinygrad.&lt;/li&gt;
&lt;li&gt;It is lightweight: perfect for edge devices, laptops, and quick experiments.&lt;/li&gt;
&lt;li&gt;Fast compilation generates custom kernels for every operation, enabling extreme shape specialization.&lt;/li&gt;
&lt;li&gt;It is already used in production, powering openpilot, the autonomous driving system.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Installing tinygrad is refreshingly simple. You just need to run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;tinygrad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is it. No CUDA drivers are required for basic operations. It works on CPU out of the box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your First Neural Network
&lt;/h2&gt;

&lt;p&gt;Let us build a simple image classifier using tinygrad:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tinygrad&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Tensor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tinygrad.nn&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;

&lt;span class="c1"&gt;# Define a simple CNN
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SimpleCNN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conv1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conv2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Conv2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Linear&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__call__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;conv1&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;relu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max_pool2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;conv2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;relu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max_pool2d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize model
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SimpleCNN&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Dummy input (batch_size=4, channels=3, height=32, width=32)
&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Tensor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Forward pass
&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Output shape: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# (4, 10)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Training Loop
&lt;/h2&gt;

&lt;p&gt;Training is straightforward with a simple loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

&lt;span class="c1"&gt;# Simple training loop
&lt;/span&gt;&lt;span class="n"&gt;optimizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;optim&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.001&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;epoch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Forward pass
&lt;/span&gt;    &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Dummy loss (cross-entropy would go here)
&lt;/span&gt;    &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Backward pass
&lt;/span&gt;    &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;zero_grad&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;backward&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Epoch &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;epoch&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, Loss: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;numpy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running LLMs
&lt;/h2&gt;

&lt;p&gt;One of tinygrad's killer features is the ability to run large language models. Note that tinygrad does not ship a stable &lt;code&gt;Llama&lt;/code&gt; class; LLM support lives in the repository's example scripts. Running the bundled Llama example looks roughly like this (flag names vary between releases, so check the &lt;code&gt;examples/&lt;/code&gt; directory):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/tinygrad/tinygrad
&lt;span class="nb"&gt;cd &lt;/span&gt;tinygrad
python3 examples/llama3.py &lt;span class="nt"&gt;--size&lt;/span&gt; 8B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Performance Comparison
&lt;/h2&gt;

&lt;p&gt;In MLPerf Training benchmarks, Tinybox (running tinygrad) achieved comparable results to systems costing 10x more. The secret sauce is in three key features: lazy tensors for aggressive operation fusion, custom kernels for shape specialization, and a simple backend that is easy to optimize.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Benefit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Lazy tensors&lt;/td&gt;
&lt;td&gt;Aggressive operation fusion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Custom kernels&lt;/td&gt;
&lt;td&gt;Shape specialization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Simple backend&lt;/td&gt;
&lt;td&gt;Easy to optimize&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use Tinygrad
&lt;/h2&gt;

&lt;p&gt;Consider using tinygrad for learning deep learning fundamentals, quick prototyping and experiments, edge deployment on limited hardware, and running LLMs on consumer hardware. Consider PyTorch instead for production-scale training, research with complex architectures, and when you need maximum compatibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hardware Story
&lt;/h2&gt;

&lt;p&gt;The recent Tinybox announcement shows what is possible when you optimize the full stack. The red version at $12,000 delivers 778 TFLOPS FP16. That is enough to run a 70B parameter model locally. For developers who want to experiment with large models without cloud costs, this is genuinely game-changing. The green v2 version pushes to 3086 TFLOPS at $65,000.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Tinygrad represents a refreshing approach to deep learning. Strip away the complexity, focus on the essentials, and let developers ship faster. Whether you are building AI-powered apps, learning neural networks, or want to run LLMs locally without GPU clusters, tinygrad deserves a spot in your toolkit. The project is actively developed with bounties for contributors, and the team at Tiny Corp is hiring. If you are interested in working on the future of efficient AI, contributing to tinygrad on GitHub could be your pathway in.&lt;/p&gt;

&lt;p&gt;Have you tried tinygrad? Let me know your experience in the comments below!&lt;/p&gt;

&lt;p&gt;Tips are welcome at this wallet address: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4.&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>learning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Getting Started with Tinygrad: The Simple AI Framework That's Changing ML</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Sun, 22 Mar 2026 00:20:30 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/getting-started-with-tinygrad-the-simple-ai-framework-thats-changing-ml-29n0</link>
      <guid>https://dev.to/_a1084658c738d4804957c/getting-started-with-tinygrad-the-simple-ai-framework-thats-changing-ml-29n0</guid>
      <description>&lt;h1&gt;
  
  
  Getting Started with Tinygrad: The Simple AI Framework That's Changing ML
&lt;/h1&gt;

&lt;p&gt;The AI landscape is dominated by complex frameworks with steep learning curves and expensive hardware requirements. But a new approach is gaining serious traction among developers. Tinygrad, one of the fastest-growing neural network frameworks, just announced Tinybox, an offline AI device capable of running models with up to 120 billion parameters. Let me show you why this matters and how you can start using it today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes Tinygrad Different?
&lt;/h2&gt;

&lt;p&gt;Most deep learning frameworks are notoriously complex. PyTorch, TensorFlow—these are powerful but come with significant overhead. Tinygrad takes a radically different approach: extreme simplicity.&lt;/p&gt;

&lt;p&gt;The entire framework breaks down complex neural networks into just three OpTypes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ElementwiseOps&lt;/strong&gt; — UnaryOps, BinaryOps, and TernaryOps that operate elementwise (SQRT, LOG2, ADD, MUL, WHERE)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReduceOps&lt;/strong&gt; — Operations on one tensor that return a smaller tensor (SUM, MAX)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MovementOps&lt;/strong&gt; — Virtual ops that move data around without copying (RESHAPE, PERMUTE, EXPAND)&lt;/li&gt;
&lt;/ol&gt;
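&lt;p&gt;To see how far these three categories stretch, here is a softmax written by hand in plain Python (a conceptual sketch, not tinygrad code), with each step labeled by the OpType it corresponds to:&lt;/p&gt;

```python
import math

# Softmax expressed using only the three OpTypes described above.
x = [1.0, 2.0, 3.0]

m = max(x)                              # ReduceOp: MAX
shifted = [v - m for v in x]            # ElementwiseOp: SUB (broadcasting m is a MovementOp: EXPAND)
exps = [math.exp(v) for v in shifted]   # ElementwiseOp: unary exponential
total = sum(exps)                       # ReduceOp: SUM
softmax = [e / total for e in exps]     # ElementwiseOp: DIV

print([round(v, 4) for v in softmax])
```

Every higher-level operation in a network, from attention to layer norm, decomposes into chains like this one.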

&lt;p&gt;This simplicity is not just elegant—it is performant. George Hotz (the founder) claims tinygrad compiles a custom kernel for every operation, enabling extreme shape specialization. All tensors are lazy, so it can aggressively fuse operations for maximum efficiency.&lt;/p&gt;
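&lt;p&gt;To make the lazy-evaluation idea concrete, here is a toy sketch in plain Python (an illustration of the concept only, not tinygrad's actual internals): elementwise ops are queued rather than executed, and all of them are applied in a single fused pass when the tensor is realized.&lt;/p&gt;

```python
# Toy sketch of lazy evaluation with operation fusion.
# Illustrative only; tinygrad's real scheduler is far more sophisticated.

class LazyTensor:
    def __init__(self, data, ops=None):
        self.data = data          # underlying values
        self.ops = ops or []      # pending elementwise ops, not yet run

    def add(self, k):
        return LazyTensor(self.data, self.ops + [lambda v: v + k])

    def mul(self, k):
        return LazyTensor(self.data, self.ops + [lambda v: v * k])

    def realize(self):
        # All queued ops run in one pass ("fused"), so no
        # intermediate list is materialized per operation.
        out = []
        for v in self.data:
            for op in self.ops:
                v = op(v)
            out.append(v)
        return out

t = LazyTensor([1, 2, 3])
result = t.add(1).mul(10).realize()   # one loop over the data
print(result)  # [20, 30, 40]
```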

&lt;h2&gt;
  
  
  Why Developers Should Care
&lt;/h2&gt;

&lt;p&gt;If you are a developer interested in AI/ML, here is why tinygrad deserves your attention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Low barrier to entry&lt;/strong&gt; — If you know Python, you can use tinygrad&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyTorch-like API&lt;/strong&gt; — Familiar syntax with a more refined approach&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardware flexibility&lt;/strong&gt; — Supports NVIDIA, Apple M-series, and custom accelerators&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Active development&lt;/strong&gt; — Over 60k GitHub stars and growing rapidly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-world usage&lt;/strong&gt; — Powers openpilot, the autonomous driving system&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Code Example: Building Your First Neural Network
&lt;/h2&gt;

&lt;p&gt;Let us build a simple neural network to classify handwritten digits (MNIST). Here is how straightforward it is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tinygrad&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Tensor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;

&lt;span class="c1"&gt;# Simple MLP for MNIST classification
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SimpleNet&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;l1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Linear&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;784&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;l2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Linear&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__call__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;flatten&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;l1&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;relu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;l2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;

&lt;span class="c1"&gt;# Training loop
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SimpleNet&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;optim&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;optim&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Adam&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;lr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.001&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;epoch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;train_loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt;
        &lt;span class="n"&gt;pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cross_entropy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;optim&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;zero_grad&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;backward&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;optim&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;step&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compare this to PyTorch—it is remarkably similar but more concise. The key difference is under the hood: tinygrad's lazy evaluation and kernel fusion make it incredibly efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tinybox: AI Hardware for Everyone
&lt;/h2&gt;

&lt;p&gt;The just-announced Tinybox is a game-changer for developers who need serious compute without the cloud:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;FP16 FLOPS&lt;/th&gt;
&lt;th&gt;GPU RAM&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Red V2&lt;/td&gt;
&lt;td&gt;778 TFLOPS&lt;/td&gt;
&lt;td&gt;64 GB&lt;/td&gt;
&lt;td&gt;$12,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Green V2&lt;/td&gt;
&lt;td&gt;3086 TFLOPS&lt;/td&gt;
&lt;td&gt;384 GB&lt;/td&gt;
&lt;td&gt;$65,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exabox (2027)&lt;/td&gt;
&lt;td&gt;~1 EXAFLOP&lt;/td&gt;
&lt;td&gt;25,920 GB&lt;/td&gt;
&lt;td&gt;~$10M&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These are not just marketing numbers—the Red V2 was benchmarked in MLPerf Training 4.0 against computers costing 10x as much and held its own. For developers, this means you can train and run large models locally without relying on cloud APIs. The Green V2 ships now with four RTX PRO 6000 Blackwell GPUs, delivering over 3 petaflops of FP16 compute.&lt;/p&gt;
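&lt;p&gt;A quick back-of-the-envelope calculation on the table's advertised figures (taking the marketing specs at face value) shows the trade-off: the Red V2 actually delivers more FP16 throughput per dollar, while the Green V2 buys absolute scale and GPU RAM:&lt;/p&gt;

```python
# Price-performance from the advertised table figures (FP16 throughput per dollar).
boxes = {
    "Red V2":   {"tflops": 778,  "price": 12_000},
    "Green V2": {"tflops": 3086, "price": 65_000},
}

for name, spec in boxes.items():
    gflops_per_dollar = spec["tflops"] / spec["price"] * 1000
    print(f"{name}: {gflops_per_dollar:.1f} GFLOPS of FP16 per dollar")
```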

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Ready to dive in? Here is how to begin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;tinygrad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then try the examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/tinygrad/tinygrad
&lt;span class="nb"&gt;cd &lt;/span&gt;tinygrad
python examples/llama.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The documentation at docs.tinygrad.org is excellent for learning the API. The Discord community is active and welcoming to newcomers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Tinygrad represents a shift in how we think about AI development. Instead of fighting complex frameworks, developers can focus on what matters: building applications. The Tinybox hardware democratizes access to serious compute, making on-device AI practical for more developers.&lt;/p&gt;

&lt;p&gt;The framework is still in alpha, but it is remarkably stable and already suitable for many real-world use cases. If you have been intimidated by AI/ML, tinygrad offers a gentle on-ramp that lets you start building real AI applications today. It is well worth exploring for your next machine learning project.&lt;/p&gt;

&lt;p&gt;This is an exciting time to be a developer interested in AI, and tinygrad makes it more accessible than ever before. Start experimenting today and see what you can build.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this valuable, consider tipping: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Building Autonomous AI Agents: A Complete Guide with Code Examples</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Sat, 21 Mar 2026 00:52:52 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/building-autonomous-ai-agents-a-complete-guide-with-code-examples-cpd</link>
      <guid>https://dev.to/_a1084658c738d4804957c/building-autonomous-ai-agents-a-complete-guide-with-code-examples-cpd</guid>
      <description>&lt;h1&gt;
  
  
  Building Autonomous AI Agents: A Complete Guide with Code Examples
&lt;/h1&gt;

&lt;p&gt;The era of autonomous AI agents is here, and understanding how to build them is becoming an essential skill for developers. In this comprehensive tutorial, I'll walk you through the process of creating autonomous AI agents from scratch, covering architecture patterns, code examples, best practices, and deployment strategies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Autonomous AI Agent?
&lt;/h2&gt;

&lt;p&gt;An autonomous AI agent is a software system that can perceive its environment, make decisions, and take actions to achieve specific goals without constant human intervention. Unlike traditional software that follows predefined paths, AI agents use large language models (LLMs) to reason about their next steps.&lt;/p&gt;
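&lt;p&gt;That perceive/decide/act cycle can be sketched in a few lines of plain Python. In this deliberately minimal illustration, a hard-coded rule stands in for the LLM reasoning step:&lt;/p&gt;

```python
# Minimal perceive -> decide -> act loop.
# A rule-based policy stands in for the LLM "reasoning" step.

def perceive(env):
    return {"remaining": env["tasks"]}

def decide(observation):
    # An LLM-backed agent would reason here; we use a trivial rule.
    return "work" if observation["remaining"] else "stop"

def act(env, action):
    if action == "work":
        done = env["tasks"].pop(0)
        env["log"].append(done)

env = {"tasks": ["fetch data", "summarize", "report"], "log": []}
while True:
    action = decide(perceive(env))
    if action == "stop":
        break
    act(env, action)

print(env["log"])  # ['fetch data', 'summarize', 'report']
```

The agent we build below follows the same loop, except the decide step is delegated to a language model and the act step dispatches to registered tools.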

&lt;h2&gt;
  
  
  Core Components of an AI Agent
&lt;/h2&gt;

&lt;p&gt;Before diving into code, let's understand the essential components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Agent Core&lt;/strong&gt;: The main decision-making engine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tools&lt;/strong&gt;: Actions the agent can perform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory&lt;/strong&gt;: Stores context and history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planning&lt;/strong&gt;: Breaks down complex tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reflection&lt;/strong&gt;: Evaluates actions and learns&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting Up the Environment
&lt;/h2&gt;

&lt;p&gt;First, let's set up our development environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a virtual environment&lt;/span&gt;
python &lt;span class="nt"&gt;-m&lt;/span&gt; venv ai-agent-env
&lt;span class="nb"&gt;source &lt;/span&gt;ai-agent-env/bin/activate  &lt;span class="c"&gt;# On Windows: ai-agent-env\Scripts\activate&lt;/span&gt;

&lt;span class="c"&gt;# Install required packages&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;openai langchain python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building a Simple AI Agent
&lt;/h2&gt;

&lt;p&gt;Here's a foundational autonomous agent implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dataclasses&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dataclass&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;enum&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Enum&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AgentAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Enum&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Available actions the agent can take&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;THINK&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;think&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;SEARCH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;EXECUTE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;execute&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;RESPOND&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;respond&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="nd"&gt;@dataclass&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Represents a tool the agent can use&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;function&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;callable&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AutonomousAgent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conversation_history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;system_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;system_prompt&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are an autonomous agent that can:
        - Think: Analyze the current situation
        - Search: Look up information
        - Execute: Perform actions using available tools
        - Respond: Provide final answers to the user

        For each task, determine the best sequence of actions.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;register_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Register a new tool the agent can use&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;think&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Use the LLM to reason about the next action&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conversation_history&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;

        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_iterations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Execute a task autonomously&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conversation_history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;iteration&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_iterations&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="c1"&gt;# Agent decides what to do next
&lt;/span&gt;            &lt;span class="n"&gt;thought&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;think&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What should I do next for: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[Iteration &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;iteration&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;] &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="c1"&gt;# Check if task is complete
&lt;/span&gt;            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;complete&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;done&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;thought&lt;/span&gt;

            &lt;span class="c1"&gt;# Here you would implement tool execution logic
&lt;/span&gt;            &lt;span class="c1"&gt;# For now, we demonstrate the thinking process
&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Task could not be completed within iteration limit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding Tools to Your Agent
&lt;/h2&gt;

&lt;p&gt;Tools extend what your agent can do. Here's how to add them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;WebSearchTool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Example tool for web searching&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;web_search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Search the web for information&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# In production, use actual search API
&lt;/span&gt;        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Results for: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CalculatorTool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Example tool for calculations&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;calculate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Perform mathematical calculations&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;expression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# SECURITY: In production, use safe evaluation
&lt;/span&gt;            &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;expression&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__builtins__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}},&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FileManagerTool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Example tool for file operations&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;file_manager&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Read, write, or manipulate files&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;base_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./agent_workspace/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
        &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;makedirs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;base_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;filepath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;base_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;write&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Written to &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Unknown action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
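&lt;p&gt;The &lt;code&gt;CalculatorTool&lt;/code&gt; above flags its &lt;code&gt;eval&lt;/code&gt; call as unsafe, and stripping &lt;code&gt;__builtins__&lt;/code&gt; is not enough: &lt;code&gt;eval&lt;/code&gt; sandboxes are routinely escaped. One way to harden it, sketched here as a standalone helper (the &lt;code&gt;safe_eval&lt;/code&gt; name is illustrative, not part of the tool above), is to walk the expression's AST and allow only whitelisted arithmetic operators:&lt;br&gt;
&lt;/p&gt;

```python
import ast
import operator

# Whitelist of arithmetic operators; anything else (calls, attribute
# access, subscripts) is rejected outright.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str):
    """Evaluate a purely arithmetic expression without using eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Disallowed expression: {ast.dump(node)}")

    return _eval(ast.parse(expression, mode="eval"))
```

&lt;p&gt;Because the walker only recognizes numeric constants and the listed operators, an input like &lt;code&gt;__import__('os')&lt;/code&gt; parses to a &lt;code&gt;Call&lt;/code&gt; node and raises &lt;code&gt;ValueError&lt;/code&gt; instead of executing.&lt;/p&gt;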



&lt;h2&gt;
  
  
  Implementing Memory and Context
&lt;/h2&gt;

&lt;p&gt;Autonomous agents need memory to maintain context across steps. A common design pairs a bounded short-term buffer with a long-term store:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;collections&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;deque&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AgentMemory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_short_term&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_long_term&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;deque&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;maxlen&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;max_short_term&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;long_term&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;important_memories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;experience&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Add a new experience to memory&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;experience&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;

        &lt;span class="c1"&gt;# Transfer important memories to long-term
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;maxlen&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;long_term&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_relevant&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Retrieve relevant memories (simplified)&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;recent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;recent&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mark_important&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;memory_index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Mark a memory as important for long-term retention&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;memory_index&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;important&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;short_term&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;memory_index&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="n"&gt;important&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;importance_reason&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;important_memories&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;important&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
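&lt;p&gt;The &lt;code&gt;get_relevant&lt;/code&gt; method above simply returns the most recent entries. A fuller implementation ranks memories by relevance to the query; here is a minimal keyword-overlap sketch (the function names are illustrative, and production agents typically use embedding similarity instead):&lt;br&gt;
&lt;/p&gt;

```python
def score_relevance(query: str, memory_text: str) -> float:
    """Fraction of query words that also appear in the memory text."""
    query_words = set(query.lower().split())
    if not query_words:
        return 0.0
    memory_words = set(memory_text.lower().split())
    return len(query_words & memory_words) / len(query_words)

def rank_memories(memories: list, query: str, limit: int = 5) -> list:
    """Return the `limit` memories scoring highest against the query."""
    ranked = sorted(
        memories,
        key=lambda m: score_relevance(query, str(m.get("content", ""))),
        reverse=True,
    )
    return ranked[:limit]
```

&lt;p&gt;Dropping &lt;code&gt;rank_memories&lt;/code&gt; into &lt;code&gt;get_relevant&lt;/code&gt; in place of the recency slice makes retrieval query-aware without changing the class's interface.&lt;/p&gt;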



&lt;h2&gt;
  
  
  Building a ReAct Agent (Reasoning + Acting)
&lt;/h2&gt;

&lt;p&gt;The ReAct pattern interleaves explicit reasoning with tool use: the agent thinks about the task, chooses an action, observes the result, and repeats until it can answer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ReActAgent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Agent using Reasoning + Acting pattern&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;examples&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s 15 + 27?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thought&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I need to calculate 15 + 27&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;calculate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;action_input&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;15 + 27&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;observation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;final_thought&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The answer is 42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Execute task using ReAct pattern&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_steps&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="c1"&gt;# 1. Thought: Analyze the situation
&lt;/span&gt;            &lt;span class="n"&gt;thought_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_build_thought_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;thought&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_get_completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thought_prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;step&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thought&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

            &lt;span class="c1"&gt;# Check if we're done
&lt;/span&gt;            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_is_complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_get_final_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="c1"&gt;# 2. Action: Decide what to do
&lt;/span&gt;            &lt;span class="n"&gt;action_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Based on: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;What action should I take?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_get_completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;action_prompt&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

            &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;

            &lt;span class="c1"&gt;# 3. Execute action and observe
&lt;/span&gt;            &lt;span class="n"&gt;observation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_execute_action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;observation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;observation&lt;/span&gt;

            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Step &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;step&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;... → &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Max steps reached&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_build_thought_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Task: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Previous steps:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- Thought: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;thought&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;action&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  Action: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;action&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;observation&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  Result: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;observation&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;What should I do next? Think step by step.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_get_completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;
            &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_execute_action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Parse and execute the action
&lt;/span&gt;        &lt;span class="c1"&gt;# This is a simplified version
&lt;/span&gt;        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action executed successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_is_complete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;complete_indicators&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;final answer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;complete&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;finished&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;done&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;indicator&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;thought&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;indicator&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;complete_indicators&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_get_final_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thought&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Task completed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Best Practices for Building AI Agents
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Define Clear Boundaries
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Guardrails&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Add safety guardrails to your agent&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;allowed_domains&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;general&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;productivity&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;blocked_patterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;harmful&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;illegal&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;malicious&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;validate_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;tuple&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Validate if request is allowed&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;request_lower&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;blocked&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;blocked_patterns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;blocked&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;request_lower&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Request contains blocked content: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;blocked&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Request allowed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
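As a quick sanity check, the validator above can be exercised like this. This is a minimal sketch that restates the class so it runs standalone; note the substring matching is a coarse heuristic, not a real content filter:

```python
# Usage sketch for the Guardrails validator (substring matching is intentionally naive).

class Guardrails:
    """Add safety guardrails to your agent"""

    def __init__(self):
        self.allowed_domains = ["general", "productivity"]
        self.blocked_patterns = ["harmful", "illegal", "malicious"]

    def validate_request(self, request: str) -> tuple[bool, str]:
        """Validate if request is allowed"""
        request_lower = request.lower()
        # Reject as soon as any blocked keyword appears anywhere in the request
        for blocked in self.blocked_patterns:
            if blocked in request_lower:
                return False, f"Request contains blocked content: {blocked}"
        return True, "Request allowed"

guard = Guardrails()
ok, reason = guard.validate_request("Summarize my meeting notes")
blocked, why = guard.validate_request("Write malicious code")
```

In practice you would layer this with a moderation API or classifier; the keyword list only catches the most obvious cases.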



&lt;h3&gt;
  
  
  2. Implement Proper Error Handling
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;functools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;wraps&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;agent_error_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Decorator for agent error handling&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="nd"&gt;@wraps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;wrapper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent error: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fallback_action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Report error to user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;wrapper&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
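Applied to a tool function, the decorator above turns an unhandled exception into a structured error payload the agent loop can inspect instead of crashing. A small self-contained sketch (the `flaky_tool` function is hypothetical, just to trigger the error path):

```python
import logging
from functools import wraps

def agent_error_handler(func):
    """Decorator for agent error handling (as defined above)"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            logging.error(f"Agent error: {str(e)}")
            # Return a structured payload instead of propagating the exception
            return {
                "status": "error",
                "message": str(e),
                "fallback_action": "Report error to user"
            }
    return wrapper

@agent_error_handler
def flaky_tool(query: str) -> str:
    # Hypothetical tool that fails on empty input
    if not query:
        raise ValueError("empty query")
    return f"results for {query!r}"

result = flaky_tool("")  # error path: returns the structured dict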
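Applied to a tool function, the decorator above turns an unhandled exception into a structured error payload the agent loop can inspect instead of crashing. A small self-contained sketch (the `flaky_tool` function is hypothetical, just to trigger the error path):

```python
import logging
from functools import wraps

def agent_error_handler(func):
    """Decorator for agent error handling (as defined above)"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            logging.error(f"Agent error: {str(e)}")
            # Return a structured payload instead of propagating the exception
            return {
                "status": "error",
                "message": str(e),
                "fallback_action": "Report error to user"
            }
    return wrapper

@agent_error_handler
def flaky_tool(query: str) -> str:
    # Hypothetical tool that fails on empty input
    if not query:
        raise ValueError("empty query")
    return f"results for {query!r}"

result = flaky_tool("")  # error path: returns the structured dict
```

The caller can then branch on `result["status"]` rather than wrapping every tool invocation in its own try/except.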



&lt;h3&gt;
  
  
  3. Add Rate Limiting
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;threading&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Lock&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;RateLimiter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_calls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;time_window&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;max_calls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;max_calls&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;time_window&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time_window&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Lock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;allow_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;time_window&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;max_calls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
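The limiter above implements a sliding window: timestamps older than `time_window` are dropped on each check, and a request is admitted only while fewer than `max_calls` remain. A self-contained sketch showing the expected admit/deny pattern:

```python
import time
from threading import Lock

class RateLimiter:
    def __init__(self, max_calls: int, time_window: int):
        self.max_calls = max_calls
        self.time_window = time_window
        self.calls = []
        self.lock = Lock()

    def allow_request(self) -> bool:
        with self.lock:
            now = time.time()
            # Keep only timestamps still inside the sliding window
            self.calls = [t for t in self.calls if now - t < self.time_window]
            if len(self.calls) < self.max_calls:
                self.calls.append(now)
                return True
            return False

# Allow at most 2 calls per 60-second window
limiter = RateLimiter(max_calls=2, time_window=60)
results = [limiter.allow_request() for _ in range(3)]
```

Gate each LLM call on `allow_request()` and back off (or queue) when it returns `False`; the `Lock` makes the check-and-append atomic across threads.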



&lt;h2&gt;
  
  
  Deployment Tips
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Use Environment Variables
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# Load from .env file
&lt;/span&gt;
&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AutonomousAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Containerize Your Agent
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11-slim&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; OPENAI_API_KEY=${OPENAI_API_KEY}&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["python", "agent.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Monitor and Log
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;

&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basicConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFO&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;%(asctime)s - %(name)s - %(levelname)s - %(message)s&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getLogger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ai-agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent initialized successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced: Multi-Agent Systems
&lt;/h2&gt;

&lt;p&gt;For complex tasks, consider a multi-agent architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AgentTeam&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;AutonomousAgent&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;coordinate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Coordinate multiple agents to solve a task&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="c1"&gt;# Simple coordination: route to appropriate agent
&lt;/span&gt;        &lt;span class="c1"&gt;# In production, use more sophisticated routing
&lt;/span&gt;        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;role&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No suitable agent found for task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
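To see the keyword routing in action without real LLM calls, the team can be driven with a stub in place of `AutonomousAgent`. The `StubAgent` class here is a hypothetical stand-in, not part of the architecture above:

```python
class StubAgent:
    """Hypothetical stand-in for AutonomousAgent, so routing runs offline."""
    def __init__(self, name: str):
        self.name = name

    def run(self, task: str) -> str:
        return f"[{self.name}] handled: {task}"

class AgentTeam:
    def __init__(self):
        self.agents = {}

    def add_agent(self, role: str, agent):
        self.agents[role] = agent

    def coordinate(self, task: str) -> str:
        """Coordinate multiple agents to solve a task"""
        # Simple keyword routing: first role mentioned in the task wins
        for role, agent in self.agents.items():
            if role in task.lower():
                return agent.run(task)
        return "No suitable agent found for task"

team = AgentTeam()
team.add_agent("research", StubAgent("researcher"))
team.add_agent("write", StubAgent("writer"))

routed = team.coordinate("research the history of rate limiting")
unrouted = team.coordinate("play chess")
```

As the comment in the class notes, production systems usually replace this substring match with an LLM-based router or explicit intent classification.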



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building autonomous AI agents is an exciting field that combines LLMs with structured reasoning, tools, and memory systems. Start simple, iterate quickly, and always prioritize safety and reliability.&lt;/p&gt;

&lt;p&gt;Key takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start with a clear architecture&lt;/strong&gt; - Define the core components early&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add tools strategically&lt;/strong&gt; - Extend capabilities as needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement memory&lt;/strong&gt; - Maintain context across interactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add guardrails&lt;/strong&gt; - Safety should be built-in from the start&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor everything&lt;/strong&gt; - You can't improve what you don't measure&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Want to Support This Work?
&lt;/h3&gt;

&lt;p&gt;If you found this tutorial helpful, you can support my work with cryptocurrency:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ETH/Base Address:&lt;/strong&gt; &lt;code&gt;0x742d35Cc6634C0532925a3b844Bc9e7595f0Eb1B&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Happy building! Let me know if you have questions in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>tutorial</category>
      <category>agents</category>
    </item>
    <item>
      <title>OpenCode: The Open Source AI Coding Agent That's Changing the Game</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Sat, 21 Mar 2026 00:22:52 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/opencode-the-open-source-ai-coding-agent-thats-changing-the-game-930</link>
      <guid>https://dev.to/_a1084658c738d4804957c/opencode-the-open-source-ai-coding-agent-thats-changing-the-game-930</guid>
      <description>&lt;h1&gt;
  
  
  OpenCode: The Open Source AI Coding Agent That's Changing the Game
&lt;/h1&gt;

&lt;p&gt;The AI coding assistant landscape just got more interesting. &lt;strong&gt;OpenCode&lt;/strong&gt;, a new open source AI coding agent, has been making waves on Hacker News, racking up 326 points and 157 comments from the developer community. This isn't just another GitHub Copilot clone—it's a fully open alternative that's worth your attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes OpenCode Different?
&lt;/h2&gt;

&lt;p&gt;Unlike proprietary AI coding tools, OpenCode is completely open source. This means developers can inspect, modify, and contribute to the codebase. The project aims to provide an AI-powered coding assistant that rivals commercial offerings while maintaining full transparency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open Source Foundation&lt;/strong&gt;: Full access to the source code on GitHub&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Powered Assistance&lt;/strong&gt;: Context-aware code suggestions and completions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Hosted Option&lt;/strong&gt;: Run your own instance for privacy and customization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-language Support&lt;/strong&gt;: Works across popular programming languages&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why This Matters for Developers
&lt;/h2&gt;

&lt;p&gt;The AI coding assistant market has been dominated by closed-source solutions. While tools like GitHub Copilot and Cursor have proven incredibly useful, they come with significant trade-offs: subscription costs, data privacy concerns, and vendor lock-in.&lt;/p&gt;

&lt;p&gt;OpenCode represents a shift toward democratizing AI-assisted development. Here's why you should care:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Privacy and Data Control
&lt;/h3&gt;

&lt;p&gt;When you use closed-source AI coding tools, your code often gets processed by third-party servers. OpenCode allows you to self-host, keeping your code within your own infrastructure. For developers working on proprietary projects or sensitive codebases, this is a game-changer.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Customization and Extensibility
&lt;/h3&gt;

&lt;p&gt;Because it's open source, you can customize the AI's behavior to match your workflow. Need it to follow specific coding conventions? Want to fine-tune the model on your company's code style? With full access to the source, both are within reach.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Community-Driven Innovation
&lt;/h3&gt;

&lt;p&gt;With 157 comments on its Hacker News post, the developer community is clearly engaged. This means rapid improvements, bug fixes, and feature additions driven by real developer needs—not just a product team's roadmap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Actionable Insights for Developers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Getting Started with OpenCode
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Visit the official site&lt;/strong&gt;: Head to &lt;a href="https://opencode.ai/" rel="noopener noreferrer"&gt;opencode.ai&lt;/a&gt; to explore the project&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check the GitHub repository&lt;/strong&gt;: Review the source code and documentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Try the self-hosted version&lt;/strong&gt;: Set up your own instance for maximum privacy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contribute&lt;/strong&gt;: Whether you're a coder or not, feedback and issues help improve the project&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Integrating AI Coding Assistants into Your Workflow
&lt;/h3&gt;

&lt;p&gt;Regardless of which tool you choose, here are best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Always review AI-generated code&lt;/strong&gt;: Don't blindly accept suggestions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use as a learning tool&lt;/strong&gt;: AI can help discover new APIs and patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Combine with traditional debugging&lt;/strong&gt;: AI speeds up the first draft, but verifying correctness still takes conventional debugging and testing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain your core skills&lt;/strong&gt;: Don't let AI replace fundamental programming knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;OpenCode is part of a larger movement toward open source AI tools. As the developer community becomes more conscious of data privacy and vendor lock-in, we can expect to see more projects like this emerge.&lt;/p&gt;

&lt;p&gt;The timing is significant. With recent advances in large language models, building a competitive AI coding assistant is more feasible than ever. The key differentiator will be community support and transparency—not just the quality of suggestions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;If you're intrigued by OpenCode, here are some steps to take:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Explore the project&lt;/strong&gt; and see how it compares to tools you currently use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Give feedback&lt;/strong&gt; to the maintainers—your input shapes the roadmap&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider self-hosting&lt;/strong&gt; if privacy is a priority for your projects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spread the word&lt;/strong&gt; to help grow the open source AI coding community&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The future of AI-assisted development doesn't have to mean surrendering control to big tech companies. Tools like OpenCode prove that the community can build competitive alternatives—and the conversation on Hacker News shows developers are ready for it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What do you think about open source AI coding assistants? Would you switch from your current tool? Share your thoughts in the comments!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips appreciated&lt;/strong&gt;: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tech</category>
      <category>development</category>
    </item>
    <item>
      <title>We Rewrote Our Rust WASM Parser in TypeScript — And It Got 3x Faster</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Fri, 20 Mar 2026 23:42:12 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/we-rewrote-our-rust-wasm-parser-in-typescript-and-it-got-3x-faster-56kb</link>
      <guid>https://dev.to/_a1084658c738d4804957c/we-rewrote-our-rust-wasm-parser-in-typescript-and-it-got-3x-faster-56kb</guid>
      <description>&lt;h1&gt;
  
  
  We Rewrote Our Rust WASM Parser in TypeScript — And It Got 3x Faster
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;When the OpenUI team decided to port their Rust WebAssembly parser to TypeScript, everyone thought they were crazy. The result? A 3x performance improvement that challenges common assumptions about ahead-of-time compiled languages versus JIT-compiled ones.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Unexpected Journey
&lt;/h2&gt;

&lt;p&gt;In the world of WebAssembly, Rust has long been the gold standard for high-performance code. When the OpenUI team originally built their parser in Rust and compiled it to WASM, they expected blazing-fast performance. Instead, they encountered something unexpected.&lt;/p&gt;

&lt;p&gt;"The original Rust implementation was correct, but it had several performance bottlenecks that we struggled to optimize away," the team explained. "After months of trying to squeeze more performance out of the Rust code, we made a radical decision: rewrite everything in TypeScript."&lt;/p&gt;

&lt;p&gt;This wasn't a decision made lightly. The development team spent significant time researching and experimenting with different approaches. They benchmarked their Rust implementation, profiled every function, and tried numerous optimization techniques—from manually inlining hot paths to restructuring data layouts for better cache locality. Nothing seemed to work.&lt;/p&gt;

&lt;p&gt;What they discovered challenged their assumptions about language performance in the WebAssembly ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Surprising Results
&lt;/h2&gt;

&lt;p&gt;The results were nothing short of remarkable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;3x faster parsing&lt;/strong&gt; in the TypeScript version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smaller bundle size&lt;/strong&gt; despite TypeScript's reputation for code bloat
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better debugging experience&lt;/strong&gt; with source maps and familiar tooling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easier maintenance&lt;/strong&gt; for the team&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster iteration cycles&lt;/strong&gt; during development&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"We went from struggling with cryptic Rust compiler errors to having a codebase that our entire team could contribute to confidently."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This outcome might seem impossible given the traditional wisdom that Rust's compile-time guarantees should produce faster code. But the team identified several key factors that contributed to their success. The performance gains weren't accidental—they came from understanding the nuances of how WebAssembly actually executes in different environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why TypeScript Won
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Modern JavaScript Engine Optimizations
&lt;/h3&gt;

&lt;p&gt;V8 (Chrome's JavaScript engine) and other modern JS runtimes have received years of optimization specifically for WebAssembly and JavaScript execution. The team leveraged:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inline caching&lt;/strong&gt; capabilities that Rust's static nature couldn't exploit&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JIT compilation&lt;/strong&gt; advantages that adapt to actual runtime patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better memory management&lt;/strong&gt; in real-world usage scenarios&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SIMD-style optimizations&lt;/strong&gt; that modern JS engines can apply to hot numeric loops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight was that while Rust produces efficient WASM code, the surrounding JavaScript ecosystem has evolved to handle common patterns incredibly efficiently. Modern JIT compilers can often outpace statically compiled code for typical workloads because they can make runtime-informed decisions.&lt;/p&gt;
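
&lt;p&gt;To make the inline-caching point concrete, here is a small illustrative sketch (not from the OpenUI codebase): when every object flowing through a hot call site shares one property layout, engines like V8 can keep that site monomorphic and serve each property access from its inline cache.&lt;/p&gt;

```typescript
// Illustrative sketch: JIT engines such as V8 key inline caches on an
// object's "shape" (hidden class). Keeping a hot call site monomorphic --
// always passing objects with the identical property layout -- lets
// every property load below hit the fast cached path.

interface Point { x: number; y: number; }

// Monomorphic call site: all arguments share the Point shape.
function magnitudeSquared(p: Point): number {
  return p.x * p.x + p.y * p.y;
}

const points: Point[] = [
  { x: 3, y: 4 },
  { x: 6, y: 8 },
];

const total = points.reduce((sum, p) => sum + magnitudeSquared(p), 0);
console.log(total); // 25 + 100 = 125
```

&lt;p&gt;Mixing differently shaped objects (say, some with an extra &lt;code&gt;z&lt;/code&gt; property) at the same call site would make it polymorphic and slower—exactly the kind of runtime-informed optimization a statically compiled WASM module cannot benefit from.&lt;/p&gt;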

&lt;h3&gt;
  
  
  2. Avoiding Common Rust Pitfalls
&lt;/h3&gt;

&lt;p&gt;The original Rust implementation suffered from several issues that the TypeScript rewrite naturally avoided:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;WASM binding overhead&lt;/strong&gt;: Rust's foreign function interface had unexpected costs when communicating between WASM and JavaScript&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug vs Release builds&lt;/strong&gt;: Rust's debug mode was significantly slower, complicating development and testing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex lifetime management&lt;/strong&gt;: satisfying the borrow checker prevented bugs, but in some hot paths it pushed the design toward extra copies and reference counting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Larger binary sizes&lt;/strong&gt;: the allocator and standard-library code bundled into Rust's WASM output added noticeable size&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Algorithmic Improvements
&lt;/h3&gt;

&lt;p&gt;Perhaps most importantly, the rewrite allowed the team to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rethink core data structures from first principles&lt;/li&gt;
&lt;li&gt;Implement streaming parsers that process data incrementally&lt;/li&gt;
&lt;li&gt;Take advantage of JavaScript's native string handling capabilities&lt;/li&gt;
&lt;li&gt;Simplify error handling without sacrificing correctness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fresh perspective enabled by starting over led to fundamental architectural improvements that had more impact than any language-level optimization could provide.&lt;/p&gt;
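
&lt;p&gt;The streaming idea can be sketched in a few lines. This is a hypothetical illustration, not code from the OpenUI parser: the parser accepts input in arbitrary chunks, emits complete records as soon as they are available, and carries a trailing partial record over to the next chunk instead of buffering the whole document.&lt;/p&gt;

```typescript
// Hypothetical sketch of an incremental (streaming) parser: records are
// newline-delimited here for simplicity. Class and method names are
// illustrative, not from the OpenUI codebase.
class LineStreamParser {
  private buffer = "";
  private readonly onRecord: (line: string) => void;

  constructor(onRecord: (line: string) => void) {
    this.onRecord = onRecord;
  }

  // Feed one chunk; complete records are emitted immediately, and a
  // trailing partial record is held in the buffer for the next chunk.
  push(chunk: string): void {
    this.buffer += chunk;
    let idx = this.buffer.indexOf("\n");
    while (idx !== -1) {
      this.onRecord(this.buffer.slice(0, idx));
      this.buffer = this.buffer.slice(idx + 1);
      idx = this.buffer.indexOf("\n");
    }
  }

  // Flush whatever remains once the input ends.
  end(): void {
    if (this.buffer.length > 0) {
      this.onRecord(this.buffer);
      this.buffer = "";
    }
  }
}

const records: string[] = [];
const parser = new LineStreamParser((line) => records.push(line));
parser.push("alpha\nbe");  // "be" is incomplete, so it is buffered
parser.push("ta\ngam");    // completes "beta", buffers "gam"
parser.push("ma");
parser.end();               // flushes "gamma"
// records is now ["alpha", "beta", "gamma"]
```

&lt;p&gt;The payoff is memory proportional to one record rather than the whole input, and results that become available while data is still arriving—an architectural win available in any language.&lt;/p&gt;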

&lt;h2&gt;
  
  
  What This Means for Developers
&lt;/h2&gt;

&lt;p&gt;This story shouldn't be read as "Rust is bad" or "TypeScript is always better." Instead, it reveals several important lessons for developers making technology choices:&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose the Right Tool for the Job
&lt;/h3&gt;

&lt;p&gt;The performance landscape is more nuanced than "compiled = faster." Consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The target runtime's optimization level&lt;/li&gt;
&lt;li&gt;The specific workload characteristics&lt;/li&gt;
&lt;li&gt;Development velocity vs. peak performance tradeoffs&lt;/li&gt;
&lt;li&gt;The size and experience of your team&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Measure Before Optimizing
&lt;/h3&gt;

&lt;p&gt;The team initially assumed Rust would be faster based on conventional wisdom. Only through actual benchmarking did they discover the truth. Premature optimization—and premature assumption-making—can lead you down the wrong path.&lt;/p&gt;
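
&lt;p&gt;Measuring doesn't require heavy tooling. Here is a minimal harness in that spirit (my own sketch, not the team's benchmark suite): time a function over many runs and report the median, which is more robust to JIT warm-up spikes than the mean.&lt;/p&gt;

```typescript
// Minimal benchmarking sketch: run fn many times, report the median
// sample. The median resists one-off spikes from JIT warm-up and GC
// pauses better than the mean does.
function benchmark(label: string, fn: () => void, runs = 50): number {
  const samples: number[] = [];
  for (let i = 0; i !== runs; i++) {
    const start = performance.now();
    fn();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const median = samples[Math.floor(runs / 2)];
  console.log(label, median.toFixed(3), "ms");
  return median;
}

// Example: compare two ways of building a 10,000-character string.
const concatMedian = benchmark("string concat", () => {
  let s = "";
  for (let i = 0; i !== 10000; i++) s += "x";
});
const joinMedian = benchmark("array join", () => {
  const parts: string[] = [];
  for (let i = 0; i !== 10000; i++) parts.push("x");
  parts.join("");
});
```

&lt;p&gt;Which variant wins depends on the engine and the workload—which is precisely the point: run it on your target runtime rather than assuming.&lt;/p&gt;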

&lt;h3&gt;
  
  
  Language Isn't Everything
&lt;/h3&gt;

&lt;p&gt;Algorithm design, data structure choice, and understanding your runtime often matter more than the programming language itself. A well-designed TypeScript application can outperform a poorly optimized Rust application any day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenge Your Assumptions
&lt;/h3&gt;

&lt;p&gt;The tech industry is full of "everyone knows" statements that turn out to be oversimplifications. The OpenUI team's success came from questioning conventional wisdom and running their own experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Still Choose Rust
&lt;/h2&gt;

&lt;p&gt;This isn't a blanket endorsement of TypeScript over Rust for all WASM projects. Rust remains the better choice for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Systems programming&lt;/strong&gt; with tight memory constraints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embedded development&lt;/strong&gt; where runtime overhead must be minimal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cryptographic operations&lt;/strong&gt; requiring constant-time execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Projects requiring fearless concurrency&lt;/strong&gt; without garbage collection pauses&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;The OpenUI team's experience demonstrates that the programming world is full of surprises. While Rust remains excellent for many use cases—systems programming, embedded development, and scenarios requiring zero-cost abstractions—it's not a universal performance silver bullet.&lt;/p&gt;

&lt;p&gt;Sometimes the "obvious" choice isn't the best one. Sometimes a language considered "slower" can outperform a "faster" one in practice. The key is to stay curious, measure everything, and be willing to challenge your assumptions.&lt;/p&gt;

&lt;p&gt;The best developers aren't the ones who know the most—they're the ones who know what they don't know and are willing to find out.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you had similar experiences with counterintuitive performance results? Share your stories in the comments below.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;If you enjoyed this article, consider supporting my work:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ETH Wallet: &lt;code&gt;0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/code&gt;&lt;/p&gt;

</description>
      <category>webassembly</category>
      <category>rust</category>
      <category>typescript</category>
      <category>performance</category>
    </item>
    <item>
      <title>The Rise of Open-Source AI Coding Agents: A New Era for Developers</title>
      <dc:creator>Tiphis</dc:creator>
      <pubDate>Fri, 20 Mar 2026 23:16:18 +0000</pubDate>
      <link>https://dev.to/_a1084658c738d4804957c/the-rise-of-open-source-ai-coding-agents-a-new-era-for-developers-ae2</link>
      <guid>https://dev.to/_a1084658c738d4804957c/the-rise-of-open-source-ai-coding-agents-a-new-era-for-developers-ae2</guid>
      <description>&lt;h1&gt;
  
  
  The Rise of Open-Source AI Coding Agents: A New Era for Developers
&lt;/h1&gt;

&lt;p&gt;The software development landscape is undergoing a seismic shift. Open-source AI coding agents are emerging as powerful tools that promise to revolutionize how developers write, debug, and ship code. With recent releases like OpenCode gaining massive traction on platforms like Hacker News (244 points and climbing), it's clear that the developer community is hungry for alternatives to closed, proprietary AI solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Driving This Revolution?
&lt;/h2&gt;

&lt;p&gt;The demand for AI-powered coding assistants has exploded over the past few years. GitHub Copilot, Cursor, and similar tools have demonstrated the value of having an AI partner while coding. However, many developers have grown concerned about vendor lock-in, data privacy, and the closed nature of these commercial products.&lt;/p&gt;

&lt;p&gt;Open-source alternatives like OpenCode address these concerns directly. By making the underlying technology transparent and modifiable, developers can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inspect how the AI makes decisions&lt;/li&gt;
&lt;li&gt;Customize the model for their specific needs&lt;/li&gt;
&lt;li&gt;Deploy self-hosted solutions for sensitive projects&lt;/li&gt;
&lt;li&gt;Contribute to and benefit from community-driven improvements&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Income Potential Is Massive
&lt;/h2&gt;

&lt;p&gt;Let's be direct about the opportunity here. The AI coding tools market is projected to reach billions of dollars in the coming years. Companies are actively seeking developers who understand these systems—not just as users, but as contributors and builders.&lt;/p&gt;

&lt;p&gt;Here's why this represents high income potential:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Enterprise Adoption&lt;/strong&gt;: Businesses are increasingly comfortable using AI coding assistants but demand open-source solutions for security and compliance reasons. This creates opportunities for developers who can implement and customize these tools in enterprise environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Consulting and Integration&lt;/strong&gt;: Every organization has unique needs. Developers who can fine-tune, deploy, and maintain open-source AI coding agents are positioning themselves for lucrative consulting engagements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Building on Top&lt;/strong&gt;: The open-source nature means developers can create value-added products, plugins, and integrations that solve specific industry problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Contributing to Core Projects&lt;/strong&gt;: Active contributors to projects like OpenCode can build reputation, gain recognition, and open doors to speaking roles, premium contracts, and leadership positions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges Worth Considering
&lt;/h2&gt;

&lt;p&gt;It's not all smooth sailing. Open-source AI coding agents face real challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute Costs&lt;/strong&gt;: Running these models requires significant GPU resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Quality&lt;/strong&gt;: Matching the performance of well-funded commercial products takes effort&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintenance Burden&lt;/strong&gt;: Unlike commercial tools, open-source projects rely on community support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Complexity&lt;/strong&gt;: Making AI agents work seamlessly in diverse development environments is technically challenging&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Developers Can Prepare
&lt;/h2&gt;

&lt;p&gt;If you want to capitalize on this trend, consider these actionable steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experiment actively&lt;/strong&gt;: Use these tools in your daily workflow. Understand their strengths and limitations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learn the fundamentals&lt;/strong&gt;: Prompt engineering, RAG architectures, and fine-tuning techniques are valuable skills.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Contribute meaningfully&lt;/strong&gt;: Start with documentation, bug reports, or small features. Build toward deeper involvement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build showcase projects&lt;/strong&gt;: Demonstrate your expertise by creating integrations, tutorials, or custom implementations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stay informed&lt;/strong&gt;: The AI landscape evolves rapidly. Follow developments in both open-source and commercial spaces.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;We're witnessing a fundamental shift in how software gets built. AI coding agents aren't replacing developers—they're amplifying our capabilities. The developers who thrive will be those who embrace these tools, understand their mechanics, and contribute to the open-source ecosystems that power them.&lt;/p&gt;

&lt;p&gt;Open-source AI coding agents represent more than just a technological trend. They're a movement toward more accessible, transparent, and community-driven development tools. For developers willing to engage, the opportunities—both creative and financial—are substantial.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this article valuable, consider supporting my work with a tip:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ETH Wallet&lt;/strong&gt;: 0xAa9ACeE80691997CEC41a7F4cd371963b8EAC0C4&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>opensource</category>
      <category>devtools</category>
    </item>
  </channel>
</rss>
