<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ArshTechPro</title>
    <description>The latest articles on DEV Community by ArshTechPro (@arshtechpro).</description>
    <link>https://dev.to/arshtechpro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3258664%2F7a2cc61a-0b4d-4cf8-884e-52f33905cac3.png</url>
      <title>DEV Community: ArshTechPro</title>
      <link>https://dev.to/arshtechpro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arshtechpro"/>
    <language>en</language>
    <item>
      <title>Five Years of Apple's Best Security Work, Cracked in Five Days — Here's What Developers Should Know</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Fri, 15 May 2026 10:03:07 +0000</pubDate>
      <link>https://dev.to/arshtechpro/five-years-of-apples-best-security-work-cracked-in-five-days-heres-what-developers-should-know-5dba</link>
      <guid>https://dev.to/arshtechpro/five-years-of-apples-best-security-work-cracked-in-five-days-heres-what-developers-should-know-5dba</guid>
      <description>&lt;p&gt;There's a stat buried in a recent security disclosure that should stop every developer in their tracks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apple spent five years and likely billions of dollars building Memory Integrity Enforcement (MIE) for the M5 chip.&lt;/strong&gt; A small team at Calif, working with an AI model called Mythos Preview, built a working kernel exploit against it in &lt;strong&gt;five days&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This isn't a story about Apple failing. It's a story about the state of modern security — and it has real lessons for every developer writing software today.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Exactly Is Memory Integrity Enforcement?
&lt;/h2&gt;

&lt;p&gt;Before we get to the exploit, you need to understand what was bypassed.&lt;/p&gt;

&lt;p&gt;Memory corruption bugs — things like buffer overflows, use-after-free errors, and out-of-bounds writes — have been the backbone of software exploits for decades. The reason they keep working is simple: most languages let you do unsafe things with memory, and hardware traditionally didn't care.&lt;/p&gt;

&lt;p&gt;ARM's &lt;strong&gt;Memory Tagging Extension (MTE)&lt;/strong&gt;, introduced in 2019, was the first serious hardware-level attempt to change that. The idea is elegant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every 16-byte chunk of memory gets a secret &lt;strong&gt;4-bit tag&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Every pointer to that memory carries the same tag&lt;/li&gt;
&lt;li&gt;When your code accesses memory, the CPU hardware &lt;em&gt;checks the tags match&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;If they don't? Immediate exception — no exploit, no arbitrary write&lt;/li&gt;
&lt;/ul&gt;
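
&lt;p&gt;To make the idea concrete, here is a toy simulation of tag checking. This is purely illustrative: real tags are random, secret, and enforced in silicon, not in application code.&lt;/p&gt;

```javascript
// Toy model of MTE-style memory tagging. Real hardware assigns random,
// secret tags; this demo uses sequential tags so the behavior is deterministic.
const GRANULE = 16; // bytes covered by one tag

class TaggedHeap {
  constructor(size) {
    this.mem = new Uint8Array(size);
    this.tags = new Uint8Array(size / GRANULE); // one 4-bit tag per granule
    this.nextTag = 0;
  }
  alloc(addr, len) {
    // Tag every granule the allocation covers (addr/len assumed 16-aligned).
    const tag = this.nextTag % 16;
    this.nextTag += 1;
    const end = Math.ceil((addr + len) / GRANULE);
    for (let g = Math.floor(addr / GRANULE); g !== end; g += 1) this.tags[g] = tag;
    return { addr, tag }; // the "pointer" carries its tag
  }
  write(ptr, offset, value) {
    const g = Math.floor((ptr.addr + offset) / GRANULE);
    if (this.tags[g] !== ptr.tag) throw new Error("tag-check fault"); // hardware exception
    this.mem[ptr.addr + offset] = value;
  }
}

const heap = new TaggedHeap(64);
const a = heap.alloc(0, 16);  // gets one tag
const b = heap.alloc(16, 16); // adjacent allocation, different tag
heap.write(a, 0, 0x41);       // tags match: allowed
// heap.write(a, 16, 0x41);   // one byte past the allocation: tag mismatch, faults
```

&lt;p&gt;An overflow from &lt;code&gt;a&lt;/code&gt; into &lt;code&gt;b&lt;/code&gt; faults immediately instead of silently corrupting memory, which is the property that kills classic linear-overflow exploits.&lt;/p&gt;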

&lt;p&gt;Apple didn't just ship MTE as-is. They spent years hardening it into something they call EMTE (Enhanced MTE) and wrapped it in a system-wide defense called MIE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Synchronous checking only&lt;/strong&gt; — no async mode where an attacker could slip past the check&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag Confidentiality Enforcement&lt;/strong&gt; — protects tags from being leaked via side channels (like the &lt;a href="https://github.com/compsec-snu/tiktag" rel="noopener noreferrer"&gt;TikTag attack&lt;/a&gt; that broke standard MTE with a 95% success rate in under 4 seconds)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-tagged memory protection&lt;/strong&gt; — plugs a hole in standard MTE where attackers could bypass tags by targeting global variables instead&lt;/li&gt;
&lt;li&gt;Applied kernel-wide, hardware-accelerated, and always on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apple claimed — with evidence — that MIE disrupts &lt;em&gt;every known public exploit chain&lt;/em&gt; against modern iOS, including recently leaked commercial exploit kits.&lt;/p&gt;

&lt;p&gt;Then came May 2026.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Exploit: A Data-Only Kernel LPE
&lt;/h2&gt;

&lt;p&gt;The Calif team disclosed that they built the &lt;strong&gt;first public macOS kernel exploit on M5 hardware with MIE enabled&lt;/strong&gt;. Here are the key technical facts they shared:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type:&lt;/strong&gt; Data-only kernel local privilege escalation (LPE)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target:&lt;/strong&gt; macOS 26.4.1 on bare-metal M5&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Starting point:&lt;/strong&gt; Unprivileged local user, using only normal system calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End result:&lt;/strong&gt; Root shell&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bugs used:&lt;/strong&gt; Two vulnerabilities chained together&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time to build:&lt;/strong&gt; ~5 days (bugs found April 25th, working exploit by May 1st)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The term &lt;em&gt;data-only&lt;/em&gt; is significant. It means the exploit doesn't inject executable code or redirect execution — it manipulates data structures inside the kernel so that the kernel's own legitimate code does the attacker's work. Traditional memory safety defenses often focus on code injection and control-flow hijacking; data-only attacks are harder to catch because, from the CPU's perspective, you're just... reading and writing memory normally.&lt;/p&gt;
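
&lt;p&gt;A toy sketch of the concept (nothing here resembles the actual exploit): with an arbitrary-write primitive, the attacker edits data the system already trusts, and entirely legitimate code does the rest.&lt;/p&gt;

```javascript
// Data-only attack, reduced to a toy: no injected code, no hijacked
// control flow. The attacker only changes a value that trusted logic reads.
// (Purely illustrative; this is not how the disclosed exploit works.)
const cred = { pid: 1337, uid: 501 }; // an unprivileged process credential

// The "kernel" check is unmodified and correct:
function isRoot(c) { return c.uid === 0; }

// Stand-in for a memory-corruption-derived arbitrary-write primitive:
function arbitraryWrite(obj, field, value) { obj[field] = value; }

console.log(isRoot(cred)); // false
arbitraryWrite(cred, "uid", 0);
console.log(isRoot(cred)); // true: root, without executing a single new instruction
```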

&lt;p&gt;They haven't published the full technical report yet — that comes after Apple ships a fix. But the core insight is already visible: &lt;strong&gt;with the right vulnerabilities, MIE can be evaded.&lt;/strong&gt; The tags can still be worked around if an attacker has primitives to reason about memory layout and tag values.&lt;/p&gt;




&lt;h2&gt;
  
  
  The AI Angle Is the Part That Should Keep You Up at Night
&lt;/h2&gt;

&lt;p&gt;Here's what's genuinely new about this disclosure: the exploit wasn't found by a single legendary hacker working alone for months. It was found by a small team working &lt;em&gt;with&lt;/em&gt; an AI system.&lt;/p&gt;

&lt;p&gt;Mythos Preview identified the bugs because they belong to &lt;strong&gt;known vulnerability classes&lt;/strong&gt; — patterns that, once an AI system has learned them, generalize across a huge surface area of code. The human experts on the team then applied judgment for the parts that required novel reasoning: specifically, figuring out how to bypass MIE, which is new enough that AI had no prior examples to draw from.&lt;/p&gt;

&lt;p&gt;This human-AI pairing dynamic is important. The AI handled breadth — scanning for known patterns at scale. The humans handled depth — the novel, creative problem of defeating a new mitigation. Together they landed a kernel exploit against Apple's best hardware in a week.&lt;/p&gt;

&lt;p&gt;The implication: the old security model of "this is too obscure/complex for anyone to bother" is accelerating toward irrelevance. AI systems are getting better at the breadth problem. The cost of finding known bug classes in new codebases is dropping fast.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Developers Can Actually Learn From This
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Memory safety in your language matters more than ever
&lt;/h3&gt;

&lt;p&gt;If you're still writing systems-level code in C or C++, this is a reminder that hardware mitigations like MIE are playing defense &lt;em&gt;on your behalf&lt;/em&gt; — and that defense can be beaten. The industry push toward Rust, Swift, and memory-safe languages isn't hype.&lt;/p&gt;

&lt;p&gt;If you can't switch languages, use sanitizers (ASan, MSan, UBSan) in your CI pipeline. At minimum they'll catch bugs before attackers do.&lt;/p&gt;
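
&lt;p&gt;A minimal sketch of the payoff (assuming a CI image with clang or gcc built with sanitizer support): an AddressSanitizer build turns a silent use-after-free into an immediate, loud failure.&lt;/p&gt;

```shell
# Write a tiny program with a use-after-free bug.
printf '%s\n' \
  '#include "stdlib.h"' \
  'int main(void) {' \
  '    int *p = malloc(sizeof *p);' \
  '    free(p);' \
  '    return *p;   /* use-after-free */' \
  '}' > uaf.c

# Build with ASan instrumentation and run: the bad read aborts the process
# with a full report instead of returning garbage.
cc -fsanitize=address -g -O1 uaf.c -o uaf
./uaf || echo "caught by ASan"
```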

&lt;h3&gt;
  
  
  2. Mitigations buy time — they don't buy safety
&lt;/h3&gt;

&lt;p&gt;MIE is an extraordinary engineering achievement. It dramatically raises the cost of exploitation. But the Calif research illustrates a principle that security engineers know well: &lt;strong&gt;mitigations are not fixes&lt;/strong&gt;. They change the economics of exploitation without eliminating the underlying bugs.&lt;/p&gt;

&lt;p&gt;Every security control you add to your application — rate limiting, WAFs, sandboxing, ASLR — buys you time and raises attacker cost. None of them substitute for writing correct, safe code in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. "Data-only" attacks are underappreciated in web and app security too
&lt;/h3&gt;

&lt;p&gt;The kernel exploit here avoided code injection entirely and instead manipulated kernel data structures. The web equivalent of this thinking pattern shows up in logic bugs, IDOR vulnerabilities, and race conditions — attacks that don't inject code but manipulate the state your application trusts.&lt;/p&gt;

&lt;p&gt;These are notoriously hard to catch with static analysis or fuzzing alone because they often require understanding &lt;em&gt;semantic intent&lt;/em&gt;, not just memory layout. Your threat model should account for attackers who want to corrupt your application's state without ever triggering a traditional "input validation" check.&lt;/p&gt;
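
&lt;p&gt;A minimal example of that state-manipulation pattern in application code (a classic IDOR, using a hypothetical invoice store):&lt;/p&gt;

```javascript
// IDOR in miniature: no injected code, no malformed input. The attacker just
// names an id they should not own, and the handler trusts it.
const invoices = {
  1: { owner: "alice", total: 120 },
  2: { owner: "bob", total: 90 },
};

// Vulnerable: any authenticated user can fetch any invoice.
function getInvoice(user, id) {
  return invoices[id];
}

// Fixed: the lookup is bound to state the server controls (the session user).
function getInvoiceSafe(user, id) {
  const inv = invoices[id];
  if (!inv || inv.owner !== user) throw new Error("forbidden");
  return inv;
}

console.log(getInvoice("alice", 2).owner); // "bob": alice reads bob's data
```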

&lt;h3&gt;
  
  
  4. AI-assisted vulnerability discovery is already here
&lt;/h3&gt;

&lt;p&gt;The security landscape is changing. Bug bounty hunters, red teams, and — unfortunately — malicious actors are all beginning to pair AI with human expertise the same way Calif did here. &lt;/p&gt;

&lt;h3&gt;
  
  
  5. Responsible disclosure still works — and matters
&lt;/h3&gt;

&lt;p&gt;Calif walked into Apple Park and handed over a laser-printed report in person rather than submitting through the usual bug-bounty channels. Theatrical? Maybe. But they also chose to withhold technical details until Apple ships a fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;The Calif team ended their post with a Vietnamese proverb: &lt;em&gt;nhỏ mà có võ&lt;/em&gt; — small but mighty. It's a fitting note for an era where a handful of researchers with the right AI tooling can do what used to require nation-state resources.&lt;/p&gt;

&lt;p&gt;For developers, the takeaway isn't panic. It's clarity: write memory-safe code where you can, layer your defenses, treat mitigations as speed bumps not walls, and take vulnerability reports seriously. The tools attackers have access to are improving. So should yours.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Full technical details of the exploit will be published by Calif after Apple releases a patch. Apple's MIE blog post is worth reading regardless — it's one of the best public explanations of hardware-assisted memory safety ever written.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Xcode 26.5 — What Developers Actually Need to Know</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Tue, 12 May 2026 15:59:52 +0000</pubDate>
      <link>https://dev.to/arshtechpro/xcode-265-what-developers-actually-need-to-know-5033</link>
      <guid>https://dev.to/arshtechpro/xcode-265-what-developers-actually-need-to-know-5033</guid>
      <description>&lt;p&gt;Xcode 26.5 RC is out. It is not a landmark release, but if you ship subscription apps or work across Swift, SwiftUI, or web views, there is enough here to warrant attention before you push your next build.&lt;/p&gt;

&lt;p&gt;Here is what matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  The App Store Deadline You Cannot Miss
&lt;/h2&gt;

&lt;p&gt;Before getting into features — if you have not already done this, stop and do it now.&lt;/p&gt;

&lt;p&gt;Starting &lt;strong&gt;April 28, 2026&lt;/strong&gt;, all new apps and app updates uploaded to App Store Connect must be built with the &lt;strong&gt;iOS 26 SDK or later&lt;/strong&gt; (and the equivalent SDKs for tvOS, visionOS, and watchOS). If your CI/CD pipeline is still on Xcode 16, it will start rejecting your submissions. Update your build environment to Xcode 26 immediately.&lt;/p&gt;




&lt;h2&gt;
  
  
  StoreKit: The Headlining Addition
&lt;/h2&gt;

&lt;p&gt;The most substantive developer-facing change in 26.5 is a set of new StoreKit APIs built around &lt;strong&gt;monthly subscriptions with a 12-month commitment billing plan&lt;/strong&gt; — a billing configuration Apple introduced in App Store Connect.&lt;/p&gt;

&lt;p&gt;If your app monetizes via subscriptions, this is the update for you.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is new
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;SubscriptionInfo.pricingTerms&lt;/code&gt; (PricingTerms model)&lt;/strong&gt;&lt;br&gt;
You can now read pricing information for subscriptions with a monthly-with-12-month-commitment plan directly from StoreKit. No more hardcoding pricing strings in your UI. Pull them live.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;billingPlanType&lt;/code&gt; PurchaseOption&lt;/strong&gt;&lt;br&gt;
Specify the billing plan type at the point of purchase for subscriptions using the new commitment configuration. This gives you programmatic control over which billing path the customer follows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;CommitmentInfo&lt;/code&gt; on Transaction and SubscriptionRenewalInfo&lt;/strong&gt;&lt;br&gt;
Read customer entitlement metadata for subscriptions purchased on a monthly billing plan type. This belongs in your transaction verification and renewal logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;preferredSubscriptionPricingTerms(_:)&lt;/code&gt; — SwiftUI merchandising&lt;/strong&gt;&lt;br&gt;
Import both StoreKit and SwiftUI and you get a new view modifier that handles merchandising monthly commitment plans using Apple's built-in styles. If you are building a subscription paywall, this is the fastest path to a design that follows Apple's conventions without rolling your own layout.&lt;/p&gt;

&lt;h3&gt;
  
  
  Availability note
&lt;/h3&gt;

&lt;p&gt;The new billing plans arrive alongside iOS 26.5's release in May and will be available on devices running iOS 26.4 and later, worldwide except the United States and Singapore.&lt;/p&gt;

&lt;h3&gt;
  
  
  Known issue to flag
&lt;/h3&gt;

&lt;p&gt;There is one active bug worth noting before upgrading your test pipeline:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SKTestSession&lt;/code&gt; cannot use the selected StoreKit configuration during unit tests, causing test actions to fail. The workaround is to add a small delay before running the test so the configuration has time to persist on-device. Document this in your test setup code so no one wastes time debugging it later. The feedback number is FB22237318 if you want to follow along.&lt;/p&gt;
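
&lt;p&gt;In practice, documenting the workaround in test setup might look like this sketch (the configuration file name and delay length are illustrative, not prescribed):&lt;/p&gt;

```swift
import StoreKitTest
import XCTest

final class SubscriptionTests: XCTestCase {
    var session: SKTestSession!

    override func setUp() async throws {
        session = try SKTestSession(configurationFileNamed: "Products") // illustrative name
        // Workaround for FB22237318: give the selected StoreKit configuration
        // a moment to persist on-device before any test actions run.
        try await Task.sleep(nanoseconds: 500_000_000) // 0.5 s, tune as needed
    }
}
```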




&lt;h2&gt;
  
  
  Debugger Fixes Worth Knowing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Swift Task stepping across threads&lt;/strong&gt;&lt;br&gt;
The debugger can now correctly follow a Swift Task when a step operation causes the task to be migrated to a different thread. If you have hit confusing debugger behavior during async/await step-throughs, this should resolve it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SwiftUI Previews duplicate launch bug fixed&lt;/strong&gt;&lt;br&gt;
A run action was incorrectly launching a duplicate app instance when using SwiftUI Previews, or when running a command-line app that opens windows via SDL, GLFW, or NSApplication APIs without an app bundle. That is now resolved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Swift enum Optional SBValue representation&lt;/strong&gt;&lt;br&gt;
The payload of a Swift enum or Optional SBValue is now represented as a synthetic child rather than a direct child. If you have custom Python data formatters that unwrap Optional values, they will continue to work as long as you have not disabled &lt;code&gt;SetPreferSyntheticValue()&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Editor and Source Control Fixes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Syntax highlighting performance&lt;/strong&gt;&lt;br&gt;
Several major performance issues with syntax highlighting in Swift files have been resolved. Large files should feel noticeably more responsive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git performance for large repositories&lt;/strong&gt;&lt;br&gt;
An issue where workspaces touching git repositories with many tags or branches would experience sudden hangs and spins has been fixed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation viewer&lt;/strong&gt;&lt;br&gt;
Missing documentation for PhotoKit and some SwiftUI symbols in the documentation viewer and Quick Help has been restored.&lt;/p&gt;




&lt;h2&gt;
  
  
  Interface Builder
&lt;/h2&gt;

&lt;p&gt;Two changes worth noting if you still use IB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The "Show Library" button (+) has moved from the main toolbar to the bar at the bottom of the canvas.&lt;/li&gt;
&lt;li&gt;A new "Control Metrics" property in the File inspector for Mac XIB and Storyboard documents allows you to design for environments where &lt;code&gt;prefersCompactControlSizeMetrics&lt;/code&gt; will be set at runtime.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Instruments
&lt;/h2&gt;

&lt;p&gt;The SceneKit template has been removed. SceneKit Instrument remains available in the library, and SceneKit itself is now deprecated across all Apple platforms. If you have not started migrating to RealityKit, now is the time.&lt;/p&gt;

&lt;p&gt;The previous SwiftUI template containing View Body and View Properties instruments has been replaced — both instruments are deprecated but remain accessible in the library.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Xcode 26.5 is a focused update, not a feature drop. The StoreKit subscription billing APIs are the reason to prioritize testing against this SDK if you run a monetized app. The debugger and source editor fixes improve everyday reliability. And if you have not already migrated your build pipeline to Xcode 26, the April 28 App Store deadline makes that non-negotiable.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>swift</category>
      <category>mobile</category>
      <category>programming</category>
    </item>
    <item>
      <title>React Doctor: Is This the Missing Health Check for Your React Codebase?</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Mon, 11 May 2026 21:09:29 +0000</pubDate>
      <link>https://dev.to/arshtechpro/react-doctor-is-this-the-missing-health-check-for-your-react-codebase-5015</link>
      <guid>https://dev.to/arshtechpro/react-doctor-is-this-the-missing-health-check-for-your-react-codebase-5015</guid>
      <description>&lt;p&gt;If you have ever inherited a messy React codebase, or simply wondered whether your own project has drifted into bad patterns over time, React Doctor is a tool worth knowing about. One command, a score between 0 and 100, and a list of problems to fix. That is the pitch. Let us dig into whether it lives up to it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is React Doctor?
&lt;/h2&gt;

&lt;p&gt;React Doctor is an open-source CLI tool from the team behind Million.js. It scans your React project and produces a health score alongside a structured list of issues. Think of it as a linter on steroids — one that understands React patterns specifically, rather than just generic JavaScript rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; react-doctor@latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is all it takes to run it. No installation, no config file required to get started.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Works Under the Hood
&lt;/h2&gt;

&lt;p&gt;When you run React Doctor against your project, it does two things in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Lint pass&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It checks 60+ rules organized into categories: state and effects, performance, architecture, bundle size, security, correctness, accessibility, and framework-specific concerns (Next.js, React Native). Importantly, it detects your framework, React version, and compiler setup automatically, and toggles rules accordingly. So if you are on Next.js, it will apply Next.js-specific rules without you having to configure anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Dead code detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It runs a separate pass to find unused files, unused exports, unused types, and duplicates across your codebase.&lt;/p&gt;

&lt;p&gt;After both passes, it filters the results through any config you have set, then computes a score weighted by severity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;75 and above&lt;/strong&gt;: Great&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;50 to 74&lt;/strong&gt;: Needs work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Below 50&lt;/strong&gt;: Critical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Errors weigh more than warnings, so a single serious issue pulls the score down more than a pile of minor ones.&lt;/p&gt;
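
&lt;p&gt;React Doctor's exact formula is not published in this post, but the severity weighting can be sketched as a simple penalty model (the weights below are made up for illustration; only the labels and thresholds come from the tool):&lt;/p&gt;

```javascript
// Hypothetical severity-weighted score: errors cost more than warnings.
function healthScore(diagnostics, errorWeight = 5, warnWeight = 1) {
  let penalty = 0;
  for (const d of diagnostics) {
    penalty += d.severity === "error" ? errorWeight : warnWeight;
  }
  return Math.max(0, 100 - penalty);
}

// The published thresholds map scores to labels:
function label(score) {
  if (score >= 75) return "Great";
  if (score >= 50) return "Needs work";
  return "Critical";
}

const warnings = Array(10).fill({ severity: "warning" });
console.log(healthScore(warnings));                                 // 90
console.log(healthScore([{ severity: "error" }].concat(warnings))); // 85
console.log(label(85)); // "Great"
```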




&lt;h2&gt;
  
  
  Getting Verbose Output
&lt;/h2&gt;

&lt;p&gt;By default you get a summary. Add &lt;code&gt;--verbose&lt;/code&gt; and you see the exact files and line numbers involved:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; react-doctor@latest &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--verbose&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the mode you actually want when fixing things, since the summary alone does not tell you where to look.&lt;/p&gt;




&lt;h2&gt;
  
  
  Real-World Scores on Popular Projects
&lt;/h2&gt;

&lt;p&gt;The repo includes a leaderboard of scans against well-known open-source React projects. Here is a snapshot:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;tldraw&lt;/td&gt;
&lt;td&gt;84&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;excalidraw&lt;/td&gt;
&lt;td&gt;84&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;twenty&lt;/td&gt;
&lt;td&gt;78&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;plane&lt;/td&gt;
&lt;td&gt;78&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;formbricks&lt;/td&gt;
&lt;td&gt;75&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;posthog&lt;/td&gt;
&lt;td&gt;72&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;supabase&lt;/td&gt;
&lt;td&gt;69&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;payload&lt;/td&gt;
&lt;td&gt;68&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sentry&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cal.com&lt;/td&gt;
&lt;td&gt;63&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dub&lt;/td&gt;
&lt;td&gt;62&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Even tldraw and excalidraw — projects maintained by experienced teams — score in the mid-80s, not 100. This is a useful calibration. Do not expect to hit a perfect score; the goal is identifying what actually matters in your specific project.&lt;/p&gt;




&lt;h2&gt;
  
  
  Plugging It Into CI With GitHub Actions
&lt;/h2&gt;

&lt;p&gt;React Doctor ships a GitHub Action you can drop into any workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v5&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;fetch-depth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;millionco/react-doctor@main&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;diff&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;github-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;diff&lt;/code&gt; option is particularly useful — it tells the action to only scan files that changed relative to your base branch. This means in a pull request, it only flags new problems introduced by that PR, not pre-existing issues across the whole repo. The action also posts findings as a PR comment when &lt;code&gt;github-token&lt;/code&gt; is set.&lt;/p&gt;

&lt;p&gt;The action outputs a &lt;code&gt;score&lt;/code&gt; value you can use in subsequent steps — for example, failing the build if a PR drops the score below a threshold you define.&lt;/p&gt;
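
&lt;p&gt;For example, a follow-up step could enforce a floor (the step id and the threshold of 70 here are illustrative; only the &lt;code&gt;score&lt;/code&gt; output name comes from the action):&lt;/p&gt;

```yaml
- id: doctor
  uses: millionco/react-doctor@main
  with:
    diff: main
    github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Enforce minimum score
  run: |
    if [ "${{ steps.doctor.outputs.score }}" -lt 70 ]; then
      echo "React Doctor score dropped below 70"
      exit 1
    fi
```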




&lt;h2&gt;
  
  
  Config File
&lt;/h2&gt;

&lt;p&gt;You can suppress specific rules or exclude files via a &lt;code&gt;react-doctor.config.json&lt;/code&gt; at the project root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ignore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"rules"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"react/no-danger"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jsx-a11y/no-autofocus"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"src/generated/**"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, if you prefer not to add another config file, you can put the same config under a &lt;code&gt;"reactDoctor"&lt;/code&gt; key in your &lt;code&gt;package.json&lt;/code&gt;.&lt;/p&gt;
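
&lt;p&gt;For instance, with the same ignore settings:&lt;/p&gt;

```json
{
  "name": "my-app",
  "reactDoctor": {
    "ignore": {
      "rules": ["react/no-danger", "jsx-a11y/no-autofocus"],
      "files": ["src/generated/**"]
    }
  }
}
```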




&lt;h2&gt;
  
  
  Using It Programmatically
&lt;/h2&gt;

&lt;p&gt;If you want to integrate React Doctor into a custom script or tooling pipeline, there is a Node.js API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;diagnose&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-doctor/api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;diagnose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./path/to/your/react-project&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;       &lt;span class="c1"&gt;// { score: 82, label: "Good" }&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;diagnostics&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Array of Diagnostic objects&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;     &lt;span class="c1"&gt;// Framework, React version, compiler info&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each diagnostic object tells you the file, plugin, rule, severity, message, help text, line, and column. Clean enough to build your own reporting on top of.&lt;/p&gt;




&lt;h2&gt;
  
  
  Teaching Your AI Coding Agent React Best Practices
&lt;/h2&gt;

&lt;p&gt;One underrated feature: React Doctor can install a "skill" into your coding agent — Cursor, Claude Code, Windsurf, Copilot, and others are supported:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://react.doctor/install-skill.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This teaches the agent 47+ React best practice rules so it can catch issues proactively while you write code, not just after the fact.&lt;/p&gt;




&lt;h2&gt;
  
  
  Is It Worth Adding to Your Project?
&lt;/h2&gt;

&lt;p&gt;Here is an honest assessment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it genuinely helps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Onboarding onto a legacy codebase. Run it once and you immediately get a map of the biggest problems, organized by category.&lt;/li&gt;
&lt;li&gt;Enforcing standards across a team. The CI integration with diff mode means new PRs cannot silently introduce new antipatterns without someone noticing.&lt;/li&gt;
&lt;li&gt;Dead code. Most linters do not catch unused files and exports across the entire project. React Doctor does this out of the box.&lt;/li&gt;
&lt;li&gt;Framework-specific rules. If you are on Next.js, generic linters miss a lot. React Doctor knows about Next.js patterns and flags them appropriately.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where you should temper expectations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is not a replacement for a well-configured ESLint setup. It is a complement. If you already have strict ESLint rules, some overlap is inevitable.&lt;/li&gt;
&lt;li&gt;The score is a rough guide, not a precise metric. A score of 72 versus 75 does not mean much. Focus on the specific diagnostics, not the number.&lt;/li&gt;
&lt;li&gt;It is relatively new (still under active development with open issues and PRs). Some rules may be noisy or context-dependent, which is why the config ignore list exists.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;--fix&lt;/code&gt; flag hands off to an AI agent (Ami) to auto-fix issues. This part is more experimental and depends on external tooling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; if you run &lt;code&gt;npx -y react-doctor@latest . --verbose&lt;/code&gt; on your project right now and nothing surprises you, you probably did not need it. But if it surfaces 50 unused exports, three components with missing keys in lists, and a handful of useEffect dependency issues you forgot about, that is a morning of tech debt cleanup you would not have found otherwise.&lt;/p&gt;

&lt;p&gt;For the CI use case alone — automatically flagging React-specific regressions in PRs with zero config — it earns its place in a frontend workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started in 60 Seconds
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run against your project&lt;/span&gt;
npx &lt;span class="nt"&gt;-y&lt;/span&gt; react-doctor@latest &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--verbose&lt;/span&gt;

&lt;span class="c"&gt;# If you use workspaces, select specific packages&lt;/span&gt;
npx &lt;span class="nt"&gt;-y&lt;/span&gt; react-doctor@latest &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--project&lt;/span&gt; my-app

&lt;span class="c"&gt;# Only scan changed files vs main&lt;/span&gt;
npx &lt;span class="nt"&gt;-y&lt;/span&gt; react-doctor@latest &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--diff&lt;/span&gt; main

&lt;span class="c"&gt;# Output just the score (useful in scripts)&lt;/span&gt;
npx &lt;span class="nt"&gt;-y&lt;/span&gt; react-doctor@latest &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--score&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
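
&lt;p&gt;For CI gating, the &lt;code&gt;--score&lt;/code&gt; output can be checked in a small script. A hypothetical sketch, assuming the score is the last token printed and ranges 0 to 100 (verify against the actual output before relying on it):&lt;/p&gt;

```python
def parse_score(output):
    # Assumes the last whitespace-separated token is the integer
    # score; adjust to what --score actually prints.
    return int(output.strip().split()[-1])

def meets_threshold(score, threshold):
    # Scores are assumed to fall in 0..100, so membership in
    # range(threshold, 101) means score is at least threshold.
    return score in range(threshold, 101)

# Hypothetical usage in a CI step:
# output = run("npx -y react-doctor@latest . --score")
# if not meets_threshold(parse_score(output), 70): fail the build
```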



&lt;p&gt;The GitHub repo is at &lt;a href="https://github.com/millionco/react-doctor" rel="noopener noreferrer"&gt;github.com/millionco/react-doctor&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>react</category>
      <category>reactnative</category>
      <category>programming</category>
    </item>
    <item>
      <title>RTK: Cut Your AI Coding Bill by 80% With One CLI Tool</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Fri, 08 May 2026 21:08:33 +0000</pubDate>
      <link>https://dev.to/arshtechpro/how-rtk-reduces-llm-token-usage-for-ai-coding-agents-2kfd</link>
      <guid>https://dev.to/arshtechpro/how-rtk-reduces-llm-token-usage-for-ai-coding-agents-2kfd</guid>
      <description>&lt;p&gt;If you use Claude Code, Cursor, Copilot, or any other AI coding assistant in your terminal, you are probably spending way more on tokens than you need to. RTK (Rust Token Killer) is an open-source CLI proxy that sits between your shell and your LLM and silently compresses what gets sent to the model — without changing how you work.&lt;/p&gt;

&lt;p&gt;It has 39.5k stars on GitHub. It is worth understanding why.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem It Solves
&lt;/h2&gt;

&lt;p&gt;When an AI coding agent runs &lt;code&gt;git status&lt;/code&gt;, the raw output can be 2,000 tokens. When it runs &lt;code&gt;cargo test&lt;/code&gt; on a mid-sized project and a few tests fail, you are looking at 200+ lines of output — most of it passing tests you do not care about.&lt;/p&gt;

&lt;p&gt;The agent reads all of it. You pay for all of it.&lt;/p&gt;

&lt;p&gt;This happens dozens of times in a single session. A 30-minute Claude Code session on a TypeScript or Rust project can easily burn through roughly 118,000 tokens just from routine shell commands — file reads, test runs, git operations, lint checks.&lt;/p&gt;

&lt;p&gt;RTK intercepts those commands, applies smart filtering, and hands the model a compressed version that contains the same useful signal. According to the project's benchmarks, the same 30-minute session drops to around 23,900 tokens — an 80% reduction.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Actually Works
&lt;/h2&gt;

&lt;p&gt;RTK is a CLI proxy. You prefix your commands with &lt;code&gt;rtk&lt;/code&gt;, or you install a hook that does it transparently.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;Without RTK: git push outputs 15 lines &lt;span class="o"&gt;(&lt;/span&gt;~200 tokens&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="go"&gt;Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
&lt;/span&gt;&lt;span class="c"&gt;...
&lt;/span&gt;&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;With RTK: git push outputs 1 line &lt;span class="o"&gt;(&lt;/span&gt;~10 tokens&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="go"&gt;ok main
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four strategies are applied depending on the command type:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Smart filtering&lt;/strong&gt; — strips comments, whitespace, and boilerplate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grouping&lt;/strong&gt; — aggregates similar items, like files grouped by directory or errors grouped by type&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Truncation&lt;/strong&gt; — keeps the signal, drops the redundancy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deduplication&lt;/strong&gt; — collapses repeated log lines into a count&lt;/li&gt;
&lt;/ul&gt;
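
&lt;p&gt;The deduplication strategy is the easiest to picture. A minimal sketch of the idea (RTK's actual implementation is in Rust and more involved):&lt;/p&gt;

```python
from itertools import groupby

def dedupe_lines(lines):
    # Collapse runs of identical consecutive log lines into a
    # single line annotated with a repeat count.
    out = []
    for line, group in groupby(lines):
        n = len(list(group))
        out.append(line if n == 1 else f"{line} (x{n})")
    return out
```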

&lt;p&gt;For test output specifically, RTK is particularly aggressive. &lt;code&gt;cargo test&lt;/code&gt; on a failure goes from 200+ lines to roughly 20. You see which tests failed and why. The agent does not need to read the 13 passing tests to understand what went wrong.&lt;/p&gt;
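
&lt;p&gt;Conceptually, the test filter keeps failures and the summary line and drops everything else. A hypothetical sketch against pytest-style output (RTK's real filters are per-runner and smarter than this):&lt;/p&gt;

```python
def filter_test_output(lines):
    # Keep failure lines and the final summary; drop per-test
    # output for passing tests.
    def keep(line):
        return "FAILED" in line or "error" in line.lower() or "passed" in line
    return [line for line in lines if keep(line)]
```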




&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;RTK is a single Rust binary with no dependencies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS (Homebrew):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;rtk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Linux / macOS (curl):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/rtk-ai/rtk/refs/heads/master/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Via Cargo:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--git&lt;/span&gt; https://github.com/rtk-ai/rtk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;One gotcha: there is another project called "rtk" (Rust Type Kit) on crates.io. If &lt;code&gt;rtk gain&lt;/code&gt; fails after install, you got the wrong one. Use the &lt;code&gt;--git&lt;/code&gt; flag above.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After install, connect it to your AI tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rtk init &lt;span class="nt"&gt;-g&lt;/span&gt;                    &lt;span class="c"&gt;# Claude Code / Copilot&lt;/span&gt;
rtk init &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="nt"&gt;--agent&lt;/span&gt; cursor     &lt;span class="c"&gt;# Cursor&lt;/span&gt;
rtk init &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="nt"&gt;--gemini&lt;/span&gt;           &lt;span class="c"&gt;# Gemini CLI&lt;/span&gt;
rtk init &lt;span class="nt"&gt;--agent&lt;/span&gt; windsurf      &lt;span class="c"&gt;# Windsurf&lt;/span&gt;
rtk init &lt;span class="nt"&gt;--agent&lt;/span&gt; cline         &lt;span class="c"&gt;# Cline / Roo Code&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart your AI tool and that is it. Commands are automatically rewritten from &lt;code&gt;git status&lt;/code&gt; to &lt;code&gt;rtk git status&lt;/code&gt; before the model ever sees the output.&lt;/p&gt;
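
&lt;p&gt;The rewrite itself is simple to model. A toy sketch of what the hook does to each Bash command — the supported-command set here is illustrative, not RTK's real list:&lt;/p&gt;

```python
SUPPORTED = {"git", "cargo", "pytest", "tsc", "ls", "grep", "cat"}

def rewrite(command):
    # Prefix the command with "rtk" when its first word is a
    # supported tool; anything else passes through untouched.
    parts = command.split()
    if parts and parts[0] in SUPPORTED:
        return "rtk " + command
    return command
```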




&lt;h2&gt;
  
  
  What Commands Are Supported
&lt;/h2&gt;

&lt;p&gt;The coverage is broad. Here is a quick scan of what RTK knows how to compress:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rtk git status      &lt;span class="c"&gt;# compact status&lt;/span&gt;
rtk git log &lt;span class="nt"&gt;-n&lt;/span&gt; 10   &lt;span class="c"&gt;# one-line commits&lt;/span&gt;
rtk git diff        &lt;span class="c"&gt;# condensed diff&lt;/span&gt;
rtk git push        &lt;span class="c"&gt;# -&amp;gt; "ok main"&lt;/span&gt;
rtk git pull        &lt;span class="c"&gt;# -&amp;gt; "ok 3 files +10 -2"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Test runners&lt;/strong&gt; (failures only — this is where the big savings come from):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rtk cargo &lt;span class="nb"&gt;test
&lt;/span&gt;rtk pytest
rtk go &lt;span class="nb"&gt;test
&lt;/span&gt;rtk jest
rtk vitest
rtk playwright &lt;span class="nb"&gt;test
&lt;/span&gt;rtk rspec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Build and lint:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rtk tsc             &lt;span class="c"&gt;# TypeScript errors grouped by file&lt;/span&gt;
rtk ruff check      &lt;span class="c"&gt;# Python linting&lt;/span&gt;
rtk cargo clippy    &lt;span class="c"&gt;# Rust lints&lt;/span&gt;
rtk golangci-lint run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;File operations:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rtk &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;            &lt;span class="c"&gt;# token-optimized directory tree&lt;/span&gt;
rtk &lt;span class="nb"&gt;read &lt;/span&gt;file.rs    &lt;span class="c"&gt;# smart file reading&lt;/span&gt;
rtk &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"pattern"&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AWS, Docker, Kubernetes&lt;/strong&gt; are also covered. The project lists 100+ supported commands.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Auto-Rewrite Hook
&lt;/h2&gt;

&lt;p&gt;The most useful feature is the hook. Once you run &lt;code&gt;rtk init -g&lt;/code&gt;, a PreToolUse hook is installed in Claude Code (or your chosen agent) that transparently rewrites Bash commands before execution. The model never knows RTK is involved — it just receives smaller, cleaner output.&lt;/p&gt;

&lt;p&gt;One thing worth knowing: the hook only intercepts Bash tool calls. Claude Code's built-in &lt;code&gt;Read&lt;/code&gt;, &lt;code&gt;Grep&lt;/code&gt;, and &lt;code&gt;Glob&lt;/code&gt; tools bypass it. If you want RTK filtering for those workflows, use the shell equivalents (&lt;code&gt;cat&lt;/code&gt;, &lt;code&gt;rg&lt;/code&gt;, &lt;code&gt;find&lt;/code&gt;) or call &lt;code&gt;rtk read&lt;/code&gt;, &lt;code&gt;rtk grep&lt;/code&gt;, &lt;code&gt;rtk find&lt;/code&gt; directly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Token Savings in Practice
&lt;/h2&gt;

&lt;p&gt;Here is the table from the project's README, representing a 30-minute session on a medium-sized project:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Frequency&lt;/th&gt;
&lt;th&gt;Standard&lt;/th&gt;
&lt;th&gt;RTK&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;ls&lt;/code&gt; / &lt;code&gt;tree&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;10x&lt;/td&gt;
&lt;td&gt;2,000&lt;/td&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;td&gt;-80%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;cat&lt;/code&gt; / &lt;code&gt;read&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;20x&lt;/td&gt;
&lt;td&gt;40,000&lt;/td&gt;
&lt;td&gt;12,000&lt;/td&gt;
&lt;td&gt;-70%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;grep&lt;/code&gt; / &lt;code&gt;rg&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;8x&lt;/td&gt;
&lt;td&gt;16,000&lt;/td&gt;
&lt;td&gt;3,200&lt;/td&gt;
&lt;td&gt;-80%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;git status&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;10x&lt;/td&gt;
&lt;td&gt;3,000&lt;/td&gt;
&lt;td&gt;600&lt;/td&gt;
&lt;td&gt;-80%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;git diff&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;5x&lt;/td&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;2,500&lt;/td&gt;
&lt;td&gt;-75%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;cargo test&lt;/code&gt; / &lt;code&gt;npm test&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;5x&lt;/td&gt;
&lt;td&gt;25,000&lt;/td&gt;
&lt;td&gt;2,500&lt;/td&gt;
&lt;td&gt;-90%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;pytest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;4x&lt;/td&gt;
&lt;td&gt;8,000&lt;/td&gt;
&lt;td&gt;800&lt;/td&gt;
&lt;td&gt;-90%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~118,000&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~23,900&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;-80%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
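
&lt;p&gt;The savings column follows directly from the two token columns — a quick per-row check:&lt;/p&gt;

```python
# (standard tokens, RTK tokens) per command, from the table above
rows = {
    "ls/tree": (2000, 400),
    "cat/read": (40000, 12000),
    "grep/rg": (16000, 3200),
    "git status": (3000, 600),
    "git diff": (10000, 2500),
    "cargo/npm test": (25000, 2500),
    "pytest": (8000, 800),
}

# Percent reduction per command
savings = {
    cmd: round(100 * (1 - rtk / std))
    for cmd, (std, rtk) in rows.items()
}
```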

&lt;p&gt;You can see your own savings with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rtk gain              &lt;span class="c"&gt;# summary stats&lt;/span&gt;
rtk gain &lt;span class="nt"&gt;--graph&lt;/span&gt;      &lt;span class="c"&gt;# ASCII graph of the last 30 days&lt;/span&gt;
rtk gain &lt;span class="nt"&gt;--history&lt;/span&gt;    &lt;span class="c"&gt;# recent command history&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Is It Worth It?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The honest case for trying it:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero friction to install and zero changes to your workflow after the hook is set up.&lt;/li&gt;
&lt;li&gt;The savings are real and verifiable — you can check them yourself with &lt;code&gt;rtk gain&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;39.5k stars and active development (141 releases, v0.38.0 as of late April 2026) suggest this is not an abandoned project.&lt;/li&gt;
&lt;li&gt;It is open-source (MIT), written in Rust, and ships as a single binary with no dependencies. The attack surface is minimal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The honest caveats:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The benchmark numbers (60-90% savings) are estimates based on medium-sized projects. Real savings depend heavily on how you work and what you are building.&lt;/li&gt;
&lt;li&gt;The hook only works on Bash calls. If your agent leans heavily on its native file-reading tools, a portion of your token usage is unaffected.&lt;/li&gt;
&lt;li&gt;On Windows, you get full filter support but no auto-rewrite hook unless you use WSL. Native Windows gets a CLAUDE.md fallback mode instead.&lt;/li&gt;
&lt;li&gt;When a command fails, RTK saves the full unfiltered output to disk so the model can read it — a thoughtful design, but worth being aware of if you are working with sensitive output.&lt;/li&gt;
&lt;li&gt;If RTK's filter for a specific command is overly aggressive, it could in theory strip context the agent needed. The &lt;code&gt;rtk discover&lt;/code&gt; command helps you find cases where savings were unexpectedly low, which can be a signal that something went wrong.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Quick Summary
&lt;/h2&gt;

&lt;p&gt;RTK is a thin, fast CLI proxy that compresses the output of 100+ common dev commands before they reach your LLM. It installs in one command, hooks into your agent transparently, and saves you a measurable amount on token usage — particularly for test runners and git operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/rtk-ai/rtk" rel="noopener noreferrer"&gt;github.com/rtk-ai/rtk&lt;/a&gt;&lt;/p&gt;




</description>
      <category>ai</category>
      <category>programming</category>
      <category>agents</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Local Deep Research: Run Your Own AI Research Assistant, Fully Private</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Wed, 06 May 2026 20:56:52 +0000</pubDate>
      <link>https://dev.to/arshtechpro/local-deep-research-run-your-own-ai-research-assistant-fully-private-6eg</link>
      <guid>https://dev.to/arshtechpro/local-deep-research-run-your-own-ai-research-assistant-fully-private-6eg</guid>
      <description>&lt;p&gt;If you have ever wished you could throw a complex question at an AI and get back a proper cited report — not a hallucinated paragraph, but something that actually searched the web, read papers, and synthesized sources — that is what Local Deep Research (LDR) does. And it runs entirely on your machine.&lt;/p&gt;

&lt;p&gt;The project sits at about 4,000 GitHub stars at the time of writing, has 124 releases, and is actively maintained. It is worth understanding what it actually does before you decide whether to spin it up.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is It?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/LearningCircuit/local-deep-research" rel="noopener noreferrer"&gt;Local Deep Research&lt;/a&gt; is a self-hosted AI research assistant. You give it a question. It searches across multiple sources — web, arXiv, PubMed, Wikipedia, GitHub, your own local documents — iterates on what it finds, and produces a structured report with citations.&lt;/p&gt;

&lt;p&gt;It supports both local models (via Ollama) and cloud models (OpenAI, Anthropic, Google). The "local" in the name means your data never has to leave your machine if you choose the fully local setup.&lt;/p&gt;

&lt;p&gt;Benchmark-wise, the project claims roughly 95% accuracy on the SimpleQA benchmark when tested with GPT-4.1-mini and SearXNG. That puts it in the range of commercial deep research tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who This Is For
&lt;/h2&gt;

&lt;p&gt;This tool is genuinely useful if you fall into one of these categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You do research-heavy work (technical writing, literature reviews, competitive analysis) and are tired of manually stitching together sources.&lt;/li&gt;
&lt;li&gt;You want to search across your own document library with AI — think internal wikis, PDFs, notes.&lt;/li&gt;
&lt;li&gt;You work with sensitive topics and cannot send queries to a third-party API.&lt;/li&gt;
&lt;li&gt;You want to build a compounding knowledge base over time where each research session adds to a searchable library.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you just want quick answers and are fine with ChatGPT, LDR is probably overkill. But if you want something you own and control, it is a serious option.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The core loop is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You submit a research question.&lt;/li&gt;
&lt;li&gt;LDR picks a research strategy (quick summary, deep analysis, academic, etc.) and breaks the question into sub-queries.&lt;/li&gt;
&lt;li&gt;It searches across configured sources, pulling results from the web, academic databases, or your local documents.&lt;/li&gt;
&lt;li&gt;It synthesizes the results iteratively, discarding low-quality content and expanding on promising threads.&lt;/li&gt;
&lt;li&gt;It produces a final report with citations and optionally stores sources in your encrypted local library.&lt;/li&gt;
&lt;/ol&gt;
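
&lt;p&gt;The loop above can be sketched in a few lines. This is a toy model with stand-in &lt;code&gt;search&lt;/code&gt; and &lt;code&gt;synthesize&lt;/code&gt; callables, not LDR's actual strategy code:&lt;/p&gt;

```python
def research(question, search, synthesize, max_rounds=3):
    # Toy version of the iterate-and-expand loop: search each
    # query, collect findings, follow promising threads, and stop
    # when nothing new comes back or the round budget runs out.
    queries = [question]
    findings = []
    for _ in range(max_rounds):
        new = [hit for q in queries for hit in search(q)]
        if not new:
            break
        findings.extend(new)
        queries = [h["follow_up"] for h in new if h.get("follow_up")]
        if not queries:
            break
    return synthesize(findings)
```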

&lt;p&gt;Each session can download sources (arXiv papers, web pages, PubMed articles) directly into your library, which gets indexed and made searchable. Over time your knowledge base grows and future research queries can search across both live web results and everything you have already collected.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Option 1: Docker (Recommended for most people)
&lt;/h3&gt;

&lt;p&gt;This is the fastest path. It handles dependencies, encryption, and all service wiring automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standard setup (CPU, works on Mac, Windows, Linux):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-O&lt;/span&gt; https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait about 30 seconds, then open &lt;code&gt;http://localhost:5000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With NVIDIA GPU acceleration (Linux only):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First install the NVIDIA Container Toolkit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://nvidia.github.io/libnvidia-container/gpgkey | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-L&lt;/span&gt; https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/nvidia-container-toolkit.list

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;nvidia-container-toolkit &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart docker
nvidia-smi  &lt;span class="c"&gt;# verify it worked&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then bring up the stack with GPU support:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-O&lt;/span&gt; https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.yml
curl &lt;span class="nt"&gt;-O&lt;/span&gt; https://raw.githubusercontent.com/LearningCircuit/local-deep-research/main/docker-compose.gpu.override.yml
docker compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose.yml &lt;span class="nt"&gt;-f&lt;/span&gt; docker-compose.gpu.override.yml up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Docker Compose setup bundles Ollama (local LLM runner) and SearXNG (self-hosted meta-search engine) together with LDR. Everything runs locally.&lt;/p&gt;




&lt;h3&gt;
  
  
  Option 2: pip (For developers / Python integration)
&lt;/h3&gt;

&lt;p&gt;If you want to embed LDR in a Python project or prefer to manage dependencies yourself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install the package&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;local-deep-research

&lt;span class="c"&gt;# Run SearXNG in Docker for search&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 &lt;span class="nt"&gt;--name&lt;/span&gt; searxng searxng/searxng

&lt;span class="c"&gt;# Install Ollama from https://ollama.ai, then pull a model&lt;/span&gt;
ollama pull gemma3:12b

&lt;span class="c"&gt;# Start the web UI&lt;/span&gt;
python &lt;span class="nt"&gt;-m&lt;/span&gt; local_deep_research.web.app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important note on encryption:&lt;/strong&gt; The pip install does not automatically set up SQLCipher (the AES-256 encrypted database LDR uses for storing your data and API keys). If you hit errors during setup, bypass it for now with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LDR_ALLOW_UNENCRYPTED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This stores data in plain SQLite. Fine for local dev, not recommended for production or shared setups. Docker handles encryption out of the box.&lt;/p&gt;




&lt;h2&gt;
  
  
  Using the Python API
&lt;/h2&gt;

&lt;p&gt;Once running, you can drive LDR programmatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;local_deep_research.api&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LDRClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;quick_query&lt;/span&gt;

&lt;span class="c1"&gt;# One-liner research
&lt;/span&gt;&lt;span class="n"&gt;summary&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;quick_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;username&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the current state of Rust async runtimes?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Client for more control
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LDRClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;username&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;quick_research&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Compare FAISS vs Hnswlib for vector search at scale&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;summary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Using the HTTP API
&lt;/h2&gt;

&lt;p&gt;LDR exposes a REST API with session-based authentication and CSRF protection. The auth flow is a bit verbose but works reliably:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bs4&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BeautifulSoup&lt;/span&gt;

&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Get CSRF token from login page
&lt;/span&gt;&lt;span class="n"&gt;login_page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:5000/auth/login&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;soup&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BeautifulSoup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;login_page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;html.parser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;csrf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;soup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csrf_token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Authenticate
&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:5000/auth/login&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;username&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pass&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csrf_token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;csrf&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;# Get API CSRF token
&lt;/span&gt;&lt;span class="n"&gt;api_csrf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:5000/auth/csrf-token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csrf_token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Submit a research query
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:5000/api/start_research&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What are the tradeoffs between gRPC and REST for internal microservices?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-CSRF-Token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;api_csrf&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The repository includes ready-to-run HTTP examples under &lt;code&gt;examples/api_usage/http/&lt;/code&gt; that handle authentication, retry logic, and progress polling.&lt;/p&gt;
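&lt;p&gt;A minimal polling loop is also easy to sketch yourself. The helper below abstracts the status call behind a callable so the same logic works with &lt;code&gt;session.get&lt;/code&gt; against whatever status endpoint your deployment exposes. The &lt;code&gt;status&lt;/code&gt; field and its &lt;code&gt;completed&lt;/code&gt;/&lt;code&gt;failed&lt;/code&gt; values are illustrative assumptions here, not the documented contract; check the repository examples for the real one.&lt;/p&gt;

```python
import time

def poll_until_done(fetch_status, interval=2.0, max_attempts=30):
    """Poll until a research run reports a terminal state.

    fetch_status: a zero-argument callable returning a dict with a
    "status" key, e.g. lambda: session.get(status_url).json().
    The "completed"/"failed" values are assumptions for illustration.
    """
    for _ in range(max_attempts):
        state = fetch_status()
        if state.get("status") in ("completed", "failed"):
            return state
        time.sleep(interval)
    raise TimeoutError("research run did not finish in time")
```

&lt;p&gt;You would call it after the research request returns, passing a lambda that fetches the status URL for the returned research ID.&lt;/p&gt;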




&lt;h2&gt;
  
  
  Enterprise / RAG Integration
&lt;/h2&gt;

&lt;p&gt;If you already have a vector store or internal knowledge base, LDR can search it as one of its sources via LangChain retrievers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;local_deep_research.api&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;quick_summary&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;quick_summary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What are our current deployment procedures for the payments service?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;retrievers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;internal_kb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;your_langchain_retriever&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;search_tool&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;internal_kb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It supports FAISS, Chroma, Pinecone, Weaviate, Elasticsearch, and anything LangChain-compatible. This is where the tool gets interesting for teams — you can combine live web search with your own internal documents in a single research pass.&lt;/p&gt;
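&lt;p&gt;If you just want to see the wiring before standing up a real vector store, any object exposing a LangChain-style retriever interface will do. The toy class below is a hypothetical stand-in (the class name and keyword-matching logic are mine, not part of LDR); in practice you would pass a real FAISS or Chroma retriever instead:&lt;/p&gt;

```python
class KeywordRetriever:
    """Hypothetical stand-in for a LangChain retriever.

    Real retrievers (FAISS, Chroma, ...) expose invoke(); this one
    fakes it with naive keyword overlap so the plumbing is visible.
    """

    def __init__(self, docs):
        self.docs = docs

    def invoke(self, query):
        terms = set(query.lower().split())
        return [d for d in self.docs if terms.intersection(d.lower().split())]

    # legacy LangChain method name, kept for compatibility
    def get_relevant_documents(self, query):
        return self.invoke(query)

retriever = KeywordRetriever([
    "Payments service deploys via the release branch pipeline.",
    "The search cluster runs on three replicas.",
])
```

&lt;p&gt;Passing &lt;code&gt;retriever&lt;/code&gt; where &lt;code&gt;your_langchain_retriever&lt;/code&gt; appears in the snippet exercises the same code path a production vector store would.&lt;/p&gt;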




&lt;h2&gt;
  
  
  Search Sources Available
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Free (no API key needed):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;arXiv, PubMed, Semantic Scholar (academic)&lt;/li&gt;
&lt;li&gt;Wikipedia, SearXNG (general web)&lt;/li&gt;
&lt;li&gt;GitHub (technical)&lt;/li&gt;
&lt;li&gt;The Guardian, Wikinews (news)&lt;/li&gt;
&lt;li&gt;Wayback Machine (historical)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Premium (API key required):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tavily (AI-optimized search)&lt;/li&gt;
&lt;li&gt;Google (via SerpAPI or Programmable Search Engine)&lt;/li&gt;
&lt;li&gt;Brave Search&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Custom:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your local documents&lt;/li&gt;
&lt;li&gt;Any LangChain-compatible retriever&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Supported LLMs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Local via Ollama:&lt;/strong&gt; Llama 3, Mistral, Gemma, DeepSeek, and anything Ollama supports. No API costs, and processing stays on your machine. Note that search queries still hit the web if you use web search engines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud:&lt;/strong&gt; OpenAI (GPT-4, GPT-4.1-mini), Anthropic (Claude 3), Google (Gemini), and 100+ models via OpenRouter.&lt;/p&gt;

&lt;p&gt;The README benchmarks show GPT-4.1-mini + SearXNG hitting 90-95% on SimpleQA. Gemini 2.0 Flash reached 82% in a single test run. Results vary by query type and configuration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Security Model
&lt;/h2&gt;

&lt;p&gt;For a self-hosted tool that holds API keys and research data, the security story matters.&lt;/p&gt;

&lt;p&gt;Each user gets an isolated SQLCipher database encrypted with AES-256. The project uses a zero-knowledge design — there is no password recovery mechanism, which means even server admins cannot read user data. Docker images are signed with Cosign and include SLSA provenance attestations. The CI pipeline runs CodeQL, Semgrep, OWASP ZAP, Trivy, Gitleaks, and OSV-Scanner on every release.&lt;/p&gt;

&lt;p&gt;If you are running this fully locally with Ollama and SearXNG, nothing leaves your machine.&lt;/p&gt;




&lt;h2&gt;
  
  
  Is It Worth Trying?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Yes, if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You regularly do research that requires synthesizing multiple sources.&lt;/li&gt;
&lt;li&gt;You need to search across private documents alongside the web.&lt;/li&gt;
&lt;li&gt;Privacy matters — you cannot send queries to commercial APIs.&lt;/li&gt;
&lt;li&gt;You want to build up a searchable knowledge base over time.&lt;/li&gt;
&lt;li&gt;You are building a research-augmented application and want a local-first backend.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Maybe not, if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need simple Q&amp;amp;A. This is heavyweight for that.&lt;/li&gt;
&lt;li&gt;You are on limited hardware. Running a local LLM plus SearXNG plus the app itself adds up. A GPU helps significantly.&lt;/li&gt;
&lt;li&gt;You want a zero-config experience. The Docker path is smooth, but getting the full setup — GPU passthrough, encryption, custom models — takes some tinkering.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The SQLCipher setup is the roughest edge. Docker sidesteps it cleanly, but the pip path has caught people out. The project documents it well, but plan for some back-and-forth if you go that route.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Reference
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Repo&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;github.com/LearningCircuit/local-deep-research&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Python (80%), JavaScript (14%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Install&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Docker (recommended) or pip&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Local LLM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Ollama&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Local Search&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SearXNG&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Database&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SQLCipher (AES-256)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;REST + Python client&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;WebSocket&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes (live progress)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Benchmark&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;90-95% SimpleQA (GPT-4.1-mini + SearXNG)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/LearningCircuit/local-deep-research/wiki/Installation" rel="noopener noreferrer"&gt;Installation Wiki&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>DeepSeek-TUI: Run a DeepSeek Coding Agent Directly in Your Terminal</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Wed, 06 May 2026 16:11:06 +0000</pubDate>
      <link>https://dev.to/arshtechpro/deepseek-tui-run-a-deepseek-coding-agent-directly-in-your-terminal-59ij</link>
      <guid>https://dev.to/arshtechpro/deepseek-tui-run-a-deepseek-coding-agent-directly-in-your-terminal-59ij</guid>
      <description>&lt;p&gt;If you have spent time using AI coding tools through a browser or GUI, you already know the friction. You switch windows, lose context, and your workflow gets interrupted. DeepSeek-TUI removes that friction by bringing a full DeepSeek coding agent into your terminal.&lt;/p&gt;

&lt;p&gt;This article walks you through what DeepSeek-TUI is, what you can do with it, and exactly how to get it running.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is DeepSeek-TUI
&lt;/h2&gt;

&lt;p&gt;DeepSeek-TUI is an open-source terminal user interface that connects to DeepSeek's language models and acts as an agentic coding assistant. It is written in Rust and installable via npm, which means you do not need a Rust toolchain to get started.&lt;/p&gt;

&lt;p&gt;The key thing to understand is that this is not just a chat interface. It is an agent — meaning it can take actions on your behalf: edit files, run shell commands, make git commits, search the web, and interact with external services through MCP (Model Context Protocol) servers.&lt;/p&gt;

&lt;p&gt;Everything runs inside your terminal. No browser tab. No Electron app. Your existing workflow stays intact.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three Modes of Operation
&lt;/h2&gt;

&lt;p&gt;DeepSeek-TUI has three visible modes you can cycle through with Tab or Shift+Tab:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan mode&lt;/strong&gt; — Before the agent starts making changes, it shows you a plan. You review and approve before anything happens. Good for unfamiliar or risky tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent mode&lt;/strong&gt; — The default. The agent works interactively, uses tools step by step, and asks for approval on sensitive actions like running shell commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YOLO mode&lt;/strong&gt; — Auto-approves all tool use. Useful in isolated, trusted environments where you want fully autonomous operation without confirmation prompts.&lt;/p&gt;

&lt;p&gt;You can also set a default mode in your config, or launch straight into YOLO with &lt;code&gt;deepseek-tui --yolo&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You need Node.js installed. That is it for the npm path.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 — Install via npm
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; deepseek-tui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the quickest way and works on macOS, Linux, and Windows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 — Get a DeepSeek API Key
&lt;/h3&gt;

&lt;p&gt;Go to &lt;a href="https://platform.deepseek.com" rel="noopener noreferrer"&gt;platform.deepseek.com&lt;/a&gt; and create an account. Generate an API key from the dashboard. DeepSeek's API pricing is notably low compared to other providers, which makes this tool cost-effective even for heavy use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 — Set Your API Key
&lt;/h3&gt;

&lt;p&gt;You have two options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option A — Interactive login (recommended for first-time setup)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prompts you for your API key and saves it to &lt;code&gt;~/.deepseek/config.toml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option B — Environment variable (useful for CI or scripting)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DEEPSEEK_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_key_here"&lt;/span&gt; deepseek-tui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 — Launch
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On first launch, if no API key is configured, it will prompt you for one automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify Your Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui doctor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This runs a diagnostics check: API key presence, model configuration, MCP status, shell tool availability, and API connectivity. If something is off, it tells you exactly what.&lt;/p&gt;




&lt;h2&gt;
  
  
  Alternative Installation Methods
&lt;/h2&gt;

&lt;p&gt;If you prefer to install from source or via Rust's package manager:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Via cargo (requires Rust 1.85 or newer):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo &lt;span class="nb"&gt;install &lt;/span&gt;deepseek-tui &lt;span class="nt"&gt;--locked&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;deepseek-tui-cli &lt;span class="nt"&gt;--locked&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Build from source:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Hmbown/DeepSeek-TUI.git
&lt;span class="nb"&gt;cd &lt;/span&gt;DeepSeek-TUI
cargo &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--path&lt;/span&gt; crates/tui &lt;span class="nt"&gt;--locked&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Basic Usage
&lt;/h2&gt;

&lt;p&gt;Once running, you interact with the agent through the TUI. Here are the most useful commands to know:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui                                   &lt;span class="c"&gt;# start the interactive TUI&lt;/span&gt;
deepseek-tui &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"explain this codebase"&lt;/span&gt;        &lt;span class="c"&gt;# one-shot prompt, no interactive UI&lt;/span&gt;
deepseek-tui &lt;span class="nt"&gt;--yolo&lt;/span&gt;                            &lt;span class="c"&gt;# start in YOLO (auto-approve) mode&lt;/span&gt;
deepseek-tui models                            &lt;span class="c"&gt;# list available DeepSeek models&lt;/span&gt;
deepseek-tui serve &lt;span class="nt"&gt;--http&lt;/span&gt;                      &lt;span class="c"&gt;# run as an HTTP/SSE API server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the TUI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;F1&lt;/code&gt; opens help&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Ctrl+K&lt;/code&gt; opens the command palette&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Esc&lt;/code&gt; backs out of the current action&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Tab&lt;/code&gt; / &lt;code&gt;Shift+Tab&lt;/code&gt; cycles between Plan, Agent, and YOLO modes&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/config&lt;/code&gt; opens the interactive config editor&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/compact&lt;/code&gt; manually compresses session history when context gets long&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To add local files as context, type &lt;code&gt;@path/to/file&lt;/code&gt; in the composer. To attach an image from the clipboard, use &lt;code&gt;Ctrl+V&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;The config file lives at &lt;code&gt;~/.deepseek/config.toml&lt;/code&gt;. A minimal working config looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;api_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"your_deepseek_api_key"&lt;/span&gt;
&lt;span class="py"&gt;default_text_model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"deepseek-v4-pro"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Profiles
&lt;/h3&gt;

&lt;p&gt;If you work with multiple providers or API keys, profiles let you switch between them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;api_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"personal_key"&lt;/span&gt;
&lt;span class="py"&gt;default_text_model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"deepseek-v4-pro"&lt;/span&gt;

&lt;span class="nn"&gt;[profiles.work]&lt;/span&gt;
&lt;span class="py"&gt;api_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"work_key"&lt;/span&gt;
&lt;span class="py"&gt;base_url&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"https://api.deepseek.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switch profiles on launch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui &lt;span class="nt"&gt;--profile&lt;/span&gt; work
&lt;span class="c"&gt;# or&lt;/span&gt;
&lt;span class="nv"&gt;DEEPSEEK_PROFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;work deepseek-tui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Environment Variables
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEEPSEEK_API_KEY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Your API key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEEPSEEK_MODEL&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Override the default model for one run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEEPSEEK_BASE_URL&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Point to a custom endpoint&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEEPSEEK_PROFILE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Select a named profile&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEEPSEEK_SANDBOX_MODE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Control file access: &lt;code&gt;read-only&lt;/code&gt;, &lt;code&gt;workspace-write&lt;/code&gt;, &lt;code&gt;danger-full-access&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DEEPSEEK_APPROVAL_POLICY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tool approval behavior: &lt;code&gt;on-request&lt;/code&gt;, &lt;code&gt;untrusted&lt;/code&gt;, &lt;code&gt;never&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Supported Providers
&lt;/h3&gt;

&lt;p&gt;Beyond DeepSeek's own API, you can point the tool at other providers that host DeepSeek models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;deepseek&lt;/code&gt; — Default, uses &lt;code&gt;https://api.deepseek.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nvidia-nim&lt;/code&gt; — NVIDIA's hosted NIM endpoints&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fireworks&lt;/code&gt; — Fireworks AI&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sglang&lt;/code&gt; — Self-hosted, defaults to &lt;code&gt;http://localhost:30000/v1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;openrouter&lt;/code&gt; — OpenRouter&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;novita&lt;/code&gt; — Novita AI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set the provider in your config or via &lt;code&gt;DEEPSEEK_PROVIDER&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  MCP Server Integration
&lt;/h2&gt;

&lt;p&gt;MCP (Model Context Protocol) lets you connect external tools and services to the agent. DeepSeek-TUI reads MCP configuration from &lt;code&gt;~/.deepseek/mcp.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To scaffold the MCP directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui mcp init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once configured, any MCP server listed in that file becomes available as a tool the agent can call. This is how you would connect databases, custom APIs, or other external systems to the agent's toolset.&lt;/p&gt;
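&lt;p&gt;MCP client configs conventionally take the form of an &lt;code&gt;mcpServers&lt;/code&gt; map from a server name to the command that launches it. The entry below is a sketch using the real &lt;code&gt;@modelcontextprotocol/server-filesystem&lt;/code&gt; npm package; the path argument is a placeholder, and the exact schema DeepSeek-TUI accepts may differ, so verify against the output of &lt;code&gt;deepseek-tui mcp init&lt;/code&gt;:&lt;/p&gt;

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/project"]
    }
  }
}
```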




&lt;h2&gt;
  
  
  Feature Flags
&lt;/h2&gt;

&lt;p&gt;You can enable or disable individual capabilities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[features]&lt;/span&gt;
&lt;span class="py"&gt;shell_tool&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;subagents&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;web_search&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;apply_patch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="py"&gt;mcp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or override for a single session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui &lt;span class="nt"&gt;--enable&lt;/span&gt; web_search
deepseek-tui &lt;span class="nt"&gt;--disable&lt;/span&gt; subagents
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see the current state of all flags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;deepseek-tui features list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Current Models
&lt;/h2&gt;

&lt;p&gt;DeepSeek-TUI defaults to &lt;code&gt;deepseek-v4-pro&lt;/code&gt;. Both current public models have 1M context windows and support thinking mode:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;deepseek-v4-pro&lt;/code&gt; — Full capability model, default&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deepseek-v4-flash&lt;/code&gt; — Faster, lighter variant&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The legacy aliases &lt;code&gt;deepseek-chat&lt;/code&gt; and &lt;code&gt;deepseek-reasoner&lt;/code&gt; still work but map to &lt;code&gt;deepseek-v4-flash&lt;/code&gt;. Run &lt;code&gt;deepseek-tui models&lt;/code&gt; to see live model IDs from your configured endpoint.&lt;/p&gt;




&lt;h2&gt;
  
  
  Practical Tips
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Plan mode for anything that touches production.&lt;/strong&gt; It forces a review step before the agent starts modifying files. Five seconds of reading a plan is worth it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run &lt;code&gt;doctor&lt;/code&gt; after any config change.&lt;/strong&gt; It catches misconfiguration before you need it to work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;@file&lt;/code&gt; references liberally.&lt;/strong&gt; The more context you give the agent up front, the fewer clarification rounds you need.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set &lt;code&gt;sandbox_mode = "workspace-write"&lt;/code&gt; for normal development.&lt;/strong&gt; This restricts the agent to your project directory, which is a sensible default. Use &lt;code&gt;danger-full-access&lt;/code&gt; only when you explicitly need broader access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;--no-alt-screen&lt;/code&gt; if you want scrollback.&lt;/strong&gt; By default the TUI renders on an alternate screen, so output vanishes when it exits. Running &lt;code&gt;deepseek-tui --no-alt-screen&lt;/code&gt; keeps output in your normal terminal buffer so you can scroll back through it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Repository: &lt;a href="https://github.com/Hmbown/DeepSeek-TUI" rel="noopener noreferrer"&gt;github.com/Hmbown/DeepSeek-TUI&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>deepseek</category>
      <category>rust</category>
      <category>programming</category>
    </item>
    <item>
      <title>Zig: The Honest Systems Language You Have Been Ignoring</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Thu, 30 Apr 2026 12:10:34 +0000</pubDate>
      <link>https://dev.to/arshtechpro/zig-the-honest-systems-language-you-have-been-ignoring-45ei</link>
      <guid>https://dev.to/arshtechpro/zig-the-honest-systems-language-you-have-been-ignoring-45ei</guid>
      <description>&lt;p&gt;&lt;em&gt;A practical introduction for developers who want control without chaos&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Zig?
&lt;/h2&gt;

&lt;p&gt;If you have been writing C for years and quietly resenting it, or if you tried Rust and got intimidated by the borrow checker on day two, Zig might be the language you have been waiting for.&lt;/p&gt;

&lt;p&gt;Zig is a general-purpose systems programming language created by Andrew Kelley in 2016. It is free, open-source (MIT licensed), and designed with one clear ambition: be a better C. Not a replacement for Rust, not a competitor to Go -- a sharper, more honest version of C.&lt;/p&gt;

&lt;p&gt;The official description from the Zig team sums it up well: Zig is a language and toolchain for "maintaining robust, optimal and reusable software." It does not introduce new paradigms. It does not hide things from you. It just removes the parts of C that have been quietly ruining your week for decades.&lt;/p&gt;

&lt;p&gt;As of April 2026, Zig sits at position 39 on the TIOBE Index with a 0.31% rating -- small but growing, with real production usage. Bun, the popular JavaScript runtime, is written in Zig. So is a Sega Dreamcast emulator called Deecy, and a Wayland compositor called River. The community is small and focused, and that is part of its appeal.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Philosophy: No Hidden Anything
&lt;/h2&gt;

&lt;p&gt;The most important thing to understand about Zig is its stance on transparency.&lt;/p&gt;

&lt;p&gt;Zig has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No hidden control flow&lt;/li&gt;
&lt;li&gt;No hidden memory allocations&lt;/li&gt;
&lt;li&gt;No preprocessor&lt;/li&gt;
&lt;li&gt;No macros&lt;/li&gt;
&lt;li&gt;No operator overloading&lt;/li&gt;
&lt;li&gt;No exceptions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a line of Zig code does not look like it calls a function, it does not call a function. That sounds obvious until you spend a morning debugging a C++ program where &lt;code&gt;a + b&lt;/code&gt; secretly calls an overloaded operator that allocates memory.&lt;/p&gt;

&lt;p&gt;In Zig, if you write &lt;code&gt;foo()&lt;/code&gt; and then &lt;code&gt;bar()&lt;/code&gt;, those two functions are called, in that order, guaranteed. No exceptions swallowing control flow, no hidden allocations, no surprises.&lt;/p&gt;

&lt;p&gt;The entire Zig syntax is defined in a 580-line PEG grammar file. That is the whole language. For comparison, the C++ grammar is notoriously enormous and context-dependent. Zig is designed so that a maintainer who does not know Zig deeply can still read and debug Zig code.&lt;/p&gt;




&lt;h2&gt;
  
  
  Merits: What Zig Does Well
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Explicit Error Handling (No Exceptions)
&lt;/h3&gt;

&lt;p&gt;Zig does not have exceptions. Instead, functions that can fail return error union types. You handle errors at the call site, explicitly, every time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"data.txt"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;try&lt;/code&gt; keyword means: if this fails, propagate the error up. If you want to handle it yourself, use &lt;code&gt;catch&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"data.txt"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: {}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are no silent failures. Every error path is visible in the code. This is one of the features most frequently praised by developers coming from C.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Manual Memory Management Without the Foot-guns
&lt;/h3&gt;

&lt;p&gt;Zig requires you to manage memory yourself -- there is no garbage collector, no runtime, and no automatic cleanup. But unlike C, Zig gives you allocators as first-class objects.&lt;/p&gt;

&lt;p&gt;Instead of calling &lt;code&gt;malloc&lt;/code&gt; directly, you pass an allocator into your functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;@import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"std"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;gpa&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;heap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GeneralPurposeAllocator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{}){};&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="mi"&gt;_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gpa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;deinit&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;allocator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gpa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;buffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;alloc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c"&gt;// use buffer...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern makes testing much easier -- you can swap in a testing allocator that detects leaks automatically. The &lt;code&gt;defer&lt;/code&gt; keyword ensures cleanup happens when the scope exits, which removes the classic C problem of forgetting to free memory before every return path.&lt;/p&gt;
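&lt;p&gt;As a minimal sketch, leak detection with the testing allocator looks like this -- &lt;code&gt;std.testing.allocator&lt;/code&gt; fails the test if anything allocated inside it is not freed:&lt;/p&gt;

```zig
const std = @import("std");

test "no leaks" {
    const allocator = std.testing.allocator;
    const buffer = try allocator.alloc(u8, 64);
    // Removing this defer makes the test fail with a leak report.
    defer allocator.free(buffer);
}
```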




&lt;h3&gt;
  
  
  3. &lt;code&gt;comptime&lt;/code&gt;: Compile-Time Execution
&lt;/h3&gt;

&lt;p&gt;This is Zig's most unusual and powerful feature. Instead of macros or templates, Zig uses &lt;code&gt;comptime&lt;/code&gt; -- a keyword that lets ordinary Zig code run at compile time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;comptime&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Works for any numeric type&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;i32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;f64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;3.14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;2.71&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You are not writing a macro. You are writing normal Zig code that the compiler evaluates during compilation. You can inspect types, loop over struct fields, generate code -- all in plain Zig syntax. No template metaprogramming arcana required.&lt;/p&gt;
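&lt;p&gt;For example, looping over the fields of a struct at compile time is plain Zig. A sketch (&lt;code&gt;Point&lt;/code&gt; and &lt;code&gt;printFields&lt;/code&gt; are illustrative names):&lt;/p&gt;

```zig
const std = @import("std");

const Point = struct { x: i32, y: i32 };

// `inline for` unrolls the loop at compile time,
// once per field of the struct type T.
fn printFields(comptime T: type) void {
    inline for (std.meta.fields(T)) |field| {
        std.debug.print("{s}\n", .{field.name});
    }
}

pub fn main() void {
    printFields(Point);
}
```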




&lt;h3&gt;
  
  
  4. Built-In Cross-Compilation
&lt;/h3&gt;

&lt;p&gt;Cross-compiling in C typically means fighting with toolchains, sysroots, and autoconf scripts. In Zig, it is a flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zig build-exe main.zig &lt;span class="nt"&gt;-target&lt;/span&gt; aarch64-linux-gnu
zig build-exe main.zig &lt;span class="nt"&gt;-target&lt;/span&gt; x86_64-windows-gnu
zig build-exe main.zig &lt;span class="nt"&gt;-target&lt;/span&gt; wasm32-wasi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zig ships with its own cross-compilation toolchain. This is a genuine killer feature for embedded, IoT, WebAssembly, and server-side work where you need to build for multiple targets from a single machine.&lt;/p&gt;




&lt;h3&gt;
  
  
  5. C Interoperability
&lt;/h3&gt;

&lt;p&gt;Zig can include C headers directly, with no bindings, no glue code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;@cImport&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nb"&gt;@cInclude&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"stdio.h"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="mi"&gt;_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello from C&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also compile Zig code as a C library and call it from C. This makes Zig a practical incremental migration path for projects with large existing C codebases.&lt;/p&gt;
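&lt;p&gt;Going the other direction is a one-keyword change -- &lt;code&gt;export&lt;/code&gt; gives a function the C calling convention and a linkable symbol. A minimal sketch (the file and function names are illustrative):&lt;/p&gt;

```zig
// Build with `zig build-lib mathlib.zig`, then declare
// `extern int add(int a, int b);` on the C side and link
// against the resulting library.
export fn add(a: i32, b: i32) i32 {
    return a + b;
}
```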




&lt;h3&gt;
  
  
  6. Four Build Modes
&lt;/h3&gt;

&lt;p&gt;Zig separates safety from performance explicitly, giving you four build modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Debug&lt;/code&gt; -- safety checks on, slow, verbose panics&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ReleaseSafe&lt;/code&gt; -- safety checks on, optimized&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ReleaseFast&lt;/code&gt; -- safety checks off, maximum performance&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ReleaseSmall&lt;/code&gt; -- optimized for binary size&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You pick the trade-off consciously. There is no magic mode that guesses for you.&lt;/p&gt;
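&lt;p&gt;Assuming a recent 0.x toolchain, the mode is selected with the &lt;code&gt;-O&lt;/code&gt; flag when compiling directly:&lt;/p&gt;

```shell
# Debug is the default when no -O flag is given.
zig build-exe main.zig -O ReleaseSafe
zig build-exe main.zig -O ReleaseFast
zig build-exe main.zig -O ReleaseSmall
```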




&lt;h3&gt;
  
  
  7. &lt;code&gt;defer&lt;/code&gt; for Guaranteed Cleanup
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;defer&lt;/code&gt; keyword runs a statement when the current scope exits, regardless of how it exits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;openFileAbsolute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/etc/hosts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{});&lt;/span&gt;
&lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c"&gt;// file.close() is called here no matter what&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This removes entire categories of resource leak bugs common in C.&lt;/p&gt;
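&lt;p&gt;A related keyword, &lt;code&gt;errdefer&lt;/code&gt;, runs its statement only when the scope exits with an error. A sketch (the function and sizes are illustrative):&lt;/p&gt;

```zig
const std = @import("std");

fn makeBuffers(allocator: std.mem.Allocator) ![2][]u8 {
    const first = try allocator.alloc(u8, 256);
    // Runs only if a later error causes an early return,
    // so `first` is not leaked when the second alloc fails.
    errdefer allocator.free(first);

    const second = try allocator.alloc(u8, 256);
    return .{ first, second };
}
```

On success, neither `errdefer` nor a leak occurs and the caller owns both buffers; on failure midway, the partial allocation is released automatically.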




&lt;h2&gt;
  
  
  Demerits: Where Zig Falls Short
&lt;/h2&gt;

&lt;p&gt;In fairness, Zig is a young language, and it comes with real limitations.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Not at 1.0 Yet
&lt;/h3&gt;

&lt;p&gt;As of 2026, Zig is still pre-1.0. The language itself changes between versions. Code written for 0.11 may need updates to compile on 0.13. If you are building production software that needs to stay stable for five years, this is a real concern. The core team is actively working toward a stable release, but it is not there yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Small Ecosystem
&lt;/h3&gt;

&lt;p&gt;The standard library is usable but not comprehensive. There is no official package repository -- packages are URLs pointing to compressed archives.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. No Async (For Now)
&lt;/h3&gt;

&lt;p&gt;Async support existed in earlier versions but was removed. The team found it needed to be redesigned from the ground up to work correctly with the native backend. It is coming back, but it is not in the current stable releases. If your project depends heavily on async I/O, this matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Smaller Community
&lt;/h3&gt;

&lt;p&gt;A smaller community means fewer Stack Overflow answers, fewer tutorials, fewer examples to copy from.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Get Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On macOS with Homebrew&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;zig

&lt;span class="c"&gt;# On Linux (download from ziglang.org)&lt;/span&gt;
wget https://ziglang.org/download/0.13.0/zig-linux-x86_64-0.13.0.tar.xz
&lt;span class="nb"&gt;tar &lt;/span&gt;xf zig-linux-x86_64-0.13.0.tar.xz
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/zig-linux-x86_64-0.13.0

&lt;span class="c"&gt;# Verify&lt;/span&gt;
zig version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Hello World
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;@import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"std"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello, Zig!&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zig run hello.zig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compile it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zig build-exe hello.zig
./hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  A More Realistic Example: Reading a File
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;@import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"std"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;gpa&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;heap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GeneralPurposeAllocator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{}){};&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="mi"&gt;_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gpa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;deinit&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;allocator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gpa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;openFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"notes.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{});&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;contents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readToEndAlloc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"{s}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice: every operation that can fail uses &lt;code&gt;try&lt;/code&gt;. Every allocation has a matching &lt;code&gt;defer&lt;/code&gt; to free it. Every resource has a matching &lt;code&gt;defer&lt;/code&gt; to close it. The flow is explicit from top to bottom.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing a Test
&lt;/h3&gt;

&lt;p&gt;Zig has a built-in test runner, no external library required:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;@import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"std"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;i32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;i32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;i32&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;test&lt;/span&gt; &lt;span class="s"&gt;"add works correctly"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;testing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;expectEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;@as&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;i32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zig &lt;span class="nb"&gt;test &lt;/span&gt;math.zig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  How Zig Compares to Its Neighbors
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;C&lt;/th&gt;
&lt;th&gt;Rust&lt;/th&gt;
&lt;th&gt;Zig&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory management&lt;/td&gt;
&lt;td&gt;Manual&lt;/td&gt;
&lt;td&gt;Ownership/borrow&lt;/td&gt;
&lt;td&gt;Manual with allocators&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error handling&lt;/td&gt;
&lt;td&gt;errno / return codes&lt;/td&gt;
&lt;td&gt;Result type&lt;/td&gt;
&lt;td&gt;Error unions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Generics&lt;/td&gt;
&lt;td&gt;Macros / void*&lt;/td&gt;
&lt;td&gt;Traits&lt;/td&gt;
&lt;td&gt;comptime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cross-compilation&lt;/td&gt;
&lt;td&gt;Painful&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;First-class&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C interop&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Requires FFI&lt;/td&gt;
&lt;td&gt;Direct (cImport)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning curve&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;Steep&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ecosystem maturity&lt;/td&gt;
&lt;td&gt;Extensive&lt;/td&gt;
&lt;td&gt;Growing&lt;/td&gt;
&lt;td&gt;Early&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stability&lt;/td&gt;
&lt;td&gt;Stable&lt;/td&gt;
&lt;td&gt;Stable&lt;/td&gt;
&lt;td&gt;Pre-1.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Zig sits between C and Rust. It gives you C's performance and directness without C's preprocessor mess, and it avoids Rust's complexity without pretending the safety tradeoffs don't exist. You still have to think carefully about memory. You just have better tools to do it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Should Use Zig Today
&lt;/h2&gt;

&lt;p&gt;Zig is a good choice right now if you are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing embedded or low-level systems software and tired of C's preprocessor and opaque error handling&lt;/li&gt;
&lt;li&gt;Building tools or runtimes where cross-compilation matters (WebAssembly, IoT, CLI tools)&lt;/li&gt;
&lt;li&gt;Incrementally migrating a C codebase and want a language that interoperates directly&lt;/li&gt;
&lt;li&gt;Exploring language design and compilers -- Zig's internals are genuinely instructive&lt;/li&gt;
&lt;li&gt;The kind of developer who reads source code instead of waiting for Stack Overflow answers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is probably not the right choice today if you need a stable production language for a team unfamiliar with systems programming, or if you need a rich ecosystem of third-party libraries out of the box.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Zig's design philosophy can be summarized in one sentence: if it isn't written, it doesn't happen. No hidden allocations, no hidden control flow, no hidden anything. That philosophy resonates with a specific kind of developer -- the one who wants to know exactly what the machine is doing.&lt;/p&gt;

&lt;p&gt;It is not the flashiest language. It does not have a mascot with a marketing team behind it. But the developers who try it tend to keep using it, and the projects built with it -- Bun being the most visible example -- demonstrate that it can produce fast, real software.&lt;/p&gt;

&lt;p&gt;If you have ever thought "I wish C just worked better," give Zig an afternoon. You might not put it down.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Standard library reference: &lt;a href="https://ziglang.org/documentation/master/std/" rel="noopener noreferrer"&gt;ziglang.org/documentation&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>c</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Claude Code Routines: Put Your AI Agent on Autopilot</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Thu, 30 Apr 2026 04:39:20 +0000</pubDate>
      <link>https://dev.to/arshtechpro/claude-code-routines-put-your-ai-agent-on-autopilot-51d1</link>
      <guid>https://dev.to/arshtechpro/claude-code-routines-put-your-ai-agent-on-autopilot-51d1</guid>
      <description>&lt;p&gt;If you have been using Claude Code interactively, you already know what it can do in a session. Routines take that further: you define a prompt once, wire up a trigger, and Claude Code runs autonomously on Anthropic-managed cloud infrastructure whenever that trigger fires. Your laptop can be off. The job still runs.&lt;/p&gt;

&lt;p&gt;This article walks through what routines are, how to set them up, and where the rough edges are during the current research preview.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is a Routine?
&lt;/h2&gt;

&lt;p&gt;A routine is a saved Claude Code configuration. It packages three things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A prompt (the instructions Claude runs each time)&lt;/li&gt;
&lt;li&gt;One or more GitHub repositories to work in&lt;/li&gt;
&lt;li&gt;A set of connectors (MCP servers for external services like Slack, Linear, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You attach one or more triggers to it. Each trigger type determines when a run starts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Schedule&lt;/strong&gt; - recurring on a cron cadence, or a one-off at a specific timestamp&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API&lt;/strong&gt; - an HTTP POST to a per-routine endpoint with a bearer token&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub event&lt;/strong&gt; - reactions to pull requests, releases, and similar repository events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single routine can combine all three. A PR review routine could run nightly, fire when your CD pipeline calls the endpoint, and react to every new pull request.&lt;/p&gt;

&lt;p&gt;Routines are available on Pro, Max, Team, and Enterprise plans with Claude Code on the web enabled. You manage them at &lt;code&gt;claude.ai/code/routines&lt;/code&gt; or from the CLI using &lt;code&gt;/schedule&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;The key property is that routines run without you present. A regular Claude Code session expects you to review, approve, and guide. A routine is designed for unattended, repeatable work tied to a clear outcome.&lt;/p&gt;

&lt;p&gt;That changes the use cases you can build:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backlog maintenance.&lt;/strong&gt; A schedule trigger runs every weeknight. The routine reads issues opened since the last run, applies labels, assigns owners based on the area of code referenced, and posts a summary to Slack. Your team starts the day with a groomed queue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alert triage.&lt;/strong&gt; Your monitoring tool POSTs to the routine's API endpoint when an error threshold is crossed. The routine pulls the stack trace, correlates it with recent commits, and opens a draft PR with a proposed fix. On-call reviews the PR instead of starting from a blank terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bespoke code review.&lt;/strong&gt; A GitHub trigger fires on &lt;code&gt;pull_request.opened&lt;/code&gt;. The routine applies your team's review checklist and leaves inline comments covering security, performance, and style issues. Human reviewers focus on design decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Library porting.&lt;/strong&gt; A trigger fires on merged PRs in one SDK repository. The routine ports the change to a parallel SDK in another language and opens a matching PR. Two libraries stay in sync without a human re-implementing each change.&lt;/p&gt;




&lt;h2&gt;
  
  
  Creating a Routine
&lt;/h2&gt;

&lt;p&gt;You can create routines from the web UI, the Claude Desktop app, or the CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  From the Web
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;code&gt;claude.ai/code/routines&lt;/code&gt; and click &lt;strong&gt;New routine&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Name it and write the prompt. The prompt is the most important part. Since the routine runs autonomously, the prompt must be self-contained and explicit about what to do and what success looks like.&lt;/li&gt;
&lt;li&gt;Select one or more GitHub repositories. Each is cloned fresh at the start of every run from the default branch.&lt;/li&gt;
&lt;li&gt;Pick a cloud environment (controls network access, environment variables, and setup scripts). A Default environment is provided.&lt;/li&gt;
&lt;li&gt;Add triggers (Schedule, GitHub event, or API).&lt;/li&gt;
&lt;li&gt;Review connectors. All your connected MCP connectors are included by default. Remove any the routine does not need.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After creation, click &lt;strong&gt;Run now&lt;/strong&gt; on the detail page to start an immediate run without waiting for a trigger.&lt;/p&gt;

&lt;h3&gt;
  
  
  From the CLI
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/schedule daily PR review at 9am
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/schedule &lt;span class="k"&gt;in &lt;/span&gt;2 weeks, open a cleanup PR that removes the feature flag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude walks through the same information the web form collects and saves the routine. The CLI supports scheduled triggers only. To add API or GitHub triggers, edit the routine on the web afterward.&lt;/p&gt;

&lt;p&gt;Useful CLI management commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/schedule list        &lt;span class="c"&gt;# see all routines&lt;/span&gt;
/schedule update      &lt;span class="c"&gt;# change a routine&lt;/span&gt;
/schedule run         &lt;span class="c"&gt;# trigger a routine immediately&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Configuring Triggers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Schedule Triggers
&lt;/h3&gt;

&lt;p&gt;Pick a preset: hourly, daily, weekdays, or weekly. Times are entered in your local timezone and converted automatically. Runs may start a few minutes after the scheduled time because starts are staggered, but the offset is consistent for each routine.&lt;/p&gt;

&lt;p&gt;For custom intervals (every two hours, first of each month), pick the closest preset in the form and then use &lt;code&gt;/schedule update&lt;/code&gt; in the CLI to set a specific cron expression. The minimum interval is one hour.&lt;/p&gt;
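&lt;p&gt;Assuming standard five-field cron syntax (minute, hour, day of month, month, day of week), the two custom intervals mentioned above would look like this:&lt;/p&gt;

```shell
# Every two hours, on the hour:
0 */2 * * *
# 09:00 on the first day of each month:
0 9 1 * *
```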

&lt;p&gt;&lt;strong&gt;One-off runs&lt;/strong&gt; fire the routine a single time at a specific timestamp, then auto-disable. Useful for cleanup after a rollout, follow-ups when an upstream change lands, or end-of-week summaries.&lt;/p&gt;

&lt;p&gt;One-off runs do not count against the daily routine run cap. They draw down your regular subscription usage like any other session.&lt;/p&gt;

&lt;h3&gt;
  
  
  API Triggers
&lt;/h3&gt;

&lt;p&gt;An API trigger gives the routine a dedicated HTTP endpoint. POSTing to it with the bearer token starts a new session and returns a session URL.&lt;/p&gt;

&lt;p&gt;To set one up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the routine for editing.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add another trigger&lt;/strong&gt; and choose &lt;strong&gt;API&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Copy the URL and click &lt;strong&gt;Generate token&lt;/strong&gt;. Store the token immediately - it is shown once and cannot be retrieved later.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is how to call the endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://api.anthropic.com/v1/claude_code/routines/trig_01ABCDEFGHJKLMNOPQRSTUVW/fire &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer sk-ant-oat01-xxxxx"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"anthropic-beta: experimental-cc-routine-2026-04-01"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"anthropic-version: 2023-06-01"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"text": "Sentry alert SEN-4521 fired in prod. Stack trace attached."}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;text&lt;/code&gt; field is optional freeform context passed alongside the saved prompt. It is not parsed - if you send JSON, the routine receives it as a literal string.&lt;/p&gt;

&lt;p&gt;A successful response looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"routine_fire"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"claude_code_session_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"session_01HJKLMNOPQRSTUVWXYZ"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"claude_code_session_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://claude.ai/code/session_01HJKLMNOPQRSTUVWXYZ"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the session URL to watch the run in real time, review changes, or continue the conversation manually.&lt;/p&gt;

&lt;p&gt;Note: The &lt;code&gt;/fire&lt;/code&gt; endpoint ships under the &lt;code&gt;experimental-cc-routine-2026-04-01&lt;/code&gt; beta header. Request and response shapes, rate limits, and token semantics may change. Breaking changes ship behind new dated beta header versions, with the two most recent previous versions remaining active during migration.&lt;/p&gt;

&lt;p&gt;Each routine gets its own token scoped to that routine only. To rotate or revoke it, return to the same modal and use &lt;strong&gt;Regenerate&lt;/strong&gt; or &lt;strong&gt;Revoke&lt;/strong&gt;.&lt;/p&gt;
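<p>If you would rather fire the trigger from Python, here is a standard-library sketch of the same request. The trigger ID and token are placeholders, as in the curl example above:<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight python"><code>import json
import urllib.request

FIRE_URL = "https://api.anthropic.com/v1/claude_code/routines/trig_01ABCDEFGHJKLMNOPQRSTUVW/fire"

def build_fire_request(token, text=None):
    """Build the POST request; text is optional freeform context."""
    payload = {"text": text} if text is not None else {}
    return urllib.request.Request(
        FIRE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer " + token,
            "anthropic-beta": "experimental-cc-routine-2026-04-01",
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Send it and print the session URL from the JSON response:
# with urllib.request.urlopen(build_fire_request("sk-ant-oat01-xxxxx")) as resp:
#     print(json.load(resp)["claude_code_session_url"])
</code></pre>

</div>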

&lt;h3&gt;
  
  
  GitHub Triggers
&lt;/h3&gt;

&lt;p&gt;GitHub triggers start a new session automatically on matching repository events. Each matching event starts its own independent session - there is no session reuse across events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported events:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;Triggers when&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pull request&lt;/td&gt;
&lt;td&gt;A PR is opened, closed, assigned, labeled, synchronized, or otherwise updated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Release&lt;/td&gt;
&lt;td&gt;A release is created, published, edited, or deleted&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can subscribe to a specific action (like &lt;code&gt;pull_request.opened&lt;/code&gt;) or to all actions in the category.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filters&lt;/strong&gt; let you narrow which events actually start a session:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Filter&lt;/th&gt;
&lt;th&gt;Matches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Author&lt;/td&gt;
&lt;td&gt;PR author's GitHub username&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Title&lt;/td&gt;
&lt;td&gt;PR title text&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Body&lt;/td&gt;
&lt;td&gt;PR description text&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Base branch&lt;/td&gt;
&lt;td&gt;Branch the PR targets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Head branch&lt;/td&gt;
&lt;td&gt;Branch the PR comes from&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Labels&lt;/td&gt;
&lt;td&gt;Labels applied to the PR&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Is draft&lt;/td&gt;
&lt;td&gt;Whether the PR is in draft state&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Is merged&lt;/td&gt;
&lt;td&gt;Whether the PR has been merged&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Operators: equals, contains, starts with, is one of, is not one of, matches regex.&lt;/p&gt;

&lt;p&gt;One thing to watch with regex: the &lt;code&gt;matches regex&lt;/code&gt; operator tests the entire field value, not a substring. To match any PR title containing &lt;code&gt;hotfix&lt;/code&gt;, write &lt;code&gt;.*hotfix.*&lt;/code&gt;. Without the surrounding &lt;code&gt;.*&lt;/code&gt;, it only matches a title that is exactly &lt;code&gt;hotfix&lt;/code&gt; and nothing else. For substring matching without regex syntax, use the &lt;code&gt;contains&lt;/code&gt; operator.&lt;/p&gt;
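<p>Assuming full-match semantics like Python's <code>re.fullmatch</code>, the difference looks like this:<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight python"><code>import re

title = "hotfix: rollback payments deploy"

# The bare pattern must match the entire title, so it fails here
assert re.fullmatch(r"hotfix", title) is None

# Surrounding .* makes it behave like a substring match
assert re.fullmatch(r".*hotfix.*", title) is not None
</code></pre>

</div>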

&lt;p&gt;&lt;strong&gt;Example filter combinations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auth module review: base branch &lt;code&gt;main&lt;/code&gt;, head branch contains &lt;code&gt;auth-provider&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Ready-for-review only: is draft is &lt;code&gt;false&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Label-gated backport: labels include &lt;code&gt;needs-backport&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Claude GitHub App must be installed on the repository; the trigger setup prompts you if it is not. Note that running &lt;code&gt;/web-setup&lt;/code&gt; in the CLI grants repository access for cloning but does not install the GitHub App and does not enable webhook delivery. GitHub triggers require the App specifically.&lt;/p&gt;

&lt;p&gt;During the research preview, GitHub webhook events are subject to per-routine and per-account hourly caps. Events beyond the limit are dropped until the window resets.&lt;/p&gt;




&lt;h2&gt;
  
  
  Branch Behavior and Safety
&lt;/h2&gt;

&lt;p&gt;By default, Claude can only push to branches prefixed with &lt;code&gt;claude/&lt;/code&gt;. This prevents routines from accidentally modifying protected or long-lived branches.&lt;/p&gt;

&lt;p&gt;To allow pushing to any branch (for example, if you need Claude to push directly to &lt;code&gt;main&lt;/code&gt; or a release branch), enable &lt;strong&gt;Allow unrestricted branch pushes&lt;/strong&gt; for that repository when creating or editing the routine. Scope this permission carefully.&lt;/p&gt;




&lt;h2&gt;
  
  
  Connectors
&lt;/h2&gt;

&lt;p&gt;Routines can use your connected MCP connectors to read from and write to external services during each run. A backlog routine might read from a Slack channel and create issues in Linear. An alert triage routine might read from PagerDuty and push to GitHub.&lt;/p&gt;

&lt;p&gt;All connectors are included by default when you create a routine. Remove any the routine does not actually need. During a run, Claude can use every tool from an included connector, including writes, without asking for permission.&lt;/p&gt;




&lt;h2&gt;
  
  
  Usage and Limits
&lt;/h2&gt;

&lt;p&gt;Routines draw down subscription usage the same way interactive sessions do. There is also a daily cap on how many routine runs can start per account.&lt;/p&gt;

&lt;p&gt;You can check your current consumption and remaining daily routine runs at &lt;code&gt;claude.ai/code/routines&lt;/code&gt; or &lt;code&gt;claude.ai/settings/usage&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When a routine hits the daily cap or your subscription usage limit, organizations with extra usage enabled can keep running routines on metered overage. Without extra usage, additional runs are rejected until the window resets. Enable extra usage from &lt;strong&gt;Settings &amp;gt; Billing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;One-off runs are exempt from the daily routine run allowance. They still consume your regular subscription usage.&lt;/p&gt;

&lt;p&gt;There is no per-run token count displayed directly in the routine detail view. Usage is reflected in your overall account consumption at the settings page rather than being itemized per routine run.&lt;/p&gt;




&lt;h2&gt;
  
  
  Managing Runs
&lt;/h2&gt;

&lt;p&gt;Click a routine in the list to open its detail page. From there you can view past runs, see what Claude did in each run, review changes, create a pull request, or continue the conversation.&lt;/p&gt;

&lt;p&gt;You can also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Run now&lt;/strong&gt; to start an immediate run&lt;/li&gt;
&lt;li&gt;Toggle the schedule on and off to pause or resume&lt;/li&gt;
&lt;li&gt;Edit the name, prompt, repositories, environment, connectors, or triggers&lt;/li&gt;
&lt;li&gt;Delete the routine (past sessions created by it remain in your session list)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Things to Know Before You Ship
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Routines run as you.&lt;/strong&gt; Anything a routine does through your connected GitHub identity or connectors is attributed to your account. Commits and pull requests carry your GitHub user. Slack messages and Linear tickets use your linked accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Routines belong to your individual account.&lt;/strong&gt; They are not shared with teammates and count against your account's daily run allowance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The prompt is everything.&lt;/strong&gt; The routine runs autonomously with no approval prompts. There is no permission-mode picker. Write prompts that are self-contained, explicit about the goal, and explicit about what success looks like.&lt;/p&gt;
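&lt;p&gt;As an illustration, a self-contained prompt for a dependency-review routine might read:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Review every open Dependabot PR in this repo. For each one, run the test
suite and read the changelog of the bumped package. If the tests pass and
the bump is a patch or minor version, leave an approving review; otherwise
leave a comment explaining what failed. Do not merge anything.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;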

&lt;p&gt;&lt;strong&gt;This is a research preview.&lt;/strong&gt; Behavior, limits, and the API surface may change. The &lt;code&gt;/fire&lt;/code&gt; endpoint ships under a dated beta header for exactly this reason.&lt;/p&gt;




&lt;h2&gt;
  
  
  Related Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/claude-code-on-the-web" rel="noopener noreferrer"&gt;Claude Code on the web reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/mcp" rel="noopener noreferrer"&gt;MCP connectors&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>agents</category>
      <category>programming</category>
    </item>
    <item>
      <title>Lemonade v10.3: Run Local LLMs, Image Gen, and Speech on Your Own GPU for Free</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Wed, 29 Apr 2026 20:09:26 +0000</pubDate>
      <link>https://dev.to/arshtechpro/lemonade-v103-run-local-llms-image-gen-and-speech-on-your-own-gpu-for-free-29ob</link>
      <guid>https://dev.to/arshtechpro/lemonade-v103-run-local-llms-image-gen-and-speech-on-your-own-gpu-for-free-29ob</guid>
      <description>&lt;p&gt;If you are building AI-powered apps and feeling the cost of cloud API bills — or the anxiety of sending user data off-device — Lemonade is worth your time.&lt;/p&gt;

&lt;p&gt;Lemonade is an open-source local AI server (3.7k stars, sponsored by AMD) that runs LLMs, image generation, speech-to-text, and text-to-speech entirely on your own hardware. It exposes a standard OpenAI-compatible API, so switching from a cloud provider means changing one URL. The project just shipped v10.3, its biggest release yet.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Lemonade?
&lt;/h2&gt;

&lt;p&gt;Lemonade installs as a system service and manages everything: model downloads, backend selection, and a unified REST endpoint at &lt;code&gt;http://localhost:13305/v1&lt;/code&gt;. Under the hood it wires together proven inference engines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;llama.cpp&lt;/strong&gt; for GGUF LLMs (Vulkan, ROCm, CPU, Metal)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OnnxRuntime GenAI / FastFlowLM&lt;/strong&gt; for NPU-accelerated FLM models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;whisper.cpp&lt;/strong&gt; for speech-to-text&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;stable-diffusion.cpp&lt;/strong&gt; for image generation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kokoro&lt;/strong&gt; for text-to-speech&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apps like n8n, VS Code GitHub Copilot, Open WebUI, Continue, OpenHands, and Dify already integrate with it out of the box via the standard OpenAI API.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is new in v10.3 (Latest Release)
&lt;/h2&gt;

&lt;p&gt;v10.3 is a landmark release with three headline changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Desktop app is now 10x smaller.&lt;/strong&gt; The app migrated from Electron to Tauri, a Rust-based cross-platform framework that uses the system's native webview instead of bundling Chromium. macOS and Windows binaries dropped from ~101-107 MB to ~7-9 MB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OmniRouter for true omni-modal chat.&lt;/strong&gt; The new OmniRouter unifies all backends — text, image, speech, vision — into a single OpenAI-compatible endpoint. You can interact with these modalities as tools in an agentic loop, making natural-language requests like "generate an image of X and then describe it" without gluing separate API calls together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROCm 7 support with multiple channels.&lt;/strong&gt; ROCm 7.2 stable, 7.12 preview, and TheRock nightly builds are all supported. The 7.12 preview is now the default.&lt;/p&gt;

&lt;p&gt;Other notable changes in v10.3:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Light mode theme added to the GUI&lt;/li&gt;
&lt;li&gt;Easy llama.cpp version pinning and auto-update&lt;/li&gt;
&lt;li&gt;AppImage removed for Linux; use the web app or Snap instead&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lemonade-server-minimal.msi&lt;/code&gt; deprecated (will be removed in a future release)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;amd_igpu&lt;/code&gt; and &lt;code&gt;amd_dgpu&lt;/code&gt; consolidated to &lt;code&gt;amd_gpu&lt;/code&gt; in the system-info endpoint&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nvidia_dgpu&lt;/code&gt; renamed to &lt;code&gt;nvidia_gpu&lt;/code&gt; for consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Recent release history at a glance
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Key headline&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;v10.3&lt;/strong&gt; (latest)&lt;/td&gt;
&lt;td&gt;OmniRouter, Tauri app (10x smaller), ROCm 7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;v10.2&lt;/td&gt;
&lt;td&gt;Embeddable Lemonade binary, Qwen Image models, OpenCode integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;v10.1&lt;/td&gt;
&lt;td&gt;Gemma 4 on GPU, super-resolution (Real-ESRGAN), new &lt;code&gt;lemonade&lt;/code&gt; CLI, default port changed to 13305&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;v10.0&lt;/td&gt;
&lt;td&gt;Claude Code integration, Fedora RPM installer, NPU on Linux, FLM multi-modal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;v9.4&lt;/td&gt;
&lt;td&gt;Qwen 3.5 on ROCm/Vulkan, redesigned app, image editing endpoint&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Windows
&lt;/h3&gt;

&lt;p&gt;Download &lt;code&gt;lemonade.msi&lt;/code&gt; from the &lt;a href="https://github.com/lemonade-sdk/lemonade/releases/latest" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;. This installs both the server and the Tauri desktop app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ubuntu / Debian (PPA)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;add-apt-repository ppa:lemonade-team/stable
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;lemonade-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Snap
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;lemonade-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  macOS (beta)
&lt;/h3&gt;

&lt;p&gt;Download the &lt;code&gt;.pkg&lt;/code&gt; from the &lt;a href="https://github.com/lemonade-sdk/lemonade/releases/latest" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 13305:13305 lemonade-sdk/lemonade-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  RPM (Fedora)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download the .rpm from the releases page&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;rpm &lt;span class="nt"&gt;-i&lt;/span&gt; lemonade-server-10.3.0.x86_64.rpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Your first model in 3 commands
&lt;/h2&gt;

&lt;p&gt;Note: as of v10.1, the CLI command is &lt;code&gt;lemonade&lt;/code&gt; (the old &lt;code&gt;lemonade-server&lt;/code&gt; CLI is deprecated).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# See which backends your hardware supports&lt;/span&gt;
lemonade recipes

&lt;span class="c"&gt;# Download a model&lt;/span&gt;
lemonade pull Gemma-3-4b-it-GGUF

&lt;span class="c"&gt;# Run it&lt;/span&gt;
lemonade run Gemma-3-4b-it-GGUF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Other modalities work the same way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Image generation&lt;/span&gt;
lemonade run SDXL-Turbo

&lt;span class="c"&gt;# Text-to-speech&lt;/span&gt;
lemonade run kokoro-v1

&lt;span class="c"&gt;# Speech-to-text&lt;/span&gt;
lemonade run Whisper-Large-v3-Turbo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Integrating with your app
&lt;/h2&gt;

&lt;p&gt;Because Lemonade exposes an OpenAI-compatible API, you swap it in with a single config change. The base URL as of v10.1 is &lt;code&gt;http://localhost:13305/v1&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Python
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:13305/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lemonade&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# required by the library, but unused by Lemonade
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Gemma-3-4b-it-GGUF&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize the benefits of running AI locally.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Node.js
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;baseURL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:13305/v1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lemonade&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Gemma-3-4b-it-GGUF&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello from Node.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lemonade also supports the &lt;strong&gt;Ollama API&lt;/strong&gt; and the &lt;strong&gt;Anthropic API&lt;/strong&gt; for apps that use those clients natively.&lt;/p&gt;




&lt;h2&gt;
  
  
  Supported hardware
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Hardware&lt;/th&gt;
&lt;th&gt;Backend&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AMD Radeon RDNA3 / RDNA4 GPU&lt;/td&gt;
&lt;td&gt;ROCm&lt;/td&gt;
&lt;td&gt;RX 7000/9000 series, Radeon PRO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ryzen AI MAX (Strix Halo)&lt;/td&gt;
&lt;td&gt;ROCm (gfx1151)&lt;/td&gt;
&lt;td&gt;Windows and Ubuntu&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Any Vulkan-capable GPU&lt;/td&gt;
&lt;td&gt;Vulkan (llamacpp)&lt;/td&gt;
&lt;td&gt;Broad compatibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AMD Ryzen AI NPU (XDNA2)&lt;/td&gt;
&lt;td&gt;FLM / FastFlowLM&lt;/td&gt;
&lt;td&gt;Windows and Linux (beta)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Any x86_64 CPU&lt;/td&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;Universally available, slower&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Apple Silicon&lt;/td&gt;
&lt;td&gt;Metal&lt;/td&gt;
&lt;td&gt;macOS beta&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Not sure what your machine supports? Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;lemonade recipes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It auto-detects your hardware and lists exactly which backends are available.&lt;/p&gt;




&lt;h2&gt;
  
  
  Embeddable Lemonade
&lt;/h2&gt;

&lt;p&gt;Since v10.2, you can bundle Lemonade as a portable binary inside your own application. Your users get local multi-modal AI without ever seeing a Lemonade installer or any Lemonade branding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Run lemond as a subprocess from your app&lt;/span&gt;
lemond ./
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full guide: &lt;a href="https://lemonade-server.ai/docs/embeddable/" rel="noopener noreferrer"&gt;lemonade-server.ai/docs/embeddable/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is particularly useful for desktop app developers who want to ship local AI features without taking on the complexity of packaging inference engines themselves.&lt;/p&gt;




&lt;h2&gt;
  
  
  App integrations
&lt;/h2&gt;

&lt;p&gt;Lemonade has a growing &lt;a href="https://lemonade-server.ai/marketplace" rel="noopener noreferrer"&gt;marketplace&lt;/a&gt; of first-class integrations. Highlights include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VS Code GitHub Copilot&lt;/strong&gt; — use local models for code completions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; — &lt;code&gt;lemonade launch claude&lt;/code&gt; wires it up natively (added in v10.0)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open WebUI&lt;/strong&gt; — a polished ChatGPT-style UI, running locally&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continue&lt;/strong&gt; — local AI coding assistant for VS Code and JetBrains&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;n8n&lt;/strong&gt; — automate workflows with local AI nodes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenHands&lt;/strong&gt; — local AI agent for software engineering tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dify&lt;/strong&gt; — LLM app building platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AnythingLLM&lt;/strong&gt; — local knowledge base with RAG&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  OmniRouter: the key new API concept in v10.3
&lt;/h2&gt;

&lt;p&gt;OmniRouter is worth calling out separately because it changes how you think about the API surface. Previously you would call separate endpoints for text, image, and speech. With OmniRouter, you interact through a single multi-modal endpoint and Lemonade routes each request to the correct backend engine automatically.&lt;/p&gt;

&lt;p&gt;This means you can build agentic pipelines — for example, a loop that generates text, converts it to speech, and produces an image — all through one unified client without managing multiple base URLs or backend configurations.&lt;/p&gt;
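<p>As a minimal sketch, assuming the unified endpoint follows the standard OpenAI chat-completions shape (the model name is taken from the examples above; OmniRouter's internal tool-calling flow is not shown):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight python"><code>import json
import urllib.request

BASE_URL = "http://localhost:13305/v1"

def build_chat_request(model, prompt):
    """One OpenAI-style chat call; OmniRouter routes it to a backend by model."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "Gemma-3-4b-it-GGUF",
    "Generate an image of a lemon on a desk, then describe it.",
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
</code></pre>

</div>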




&lt;h2&gt;
  
  
  How it compares to Ollama and LM Studio
&lt;/h2&gt;

&lt;p&gt;These tools all solve similar problems. Where Lemonade stands out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NPU support&lt;/strong&gt; — one of the very few tools that accelerate inference on the AMD XDNA2 NPU in Ryzen AI laptops&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;True multi-modal in one server&lt;/strong&gt; — text, images, speech-to-text, and TTS from a single API endpoint&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OmniRouter&lt;/strong&gt; — automatic multi-modal routing without manual backend wiring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embeddable binary&lt;/strong&gt; — package it inside your own app with no Lemonade branding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple API standards&lt;/strong&gt; — OpenAI, Anthropic, and Ollama APIs simultaneously&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AMD-first optimizations&lt;/strong&gt; — deep ROCm integration and NPU tooling maintained by AMD engineers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are on NVIDIA with a mainstream GPU and only need text generation, Ollama is slightly simpler to get started with. For AMD hardware, AI PCs with NPUs, or multi-modal workloads, Lemonade covers more ground.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick reference
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GitHub&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/lemonade-sdk/lemonade" rel="noopener noreferrer"&gt;github.com/lemonade-sdk/lemonade&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>llm</category>
      <category>programming</category>
    </item>
    <item>
      <title>Claude Connectors Explained: How to Give Claude Access to Your Tools</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Mon, 27 Apr 2026 11:10:44 +0000</pubDate>
      <link>https://dev.to/arshtechpro/claude-connectors-explained-how-to-give-claude-access-to-your-tools-471k</link>
      <guid>https://dev.to/arshtechpro/claude-connectors-explained-how-to-give-claude-access-to-your-tools-471k</guid>
<description>&lt;p&gt;Claude is no longer just a chat window. With Connectors, you can wire it up to the tools your team actually uses: Google Drive, Gmail, GitHub, Slack, Notion, Asana, Spotify, Uber, and over 200 others.&lt;/p&gt;

&lt;p&gt;This post covers what Connectors are, how they work under the hood, and how to build your own.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Are Connectors?
&lt;/h2&gt;

&lt;p&gt;Connectors extend Claude's capabilities by giving it access to external tools, data sources, and services. They are powered by the &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt;, an open standard created by Anthropic.&lt;/p&gt;

&lt;p&gt;Think of it this way: Claude has a conversation with you, but mid-conversation it can reach into your Google Drive, pull a document, summarize it, and drop the result into a Slack message — all without you leaving the thread.&lt;/p&gt;

&lt;p&gt;A real workflow example: a product manager pulls a query from Amplitude, turns it into a Canva deck, and drops the link into Asana. One conversation. Three connected apps.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Works: MCP in Plain English
&lt;/h2&gt;

&lt;p&gt;MCP is a protocol that defines how Claude (the client) communicates with external services (the servers). The spec is open, which means anyone can build a connector for any service.&lt;/p&gt;

&lt;p&gt;There are two things a connector can do:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Provide tools and data&lt;/strong&gt;&lt;br&gt;
This gives Claude the ability to take actions — read files, send emails, create issues, query databases, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Surface UI components (MCP Apps)&lt;/strong&gt;&lt;br&gt;
Instead of just returning text, an MCP server can render interactive UI elements directly in the conversation: charts, maps, forms, booking flows, and more.&lt;/p&gt;
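&lt;p&gt;Under the hood this is JSON-RPC. When Claude asks a server what it offers (a &lt;code&gt;tools/list&lt;/code&gt; request), the server answers with tool descriptors. The tool below is illustrative, but the shape follows the MCP spec:&lt;/p&gt;

```json
{
  "tools": [
    {
      "name": "create_issue",
      "description": "Create an issue in a project tracker",
      "inputSchema": {
        "type": "object",
        "properties": {
          "project": { "type": "string" },
          "title": { "type": "string" }
        },
        "required": ["project", "title"]
      }
    }
  ]
}
```

&lt;p&gt;Claude reads the &lt;code&gt;description&lt;/code&gt; and &lt;code&gt;inputSchema&lt;/code&gt; to decide when and how to call each tool.&lt;/p&gt;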


&lt;h2&gt;
  
  
  Types of Connectors
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Prebuilt First-Party Integrations
&lt;/h3&gt;

&lt;p&gt;Anthropic ships ready-to-use connectors for the most common services. No setup beyond logging in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Google Drive, Gmail, Google Calendar&lt;/li&gt;
&lt;li&gt;GitHub&lt;/li&gt;
&lt;li&gt;Slack&lt;/li&gt;
&lt;li&gt;Microsoft 365&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These work across all Claude products immediately.&lt;/p&gt;
&lt;h3&gt;
  
  
  Remote MCP Servers
&lt;/h3&gt;

&lt;p&gt;Third-party developers can host their own MCP servers in the cloud. Claude connects to them over HTTPS. These are what you find in the &lt;a href="https://claude.ai/directory/connectors" rel="noopener noreferrer"&gt;Connectors Directory&lt;/a&gt; — verified by Anthropic and available to all users.&lt;/p&gt;
&lt;h3&gt;
  
  
  MCP Apps
&lt;/h3&gt;

&lt;p&gt;These are MCP servers that go a step further and render UI inside the conversation. A booking flow, an interactive chart, a filled-out form — all rendered in the chat thread.&lt;/p&gt;
&lt;h3&gt;
  
  
  MCP Bundles (Desktop Extensions)
&lt;/h3&gt;

&lt;p&gt;For enterprise or local use cases, you can package an MCP server with all its dependencies into a desktop extension (&lt;code&gt;.mcpb&lt;/code&gt; format). This handles cross-platform compatibility, code signing, and centralized version updates.&lt;/p&gt;
&lt;h3&gt;
  
  
  Local MCP via Plugins
&lt;/h3&gt;

&lt;p&gt;If you want to distribute a local MCP server through npm or PyPI, you bundle it in a Plugin using &lt;code&gt;.mcp.json&lt;/code&gt; and submit it to the plugin directory.&lt;/p&gt;
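&lt;p&gt;A minimal &lt;code&gt;.mcp.json&lt;/code&gt; follows the same shape as the desktop config: a server name mapped to a command and its arguments. The names below are placeholders:&lt;/p&gt;

```json
{
  "mcpServers": {
    "my-local-server": {
      "command": "npx",
      "args": ["-y", "my-mcp-package"]
    }
  }
}
```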


&lt;h2&gt;
  
  
  Where Connectors Work
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Remote MCP&lt;/th&gt;
&lt;th&gt;MCP Apps&lt;/th&gt;
&lt;th&gt;Local Extensions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude.ai (web)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Desktop&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Mobile&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Beta&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Via plugins&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Cowork&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Via plugins&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;h2&gt;
  
  
  How Claude Discovers and Uses Connectors
&lt;/h2&gt;

&lt;p&gt;This is the part that feels almost magical once it clicks.&lt;/p&gt;

&lt;p&gt;When you connect a service, Claude does not just wait for you to explicitly invoke it. It dynamically surfaces the right connector based on what you are doing. Ask Claude to recommend a weekend hike and AllTrails will appear automatically. Ask for grocery help and Instacart shows up.&lt;/p&gt;

&lt;p&gt;When multiple connectors could help, Claude shows all of them and lets you choose. There is no hidden ranking by paid placement. The directory is ad-free.&lt;/p&gt;


&lt;h2&gt;
  
  
  Privacy and Data Boundaries
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Your data from connected apps is not used to train Claude's models.&lt;/li&gt;
&lt;li&gt;A connected app cannot see your other conversations with Claude.&lt;/li&gt;
&lt;li&gt;Before Claude books or purchases something on your behalf, it confirms with you first.&lt;/li&gt;
&lt;li&gt;You can disconnect any connector at any time from Settings.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Building Your Own Connector
&lt;/h2&gt;

&lt;p&gt;If you have an internal tool or a service you want Claude to access, you can build a connector. Here is the path:&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Build an MCP Server
&lt;/h3&gt;

&lt;p&gt;Your MCP server is a standard HTTPS service that implements the MCP protocol. It exposes a set of tools — each tool has a name, description, and input schema.&lt;/p&gt;

&lt;p&gt;A minimal Node.js example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;McpServer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@modelcontextprotocol/sdk/server/mcp.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;StdioServerTransport&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@modelcontextprotocol/sdk/server/stdio.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;McpServer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;my-tool-server&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1.0.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;get_user_data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetchFromYourDB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StdioServerTransport&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transport&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a remote server, you swap &lt;code&gt;StdioServerTransport&lt;/code&gt; for an HTTP/SSE transport and deploy it like any other API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Connect it in Claude Desktop
&lt;/h3&gt;

&lt;p&gt;In your &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"my-tool-server"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/path/to/your/server.js"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a remote server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"my-remote-server"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://your-mcp-server.com/sse"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude Desktop and your tools are immediately available.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 (Optional): Add UI with MCP Apps
&lt;/h3&gt;

&lt;p&gt;If you want to render interactive UI inside the conversation, your MCP server can return HTML-based components that Claude will render inline. See the &lt;a href="https://claude.com/docs/connectors/building/mcp-apps/design-guidelines" rel="noopener noreferrer"&gt;MCP Apps design guidelines&lt;/a&gt; for what you can render and how.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4 (Optional): Submit to the Directory
&lt;/h3&gt;

&lt;p&gt;If your connector would be useful to other Claude users, you can submit it to the Connectors Directory. Anthropic reviews submissions and, if approved, your connector becomes available to all users across Claude products.&lt;/p&gt;

&lt;p&gt;Submit at: &lt;a href="https://claude.com/docs/connectors/building/submission" rel="noopener noreferrer"&gt;claude.com/docs/connectors/building/submission&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Using Connectors in the Anthropic API (for Artifact Builders)
&lt;/h2&gt;

&lt;p&gt;If you are building Claude-powered apps via the API, you can pass MCP servers directly in your API call. Claude will use them during the conversation to take actions on behalf of the user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.anthropic.com/v1/messages&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;claude-sonnet-4-20250514&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Create a task in Asana for reviewing the Q3 report&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;mcp_servers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://mcp.asana.com/sse&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;asana-mcp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude will discover the tools exposed by that MCP server and use them automatically to complete the task.&lt;/p&gt;

&lt;p&gt;You can combine multiple MCP servers in a single call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;mcp_servers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://mcp.asana.com/sse&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;asana-mcp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://gmailmcp.googleapis.com/mcp/v1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gmail-mcp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also add web search alongside MCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;web_search_20250305&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;web_search&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="nx"&gt;mcp_servers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://mcp.asana.com/sse&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;asana-mcp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Handling MCP Responses in Your Code
&lt;/h2&gt;

&lt;p&gt;When Claude uses an MCP tool, the response &lt;code&gt;content&lt;/code&gt; array contains multiple block types. Do not assume ordering — filter by type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Get tool results (the actual data returned from your MCP server)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;toolResults&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mcp_tool_result&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;?.[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]?.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Get Claude's natural language response&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;claudeText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// See what tools were called and with what inputs&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;toolCalls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mcp_tool_use&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Quick Reference
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What you want&lt;/th&gt;
&lt;th&gt;How to do it&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Use a prebuilt connector&lt;/td&gt;
&lt;td&gt;Go to claude.ai Settings, connect the service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Browse available connectors&lt;/td&gt;
&lt;td&gt;&lt;a href="https://claude.ai/directory/connectors" rel="noopener noreferrer"&gt;claude.ai/directory/connectors&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Connect a remote MCP server&lt;/td&gt;
&lt;td&gt;Add URL to claude_desktop_config.json or Settings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build your own MCP server&lt;/td&gt;
&lt;td&gt;Use the MCP SDK, implement tools, expose via HTTPS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Submit to the directory&lt;/td&gt;
&lt;td&gt;&lt;a href="https://claude.com/docs/connectors/building/submission" rel="noopener noreferrer"&gt;claude.com/docs/connectors/building/submission&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use MCP in the API&lt;/td&gt;
&lt;td&gt;Pass &lt;code&gt;mcp_servers&lt;/code&gt; array in your API request&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Add UI to your connector&lt;/td&gt;
&lt;td&gt;Implement MCP Apps using the design guidelines&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;MCP Apps: &lt;a href="https://claude.com/docs/connectors/building/mcp-apps/getting-started" rel="noopener noreferrer"&gt;https://claude.com/docs/connectors/building/mcp-apps/getting-started&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anthropic Blog Post: &lt;a href="https://claude.com/blog/connectors-for-everyday-life" rel="noopener noreferrer"&gt;https://claude.com/blog/connectors-for-everyday-life&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;The Connectors Directory launched in July 2025 and already has over 200 integrations. The new consumer connectors (Uber, Spotify, Instacart, TripAdvisor, Resy, Audible, AllTrails, and others) were added in April 2026 and are available on all plans, with mobile in beta.&lt;/p&gt;

&lt;p&gt;If you build something useful, submit it. The protocol is open and the directory is growing.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>mcp</category>
      <category>productivity</category>
    </item>
    <item>
      <title>iOS 26 SDK Is Now Mandatory — Here Is What Actually Changes for Your App</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Mon, 27 Apr 2026 10:08:23 +0000</pubDate>
      <link>https://dev.to/arshtechpro/ios-26-sdk-is-now-mandatory-here-is-what-actually-changes-for-your-app-39m4</link>
      <guid>https://dev.to/arshtechpro/ios-26-sdk-is-now-mandatory-here-is-what-actually-changes-for-your-app-39m4</guid>
      <description>&lt;p&gt;Starting April 28, 2026, Apple will reject any app or update submitted to App Store Connect unless it is built with the iOS 26 SDK. If you have not migrated yet, this article walks you through exactly what changed, what breaks, and what you need to do before you submit your next build.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Requirement Actually Means
&lt;/h2&gt;

&lt;p&gt;Apple has updated its minimum SDK policy. From April 28, 2026, onward, every app and game uploaded to App Store Connect must meet these requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;iOS and iPadOS apps must be built with the &lt;strong&gt;iOS 26 and iPadOS 26 SDK&lt;/strong&gt; or later&lt;/li&gt;
&lt;li&gt;tvOS apps must use the &lt;strong&gt;tvOS 26 SDK&lt;/strong&gt; or later&lt;/li&gt;
&lt;li&gt;visionOS apps must use the &lt;strong&gt;visionOS 26 SDK&lt;/strong&gt; or later&lt;/li&gt;
&lt;li&gt;watchOS apps must use the &lt;strong&gt;watchOS 26 SDK&lt;/strong&gt; or later (64-bit support is also now required)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, this means you must build using &lt;strong&gt;Xcode 26 or later&lt;/strong&gt;. The latest SDK release at the time of writing is 26.2, so that is what you should target.&lt;/p&gt;

&lt;p&gt;One thing to clarify upfront: this requirement applies to the &lt;strong&gt;SDK you build with&lt;/strong&gt;, not the iOS version your users must run. You can still set your deployment target to iOS 16 or 17. Your existing users on older iOS versions are not affected.&lt;/p&gt;
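
&lt;p&gt;The distinction is visible directly in your build settings. A minimal sketch in &lt;code&gt;.xcconfig&lt;/code&gt; form (these are standard Xcode build settings; the deployment target value is illustrative):&lt;/p&gt;

```
// The SDK you compile against is controlled by SDKROOT, which under
// Xcode 26 resolves to the iOS 26 SDK.
SDKROOT = iphoneos

// The oldest iOS your users can run is a separate setting. Leaving it
// at iOS 16 keeps existing users unaffected.
IPHONEOS_DEPLOYMENT_TARGET = 16.0
```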




&lt;h2&gt;
  
  
  What Changes When You Build With the iOS 26 SDK
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Liquid Glass UI Is Applied by Default
&lt;/h3&gt;

&lt;p&gt;This is the change that will visually affect most apps immediately.&lt;/p&gt;

&lt;p&gt;Liquid Glass is Apple's new design language — it applies translucent, fluid materials to native UI components. When you build with the iOS 26 SDK, standard UIKit and SwiftUI components like navigation bars, tab bars, buttons, and sheets will automatically pick up this new look on devices running iOS 26.&lt;/p&gt;

&lt;p&gt;You do not need to write a single line of code for this to happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this means for you:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run your app on an iOS 26 device or simulator after rebuilding&lt;/li&gt;
&lt;li&gt;Check every screen for layout regressions or contrast issues&lt;/li&gt;
&lt;li&gt;Pay close attention to custom UI elements — they will not automatically adopt Liquid Glass and may look inconsistent next to system components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to opt out of the Liquid Glass appearance for specific components, you can do so explicitly. But be aware that Apple's design direction is clearly moving toward this system, and opting out across your entire app may increasingly feel out of place over time.&lt;/p&gt;
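
&lt;p&gt;For an app-wide opt-out, Apple provides a temporary compatibility key in your &lt;code&gt;Info.plist&lt;/code&gt;. Treat this as a sketch and verify the key against current documentation, since Apple has positioned it as a short-term migration aid rather than a permanent setting:&lt;/p&gt;

```
&amp;lt;key&amp;gt;UIDesignRequiresCompatibility&amp;lt;/key&amp;gt;
&amp;lt;true/&amp;gt;
```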

&lt;p&gt;&lt;strong&gt;Basic example of applying Liquid Glass manually in SwiftUI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;SwiftUI&lt;/span&gt;

&lt;span class="kd"&gt;struct&lt;/span&gt; &lt;span class="kt"&gt;CardView&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="nv"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kd"&gt;some&lt;/span&gt; &lt;span class="kt"&gt;View&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;VStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;alignment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;leading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;spacing&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Account Summary"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;font&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;headline&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="kt"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Balance: $4,200.00"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;font&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;foregroundStyle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;secondary&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;background&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;regularMaterial&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// This picks up the Liquid Glass material&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clipShape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;RoundedRectangle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;cornerRadius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use &lt;code&gt;.regularMaterial&lt;/code&gt;, &lt;code&gt;.thickMaterial&lt;/code&gt;, or &lt;code&gt;.ultraThinMaterial&lt;/code&gt; to match the system's visual depth.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Foundation Models Framework Is Now Available
&lt;/h3&gt;

&lt;p&gt;iOS 26 introduces the Foundation Models framework, which lets you run on-device language models without sending any data to a server. This is separate from Core ML — it is designed for natural language tasks like summarization, classification, tagging, and contextual responses.&lt;/p&gt;

&lt;p&gt;You do not have to use it immediately, but building with the iOS 26 SDK means you now have access to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A simple example — summarizing user notes on-device:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;import&lt;/span&gt; &lt;span class="kt"&gt;FoundationModels&lt;/span&gt;

&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;summarize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;note&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;throws&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;LanguageModel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Summarize this note in one sentence: &lt;/span&gt;&lt;span class="se"&gt;\(&lt;/span&gt;&lt;span class="n"&gt;note&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On-device processing means no network call, no latency spike, and no privacy concern about user data leaving the device.&lt;/p&gt;
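
&lt;p&gt;One caveat worth coding for: the on-device model is not guaranteed to be present on every device (older hardware, Apple Intelligence turned off, or the model still downloading). A minimal availability guard, sketched against the &lt;code&gt;SystemLanguageModel&lt;/code&gt; API; verify the reason cases against current documentation:&lt;/p&gt;

```
import FoundationModels

// Sketch: check model availability before creating a session.
func modelIsReady() -&amp;gt; Bool {
    switch SystemLanguageModel.default.availability {
    case .available:
        return true
    case .unavailable(let reason):
        // Reasons include an ineligible device, Apple Intelligence
        // being disabled, or the model still downloading.
        print("Model unavailable: \(reason)")
        return false
    }
}
```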




&lt;h3&gt;
  
  
  3. Deprecated APIs Will Block Compilation
&lt;/h3&gt;

&lt;p&gt;This is the most common reason apps fail to build with a new SDK. APIs that Apple has been warning about for years, such as &lt;code&gt;UIWebView&lt;/code&gt;, &lt;code&gt;NSURLConnection&lt;/code&gt;, and the legacy &lt;code&gt;ABAddressBook&lt;/code&gt; framework, may now cause build failures or rejection at upload.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open your project in Xcode 26 and do a clean build. Read every warning and error carefully. The compiler will tell you exactly where deprecated usage lives.&lt;/p&gt;

&lt;p&gt;Run this in Terminal to quickly scan for common deprecated patterns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-rn&lt;/span&gt; &lt;span class="s2"&gt;"UIWebView&lt;/span&gt;&lt;span class="se"&gt;\|&lt;/span&gt;&lt;span class="s2"&gt;NSURLConnection&lt;/span&gt;&lt;span class="se"&gt;\|&lt;/span&gt;&lt;span class="s2"&gt;ABAddressBook"&lt;/span&gt; ./YourProject/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace any matches with their modern equivalents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;UIWebView&lt;/code&gt; -&amp;gt; &lt;code&gt;WKWebView&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NSURLConnection&lt;/code&gt; -&amp;gt; &lt;code&gt;URLSession&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ABAddressBook&lt;/code&gt; -&amp;gt; &lt;code&gt;Contacts&lt;/code&gt; framework&lt;/li&gt;
&lt;/ul&gt;
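
&lt;p&gt;As a sketch of the networking side of that migration (the function and its error handling are illustrative, not a drop-in for every call site), the &lt;code&gt;NSURLConnection&lt;/code&gt; pattern maps onto async/await &lt;code&gt;URLSession&lt;/code&gt; like this:&lt;/p&gt;

```
import Foundation

// Replaces NSURLConnection.sendAsynchronousRequest with async/await.
func fetchData(from url: URL) async throws -&amp;gt; Data {
    let (data, response) = try await URLSession.shared.data(from: url)
    guard let http = response as? HTTPURLResponse, http.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }
    return data
}
```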




&lt;h3&gt;
  
  
  4. Third-Party SDKs May Not Be Ready
&lt;/h3&gt;

&lt;p&gt;Your own code might be fine, but analytics libraries, ad networks, crash reporters, and feature flag SDKs may not have updated yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check your dependencies before submitting:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# If you use CocoaPods&lt;/span&gt;
pod outdated

&lt;span class="c"&gt;# If you use Swift Package Manager, check in Xcode:&lt;/span&gt;
&lt;span class="c"&gt;# File &amp;gt; Packages &amp;gt; Update to Latest Package Versions&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a third-party SDK is not compatible with the iOS 26 SDK, you have three options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wait for the vendor to release an update&lt;/li&gt;
&lt;li&gt;Remove the SDK temporarily to unblock your submission&lt;/li&gt;
&lt;li&gt;Contact the vendor directly — most are aware and have updates in progress&lt;/li&gt;
&lt;/ol&gt;
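
&lt;p&gt;Before choosing, it helps to know exactly which versions you ship. A quick way to inventory pinned pod versions (the sample &lt;code&gt;Podfile.lock&lt;/code&gt; here is created only for illustration; point the &lt;code&gt;grep&lt;/code&gt; at your real one):&lt;/p&gt;

```shell
# Create a sample Podfile.lock for illustration.
printf '%s\n' 'PODS:' '  - Alamofire (5.9.1)' '  - FirebaseCrashlytics (11.2.0)' | tee /tmp/Podfile.lock

# List top-level pods with their pinned versions, ready to cross-check
# against each vendor's iOS 26 release notes.
grep -E '^  - [A-Za-z]' /tmp/Podfile.lock
```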




&lt;h3&gt;
  
  
  5. Age Rating Questionnaire Update
&lt;/h3&gt;

&lt;p&gt;Apple updated its age rating questions in App Store Connect. If you have not already done this, update the questionnaire for all your apps immediately. Stale ratings can cause submission issues independently of the SDK requirement.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step-by-Step Migration Checklist
&lt;/h2&gt;

&lt;p&gt;Work through these in order. Do not skip the simulator testing step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Install Xcode 26&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Download Xcode 26 from the Mac App Store or the Apple Developer portal. Ensure your Mac is running macOS Tahoe 26.2 or later, as Xcode 26 requires it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — Update your project settings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open your project in Xcode 26. Set the build SDK to iOS 26. Do not change your deployment target unless you specifically want to drop support for older iOS versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 — Do a clean build&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Product &amp;gt; Clean Build Folder (Shift + Command + K)
Product &amp;gt; Build (Command + B)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read every warning. Treat deprecation warnings as tasks, not noise.&lt;/p&gt;
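
&lt;p&gt;One way to enforce that discipline during the migration window is to make the build fail on warnings (the scheme name is a placeholder; both settings are standard Xcode build settings):&lt;/p&gt;

```
xcodebuild -scheme YourApp build \
  SWIFT_TREAT_WARNINGS_AS_ERRORS=YES \
  GCC_TREAT_WARNINGS_AS_ERRORS=YES
```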

&lt;p&gt;&lt;strong&gt;Step 4 — Run on an iOS 26 simulator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go through every screen in your app. Look for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Layout issues in navigation bars or tab bars due to Liquid Glass materials&lt;/li&gt;
&lt;li&gt;Text readability problems against translucent backgrounds&lt;/li&gt;
&lt;li&gt;Custom UI components that look visually disconnected from system components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5 — Run on a physical iOS 26 device&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Simulators do not fully replicate Metal rendering or ProMotion behavior. Test on a real device, especially if your app has custom animations or graphics-heavy screens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6 — Update or remove incompatible third-party SDKs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;pod outdated&lt;/code&gt; or check Swift Package Manager for updates. Flag anything that does not have an iOS 26-compatible release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7 — Archive and validate before submitting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use Xcode's Organizer to archive your build and run Validate App before uploading. This catches many issues before Apple's review team does.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Product &amp;gt; Archive
Window &amp;gt; Organizer &amp;gt; Distribute App &amp;gt; Validate App
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8 — Submit via TestFlight first&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Upload to TestFlight before submitting for App Store review. This gives you a buffer to catch any runtime issues that do not show up in local testing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This is Apple's standard annual SDK bump, but iOS 26 is a larger change than most years. Liquid Glass is a system-wide visual overhaul. Foundation Models is a new category of capability. And the unified versioning across all Apple platforms (iOS, macOS, watchOS, tvOS, and visionOS all sharing the "26" designation) signals a harder push toward cross-platform consistency.&lt;/p&gt;

&lt;p&gt;Developers who migrate now and begin adopting these APIs incrementally will be in a much stronger position when WWDC 2026 announcements add another layer of requirements to keep up with.&lt;/p&gt;

&lt;p&gt;The deadline is April 28. The steps above are everything you need. Build with Xcode 26, test on iOS 26, fix what breaks, and submit.&lt;/p&gt;




</description>
      <category>ios</category>
      <category>swift</category>
      <category>mobile</category>
      <category>programming</category>
    </item>
    <item>
      <title>iOS - On-Device AI vs. Cloud AI: Why Privacy Can Win Over Convenience</title>
      <dc:creator>ArshTechPro</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:41:35 +0000</pubDate>
      <link>https://dev.to/arshtechpro/on-device-ai-vs-cloud-ai-why-privacy-can-win-over-convenience-2h1i</link>
      <guid>https://dev.to/arshtechpro/on-device-ai-vs-cloud-ai-why-privacy-can-win-over-convenience-2h1i</guid>
      <description>&lt;p&gt;Most photo cleaner apps upload photos to the cloud for processing. Every selfie, every screenshot, every private moment — sent to a remote server before the app can identify which ones to delete.&lt;/p&gt;




&lt;h2&gt;
  
  
  The easy path: cloud AI
&lt;/h2&gt;

&lt;p&gt;Cloud processing is the default for a reason. It's cheaper to develop, easier to scale, and provides access to the most powerful models available. For developers, it's the path of least resistance.&lt;/p&gt;

&lt;p&gt;But it comes with a trade-off users pay for: their data leaves their device.&lt;/p&gt;

&lt;p&gt;For a photo management app, that means intimate, personal content — family photos, medical documents, private conversations captured in screenshots — all passing through infrastructure that isn't fully under the user's control.&lt;/p&gt;

&lt;h2&gt;
  
  
  The harder path: on-device AI
&lt;/h2&gt;

&lt;p&gt;CleanKit takes a different approach, running all of its intelligence directly on-device. Duplicate detection, blur analysis, screenshot grouping, smart categorization — none of it requires a network connection, and none of it ever leaves the phone.&lt;/p&gt;

&lt;p&gt;On-device processing means working within real hardware constraints: memory limits, thermal throttling, battery impact. Every algorithm needs to be optimized not just for accuracy, but for efficiency on mobile silicon.&lt;/p&gt;

&lt;p&gt;Apple's Core ML framework and the Neural Engine make this possible — but making it fast and reliable across thousands of different photo libraries requires significant engineering effort.&lt;/p&gt;
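
&lt;p&gt;As an illustration of what that kind of on-device pipeline can look like (a sketch using Apple's Vision framework, not CleanKit's actual implementation), image feature prints are a common way to score photo similarity for duplicate detection:&lt;/p&gt;

```
import Vision

// Sketch: a perceptual distance between two photos, computed entirely
// on-device. Smaller distances suggest near-duplicates.
func similarityDistance(between a: CGImage, and b: CGImage) -&amp;gt; Float? {
    func featurePrint(for image: CGImage) -&amp;gt; VNFeaturePrintObservation? {
        let request = VNGenerateImageFeaturePrintRequest()
        try? VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    }
    guard let printA = featurePrint(for: a), let printB = featurePrint(for: b) else {
        return nil
    }
    var distance = Float(0)
    try? printA.computeDistance(&amp;amp;distance, to: printB)
    return distance
}
```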

&lt;h2&gt;
  
  
  Why it matters
&lt;/h2&gt;

&lt;p&gt;Privacy isn't a feature. It's a design decision that affects every layer of a product.&lt;/p&gt;

&lt;p&gt;When data never leaves the device:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There's no server breach that can expose photos&lt;/li&gt;
&lt;li&gt;There's no privacy policy users need to trust&lt;/li&gt;
&lt;li&gt;There's no internet requirement — it works on a plane, in airplane mode, anywhere&lt;/li&gt;
&lt;li&gt;There's no per-scan cost that forces aggressive monetization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Users shouldn't have to choose between a clean photo library and their privacy. They can have both.&lt;/p&gt;

&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;CleanKit scans thousands of photos in minutes, identifies duplicates, blurry shots, old screenshots, and large videos — all without a single byte leaving the phone. It reclaims gigabytes of storage while keeping data exactly where it belongs: with the user.&lt;/p&gt;




&lt;p&gt;For anyone building products that handle personal data, it's worth asking: does this need to leave the device? More often than not, the answer is no — and users will notice the difference.&lt;/p&gt;

&lt;p&gt;What trade-offs have others encountered between on-device and cloud processing? The comments are open for discussion.&lt;/p&gt;




&lt;p&gt;Clean Kit – Storage Cleaner is available on the &lt;a href="https://apps.apple.com/us/app/clean-kit-storage-cleaner/id6761145538" rel="noopener noreferrer"&gt;App Store&lt;/a&gt;. Free to try — photos stay on-device, always.&lt;/p&gt;

</description>
      <category>ios</category>
      <category>ai</category>
      <category>programming</category>
      <category>mobile</category>
    </item>
  </channel>
</rss>
