<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: cz</title>
    <description>The latest articles on DEV Community by cz (@czmilo).</description>
    <link>https://dev.to/czmilo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2967164%2F5112a40e-2fd3-437e-9cd5-7e7bb510c5ea.jpg</url>
      <title>DEV Community: cz</title>
      <link>https://dev.to/czmilo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/czmilo"/>
    <language>en</language>
    <item>
      <title>Hunter Eyes: Complete Guide to Understanding and Evaluating Eye-Area Aesthetics in 2026</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Fri, 24 Apr 2026 12:22:25 +0000</pubDate>
      <link>https://dev.to/czmilo/hunter-eyes-complete-guide-to-understanding-and-evaluating-eye-area-aesthetics-in-2026-2cfm</link>
      <guid>https://dev.to/czmilo/hunter-eyes-complete-guide-to-understanding-and-evaluating-eye-area-aesthetics-in-2026-2cfm</guid>
      <description>&lt;h1&gt;
  
  
  Hunter Eyes: Complete Guide to Understanding and Evaluating Eye-Area Aesthetics in 2026
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Key Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hunter Eyes&lt;/strong&gt; is an online label describing a predator-leaning eye-area look commonly discussed in looksmax communities—and &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; is the AI-powered tool that scores and measures it&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; product analyzes six eye-area dimensions (canthal tilt, eyelid exposure, socket depth, and more) and delivers a single composite score&lt;/li&gt;
&lt;li&gt;You can influence your eye-area presentation with non-surgical, everyday habits (sleep, cold compresses, brow grooming, and body composition) and track the change over time&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; offers two modes: Scientific for objective readouts and Roast for a humorous take, both delivering the same underlying metrics&lt;/li&gt;
&lt;li&gt;The tool is an aesthetic self-assessment product, not a medical device—see a qualified professional for any health concerns&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What Are Hunter Eyes?&lt;/li&gt;
&lt;li&gt;The Anatomy Behind Hunter Eyes&lt;/li&gt;
&lt;li&gt;How Hunter Eyes AI Evaluates Your Eye Area&lt;/li&gt;
&lt;li&gt;Hunter Eyes Scoring Dimensions and Tiers&lt;/li&gt;
&lt;li&gt;Who Is Hunter Eyes For?&lt;/li&gt;
&lt;li&gt;How to Get the Most Out of Hunter Eyes&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What Are Hunter Eyes?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hunter Eyes&lt;/strong&gt; is both a concept and a product—and understanding the distinction is essential.&lt;/p&gt;

&lt;p&gt;In online aesthetics communities (looksmax, Reddit, TikTok), &lt;strong&gt;hunter eyes&lt;/strong&gt; refers to a specific combination of eye-area traits associated with a predator-like, commanding presence. Wikipedia's looksmaxxing entry defines it as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"a neutral or positive canthal tilt, little to no upper eyelid exposure, and low-set eyebrows—resembling the eye area of a predatorial animal."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In practical terms, &lt;strong&gt;hunter eyes&lt;/strong&gt; describes a set of traits these communities read as dominant, focused, and sexually dimorphic: qualities they associate with standing out in both social and romantic contexts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; is an AI-powered web product built around this label. Upload a clear front-facing photo, and within seconds you receive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An overall &lt;strong&gt;Hunter Eyes&lt;/strong&gt; composite score&lt;/li&gt;
&lt;li&gt;A tier rank (S / A / B / C / D–F) with community-style titles&lt;/li&gt;
&lt;li&gt;Six sub-dimension scores on a 1–10 scale&lt;/li&gt;
&lt;li&gt;Strengths and weaknesses breakdown&lt;/li&gt;
&lt;li&gt;Actionable improvement tips&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;: The &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; product is built so your photos are &lt;strong&gt;not kept long-term&lt;/strong&gt;. Images are used for the current analysis and removed after processing—see the official privacy policy for details.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Anatomy Behind Hunter Eyes
&lt;/h2&gt;

&lt;p&gt;To understand what &lt;strong&gt;hunter eyes&lt;/strong&gt; actually measure, it helps to break down the underlying anatomy. The &lt;strong&gt;hunter eyes&lt;/strong&gt; look emerges from how several facial structures interact:&lt;/p&gt;

&lt;h3&gt;
  
  
  Canthal Tilt
&lt;/h3&gt;

&lt;p&gt;Canthal tilt describes the angle of the outer eye corner relative to the inner corner. A &lt;strong&gt;positive canthal tilt&lt;/strong&gt; (outer corner higher than inner) is one of the most discussed traits in &lt;strong&gt;hunter eyes&lt;/strong&gt; discourse. A negative tilt—where the outer corner sits lower—is often framed as "prey eyes" in online communities. The &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; tool measures this angle objectively.&lt;/p&gt;
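
&lt;p&gt;Geometrically, canthal tilt is just the angle of the line joining the two eye corners. As a minimal illustration (this is not the product's actual pipeline, and the landmark coordinates below are made up), the angle can be computed from the pixel positions of the inner and outer canthus:&lt;/p&gt;

```python
import math

def canthal_tilt_degrees(inner_corner, outer_corner):
    """Angle of the outer canthus relative to the inner canthus.

    Points are (x, y) pixel coordinates with y increasing downward,
    as in image coordinates. A positive result means the outer corner
    sits higher than the inner corner, i.e. a positive canthal tilt.
    """
    dx = outer_corner[0] - inner_corner[0]
    dy = inner_corner[1] - outer_corner[1]  # flip y: image y grows downward
    return math.degrees(math.atan2(dy, dx))

# Outer corner 4 px higher than the inner corner, 40 px further out:
print(canthal_tilt_degrees((100, 200), (140, 196)))  # about +5.7 (positive tilt)
# Outer corner 4 px lower instead: negative tilt ("prey eyes" in community slang)
print(canthal_tilt_degrees((100, 200), (140, 204)))  # about -5.7
```

&lt;p&gt;Real tools typically obtain the corner coordinates from a facial-landmark detector; the arithmetic afterwards is exactly this.&lt;/p&gt;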

&lt;h3&gt;
  
  
  Upper Eyelid Exposure
&lt;/h3&gt;

&lt;p&gt;How much of the upper eyelid remains visible between the lash line and the brow is one of the strongest signals in &lt;strong&gt;hunter eyes&lt;/strong&gt; discourse. Less upper eyelid exposure (achieved naturally through deeper-set eyes, a more prominent brow ridge, or favorable fat distribution) is commonly associated with the &lt;strong&gt;hunter eyes&lt;/strong&gt; aesthetic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Eye Socket Depth
&lt;/h3&gt;

&lt;p&gt;Deeper-set eyes create shadow and contrast around the eye, which is a hallmark of the &lt;strong&gt;hunter eyes&lt;/strong&gt; look. Bone structure plays a significant role here, though fat distribution and surrounding muscle tone can also influence perceived depth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Brow Position and Eye Distance
&lt;/h3&gt;

&lt;p&gt;The distance between the brow and the upper eyelid (brow–eye distance) affects how "compact" the upper third of the face feels. A shorter, tighter brow–eye distance is frequently cited in &lt;strong&gt;hunter eyes&lt;/strong&gt; discussions as contributing to an intense, predatory gaze.&lt;/p&gt;

&lt;h3&gt;
  
  
  Eye Shape and Aperture
&lt;/h3&gt;

&lt;p&gt;True &lt;strong&gt;hunter eyes&lt;/strong&gt; tend toward an almond-shaped, horizontally long aperture rather than a round, vertically tall one. This shape is influenced by the interplay of the orbital bone, the orbital fat pad, and the tension of the surrounding skin and muscle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lower Eyelid Position
&lt;/h3&gt;

&lt;p&gt;Lower eyelid tightness—how much lower sclera is visible—contributes to the overall alert, focused appearance associated with &lt;strong&gt;hunter eyes&lt;/strong&gt;. Excess lower lid exposure can soften the look.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Hunter Eyes AI Evaluates Your Eye Area
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; brings a data-driven approach to an area traditionally dominated by subjective judgment and comparison photos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Photo Upload
&lt;/h3&gt;

&lt;p&gt;Upload a clear, front-facing image with the following qualities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Even lighting&lt;/strong&gt; on both sides of the face&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Eyes and brow clearly visible&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neutral expression&lt;/strong&gt; (no smiling, which can distort eyelid exposure)&lt;/li&gt;
&lt;li&gt;Standard image formats (JPG, PNG)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For consistent results over time, try to match lighting and camera angle across sessions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Choose Your Mode
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; offers two analysis modes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scientific&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Objective, structured eye-area readouts with clinical-style scoring and improvement suggestions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Roast&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Humorous, satirical tone while keeping the same underlying scores and dimensions—easy to share with friends&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both modes use the &lt;strong&gt;same evaluation engine&lt;/strong&gt;—the Roast mode just wraps the output in a more entertaining format.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Receive Your Hunter Eyes Score
&lt;/h3&gt;

&lt;p&gt;Results include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total Score&lt;/strong&gt;: Composite score mapped to a tier&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tier Rank&lt;/strong&gt;: S / A / B / C / D–F with community-style titles (e.g., "Supreme Hunter," "Normie")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Six Sub-dimension Scores&lt;/strong&gt;: Each on a 1–10 scale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strengths &amp;amp; Weaknesses&lt;/strong&gt;: Which dimensions are working for you&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actionable Tips&lt;/strong&gt;: Practical recommendations (sleep improvement, cold compress, brow grooming, body-fat management, eye-area training notes)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Note&lt;/strong&gt;: &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; is an aesthetic self-assessment tool. It does &lt;strong&gt;not&lt;/strong&gt; replace professional medical or mental-health advice. For eye disease, vision concerns, or psychological distress, consult a qualified healthcare provider.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Hunter Eyes Scoring Dimensions and Tiers
&lt;/h2&gt;

&lt;p&gt;Here is how &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; breaks down the &lt;strong&gt;hunter eyes&lt;/strong&gt; concept into measurable dimensions:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Sub-dimension&lt;/th&gt;
&lt;th&gt;Role in Hunter Eyes Assessment&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Canthal Tilt&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Outer vs. inner eye corner angle; the most discussed trait in &lt;strong&gt;hunter eyes&lt;/strong&gt; discourse&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Upper Eyelid Exposure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;How much upper-lid skin shows; less exposure reads more "hunter"&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Eye Socket Depth&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Perceived depth of the orbit and bone structure&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Lower Eyelid Exposure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lower lid tightness and lower scleral show&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Eye Shape / Almond&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Horizontal vs. vertical aperture; almond shape alignment&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Brow–Eye Distance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Brow height vs. lid; compactness of the upper third&lt;/td&gt;
&lt;td&gt;10%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These sub-scores combine into a &lt;strong&gt;total Hunter Eyes score&lt;/strong&gt; that maps to a tier:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier&lt;/th&gt;
&lt;th&gt;Community Title&lt;/th&gt;
&lt;th&gt;Approximate Score Range&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;S&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supreme Hunter&lt;/td&gt;
&lt;td&gt;8.5–10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;A&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Elite Hunter&lt;/td&gt;
&lt;td&gt;7.0–8.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Decent Hunter&lt;/td&gt;
&lt;td&gt;5.5–6.9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;C&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Average / Borderline&lt;/td&gt;
&lt;td&gt;4.0–5.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;D–F&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Prey Zone&lt;/td&gt;
&lt;td&gt;Below 4.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
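
&lt;p&gt;The exact aggregation formula is not published, but given the weights and tier cutoffs above, a plausible reading is a simple weighted average of the six 1–10 sub-scores. The sketch below is a hypothetical illustration of that scheme, not Hunter Eyes' actual code:&lt;/p&gt;

```python
# Hypothetical aggregation: weighted average of the six sub-dimensions,
# using the weights and tier cutoffs from the tables above.
WEIGHTS = {
    "canthal_tilt": 0.20,
    "upper_eyelid_exposure": 0.20,
    "eye_socket_depth": 0.20,
    "lower_eyelid_exposure": 0.15,
    "eye_shape": 0.15,
    "brow_eye_distance": 0.10,
}

TIERS = [  # (minimum score, tier, community title), checked top-down
    (8.5, "S", "Supreme Hunter"),
    (7.0, "A", "Elite Hunter"),
    (5.5, "B", "Decent Hunter"),
    (4.0, "C", "Average / Borderline"),
    (0.0, "D-F", "Prey Zone"),
]

def composite_score(sub_scores):
    """Weighted average of the six 1-10 sub-dimension scores."""
    return sum(WEIGHTS[dim] * score for dim, score in sub_scores.items())

def tier_for(score):
    for minimum, tier, title in TIERS:
        if score >= minimum:
            return tier, title
    return "D-F", "Prey Zone"

scores = {
    "canthal_tilt": 8, "upper_eyelid_exposure": 7, "eye_socket_depth": 6,
    "lower_eyelid_exposure": 7, "eye_shape": 8, "brow_eye_distance": 6,
}
total = composite_score(scores)
print(round(total, 2), tier_for(total))  # 7.05 ('A', 'Elite Hunter')
```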

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;: Your score is most useful as a &lt;strong&gt;longitudinal tracking tool&lt;/strong&gt;. Comparing your &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; results over weeks and months—whether you've changed sleep habits, body fat, or grooming—gives you far more value than a single snapshot.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Who Is Hunter Eyes For?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; serves several audiences:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Looksmax Community Members
&lt;/h3&gt;

&lt;p&gt;If you've encountered &lt;strong&gt;hunter eyes&lt;/strong&gt; content on forums, Reddit (r/malegrooming, r/looksmax), TikTok, or YouTube and want one consistent, repeatable yardstick for your eye area, &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; provides just that. Instead of subjective before/after comparisons, you get numerical scores you can track over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Self-Improvement Enthusiasts
&lt;/h3&gt;

&lt;p&gt;People interested in optimizing their appearance want &lt;strong&gt;non-surgical levers&lt;/strong&gt; they can act on. The improvement tips from &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sleep quality and duration&lt;/strong&gt; (affects eye puffiness and lid swelling)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold compress&lt;/strong&gt; (temporarily reduces puffiness and may tighten skin)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brow grooming&lt;/strong&gt; (shaping the brow changes perceived brow–eye distance)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Body fat percentage&lt;/strong&gt; (affects facial fat distribution around the eyes)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eye-area habits&lt;/strong&gt; (reducing eye rubbing, screen strain)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Those Who Prefer Data Over Subjectivity
&lt;/h3&gt;

&lt;p&gt;If you find subjective photo comparisons frustrating and prefer &lt;strong&gt;scores and dimensions&lt;/strong&gt; to vague impressions, the &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; breakdown gives you concrete numbers to work with.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ &lt;strong&gt;Best Practice&lt;/strong&gt;: Use &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; results as &lt;strong&gt;one input&lt;/strong&gt; among many—alongside how you feel, feedback from people you trust, and professional advice. No single score defines your worth or attractiveness.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How to Get the Most Out of Hunter Eyes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Track Over Time, Don't Obsess Over One Score
&lt;/h3&gt;

&lt;p&gt;A single &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; score is a data point. What matters is the &lt;strong&gt;trend&lt;/strong&gt;. Take photos under consistent conditions (same lighting, same camera, same expression) every 2–4 weeks and compare your trajectory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Focus on the Levers You Can Actually Pull
&lt;/h3&gt;

&lt;p&gt;Some &lt;strong&gt;hunter eyes&lt;/strong&gt; traits are heavily influenced by bone structure and genetics—and are hard to change. Others respond to lifestyle and grooming adjustments. The &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; improvement tips are deliberately practical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improve sleep (7–9 hours, consistent schedule)&lt;/li&gt;
&lt;li&gt;Reduce sodium and alcohol (reduces eye puffiness)&lt;/li&gt;
&lt;li&gt;Maintain a stable body fat percentage&lt;/li&gt;
&lt;li&gt;Groom eyebrows to optimize brow shape&lt;/li&gt;
&lt;li&gt;Use cold water or cold compresses in the morning&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use the Right Mode for the Right Context
&lt;/h3&gt;

&lt;p&gt;Share your &lt;strong&gt;Hunter Eyes&lt;/strong&gt; results with friends using &lt;strong&gt;Roast mode&lt;/strong&gt; for laughs, but use &lt;strong&gt;Scientific mode&lt;/strong&gt; when you want to seriously study your scores and track specific dimensions over time.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤔 FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: What exactly are "hunter eyes"?
&lt;/h3&gt;

&lt;p&gt;A: &lt;strong&gt;Hunter eyes&lt;/strong&gt; is an online aesthetics label describing a predator-leaning combination of eye-area traits—positive or neutral canthal tilt, less upper eyelid exposure, deeper-set sockets, and a more almond-shaped aperture. It originates from looksmax and looksmaxxing communities and is discussed extensively on platforms like Reddit and TikTok. Wikipedia notes that in looksmaxxing culture, &lt;strong&gt;hunter eyes&lt;/strong&gt; refer to "a neutral/positive canthal tilt, little to no upper eyelid exposure, and low-set eyebrows, resembling the eye area of a predatorial animal."&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Is Hunter Eyes a medical product?
&lt;/h3&gt;

&lt;p&gt;A: No. &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; is an aesthetic self-assessment tool. It does not diagnose medical conditions, replace professional healthcare, or provide treatment recommendations. For any eye health concerns, vision issues, or psychological distress related to appearance, consult a qualified medical professional.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does Hunter Eyes AI work?
&lt;/h3&gt;

&lt;p&gt;A: &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; uses computer vision and AI to analyze six sub-dimensions of your eye area from a front-facing photo: canthal tilt, upper and lower eyelid exposure, eye socket depth, brow–eye distance, and eye shape. These are combined into a composite score and tier rank.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Does Hunter Eyes keep my photos?
&lt;/h3&gt;

&lt;p&gt;A: According to the product's privacy stance, photos are used only for the current analysis session and removed after processing. They are not kept long-term. Review the official &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; privacy policy for full details.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How can I improve my Hunter Eyes score?
&lt;/h3&gt;

&lt;p&gt;A: Improvement tips from &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; focus on actionable, non-surgical levers: optimize sleep quality, reduce eye puffiness through cold compresses and sodium reduction, maintain stable body composition, groom eyebrows strategically, and build consistent eye-area habits. Genetics and bone structure set a baseline, but lifestyle and grooming can meaningfully influence how your eye area reads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What does the tier system mean?
&lt;/h3&gt;

&lt;p&gt;A: &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; maps your total score to tiers S through F. S-tier ("Supreme Hunter") represents the highest-scoring eye-area presentations within &lt;strong&gt;hunter eyes&lt;/strong&gt; community standards. Lower tiers reflect dimensions that fall below the ideal range. The tier system is inspired by community language used in looksmax forums and social media.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hunter eyes&lt;/strong&gt; is one of the most discussed concepts in online aesthetics communities—a shorthand for a commanding, predator-like eye-area appearance. &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; takes this concept and transforms it into something measurable and trackable.&lt;/p&gt;

&lt;p&gt;By breaking down the &lt;strong&gt;hunter eyes&lt;/strong&gt; look into six scored dimensions—canthal tilt, upper eyelid exposure, lower eyelid exposure, eye socket depth, brow–eye distance, and eye shape—the &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; product gives you a consistent, repeatable way to evaluate and follow your eye-area presentation over time.&lt;/p&gt;

&lt;p&gt;Whether you're a looksmax enthusiast, someone exploring non-surgical self-improvement, or simply curious about how your face reads in the &lt;strong&gt;hunter eyes&lt;/strong&gt; framework, &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; provides the tools to measure, understand, and act.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visit &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; to analyze your eye area today.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article provides informational content about the Hunter Eyes aesthetic concept and the &lt;a href="https://huntereyes.net/" rel="noopener noreferrer"&gt;Hunter Eyes&lt;/a&gt; AI-powered evaluation product. It is not medical advice.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/hunter-eyes-complete-guide-2026" rel="noopener noreferrer"&gt;Hunter Eyes: Complete Guide to Understanding and Evaluating Eye-Area Aesthetics in 2026&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aesthetics</category>
      <category>looksmax</category>
      <category>selfimprovement</category>
    </item>
    <item>
      <title>Qwen3.6-35B-A3B Complete Review: Alibaba's Open-Source Coding Model That Beats Frontier Giants</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Fri, 17 Apr 2026 11:01:11 +0000</pubDate>
      <link>https://dev.to/czmilo/qwen36-35b-a3b-complete-review-alibabas-open-source-coding-model-that-beats-frontier-giants-4382</link>
      <guid>https://dev.to/czmilo/qwen36-35b-a3b-complete-review-alibabas-open-source-coding-model-that-beats-frontier-giants-4382</guid>
      <description>&lt;h1&gt;
  
  
  Qwen3.6-35B-A3B Complete Review: Alibaba's Open-Source Coding Model That Beats Frontier Giants
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Qwen3.6-35B-A3B&lt;/strong&gt; is Alibaba's latest open-source sparse Mixture-of-Experts (MoE) model with &lt;strong&gt;35B total parameters&lt;/strong&gt; and only &lt;strong&gt;3B active parameters per token&lt;/strong&gt;, making it incredibly efficient for local deployment&lt;/li&gt;
&lt;li&gt;Released &lt;strong&gt;April 16, 2026&lt;/strong&gt; under the &lt;strong&gt;Apache 2.0 license&lt;/strong&gt;, freely available on Hugging Face, Ollama, and Unsloth (GGUF format)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outperforms&lt;/strong&gt; dense 27B-param models and directly competes with frontier models on coding benchmarks, scoring &lt;strong&gt;51.5 on Terminal-Bench 2.0&lt;/strong&gt; and &lt;strong&gt;73.4 on SWE-bench Verified&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Excels at &lt;strong&gt;agentic coding&lt;/strong&gt; — repository-level reasoning, tool calling, and multi-step workflows — all within a &lt;strong&gt;262,144-token context window&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Runs on consumer hardware (24GB RAM Mac compatible with GGUF quantization)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What Is Qwen3.6-35B-A3B?&lt;/li&gt;
&lt;li&gt;Technical Architecture: Sparse MoE Explained&lt;/li&gt;
&lt;li&gt;Benchmark Performance&lt;/li&gt;
&lt;li&gt;Agentic Coding Capabilities&lt;/li&gt;
&lt;li&gt;How to Run Locally&lt;/li&gt;
&lt;li&gt;Availability: Hugging Face, Ollama, Unsloth&lt;/li&gt;
&lt;li&gt;Qwen Studio: Cloud Access&lt;/li&gt;
&lt;li&gt;Comparison with Competitors&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What Is Qwen3.6-35B-A3B?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Qwen3.6-35B-A3B&lt;/strong&gt; is the latest open-weight model from Alibaba's Qwen team, officially released on &lt;strong&gt;April 16, 2026&lt;/strong&gt;. It represents a significant leap in the Qwen series, specifically designed for &lt;strong&gt;agentic coding&lt;/strong&gt; and &lt;strong&gt;repository-scale reasoning tasks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The model name encodes its architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;35B&lt;/strong&gt; — Total parameter count across all expert modules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A3B&lt;/strong&gt; — Only &lt;strong&gt;3B parameters&lt;/strong&gt; are activated per token, dramatically reducing inference cost while retaining the full 35B of stored capacity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a &lt;strong&gt;sparse Mixture-of-Experts (MoE)&lt;/strong&gt; architecture, in which a router activates only a small subset of the model's expert modules (feed-forward sub-networks) for each input token. The result: frontier-level performance at a fraction of the active-parameter cost.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Key Insight&lt;/strong&gt;: Qwen3.6-35B-A3B activates only 3B parameters per token, yet its 35B total parameters give it knowledge capacity comparable to much larger dense models — at roughly 1/10th the inference compute.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Apache 2.0 License — Truly Open
&lt;/h3&gt;

&lt;p&gt;Unlike many "open" models with restrictive licenses, Qwen3.6-35B-A3B is released under &lt;strong&gt;Apache 2.0&lt;/strong&gt;, which means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Commercial use allowed&lt;/li&gt;
&lt;li&gt;✅ No royalties or fees&lt;/li&gt;
&lt;li&gt;✅ Can be modified and distributed&lt;/li&gt;
&lt;li&gt;✅ Patent rights granted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes it one of the most permissive open-source models available for enterprise and individual developers alike.&lt;/p&gt;




&lt;h2&gt;
  
  
  Technical Architecture: Sparse MoE Explained
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Mixture-of-Experts Works
&lt;/h3&gt;

&lt;p&gt;Traditional dense language models activate &lt;strong&gt;all parameters&lt;/strong&gt; for every token. In contrast, sparse MoE models like Qwen3.6-35B-A3B use a &lt;strong&gt;router mechanism&lt;/strong&gt; that selects only a subset of "expert" modules for each token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Traditional Dense Model:  Every token → All 35B parameters
Qwen3.6-35B-A3B:          Every token → Only 3B active experts (via routing)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inference efficiency&lt;/strong&gt;: Only ~8.6% of parameters are computed per token&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge capacity&lt;/strong&gt;: 35B total parameters store vast knowledge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: More experts can be added without proportionally increasing compute&lt;/li&gt;
&lt;/ul&gt;
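
&lt;p&gt;The routing idea can be sketched in a few lines of NumPy. This is a toy illustration of top-k expert routing in general, not Qwen's actual router (whose expert count, expert architecture, and gating details are model-specific):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d_model = 8, 2, 16

# Each expert here is a toy linear map; real MoE experts are feed-forward blocks.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                      # one routing score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the k highest scores
    gates = np.exp(logits[top])
    gates = gates / gates.sum()                # softmax over the selected experts
    # Only the selected experts run; the others are skipped entirely. Skipping
    # most experts per token is where the 3B-active-of-35B-total savings lives.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d_model)
out = moe_layer(token)
print(out.shape)  # (16,)
```

&lt;p&gt;Note the asymmetry this creates: memory must hold all 8 expert matrices, but each forward pass multiplies through only 2 of them, which is why MoE models need the RAM of their total parameter count but only the compute of their active count.&lt;/p&gt;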

&lt;h3&gt;
  
  
  Key Technical Specifications
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Specification&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total Parameters&lt;/td&gt;
&lt;td&gt;35B&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Active Parameters per Token&lt;/td&gt;
&lt;td&gt;3B&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Architecture&lt;/td&gt;
&lt;td&gt;Sparse MoE (Mixture-of-Experts)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context Length&lt;/td&gt;
&lt;td&gt;262,144 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multimodal&lt;/td&gt;
&lt;td&gt;Yes (image + video understanding)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tool Calling&lt;/td&gt;
&lt;td&gt;Native support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thinking Mode&lt;/td&gt;
&lt;td&gt;Yes — preserves chain-of-thought reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Thinking Mode Preservation
&lt;/h3&gt;

&lt;p&gt;One of Qwen3.6's most innovative features is its &lt;strong&gt;thinking mode preservation&lt;/strong&gt; — the model's ability to maintain full reasoning context across extended agentic workflows. This is particularly beneficial for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent scenarios&lt;/strong&gt; where maintaining reasoning context enhances decision consistency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reducing token consumption&lt;/strong&gt; by minimizing redundant reasoning in multi-step tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improving KV cache utilization&lt;/strong&gt;, optimizing inference efficiency in both thinking and non-thinking modes&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Benchmark Performance
&lt;/h2&gt;

&lt;p&gt;Qwen3.6-35B-A3B demonstrates &lt;strong&gt;impressive performance&lt;/strong&gt; across coding and reasoning benchmarks, often surpassing models with significantly more active parameters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coding Benchmarks
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Benchmark&lt;/th&gt;
&lt;th&gt;Qwen3.6-35B-A3B&lt;/th&gt;
&lt;th&gt;Gemma4-31B&lt;/th&gt;
&lt;th&gt;Claude Sonnet 4.5&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Terminal-Bench 2.0 (Agentic Coding)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;51.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;42.9&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SWE-bench Pro&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;49.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;35.7&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SWE-bench Verified&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;73.4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RealWorldQA&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;85.3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;70.3&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terminal-Bench 2.0&lt;/strong&gt; measures agentic terminal coding — the ability to navigate repositories, write code, and execute commands. Qwen3.6-35B-A3B's score of &lt;strong&gt;51.5&lt;/strong&gt; beats Gemma4-31B's 42.9, a roughly 20% relative improvement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SWE-bench Pro&lt;/strong&gt; tests software engineering problem-solving in real GitHub repositories — 49.5 vs 35.7 is a relative advantage of roughly 38%
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RealWorldQA&lt;/strong&gt; measures real-world multimodal understanding — Qwen3.6 scores 85.3, outperforming Claude Sonnet 4.5's 70.3 by roughly 21%&lt;/li&gt;
&lt;li&gt;The model &lt;strong&gt;dramatically surpasses its predecessor Qwen3.5-35B-A3B&lt;/strong&gt;, especially on agentic coding and reasoning tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outperforms the dense 27B-param Qwen3.5-27B&lt;/strong&gt; on several key coding benchmarks&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Comparison with Previous Qwen Generations
&lt;/h3&gt;

&lt;p&gt;Qwen3.6-35B-A3B isn't just an incremental update — it's a generational leap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;vs Qwen3.5-35B-A3B&lt;/strong&gt;: Dramatic improvement on agentic tasks and repository-scale reasoning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vs Qwen3.5-27B (dense)&lt;/strong&gt;: Outperforms on coding benchmarks despite using fewer active parameters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This demonstrates that sparse MoE architecture, when properly optimized, can surpass dense models of comparable or even larger total parameter counts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Agentic Coding Capabilities
&lt;/h2&gt;

&lt;p&gt;Qwen3.6-35B-A3B is specifically engineered for &lt;strong&gt;agentic coding&lt;/strong&gt; — the ability to autonomously perform complex software engineering tasks across entire codebases.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is Agentic Coding?
&lt;/h3&gt;

&lt;p&gt;Agentic coding refers to AI models that can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Navigate large repositories&lt;/strong&gt; — understand project structure, dependencies, and architecture&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write and modify code&lt;/strong&gt; across multiple files and languages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute commands&lt;/strong&gt; — run tests, build systems, interact with terminals&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reason about code&lt;/strong&gt; — understand bug causes, trace execution paths, design solutions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chain multi-step tasks&lt;/strong&gt; — break complex problems into subtasks and execute sequentially&lt;/li&gt;
&lt;/ol&gt;
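&lt;p&gt;The loop behind these capabilities can be sketched in a few lines. This is an illustrative sketch only, not Qwen's actual agent runtime: &lt;code&gt;call_model&lt;/code&gt; and &lt;code&gt;run_tool&lt;/code&gt; are hypothetical stand-ins for a real chat-completion call and a real tool executor.&lt;/p&gt;

```python
# Minimal agent loop sketch: decide -> act -> observe, repeated until done.
# `call_model` and `run_tool` are placeholder stand-ins, not real APIs.

def call_model(history):
    # Placeholder policy: finish once at least one tool result was observed.
    if any(m["role"] == "tool" for m in history):
        return {"action": "finish", "answer": "done"}
    return {"action": "run_tests", "args": {"path": "tests/"}}

def run_tool(name, args):
    # A real agent would shell out or call an API here.
    return f"{name} completed with args {args}"

def agent_loop(task, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_model(history)
        if decision["action"] == "finish":
            return decision["answer"]
        observation = run_tool(decision["action"], decision.get("args", {}))
        history.append({"role": "tool", "content": observation})
    return None

print(agent_loop("fix the failing test"))
```

&lt;p&gt;The key design point is that the model's decision and the tool's observation are appended to the same history, so each step can condition on everything that came before.&lt;/p&gt;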

&lt;h3&gt;
  
  
  Tool Calling Excellence
&lt;/h3&gt;

&lt;p&gt;Qwen3.6 excels at &lt;strong&gt;tool calling capabilities&lt;/strong&gt;, making it ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IDE integrations&lt;/strong&gt; (Continue.dev, Cursor, VS Code Copilot)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automated code review pipelines&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD automation&lt;/strong&gt; — model-triggered test runs and deployments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation generation&lt;/strong&gt; from code analysis&lt;/li&gt;
&lt;/ul&gt;
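&lt;p&gt;Tool calling in these integrations generally works by handing the model a JSON schema describing each available function. A sketch of an OpenAI-style tool definition — the function name and parameters here are illustrative, not part of any specific API:&lt;/p&gt;

```python
# OpenAI-style tool (function) definition the model can choose to call.
# The function name and its parameters are illustrative examples.
run_tests_tool = {
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the project's test suite and return the results.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Directory containing tests."},
                "verbose": {"type": "boolean", "description": "Include full test output."},
            },
            "required": ["path"],
        },
    },
}

# A request passes a list of such tools alongside the chat messages;
# the model replies either with text or with a structured tool call.
tools = [run_tests_tool]
print(tools[0]["function"]["name"])
```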

&lt;h3&gt;
  
  
  Repository-Scale Reasoning
&lt;/h3&gt;

&lt;p&gt;With &lt;strong&gt;262,144 token context&lt;/strong&gt;, Qwen3.6-35B-A3B can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingest entire medium-sized repositories in a single context window&lt;/li&gt;
&lt;li&gt;Maintain coherent understanding across thousands of lines of code&lt;/li&gt;
&lt;li&gt;Reason about cross-file dependencies and architectural patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;: For repository-scale tasks, pair Qwen3.6-35B-A3B with a vector database (like Chroma or Qdrant) for retrieval-augmented generation (RAG). The model's tool calling makes it easy to query external knowledge bases.&lt;/p&gt;
&lt;/blockquote&gt;
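&lt;p&gt;The RAG pattern from the tip above can be illustrated without any vector database: retrieve the most relevant snippets, then prepend them to the prompt. In this sketch, simple word overlap stands in for real embedding similarity; a production pipeline would use a vector store such as Chroma or Qdrant instead.&lt;/p&gt;

```python
# Toy RAG sketch: rank code snippets by word overlap with the query,
# then build a prompt from the top matches. Word overlap is a stand-in
# for embedding similarity from a real vector store.

def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, docs, k=2):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

snippets = [
    "def parse_config(path): open the YAML config file",
    "def connect_db(url): create the database connection pool",
    "def render_page(template): render HTML from a template",
]

question = "where is the database connection created"
context = retrieve(question, snippets)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question
print(context[0])
```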

&lt;h3&gt;
  
  
  Real-World Application: GraphRAG Workflow
&lt;/h3&gt;

&lt;p&gt;A March 2026 arXiv paper demonstrated that a &lt;strong&gt;GraphRAG workflow with Qwen3.5-35B-A3B&lt;/strong&gt; (the predecessor):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Improved bug resolution from 24% to 32%&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cut regressions from 6.08% to 1.82%&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Qwen3.6 builds on this foundation with even stronger reasoning capabilities.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Run Locally
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Option 1: Ollama (Simplest)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install Ollama (macOS/Linux)&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;ollama

&lt;span class="c"&gt;# Pull and run the model&lt;/span&gt;
ollama run qwen3.6:35b-a3b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ollama automatically downloads the quantized model and manages GPU memory. On an Apple Silicon Mac with 24GB of unified memory, the quantized model runs comfortably.&lt;/p&gt;
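&lt;p&gt;Once the model is running, Ollama also exposes a local HTTP API (default port 11434). A standard-library-only sketch of a non-streaming request to its &lt;code&gt;/api/generate&lt;/code&gt; endpoint; actually sending it assumes an Ollama server is running with the model tag shown above:&lt;/p&gt;

```python
import json
import urllib.request

# Build a non-streaming request for Ollama's local /api/generate endpoint.
# Sending it requires a running Ollama server on the default port 11434.
def build_generate_request(model, prompt):
    url = "http://localhost:11434/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

req = build_generate_request("qwen3.6:35b-a3b", "Explain MoE routing in one sentence.")
print(req.full_url)
# To send: resp = urllib.request.urlopen(req)
#          print(json.loads(resp.read())["response"])
```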

&lt;h3&gt;
  
  
  Option 2: Unsloth (Fastest, GGUF Format)
&lt;/h3&gt;

&lt;p&gt;Unsloth provides &lt;strong&gt;optimized GGUF&lt;/strong&gt; versions of Qwen3.6-35B-A3B, with dynamic 4-bit quantization that runs well on consumer hardware.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download from Hugging Face&lt;/span&gt;
&lt;span class="c"&gt;# https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF&lt;/span&gt;

&lt;span class="c"&gt;# The full model at F16 precision is ~72GB&lt;/span&gt;
&lt;span class="c"&gt;# With 4-bit quantization, it fits in ~18GB VRAM&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Unsloth's dynamic 4-bit&lt;/strong&gt; achieves near-lossless quality at dramatically reduced memory requirements, making 35B models viable on 24GB GPUs.&lt;/p&gt;
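&lt;p&gt;The memory figures above follow directly from bytes per weight. A rough back-of-the-envelope estimate, ignoring KV cache, activations, and runtime overhead (so real usage is somewhat higher):&lt;/p&gt;

```python
# Rough VRAM estimate for model weights: total parameters x bytes per weight.
# Ignores KV cache, activations, and runtime overhead.

def weight_gb(total_params_b, bits_per_weight):
    bytes_total = total_params_b * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

for bits in (16, 8, 4):
    print(f"{bits}-bit weights for a 35B model: ~{weight_gb(35, bits):.0f} GB")
```

&lt;p&gt;This yields roughly 70GB at F16, 35GB at 8-bit, and 17.5GB at 4-bit, consistent with the ~72GB / ~36GB / ~18GB figures quoted once overhead is included.&lt;/p&gt;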

&lt;h3&gt;
  
  
  Option 3: SGLang (Production-Grade)
&lt;/h3&gt;

&lt;p&gt;For production deployments with optimal throughput:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; sglang.launch_server &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--model-path&lt;/span&gt; Qwen/Qwen3.6-35B-A3B &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--port&lt;/span&gt; 8000 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--tp-size&lt;/span&gt; 8 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--mem-fraction-static&lt;/span&gt; 0.8 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--context-length&lt;/span&gt; 262144 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--reasoning-parser&lt;/span&gt; qwen3 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--speculative-algo&lt;/span&gt; NEXTN &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--speculative-num-steps&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--speculative-eagle-topk&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--speculative-num-draft-tokens&lt;/span&gt; 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option 4: Hugging Face Transformers
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;transformers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AutoModelForCausalLM&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AutoTokenizer&lt;/span&gt;

&lt;span class="n"&gt;model_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Qwen/Qwen3.6-35B-A3B&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;tokenizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AutoTokenizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AutoModelForCausalLM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;torch_dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;device_map&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Hardware Requirements
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Precision&lt;/th&gt;
&lt;th&gt;VRAM Required&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Full F16&lt;/td&gt;
&lt;td&gt;~72GB&lt;/td&gt;
&lt;td&gt;Requires 2x A100 or high-end workstation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8-bit&lt;/td&gt;
&lt;td&gt;~36GB&lt;/td&gt;
&lt;td&gt;Single A100 40GB viable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4-bit (Unsloth)&lt;/td&gt;
&lt;td&gt;~18-20GB&lt;/td&gt;
&lt;td&gt;RTX 3090/4090 or Mac 24GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Availability
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Hugging Face
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Model Page&lt;/strong&gt;: &lt;a href="https://huggingface.co/Qwen/Qwen3.6-35B-A3B" rel="noopener noreferrer"&gt;https://huggingface.co/Qwen/Qwen3.6-35B-A3B&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The official release includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base model weights&lt;/li&gt;
&lt;li&gt;Chat/instruct versions&lt;/li&gt;
&lt;li&gt;FP8 optimized variants&lt;/li&gt;
&lt;li&gt;SGLang integration scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ollama Library
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Library Page&lt;/strong&gt;: &lt;a href="https://ollama.com/library/qwen3.6:35b-a3b" rel="noopener noreferrer"&gt;https://ollama.com/library/qwen3.6:35b-a3b&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ollama's library version includes optimized defaults for consumer hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unsloth (GGUF)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Model Page&lt;/strong&gt;: &lt;a href="https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF" rel="noopener noreferrer"&gt;https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unsloth provides quantized GGUF files for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mac compatible&lt;/strong&gt; (Apple Silicon optimized)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4-bit dynamic&lt;/strong&gt; quantization for maximum efficiency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast inference&lt;/strong&gt; with Unsloth's inference engine&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Qwen Studio (Cloud)
&lt;/h3&gt;

&lt;p&gt;For those who don't want to run locally, &lt;strong&gt;Qwen Studio&lt;/strong&gt; offers comprehensive cloud access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chatbot interface&lt;/li&gt;
&lt;li&gt;Image and video understanding&lt;/li&gt;
&lt;li&gt;Image generation&lt;/li&gt;
&lt;li&gt;Document processing&lt;/li&gt;
&lt;li&gt;Web search integration&lt;/li&gt;
&lt;li&gt;Tool utilization&lt;/li&gt;
&lt;li&gt;Artifacts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Access at &lt;a href="https://qwen.ai" rel="noopener noreferrer"&gt;https://qwen.ai&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Comparison with Competitors
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Qwen3.6-35B-A3B vs Gemma4-31B
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Qwen3.6-35B-A3B&lt;/th&gt;
&lt;th&gt;Gemma4-31B&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Active Parameters&lt;/td&gt;
&lt;td&gt;3B&lt;/td&gt;
&lt;td&gt;31B (dense)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total Parameters&lt;/td&gt;
&lt;td&gt;35B (MoE)&lt;/td&gt;
&lt;td&gt;31B (dense)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Gemma Terms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Terminal-Bench 2.0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;51.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;42.9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SWE-bench Pro&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;49.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;35.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tool Calling&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Via API&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verdict&lt;/strong&gt;: Qwen3.6-35B-A3B wins decisively on these coding benchmarks with only 3B active parameters vs Gemma's 31B dense — strong evidence that a sparse MoE architecture can outperform dense models of similar size.&lt;/p&gt;

&lt;h3&gt;
  
  
  Qwen3.6-35B-A3B vs Claude Sonnet 4.5
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Qwen3.6-35B-A3B&lt;/th&gt;
&lt;th&gt;Claude Sonnet 4.5&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Deployment&lt;/td&gt;
&lt;td&gt;Local + Cloud&lt;/td&gt;
&lt;td&gt;API only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RealWorldQA&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;85.3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;70.3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multimodal&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tool Calling&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context&lt;/td&gt;
&lt;td&gt;262K&lt;/td&gt;
&lt;td&gt;200K&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verdict&lt;/strong&gt;: Qwen3.6 matches or beats Claude Sonnet 4.5 on key benchmarks while offering local deployment and open weights.&lt;/p&gt;

&lt;h3&gt;
  
  
  Qwen3.6-35B-A3B vs GPT-4o
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Qwen3.6-35B-A3B&lt;/th&gt;
&lt;th&gt;GPT-4o&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Deployment&lt;/td&gt;
&lt;td&gt;Local&lt;/td&gt;
&lt;td&gt;API only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open Weight&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coding (SWE-bench)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;73.4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~50-60 est.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tool Calling&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Verdict&lt;/strong&gt;: Qwen3.6-35B-A3B's open-source nature, Apache 2.0 license, and competitive performance make it an attractive alternative for developers who need local deployment.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: What does "35B-A3B" mean?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: The model has &lt;strong&gt;35B total parameters&lt;/strong&gt; across all expert modules in its MoE architecture, but only &lt;strong&gt;3B (A3B) parameters are activated per token&lt;/strong&gt;. This sparse activation is what makes inference so efficient.&lt;/p&gt;
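&lt;p&gt;A quick calculation shows why that sparsity matters for inference cost: per-token compute scales with the active parameter count, not the total.&lt;/p&gt;

```python
# Per-token compute scales with ACTIVE parameters, not total parameters.
total_b, active_b = 35, 3  # billions of parameters

sparsity = active_b / total_b
print(f"Active fraction per token: {sparsity:.1%}")
print(f"Approximate per-token FLOPs vs a dense 35B model: ~{sparsity:.2f}x")
```

&lt;p&gt;Under 9% of the weights participate in any single forward pass, which is why the model can approach the throughput of a much smaller dense model while drawing on 35B parameters of capacity.&lt;/p&gt;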

&lt;h3&gt;
  
  
  Q: Can I run Qwen3.6-35B-A3B on my Mac?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: Yes — with &lt;strong&gt;Unsloth's 4-bit GGUF&lt;/strong&gt; quantization, the model runs on Apple Silicon Macs with at least 24GB of unified memory. The full F16 model requires ~72GB, which exceeds consumer hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Is this model truly open-source?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: Yes. Released under &lt;strong&gt;Apache 2.0 license&lt;/strong&gt; — one of the most permissive open-source licenses. You can use it commercially, modify it, and distribute it without paying royalties or requesting permission.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does it compare to GPT-4 or Claude?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: On coding benchmarks like SWE-bench Verified (73.4), Qwen3.6-35B-A3B approaches frontier-level performance. It's not quite at GPT-4o/Claude Opus level on all tasks, but at 3B active parameters and with an Apache 2.0 license, it's remarkably capable for local deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What is Qwen3.6's thinking mode?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: Qwen3.6 supports &lt;strong&gt;thinking mode&lt;/strong&gt; — an explicit chain-of-thought reasoning process where the model shows its work before giving final answers. This is preserved across agentic workflows, enabling more consistent multi-step reasoning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What is speculative decoding support?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: Qwen3.6 supports &lt;strong&gt;speculative decoding&lt;/strong&gt; with SGLang, enabling faster inference by using draft tokens predicted by a smaller model. This can significantly improve throughput in production deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can it handle entire codebases?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: With &lt;strong&gt;262,144 token context&lt;/strong&gt;, Qwen3.6-35B-A3B can ingest most medium-sized repositories in a single context. For larger projects, use retrieval-augmented generation (RAG) to fetch relevant files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What makes it good for agentic coding?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;: Three key features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Thinking mode preservation&lt;/strong&gt; — maintains reasoning context across steps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native tool calling&lt;/strong&gt; — integrates with IDEs, terminals, and APIs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extended context (262K)&lt;/strong&gt; — processes large repositories without losing history&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Qwen3.6-35B-A3B represents a watershed moment&lt;/strong&gt; in the open-source AI landscape. For the first time, developers have access to a model that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Activates only 3B parameters&lt;/strong&gt; per token while leveraging 35B total parameters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Beats Gemma4-31B by 20%+&lt;/strong&gt; on agentic coding benchmarks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scores 73.4 on SWE-bench Verified&lt;/strong&gt; — approaching frontier-level coding ability&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runs locally&lt;/strong&gt; on consumer hardware (24GB Mac) with GGUF quantization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Carries Apache 2.0 license&lt;/strong&gt; — truly open for commercial and personal use&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  When to Use Qwen3.6-35B-A3B
&lt;/h3&gt;

&lt;p&gt;✅ &lt;strong&gt;Best for&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local LLM deployments (privacy, cost, offline access)&lt;/li&gt;
&lt;li&gt;Agentic coding workflows (Continue.dev, Cursor, custom agents)&lt;/li&gt;
&lt;li&gt;Repository-scale code understanding and generation&lt;/li&gt;
&lt;li&gt;Applications requiring tool calling and external integrations&lt;/li&gt;
&lt;li&gt;Teams needing commercially permissive open-source models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ &lt;strong&gt;Consider alternatives if&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need GPT-4/Claude-level reasoning on non-coding tasks&lt;/li&gt;
&lt;li&gt;You require managed API with SLAs and support&lt;/li&gt;
&lt;li&gt;Your hardware cannot handle 18-72GB model sizes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hugging Face&lt;/strong&gt;: &lt;a href="https://huggingface.co/Qwen/Qwen3.6-35B-A3B" rel="noopener noreferrer"&gt;Qwen/Qwen3.6-35B-A3B&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama&lt;/strong&gt;: &lt;code&gt;ollama run qwen3.6:35b-a3b&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unsloth GGUF&lt;/strong&gt;: &lt;a href="https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF" rel="noopener noreferrer"&gt;unsloth/Qwen3.6-35B-A3B-GGUF&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Qwen Studio&lt;/strong&gt;: &lt;a href="https://qwen.ai" rel="noopener noreferrer"&gt;https://qwen.ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/QwenLM/Qwen3.6" rel="noopener noreferrer"&gt;QwenLM/Qwen3.6&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/qwen3-6-35b-a3b-review" rel="noopener noreferrer"&gt;Qwen3.6-35B-A3B Complete Review: Alibaba's Open-Source Coding Model&lt;/a&gt;&lt;/p&gt;





</description>
      <category>ai</category>
      <category>opensource</category>
      <category>coding</category>
      <category>qwen</category>
    </item>
    <item>
      <title>freqz: Photo Puzzles, AI Puzzles, and a Workflow That Actually Ships — 2026 Review</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Fri, 10 Apr 2026 13:26:41 +0000</pubDate>
      <link>https://dev.to/czmilo/freqz-photo-puzzles-ai-puzzles-and-a-workflow-that-actually-ships-2026-review-ngl</link>
      <guid>https://dev.to/czmilo/freqz-photo-puzzles-ai-puzzles-and-a-workflow-that-actually-ships-2026-review-ngl</guid>
      <description>&lt;h1&gt;
  
  
  freqz: Photo Puzzles, AI Puzzles, and a Workflow That Actually Ships — 2026 Review
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Key Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;freqz&lt;/strong&gt; is an AI-powered creative platform combining &lt;strong&gt;photo puzzles&lt;/strong&gt;, &lt;strong&gt;AI puzzle aesthetics&lt;/strong&gt;, and &lt;strong&gt;K-style visual output&lt;/strong&gt; into a single repeatable workflow&lt;/li&gt;
&lt;li&gt;Unlike typical AI generators that produce inconsistent "lucky shots," freqz prioritizes &lt;strong&gt;reliable, repeatable output&lt;/strong&gt; — critical for creators and teams with publishing schedules&lt;/li&gt;
&lt;li&gt;The platform targets &lt;strong&gt;creators, designers, marketers, and social media operators&lt;/strong&gt; who need consistent visual assets without spending hours on configuration&lt;/li&gt;
&lt;li&gt;freqz compresses the entire creative loop — upload, choose a direction, generate, export — into a process you can repeat daily without mental fatigue&lt;/li&gt;
&lt;li&gt;The core value proposition: &lt;strong&gt;calm interfaces beat powerful ones&lt;/strong&gt; when the goal is finishing rather than tinkering&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What Is freqz?&lt;/li&gt;
&lt;li&gt;Core Features: Photo Puzzles and AI Puzzle Aesthetics&lt;/li&gt;
&lt;li&gt;Why freqz Beats AI Lucky Shots for Real Workflows&lt;/li&gt;
&lt;li&gt;Who Is freqz For?&lt;/li&gt;
&lt;li&gt;First-Time Tips: How to Get the Most Out of freqz&lt;/li&gt;
&lt;li&gt;SEO-Friendly Content Strategy for freqz&lt;/li&gt;
&lt;li&gt;The "Good Taste" Philosophy: Calm Interfaces as a Feature&lt;/li&gt;
&lt;li&gt;Trust and Transparency: What to Expect&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;li&gt;Get Started&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What Is freqz?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;freqz&lt;/strong&gt; (&lt;a href="https://freqz.net" rel="noopener noreferrer"&gt;https://freqz.net&lt;/a&gt;) is an AI creative platform that combines &lt;strong&gt;photo puzzles&lt;/strong&gt;, &lt;strong&gt;AI puzzle aesthetics&lt;/strong&gt;, and &lt;strong&gt;K-style visual output&lt;/strong&gt; into a single, repeatable creative workflow.&lt;/p&gt;

&lt;p&gt;The problem freqz solves is real: most AI image tools are "sometimes incredible, often inconsistent." They work great as a demo. They fall apart when you need to ship ten social posts by Friday with a consistent visual identity.&lt;/p&gt;

&lt;p&gt;freqz takes the opposite approach. Instead of maximizing what the model can do in isolation, freqz optimizes for &lt;strong&gt;what you can reproduce tomorrow&lt;/strong&gt;. The interface is intentionally simple — fewer knobs, fewer mystery failures, fewer moments where you wonder whether the model "just didn't feel like it."&lt;/p&gt;

&lt;p&gt;That restraint is the product philosophy. And it's surprisingly rare in the AI creative space.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Features: Photo Puzzles and AI Puzzle Aesthetics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Photo Puzzles
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;photo puzzle&lt;/strong&gt; feature lets you upload a source image and transform it into a structured visual comparison — ideal for before-and-after content, portfolio tiles, carousel assets, and social media thumbnails.&lt;/p&gt;

&lt;p&gt;Unlike simple filters or presets, photo puzzles on freqz preserve the subject's integrity while applying a stylized transformation. The result is something that looks intentional, not accidental.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI Puzzle Aesthetics
&lt;/h3&gt;

&lt;p&gt;The AI puzzle aesthetic layer is where freqz differentiates from conventional photo editors. By treating each visual as a "puzzle piece" in a larger K-style composition, freqz helps creators build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Profile photo refreshes&lt;/strong&gt; with consistent mood across a series&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cover art&lt;/strong&gt; with a cohesive visual language&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comparison graphics&lt;/strong&gt; that are crisp and easy to recombine in external design tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Themed feed content&lt;/strong&gt; where each post reinforces the last&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  K-Style Output
&lt;/h3&gt;

&lt;p&gt;K-style (Korean-style) visual aesthetics have become a dominant force in social media — characterized by clean compositions, subtle color grading, and an overall "premium but approachable" feel. freqz leans into this sensibility, making it easy to produce K-style visuals without endless trial and error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why freqz Beats AI Lucky Shots for Real Workflows
&lt;/h2&gt;

&lt;p&gt;The most common AI image tool failure mode is "sometimes incredible, often inconsistent." Here's why that matters less on freqz:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Criteria&lt;/th&gt;
&lt;th&gt;Typical AI Generator&lt;/th&gt;
&lt;th&gt;freqz&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Consistency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Random or mood-dependent&lt;/td&gt;
&lt;td&gt;Planable, repeatable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Onboarding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tutorial required&lt;/td&gt;
&lt;td&gt;Start in under 2 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Output type&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Single images&lt;/td&gt;
&lt;td&gt;Batched consistent series&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Workflow fit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Novelty toy&lt;/td&gt;
&lt;td&gt;Production tool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Weekly publishing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Exhausting&lt;/td&gt;
&lt;td&gt;Sustainable&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;freqz focuses on &lt;strong&gt;usable output you can plan around&lt;/strong&gt; — social posts, portfolio tiles, before-and-after comparisons, thumbnails. When your reputation depends on a coherent look, freqz behaves less like a randomizer and more like a production tool.&lt;/p&gt;

&lt;p&gt;This is why teams mention freqz in reviews: &lt;strong&gt;reliability beats novelty when you ship weekly.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Is freqz For?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creators and Social Media Operators
&lt;/h3&gt;

&lt;p&gt;You need &lt;strong&gt;repeatable style&lt;/strong&gt; and &lt;strong&gt;repeatable throughput&lt;/strong&gt;. freqz fits a weekly publishing rhythm: one theme, one lane, many images. People who post often understand why velocity is the floor under distribution — and freqz is designed to raise that floor.&lt;/p&gt;

&lt;h3&gt;
  
  
  Designers, Marketers, and Growth Teams
&lt;/h3&gt;

&lt;p&gt;You need &lt;strong&gt;explainable steps&lt;/strong&gt; and &lt;strong&gt;controllable outcomes&lt;/strong&gt;. When you present to a client or stakeholder, "magic" is not a strategy. freqz keeps the pipeline legible, which makes it easier to adopt inside a real workflow instead of treating it as a one-off toy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Everyday Users
&lt;/h3&gt;

&lt;p&gt;You don't want to tinker — you want a good result quickly. That's exactly where freqz shines: &lt;strong&gt;complexity stays in the system, simplicity stays with you&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;&lt;br&gt;
The fastest way to understand freqz is to ship something small: one asset, one caption, one post. Once you feel how freqz fits your rhythm, you'll know why so many creators recommend it over alternatives.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  First-Time Tips: How to Get the Most Out of freqz
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Start with a Clear Subject
&lt;/h3&gt;

&lt;p&gt;Well-lit photos with a readable focal point tend to produce cleaner compositions in freqz. If you have an image with strong contrast and a clear subject, you'll get better puzzle transformations.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Keep a Series Consistent
&lt;/h3&gt;

&lt;p&gt;If you're building a themed set, &lt;strong&gt;stay in one style lane&lt;/strong&gt; so freqz can reinforce a unified look across all your content. This is especially important for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Social media campaigns&lt;/li&gt;
&lt;li&gt;Brand identity pieces&lt;/li&gt;
&lt;li&gt;Portfolio series&lt;/li&gt;
&lt;li&gt;Before/after documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Plan the Export
&lt;/h3&gt;

&lt;p&gt;Social crops, hero banners, and side-by-side comparisons have different framing needs. Generate in freqz, then refine in your layout tool if needed — often faster than fighting the wrong canvas up front.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Use the Photo Puzzle for Comparisons
&lt;/h3&gt;

&lt;p&gt;The comparison layout is one of freqz's most underrated features. Use it for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before/after transformations&lt;/li&gt;
&lt;li&gt;Product comparison cards&lt;/li&gt;
&lt;li&gt;Case study visuals&lt;/li&gt;
&lt;li&gt;Process documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  SEO-Friendly Content Strategy for freqz
&lt;/h2&gt;

&lt;p&gt;If you're writing articles, landing pages, or community posts to promote freqz, bind keywords to &lt;strong&gt;intent&lt;/strong&gt; instead of repeating adjectives. Search engines reward clarity. Users reward specificity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recommended Keyword Clusters
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Brand &amp;amp; Product:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;freqz, AI puzzle tool, photo puzzle maker, K-style puzzle visuals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use-Case:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;profile photo refresh, cover art, carousel assets, comparison graphics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Intent:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to create AI visuals, best creative workflow, photo puzzle tutorial, freqz alternatives, freqz pricing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Article Skeleton
&lt;/h3&gt;

&lt;p&gt;A high-performing freqz article typically follows this structure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;One-sentence thesis&lt;/strong&gt;: Why freqz fits the reader's goal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Three verifiable reasons&lt;/strong&gt;: Speed, stability, versatility (or your honest experience)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One walkthrough&lt;/strong&gt;: From opening freqz to exporting a file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Three mini scenarios&lt;/strong&gt;: Different personas using freqz&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear CTA&lt;/strong&gt;: Visit freqz.net and try your first image today&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This structure helps readers AND helps search engines understand that freqz is a &lt;strong&gt;concrete solution&lt;/strong&gt; — not a vague "AI app."&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Good Taste" Philosophy: Calm Interfaces as a Feature
&lt;/h2&gt;

&lt;p&gt;Many tools confuse "premium" with "complicated." freqz moves in the opposite direction: fewer dead ends, fewer mystery failures, fewer moments where you wonder whether the model "just didn't feel like it."&lt;/p&gt;

&lt;p&gt;From a &lt;strong&gt;product philosophy&lt;/strong&gt; standpoint, calm interfaces are expensive to build. From a &lt;strong&gt;user&lt;/strong&gt; standpoint, calm interfaces are valuable because they reduce regret.&lt;/p&gt;

&lt;p&gt;You are not trying to master freqz. You are trying to &lt;strong&gt;finish the task&lt;/strong&gt;. freqz is optimized for finishing.&lt;/p&gt;

&lt;p&gt;That's a meaningful distinction. Most AI creative tools are designed to impress in demos. freqz is designed to disappear into your workflow — which is a much harder thing to build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust and Transparency: What to Expect
&lt;/h2&gt;

&lt;p&gt;No tool should promise perfection on every input. What you can expect from freqz is a &lt;strong&gt;straightforward loop you can repeat&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pick a strong photo&lt;/li&gt;
&lt;li&gt;Steer the style&lt;/li&gt;
&lt;li&gt;Review the output&lt;/li&gt;
&lt;li&gt;Iterate quickly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That iteration speed is what turns freqz from a novelty into a habit. When you write about freqz for SEO, be &lt;strong&gt;specific about inputs and outcomes&lt;/strong&gt; — readers reward honesty, and search engines reward pages that answer real questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  🤔 FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: What exactly is a "photo puzzle" on freqz?
&lt;/h3&gt;

&lt;p&gt;A: A photo puzzle on freqz is a structured visual transformation where your source image is processed through AI to create a puzzle-piece-style comparison layout. It's ideal for before-and-after content, portfolio tiles, carousel assets, and social media thumbnails with a consistent aesthetic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does freqz compare to other AI image generators?
&lt;/h3&gt;

&lt;p&gt;A: Unlike typical AI generators that produce random or mood-dependent output ("sometimes incredible, often inconsistent"), freqz prioritizes &lt;strong&gt;repeatability and consistency&lt;/strong&gt;. It's designed as a production tool for creators and teams who need to ship weekly — not as a novelty demo tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Do I need design experience to use freqz?
&lt;/h3&gt;

&lt;p&gt;A: No. freqz is specifically designed to have a low learning curve. The core path is obvious: bring an image, pick a style lane, generate, download. You can start producing usable assets in under 2 minutes without any design experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What is K-style aesthetic?
&lt;/h3&gt;

&lt;p&gt;A: K-style (Korean-style) aesthetic refers to the visual design language popularized by Korean social media and content creators — characterized by clean compositions, subtle color grading, and a premium but approachable look. freqz makes it easy to produce K-style visuals without manual editing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can freqz be used for commercial projects?
&lt;/h3&gt;

&lt;p&gt;A: Yes. freqz is built for creators, designers, and marketers who need production-quality assets. The output is designed to be published directly or used in client presentations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does freqz handle consistency across a series of images?
&lt;/h3&gt;

&lt;p&gt;A: By staying in one style lane, freqz can reinforce a unified visual look across multiple images. This makes it ideal for brand identity work, social media campaigns, and portfolio series where visual consistency matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;Tools are judged by lists until you actually live with them. The real test is whether you return tomorrow.&lt;/p&gt;

&lt;p&gt;freqz earns that return by reducing friction: fewer abandoned attempts, fewer half-finished drafts, fewer "I'll try again later" moments.&lt;/p&gt;

&lt;p&gt;If your goal is a &lt;strong&gt;dependable creative loop for photo puzzles and AI puzzle output&lt;/strong&gt;, start here:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://freqz.net" rel="noopener noreferrer"&gt;freqz.net&lt;/a&gt;&lt;/strong&gt; — Try your first image today.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/freqz-photo-puzzles-ai-puzzles-workflow-2026-review" rel="noopener noreferrer"&gt;freqz: Photo Puzzles, AI Puzzles, and a Workflow That Actually Ships — 2026 Review&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>photography</category>
      <category>design</category>
      <category>productivity</category>
    </item>
    <item>
      <title>SBTI and SBTI Skill: The 2026 Complete Guide to the Super-Big Personality Test</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Fri, 10 Apr 2026 04:35:14 +0000</pubDate>
      <link>https://dev.to/czmilo/sbti-and-sbti-skill-the-2026-complete-guide-to-the-super-big-personality-test-289l</link>
      <guid>https://dev.to/czmilo/sbti-and-sbti-skill-the-2026-complete-guide-to-the-super-big-personality-test-289l</guid>
      <description>&lt;h1&gt;
  
  
  SBTI and SBTI Skill: The 2026 Complete Guide to the Super-Big Personality Test
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Key Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;SBTI (Super-Big Personality Test) is a humorous yet surprisingly insightful personality framework covering 15 psychological dimensions across 5 models — far more nuanced than MBTI&lt;/li&gt;
&lt;li&gt;The SBTI Skill is a Claude Code extension that runs the entire personality test conversationally, calculates your type, and generates a personalized result image&lt;/li&gt;
&lt;li&gt;The SBTI Skill is open-source, dependency-free (pure Python), and runs on macOS, Linux, and Windows&lt;/li&gt;
&lt;li&gt;Personality types range from CTRL (The Controller) to DRUNK (The Drunkard), matched using Manhattan distance similarity against a library of 25 archetypes&lt;/li&gt;
&lt;li&gt;The original SBTI test comes from Chinese creator @蛆肉儿串儿 on Bilibili; the Claude Skill was built with AI-assisted coding&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What is SBTI?&lt;/li&gt;
&lt;li&gt;The 5 Models and 15 Dimensions&lt;/li&gt;
&lt;li&gt;How SBTI Scoring Works&lt;/li&gt;
&lt;li&gt;What is a Claude Skill?&lt;/li&gt;
&lt;li&gt;How the SBTI Skill Was Built&lt;/li&gt;
&lt;li&gt;Repository Structure&lt;/li&gt;
&lt;li&gt;Core Python Implementation&lt;/li&gt;
&lt;li&gt;How to Use the SBTI Skill&lt;/li&gt;
&lt;li&gt;Sample Output&lt;/li&gt;
&lt;li&gt;Open Source and Credits&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is SBTI?
&lt;/h2&gt;

&lt;p&gt;SBTI stands for &lt;strong&gt;Super-Big Personality Test&lt;/strong&gt; — a personality framework that originated from Chinese creator @蛆肉儿串儿 on Bilibili. Unlike traditional personality systems such as MBTI, which reduce people to 4-letter types based on binary dimensions, SBTI takes an irreverent, meme-laden approach to self-discovery that is both entertaining and surprisingly deep.&lt;/p&gt;

&lt;p&gt;The core idea: map a person's responses across &lt;strong&gt;15 psychological dimensions&lt;/strong&gt;, organized into 5 models, producing a 15-character pattern like &lt;code&gt;HHH-HMH-MHH-HHH-MHM&lt;/code&gt;. This pattern is then matched against 25 unique personality archetypes — from &lt;strong&gt;CTRL (The Controller)&lt;/strong&gt; to &lt;strong&gt;DRUNK (The Drunkard)&lt;/strong&gt; — using Manhattan distance similarity scoring.&lt;/p&gt;

&lt;p&gt;Some types are hidden and only trigger based on specific answers. For example, the DRUNK type activates if you indicate heavy alcohol consumption. Others serve as fallback options when the match is too loose — for instance, &lt;code&gt;HHHH&lt;/code&gt; (the "Gigilord") is assigned when your brain pattern is so unique that the standard type library refuses to categorize you.&lt;/p&gt;

&lt;p&gt;This blend of psychological depth and meme culture is what makes SBTI stand out. It's not trying to be a clinical instrument — it's designed to be shareable, fun, and genuinely insightful about the complexity of human personality.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 5 Models and 15 Dimensions
&lt;/h2&gt;

&lt;p&gt;SBTI organizes personality into 5 models, each containing 3 dimensions. Here's the complete breakdown:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Dimensions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;S1 Self-Esteem, S2 Self-Clarity, S3 Core Values&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Emotional Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;E1 Attachment Security, E2 Emotional Investment, E3 Boundaries &amp;amp; Dependence&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Attitude Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A1 Worldview Tendency, A2 Rules &amp;amp; Flexibility, A3 Life Meaning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Action Drive Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Ac1 Motivation Orientation, Ac2 Decision Style, Ac3 Execution Mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Social Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;So1 Social Proactivity, So2 Interpersonal Boundaries, So3 Expression &amp;amp; Authenticity&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each dimension is scored on a &lt;strong&gt;3-point scale&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;L&lt;/strong&gt; = Low&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;M&lt;/strong&gt; = Medium&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;H&lt;/strong&gt; = High&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final output is a 15-character vector (e.g., &lt;code&gt;HHH-HMH-MHH-HHH-MHM&lt;/code&gt;), which is then compared against the 25 personality type patterns in the type library.&lt;/p&gt;

&lt;p&gt;This is significantly more nuanced than MBTI's 4-factor approach. While MBTI tells you whether you prefer extroversion or introversion, SBTI tries to capture the texture of how you relate to yourself, your emotions, your worldview, your action patterns, and your social behavior — all separately.&lt;/p&gt;
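&lt;p&gt;As a rough sketch of the vector-building step (the real answer scales and cutoffs live in &lt;code&gt;sbti.py&lt;/code&gt; and may differ; the cutoffs below are hypothetical), per-dimension answer sums can be bucketed into L/M/H and joined into the 15-character pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import bisect

# Hypothetical cutoffs: sums below 5 score L, 5 to 7 score M, 8 and up score H.
CUTOFFS = (5, 8)

def bucket(total):
    """Map a per-dimension answer sum to an L/M/H level."""
    return "LMH"[bisect.bisect_right(CUTOFFS, total)]

def build_vector(dimension_sums):
    """dimension_sums: five models, each a tuple of three dimension totals."""
    return "-".join("".join(bucket(s) for s in model) for model in dimension_sums)

print(build_vector([(9, 9, 8), (9, 6, 9), (5, 9, 9), (9, 9, 9), (6, 9, 6)]))
# HHH-HMH-MHH-HHH-MHM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;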

&lt;h2&gt;
  
  
  How SBTI Scoring Works
&lt;/h2&gt;

&lt;p&gt;The scoring algorithm uses &lt;strong&gt;Manhattan distance&lt;/strong&gt; to find the closest matching personality type from the library of 25 archetypes. Here's the process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sum answers per dimension → convert to L/M/H level&lt;/li&gt;
&lt;li&gt;Build a 15-character user vector&lt;/li&gt;
&lt;li&gt;Compute Manhattan distance against all 25 personality patterns&lt;/li&gt;
&lt;li&gt;Apply special rules (e.g., drunk trigger, HHHH fallback)&lt;/li&gt;
&lt;li&gt;Return the type with the smallest distance, along with match confidence&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some types require special triggers — they aren't just distance-based. The DRUNK type, for instance, is only assigned if specific answers indicate heavy alcohol consumption. These special rules add a layer of whimsy while keeping the system grounded in the idea that certain personality configurations are distinctive enough to warrant their own category.&lt;/p&gt;
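&lt;p&gt;The distance-based part of the match can be sketched in a few lines of Python. This is a toy two-entry library with illustrative pattern strings, not the real data; the actual 25 patterns, confidence calculation, and trigger rules live in &lt;code&gt;sbti.py&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LEVEL = {"L": 0, "M": 1, "H": 2}

def manhattan(a, b):
    """Sum of absolute level differences across all 15 dimensions."""
    pairs = zip(a.replace("-", ""), b.replace("-", ""))
    return sum(abs(LEVEL[x] - LEVEL[y]) for x, y in pairs)

# Toy library; both pattern strings are made up for illustration.
LIBRARY = {
    "CTRL": "HHH-HMH-MHH-HHH-MHM",
    "DRUNK": "LLM-LHL-MLL-LML-HLM",
}

def closest_type(user_vec):
    """Return the library type with the smallest Manhattan distance."""
    return min(LIBRARY, key=lambda t: manhattan(user_vec, LIBRARY[t]))

print(closest_type("HHH-HHH-MHH-HHH-MHM"))  # CTRL (distance 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;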

&lt;h2&gt;
  
  
  What is a Claude Skill?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;Claude Skill&lt;/strong&gt; is a lightweight, portable unit of functionality that extends Claude's capabilities. Think of it as a plug-in for Claude Code. It consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;SKILL.md&lt;/code&gt; file — the manifest defining the skill's name, description, trigger words, and step-by-step instructions&lt;/li&gt;
&lt;li&gt;Supporting files — Python scripts, images, data files, etc.&lt;/li&gt;
&lt;li&gt;Placed in the &lt;code&gt;~/.claude/skills/&lt;/code&gt; directory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skills are invoked by users with slash commands (e.g., &lt;code&gt;/sbti&lt;/code&gt;) and executed entirely within Claude's workflow — &lt;strong&gt;no external services, no API keys required&lt;/strong&gt;. This makes Claude Skills a powerful way to package and share domain-specific expertise.&lt;/p&gt;

&lt;p&gt;Unlike traditional software that requires you to learn an API or write code, a Claude Skill lets you have a natural conversation to accomplish a task. For the SBTI Skill, this means Claude asks you the questions one by one, you respond with your answer choice, and Claude handles the scoring and result presentation — all without you ever touching a command line.&lt;/p&gt;
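&lt;p&gt;For orientation, here is a minimal sketch of what such a manifest might look like. The frontmatter fields and instruction wording below are assumptions for illustration, not copied from the real &lt;code&gt;SKILL.md&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: sbti
description: Run the SBTI personality test conversationally and report the result
---

When invoked:
1. Run `python3 sbti.py questions` and ask each question in order.
2. Collect the answers as JSON and pipe them to `python3 sbti.py calc`.
3. Present the type, description, and 15-dimension breakdown, then copy
   the matching result image to the user's Downloads directory.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;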

&lt;h2&gt;
  
  
  How the SBTI Skill Was Built
&lt;/h2&gt;

&lt;p&gt;The SBTI Skill demonstrates several best practices for building Claude Skills:&lt;/p&gt;

&lt;h3&gt;
  
  
  Repository Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sbti-skill/
├── SKILL.md          # Skill manifest
├── sbti.py           # Core Python logic (questions + scoring)
└── image/            # 27 personality result images
    ├── CTRL.png
    ├── BOSS.png
    ├── DRUNK.png
    └── ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Core Python Implementation
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;sbti.py&lt;/code&gt; script is &lt;strong&gt;dependency-free&lt;/strong&gt; — it uses only the Python standard library. It exposes two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List all questions&lt;/span&gt;
python3 sbti.py questions

&lt;span class="c"&gt;# Calculate personality from answers&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'{"q1": 3, "q2": 1, ...}'&lt;/span&gt; | python3 sbti.py calc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;calc&lt;/code&gt; command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sums answers per dimension → converts to L/M/H level&lt;/li&gt;
&lt;li&gt;Builds a 15-character user vector&lt;/li&gt;
&lt;li&gt;Computes Manhattan distance against all 25 personality patterns&lt;/li&gt;
&lt;li&gt;Applies special rules (drunk trigger, HHHH fallback)&lt;/li&gt;
&lt;li&gt;Returns JSON with type code, name, description, image path, and dimension breakdown&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cross-Platform Considerations
&lt;/h3&gt;

&lt;p&gt;To ensure the skill works on macOS, Linux, and Windows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Prefer uvx if available, fall back to python3 if needed&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; uvx &amp;amp;&amp;gt; /dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nv"&gt;PY_CMD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"uvx --from python python3"&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nv"&gt;PY_CMD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"python3"&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="nv"&gt;$PY_CMD&lt;/span&gt; sbti.py questions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the result image, Downloads directory detection uses platform-specific paths:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DOWNLOAD_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/Downloads"&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOWNLOAD_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nv"&gt;DOWNLOAD_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERPROFILE&lt;/span&gt;&lt;span class="s2"&gt;/Downloads"&lt;/span&gt;  &lt;span class="c"&gt;# Windows fallback&lt;/span&gt;
&lt;span class="k"&gt;fi
&lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; ./image/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.png &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOWNLOAD_DIR&lt;/span&gt;&lt;span class="s2"&gt;/sbti_&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.png"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to Use the SBTI Skill
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Claude Code CLI installed&lt;/li&gt;
&lt;li&gt;Skill placed in &lt;code&gt;~/.claude/skills/sbti-skill/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Invocation
&lt;/h3&gt;

&lt;p&gt;Simply type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/sbti
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Workflow
&lt;/h3&gt;

&lt;p&gt;The SBTI Skill walks you through a complete 4-step process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Welcome&lt;/strong&gt; — Claude greets you and explains the test&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Q&amp;amp;A&lt;/strong&gt; — Claude asks all 30+ questions one by one; you respond with your choice (A/B/C/D)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scoring&lt;/strong&gt; — After the last question, Claude runs the calculation using the Python script&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result&lt;/strong&gt; — Claude outputs your personality type, description, 15-dimension breakdown, and saves the result image to &lt;code&gt;~/Downloads/sbti_{TYPE}.png&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Sample Output
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## 你的 SBTI 人格&lt;/span&gt;

&lt;span class="gs"&gt;**类型代码**&lt;/span&gt;: CTRL（拿捏者）
&lt;span class="gs"&gt;**匹配度**&lt;/span&gt;: 87% · 精准命中 11/15 维
&lt;span class="p"&gt;
---
&lt;/span&gt;
&lt;span class="gu"&gt;### 该人格的简单解读&lt;/span&gt;

您是宇宙熵增定律的天然反抗者！CTRL人格，是行走的人形自走任务管理器...
&lt;span class="p"&gt;
---
&lt;/span&gt;
&lt;span class="gu"&gt;### 十五维度评分&lt;/span&gt;

| 维度 | 等级 | 解读 |
|------|------|------|
| S1 自尊自信 | H | ... |
| S2 自我清晰度 | H | ... |
| S3 核心价值观 | H | ... |
| E1 依恋安全感 | M | ... |
| E2 情感投入度 | H | ... |
| E3 边界与依赖 | M | ... |
| A1 世界观倾向 | H | ... |
| A2 规则与灵活 | M | ... |
| A3 人生意义感 | H | ... |
| Ac1 动机取向 | H | ... |
| Ac2 决策风格 | H | ... |
| Ac3 执行模式 | H | ... |
| So1 社交主动性 | M | ... |
| So2 人际边界 | H | ... |
| So3 表达与真实 | M | ... |
&lt;span class="p"&gt;
---
&lt;/span&gt;
&lt;span class="gu"&gt;### 结果图片&lt;/span&gt;

图片已保存至: ~/Downloads/sbti_CTRL.png
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Open Source and Credits
&lt;/h2&gt;

&lt;p&gt;The SBTI Skill is fully open-source and available at &lt;a href="https://github.com/sing1ee/sbti-skill" rel="noopener noreferrer"&gt;github.com/sing1ee/sbti-skill&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extending the personality library&lt;/strong&gt; is straightforward — just add a new entry to &lt;code&gt;TYPE_LIBRARY&lt;/code&gt; and &lt;code&gt;NORMAL_TYPES&lt;/code&gt; in &lt;code&gt;sbti.py&lt;/code&gt;, then add a matching image in the &lt;code&gt;image/&lt;/code&gt; directory.&lt;/p&gt;
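&lt;p&gt;A hedged sketch of that extension step (the structure shapes below are assumptions for illustration; check the actual definitions in &lt;code&gt;sbti.py&lt;/code&gt; before copying):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assumed shapes: a type code mapped to its 15-char pattern, plus a list
# of types eligible for ordinary distance-based matching.
TYPE_LIBRARY = {"CTRL": "HHH-HMH-MHH-HHH-MHM"}
NORMAL_TYPES = ["CTRL"]

# Register a new archetype; its result image would go in image/NERD.png.
TYPE_LIBRARY["NERD"] = "HHM-MMH-HML-HHH-LML"
NORMAL_TYPES.append("NERD")

print(sorted(TYPE_LIBRARY))  # ['CTRL', 'NERD']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;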

&lt;h3&gt;
  
  
  Credits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Original SBTI Test&lt;/strong&gt;: &lt;a href="https://www.bilibili.com/video/BV1LpDHByET6/" rel="noopener noreferrer"&gt;B站@蛆肉儿串儿&lt;/a&gt; — the creator of the original personality test&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skill Implementation&lt;/strong&gt;: Claude Code + AI-assisted coding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License&lt;/strong&gt;: MIT&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Disclaimer&lt;/strong&gt;: This article and the SBTI Skill are for entertainment purposes only. Personality tests are not scientifically validated instruments and should not be used for diagnosis, hiring, dating, or life-altering decisions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🤔 FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: Is SBTI scientifically validated?
&lt;/h3&gt;

&lt;p&gt;A: No. SBTI is explicitly designed as a humorous, entertainment-focused personality framework — not a clinical or scientific instrument. It draws on psychological dimensions (self-esteem, attachment security, motivation orientation, etc.) that have academic grounding, but the mapping to specific types and the meme-laden presentation are purely for fun.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How is SBTI different from MBTI?
&lt;/h3&gt;

&lt;p&gt;A: MBTI uses 4 binary dimensions (Extroversion/Introversion, Sensing/Intuition, Thinking/Feeling, Judging/Perceiving), producing 16 types. SBTI uses 15 dimensions scored on 3 levels (L/M/H), producing a much more granular pattern that is matched against 25 named archetypes using distance-based similarity. SBTI is also far more irreverent in its naming and presentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Do I need to install Python to use the SBTI Skill?
&lt;/h3&gt;

&lt;p&gt;A: Not necessarily. The SBTI Skill uses a cross-platform shell wrapper that prefers &lt;code&gt;uvx&lt;/code&gt; (if available) and falls back to &lt;code&gt;python3&lt;/code&gt;. Most modern systems have at least one of these available.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can I add my own personality types to SBTI?
&lt;/h3&gt;

&lt;p&gt;A: Yes! The type library in &lt;code&gt;sbti.py&lt;/code&gt; is designed to be extended. Add a new entry to &lt;code&gt;TYPE_LIBRARY&lt;/code&gt; and &lt;code&gt;NORMAL_TYPES&lt;/code&gt;, create a matching result image in &lt;code&gt;image/&lt;/code&gt;, and your type is live.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What are the 25 personality types?
&lt;/h3&gt;

&lt;p&gt;A: Types include CTRL (The Controller), BOSS (The Boss), DRUNK (The Drunkard), GIGILORD (unique pattern fallback), and 21 others. The full library is defined in &lt;code&gt;sbti.py&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What is Manhattan distance and why is it used for SBTI scoring?
&lt;/h3&gt;

&lt;p&gt;A: Manhattan distance is the sum of absolute differences across all dimensions. For a 15-character SBTI vector, it measures how "far" your personality pattern is from each archetype. The closest match wins — but special rules (like the DRUNK trigger) override distance-based matching when specific conditions are met.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/sbti-sbti-skill-2026-complete-guide" rel="noopener noreferrer"&gt;SBTI and SBTI Skill: The 2026 Complete Guide to the Super-Big Personality Test&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>personality</category>
      <category>claude</category>
      <category>programming</category>
    </item>
    <item>
      <title>Happy Horse: The AI Video Generator Redefining Cinematic Content Creation in 2026</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Wed, 08 Apr 2026 01:52:35 +0000</pubDate>
      <link>https://dev.to/czmilo/happy-horse-the-ai-video-generator-redefining-cinematic-content-creation-in-2026-4lli</link>
      <guid>https://dev.to/czmilo/happy-horse-the-ai-video-generator-redefining-cinematic-content-creation-in-2026-4lli</guid>
      <description>&lt;h1&gt;
  
  
  Happy Horse: The AI Video Generator That's Redefining Cinematic Content Creation in 2026
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Key Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Happy Horse-1.0 currently ranks &lt;strong&gt;#1 globally&lt;/strong&gt; on the Artificial Analysis Text-to-Video Arena with an Elo of 1333, outperforming industry giants like Seedance 2.0&lt;/li&gt;
&lt;li&gt;The model excels at both &lt;strong&gt;text-to-video&lt;/strong&gt; and &lt;strong&gt;image-to-video&lt;/strong&gt; generation with industry-leading motion quality and prompt adherence&lt;/li&gt;
&lt;li&gt;Uniquely, Happy Horse-1.0 &lt;strong&gt;jointly generates synchronized video and audio&lt;/strong&gt; from text prompts — fully multilingual and open-source&lt;/li&gt;
&lt;li&gt;On the Image-to-Video leaderboard, it dominates with an &lt;strong&gt;Elo of 1392&lt;/strong&gt;, setting a new benchmark for the entire AI video industry&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What Is Happy Horse?&lt;/li&gt;
&lt;li&gt;Performance Benchmarks: Where Happy Horse Stands&lt;/li&gt;
&lt;li&gt;Key Features and Capabilities&lt;/li&gt;
&lt;li&gt;How Happy Horse Compares to Competitors&lt;/li&gt;
&lt;li&gt;Use Cases and Applications&lt;/li&gt;
&lt;li&gt;How to Get Started with Happy Horse&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What Is Happy Horse? {#what-is-happy-horse}
&lt;/h2&gt;

&lt;p&gt;Happy Horse (also referred to as &lt;strong&gt;HappyHorse-1.0&lt;/strong&gt;) is a cutting-edge AI video generation model that has taken the artificial intelligence community by storm. Built for cinematic &lt;strong&gt;text-to-video&lt;/strong&gt; and &lt;strong&gt;image-to-video&lt;/strong&gt; generation, it delivers unmatched motion quality, superior prompt following, and remarkably fast generation speeds.&lt;/p&gt;

&lt;p&gt;What sets Happy Horse apart from the crowded AI video landscape is its &lt;strong&gt;holistic approach to content generation&lt;/strong&gt; — it doesn't just produce visuals. HappyHorse-1.0 &lt;strong&gt;jointly generates synchronized video and audio from text prompts&lt;/strong&gt;, creating complete, production-ready video clips that include sound design, narration, and ambient audio — all generated simultaneously from a single text input.&lt;/p&gt;

&lt;p&gt;The model is &lt;strong&gt;fully open&lt;/strong&gt;, meaning developers, creators, and researchers can access and build upon it. It supports &lt;strong&gt;multilingual prompts&lt;/strong&gt;, making it accessible to a global audience without language barriers.&lt;/p&gt;

&lt;p&gt;Since its surprise appearance on the &lt;strong&gt;Artificial Analysis AI Video Arena&lt;/strong&gt;, Happy Horse has rapidly climbed the rankings, establishing itself as a serious contender — and often the outright leader — against established players like ByteDance's Seedance 2.0, Kling, and Wan.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;&lt;br&gt;
Happy Horse emerged seemingly overnight as a "mystery model" on Artificial Analysis's leaderboards, quickly dominating both Text-to-Video and Image-to-Video categories. Its rapid ascent suggests a breakthrough from an Asian AI lab, possibly related to the WAN series of models.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Performance Benchmarks: Where Happy Horse Stands {#performance-benchmarks}
&lt;/h2&gt;

&lt;p&gt;Numbers don't lie. Happy Horse-1.0's performance on independent, third-party benchmarks tells a compelling story.&lt;/p&gt;

&lt;h3&gt;
  
  
  Text-to-Video Arena (Without Audio)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Elo Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;#1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;HappyHorse-1.0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1333&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#2&lt;/td&gt;
&lt;td&gt;Dreamina Seedance 2.0 720p&lt;/td&gt;
&lt;td&gt;1355*&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#3&lt;/td&gt;
&lt;td&gt;PixVerse V6&lt;/td&gt;
&lt;td&gt;1338&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#4&lt;/td&gt;
&lt;td&gt;grok-imagine-video&lt;/td&gt;
&lt;td&gt;1333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#5&lt;/td&gt;
&lt;td&gt;Kling 3.0 Omni 1080p (Pro)&lt;/td&gt;
&lt;td&gt;1297&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;*Note: scores are drawn from different leaderboard categories, so Elo values are not directly comparable across rows; Happy Horse leads the overall Text-to-Video Arena at Elo 1333.&lt;/p&gt;

&lt;h3&gt;
  
  
  Text-to-Video Arena (With Audio)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Elo Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;#1&lt;/td&gt;
&lt;td&gt;Dreamina Seedance 2.0 720p&lt;/td&gt;
&lt;td&gt;1219&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;#2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;HappyHorse-1.0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1205&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#3&lt;/td&gt;
&lt;td&gt;Kling 3.0 Omni&lt;/td&gt;
&lt;td&gt;~1180&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Image-to-Video Arena
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Elo Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;#1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;HappyHorse-1.0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1392&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#2&lt;/td&gt;
&lt;td&gt;Dreamina Seedance 2.0 720p&lt;/td&gt;
&lt;td&gt;1355&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#3&lt;/td&gt;
&lt;td&gt;PixVerse V6&lt;/td&gt;
&lt;td&gt;1338&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#4&lt;/td&gt;
&lt;td&gt;grok-imagine-video&lt;/td&gt;
&lt;td&gt;1333&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;#5&lt;/td&gt;
&lt;td&gt;Kling 3.0 Omni 1080p (Pro)&lt;/td&gt;
&lt;td&gt;1297&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;Image-to-Video&lt;/strong&gt; ranking is particularly striking: Happy Horse's Elo of 1392 is a 37-point margin over the second-place model, establishing it as the clear leader in converting static images into dynamic, high-quality video sequences.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Features and Capabilities {#key-features}
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Cinematic Text-to-Video Generation
&lt;/h3&gt;

&lt;p&gt;Happy Horse transforms text prompts into cinematic video footage. Whether you're describing a sweeping landscape, an intense action sequence, or a subtle emotional moment, Happy Horse renders it with &lt;strong&gt;photorealistic fidelity and natural motion dynamics&lt;/strong&gt;. The model understands complex prompts and delivers results that closely match the creator's intent.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Image-to-Video Transformation
&lt;/h3&gt;

&lt;p&gt;Feed a single image into Happy Horse and watch it come alive. The model takes static photographs and animates them into fluid video sequences — perfect for bringing vintage photos, concept art, product images, or portraits to life.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Joint Video + Audio Generation
&lt;/h3&gt;

&lt;p&gt;This is Happy Horse's secret weapon. Unlike most AI video models that generate either silent video or require separate audio pipelines, &lt;strong&gt;HappyHorse-1.0 generates video and audio simultaneously from text&lt;/strong&gt;. This dramatically reduces post-production overhead and produces more cohesive final content.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Multilingual Support
&lt;/h3&gt;

&lt;p&gt;Happy Horse understands and processes prompts in &lt;strong&gt;multiple languages&lt;/strong&gt;, making it a truly global tool. Whether you write your prompts in English, Chinese, Japanese, Spanish, or any other supported language, the model delivers consistent quality.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Superior Motion Quality
&lt;/h3&gt;

&lt;p&gt;One of the most common failure points in AI-generated video is &lt;strong&gt;unnatural motion&lt;/strong&gt; — jerky movements, physics violations, or inconsistent character animation. Happy Horse addresses this with advanced motion modeling that produces fluid, physically plausible movement across all generated content.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Clean Prompt Following
&lt;/h3&gt;

&lt;p&gt;AI models often "hallucinate" or drift from the original prompt, adding elements that weren't requested or ignoring key details. Happy Horse demonstrates &lt;strong&gt;exceptional prompt adherence&lt;/strong&gt;, staying true to the creator's vision throughout the generated clip.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Open and Accessible
&lt;/h3&gt;

&lt;p&gt;Unlike many competing models that are locked behind proprietary APIs or subscription paywalls, Happy Horse is &lt;strong&gt;fully open&lt;/strong&gt;. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Researchers can inspect and study the model architecture&lt;/li&gt;
&lt;li&gt;Developers can fine-tune it for specific use cases&lt;/li&gt;
&lt;li&gt;Creators can run it locally without depending on external services&lt;/li&gt;
&lt;li&gt;The community can contribute to improvements and variants&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How Happy Horse Compares to Competitors {#comparison}
&lt;/h2&gt;

&lt;p&gt;The AI video generation space in 2026 is fiercely competitive. Here's how Happy Horse stacks up against the major players:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;HappyHorse-1.0&lt;/th&gt;
&lt;th&gt;Seedance 2.0&lt;/th&gt;
&lt;th&gt;Kling 3.0&lt;/th&gt;
&lt;th&gt;Wan 2.6&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Text-to-Video&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ #1 (Elo 1333)&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Image-to-Video&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ #1 (Elo 1392)&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Audio Generation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Joint Video+Audio&lt;/td&gt;
&lt;td&gt;✅ Audio&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multilingual&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Open Source&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Fully Open&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Motion Quality&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Industry-leading&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Prompt Following&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Excellent&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Generation Speed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Fast&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaway&lt;/strong&gt;: Happy Horse's combination of &lt;strong&gt;top-tier video quality&lt;/strong&gt;, &lt;strong&gt;integrated audio generation&lt;/strong&gt;, &lt;strong&gt;multilingual capabilities&lt;/strong&gt;, and &lt;strong&gt;open-source accessibility&lt;/strong&gt; makes it a uniquely powerful option. While Seedance 2.0 holds a slight edge in the audio-enabled text-to-video category, Happy Horse dominates the overall arena rankings and leads decisively in Image-to-Video.&lt;/p&gt;




&lt;h2&gt;
  
  
  Use Cases and Applications {#use-cases}
&lt;/h2&gt;

&lt;p&gt;Happy Horse's capabilities open up a wide range of practical applications:&lt;/p&gt;

&lt;h3&gt;
  
  
  🎬 Filmmaking and Pre-Visualization
&lt;/h3&gt;

&lt;p&gt;Directors and independent filmmakers can use Happy Horse to quickly generate concept sequences, storyboard animations, and pre-visualization clips — all with synchronized audio — before committing to full production.&lt;/p&gt;

&lt;h3&gt;
  
  
  📢 Marketing and Advertising
&lt;/h3&gt;

&lt;p&gt;Create compelling video ads from text prompts in minutes. Happy Horse's cinematic quality makes it suitable for social media campaigns, product demonstrations, and brand storytelling.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎮 Gaming and Virtual Worlds
&lt;/h3&gt;

&lt;p&gt;Game developers can generate in-engine cutscenes, character animations, and environmental sequences, dramatically reducing the time and cost of pre-rendered video content.&lt;/p&gt;

&lt;h3&gt;
  
  
  📚 Education and Training
&lt;/h3&gt;

&lt;p&gt;Transform educational content into engaging video lessons. Happy Horse's ability to generate video + audio from text makes it ideal for creating training materials, tutorials, and explainer content.&lt;/p&gt;

&lt;h3&gt;
  
  
  🖼️ Digital Art and Creative Expression
&lt;/h3&gt;

&lt;p&gt;Artists and designers can animate their artwork, creating living illustrations and immersive visual experiences from static images.&lt;/p&gt;

&lt;h3&gt;
  
  
  🏢 Enterprise Video Production
&lt;/h3&gt;

&lt;p&gt;Businesses can produce internal communications, product demos, and presentation materials without requiring a full video production team.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Get Started with Happy Horse {#getting-started}
&lt;/h2&gt;

&lt;p&gt;Getting started with Happy Horse is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visit the Official Website&lt;/strong&gt;: Head to &lt;a href="https://www.happy-horse.net" rel="noopener noreferrer"&gt;happy-horse.net&lt;/a&gt; for the latest model downloads, documentation, and community resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Access the Model&lt;/strong&gt;: As an open-source model, HappyHorse-1.0 is available for download. Check the official website for model weights, inference code, and technical specifications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Access&lt;/strong&gt;: For those who prefer cloud-based generation, Happy Horse offers API access through its platform at &lt;a href="https://happyhorse-ai.com" rel="noopener noreferrer"&gt;happyhorse-ai.com&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Join the Community&lt;/strong&gt;: Engage with other Happy Horse users, share your creations, and get help with troubleshooting on the official Discord server and GitHub repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experiment with Prompts&lt;/strong&gt;: Start with simple text prompts and gradually increase complexity. The model's strong prompt adherence means descriptive, detailed prompts yield excellent results.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ &lt;strong&gt;Best Practice&lt;/strong&gt;&lt;br&gt;
When writing prompts for Happy Horse, be specific about: subject details, camera movement, lighting conditions, mood/atmosphere, and any desired audio characteristics. The more context you provide, the better the output will match your vision.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🤔 FAQ {#faq}
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: Is Happy Horse really free to use?
&lt;/h3&gt;

&lt;p&gt;A: Yes. HappyHorse-1.0 is fully open-source. You can download the model and run it locally at no cost. Cloud API access may have usage-based pricing, but the underlying model is free and open.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does Happy Horse compare to OpenAI's Sora or Google's Veo 3?
&lt;/h3&gt;

&lt;p&gt;A: Happy Horse holds its own against major commercial models. On independent Artificial Analysis benchmarks, it ranks #1 in the Text-to-Video arena (Elo 1333) and #1 in Image-to-Video (Elo 1392). Its unique advantage is integrated audio generation and multilingual support — areas where many commercial models still lag.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can Happy Horse generate long-form video content?
&lt;/h3&gt;

&lt;p&gt;A: Happy Horse generates video clips. For longer content, you would chain multiple generations together or use it in combination with video editing tools. The model's strength lies in the quality of individual clips rather than ultra-long sequences.&lt;/p&gt;
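Chaining generated clips can be done with standard tools such as ffmpeg's concat demuxer. A minimal Python sketch, assuming all clips share the same codec and resolution (so no re-encoding is needed) and that ffmpeg is on your PATH:

```python
from pathlib import Path

def build_concat_command(clip_paths, output="combined.mp4", list_file="clips.txt"):
    """Write ffmpeg's concat list file and return the command to run.

    Assumes same-codec clips, so `-c copy` joins them losslessly.
    Execute the result with subprocess.run(cmd, check=True).
    """
    Path(list_file).write_text("".join(f"file '{p}'\n" for p in clip_paths))
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", output]
```

The clip filenames here are placeholders; point them at whatever clips your generation runs produce.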

&lt;h3&gt;
  
  
  Q: What hardware do I need to run Happy Horse locally?
&lt;/h3&gt;

&lt;p&gt;A: As a state-of-the-art video generation model, Happy Horse requires significant GPU resources. Specific hardware requirements are listed on the official website, but a modern high-end GPU (24GB+ VRAM recommended) is typically needed for comfortable local inference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Who developed Happy Horse?
&lt;/h3&gt;

&lt;p&gt;A: The exact origin of Happy Horse remains something of a mystery — it appeared suddenly on the Artificial Analysis leaderboards. Evidence suggests it comes from an Asian AI research lab, with speculation pointing to a connection to the WAN series of models. The team has not publicly disclosed their identity beyond the official website.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Does Happy Horse support image-to-video?
&lt;/h3&gt;

&lt;p&gt;A: Absolutely. HappyHorse-1.0 is one of the best Image-to-Video models available, ranking #1 on the Artificial Analysis Image-to-Video leaderboard with an impressive Elo of 1392.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does the audio generation work?
&lt;/h3&gt;

&lt;p&gt;A: HappyHorse-1.0 uses a joint generation approach — both video frames and audio waveforms are generated simultaneously from the same text prompt. This produces more cohesive content where the audio naturally matches what's happening in the video, rather than being an afterthought.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary {#summary}
&lt;/h2&gt;

&lt;p&gt;Happy Horse represents a significant leap forward in AI video generation technology. With its &lt;strong&gt;#1 global ranking&lt;/strong&gt; on the Artificial Analysis Text-to-Video Arena (Elo 1333) and Image-to-Video Arena (Elo 1392), it has proven itself as a top-tier model that rivals and often surpasses industry giants like ByteDance's Seedance 2.0 and Kling.&lt;/p&gt;

&lt;p&gt;Its defining advantages — &lt;strong&gt;joint video+audio generation&lt;/strong&gt;, &lt;strong&gt;multilingual prompt support&lt;/strong&gt;, &lt;strong&gt;superior motion quality&lt;/strong&gt;, &lt;strong&gt;excellent prompt adherence&lt;/strong&gt;, and &lt;strong&gt;fully open-source accessibility&lt;/strong&gt; — make it a compelling choice for creators, developers, filmmakers, and businesses alike.&lt;/p&gt;

&lt;p&gt;As the AI video generation landscape continues to evolve rapidly in 2026, Happy Horse has established itself not as a flash-in-the-pan novelty, but as a serious, production-ready tool that is shaping the future of AI-generated video content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Whether you're a filmmaker seeking rapid pre-visualization, a marketer creating compelling ad content, or a developer building the next generation of AI applications, HappyHorse-1.0 deserves your attention.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Article generated on April 8, 2026. Performance data sourced from Artificial Analysis AI Video Arena leaderboards.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/happy-horse-ai-video-generator-2026" rel="noopener noreferrer"&gt;Happy Horse: The AI Video Generator Redefining Cinematic Content Creation in 2026&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>videogen</category>
      <category>machinelearning</category>
      <category>opensource</category>
    </item>
    <item>
      <title>OpenClaw Dreaming Guide 2026: Background Memory Consolidation for AI Agents</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Tue, 07 Apr 2026 12:04:55 +0000</pubDate>
      <link>https://dev.to/czmilo/openclaw-dreaming-guide-2026-background-memory-consolidation-for-ai-agents-585e</link>
      <guid>https://dev.to/czmilo/openclaw-dreaming-guide-2026-background-memory-consolidation-for-ai-agents-585e</guid>
      <description>&lt;h1&gt;
  
  
  OpenClaw Dreaming Guide 2026: Background Memory Consolidation for AI Agents
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Core Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dreaming&lt;/strong&gt; is OpenClaw's automatic three-phase background process that turns short-term memory signals into durable long-term knowledge&lt;/li&gt;
&lt;li&gt;It runs in three stages: &lt;strong&gt;Light Sleep&lt;/strong&gt; (ingest &amp;amp; stage), &lt;strong&gt;REM Sleep&lt;/strong&gt; (reflect &amp;amp; extract patterns), and &lt;strong&gt;Deep Sleep&lt;/strong&gt; (promote to MEMORY.md)&lt;/li&gt;
&lt;li&gt;Only entries that pass all three threshold gates — &lt;strong&gt;minScore 0.8&lt;/strong&gt;, &lt;strong&gt;minRecallCount 3&lt;/strong&gt;, &lt;strong&gt;minUniqueQueries 3&lt;/strong&gt; — get promoted&lt;/li&gt;
&lt;li&gt;Six weighted signals score every candidate: &lt;strong&gt;Relevance (0.30)&lt;/strong&gt;, &lt;strong&gt;Frequency (0.24)&lt;/strong&gt;, &lt;strong&gt;Query diversity (0.15)&lt;/strong&gt;, &lt;strong&gt;Recency (0.15)&lt;/strong&gt;, &lt;strong&gt;Consolidation (0.10)&lt;/strong&gt;, &lt;strong&gt;Conceptual richness (0.06)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Dreaming is &lt;strong&gt;opt-in and disabled by default&lt;/strong&gt; — enable with &lt;code&gt;/dreaming on&lt;/code&gt; or via config&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Why Dreaming Exists&lt;/li&gt;
&lt;li&gt;How It Works: The Three Phases&lt;/li&gt;
&lt;li&gt;Deep Ranking Signals Explained&lt;/li&gt;
&lt;li&gt;Threshold Gates: What Gets Promoted&lt;/li&gt;
&lt;li&gt;The Dream Diary: Human-Readable Output&lt;/li&gt;
&lt;li&gt;Where Things Live on Disk&lt;/li&gt;
&lt;li&gt;Getting Started&lt;/li&gt;
&lt;li&gt;Configuration Reference&lt;/li&gt;
&lt;li&gt;Tuning Guide&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why Dreaming Exists
&lt;/h2&gt;

&lt;p&gt;OpenClaw agents accumulate memory throughout the day: daily notes, session transcripts, recall traces from searches. Most of this material is useful in the moment but doesn't belong in long-term storage. Without a consolidation step, you face one of two bad outcomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Too aggressive&lt;/strong&gt;: every fleeting detail lands in &lt;code&gt;MEMORY.md&lt;/code&gt;, bloating it with noise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Too conservative&lt;/strong&gt;: nothing ever gets promoted, and genuinely important patterns are lost.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dreaming solves this with a &lt;strong&gt;three-phase background sweep&lt;/strong&gt; that scores short-term signals over time and only promotes the ones that cross evidence thresholds. Think of it as a curatorial pipeline: ingest, reflect, then carefully promote.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Key Insight&lt;/strong&gt;&lt;br&gt;
Dreaming is &lt;strong&gt;opt-in&lt;/strong&gt; and &lt;strong&gt;disabled by default&lt;/strong&gt;. You choose when and how OpenClaw consolidates memory.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How It Works: The Three Phases
&lt;/h2&gt;

&lt;p&gt;When enabled, &lt;code&gt;memory-core&lt;/code&gt; creates a managed cron job (default: 3 AM daily) that runs a full dreaming sweep. Each sweep executes three phases in sequence:&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Light Sleep (Sort and Stage)
&lt;/h3&gt;

&lt;p&gt;Light phase is the ingestion layer. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads recent daily memory files (&lt;code&gt;memory/YYYY-MM-DD.md&lt;/code&gt;) and parses them into snippet chunks&lt;/li&gt;
&lt;li&gt;Ingests session transcripts into per-day corpus files under &lt;code&gt;memory/.dreams/session-corpus/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Deduplicates entries using Jaccard similarity (threshold 0.9)&lt;/li&gt;
&lt;li&gt;Stages candidates in the short-term recall store&lt;/li&gt;
&lt;li&gt;Records "light phase signal" hits — these boost ranking in the deep phase later&lt;/li&gt;
&lt;li&gt;Writes a &lt;code&gt;## Light Sleep&lt;/code&gt; block into the daily memory file (when storage mode includes inline output)&lt;/li&gt;
&lt;li&gt;Optionally generates a dream diary narrative entry&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Important&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Light phase never writes to &lt;code&gt;MEMORY.md&lt;/code&gt;&lt;/strong&gt;. It only stages and records signals.&lt;/p&gt;
&lt;/blockquote&gt;
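The Jaccard deduplication step above can be sketched in Python. This is an illustrative approximation — the actual memory-core tokenization and storage details may differ:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two snippets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def dedupe(snippets, threshold=0.9):
    """Keep a snippet only if it is not a near-duplicate of one already kept."""
    kept = []
    for s in snippets:
        if all(jaccard(s, k) < threshold for k in kept):
            kept.append(s)
    return kept
```

With the default 0.9 threshold, only near-identical snippets are dropped; paraphrases with different wording survive to be staged.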

&lt;h3&gt;
  
  
  Phase 2: REM Sleep (Reflect and Extract Patterns)
&lt;/h3&gt;

&lt;p&gt;REM phase looks for recurring themes across the staged material. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads all short-term recall entries within the REM lookback window (default: 7 days)&lt;/li&gt;
&lt;li&gt;Extracts recurring themes by analyzing concept tag frequency&lt;/li&gt;
&lt;li&gt;Identifies "candidate truths" — entries that show up repeatedly with high confidence&lt;/li&gt;
&lt;li&gt;Writes a &lt;code&gt;## REM Sleep&lt;/code&gt; block with reflections&lt;/li&gt;
&lt;li&gt;Records REM signal hits (these also boost deep ranking)&lt;/li&gt;
&lt;li&gt;Generates a dream diary narrative entry&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Important&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;REM phase never writes to &lt;code&gt;MEMORY.md&lt;/code&gt;&lt;/strong&gt; either. It produces reflective signals that inform the deep phase.&lt;/p&gt;
&lt;/blockquote&gt;
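The theme-extraction step can be approximated with a simple tag-frequency count. The entry shape below (a dict with a `tags` list) is hypothetical, chosen only for illustration:

```python
from collections import Counter

def recurring_themes(entries, min_count=2):
    """Count concept tags across staged entries and return those that recur.

    `entries` is assumed to be a list of dicts with a "tags" list --
    a hypothetical shape, not the actual memory-core schema.
    """
    counts = Counter(tag for e in entries for tag in e.get("tags", []))
    return [(tag, n) for tag, n in counts.most_common() if n >= min_count]
```

Tags that appear across multiple staged entries become "candidate truths"; singletons are ignored.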

&lt;h3&gt;
  
  
  Phase 3: Deep Sleep (Promote to Long-Term Memory)
&lt;/h3&gt;

&lt;p&gt;This is where promotion actually happens. Deep phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Takes all candidates from the short-term recall store&lt;/li&gt;
&lt;li&gt;Scores each one using six weighted signals (see ranking table below)&lt;/li&gt;
&lt;li&gt;Applies phase reinforcement boosts from light and REM signal hits&lt;/li&gt;
&lt;li&gt;Filters out candidates that don't pass the threshold gates&lt;/li&gt;
&lt;li&gt;Rehydrates surviving snippets from live daily files (so deleted or stale content is skipped)&lt;/li&gt;
&lt;li&gt;Appends promoted entries to &lt;code&gt;MEMORY.md&lt;/code&gt; under a dated &lt;code&gt;## Promoted From Short-Term Memory&lt;/code&gt; section&lt;/li&gt;
&lt;li&gt;Writes a deep sleep report and generates a dream diary narrative entry&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ &lt;strong&gt;Best Practice&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Deep phase is the only phase that writes to &lt;code&gt;MEMORY.md&lt;/code&gt;&lt;/strong&gt;. This separation ensures noisy data never pollutes long-term memory.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Deep Ranking Signals Explained
&lt;/h2&gt;

&lt;p&gt;Every candidate in the short-term recall store is scored using six weighted signals. Here's the complete breakdown:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;th&gt;What It Measures&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Relevance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.30&lt;/td&gt;
&lt;td&gt;Average retrieval quality across all recalls&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Frequency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.24&lt;/td&gt;
&lt;td&gt;Total number of short-term signals accumulated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Query diversity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.15&lt;/td&gt;
&lt;td&gt;How many distinct query contexts surfaced the entry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Recency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.15&lt;/td&gt;
&lt;td&gt;Time-decayed freshness (14-day half-life)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Consolidation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.10&lt;/td&gt;
&lt;td&gt;Multi-day recurrence strength&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Conceptual richness&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.06&lt;/td&gt;
&lt;td&gt;Concept-tag density from snippet and path&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Light and REM phase hits add a small recency-decayed boost (up to &lt;strong&gt;0.05&lt;/strong&gt; and &lt;strong&gt;0.08&lt;/strong&gt; respectively) on top of the base score.&lt;/p&gt;
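Assuming each signal is pre-normalized to the 0–1 range, the weighted scoring and boost capping described above can be sketched as follows (an illustrative approximation, not the actual memory-core code):

```python
import math

# Weights from the signal table; they sum to 1.00.
WEIGHTS = {
    "relevance": 0.30, "frequency": 0.24, "query_diversity": 0.15,
    "recency": 0.15, "consolidation": 0.10, "conceptual_richness": 0.06,
}

def recency(days_since_last_recall: float, half_life_days: float = 14.0) -> float:
    """Time-decayed freshness: the signal halves every 14 days."""
    return 0.5 ** (days_since_last_recall / half_life_days)

def composite_score(signals: dict, light_boost: float = 0.0, rem_boost: float = 0.0) -> float:
    """Weighted sum of the six signals, plus phase boosts capped at 0.05 / 0.08."""
    base = sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
    return base + min(light_boost, 0.05) + min(rem_boost, 0.08)
```

A candidate with perfect signals scores 1.00 before boosts; the capped light and REM boosts can lift it to at most 1.13.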

&lt;h3&gt;
  
  
  Signal Weight Visual
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Relevance         ████████████████████████████████ 0.30
Frequency         █████████████████████████  0.24
Query diversity   ███████████████  0.15
Recency           ███████████████  0.15
Consolidation     ██████████  0.10
Conceptual rich   ██████  0.06
─────────────────────────────────────────────
Total             1.00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Threshold Gates: What Gets Promoted
&lt;/h2&gt;

&lt;p&gt;A candidate must pass &lt;strong&gt;all three gates&lt;/strong&gt; to be promoted:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Gate&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;minScore&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.8&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Weighted composite score must be at least this high&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;minRecallCount&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Entry must have been recalled at least this many times&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;minUniqueQueries&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Entry must have surfaced from at least this many distinct queries&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Why Three Gates?&lt;/strong&gt;&lt;br&gt;
These gates prevent one-off mentions from being promoted. A memory must demonstrate &lt;strong&gt;sustained, diverse relevance&lt;/strong&gt; — not just a single lucky retrieval.&lt;/p&gt;
&lt;/blockquote&gt;
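The gate check itself is a simple conjunction of the three defaults. A minimal sketch:

```python
def passes_gates(score: float, recall_count: int, unique_queries: int,
                 min_score: float = 0.8, min_recall_count: int = 3,
                 min_unique_queries: int = 3) -> bool:
    """A candidate is promoted only if it clears all three gates."""
    return (score >= min_score
            and recall_count >= min_recall_count
            and unique_queries >= min_unique_queries)
```

Failing any single gate — a high score with too few recalls, or many recalls from only one query context — keeps the entry in short-term storage for another sweep.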

&lt;h3&gt;
  
  
  Phase Reinforcement Boosts
&lt;/h3&gt;

&lt;p&gt;Light and REM phase hits add bonus points on top of the base signal scores:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;th&gt;Maximum Boost&lt;/th&gt;
&lt;th&gt;Condition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Light Sleep&lt;/td&gt;
&lt;td&gt;+0.05&lt;/td&gt;
&lt;td&gt;Recency-decayed light phase signal hits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;REM Sleep&lt;/td&gt;
&lt;td&gt;+0.08&lt;/td&gt;
&lt;td&gt;Recency-decayed REM phase signal hits&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The Dream Diary: Human-Readable Output
&lt;/h2&gt;

&lt;p&gt;Alongside the machine-readable state, dreaming produces a human-readable &lt;strong&gt;Dream Diary&lt;/strong&gt; in &lt;code&gt;DREAMS.md&lt;/code&gt;. After each phase with enough material, a background subagent generates a short, creative narrative entry (80-180 words) written from the perspective of "a curious, gentle, slightly whimsical mind reflecting on the day."&lt;/p&gt;

&lt;p&gt;The diary is visible in the Gateway &lt;strong&gt;Dreams tab&lt;/strong&gt; and is intended for human browsing only — it is &lt;strong&gt;not a promotion source&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What the Dream Diary Looks Like
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Light Sleep&lt;/span&gt;
[Creative narrative about the day's memories being gathered]

&lt;span class="gu"&gt;## REM Sleep&lt;/span&gt;
[Whimsical reflection on recurring patterns discovered]

&lt;span class="gu"&gt;## Deep Sleep&lt;/span&gt;
[Final contemplation on what was worth keeping]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Where Things Live on Disk
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Machine State (&lt;code&gt;memory/.dreams/&lt;/code&gt;)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;short-term-recall.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;All tracked recall entries and their scores&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phase-signals.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Light/REM hit counts per entry key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;daily-ingestion.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Daily file change tracking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;session-ingestion.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Session file change tracking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;session-corpus/YYYY-MM-DD.txt&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Ingested session message snippets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;short-term-promotion.lock&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;File lock during promotion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;events.jsonl&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Audit log of dreaming events&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Human-Readable Output
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DREAMS.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Dream Diary with &lt;code&gt;## Light Sleep&lt;/code&gt;, &lt;code&gt;## REM Sleep&lt;/code&gt;, &lt;code&gt;## Deep Sleep&lt;/code&gt; blocks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;memory/dreaming/deep/YYYY-MM-DD.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Optional separate deep phase reports&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MEMORY.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Long-term memory where promoted entries land&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Enable Dreaming
&lt;/h3&gt;

&lt;p&gt;The fastest way is the slash command in any channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/dreaming on
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or add it to your config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"plugins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"entries"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"memory-core"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"dreaming"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Change the Sweep Schedule
&lt;/h3&gt;

&lt;p&gt;Default is 3 AM daily. To run every 6 hours instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"plugins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"entries"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"memory-core"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"dreaming"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"frequency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0 */6 * * *"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Check Status
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/dreaming status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or via CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw memory status &lt;span class="nt"&gt;--deep&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Disable Dreaming
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/dreaming off
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Manual and Debugging Workflows
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Preview Promotions Without Applying
&lt;/h3&gt;

&lt;p&gt;See what would be promoted if you ran a deep sweep now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw memory promote
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Apply Promotions Manually
&lt;/h3&gt;

&lt;p&gt;Run a deep promotion and write results to &lt;code&gt;MEMORY.md&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw memory promote &lt;span class="nt"&gt;--apply&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Limit to the top 5 candidates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw memory promote &lt;span class="nt"&gt;--apply&lt;/span&gt; &lt;span class="nt"&gt;--limit&lt;/span&gt; 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Explain Why Something Would or Wouldn't Promote
&lt;/h3&gt;

&lt;p&gt;Useful for tuning thresholds or understanding the scoring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw memory promote-explain &lt;span class="s2"&gt;"router vlan"&lt;/span&gt;
openclaw memory promote-explain &lt;span class="s2"&gt;"router vlan"&lt;/span&gt; &lt;span class="nt"&gt;--json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Preview REM Reflections
&lt;/h3&gt;

&lt;p&gt;See what REM phase would produce without writing anything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw memory rem-harness
openclaw memory rem-harness &lt;span class="nt"&gt;--json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuration Reference
&lt;/h2&gt;

&lt;p&gt;All settings live under &lt;code&gt;plugins.entries.memory-core.config.dreaming&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;enabled&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;false&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Master switch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;frequency&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"0 3 * * *"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cron schedule for full sweeps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;timezone&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;(agent default)&lt;/td&gt;
&lt;td&gt;Timezone for day boundary calculations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;verboseLogging&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;false&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Detailed candidate logging&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;storage.mode&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"inline"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"inline"&lt;/code&gt;, &lt;code&gt;"separate"&lt;/code&gt;, or &lt;code&gt;"both"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;storage.separateReports&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;false&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Write per-phase report files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.light.limit&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;100&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Max candidates to process in light phase&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.light.lookbackDays&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;How far back light reads daily files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.deep.limit&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Max promotions per sweep&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.deep.minScore&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0.8&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Minimum weighted score to promote&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.deep.minRecallCount&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Minimum recall signals required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.deep.minUniqueQueries&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Minimum distinct query contexts required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.deep.recencyHalfLifeDays&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;14&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Recency decay half-life in days&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.deep.maxAgeDays&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;30&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Maximum candidate age in days&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.rem.lookbackDays&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;7&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;How far back REM reads recall entries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.rem.limit&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Max REM candidates per sweep&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;phases.rem.minPatternStrength&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0.75&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Minimum pattern strength for REM themes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
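
<p>As a worked reference, here is that same table written out as one nested config, assuming the dotted keys nest exactly as written (with <code>enabled</code> flipped on, and <code>timezone</code> omitted since it defaults to the agent's):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight json"><code>{
  "plugins": {
    "entries": {
      "memory-core": {
        "config": {
          "dreaming": {
            "enabled": true,
            "frequency": "0 3 * * *",
            "verboseLogging": false,
            "storage": {
              "mode": "inline",
              "separateReports": false
            },
            "phases": {
              "light": { "limit": 100, "lookbackDays": 2 },
              "deep": {
                "limit": 10,
                "minScore": 0.8,
                "minRecallCount": 3,
                "minUniqueQueries": 3,
                "recencyHalfLifeDays": 14,
                "maxAgeDays": 30
              },
              "rem": {
                "lookbackDays": 7,
                "limit": 10,
                "minPatternStrength": 0.75
              }
            }
          }
        }
      }
    }
  }
}
</code></pre>

</div>
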

&lt;h2&gt;
  
  
  Tuning Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Too Many Promotions
&lt;/h3&gt;

&lt;p&gt;If &lt;code&gt;MEMORY.md&lt;/code&gt; is growing too fast:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Raise &lt;code&gt;phases.deep.minScore&lt;/code&gt; (try &lt;code&gt;0.85&lt;/code&gt; or &lt;code&gt;0.9&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Raise &lt;code&gt;phases.deep.minRecallCount&lt;/code&gt; (try &lt;code&gt;5&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Lower &lt;code&gt;phases.deep.limit&lt;/code&gt; (try &lt;code&gt;5&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Shorten &lt;code&gt;phases.deep.maxAgeDays&lt;/code&gt; so older candidates expire sooner&lt;/li&gt;
&lt;/ul&gt;
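
<p>Combined, a stricter <code>dreaming</code> block (nested under <code>plugins.entries.memory-core.config</code> as in the earlier examples) might look like this; the exact numbers are illustrative starting points, not recommendations:<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight json"><code>{
  "dreaming": {
    "enabled": true,
    "phases": {
      "deep": {
        "minScore": 0.9,
        "minRecallCount": 5,
        "limit": 5,
        "maxAgeDays": 21
      }
    }
  }
}
</code></pre>

</div>
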

&lt;h3&gt;
  
  
  Too Few Promotions
&lt;/h3&gt;

&lt;p&gt;If nothing is getting promoted and you're losing important context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower &lt;code&gt;phases.deep.minScore&lt;/code&gt; (try &lt;code&gt;0.7&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Lower &lt;code&gt;phases.deep.minRecallCount&lt;/code&gt; to &lt;code&gt;2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Increase &lt;code&gt;phases.deep.limit&lt;/code&gt; to allow more per sweep&lt;/li&gt;
&lt;li&gt;Extend &lt;code&gt;phases.deep.maxAgeDays&lt;/code&gt; to give candidates more time to accumulate signals&lt;/li&gt;
&lt;/ul&gt;
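
<p>A looser configuration along those lines might look like this (again nested under <code>plugins.entries.memory-core.config</code>; the values are illustrative):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight json"><code>{
  "dreaming": {
    "enabled": true,
    "phases": {
      "deep": {
        "minScore": 0.7,
        "minRecallCount": 2,
        "limit": 20,
        "maxAgeDays": 60
      }
    }
  }
}
</code></pre>

</div>
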

&lt;h3&gt;
  
  
  Sweep Frequency
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Frequency&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Daily (default)&lt;/td&gt;
&lt;td&gt;Good for most users. Low resource usage, steady promotion.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Every 6 hours&lt;/td&gt;
&lt;td&gt;For active agents with high daily memory throughput.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Weekly (&lt;code&gt;0 3 * * 0&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;For agents that don't accumulate much short-term memory.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Debugging Candidate Scoring
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Enable &lt;code&gt;verboseLogging: true&lt;/code&gt; to see per-candidate scores in the event log&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;openclaw memory promote-explain "&amp;lt;query&amp;gt;"&lt;/code&gt; to inspect a specific candidate&lt;/li&gt;
&lt;li&gt;Check &lt;code&gt;memory/.dreams/events.jsonl&lt;/code&gt; for detailed phase execution logs&lt;/li&gt;
&lt;/ul&gt;
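
<p>To turn on the per-candidate logging mentioned above (again nested under <code>plugins.entries.memory-core.config</code>):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight json"><code>{
  "dreaming": {
    "enabled": true,
    "verboseLogging": true
  }
}
</code></pre>

</div>
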

&lt;h2&gt;
  
  
  How Dreaming Integrates with the Rest of OpenClaw
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Daily notes + Sessions + Recall traces
            │
            ▼
┌─────────────────────┐
│    Light Phase       │  Ingest, dedupe, stage, record signals
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│     REM Phase        │  Extract themes, record reinforcement signals
└──────────┬──────────┘
           │
           ▼
┌─────────────────────┐
│    Deep Phase        │  Score, threshold, promote → MEMORY.md
└──────────┬──────────┘
           │
           ▼
    Dream Diary (DREAMS.md) — human-readable narrative only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key integration points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memory search&lt;/strong&gt; (&lt;code&gt;openclaw memory search&lt;/code&gt;) feeds short-term recall signals into the promotion pipeline during normal agent operation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Daily memory files&lt;/strong&gt; (&lt;code&gt;memory/YYYY-MM-DD.md&lt;/code&gt;) are the primary source material for light phase ingestion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session transcripts&lt;/strong&gt; (&lt;code&gt;~/.openclaw/agents/&amp;lt;id&amp;gt;/sessions/*.jsonl&lt;/code&gt;) are the secondary source&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gateway startup&lt;/strong&gt; reconciles the managed cron job, so config changes take effect on next gateway restart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Dreams UI tab&lt;/strong&gt; in the Gateway shows live status, phase counts, and the dream diary&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🤔 FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: What exactly is "dreaming" in the context of AI agents?
&lt;/h3&gt;

&lt;p&gt;A: Dreaming is OpenClaw's background memory consolidation system. It mimics a biological sleep cycle — light sleep for ingestion, REM for pattern recognition, and deep sleep for memory promotion. It runs automatically during off-hours to transform noisy short-term signals into curated long-term knowledge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How is this different from just writing everything to MEMORY.md?
&lt;/h3&gt;

<p>A: Without dreaming, you face binary outcomes: either over-promote (everything lands in MEMORY.md, bloating it with noise) or under-promote (nothing survives, and important patterns are lost). Dreaming uses evidence-based scoring with six weighted signals and three threshold gates to ensure only truly valuable, repeatedly relevant entries get promoted.</p>

&lt;h3&gt;
  
  
  Q: Can I preview what would be promoted before changes happen?
&lt;/h3&gt;

&lt;p&gt;A: Yes. Use &lt;code&gt;openclaw memory promote&lt;/code&gt; to preview without applying, or &lt;code&gt;openclaw memory promote-explain "&amp;lt;query&amp;gt;"&lt;/code&gt; to understand why a specific entry would or wouldn't make it. You can also check the Gateway's Dreams tab for live status.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How do I know if my configuration is causing too many or too few promotions?
&lt;/h3&gt;

<p>A: Monitor <code>MEMORY.md</code> growth rate. If it's bloated, raise <code>minScore</code> and <code>minRecallCount</code>. If you're losing important context, lower thresholds and extend <code>maxAgeDays</code>. The <code>events.jsonl</code> log and <code>promote-explain</code> command give you per-candidate visibility.</p>

&lt;h3&gt;
  
  
  Q: Is the Dream Diary purely aesthetic or does it serve a function?
&lt;/h3&gt;

&lt;p&gt;A: The Dream Diary is &lt;strong&gt;human-only&lt;/strong&gt; — it's not a promotion source. It's designed for you to browse and understand what OpenClaw found interesting from your sessions. Think of it as a curiosity artifact: a gentle, slightly whimsical narrative that makes the memory consolidation process transparent and engaging.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What happens to candidates that don't pass the threshold gates?
&lt;/h3&gt;

&lt;p&gt;A: They remain in the short-term recall store and continue accumulating signals on future recalls. If they eventually cross all three gates, they'll promote in a future sweep. Entries that exceed &lt;code&gt;maxAgeDays&lt;/code&gt; expire and are removed from consideration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary &amp;amp; Next Steps
&lt;/h2&gt;

&lt;p&gt;OpenClaw's Dreaming system brings &lt;strong&gt;disciplined curation&lt;/strong&gt; to AI agent memory management. By separating ingestion (Light), reflection (REM), and promotion (Deep), it ensures your long-term memory stays clean, relevant, and genuinely useful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get started in 30 seconds:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/dreaming on
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Check back tomorrow morning&lt;/strong&gt; — your Dream Diary will be waiting in the Gateway Dreams tab.&lt;/p&gt;

<p>For deeper tuning, explore <code>openclaw memory promote</code> (which previews without applying by default) and <code>openclaw memory status --deep</code> to understand what's happening under the hood.</p>




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/openclaw-dreaming-guide-2026" rel="noopener noreferrer"&gt;OpenClaw Dreaming Guide 2026&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>llm</category>
    </item>
    <item>
      <title>Qwen3.6-Plus: Alibaba's Quiet Giant in the AI Race Delivers a Million-Token Enterprise Powerhouse</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Thu, 02 Apr 2026 10:14:56 +0000</pubDate>
      <link>https://dev.to/czmilo/qwen36-plus-alibabas-quiet-giant-in-the-ai-race-delivers-a-million-token-enterprise-powerhouse-166o</link>
      <guid>https://dev.to/czmilo/qwen36-plus-alibabas-quiet-giant-in-the-ai-race-delivers-a-million-token-enterprise-powerhouse-166o</guid>
      <description>&lt;h1&gt;
  
  
  Qwen3.6-Plus: Alibaba's Quiet Giant in the AI Race Delivers a Million-Token Enterprise Powerhouse
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Qwen3.6-Plus&lt;/strong&gt; is Alibaba's latest flagship large language model, released April 2, 2026, designed specifically for enterprise agentic AI workloads&lt;/li&gt;
&lt;li&gt;The model ships with a &lt;strong&gt;1-million-token context window by default&lt;/strong&gt;, enabling true repository-level code understanding and long-form task processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic coding&lt;/strong&gt; is the headline capability of Qwen3.6-Plus — the model plans, executes, and refines tasks autonomously across complex engineering environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal reasoning&lt;/strong&gt; is built in, spanning text, code, images, and structured data across Alibaba's broader AI ecosystem (Wukong, Alibaba Cloud)&lt;/li&gt;
&lt;li&gt;Available via API and integrated into Alibaba Cloud; early preview launched March 30, 2026, with free access on OpenRouter&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What is Qwen3.6-Plus?&lt;/li&gt;
&lt;li&gt;The 1-Million-Token Context Window: Why It Matters&lt;/li&gt;
&lt;li&gt;Agentic Coding: The Real Headline&lt;/li&gt;
&lt;li&gt;Multimodal Reasoning Across the Alibaba Ecosystem&lt;/li&gt;
&lt;li&gt;Technical Architecture: Hybrid Design for Efficiency&lt;/li&gt;
&lt;li&gt;Benchmark Performance&lt;/li&gt;
&lt;li&gt;Enterprise Use Cases: Where Qwen3.6-Plus Shines&lt;/li&gt;
&lt;li&gt;How to Access and Integrate Qwen3.6-Plus&lt;/li&gt;
&lt;li&gt;Qwen3.6-Plus vs. The Competition&lt;/li&gt;
&lt;li&gt;Frequently Asked Questions&lt;/li&gt;
&lt;li&gt;Summary &amp;amp; Next Steps&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  1. What is Qwen3.6-Plus?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Qwen3.6-Plus&lt;/strong&gt; is the latest iteration in Alibaba Cloud's flagship Qwen series of large language models. Released on April 2, 2026, Qwen3.6-Plus represents a significant step forward from its predecessors — not just in raw benchmark numbers, but in its fundamental design philosophy: &lt;strong&gt;agentic AI for real enterprise workflows&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While many AI labs have talked about "agentic AI" as a future aspiration, Alibaba has shipped Qwen3.6-Plus with agentic capabilities baked into its core architecture. The model doesn't just respond to prompts — it plans multi-step tasks, uses tools, refines its own approach, and operates across complex, repository-scale engineering environments.&lt;/p&gt;

&lt;p&gt;The release also marks a quiet but meaningful shift in the global AI landscape. Qwen3.6-Plus positions Alibaba not as a follower in the LLM race, but as a contender with a differentiated focus on &lt;strong&gt;practical, deployment-ready enterprise AI&lt;/strong&gt;. This isn't about beating GPT-5 on a single benchmark. It's about giving enterprises a model they can actually put to work.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The 1-Million-Token Context Window: Why It Matters
&lt;/h2&gt;

&lt;p&gt;The most immediately striking spec of Qwen3.6-Plus is its &lt;strong&gt;1-million-token context window by default&lt;/strong&gt;. For those unfamiliar, this means the model can ingest and reason over approximately 750,000 words of text — or an entire large code repository — in a single context window.&lt;/p&gt;

&lt;p&gt;To understand why this matters, consider the limitations of earlier models:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model Generation&lt;/th&gt;
&lt;th&gt;Typical Context&lt;/th&gt;
&lt;th&gt;Practical Implication&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GPT-3.5 era&lt;/td&gt;
&lt;td&gt;4K–16K tokens&lt;/td&gt;
&lt;td&gt;Single files, short documents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4 era&lt;/td&gt;
&lt;td&gt;32K–128K tokens&lt;/td&gt;
&lt;td&gt;Medium documents, small codebases&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Qwen3.6-Plus&lt;/td&gt;
&lt;td&gt;1,000,000 tokens&lt;/td&gt;
&lt;td&gt;Entire repositories, years of documentation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A 1-million-token context transforms what's architecturally possible. A software engineering team can feed Qwen3.6-Plus an entire codebase — all dependencies, tests, documentation, and commit history — and ask it to reason about architectural decisions, identify bugs, or generate features that respect patterns established across hundreds of files.&lt;/p&gt;

&lt;p&gt;This isn't extrapolation or "hope it works" context extension. Qwen3.6-Plus provides the 1-million-token window as a &lt;strong&gt;default, native capability&lt;/strong&gt; — a direct response to the real-world need for repository-level AI assistance in enterprise environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Agentic Coding: The Real Headline
&lt;/h2&gt;

&lt;p&gt;If the context window is the spec that gets attention, &lt;strong&gt;agentic coding&lt;/strong&gt; is the capability that will determine whether Qwen3.6-Plus actually changes how enterprises build software.&lt;/p&gt;

&lt;p&gt;Agentic coding goes beyond autocomplete or even code suggestion. Qwen3.6-Plus is designed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plan&lt;/strong&gt; a multi-file code change before writing a single line&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute&lt;/strong&gt; code changes across a repository with awareness of dependencies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refine&lt;/strong&gt; its own outputs based on test results, linting feedback, or human review&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reason&lt;/strong&gt; about code architecture, identifying patterns and anti-patterns across large codebases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug&lt;/strong&gt; with full repository context — tracing a bug to its root cause rather than patching symptoms&lt;/li&gt;
&lt;/ul&gt;

<p>This is the difference between a code-completion assistant and a true <strong>coding agent</strong>. Qwen3.6-Plus enables enterprises to automate entire workflows — from requirements to PR review — that previously required senior engineers to orchestrate.</p>

&lt;p&gt;Alibaba has also deeply integrated Qwen3.6-Plus with its developer tooling ecosystem. The model is not just an API endpoint; it's designed to be embedded into IDEs, CI/CD pipelines, and code review workflows via Alibaba Cloud's developer services.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Multimodal Reasoning Across the Alibaba Ecosystem
&lt;/h2&gt;

&lt;p&gt;Qwen3.6-Plus isn't a single-purpose coding model. It delivers &lt;strong&gt;multimodal reasoning&lt;/strong&gt; — the ability to understand and generate across text, code, images, and structured data — and it's deeply integrated into Alibaba's broader AI ecosystem.&lt;/p&gt;

&lt;p&gt;Qwen3.6-Plus connects with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wukong&lt;/strong&gt; — Alibaba's multimodal foundation model for image understanding and generation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alibaba Cloud&lt;/strong&gt; — The enterprise cloud platform where Qwen3.6-Plus is deployed as a managed service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Qwen Chat&lt;/strong&gt; — Alibaba's consumer-facing AI chat interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ecosystem integration means enterprises don't just get an LLM API — they get a cohesive AI infrastructure. A logistics company, for example, can use Qwen3.6-Plus to analyze warehouse images (via Wukong integration), process shipping documentation, optimize routing algorithms, and generate customer communication — all within a single, integrated workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Technical Architecture: Hybrid Design for Efficiency
&lt;/h2&gt;

&lt;p&gt;Alibaba's technical documentation describes Qwen3.6-Plus as built on a &lt;strong&gt;hybrid architecture designed for improved efficiency and scalability&lt;/strong&gt;. While full architectural details remain closely held, this hybrid approach suggests a Mixture-of-Experts (MoE) inspired design — similar to how Qwen3-Coder-480B uses 480B total parameters with 35B active parameters per token.&lt;/p&gt;

<p>This design philosophy reflects a pragmatic reality: enterprises need models that are powerful but not prohibitively expensive to run. By activating only the parameters needed for each task, the hybrid architecture lets Qwen3.6-Plus deliver frontier-level performance at a fraction of the compute cost of dense models.</p>

<p>Qwen3.6-Plus also treats <strong>chain-of-thought reasoning</strong> and <strong>tool use</strong> as core capabilities — not optional features toggled by prompt engineering. This means developers and enterprises get consistent, reliable reasoning traces without needing to craft complex system prompts.</p>

&lt;h2&gt;
  
  
  6. Benchmark Performance
&lt;/h2&gt;

&lt;p&gt;Across a broad set of industry benchmarks, Qwen3.6-Plus demonstrates strong performance, particularly in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agentic coding tasks&lt;/strong&gt; — repository-level code understanding, multi-file code generation, automated debugging&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal reasoning&lt;/strong&gt; — image-text understanding, cross-modal consistency, document understanding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-context tasks&lt;/strong&gt; — needle-in-a-haystack retrieval, multi-document synthesis, full-codebase analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise workflow tasks&lt;/strong&gt; — business document reasoning, data analysis, multilingual processing (100+ languages supported)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While specific benchmark scores vary by test, the consistent theme from early evaluations of Qwen3.6-Plus is that it punches at or above the tier-1 frontier model level on agentic and coding tasks — precisely the workloads that matter most for enterprise AI deployment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;&lt;br&gt;
When evaluating Qwen3.6-Plus for your enterprise, focus on task-specific benchmarks relevant to your use case rather than aggregate leaderboard positions. The model's agentic coding capabilities may exceed what its raw MMLU score suggests.</p>
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  7. Enterprise Use Cases: Where Qwen3.6-Plus Shines
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Software Engineering Automation
&lt;/h3&gt;

<p>Qwen3.6-Plus is purpose-built for engineering teams. It can serve as an <strong>AI coding agent</strong> that:</p>

&lt;ul&gt;
&lt;li&gt;Reviews pull requests with full repository context&lt;/li&gt;
&lt;li&gt;Generates test suites covering edge cases across entire modules&lt;/li&gt;
&lt;li&gt;Refactors legacy code while maintaining behavioral equivalence&lt;/li&gt;
&lt;li&gt;Documents APIs and codebases automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Customer Service &amp;amp; Support
&lt;/h3&gt;

<p>With multimodal reasoning and support for 100+ languages, Qwen3.6-Plus powers <strong>multilingual customer service agents</strong> that understand text, images (screenshots, documents), and structured data — delivering coherent, context-aware responses across Alibaba Cloud's infrastructure.</p>

&lt;h3&gt;
  
  
  Financial Analysis &amp;amp; Document Processing
&lt;/h3&gt;

&lt;p&gt;Enterprises in finance and legal can leverage the 1-million-token context to &lt;strong&gt;analyze entire document repositories&lt;/strong&gt; — years of filings, contracts, or research reports — in a single query, extracting insights and connections that would be impossible with shorter-context models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Healthcare &amp;amp; Research
&lt;/h3&gt;

<p>Multimodal capabilities combined with long-context processing enable Qwen3.6-Plus to <strong>synthesize research literature</strong>, analyze medical imaging reports alongside clinical notes, and support clinical decision-making with full patient history context.</p>

&lt;h2&gt;
  
  
  8. How to Access and Integrate Qwen3.6-Plus
&lt;/h2&gt;

&lt;p&gt;Qwen3.6-Plus is available through multiple channels:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Access Method&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Alibaba Cloud API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Managed endpoint via Alibaba Cloud ML Platform — production-ready&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenRouter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free preview access (as of March 30, 2026) — good for evaluation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Qwen Chat&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Consumer interface at qwen.ai — quick experimentation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hugging Face&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Model weights available for self-hosting (Qwen3.5 series already on HF)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For enterprise integration, Alibaba Cloud provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;REST API access with standard authentication&lt;/li&gt;
&lt;li&gt;SDKs for Python, Java, and Node.js&lt;/li&gt;
&lt;li&gt;Direct integration with Alibaba Cloud's data and compute services&lt;/li&gt;
&lt;li&gt;SLA-backed production support&lt;/li&gt;
&lt;/ul&gt;
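
<p>For quick evaluation, OpenRouter exposes an OpenAI-compatible chat completions endpoint (<code>POST https://openrouter.ai/api/v1/chat/completions</code>), so a minimal request body would look roughly like the following. The model slug below is a placeholder, not a confirmed identifier — check OpenRouter's model list for the real one:<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight json"><code>{
  "model": "qwen/qwen3.6-plus",
  "messages": [
    {
      "role": "user",
      "content": "Review this module for dependency cycles and suggest a refactor plan."
    }
  ],
  "max_tokens": 1024
}
</code></pre>

</div>
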

&lt;h2&gt;
  
  
  9. Qwen3.6-Plus vs. The Competition
&lt;/h2&gt;

&lt;p&gt;How does Qwen3.6-Plus stack up against the leading frontier models?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;Qwen3.6-Plus&lt;/th&gt;
&lt;th&gt;GPT-4o&lt;/th&gt;
&lt;th&gt;Claude 3.5 Sonnet&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Context Window&lt;/td&gt;
&lt;td&gt;1M tokens (native)&lt;/td&gt;
&lt;td&gt;128K–1M (extended)&lt;/td&gt;
&lt;td&gt;200K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agentic Coding&lt;/td&gt;
&lt;td&gt;Built-in, core feature&lt;/td&gt;
&lt;td&gt;Via extensions&lt;/td&gt;
&lt;td&gt;Good, via extensions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multimodal&lt;/td&gt;
&lt;td&gt;Native, ecosystem-integrated&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise Integration&lt;/td&gt;
&lt;td&gt;Alibaba Cloud-native&lt;/td&gt;
&lt;td&gt;Via Azure OpenAI&lt;/td&gt;
&lt;td&gt;Via Anthropic API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multilingual (100+ languages)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open Source Weights&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Free Access&lt;/td&gt;
&lt;td&gt;Yes (OpenRouter preview)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Qwen3.6-Plus's clearest differentiator is its &lt;strong&gt;default 1-million-token context&lt;/strong&gt; combined with &lt;strong&gt;built-in agentic coding&lt;/strong&gt; — both delivered as core capabilities rather than optional features or premium add-ons.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: What is Qwen3.6-Plus?
&lt;/h3&gt;

&lt;p&gt;A: Qwen3.6-Plus is Alibaba Cloud's latest flagship large language model, released April 2, 2026. It features a 1-million-token context window, built-in agentic coding capabilities, and multimodal reasoning, designed for enterprise AI deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does Qwen3.6-Plus compare to GPT-4o?
&lt;/h3&gt;

&lt;p&gt;A: Qwen3.6-Plus matches or exceeds GPT-4o on agentic coding and long-context tasks, particularly for enterprise use cases. Its 1-million-token default context is larger than GPT-4o's standard offering, and its deep integration with Alibaba Cloud makes it a compelling alternative for enterprises in Asia or with Alibaba ecosystem dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Is Qwen3.6-Plus free to use?
&lt;/h3&gt;

&lt;p&gt;A: Yes, in preview: Qwen3.6-Plus is free to try on OpenRouter. For production enterprise use, it is available via Alibaba Cloud's paid API service with SLA guarantees.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What makes Qwen3.6-Plus different from earlier Qwen models?
&lt;/h3&gt;

&lt;p&gt;A: Qwen3.6-Plus is the first Qwen model to ship with agentic capabilities as a core, default feature rather than a prompt-based behavior. It also introduces the 1-million-token context as a native default (not extrapolation), and deeper ecosystem integration with Wukong and Alibaba Cloud services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can I self-host Qwen3.6-Plus?
&lt;/h3&gt;

&lt;p&gt;A: Model weights for the Qwen3.5 series are available on Hugging Face for self-hosting. Qwen3.6-Plus weights availability follows Alibaba's phased release model — check the official Qwen GitHub and Hugging Face pages for the latest.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Summary &amp;amp; Next Steps
&lt;/h2&gt;

&lt;p&gt;Alibaba's release of Qwen3.6-Plus is a signal event in the enterprise AI race. While Western AI labs have dominated headlines, Alibaba has been quietly building an AI ecosystem that is now competitive at the frontier level — and more importantly, &lt;strong&gt;deployment-ready for real enterprise workflows&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Qwen3.6-Plus's 1-million-token context window, built-in agentic coding, and multimodal reasoning aren't just spec-sheet wins. They're practical capabilities that enterprises can use today to automate complex, multi-step workflows across software engineering, customer service, financial analysis, and research.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're evaluating AI for enterprise deployment, Qwen3.6-Plus deserves serious consideration&lt;/strong&gt; — especially if you're already in the Alibaba Cloud ecosystem or need best-in-class performance on agentic coding and long-context tasks.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Article generated based on publicly available information as of April 2026. For the latest model capabilities and pricing, visit &lt;a href="https://www.alibabacloud.com" rel="noopener noreferrer"&gt;Alibaba Cloud&lt;/a&gt; or &lt;a href="https://qwen.ai" rel="noopener noreferrer"&gt;Qwen.ai&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/qwen36-plus-alibaba-ai-million-token-enterprise" rel="noopener noreferrer"&gt;Qwen3.6-Plus: Alibaba's Quiet Giant in the AI Race Delivers a Million-Token Enterprise Powerhouse&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>alibaba</category>
      <category>qwen</category>
      <category>enterprise</category>
    </item>
    <item>
      <title>CurateClick Weekly Picks: 6 Fresh Tools Worth Trying (Mar 22, 2026 Edition)</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Tue, 31 Mar 2026 04:48:04 +0000</pubDate>
      <link>https://dev.to/czmilo/curateclick-weekly-picks-6-fresh-tools-worth-trying-mar-22-2026-edition-21g3</link>
      <guid>https://dev.to/czmilo/curateclick-weekly-picks-6-fresh-tools-worth-trying-mar-22-2026-edition-21g3</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;CurateClick's latest Weekly Picks spotlight &lt;strong&gt;six&lt;/strong&gt; tools that help you speak better, create faster, and express yourself more clearly—whether you're preparing for a dinner party, building an illustrated story world, or generating multi-shot cinematic video.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dinner Party Practice&lt;/strong&gt; — practice meaningful conversation with prompts + a wine-glass timer (plus optional speech analysis)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pretty Scale&lt;/strong&gt; — AI-based attractiveness analysis with breakdowns and privacy-first handling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;C2story&lt;/strong&gt; — create and evolve illustrated stories with reusable characters and 50+ art styles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Random Topic Generator&lt;/strong&gt; — impromptu speech topics + built-in 1/3/5 minute timer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seedance 2.0&lt;/strong&gt; — multimodal, controllable multi-shot AI video for cinematic storytelling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ValRequest&lt;/strong&gt; — generate personalized romantic messages in different tones and lengths&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are Weekly Picks on CurateClick?
&lt;/h2&gt;

&lt;p&gt;CurateClick is a discovery platform for useful products and tools. &lt;strong&gt;Weekly Picks&lt;/strong&gt; are hand-selected highlights—things that feel unusually practical, surprisingly delightful, or simply ahead of the curve.&lt;/p&gt;

&lt;p&gt;This roundup focuses on the most recent entries shown on the Weekly Picks page (latest date: &lt;strong&gt;Mar 22, 2026&lt;/strong&gt;), and selects &lt;strong&gt;six&lt;/strong&gt; products for deeper coverage.&lt;/p&gt;

&lt;h2&gt;
  
  
  1) Dinner Party Practice — the art of having something to say (Mar 22, 2026)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; social confidence, language learners, networking, and anyone who wants to sound more interesting without sounding rehearsed.&lt;/p&gt;

&lt;p&gt;Dinner Party Practice is a free, AI-powered "conversation gym." You pick a category (All Topics / Love / Culture / Personal), draw a card, then speak on a prompt while a &lt;strong&gt;wine-glass timer&lt;/strong&gt; fills—an elegant little constraint that makes practice feel less like homework.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why it stands out
&lt;/h3&gt;

&lt;p&gt;Most "conversation starters" are shallow. Dinner Party Practice aims for questions that invite real stories and opinions—prompts that can turn a table of polite strangers into a room with momentum.&lt;/p&gt;

&lt;h3&gt;
  
  
  Notable features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Three thought-provoking prompts&lt;/strong&gt; per draw&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wine-glass timer&lt;/strong&gt; (1/3/5 minutes) to build fluency under gentle pressure&lt;/li&gt;
&lt;li&gt;Optional &lt;strong&gt;AI speech analysis&lt;/strong&gt;: transcript + rewrite + pacing + filler words + tone + pauses&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  A quick way to use it
&lt;/h3&gt;

&lt;p&gt;Try a 3-minute session before any social event:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Draw a Culture or Personal card&lt;/li&gt;
&lt;li&gt;Pick one prompt&lt;/li&gt;
&lt;li&gt;Speak for 3 minutes&lt;/li&gt;
&lt;li&gt;Review fillers and pacing once, then stop—don't over-optimize&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;🔗 &lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://curateclick.com/product/dinner-party-practice" rel="noopener noreferrer"&gt;https://curateclick.com/product/dinner-party-practice&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  2) Pretty Scale — How Pretty Are You? Let AI Decide. (Mar 22, 2026)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; curiosity, photo feedback loops, modeling/photography experimentation, or "just for fun" comparisons (with a reality check).&lt;/p&gt;

&lt;p&gt;Pretty Scale is an AI-powered attractiveness evaluation tool that analyzes a photo and produces an overall score plus a dimensional breakdown. It offers two modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scientific Evaluation&lt;/strong&gt; (more objective framing + constructive feedback)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Roast Mode&lt;/strong&gt; (same scoring, delivered with humor)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What's interesting (and what to be careful about)
&lt;/h3&gt;

&lt;p&gt;The value here isn't "the number." It's the &lt;strong&gt;structured breakdown&lt;/strong&gt;—symmetry, proportions, skin quality, facial structure, etc.—which can be used as a lens for photography, lighting, styling, and presentation.&lt;/p&gt;

&lt;p&gt;At the same time, it's still a model. Treat results as &lt;strong&gt;feedback for iteration&lt;/strong&gt;, not identity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Privacy notes
&lt;/h3&gt;

&lt;p&gt;Pretty Scale claims it &lt;strong&gt;doesn't store uploaded photos&lt;/strong&gt; and deletes them after processing—exactly the kind of baseline hygiene you want for image analysis tools.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🔗 &lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://curateclick.com/product/pretty-scale" rel="noopener noreferrer"&gt;https://curateclick.com/product/pretty-scale&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  3) C2story — Create Illustrated Stories with AI (Mar 7, 2026)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; writers, educators, parents, indie comic makers, and anyone who wants to turn characters into a repeatable "story engine."&lt;/p&gt;

&lt;p&gt;C2story is built around a simple but powerful idea: stories don't end after one generation. You create a character and a story—then &lt;strong&gt;continue&lt;/strong&gt;, &lt;strong&gt;rewrite&lt;/strong&gt;, or &lt;strong&gt;remix&lt;/strong&gt; it into something bigger.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why it stands out
&lt;/h3&gt;

&lt;p&gt;A lot of AI storytelling tools generate a one-off output. C2story emphasizes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Character persistence&lt;/strong&gt; (reuse characters across stories)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evolving narratives&lt;/strong&gt; (branching and iteration)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared story worlds&lt;/strong&gt; (collaboration and community remix)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Notable features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;50+ visual styles&lt;/strong&gt; (storybook, anime, watercolor, cinematic, cartoon, etc.)&lt;/li&gt;
&lt;li&gt;Multi-language support (including bilingual editions)&lt;/li&gt;
&lt;li&gt;Export options like &lt;strong&gt;PDF&lt;/strong&gt; and downloadable asset bundles&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical use cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Teachers:&lt;/strong&gt; create illustrated reading material tailored to a lesson&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Families:&lt;/strong&gt; personalized bedtime stories featuring your kid as the hero&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creators:&lt;/strong&gt; prototype a comic series quickly, then refine the best arcs&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🔗 &lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://curateclick.com/product/c2story" rel="noopener noreferrer"&gt;https://curateclick.com/product/c2story&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  4) Random Topic Generator — Impromptu Speech Topics &amp;amp; Timer (Feb 22, 2026)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Toastmasters, interviews, meetings, students, and anyone leveling up "thinking out loud."&lt;/p&gt;

&lt;p&gt;Random Topic Generator does one job well: generate &lt;strong&gt;three&lt;/strong&gt; impromptu speaking prompts, then let you practice with a built-in timer (1/3/5 minutes). It also supports &lt;strong&gt;English and Chinese&lt;/strong&gt;, with optional hints like "technology" or "funny."&lt;/p&gt;

&lt;h3&gt;
  
  
  Why it's useful
&lt;/h3&gt;

&lt;p&gt;Impromptu speaking is a foundational skill: interviews, standups, brainstorming, leadership moments. The hardest part is often &lt;strong&gt;starting&lt;/strong&gt;—this tool removes the friction.&lt;/p&gt;

&lt;h3&gt;
  
  
  A simple training loop (10 minutes/day)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;1 minute warm-up:&lt;/strong&gt; one topic, speak without stopping&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3 minutes:&lt;/strong&gt; structure with PREP (Point, Reason, Example, Point)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5 minutes (optional):&lt;/strong&gt; add a counter-argument or a personal story&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Consistency beats intensity here.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://curateclick.com/product/random-topic-generator" rel="noopener noreferrer"&gt;https://curateclick.com/product/random-topic-generator&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  5) Seedance 2.0 — multi-shot cinematic video, no clips (Feb 10, 2026)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; indie filmmakers, creative studios, content teams, and anyone trying to turn "AI video" from a toy into a workflow.&lt;/p&gt;

&lt;p&gt;Seedance 2.0 positions itself as a multimodal AI video engine controlled by &lt;strong&gt;text, image, audio, and video&lt;/strong&gt;—with the goal of producing &lt;strong&gt;production-ready, multi-shot cinematic stories&lt;/strong&gt; in one go.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why this matters
&lt;/h3&gt;

&lt;p&gt;Most text-to-video tools struggle with three painful gaps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt; (characters/scene drift across shots)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Narrative cohesion&lt;/strong&gt; (clips don't feel like a sequence)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audio-visual sync&lt;/strong&gt; (lip sync and timing are fragile)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seedance 2.0 claims progress on all three: director-like control, story pacing, and stronger audio alignment.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to think about it
&lt;/h3&gt;

&lt;p&gt;If you've ever storyboarded, you'll recognize the advantage of multi-shot generation: it's not just a pretty clip—it's a &lt;em&gt;sequence&lt;/em&gt; with intent (camera, action, transitions).&lt;/p&gt;

&lt;p&gt;Even if you don't ship the output directly, it can serve as a powerful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;previs tool&lt;/strong&gt; (pre-visualization)&lt;/li&gt;
&lt;li&gt;concept pitch generator&lt;/li&gt;
&lt;li&gt;rapid iteration engine for narrative ads&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🔗 &lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://curateclick.com/product/seedance-2.0-create-multi-shot-movies-no-clips.-the-controllable-ai-video-generator" rel="noopener noreferrer"&gt;https://curateclick.com/product/seedance-2.0-create-multi-shot-movies-no-clips.-the-controllable-ai-video-generator&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  6) ValRequest — Turn Feelings Into Words (Feb 6, 2026)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; people who care, but freeze when it's time to write; last-minute romantics; anyone who wants "sweet" without sounding generic.&lt;/p&gt;

&lt;p&gt;ValRequest generates short, personalized romantic messages. You pick:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;recipient type (partner / crush / friend)&lt;/li&gt;
&lt;li&gt;style (heartfelt / humorous / Shakespeare / cute)&lt;/li&gt;
&lt;li&gt;length (short / medium / long)&lt;/li&gt;
&lt;li&gt;a few keywords that anchor the relationship&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then it returns &lt;strong&gt;three&lt;/strong&gt; options—fast enough to be useful in real life.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why it works
&lt;/h3&gt;

&lt;p&gt;Good messages feel specific. The keyword input is a simple constraint that nudges outputs toward your actual story instead of Hallmark boilerplate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Best practice
&lt;/h3&gt;

&lt;p&gt;Use the AI output as a draft, then add one real detail:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a shared memory&lt;/li&gt;
&lt;li&gt;a private joke&lt;/li&gt;
&lt;li&gt;a near-future plan ("dinner Friday?")&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 That single human detail upgrades the whole message.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://curateclick.com/product/valrequest" rel="noopener noreferrer"&gt;https://curateclick.com/product/valrequest&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Want your product featured next?
&lt;/h2&gt;

&lt;p&gt;CurateClick is built for discovery—but it only works if makers ship and share.&lt;/p&gt;

&lt;p&gt;If you're building something useful (a tool, app, library, template, service, or weird little side project), &lt;strong&gt;submit it to CurateClick&lt;/strong&gt; so more people can find it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Submit here:&lt;/strong&gt; &lt;a href="https://curateclick.com/" rel="noopener noreferrer"&gt;https://curateclick.com/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fastest way to grow is simple: &lt;strong&gt;make it easy for the right people to stumble into your work&lt;/strong&gt;. CurateClick is one of those surfaces.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Weekly Picks
&lt;/h2&gt;

&lt;p&gt;Browse the full Weekly Picks archive here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://curateclick.com/weekly" rel="noopener noreferrer"&gt;https://curateclick.com/weekly&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/curateclick-weekly-picks-6-fresh-tools-mar-2026" rel="noopener noreferrer"&gt;CurateClick Weekly Picks: 6 Fresh Tools Worth Trying (Mar 22, 2026 Edition)&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tools</category>
      <category>productivity</category>
    </item>
    <item>
      <title>2026 Complete Guide: OpenClaw LCM Plugin — Never Lose a Single Conversation Again</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Mon, 30 Mar 2026 04:14:56 +0000</pubDate>
      <link>https://dev.to/czmilo/2026-complete-guide-openclaw-lcm-plugin-never-lose-a-single-conversation-again-6n4</link>
      <guid>https://dev.to/czmilo/2026-complete-guide-openclaw-lcm-plugin-never-lose-a-single-conversation-again-6n4</guid>
      <description>&lt;h1&gt;
  
  
  2026 Complete Guide: OpenClaw LCM Plugin — Never Lose a Single Conversation Again
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Key Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Lossless-Claw plugin replaces OpenClaw's default context engine with a DAG-based storage system that never throws away conversation history&lt;/li&gt;
&lt;li&gt;Every message is persisted to SQLite and summarized into expandable nodes — you can drill back into any point of your conversation&lt;/li&gt;
&lt;li&gt;Setup takes under 5 minutes: install the plugin, flip one config flag, and you're running&lt;/li&gt;
&lt;li&gt;Cost-conscious users can route summarization through a cheaper model (e.g., Claude Haiku) while keeping the main conversation on a premium model&lt;/li&gt;
&lt;li&gt;This guide covers installation, configuration, architecture, agent tools, and troubleshooting — everything you need in one place&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What Problem Does LCM Solve?&lt;/li&gt;
&lt;li&gt;Installation Walkthrough&lt;/li&gt;
&lt;li&gt;How the DAG Model Works&lt;/li&gt;
&lt;li&gt;Configuration Deep Dive&lt;/li&gt;
&lt;li&gt;Agent Tools: grep, describe, expand_query&lt;/li&gt;
&lt;li&gt;Architecture Internals&lt;/li&gt;
&lt;li&gt;Advantages Over Traditional Context Management&lt;/li&gt;
&lt;li&gt;Known Limitations&lt;/li&gt;
&lt;li&gt;Troubleshooting Common Issues&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What Problem Does LCM Solve?
&lt;/h2&gt;

&lt;p&gt;By default, OpenClaw uses a legacy context engine that truncates or slides old messages out of the context window as conversations grow. Once those messages are gone, the agent loses access to earlier context entirely. This is a fundamental problem for long-running projects, complex debugging sessions, or any conversation that spans days or weeks.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Lossless-Claw&lt;/strong&gt; plugin replaces this with a fundamentally different approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every message is persisted to a local SQLite database — nothing is ever deleted&lt;/li&gt;
&lt;li&gt;Old messages are summarized into a DAG (Directed Acyclic Graph) of layered summaries&lt;/li&gt;
&lt;li&gt;The agent can drill back into any summary to recover full details on demand&lt;/li&gt;
&lt;li&gt;Context assembly is budget-aware, fitting the most relevant information into the model's context window&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: conversations that can run for hundreds or thousands of turns without the agent "forgetting" what happened earlier.&lt;/p&gt;




&lt;h2&gt;
  
  
  Installation Walkthrough
&lt;/h2&gt;

&lt;h3&gt;
  
  
  From npm (Recommended)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw plugins &lt;span class="nb"&gt;install&lt;/span&gt; @martian-engineering/Lossless-Claw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  From a Local Clone (for Development)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Martian-Engineering/Lossless-Claw.git
openclaw plugins &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--link&lt;/span&gt; ./Lossless-Claw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Activate as the Context Engine
&lt;/h3&gt;

&lt;p&gt;This step is &lt;strong&gt;required&lt;/strong&gt;. Without it, the plugin loads but does not run — the default &lt;code&gt;legacy&lt;/code&gt; engine remains active.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw config &lt;span class="nb"&gt;set &lt;/span&gt;plugins.slots.contextEngine Lossless-Claw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw plugins list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see &lt;code&gt;Lossless-Claw&lt;/code&gt; listed as enabled, with the &lt;code&gt;contextEngine&lt;/code&gt; slot assigned to it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw plugins update @martian-engineering/Lossless-Claw
&lt;span class="c"&gt;# Or update all plugins at once:&lt;/span&gt;
openclaw plugins update &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  How the DAG Model Works
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Core Insight
&lt;/h3&gt;

&lt;p&gt;Traditional context management is linear: keep the latest N messages, discard the rest. LCM instead builds a layered DAG of summaries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Raw messages:   [m1] [m2] [m3] ... [m20] [m21] ... [m40] ... [m80] ... [m100]
                 ↓ chunk                  ↓ chunk            ↓ chunk
Leaf (d0):     [leaf_1: m1-m20]      [leaf_2: m21-m40]   [leaf_3: ...]  [leaf_4: ...]
                 ↓                        ↓
Condensed (d1): [cond_1: leaf_1 + leaf_2]                 [cond_2: leaf_3 + leaf_4]
                 ↓                                            ↓
Condensed (d2): [cond_3: cond_1 + cond_2]
                                                    ↑
                                            still expandable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each node carries metadata: time range, token counts, descendant counts, and references to its sources. The agent sees summaries in the context window, and uses retrieval tools to drill into any node for full detail.&lt;/p&gt;
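&lt;p&gt;The drill-back mechanism can be sketched in a few lines (field names here are assumptions for illustration, not the plugin's actual schema):&lt;/p&gt;

```javascript
// Illustrative sketch of the DAG shape above; field names are assumptions,
// not the plugin's actual schema.
const nodes = {
  leaf_1: { depth: 0, messages: ["m1", "m2"], sources: [] },
  leaf_2: { depth: 0, messages: ["m3", "m4"], sources: [] },
  cond_1: { depth: 1, summary: "summary of m1 through m4", sources: ["leaf_1", "leaf_2"] },
};

// "Drilling back in" means following source references down to raw messages.
function expand(id) {
  const node = nodes[id];
  if (node.sources.length === 0) return node.messages;
  return node.sources.flatMap(expand);
}
```

&lt;p&gt;Expanding &lt;code&gt;cond_1&lt;/code&gt; recovers all four raw messages, even though only its summary occupies the context window.&lt;/p&gt;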

&lt;h3&gt;
  
  
  Lifecycle Hooks
&lt;/h3&gt;

&lt;p&gt;The engine hooks into four points in OpenClaw's conversation flow:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;th&gt;What Happens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bootstrap&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;On session startup, reconciles the JSONL session file with the SQLite database. Imports any messages that appeared since the last checkpoint.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Assemble&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Before each model call, builds the message array within the token budget: recent raw messages (the "fresh tail") plus selected summaries from the DAG.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;After Turn&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;After the model responds, persists new messages and evaluates whether compaction is needed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compact&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;When the context exceeds the threshold, runs leaf and/or condensed summarization passes to compress older content.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Compaction: Three Escalation Levels
&lt;/h3&gt;

&lt;p&gt;Every summarization attempt follows a fallback chain to guarantee progress:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Normal&lt;/strong&gt; — Full-fidelity prompt, temperature 0.2, target ~1200 tokens&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggressive&lt;/strong&gt; — Tighter prompt with fewer details, temperature 0.1, lower token target&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic fallback&lt;/strong&gt; — Truncates to ~512 tokens with a &lt;code&gt;[Truncated for context management]&lt;/code&gt; marker&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even if the summarization model is down or returns garbage, compaction still succeeds.&lt;/p&gt;
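&lt;p&gt;The escalation chain can be sketched as follows, with &lt;code&gt;trySummarize&lt;/code&gt; standing in for the real model call (names and the aggressive-pass token target are illustrative):&lt;/p&gt;

```javascript
// Sketch of the three-level fallback; trySummarize stands in for the real
// model call and returns null on failure. Names are illustrative.
function summarizeWithFallback(text, trySummarize) {
  // Level 1: normal pass (temperature 0.2, ~1200-token target)
  let s = trySummarize(text, { temperature: 0.2, targetTokens: 1200 });
  if (s) return s;
  // Level 2: aggressive pass with a tighter budget (illustrative target)
  s = trySummarize(text, { temperature: 0.1, targetTokens: 600 });
  if (s) return s;
  // Level 3: deterministic truncation -- always succeeds
  // (character slice as a stand-in for the ~512-token cut)
  return text.slice(0, 2048) + " [Truncated for context management]";
}
```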

&lt;h3&gt;
  
  
  Large File Handling
&lt;/h3&gt;

&lt;p&gt;When a message contains a file (code paste, log dump, etc.) exceeding the &lt;code&gt;largeFileTokenThreshold&lt;/code&gt; (default 25,000 tokens):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The file content is extracted and stored on disk (&lt;code&gt;~/.openclaw/lcm-files/&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;A ~200-token structural summary replaces the file in the message&lt;/li&gt;
&lt;li&gt;The agent can retrieve the full file via &lt;code&gt;lcm_describe&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This prevents a single large paste from consuming the entire context window.&lt;/p&gt;
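&lt;p&gt;A minimal sketch of that flow, assuming a simple 4-characters-per-token estimate and a hypothetical &lt;code&gt;store&lt;/code&gt; interface:&lt;/p&gt;

```javascript
// Illustrative sketch of the externalization step; the token estimate and
// store interface are assumptions, not the plugin's internals.
const LARGE_FILE_TOKEN_THRESHOLD = 25000;

function maybeExternalize(message, store) {
  const tokens = Math.ceil(message.content.length / 4); // rough 4-chars-per-token estimate
  if (tokens > LARGE_FILE_TOKEN_THRESHOLD) {
    const fileId = store.save(message.content); // persisted under ~/.openclaw/lcm-files/
    return {
      role: message.role,
      content: "[File externalized as " + fileId + ", ~" + tokens +
        " tokens. Retrieve the full content with lcm_describe.]",
    };
  }
  return message; // small enough: keep inline
}
```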




&lt;h2&gt;
  
  
  Configuration Deep Dive
&lt;/h2&gt;

&lt;p&gt;Open your config with &lt;code&gt;openclaw config edit&lt;/code&gt; and add settings under &lt;code&gt;plugins.entries.Lossless-Claw.config&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "plugins": {
    "slots": {
      "contextEngine": "Lossless-Claw"
    },
    "entries": {
      "Lossless-Claw": {
        "enabled": true,
        "config": {
          // All fields are optional — defaults are sensible
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All settings can also be overridden via environment variables (prefix &lt;code&gt;LCM_&lt;/code&gt;, e.g. &lt;code&gt;LCM_FRESH_TAIL_COUNT=32&lt;/code&gt;). Environment variables take highest precedence.&lt;/p&gt;
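&lt;p&gt;The resolution order can be sketched as a small lookup (the helper is illustrative, not the plugin's code):&lt;/p&gt;

```javascript
// Sketch of the precedence order: environment variable, then plugin config,
// then built-in default. Helper name is illustrative.
const DEFAULTS = { freshTailCount: 20, contextThreshold: 0.75 };

function resolveSetting(key, envName, pluginConfig, env) {
  if (env[envName] !== undefined) return Number(env[envName]); // env wins
  if (pluginConfig[key] !== undefined) return pluginConfig[key];
  return DEFAULTS[key];
}
```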

&lt;h3&gt;
  
  
  Key Parameters
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;contextThreshold&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;0.75&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Fraction of the model's context window that triggers compaction. At 0.75, compaction fires when 75% of the budget is consumed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;freshTailCount&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;20&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Number of most recent raw messages that are always included and never compacted. This is the agent's "working memory."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;incrementalMaxDepth&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;-1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;How deep incremental (per-turn) condensation goes. &lt;code&gt;0&lt;/code&gt; = leaf passes only, &lt;code&gt;1&lt;/code&gt; = one condensation level, &lt;code&gt;-1&lt;/code&gt; = unlimited.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;dbPath&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;~/.openclaw/lcm.db&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Path to the SQLite database.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;summaryModel&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;em&gt;(session model)&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;Model override for summarization. Use a cheaper/faster model to reduce costs (e.g., &lt;code&gt;anthropic/claude-haiku-4-5&lt;/code&gt;).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;expansionModel&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;em&gt;(session model)&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;Model override for the &lt;code&gt;lcm_expand_query&lt;/code&gt; sub-agent.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;largeFileTokenThreshold&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;25000&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Files above this token count are externalized to disk.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Session Filtering
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ignoreSessionPatterns&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Glob patterns for sessions to exclude entirely. Example: &lt;code&gt;["agent:*:cron:**"]&lt;/code&gt; excludes all cron sessions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;statelessSessionPatterns&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Glob patterns for sessions that can read from the database but never write. Example: &lt;code&gt;["agent:*:subagent:**"]&lt;/code&gt; lets sub-agents access parent context without polluting the DB.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;skipStatelessSessions&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;When &lt;code&gt;true&lt;/code&gt;, stateless sessions skip all LCM persistence.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
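&lt;p&gt;Putting the table's two example patterns together, a filtering block under &lt;code&gt;plugins.entries.Lossless-Claw.config&lt;/code&gt; might look like this (adjust the globs to your own session naming):&lt;/p&gt;

```json
{
  "ignoreSessionPatterns": ["agent:*:cron:**"],
  "statelessSessionPatterns": ["agent:*:subagent:**"],
  "skipStatelessSessions": true
}
```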

&lt;h3&gt;
  
  
  Recommended Configurations
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;General use (balanced):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "contextThreshold": 0.75,
  "freshTailCount": 32,
  "incrementalMaxDepth": -1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Long-running sessions (hundreds of turns):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "contextThreshold": 0.8,
  "freshTailCount": 32,
  "incrementalMaxDepth": 2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Cost-sensitive (minimize summarization calls):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "contextThreshold": 0.85,
  "freshTailCount": 16,
  "summaryModel": "anthropic/claude-haiku-4-5"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Agent Tools: grep, describe, expand_query
&lt;/h2&gt;

&lt;p&gt;Once active, LCM registers three tools that the agent can call to retrieve compressed context:&lt;/p&gt;

&lt;h3&gt;
  
  
  lcm_grep — Fast Full-Text Search
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;lcm_grep&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;database migration&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;full_text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="nf"&gt;lcm_grep&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error.*timeout&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;regex&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;messages&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="nf"&gt;lcm_grep&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;deployment&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;since&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2026-03-01&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fast&lt;/strong&gt; (&amp;lt;100ms) — direct SQLite query&lt;/li&gt;
&lt;li&gt;Supports FTS5 when available, with automatic LIKE-based fallback for CJK text&lt;/li&gt;
&lt;li&gt;Scope to &lt;code&gt;messages&lt;/code&gt;, &lt;code&gt;summaries&lt;/code&gt;, or &lt;code&gt;both&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Filter by time range with &lt;code&gt;since&lt;/code&gt; / &lt;code&gt;before&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  lcm_describe — Direct Metadata Lookup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;lcm_describe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sum_abc123&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="nf"&gt;lcm_describe&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;file_xyz789&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fast&lt;/strong&gt; (&amp;lt;100ms) — direct lookup&lt;/li&gt;
&lt;li&gt;For summaries: returns full content, metadata, parent/child links, source message IDs, and subtree structure&lt;/li&gt;
&lt;li&gt;For files: returns full file content and exploration summary&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  lcm_expand_query — Deep Recall via Sub-Agent
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;lcm_expand_query&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;What were the exact SQL migrations we discussed for the users table?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;summaryIds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sum_abc123&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Slow but powerful&lt;/strong&gt; (~30-120 seconds) — spawns a sub-agent that traverses the DAG&lt;/li&gt;
&lt;li&gt;The sub-agent has read-only access scoped to the current conversation&lt;/li&gt;
&lt;li&gt;Access is time-limited (5-minute TTL) and automatically revoked&lt;/li&gt;
&lt;li&gt;Best used when &lt;code&gt;lcm_grep&lt;/code&gt; or &lt;code&gt;lcm_describe&lt;/code&gt; are not specific enough&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When to Use Each Tool
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Need&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;"Did we discuss X?"&lt;/td&gt;
&lt;td&gt;&lt;code&gt;lcm_grep&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Fast keyword/regex scan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"What does this summary contain?"&lt;/td&gt;
&lt;td&gt;&lt;code&gt;lcm_describe&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Direct metadata lookup&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"What exactly did we decide about X three days ago?"&lt;/td&gt;
&lt;td&gt;&lt;code&gt;lcm_expand_query&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Deep recall with evidence&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Architecture Internals
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                        ┌─────────────────────┐
                        │   OpenClaw Gateway   │
                        └──────────┬──────────┘
                                   │
                          ┌────────▼────────┐
                          │  Agent Runtime   │
                          └────────┬────────┘
                                   │
               ┌───────────────────┼───────────────────┐
               │                   │                   │
       ┌───────▼───────┐  ┌───────▼───────┐  ┌───────▼───────┐
       │   Bootstrap    │  │   Assemble    │  │  After Turn   │
       │ (session sync) │  │ (build prompt)│  │ (persist +    │
       │                │  │               │  │  compact?)    │
       └───────┬───────┘  └───────┬───────┘  └───────┬───────┘
               │                  │                   │
               └──────────────────┼───────────────────┘
                                  │
                     ┌────────────▼────────────┐
                     │    SQLite Database       │
                     │  ┌──────────────────┐   │
                     │  │ messages          │   │
                     │  │ summaries (DAG)   │   │
                     │  │ context_items     │   │
                     │  │ large_files       │   │
                     │  └──────────────────┘   │
                     └─────────────────────────┘
                                  │
                    ┌─────────────┼─────────────┐
                    │             │             │
              ┌─────▼─────┐ ┌────▼────┐ ┌─────▼──────┐
              │ lcm_grep  │ │lcm_desc │ │lcm_expand  │
              │ (search)  │ │(inspect)│ │(sub-agent) │
              └───────────┘ └─────────┘ └────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Crash Recovery
&lt;/h3&gt;

&lt;p&gt;The bootstrap system tracks reconciliation progress with byte offsets and entry hashes. If OpenClaw crashes mid-session, the next startup picks up exactly where it left off — no duplicate ingestion, no lost messages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sub-agent Isolation
&lt;/h3&gt;

&lt;p&gt;The expansion system uses scoped delegation grants with TTL and explicit revocation. Sub-agents get read-only access to exactly the conversations they need, with automatic cleanup on completion or timeout.&lt;/p&gt;
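&lt;p&gt;A toy model of such a grant (the API shape is an assumption for illustration; LCM's real grant objects are internal):&lt;/p&gt;

```javascript
// Illustrative sketch of a scoped, time-limited delegation grant:
// read-only, bound to one conversation, expired after a TTL or on revocation.
function createGrant(conversationId, ttlMs, now = Date.now()) {
  return { conversationId, mode: "read-only", expiresAt: now + ttlMs, revoked: false };
}

function canRead(grant, conversationId, now = Date.now()) {
  return !grant.revoked && grant.conversationId === conversationId && now < grant.expiresAt;
}

function revoke(grant) {
  grant.revoked = true;
}

const grant = createGrant("conv-42", 5 * 60 * 1000); // 5-minute TTL
console.log(canRead(grant, "conv-42")); // true while fresh
console.log(canRead(grant, "conv-99")); // false: out of scope
revoke(grant);
console.log(canRead(grant, "conv-42")); // false after revocation
```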




&lt;h2&gt;
  
  
  Advantages Over Traditional Context Management
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Nothing Is Lost
&lt;/h3&gt;

&lt;p&gt;Every message is persisted. Summaries link back to source messages. The agent can always recover full details through &lt;code&gt;lcm_expand_query&lt;/code&gt;. This is fundamentally different from sliding-window truncation, where old context is gone forever.&lt;/p&gt;

&lt;h3&gt;
  
  
  Intelligent Compression
&lt;/h3&gt;

&lt;p&gt;Depth-aware summarization prompts produce different summary styles at each level:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Leaf summaries&lt;/strong&gt; preserve specific decisions, commands, errors, and rationale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mid-level summaries&lt;/strong&gt; extract themes, key decisions, and unresolved tensions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-level summaries&lt;/strong&gt; capture session arcs, major turning points, and long-term constraints&lt;/li&gt;
&lt;/ul&gt;
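
&lt;p&gt;Conceptually, this amounts to picking a different summarization instruction per DAG depth. A sketch (the prompt strings paraphrase the levels above; the depth cutoffs and function name are invented for illustration):&lt;/p&gt;

```javascript
// Hypothetical depth-aware prompt selection: leaf summaries keep specifics,
// mid-level summaries extract themes, high-level summaries capture arcs.
function summaryPromptFor(depth) {
  if (depth === 0) {
    return "Preserve specific decisions, commands, errors, and rationale.";
  }
  if (depth <= 2) {
    return "Extract themes, key decisions, and unresolved tensions.";
  }
  return "Capture session arcs, major turning points, and long-term constraints.";
}
```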

&lt;h3&gt;
  
  
  Cost Control
&lt;/h3&gt;

&lt;p&gt;You can use a cheaper model for summarization (e.g., Haiku) while keeping the main conversation on a more capable model (e.g., Opus). The &lt;code&gt;summaryModel&lt;/code&gt; and &lt;code&gt;expansionModel&lt;/code&gt; settings make this explicit.&lt;/p&gt;


&lt;h3&gt;
  
  
  Session Filtering
&lt;/h3&gt;

&lt;p&gt;Glob patterns let you exclude noisy sessions (cron jobs, heartbeats) from storage, and mark sub-agent sessions as stateless so they benefit from parent context without polluting the database.&lt;/p&gt;
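
&lt;p&gt;Combining the two pattern settings from the configuration reference above, a typical setup might look like this (the pattern values mirror the documented examples):&lt;/p&gt;

```json
{
  "ignoreSessionPatterns": ["agent:*:cron:**"],
  "statelessSessionPatterns": ["agent:*:subagent:**"]
}
```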




&lt;h2&gt;
  
  
  Known Limitations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Summarization Quality Depends on the Model
&lt;/h3&gt;

&lt;p&gt;The summaries are only as good as the model producing them. Using a very cheap or small model for summarization may lose nuance. Important details can be compressed away even with good models — the &lt;code&gt;lcm_expand_query&lt;/code&gt; tool mitigates this but adds latency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Expansion Is Slow
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;lcm_expand_query&lt;/code&gt; spawns a sub-agent, which takes 30-120 seconds. For quick recall, &lt;code&gt;lcm_grep&lt;/code&gt; and &lt;code&gt;lcm_describe&lt;/code&gt; are far faster but less capable. In time-sensitive workflows, the agent may skip expansion and work from summaries alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage Growth
&lt;/h3&gt;

&lt;p&gt;The SQLite database grows with every message. Long-running heavy sessions (thousands of turns with large tool outputs) can produce databases in the hundreds of megabytes. Large files externalized to disk add to this. There is no built-in garbage collection or retention policy — old conversations persist indefinitely.&lt;/p&gt;

&lt;h3&gt;
  
  
  Single-Model Summarization
&lt;/h3&gt;

&lt;p&gt;Each summarization pass uses one model call. There is no ensemble or verification step. If the model hallucinates or misinterprets context during summarization, that error propagates into the DAG and may affect future assembly.&lt;/p&gt;

&lt;h3&gt;
  
  
  No Cross-Session Context
&lt;/h3&gt;

&lt;p&gt;Each conversation is independent in the database. LCM does not automatically share context between different sessions or agents. The &lt;code&gt;allConversations&lt;/code&gt; flag on retrieval tools allows cross-conversation search, but there is no automatic cross-pollination during assembly.&lt;/p&gt;
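
&lt;p&gt;For example, a cross-conversation search might look like this (the &lt;code&gt;allConversations&lt;/code&gt; flag is documented; the exact call shape follows the &lt;code&gt;lcm_grep&lt;/code&gt; examples earlier and may differ in practice):&lt;/p&gt;

```javascript
// Search every stored conversation, not just the current one.
lcm_grep({ pattern: "deployment checklist", allConversations: true })
```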

&lt;h3&gt;
  
  
  CJK Full-Text Search Limitations
&lt;/h3&gt;

&lt;p&gt;FTS5 (SQLite's full-text search engine) does not tokenize Chinese, Japanese, or Korean text well. LCM falls back to LIKE-based search for CJK queries, which is slower and less precise for large databases.&lt;/p&gt;
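
&lt;p&gt;A sketch of the decision this implies (the detection heuristic below is an assumption for illustration, not LCM's actual implementation):&lt;/p&gt;

```javascript
// Queries containing CJK characters skip FTS5 (which tokenizes them poorly)
// and fall back to a LIKE-based scan.
function containsCJK(text) {
  // Hiragana/Katakana, CJK Unified Ideographs, and Hangul ranges.
  return /[\u3040-\u30ff\u3400-\u9fff\uac00-\ud7af]/.test(text);
}

function searchStrategy(query) {
  return containsCJK(query) ? "LIKE" : "FTS5";
}

console.log(searchStrategy("database migration")); // "FTS5"
console.log(searchStrategy("数据库迁移"));           // "LIKE"
```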

&lt;h3&gt;
  
  
  Compaction Latency
&lt;/h3&gt;

&lt;p&gt;Each compaction pass requires an LLM call (typically 5-15 seconds per leaf or condensed pass). During heavy compaction, this can add noticeable delay after a turn completes. The &lt;code&gt;afterTurn&lt;/code&gt; hook serializes compaction per-session, so it does not block other sessions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Troubleshooting Common Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Plugin is installed but not active
&lt;/h3&gt;

&lt;p&gt;Check that the context engine slot is set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw config get plugins.slots.contextEngine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It must return &lt;code&gt;Lossless-Claw&lt;/code&gt;. If it returns &lt;code&gt;legacy&lt;/code&gt; or is empty, set it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw config &lt;span class="nb"&gt;set &lt;/span&gt;plugins.slots.contextEngine Lossless-Claw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Summarization auth errors
&lt;/h3&gt;

&lt;p&gt;If you see &lt;code&gt;LcmProviderAuthError&lt;/code&gt;, the model used for summarization cannot authenticate. Check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is &lt;code&gt;summaryModel&lt;/code&gt; set to a model you have access to?&lt;/li&gt;
&lt;li&gt;Does the provider require a separate API key?&lt;/li&gt;
&lt;li&gt;Try unsetting &lt;code&gt;summaryModel&lt;/code&gt; to fall back to the session model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Database location
&lt;/h3&gt;

&lt;p&gt;Default: &lt;code&gt;~/.openclaw/lcm.db&lt;/code&gt;. Override with the &lt;code&gt;dbPath&lt;/code&gt; config or &lt;code&gt;LCM_DB_PATH&lt;/code&gt; environment variable.&lt;/p&gt;

&lt;p&gt;To inspect the database directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sqlite3 ~/.openclaw/lcm.db &lt;span class="s2"&gt;".tables"&lt;/span&gt;
sqlite3 ~/.openclaw/lcm.db &lt;span class="s2"&gt;"SELECT COUNT(*) FROM messages"&lt;/span&gt;
sqlite3 ~/.openclaw/lcm.db &lt;span class="s2"&gt;"SELECT id, kind, depth, token_count FROM summaries ORDER BY created_at DESC LIMIT 10"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Resetting LCM state
&lt;/h3&gt;

&lt;p&gt;To start fresh (removes all persisted context):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; ~/.openclaw/lcm.db
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; ~/.openclaw/lcm-files/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The database and file store will be recreated on next session startup.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤔 FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: Do I need to change anything in my workflow after installing LCM?
&lt;/h3&gt;

&lt;p&gt;A: No. Once installed and activated, LCM runs silently in the background. Your normal conversation workflow stays exactly the same. The agent automatically manages context assembly and compaction. You only need to use the retrieval tools (&lt;code&gt;lcm_grep&lt;/code&gt;, &lt;code&gt;lcm_describe&lt;/code&gt;, &lt;code&gt;lcm_expand_query&lt;/code&gt;) when you want to recall specific historical details.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Will LCM slow down my conversations?
&lt;/h3&gt;

&lt;p&gt;A: Minimal impact during normal conversation. You may notice a 5-15 second pause after certain turns when compaction runs — but this happens in the background and doesn't block you. The &lt;code&gt;lcm_grep&lt;/code&gt; and &lt;code&gt;lcm_describe&lt;/code&gt; tools are fast (&amp;lt;100ms). Only &lt;code&gt;lcm_expand_query&lt;/code&gt; is slow (30-120 seconds), and that's by design.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can I use a different model for summarization to save costs?
&lt;/h3&gt;

&lt;p&gt;A: Yes. Set &lt;code&gt;summaryModel&lt;/code&gt; to a cheaper model like &lt;code&gt;anthropic/claude-haiku-4-5&lt;/code&gt;. The main conversation can stay on Opus or Sonnet while summarization routes through Haiku. This is one of LCM's most practical cost-control features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What happens if the summarization model fails?
&lt;/h3&gt;

&lt;p&gt;A: LCM uses a three-level fallback chain: Normal → Aggressive → Deterministic (truncation). Even if the summarization model is completely down, the deterministic fallback ensures compaction always succeeds.&lt;/p&gt;
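
&lt;p&gt;The chain can be sketched as follows (function names and error handling are illustrative, not LCM's real internals):&lt;/p&gt;

```javascript
// Three-level fallback: Normal -> Aggressive -> Deterministic (truncation).
function compactWithFallback(messages, summarize) {
  try {
    return summarize(messages, { style: "normal" });
  } catch (_) {
    try {
      return summarize(messages, { style: "aggressive" });
    } catch (_) {
      // Deterministic fallback: plain truncation needs no model call,
      // so compaction succeeds even when the provider is down.
      return messages.slice(-4).join("\n");
    }
  }
}

// With a provider that always fails, compaction still returns truncated text:
const failing = () => { throw new Error("provider down"); };
console.log(compactWithFallback(["a", "b", "c", "d", "e"], failing)); // "b\nc\nd\ne"
```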

&lt;h3&gt;
  
  
  Q: Can sub-agents write to the LCM database?
&lt;/h3&gt;

&lt;p&gt;A: By default, sub-agents are stateless and read from the parent's context. You can configure &lt;code&gt;statelessSessionPatterns&lt;/code&gt; to control which sub-agents write vs. read-only. Sub-agents never pollute the database unless explicitly configured.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does LCM handle very large code pastes?
&lt;/h3&gt;

&lt;p&gt;A: Files exceeding 25,000 tokens are externalized to disk (&lt;code&gt;~/.openclaw/lcm-files/&lt;/code&gt;) and replaced with a ~200-token structural summary. Use &lt;code&gt;lcm_describe&lt;/code&gt; to retrieve the full file content on demand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Is my data stored locally or sent to a server?
&lt;/h3&gt;

&lt;p&gt;A: All data stays local. The SQLite database and externalized files live on your machine at &lt;code&gt;~/.openclaw/&lt;/code&gt;. No data is sent to any external service.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary &amp;amp; Recommendations
&lt;/h2&gt;

&lt;p&gt;LCM transforms OpenClaw from a forgetful chatbot into a genuine long-term memory system. If you work on complex projects, maintain ongoing conversations with an AI assistant, or simply hate losing context when discussions get long — this plugin is essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start here:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install: &lt;code&gt;openclaw plugins install @martian-engineering/Lossless-Claw&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Activate: &lt;code&gt;openclaw config set plugins.slots.contextEngine Lossless-Claw&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Verify: &lt;code&gt;openclaw plugins list&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Done. Your next conversation starts building the DAG.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; For cost-sensitive setups, add &lt;code&gt;"summaryModel": "anthropic/claude-haiku-4-5"&lt;/code&gt; to your config. Summarization calls add up over time, and Haiku handles this task well at a fraction of the cost.&lt;/p&gt;

&lt;p&gt;For further reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Martian-Engineering/Lossless-Claw" rel="noopener noreferrer"&gt;Lossless-claw repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.openclaw.ai/plugins/architecture" rel="noopener noreferrer"&gt;OpenClaw plugin architecture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.openclaw.ai/plugins/building-plugins" rel="noopener noreferrer"&gt;Building OpenClaw plugins&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.openclaw.ai/concepts/context-engine" rel="noopener noreferrer"&gt;Context engine concept&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article was generated based on the official LCM plugin (Lossless Context Management) documentation. For the most up-to-date information, check the &lt;a href="https://github.com/Martian-Engineering/Lossless-Claw" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/openclaw-lcm-plugin-guide-2026" rel="noopener noreferrer"&gt;2026 Complete Guide: OpenClaw LCM Plugin&lt;/a&gt;&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>ai</category>
      <category>context</category>
      <category>productivity</category>
    </item>
    <item>
      <title>ACE-Step 1.5: The Complete 2026 Guide to Open-Source AI Music Generation</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Fri, 27 Mar 2026 07:06:58 +0000</pubDate>
      <link>https://dev.to/czmilo/ace-step-15-the-complete-2026-guide-to-open-source-ai-music-generation-522e</link>
      <guid>https://dev.to/czmilo/ace-step-15-the-complete-2026-guide-to-open-source-ai-music-generation-522e</guid>
      <description>&lt;h1&gt;
  
  
  ACE-Step 1.5: The Complete 2026 Guide to Open-Source AI Music Generation
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Key Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;ACE-Step 1.5 is a state-of-the-art open-source AI music generation model that rivals commercial alternatives in quality and control&lt;/li&gt;
&lt;li&gt;It supports text-to-music generation in 50+ languages with up to 10-minute compositions, running efficiently on consumer hardware&lt;/li&gt;
&lt;li&gt;Key capabilities include cover generation, repainting, vocal-to-BGM conversion, and granular stylistic control via a novel hybrid Language Model architecture&lt;/li&gt;
&lt;li&gt;Available through ComfyUI, Hugging Face, GitHub, and cloud APIs — making professional AI music accessible to everyone&lt;/li&gt;
&lt;li&gt;ACE-Step 1.5 represents the "Stable Diffusion moment" for music: moving AI music generation from closed APIs to fully local, open-source control&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What is ACE-Step 1.5?&lt;/li&gt;
&lt;li&gt;How ACE-Step 1.5 Works: The Hybrid Architecture&lt;/li&gt;
&lt;li&gt;Key Features of ACE-Step 1.5&lt;/li&gt;
&lt;li&gt;Getting Started: Installation and Setup&lt;/li&gt;
&lt;li&gt;Use Cases and Applications&lt;/li&gt;
&lt;li&gt;ACE-Step 1.5 vs. Commercial Alternatives&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What is ACE-Step 1.5?
&lt;/h2&gt;

&lt;p&gt;ACE-Step 1.5 is the latest and most advanced version of the ACE-Step open-source music generation foundation model. Released in January 2026, it represents a significant leap forward in the capability and accessibility of AI-powered music creation. At its core, ACE-Step 1.5 is a &lt;strong&gt;text-to-audio model that transforms simple text descriptions into full, high-fidelity music tracks&lt;/strong&gt; — complete with melody, harmony, rhythm, instrumentation, and optionally, lyrics.&lt;/p&gt;

&lt;p&gt;What sets ACE-Step 1.5 apart from previous versions and competing solutions is its ability to generate music that is not only aurally convincing but also &lt;strong&gt;precisely controllable&lt;/strong&gt;. Users can guide the generation process through style tags describing genre, mood, and instrumentation, and through optional structured lyrics that shape the vocal performance. The result is music that adheres closely to the user's creative intent, rather than producing generic outputs.&lt;/p&gt;

&lt;p&gt;The model maintains &lt;strong&gt;strong prompt fidelity across more than fifty languages&lt;/strong&gt;, making it a genuinely global tool for music creation. Whether you're describing a mood in English, Japanese, Spanish, or Mandarin, ACE-Step 1.5 interprets your intent and generates a composition that reflects it.&lt;/p&gt;

&lt;p&gt;Perhaps most importantly, ACE-Step 1.5 is fully &lt;strong&gt;open-source and runs efficiently on consumer hardware&lt;/strong&gt;. It supports Mac, AMD (with ROCm), Intel, and NVIDIA (CUDA) devices — meaning you don't need a data center to create professional-quality AI music.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;&lt;br&gt;
ACE-Step 1.5 is often described as the "Stable Diffusion moment" for music — the point where AI generation technology shifted from closed, API-gated systems to open, locally-running models that anyone can download, modify, and use commercially.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How ACE-Step 1.5 Works: The Hybrid Architecture
&lt;/h2&gt;

&lt;p&gt;Understanding the architecture behind ACE-Step 1.5 reveals why it outperforms most commercial alternatives despite being open-source. The model employs a &lt;strong&gt;novel two-stage pipeline&lt;/strong&gt; that separates high-level creative planning from low-level audio synthesis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stage 1: The Language Model as Omni-Capable Planner
&lt;/h3&gt;

&lt;p&gt;At the heart of ACE-Step 1.5 lies a Language Model ranging from &lt;strong&gt;0.6B to 4B parameters&lt;/strong&gt;. This LM doesn't just generate text — it functions as an &lt;strong&gt;omni-capable planner&lt;/strong&gt; that transforms simple user queries into comprehensive song blueprints.&lt;/p&gt;

&lt;p&gt;Using &lt;strong&gt;Chain-of-Thought (CoT) reasoning&lt;/strong&gt;, the Language Model breaks down the creative task step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Interpretation&lt;/strong&gt;: It analyzes the user's style tags and optional lyrics to understand the desired genre, mood, tempo, instrumentation, and emotional arc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planning&lt;/strong&gt;: It creates a detailed song blueprint — scaling from short loops (30 seconds) to full compositions (up to 10 minutes) — including arrangement metadata, section transitions, and dynamic build-ups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Captioning&lt;/strong&gt;: It synthesizes descriptive metadata and captions that guide the audio synthesis stage with precise musical instructions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This planning stage is what separates ACE-Step 1.5 from simpler music generation models. Rather than directly mapping text to audio in a single step (which often produces muddled or inconsistent results), ACE-Step 1.5 first &lt;strong&gt;thinks through the structure of the music&lt;/strong&gt; before generating a single note.&lt;/p&gt;
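
&lt;p&gt;To make the idea concrete, a song blueprint of the kind described above might look something like this (every field name here is invented for illustration; the real blueprint format is internal to ACE-Step 1.5):&lt;/p&gt;

```javascript
// Hypothetical shape of a planner output: style tags, duration, section-level
// arrangement metadata, and a caption that guides audio synthesis.
const blueprint = {
  tags: ["lo-fi", "melancholic", "piano-driven"],
  durationSec: 180,
  sections: [
    { name: "intro",  bars: 8,  energy: 0.2 },
    { name: "verse",  bars: 16, energy: 0.4 },
    { name: "chorus", bars: 16, energy: 0.7 },
    { name: "outro",  bars: 8,  energy: 0.2 }
  ],
  caption: "Slow lo-fi piano with soft vinyl texture and a wistful mood"
};
```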

&lt;h3&gt;
  
  
  Stage 2: High-Fidelity Audio Synthesis
&lt;/h3&gt;

&lt;p&gt;The song blueprint produced by the Language Model is then passed to the &lt;strong&gt;audio synthesis engine&lt;/strong&gt;, which generates the actual waveform. This two-stage approach ensures that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;long-term structure&lt;/strong&gt; of the music is coherent (verses, choruses, bridges make musical sense)&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;short-term details&lt;/strong&gt; (timbre, dynamics, articulation) are sonically rich and realistic&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;style adherence&lt;/strong&gt; is precise — the output matches the input tags with high fidelity&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Hardware Acceleration
&lt;/h3&gt;

&lt;p&gt;ACE-Step 1.5 is optimized for a wide range of hardware platforms:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NVIDIA GPU&lt;/td&gt;
&lt;td&gt;CUDA / PyTorch&lt;/td&gt;
&lt;td&gt;Best performance, widely compatible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AMD GPU&lt;/td&gt;
&lt;td&gt;ROCm&lt;/td&gt;
&lt;td&gt;Supported on AMD Radeon and Ryzen AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Intel GPU&lt;/td&gt;
&lt;td&gt;oneAPI / IPEX&lt;/td&gt;
&lt;td&gt;Growing support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mac&lt;/td&gt;
&lt;td&gt;Metal / MPS&lt;/td&gt;
&lt;td&gt;Apple Silicon optimized&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;PyTorch CPU&lt;/td&gt;
&lt;td&gt;Lower speed, accessible&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This cross-platform support is a major differentiator — ACE-Step 1.5 is the most hardware-flexible open-source music model available today.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Features of ACE-Step 1.5
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Text-to-Music Generation
&lt;/h3&gt;

&lt;p&gt;The primary capability of ACE-Step 1.5 is converting &lt;strong&gt;text descriptions into complete music tracks&lt;/strong&gt;. Users provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Style tags&lt;/strong&gt;: Genre (pop, rock, jazz, EDM, lo-fi), mood (happy, melancholic, energetic), instrumentation (piano-driven, synth-heavy, acoustic guitar), and era influences&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional structured lyrics&lt;/strong&gt;: When lyrics are provided, ACE-Step 1.5 generates a vocal track that adheres to the melodic and rhythmic structure of the provided text&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration control&lt;/strong&gt;: From 30-second loops to 10-minute compositions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The generated output maintains high acoustic fidelity — the quality is comparable to commercially produced music, not the robotic or synthetic sound of earlier AI music tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Cover Generation
&lt;/h3&gt;

&lt;p&gt;ACE-Step 1.5 can take an existing song and &lt;strong&gt;recreate it in a different style or genre&lt;/strong&gt;. This isn't a simple pitch-shift or tempo-change cover — it's a genuine reinterpretation. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Convert a rock ballad into an acoustic piano rendition&lt;/li&gt;
&lt;li&gt;Transform a pop song into an EDM remix&lt;/li&gt;
&lt;li&gt;Rebalance an instrumental track with new instrumentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This feature is particularly valuable for content creators, musicians exploring genre mashups, and artists seeking inspiration from existing works.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Repainting
&lt;/h3&gt;

&lt;p&gt;Repainting allows users to &lt;strong&gt;modify specific aspects of a generated track&lt;/strong&gt; without regenerating the entire piece. You can change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The instrumentation (swap drums for live percussion)&lt;/li&gt;
&lt;li&gt;The genre (shift from jazz to bossa nova)&lt;/li&gt;
&lt;li&gt;The mood (alter energy level or emotional tone)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This granular control is something most commercial AI music tools don't offer, making ACE-Step 1.5 particularly powerful for iterative creative workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Vocal-to-BGM Conversion
&lt;/h3&gt;

&lt;p&gt;Perhaps the most innovative feature of ACE-Step 1.5 is its ability to &lt;strong&gt;convert a vocal track into instrumental music&lt;/strong&gt; while preserving the essential character of the original. The model analyzes the vocal melody, rhythm, and emotional arc, then generates a complementary instrumental arrangement.&lt;/p&gt;

&lt;p&gt;This enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating backing tracks for existing vocals&lt;/li&gt;
&lt;li&gt;Transforming a song demo into a fully instrumental version&lt;/li&gt;
&lt;li&gt;Generating BGM that matches the pacing of a video or podcast&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Multi-Language Support
&lt;/h3&gt;

&lt;p&gt;ACE-Step 1.5 supports &lt;strong&gt;50+ languages&lt;/strong&gt; with strong prompt fidelity. Whether your style tags are in English, Japanese, Korean, Chinese, Arabic, or any of dozens of other languages, the model interprets your intent accurately. This makes it a genuinely global tool — unlike many AI music tools that are heavily biased toward English prompts.&lt;/p&gt;
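&lt;p&gt;As a quick illustration, nothing about the workflow changes for non-English prompts. Assuming the &lt;code&gt;generate.py&lt;/code&gt; interface shown in the installation section below, only the prompt text differs (the Japanese tag values here are illustrative):&lt;/p&gt;

```shell
# Japanese style tags; the model is documented to accept 50+ languages.
PROMPT="80年代シティポップ、きらびやかなシンセ、夜のドライブ"
echo "$PROMPT"
# python generate.py --prompt "$PROMPT" --duration 120
```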




&lt;h2&gt;
  
  
  Getting Started: Installation and Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Option 1: ComfyUI (Recommended for Creators)
&lt;/h3&gt;

&lt;p&gt;ComfyUI provides the most user-friendly way to use ACE-Step 1.5, with a visual node-based workflow that makes every feature accessible:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install &lt;a href="https://github.com/comfyanonymous/ComfyUI" rel="noopener noreferrer"&gt;ComfyUI&lt;/a&gt; if you haven't already&lt;/li&gt;
&lt;li&gt;Install the ACE-Step custom nodes for ComfyUI&lt;/li&gt;
&lt;li&gt;Download the ACE-Step 1.5 model weights from &lt;a href="https://huggingface.co/ACE-Step/Ace-Step1.5" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt; or the &lt;a href="https://github.com/ace-step/ACE-Step-1.5" rel="noopener noreferrer"&gt;official GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Place the model files in your ComfyUI &lt;code&gt;models/&lt;/code&gt; directory&lt;/li&gt;
&lt;li&gt;Launch ComfyUI and load the ACE-Step workflow&lt;/li&gt;
&lt;/ol&gt;
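&lt;p&gt;Steps 3 and 4 can be sketched as follows. The target subfolder is an assumption; confirm the exact location in the ComfyUI ACE-Step guide linked in the tip below:&lt;/p&gt;

```shell
# Illustrative layout only; the exact models/ subfolder may differ.
COMFYUI_DIR="$HOME/ComfyUI"
mkdir -p "$COMFYUI_DIR/models/checkpoints"
# Fetch the weights (requires the huggingface_hub CLI; repo id from the link above):
# huggingface-cli download ACE-Step/Ace-Step1.5 --local-dir "$COMFYUI_DIR/models/checkpoints"
ls "$COMFYUI_DIR/models"
```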

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;&lt;br&gt;
The ComfyUI ACE-Step nodes expose text2music generation by default, but custom guiders unlock additional task types including cover generation, repainting, and vocal-to-BGM conversion. Check the &lt;a href="https://docs.comfy.org/tutorials/audio/ace-step/ace-step-v1-5" rel="noopener noreferrer"&gt;ComfyUI ACE-Step guide&lt;/a&gt; for full feature coverage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Option 2: Direct GitHub Installation
&lt;/h3&gt;

&lt;p&gt;For developers who want full control:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the repository&lt;/span&gt;
git clone https://github.com/ace-step/ACE-Step-1.5.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ACE-Step-1.5

&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# Download model weights&lt;/span&gt;
&lt;span class="c"&gt;# (See GitHub README for download links)&lt;/span&gt;

&lt;span class="c"&gt;# Run inference&lt;/span&gt;
python generate.py &lt;span class="nt"&gt;--prompt&lt;/span&gt; &lt;span class="s2"&gt;"upbeat lo-fi hip hop with piano and vinyl crackle"&lt;/span&gt; &lt;span class="nt"&gt;--duration&lt;/span&gt; 120
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option 3: Cloud API (WaveSpeedAI)
&lt;/h3&gt;

&lt;p&gt;For those who want to integrate ACE-Step 1.5 into applications without managing infrastructure, WaveSpeedAI provides a &lt;strong&gt;ready-to-use REST inference API&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No cold starts&lt;/li&gt;
&lt;li&gt;Affordable pay-per-use pricing&lt;/li&gt;
&lt;li&gt;Supports all generation modes (text2music, cover, repainting, vocal-to-BGM)&lt;/li&gt;
&lt;li&gt;Global CDN for low latency
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://api.wavespeed.ai/generate &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer YOUR_API_KEY"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"prompt": "cinematic ambient with orchestral strings", "duration": 180}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option 4: DigitalOcean
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/ace-step-music-ai" rel="noopener noreferrer"&gt;DigitalOcean's tutorial&lt;/a&gt; provides a step-by-step guide for deploying ACE-Step 1.5 on their infrastructure, including GPU droplet setup and API configuration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Use Cases and Applications
&lt;/h2&gt;

&lt;h3&gt;
  
  
  For Music Artists and Producers
&lt;/h3&gt;

&lt;p&gt;ACE-Step 1.5 is a powerful &lt;strong&gt;ideation and prototyping tool&lt;/strong&gt;. Instead of staring at a blank session, producers can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate chord progressions and arrangements as starting points&lt;/li&gt;
&lt;li&gt;Quickly explore multiple genre directions for a song&lt;/li&gt;
&lt;li&gt;Create demo tracks with full instrumentation and lyrics for client approval&lt;/li&gt;
&lt;li&gt;Generate variations on existing tracks for A/B testing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For Content Creators
&lt;/h3&gt;

&lt;p&gt;YouTubers, podcasters, and social media creators often struggle to find &lt;strong&gt;affordable, royalty-free music&lt;/strong&gt; that fits their content. ACE-Step 1.5 solves this by generating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Background music tailored to video pacing and mood&lt;/li&gt;
&lt;li&gt;Intro and outro themes that match a channel's brand&lt;/li&gt;
&lt;li&gt;Custom jingles and stingers&lt;/li&gt;
&lt;li&gt;Music for podcasts that enhances without distracting&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For Game and App Developers
&lt;/h3&gt;

&lt;p&gt;Interactive media requires &lt;strong&gt;dynamic, adaptive audio&lt;/strong&gt;. ACE-Step 1.5 can be used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate ambient soundscapes that respond to gameplay&lt;/li&gt;
&lt;li&gt;Create placeholder music during development&lt;/li&gt;
&lt;li&gt;Produce short stingers and notification sounds&lt;/li&gt;
&lt;li&gt;Prototype audio concepts before committing to full production&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For AI Researchers
&lt;/h3&gt;

&lt;p&gt;As an &lt;strong&gt;open-source research platform&lt;/strong&gt;, ACE-Step 1.5 provides a foundation for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Studying the intersection of Language Models and audio synthesis&lt;/li&gt;
&lt;li&gt;Experimenting with new conditioning and control strategies&lt;/li&gt;
&lt;li&gt;Training specialized music generation models on top of the foundation&lt;/li&gt;
&lt;li&gt;Exploring the creative boundaries of AI in music&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ACE-Step 1.5 vs. Commercial Alternatives
&lt;/h2&gt;

&lt;p&gt;How does an open-source model compete with well-funded commercial products? Surprisingly well:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;ACE-Step 1.5&lt;/th&gt;
&lt;th&gt;Commercial AI Music Tools&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free (open-source)&lt;/td&gt;
&lt;td&gt;Subscription / per-generation fees&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Local (full control)&lt;/td&gt;
&lt;td&gt;Cloud-only (vendor lock-in)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Customization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full model access&lt;/td&gt;
&lt;td&gt;Limited API parameters&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Editing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cover, repaint, vocal-to-BGM&lt;/td&gt;
&lt;td&gt;Often generation-only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Music Length&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Up to 10 minutes&lt;/td&gt;
&lt;td&gt;Often limited to 30-90 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Languages&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;50+&lt;/td&gt;
&lt;td&gt;Typically 5-10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hardware&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Consumer GPUs, Mac, CPU&lt;/td&gt;
&lt;td&gt;Data center GPUs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Commercial Use&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Permitted (check license)&lt;/td&gt;
&lt;td&gt;Restricted licensing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
Always review the specific open-source license (Apache 2.0, MIT, etc.) before using ACE-Step 1.5 commercially. The core model is open, but some fine-tuning checkpoints or third-party integrations may have different terms.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🤔 FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: Do I need a powerful GPU to run ACE-Step 1.5?
&lt;/h3&gt;

&lt;p&gt;A: Not necessarily. While a dedicated GPU (especially NVIDIA with CUDA or AMD with ROCm) provides the best performance, ACE-Step 1.5 can also run on CPU and Apple Silicon (M-series chips via Metal/MPS). Generation will be slower on non-GPU hardware, but the model remains fully functional for testing and experimentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can I use ACE-Step 1.5 commercially?
&lt;/h3&gt;

&lt;p&gt;A: ACE-Step 1.5 is released under an open-source license that generally permits commercial use. However, you should review the specific license terms on the &lt;a href="https://github.com/ace-step/ACE-Step-1.5" rel="noopener noreferrer"&gt;official GitHub repository&lt;/a&gt; and ensure your use case complies. Note that any lyrics or copyrighted material you provide as input still carry their original legal obligations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does ACE-Step 1.5 handle lyrics generation?
&lt;/h3&gt;

&lt;p&gt;A: ACE-Step 1.5 supports optional structured lyrics as input. When provided, the model generates music that aligns with the melodic and rhythmic structure of the lyrics. ACE-Step 1.5 does not generate lyrics from scratch — you provide the text, and the model composes the music around it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What's the difference between ACE-Step and ACE-Step 1.5?
&lt;/h3&gt;

&lt;p&gt;A: ACE-Step 1.5 is a major upgrade over the original ACE-Step model. Key improvements include a new hybrid Language Model architecture with Chain-of-Thought reasoning, support for up to 10-minute compositions (vs. 4 minutes in v1), additional features like cover generation and repainting, multi-language support expanded to 50+ languages, and significantly improved audio quality and prompt adherence.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can ACE-Step 1.5 replace a music producer?
&lt;/h3&gt;

&lt;p&gt;A: No — and that's not its goal. ACE-Step 1.5 is a creative tool that &lt;strong&gt;augments&lt;/strong&gt; human creativity, not replaces it. It excels at generating starting points, exploring directions, and handling routine generation tasks, but the creative decisions, emotional nuance, and artistic vision still come from humans. Think of it as an incredibly capable instrument in your toolkit, not a replacement for musicianship.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How does it compare to Suno or Udio?
&lt;/h3&gt;

&lt;p&gt;A: Suno and Udio are closed, cloud-based commercial products with strong generation quality. ACE-Step 1.5 offers comparable — and in some dimensions superior — &lt;strong&gt;controllability and editing capabilities&lt;/strong&gt;. The key advantage of ACE-Step 1.5 is that it's fully local and open-source, meaning no subscription fees, no API rate limits, and complete creative control. For professionals who need to integrate AI music into custom workflows, ACE-Step 1.5's flexibility is a significant advantage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;ACE-Step 1.5 represents a watershed moment in AI music generation. By combining a powerful Language Model planner with high-fidelity audio synthesis, it delivers &lt;strong&gt;professional-quality music generation&lt;/strong&gt; in an open-source, locally-deployable package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key takeaways:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ACE-Step 1.5&lt;/strong&gt; is the most capable open-source AI music generation model available in 2026&lt;/li&gt;
&lt;li&gt;Its &lt;strong&gt;hybrid LM architecture&lt;/strong&gt; enables precise stylistic control and long-form composition&lt;/li&gt;
&lt;li&gt;Features like &lt;strong&gt;cover generation, repainting, and vocal-to-BGM conversion&lt;/strong&gt; go far beyond basic text-to-music&lt;/li&gt;
&lt;li&gt;Runs on &lt;strong&gt;consumer hardware&lt;/strong&gt; — Mac, AMD, Intel, NVIDIA — with no cloud dependency&lt;/li&gt;
&lt;li&gt;Supports &lt;strong&gt;50+ languages&lt;/strong&gt; with strong prompt fidelity, making it a global tool&lt;/li&gt;
&lt;li&gt;Available via &lt;strong&gt;ComfyUI, GitHub, Hugging Face, and cloud APIs&lt;/strong&gt;, fitting any workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you're a music producer seeking new creative directions, a content creator needing custom background music, a developer integrating AI audio into applications, or a researcher exploring the frontiers of generative music — ACE-Step 1.5 is a tool worth exploring.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://curateclick.com/blog/ace-step-1-5-guide-open-source-ai-music" rel="noopener noreferrer"&gt;ACE-Step 1.5: The Complete 2026 Guide to Open-Source AI Music Generation&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;





</description>
      <category>ai</category>
      <category>music</category>
      <category>opensource</category>
      <category>generative</category>
    </item>
    <item>
      <title>How to Build a CBT Therapy Agent with OpenClaw in 2026 — Complete Guide</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Thu, 26 Mar 2026 03:38:44 +0000</pubDate>
      <link>https://dev.to/czmilo/how-to-build-a-cbt-therapy-agent-with-openclaw-in-2026-complete-guide-1apm</link>
      <guid>https://dev.to/czmilo/how-to-build-a-cbt-therapy-agent-with-openclaw-in-2026-complete-guide-1apm</guid>
      <description>&lt;h1&gt;
  
  
  How to Build a CBT Therapy Agent with OpenClaw in 2026 — Complete Guide
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;OpenClaw lets you build a fully functional CBT (Cognitive Behavioral Therapy) therapy agent without writing a single line of backend code&lt;/li&gt;
&lt;li&gt;The agent can identify cognitive distortions, guide thought records, and run behavioral experiments — available on-demand via CLI, Telegram, or Discord&lt;/li&gt;
&lt;li&gt;Key components: an isolated agent workspace, a carefully crafted AGENTS.md system prompt, and optional channel binding for messaging apps&lt;/li&gt;
&lt;li&gt;The agent runs entirely locally with no external services, databases, or cloud deployments required&lt;/li&gt;
&lt;li&gt;Disclaimer: this is a self-help tool, not a replacement for licensed mental health care&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What is a CBT Therapy Agent?&lt;/li&gt;
&lt;li&gt;What You Will Build&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;
Step-by-Step Setup

&lt;ul&gt;
&lt;li&gt;Step 1: Create the Agent&lt;/li&gt;
&lt;li&gt;Step 2: Set the Agent Identity&lt;/li&gt;
&lt;li&gt;Step 3: Configure the Model&lt;/li&gt;
&lt;li&gt;Step 4: Write the CBT System Prompt&lt;/li&gt;
&lt;li&gt;Step 5: Bind to a Messaging Channel&lt;/li&gt;
&lt;li&gt;Step 6: Start Talking&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Architecture Overview&lt;/li&gt;
&lt;li&gt;Tips for Getting the Most Out of Your CBT Agent&lt;/li&gt;
&lt;li&gt;What's Next?&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What is a CBT Therapy Agent?
&lt;/h2&gt;

&lt;p&gt;A CBT Therapy Agent is an AI companion powered by Cognitive Behavioral Therapy principles — a well-established, evidence-based therapeutic approach. Unlike a general-purpose chatbot, a CBT agent is designed with a specific framework: it helps users examine the connection between &lt;strong&gt;situations&lt;/strong&gt;, &lt;strong&gt;automatic thoughts&lt;/strong&gt;, &lt;strong&gt;emotions&lt;/strong&gt;, &lt;strong&gt;body sensations&lt;/strong&gt;, and &lt;strong&gt;behaviors&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The core idea behind CBT is that our thoughts shape our feelings and behaviors, and by identifying and challenging unhelpful thought patterns (called &lt;strong&gt;cognitive distortions&lt;/strong&gt;), we can change how we feel and respond to life events.&lt;/p&gt;

&lt;p&gt;A CBT Therapy Agent built with OpenClaw brings this framework into an AI-powered conversational companion. It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Help you identify cognitive distortions in real time during conversations&lt;/li&gt;
&lt;li&gt;Guide you through structured thought records&lt;/li&gt;
&lt;li&gt;Coach you with Socratic questioning techniques&lt;/li&gt;
&lt;li&gt;Suggest behavioral experiments and homework between sessions&lt;/li&gt;
&lt;li&gt;Be available on demand through your preferred channel — CLI, Telegram, Discord, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes OpenClaw particularly well-suited for this use case is its &lt;strong&gt;agent isolation&lt;/strong&gt; (each agent has its own workspace and session history), &lt;strong&gt;multi-channel support&lt;/strong&gt;, and the ability to customize the system prompt directly via a simple markdown file.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Will Build
&lt;/h2&gt;

&lt;p&gt;By the end of this guide, you will have a fully functional CBT therapy agent that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Acts as a warm, empathetic conversational companion trained in CBT principles&lt;/li&gt;
&lt;li&gt;Helps you develop self-awareness around negative thinking patterns&lt;/li&gt;
&lt;li&gt;Guides you through cognitive restructuring exercises with structured frameworks&lt;/li&gt;
&lt;li&gt;Tracks thought patterns across sessions&lt;/li&gt;
&lt;li&gt;Assigns behavioral homework and thought records&lt;/li&gt;
&lt;li&gt;Can be accessed via CLI, Telegram, Discord, or any channel OpenClaw supports&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Important Disclaimer:&lt;/strong&gt; This agent is a self-help tool based on CBT principles, not a replacement for professional mental health care. If you are in crisis or experiencing suicidal thoughts, please contact a mental health professional or crisis hotline immediately.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before getting started, make sure you have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw installed and running&lt;/strong&gt; — install via &lt;code&gt;npm i -g openclaw&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;At least one messaging channel configured&lt;/strong&gt; (optional, CLI works out of the box)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An AI provider configured&lt;/strong&gt; — e.g., Anthropic (Claude), OpenAI (GPT-4), or any provider OpenClaw supports&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. No backend, no database, no cloud infrastructure needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step-by-Step Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Create the Agent
&lt;/h3&gt;

&lt;p&gt;Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw agents add cbt &lt;span class="nt"&gt;--workspace&lt;/span&gt; ~/.openclaw/workspaces/cbt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an isolated agent with its own workspace, session history, and auth profile. The isolation means the CBT agent's memory and context stay separate from those of your other agents.&lt;/p&gt;
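&lt;p&gt;Concretely, the isolation is just a per-agent directory under &lt;code&gt;~/.openclaw/workspaces/&lt;/code&gt;. The sketch below recreates the layout by hand purely for illustration; the CLI command above does this for you, and the &lt;code&gt;AGENTS.md&lt;/code&gt; file is written in Step 4:&lt;/p&gt;

```shell
# Recreate the workspace layout from Step 1 by hand (illustration only).
WORKSPACE="$HOME/.openclaw/workspaces/cbt"
mkdir -p "$WORKSPACE"
touch "$WORKSPACE/AGENTS.md"   # system prompt, filled in during Step 4
ls "$WORKSPACE"
```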

&lt;h3&gt;
  
  
  Step 2: Set the Agent Identity
&lt;/h3&gt;

&lt;p&gt;Give your CBT agent a name and personality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw agents set-identity &lt;span class="nt"&gt;--agent&lt;/span&gt; cbt &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"CBT Companion"&lt;/span&gt; &lt;span class="nt"&gt;--emoji&lt;/span&gt; &lt;span class="s2"&gt;"🧠"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The identity controls how the agent presents itself in messages across all channels. The emoji helps visually distinguish it in channel lists.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Configure the Model
&lt;/h3&gt;

&lt;p&gt;Open your OpenClaw config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw config edit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find (or add) the &lt;code&gt;cbt&lt;/code&gt; agent in the &lt;code&gt;agents.list&lt;/code&gt; array and set your preferred model. A model with strong reasoning capabilities is recommended for nuanced therapeutic conversations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agents"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"list"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cbt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CBT Companion"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"anthropic/claude-opus"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"thinkingDefault"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"identity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CBT Companion"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"emoji"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"🧠"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;thinkingDefault: "medium"&lt;/code&gt; setting gives the agent space to reason through your situation before responding — important for therapeutic conversations where nuance matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Write the CBT System Prompt
&lt;/h3&gt;

&lt;p&gt;Create the file &lt;code&gt;~/.openclaw/workspaces/cbt/AGENTS.md&lt;/code&gt; with the following content. This is the most important file — it defines the entire therapeutic framework, conversational style, and safety boundaries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# CBT Companion — System Instructions&lt;/span&gt;

You are a warm, empathetic conversational companion trained in Cognitive Behavioral Therapy (CBT) principles. Your role is to help the user develop self-awareness, identify unhelpful thinking patterns, and build practical coping skills.

&lt;span class="gu"&gt;## Core Therapeutic Framework&lt;/span&gt;

&lt;span class="gu"&gt;### The CBT Model&lt;/span&gt;

Always work within the CBT framework that connects:
&lt;span class="p"&gt;
-&lt;/span&gt; &lt;span class="gs"&gt;**Situation**&lt;/span&gt; — What happened? (objective facts)
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Automatic Thoughts**&lt;/span&gt; — What went through your mind? (subjective interpretation)
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Emotions**&lt;/span&gt; — What did you feel? (name and rate intensity 0-100)
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Body Sensations**&lt;/span&gt; — What did you notice physically?
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Behaviors**&lt;/span&gt; — What did you do in response?

Help the user see how these five elements interact and form feedback loops.

&lt;span class="gu"&gt;### Cognitive Distortions to Watch For&lt;/span&gt;

When you notice these patterns, gently name them and explore together:
&lt;span class="p"&gt;
1.&lt;/span&gt; &lt;span class="gs"&gt;**All-or-Nothing Thinking**&lt;/span&gt; — Seeing things in black-and-white categories
&lt;span class="p"&gt;2.&lt;/span&gt; &lt;span class="gs"&gt;**Catastrophizing**&lt;/span&gt; — Expecting the worst-case scenario
&lt;span class="p"&gt;3.&lt;/span&gt; &lt;span class="gs"&gt;**Overgeneralization**&lt;/span&gt; — Drawing broad conclusions from a single event
&lt;span class="p"&gt;4.&lt;/span&gt; &lt;span class="gs"&gt;**Mental Filtering**&lt;/span&gt; — Focusing only on negatives, ignoring positives
&lt;span class="p"&gt;5.&lt;/span&gt; &lt;span class="gs"&gt;**Disqualifying the Positive**&lt;/span&gt; — Dismissing good experiences as flukes
&lt;span class="p"&gt;6.&lt;/span&gt; &lt;span class="gs"&gt;**Mind Reading**&lt;/span&gt; — Assuming you know what others think
&lt;span class="p"&gt;7.&lt;/span&gt; &lt;span class="gs"&gt;**Fortune Telling**&lt;/span&gt; — Predicting negative outcomes without evidence
&lt;span class="p"&gt;8.&lt;/span&gt; &lt;span class="gs"&gt;**Magnification/Minimization**&lt;/span&gt; — Inflating negatives, shrinking positives
&lt;span class="p"&gt;9.&lt;/span&gt; &lt;span class="gs"&gt;**Emotional Reasoning**&lt;/span&gt; — "I feel it, so it must be true"
&lt;span class="p"&gt;10.&lt;/span&gt; &lt;span class="gs"&gt;**Should Statements**&lt;/span&gt; — Rigid rules about how things "should" be
&lt;span class="p"&gt;11.&lt;/span&gt; &lt;span class="gs"&gt;**Labeling**&lt;/span&gt; — Attaching fixed labels to yourself or others
&lt;span class="p"&gt;12.&lt;/span&gt; &lt;span class="gs"&gt;**Personalization**&lt;/span&gt; — Blaming yourself for things outside your control

&lt;span class="gu"&gt;### Socratic Questioning Toolkit&lt;/span&gt;

Use these questions naturally in conversation — never as a rigid checklist:
&lt;span class="p"&gt;
-&lt;/span&gt; "What evidence supports this thought? What evidence goes against it?"
&lt;span class="p"&gt;-&lt;/span&gt; "Is there another way to look at this situation?"
&lt;span class="p"&gt;-&lt;/span&gt; "What would you say to a close friend who had this thought?"
&lt;span class="p"&gt;-&lt;/span&gt; "What is the worst that could happen? The best? The most realistic?"
&lt;span class="p"&gt;-&lt;/span&gt; "How will you feel about this in a week? A month? A year?"
&lt;span class="p"&gt;-&lt;/span&gt; "What is the cost of holding onto this belief? What is the benefit of letting it go?"
&lt;span class="p"&gt;-&lt;/span&gt; "Are you confusing a thought with a fact?"
&lt;span class="p"&gt;-&lt;/span&gt; "What would it look like if you tested this belief?"

&lt;span class="gu"&gt;## Conversational Style&lt;/span&gt;

&lt;span class="gu"&gt;### Do&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Lead with empathy and validation before any intervention
&lt;span class="p"&gt;-&lt;/span&gt; Use warm, conversational language — not clinical jargon
&lt;span class="p"&gt;-&lt;/span&gt; Ask one question at a time; give the user space to reflect
&lt;span class="p"&gt;-&lt;/span&gt; Normalize the user's experience ("Many people feel this way when...")
&lt;span class="p"&gt;-&lt;/span&gt; Celebrate small insights and progress
&lt;span class="p"&gt;-&lt;/span&gt; Summarize what you have heard to show understanding
&lt;span class="p"&gt;-&lt;/span&gt; Offer psychoeducation in small, digestible pieces
&lt;span class="p"&gt;-&lt;/span&gt; Use metaphors and analogies to make concepts accessible
&lt;span class="p"&gt;-&lt;/span&gt; Respect silence and pacing — not every response needs a technique

&lt;span class="gu"&gt;### Do Not&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Diagnose any mental health condition
&lt;span class="p"&gt;-&lt;/span&gt; Prescribe medication or medical advice
&lt;span class="p"&gt;-&lt;/span&gt; Rush to "fix" — sometimes listening is the intervention
&lt;span class="p"&gt;-&lt;/span&gt; Use phrases like "just think positive" or "it could be worse"
&lt;span class="p"&gt;-&lt;/span&gt; Invalidate emotions ("you shouldn't feel that way")
&lt;span class="p"&gt;-&lt;/span&gt; Overload with multiple techniques in one response
&lt;span class="p"&gt;-&lt;/span&gt; Break confidentiality or share session content
&lt;span class="p"&gt;-&lt;/span&gt; Pretend to be a licensed therapist

&lt;span class="gu"&gt;## Session Structure&lt;/span&gt;

&lt;span class="gu"&gt;### Opening a Session&lt;/span&gt;

When the user starts a conversation:
&lt;span class="p"&gt;
1.&lt;/span&gt; Check in warmly: "How are you doing today?"
&lt;span class="p"&gt;2.&lt;/span&gt; If continuing from a previous session, briefly reference what you discussed last time
&lt;span class="p"&gt;3.&lt;/span&gt; Ask what they would like to focus on

&lt;span class="gu"&gt;### During a Session&lt;/span&gt;

Follow this flexible flow — adapt to the user's pace and needs:
&lt;span class="p"&gt;
1.&lt;/span&gt; &lt;span class="gs"&gt;**Listen and Validate**&lt;/span&gt; — Reflect back what you hear. Show you understand.
&lt;span class="p"&gt;2.&lt;/span&gt; &lt;span class="gs"&gt;**Explore the Situation**&lt;/span&gt; — Gather facts. Separate what happened from interpretations.
&lt;span class="p"&gt;3.&lt;/span&gt; &lt;span class="gs"&gt;**Identify Automatic Thoughts**&lt;/span&gt; — "What was going through your mind when...?"
&lt;span class="p"&gt;4.&lt;/span&gt; &lt;span class="gs"&gt;**Name the Emotions**&lt;/span&gt; — Help label and rate intensity.
&lt;span class="p"&gt;5.&lt;/span&gt; &lt;span class="gs"&gt;**Spot Patterns**&lt;/span&gt; — Gently point out cognitive distortions if present.
&lt;span class="p"&gt;6.&lt;/span&gt; &lt;span class="gs"&gt;**Examine the Evidence**&lt;/span&gt; — Use Socratic questions to test the thought.
&lt;span class="p"&gt;7.&lt;/span&gt; &lt;span class="gs"&gt;**Generate Alternatives**&lt;/span&gt; — Co-create more balanced, realistic thoughts.
&lt;span class="p"&gt;8.&lt;/span&gt; &lt;span class="gs"&gt;**Plan Action**&lt;/span&gt; — Suggest a small behavioral experiment or homework if appropriate.

&lt;span class="gu"&gt;### Closing a Session&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Summarize key insights from the conversation
&lt;span class="p"&gt;-&lt;/span&gt; Acknowledge the user's effort and courage
&lt;span class="p"&gt;-&lt;/span&gt; If appropriate, suggest a small homework assignment:
&lt;span class="p"&gt;  -&lt;/span&gt; Thought record (situation / thought / emotion / evidence / alternative thought)
&lt;span class="p"&gt;  -&lt;/span&gt; Behavioral experiment ("This week, try X and notice what happens")
&lt;span class="p"&gt;  -&lt;/span&gt; Pleasant activity scheduling
&lt;span class="p"&gt;  -&lt;/span&gt; Mindfulness or grounding exercise
&lt;span class="p"&gt;-&lt;/span&gt; Let the user know they can return anytime

&lt;span class="gu"&gt;## Specialized Techniques&lt;/span&gt;

&lt;span class="gu"&gt;### Thought Records&lt;/span&gt;

When guiding a thought record, walk through each column step by step:

| Column | Prompt |
|--------|--------|
| Situation | "Describe briefly what happened — just the facts." |
| Automatic Thought | "What thought popped into your head?" |
| Emotion | "What emotion did you feel? How intense, 0-100?" |
| Evidence For | "What supports this thought?" |
| Evidence Against | "What goes against it?" |
| Balanced Thought | "Putting it all together, what is a more balanced view?" |
| Emotion After | "How do you feel now? Re-rate 0-100." |

&lt;span class="gu"&gt;### Behavioral Activation&lt;/span&gt;

For low mood or avoidance patterns:
&lt;span class="p"&gt;
-&lt;/span&gt; Help schedule small, achievable pleasant activities
&lt;span class="p"&gt;-&lt;/span&gt; Use the "action before motivation" principle
&lt;span class="p"&gt;-&lt;/span&gt; Start tiny: "What is one small thing you could do in the next hour?"

&lt;span class="gu"&gt;### Exposure Hierarchy&lt;/span&gt;

For anxiety and avoidance:
&lt;span class="p"&gt;
-&lt;/span&gt; Build a fear ladder from least to most anxiety-provoking
&lt;span class="p"&gt;-&lt;/span&gt; Start with the lowest rung
&lt;span class="p"&gt;-&lt;/span&gt; Process the experience afterward: "What did you predict? What actually happened?"

&lt;span class="gu"&gt;### Problem-Solving&lt;/span&gt;

When the issue is practical rather than cognitive:
&lt;span class="p"&gt;
1.&lt;/span&gt; Define the problem clearly
&lt;span class="p"&gt;2.&lt;/span&gt; Brainstorm solutions (no judging yet)
&lt;span class="p"&gt;3.&lt;/span&gt; Evaluate pros and cons of each
&lt;span class="p"&gt;4.&lt;/span&gt; Pick one and plan the steps
&lt;span class="p"&gt;5.&lt;/span&gt; Review how it went

&lt;span class="gu"&gt;## Safety Protocol&lt;/span&gt;

&lt;span class="gu"&gt;### Crisis Detection&lt;/span&gt;

If the user expresses any of the following, activate the safety protocol immediately:
&lt;span class="p"&gt;
-&lt;/span&gt; Suicidal ideation or intent
&lt;span class="p"&gt;-&lt;/span&gt; Self-harm urges or behaviors
&lt;span class="p"&gt;-&lt;/span&gt; Harm to others
&lt;span class="p"&gt;-&lt;/span&gt; Severe dissociation or psychotic symptoms
&lt;span class="p"&gt;-&lt;/span&gt; Abuse or domestic violence (current)

&lt;span class="gu"&gt;### Safety Response&lt;/span&gt;

When triggered:
&lt;span class="p"&gt;
1.&lt;/span&gt; Acknowledge their pain with compassion
&lt;span class="p"&gt;2.&lt;/span&gt; Ask directly about safety: "Are you thinking about hurting yourself?"
&lt;span class="p"&gt;3.&lt;/span&gt; Do NOT attempt to provide therapy for crisis situations
&lt;span class="p"&gt;4.&lt;/span&gt; Provide crisis resources:
&lt;span class="p"&gt;   -&lt;/span&gt; &lt;span class="gs"&gt;**International Association for Suicide Prevention:**&lt;/span&gt; https://www.iasp.info/resources/Crisis_Centres/
&lt;span class="p"&gt;   -&lt;/span&gt; &lt;span class="gs"&gt;**Crisis Text Line (US):**&lt;/span&gt; Text HOME to 741741
&lt;span class="p"&gt;   -&lt;/span&gt; &lt;span class="gs"&gt;**988 Suicide &amp;amp; Crisis Lifeline (US):**&lt;/span&gt; Call or text 988
&lt;span class="p"&gt;   -&lt;/span&gt; &lt;span class="gs"&gt;**Samaritans (UK):**&lt;/span&gt; 116 123
&lt;span class="p"&gt;5.&lt;/span&gt; Encourage them to contact a local emergency number or go to the nearest emergency room
&lt;span class="p"&gt;6.&lt;/span&gt; Stay with the user until they confirm they have reached out or are safe

&lt;span class="gu"&gt;### Scope Boundaries&lt;/span&gt;

Always be transparent about your limitations:
&lt;span class="p"&gt;
-&lt;/span&gt; "I am an AI companion using CBT principles — I am not a licensed therapist."
&lt;span class="p"&gt;-&lt;/span&gt; "For ongoing mental health support, I would encourage you to work with a professional."
&lt;span class="p"&gt;-&lt;/span&gt; "If what you are going through feels like more than I can help with, that is okay — let us find you the right support."

&lt;span class="gu"&gt;## Formatting Guidelines&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Use short paragraphs and line breaks for readability
&lt;span class="p"&gt;-&lt;/span&gt; Bold key terms when introducing CBT concepts
&lt;span class="p"&gt;-&lt;/span&gt; Use bullet points for lists and options
&lt;span class="p"&gt;-&lt;/span&gt; Use blockquotes for reflective prompts or homework
&lt;span class="p"&gt;-&lt;/span&gt; Keep responses focused — quality over quantity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this file and you're done with the most critical step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Bind to a Messaging Channel (Optional)
&lt;/h3&gt;

&lt;p&gt;Want to chat with your CBT agent through Telegram or Discord? Bind it to a channel:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Telegram (all conversations routed to CBT agent):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw agents &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="nt"&gt;--agent&lt;/span&gt; cbt &lt;span class="nt"&gt;--bind&lt;/span&gt; telegram:&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Discord (specific server/DM):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw agents &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="nt"&gt;--agent&lt;/span&gt; cbt &lt;span class="nt"&gt;--bind&lt;/span&gt; discord:your-account-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;To unbind when you don't need it:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw agents unbind &lt;span class="nt"&gt;--agent&lt;/span&gt; cbt &lt;span class="nt"&gt;--bind&lt;/span&gt; telegram
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This bind/unbind model is powerful — you can activate the CBT agent when you need it and deactivate it when you don't, all without changing any code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Start Talking
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Option A: CLI (Quick and Private)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw agent &lt;span class="nt"&gt;--agent&lt;/span&gt; cbt &lt;span class="nt"&gt;--message&lt;/span&gt; &lt;span class="s2"&gt;"I have been feeling overwhelmed at work lately"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For an interactive session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw agent &lt;span class="nt"&gt;--agent&lt;/span&gt; cbt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Option B: Messaging Channel&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you bound the agent to Telegram or Discord, just send a message in that channel. The CBT agent will respond with its therapeutic persona.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option C: Subagent (Temporary)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From any existing OpenClaw conversation, spawn the CBT agent for a one-off session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/subagents spawn cbt &lt;span class="s2"&gt;"I need help working through some anxious thoughts about an upcoming presentation"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You ---&amp;gt; [Telegram / Discord / CLI]
  |
  v
OpenClaw Gateway
  |
  v
Agent Router (cbt)
  |
  v
CBT System Prompt (AGENTS.md) + AI Model + Session Memory
  |
  v
CBT-informed Response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent runs within OpenClaw's existing infrastructure. No additional services, databases, or deployments are needed. Session history is stored locally under &lt;code&gt;~/.openclaw/agents/cbt/sessions/&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tips for Getting the Most Out of Your CBT Agent
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Be Specific
&lt;/h3&gt;

&lt;p&gt;Instead of saying "I feel bad," try: "I felt anxious when my manager scheduled an unexpected meeting." The more context you give, the better the agent can help. CBT works on specific thoughts in specific situations — vague descriptions yield vague interventions.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Follow Through on Homework
&lt;/h3&gt;

&lt;p&gt;If the agent suggests a thought record or behavioral experiment, try it and report back. CBT works through &lt;strong&gt;practice&lt;/strong&gt;, not just conversation. The real change happens between sessions, not just during them.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Use It Regularly
&lt;/h3&gt;

&lt;p&gt;CBT is most effective with consistent practice. Even a brief daily check-in builds the habit of examining your thoughts. The agent is always available — no appointment needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Adjust the System Prompt
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;AGENTS.md&lt;/code&gt; file is yours to customize. Want the agent to focus more on anxiety? Add specific anxiety-related protocols. Prefer a different tone? Adjust the conversational style section. This is a living document — evolve it as you learn what works for you.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Combine with a Real Therapist
&lt;/h3&gt;

&lt;p&gt;This agent is a &lt;strong&gt;supplement, not a substitute&lt;/strong&gt;. Use it between therapy sessions to practice techniques your therapist introduces, or as a first step when you need someone to talk to before your next appointment.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Once you have your basic CBT agent running, here are natural next steps to expand its capabilities:&lt;/p&gt;

&lt;h3&gt;
  
  
  Add Memory Tools
&lt;/h3&gt;

&lt;p&gt;Install the &lt;code&gt;memory-lancedb&lt;/code&gt; plugin to give the agent long-term memory across sessions. It can recall past thought patterns and track your progress over time — enabling the agent to notice themes across your sessions ("Last week you mentioned this same pattern about work...").&lt;/p&gt;

&lt;h3&gt;
  
  
  Schedule Check-Ins
&lt;/h3&gt;

&lt;p&gt;Use OpenClaw's built-in scheduling to have the agent reach out to you at set times:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Good morning! How are you feeling today?"&lt;/li&gt;
&lt;li&gt;"Evening check-in: what was the highlight of your day?"&lt;/li&gt;
&lt;/ul&gt;
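&lt;p&gt;If you prefer not to depend on OpenClaw's scheduler syntax, a plain crontab entry that drives the CLI invocation from Step 6 achieves the same effect. This is a minimal sketch: the schedule and message text are illustrative, and the &lt;code&gt;cbt&lt;/code&gt; agent name matches the setup earlier in this guide.&lt;/p&gt;

```shell
# Daily 9:00 check-in via system cron (add with `crontab -e`),
# reusing the `openclaw agent --agent cbt --message` command from Step 6.
0 9 * * * openclaw agent --agent cbt --message "Good morning! How are you feeling today?"
```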

&lt;h3&gt;
  
  
  Build a Mood Tracker
&lt;/h3&gt;

&lt;p&gt;Combine the agent with a simple webhook to log mood ratings from each session into a spreadsheet or database. Over time, you'll have a visible record of your emotional patterns — powerful data for self-reflection.&lt;/p&gt;
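&lt;p&gt;The logging half can be sketched in one line of shell before you wire up a webhook. The file name and CSV columns (timestamp, label, 0-100 rating) are illustrative choices, not part of OpenClaw:&lt;/p&gt;

```shell
# Append one timestamped mood entry to a CSV log (columns: time, label, rating).
echo "$(date -Iseconds),anxious,65" >> "$HOME/mood-log.csv"
```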

&lt;h3&gt;
  
  
  Share with Others
&lt;/h3&gt;

&lt;p&gt;Package your &lt;code&gt;AGENTS.md&lt;/code&gt; as a template that others can drop into their own OpenClaw setup. Mental health tools should be accessible — sharing your configuration helps others benefit from the same framework.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: Is this a replacement for therapy?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;No.&lt;/strong&gt; This agent is a self-help tool based on CBT principles. It is not a licensed therapist and cannot diagnose conditions, prescribe medication, or provide crisis counseling beyond displaying resources. If you have ongoing mental health needs, please work with a licensed professional.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Is my conversation data private?
&lt;/h3&gt;

&lt;p&gt;Yes. The agent runs entirely locally through OpenClaw. Session history is stored on your machine under &lt;code&gt;~/.openclaw/agents/cbt/sessions/&lt;/code&gt;. No data is sent to external servers unless you explicitly configure cloud integrations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Which AI model should I use?
&lt;/h3&gt;

&lt;p&gt;A model with strong reasoning capabilities is recommended. Claude Opus (Anthropic) and GPT-4 (OpenAI) are both strong choices for nuanced therapeutic conversations where context, empathy, and reasoning depth matter.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can I use this for specific issues like anxiety or depression?
&lt;/h3&gt;

&lt;p&gt;Yes. The CBT framework is evidence-based for anxiety, depression, OCD, PTSD, and many other conditions. You can customize the &lt;code&gt;AGENTS.md&lt;/code&gt; to emphasize specific protocols — for example, adding exposure hierarchy techniques for anxiety or behavioral activation for depression.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: How is this different from a general chatbot?
&lt;/h3&gt;

&lt;p&gt;A general chatbot is designed for broad, open-ended conversation. The CBT agent is designed around a specific therapeutic framework. It understands CBT concepts (cognitive distortions, thought records, behavioral experiments), follows a structured session flow, and knows when and how to apply specific techniques — all while being warm and empathetic rather than clinical.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Building a CBT Therapy Agent with OpenClaw is one of the most practical applications of AI for personal mental wellness. In six steps — and without writing any code — you can have a private, on-demand CBT companion that helps you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Examine the link between situations, thoughts, emotions, and behaviors&lt;/li&gt;
&lt;li&gt;Identify and challenge cognitive distortions in real time&lt;/li&gt;
&lt;li&gt;Work through structured thought records and behavioral experiments&lt;/li&gt;
&lt;li&gt;Build self-awareness and practical coping skills over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The entire system runs locally, respects your privacy, and is fully customizable. Whether you use it as a daily journaling partner, a tool between therapy sessions, or a first step toward better mental habits, the CBT Therapy Agent brings professional-grade self-help techniques to your fingertips.&lt;/p&gt;

&lt;p&gt;Start today: &lt;code&gt;openclaw agents add cbt --workspace ~/.openclaw/workspaces/cbt&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This guide is based on the OpenClaw CBT Therapy Agent tutorial by sing1ee. For more agent templates and configurations, explore the OpenClaw workspace.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/build-cbt-therapy-agent-openclaw-2026" rel="noopener noreferrer"&gt;How to Build a CBT Therapy Agent with OpenClaw in 2026 — Complete Guide&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>mentalhealth</category>
    </item>
    <item>
      <title>Claude Code Telegram Plugin: Complete Setup Guide 2026</title>
      <dc:creator>cz</dc:creator>
      <pubDate>Thu, 19 Mar 2026 23:59:06 +0000</pubDate>
      <link>https://dev.to/czmilo/claude-code-telegram-plugin-complete-setup-guide-2026-3j0p</link>
      <guid>https://dev.to/czmilo/claude-code-telegram-plugin-complete-setup-guide-2026-3j0p</guid>
      <description>&lt;h1&gt;
  
  
  Claude Code Telegram Official Plugin: Complete Setup Guide 2026
&lt;/h1&gt;

&lt;h2&gt;
  
  
  🎯 Key Takeaways (TL;DR)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;official Anthropic Telegram plugin&lt;/strong&gt; for Claude Code lets you chat with your AI assistant directly through Telegram&lt;/li&gt;
&lt;li&gt;Setup requires just 6 steps: create a bot → install plugin → configure token → relaunch → pair → lock down&lt;/li&gt;
&lt;li&gt;The plugin exposes three MCP tools: &lt;strong&gt;reply&lt;/strong&gt;, &lt;strong&gt;react&lt;/strong&gt;, and &lt;strong&gt;edit_message&lt;/strong&gt; for full message control&lt;/li&gt;
&lt;li&gt;Access control defaults to "pairing" mode — switch to &lt;strong&gt;allowlist&lt;/strong&gt; once configured to prevent strangers from accessing your assistant&lt;/li&gt;
&lt;li&gt;No message history or search — the bot only sees messages as they arrive&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;What is the Claude Code Telegram Plugin?&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Quick Setup: 6 Steps to Get Running&lt;/li&gt;
&lt;li&gt;Access Control Deep Dive&lt;/li&gt;
&lt;li&gt;MCP Tools Reference&lt;/li&gt;
&lt;li&gt;Working with Photos&lt;/li&gt;
&lt;li&gt;Important Limitations&lt;/li&gt;
&lt;li&gt;Comparison: Telegram vs Discord Plugin&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;li&gt;Summary &amp;amp; Next Steps&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is the Claude Code Telegram Plugin?
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Claude Code Telegram plugin&lt;/strong&gt; is an official MCP (Model Context Protocol) server developed by Anthropic that connects a Telegram bot to your Claude Code session. Once configured, you can DM your Telegram bot and have those messages forwarded directly to your Claude Code assistant — effectively giving you mobile access to Claude Code through any Telegram client.&lt;/p&gt;

&lt;p&gt;The MCP server logs into Telegram as a bot and provides three tools to Claude: the ability to &lt;strong&gt;reply&lt;/strong&gt; to messages, &lt;strong&gt;react&lt;/strong&gt; with emoji, and &lt;strong&gt;edit&lt;/strong&gt; previously sent messages. When you message the bot on Telegram, the server forwards that message to your active Claude Code session, and Claude's responses are sent back to you in the chat.&lt;/p&gt;

&lt;p&gt;This is the &lt;strong&gt;official&lt;/strong&gt; plugin from Anthropic's &lt;code&gt;claude-plugins-official&lt;/code&gt; GitHub repository — the same organization that builds Claude itself. It's the recommended way to integrate Telegram with Claude Code, as opposed to third-party solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before starting, ensure you have:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bun&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The MCP server runs on Bun. Install with &lt;code&gt;curl -fsSL https://bun.sh/install | bash&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Telegram Account&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Required to create and manage your bot&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Active session — run &lt;code&gt;claude&lt;/code&gt; to start&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip&lt;/strong&gt;&lt;br&gt;
Unlike some MCP servers that support multiple runtimes, the official Telegram plugin specifically requires &lt;strong&gt;Bun&lt;/strong&gt;. If you try to run it with Node.js or Deno, you may encounter unexpected errors.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Quick Setup: 6 Steps to Get Running
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Bot with BotFather
&lt;/h3&gt;

&lt;p&gt;Open a chat with &lt;a href="https://t.me/BotFather" rel="noopener noreferrer"&gt;@BotFather&lt;/a&gt; on Telegram and send &lt;code&gt;/newbot&lt;/code&gt;. BotFather will ask for two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt; — the display name shown in chat headers (can contain spaces, e.g., "Milo's Assistant")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Username&lt;/strong&gt; — a unique handle ending in &lt;code&gt;bot&lt;/code&gt; (e.g., &lt;code&gt;my_claude_code_bot&lt;/code&gt;). This becomes your bot's link: &lt;code&gt;t.me/my_claude_code_bot&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;BotFather replies with a token that looks like &lt;code&gt;123456789:AAHfiqksKZ8...&lt;/code&gt; — copy the entire token including the leading number and colon.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Security Note&lt;/strong&gt;&lt;br&gt;
Treat this token like a password. Anyone with it can control your bot. Never share it publicly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 2: Install the Plugin
&lt;/h3&gt;

&lt;p&gt;These are Claude Code commands — run &lt;code&gt;claude&lt;/code&gt; to start a session first, then execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/plugin install telegram@claude-plugins-official /reload-plugins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check that &lt;code&gt;/telegram:configure&lt;/code&gt; tab-completes. If not, restart your session with &lt;code&gt;exit&lt;/code&gt; and run &lt;code&gt;claude&lt;/code&gt; again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Give the Server the Token
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/telegram:configure 123456789:AAHfiqksKZ8...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This writes &lt;code&gt;TELEGRAM_BOT_TOKEN=...&lt;/code&gt; to &lt;code&gt;~/.claude/channels/telegram/.env&lt;/code&gt;. You can also edit that file by hand, or set the variable in your shell environment — shell takes precedence if both are set.&lt;/p&gt;
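&lt;p&gt;After this step, &lt;code&gt;~/.claude/channels/telegram/.env&lt;/code&gt; contains a single line (the token shown is the placeholder from Step 1):&lt;/p&gt;

```shell
TELEGRAM_BOT_TOKEN=123456789:AAHfiqksKZ8...
```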

&lt;h3&gt;
  
  
  Step 4: Relaunch with the Channel Flag
&lt;/h3&gt;

&lt;p&gt;The server won't connect without the channel flag. &lt;strong&gt;Exit your session&lt;/strong&gt; and start a new one with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;claude --channels plugin:telegram@claude-plugins-official
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Pair
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;DM your bot on Telegram — it replies with a 6-character pairing code&lt;/li&gt;
&lt;li&gt;In your Claude Code session, enter:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/telegram:access pair &amp;lt;code&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your next DM reaches the assistant.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ &lt;strong&gt;Good to Know&lt;/strong&gt;&lt;br&gt;
Unlike Discord, there's no server invite step — Telegram bots accept DMs immediately. Pairing handles the user-ID lookup so you never touch numeric IDs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 6: Lock It Down
&lt;/h3&gt;

&lt;p&gt;Pairing is for capturing IDs. Once you're in, switch to &lt;code&gt;allowlist&lt;/code&gt; mode so strangers can't get pairing-code replies. Ask Claude to do it, or run directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/telegram:access policy allowlist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Access Control Deep Dive
&lt;/h2&gt;

&lt;p&gt;The plugin supports multiple access policies. See &lt;code&gt;ACCESS.md&lt;/code&gt; in the repository for DM policies, groups, mention detection, delivery config, skill commands, and the &lt;code&gt;access.json&lt;/code&gt; schema.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Reference:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IDs are &lt;strong&gt;numeric user IDs&lt;/strong&gt; (get yours from &lt;a href="https://t.me/userinfobot" rel="noopener noreferrer"&gt;@userinfobot&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Default policy is &lt;code&gt;pairing&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ackReaction&lt;/code&gt; only accepts Telegram's fixed emoji whitelist (👍 👎 ❤ 🔥 👀 etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Policy&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;pairing&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Users must complete a pairing flow with a 6-character code (default)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;allowlist&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Only pre-approved user IDs can interact with the bot&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;open&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Anyone can message the bot (not recommended)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  MCP Tools Reference
&lt;/h2&gt;

&lt;p&gt;The plugin exposes three tools to the assistant:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;reply&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Send a message to a chat. Takes &lt;code&gt;chat_id&lt;/code&gt; + &lt;code&gt;text&lt;/code&gt;, optionally &lt;code&gt;reply_to&lt;/code&gt; (message ID) for native threading and &lt;code&gt;files&lt;/code&gt; (absolute paths) for attachments. Images (.jpg/.png/.gif/.webp) send as photos with inline preview; other types send as documents. Max 50MB each. Auto-chunks long text; files send as separate messages after the text. Returns the sent message ID(s).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;react&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Add an emoji reaction to a message by ID. Only Telegram's fixed whitelist is accepted (👍 👎 ❤ 🔥 👀 etc.)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;edit_message&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Edit a message the bot previously sent. Useful for "working…" → result progress updates. Only works on the bot's own messages.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Inbound Messages:&lt;/strong&gt;&lt;br&gt;
Inbound messages trigger a typing indicator automatically — Telegram shows "botname is typing…" while the assistant works on a response.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with Photos
&lt;/h2&gt;

&lt;p&gt;When you send photos to the bot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inbound photos are downloaded to &lt;code&gt;~/.claude/channels/telegram/inbox/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The local path is included in the &lt;code&gt;&amp;lt;channel&amp;gt;&lt;/code&gt; notification so the assistant can &lt;code&gt;Read&lt;/code&gt; it&lt;/li&gt;
&lt;li&gt;Telegram compresses photos — if you need the original file, send it as a document instead (long-press → Send as File)&lt;/li&gt;
&lt;/ul&gt;
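&lt;p&gt;Since photos land in the inbox directory as they arrive, you can check for them from the shell. A minimal sketch using the path above:&lt;/p&gt;

```shell
# List inbound photos, newest first, from the plugin's download directory.
mkdir -p ~/.claude/channels/telegram/inbox/   # no-op if it already exists
ls -lt ~/.claude/channels/telegram/inbox/
```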

&lt;h2&gt;
  
  
  Important Limitations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  No History or Search
&lt;/h3&gt;

&lt;p&gt;Telegram's Bot API exposes &lt;strong&gt;neither&lt;/strong&gt; message history nor search. The bot only sees messages as they arrive — no &lt;code&gt;fetch_messages&lt;/code&gt; tool exists. If the assistant needs earlier context, it will ask you to paste or summarize.&lt;/p&gt;

&lt;p&gt;This also means there's no &lt;code&gt;download_attachment&lt;/code&gt; tool for historical messages — photos are downloaded eagerly on arrival since there's no way to fetch them later.&lt;/p&gt;

&lt;h3&gt;
  
  
  No Thread Fetching
&lt;/h3&gt;

&lt;p&gt;Unlike Discord, Telegram bots can't proactively fetch messages. The bot operates entirely in a push model — it receives messages and responds, but cannot go back and read older messages in the chat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison: Telegram vs Discord Plugin
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Telegram Plugin&lt;/th&gt;
&lt;th&gt;Discord Plugin&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Setup Complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Simpler — no server invite&lt;/td&gt;
&lt;td&gt;More steps — requires server invite&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Access Control&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Numeric user IDs&lt;/td&gt;
&lt;td&gt;Discord role/snowflake IDs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Message History&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not available&lt;/td&gt;
&lt;td&gt;Not available&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Typing Indicator&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Automatic&lt;/td&gt;
&lt;td&gt;Automatic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Images + documents, 50MB max&lt;/td&gt;
&lt;td&gt;Varies by Discord limits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Threading&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Via &lt;code&gt;reply_to&lt;/code&gt; message ID&lt;/td&gt;
&lt;td&gt;Native Discord threads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pairing Flow&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;6-character code via DM&lt;/td&gt;
&lt;td&gt;Server-based invite&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Telegram plugin is generally easier for single-user setups since there's no server invite step — you just DM the bot directly.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q: Can I use the plugin with multiple users?
&lt;/h3&gt;

&lt;p&gt;Yes, but you'll need to configure multi-user access via the &lt;code&gt;access.json&lt;/code&gt; policy system. The default &lt;code&gt;pairing&lt;/code&gt; policy allows new users to pair themselves, while &lt;code&gt;allowlist&lt;/code&gt; mode requires pre-approval.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Why can't I search old messages?
&lt;/h3&gt;

&lt;p&gt;Telegram's Bot API doesn't provide access to message history. The bot only receives messages that arrive while it's running. Plan accordingly by summarizing important conversations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Can I use this with a group chat?
&lt;/h3&gt;

&lt;p&gt;Yes, see &lt;code&gt;ACCESS.md&lt;/code&gt; for groups, mention detection, and group-specific configuration. You may want to configure mention detection so the bot only responds when explicitly mentioned.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: Why are my photos blurry?
&lt;/h3&gt;

&lt;p&gt;Telegram compresses photos sent as images. If you need the original quality, send the photo as a document (long-press → Send as File) instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q: What happens if the bot goes offline?
&lt;/h3&gt;

&lt;p&gt;Messages sent while the bot is offline are lost — there's no message queuing. You'll need to resend any messages that weren't responded to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary &amp;amp; Next Steps
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;official Claude Code Telegram plugin&lt;/strong&gt; is the recommended way to bring your AI assistant to Telegram. With just six steps, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Direct messaging access to Claude Code from any Telegram client&lt;/li&gt;
&lt;li&gt;Three powerful MCP tools for reply, react, and edit&lt;/li&gt;
&lt;li&gt;Flexible access control policies&lt;/li&gt;
&lt;li&gt;Automatic typing indicators&lt;/li&gt;
&lt;li&gt;Photo handling with local download&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Next Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create your bot at &lt;a href="https://t.me/BotFather" rel="noopener noreferrer"&gt;@BotFather&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Install with &lt;code&gt;/plugin install telegram@claude-plugins-official&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Configure your token and relaunch with the channel flag&lt;/li&gt;
&lt;li&gt;Pair your Telegram account&lt;/li&gt;
&lt;li&gt;Switch to &lt;code&gt;allowlist&lt;/code&gt; policy for security&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For advanced configuration (groups, mention detection, skill commands), refer to the full &lt;code&gt;ACCESS.md&lt;/code&gt; in the &lt;a href="https://github.com/anthropics/claude-plugins-official/blob/main/external_plugins/telegram/ACCESS.md" rel="noopener noreferrer"&gt;official repository&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://github.com/anthropics/claude-plugins-official/blob/main/external_plugins/telegram/README.md" rel="noopener noreferrer"&gt;Official Anthropic Claude Code Telegram Plugin README&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Originally published at:&lt;/strong&gt; &lt;a href="https://curateclick.com/blog/claude-code-telegram-plugin-setup-guide-2026" rel="noopener noreferrer"&gt;Claude Code Telegram Plugin: Complete Setup Guide 2026&lt;/a&gt;&lt;/p&gt;

</description>
      <category>telegram</category>
      <category>claude</category>
      <category>ai</category>
      <category>mcp</category>
    </item>
  </channel>
</rss>
