<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Soulman </title>
    <description>The latest articles on DEV Community by Soulman  (@soulman_250).</description>
    <link>https://dev.to/soulman_250</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3841732%2F8746917a-a390-4bab-bf95-d4b37c31ef4f.jpeg</url>
      <title>DEV Community: Soulman </title>
      <link>https://dev.to/soulman_250</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/soulman_250"/>
    <language>en</language>
    <item>
      <title>Phala.com Is Partnering With AI x Web3 School to Put Private, Verifiable Compute in Developers’ Hands</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Wed, 06 May 2026 22:52:54 +0000</pubDate>
      <link>https://dev.to/soulman_250/phalacom-is-partnering-with-ai-x-web3-school-to-put-private-verifiable-compute-in-developers-cb0</link>
      <guid>https://dev.to/soulman_250/phalacom-is-partnering-with-ai-x-web3-school-to-put-private-verifiable-compute-in-developers-cb0</guid>
      <description>&lt;p&gt;Note: Adapted from the official Phala.com X announcement; check it HERE: &lt;a href="https://x.com/phalanetwork/status/2051365284183671144" rel="noopener noreferrer"&gt;https://x.com/phalanetwork/status/2051365284183671144&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf13k3y57qr9i8d1gh3y.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf13k3y57qr9i8d1gh3y.jpeg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
Phala.com is partnering with AI x Web3 School, a global developer program run by LXDAO and ETHPanda, to bring privacy-preserving compute into the hands of builders working at the crossover of AI and Web3. The program is structured around a Bootcamp and Hackathon, and the goal is simple: help developers go from understanding concepts to actually shipping projects. Courses are free and open right now, so if you have been sitting on an idea at this intersection, the barrier to getting started just got lower.&lt;/p&gt;

&lt;p&gt;The collaboration centers on giving participants real exposure to infrastructure that runs in TEE-secured cloud environments, private by default. That means the code and data being processed stay protected during execution, not just in storage. For developers building AI agents that interact with onchain applications, this distinction matters a lot. Phala technical experts will be present during the Hackathon itself, walking through deployment step by step rather than just presenting slides, so participants leave with skills they can actually reuse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Trusted Compute Is Becoming a Foundation Layer&lt;/strong&gt;&lt;br&gt;
As AI agents become more capable and start handling real tasks, the question of where they run and whether their outputs can be trusted becomes unavoidable. Running an agent in a standard cloud environment gives you no way to verify that the execution happened as intended or that sensitive data was not exposed along the way.&lt;br&gt;
TEEs solve this by creating a hardware-level isolated space where computation happens privately and the results can be verified. Phala has been building this infrastructure specifically for decentralized AI use cases, and this partnership brings that work directly into a learning environment designed around practical building.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Participants Will Actually Work On&lt;/strong&gt;&lt;br&gt;
The curriculum covers how TEEs provide isolation and verifiability for AI computation, how to think about model execution and data privacy when agents need to interact with onchain systems, and how to structure Hackathon projects around these building blocks.&lt;br&gt;
High quality projects and case studies from the program will be documented in an open-source Handbook, turning individual builds into reusable references for the wider developer community. The program also connects participants across both the Phala and AI x Web3 School ecosystems, which matters when you are building in a space where community and infrastructure access go hand in hand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started&lt;/strong&gt;&lt;br&gt;
The current phase of AI x Web3 School is free and open to developers. Whether you are new to this space or already building and looking to understand the infrastructure layer better, the program is designed to meet you where you are. Pre-registration is open now at &lt;a href="https://web3career.build/en/programs/AI-Web3-School" rel="noopener noreferrer"&gt;https://web3career.build/en/programs/AI-Web3-School&lt;/a&gt;, and you can follow aiweb3school for updates on course releases and Hackathon details.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developer</category>
      <category>programming</category>
      <category>security</category>
    </item>
    <item>
      <title>Clawdi Just Changed How AI Agents Work Together</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Wed, 06 May 2026 22:45:25 +0000</pubDate>
      <link>https://dev.to/soulman_250/clawdi-just-changed-how-ai-agents-work-together-2jlb</link>
      <guid>https://dev.to/soulman_250/clawdi-just-changed-how-ai-agents-work-together-2jlb</guid>
      <description>&lt;p&gt;Note: Adapted from the official X Clawdi announcement at &lt;a href="https://x.com/openclawdiai/status/2049883505187074150?s=46" rel="noopener noreferrer"&gt;https://x.com/openclawdiai/status/2049883505187074150?s=46&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy9nrsrnq1prs3at297z.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy9nrsrnq1prs3at297z.jpeg" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;br&gt;
If you’ve been using AI coding agents for any serious amount of work, you’ve probably noticed the same frustration. Claude Code on your laptop doesn’t remember what Codex did on another machine. Switch frameworks and you’re starting over. Every session is a blank slate, and all the context you built up just disappears. That’s not a tooling problem, it’s a fundamentally broken workflow, and it’s one that most developers have just quietly accepted as normal. Clawdi was built to fix it, and this latest update is the most complete version of that vision yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wntk0mw1fssglzrsfu4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wntk0mw1fssglzrsfu4.jpeg" alt=" " width="800" height="542"&gt;&lt;/a&gt;&lt;br&gt;
The idea is straightforward. Instead of your memory, files, API keys, and skills living inside a specific agent, they live in Clawdi. Every agent connects to that same environment. Switch from Claude Code to Codex to Hermes and nothing is lost, because the context was never tied to the agent in the first place. It all runs in a TEE-secured cloud, which means your keys and memory are private by default, not sitting on a shared server somewhere with no visibility into how they’re handled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ce0mge5krqv7yne7l7z.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ce0mge5krqv7yne7l7z.jpeg" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Hermes Agent and What It Actually Does&lt;/strong&gt;&lt;br&gt;
The headline feature in this update is the Hermes Agent, now available to deploy in one click from the dashboard. What makes it different from a standard agent setup is that it builds on what it learned from previous tasks rather than resetting each time. It holds memory across sessions, picks up where Claude Code left off, and comes with over 200 tool integrations already configured so there’s no manual setup involved. If you’ve spent time getting an agent into a useful state only to lose that context when you close the session, Hermes is the direct answer to that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmm5om5su66b2isemuyo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmm5om5su66b2isemuyo.jpeg" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Dashboard That Actually Shows You Everything&lt;/strong&gt;&lt;br&gt;
The Clawdi Cloud dashboard was rebuilt around the idea of a single view for all your agents. Claude Code, Codex, Hermes, and OpenClaw sessions now appear side by side, with activity history, recent sessions, messages, memories, vault keys, and connectors all accessible from one place. Adding a new agent means pasting one prompt into your tool of choice and it configures itself and shows up in the dashboard automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y4vqjiskyzhyw231z0j.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y4vqjiskyzhyw231z0j.jpeg" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
New additions include a built-in console with a terminal and file editor, an Agent Portraits page that gives each agent a shareable public profile, and a Connectors page with over 500 tools you can add in one click. Built-in skills now cover searching X posts, browsing live news, and checking Polymarket predictions out of the box, and messaging support spans 11 platforms including Telegram, Discord, Slack, and WhatsApp all managed from a single panel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pricing and How to Get Started&lt;/strong&gt;&lt;br&gt;
The free tier gives you access to either OpenClaw or Hermes running in hardware-secured infrastructure with support for 13 or more messaging apps. Pro at $29 a month unlocks both agents together along with a web terminal and custom ports. Max at $99 a month steps up to 4 vCPU, 8GB RAM, and 40GB storage. Enterprise covers SSO, audit logs, and a 99.9% uptime guarantee.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml5t55kgyk1eoltx3hwd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml5t55kgyk1eoltx3hwd.jpeg" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;br&gt;
If you prefer to run it locally, the full version is available via &lt;code&gt;npm install -g clawdi&lt;/code&gt;, MIT licensed and free to keep. It works with Claude Code, Codex, Cursor, OpenClaw, and Hermes. The whole thing takes under three minutes to get running, and you can start for free at clawdi.ai.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>agents</category>
      <category>developer</category>
    </item>
    <item>
      <title>One Framework to Run Confidential Workloads Across AWS, Google Cloud, and Phala</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Wed, 29 Apr 2026 23:21:01 +0000</pubDate>
      <link>https://dev.to/soulman_250/one-framework-to-run-confidential-workloads-across-aws-google-cloud-and-phala-3k3g</link>
      <guid>https://dev.to/soulman_250/one-framework-to-run-confidential-workloads-across-aws-google-cloud-and-phala-3k3g</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note: This article is adapted from the official Phala Network post: “dstack: One Confidential Compute Framework Across AWS, Google Cloud, and Phala” — published April 23, 2026. see it here: &lt;a href="https://phala.com/posts/dstack-one-confidential-compute-framework-aws-google-cloud-phala" rel="noopener noreferrer"&gt;https://phala.com/posts/dstack-one-confidential-compute-framework-aws-google-cloud-phala&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9qda0cq99by2ily8x16.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9qda0cq99by2ily8x16.webp" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have ever tried shipping something on confidential compute infrastructure, you know the setup tax is real. Choosing AWS Nitro Enclaves or Google Cloud Confidential VMs is not just a hosting decision. It pulls in a whole chain of choices about how you package your workload, how your application proves its identity, and how it gets access to secrets at runtime. Every platform does this differently, and if you ever need to move between them, you are essentially starting from scratch.&lt;br&gt;
That is the specific problem dstack is solving. It is an open framework from Phala.com that lets you write one workload definition, in standard Docker Compose format, and deploy it across AWS, Google Cloud, or Phala’s own infrastructure without rebuilding your trust model each time. On Google Cloud your workload runs inside a Confidential VM. On AWS it gets packaged into an enclave image. Either way, the environment is hardware-secured, the cloud provider cannot read your memory, and a compromised host machine cannot reach inside what is running. That baseline protection holds regardless of which backend you are on, and your application does not have to be written differently for each one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Verification Step Most Systems Skip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is where dstack separates itself from tools that just wrap cloud deployment in a nicer interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcqeha4jwuad4v3568gw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcqeha4jwuad4v3568gw.jpeg" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When your workload boots, it does not automatically get access to secrets just because it is running in a secure environment. Before anything sensitive is released, the workload has to prove exactly what code is running inside it. That proof is a cryptographic measurement, a fingerprint of the workload at that exact moment, and it gets sent to dstack’s key management component for verification. If the measurement matches what was previously authorized, keys get released. If the code has been changed in any way, the measurement is different and nothing gets released. No exceptions.&lt;br&gt;
Your application talks to dstack through a consistent local interface regardless of which backend is underneath. On Google Cloud a component called the Guest Agent handles the verification work. On AWS a smaller utility called dstack-util does the same job inside the enclave. The backend differs because the platforms differ, but the interface your application sees and the logic your trust model follows stays the same across both. That is the design choice that makes dstack genuinely portable rather than just multi-cloud in name.&lt;/p&gt;
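
&lt;p&gt;As a rough illustration of that measurement-gated key release, here is a minimal sketch. The names (&lt;code&gt;measure&lt;/code&gt;, &lt;code&gt;release_key&lt;/code&gt;, &lt;code&gt;AUTHORIZED&lt;/code&gt;) are invented for this example, and a plain SHA-256 stands in for the hardware TEE quote that dstack actually verifies:&lt;/p&gt;

```python
# Illustrative sketch only: all names here are invented, and a SHA-256 over the
# workload bytes stands in for the hardware-backed measurement a real TEE emits.
import hashlib

def measure(workload: bytes) -> str:
    # Stand-in for the cryptographic fingerprint of the workload at this moment.
    return hashlib.sha256(workload).hexdigest()

# Measurements authorized ahead of time (in dstack's model, registered on-chain).
AUTHORIZED = {measure(b"app-v1")}

def release_key(workload: bytes):
    # Keys are released only if the measurement matches an authorized value.
    # Any change to the code yields a different measurement, and nothing
    # is released. No exceptions.
    if measure(workload) in AUTHORIZED:
        return "decryption-key"
    return None
```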

&lt;p&gt;&lt;strong&gt;What Phala.com Brings to the Table Beyond the Cloud Providers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hardware-secured environments are something AWS and Google both offer. What Phala adds through dstack is a policy layer that sits above all of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsspindt42qjibbmvszwf.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsspindt42qjibbmvszwf.jpeg" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rather than keeping authorization decisions buried inside your infrastructure config, dstack lets you register approved workload measurements on-chain. That means any change to what your system trusts is recorded, visible, and traceable by anyone who needs to verify it. Governance over your trust model gets lifted out of the infrastructure layer entirely, which is meaningful when you are running sensitive workloads across multiple teams or need to demonstrate compliance to someone outside your organization. And because that policy layer is not tied to any single cloud backend, it works consistently whether you are on AWS, Google Cloud, or Phala’s own network.&lt;br&gt;
This matters most for AI infrastructure. Model weights, inference inputs, API credentials, and proprietary business logic running inside a model server all need protection while the application is actively using them, not just while they are sitting in a database. Most encryption handles the storage problem. dstack handles the runtime problem, and it does it in a way that does not chain you to one cloud provider’s implementation of secure compute.&lt;br&gt;
The dstack repository is public and worth exploring if you are building anything in this space. Phala has put real work into making confidential compute portable in a way that holds up past the demo stage, and the GitHub documentation walks through both the Google Cloud and AWS deployment paths in practical detail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interested in building with dstack? Visit the official &lt;a href="https://phala.com/" rel="noopener noreferrer"&gt;https://phala.com/&lt;/a&gt; site and explore the dstack GitHub repository to dig into the code and deployment guides.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>security</category>
      <category>api</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Meet Clawdi v2.0: The Missing Layer for Multi-Agent Workflows</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Wed, 29 Apr 2026 23:08:43 +0000</pubDate>
      <link>https://dev.to/soulman_250/meet-clawdi-v20-the-missing-layer-for-multi-agent-workflows-1o35</link>
      <guid>https://dev.to/soulman_250/meet-clawdi-v20-the-missing-layer-for-multi-agent-workflows-1o35</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note: Adapted from the official Clawdi announcement post.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Clawdi v2.0 is here, and if you have been running AI agents across different projects and devices, this one is worth paying attention to. The core idea is simple: one installation that gives all your agents access to the same memory, API keys, skills, and files, no matter what device you are on. Think of it the way you think about iCloud syncing your phone and laptop without you having to manage anything manually. Clawdi does that, but for your agents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Actually Means for Your Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right now, if you run OpenClaw, Hermes, Claude Code, and Codex on the same project, each agent typically operates in its own bubble. You end up duplicating context, re-entering credentials, and keeping separate configurations in sync by hand. Clawdi v2.0 removes that friction entirely. Install it once, and every agent you run pulls from a shared layer of memory, keys, skills, and files. They are all working from the same foundation, which means less setup time and fewer mistakes from agents operating on stale or incomplete information.&lt;/p&gt;
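
&lt;p&gt;Conceptually, the shared layer means every agent reads and writes one common store instead of keeping private state. A toy sketch (class and method names invented for illustration, not Clawdi&#8217;s actual API):&lt;/p&gt;

```python
# Toy sketch of a shared context layer for multiple agents.
# All names here are invented for illustration; Clawdi's real interface differs.
class SharedLayer:
    def __init__(self):
        self.memory = {}  # shared memory entries, keyed by topic

    def remember(self, topic: str, note: str):
        self.memory.setdefault(topic, []).append(note)

    def recall(self, topic: str):
        return self.memory.get(topic, [])

class Agent:
    # Every agent points at the same layer, so context is never duplicated
    # and never lost when you switch agents.
    def __init__(self, name: str, layer: SharedLayer):
        self.name = name
        self.layer = layer

layer = SharedLayer()
claude = Agent("claude-code", layer)
codex = Agent("codex", layer)

claude.layer.remember("project", "uses TypeScript strict mode")
# The other agent sees the same context immediately, no re-entry needed:
notes = codex.layer.recall("project")
```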

&lt;p&gt;&lt;strong&gt;Why the Security Side Actually Matters&lt;/strong&gt;&lt;br&gt;
Clawdi runs on Phala Network’s TEE-secured cloud infrastructure. A Trusted Execution Environment means your data is encrypted during processing, not just when it is stored or transferred. No one outside that secure environment, not even the cloud provider, can see what is happening inside it. For developers handling API keys, sensitive project files, or proprietary logic across multiple agents, that is not a minor detail. It means you get the convenience of shared cloud infrastructure without giving up control over what stays private. Everything is encrypted by default, so you do not have to configure anything special to get that protection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started&lt;/strong&gt;&lt;br&gt;
If you are already building with OpenClaw or any of the supported agents, the upgrade path is straightforward. Install Clawdi v2.0 once on your device, connect your agents, and the shared layer is ready. You can explore the full setup at &lt;a href="https://www.clawdi.ai/" rel="noopener noreferrer"&gt;https://www.clawdi.ai/&lt;/a&gt; and follow Clawdi on X for updates as the ecosystem keeps growing.&lt;/p&gt;

&lt;p&gt;This is the kind of coordination layer that makes running multiple agents feel like running one well-organized system rather than several disconnected ones.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>security</category>
      <category>agents</category>
    </item>
    <item>
      <title>Phala Cloud Expands Its Model Lineup With Over 200 Options, All Running in Private Infrastructure</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Thu, 23 Apr 2026 12:14:38 +0000</pubDate>
      <link>https://dev.to/soulman_250/phala-cloud-expands-its-model-lineup-with-over-200-options-all-running-in-private-infrastructure-3php</link>
      <guid>https://dev.to/soulman_250/phala-cloud-expands-its-model-lineup-with-over-200-options-all-running-in-private-infrastructure-3php</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note: Adapted from the official X Phala Network announcement. Check here: &lt;a href="https://x.com/phalanetwork/status/2046629390889754758" rel="noopener noreferrer"&gt;https://x.com/phalanetwork/status/2046629390889754758&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7krqsi6p7u92ndp8jzf3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7krqsi6p7u92ndp8jzf3.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;br&gt;
If you’ve been building on Phala Cloud, there’s a meaningful update worth knowing about. Since late February, a new set of models has been added to the platform, and the range covers a lot of ground, from multimodal reasoning to heavy-duty coding agents to large-scale inference workloads. Whether you’re a solo developer, a team shipping a product, or an institution that needs to keep inference data locked down, there’s likely something in here relevant to what you’re building.&lt;br&gt;
More capable models are now available, and they all run the same way everything else on Phala Cloud runs: inside a TEE-secured environment that keeps your data private by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Models Were Added&lt;/strong&gt;&lt;br&gt;
Six notable additions landed since late February.&lt;br&gt;
1️⃣ MoonshotAI’s Kimi K2.6 brings multimodal capability and is well suited for multi-agent workflows and coding-driven UI generation. Check here: &lt;a href="https://www.redpill.ai/models/moonshotai/kimi-k2.6" rel="noopener noreferrer"&gt;https://www.redpill.ai/models/moonshotai/kimi-k2.6&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2️⃣ Qwen3 Coder Next from Qwen is built specifically for coding agents and local development, using a sparse architecture that keeps things efficient. Check here: &lt;a href="https://www.redpill.ai/models/qwen/qwen3-coder-next" rel="noopener noreferrer"&gt;https://www.redpill.ai/models/qwen/qwen3-coder-next&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3️⃣ GLM 5.1 from Z.ai is a strong pick for complex, long-running engineering tasks where you need a model that can hold context and stay useful across a full workflow. Check here: &lt;a href="https://www.redpill.ai/models/z-ai/glm-5.1" rel="noopener noreferrer"&gt;https://www.redpill.ai/models/z-ai/glm-5.1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4️⃣ On the larger end, Xiaomi’s MiMo-V2-Flash is a 309 billion parameter model using a Mixture-of-Experts setup, which makes it practical for demanding workloads without burning through compute unnecessarily. Check here: &lt;a href="https://www.redpill.ai/models/xiaomi/mimo-v2-flash" rel="noopener noreferrer"&gt;https://www.redpill.ai/models/xiaomi/mimo-v2-flash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5️⃣ Qwen3.5 397B A17B is a vision-language model combining two different architectural approaches to deliver solid performance across reasoning, coding, and interface interactions. Check here: &lt;a href="https://www.redpill.ai/models/qwen/qwen3.5-397b-a17b" rel="noopener noreferrer"&gt;https://www.redpill.ai/models/qwen/qwen3.5-397b-a17b&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6️⃣ Qwen3.5-27B rounds things out as a lighter, faster option that still holds its own against much larger models in most practical tasks. Check here: &lt;a href="https://www.redpill.ai/models/qwen/qwen3.5-27b" rel="noopener noreferrer"&gt;https://www.redpill.ai/models/qwen/qwen3.5-27b&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why the Infrastructure Behind These Models Matters&lt;/strong&gt;&lt;br&gt;
Adding new models to a platform is straightforward. What’s less common is running all of them inside hardware-level private infrastructure. Every model on Phala Cloud runs inside a Trusted Execution Environment, secured by Intel TDX and NVIDIA Confidential Computing. That means the computation itself is isolated and verifiable, not just the data in transit.&lt;/p&gt;

&lt;p&gt;For individual developers, this means you can build and test against powerful models without worrying about your prompts or outputs being exposed. For teams and companies, it means your inference workload doesn’t become a liability. For institutions handling regulated or sensitive data, it means you can actually use these models in production without a long conversation with your compliance team first.&lt;/p&gt;

&lt;p&gt;This is the core of what Phala Cloud is doing differently. It’s not just a hosted model catalog. It’s a place where the privacy guarantee is built into the hardware layer, not bolted on afterward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Get Started&lt;/strong&gt;&lt;br&gt;
The full catalog currently includes over 200 models, and the six highlighted above are a good starting point depending on what you’re building. If you’re working on a coding agent, Qwen3 Coder Next is worth testing first. If you need multimodal capability or multi-agent coordination, Kimi K2.6 is the natural starting point. For large-scale or institution-level workloads, MiMo-V2-Flash gives you serious capacity with the same privacy guarantees as everything else on the platform.&lt;/p&gt;

&lt;p&gt;You can explore the full model list and start building at Phala Cloud today. The infrastructure is already running. You just need to bring the idea.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check the full list here: &lt;a href="https://www.redpill.ai/models" rel="noopener noreferrer"&gt;https://www.redpill.ai/models&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>web3</category>
      <category>developer</category>
      <category>phala</category>
    </item>
    <item>
      <title>OpenClaw Gets Things Done. Hermes Gets Better at Getting Things Done.</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Thu, 23 Apr 2026 12:02:26 +0000</pubDate>
      <link>https://dev.to/soulman_250/openclaw-gets-things-done-hermes-gets-better-at-getting-things-done-j1i</link>
      <guid>https://dev.to/soulman_250/openclaw-gets-things-done-hermes-gets-better-at-getting-things-done-j1i</guid>
      <description>&lt;p&gt;Note: This article is Adapted from the official Phala Network / Clawdi blog. Original content published at clawdi.ai&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaek40pogfvrbknhq2bu.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaek40pogfvrbknhq2bu.jpeg" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;br&gt;
There are two agents available on Clawdi right now, and they work in fundamentally different ways. OpenClaw is built for execution. You give it a task, it completes the task. Clean, direct, reliable. Hermes takes a different approach. It does the work, but when it finishes, it pauses and evaluates what just happened. What worked, what did not, and how it could do it better. Then it writes a skill document from that experience and carries it forward. The next time a similar task comes up, Hermes already has something to reference. It is not starting from scratch every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why That Distinction Actually Matters&lt;/strong&gt;&lt;br&gt;
Most tools do not improve through use. You run them, they return a result, and that is the end of the interaction. Hermes is different because it is building a record of its own performance over time. Each completed task adds to what it knows. That means the more you use it, the more capable it becomes at handling the kinds of work you keep throwing at it. It is a practical form of learning built directly into how the agent operates, not a feature added on top.&lt;/p&gt;
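
&lt;p&gt;A minimal sketch of that learn-from-each-task loop (all names invented for illustration, not Hermes&#8217;s real implementation):&lt;/p&gt;

```python
# Illustrative sketch of an agent that writes a "skill document" after each
# task and consults it on the next similar task. Names are invented; this is
# not Hermes's actual code.
class LearningAgent:
    def __init__(self):
        self.skills = {}  # task kind -> list of lessons carried forward

    def run(self, kind: str, task: str) -> list:
        # Reference what was learned from previous tasks of this kind.
        prior = list(self.skills.get(kind, []))
        # ... do the work, then pause and evaluate what just happened ...
        lesson = f"completed '{task}' knowing {len(prior)} prior lessons"
        self.skills.setdefault(kind, []).append(lesson)
        return prior

agent = LearningAgent()
first = agent.run("deploy", "ship v1")   # starts from scratch: no prior skills
second = agent.run("deploy", "ship v2")  # now has the v1 lesson to reference
```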

&lt;p&gt;&lt;strong&gt;Where It Runs and Why That Is Worth Knowing&lt;/strong&gt;&lt;br&gt;
Both agents run inside a TEE-secured cloud environment on Clawdi. TEE stands for Trusted Execution Environment, which is a sealed space where your data is processed without being exposed outside of it. Private by default. Today, the Stock Expert agent was deployed on the platform, running inside that same secure setup. You are not trading privacy for capability here. The infrastructure handles both. That combination, agents that learn and infrastructure that protects your data, is what makes the Agent Store on Clawdi worth paying attention to.&lt;/p&gt;

&lt;p&gt;If you want to see how it works for yourself, head over to &lt;a href="https://www.clawdi.ai/" rel="noopener noreferrer"&gt;https://www.clawdi.ai/&lt;/a&gt; and explore the Agent Store. The Stock Expert agent is live and ready to use.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>developer</category>
      <category>agents</category>
    </item>
    <item>
      <title>Why Clawdi Keeps Your Agent Setup Intact</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:51:08 +0000</pubDate>
      <link>https://dev.to/soulman_250/why-clawdi-keeps-your-agent-setup-intact-1d4p</link>
      <guid>https://dev.to/soulman_250/why-clawdi-keeps-your-agent-setup-intact-1d4p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Adapted from the official Clawdi/Phala Network announcement. Originally published at clawdi.ai.&lt;/strong&gt;&lt;br&gt;
If you’ve been building with AI agents for a while, you’ve probably run into the same problem: every time you switch frameworks, you lose everything. Your memory, your connections, your scheduled jobs, your custom skills. It all has to be rebuilt from scratch because it was tied to the agent, not to a workspace you actually own.&lt;br&gt;
Clawdi solves this in a way that’s worth understanding properly, because it changes how you think about agent infrastructure entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s Actually Happening Under the Hood&lt;/strong&gt;&lt;br&gt;
When you run OpenClaw or Hermes on Clawdi, the agent itself is not where your data lives. Phala Network’s Intel TDX confidential virtual machine is. That’s the encrypted environment where your MCP servers, memory, skills, connections, and cron jobs are all stored. The agent is just the layer that interacts with them. The workspace is the thing that actually holds your setup together.&lt;br&gt;
Because that workspace runs inside a TEE-secured cloud environment, everything in it stays private by default. No outside access, no exposure at the infrastructure level. It’s not something you have to configure or turn on. It’s just how it works from the start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Switching Frameworks No Longer Means Starting Over&lt;/strong&gt;&lt;br&gt;
This separation between the agent and the workspace is what makes framework switching painless. Since your memory, tools, and configuration all live in the encrypted workspace rather than inside a specific agent, you can move from OpenClaw to Hermes, or the other way around, without rebuilding anything. Your setup follows the workspace, not the agent you happened to be using.&lt;br&gt;
For developers who are still deciding which framework fits their workflow, or who want the flexibility to change direction later, this is a meaningful shift. You’re not locked in by the infrastructure. You can explore, iterate, and switch without the cost of starting over every time.&lt;/p&gt;

&lt;p&gt;If you want to explore Clawdi or try it yourself, head to &lt;a href="https://www.clawdi.ai/" rel="noopener noreferrer"&gt;https://www.clawdi.ai/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>security</category>
      <category>agents</category>
    </item>
    <item>
      <title>Hermes Is Now on Clawdi. Here Is What That Changes for Developers</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Thu, 23 Apr 2026 11:45:27 +0000</pubDate>
      <link>https://dev.to/soulman_250/hermes-is-now-on-clawdi-here-is-what-that-changes-for-developers-1n77</link>
      <guid>https://dev.to/soulman_250/hermes-is-now-on-clawdi-here-is-what-that-changes-for-developers-1n77</guid>
      <description>&lt;p&gt;Note: Adapted from the official Clawdi announcement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cjv7awr9l07tab4jhz6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cjv7awr9l07tab4jhz6.webp" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
If you’ve ever tried running an AI agent in a real workflow, you already know the gap between “it works on my machine” and “it actually runs every day.” Most agent frameworks are genuinely capable. The problem is everything around them. The setup, the infrastructure, the API connections that break, the maintenance that quietly eats your time. That gap is exactly what this update closes. Hermes, the AI agent framework built for long-running tasks and self-improving behavior, is now available on Clawdi. And the way you access it has changed significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Running Hermes on Clawdi Actually Looks Like&lt;/strong&gt;&lt;br&gt;
You no longer need to manage the environment yourself. Clawdi handles the infrastructure, scheduling, memory, state, and integrations so Hermes can focus on doing the actual work. That means you can run it in the cloud, keep it connected to your tools, and have it persist across sessions without touching a server or resetting your setup. It also runs inside a TEE-secured cloud environment, which means your tasks and data stay private by default. You are not exposing sensitive workflows to an open system. For developers building on top of agent frameworks, that kind of baseline trust matters. The integrations side is worth noting too. Clawdi now supports over 500 app connections, along with messaging channels and built-in scheduling, all of which stay live whether you are running Hermes, OpenClaw, or both.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Having Both Frameworks in One Place Matters&lt;/strong&gt;&lt;br&gt;
OpenClaw and Hermes are good at different things, and that is by design. OpenClaw handles multi-app workflows and automation well. Hermes is built more for tasks that need to run continuously, adapt over time, and operate with less intervention. On Clawdi, both run in the same environment. That means your integrations stay connected regardless of which framework you are using, your workflows persist, and you do not lose your setup every time you switch between them. In practice, this removes one of the most common reasons developers stop using agent tools consistently. It is not that the tools stop being useful. It is that rebuilding context every time you change tools is exhausting. Having both available in a shared environment with shared memory and state cuts that friction out entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What You Can Start Doing With It Now&lt;/strong&gt;&lt;br&gt;
Once Hermes is running inside Clawdi, it works as a real part of your workflow rather than a demo you spin up occasionally. You can use it to monitor tasks across tools, handle recurring processes, run background jobs, and stay on top of updates without checking in manually. The practical difference here is persistence. Most agent setups require you to be present for things to keep moving. With Clawdi handling the infrastructure layer, Hermes keeps running whether you are there or not. Setup takes a few minutes, and you can start experimenting right away at &lt;a href="https://www.clawdi.ai/" rel="noopener noreferrer"&gt;https://www.clawdi.ai/&lt;/a&gt;&lt;br&gt;
Read the full details in the original blog post this article is adapted from: &lt;a href="https://www.clawdi.ai/blog/you-can-now-use-hermes-without-setting-it-up" rel="noopener noreferrer"&gt;https://www.clawdi.ai/blog/you-can-now-use-hermes-without-setting-it-up&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>With Clawdi.ai, You Don’t Have to Choose Between Hermes or Openclaw</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Wed, 15 Apr 2026 22:23:59 +0000</pubDate>
      <link>https://dev.to/soulman_250/with-clawdiai-you-dont-have-to-choose-between-hermes-or-openclaw-4079</link>
      <guid>https://dev.to/soulman_250/with-clawdiai-you-dont-have-to-choose-between-hermes-or-openclaw-4079</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;strong&gt;This article is adapted from the official Clawdi post at clawdi.ai.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vxtcmdhqv7ni6cl6pqe.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vxtcmdhqv7ni6cl6pqe.jpeg" alt=" " width="800" height="459"&gt;&lt;/a&gt;&lt;br&gt;
If you’ve been building AI agents and wondering whether to go with Hermes or OpenClaw, Clawdi removes that question entirely. It supports both, so you can work with whichever framework fits your use case without locking yourself into one direction. That kind of flexibility matters when you’re still figuring out what works best for a given project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills That Grow and Memory That Actually Works&lt;/strong&gt;&lt;br&gt;
One of the more practical things about Clawdi is that the skills it comes with don’t stay fixed. They evolve the more you use them, which means the tool gets more useful over time rather than hitting a ceiling early. The memory works the same way you’d want it to. It holds context across sessions, so you’re not starting from scratch every time or repeating information you’ve already given it. For anyone who has dealt with AI tools that forget everything the moment a session ends, this is a meaningful difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private by Default, Ready in 10 Minutes&lt;/strong&gt;&lt;br&gt;
Getting started takes about 10 minutes and there’s nothing to set up on the infrastructure side. No servers, no complex configuration, you just get in and start working. Everything runs in Phala’s TEE-secured cloud, which means your data is private by default. That’s not a setting buried in a menu somewhere, it’s just how it works out of the box. If you’re building agents that handle sensitive workflows, that baseline matters more than most tools acknowledge.&lt;/p&gt;

&lt;p&gt;You can check it out at &lt;a href="https://www.clawdi.ai/" rel="noopener noreferrer"&gt;https://www.clawdi.ai/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
      <category>agents</category>
    </item>
    <item>
      <title>Reasons Why You Should Check Agent Stock Expert That’s Recently Launched On Clawdi.ai</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Wed, 15 Apr 2026 22:15:03 +0000</pubDate>
      <link>https://dev.to/soulman_250/reasons-why-you-should-check-agent-stock-expert-thats-recently-launched-on-clawdiai-mmj</link>
      <guid>https://dev.to/soulman_250/reasons-why-you-should-check-agent-stock-expert-thats-recently-launched-on-clawdiai-mmj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;This article is adapted from the official OpenClaw announcement. Read the full blog post here: &lt;a href="https://www.clawdi.ai/" rel="noopener noreferrer"&gt;https://www.clawdi.ai/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo81mn1i8rhv0dowjt45g.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo81mn1i8rhv0dowjt45g.webp" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
OpenClaw just launched the Agent Store on Clawdi, and the first agent available is Stock Expert. If you’ve ever spent 20 minutes jumping between charts, filings, news tabs, and analyst takes just to form a basic view on a stock, this is built for that exact problem. You ask it a question, it does the legwork, and you get back something you can actually work with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What the Stock Expert Agent Does&lt;/strong&gt;&lt;br&gt;
You don’t need to think about what’s running under the hood to use it, but here’s what’s actually happening. When you ask it to analyze a stock, it pulls real-time price data, looks at recent filings and announcements, checks technical signals like trend and momentum, evaluates revenue and earnings growth, and reviews financial health. It then puts all of that together into a structured output covering valuation, risks, and a suggested strategy. The goal isn’t to give you more information. It’s to give you a clear picture so you can decide what to do next without having to connect the dots yourself.&lt;/p&gt;
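&lt;p&gt;The aggregation pattern described above, several independent data pulls combined into one structured report, can be sketched in a few lines of Python. This is a hypothetical illustration with stubbed data sources, not Clawdi’s actual implementation; none of the function names or values come from the platform.&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class StockReport:
    """Structured output combining the separate data pulls."""
    ticker: str
    price: float
    trend: str
    revenue_growth: float
    risks: list = field(default_factory=list)
    summary: str = ""

# Stubbed sources: the real agent would query live market data,
# recent filings, and computed technical signals instead.
def fetch_price(ticker):
    return {"AAPL": 187.5}.get(ticker, 100.0)

def fetch_trend(ticker):
    return "upward"

def fetch_revenue_growth(ticker):
    return 0.08

def analyze(ticker):
    # Pull each input independently, then merge into one report.
    price = fetch_price(ticker)
    trend = fetch_trend(ticker)
    growth = fetch_revenue_growth(ticker)
    risks = [] if trend == "upward" else ["negative momentum"]
    summary = (f"{ticker}: price {price}, trend {trend}, "
               f"revenue growth {growth:.0%}")
    return StockReport(ticker, price, trend, growth, risks, summary)

print(analyze("AAPL").summary)
```

&lt;p&gt;The point of the sketch is the shape of the work, not the stub values: each source is fetched on its own, and the value the agent adds is connecting them into a single structured view.&lt;/p&gt;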

&lt;p&gt;&lt;strong&gt;Why This Is Worth Paying Attention To&lt;/strong&gt;&lt;br&gt;
Most research tools either dump data on you or hand you someone else’s opinion. Neither saves you much time. What makes this different is that it runs inside Phala’s TEE infrastructure, meaning the analysis happens in a private, verifiable environment. For anyone building financial tools or working in an institutional context where data handling matters, that’s a meaningful detail. You’re not routing sensitive research through a black box. You’re working inside infrastructure designed to keep that process clean and auditable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Get Started&lt;/strong&gt;&lt;br&gt;
There’s no setup involved. You sign up on Clawdi, go to the agent catalog, find Stock Expert, click Add, and start asking questions. Prompts like “Analyze AAPL,” “What changed for TSLA this week,” or “Is NVDA overvalued right now” are enough to get something useful back. It won’t make decisions for you and it won’t eliminate risk, but it will cut down the time it takes to go from not knowing what’s going on with a stock to having a clear view of it.&lt;/p&gt;

&lt;p&gt;The Agent Store is live now at clawdi.ai. It’s worth checking out.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>devops</category>
      <category>agents</category>
    </item>
    <item>
      <title>There is More in Clawdi AI than Just Using the Chat Screen</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Thu, 09 Apr 2026 21:47:39 +0000</pubDate>
      <link>https://dev.to/soulman_250/there-is-more-in-clawdi-ai-than-just-using-the-chat-screen-53l9</link>
      <guid>https://dev.to/soulman_250/there-is-more-in-clawdi-ai-than-just-using-the-chat-screen-53l9</guid>
      <description>&lt;p&gt;Note: Adapted from the official Clawdi/OpenClaw blog post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk65m6v5s8fgubv85k5lr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk65m6v5s8fgubv85k5lr.jpeg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most people who sign up for Clawdi follow the same pattern. They open the dashboard, start a conversation with their agent, maybe adjust a system message or tweak a prompt, and call it a day. That works well enough as a starting point, but it leaves most of the platform untouched. The features that actually make OpenClaw useful for serious work are sitting one or two clicks away from that chat screen, and the majority of users never find them.&lt;/p&gt;

&lt;p&gt;There are three things worth knowing about: the Control UI, agent templates, and per-agent channel bindings. Each one solves a real problem that shows up the moment you try to do something more than basic conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What the Control UI Actually Gives You&lt;/strong&gt;&lt;br&gt;
The Control UI is a live window into your agent’s environment. Inside it you’ll find a Files tab where you can browse and edit the agent’s workspace in a built-in code editor, a Logs tab that streams your gateway and deployment activity in real time, and for Pro and Max users, a Terminal tab that gives you a full shell directly into the container your agent is running in.&lt;br&gt;
What this means in practice is that you can tail the logs while you’re actively chatting with your agent, so when something feels off you can see exactly what’s happening instead of guessing. You can install tools, run commands, and test things directly inside the environment without rebuilding anything. You can edit files and configuration in place. If you’ve been using Clawdi without ever opening the Control UI, you’ve been working without visibility into what your agent is actually doing, and that makes debugging significantly harder than it needs to be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Templates and Channel Bindings Clean Up the Rest&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm8182nf3a8lyznzxj3s.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm8182nf3a8lyznzxj3s.jpeg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Agent templates exist because most people waste time rebuilding the same setup from scratch every time they create a new agent. You pick a model, choose a reasoning mode, decide which skills to include, figure out how the execution layer should be configured, and by the time you’re done you’ve spent twenty minutes on decisions that don’t change much between agents. Templates bundle all of that. You get a pre-configured model and reasoning profile, a default set of skills already connected, and settings that reflect how production agents are actually run. You pick the template closest to what you need, rename it, adjust a couple of things, and you’re live. It removes a lot of friction and reduces the kind of small mistakes that come from manually copying configs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Per-Agent Channel Bindings Solve&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6o00x8qbryaqweaxzc0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6o00x8qbryaqweaxzc0.jpeg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
Per-agent channel bindings solve a different problem that shows up once you’re running more than one agent. Each agent can have its own dedicated connection to Telegram, Discord, Slack, or whichever platform your team uses, so they stay separate and observable. There’s also a config side panel inside the chat view that shows you exactly which model the agent is running, which channels it’s connected to, and which skills are attached. You no longer have to dig through configuration files to understand what you’re talking to. You can open the chat, look at the panel, and know immediately, then adjust on the spot if needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Actually Start Using More of It&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The practical shift here is treating your agent as an environment you can see and operate rather than a chat interface you talk at. Open your agent and click into the Control UI. Look at the logs while you have a conversation. If you’re on Pro or Max, open the Terminal and run something simple just to get familiar with it. When you create your next agent, start from a template instead of from scratch. Give each agent its own channel connection so different workflows stay clean and separate.&lt;br&gt;
That progression from chat-only to full environment access is what separates a basic Clawdi setup from one that actually works like infrastructure. The full step-by-step guide covering each of these features is linked below if you want to walk through the setup properly.&lt;/p&gt;

&lt;p&gt;Read the full guide here: &lt;a href="https://www.clawdi.ai/blog/only-20-percent-of-openclaw-unlocked" rel="noopener noreferrer"&gt;https://www.clawdi.ai/blog/only-20-percent-of-openclaw-unlocked&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>devops</category>
      <category>agents</category>
    </item>
    <item>
      <title>Hardening AI agents with hardware-level security</title>
      <dc:creator>Soulman </dc:creator>
      <pubDate>Tue, 31 Mar 2026 15:46:03 +0000</pubDate>
      <link>https://dev.to/soulman_250/hardening-ai-agents-with-hardware-level-security-mm4</link>
      <guid>https://dev.to/soulman_250/hardening-ai-agents-with-hardware-level-security-mm4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdo6sr00ioovly0ubwr6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdo6sr00ioovly0ubwr6.jpeg" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
Most developers recognize the inherent risk in deploying AI agents that handle sensitive API keys or private customer data. Traditional cloud environments often leave this information vulnerable to the infrastructure provider or external breaches. OpenClaw addresses this by running entirely within Phala’s Trusted Execution Environments, which are secure enclaves built directly into the processor. This architectural choice moves security away from "trusted" policies and into the physical hardware, ensuring your agent's execution is isolated and verifiable. Source: &lt;a href="https://phala.com/posts/erc-8004-launch" rel="noopener noreferrer"&gt;https://phala.com/posts/erc-8004-launch&lt;/a&gt;&lt;br&gt;
Further reading: &lt;a href="https://the-scarlet-thread.medium.com/why-do-ai-agents-need-tees-how-does-phala-network-fit-in-9a9464674e54" rel="noopener noreferrer"&gt;Why Do AI Agents Need TEEs? How Does Phala Network Fit In&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The power of the Phala TEE stack&lt;/strong&gt;&lt;br&gt;
By leveraging Phala’s TEE technology, Clawdi.ai creates a secure vault for your data that is invisible even to the host machines. This setup uses memory encryption and isolated execution to ensure that sensitive operations, like managing private keys or processing proprietary datasets, remain completely confidential. It effectively solves the trust issue between the developer and the cloud provider, as the TEE provides cryptographic proof that the code is running exactly as intended without any unauthorized interference. Source: &lt;a href="https://www.panewslab.com/en/articles/d566ft503z4v" rel="noopener noreferrer"&gt;https://www.panewslab.com/en/articles/d566ft503z4v&lt;/a&gt;&lt;br&gt;
&lt;a href="https://x.com/phalanetwork/status/2038981366126129339?s=46" rel="noopener noreferrer"&gt;https://x.com/phalanetwork/status/2038981366126129339?s=46&lt;/a&gt;&lt;/p&gt;
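&lt;p&gt;The core idea behind that cryptographic proof can be illustrated with a small hypothetical sketch: the verifier compares the code measurement reported from the enclave against the hash it expects. Real TEE attestation involves hardware-signed quotes checked against vendor certificates and is considerably more involved; this only shows the measurement-comparison step, and all names and values here are made up for illustration.&lt;/p&gt;

```python
import hashlib
import hmac

def expected_measurement(code: bytes) -> str:
    """Hash of the code image the verifier expects to be running."""
    return hashlib.sha256(code).hexdigest()

def verify(reported: str, code: bytes) -> bool:
    # Constant-time comparison of the enclave-reported measurement
    # against the locally computed expected hash.
    return hmac.compare_digest(reported, expected_measurement(code))

agent_code = b"my-agent-image-v1"            # stand-in for the deployed image
quote = expected_measurement(agent_code)     # stand-in for the enclave's report

print(verify(quote, agent_code))             # True: code is what was deployed
print(verify(quote, b"tampered-image"))      # False: code was modified
```

&lt;p&gt;The asymmetry is the point: the developer never has to trust the host’s word, because any change to the running code changes the measurement and the check fails.&lt;/p&gt;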

&lt;p&gt;&lt;strong&gt;Rapid deployment for institutions&lt;/strong&gt;&lt;br&gt;
For institutions and developers who need to balance speed with high security, this platform offers a streamlined path to production. You can move from a standard Docker application to a hardware-secured deployment in about three minutes for $29 a month. This approach provides a practical way to achieve enterprise-grade privacy and data sovereignty without the complexity of building custom confidential computing infrastructure from scratch.&lt;br&gt;
&lt;strong&gt;🔗 Useful links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Website: &lt;a href="https://phala.com/" rel="noopener noreferrer"&gt;https://phala.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Clawdi ai: &lt;a href="https://www.clawdi.ai/" rel="noopener noreferrer"&gt;https://www.clawdi.ai/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Telegram: &lt;a href="https://t.me/phalanetwork" rel="noopener noreferrer"&gt;https://t.me/phalanetwork&lt;/a&gt; (very active)&lt;/li&gt;
&lt;li&gt;X: &lt;a href="https://x.com/phalanetwork" rel="noopener noreferrer"&gt;https://x.com/phalanetwork&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/phala-network" rel="noopener noreferrer"&gt;https://discord.gg/phala-network&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>ai</category>
      <category>tee</category>
    </item>
  </channel>
</rss>
