<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 李李泽</title>
    <description>The latest articles on DEV Community by 李李泽 (@_1716c6e3e44bde0bff67a).</description>
    <link>https://dev.to/_1716c6e3e44bde0bff67a</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3753218%2F23f469fe-65b9-4e45-9894-1d20daab719c.png</url>
      <title>DEV Community: 李李泽</title>
      <link>https://dev.to/_1716c6e3e44bde0bff67a</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_1716c6e3e44bde0bff67a"/>
    <language>en</language>
    <item>
      <title>Building Open Infrastructure for AI Agents: Identity, Storage, Teams, and Self-Hosting</title>
      <dc:creator>李李泽</dc:creator>
      <pubDate>Sun, 12 Apr 2026 02:59:06 +0000</pubDate>
      <link>https://dev.to/_1716c6e3e44bde0bff67a/building-open-infrastructure-for-ai-agents-identity-storage-teams-and-self-hosting-4b70</link>
      <guid>https://dev.to/_1716c6e3e44bde0bff67a/building-open-infrastructure-for-ai-agents-identity-storage-teams-and-self-hosting-4b70</guid>
      <description>&lt;p&gt;When people talk about AI agents, the conversation usually centers on prompting, workflows, tools, and orchestration frameworks.&lt;/p&gt;

&lt;p&gt;That makes sense. Those are the visible parts.&lt;/p&gt;

&lt;p&gt;But once you try to make agents persistent, collaborative, and actually usable over time, a different set of problems shows up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How does an agent keep a long-lived identity?&lt;/li&gt;
&lt;li&gt;How do services trust that identity?&lt;/li&gt;
&lt;li&gt;Where do files live?&lt;/li&gt;
&lt;li&gt;How do multiple agents share a team boundary?&lt;/li&gt;
&lt;li&gt;How do they collaborate without hardcoded credentials everywhere?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've been building &lt;strong&gt;Hivo&lt;/strong&gt; to explore that layer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/zhiyuzi/Hivo" rel="noopener noreferrer"&gt;https://github.com/zhiyuzi/Hivo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The problem I wanted to solve&lt;/h2&gt;

&lt;p&gt;Most agent demos start from the top of the stack. They show planning, reasoning, browsing, and tool use.&lt;/p&gt;

&lt;p&gt;But in practice, once you want an agent system to survive beyond a single run, you need infrastructure. Not just memory, but infrastructure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identity&lt;/li&gt;
&lt;li&gt;authorization&lt;/li&gt;
&lt;li&gt;storage&lt;/li&gt;
&lt;li&gt;team membership&lt;/li&gt;
&lt;li&gt;collaboration primitives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without those pieces, many multi-agent systems stay stuck as demos, because every real workflow ends up reinventing them in an ad hoc way.&lt;/p&gt;

&lt;h2&gt;What Hivo includes today&lt;/h2&gt;

&lt;p&gt;Hivo is an open, self-hostable suite of microservices for AI agents.&lt;/p&gt;

&lt;p&gt;Right now the repo includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;hivo-identity&lt;/code&gt; for identity registration, JWT issuance and refresh, JWKS, and OIDC discovery&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hivo-acl&lt;/code&gt; for cross-service authorization&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hivo-drop&lt;/code&gt; for file upload, download, listing, and sharing&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hivo-club&lt;/code&gt; for teams, orgs, roles, memberships, and invite links&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hivo-salon&lt;/code&gt; for group messaging, mentions, bulletin updates, and shared files&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@hivoai/cli&lt;/code&gt; as a unified CLI distributed through npm and GitHub Releases&lt;/li&gt;
&lt;/ul&gt;
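
&lt;p&gt;Because &lt;code&gt;hivo-identity&lt;/code&gt; exposes JWKS and OIDC discovery, any relying service can locate the issuer's signing keys from the issuer URL alone. A minimal sketch of that lookup (the issuer URL below is a placeholder, not a real Hivo endpoint; the well-known path comes from the OIDC Discovery spec):&lt;/p&gt;

```python
# Sketch: locating an issuer's signing keys via standard OIDC discovery.
# The issuer URL is an assumed self-hosted deployment, not a Hivo default.
def discovery_url(issuer):
    # OIDC Discovery: provider metadata lives at a fixed well-known path
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def jwks_url(metadata):
    # the metadata document advertises where the JWKS lives
    return metadata["jwks_uri"]

issuer = "https://identity.example.internal"  # placeholder issuer
print(discovery_url(issuer))
print(jwks_url({"jwks_uri": issuer + "/jwks.json"}))
```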

&lt;p&gt;The goal is not to replace orchestration frameworks. The goal is to provide the infrastructure layer underneath them.&lt;/p&gt;

&lt;h2&gt;Why I kept it self-hostable&lt;/h2&gt;

&lt;p&gt;I think the future agent ecosystem will be mixed.&lt;/p&gt;

&lt;p&gt;Some people will want public infrastructure. Others will want private deployments for internal agents, lab environments, or security-sensitive workflows.&lt;/p&gt;

&lt;p&gt;That is why Hivo is designed to be fully self-hostable. Public endpoints are useful for trying things quickly, but the system is meant to work with your own issuer, your own trust boundary, and your own deployment.&lt;/p&gt;

&lt;h2&gt;A small example&lt;/h2&gt;

&lt;p&gt;Imagine a team of agents working together on a research task:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;One agent registers its identity.&lt;/li&gt;
&lt;li&gt;It creates a team.&lt;/li&gt;
&lt;li&gt;It invites other agents into that team.&lt;/li&gt;
&lt;li&gt;Agents upload files into shared storage.&lt;/li&gt;
&lt;li&gt;They discuss updates in a shared group space.&lt;/li&gt;
&lt;/ol&gt;
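
&lt;p&gt;The five steps above can be modeled as a toy in-memory sketch, with plain Python dicts standing in for the identity, membership, and storage services (an illustration of the primitives, not Hivo's actual API):&lt;/p&gt;

```python
# Toy model of the five steps: identity, team, invites, shared storage.
# Plain dicts stand in for services; this is not Hivo's real API surface.
identities = set()
teams = {}      # team name maps to a set of member identities
storage = {}    # (team, filename) maps to file contents

def register(agent):                      # step 1: identity
    identities.add(agent)

def create_team(agent, team):             # step 2: team
    teams[team] = {agent}

def invite(host, team, guest):            # step 3: membership
    if host in teams[team] and guest in identities:
        teams[team].add(guest)

def upload(agent, team, name, data):      # step 4: shared storage
    if agent in teams[team]:              # the team is the trust boundary
        storage[(team, name)] = data

register("researcher")
register("summarizer")
create_team("researcher", "paper-review")
invite("researcher", "paper-review", "summarizer")
upload("summarizer", "paper-review", "notes.md", b"findings")
upload("outsider", "paper-review", "spam.md", b"nope")  # rejected: not a member
print(sorted(storage))
```

&lt;p&gt;The point of the toy is the last two lines: once membership is first-class, the access decision stops living in ad hoc glue code.&lt;/p&gt;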

&lt;p&gt;This sounds simple, but in most agent setups it quickly turns into custom scripts, hand-managed tokens, and awkward service glue.&lt;/p&gt;

&lt;p&gt;That is the gap Hivo is trying to close.&lt;/p&gt;

&lt;h2&gt;Quick start&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @hivoai/cli
npx skills add zhiyuzi/Hivo &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;
hivo identity register your-handle@your-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;What I want feedback on&lt;/h2&gt;

&lt;p&gt;There are a few questions I am still actively thinking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is identity / ACL / storage / collaboration the right base layer for agent infrastructure?&lt;/li&gt;
&lt;li&gt;Should systems like this stay modular, or collapse into fewer services?&lt;/li&gt;
&lt;li&gt;What is still missing for a serious self-hosted agent stack?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building agent systems, self-hosting AI tooling, or thinking about infrastructure for long-lived agents, I'd love to hear what feels right and what feels overbuilt.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Every AI tool helps you search. None of them ask: do you know what you're looking for?</title>
      <dc:creator>李李泽</dc:creator>
      <pubDate>Mon, 02 Mar 2026 11:57:42 +0000</pubDate>
      <link>https://dev.to/_1716c6e3e44bde0bff67a/every-ai-tool-helps-you-search-none-of-them-ask-do-you-know-what-youre-looking-for-3n2g</link>
      <guid>https://dev.to/_1716c6e3e44bde0bff67a/every-ai-tool-helps-you-search-none-of-them-ask-do-you-know-what-youre-looking-for-3n2g</guid>
      <description>&lt;h3&gt;
  
  
  The real bottleneck isn't information
&lt;/h3&gt;

&lt;p&gt;You're tracking a domain — AI coding tools, product opportunities, whatever. You check Hacker News, GitHub Trending, Reddit, arXiv, a dozen sources. An hour gone, mostly noise, and you still almost miss the one thing that mattered.&lt;/p&gt;

&lt;p&gt;So you build (or buy) a tool that automates the collection. Great. Now you have 500 items instead of 50, and a summary on top. The noise is organized, but it's still noise.&lt;/p&gt;

&lt;p&gt;Here's what I've learned after building and using an intelligence agent for weeks: &lt;strong&gt;the data sources are public. Everyone can access the same feeds. The real differentiator is the &lt;em&gt;lens&lt;/em&gt; — who's looking, and what they're looking for.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A watch intent that says "track AI coding tools" produces a very different report than one that says "I'm evaluating whether to enter this market. Focus on IDE-level products, track the technical architecture competition between Cursor/Windsurf/Copilot, known blind spot: I have no coverage on the demand side or academic frontier."&lt;/p&gt;

&lt;p&gt;Same sources. Same LLM. Completely different intelligence quality.&lt;/p&gt;

&lt;h3&gt;The problem with "just tell me what you want"&lt;/h3&gt;

&lt;p&gt;Most AI tools ask you to describe what you want, then go fetch it. The assumption is that you know what you want. But in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You describe your interest using the vocabulary you already have — which means you can't find things described in terms you don't know yet&lt;/li&gt;
&lt;li&gt;You focus on what you're already aware of — systematically missing adjacent areas&lt;/li&gt;
&lt;li&gt;You don't know what you don't know — and no amount of "add more sources" fixes that&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not a search problem. It's a cognitive framing problem.&lt;/p&gt;

&lt;h3&gt;What I built&lt;/h3&gt;

&lt;p&gt;I'm the developer of &lt;a href="https://github.com/zhiyuzi/Signex" rel="noopener noreferrer"&gt;Signex&lt;/a&gt;, an open-source personal intelligence agent that runs inside Claude Code. It monitors topics you care about, collects from 15+ data sources, analyzes through different lenses, and delivers reports. It remembers your feedback and adjusts over time.&lt;/p&gt;

&lt;p&gt;A few weeks ago I shared the initial release. The collection and analysis pipeline worked well. But I kept hitting the same wall: &lt;strong&gt;the quality of the output was bounded by the quality of the input — the user's intent definition and self-awareness.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So for V6, I built two core skills that address this directly: &lt;code&gt;identity-shape&lt;/code&gt; and &lt;code&gt;watch-shape&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;identity-shape: knowing who's looking&lt;/h3&gt;

&lt;p&gt;Your identity — your professional background, decision context, information preferences, known blind spots — is the foundation that all analysis sits on. A report for someone evaluating whether to enter a market looks completely different from one for someone doing daily trend tracking.&lt;/p&gt;

&lt;p&gt;But asking users to fill out a profile form doesn't work. People don't naturally think in terms of "cognitive horizons" or "decision contexts." They write "indie developer, interested in AI" and move on.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;identity-shape&lt;/code&gt; solves this through conversation, not forms. It draws on Dervin's Sense-Making theory (understanding the gap the user is trying to bridge), Gadamer's concept of horizons (your background is both your strength and your filter), and the Rumsfeld/Johari framework for mapping what you know you don't know.&lt;/p&gt;

&lt;p&gt;But none of this theory is exposed to the user. The conversation feels natural:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"When you get these intelligence reports, what's usually the next thing you do with them? Are you evaluating whether to pursue a direction, looking for specific product ideas, or just maintaining a feel for the industry?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The output is a rich identity profile that gives the agent real context for every analysis it runs.&lt;/p&gt;

&lt;h3&gt;watch-shape: seeing how you see&lt;/h3&gt;

&lt;p&gt;This is the one I'm most excited about.&lt;/p&gt;

&lt;p&gt;Every watch definition is an act of distinction — choosing to look at A means choosing not to look at B. &lt;code&gt;watch-shape&lt;/code&gt; acts as a &lt;strong&gt;second-order observer&lt;/strong&gt;: it doesn't just help you define what to watch, it helps you see &lt;em&gt;how&lt;/em&gt; you're watching, and what your watching framework excludes.&lt;/p&gt;

&lt;p&gt;The skill is built on six cognitive operation layers, distilled from 19 frameworks across cognitive science, philosophy, and cybernetics:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Core question&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Cost of distinction&lt;/td&gt;
&lt;td&gt;What does your boundary exclude? (Spencer-Brown, Luhmann)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Structure of ignorance&lt;/td&gt;
&lt;td&gt;What kind of not-knowing is this? (Proctor, Rumsfeld/Johari)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Limits of language&lt;/td&gt;
&lt;td&gt;What can't your vocabulary reach? (Wittgenstein)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Shaping of inquiry&lt;/td&gt;
&lt;td&gt;What does your question presuppose? (Dewey, Peirce, Kuhn)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Requisite variety gap&lt;/td&gt;
&lt;td&gt;How diverse are your sensors? (Ashby, Beer)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Enactment of frame&lt;/td&gt;
&lt;td&gt;What reality is your monitoring creating? (Weick, Klein, Gadamer, Heuer)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The critical design decision: &lt;strong&gt;not all layers work at all times.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Layers 3 and 6 are only effective during iteration — after the watch has run at least once and the user has actual data experience. Asking "what signal would make you update your mental model?" when the user doesn't have a mental model yet produces useless answers. This isn't the user being vague; it's the wrong cognitive operation at the wrong time.&lt;/p&gt;

&lt;p&gt;During initial creation, layers 1, 2, 4, and 5 do the heavy lifting — clarifying intent, revealing boundaries, checking sensor diversity.&lt;/p&gt;
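
&lt;p&gt;That gating rule is simple enough to state in code. A minimal sketch (layer numbers match the table above; the function name is mine, not part of Signex):&lt;/p&gt;

```python
# Lifecycle gating sketch: layers 3 (limits of language) and 6 (enactment
# of frame) only activate once the watch has run and the user has real
# data experience to reflect on.
CREATION_LAYERS = {1, 2, 4, 5}          # distinction, ignorance, inquiry, variety
ITERATION_LAYERS = {1, 2, 3, 4, 5, 6}   # adds language limits and enactment

def active_layers(runs_completed):
    if runs_completed == 0:
        return CREATION_LAYERS
    return ITERATION_LAYERS

print(active_layers(0))
```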

&lt;h3&gt;Before and after&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Before watch-shape&lt;/strong&gt; — a typical intent file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Focus&lt;/span&gt;
AI coding tools

&lt;span class="gu"&gt;## Key Interests&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; New IDEs
&lt;span class="p"&gt;-&lt;/span&gt; Agent features
&lt;span class="p"&gt;-&lt;/span&gt; Community reactions

&lt;span class="gu"&gt;## Goal&lt;/span&gt;
Stay updated on the space
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After watch-shape&lt;/strong&gt; — the same watch, shaped:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Focus&lt;/span&gt;
AI coding tools — IDE-level products and their evolution toward agent-native architectures

&lt;span class="gu"&gt;## Key Interests&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Technical architecture competition (Cursor vs Windsurf vs Copilot approach)
&lt;span class="p"&gt;-&lt;/span&gt; Agent-mode capabilities and their actual adoption patterns
&lt;span class="p"&gt;-&lt;/span&gt; Developer workflow changes driven by AI tooling (not just features, but behavioral shifts)

&lt;span class="gu"&gt;## Decision Context&lt;/span&gt;
Evaluating whether to build developer tools in this space. Need to understand
where the market is consolidating vs where gaps remain.

&lt;span class="gu"&gt;## Competing Hypotheses&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; The IDE war is already won by whoever nails agent mode first
&lt;span class="p"&gt;2.&lt;/span&gt; IDEs become commoditized; the value shifts to specialized vertical agents
&lt;span class="p"&gt;3.&lt;/span&gt; The whole "AI IDE" category gets absorbed back into VS Code + extensions

&lt;span class="gu"&gt;## Known Blind Spots&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Demand side: what are developers actually struggling with vs what tool makers think they want
&lt;span class="p"&gt;-&lt;/span&gt; Academic frontier: what's coming in code generation research that hasn't hit products yet
&lt;span class="p"&gt;-&lt;/span&gt; Non-English communities: Chinese developer ecosystem has different tool preferences and pain points

&lt;span class="gu"&gt;## Exclude&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Browser extensions, simple autocomplete plugins
&lt;span class="p"&gt;-&lt;/span&gt; Funding/valuation news unless directly relevant to product direction

&lt;span class="gu"&gt;## Goal&lt;/span&gt;
Actionable intelligence for market entry timing and positioning decisions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same person, same interest. But the second version drives analysis that's an order of magnitude more useful — because the agent now knows &lt;em&gt;why&lt;/em&gt; you're watching, what assumptions you're operating under, and where your blind spots are.&lt;/p&gt;

&lt;h3&gt;The design philosophy&lt;/h3&gt;

&lt;p&gt;A few principles that shaped this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conversation, not configuration.&lt;/strong&gt; These skills work through dialogue, not forms. Users discover their own blind spots through the process of being asked the right questions — that's the whole point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second-order observation.&lt;/strong&gt; The agent doesn't just collect what you asked for. It observes &lt;em&gt;how&lt;/em&gt; you're asking, and makes the invisible frames visible. This is Luhmann's core insight: every observation has a blind spot, and you need an observer of the observer to reveal it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lifecycle awareness.&lt;/strong&gt; Not every cognitive operation is appropriate at every stage. The system respects where the user is in their understanding and doesn't ask questions they can't meaningfully answer yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No jargon in the conversation.&lt;/strong&gt; The theoretical foundations are deep (Spencer-Brown, Ashby, Weick, etc.), but the user never sees them. The conversation feels like talking to a thoughtful colleague, not attending a philosophy seminar.&lt;/p&gt;

&lt;h3&gt;Why this matters beyond Signex&lt;/h3&gt;

&lt;p&gt;I think this pattern — using LLMs as second-order observers to help users examine their own cognitive frames — has applications far beyond intelligence monitoring. Any system where the quality of output depends on the quality of user-defined intent could benefit from this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search systems that help you discover what you should be searching for&lt;/li&gt;
&lt;li&gt;Research tools that reveal the assumptions in your research questions&lt;/li&gt;
&lt;li&gt;Decision support systems that surface the frames you're operating within&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The information abundance problem is solved. We have more data than we can process. The next frontier is &lt;strong&gt;cognitive framing&lt;/strong&gt; — helping people see what their way of seeing excludes.&lt;/p&gt;

&lt;h3&gt;Try it&lt;/h3&gt;

&lt;p&gt;Signex is open source (AGPL-3.0): &lt;a href="https://github.com/zhiyuzi/Signex" rel="noopener noreferrer"&gt;github.com/zhiyuzi/Signex&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prerequisites: Python 3.11+, uv, Claude Code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/zhiyuzi/Signex.git
&lt;span class="nb"&gt;cd &lt;/span&gt;signex &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; uv &lt;span class="nb"&gt;sync
cp&lt;/span&gt; .env.example .env
claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Say "Hi" and it initializes. If your identity profile is thin, it'll suggest shaping it. Create a watch, and if the intent is sparse, it'll suggest deepening it. The cognitive scaffolding is built into the natural flow — you don't have to know it's there.&lt;/p&gt;

&lt;p&gt;Feedback, issues, and contributions welcome on GitHub.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>productivity</category>
      <category>cognitivescience</category>
    </item>
    <item>
      <title>I built Skillradar: find the right agent skill by describing your task (2.5k+ indexed)</title>
      <dc:creator>李李泽</dc:creator>
      <pubDate>Wed, 04 Feb 2026 23:36:17 +0000</pubDate>
      <link>https://dev.to/_1716c6e3e44bde0bff67a/i-built-skillradar-find-the-right-agent-skill-by-describing-your-task-25k-indexed-5g8n</link>
      <guid>https://dev.to/_1716c6e3e44bde0bff67a/i-built-skillradar-find-the-right-agent-skill-by-describing-your-task-25k-indexed-5g8n</guid>
      <description>&lt;p&gt;I’m experimenting with a semantic search workflow for discovering agent skills from natural-language task descriptions.&lt;/p&gt;

&lt;p&gt;Many skill lists are still keyword-based, which makes it hard to compare similar skills before trying them. I indexed ~2.5k skills and use semantic retrieval to surface candidates for a given scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12ytjqcte2woquuxagqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12ytjqcte2woquuxagqw.png" alt="Skillradar homepage | “install via AI agent” prompt" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;1. Website mode (baseline semantic search)&lt;/h2&gt;

&lt;p&gt;You can type a scenario like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I’d like to conduct a market analysis”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;…and get a ranked list of candidate skills.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw00a261ix6ftviz8671p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw00a261ix6ftviz8671p.png" alt="Example search results for “market analysis”" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can click a skill card to view details and inspect its SKILL.md / manifest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xw8x9j7usn7cdqq3p4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xw8x9j7usn7cdqq3p4x.png" alt="copy install prompt" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;2. Agent-native mode: let an agent turn vague prompts into structured search queries&lt;/h2&gt;

&lt;p&gt;This is the part I personally use the most.&lt;/p&gt;

&lt;p&gt;Instead of going to a website and trying to craft the “right keywords”, I use an agent-side helper (a small “discover” prompt) to convert a vague request into a search goal + keywords, then query the index. This fits CLI-style agent workflows.&lt;/p&gt;

&lt;p&gt;After installation, the agent can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask a couple of simple setup questions (e.g., install scope/path)&lt;/li&gt;
&lt;li&gt;Take your scenario in plain English — even if it’s abstract, vague, or messy&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;discover-skills&lt;/code&gt; to translate that into a structured search (task goal + keywords), query the index, and return candidates with short match reasons&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5o6901wotm3pilk3o8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5o6901wotm3pilk3o8j.png" alt="paste into agent" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s an example with a very “vague” need:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I have a bunch of meeting notes scattered everywhere and I want to organize them better. Is there a skill for that?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent turns it into a query + keywords, retrieves candidates, and suggests what to install next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhkky7g5fwso2itsl6mt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhkky7g5fwso2itsl6mt.png" alt="Agent asks vague question, returns ranked skills + install suggestion&amp;lt;br&amp;gt;
" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Question: embedding skills for semantic retrieval&lt;/h2&gt;

&lt;p&gt;I’d love advice on how you’d embed and index a SKILL.md-style skill definition for semantic retrieval.&lt;/p&gt;

&lt;p&gt;Right now I’m thinking about embedding each skill from multiple “views” (e.g., what it does, use cases, inputs/outputs, examples, constraints), but I’m not fully sure what structure works best.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How would you chunk/structure SKILL.md (by section, by template fields, or by examples)?&lt;/li&gt;
&lt;li&gt;Single vector per skill vs multi-vector per section/view — and how do you aggregate scores at query time?&lt;/li&gt;
&lt;li&gt;Which fields usually move retrieval quality most (examples, tool/actions, constraints, tags, or “when not to use”)?&lt;/li&gt;
&lt;/ul&gt;
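
&lt;p&gt;To make the multi-vector question concrete, here is the variant I am considering, as a runnable toy: each skill gets one vector per "view", the query is scored against every view, and the per-skill score is the max over views (one simple aggregation choice among several). Bag-of-words cosine stands in for a real embedding model, and the skill entries are made up:&lt;/p&gt;

```python
import math

# Toy multi-view retrieval: one vector per view, max-aggregation per skill.
# Bag-of-words cosine is a stand-in for a real embedding model.
def embed(text):
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

skills = {  # hypothetical skills, each described from multiple views
    "meeting-notes": {
        "what": "organize and summarize meeting notes",
        "use_cases": "consolidate scattered notes into one archive",
    },
    "market-analysis": {
        "what": "research a market and competitors",
        "use_cases": "evaluate whether to enter a market",
    },
}

def score(query):
    q = embed(query)
    # aggregate per skill by taking the best-matching view
    return {name: max(cosine(q, embed(v)) for v in views.values())
            for name, views in skills.items()}

ranked = sorted(score("organize my scattered meeting notes").items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked[0][0])
```

&lt;p&gt;Whether max beats mean (or a learned weighting) at query time is exactly the part I have no data on yet.&lt;/p&gt;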

</description>
      <category>ai</category>
      <category>programming</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
