<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anton Gulin</title>
    <description>The latest articles on DEV Community by Anton Gulin (@aiwithanton).</description>
    <link>https://dev.to/aiwithanton</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3872452%2F17f47297-ddc6-457c-9920-47c0dd1acd1b.png</url>
      <title>DEV Community: Anton Gulin</title>
      <link>https://dev.to/aiwithanton</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aiwithanton"/>
    <language>en</language>
    <item>
      <title>How to Create Custom OpenCode Skills (Step-by-Step Guide)</title>
      <dc:creator>Anton Gulin</dc:creator>
      <pubDate>Sun, 12 Apr 2026 18:52:31 +0000</pubDate>
      <link>https://dev.to/aiwithanton/how-to-create-custom-opencode-skills-step-by-step-guide-4ijd</link>
      <guid>https://dev.to/aiwithanton/how-to-create-custom-opencode-skills-step-by-step-guide-4ijd</guid>
      <description>&lt;h2&gt;Why Custom Skills Matter&lt;/h2&gt;

&lt;p&gt;Out-of-the-box AI coding agents are powerful, but they don't know your team's conventions, your deployment process, or your documentation style. Skills let you encode that knowledge so the agent follows your workflows every time.&lt;/p&gt;

&lt;p&gt;But creating skills has been guesswork. You write a SKILL.md file, test it manually in a session, maybe tweak the description, and hope it works. There's no feedback loop, no measurement, no way to know if a change actually improved things.&lt;/p&gt;

&lt;p&gt;opencode-skill-creator changes this by providing a structured workflow for the full skill lifecycle: create, evaluate, optimize, benchmark, and install.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;OpenCode installed and configured&lt;/li&gt;
&lt;li&gt;Node.js 18+ (for the npm package)&lt;/li&gt;
&lt;li&gt;5 minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Step 1: Install&lt;/h2&gt;

&lt;p&gt;One command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx opencode-skill-creator &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--global&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds the plugin to your global OpenCode config. Restart OpenCode to activate it.&lt;/p&gt;

&lt;p&gt;Verify the install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; ~/.config/opencode/skills/skill-creator/SKILL.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then ask OpenCode: &lt;code&gt;Create a skill that helps with Docker compose files&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see it use the skill-creator workflow and tools.&lt;/p&gt;

&lt;h2&gt;Step 2: Describe What You Want&lt;/h2&gt;

&lt;p&gt;The skill-creator starts with an intake interview. It asks 3-5 targeted questions about what your skill should do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What should this skill enable OpenCode to do end-to-end?&lt;/li&gt;
&lt;li&gt;When should this skill trigger?&lt;/li&gt;
&lt;li&gt;What output format and quality bar are expected?&lt;/li&gt;
&lt;li&gt;What workflow steps must be preserved vs. where can the agent improvise?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don't skip this. The interview captures your intent before any code is written. Think of it as shadowing a teammate — you're the domain expert, the agent is the new hire learning your workflow.&lt;/p&gt;

&lt;h2&gt;Step 3: Review the Skill Draft&lt;/h2&gt;

&lt;p&gt;Based on your interview, the skill-creator produces a draft SKILL.md with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proper YAML frontmatter (name and description)&lt;/li&gt;
&lt;li&gt;Markdown instructions for the agent&lt;/li&gt;
&lt;li&gt;Optional supporting files (references, agents, templates)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The draft goes to a staging directory (outside your repo) so your project stays clean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/tmp/opencode-skills/your-skill-name/
├── SKILL.md
├── agents/
├── references/
└── templates/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Review this draft. Make sure the description is accurate (it's the primary triggering mechanism) and the instructions reflect your actual workflow.&lt;/p&gt;
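&lt;p&gt;For reference, a minimal SKILL.md skeleton looks like this (the name, description, and steps below are illustrative placeholders, not actual generated output):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: docker-compose
description: Use when the user asks to create, debug, or review Docker
  Compose files for multi-service development environments.
---

# Docker Compose Helper

When this skill triggers:
1. Ask which services the stack needs.
2. Generate a compose file that follows the team's conventions.
3. Explain any non-obvious settings you add.
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;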

&lt;h2&gt;Step 4: Generate Eval Test Cases&lt;/h2&gt;

&lt;p&gt;The skill-creator automatically generates test cases — realistic prompts that an OpenCode user would actually type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"skill_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker-compose"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"evals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"help me set up a compose file for my Node app with a Postgres database"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"expected_output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Skill triggers and provides Docker compose guidance"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"should_trigger"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"explain how Kubernetes deployments work"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"should_trigger"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Good eval queries are realistic and specific — not abstract like "help with containers" but concrete like "ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx')..."&lt;/p&gt;

&lt;p&gt;Review the eval set. Add or modify test cases that reflect your real usage.&lt;/p&gt;
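&lt;p&gt;If you edit the eval file by hand, a quick sanity check helps before running anything. This sketch assumes the JSON layout shown above; it is illustrative, not an official schema validator:&lt;/p&gt;

```python
import json

def validate_evals(raw):
    """Check each eval case has a prompt and a boolean should_trigger flag."""
    data = json.loads(raw)
    for case in data["evals"]:
        assert "prompt" in case, "case %s missing prompt" % case.get("id")
        assert isinstance(case["should_trigger"], bool)
    return len(data["evals"])

raw = """{"skill_name": "docker-compose", "evals": [
  {"id": 1, "prompt": "set up compose for my Node app", "should_trigger": true},
  {"id": 2, "prompt": "explain Kubernetes deployments", "should_trigger": false}
]}"""
print(validate_evals(raw))  # 2
```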

&lt;h2&gt;Step 5: Run Evals&lt;/h2&gt;

&lt;p&gt;The eval system runs each test case twice — once with the skill and once without (baseline). This measures whether the skill actually improves the output.&lt;/p&gt;

&lt;p&gt;For each test case:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;OpenCode runs with the skill loaded&lt;/li&gt;
&lt;li&gt;OpenCode runs without the skill&lt;/li&gt;
&lt;li&gt;Both outputs are saved for comparison&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Timing data (tokens used, duration) is captured automatically.&lt;/p&gt;
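&lt;p&gt;The paired bookkeeping can be sketched as a small aggregation over eval records (the record layout here is an assumption for illustration, not the tool's actual internals):&lt;/p&gt;

```python
def summarize(records):
    """Aggregate paired with-skill / baseline eval records."""
    passed = sum(1 for r in records if r["with_skill"]["passed"])
    token_delta = sum(
        r["with_skill"]["tokens"] - r["baseline"]["tokens"] for r in records
    )
    return {
        "pass_rate": passed / len(records),
        "avg_token_delta": token_delta / len(records),
    }

records = [
    {"with_skill": {"passed": True, "tokens": 1200},
     "baseline": {"passed": False, "tokens": 900}},
    {"with_skill": {"passed": True, "tokens": 1100},
     "baseline": {"passed": True, "tokens": 1000}},
]
print(summarize(records))  # {'pass_rate': 1.0, 'avg_token_delta': 200.0}
```

&lt;p&gt;The point of the pairing is the delta: it shows whether the skill earns its extra tokens, not just whether outputs look plausible.&lt;/p&gt;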

&lt;h2&gt;Step 6: Review Results Visually&lt;/h2&gt;

&lt;p&gt;The skill-creator launches an HTML eval viewer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;Call skill_serve_review with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workspace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/opencode-skills/your-skill-name-workspace/iteration-1&lt;/span&gt;
  &lt;span class="na"&gt;skillName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-skill-name"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The viewer shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outputs tab&lt;/strong&gt;: Each test case's with-skill and without-skill outputs side by side&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Benchmark tab&lt;/strong&gt;: Quantitative metrics — pass rates, timing, token usage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback fields&lt;/strong&gt;: Leave comments on each test case&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Review the outputs. Give specific feedback on what's working and what's not. Empty feedback means "looks good."&lt;/p&gt;

&lt;h2&gt;Step 7: Iterate and Improve&lt;/h2&gt;

&lt;p&gt;Based on your feedback, the skill-creator improves the skill:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Applies your feedback&lt;/li&gt;
&lt;li&gt;Reruns all test cases (new iteration)&lt;/li&gt;
&lt;li&gt;Launches the reviewer with previous iteration for comparison&lt;/li&gt;
&lt;li&gt;You review again&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Repeat until you're satisfied or feedback is all empty.&lt;/p&gt;

&lt;h2&gt;Step 8: Optimize the Description&lt;/h2&gt;

&lt;p&gt;Even with perfect skill instructions, the skill won't trigger correctly if the description field isn't right. The description is what OpenCode reads to decide whether to load your skill.&lt;/p&gt;

&lt;p&gt;The optimization loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generates 20 eval queries (should-trigger and should-not-trigger)&lt;/li&gt;
&lt;li&gt;Splits them 60/40 into train/test&lt;/li&gt;
&lt;li&gt;Evaluates each query 3 times for statistical reliability&lt;/li&gt;
&lt;li&gt;Analyzes failure patterns&lt;/li&gt;
&lt;li&gt;LLM proposes improved descriptions&lt;/li&gt;
&lt;li&gt;Re-evaluates on both train and test&lt;/li&gt;
&lt;li&gt;Selects the best description by test score&lt;/li&gt;
&lt;li&gt;Repeats up to 5 iterations
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Tell OpenCode:&lt;/span&gt;
&lt;span class="s2"&gt;"Optimize the description of my docker-compose skill"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This takes some time — grab a coffee while it runs.&lt;/p&gt;
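&lt;p&gt;The split and selection steps of the loop can be sketched like this (function names and scoring layout are hypothetical, chosen only to mirror the 60/40 split and best-by-test-score selection described above):&lt;/p&gt;

```python
import random

def split_queries(queries, train_frac=0.6, seed=0):
    """Shuffle and split eval queries into train/test portions."""
    rng = random.Random(seed)
    shuffled = queries[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def pick_best(candidates, test_scores):
    """Select the candidate description with the highest test-set score."""
    return max(candidates, key=lambda c: test_scores[c])

queries = ["q%d" % i for i in range(20)]
train, test = split_queries(queries)
print(len(train), len(test))  # 12 8
```

&lt;p&gt;Selecting by test score rather than train score is what keeps the loop from overfitting the description to the queries it was tuned on.&lt;/p&gt;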

&lt;h2&gt;Step 9: Install the Final Skill&lt;/h2&gt;

&lt;p&gt;Once you're satisfied with the skill and its description:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Project-level&lt;/strong&gt;: &lt;code&gt;.opencode/skills/your-skill-name/SKILL.md&lt;/code&gt; — available only in this project&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global&lt;/strong&gt;: &lt;code&gt;~/.config/opencode/skills/your-skill-name/SKILL.md&lt;/code&gt; — available everywhere
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Project-level install&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /tmp/opencode-skills/your-skill-name/ .opencode/skills/your-skill-name/

&lt;span class="c"&gt;# Global install&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /tmp/opencode-skills/your-skill-name/ ~/.config/opencode/skills/your-skill-name/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Only the final validated skill gets installed. All eval artifacts stay in the staging directory.&lt;/p&gt;

&lt;h2&gt;Real-World Example: Docker Compose Skill&lt;/h2&gt;

&lt;p&gt;Here's what the full workflow looks like in practice:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ask OpenCode&lt;/strong&gt;: "Create a skill that helps with Docker compose files"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interview&lt;/strong&gt;: The skill-creator asks about your conventions (multi-service vs. single container, development vs. production, preferred base images)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Draft&lt;/strong&gt;: Produces a SKILL.md with Docker compose best practices, service configuration patterns, volume mount strategies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Eval&lt;/strong&gt;: Generates test cases like "my api keeps crashing on startup, can you help me debug my compose file" (should trigger) and "what's the difference between Docker and Podman" (should not trigger)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Review&lt;/strong&gt;: You look at the outputs, give feedback: "the skill should prioritize security configurations in production compose files"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Iterate&lt;/strong&gt;: Improved skill draft, better outputs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize&lt;/strong&gt;: Description goes from "Help with Docker compose files" to something much more specific that triggers reliably&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install&lt;/strong&gt;: Copy to &lt;code&gt;~/.config/opencode/skills/docker-compose/&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Tips for Great Skills&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Be specific in the intake interview&lt;/strong&gt;: The more context you give, the better the draft&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't skip evals&lt;/strong&gt;: They catch triggering issues you'd never find manually&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use realistic test prompts&lt;/strong&gt;: Write them the way you'd actually type them, typos and all&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterate at least twice&lt;/strong&gt;: First drafts are rarely perfect&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize the description&lt;/strong&gt;: It's the #1 factor in whether your skill triggers correctly&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Install globally for general skills, project-level for specific ones&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx opencode-skill-creator &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--global&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then ask OpenCode to create a skill. That's it.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/antongulin/opencode-skill-creator" rel="noopener noreferrer"&gt;https://github.com/antongulin/opencode-skill-creator&lt;/a&gt;&lt;br&gt;
npm: &lt;a href="https://www.npmjs.com/package/opencode-skill-creator" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/opencode-skill-creator&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;opencode-skill-creator is free and open source (Apache 2.0). Star it on GitHub. Install: &lt;code&gt;npx opencode-skill-creator install --global&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>ai</category>
      <category>opencode</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
