I published 13 video editing skills on ClawHub over the span of a week. For the first three days, only two appeared in search results. By day five, twelve of them held the #1 spot for their target keywords.
Nothing changed about the skills themselves. Same API, same functionality, same code. What changed was how I named and described them.
Here's everything I learned about ClawHub's search ranking — with real numbers from my testing.
## The slug is everything
ClawHub uses vector search for skill discovery. I spent two weeks querying the search API (`/api/search?q=keyword`) with different keywords and recording the returned scores. The pattern was consistent across 30+ queries:
If your slug contains the search keyword, you score 3.0+. If it doesn't, your ceiling is about 2.0.
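My harness for those queries was minimal. A sketch of it, assuming a placeholder base URL and a `{"results": [{"slug": ..., "score": ...}]}` response shape (ClawHub's actual schema may differ):

```python
import json
import urllib.parse
import urllib.request

# Placeholder host — the real base URL isn't shown in this post.
BASE_URL = "https://clawhub.example/api/search"

def search_scores(query, fetch=None):
    """Query the search endpoint and return (slug, score) pairs, best first.

    The /api/search?q= path is from my testing; the JSON shape is an
    assumption. `fetch` lets tests inject a stub instead of hitting the
    network.
    """
    url = BASE_URL + "?q=" + urllib.parse.quote(query)
    raw = fetch(url) if fetch else urllib.request.urlopen(url).read()
    results = json.loads(raw)["results"]
    # Sort descending by score so index 0 is the #1 ranked skill.
    return sorted(((r["slug"], r["score"]) for r in results),
                  key=lambda pair: pair[1], reverse=True)
```

Running this on a schedule and diffing the output is what caught the overnight crash described later.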
Here's what that looks like in practice:
| Slug | Query | Score |
|---|---|---|
| `auto-caption` | "auto caption" | 3.147 |
| `ai-video-editing` | "video editing" | 3.217 |
| `nemo-subtitle` (brand name) | "add subtitles" | 1.859 |
| `nemo-shorts` (brand name) | "shorts maker" | 1.757 |
The first two slugs contain the exact search term. They score 3.0+. The bottom two have brand-name slugs — they rank well because of description optimization, but they'll never break 2.0.
I tested this by publishing a new skill with the slug auto-caption for the keyword "auto caption." Within 6 hours it hit #1 at 3.099. A different skill covering the exact same feature, with a brand-name slug, had been stuck at 1.7 for days.
The lesson hit hard: if you're serious about a keyword, put it in your slug. Not your description, not your displayName — your slug. Everything else is supplementary.
There's a catch, though. Slugs are permanent. You can't rename them after publishing. So if you picked a vanity name like I did for my first batch (nemo-video, nemo-edit, nemo-subtitle), you're locked into the description-optimization game with a lower ceiling. I ended up publishing separate skills with keyword-rich slugs to cover the gaps.
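If you're starting fresh, it's worth deriving the slug mechanically from the target keyword rather than inventing a brand name. A sketch of the rule (the slugify logic here is mine, not ClawHub's):

```python
import re

def keyword_slug(keyword):
    """Turn a target search keyword into a keyword-rich slug.

    My naming rule, not any official ClawHub logic: lowercase,
    alphanumeric runs joined by hyphens.
    """
    return re.sub(r"[^a-z0-9]+", "-", keyword.lower()).strip("-")

def slug_contains_keyword(slug, keyword):
    """Check the 3.0+ scoring condition: does the slug contain the keyword?"""
    return keyword_slug(keyword) in slug
```

By this check, `ai-video-editing` passes for "video editing" while `nemo-subtitle` fails for "add subtitles" — matching the score split in the table above.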
## DisplayName: powerful but fragile
After discovering the slug effect, I figured displayName was just cosmetic. Wrong.
I ran an accidental experiment. While cleaning up my skills, I shortened a displayName from 70 characters to 30 — cut out keyword suffixes like "for TikTok, Reels, and YouTube Shorts" to make it look neater.
Within hours, the skill dropped out of the top 15 results for every keyword it had ranked for. Not a gradual decline — gone. I restored the original long name and the rankings came back within an hour. No other changes.
My read: displayName feeds into the same vector embedding as description. Every keyword you remove from it shrinks your footprint in the search space. The "cleaner" name was literally invisible to the queries that used to find it.
The rule I follow now: stuff every relevant keyword into the displayName and treat readability as secondary. `Video Caption Tool - Burn Captions, AI Subtitles and SRT Export` is ugly. It also ranks.
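Since that incident I run a pre-publish check that the displayName still covers every target keyword. It's a crude case-insensitive substring test, not ClawHub's actual matching, but it would have caught my accidental trim:

```python
def missing_keywords(display_name, keywords):
    """Return target keywords that no longer appear in the displayName.

    A crude proxy for embedding footprint — substring presence, not
    semantics — but enough to flag an accidental keyword deletion
    before publishing.
    """
    name = display_name.lower()
    return [kw for kw in keywords if kw.lower() not in name]
```

An empty return means every keyword survived the edit; anything else is a reason to stop and restore the name.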
## First sentence of your description is disproportionately weighted
This one came from debugging a specific failure. My subtitle skill wasn't ranking for "add subtitles" even though those exact words appeared in the description — in the third sentence.
I moved "Add subtitles" to the very first word of the description. Next index cycle, the skill jumped from outside top 10 to #1 for that query.
The practical takeaway: open your description with the exact keyword phrase you want to rank for. Not a paraphrase, not a synonym — the literal words someone would type into the search box. Save the creative writing for sentence two.
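That rule is easy to lint for. A sketch — the rule itself is my inference from the ranking jump, not documented ClawHub behavior:

```python
def description_leads_with(description, keyword):
    """Check that the description opens with the exact keyword phrase.

    Literal, case-insensitive prefix match: paraphrases and synonyms
    deliberately fail, since moving the exact words to the front is
    what produced the ranking jump.
    """
    return description.lower().lstrip().startswith(keyword.lower())
```

"Add subtitles to any video" passes for "add subtitles"; "This skill lets you add subtitles" does not, even though the words are there.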
## The 4 AM crash
This is the part nobody warns you about.
Last night I checked rankings before bed — 12 keywords holding #1. At 4:18 AM I ran a routine scan. Ten of those twelve had vanished from the results entirely. Not dropped to #5 or #8. Gone.
I spent the next hour figuring out what happened. The pattern was clear: skills with keyword-in-slug (like auto-caption) were still ranked. Skills that relied on description keywords for their ranking had disappeared. The scores for competitor skills hadn't changed — our skills had simply been removed from the index.
The fix was dumb. I bumped the version number on two affected skills (no code changes, just a version bump in the SKILL.md frontmatter) and republished. Within 30 minutes, both were back at #1 with scores slightly higher than before.
My best guess: ClawHub periodically rebuilds its vector index, and description-derived embeddings are more volatile during rebuilds than slug-derived ones. Slug matches are probably handled by a separate scoring path that survives reindexing.
The practical defense: monitor your rankings, and if something falls off a cliff overnight, try republishing. A version bump with zero changes was enough to re-enter the index.
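The bump itself is scriptable. A sketch, assuming the frontmatter uses a semver `version:` key the way my skills do:

```python
import re

def bump_patch_version(skill_md):
    """Increment the patch component of `version: x.y.z` in SKILL.md text.

    A zero-change version bump like this, followed by a republish, was
    enough to force the skill back into the search index.
    """
    def bump(match):
        major, minor, patch = match.group(1), match.group(2), int(match.group(3))
        return f"version: {major}.{minor}.{patch + 1}"
    # Only touch the first version field, in the frontmatter.
    return re.sub(r"version:\s*(\d+)\.(\d+)\.(\d+)", bump, skill_md, count=1)
```

Feed it the SKILL.md contents, write the result back, republish, and re-check rankings after the next index cycle.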
## Where this leaves things
Right now, 12 of my 13 skills hold a #1 ranking for at least one keyword. The 13th has a brand-name slug and a crowded keyword — it ranks #3, which is fine.
Other skill authors are hitting the same walls. There's a thread on the OpenClaw repo (#50090) where several of us have been sharing data on what's being called "invisible trigger failures" — skills that load fine but never get selected, because the search ranking is opaque.
What would actually fix this: a simple dashboard in ClawHub showing skill authors which queries match their skill and where they rank. The search API exists. The data is there. It just isn't surfaced to the people who need it most.
This is part of a series on building AI video tools with OpenClaw. Previous posts: How I Built an AI Video Editor | What Broke When I Wrapped a Video API | Automating TikTok and Reels | Reverse-Engineering ClawHub's Top Video Skills
## Update (March 23)

After publishing this, I dug into what caused the 4 AM ranking crash. It was not random.

ClawHub pushed two commits on March 22 at 18:03-18:04 UTC, roughly 8 hours before the crash hit my rankings: "fix: narrow skill package catalog search" (801cc55) and "stabilize package catalog search." The key change: search now runs an exact slug match first via resolveSkillBySlugOrAlias() before falling back to vector scanning. Previously it was pure vector search. The commits also removed the MAX_SKILL_CATALOG_SEARCH_SCAN_PAGES constant (previously 200) and restructured the pagination loop, narrowing the vector scan range.

When this deployed, it triggered a search index rebuild. Skills relying on description-derived embeddings lost their scores during the rebuild; skills with keyword-in-slug survived because the new exact-match path handles them before the vector scan runs.

The version-bump fix worked because republishing forces ClawHub to rebuild the skill's search digest entry under the updated scoring logic. Not ideal that a zero-change version bump is the recovery path, but it works.

Long-term, the exact-match path is good news for slug-optimized skills: if your slug matches the query, you're now guaranteed to appear in results, which wasn't true before. It also makes slug strategy even more important — the description-optimization ceiling probably got lower with the narrowed vector scan. I filed the underlying instability in #52034.
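Based on the commit, the new flow looks roughly like this. `resolveSkillBySlugOrAlias` is the real function name from the diff; the slugify step, the dedup, and the vector-scan internals are my guesses:

```python
def search_skills(query, vector_scan, resolve_by_slug, limit=15):
    """Sketch of the post-801cc55 search path.

    Exact slug match runs first (standing in for
    resolveSkillBySlugOrAlias), then vector search fills the remaining
    slots. This is a reconstruction from the commit message, not
    ClawHub's actual code.
    """
    results = []
    exact = resolve_by_slug(query.lower().replace(" ", "-"))
    if exact is not None:
        results.append(exact)  # guaranteed slot for an exact slug match
    for skill in vector_scan(query):
        if skill not in results:  # don't list the exact match twice
            results.append(skill)
        if len(results) >= limit:
            break
    return results
```

Under this flow, a skill whose slug equals the slugified query can never fall out of the results during an index rebuild — which matches exactly which of my skills survived the crash.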