My skill rankings crashed at 4 AM.
Twelve #1 positions the night before. I checked at 4:18 AM and ten of them were gone. Two survived. The pattern was obvious once I saw it: the two survivors had their keyword directly in the slug. Everything relying on description text had vanished.
I spent the next two hours in the ClawHub source code figuring out what happened.
Two commits, one explanation
On March 22 at 18:03 UTC, ClawHub pushed commit 801cc55 — "fix: narrow skill package catalog search." One minute later, 9ea7508 landed to stabilize the change.
The key diff is in convex/skills.ts. Before the change, search worked like this: take a query string, scan up to 200 pages of the skillSearchDigest index, score every result against the query vector, return the best matches.
After the change, there's a new first step:
```typescript
const exactSkill = await resolveSkillBySlugOrAlias(ctx, queryText);
if (exactSkill.skill) {
  const exactDigest = await ctx.db
    .query("skillSearchDigest")
    .withIndex("by_skill", (q) => q.eq("skillId", exactSkill.skill!._id))
    .unique();
  if (exactDigest && skillCatalogMatchesFilters(exactDigest, args)) {
    const exactScore = scoreSkillCatalogResult(exactDigest, queryText);
    if (exactScore > 0) {
      seen.add(exactDigest.skillId);
      matches.push({ score: exactScore, package: toPublicSkillCatalogItem(exactDigest) });
    }
  }
}
```
Search now tries to resolve the query as an exact slug first. If it finds a match, that skill gets scored and added to results before any vector scanning happens.
The second change: the old paginated scan loop (up to MAX_SKILL_CATALOG_SEARCH_SCAN_PAGES iterations) got replaced with a single page fetch. The constant was deleted entirely.
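Put together, the post-change flow can be sketched as a toy in-memory model. To be clear about assumptions: the `Digest` type, the `scoreAgainstQuery` stand-in (here just a stored `score` field), and the page size are mine, not ClawHub's actual types — only the two-step shape (exact slug first, then one page of scanning) comes from the diff.

```typescript
// Toy model of the post-change search flow: exact slug resolution first,
// then a single-page scan. All names and sizes here are illustrative.
type Digest = { skillId: string; slug: string; score: number };

const PAGE_SIZE = 50; // assumed; the real page size isn't visible in the diff

function search(queryText: string, index: Digest[]): Digest[] {
  const seen = new Set<string>();
  const matches: Digest[] = [];

  // Step 1: an exact slug match bypasses the scan entirely.
  const exact = index.find((d) => d.slug === queryText);
  if (exact && exact.score > 0) {
    seen.add(exact.skillId);
    matches.push(exact);
  }

  // Step 2: only the first "page" of the index gets scanned now.
  for (const d of index.slice(0, PAGE_SIZE)) {
    if (!seen.has(d.skillId) && d.score > 0) matches.push(d);
  }

  return matches.sort((a, b) => b.score - a.score);
}
```

The key property: a skill whose slug equals the query is found even if its digest sits deep in the index, while a description-only skill past the first page is never considered at all.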
What this actually means
Three things changed for skill authors:
Slug match is now guaranteed. If someone searches "auto-caption" and your slug is auto-caption, you're in the results. Period. Before this, you depended on vector similarity catching you during the page scan. Usually it did. Sometimes it didn't — which is exactly what issue #52034 reported. This is genuinely good news. If you picked a descriptive slug when you first published, you just got a safety net for free.
Vector-only skills got fragile. My skills with keywords only in the description were scoring 1.7-1.9 through vector similarity. When the index rebuilt during the deploy, those embeddings temporarily disappeared. Slug-match skills didn't care because they hit the new exact path directly. I had a few skills that relied entirely on description keywords for certain search terms — every one of them dropped out overnight.
The scan window narrowed. Removing the multi-page loop means vector search covers fewer candidates per query. If your skill's embedding sits deep in the index, it might not get scanned at all. This probably won't affect most skills, but if you're ranking on a long-tail keyword with a lot of competition, your margin just got thinner.
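As back-of-the-envelope arithmetic (the page size is my assumption; the diff only shows the old up-to-200-page loop collapsing to a single fetch):

```typescript
// Rough candidate coverage per query, before vs. after the change.
// PAGE_SIZE is assumed; MAX_PAGES reflects the old 200-page scan cap.
const PAGE_SIZE = 50;  // assumed
const MAX_PAGES = 200; // "up to 200 pages" in the pre-change scan loop

const before = PAGE_SIZE * MAX_PAGES; // digests that could be scored per query
const after = PAGE_SIZE;              // one page, plus the exact-slug hit

console.log({ before, after, reduction: before / after }); // 200x fewer candidates
```

Whatever the real page size is, the reduction factor is the deleted page cap: the vector path now considers a couple hundred times fewer candidates per query.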
The fix took 30 minutes
I published a version bump on one of the affected skills — no code changes, just a patch version increment. Within 30 minutes, its rankings came back. The publish triggered a fresh skillSearchDigest entry, which the new single-page scan picked up immediately.
I then bumped the rest. By 5:30 AM, all twelve #1 positions were restored. Some scores actually came back slightly higher than before — video editing went from 3.217 to 3.242. Not sure if that's the new scoring function being more generous or just normal variance.
One thing worth mentioning: I also noticed that skills from a newer account I'd been testing with disappeared from search entirely. Not just dropped in rank — gone. Even exact slug searches returned nothing. Still not sure if that's related to the algorithm change or a separate account-trust issue. Something to watch.
What I'd tell other skill authors
Pick your slug carefully. Before this update, slug was already the highest-weight signal (I wrote about that in my previous post). Now it's even more important because it has a dedicated resolution path that bypasses vector search entirely.
If your rankings suddenly drop, try publishing a patch version. You're not fixing a bug — you're forcing a digest rebuild under the new scoring logic.
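If you want to script that across many skills, the bump itself is just the last segment of a semver string — where that version lives in your skill's manifest depends on your setup, so treat the surrounding plumbing as yours to fill in:

```typescript
// Increment the patch segment of a plain semver string, e.g. "1.4.2" -> "1.4.3".
function bumpPatch(version: string): string {
  const m = version.match(/^(\d+)\.(\d+)\.(\d+)$/);
  if (!m) throw new Error(`not a plain semver string: ${version}`);
  return `${m[1]}.${m[2]}.${Number(m[3]) + 1}`;
}
```

Then republish each skill with the bumped version to force the fresh digest.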
And if you want to check where you stand right now, the search API is public:
https://clawhub.ai/api/search?q=your+keyword&limit=10
No auth needed. The score field in the response is what determines your position.
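To check your position programmatically, something like this works against that endpoint. The response shape I'm assuming — a `results` array of objects with `slug` and `score` fields — is a guess based on the score field mentioned above; adjust the field names to whatever the API actually returns.

```typescript
// Find where a given slug ranks in a search response, assuming a shape like:
// { results: [{ slug: string, score: number }, ...] }
type SearchHit = { slug: string; score: number };

function rankOf(slug: string, results: SearchHit[]): number {
  const sorted = [...results].sort((a, b) => b.score - a.score);
  const i = sorted.findIndex((r) => r.slug === slug);
  return i === -1 ? -1 : i + 1; // 1-based rank, -1 if not in the results
}

// In practice you'd fetch the live response:
// const res = await fetch("https://clawhub.ai/api/search?q=auto+caption&limit=10");
// const { results } = await res.json();
// console.log(rankOf("auto-caption", results));
```

A `-1` here is worth treating as an alarm, not just a low rank — that's the "gone entirely" case I hit with the newer account.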
The search code is open source. Reading it took me two hours. It saved me from thinking my skills were broken when it was just the platform rebuilding its index.
This is part of a series on building AI video tools with OpenClaw. Previous posts: How I Built an AI Video Editor | What Broke When I Wrapped a Video API | Automating TikTok and Reels | Reverse-Engineering ClawHub's Top Video Skills | 12 #1 Rankings in 5 Days