Lin Zhu

HappyCapy ships skill-share analytics, contributor leaderboards & one-click install for Claude agents

What HappyCapy is (30-second version)

HappyCapy is a marketplace for Claude Skills — composable units of agent behavior you can drop into any Claude-powered app. Think npm for agent reasoning patterns, but with first-class Anthropic SDK integration.

This week we shipped three things: skill-share analytics, a contributor leaderboard, and a one-click installer. Here's what each means for you as a builder.

One-click skill adoption

Previously, adding a community skill to your agent meant copying boilerplate, wiring up the tool manifest, and hoping the author's README was accurate. The new installer does all that in one call:

import anthropic
from happycapy import install_skill

skill = install_skill("web-search-summarizer", version="1.2.0")

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    tools=skill.tools,           # auto-wired tool definitions
    system=skill.system_prompt,  # author's tested system prompt
    messages=[
        {"role": "user", "content": "Summarize the top 3 AI papers from last week."}
    ]
)
print(response.content)

skill.tools and skill.system_prompt are pre-validated against the Anthropic tool-use spec. If the skill author updates to a new schema format, the installer adapts on the fly — your call site stays unchanged.
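To make the pre-validation concrete, here's an illustrative sketch of the kind of shape check the installer performs, assuming tools follow the Anthropic tool-use spec (each tool definition needs a name, a description, and a JSON Schema input_schema whose type is "object"). This is not HappyCapy's actual validation code, just a minimal version of the idea:

```python
# Minimal sketch of a tool-definition shape check against the Anthropic
# tool-use spec. Illustrative only -- the installer's real validation
# is more thorough.

REQUIRED_KEYS = {"name", "description", "input_schema"}

def validate_tool(tool: dict) -> list[str]:
    """Return a list of problems; an empty list means the tool looks valid."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - tool.keys())]
    schema = tool.get("input_schema")
    if isinstance(schema, dict) and schema.get("type") != "object":
        problems.append("input_schema.type must be 'object'")
    return problems

good = {
    "name": "web_search",
    "description": "Search the web and return top results.",
    "input_schema": {"type": "object", "properties": {"query": {"type": "string"}}},
}
bad = {"name": "web_search", "input_schema": {"type": "string"}}

print(validate_tool(good))  # []
print(validate_tool(bad))   # ["missing key: description", "input_schema.type must be 'object'"]
```

Because every published skill passes a check like this at install time, a malformed manifest fails loudly in the installer instead of silently at request time.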

Skill-share analytics

As a consumer, you can now see how a skill performs in the wild before you adopt it. The dashboard surfaces:

  • Invocation counts — how often the skill's tools are called across all installs
  • Error rate by model — whether the skill behaves differently on Haiku vs Sonnet vs Opus
  • Latency p50 / p95 — useful when a skill sits in a latency-sensitive path
  • Token usage histogram — reason about cost before you commit
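For context on the latency metrics: p50 and p95 are just percentiles over raw invocation samples. A quick nearest-rank sketch (plain Python, not the HappyCapy API, with made-up sample values):

```python
# Nearest-rank percentile over a list of latency samples, to show what
# the dashboard's p50/p95 numbers mean. Sample values are fabricated.

def percentile(samples: list[float], pct: int) -> float:
    """Nearest-rank percentile: sort, then index by rank."""
    ranked = sorted(samples)
    idx = max(0, round(pct / 100 * len(ranked)) - 1)
    return ranked[idx]

latencies_ms = [120, 95, 210, 130, 98, 1400, 115, 102, 180, 99]
print(percentile(latencies_ms, 50))  # 115
print(percentile(latencies_ms, 95))  # 1400
```

Note how a single slow outlier dominates p95 while barely moving p50. That's exactly why the dashboard shows both: a skill with a good median can still be unusable on a latency-sensitive path.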

As an author, the same data flows into your contributor dashboard. You can see which tools are actually invoked, where users drop off, and which model tiers trigger the most errors.

Claude's tool-use behavior varies subtly across model tiers. A skill that's solid on Opus can hit edge cases on Haiku. Analytics makes that visible before it bites you in prod.

Contributor leaderboards

The leaderboard ranks contributors by a composite score: installs, 30-day retention, and community ratings — not raw download counts.

High-retention skills tend to be the ones that actually work in production rather than just demos that look sharp in a README. The leaderboard surfaces those.

For contributors, it creates a concrete feedback loop. High installs but low retention means something is wrong with the out-of-the-box experience — more actionable than a vague thumbs-down.
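To illustrate why a composite score rewards sticky skills over flashy ones, here's a hypothetical scoring function. The actual weights and formula aren't published; everything below is made up purely to show the shape of the idea:

```python
# Hypothetical composite leaderboard score. Weights and formula are
# invented for illustration -- HappyCapy's real scoring is not public.

def composite_score(installs: int, retention_30d: float, avg_rating: float) -> float:
    """retention_30d in [0, 1], avg_rating in [0, 5]. Installs are
    square-rooted so raw download counts can't dominate the ranking."""
    return 0.4 * installs ** 0.5 + 0.4 * retention_30d * 100 + 0.2 * avg_rating * 20

# A demo-quality skill with huge installs but poor retention...
flashy = composite_score(installs=10_000, retention_30d=0.05, avg_rating=3.0)
# ...versus a smaller skill that people keep using.
sticky = composite_score(installs=900, retention_30d=0.80, avg_rating=4.5)

print(flashy, sticky)  # the sticky skill outranks the flashy one
```

Under any scoring in this spirit, the 900-install skill with 80% retention beats the 10,000-install skill that everyone abandons.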

Skills compose

Skills are designed to layer. The installer supports loading multiple skills and merging their tool manifests:

from happycapy import install_skill, merge_skills

search = install_skill("web-search-summarizer")
memory = install_skill("persistent-memory")

combined = merge_skills(search, memory)

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=2048,
    tools=combined.tools,
    system=combined.system_prompt,
    messages=[...]
)

merge_skills handles tool name collisions and system prompt merging with a priority stack — last skill listed wins on conflicts. Inspect combined.manifest to see the resolved state before you run anything.
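Last-wins resolution is simple to picture. Here's an illustrative sketch of the idea (not HappyCapy's internals), using a dict keyed by tool name so later manifests override earlier ones:

```python
# Sketch of last-wins tool-manifest merging. Illustrative only --
# merge_skills also handles system prompt merging and manifest metadata.

def merge_tool_manifests(*manifests: list[dict]) -> list[dict]:
    resolved: dict[str, dict] = {}
    for manifest in manifests:
        for tool in manifest:
            resolved[tool["name"]] = tool  # later manifests win on name collisions
    return list(resolved.values())

search_tools = [{"name": "fetch", "description": "search fetcher"}]
memory_tools = [{"name": "fetch", "description": "memory fetcher"},
                {"name": "recall", "description": "recall a stored memory"}]

merged = merge_tool_manifests(search_tools, memory_tools)
print([t["name"] for t in merged])  # ['fetch', 'recall'] -- memory's fetch wins
```

The dict keeps first-insertion order, so the merged list stays stable while collided entries get the last skill's definition.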

What's next

Two things are coming next: skill versioning with semantic diff, so you can see exactly what changed between v1.1 and v1.2 before upgrading, and a local test runner that replays real invocation traces against a new skill version.

If you're building Claude agents and want to stop copy-pasting tool boilerplate, check it out: https://happycapy.ai. Free to browse; publishing requires an account.
