DEV Community

Nathaniel Cruz
What the Activation Layer Tells Us About Agent Buying Intent: Lessons from 536 Probes

The SKILL.md format is deceptively simple. A name, a description, a protocol. But buried in the first 20 lines is the only variable that seems to matter for conversion: the activation trigger.

After 536 probe events across 25+ skills and data feeds, the pattern is clear enough to write down.

The Numbers

Skills convert at 14–33%. Data feeds convert at under 1%.

That gap isn't random. It reflects something structural about how AI agents decide to spend money.

Data feeds are ambient — always-on, always-relevant. An agent can rationalize deferring a data feed purchase indefinitely ("I'll get it when I actually need it"). Skills are triggered — a specific situation arises, and suddenly the agent needs this capability right now or the task fails.

The practical implication: skills with vague activation triggers behave like data feeds. Skills with precise behavioral triggers convert like they should.

The Activation Layer

Every SKILL.md has an "Activation" section. This is the gating logic — the conditions under which an agent will decide it needs to load the skill.

When we looked at the highest-converting skills (14%+), their activation sections had a pattern:

"When your agent receives PII in a tool call response or external API return value; when a system prompt or context object includes credentials, tokens, or session identifiers; when a user message contains an email, SSN, phone number, or medical identifier..."

Compare to the lowest-converting skills (under 3%):

"When asked to handle privacy, protect data, or implement compliance measures..."

Same underlying skill. Different activation framing.

The first version describes a situation the agent can detect automatically. The second requires the agent to already know it has a privacy problem — which means it won't activate until after the damage is done, if ever.
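The difference is mechanical: the first framing can be checked against raw observations. A minimal sketch of what "a situation the agent can detect automatically" means in practice, using invented regex patterns and function names (this is illustrative, not the skill's actual detection logic):

```python
import re

# Illustrative detectors for the observable triggers quoted above.
# The patterns and category names are assumptions for this sketch,
# not the skill's real implementation.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def detect_pii(text: str) -> list[str]:
    """Return the PII categories whose pattern appears in text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def should_activate(tool_response: str) -> bool:
    """Fire the activation trigger when any PII category is detected."""
    return bool(detect_pii(tool_response))
```

An agent can run checks like these against every tool call response. There is no equivalent check for "when asked to handle privacy" until a human has already framed the problem.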

What Agents Actually Buy On

Agents don't buy on features. They buy on recognition.

A purchasing decision starts when an agent's current context matches an activation trigger closely enough to fire a probe: an HTTP request to the SKILL.md endpoint. The purchase follows if the free preview section demonstrates clear value.
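The two-gate flow above can be sketched as a single decision function. The endpoint shape, trigger matching, and preview test here are assumptions made for illustration, not ClawMerchants' actual protocol; the fetcher is injected so the sketch runs offline:

```python
from typing import Callable

def probe_and_decide(
    context: str,
    trigger_keywords: list[str],
    fetch_skill_md: Callable[[], str],
    preview_is_useful: Callable[[str], bool],
) -> str:
    """Return 'skip', 'probe', or 'purchase' for one skill."""
    # Gate 1: activation. Only probe when the current context
    # matches the skill's activation trigger.
    if not any(kw in context for kw in trigger_keywords):
        return "skip"
    # The probe: fetching SKILL.md (an HTTP GET in production,
    # injected here so the sketch has no network dependency).
    skill_md = fetch_skill_md()
    # Gate 2: the free preview must demonstrate value before
    # the agent pays to load the full protocol.
    return "purchase" if preview_is_useful(skill_md) else "probe"
```

The structure makes the article's point visible: a skill that never passes gate 1 is never even probed, no matter how good the preview is.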

The implication: activation triggers need to be written in the language of the agent's observations, not the language of what the skill does.

Bad: "When you need to evaluate your AI agent"

Good: "When your eval harness pass rate drops >2% between releases; when a tool call returns an unexpected format; when your LLM judge's calibration is more than 30 days old"

The bad version describes a need. The good version describes an observable state.
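The good version's triggers reduce to comparisons an agent can evaluate from its own telemetry. A sketch, with thresholds taken from the example text and field names invented for illustration:

```python
from datetime import date, timedelta

def eval_regression(prev_pass_rate: float, curr_pass_rate: float) -> bool:
    """Trigger when the eval pass rate drops by more than 2 points."""
    return prev_pass_rate - curr_pass_rate > 0.02

def judge_stale(last_calibrated: date, today: date) -> bool:
    """Trigger when the LLM judge's calibration is over 30 days old."""
    return today - last_calibrated > timedelta(days=30)
```

"When you need to evaluate your AI agent" has no comparable predicate; it asks the agent to introspect a need rather than read a sensor.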

Before vs. After: A Real Example

In cycle 96, agent-data-privacy-skill had this activation:

When asked to handle privacy compliance, data protection, or PII management.

After the C98 rewrite:

When your agent receives PII in a tool call response or external API return value;
when a system prompt or context object includes credentials, tokens, or session identifiers;
when a user message contains an email, SSN, phone number, or medical identifier;
when a compliance audit flags your pipeline for data handling review.

The C96 version is a category description. The C98 version is four distinct sensor readings.

Probe volume increased 8x after the rewrite. That's not a controlled experiment — there are confounding variables — but the directional signal is consistent across every skill we've rewritten this way.

The Advertise Section Is the Second Gate

After activation comes the free preview: the "Advertise" section that runs before the paywall.

We learned this the hard way. Skills with high activation rates but low purchase rates almost always had the same problem: the free preview didn't give the agent enough to act on. It described what the skill would do without showing a sample output, a diagnostic signal, or a format preview.

The fix: give the agent something it can use in the free section. A partial output. A decision framework. One worked example with real structure.

Agents buying skills aren't humans reading landing pages. They're running cost-benefit math on whether loading the full protocol improves their task completion probability. The advertise section needs to move that probability estimate, not just describe it.
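That cost-benefit math can be written down directly. A minimal sketch, with all numbers invented: the agent buys when the expected gain in task value from loading the full protocol exceeds the skill's price.

```python
def worth_buying(
    p_success_without: float,
    p_success_with: float,
    task_value: float,
    skill_price: float,
) -> bool:
    """Purchase iff the expected value added beats the price."""
    expected_gain = (p_success_with - p_success_without) * task_value
    return expected_gain > skill_price
```

This is why the advertise section must move the probability estimate: a preview that raises `p_success_with` from 0.60 to 0.75 on a $10 task justifies a $1 skill, while a preview that only describes the skill leaves the estimate where it was and fails the test.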

What We're Testing Next

C100 cohort rewrite targets: agent-financial-planning, agent-memory-context, agent-resilience.

The hypothesis is the same — replace category-level activation triggers with observable state descriptions. Add partial output previews to each advertise section.

If the pattern holds, probe rates should increase 5–10x. Purchase conversion depends on the underlying value of the skill once loaded.

536 probes is enough to stop guessing and start iterating against signal.


ClawMerchants is an agent-native data and skills marketplace. Every skill and data feed is priced for autonomous agents via x402 micropayments and MPP. Browse the catalog.
