
Max Quimby

Posted on • Originally published at agentconn.com

Skills Go Vertical: Three Domain Bundles Trend

Editorial hero illustration in deep teal and emerald: three vertical glowing skill bundles — scientific beaker, academic graduation cap, and learning open book — stacked side by side above a stylized GitHub trending chart silhouette

📖 Read the full version with charts and embedded sources on AgentConn →

A week ago, the GitHub-trending story on skills was a generic-directory race. Today it is a domain-specialization race. In a single 24-hour window, three vertical skill bundles — scientific, academic, and learning — each crossed a major trending surface: two landed in GitHub's worldwide top 12 by star velocity, and the third hit the HN front page. The genre has moved past "ship a .claude folder" and into "ship a .claude folder for this profession."

This is the moment the skill ecosystem stops resembling NPM-style awesome-* lists and starts resembling industry-trade-association toolkits. We've covered the skills directory race and the skill-spam validator wave already on AgentConn. What's new this cycle is that the next layer — vertical bundles — is now visibly being built on top, and three of them landed at once.

Here are the three verticals that crossed the trending bar in the May 14 window, what each one ships, and the pattern they all share.

The cycle, in three signals

The day's trending surfaces (GitHub's board plus the HN front page) read like a thesis-by-coincidence:

GitHub trending page for May 14 2026 showing skill-bundle repositories dominating the top 12 — mattpocock/skills holding #2, K-Dense-AI/scientific-agent-skills at #7, Imbad0202/academic-research-skills at #11

  • mattpocock/skills at #2 with +2,971 stars/day — the generic-directory canonical, still holding velocity day 4
  • obra/superpowers at #4 with +1,801 — the agentic-skills framework that pairs with mattpocock's bundle
  • K-Dense-AI/scientific-agent-skills at #7 with +637 — the scientific vertical
  • danielmiessler/Personal_AI_Infrastructure at #8 — the personal-stack flank
  • Imbad0202/academic-research-skills at #11 with +441 — the academic vertical
  • DrCatHicks/learning-opportunities at HN #8 with 184 points — the learning vertical, landed on the commentary surface rather than trending

Five of those are skill packs and three are vertical-specialized. That's a structural change. A month ago, vertical skill packs didn't exist as a category — every pack was framed as "general developer skills." This week the verticals are filling in across three different domains at the same time, all with their own grammar, their own audience, and their own download trajectory.

Vertical 1 — Scientific (K-Dense-AI/scientific-agent-skills)

The scientific vertical's entry is K-Dense-AI/scientific-agent-skills, sitting at GitHub #7 with +637 stars in 24 hours. The repo's pitch is that scientific research workflows — protein structure prediction, lab-notebook automation, literature scraping with citation graph traversal, experiment-design rubrics — are concrete enough to encode as skills, and that those skills compose into actual research throughput.

K-Dense-AI/scientific-agent-skills GitHub repository — production skill bundle for scientific research workflows including protein folding, literature scraping, and experiment-design rubrics

The architectural tell is not the skill names — those are obvious from the domain — but the composition model. Where mattpocock's bundle treats each skill as a stand-alone .claude/skills/{name}/SKILL.md file, scientific-agent-skills treats them as chained: a literature-review skill calls a citation-graph skill, which calls a PDF-extraction skill, which feeds a methodology-comparison skill. The bundle ships explicit dependency graphs, not just files. That's a step up the abstraction ladder.
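For readers who haven't shipped a skill pack: a Claude Code skill is a markdown file with YAML frontmatter living at .claude/skills/{name}/SKILL.md. A chained skill in the style described above might look like the sketch below — the skill names (literature-review, citation-graph, pdf-extraction, methodology-comparison) are illustrative stand-ins drawn from the workflow described, not the repo's actual files:

```markdown
---
name: literature-review
description: Structure a literature review for a given topic. Use when the
  user asks for a survey, review, or prior-work summary.
---

# Literature Review

1. Invoke the `citation-graph` skill to expand the seed papers into a
   citation neighborhood.
2. For each paper, invoke the `pdf-extraction` skill to pull the abstract,
   methods section, and key claims.
3. Feed the extracted claims into the `methodology-comparison` skill and
   emit a structured comparison table.
```

The frontmatter is what the harness reads to decide when to load the skill; the body is the workflow. The chaining lives entirely in the body's references to other skills, which is why shipping an explicit dependency graph alongside the files is a meaningful step up.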

The other tell is the user. K-Dense-AI's README cites computational chemistry and structural biology groups as design partners — not generic "developers." When a skill bundle ships with a named user cohort, the pack stops being a portfolio piece and becomes a vertical SaaS substrate that happens to be open-source.

This pairs naturally with the broader continuous-compute-stack thesis: if research workflows can be expressed as skills, they can be batched, queued, and run against the same volume infrastructure as code generation. Wet-lab automation becomes a skill-pack problem.

Vertical 2 — Academic (Imbad0202/academic-research-skills)

The academic vertical's entry — Imbad0202/academic-research-skills, GitHub #11 at +441/day — is the more provocative one because it sits in the meta-research layer. The skills include literature-review structuring, citation-graph traversal, methodology critique templates, peer-review draft helpers, and statistical-methods explainers.

Imbad0202/academic-research-skills GitHub repository — academic research skill bundle for literature review, citation graph traversal, methodology critique, and peer review drafting workflows

What's interesting about this one is the audience overlap with the scientific bundle but the framing inversion. K-Dense-AI's scientific pack is about producing research output. Imbad0202's academic pack is about evaluating it. The two are complementary halves of a single research-quality flywheel — and the fact that they emerged independently, in the same cycle, on the same trending board, is the cleanest evidence that the vertical-bundle thesis is converging.

The pack also surfaces the awkward fact that AI-authored peer review is now a real category. The README does not dodge it; the inclusion of a "reviewer-mode skill" is exactly the kind of thing that would have been called skill spam three weeks ago and is now treated as a legitimate subcategory of academic-research tooling. The genre is settling into its own grammar fast.

The HN-skill-spam discussion earlier this month — which we covered in the validator-wave piece — is the prior step here. Once the fake vertical packs got named and shamed, the real vertical packs got room to differentiate. Imbad0202's pack benefits from the spam crackdown, not in spite of it.

Vertical 3 — Learning (DrCatHicks/learning-opportunities)

The learning vertical's entry is DrCatHicks/learning-opportunities — and unlike the other two, it landed on HN rather than GitHub trending, reaching #8 with 184 points. The HN landing is itself the signal. Learning-skill packs are getting cultural attention, not just developer attention — and that's a different distribution motion than the developer-coded scientific and academic packs.

DrCatHicks/learning-opportunities GitHub repository — learning skills bundle covering curriculum design, retrieval practice, spaced repetition prompts, and worked-example generation for AI tutoring use cases

HN search results for learning-opportunities — the DrCatHicks bundle landed at HN #8 with 184 points, the learning-skill vertical's primary signal of the cycle

The pack focuses on curriculum-design primitives, retrieval-practice scaffolds, spaced-repetition prompt templates, worked-example generators, and assessment rubrics. Audience: anyone shipping an AI-assisted tutoring product — and there are now a lot of those. The convergence read here pairs cleanly with the broader post-Khanmigo AI-tutoring market piece we ran a few days ago — the application layer needs primitives, and DrCatHicks' pack is one of the first credible attempts at a learning-skill canonical set.

What's most interesting is that DrCatHicks is a domain expert from outside the typical Claude-Code-skill-author crowd. The README cites cognitive-science research, not engineering-debugging methodology. That's the second tell that the vertical-bundle era has begun: the authors are domain experts, not generalist engineers.

The pattern: domain experts shipping primitives

Lining up the three vertical packs side by side, the shared structural pattern is more revealing than any individual one. All three:

  1. Identify a named user cohort (computational chemists, academic researchers, instructional designers) rather than "developers writ large."
  2. Author from inside the domain. K-Dense-AI cites structural-biology partners. Imbad0202's pack reads like an academic toolkit. DrCatHicks ships cognitive-science citations.
  3. Compose skills into workflows. Each pack ships at least one chained skill that calls others — closer to a function-call DAG than a flat file list.
  4. Land on the trending surface that matches their audience. Scientific and academic on developer-class (GitHub trending) surfaces; learning on the cultural-engagement (HN) surface.
  5. Differentiate on credentialing, not on volume. None of these packs is trying to be exhaustive — they're trying to be correct for their domain.

That last point matters. The skill-spam complaint two weeks ago was about packs that maximize file count without quality. Vertical packs invert that — they trade breadth for in-domain rigor, and that trade is what's getting them onto the trending board.
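To make the composition point concrete: a chained pack is effectively a small DAG, and a harness can resolve a safe load/execution order with a plain topological sort. A minimal sketch in Python — the skill names and the manifest shape are hypothetical, mirroring the chain described for the scientific pack rather than any repo's actual schema:

```python
from graphlib import TopologicalSorter

# Hypothetical manifest: each skill maps to the skills it calls.
# This mirrors the literature-review chain described above.
skills = {
    "pdf-extraction": [],
    "citation-graph": ["pdf-extraction"],
    "literature-review": ["citation-graph", "pdf-extraction"],
    "methodology-comparison": ["literature-review"],
}

# TopologicalSorter takes {node: predecessors}; static_order() yields
# dependencies first, so a harness can load skills in this sequence.
order = list(TopologicalSorter(skills).static_order())
print(order)
# → ['pdf-extraction', 'citation-graph', 'literature-review', 'methodology-comparison']
```

The same structure also gives you cycle detection for free: TopologicalSorter raises CycleError if two skills call each other, which is exactly the failure mode a flat file list can't catch.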

What builders should actually do

If you're shipping skill content in the next 30 days, the operational reads from this cycle are:

  1. Pick a vertical, not a layer. The "general developer skills" pack is fully saturated — mattpocock and obra together cover that surface. The open space is in specific professions. Pick a profession you have access to and ship the bundle a domain expert would have wanted.
  2. Compose, don't catalog. Skill packs that ship chained workflows (skill calls skill) are landing harder than skill packs that ship flat lists. The chaining is the artifact; the list is the inventory.
  3. Credential, don't volume. Cite your design partners in the README. Cite the research. The skill-spam validators (covered in our validator-wave piece) make uncredentialed packs cheap to dismiss; credentialing is the cheapest defense.
  4. Pick your trending surface deliberately. If your audience is engineers, ship on GitHub. If your audience is researchers or educators, ship on HN or the relevant Substack and let GitHub catch up. The trending surface is downstream of the audience.
  5. Build for cross-harness from day one. All three packs in this cycle work across Claude Code, Cursor, and Codex CLI. Single-harness packs are already the narrow case; vertical packs especially need horizontal harness support because their users aren't typically Claude-Code-native.
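On the cross-harness point: in practice this usually means shipping the same skill content in each tool's expected location. A sketch of one plausible repo layout, assuming each tool's documented conventions — Claude Code reads .claude/skills/, Cursor reads .cursor/rules/, and Codex CLI reads AGENTS.md; the skill names and the skills-src build directory are illustrative:

```text
vertical-skill-pack/
├── .claude/
│   └── skills/
│       └── literature-review/
│           └── SKILL.md            # Claude Code skill (frontmatter + steps)
├── .cursor/
│   └── rules/
│       └── literature-review.mdc   # same content in Cursor's rule format
├── AGENTS.md                       # Codex CLI entry point
└── skills-src/                     # single source of truth, built into the above
    └── literature-review.md
```

The single-source-of-truth directory is the design choice that matters: maintaining three hand-edited copies of each skill is exactly the drift problem that gets a pack flagged as stale.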

We expect three more vertical packs to land in the next 14 days. The cleanest candidates are legal (contract analysis, case retrieval, regulatory comparison), clinical (patient-history structuring, differential-diagnosis prompts, clinical-decision rubrics, with appropriate guardrails), and product-management (PRD scaffolds, user-research synthesis, sprint-planning rubrics). Each one has the audience density and the domain-expert author pool to support a credible bundle. Watch for them.

The one-sentence takeaway

Scientific, academic, and learning skill bundles all crossing the trending bar in the same 24-hour cycle is the convergence signal: domain-specialized skill packs are now the leading edge of the agent ecosystem, and the next 30 days will be defined by which verticals fill in next.


Originally published at AgentConn
