I just shipped agent-harness v1.0.0.
It’s a Node.js / TypeScript CLI for discovering, staging, activating, and wiring reusable AI-agent assets across:
- VS Code / GitHub Copilot
- Cursor
- OpenCode
- Zed
- Claude Code
- Pi
Links:
- Repo: https://github.com/ar27111994/agent-harness
- Release: https://github.com/ar27111994/agent-harness/releases/tag/v1.0.0
- npm: https://www.npmjs.com/package/@ar27111994/agent-harness
- Feedback discussion: https://github.com/ar27111994/agent-harness/discussions/126
Why I made this
This project came from a very specific frustration.
There are already a lot of big all-in-one AI skills/prompts/context bundles out there:
- antigravity-awesome-skills
- cursor-skills
- awesome-agent-skills
- awesome-claude-skills
- awesome-copilot
- and similar projects in that orbit
A lot of those are useful. I’m not against them.
But for my own day-to-day project mix, they often felt too blunt.
The problem wasn’t “I need more context.”
It was more like:
- too much irrelevant context getting pulled in
- too little selectivity
- too much bundle gravity
- not enough control over what gets wired where
- too much risk of bloating already-expensive context windows
So I wanted something narrower and more restrained.
Not a mega-bundle.
Not a giant dump of everything.
Not “install this, and now your AI stack has 500 things attached to it.”
I wanted something that tries to answer a more practical question:
How do you make reusable agent assets portable across tools and projects without turning every repo into a context landfill?
That’s the reason agent-harness exists.
What it actually does
At a high level, agent-harness manages a lifecycle for reusable agent assets:
- discover what might be relevant to a workspace
- mirror and stage reproducible artifacts
- install them into lifecycle-aware host stores
- activate selected assets into runtime views
- wire them into the target host/workspace
The important part for me is not just that it does these steps.
It’s that it tries to do them selectively and in a host-aware way, instead of treating every tool as one undifferentiated pile of prompts and skills.
That distinction mattered a lot in practice.
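To make the "selective and host-aware" idea concrete, here is a minimal sketch of the discover/activate steps. All names here (AgentAsset, discover, activate, the tag-matching logic) are hypothetical illustrations of the lifecycle described above, not the actual agent-harness API:

```typescript
// Hypothetical sketch of selective, host-aware asset handling.
// None of these types or functions are the real agent-harness API.

type Host = "vscode" | "cursor" | "opencode" | "zed" | "claude-code" | "pi";

interface AgentAsset {
  id: string;
  hosts: Host[]; // hosts this asset can be wired into
  tags: string[]; // used for workspace-relevance matching
}

// discover: keep only assets relevant to the workspace,
// instead of pulling the whole bundle into context.
function discover(catalog: AgentAsset[], workspaceTags: string[]): AgentAsset[] {
  return catalog.filter((a) => a.tags.some((t) => workspaceTags.includes(t)));
}

// activate: build a per-host runtime view, so each tool
// only sees the assets it actually supports.
function activate(staged: AgentAsset[], host: Host): AgentAsset[] {
  return staged.filter((a) => a.hosts.includes(host));
}

const catalog: AgentAsset[] = [
  { id: "ts-review-skill", hosts: ["vscode", "cursor"], tags: ["typescript"] },
  { id: "py-lint-prompt", hosts: ["vscode"], tags: ["python"] },
];

const relevant = discover(catalog, ["typescript", "node"]);
const active = activate(relevant, "cursor");
console.log(active.map((a) => a.id)); // [ 'ts-review-skill' ]
```

The point of the two filters is the restraint itself: a workspace with no Python never sees the Python asset, and Cursor never gets assets it cannot host.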
What I was optimizing for
I was not trying to build the biggest possible agent asset bundle.
I was optimizing for:
- portability across different hosts
- reuse across different kinds of repos
- more disciplined context injection
- less irrelevant baggage
- a workflow that survives real product development instead of just looking impressive in a screenshot
That’s also why the project ended up focusing on discovery, staging, activation, and wiring instead of just curating one huge collection.
Honest note: this was mostly vibe-coded
This project was mostly vibe-coded at the start.
That probably shows in both good and bad ways.
The good side is that it moved quickly and came from a very real pain point.
The bad side is that projects built like this can drift into strange abstractions, workflow-specific assumptions, and over-engineering.
So after getting the core idea working, I pushed it through a much less romantic phase:
- release audits
- real workspace testing
- Windows-specific validation
- issue-driven cleanup
- host adapter tightening
- docs/changelog/release workflow cleanup
In other words: it started loose, then got forced into a stricter shape.
What I want now is criticism
At this point, I’m not really looking for generic applause.
What I want most is feedback — especially negative feedback.
Questions I actually want answered:
- Is this solving a real problem, or mostly my problem?
- Is the “restrained alternative” framing actually valid?
- Which parts feel over-engineered?
- Which hosts are worth supporting, and which are not?
- Where does the abstraction break down?
- What should be removed or simplified?
- Where would this fail in your real workflow?
If you try it and it feels wrong, that is useful.
If you think the whole premise is flawed, that is useful too.
If you think this should be much smaller, much dumber, or much more opinionated, that’s useful.
Please open issues if you test it
The best outcome from posting this is not traffic.
It’s not stars.
It’s not generic “nice work.”
The best outcome is:
- issue reports
- criticism
- real-world friction reports
- product validation or invalidation
If you try it, please open an issue or drop feedback in the GitHub discussion.
Feedback discussion: https://github.com/ar27111994/agent-harness/discussions/126
If possible, include:
- repo type/stack
- host used
- OS
- what you expected
- what actually happened
- what felt confusing, unnecessary, or weak
- whether the core idea itself seems useful or not
If your reaction is “this is overbuilt” or “this is the wrong abstraction,” please tell me.
That’s honestly the kind of signal I’m looking for right now.