## The real bottleneck isn't information
You're tracking a domain — AI coding tools, product opportunities, whatever. You check Hacker News, GitHub Trending, Reddit, arXiv, a dozen sources. An hour gone, mostly noise, and you still almost miss the one thing that mattered.
So you build (or buy) a tool that automates the collection. Great. Now you have 500 items instead of 50, and a summary on top. The noise is organized, but it's still noise.
Here's what I've learned after building and using an intelligence agent for weeks: the data sources are public. Everyone can access the same feeds. The real differentiator is the lens — who's looking, and what they're looking for.
A watch intent that says "track AI coding tools" produces a very different report than one that says "I'm evaluating whether to enter this market. Focus on IDE-level products, track the technical architecture competition between Cursor/Windsurf/Copilot, known blind spot: I have no coverage on the demand side or academic frontier."
Same sources. Same LLM. Completely different intelligence quality.
## The problem with "just tell me what you want"
Most AI tools ask you to describe what you want, then go fetch it. The assumption is that you know what you want. But in practice:
- You describe your interest using the vocabulary you already have — which means you can't find things described in terms you don't know yet
- You focus on what you're already aware of — systematically missing adjacent areas
- You don't know what you don't know — and no amount of "add more sources" fixes that
This is not a search problem. It's a cognitive framing problem.
## What I built
I'm the developer of Signex, an open-source personal intelligence agent that runs inside Claude Code. It monitors topics you care about, collects from 15+ data sources, analyzes through different lenses, and delivers reports. It remembers your feedback and adjusts over time.
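Conceptually, the core loop is simple: collect items, apply the watch's boundary, analyze what survives. Here's a minimal sketch of that cycle in Python — the names and structures are illustrative, not Signex's actual internals, and the keyword matching stands in for what is really an LLM-driven analysis step:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    source: str   # e.g. "hackernews", "github-trending"
    title: str
    body: str

@dataclass
class Watch:
    focus: str
    interests: list[str] = field(default_factory=list)
    excludes: list[str] = field(default_factory=list)

def relevant(item: Item, watch: Watch) -> bool:
    """Keep items matching an interest and not an exclusion (crude keyword lens)."""
    text = f"{item.title} {item.body}".lower()
    if any(x.lower() in text for x in watch.excludes):
        return False
    return any(k.lower() in text for k in watch.interests)

def run_cycle(items: list[Item], watch: Watch) -> list[Item]:
    # In the real pipeline, matched items would be handed to an LLM
    # along with the full intent file; this only applies the boundary.
    return [i for i in items if relevant(i, watch)]
```

The point of the sketch is that `Watch` is where all the leverage lives: the richer its fields, the sharper the boundary every downstream step inherits.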
A few weeks ago I shared the initial release. The collection and analysis pipeline worked well. But I kept hitting the same wall: the quality of the output was bounded by the quality of the input — the user's intent definition and self-awareness.
So for V6, I built two core skills that address this directly: identity-shape and watch-shape.
## identity-shape: knowing who's looking
Your identity — your professional background, decision context, information preferences, known blind spots — is the foundation that all analysis sits on. A report for someone evaluating whether to enter a market looks completely different from one for someone doing daily trend tracking.
But asking users to fill out a profile form doesn't work. People don't naturally think in terms of "cognitive horizons" or "decision contexts." They write "indie developer, interested in AI" and move on.
identity-shape solves this through conversation, not forms. It draws on Dervin's Sense-Making theory (understanding the gap the user is trying to bridge), Gadamer's concept of horizons (your background is both your strength and your filter), and the Rumsfeld/Johari framework for mapping what you know you don't know.
But none of this theory is exposed to the user. The conversation feels natural:
"When you get these intelligence reports, what's usually the next thing you do with them? Are you evaluating whether to pursue a direction, looking for specific product ideas, or just maintaining a feel for the industry?"
The output is a rich identity profile that gives the agent real context for every analysis it runs.
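For concreteness, a profile like that might carry fields along these lines. This is a hypothetical sketch, not Signex's actual schema — the field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class IdentityProfile:
    background: str                  # professional background: who's looking
    decision_context: str            # what the reports actually feed into
    preferences: list[str] = field(default_factory=list)   # information preferences
    blind_spots: list[str] = field(default_factory=list)   # known unknowns, stated up front

    def as_context(self) -> str:
        """Render the profile as a preamble the agent prepends to each analysis."""
        lines = [
            f"Analyst background: {self.background}",
            f"Decision context: {self.decision_context}",
        ]
        if self.blind_spots:
            lines.append("Known blind spots: " + "; ".join(self.blind_spots))
        return "\n".join(lines)
```

The conversational skill's job is to fill these fields with something richer than "indie developer, interested in AI" — the structure is trivial; eliciting honest content for it is the hard part.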
## watch-shape: seeing how you see
This is the one I'm most excited about.
Every watch definition is an act of distinction — choosing to look at A means choosing not to look at B. watch-shape acts as a second-order observer: it doesn't just help you define what to watch, it helps you see how you're watching, and what your watching framework excludes.
The skill is built on six cognitive operation layers, distilled from 19 frameworks across cognitive science, philosophy, and cybernetics:
| Layer | Operation | Core question |
|---|---|---|
| 1 | Cost of distinction | What does your boundary exclude? (Spencer-Brown, Luhmann) |
| 2 | Structure of ignorance | What kind of not-knowing is this? (Proctor, Rumsfeld/Johari) |
| 3 | Limits of language | What can't your vocabulary reach? (Wittgenstein) |
| 4 | Shaping of inquiry | What does your question presuppose? (Dewey, Peirce, Kuhn) |
| 5 | Requisite variety gap | How diverse are your sensors? (Ashby, Beer) |
| 6 | Enactment of frame | What reality is your monitoring creating? (Weick, Klein, Gadamer, Heuer) |
The critical design decision: not all layers work at all times.
Layers 3 and 6 are only effective during iteration — after the watch has run at least once and the user has actual data experience. Asking "what signal would make you update your mental model?" when the user doesn't have a mental model yet produces useless answers. This isn't the user being vague; it's the wrong cognitive operation at the wrong time.
During initial creation, layers 1, 2, 4, and 5 do the heavy lifting — clarifying intent, revealing boundaries, checking sensor diversity.
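That gating rule is easy to encode. A sketch of lifecycle-aware layer selection — the layer numbers follow the table above, but the function itself is illustrative, not Signex's implementation:

```python
# Layer numbers follow the table: 1 cost of distinction, 2 structure of
# ignorance, 3 limits of language, 4 shaping of inquiry, 5 requisite
# variety gap, 6 enactment of frame.
CREATION_LAYERS = {1, 2, 4, 5}   # usable before any data exists
ITERATION_ONLY = {3, 6}          # need lived experience with real reports

def active_layers(stage: str) -> set[int]:
    """Return which cognitive operations apply at a given lifecycle stage."""
    if stage == "creation":
        return set(CREATION_LAYERS)
    if stage == "iteration":
        # Once the watch has run and the user has seen real data,
        # all six layers become fair game.
        return CREATION_LAYERS | ITERATION_ONLY
    raise ValueError(f"unknown stage: {stage}")
```

The design choice this encodes: the system refuses to ask layer-3 and layer-6 questions until the user has something concrete to reason against.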
## Before and after
Before watch-shape — a typical intent file:
```
## Focus
AI coding tools

## Key Interests
- New IDEs
- Agent features
- Community reactions

## Goal
Stay updated on the space
```
After watch-shape — the same watch, shaped:
```
## Focus
AI coding tools — IDE-level products and their evolution toward agent-native architectures

## Key Interests
- Technical architecture competition (Cursor vs Windsurf vs Copilot approaches)
- Agent-mode capabilities and their actual adoption patterns
- Developer workflow changes driven by AI tooling (not just features, but behavioral shifts)

## Decision Context
Evaluating whether to build developer tools in this space. Need to understand
where the market is consolidating vs where gaps remain.

## Competing Hypotheses
1. The IDE war is already won by whoever nails agent mode first
2. IDEs become commoditized; the value shifts to specialized vertical agents
3. The whole "AI IDE" category gets absorbed back into VS Code + extensions

## Known Blind Spots
- Demand side: what are developers actually struggling with vs what tool makers think they want
- Academic frontier: what's coming in code generation research that hasn't hit products yet
- Non-English communities: the Chinese developer ecosystem has different tool preferences and pain points

## Exclude
- Browser extensions, simple autocomplete plugins
- Funding/valuation news unless directly relevant to product direction

## Goal
Actionable intelligence for market entry timing and positioning decisions
```
Same person, same interest. But the second version drives analysis that's an order of magnitude more useful — because the agent now knows why you're watching, what assumptions you're operating under, and where your blind spots are.
## The design philosophy
A few principles that shaped this:
**Conversation, not configuration.** These skills work through dialogue, not forms. Users discover their own blind spots through the process of being asked the right questions — that's the whole point.

**Second-order observation.** The agent doesn't just collect what you asked for. It observes how you're asking, and makes the invisible frames visible. This is Luhmann's core insight: every observation has a blind spot, and you need an observer of the observer to reveal it.

**Lifecycle awareness.** Not every cognitive operation is appropriate at every stage. The system respects where the user is in their understanding and doesn't ask questions they can't meaningfully answer yet.

**No jargon in the conversation.** The theoretical foundations are deep (Spencer-Brown, Ashby, Weick, etc.), but the user never sees them. The conversation feels like talking to a thoughtful colleague, not attending a philosophy seminar.
## Why this matters beyond Signex
I think this pattern — using LLMs as second-order observers to help users examine their own cognitive frames — has applications far beyond intelligence monitoring. Any system where the quality of output depends on the quality of user-defined intent could benefit from this approach:
- Search systems that help you discover what you should be searching for
- Research tools that reveal the assumptions in your research questions
- Decision support systems that surface the frames you're operating within
The information abundance problem is solved. We have more data than we can process. The next frontier is cognitive framing — helping people see what their way of seeing excludes.
## Try it
Signex is open source (AGPL-3.0): github.com/zhiyuzi/Signex
Prerequisites: Python 3.11+, uv, Claude Code.
```bash
git clone https://github.com/zhiyuzi/Signex.git
cd Signex && uv sync
cp .env.example .env
claude
```
Say "Hi" and it initializes. If your identity profile is thin, it'll suggest shaping it. Create a watch, and if the intent is sparse, it'll suggest deepening it. The cognitive scaffolding is built into the natural flow — you don't have to know it's there.
Feedback, issues, and contributions welcome on GitHub.