You can't compete with infinite compute. But you can find adjacent spaces where depth matters more than scale.
The Strategic Insight
I couldn't compete with OpenAI, Anthropic, or Google on LLM capabilities. They have compute, talent, and capital I'll never approach.
So I pivoted to something related but fundamentally different: not building better models, but better infrastructure for understanding what these models help us discover about ourselves.
Instead of "how do we make LLMs smarter?", I asked: "what do our conversations with LLMs reveal about how we think, and how can we make that structure queryable, browseable, and conversable?"
This turned out to be a far less saturated space with deep intellectual problems and practical applications.
The lesson: when you can't compete on the main axis, find the orthogonal space where your particular skills and constraints become advantages.
The Cognitive MRI
I analyzed years of my own AI conversation logs—thousands of chats with ChatGPT, Claude, and other systems, spanning code, research, philosophy, health, and projects.
Linear text hides structure. But when you construct semantic similarity networks from these conversations and analyze their topology, something fascinating emerges.
Method (Brief)
- Embed conversations using standard language models
- Weight user messages twice as heavily as AI responses (ablation studies showed this maximizes modularity)
- Construct a similarity graph with cosine-similarity edge weights
- Apply a threshold cutoff (a phase transition appears around θ ≈ 0.9)
- Identify communities through network clustering
The result: a map of how your knowledge exploration actually structures itself.
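Concretely, the whole pipeline is only a few dozen lines. Here is a minimal sketch using sentence-transformers and networkx; the model name and helper functions are illustrative stand-ins rather than the exact implementation, but the 2x user weighting and the θ ≈ 0.9 threshold match the steps above.

```python
import numpy as np
import networkx as nx
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def embed_conversation(user_msgs, ai_msgs, user_weight=2.0):
    """One vector per conversation; user turns weighted 2x."""
    texts = list(user_msgs) + list(ai_msgs)
    weights = np.array([user_weight] * len(user_msgs) + [1.0] * len(ai_msgs))
    vecs = model.encode(texts, normalize_embeddings=True)
    v = np.average(vecs, axis=0, weights=weights)
    return v / np.linalg.norm(v)  # renormalize so dot product = cosine sim

def build_graph(embeddings, theta=0.9):
    """Similarity graph: keep only edges above the threshold."""
    sims = embeddings @ embeddings.T
    G = nx.Graph()
    G.add_nodes_from(range(len(embeddings)))
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if sims[i, j] >= theta:
                G.add_edge(i, j, weight=float(sims[i, j]))
    return G

# embeddings = np.stack([embed_conversation(u, a) for u, a in conversations])
# communities = nx.community.louvain_communities(build_graph(embeddings))
```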
Key Finding: Heterogeneous Topology
Different domains of conversation produce fundamentally different network structures:
Programming work: Tree-like hierarchies
- Linear problem-solving paths
- Branching from problem to solution
- Few cross-domain connections
Research work: Small-world networks
- Hub-and-spoke structure
- Central concepts with many connections
- Bridge nodes linking distant domains
Other domains: Hybrid structures between these extremes
The topology reveals the cognitive mode of each domain. This wasn't predicted—it emerged from the data.
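You don't need anything exotic to see the distinction in your own data. Here is a crude first-pass signature, not the full analysis, just the intuition that trees have near-zero clustering while small-world graphs pair high clustering with short paths:

```python
import networkx as nx

def topology_signature(G):
    """Rough tree-vs-small-world indicators on the largest component."""
    comp = max(nx.connected_components(G), key=len)
    sub = G.subgraph(comp)
    clustering = nx.average_clustering(sub)          # ~0 for trees
    path_len = nx.average_shortest_path_length(sub)  # short in small worlds
    return {
        "avg_clustering": clustering,
        "avg_path_length": path_len,
        "looks_tree_like": clustering < 0.05,  # crude heuristic cutoff
    }
```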
Bridge Nodes
A few key conversations act as bridge nodes that hold the entire knowledge graph together. They're not the "most important" conversations—they're the ones that connect otherwise separate communities.
Remove these bridges, and your knowledge graph fragments. They represent the conceptual linchpins of how you integrate different domains.
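"Bridge node" can be made precise in a couple of standard ways; articulation points and betweenness centrality are the obvious proxies, and each is a single networkx call:

```python
import networkx as nx

def bridge_nodes(G, top_k=5):
    """Two standard proxies for 'conversations holding the graph together'."""
    # Articulation points: removing one disconnects part of the graph.
    cut_nodes = set(nx.articulation_points(G))
    # Betweenness: nodes sitting on many shortest paths, which is where
    # community-to-community traffic concentrates.
    bc = nx.betweenness_centrality(G)
    top_between = sorted(bc, key=bc.get, reverse=True)[:top_k]
    return cut_nodes, top_between
```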
What Current RAG Systems Miss
Typical retrieval-augmented generation (minimal sketch below):
- Convert query to embeddings
- Find nearest neighbors in vector space
- Return similar documents
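That whole pipeline fits in a few lines, which is exactly the point. Assuming unit-normalized vectors:

```python
import numpy as np

def vanilla_retrieve(query_vec, doc_vecs, k=5):
    """Plain nearest-neighbor search: pure geometry, no graph."""
    sims = doc_vecs @ query_vec  # cosine similarity for normalized vectors
    return np.argsort(sims)[::-1][:k]  # indices of the k most similar docs
```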
This is nearest-neighbor search in a metric space. It doesn't know:
- Which documents are strongly connected
- Where bridge nodes link distant communities
- Which documents act as hubs
- How knowledge clusters actually organize
We're throwing away the graph structure.
The Vision: Queryable, Browseable, Conversable
What if our tools actually understood the network structure of the knowledge they operate on?
Queryable
Not just "find similar documents" but:
- "What are the bridge concepts between these topics?"
- "Which documents act as hubs in this domain?"
- "Show me the path connecting these ideas"
The query language becomes richer because you're navigating graph structure.
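Against a networkx graph, these queries are already nearly one-liners. The helpers below are illustrative sketches, not the package's API:

```python
import networkx as nx

def connecting_path(G, doc_a, doc_b):
    """'Show me the path connecting these ideas.'"""
    return nx.shortest_path(G, doc_a, doc_b)

def domain_hubs(G, domain_nodes, top_k=3):
    """'Which documents act as hubs in this domain?'"""
    sub = G.subgraph(domain_nodes)
    return sorted(sub.degree, key=lambda nd: nd[1], reverse=True)[:top_k]

def bridge_edges(G, cluster_a, cluster_b):
    """'What are the bridge concepts between these topics?'"""
    return [(u, v) for u, v in G.edges
            if (u in cluster_a and v in cluster_b)
            or (u in cluster_b and v in cluster_a)]
```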
Browseable
Instead of chaotic browsing or artificial hierarchies, surface the actual network topology:
- Show natural clusters
- Highlight hub documents
- Reveal bridge concepts
- Display semantic connectivity
Navigation mirrors how knowledge actually connects.
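A low-effort starting point, assuming you already have communities from the clustering step: annotate nodes with cluster, hub, and bridge attributes, then export to GraphML so any graph viewer (Gephi, Cytoscape) can render the map:

```python
import networkx as nx

def export_for_browsing(G, communities, path="knowledge_map.graphml"):
    """Attach topology attributes so a viewer can style the map."""
    for i, members in enumerate(communities):
        for n in members:
            G.nodes[n]["cluster"] = i
    for n, d in G.degree:
        G.nodes[n]["degree"] = d  # hubs show up as high-degree nodes
    bridge_ends = {n for edge in nx.bridges(G) for n in edge}
    for n in G.nodes:
        G.nodes[n]["on_bridge"] = n in bridge_ends
    nx.write_graphml(G, path)
```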
Conversable
An LLM could reason about topology:
"These three documents form a tight cluster because they explore X from different angles. This other document bridges to cluster Y through principle Z."
Not just summarizing content—reasoning about the graph.
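Mechanically, this can start very simply: serialize graph facts into the context window so the model reasons over structure, not just text. The prompt format here is a made-up placeholder:

```python
import networkx as nx

def topology_context(G, communities, titles):
    """Render graph facts as text an LLM can reason over."""
    lines = []
    for i, members in enumerate(communities):
        names = [titles[n] for n in sorted(members)][:5]
        lines.append(f"Cluster {i}: {', '.join(names)}")
    for u, v in nx.bridges(G):  # edges whose removal fragments the graph
        lines.append(f"Bridge: '{titles[u]}' <-> '{titles[v]}'")
    return "\n".join(lines)
```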
The Infrastructure: Complex-Net RAG
I'm building a Python package with a domain-specific language for network-augmented retrieval.
Apply it to:
- Your AI conversation history
- Your ebook collection
- Your browser bookmarks
- Your email archives
- Your personal documents
Everything becomes part of one unified knowledge graph where topology reveals structure you didn't know existed.
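The API is still taking shape, so treat this as a placeholder sketch of the intent; none of these names (module, class, methods) exist yet:

```python
# Hypothetical usage: every name here is a placeholder, not a released API.
from complexnet_rag import KnowledgeGraph

kg = KnowledgeGraph()
kg.ingest("chats/", kind="conversation")  # AI conversation exports
kg.ingest("library/", kind="ebook")       # ebooks, bookmarks, mail, docs
kg.build(theta=0.9)                       # embed -> similarity graph -> communities

kg.query("bridges between cluster('health') and cluster('statistics')")
```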
Research Strategy: Why This Space Works
This research direction emerged from strategic necessity:
- Identified a crowded space (LLM capabilities) where I couldn't compete
- Found adjacent territory (conversation structure) that was less saturated
- Brought existing expertise (one network-science class plus programming/stats)
- Produced publishable results quickly (Complex Networks 2025, December in New York)
- Opened a sustainable research program
Low barrier to entry: You don't need to be a networks expert from day one. You need intellectual curiosity and willingness to learn.
High generativity: Endless research questions emerge:
- Methodological improvements
- Applications to new domains
- Systems and architecture
- Cognitive science angles
Practical usefulness: Solves a real problem that will grow more pressing as we accumulate more AI-mediated knowledge.
Sustainable: Not chasing trends—building foundational infrastructure.
One Network-Science Class, One Publication
I took a single class on network science. It resulted in a peer-reviewed publication and a conference talk.
This isn't about being brilliant. It's about:
- Asking the right questions in less-saturated spaces
- Bringing complementary skills (programming, statistics, mathematical thinking)
- Working strategically rather than competing on compute
- Building infrastructure that supports long-term programs
The activation energy in this space is lower than people think. The bottleneck isn't prerequisite knowledge—it's intellectual curiosity and problem-solving ability.
What Makes This Different
Most AI research focuses on model capabilities: bigger, faster, smarter.
This focuses on infrastructure for understanding: what do conversations reveal, how do we make that structure useful, how do we build tools that leverage topology.
It's the difference between:
- Building better search engines vs. understanding how knowledge organizes
- Optimizing retrieval vs. revealing structure
- Responding to queries vs. reasoning about topology
- Scaling compute vs. scaling comprehension
Both are valuable. The second is vastly less crowded.
For Researchers Looking for Niches
Don't compete on the main axis everyone else is competing on. Find the orthogonal space where your particular constraints become advantages.
Look for problems that are:
- Adjacent to hyped areas but less saturated
- Solvable with your existing skills + learnable tools
- Generative of multiple follow-up questions
- Practically useful beyond academic novelty
- Sustainable over multi-year timelines
The key is strategic positioning: where can you actually contribute something novel without infinite compute or decades of specialization?
The Bottom Line
Strategic insight: Can't compete on compute? Find adjacent spaces where depth matters more than scale.
Technical finding: Conversation networks have heterogeneous topology that reveals cognitive structure.
Vision: Build infrastructure that makes knowledge queryable, browseable, and conversable through network-aware tools.
Broader point: The next frontier of complex networks isn't just analyzing networks in data—it's building tools that understand the network structure they operate on.
When the main path is crowded, look for the orthogonal space where your constraints become advantages.
Complex Networks 2025, New York, December. Revealing thought topology hidden in conversation.