<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harnoor Singh</title>
    <description>The latest articles on DEV Community by Harnoor Singh (@iharnoor).</description>
    <link>https://dev.to/iharnoor</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3881211%2F0e933630-557e-4ab7-82e7-be5dd8546f03.jpeg</url>
      <title>DEV Community: Harnoor Singh</title>
      <link>https://dev.to/iharnoor</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iharnoor"/>
    <language>en</language>
    <item>
      <title>We raised $6.5M to kill vector databases... and it's been exactly 4 weeks. Here's what's actually happening.</title>
      <dc:creator>Harnoor Singh</dc:creator>
      <pubDate>Fri, 17 Apr 2026 23:40:35 +0000</pubDate>
      <link>https://dev.to/iharnoor/we-raised-65m-to-kill-vector-databases-and-its-been-exactly-4-weeks-heres-whats-actually-5d46</link>
      <guid>https://dev.to/iharnoor/we-raised-65m-to-kill-vector-databases-and-its-been-exactly-4-weeks-heres-whats-actually-5d46</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre2idf8vi623kawwu1x8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre2idf8vi623kawwu1x8.png" alt=" " width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Four weeks in. Here's what we shipped for devs:&lt;/p&gt;

&lt;p&gt;Four lines from import to production retrieval:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from hydra import Hydra
h = Hydra(api_key=...)
h.ingest(docs)        # vector + graph, auto
h.retrieve(query)     # tuned, not stitched&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;No embedding pipeline to maintain. No Neo4j schema to babysit. No cron job backfilling stale entities.&lt;/p&gt;

&lt;p&gt;What that replaces in your repo:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Pinecone client + your chunker + your reranker + your eval harness&lt;/li&gt;
&lt;li&gt;The Neo4j driver + your entity extractor + your graph update job&lt;/li&gt;
&lt;li&gt;The "we'll tune this later" Notion doc that's been open for 8 months&lt;/li&gt;
&lt;/ul&gt;
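
&lt;p&gt;For anyone who hasn't wired this by hand: the stack those bullets describe is roughly the shape below. This is a toy, self-contained sketch — a hashed bag-of-words vector stands in for a real embedding model, and there are no Pinecone or Neo4j clients — it only exists to show the moving parts you end up owning, not how our product works internally.&lt;/p&gt;

```python
import hashlib
import math

def chunk(text, size=50):
    """Naive fixed-size chunker -- one of the pieces you'd otherwise hand-roll."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text, dim=64):
    """Toy stand-in for an embedding model: L2-normalized hashed bag-of-words."""
    vec = [0.0] * dim
    for w in text.lower().split():
        h = int(hashlib.md5(w.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class HandRolledIndex:
    """The vector-index half of the stack: chunk, embed, store, rank."""
    def __init__(self):
        self.chunks = []  # list of (chunk_text, vector) pairs

    def ingest(self, docs):
        for doc in docs:
            for c in chunk(doc):
                self.chunks.append((c, embed(c)))

    def retrieve(self, query, k=3):
        qv = embed(query)
        ranked = sorted(self.chunks, key=lambda cv: cosine(qv, cv[1]), reverse=True)
        return [c for c, _ in ranked[:k]]

idx = HandRolledIndex()
idx.ingest(["the graph update job runs nightly", "pinecone stores the vectors"])
print(idx.retrieve("where are the vectors stored?", k=1))
```

&lt;p&gt;And that's before the reranker, the eval harness, the entity extractor, and the graph update job from the list above.&lt;/p&gt;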

&lt;p&gt;Numbers that matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1–2M tokens/min ingestion. Benchmark it yourself; we'll give you an eval account.&lt;/li&gt;
&lt;li&gt;BEIR results publishing soon. Spoiler: we like how they look.&lt;/li&gt;
&lt;li&gt;Multi-tenant isolation by design. No "oops we leaked your tenant's docs."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The part we've been burying:&lt;br&gt;
BYOC. The full stack runs in your AWS account. One &lt;code&gt;terraform apply&lt;/code&gt;, your endpoint. If our cloud dies, your retrieval doesn't. When compliance asks "where does the data live?" the answer is "the VPC you're looking at."&lt;/p&gt;
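
&lt;p&gt;For flavor, a BYOC deploy of this shape reduces to something like the fragment below. The module source and variable names here are hypothetical placeholders — we haven't published the actual Terraform interface in this post — but the point stands: one module block, one apply, everything inside your VPC.&lt;/p&gt;

```hcl
# Hypothetical sketch -- module source and variable names are illustrative,
# not a published Terraform interface.
module "hydra_byoc" {
  source  = "example.com/hydra/byoc/aws"  # placeholder module source
  vpc_id  = var.vpc_id                    # everything stays in your VPC
  api_key = var.hydra_api_key
}

output "retrieval_endpoint" {
  value = module.hydra_byoc.endpoint      # your endpoint, in your account
}
```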

&lt;p&gt;What we're still honest about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Graph structure is good, not yet best-possible. Active research sprint right now.&lt;/li&gt;
&lt;li&gt;We don't do ACID and we're not planning to.&lt;/li&gt;
&lt;li&gt;If your problem fits in 10k docs and a single index, you don't need us. Go ship.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should actually care if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You've built RAG; it works on 1k docs and falls apart at 1M.&lt;/li&gt;
&lt;li&gt;You've wired Pinecone + Neo4j yourself and know exactly how much of your Tuesday that costs.&lt;/li&gt;
&lt;li&gt;Your product is not search, but you keep becoming a search team anyway.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
