<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Opensolr</title>
    <description>The latest articles on DEV Community by Opensolr (@opensolr).</description>
    <link>https://dev.to/opensolr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3640517%2F9227d3bf-89d0-40e6-bb8e-a3f1530b5b7a.png</url>
      <title>DEV Community: Opensolr</title>
      <link>https://dev.to/opensolr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/opensolr"/>
    <language>en</language>
    <item>
      <title>I made search engines understand emojis (and it's weirdly useful)</title>
      <dc:creator>Opensolr</dc:creator>
      <pubDate>Thu, 11 Dec 2025 00:28:04 +0000</pubDate>
      <link>https://dev.to/opensolr/i-made-search-engines-understand-emojis-and-its-weirdly-useful-2acp</link>
      <guid>https://dev.to/opensolr/i-made-search-engines-understand-emojis-and-its-weirdly-useful-2acp</guid>
      <description>&lt;p&gt;Been working on hybrid search (lexical + vector) for a while and accidentally discovered something fun: when you use good embeddings, you can literally search with emojis.&lt;/p&gt;

&lt;p&gt;Not as a gimmick - it actually works because the embedding model (BGE-M3, 1024 dimensions) learned semantic relationships between concepts and their emoji representations.&lt;/p&gt;

&lt;h1&gt;
  
  
  Try it yourself
&lt;/h1&gt;

&lt;p&gt;These are live search engines running on real e-commerce data:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type 🔑 (key emoji) → get actual keys:&lt;/strong&gt; &lt;a href="https://search.opensolr.com/dedeman?q=%F0%9F%94%91" rel="noopener noreferrer"&gt;https://search.opensolr.com/dedeman?q=🔑&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type 🚲 (bike) → get bicycles and accessories:&lt;/strong&gt; &lt;a href="https://search.opensolr.com/dedeman?q=%F0%9F%9A%B2" rel="noopener noreferrer"&gt;https://search.opensolr.com/dedeman?q=🚲&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type 🖨️📄 (printer + paper) → get printer supplies:&lt;/strong&gt; &lt;a href="https://search.opensolr.com/b2b?q=%F0%9F%96%A8%EF%B8%8F%F0%9F%93%84" rel="noopener noreferrer"&gt;https://search.opensolr.com/b2b?q=🖨️📄&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This one's my favorite - type "cute domestic pet earrings" on a jewelry store:&lt;/strong&gt; &lt;a href="https://search.opensolr.com/rueb?q=cute+domestic+pet+earrings" rel="noopener noreferrer"&gt;https://search.opensolr.com/rueb?q=cute+domestic+pet+earrings&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(it finds cat and dog earrings even though the product titles are in a completely different language)&lt;/p&gt;

&lt;h1&gt;
  
  
  How it actually works
&lt;/h1&gt;

&lt;p&gt;The pipeline is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Crawl website → extract text with Trafilatura&lt;/li&gt;
&lt;li&gt;Generate 1024D embeddings via BGE-M3&lt;/li&gt;
&lt;li&gt;Store in Solr with both text + vectors&lt;/li&gt;
&lt;li&gt;At query time: run lexical search + KNN vector search&lt;/li&gt;
&lt;li&gt;Combine scores (hybrid approach)&lt;/li&gt;
&lt;/ol&gt;
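&lt;p&gt;The query-time half of that pipeline (steps 4-5) can be sketched as a Solr request built in Python. This is a hedged sketch: the field names and boosts are illustrative assumptions, not the exact production config (the real values are visible in the debug view on the demos).&lt;/p&gt;

```python
def build_hybrid_params(query_vector, user_query, top_k=250, k=6):
    """Build Solr params for a hybrid lexical + KNN query.

    Field names and boosts are illustrative assumptions.
    """
    vec = "[" + ", ".join(f"{v:.3f}" for v in query_vector) + "]"
    return {
        # KNN vector search over the dense "embeddings" field
        "vectorQuery": f"{{!knn f=embeddings topK={top_k}}}{vec}",
        # Classic lexical edismax search
        "lexicalQuery": f'{{!edismax qf="title^550 description^450" v="{user_query}"}}',
        # Combine: vector_score + lexical / (lexical + k)
        "q": (
            "{!func}sum(query($vectorQuery), "
            f"div(query($lexicalQuery), sum(query($lexicalQuery), {k})))"
        ),
    }

params = build_hybrid_params([0.1, -0.2, 0.3], "bicycle")
```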

&lt;p&gt;The emoji thing works because BGE-M3 was trained on multilingual + multimodal data. The model learned that 🔑 and "key" and "Schlüssel" (German) and "cheie" (Romanian) are all semantically close.&lt;/p&gt;

&lt;p&gt;So when someone searches 🚲, the embedding is close to "bicycle", "bike", "Fahrrad", "bicicletă", etc.&lt;/p&gt;
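&lt;p&gt;A toy illustration of why that works, using made-up 3-D vectors rather than real 1024-D BGE-M3 outputs: tokens the model considers semantically close end up with high cosine similarity, so 🚲 retrieves bicycle documents.&lt;/p&gt;

```python
import math

def cosine(a, b):
    # cosine similarity = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy vectors -- NOT real BGE-M3 embeddings.
emb = {
    "🚲":        [0.90, 0.10, 0.00],
    "bicycle":   [0.80, 0.20, 0.10],
    "bicicletă": [0.85, 0.15, 0.05],
    "necklace":  [0.10, 0.10, 0.90],
}

# The emoji sits next to "bicycle" in every language, far from "necklace".
assert cosine(emb["🚲"], emb["bicycle"]) > cosine(emb["🚲"], emb["necklace"])
```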

&lt;h1&gt;
  
  
  The weird part
&lt;/h1&gt;

&lt;p&gt;Cross-language search just... works. The Romanian e-commerce site has products in Romanian, but you can search in English or with emojis and it finds relevant stuff. No translation layer, no language detection preprocessing - the embeddings handle it.&lt;/p&gt;

&lt;p&gt;Same with conceptual queries. "things to wear around neck" finds necklaces, pendants, chains - even though no product has "things to wear around neck" in the title.&lt;/p&gt;

&lt;h1&gt;
  
  
  Stack details for the curious
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings:&lt;/strong&gt; BGE-M3 (BAAI), 1024 dimensions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inference:&lt;/strong&gt; Running on RTX 4000 Ada, ~2-5ms per query&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search:&lt;/strong&gt; Solr 9.6 with dense vector support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crawling:&lt;/strong&gt; Custom PHP + Python (Playwright for JS-heavy sites, Trafilatura for extraction)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extra features:&lt;/strong&gt; VADER for sentiment, langid for language detection, custom price extraction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Query latency is ~40-50ms total including embedding generation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Hybrid vs pure vector
&lt;/h1&gt;

&lt;p&gt;Pure vector search is cool but has issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exact matches sometimes rank lower than "similar" results&lt;/li&gt;
&lt;li&gt;Product codes/SKUs get weird results&lt;/li&gt;
&lt;li&gt;Users expect "nike shoes" to prioritize exact Nike matches&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hybrid fixes this. Lexical handles exact matches, vectors handle the "I don't know the exact word but I know what I want" queries.&lt;/p&gt;

&lt;p&gt;The actual Solr query, including the vector query functions, can be inspected in the debug view (bottom-right button):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vectorQuery = {!knn f=embeddings topK=250}[-0.032, 0.009, -0.049, ...]

lexicalQuery = {!edismax qf="title^550 description^450 uri^1 text^0.1" 
                         pf="title^1100 description^900" ...}

q = {!func}sum(
      product(1, query($vectorQuery)), 
      product(1, div(query($lexicalQuery), sum(query($lexicalQuery), 6)))
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Bonus: AI-generated hints
&lt;/h1&gt;

&lt;p&gt;Added an experimental feature where the search can explain results. Search "measure 🔥" on a technical documentation site and it tells you which specific device to use for measuring temperature/fire:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://search.opensolr.com/fluke?q=measure+%F0%9F%94%A5" rel="noopener noreferrer"&gt;https://search.opensolr.com/fluke?q=measure+🔥&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It pulls context from indexed PDFs and generates a recommendation. Uses a local LLM (running on same GPU).&lt;/p&gt;

&lt;p&gt;Anyway, thought some of you might find the emoji thing interesting. The cross-language aspect was unexpected - I didn't build it for that, it just emerged from using multilingual embeddings.&lt;/p&gt;

&lt;p&gt;Happy to answer questions about the setup or hybrid search in general.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Hybrid Search in Apache Solr is NOW Production-Ready (with 1024D vectors!)</title>
      <dc:creator>Opensolr</dc:creator>
      <pubDate>Tue, 09 Dec 2025 13:43:57 +0000</pubDate>
      <link>https://dev.to/opensolr/hybrid-search-in-apache-solr-is-now-production-ready-with-1024d-vectors-26i4</link>
      <guid>https://dev.to/opensolr/hybrid-search-in-apache-solr-is-now-production-ready-with-1024d-vectors-26i4</guid>
      <description>&lt;p&gt;A few days back I shared my experiments with hybrid search (combining traditional lexical search with vector/semantic search). Well, I've been busy, and I'm back with some &lt;strong&gt;major upgrades&lt;/strong&gt; that I think you'll find interesting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; We now have 1024-dimensional embeddings, blazing fast GPU inference, and you can generate embeddings via our free API endpoint. Plus: you can literally search with emojis now. Yes, really. 🚲 finds bicycles. 🐕 finds dog jewelry. Keep reading.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Changed?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Upgraded from 384D to 1024D Embeddings
&lt;/h3&gt;

&lt;p&gt;We switched from &lt;code&gt;paraphrase-multilingual-MiniLM-L12-v2&lt;/code&gt; (384 dimensions) to &lt;code&gt;BAAI/bge-m3&lt;/code&gt; (1024 dimensions).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why does this matter?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think of dimensions like pixels in an image. A 384-pixel image is blurry. A 1024-pixel image is crisp. More dimensions = the model can capture more nuance and meaning from your text.&lt;/p&gt;

&lt;p&gt;The practical result? Searches that "kind of worked" before now work &lt;strong&gt;really well&lt;/strong&gt;, especially for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Non-English languages (Romanian, German, French, etc.)&lt;/li&gt;
&lt;li&gt;Domain-specific terminology&lt;/li&gt;
&lt;li&gt;Conceptual/semantic queries&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Moved Embeddings to GPU
&lt;/h3&gt;

&lt;p&gt;Before: CPU embeddings took 50-100ms per query. Now: GPU embeddings take ~2-5ms per query.&lt;/p&gt;

&lt;p&gt;The embedding is so fast now that even with a network round-trip from Europe to USA and back, it's &lt;strong&gt;still faster&lt;/strong&gt; than local CPU embedding was. Let that sink in.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Optimized the Hybrid Formula
&lt;/h3&gt;

&lt;p&gt;After a lot of trial and error, we settled on this normalization approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;score = vector_score + (lexical_score / (lexical_score + k))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;code&gt;k&lt;/code&gt; is a tuning parameter (we use k=10). This gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lexical score normalized to 0-1 range&lt;/li&gt;
&lt;li&gt;Vector and lexical scores that play nice together&lt;/li&gt;
&lt;li&gt;No division by zero issues&lt;/li&gt;
&lt;li&gt;Intuitive tuning (k = the score at which you get 0.5)&lt;/li&gt;
&lt;/ul&gt;
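&lt;p&gt;A few lines of Python make those properties concrete (k=10 as above; the example scores are arbitrary):&lt;/p&gt;

```python
def normalize_lexical(score, k=10.0):
    # Maps any non-negative BM25/edismax score into [0, 1):
    # 0 stays 0, score == k gives exactly 0.5, large scores approach 1.
    return score / (score + k)

assert normalize_lexical(0) == 0.0
assert normalize_lexical(10) == 0.5      # k is the half-way point
assert normalize_lexical(1000) > 0.98    # huge lexical scores saturate near 1

def hybrid_score(vector_score, lexical_score, k=10.0):
    # Final score: vector similarity plus the normalized lexical score.
    return vector_score + normalize_lexical(lexical_score, k)
```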

&lt;h3&gt;
  
  
  4. Quality Filter with frange
&lt;/h3&gt;

&lt;p&gt;Here's a pro tip: use Solr's &lt;code&gt;frange&lt;/code&gt; to filter out garbage vector matches:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fq={!frange l=0.3}query($vectorQuery)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This says "only show me documents where the vector similarity is at least 0.3". Anything below that is typically noise anyway. This keeps your results clean and your users happy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Live Demos (Try These!)
&lt;/h2&gt;

&lt;p&gt;I've set up several demo indexes. &lt;strong&gt;Each one has a Debug button in the bottom-right corner&lt;/strong&gt; - click it to see the exact Solr query parameters and full &lt;code&gt;debugQuery&lt;/code&gt; analysis. Great for learning!&lt;/p&gt;

&lt;h3&gt;
  
  
  🛠️ Romanian Hardware Store (Dedeman)
&lt;/h3&gt;

&lt;p&gt;Search a Romanian e-commerce site with emojis:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensolr.com/search/dedeman?topbar=block&amp;amp;q=%F0%9F%9A%B2&amp;amp;in=web&amp;amp;og=yes&amp;amp;locale=&amp;amp;duration=&amp;amp;source=&amp;amp;fresh=no&amp;amp;lang=" rel="noopener noreferrer"&gt;🚲 → Bicycle accessories&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No keywords. Just an emoji. And it finds bicycle mirrors, phone holders for bikes, etc. The vector model understands that 🚲 = bicicletă = bicycle-related products.&lt;/p&gt;

&lt;h3&gt;
  
  
  💎 English Jewelry Store (Rueb.co.uk)
&lt;/h3&gt;

&lt;p&gt;Sterling silver, gold, gemstones - searched semantically:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensolr.com/search/rueb?topbar=block&amp;amp;q=%F0%9F%90%95&amp;amp;in=web&amp;amp;og=yes&amp;amp;locale=&amp;amp;duration=&amp;amp;source=&amp;amp;fresh=no&amp;amp;lang=" rel="noopener noreferrer"&gt;🐕 → Dog-themed jewelry&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensolr.com/search/rueb?topbar=block&amp;amp;q=%E2%AD%90%EF%B8%8F&amp;amp;in=web&amp;amp;og=yes&amp;amp;locale=&amp;amp;duration=&amp;amp;source=&amp;amp;fresh=no&amp;amp;lang=" rel="noopener noreferrer"&gt;⭐️ → Star-themed jewelry&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🧣 Luxury Cashmere Accessories (Peilishop)
&lt;/h3&gt;

&lt;p&gt;Hats, scarves, ponchos:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensolr.com/search/peilishop?topbar=block&amp;amp;q=winter+hat&amp;amp;in=web&amp;amp;og=yes&amp;amp;locale=&amp;amp;duration=&amp;amp;source=&amp;amp;fresh=no&amp;amp;lang=" rel="noopener noreferrer"&gt;winter hat → Beanies, caps, cold weather gear&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  📰 Fresh News Index
&lt;/h3&gt;

&lt;p&gt;Real-time crawled news, searchable semantically:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensolr.com/search/vector?topbar=block&amp;amp;q=%F0%9F%8D%B3&amp;amp;in=web&amp;amp;og=yes&amp;amp;locale=&amp;amp;duration=&amp;amp;source=&amp;amp;fresh=no&amp;amp;lang=" rel="noopener noreferrer"&gt;🍳 → Food/cooking articles&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensolr.com/search/vector?topbar=block&amp;amp;q=what+do+we+have+to+eat+to+boost+health%3F&amp;amp;in=web&amp;amp;og=yes&amp;amp;locale=&amp;amp;duration=&amp;amp;source=&amp;amp;fresh=no&amp;amp;lang=" rel="noopener noreferrer"&gt;what do we have to eat to boost health? → Nutrition articles&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This last one is pure semantic search - the results don't necessarily contain the keywords "boost" or "health", but the &lt;em&gt;meaning&lt;/em&gt; matches.&lt;/p&gt;




&lt;h2&gt;
  
  
  Free API Endpoint for 1024D Embeddings
&lt;/h2&gt;

&lt;p&gt;Want to try this in your own Solr setup? We're exposing our embedding endpoint for free:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://opensolr.com/api/embed &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"text": "your text here"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns a 1024-dimensional vector ready to index in Solr.&lt;/p&gt;
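&lt;p&gt;A minimal Python client for that endpoint might look like this. One caveat: the "embedding" key in the response is an assumption on my part - inspect the actual JSON the API returns to confirm the field name.&lt;/p&gt;

```python
import json
import urllib.request

EMBED_URL = "https://opensolr.com/api/embed"

def build_embed_request(text):
    # Build the POST request for the free embedding endpoint.
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        EMBED_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def get_embedding(text):
    """Return the 1024-D vector for `text`.

    The "embedding" response key is an assumption -- check the real
    response to confirm the field name.
    """
    with urllib.request.urlopen(build_embed_request(text)) as resp:
        return json.load(resp)["embedding"]
```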

&lt;p&gt;&lt;strong&gt;Schema setup:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;fieldType&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"knn_vector"&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"solr.DenseVectorField"&lt;/span&gt; 
           &lt;span class="na"&gt;vectorDimension=&lt;/span&gt;&lt;span class="s"&gt;"1024"&lt;/span&gt; &lt;span class="na"&gt;similarityFunction=&lt;/span&gt;&lt;span class="s"&gt;"cosine"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;field&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"embeddings"&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"knn_vector"&lt;/span&gt; &lt;span class="na"&gt;indexed=&lt;/span&gt;&lt;span class="s"&gt;"true"&lt;/span&gt; &lt;span class="na"&gt;stored=&lt;/span&gt;&lt;span class="s"&gt;"false"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
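&lt;p&gt;With that schema in place, indexing is an ordinary Solr update where the embeddings field carries the raw float list. A sketch using only the standard library - the update URL is a placeholder for your collection's /update endpoint, and "title"/"embeddings" must match your schema fields:&lt;/p&gt;

```python
import json
import urllib.request

def build_update_body(doc_id, title, vector):
    # One Solr document; "embeddings" must match the DenseVectorField in
    # the schema above. Production vectors have 1024 floats -- the short
    # vectors used in tests here are for illustration only.
    return json.dumps([{"id": doc_id, "title": title, "embeddings": vector}])

def index_document(update_url, doc_id, title, vector):
    """POST one document to Solr's /update handler (URL is a placeholder)."""
    req = urllib.request.Request(
        update_url + "?commit=true",
        data=build_update_body(doc_id, title, vector).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```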






&lt;h2&gt;
  
  
  Key Learnings
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Title repetition trick&lt;/strong&gt;: For smaller embedding models, repeat the title 3x in your embedding text. This focuses the model's limited capacity on the most important content. Game changer for product search.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;topK isn't "how many results"&lt;/strong&gt;: It's "how many documents the vector search considers". The rest get score=0 for the vector component. Keep it reasonable (100-500) to avoid noise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lexical search is still king for keywords&lt;/strong&gt;: Hybrid means vector helps when lexical fails (emojis, conceptual queries), and lexical helps when you need exact matches. Best of both worlds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use synonyms for domain-specific gaps&lt;/strong&gt;: Even the best embedding model doesn't know that "autofiletantă" (Romanian) = "drill". A simple synonym file fixes what AI can't.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Quality &amp;gt; Quantity&lt;/strong&gt;: Better to return 10 excellent results than 100 mediocre ones. Use &lt;code&gt;frange&lt;/code&gt; and reasonable &lt;code&gt;topK&lt;/code&gt; values.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
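&lt;p&gt;The title-repetition trick from point 1 is a one-liner when preparing the embedding input (3x is the weight from the post; tune it per model):&lt;/p&gt;

```python
def embedding_text(title, description, title_weight=3):
    # Repeat the title so a small embedding model spends more of its
    # limited capacity on the most important content.
    return " ".join([title] * title_weight + [description])

text = embedding_text("Winter wool hat", "Warm knitted beanie for cold days")
```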




&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Still exploring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-tuning embedding models for specific domains&lt;/li&gt;
&lt;li&gt;RRF (Reciprocal Rank Fusion) as an alternative to score-based hybrid&lt;/li&gt;
&lt;li&gt;More aggressive caching strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to answer questions. And seriously, click that Debug button on the demos - seeing the actual Solr queries is super educational!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Running Apache Solr 9.x on &lt;a href="https://opensolr.com/" rel="noopener noreferrer"&gt;OpenSolr.com&lt;/a&gt; - free hosted Solr with vector search support.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>php</category>
      <category>python</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Here's the hybrid vector+lexical scoring trick nobody explains.</title>
      <dc:creator>Opensolr</dc:creator>
      <pubDate>Tue, 02 Dec 2025 07:53:37 +0000</pubDate>
      <link>https://dev.to/opensolr/heres-the-hybrid-vectorlexical-scoring-trick-nobody-explains-28ak</link>
      <guid>https://dev.to/opensolr/heres-the-hybrid-vectorlexical-scoring-trick-nobody-explains-28ak</guid>
      <description>&lt;p&gt;We're OpenSolr - Solr hosting and consulting. We're obsessed with search (probably too much).&lt;/p&gt;

&lt;p&gt;When we added vector search to Solr, we hit a problem nobody talks about: combining scores.&lt;/p&gt;

&lt;p&gt;Vector similarity: 0 to 1&lt;br&gt;
Lexical (BM25/edismax): 0 to whatever&lt;/p&gt;

&lt;p&gt;Naive sum = lexical always wins, even when semantically wrong.&lt;br&gt;
Fix: normalized_lexical = lexical / (lexical + k)&lt;/p&gt;
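&lt;p&gt;With invented-for-illustration numbers - a vector score of 0.9 and a BM25 score of 42 - the difference is obvious:&lt;/p&gt;

```python
def normalized_lexical(score, k=10.0):
    # Squash any BM25/edismax score into [0, 1); score == k maps to 0.5.
    return score / (score + k)

vector_score, lexical_score = 0.9, 42.0

naive = vector_score + lexical_score                       # 42.9: lexical drowns out the vector
hybrid = vector_score + normalized_lexical(lexical_score)  # ~1.71: both terms comparable

assert normalized_lexical(10.0) == 0.5  # k is the half-way point
assert 2.0 > hybrid                     # both components stay on the same scale
```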

&lt;p&gt;Now we have:&lt;/p&gt;

&lt;p&gt;Cross-lingual search (EN→RO)&lt;br&gt;
Emoji search (🔥 finds fires, 🐕 finds dog products)&lt;br&gt;
Semantic fallback (wine emoji finds champagne when no wine exists)&lt;br&gt;
Full debug inspector on every search&lt;/p&gt;

&lt;p&gt;Live demos you can try:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensolr.com/search/dedeman?q=%F0%9F%94%A5wood" rel="noopener noreferrer"&gt;https://opensolr.com/search/dedeman?q=🔥wood&lt;/a&gt; (Romanian hardware)&lt;br&gt;
&lt;a href="https://opensolr.com/search/vector?q=%F0%9F%94%A5" rel="noopener noreferrer"&gt;https://opensolr.com/search/vector?q=🔥&lt;/a&gt; (news)&lt;br&gt;
&lt;a href="https://opensolr.com/search/peilishop?q=winter+hat" rel="noopener noreferrer"&gt;https://opensolr.com/search/peilishop?q=winter+hat&lt;/a&gt; (fashion)&lt;/p&gt;

&lt;p&gt;Click the debug button to see actual Solr params. We built it to be educational.&lt;br&gt;
Solr 9.x has dense vector support. You don't need Pinecone.&lt;/p&gt;

&lt;p&gt;If you're fighting relevance issues or want help with hybrid search, that's literally what we love doing. &lt;br&gt;
Happy to give pointers.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
