<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Qdrant</title>
    <description>The latest articles on DEV Community by Qdrant (@qdrant_engine).</description>
    <link>https://dev.to/qdrant_engine</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F847023%2F7001d485-e5f5-4882-a6f2-95a167bff6c7.png</url>
      <title>DEV Community: Qdrant</title>
      <link>https://dev.to/qdrant_engine</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/qdrant_engine"/>
    <language>en</language>
    <item>
      <title>Question Answering with LangChain and Qdrant without boilerplate</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Tue, 31 Jan 2023 11:07:07 +0000</pubDate>
      <link>https://dev.to/qdrant/question-answering-with-langchain-and-qdrant-without-boilerplate-1kco</link>
      <guid>https://dev.to/qdrant/question-answering-with-langchain-and-qdrant-without-boilerplate-1kco</guid>
      <description>&lt;p&gt;Building applications with Large Language Models don’t have to be complicated. A lot has been going on recently to simplify the development, so you can utilize already pre-trained models and support even complex pipelines with a few lines of code. LangChain provides unified interfaces to different libraries, so you can avoid writing boilerplate code and focus on the value you want to bring.&lt;/p&gt;

&lt;p&gt;Read the full article on how to combine LangChain with Qdrant to create an LLM generative app with vector search included at this &lt;a href="https://qdrant.tech/articles/langchain-integration/#question-answering-with-langchain-and-qdrant-without-boilerplate" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
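
&lt;p&gt;As a rough, dependency-free sketch of what such a chain does under the hood (the embed() and answer() functions below are toy stand-ins, not LangChain or Qdrant APIs; the final string concatenation stands in for a real LLM call), the retrieve-then-generate flow looks like this:&lt;/p&gt;

```python
# Toy sketch of the retrieve-then-generate flow that LangChain wires up
# for you. embed() is a crude letter-frequency encoder, nothing more.

def embed(text):
    # Map text to a 26-dimensional letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def answer(question, documents):
    # Step 1, retrieve: find the document closest to the question in vector
    # space; this is the part a vector database like Qdrant does at scale.
    best = max(documents, key=lambda d: cosine(embed(question), embed(d)))
    # Step 2, generate: hand the retrieved context to an LLM prompt.
    return "Based on: " + best

docs = ["Qdrant is a vector database", "Bananas are yellow"]
print(answer("What is Qdrant?", docs))
```

&lt;p&gt;LangChain replaces every piece of this with production components: a neural embedding model, Qdrant as the vector store, and an actual LLM, all behind one interface.&lt;/p&gt;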

</description>
      <category>crypto</category>
      <category>web3</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>Question Answering as a Service with Cohere and Qdrant</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Wed, 30 Nov 2022 16:41:45 +0000</pubDate>
      <link>https://dev.to/qdrant/question-answering-as-a-service-with-cohere-and-qdrant-23ko</link>
      <guid>https://dev.to/qdrant/question-answering-as-a-service-with-cohere-and-qdrant-23ko</guid>
      <description>&lt;p&gt;Bi-encoders are probably the most efficient way of setting up a semantic Question Answering system. This architecture relies on the same neural model that creates vector embeddings for both questions and answers. The assumption is, both question and answer should have representations close to each other in the latent space. It should be like that because they should both describe the same semantic concept. That doesn’t apply to answers like “Yes” or “No” though, but standard FAQ-like problems are a bit easier as there is typically an overlap between both texts. Not necessarily in terms of wording, but in their semantics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukurddjav6ibxr63o35i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukurddjav6ibxr63o35i.png" alt=" " width="800" height="624"&gt;&lt;/a&gt;&lt;br&gt;
Read more in the &lt;a href="https://qdrant.tech/articles/qa-with-cohere-and-qdrant/" rel="noopener noreferrer"&gt;&lt;strong&gt;article&lt;/strong&gt;&lt;/a&gt; by Kacper Łukawski.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Full-text filter and index are already available!</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Wed, 16 Nov 2022 14:37:06 +0000</pubDate>
      <link>https://dev.to/qdrant/full-text-filter-and-index-are-already-available-44pc</link>
      <guid>https://dev.to/qdrant/full-text-filter-and-index-are-already-available-44pc</guid>
      <description>&lt;p&gt;Qdrant is designed as an efficient vector database, allowing for a quick search of the nearest neighbours. But, you may find yourself in need of applying some extra filtering on top of the semantic search. Up to version 0.10, Qdrant was offering support for keywords only. Since 0.10, there is a possibility to apply full-text constraints as well. There is a new type of filter that you can use to do that, also combined with every other filter type.&lt;/p&gt;

&lt;p&gt;Using full-text filters without a payload index&lt;br&gt;
A full-text filter on a field without an index will return only those entries that contain all the terms included in the query. That is effectively a substring match on each individual term, but not a substring match on the query as a whole.&lt;/p&gt;
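
&lt;p&gt;The matching behaviour described above can be sketched in a few lines of Python (an illustrative stand-in, not Qdrant’s actual implementation):&lt;/p&gt;

```python
# An entry matches when EVERY individual query term occurs somewhere in the
# text, but the query as a whole need not appear as one contiguous phrase.

def full_text_match(query, text):
    haystack = text.lower()
    return all(term in haystack for term in query.lower().split())

entries = [
    "good and cheap hotel near the city center",
    "cheap flights to Berlin",
]

# "cheap good" matches the first entry even though the words are not
# adjacent; the second entry lacks the term "good" and is filtered out.
matches = [e for e in entries if full_text_match("cheap good", e)]
print(matches)
```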

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqqeikoovauxj09hdu3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqqeikoovauxj09hdu3a.png" alt=" " width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read more at this &lt;strong&gt;&lt;a href="https://blog.qdrant.tech/qdrant-introduces-full-text-filters-and-indexes-9a032fcb5fa" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Storing multiple vectors per object in Qdrant</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Wed, 05 Oct 2022 13:02:33 +0000</pubDate>
      <link>https://dev.to/qdrant/storing-multiple-vectors-per-object-in-qdrant-33o4</link>
      <guid>https://dev.to/qdrant/storing-multiple-vectors-per-object-in-qdrant-33o4</guid>
      <description>&lt;p&gt;In a real case scenario, a single object might be described in several different ways. If you run an e-commerce business, then your items will typically have a name, longer textual description and also a bunch of photos. While cooking, you may care about the list of ingredients, and description of the taste but also the recipe and the way your meal is going to look. Up till now, if you wanted to enable semantic search with multiple vectors per object, Qdrant would require you to create separate collections for each vector type, even though they could share some other attributes in a payload. However, since Qdrant 0.10 you are able to store all those vectors together in the same collection and share a single copy of the payload!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvn0ssb87sodf37n6kyja.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvn0ssb87sodf37n6kyja.jpeg" alt=" " width="512" height="512"&gt;&lt;/a&gt;&lt;br&gt;
Read more in the &lt;a href="https://blog.qdrant.tech/storing-multiple-vectors-per-object-in-qdrant-c1da8b1ad727" rel="noopener noreferrer"&gt;&lt;strong&gt;article&lt;/strong&gt;&lt;/a&gt; written by Kacper Łukawski.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Qdrant supports ARM architecture!</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Fri, 23 Sep 2022 08:57:54 +0000</pubDate>
      <link>https://dev.to/qdrant/qdrant-supports-arm-architecture-1e96</link>
      <guid>https://dev.to/qdrant/qdrant-supports-arm-architecture-1e96</guid>
      <description>&lt;p&gt;The processor architecture is a thing that the end-user typically does not care much about, as long as all the applications they use run smoothly. If you use a PC then chances are you have an x86-based device, while your smartphone rather runs on an ARM processor. In 2020 Apple introduced their ARM-based M1 chip which is used in modern Mac devices, including notebooks. The main differences between those two architectures are the set of supported instructions and energy consumption. ARM’s processors have a way better energy efficiency and are cheaper than their x86 counterparts. That’s why they became available as an affordable alternative in the hosting providers, including the cloud.&lt;/p&gt;

&lt;p&gt;Read the full article by Kacper Łukawski (Developer Advocate at Qdrant) at this &lt;a href="https://blog.qdrant.tech/qdrant-supports-arm-architecture-363e92aa5026" rel="noopener noreferrer"&gt;&lt;strong&gt;link&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20t3xdxgkmvz9fhn7dk7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20t3xdxgkmvz9fhn7dk7.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How many layers to fine-tune?</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Wed, 31 Aug 2022 12:06:51 +0000</pubDate>
      <link>https://dev.to/qdrant/how-many-layers-to-fine-tune-5han</link>
      <guid>https://dev.to/qdrant/how-many-layers-to-fine-tune-5han</guid>
      <description>&lt;p&gt;Model fine-tuning allows you to improve the quality of the pre-trained models with just a fraction of the resources spent on training the original model.&lt;/p&gt;

&lt;p&gt;But there is a trade-off between the number of layers you tune and the precision you get.&lt;/p&gt;

&lt;p&gt;Using fewer layers allows for faster training with a larger batch size, while tuning more layers increases the model's capacity.&lt;/p&gt;

&lt;p&gt;The Qdrant team ran experiments so you can make more educated choices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w8dm10vfs43o0nkdnp1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w8dm10vfs43o0nkdnp1.jpg" alt=" " width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are some highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Training only the head of a model (5% of the weights) gives a 2x boost on metrics, while full training gives 3x.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Training only the head layer allows using larger models with bigger batch sizes, compensating for the lost precision.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you only have a small dataset, full model tuning gives an even more negligible effect.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
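
&lt;p&gt;The head-only trade-off is easy to see with a back-of-the-envelope calculation (the layer sizes below are made-up round numbers for a hypothetical model, not figures from the article):&lt;/p&gt;

```python
# Made-up parameter counts for a hypothetical transformer.
layers = {"embeddings": 24_000_000, "encoder": 71_000_000, "head": 5_000_000}
total = sum(layers.values())

# Head-only fine-tuning trains a small fraction of the weights...
trainable = layers["head"]
share = trainable / total
print(f"head-only tunes {share:.0%} of the weights")

# ...and optimizer state (e.g. Adam moments) scales with the trainable
# weights, which is what frees memory for a larger batch or a bigger model.
```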

&lt;p&gt;Read more in the &lt;a href="https://qdrant.tech/articles/embedding-recycler/" rel="noopener noreferrer"&gt;&lt;strong&gt;article&lt;/strong&gt;&lt;/a&gt; by Yusuf Sarıgöz.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Benchmarking Vector Search Engines</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Tue, 23 Aug 2022 14:36:11 +0000</pubDate>
      <link>https://dev.to/qdrant/benchmarking-vector-search-engines-1l20</link>
      <guid>https://dev.to/qdrant/benchmarking-vector-search-engines-1l20</guid>
      <description>&lt;p&gt;As an Open Source vector search engine, we are often compared to the competitors and asked about our performance vs the other tools. But the answer was never simple, as the world of vector databases lacked a unified open benchmark that would show the differences. So we created one, making some bold assumptions about how it should be done. Here we describe why we think that’s the best way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flayl0gtm0b0hebckmo7j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flayl0gtm0b0hebckmo7j.jpg" alt=" " width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In real-world scenarios, having access to unlimited computational resources is never the case. Projects have budgets that have to be tracked, and constraints limit the possibilities, also when it comes to hardware. Unless you work for a FAANG company, chances are you would like to get the most out of your VMs or bare-metal servers. That’s why we decided to benchmark the available vector databases on the same hardware configuration, so you know how much to expect from each one on a specific setup. Does it guarantee the best possible performance? Probably not, but it makes the whole process affordable and reproducible, so you can easily repeat it on your own machine. If you do, your numbers won’t match the ones we’ve got. But the whole exercise is not about measuring the state-of-the-art performance of a specific neural network on a selected task. &lt;strong&gt;The relative numbers, compared across engines, are far more important, as they emphasize the differences in reality.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The list will be updated:&lt;/p&gt;

&lt;p&gt;Upload &amp;amp; Search speed on single node - &lt;a href="https://qdrant.tech/benchmarks/single-node-speed-benchmark/" rel="noopener noreferrer"&gt;Benchmark&lt;/a&gt;&lt;br&gt;
Memory consumption benchmark - TBD&lt;br&gt;
Filtered search benchmark - TBD&lt;br&gt;
Cluster mode benchmark - TBD&lt;br&gt;
Some of our experiment design decisions are described in the &lt;a href="https://qdrant.tech/benchmarks/#benchmarks-faq" rel="noopener noreferrer"&gt;F.A.Q. section&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Suggest what else you would like us to test in our &lt;a href="https://discord.com/invite/tdtYvXjC4h" rel="noopener noreferrer"&gt;Discord channel&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Read more details at &lt;a href="https://qdrant.tech/benchmarks/" rel="noopener noreferrer"&gt;https://qdrant.tech/benchmarks/&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>"Vector search and applications" by Andrey Vasnetsov, CTO at Qdrant</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Wed, 10 Aug 2022 13:41:38 +0000</pubDate>
      <link>https://dev.to/qdrant/vector-search-and-applications-by-andrey-vasnetsov-cto-at-qdrant-481b</link>
      <guid>https://dev.to/qdrant/vector-search-and-applications-by-andrey-vasnetsov-cto-at-qdrant-481b</guid>
      <description>&lt;p&gt;Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydhuvec9p3ezre5d54jo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydhuvec9p3ezre5d54jo.jpg" alt="Preview" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
A video recording of the discussion is available at &lt;a href="https://youtu.be/MVUkbMYPYTE" rel="noopener noreferrer"&gt;https://youtu.be/MVUkbMYPYTE&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;He covered the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Qdrant search engine and the Quaterion similarity learning framework;&lt;/li&gt;
&lt;li&gt;Applying similarity learning in multimodal settings;&lt;/li&gt;
&lt;li&gt;Elasticsearch embeddings vs. vector search engines;&lt;/li&gt;
&lt;li&gt;Support for multiple embeddings;&lt;/li&gt;
&lt;li&gt;Fundraising and VC discussions;&lt;/li&gt;
&lt;li&gt;A vision for the evolution of vector search;&lt;/li&gt;
&lt;li&gt;Fine-tuning for out-of-domain data.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>v0.9.0 update of the Qdrant engine went live</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Mon, 08 Aug 2022 15:49:00 +0000</pubDate>
      <link>https://dev.to/qdrant/v090-update-of-the-qdrant-engine-went-live-4gfn</link>
      <guid>https://dev.to/qdrant/v090-update-of-the-qdrant-engine-went-live-4gfn</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkay0zw8dmbokbchftac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkay0zw8dmbokbchftac.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Qdrant has released a new version of its vector similarity search engine: v0.9.0. It features dynamic cluster scaling capabilities. Qdrant is now more flexible with cluster deployment, allowing you to move shards between nodes and remove nodes from the cluster.&lt;/p&gt;

&lt;p&gt;v0.9.0 also brings various improvements, such as removing temporary snapshot files during a complete snapshot, disabling the default mmap threshold, and more.&lt;/p&gt;

&lt;p&gt;Read more in the changelog: &lt;a href="https://github.com/qdrant/qdrant/releases/tag/v0.9.0" rel="noopener noreferrer"&gt;https://github.com/qdrant/qdrant/releases/tag/v0.9.0&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next release will include frequently requested functionality. Stay tuned!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Finding errors in datasets with Similarity Search</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Tue, 19 Jul 2022 14:54:24 +0000</pubDate>
      <link>https://dev.to/qdrant/finding-errors-in-datasets-with-similarity-search-4i8e</link>
      <guid>https://dev.to/qdrant/finding-errors-in-datasets-with-similarity-search-4i8e</guid>
      <description>&lt;p&gt;Nowadays, people create a huge number of applications of various types and solve problems in different areas. Despite such diversity, they have something in common - they need to process data. Real-world data is a living structure, it grows day by day, changes a lot and becomes harder to work with.&lt;/p&gt;

&lt;p&gt;In some cases, you need to categorize or label your data, which can be a tough problem given its scale. The process of splitting or labelling is error-prone and these errors can be very costly. Imagine that you failed to achieve the desired quality of the model due to inaccurate labels. Worse, your users are faced with a lot of irrelevant items, unable to find what they need and getting annoyed by it. Thus, you get poor retention, and it directly impacts company revenue. It is really important to avoid such errors in your data.&lt;/p&gt;
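
&lt;p&gt;One simple version of this idea is easy to sketch: embed every item, look at its nearest neighbours, and flag items whose label disagrees with the neighbourhood majority (the vectors and labels below are made up for illustration; a real setup would use neural embeddings and a vector database for the neighbour search):&lt;/p&gt;

```python
from collections import Counter

# Toy dataset: one "dog" point sits inside the "cat" cluster and should
# be flagged as a likely labeling error.
items = [
    {"vec": [1.00, 0.00], "label": "cat"},
    {"vec": [0.95, 0.05], "label": "cat"},
    {"vec": [0.90, 0.10], "label": "cat"},
    {"vec": [0.97, 0.03], "label": "dog"},  # suspicious label
    {"vec": [0.00, 1.00], "label": "dog"},
    {"vec": [0.05, 0.95], "label": "dog"},
    {"vec": [0.10, 0.90], "label": "dog"},
]

def neighbour_majority(i, k=3):
    # Majority label among the k nearest neighbours of item i
    # (squared Euclidean distance on the toy 2-d vectors).
    def dist(j):
        return sum((a - b) ** 2
                   for a, b in zip(items[i]["vec"], items[j]["vec"]))
    others = sorted((j for j in range(len(items)) if j != i), key=dist)
    votes = Counter(items[j]["label"] for j in others[:k])
    return votes.most_common(1)[0][0]

suspicious = [i for i in range(len(items))
              if items[i]["label"] != neighbour_majority(i)]
print(suspicious)
```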

&lt;p&gt;George Panchuk, ML Engineer at Qdrant, describes how to find labeling errors in datasets with similarity search. Read more: &lt;strong&gt;&lt;a href="https://qdrant.tech/articles/dataset-quality/" rel="noopener noreferrer"&gt;https://qdrant.tech/articles/dataset-quality/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23jeua2x1ufhve3lhyw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23jeua2x1ufhve3lhyw8.png" alt="Category vs. Title and Image" width="701" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introducing the Quaterion: a framework for fine-tuning similarity learning models</title>
      <dc:creator>Qdrant</dc:creator>
      <pubDate>Tue, 28 Jun 2022 13:17:11 +0000</pubDate>
      <link>https://dev.to/qdrant/introducing-the-quaterion-a-framework-for-fine-tuning-similarity-learning-models-2jam</link>
      <guid>https://dev.to/qdrant/introducing-the-quaterion-a-framework-for-fine-tuning-similarity-learning-models-2jam</guid>
      <description>&lt;p&gt;Qdrant team shared the result of the work they’ve been into during the last months - &lt;strong&gt;&lt;a href="https://quaterion.qdrant.tech/" rel="noopener noreferrer"&gt;Quaterion&lt;/a&gt;&lt;/strong&gt;. It is a framework for fine-tuning similarity learning models that streamlines the training process to make it significantly faster and cost-efficient.&lt;/p&gt;

&lt;p&gt;To develop Quaterion, the Qdrant team built on PyTorch Lightning, leveraging its high-performance approach to constructing training loops for ML models.&lt;/p&gt;

&lt;p&gt;This framework empowers vector search &lt;a href="https://qdrant.tech/solutions/" rel="noopener noreferrer"&gt;solutions&lt;/a&gt;, such as semantic search, anomaly detection, and others, with an advanced caching mechanism, specially designed head layers for pre-trained models, and a high degree of customization for large-scale training pipelines, among other features.&lt;/p&gt;

&lt;p&gt;Here you can read why similarity learning is preferable to the traditional machine learning approach and how Quaterion can help: &lt;a href="https://quaterion.qdrant.tech/getting_started/why_quaterion.html#why-quaterion" rel="noopener noreferrer"&gt;https://quaterion.qdrant.tech/getting_started/why_quaterion.html#why-quaterion&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quaterion on GitHub: &lt;a href="https://github.com/qdrant/quaterion" rel="noopener noreferrer"&gt;https://github.com/qdrant/quaterion&lt;/a&gt;&lt;/p&gt;

</description>
      <category>similaritylearning</category>
      <category>vectorsearch</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
