
Julien L for WiScale


What if SQL could search by meaning? Meet VelesQL

You know SQL. You have been writing SELECT, WHERE, and JOIN for years. But the moment you need to search by meaning, traverse a knowledge graph, or rank results by relevance, SQL cannot help you. You reach for a proprietary API, a different client, a different mental model.

What if you did not have to?

VelesQL is a query language that starts where SQL stops. It keeps the syntax you already know and adds three things SQL never had: vector similarity search (NEAR), graph pattern matching (MATCH), and full-text BM25 ranking. One language. One query. One result set.

This article walks through VelesQL from the ground up, with working Python code you can run right now.

Key concepts in 60 seconds

If you have worked with embeddings before, skip ahead. Otherwise, here is what the jargon means:

Vector embedding - a list of numbers (e.g. 384 floats) that captures the meaning of a piece of text. Two sentences that mean similar things will have vectors that are close together in space. The sentence "How to train a dog" and "Puppy obedience tips" produce vectors that are almost identical, even though they share no words.

Cosine similarity - measures the angle between two vectors. A score of 1.0 means identical direction (same meaning), 0.0 means unrelated. This is how VelesDB decides which documents are "closest" to your query.
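To make that concrete, here is the cosine computation in plain numpy. This is an illustrative sketch, independent of VelesDB:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 0.1, 0.9]   # almost the same direction as v1
v3 = [0.0, 1.0, 0.0]   # orthogonal to v1

print(cosine_similarity(v1, v2))  # close to 1.0
print(cosine_similarity(v1, v3))  # 0.0
```

Real 384-dimensional embedding vectors behave exactly the same way, just with more components.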

HNSW (Hierarchical Navigable Small World) - instead of comparing your query vector against every single vector in the database (which would be slow), HNSW builds a multi-layer navigation graph at insert time. At query time, it hops through this graph to find approximate nearest neighbors in O(log n) instead of O(n). This is the index structure behind NEAR queries.

BM25 (Best Match 25) - the classic text ranking algorithm used by Elasticsearch and Lucene. It scores a document based on how often your search terms appear in it, weighted by how rare each term is across all documents. A document matching a rare term like "HNSW" scores higher than one matching a common term like "data".
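The scoring intuition can be sketched in a few lines of Python. This is a simplified single-term BM25 with standard k1/b defaults, not VelesDB's actual implementation:

```python
import math

def bm25_score(term, doc, docs, k1=1.5, b=0.75):
    """Score one term against one document (docs are lists of tokens)."""
    n = len(docs)
    df = sum(1 for d in docs if term in d)            # how many docs contain the term
    idf = math.log(1 + (n - df + 0.5) / (df + 0.5))   # rare terms weigh more
    tf = doc.count(term)                              # occurrences in this doc
    avgdl = sum(len(d) for d in docs) / n             # average doc length
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return idf * norm

docs = [
    "hnsw builds a navigation graph".split(),
    "data data data everywhere".split(),
    "more data about data pipelines".split(),
]

# The rare term "hnsw" outscores the common term "data" in its matching doc
print(bm25_score("hnsw", docs[0], docs))
print(bm25_score("data", docs[1], docs))
```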

RRF (Reciprocal Rank Fusion) - when you combine vector search and text search, you get two independent ranked lists with incompatible scores. RRF merges them by assigning each result a score of 1 / (k + rank) from each list, then summing. No score normalization needed. The k parameter (default: 60) controls how much weight goes to top-ranked results versus the long tail.

The problem with multi-model queries

A typical AI application needs three types of search:

1. Vector search    → "Find documents similar to this embedding"
2. Graph traversal  → "Walk relationships between entities"
3. Full-text search → "Rank documents containing these keywords"

Today, most developers use three separate systems for this. A vector database for embeddings, a graph database for relationships, and Elasticsearch or similar for text. That means three clients, three query languages, three result formats, and a stitching layer to merge everything.

Your app
   → Pinecone client  (proprietary API)
   → Neo4j client     (Cypher)
   → Elasticsearch    (JSON DSL)
   → custom code to merge results

VelesQL replaces this with a single query interface:

Your app
   → VelesDB + VelesQL  (one language, one result set)

Setup

pip install velesdb numpy

import velesdb
import numpy as np

db = velesdb.Database("./velesql_demo")

That is it. No Docker, no server process, no API key. VelesDB runs as a library. The database is a folder on disk.

Creating collections with DDL

VelesQL supports full DDL. You create collections the same way you create tables in SQL:

# Create a vector collection
db.execute_query("CREATE COLLECTION articles (dimension = 384, metric = 'cosine')")

# Create a graph collection with schema
db.execute_query("""
    CREATE GRAPH COLLECTION knowledge (dimension = 384, metric = 'cosine') SCHEMALESS
""")

# Inspect what we have
results = db.execute_query("SHOW COLLECTIONS")
for r in results:
    print(f"  {r['payload']['name']} ({r['payload']['type']})")

Output:

  articles (vector)
  knowledge (graph)

You can also describe a collection to see its configuration:

info = db.execute_query("DESCRIBE COLLECTION articles")
print(info[0]['payload'])
# {'name': 'articles', 'type': 'vector', 'dimension': 384, 'metric': 'Cosine', 'point_count': 0}

Inserting data: what a record actually looks like

Before writing any code, let's understand what VelesDB stores. Each record (called a point) has three parts:

Point {
  id:      1                                            -- unique integer
  vector:  [0.052, 0.014, -0.025, ..., 0.009]          -- 384 floats (the "meaning")
  payload: {                                            -- JSON metadata (filterable)
    "title":    "Vector databases for AI applications",
    "category": "tutorial"
  }
}

The vector is the key difference from a SQL row: a list of numbers that encodes the semantic meaning of the record. In production, you generate it with an embedding model:

# In production:
# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer("all-MiniLM-L6-v2")   # 384 dimensions
# vector = model.encode("Vector databases for AI applications").tolist()
# → [0.052, 0.014, -0.025, ..., 0.009]
#
# For this tutorial, we use random vectors to skip the 90MB model download.

Now let's insert data using plain SQL syntax:

def make_vector():
    """Generate a random 384-dim vector for demonstration."""
    v = np.random.rand(384).astype(np.float32)
    return (v / np.linalg.norm(v)).tolist()

np.random.seed(42)

documents = [
    (1, "Vector databases for AI applications", "tutorial"),
    (2, "Graph neural networks explained", "guide"),
    (3, "Building RAG pipelines with Python", "tutorial"),
    (4, "Knowledge graphs in healthcare", "research"),
    (5, "Hybrid search strategies for production", "tutorial"),
]

for doc_id, title, category in documents:
    v = make_vector()
    db.execute_query(
        "INSERT INTO articles (id, vector, title, category) VALUES ($id, $v, $title, $cat)",
        {"id": doc_id, "v": v, "title": title, "cat": category}
    )

print(f"Inserted {len(documents)} documents")

Let's peek at what VelesDB actually stored:

result = db.execute_query("SELECT * FROM articles WHERE category = 'tutorial' LIMIT 1")
import json
print(json.dumps(result[0], indent=2))

Output:

{
  "id": 1,
  "score": 1.0,
  "payload": {
    "title": "Vector databases for AI applications",
    "category": "tutorial"
  },
  "vector_score": 1.0,
  "graph_score": null,
  "fused_score": 1.0,
  "bindings": {
    "title": "Vector databases for AI applications",
    "category": "tutorial"
  }
}

This is the anatomy of every VelesDB result. payload holds your metadata (title, category, any JSON field you inserted). bindings mirrors the same data for use in LET expressions. vector_score is the cosine similarity when you use NEAR, graph_score activates with MATCH graph patterns, and fused_score combines both when you use USING FUSION. The vector itself is stored internally and never returned in results - it is only used for similarity computation.

Multi-row INSERT is also supported:

db.execute_query(
    """INSERT INTO articles (id, vector, title, category) VALUES
       ($id1, $v1, 'Sparse embeddings 101', 'tutorial'),
       ($id2, $v2, 'Local-first AI architectures', 'guide')""",
    {"id1": 6, "v1": make_vector(), "id2": 7, "v2": make_vector()}
)

Vector search with NEAR

Here is where VelesQL diverges from SQL. The NEAR keyword performs approximate nearest neighbor search on dense vectors:

query_vector = make_vector()

results = db.execute_query(
    """SELECT id, title, similarity() AS score
       FROM articles
       WHERE vector NEAR $q
       ORDER BY similarity() DESC
       LIMIT 3""",
    {"q": query_vector}
)

for r in results:
    print(f"  [{r['score']:.3f}] {r['payload']['title']}")

Output:

  [0.821] Vector databases for AI applications
  [0.779] Building RAG pipelines with Python
  [0.752] Hybrid search strategies for production

The similarity() function returns the pre-computed search score. You can use it in SELECT, ORDER BY, and even in LET bindings (more on that below).

Combining vector search with metadata filters

Just add AND conditions like you would in SQL:

results = db.execute_query(
    """SELECT id, title, similarity() AS score
       FROM articles
       WHERE vector NEAR $q AND category = 'tutorial'
       ORDER BY similarity() DESC
       LIMIT 5""",
    {"q": query_vector}
)

for r in results:
    print(f"  [{r['score']:.3f}] {r['payload']['title']}")

Only tutorials are returned. The metadata filter runs alongside the vector search, not as a post-filter.

Advanced WHERE: IN, BETWEEN, ILIKE, IS NULL

VelesQL supports the full range of SQL filtering operators. If you know SQL, you already know how to filter in VelesQL:

# IN: filter by a set of values
results = db.execute_query(
    "SELECT id, title FROM articles WHERE category IN ('tutorial', 'research') LIMIT 10"
)

# BETWEEN: range filtering
results = db.execute_query(
    "SELECT id, title FROM articles WHERE price BETWEEN 10 AND 100 LIMIT 10"
)

# ILIKE: case-insensitive pattern matching
results = db.execute_query(
    "SELECT id, title FROM articles WHERE title ILIKE '%ai%' LIMIT 10"
)

# IS NULL / IS NOT NULL
results = db.execute_query(
    "SELECT id, title FROM articles WHERE category IS NOT NULL LIMIT 10"
)

# NOT + compound conditions
results = db.execute_query(
    "SELECT id, title FROM articles WHERE NOT (category = 'draft') AND price > 50 LIMIT 10"
)

ILIKE is particularly useful: it matches case-insensitively, so '%ai%' finds both "AI-Powered Search" and "ai portfolio optimizer". LIKE works the same way but is case-sensitive.

Full-text search with MATCH

The MATCH keyword performs BM25 full-text search. Remember BM25 from the key concepts section: it ranks documents by term frequency, adjusted by how rare each term is. So MATCH 'graph' will rank "Graph neural networks explained" higher than a document that mentions "graph" once among hundreds of other words.

results = db.execute_query(
    "SELECT id, title FROM articles WHERE title MATCH 'graph' LIMIT 5"
)

for r in results:
    print(f"  {r['payload']['title']}")

Hybrid search: NEAR + MATCH

When you combine NEAR and MATCH in the same WHERE clause, you get two independent ranked lists: one from vector similarity, one from BM25 text scoring. These scores are on completely different scales, so you cannot just add them.

VelesQL solves this with RRF (Reciprocal Rank Fusion). Each result gets a score of 1 / (k + rank) from each list, and the scores are summed. A document ranked #1 by vector and #3 by BM25 gets 1/61 + 1/63 ≈ 0.032. A document ranked #2 by both gets 1/62 + 1/62 ≈ 0.032. RRF naturally rewards documents that appear in both lists, without needing to normalize the raw scores.
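The fusion arithmetic is simple enough to sketch in plain Python. This illustrates the formula only; it is not VelesDB's internal code:

```python
def rrf_fuse(ranked_lists, k=60):
    """Merge ranked lists of ids with Reciprocal Rank Fusion: score = sum of 1/(k + rank)."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

vector_hits = ["a", "b", "c"]   # ranked by cosine similarity
bm25_hits   = ["c", "a", "d"]   # ranked by BM25

# Documents appearing in both lists ("a", "c") end up ahead of single-list hits
for doc_id, score in rrf_fuse([vector_hits, bm25_hits]):
    print(f"{doc_id}: {score:.4f}")
```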

results = db.execute_query(
    """SELECT id, title, similarity() AS score
       FROM articles
       WHERE vector NEAR $q AND title MATCH 'AI'
       ORDER BY similarity() DESC
       LIMIT 5""",
    {"q": query_vector}
)

for r in results:
    print(f"  [{r['score']:.3f}] {r['payload']['title']}")

You can also control the fusion strategy explicitly:

results = db.execute_query(
    """SELECT id, title, similarity() AS score
       FROM articles
       WHERE vector NEAR $q AND title MATCH 'AI'
       LIMIT 5
       USING FUSION(strategy = 'rrf', k = 60)""",
    {"q": query_vector}
)

Four fusion strategies are available:

| Strategy | Best for | Key parameter |
| --- | --- | --- |
| rrf | General-purpose (default) | k (default: 60) |
| weighted | Custom importance | weights = [0.7, 0.3] |
| rsf | Dense + sparse blending | dense_weight, sparse_weight |
| maximum | Conservative precision | (none) |

LET bindings: named scores

VelesQL 3.2 introduced LET bindings. Define a score expression once, use it everywhere:

results = db.execute_query(
    """LET relevance = similarity()
       SELECT id, title, relevance AS score
       FROM articles
       WHERE vector NEAR $q
       ORDER BY relevance DESC
       LIMIT 3""",
    {"q": query_vector}
)

for r in results:
    print(f"  [{r['score']:.3f}] {r['payload']['title']}")

LET bindings shine in hybrid queries where you want to weight multiple score components:

results = db.execute_query(
    """LET hybrid = 0.7 * vector_score + 0.3 * bm25_score
       SELECT id, title, hybrid AS score
       FROM articles
       WHERE vector NEAR $q AND title MATCH 'search'
       ORDER BY hybrid DESC
       LIMIT 5""",
    {"q": query_vector}
)

Bindings can reference earlier bindings, so you can build complex scoring pipelines:

LET base = 0.6 * vector_score + 0.4 * bm25_score
LET boosted = base * 1.5
SELECT *, boosted AS final_score
FROM docs
WHERE vector NEAR $v AND content MATCH 'AI'
ORDER BY boosted DESC
LIMIT 10

Graph queries with MATCH patterns

VelesQL borrows Cypher-style graph patterns. Here is a complete example that builds a knowledge graph and queries it:

import json

# Insert nodes with _labels (required for MATCH pattern matching)
db.execute_query(
    "INSERT NODE INTO knowledge (id = 1, payload = '%s')" %
    json.dumps({"name": "Alice", "role": "engineer", "_labels": ["Person"]})
)
db.execute_query(
    "INSERT NODE INTO knowledge (id = 2, payload = '%s')" %
    json.dumps({"name": "Bob", "role": "researcher", "_labels": ["Person"]})
)
db.execute_query(
    "INSERT NODE INTO knowledge (id = 3, payload = '%s')" %
    json.dumps({"name": "Acme Corp", "industry": "tech", "_labels": ["Company"]})
)

# Add vectors to nodes (needed for combined vector + graph queries)
coll = db.get_collection("knowledge")
coll.upsert(1, vector=make_vector(), payload={"name": "Alice", "role": "engineer", "_labels": ["Person"]})
coll.upsert(2, vector=make_vector(), payload={"name": "Bob", "role": "researcher", "_labels": ["Person"]})
coll.upsert(3, vector=make_vector(), payload={"name": "Acme Corp", "industry": "tech", "_labels": ["Company"]})

# Insert edges
db.execute_query("INSERT EDGE INTO knowledge (source = 1, target = 2, label = 'KNOWS')")
db.execute_query("INSERT EDGE INTO knowledge (source = 1, target = 3, label = 'WORKS_AT')")
db.execute_query("INSERT EDGE INTO knowledge (source = 2, target = 3, label = 'WORKS_AT')")

Cypher-style MATCH in WHERE

Now query the graph using MATCH patterns inside a WHERE clause:

# Find people who know other people
results = db.execute_query(
    "SELECT * FROM knowledge WHERE MATCH (a:Person)-[:KNOWS]->(b) LIMIT 10"
)
for r in results:
    print(f"  {r['payload']['name']} ({r['payload']['role']})")

Output:

  Alice (engineer)

# Find everyone who works at a company
results = db.execute_query(
    "SELECT * FROM knowledge WHERE MATCH (p:Person)-[:WORKS_AT]->(c:Company) LIMIT 10"
)
for r in results:
    print(f"  {r['payload']['name']}")

Output:

  Bob
  Alice

Combining graph + metadata + vector search

This is where VelesQL shines. A single query that walks a graph, filters by metadata, and ranks by vector similarity:

# Find people who work at a company AND are similar to a query vector
results = db.execute_query(
    """SELECT * FROM knowledge
       WHERE vector NEAR $q AND MATCH (p:Person)-[:WORKS_AT]->(c:Company)
       LIMIT 5""",
    {"q": query_vector}
)

# Graph pattern + scalar filter
results = db.execute_query(
    """SELECT * FROM knowledge
       WHERE MATCH (a:Person)-[:WORKS_AT]->(b) AND role = 'engineer'
       LIMIT 10"""
)
for r in results:
    print(f"  {r['payload']['name']} - {r['payload']['role']}")

Edge queries and properties

Query edges directly with SELECT EDGES:

edges = db.execute_query("SELECT EDGES FROM knowledge WHERE source = 1")
for e in edges:
    print(f"  {e['payload']['label']}: {e['payload']['source']} -> {e['payload']['target']}")

Output:

  KNOWS: 1 -> 2
  WORKS_AT: 1 -> 3

Edges can also carry properties:

db.execute_query(
    """INSERT EDGE INTO knowledge (source = 1, target = 2, label = 'COLLABORATES')
       WITH PROPERTIES (project = 'VelesDB', since = 2024)"""
)

Search quality tuning with WITH

Every vector search query can be fine-tuned with a WITH clause:

# Fast search (autocomplete, suggestions)
results = db.execute_query(
    "SELECT id, title FROM articles WHERE vector NEAR $q LIMIT 5 WITH (mode = 'fast')",
    {"q": query_vector}
)

# Accurate search (production queries)
results = db.execute_query(
    "SELECT id, title FROM articles WHERE vector NEAR $q LIMIT 5 WITH (mode = 'accurate')",
    {"q": query_vector}
)

# Custom HNSW parameters
results = db.execute_query(
    """SELECT id, title FROM articles
       WHERE vector NEAR $q LIMIT 5
       WITH (ef_search = 256, rerank = true)""",
    {"q": query_vector}
)

Five quality presets are available:

| Mode | ef_search | Best for |
| --- | --- | --- |
| fast | 32 | Autocomplete, real-time suggestions |
| balanced | 64 | Default, good accuracy/speed tradeoff |
| accurate | 256 | Production search, quality matters |
| perfect | 512 | Benchmarks, maximum recall |
| autotune | auto | Adapts based on collection size |

UPDATE, DELETE, and other DML

VelesQL supports the full CRUD lifecycle:

# Update a field
db.execute_query("UPDATE articles SET category = 'advanced' WHERE id = 2")

# Delete a row (WHERE is mandatory for safety)
db.execute_query("DELETE FROM articles WHERE id = 7")

# Upsert (insert or update)
db.execute_query(
    "UPSERT INTO articles (id, vector, title, category) VALUES ($id, $v, 'Updated article', 'guide')",
    {"id": 2, "v": make_vector()}
)

Set operations: UNION, INTERSECT, EXCEPT

Combine results from multiple queries, just like in SQL:

# UNION: combine tutorials and guides, remove duplicates
results = db.execute_query("""
    SELECT id, title FROM articles WHERE category = 'tutorial'
    UNION
    SELECT id, title FROM articles WHERE category = 'guide'
""")

# INTERSECT: find items in both result sets
results = db.execute_query("""
    SELECT id, title FROM articles WHERE category = 'tutorial'
    INTERSECT
    SELECT id, title FROM articles WHERE vector NEAR $q LIMIT 10
""", {"q": query_vector})

# EXCEPT: items in first set but not in second
results = db.execute_query("""
    SELECT id FROM articles WHERE category = 'tutorial'
    EXCEPT
    SELECT id FROM articles WHERE title MATCH 'graph'
""")

N-ary chaining works too: A UNION B UNION C evaluates left to right.

DISTINCT: deduplicate results

# Unique categories
results = db.execute_query("SELECT DISTINCT category FROM articles")

# DISTINCT with vector search (dedup by payload value after search)
results = db.execute_query(
    "SELECT DISTINCT category FROM articles WHERE vector NEAR $q LIMIT 20",
    {"q": query_vector}
)

Temporal queries

Filter by time without converting timestamps manually:

import time

# Insert an event with a timestamp
now = int(time.time())
db.execute_query(
    "INSERT INTO articles (id, vector, title, created_at) VALUES ($id, $v, 'Fresh article', $ts)",
    {"id": 100, "v": make_vector(), "ts": now}
)

# Query using NOW() and INTERVAL
results = db.execute_query(
    "SELECT id, title FROM articles WHERE created_at > NOW() - INTERVAL '24 hours' LIMIT 10"
)

NOW() returns the current Unix timestamp. INTERVAL supports seconds, minutes, hours, days, weeks, and months.

Query introspection with EXPLAIN

Before optimizing, understand what the query planner is doing:

plan = db.execute_query(
    "EXPLAIN SELECT * FROM articles WHERE vector NEAR $q LIMIT 5",
    {"q": query_vector}
)

print(plan[0]['payload']['tree'])

Output:

Query Plan:
├─ VectorSearch
│  ├─ Collection: articles
│  ├─ ef_search: 100
│  └─ Candidates: 5
└─ Limit: 5

Estimated cost: 0.101ms
Index used: HNSW

Index management

Create secondary indexes on metadata fields for faster filtering:

# Create an index on category
db.execute_query("CREATE INDEX ON articles (category)")

# Analyze collection statistics for the query optimizer
db.execute_query("ANALYZE articles")

Agent Memory collections

If you use VelesDB's Agent Memory SDK, the three memory subsystems (semantic, episodic, procedural) are queryable via VelesQL:

memory = db.agent_memory(384)

# Store some facts via the SDK
memory.semantic.store(1, "User prefers Python", make_vector())
memory.semantic.store(2, "User works in healthcare", make_vector())

# Query them with VelesQL
results = db.execute_query(
    """SELECT * FROM _semantic_memory
       WHERE vector NEAR $q
       LIMIT 5 WITH (mode = 'accurate')""",
    {"q": make_vector()}
)

for r in results:
    print(f"  {r['payload']['content']}")

The same works for _episodic_memory (with timestamp filtering) and _procedural_memory (with confidence sorting).

VelesQL vs. the alternatives

| Feature | VelesQL | Raw Python API | Pinecone | pgvector (SQL) |
| --- | --- | --- | --- | --- |
| Vector NEAR search | Yes | Yes | Yes | Yes (with extensions) |
| Graph MATCH traversal | Yes | Partial | No | No |
| Full-text MATCH (BM25) | Yes | No | No | Yes (tsvector) |
| Hybrid fusion (RRF, weighted) | Yes | Manual | No | No |
| LET score bindings | Yes | No | No | No |
| DDL (CREATE, DROP, ALTER) | Yes | Method calls | Dashboard | Yes |
| EXPLAIN query plan | Yes | No | No | Yes |
| Temporal (NOW, INTERVAL) | Yes | Manual | No | Yes |
| Runs locally, no server | Yes | Yes | No | No |

The point is not that VelesQL replaces SQL everywhere. It fills a gap: the space where you need vector search, graph queries, and text ranking in one place, without running three separate systems.

VelesDB is still young. INNER JOIN is fully supported, while LEFT/RIGHT/FULL JOIN are parsed but produce explicit runtime errors rather than failing silently. MATCH in hybrid mode boosts rather than strictly filters. These are documented tradeoffs, not bugs, and EXPLAIN tells you exactly what the query planner will do.

The full VelesQL feature set at a glance

| Category | Features |
| --- | --- |
| Queries | SELECT, FROM, WHERE, JOIN (INNER), GROUP BY, HAVING, ORDER BY, LIMIT, OFFSET, DISTINCT |
| Vector | NEAR, SPARSE_NEAR, NEAR_FUSED, similarity(), WITH (quality modes) |
| Text | MATCH (BM25 full-text search) |
| Graph | MATCH patterns, variable-length paths, INSERT NODE, INSERT EDGE, SELECT EDGES |
| Fusion | USING FUSION (rrf, weighted, rsf, maximum) |
| Scoring | LET bindings, arithmetic ORDER BY, vector_score, bm25_score, graph_score |
| DML | INSERT, multi-row INSERT, UPSERT, UPDATE, DELETE |
| DDL | CREATE/DROP COLLECTION, CREATE/DROP INDEX, ALTER COLLECTION, TRUNCATE |
| Admin | SHOW COLLECTIONS, DESCRIBE, EXPLAIN, ANALYZE, FLUSH |
| Temporal | NOW(), INTERVAL, temporal arithmetic |
| Types | Strings, integers, floats, booleans, NULL, vectors, sparse vectors, parameters |

The complete specification is on GitHub.

Getting started

  1. VelesDB on GitHub (~3MB binary, source-available under Elastic License 2.0)
  2. Documentation and examples
  3. VelesQL Specification

If you find the project useful, a star on the repo helps other developers discover it. We are actively looking for partners and contributors with use cases for hybrid search; check velesdb.com for details.


What query would you write first with VelesQL? I am curious whether the hybrid NEAR + MATCH pattern or the graph traversal feels more useful to your projects. Drop a comment below.
