When you develop generative AI applications, you typically introduce three additional components to your infrastructure: an embedder, an LLM, and a vector database.
However, if you are using MariaDB, you don't need to introduce an additional database with its own SQL dialect, or even worse, its own proprietary API. Since MariaDB 11.7 (and MariaDB Enterprise Server 11.4), you can simply store your embeddings (or vectors) in any column of any table, with no need to make your applications database polyglots.
"After announcing the preview of vector search in MariaDB Server, the vector search capability has now been added to the MariaDB Community Server 11.7 release," writes Ralf Gebhardt, Product Manager for MariaDB Server at MariaDB. This includes a new datatype (VECTOR
), vector index, and a set of functions for vector manipulation.
Why Are Vectors Needed in Generative AI Applications?
Vectors are needed in generative AI applications because they encode complex meanings in a compact, fixed-length array of numbers (a vector). This is clearest in the context of retrieval-augmented generation (RAG), a technique that lets you fetch relevant data from your sources (APIs, files, databases) to enhance an AI model's input with the fetched, often business-private, data.
Since your data sources can be vast and current AI models have a finite context window, you cannot simply add all of your data to a prompt; you need a way to find only the relevant pieces. By splitting your data into chunks and running those chunks through a special AI model called an embedder, you can generate vectors and use proximity search techniques to find the relevant information to append to a prompt.
For example, take the following input from a user in a recommendation chatbot:
I need a good case for my iPhone 15 pro.
Since your AI model was not trained with the exact data containing the product information in your online store, you need to retrieve the most relevant products and their information before sending the prompt to the model.
For this, you send the original user input to an embedder and get back a vector that you can then use to find the, say, 10 products closest to the user input. Once you have this information (and we'll see how to do this with MariaDB later), you can send the enhanced prompt to your AI model:
I need a good case for my iPhone 15 pro. Which of the following products better suit my needs?
1. ProShield Ultra Case for iPhone 15 Pro - $29.99: A slim, shock-absorbing case with raised edges for screen protection and a sleek matte finish.
2. EcoGuard Bio-Friendly Case for iPhone 15 Pro - $24.99: Made from 100% recycled materials, offering moderate drop protection with an eco-conscious design.
3. ArmorFlex Max Case for iPhone 15 Pro - $39.99: Heavy-duty protection with military-grade durability, including a built-in kickstand for hands-free use.
4. CrystalClear Slim Case for iPhone 15 Pro - $19.99: Ultra-thin and transparent, showcasing the phone's design while providing basic scratch protection.
5. LeatherTouch Luxe Case for iPhone 15 Pro - $49.99: Premium genuine leather construction with a soft-touch feel and an integrated cardholder for convenience.
This results in AI predictions that use your own data.
Creating Tables for Vector Storage
To store vectors in MariaDB, use the new VECTOR data type. For example:
CREATE TABLE products (
id INT PRIMARY KEY,
name VARCHAR(100),
description TEXT,
embedding VECTOR(2048)
);
In this example, the embedding column can hold a vector of 2048 dimensions. This number must match the number of dimensions your embedder generates.
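If you use a different embedder, adjust the dimension accordingly. For instance, the following sketch assumes a hypothetical embedder that outputs 384-dimensional vectors (both the table and the dimension are only illustrative):
CREATE TABLE documents (
id INT PRIMARY KEY,
content TEXT,
-- 384 is only an example; use the exact dimension your embedder outputs
embedding VECTOR(384)
);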
Creating Vector Indexes
For read performance, it's important to add an index to your vector column. This speeds up similarity searches. You can define the index at table creation time as follows:
CREATE TABLE products (
id INT PRIMARY KEY,
name VARCHAR(100),
description TEXT,
embedding VECTOR(2048) NOT NULL,
VECTOR INDEX (embedding)
);
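If the table already exists, you should be able to add the vector index afterwards with ALTER TABLE (a hedged sketch; check the documentation for your exact version, since the indexed column must be NOT NULL and earlier previews had additional restrictions):
ALTER TABLE products
-- adds an HNSW vector index to the existing embedding column
ADD VECTOR INDEX (embedding);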
For greater control, you can specify the distance function that the database server will use to build the index, as well as the M value of the Hierarchical Navigable Small World (HNSW) algorithm used by MariaDB. For example:
CREATE TABLE products (
id INT PRIMARY KEY,
name VARCHAR(100),
description TEXT,
embedding VECTOR(2048) NOT NULL,
VECTOR INDEX (embedding) M=8 DISTANCE=cosine
);
Check the documentation for more information on these configurations.
Inserting Vectors
When you pass data (text, images, audio) through an embedder, you get a vector. Typically, this is a series of numbers in an array in JSON format. To insert this vector into a MariaDB table, you can use the VEC_FromText function. For example:
INSERT INTO products (name, embedding)
VALUES
("Alarm clock", VEC_FromText("[0.001, 0, ...]")),
("Cow figure", VEC_FromText("[1.0, 0.05, ...]")),
("Bicycle", VEC_FromText("[0.2, 0.156, ...]"));
Remember that the inserted vectors must have the correct number of dimensions, as defined in the CREATE TABLE statement.
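To inspect stored vectors in a human-readable form, you can convert them back to text with the complementary VEC_ToText function:
SELECT name, VEC_ToText(embedding) AS embedding_text
FROM products
WHERE name = "Bicycle";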
Similarity Search (Comparing Vectors)
In RAG applications, you send the user input to an embedder to get a vector. You can then query the records in your database whose vectors are closest to that vector. Closer vectors represent data that is semantically similar. At the time of writing, MariaDB has two distance functions that you can use for similarity (or proximity) search:
- VEC_DISTANCE_EUCLIDEAN: calculates the straight-line distance between two vectors. It is best suited for vectors derived from raw, unnormalized data or scenarios where spatial separation directly correlates with similarity, such as comparing positional or numeric features. However, it is less effective for high-dimensional or normalized embeddings since it is sensitive to differences in vector magnitude.
- VEC_DISTANCE_COSINE: measures the angular difference between vectors. It is good for comparing normalized embeddings, especially in semantic applications like text or document retrieval, and excels at capturing similarity in meaning or context.
Keep in mind that index-assisted similarity search is approximate (HNSW is an approximate nearest-neighbor algorithm), and result quality depends heavily on the quality of the calculated vectors and, hence, on the embedder used.
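To get a feel for the two functions, you can evaluate them over small literal vectors; the numbers below are arbitrary and only for illustration:
SELECT
VEC_DISTANCE_EUCLIDEAN(VEC_FromText("[1.0, 0.0]"), VEC_FromText("[0.0, 1.0]")) AS euclidean,
VEC_DISTANCE_COSINE(VEC_FromText("[1.0, 0.0]"), VEC_FromText("[0.0, 1.0]")) AS cosine;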
The following example finds the top 10 most similar products to a given vector ($user_input_vector should be replaced with the actual vector returned by the embedder for the user input):
SELECT id, name, description
FROM products
ORDER BY VEC_DISTANCE_COSINE(
VEC_FromText($user_input_vector),
embedding
)
LIMIT 10;
The VEC_DISTANCE_COSINE and VEC_DISTANCE_EUCLIDEAN functions take two vectors. In the previous example, one of the vectors is the vector calculated over the user input, and the other is the corresponding vector for each record in the products table.
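Similarity search also combines naturally with regular SQL. The following sketch returns the computed distance and narrows the candidates with an ordinary filter (the LIKE condition is just an illustrative assumption; note that adding filters may prevent the vector index from being used, so check the plan with EXPLAIN):
SELECT id, name, description,
VEC_DISTANCE_COSINE(VEC_FromText($user_input_vector), embedding) AS distance
FROM products
WHERE name LIKE "%iPhone 15 Pro%"
ORDER BY distance
LIMIT 10;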
A Practical Example
I have prepared a practical example using Java and no AI frameworks, so you can truly understand the process of creating generative AI applications that leverage MariaDB's vector search capabilities. You can find the code on GitHub.