This tutorial uses Supabase + JS/TS, so having some basic experience with them will help.
Most of what I'll explain here, I developed while working on dissolutus.
To start, we need to have something for our users to search. In my case, it’s templates, so they can create their own posts.
The important part for us: each of my templates has a description.
And now our AI journey begins!
You can think of an embedding as how an AI model understands the difference between, say, a pen and a car.
So, an embedding is a vector, which is basically a long list of numbers. These numbers capture the meaning or essence of the text. The closer two vectors are, the more related their meanings are.
For example, if I search for “food promotion”, templates with descriptions like “pizza” or “burger” would score higher in similarity, because they are about food. But something like “work with us” would be far away.
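To make that concrete, here is a toy sketch with made-up 3-dimensional vectors (real embeddings have hundreds of dimensions, and the numbers here are invented just to show the idea):
// Toy vectors: "pizza" and "burger" point in a similar direction,
// "work with us" points somewhere else entirely.
const pizza = [0.9, 0.1, 0.2];
const burger = [0.85, 0.2, 0.15];
const workWithUs = [0.1, 0.9, 0.3];

const dot = (a: number[], b: number[]) => a.reduce((s, v, i) => s + v * b[i], 0);
const norm = (a: number[]) => Math.sqrt(dot(a, a));
const cosineSimilarity = (a: number[], b: number[]) => dot(a, b) / (norm(a) * norm(b));

cosineSimilarity(pizza, burger);     // close to 1 -> related meanings
cosineSimilarity(pizza, workWithUs); // much lower -> unrelated meanings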
What we need now is to transform our search term, let's say "Food promotion", into a vector and look for similar vectors in our table to find matching templates.
To do that, we need to install @huggingface/transformers:
pnpm i @huggingface/transformers
And now we can add this snippet and it will do the magic for us:
import { pipeline } from '@huggingface/transformers';

// Load the feature-extraction pipeline with the model we'll also use server-side.
const pipe = await pipeline('feature-extraction', 'Supabase/gte-small');

const output = await pipe(searchText, {
  pooling: 'mean',  // average the token embeddings into a single vector
  normalize: true   // unit-length vectors make similarity comparisons easy
});

// output.data is a typed array; convert it to a plain number[]
const embedding = Array.from(output.data);
Something really important: you need to use the same model for both the search term and the rows in your table. Otherwise it won't work.
Here, I'll be using Supabase/gte-small, which returns a vector with 384 dimensions. Vectors with fewer dimensions mean faster queries and less RAM usage. Wanna learn more? Check this article created by the Supabase team.
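If you want a quick sanity check that your client-side embeddings will match the 384-dimension column we'll create later, something like this works:
// Sanity check: gte-small should always produce 384 numbers.
if (embedding.length !== 384) {
  throw new Error(`Expected 384 dimensions, got ${embedding.length}`);
}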
We are generating the search embedding, and that's great.
But let's move to our database now.
We need to enable the pgvector extension in Postgres.
You can do that from the Supabase dashboard, or by running this in a Postgres terminal/migration file:
CREATE EXTENSION IF NOT EXISTS vector;
And to make HTTP requests from Postgres, we also need pg_net:
CREATE EXTENSION IF NOT EXISTS pg_net;
The templates table is where I'll be storing my embeddings, so I run:
ALTER TABLE templates ADD COLUMN embedding vector(384);
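One gotcha worth knowing (my own observation, not something you'll need for this tutorial's flow): when you select a pgvector column through supabase-js, it comes back as a string rather than an array, so parse it before using it:
const { data } = await supabase
  .from('templates')
  .select('id, embedding')
  .limit(1)
  .single();

// pgvector columns arrive as strings like "[0.01,0.02,...]",
// which happen to be valid JSON arrays.
const vector: number[] = JSON.parse(data.embedding);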
When the admin user creates a new template, we need a trigger that calls a function, and this function calls another function (yep!) that generates our embedding.
Our function is:
create or replace function embed_server_row()
returns trigger
language plpgsql
as $$
declare
  content_column text = TG_ARGV[0];
  embedding_column text = TG_ARGV[1];
  timeout_milliseconds int = 5 * 60 * 1000;
begin
  perform
    net.http_post(
      url := supabase_url() || '/functions/v1/embed_server',
      headers := jsonb_build_object(
        'Content-Type', 'application/json'
      ),
      body := jsonb_build_object(
        'ids', jsonb_build_array(NEW.id),
        'table', TG_TABLE_NAME,
        'contentColumn', content_column,
        'embeddingColumn', embedding_column,
        'edgeKey', edge_key()
      ),
      timeout_milliseconds := timeout_milliseconds
    );
  return null;
end;
$$;
What this big boi here does is nothing more than take the id of the row that was created or updated and call the edge function that generates the embedding for the given table and columns (the column that will receive the embedding and the one that serves as its source).
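For reference, here's the shape of the JSON body our edge function will receive. The field names come straight from the http_post body above; the TypeScript types are my assumption:
// Payload posted by embed_server_row via pg_net
interface EmbedServerPayload {
  ids: string[];           // ids of the created/updated rows
  table: string;           // TG_TABLE_NAME, e.g. 'templates'
  contentColumn: string;   // source column, e.g. 'content'
  embeddingColumn: string; // destination column, e.g. 'embedding'
  edgeKey: string;         // shared secret from edge_key(), so the function can verify the caller
}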
You ask what supabase_url is? It's a secret stored in Supabase Vault, which works like an "env" for the database, where we can store secrets.
You can learn more about Vault here.
select vault.create_secret(
  '<api-url>', -- change here!
  'supabase_url'
);
And then you can:
create function supabase_url()
returns text
language plpgsql
security definer
as $$
declare
  secret_value text;
begin
  select decrypted_secret
    into secret_value
    from vault.decrypted_secrets
    where name = 'supabase_url';
  return secret_value;
end;
$$;
Now we can create our triggers. I'll be creating them for both inserts and updates.
-- add trigger to embed the row with the embed_server_row function
create trigger embed_templates_insert
  after insert on public.templates
  for each row
  when (new.content is not null)
  execute procedure embed_server_row('content', 'embedding');

-- when an update is made, the trigger will be executed
create trigger embed_templates_update
  after update on public.templates
  for each row
  when (old.content is distinct from new.content)
  execute procedure embed_server_row('content', 'embedding');
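With the triggers in place, nothing special is needed on the application side; a plain insert kicks everything off. A minimal sketch, assuming the content column from the triggers above:
// Inserting a template fires embed_templates_insert, which calls
// the edge function in the background through pg_net.
const { error } = await supabase
  .from('templates')
  .insert({ content: 'Pizza promotion for the weekend' });

// The embedding is written asynchronously, so the column may still be
// null for a moment right after the insert resolves.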
Now we get to our edge function. Don't know what that means? Check this article.
As you can see in the http_post above, the trigger function calls /functions/v1/embed_server, so we need an edge function named embed_server.
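In a standard Supabase CLI project, that means a layout like this (the usual convention; adjust if your import map lives elsewhere):
supabase/
└── functions/
    ├── import_map.json
    └── embed_server/
        └── index.ts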
In your import_map you should use:
{
  "imports": {
    "@supabase/supabase-js": "https://esm.sh/@supabase/supabase-js@2.21.0"
  }
}
And our file will look something like this:
import { createClient } from "@supabase/supabase-js";

// Supabase.ai is built into the edge runtime, no extra install needed.
const model = new Supabase.ai.Session("gte-small");

const supabaseUrl = Deno.env.get("SUPABASE_URL");
const supabaseRoleKey = Deno.env.get("SUPABASE_SERVICE_ROLE_KEY");

Deno.serve(async (req) => {
  if (!supabaseUrl || !supabaseRoleKey) {
    return new Response(
      JSON.stringify({ error: "Missing environment variables." }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }

  const { ids, table, contentColumn, embeddingColumn } = await req.json();

  if (!ids || !table || !contentColumn || !embeddingColumn) {
    return new Response(
      JSON.stringify({ error: "Missing required parameters." }),
      { status: 400, headers: { "Content-Type": "application/json" } }
    );
  }

  // The service role key bypasses RLS so we can update any row.
  const supabase = createClient(supabaseUrl, supabaseRoleKey);

  const { data: rows, error: selectError } = await supabase
    .from(table)
    .select(`id, ${contentColumn}`)
    .in("id", ids);

  if (selectError) {
    return new Response(JSON.stringify({ error: selectError }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }

  for (const row of rows) {
    const { id, [contentColumn]: content } = row;

    if (!content) {
      console.error(`No content available in column '${contentColumn}'`);
      continue;
    }

    if (!id) {
      console.error(`No id available in column 'id'`);
      continue;
    }

    // Same pooling/normalization as the client-side pipeline.
    const output = (await model.run(content, {
      mean_pool: true,
      normalize: true,
    })) as number[];

    const embedding = JSON.stringify(output);

    if (!embedding) {
      console.error(`No embedding generated for id ${id}`);
      continue;
    }

    const { error } = await supabase
      .from(table)
      .update({ [embeddingColumn]: embedding })
      .eq("id", id);

    if (error) {
      console.error(
        `Failed to save embedding on '${table}' table with id ${id}`
      );
      console.error(error);
    }

    console.log(
      `Generated embedding ${JSON.stringify({
        table,
        id,
        contentColumn,
        embeddingColumn,
      })}`
    );
  }

  // 204 No Content: nothing to return on success.
  return new Response(null, { status: 204 });
});
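To try it out locally, a sketch like this works, assuming supabase functions serve is running on the default port and a row with the given id exists (the placeholders are mine):
const anonKey = '<your-anon-key>'; // local anon key, shown by `supabase status`

const res = await fetch('http://localhost:54321/functions/v1/embed_server', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // the local gateway verifies JWTs by default;
    // pass the anon key or serve with --no-verify-jwt
    Authorization: `Bearer ${anonKey}`,
  },
  body: JSON.stringify({
    ids: ['<some-template-uuid>'],
    table: 'templates',
    contentColumn: 'content',
    embeddingColumn: 'embedding',
  }),
});

console.log(res.status); // expect 204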
I got the base idea for this function and its helpers from this repo.
Almost done!
Now, when the user searches for something, we need one last Postgres function that performs the similarity check.
It looks like this:
create or replace function match_table_embeddings(
  table_name text,
  embedding vector(384),
  match_threshold float
)
returns table (id uuid, similarity float)
language plpgsql
as $$
#variable_conflict use_variable
declare
  query text;
begin
  query := format(
    'select table_name.id, embedding <#> $1 as similarity
     from %I as table_name
     where embedding <#> $1 < -$2
     order by similarity;',
    table_name
  );

  -- Execute the query dynamically
  return query execute query using embedding, match_threshold;
end;
$$;
In your search function, once you have the user's embedding, you can do something like this:
const { data, error } = await supabase
  .rpc('match_table_embeddings', {
    table_name: 'templates',
    embedding,
    match_threshold: 0.8
  })
  .select('id')
  .limit(LIMIT);
When it comes to the match_threshold: the closer it is to 1, the more it will only return near-exact matches; the closer to 0, well, the more loosely related (or completely unrelated) results start slipping in.
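To wrap up, the whole search flow fits in one small helper. A sketch under my own naming: searchTemplates, the placeholders, and the limit of 10 are mine, not part of the setup above.
import { pipeline } from '@huggingface/transformers';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient('<supabase-url>', '<anon-key>');

async function searchTemplates(searchText: string) {
  // Must be the same model the edge function uses.
  const pipe = await pipeline('feature-extraction', 'Supabase/gte-small');
  const output = await pipe(searchText, { pooling: 'mean', normalize: true });
  const embedding = Array.from(output.data);

  const { data, error } = await supabase
    .rpc('match_table_embeddings', {
      table_name: 'templates',
      embedding,
      match_threshold: 0.8,
    })
    .select('id')
    .limit(10);

  if (error) throw error;
  return data; // [{ id }, ...] ordered from most to least similar
}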
But there is more to it.
In the next post, I'll show how to tailor the user experience.
See you soon, folks!