Daniel S

pg_auto_embeddings — text embeddings directly in Postgres, without extensions


Overview

You have a PostgreSQL database that stores a lot of text data. You want to use vector representations (embeddings), for example from OpenAI, to build a recommendation system, improve search, or implement RAG for working with LLMs. But you don't want to install extensions, or maybe you can't: on managed cloud PostgreSQL services, for instance, you often don't have the required permissions.

pg_auto_embeddings is a lightweight open-source solution that lets you compute embeddings through OpenAI models directly in PostgreSQL without installing external extensions. It uses the Foreign Data Wrappers (FDW) mechanism “under the hood” to send requests to the OpenAI API, and it works synchronously and atomically. In this article, we'll see how pg_auto_embeddings can help, how to install it (spoiler: very easily), and what the key features of the project are.


What is pg_auto_embeddings and what problem does it solve?

pg_auto_embeddings is an MIT-licensed open-source project that solves one key problem:

How do we compute text vector representations (embeddings) directly from PostgreSQL without extra fuss and without special extensions?

Main ideas:

  1. Call via SQL: You simply call pgae_embedding('some text') and that's enough. You can also use the function in triggers so that embeddings for text columns are saved automatically (see the sketch below).
  2. Flexible settings: You can use public models (for example, OpenAI) or run an on-premise proxy if you need more control, such as your own rate limits or private network access.

Because of this, pg_auto_embeddings is great when you need to quickly “plug in” the calculation of embeddings into an existing DB without dealing with external binary extensions. It's convenient for RAG systems and other tasks where embeddings are a core functionality.
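
For illustration, here's a minimal sketch of what the trigger approach from point 1 could look like, assuming a hypothetical documents table with body and body_embedding columns. Only pgae_embedding comes from the project; the table, function, and trigger names are made up:

CREATE OR REPLACE FUNCTION documents_embed() RETURNS trigger AS $$
BEGIN
  -- Compute and store the embedding for the new or changed text
  NEW.body_embedding := pgae_embedding(NEW.body);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_embed_trg
  BEFORE INSERT OR UPDATE OF body ON documents
  FOR EACH ROW EXECUTE FUNCTION documents_embed();

In practice you usually won't write this by hand: pgae_create_auto_embedding (shown below) sets it up for you.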


Key features

  • No extensions: You don't need to install any extra software in PostgreSQL—just run one SQL file (yes, we're repeating it, but it’s important!).

  • Two deployment options:

    1. Simplified installation: Run one SQL script and you’re done.
    2. On-Premise (via Docker): Spin up your own proxy server that handles embedding API requests. Also launched with a single docker run command.
  • OpenAI Embeddings support: At the moment, OpenAI models (text-embedding-3-small/large and some others) work out of the box.

  • Automatic vector update on insert or update of text data: You can “attach” auto-embedding to a table column so you don’t have to spend time writing a trigger.

  • Cleanup: If needed, you can completely remove pg_auto_embeddings and all of its objects with a single function.


Step 1. Installation

Take the file simple/pgae_simple_install.sql and run it in your database.
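
For example, with psql (substitute your own connection details):

psql -h your-db-host -U your_user -d your_db -f simple/pgae_simple_install.sql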

Initialize pg_auto_embeddings:

CALL pgae_init('openai-text-embedding-3-small', 'YOUR_OPENAI_API_KEY');

That’s it!


Step 2. Usage

To get a vector (a double precision[] value):

SELECT pgae_embedding('your text');

If you have pgvector installed and want the result in its vector format:

SELECT pgae_embedding_vec('some text');
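
Both are ordinary SQL functions, so you can use them anywhere SQL allows. For example, a hypothetical one-off backfill of existing rows, assuming a posts table with a text title column and a pgvector title_embedding column (the same table the next section sets up):

UPDATE posts
SET title_embedding = pgae_embedding_vec(title)
WHERE title_embedding IS NULL;

Each call is a synchronous round trip to the embedding API, so expect large backfills to take a while.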

Automatic calculation and saving of the embedding

Suppose you have a posts table with a title column, and you want to automatically store embeddings for titles in the title_embedding column:

SELECT pgae_create_auto_embedding(
  'public', 'posts', 'title', 'title_embedding'
);

Voila.
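
From then on, inserts and updates keep title_embedding current, and you can query it with pgvector's operators. A hypothetical similarity search, assuming title_embedding is stored as a pgvector column (<-> is pgvector's L2 distance operator):

SELECT id, title
FROM posts
ORDER BY title_embedding <-> pgae_embedding_vec('search query')
LIMIT 5;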

If you decide to remove pg_auto_embeddings completely along with all its functions and objects, just run:

SELECT pgae_self_destroy();

[Optional] On-premise option (via Docker)

If you have restrictions on external requests or you want to set up your own proxy server for extra security, it’s easy to do.

You need to run two services:

  1. A Postgres database that acts as a proxy between FDW and Node.js.
  2. A Node.js service that sends requests to the model’s API.

pg_auto_embeddings provides a Docker image in which both components are already deployed and configured for use.

Below is an example docker-compose.yml:

services:
    server:
        image: elkornacio/pg_auto_embeddings:latest
        environment:
            - PG_HOST=localhost  # Host of the proxying Postgres
            - PG_PORT=5432       # Its port
            - PG_USERNAME=root_user
            - PG_PASSWORD=root_pass
            - DATABASE_SYNC=true
            - SERVER_HOST=localhost
            - SERVER_PORT=3000
            - SELF_URL=http://localhost:3000
        ports:
            # The port of the proxying Postgres must be open — your managed DB will connect to it
            - 5432:5432
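
Save the file and bring the service up:

docker compose up -d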

Then, instead of pgae_init, you use pgae_init_onprem:

CALL pgae_init_onprem(
  'your.host.com', '5432',  -- host and port of your proxying Postgres
  'openai-text-embedding-3-small', 'sk-...'  -- model type and API key
);

Usage is the same as with the "simple" option:

SELECT pgae_embedding('your text');

How does it work inside?

Under the hood, pg_auto_embeddings uses a trick based on Foreign Data Wrappers (FDW):

  1. When you install it, a “proxy table” named embeddings_* is created in your local database.
  2. When you call SELECT pgae_embedding('some text'), internally there’s an UPDATE to this “proxy table,” passing the text.
  3. The FDW redirects the request to a remote table (in the Docker proxy or a public server).
  4. On the remote server, a trigger fires that calls the internal function pgae_embedding_internal().
  5. This function sends an HTTP request to the Node.js proxy.
  6. The Node.js server calls the OpenAI API (or another provider if you’re using something else) and gets a vector.
  7. The vector returns to the remote DB, then back to the local DB, and finally shows up in your SELECT query.
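
To make this concrete, here is a simplified, hypothetical sketch of what the installed objects boil down to. The real install script generates the server, table, and credential definitions for you, and the FDW machinery it relies on is part of PostgreSQL's standard distribution (which is why no external extension has to be installed). All names below are illustrative:

-- FDW plumbing roughly like what the install script creates
-- (user mapping and credentials omitted for brevity)
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER pgae_remote FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'proxy.host.com', port '5432', dbname 'pgae');

CREATE FOREIGN TABLE embeddings_proxy (
  input_text text,
  embedding  double precision[]
) SERVER pgae_remote;

-- pgae_embedding('some text') then effectively performs:
UPDATE embeddings_proxy
SET input_text = 'some text'
RETURNING embedding;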

Conclusion

pg_auto_embeddings is perfect when you need a quick and simple way to compute vector representations. It’s ideal for those who want to connect their DB with an LLM or build advanced full-text search directly in SQL.

The project is active and open to suggestions and stars :) PRs are also welcome. If you have any questions, feel free to post them in GitHub Issues.

Thanks for reading and happy experimenting :) If you find any mistakes in the text, please message me, and I’ll fix them quickly.


Tags: embeddings, rag, postgresql, postgres, vector representations, ai
