Benedetto Proietti for Janea Systems

Posted on • Edited on • Originally published at linkedin.com

S3 Vectors: Changing How We Think About Vector Embeddings

Inserting and maintaining data in a relational database is expensive. Every write must update one or more indexes (data structures such as B-trees) that accelerate reads at the cost of extra CPU, memory, and I/O. On a single node, tables start to struggle once they pass a few terabytes. Distributed SQL and NoSQL systems push that limit, but the fundamental write amplification costs remain.

Object Storage

To escape those costs, teams began landing raw data in cloud object stores like Amazon S3. Instead of hot indexes, query engines (Spark, Athena, Trino) rely on partition pruning and lightweight statistics. This led to dramatically lower storage bills and petabyte-scale datasets on commodity hardware.
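Partition pruning is worth making concrete. A minimal sketch (the bucket paths, column names, and `prune` helper below are hypothetical, not a real engine API): the engine inspects only the partition paths and skips whole prefixes that cannot match the predicate, so no per-row index is ever maintained.

```python
# Hypothetical partition layout: data partitioned by year/month in the key path.
partitions = [
    "s3://logs/year=2023/month=01/",
    "s3://logs/year=2023/month=02/",
    "s3://logs/year=2024/month=01/",
]

def prune(partitions, year):
    """Partition pruning: keep only prefixes whose path can satisfy the
    filter, using path metadata alone -- no data is read, no index updated."""
    key = f"year={year}/"
    return [p for p in partitions if key in p]

# A query filtered on year=2024 scans one partition instead of three.
survivors = prune(partitions, 2024)
```

The write path stays cheap (append an object under the right prefix); the read path pays only for the partitions that survive pruning.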

Vector Embeddings

AI and LLM workloads now emit vector embeddings – hundreds or thousands of dimensions per record. Answering “Which vectors are nearest to this one?” in real time is tricky:

High-dimensional data breaks classic data structures.
We lean on approximate nearest neighbor (ANN) algorithms such as HNSW or IVFPQ.
Queries often combine a distance threshold with metadata filters.
Recall, precision, and latency form a three-way tradeoff.
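To see why ANN indexes exist, here is the naive exact baseline they replace: a brute-force scan combining a cosine-distance threshold with a metadata filter. This is a self-contained sketch (toy data, hypothetical helper names), and it is O(n) per query; HNSW and IVF-PQ approximate this result in sub-linear time by sacrificing some recall.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; 0.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def search(records, query, max_distance, metadata_filter):
    """Exact search: apply the metadata filter, then the distance threshold.
    An ANN index avoids visiting every record, trading recall for latency."""
    hits = []
    for vec, meta in records:
        if all(meta.get(k) == v for k, v in metadata_filter.items()):
            d = cosine_distance(vec, query)
            if d <= max_distance:
                hits.append((d, meta))
    return sorted(hits, key=lambda t: t[0])

records = [
    ([1.0, 0.0], {"lang": "en"}),   # exact match for the query below
    ([0.9, 0.1], {"lang": "en"}),   # close neighbor
    ([0.0, 1.0], {"lang": "fr"}),   # excluded by the metadata filter
]
hits = search(records, [1.0, 0.0], 0.5, {"lang": "en"})
```

The three-way tradeoff shows up directly: loosening `max_distance` raises recall, the metadata filter shrinks the candidate set, and the full scan is what costs latency at scale.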

Amazon S3 Vectors: A Game-Changer

Announced yesterday, Amazon S3 Vectors brings vector-aware storage classes to S3. Each vector table:

  • Stores vectors of fixed dimensionality, compressed on write. Plain S3 objects offer nothing comparable.
  • Supports ANN search with simultaneous metadata filters, immensely faster than scanning objects in plain S3.
  • Delivers sub-second latency: fine for batch pipelines, but a bit slow for interactive UX.

Closing the Latency Gap with In-Memory Caching

Janea Systems’ background is deeply rooted in working with in-memory, low-latency caches. Our track record includes:

  • We are the creators of Memurai, a Redis-compatible server for Windows, trusted by developers for its performance and reliability.
  • We are active contributors to Valkey, a rapidly evolving open-source fork of Redis, pushing the boundaries of in-memory data stores.

S3 Vectors excels at durable storage and batch processing but leaves room for improvement in interactive scenarios. The next logical step is therefore to place a high-performance in-memory cache in front of it.
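The natural pattern here is cache-aside: hash the query (vector plus filters) into a key, serve repeats from memory, and fall back to S3 Vectors on a miss. The sketch below is illustrative only: a plain dict stands in for Redis/Valkey/Memurai, and `fake_s3_vectors_query` is a hypothetical stand-in for the real S3 Vectors API, not its actual client signature.

```python
import hashlib
import json

cache = {}  # stand-in for Redis/Valkey/Memurai (in practice: a client like redis-py)

def cache_key(query_vec, filters):
    """Deterministic key: canonical JSON of the query, hashed."""
    payload = json.dumps({"q": query_vec, "f": filters}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_query(query_vec, filters, backend_query):
    """Cache-aside: memory first (microseconds), S3 Vectors on a miss."""
    key = cache_key(query_vec, filters)
    if key in cache:
        return cache[key], True          # hit: backend never touched
    result = backend_query(query_vec, filters)
    cache[key] = result                  # with Redis: store under a TTL
    return result, False

backend_calls = []
def fake_s3_vectors_query(q, f):         # hypothetical backend stub
    backend_calls.append(1)
    return ["doc-1", "doc-7"]

r1, hit1 = cached_query([0.1, 0.2], {"lang": "en"}, fake_s3_vectors_query)
r2, hit2 = cached_query([0.1, 0.2], {"lang": "en"}, fake_s3_vectors_query)
# The second identical query is a cache hit; the backend ran only once.
```

In production the dict becomes a Redis `SET` with an expiry, so hot queries answer at in-memory speed while S3 Vectors remains the durable source of truth.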

The Future

We are excited about the possibilities Amazon S3 Vectors unlocks. Upcoming articles will cover how to integrate Redis, Valkey, or Memurai with S3 Vectors to achieve optimal performance for your AI/LLM workloads, and will explore the new AWS service and its implications for modern data architectures in detail. Stay tuned!
