Nadine

Lexis Link

Redis AI Challenge: Real-Time AI Innovators

This is a submission for the Redis AI Challenge: Real-Time AI Innovators.

What I Built

Ever wondered which sources your AI agent is actually using to answer questions?

Lexis Link: Build your knowledge base, then search using natural language. Newly uploaded content is immediately available for querying.

It creates a semantically searchable knowledge base that enables AI systems to give accurate, traceable, and citable responses, with real-time optimisation as content is added.

Demo

Architecture Flow:

Content Upload β†’ Embedding β†’ Redis Index β†’ Search β†’ Gap Detection

πŸ”— https://lexis-link.vercel.app/

Note: the frontend is deployed on Vercel; the backend runs locally with Redis Stack.


How I Used Redis Stack

I migrated from FAISS to Redis Stack, turning a batch-processing pipeline into a real-time, dynamic application; a sketch of the instant-update path follows the comparison table below.

Feature               FAISS                       Redis Stack
Real-time updates     Requires index rebuild      ✅ Instant updates
Persistence           File-based, manual saves    ✅ Automatic persistence
Production ready      Research/development        ✅ Excellent for production
Confidence tracking   Manual implementation       ✅ Built-in with Sets
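
To make the "Instant updates" row concrete, here is a minimal sketch of how a newly embedded chunk can be written as a Redis hash and become searchable immediately, with no rebuild step. The key doc:chunk:0042, the doc: prefix, the field values, and the embed() helper are illustrative assumptions; the field names mirror the index schema shown under Technical Implementation below.

import numpy as np
from redis import Redis

redis_client = Redis(host="localhost", port=6379)  # local Redis Stack, as in the demo note above

chunk_text = "Freedom of speech is the right to express opinions without restraint..."

# Writing the hash under the indexed key prefix makes it searchable at once - no rebuild step
redis_client.hset("doc:chunk:0042", mapping={
    "content": chunk_text,
    "author": "Example Author",
    "title": "Example Title",
    "publication_year": "1999",
    "page": "12",
    "chunk_index": 3,
    "total_chunks": 48,
    "vector": np.asarray(embed(chunk_text), dtype=np.float32).tobytes(),  # hypothetical embed()
})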

Technical Implementation

Redis Search Index with Rich Metadata:


from redis.commands.search.field import TextField, NumericField, VectorField

# Full-text metadata fields for citations, plus a FLAT vector field for cosine similarity
schema = [
    TextField("content"), TextField("author"), TextField("title"),
    TextField("publication_year"), TextField("page"),
    NumericField("chunk_index"), NumericField("total_chunks"),
    VectorField("vector", "FLAT", {
        "TYPE": "FLOAT32", "DIM": VECTOR_DIMENSION,
        "DISTANCE_METRIC": "COSINE"
    })
]

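For completeness, a minimal sketch of how this schema can be registered as an index and queried with a KNN vector query in redis-py. The index name lexis_idx, the doc: key prefix, and the embed() helper are assumptions for illustration, not the exact names used in Lexis Link.

import numpy as np
from redis import Redis
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

redis_client = Redis(host="localhost", port=6379)

# Register the schema over hashes stored under the "doc:" prefix (names are illustrative)
redis_client.ft("lexis_idx").create_index(
    schema,
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# Top-5 nearest chunks by cosine distance, returning the citation metadata alongside the text
query_vector = np.asarray(embed("freedom of speech"), dtype=np.float32)  # hypothetical embed()
q = (
    Query("*=>[KNN 5 @vector $vec AS score]")
    .sort_by("score")
    .return_fields("title", "author", "publication_year", "page", "content", "score")
    .dialect(2)
)
results = redis_client.ft("lexis_idx").search(q, query_params={"vec": query_vector.tobytes()})
for doc in results.docs:
    print(doc.title, doc.author, f"p.{doc.page}", doc.score)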

πŸš€ RAG Optimization with Knowledge Gap Detection

Lexis Link automatically identifies content gaps using a confidence threshold: low-confidence queries are stored in a Redis Set for later review.

# Record knowledge gap if confidence is low
if top_confidence < CONFIDENCE_THRESHOLD:
    redis_client.sadd(REDIS_KNOWLEDGE_GAPS_SET, query)
    logging.info(f"πŸ“ Recorded knowledge gap: '{query}' (confidence: {top_confidence:.3f})") 

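The Set then doubles as a simple review queue. Here is a minimal sketch of pulling and clearing gaps once new content has been uploaded; the set name matches the snippet above, everything else is illustrative.

# All low-confidence queries recorded so far; SADD already de-duplicates repeats
gaps = sorted(g.decode() for g in redis_client.smembers(REDIS_KNOWLEDGE_GAPS_SET))
for query in gaps:
    print("needs coverage:", query)

# Once content covering a gap has been uploaded, drop it from the review queue
redis_client.srem(REDIS_KNOWLEDGE_GAPS_SET, "freedom of assembly")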

πŸš€ Search Performance

Caching Strategy for Optimising Performance

⏳ Embedding generation is the main bottleneck, especially for complex concepts. Caching solves this: the speed gained from Redis caching on repeat queries is what makes the system feel responsive, as the logs below show.

πŸš€ SEARCH PERFORMANCE: 165.6ms for query: 'freedom of speech'
INFO:werkzeug:127.0.0.1 - - [10/Aug/2025 20:35:22] "POST /semantic-search
πŸš€ SEARCH PERFORMANCE: 47.1ms for query: 'freedom of speech'
INFO:werkzeug:127.0.0.1 - - [10/Aug/2025 20:44:05] "POST /semantic-search HTTP/1.1" 200 - 
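The second request above hits the cache. A minimal sketch of the cache itself, keyed by a hash of the query text so identical queries skip the embedding model entirely; the emb: prefix, the 24-hour TTL, and the embed() helper are assumptions for illustration.

import hashlib
import numpy as np

def get_embedding_cached(query: str) -> np.ndarray:
    # Deterministic key derived from the query text (prefix and TTL are illustrative)
    key = "emb:" + hashlib.sha256(query.encode("utf-8")).hexdigest()
    cached = redis_client.get(key)
    if cached is not None:
        # Cache hit: skip the embedding model and rebuild the vector from raw bytes
        return np.frombuffer(cached, dtype=np.float32)
    # Cache miss: embed once, then store the raw bytes with a TTL
    vector = np.asarray(embed(query), dtype=np.float32)  # hypothetical embed()
    redis_client.set(key, vector.tobytes(), ex=60 * 60 * 24)
    return vector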

Search total average: 60.78ms across 20 queries

  • βœ”Real-time Search Avg: sub 100ms semantic search on newly uploaded content
  • βœ”Source Attribution: Complete citation tracking with page-level accuracy
  • βœ”Self-Optimization: Automatic knowledge gap recommendations for content improvement
  • βœ”Production Scale: Distributed, clusterable Redis architecture

Result: Redis transforms static knowledge bases into dynamic, self-improving AI systems.

📚 Inspired by the need to query and optimise structured, citable knowledge bases for AI agents.
