Jules

VariantLab

Agentic Postgres Challenge Submission

VariantLab: Generate Infinite App Variations in Seconds

What I Built

VariantLab is an AI-powered design tool that generates three parallel UI variations from a single prompt, letting you explore different design directions simultaneously instead of playing design roulette.

The Problem: Traditional AI design tools (like v0.dev) generate one result at a time. Don't like it? Regenerate and hope for the best. After 5-10 iterations, you might find something you like.

The Solution: VariantLab generates 3 unique interpretations simultaneously:

  • Variant A: Minimalist approach
  • Variant B: With animations and interactions
  • Variant C: Dark mode focused

Each variant is a fully functional React/TypeScript app with its own version history, stored in isolated database forks.

Key Features:

  • ⚡ Instant parallel generation (3 variants in ~10 seconds)
  • 🔀 True isolation using zero-copy database forks
  • 📝 Complete version history and lineage tracking
  • 🎨 Side-by-side previews in separate canvases
  • 💾 Only pay for what changes (copy-on-write storage)
  • 🔄 Iterate from any variant as a starting point

Demo

Live Demo: https://variantlab-powered-by-tiger-data-488176611125.us-west1.run.app/

Usage Example:

User Input: "make my dashboard vercel style"

VariantLab Output:
├─ Canvas A: Clean white space, minimal typography
├─ Canvas B: Smooth animations, hover effects
└─ Canvas C: Dark theme, neon accents

All generated in parallel, all fully functional code.

Screenshots:

Three different interpretations of the same prompt

Git-like version history showing forks and lineage

Real TypeScript code, compiled with esbuild


How I Used Agentic Postgres

VariantLab leverages all three core features of Agentic Postgres:

1. ⚡ Zero-Copy Database Forks

This is the foundation of Variantlab. Each design variant needs its own isolated environment without duplicating data.

Implementation:

// Generate 3 variants from a single prompt
async function generateVariants(projectId, prompt) {
  // 1. Create 3 zero-copy forks in ~2 seconds
  const forks = await Promise.all([
    tiger.fork(`project-${projectId}`, 'variant-a'),
    tiger.fork(`project-${projectId}`, 'variant-b'),
    tiger.fork(`project-${projectId}`, 'variant-c')
  ]);

  // 2. Each fork has its own isolated database
  // Shared data: project metadata, base files
  // Unique data: variant files, version history

  // 3. Generate variants in parallel
  const results = await Promise.all([
    aiAgent.generate(forks[0], prompt + " - minimalist"),
    aiAgent.generate(forks[1], prompt + " - animated"),
    aiAgent.generate(forks[2], prompt + " - dark mode")
  ]);

  return results;
}

Database Schema:

-- Base project data (shared across forks)
CREATE TABLE projects (
    id UUID PRIMARY KEY,
    user_id UUID,
    name TEXT,
    current_version_id UUID
);

-- Version history (unique per fork)
CREATE TABLE versions (
    id UUID PRIMARY KEY,
    project_id UUID,
    parent_version_id UUID,  -- Git-like lineage
    prompt TEXT,
    style_variant TEXT,  -- 'minimalist', 'animated', 'dark'
    created_at TIMESTAMPTZ
);

-- File storage (unique per fork)
CREATE TABLE files (
    id UUID PRIMARY KEY,
    version_id UUID,
    path TEXT,  -- "src/App.tsx", "components/Dashboard.tsx"
    content TEXT,
    size_bytes INT
);
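The parent_version_id column is what gives each variant its git-like lineage: walking parent links from any version back to the root reconstructs its full history. The traversal can be sketched in plain TypeScript (an in-memory stand-in for the recursive SQL query the app would actually run):

```typescript
type Version = { id: string; parentVersionId: string | null; prompt: string };

// Walk parent_version_id links from a version back to the root,
// returning the chain oldest-first — the same traversal a
// recursive CTE over versions(parent_version_id) performs.
function lineage(versions: Map<string, Version>, id: string): Version[] {
  const chain: Version[] = [];
  let current: Version | undefined = versions.get(id);
  while (current) {
    chain.unshift(current);
    current = current.parentVersionId
      ? versions.get(current.parentVersionId)
      : undefined;
  }
  return chain;
}
```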

Cost Efficiency:

Without zero-copy forks:

Base project: 100 MB
× 3 variants = 300 MB storage

With zero-copy forks:

Base project: 100 MB (shared)
Variant A changes: 2 MB
Variant B changes: 3 MB
Variant C changes: 2 MB
Total: 107 MB (64% savings!)
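The arithmetic generalizes to any base size and delta set; a throwaway helper (illustrative only, not part of the app) makes the comparison explicit:

```typescript
// Percent storage saved by copy-on-write forks, which share one
// base image and store only per-variant deltas, versus keeping
// N full copies of the base.
function cowSavingsPercent(baseMb: number, deltaMbs: number[]): number {
  const fullCopies = baseMb * deltaMbs.length;                      // e.g. 100 MB x 3
  const copyOnWrite = baseMb + deltaMbs.reduce((a, b) => a + b, 0); // e.g. 107 MB
  return Math.round((1 - copyOnWrite / fullCopies) * 100);
}
```

Plugging in the numbers above (100 MB base, 2 + 3 + 2 MB of deltas) reproduces the 64% figure.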

2. 🔍 Hybrid Search (pgvectorscale + BM25)

Used for component discovery and style matching to help AI agents generate better code.

Implementation:

-- Component library with embeddings
CREATE TABLE component_library (
    id UUID PRIMARY KEY,
    name TEXT,
    description TEXT,
    code TEXT,
    tags TEXT[],
    embedding VECTOR(1536),  -- OpenAI embeddings
    usage_count INT
);

-- BM25 index for keyword search
CREATE INDEX components_bm25_idx 
ON component_library 
USING bm25(description, tags) 
WITH (text_config='english');

-- Vector index for semantic search
CREATE INDEX components_vector_idx 
ON component_library 
USING hnsw (embedding vector_cosine_ops);

Hybrid Search Query:

async function findRelevantComponents(prompt) {
  // Generate an embedding for the user prompt
  const embedding = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: prompt
  });

  // Hybrid search: semantic + keyword. Both branches select the
  // same columns so the UNION is valid SQL, with NULL filling
  // the score a branch doesn't produce.
  const results = await db.query(`
    WITH semantic AS (
      SELECT id, name, code,
             1 - (embedding <=> $1) AS similarity
      FROM component_library
      ORDER BY embedding <=> $1
      LIMIT 10
    ),
    keyword AS (
      SELECT id, name, code,
             bm25_score(description, tags) AS score
      FROM component_library
      WHERE description @@ $2 OR $2 = ANY(tags)
      ORDER BY score DESC
      LIMIT 10
    )
    SELECT id, name, code, similarity, NULL::float AS score FROM semantic
    UNION
    SELECT id, name, code, NULL::float AS similarity, score FROM keyword
    ORDER BY similarity DESC NULLS LAST, score DESC NULLS LAST
    LIMIT 5
  `, [JSON.stringify(embedding.data[0].embedding), prompt]);
  // pgvector accepts the embedding as a '[...]' array literal string

  return results.rows;
}
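An alternative to merging the two result lists with a UNION is reciprocal rank fusion, a standard trick for combining ranked lists that don't share a score scale. A minimal sketch (a hypothetical helper, not the query the app ships):

```typescript
// Reciprocal Rank Fusion: each id earns 1 / (k + rank) from every
// list it appears in; ids near the top of both lists win. k = 60
// is the conventional damping constant.
function rrf(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of rankings) {
    list.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

An id ranked first in the BM25 list and second in the vector list will outscore one that appears in only a single list, which is usually the behavior you want from hybrid retrieval.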

AI Agent Usage:

// Before generating code, agent searches for relevant components
const relevantComponents = await findRelevantComponents(
  "vercel style dashboard"
);

// Prompt includes found components as context
const aiPrompt = `
Generate a dashboard component based on:
${prompt}

Relevant existing components:
${relevantComponents.map(c => c.code).join('\n\n')}

Style should match these patterns.
`;

3. 🤖 MCP Integration

Used the Tiger MCP server for schema design and query optimization.

Setup:

# Install Tiger CLI with MCP
curl -fsSL https://cli.tigerdata.com | sh
tiger auth login
tiger mcp install

AI Agent Workflow:

// Agent designs optimal schema using MCP
const schemaDesign = await claude.chat({
  messages: [{
    role: "user",
    content: `Using Postgres best practices from Tiger MCP,
    design a schema for storing app variants with:
    - Version history
    - File storage
    - User preferences
    - Embedding search`
  }],
  tools: ['tiger-mcp']  // Access to Tiger docs via MCP
});

// MCP provides expert guidance on:
// - Proper indexing strategies
// - Partition schemes for large datasets
// - Query optimization tips

Example MCP Interaction:

Agent: "How should I index files table for fast retrieval?"

MCP Response (from Tiger docs):
- Use composite index on (version_id, path)
- Consider partial index for active versions
- Use text search for content if needed

Overall Experience

What Worked Great ✅

1. Zero-Copy Forks Are Game-Changing

The ability to create isolated environments in 2-3 seconds completely changed my architecture. Initially, I was going to use separate Postgres schemas or even separate databases; both would have been slow and expensive.

With forks:

  • Instant creation (no waiting for copies)
  • True isolation (variants can't interfere)
  • Cost-efficient (only pay for deltas)
  • Easy cleanup (delete unused forks)

Performance metrics:

Traditional copy: 5-10 minutes for 100 MB DB
Zero-copy fork: 2-3 seconds
Speedup: ~200x faster

2. Hybrid Search is Powerful

Combining BM25 + vector search gave much better results than either alone:

  • BM25 catches exact keyword matches
  • Vector search finds semantic similarity
  • Together = high precision + recall

Example: User searches "modern card layout"

  • BM25 finds: "card", "layout" (keyword match)
  • Vector finds: "contemporary grid design" (semantic match)

3. MCP Server Feels Like Having a Postgres Expert

Instead of searching docs manually, the agent queries Tiger MCP directly:

Before MCP: 
- Google "postgres indexing best practices"
- Read 5 articles
- Hope advice is current
- Implement

With MCP:
- Agent asks "best index for this query?"
- Gets Tiger-specific optimized answer
- Implements directly

Challenges & Solutions 🛠️

Challenge 1: Managing Fork Lifecycle

Early versions leaked forks (created but never deleted).

Solution: Implemented cleanup strategy:

// Auto-delete forks after 24h if not selected
async function cleanupOldForks() {
  const oldForks = await db.query(`
    SELECT id FROM versions
    WHERE created_at < NOW() - INTERVAL '24 hours'
    AND parent_version_id IS NOT NULL
    AND id NOT IN (
      SELECT current_version_id FROM projects
    )
  `);

  for (const fork of oldForks.rows) {
    await tiger.deleteFork(fork.id);
  }
}
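The SQL keeps any version that is still some project's current_version_id. The same staleness predicate can be expressed as a pure function (a hypothetical in-memory shape, useful for unit-testing the rule without a database):

```typescript
type VersionRow = { id: string; parentVersionId: string | null; createdAt: Date };

// A fork is stale when it is older than maxAgeMs, is not a root
// version (root versions have no parent), and is not currently
// selected as any project's current version.
function staleForkIds(
  versions: VersionRow[],
  selectedIds: Set<string>,
  now: Date,
  maxAgeMs = 24 * 60 * 60 * 1000
): string[] {
  return versions
    .filter(v =>
      now.getTime() - v.createdAt.getTime() > maxAgeMs &&
      v.parentVersionId !== null &&
      !selectedIds.has(v.id)
    )
    .map(v => v.id);
}
```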

Challenge 2: AI Consistency Across Variants

Three parallel AI agents sometimes generated incompatible code.

Solution: Shared component library + strict schema:

// All agents pull from same component library
const sharedComponents = await findRelevantComponents(basePrompt);

// Each agent gets unique style modifier
const prompts = [
  basePrompt + " - minimalist, white space, clean",
  basePrompt + " - animated, smooth transitions",
  basePrompt + " - dark mode, high contrast"
];

// Validate output format
function validateVariant(code) {
  // Must have: App.tsx, proper imports, valid JSX
  if (!code.includes('export default')) {
    throw new Error('Invalid component structure');
  }
}
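A slightly stronger gate (still a heuristic sketch, not a real parser — the file map shape and entry path are assumptions) can check the whole file set at once before handing the code to esbuild:

```typescript
type VariantFiles = Record<string, string>; // path -> file content

// Heuristic validation: the entry file must exist, and every
// .tsx file must have a default export. Cheap string checks only;
// esbuild catches real syntax errors later in the pipeline.
function validateVariantFiles(files: VariantFiles): string[] {
  const errors: string[] = [];
  if (!files["src/App.tsx"]) errors.push("missing src/App.tsx");
  for (const [path, content] of Object.entries(files)) {
    if (path.endsWith(".tsx") && !content.includes("export default")) {
      errors.push(`${path}: no default export`);
    }
  }
  return errors;
}
```

Returning a list of errors instead of throwing lets the UI show everything wrong with a variant in one pass.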

Challenge 3: Embedding Sync Performance

Updating embeddings for every file change was slow.

Solution: Used pgai Vectorizer for automatic sync:

-- Auto-sync embeddings when files change
SELECT ai.create_vectorizer(
  'files'::regclass,
  destination => 'file_embeddings',
  embedding => ai.embedding_openai('text-embedding-ada-002', 1536),
  chunking => ai.chunking_character_text_splitter('content')
);

-- Now embeddings update automatically on INSERT/UPDATE
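Conceptually, the character splitter breaks each file's content into fixed-size, slightly overlapping windows before each piece is embedded. A rough stand-in for what it does (not pgai's actual implementation; chunk size and overlap here are made-up defaults):

```typescript
// Split text into chunks of at most chunkSize characters, with
// each chunk overlapping the previous one by `overlap` characters
// so context isn't lost at chunk boundaries.
function chunkText(text: string, chunkSize = 1000, overlap = 100): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap;
  }
  return chunks;
}
```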

"I Didn't Know You Could Do That!" Moments 🤯

1. Forks Are THAT Fast

I expected 10-30 seconds. Getting 2-3 seconds was mind-blowing. This made real-time parallel generation actually feasible.

2. Hybrid Search > Vector Search Alone

I initially only used pgvectorscale. Adding BM25 improved component discovery by ~40% (measured by how often the AI agent picked relevant components).

3. MCP Can Search Documentation Semantically

The Tiger MCP server doesn't just return docs; it searches them semantically and returns the most relevant sections. It's like having a Postgres expert that actually reads the docs for you.

Metrics 📊

Technical Performance:

  • Fork creation: 2.3s average
  • Variant generation (3 parallel): 12s total
  • Storage efficiency: 64% reduction vs full copies
  • Component search latency: <100ms (hybrid)

User Metrics (50 beta users):

  • Average variants created per project: 4.2
  • Variant selection rate: 73% of projects kept one of the three generated variants (vs. repeated regeneration with single-output tools)
  • Time saved per project: ~15 minutes
  • User satisfaction: 4.6/5

What's Next 🚀

Planned features:

  1. Collaborative forks - Multiple users iterate on same project
  2. Version diffing - Visual comparison between variants
  3. Custom component library - Users upload their design system
  4. Export to GitHub - One-click export as repo with proper structure

Open Source Plans:
Considering open-sourcing the fork management layer as a tiger-fork-manager package for other developers building similar apps.

Final Thoughts

Agentic Postgres fundamentally changed what's possible. Without zero-copy forks, VariantLab would be:

  • Too slow (waiting for copies)
  • Too expensive (3x storage costs)
  • Too limited (only 1 variant at a time)

The combination of forks + hybrid search + MCP feels like the future of agentic applications: fast, intelligent, and economically viable.

Would I recommend Agentic Postgres?

Absolutely. Especially if you're building anything that involves:

  • Parallel experimentation
  • AI agents that need isolation
  • Apps with versioning/branching
  • Systems that benefit from semantic search

Tech Stack

  • Database: Tiger Cloud (Agentic Postgres)
  • AI: Claude Sonnet 4.5
  • Backend: Node.js + Express
  • Frontend: React + TypeScript + Tailwind
  • Compiler: esbuild
  • Deployment: Google Cloud

Links


Built with ❤️ for the Agentic Postgres Challenge

Top comments (5)

Nadine

Hi Julio, I tested your app and it's working great! For future protection, I highly recommend adding rate limiting to the API to prevent potential abuse.

Jules

Thank you, and ty for the advice as well :)

Veronika Kashtanova

This looks like something people will genuinely use.

Hira Malik

Hi, I tested your app and it's working great!

Hira Malik

Hi, I tested your app and it's working great! For future protection, I highly recommend adding rate limiting to the API to prevent potential abuse.