
diwushennian4955



From Lisp Generators to AI Streaming: Build a Real-Time AI Pipeline with Python Generators & NexaAPI

The functional programmer's guide to streaming AI outputs — images, audio, and text, one result at a time.


Why Generators Are the Perfect AI Pattern

The Lone Lisp generators article trending on Hacker News makes a profound point: generators are semicoroutines that yield control back to their callers. They're lazy: they produce values only on demand.

This is not just an academic curiosity. It's the exact pattern you need for modern AI APIs.

Think about it:

  • Text generation: tokens arrive one at a time → generator yields each token
  • Image generation: batch of prompts → generator yields each image as it completes
  • TTS audio: sentences → generator yields each audio chunk
  • Video frames: generator yields each frame as it renders

Every AI streaming use case maps naturally to the generator pattern. If you've been writing Lisp (or Haskell, or Clojure), you already have the mental model. You just need the right API.
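Each bullet above is the same shape in miniature. Here's a no-API Python sketch of the token case, just to pin down the pattern before any client library enters the picture:

```python
def token_stream(text):
    """Yield one 'token' (here, a word) at a time — produced only on demand."""
    for word in text.split():
        yield word

g = token_stream("streaming feels instant")
print(next(g))  # streaming
print(next(g))  # feels
```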


The Pattern: Lone Lisp → Python → AI

In Lone Lisp:

(set g (generator f))
(print (g))  ; yields first value
(print (g))  ; yields second value

In Python:

def f():
    yield value_1
    yield value_2

g = f()
print(next(g))  # first value
print(next(g))  # second value

In AI with NexaAPI:

def ai_stream(prompts):
    for prompt in prompts:
        yield client.image.generate(model='flux-schnell', prompt=prompt)

for result in ai_stream(prompts):
    display(result)  # process each result as it arrives

Identical mental model. Three different languages. One pattern.
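One detail worth making explicit, because it's the property Lone Lisp's semicoroutines share with Python: calling a generator function runs none of its body. Execution is deferred until the first `next`:

```python
def f():
    print("body running")  # executes only when the first value is requested
    yield 1
    yield 2

g = f()          # no output yet: the body hasn't started
first = next(g)  # prints "body running", then yields 1
print(first)     # 1
```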


Python: Multi-Modal AI Streaming Pipeline

# Install: pip install nexaapi
from nexaapi import NexaAPI

client = NexaAPI(api_key='YOUR_API_KEY')

# ─── IMAGE STREAMING ───────────────────────────────────────────────────────

def image_stream(prompts: list):
    """Stream AI images one at a time — lazy, like Lisp generators"""
    for prompt in prompts:
        result = client.image.generate(
            model='flux-schnell',
            prompt=prompt,
            width=1024,
            height=1024
        )
        yield result

# ─── TTS AUDIO STREAMING ────────────────────────────────────────────────────

def tts_stream(texts: list, voice='alloy'):
    """Stream TTS audio chunks — one sentence at a time"""
    for text in texts:
        result = client.audio.tts(
            model='tts-1',
            input=text,
            voice=voice
        )
        yield result

# ─── PIPELINE COMPOSITION ───────────────────────────────────────────────────

def filter_successful(stream):
    """Lazy filter: only yield successful results"""
    for result in stream:
        # use getattr: image results have no audio_url attribute, and vice versa
        if getattr(result, 'url', None) or getattr(result, 'audio_url', None):
            yield result

def take(stream, n):
    """Lazy take: stop after n results"""
    count = 0
    for result in stream:
        if count >= n:
            break
        yield result
        count += 1

# ─── USAGE ──────────────────────────────────────────────────────────────────

prompts = [
    'a futuristic city at night, neon lights',
    'a serene mountain lake at dawn',
    'an abstract digital painting, vibrant colors',
    'a cozy library with warm lighting',
    'deep ocean bioluminescence'
]

# Stream images — process each as it arrives (no waiting for all!)
print("Streaming images:")
for image in filter_successful(image_stream(prompts)):
    print(f"{image.url}")
    # $0.003/image — cheapest AI API available

# Stream TTS audio
texts = [
    "Welcome to NexaAPI — the cheapest AI API in the market.",
    "Access 56 plus models with a single API key.",
    "Get started at nexa-api.com today."
]

print("\nStreaming TTS audio:")
for audio in tts_stream(texts):
    print(f"{audio.audio_url}")

# Compose pipelines functionally
print("\nComposed pipeline (take first 3 successful images):")
pipeline = take(filter_successful(image_stream(prompts * 2)), n=3)
for result in pipeline:
    print(f"{result.url}")
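The hand-rolled `take` and `filter_successful` above have standard-library equivalents: `itertools.islice` and a generator expression. They compose the same lazy way, shown here on a mock infinite stream instead of an API:

```python
from itertools import islice

def squares():
    """Infinite lazy stream — stands in for an unbounded prompt queue."""
    n = 0
    while True:
        yield n * n
        n += 1

evens = (s for s in squares() if s % 2 == 0)  # lazy filter
print(list(islice(evens, 3)))                 # lazy take: [0, 4, 16]
```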

JavaScript: Async Generator AI Pipeline

// Install: npm install nexaapi
import NexaAPI from 'nexaapi';

const client = new NexaAPI({ apiKey: 'YOUR_API_KEY' });

// ─── IMAGE STREAMING ────────────────────────────────────────────────────────

async function* imageGenerator(prompts) {
  for (const prompt of prompts) {
    yield await client.images.generate({
      model: 'flux-schnell',
      prompt,
      width: 1024,
      height: 1024
    });
  }
}

// ─── TTS STREAMING ──────────────────────────────────────────────────────────

async function* ttsGenerator(texts) {
  for (const text of texts) {
    yield await client.audio.tts({
      model: 'tts-1',
      input: text,
      voice: 'alloy'
    });
  }
}

// ─── PIPELINE COMPOSITION ───────────────────────────────────────────────────

async function* filterSuccessful(gen) {
  for await (const result of gen) {
    if (result.url || result.audioUrl) yield result;
  }
}

async function* takeFirst(gen, n) {
  let count = 0;
  for await (const result of gen) {
    if (count >= n) break;
    yield result;
    count++;
  }
}

// ─── USAGE ──────────────────────────────────────────────────────────────────

const prompts = Array.from({ length: 10 }, (_, i) => `A futuristic city scene ${i}`);

(async () => {
  // Stream images
  console.log('Streaming images:');
  for await (const image of imageGenerator(prompts)) {
    console.log(`  → ${image.url}`);
    // Only $0.003 per image!
  }

  // Composed pipeline
  console.log('\nComposed pipeline (filter + take):');
  for await (const image of takeFirst(filterSuccessful(imageGenerator(prompts)), 3)) {
    console.log(`  → ${image.url}`);
  }
})();
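Python mirrors the JavaScript shape exactly with `async def` generators and `async for`. A minimal stdlib-only sketch, with `asyncio.sleep` standing in for the awaited API call:

```python
import asyncio

async def image_gen(prompts):
    for prompt in prompts:
        await asyncio.sleep(0)  # stand-in for an awaited API call
        yield f"img://{prompt}"

async def take_first(agen, n):
    out = []
    async for item in agen:
        out.append(item)
        if len(out) >= n:
            break               # early exit closes the async generator
    return out

print(asyncio.run(take_first(image_gen(['a', 'b', 'c', 'd']), 2)))
# ['img://a', 'img://b']
```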

Real-World Use Cases

1. Batch Image Generation for E-commerce

product_descriptions = load_product_catalog()  # 1000+ products

# Stream images — process each as it generates
for image in image_stream(product_descriptions):
    save_to_database(image.url)
    update_product_listing(image)
    # Memory: O(1) — only one image in memory at a time!
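The O(1) memory claim is easy to check without any API: a generator object stays a fixed size no matter how many items it will eventually produce, while a list grows with its contents.

```python
import sys

lazy = (n * n for n in range(1_000_000))  # a million pending values
eager = [n * n for n in range(1_000)]     # only a thousand realized values

print(sys.getsizeof(lazy))   # small and constant
print(sys.getsizeof(eager))  # already far larger
assert sys.getsizeof(lazy) < sys.getsizeof(eager)
```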

2. Real-Time Content Pipeline

def content_pipeline(topics):
    """Generate images + audio for each topic — streaming"""
    for topic in topics:
        image = next(image_stream([f'{topic}, professional photo']))
        audio = next(tts_stream([f'Today we explore {topic}']))
        yield {'topic': topic, 'image': image.url, 'audio': audio.audio_url}

for content in content_pipeline(['AI trends', 'Tech news', 'Science discoveries']):
    publish_to_cms(content)
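An alternative shape for the same pipeline: since the image and audio streams are both lazy, `zip` can advance them in lockstep, one topic at a time. Mocked producers here (no API calls); the `img://` and `audio://` values are placeholders:

```python
def mock_images(topics):
    for t in topics:
        yield f"img://{t}"

def mock_audio(topics):
    for t in topics:
        yield f"audio://{t}"

def content_pipeline(topics):
    # zip pulls one item from each lazy stream per iteration
    for topic, img, aud in zip(topics, mock_images(topics), mock_audio(topics)):
        yield {'topic': topic, 'image': img, 'audio': aud}

print(next(content_pipeline(['AI trends'])))
# {'topic': 'AI trends', 'image': 'img://AI trends', 'audio': 'audio://AI trends'}
```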

3. Infinite AI Art Generator

import itertools

def infinite_variations(base_prompt, styles):
    """Infinite generator — like lazy infinite lists in Lisp"""
    for style in itertools.cycle(styles):
        yield from image_stream([f'{base_prompt}, {style}'])

styles = ['oil painting', 'watercolor', 'digital art', 'photorealistic']
art_gen = infinite_variations('a mountain landscape', styles)

# Take only what you need
for artwork in take(art_gen, 20):
    print(artwork.url)
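The same infinite pattern is runnable without an API key. Here a hypothetical `fake_image` stands in for the real generate call; `itertools.cycle` keeps the stream unbounded while the consumer decides when to stop:

```python
import itertools

def fake_image(prompt):
    """Hypothetical stand-in for client.image.generate."""
    return f"img://{prompt}"

def infinite_variations(base_prompt, styles):
    for style in itertools.cycle(styles):
        yield fake_image(f"{base_prompt}, {style}")

gen = infinite_variations('a mountain landscape', ['oil painting', 'watercolor'])
print([next(gen) for _ in range(3)])
# styles repeat: oil painting, watercolor, oil painting
```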

Performance: Generator vs Batch

import time

prompts = ['landscape'] * 10

# ❌ Batch approach: wait for ALL results before processing
start = time.time()
all_results = list(image_stream(prompts))  # waits for all 10
first_processed_at = time.time() - start
print(f"Batch: first result processed at {first_processed_at:.1f}s")

# ✅ Generator approach: process each result immediately
start = time.time()
for result in image_stream(prompts):
    # Process immediately — don't wait for others!
    first_processed_at = time.time() - start
    print(f"Generator: first result at {first_processed_at:.1f}s")
    break  # just measuring first result time

Result: on a 10-prompt batch, the generator approach delivers the first usable result roughly 10× sooner, since you pay one generation's latency instead of ten before processing can begin.
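That claim is reproducible without spending API credits: substitute `time.sleep` for the network call and measure both shapes.

```python
import time

def slow_stream(n, delay=0.05):
    """Stand-in for image_stream: each item costs `delay` seconds."""
    for i in range(n):
        time.sleep(delay)
        yield i

start = time.time()
list(slow_stream(10))              # batch: realize everything first
batch_first = time.time() - start  # ~0.5s before anything is usable

start = time.time()
next(slow_stream(10))              # generator: first item only
gen_first = time.time() - start    # ~0.05s

print(f"batch {batch_first:.2f}s vs generator {gen_first:.2f}s")
```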


Why NexaAPI for Streaming Pipelines

| Feature           | NexaAPI          | Competitors      |
| ----------------- | ---------------- | ---------------- |
| Price             | $0.003/image     | $0.02-0.04/image |
| Models            | 56+              | 5-10             |
| SDKs              | Python + Node.js | Varies           |
| Streaming support | ✅               | Limited          |
| API simplicity    | ✅ One key       | Multiple keys    |

NexaAPI is designed for exactly this use case: high-volume, streaming AI pipelines where cost and composability matter.


Getting Started

# Python
pip install nexaapi

# Node.js
npm install nexaapi
  1. Get your API key at RapidAPI
  2. Install the SDK (PyPI | npm)
  3. Start streaming AI outputs with the generator pattern

$0.003/image. 56+ models. One API key.




Tags: #python #javascript #ai #generators #functionalprogramming #streaming #nexaapi #lisp
