mv7

Posted on • Originally published at pixserp.com

Meet pixserp — One Drop-in API for Web, News, Places, Flights, Hotels, YouTube and Anything Else on the Live Web

If you've built an AI agent in 2026, you've probably integrated more "search" APIs than you'd like to admit. One for web pages. One for news. One for product prices because the SERP one doesn't return shopping cards. One for flights, because of course none of the above know about flights. One for YouTube transcripts. Each with its own SDK, its own JSON shape, its own pricing model, its own bill at month-end.

We were there too. We built Teti AI — an AI assistant used daily by hundreds of thousands of people — and behind every conversation there's a live web lookup. Millions a day. Sometimes it's a news headline. Sometimes a flight, a hotel, a product, a YouTube summary. All of those are "search" from the user's point of view, even if no single search API covers them. That's where pixserp was born.

This post is the short version of what pixserp is and why we think you should care if you're shipping anything LLM-shaped.

The setup: one endpoint, ten shapes

pixserp is an AI search API with a single twist: one endpoint covers ten different shapes of answer. You write a natural-language question; the agent figures out which vertical the answer lives in.

from openai import OpenAI

client = OpenAI(api_key="pxs_…", base_url="https://pixserp.com/api/v1")

def ask(q):
    r = client.chat.completions.create(
        model="pixserp-fast",
        messages=[{"role": "user", "content": q}],
    )
    return r.choices[0].message

# Same call. Different shapes of answer. One bill.
ask("Best practices for Postgres index maintenance in 2026")      # web
ask("latest news on AI startup funding this week")                # news
ask("top-rated ramen near Porta Garibaldi, Milan")                # places
ask("iPhone 15 Pro under $900 with free shipping")                # shopping
ask("cheapest direct MXP→JFK on July 18, economy")                # flights
ask("hotels in Barcelona Jul 15-20, 4★+, under $250/night")       # hotels
ask("summarize https://youtu.be/dQw4w9WgXcQ in 5 bullets")        # YouTube + transcript
ask("extract the key claims from https://example.com/article")    # any URL

You don't pick the vertical. You don't switch endpoints. You just ask, and you get back a cited answer with structured per-shape fields — rating and address for places, price and store for shopping, flight numbers and segments for flights, etc.

Why "OpenAI-compatible" matters

The wire format is the standard chat.completions shape, which means anything that speaks OpenAI's API speaks pixserp. Swap base_url and you're done:

  • The official openai Python / JS / Go SDKs work
  • LangChain works (use ChatOpenAI with base_url)
  • LlamaIndex works (OpenAILike)
  • Vercel AI SDK works (createOpenAI with baseURL)
  • Cursor, Continue, any tool that lets you set a base URL works
  • curl works (it's just HTTP)

No bespoke client to install. No per-framework adapter to maintain. The same code that talks to GPT-4 talks to pixserp, just pointed at a different host.

import OpenAI from "openai";

const pixserp = new OpenAI({
  apiKey:  "pxs_…",
  baseURL: "https://pixserp.com/api/v1",
});

const r = await pixserp.chat.completions.create({
  model:    "pixserp-fast",
  messages: [{ role: "user", content: "Latest CRISPR developments in 2026" }],
});

console.log(r.choices[0].message.content);       // cited answer prose
console.log(r.choices[0].message.citations);     // structured sources
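
The LangChain case is the same swap. A minimal sketch, assuming the langchain-openai package (only the api_key and base_url values are pixserp-specific):

from langchain_openai import ChatOpenAI

# Point LangChain's standard OpenAI chat wrapper at pixserp.
llm = ChatOpenAI(
    model="pixserp-fast",
    api_key="pxs_…",
    base_url="https://pixserp.com/api/v1",
)

print(llm.invoke("latest news on AI startup funding this week").content)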

Citations are first-class

Every fact in the answer comes back with both an inline [1] marker and a structured entry in message.citations. Each entry has a kind (web, news, place, shopping, flight, hotel, video, transcript, image, webpage) plus per-kind structured fields. UI rendering becomes "render the cards", not "parse free-form prose":

{
  "id":     "1",
  "kind":   "place",
  "title":  "Ippudo NY",
  "rating": 4.5,
  "address": "65 4th Ave, New York, NY 10003",
  "url":    "https://www.google.com/maps/place/…",
  "markdown": "**Ippudo NY** — 4.5★ · 65 4th Ave · New York"
}

Drop that into a card component and you have a working place card without writing a single regex over markdown.
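
As a sketch of what "render the cards" can look like on the client (a hypothetical helper, not part of any pixserp SDK; it reuses the ask() function from the first snippet and treats each citation as a plain dict shaped like the JSON above):

def render_citation(c):
    # Dispatch on the per-shape "kind" field instead of parsing prose.
    if c["kind"] == "place":
        return f'{c["title"]} · {c.get("rating", "?")}★ · {c.get("address", "")}'
    if c["kind"] == "shopping":
        return f'{c["title"]} · {c.get("price", "?")} · {c.get("store", "")}'
    return f'{c["title"]} ({c.get("url", "")})'

msg = ask("top-rated ramen near Porta Garibaldi, Milan")
for c in msg.citations:
    print(render_citation(c))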

Pricing — flat per request

This is where we differ most from the rest of the category.

| Model | Use for | Price |
| --- | --- | --- |
| pixserp-fast | Quick lookups, chat-style | $1.50 / 1k |
| pixserp-standard | Balanced research | $2.50 / 1k |
| pixserp-deep | Multi-angle, thorough | $3.50 / 1k |
| pixserp-agent | Multi-step research loop (up to 100 steps) | $0.0035 / step |

For comparison, May 2026 list prices for the rest of the category:

  • Exa: $7–$15 / 1k + extra $1 / 1k per content-type fetch
  • Tavily: $8–$16 / 1k pay-as-you-go
  • Perplexity Sonar: $1–$15 per 1M tokens + an extra $14–$22 / 1k on Pro Search
  • Brave Summarizer: async polling, multi-second floor

We're cheaper than Exa by ~4×, cheaper than Tavily by ~5×, and predictable in a way per-token APIs aren't. No per-token roulette, no metered sub-calls to count, no end-of-month surprises. You know the cost before you call.

How? Because we have one variable (request count) instead of N (tokens × pages × searches × verticals). The variance averages out at scale; we price the average plus a margin and offer it back as a fixed unit. The teams we talked to before launching this pricing all had the same story: "month one was fine, month three the bill was 4× projected because traffic shifted toward harder questions." Flat pricing ends that.
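
A back-of-envelope sketch of that predictability, using the prices from the table above (the helper itself is just illustrative):

PRICE_PER_1K = {"pixserp-fast": 1.50, "pixserp-standard": 2.50, "pixserp-deep": 3.50}

def monthly_cost(requests_by_model):
    # Flat per-request pricing: the bill is one multiplication per model.
    return sum(n / 1000 * PRICE_PER_1K[m] for m, n in requests_by_model.items())

# e.g. 800k fast lookups + 200k standard research calls in a month:
print(monthly_cost({"pixserp-fast": 800_000, "pixserp-standard": 200_000}))  # 1700.0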

Streaming, JSON schema, MCP

A few other things that ship by default:

SSE streaming — same OpenAI wire format. stream: true and you get token-by-token chunks. Time-to-first-token on pixserp-fast is ~1 second.
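
For example, a minimal sketch reusing the client from the first snippet (the loop is the standard OpenAI streaming pattern):

stream = client.chat.completions.create(
    model="pixserp-fast",
    messages=[{"role": "user", "content": "latest news on AI startup funding this week"}],
    stream=True,
)
for chunk in stream:
    # Each SSE chunk carries a delta in the usual chat.completions shape.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)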

JSON schema outputs — pass response_format with a schema, get JSON back with web-grounded values. No parsing, no validation gymnastics:

r = client.chat.completions.create(
    model="pixserp-fast",
    messages=[{"role": "user", "content": "Top 3 aerospace companies, CEO, founded year"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "companies",
            "schema": {
                "type": "object",
                "properties": {
                    "companies": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name":         {"type": "string"},
                                "ceo":          {"type": "string"},
                                "founded_year": {"type": "integer"},
                            },
                            "required": ["name", "ceo", "founded_year"],
                        },
                    },
                },
            },
        },
    },
)

import json

for c in json.loads(r.choices[0].message.content)["companies"]:
    print(c["name"], "·", c["ceo"], "·", c["founded_year"])

MCP server — we also ship a dedicated Model Context Protocol server so any MCP-compatible client (Claude Desktop, Cursor, Zed, Claude Code, Cline, Continue) can pick up pixserp as a search tool with one config block. Full install configs are at pixserp.com/mcp.
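
For orientation, a Claude Desktop-style entry usually looks something like the block below; the package name pixserp-mcp and the env var here are placeholders, so copy the real values from pixserp.com/mcp:

{
  "mcpServers": {
    "pixserp": {
      "command": "npx",
      "args": ["-y", "pixserp-mcp"],
      "env": { "PIXSERP_API_KEY": "pxs_…" }
    }
  }
}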

What it deletes from your codebase

If you're coming from a stitched-together setup (SerpAPI + scraper + your own LLM synthesis), what you can delete:

  1. Result-picking logic — pixserp picks which URLs to fetch.
  2. Page scraper (Playwright / BeautifulSoup / trafilatura) — content extraction is server-side.
  3. HTML cleaner / boilerplate stripper — same.
  4. Token-budget truncator — pixserp returns a final answer, not raw context.
  5. Citation-marker injector — built into the response.
  6. The second LLM call for synthesis — pixserp is the synthesis. You delete the OpenAI bill underneath.
  7. Per-vertical clients (flight API, hotels API, shopping API, YouTube transcript service) — all behind the same endpoint now.

If that stack is ~300 lines and three separate vendor accounts, you delete ~300 lines and consolidate three accounts.

Honest comparison with the rest of the category

We're hardly the only AI search API. If you're evaluating:

  • Exa is the right pick if you specifically want embedding-similarity over papers/blogs and synthesis isn't part of the job. Their /contents endpoint is solid for "find me posts similar to X".
  • Tavily is the most pragmatic for plain web/news Q&A if you don't need streaming, structured outputs, or any verticals beyond web/news.
  • Perplexity Sonar is the right pick if you want their consumer-product research engine specifically, and you're OK with per-token pricing.
  • Brave Search is great if you only need raw SERP results and want a simple per-1k web/news/images call. No native synthesis, no agent.

pixserp is the pick if:

  • You want one endpoint that handles web + news + places + shopping + flights + hotels + YouTube + transcripts + any URL — without integrating four separate APIs.
  • You want flat per-request pricing so finance can model your bill.
  • You want a drop-in OpenAI-compatible wire format so existing code works.
  • You want structured citations with per-shape fields, not just URL lists.

Where to go next

If you want to try it:

  1. Get an API key — $2.50 free credit on signup, no card.
  2. Point your base_url at https://pixserp.com/api/v1.
  3. Pick a model — start with pixserp-fast for chat, pixserp-standard for research.
  4. Ship. Your existing chat.completions.create() code keeps working.

Docs and per-language quickstarts: pixserp.com/docs. MCP install: pixserp.com/mcp. Playground (in-browser, no setup): pixserp.com/playground.

We run Teti AI on this exact stack. If pixserp goes down, Teti goes down too — so the uptime expectations are the same as our own product. Drop it in and tell us what you build.

— The Teti AI team
