The Output Problem
Here's something that bugs me about most agent frameworks: they're obsessed with input — tools, RAG, memory, context windows — but nobody talks about where the output goes.
You build a multi-agent workflow. A research agent gathers data, an analyst crunches it, a writer produces a report. Beautiful architecture. Then result.final_output dumps to stdout and... that's it. You copy-paste it into a Google Doc. Or worse, it lives in a log file forever.
Agents need output surfaces — places where their work product becomes something you can share, link to, and actually use. Not just tools for doing things, but infrastructure for producing things.
I've been using SurfaceDocs for this. It's an API-first document service built specifically for machine-generated content. Agent produces structured output, SurfaceDocs turns it into a shareable URL. No templates, no CMS, no infrastructure to maintain. Here's how I wire it up with the OpenAI Agents SDK.
The OpenAI Agents SDK in 30 Seconds
If you're reading this, you probably know the Agents SDK. Quick refresher on the parts that matter here:
```python
from agents import Agent, Runner

agent = Agent(
    name="Writer",
    instructions="You write technical reports.",
    output_type=MyPydanticModel,  # structured output
)

result = Runner.run_sync(agent, "Write a report on X")
# result.final_output is a MyPydanticModel instance
```
The key feature for us: output_type. Pass a Pydantic model and the SDK uses OpenAI's structured outputs under the hood. You get typed, validated output — not a blob of text you have to parse. This is what makes the SurfaceDocs integration clean.
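To make "typed, validated output" concrete, here's a minimal standalone sketch of what Pydantic validation buys you over parsing raw text. The `Report` model is a toy stand-in, not part of either SDK:

```python
from pydantic import BaseModel, ValidationError

# A toy model standing in for your output_type.
class Report(BaseModel):
    title: str
    word_count: int

# This is roughly what the SDK does for you: the model's JSON
# output is parsed and validated into a typed instance.
raw = '{"title": "Async Patterns", "word_count": 1200}'
report = Report.model_validate_json(raw)
print(report.title)       # Async Patterns
print(report.word_count)  # 1200 (an int, not a string)

# Malformed output fails loudly instead of silently propagating.
try:
    Report.model_validate_json('{"title": "Oops"}')
except ValidationError as e:
    print("validation failed:", e.error_count(), "error(s)")
```

If the model omits a field or returns the wrong type, you find out at the validation boundary, not three steps downstream.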
The Integration Pattern
The pattern is straightforward:
- Define a Pydantic model that matches SurfaceDocs' document schema
- Use it as your agent's output_type
- After the run, pass the structured output to docs.save_raw()
Agent focuses on content. Publishing is a post-processing step. Clean separation.
Agent → structured output (Pydantic) → SurfaceDocs → shareable URL
Let's build it.
Example 1: Agent That Publishes a Report
First, install both packages:
```shell
pip install openai-agents surfacedocs
```
Now the code. I'll define a Pydantic model that mirrors the SurfaceDocs document format, use it as the agent's output type, and save the result:
```python
from __future__ import annotations

from pydantic import BaseModel

from agents import Agent, Runner
from surfacedocs import SurfaceDocs


# -- Define the document structure as Pydantic models --

class BlockMetadata(BaseModel):
    level: int | None = None
    language: str | None = None
    listType: str | None = None


class Block(BaseModel):
    type: str  # heading, paragraph, code, list, quote, etc.
    content: str
    metadata: BlockMetadata | None = None


class Document(BaseModel):
    title: str
    blocks: list[Block]


# -- Create the agent --

agent = Agent(
    name="Report Writer",
    instructions="""You write well-structured technical reports.

    Output a document with a clear title and blocks of content.
    Use a variety of block types:
    - heading blocks (with metadata.level = 1, 2, or 3)
    - paragraph blocks for body text
    - code blocks (with metadata.language set appropriately)
    - list blocks for enumerations (metadata.listType = "bullet" or "ordered")
    - quote blocks for key takeaways

    Write substantive, information-dense content. No filler.""",
    output_type=Document,
)

# -- Run and publish --

result = Runner.run_sync(
    agent,
    "Write a technical report on Python async patterns: "
    "when to use asyncio vs threading vs multiprocessing. "
    "Include code examples.",
)

doc: Document = result.final_output

# Publish to SurfaceDocs
docs = SurfaceDocs()  # uses SURFACEDOCS_API_KEY env var
saved = docs.save_raw(
    title=doc.title,
    blocks=[block.model_dump(exclude_none=True) for block in doc.blocks],
    metadata={"source": "report-writer-agent"},
)

print(f"Published: {saved.url}")
```
Run it, and you get a URL. Open it in a browser — a clean, formatted document you can share with anyone. No copy-pasting, no formatting, no "let me put this in a Google Doc."
The docs.save_raw() method takes the title and blocks directly. The exclude_none=True keeps the payload clean — no null metadata fields cluttering things up.
A note on the schema approach
You could also use SurfaceDocs' built-in DOCUMENT_SCHEMA and SYSTEM_PROMPT with the raw OpenAI API. But the Agents SDK's output_type is Pydantic-native, so defining your own model gives you more control and keeps everything in Agents SDK patterns. Either approach works.
Example 2: Multi-Agent Research → Report Pipeline
This is where it gets interesting. Two agents:
- Research Agent — gathers information using a tool (simulated here, but you'd wire up web search or a database)
- Report Writer Agent — takes research findings and produces a structured document
The research agent hands off to the writer via the SDK's handoff mechanism.
```python
from __future__ import annotations

from pydantic import BaseModel

from agents import Agent, Runner, function_tool
from surfacedocs import SurfaceDocs


# -- Document schema (same as before) --

class BlockMetadata(BaseModel):
    level: int | None = None
    language: str | None = None
    listType: str | None = None


class Block(BaseModel):
    type: str
    content: str
    metadata: BlockMetadata | None = None


class Document(BaseModel):
    title: str
    blocks: list[Block]


# -- Research tool --

@function_tool
def search_web(query: str) -> str:
    """Search the web for information on a topic."""
    # In production, wire this to Brave Search, Tavily, etc.
    # Returning mock data for the example.
    return (
        f"Search results for '{query}':\n"
        "1. FastAPI has become the most popular async Python framework, "
        "growing 300% in GitHub stars since 2023.\n"
        "2. Litestar emerged as a strong alternative with built-in "
        "dependency injection.\n"
        "3. Django 5.x added async view support but adoption is gradual.\n"
        "4. Starlette remains the foundation for most async frameworks."
    )


# -- The report writer agent (produces structured output) --

report_writer = Agent(
    name="Report Writer",
    instructions="""You receive research findings and produce a polished
    technical report as a structured document.

    Use these block types:
    - heading (metadata.level: 1 for title sections, 2 for subsections)
    - paragraph for body text
    - list for comparisons and enumerations
    - quote for key insights or conclusions
    - code for any technical examples

    Synthesize the research into a coherent narrative.
    Be opinionated — make recommendations, not just summaries.""",
    output_type=Document,
)

# -- The research agent (gathers info, then hands off) --

research_agent = Agent(
    name="Research Lead",
    instructions="""You are a senior technical researcher. Given a topic:

    1. Use the search tool to gather information (make 2-3 searches
       with different angles)
    2. Once you have enough material, hand off to the Report Writer
       with a clear summary of your findings.

    Include specific data points, comparisons, and trends.""",
    tools=[search_web],
    handoffs=[report_writer],
)

# -- Run the pipeline --

result = Runner.run_sync(
    research_agent,
    "Research the current state of async Python web frameworks in 2025. "
    "Compare FastAPI, Django, Litestar, and Starlette.",
)

doc: Document = result.final_output

# Publish
docs = SurfaceDocs()
saved = docs.save_raw(
    title=doc.title,
    blocks=[block.model_dump(exclude_none=True) for block in doc.blocks],
    metadata={
        "source": "research-pipeline",
        "agents": ["research-lead", "report-writer"],
    },
)

print(f"Published: {saved.url}")
```
What's happening here: the research agent runs its searches, accumulates context, then hands off to the report writer. The report writer's output_type=Document ensures the final output is structured. We catch it at the end and publish.
The metadata field is handy — I tag documents with which agents produced them, what pipeline they came from, timestamps, whatever makes sense. Useful when you're publishing dozens of documents and need to trace provenance.
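For instance, a small helper (hypothetical, not part of either SDK) that stamps consistent provenance onto every document a pipeline publishes:

```python
from datetime import datetime, timezone

def provenance(pipeline: str, agents: list[str]) -> dict:
    """Build a metadata dict recording which pipeline and agents
    produced a document, plus a UTC publish timestamp."""
    return {
        "source": pipeline,
        "agents": agents,
        "published_at": datetime.now(timezone.utc).isoformat(),
    }

meta = provenance("research-pipeline", ["research-lead", "report-writer"])
print(meta["source"])  # research-pipeline
# meta can be passed straight to docs.save_raw(metadata=meta)
```

A consistent metadata shape like this is what makes it possible to answer "which pipeline produced this document, and when?" months later.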
Organizing output with folders
When your agents produce documents regularly, folders keep things sane:
```python
docs = SurfaceDocs()

# Create a folder for this pipeline's output
folder = docs.create_folder("Async Python Research")

saved = docs.save_raw(
    title=doc.title,
    blocks=[block.model_dump(exclude_none=True) for block in doc.blocks],
    folder_id=folder.id,
    metadata={"source": "research-pipeline"},
)
```
Why This Matters
There's a conceptual gap in how we build agents today. We've gotten good at giving agents capabilities — tools, function calling, code execution. But we haven't thought enough about giving agents output surfaces.
Think about how human knowledge workers operate. A consultant doesn't just "think about the problem" — they produce a deliverable. A slide deck, a report, a memo. The artifact is the point.
Agents should work the same way. The value isn't in the inference call. It's in the artifact that comes out the other side — something with a URL, something you can send to a stakeholder, something that persists beyond the terminal session.
This is the pattern I keep coming back to:
1. Structured output to ensure the agent produces well-formed content (not just freeform text)
2. A document service to turn that content into something shareable
3. Metadata and folders to organize output at scale

SurfaceDocs handles steps 2 and 3. The Agents SDK handles step 1 beautifully with Pydantic models. Together, they close the loop from "agent ran successfully" to "here's the URL with the results."
Getting Started
SurfaceDocs: surfacedocs.dev — grab an API key, pip install surfacedocs
OpenAI Agents SDK: pip install openai-agents — docs on GitHub
The full examples from this article work as-is. Set your OPENAI_API_KEY and SURFACEDOCS_API_KEY environment variables and run them.
The broader point: if you're building agents, think about where the output goes. Your agents are doing real work. Give that work a place to live.