DEV Community

Wanda

Posted on • Originally published at apidog.com

How to Turn TradingAgents into a Real Trading API Workflow

TL;DR / Quick Answer

The most efficient way to operationalize TradingAgents is to install it as a Python package, wrap it with a minimal FastAPI service, and use Apidog to test and document your API. This setup enables you to trigger analyses, poll for results, define and share the request contract, and collaborate with your team using a repeatable workflow.

Try Apidog today

Introduction

TradingAgents offers a multi-agent trading framework, a robust CLI, support for various model providers, and clear documentation. However, integrating it into real engineering workflows requires more than running scripts locally. Most teams need a standardized way to trigger analysis, pass in parameters, receive job IDs, and inspect results—without turning every task into Python debugging.

A proper API wrapper lets frontend, QA, and platform teammates use TradingAgents reliably. Given the importance of trading research, wrapping TradingAgents in a well-defined, documented API is essential.

💡 Apidog fits into this workflow naturally. Import your FastAPI OpenAPI schema, manage environments, extract variables, chain polling requests, and publish docs for your team. Download Apidog free to follow along.

What TradingAgents Is and Is Not

Clarify the tool before implementation.

TradingAgents Architecture

TradingAgents is an open-source, multi-agent trading framework. It models trading firm roles—fundamental, sentiment, news, technical analysts, bullish/bearish researchers, traders, risk managers, and a portfolio manager.

TradingAgents Workflow

Built on LangGraph, it supports OpenAI, Google, Anthropic, xAI, OpenRouter, and Ollama. The default config uses:

  • llm_provider = "openai"
  • deep_think_llm = "gpt-5.2"
  • quick_think_llm = "gpt-5-mini"
  • backend_url = "https://api.openai.com/v1"
  • max_debate_rounds = 1

TradingAgents is a configurable Python framework (not a SaaS API), presented as a research system—not financial advice. Keep this context in your docs and user experience.

Step 1: Install TradingAgents

Clone the repo and set up your environment:

git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
conda create -n tradingagents python=3.13
conda activate tradingagents
pip install .

For the API wrapper, add FastAPI and Uvicorn:

pip install fastapi uvicorn

Set your provider credentials in a .env file:

OPENAI_API_KEY=
GOOGLE_API_KEY=
ANTHROPIC_API_KEY=
XAI_API_KEY=
OPENROUTER_API_KEY=

Best practices:

  1. Store credentials in environment variables or a secrets manager.
  2. Never expose provider secrets via public API request bodies.

This separation keeps your Apidog environments clean and improves security.
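Since keys live only in the environment, it helps to fail fast at startup when one is missing instead of mid-analysis. Here is a minimal sketch of such a check — the `PROVIDER_ENV_VARS` mapping and the `missing_credentials` helper are illustrative names for this article, not part of TradingAgents:

```python
import os

# Map each supported llm_provider to the env var it needs.
# This mapping is an example; trim it to the providers you actually use.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "xai": "XAI_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

def missing_credentials(provider: str, env=os.environ) -> list[str]:
    """Return the env vars required by `provider` that are unset or empty."""
    var = PROVIDER_ENV_VARS.get(provider)
    if var is None:
        raise ValueError(f"Unknown provider: {provider}")
    return [var] if not env.get(var) else []
```

Call it once when the service boots and refuse to start if anything is missing; that keeps credential errors out of individual analysis runs.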

Step 2: Run TradingAgents in Python First

Ensure the framework works before wrapping it in an API.

Minimal Python usage:

from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

ta = TradingAgentsGraph(debug=True, config=DEFAULT_CONFIG.copy())
_, decision = ta.propagate("NVDA", "2026-01-15")
print(decision)

You can override config as needed:

from tradingagents.graph.trading_graph import TradingAgentsGraph
from tradingagents.default_config import DEFAULT_CONFIG

config = DEFAULT_CONFIG.copy()
config["llm_provider"] = "openai"
config["deep_think_llm"] = "gpt-5.2"
config["quick_think_llm"] = "gpt-5-mini"
config["max_debate_rounds"] = 2

ta = TradingAgentsGraph(debug=True, config=config)
_, decision = ta.propagate("NVDA", "2026-01-15")
print(decision)

Parameters you might want to expose via API:

  • ticker
  • analysis_date
  • llm_provider
  • deep_think_llm
  • quick_think_llm
  • research_depth / max_debate_rounds

Always verify local Python execution before building HTTP endpoints; debugging is far easier without an API layer in the way.

Step 3: Decide How You Want to Use TradingAgents

Three usage patterns:

Option 1: CLI Only

The CLI is good for learning and quick experiments. Use it if:

  • You're exploring or running solo tests.
  • No integration with other apps is planned.

Option 2: Python Only

Direct Python calls are better for custom orchestration or automation. Use when:

  • You want notebooks or scripts.
  • You need programmatic control.
  • One developer owns the workflow.

Option 3: API Wrapper + Apidog

For teams, expose TradingAgents via a FastAPI service and test/document with Apidog. Use this if:

  • Frontend/QA needs to trigger analysis.
  • You want environments, assertions, and docs in one place.
  • The workflow may run long (so polling is better than synchronous requests).

This approach is best for collaborative, maintainable implementations.

Step 4: Wrap TradingAgents in a FastAPI Service

A job-based API pattern works best for long-running analyses:

POST /analyses      -> returns analysis_id
GET /analyses/{id}  -> returns job status/result

Create the API Contract

Endpoint             Purpose
GET /health          Basic health check
POST /analyses       Trigger a TradingAgents run
GET /analyses/{id}   Fetch job status/result
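These endpoints imply a small job state machine: queued → running → completed or failed. A quick sketch of the legal transitions — `TRANSITIONS` and `can_transition` are illustrative helpers for this article, not part of the framework:

```python
# Allowed job-status transitions for the job-based API pattern.
# "queued" -> "running" -> "completed" | "failed"
TRANSITIONS = {
    "queued": {"running"},
    "running": {"completed", "failed"},
    "completed": set(),   # terminal
    "failed": set(),      # terminal
}

def can_transition(current: str, new: str) -> bool:
    """True if moving a job from `current` to `new` is a legal transition."""
    return new in TRANSITIONS.get(current, set())
```

Keeping the lifecycle explicit like this makes it easy to assert on status values later in Apidog tests.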

Build the Wrapper

from concurrent.futures import ThreadPoolExecutor
from datetime import date, datetime
from uuid import uuid4

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

from tradingagents.default_config import DEFAULT_CONFIG
from tradingagents.graph.trading_graph import TradingAgentsGraph

app = FastAPI(title="TradingAgents API", version="0.1.0")
executor = ThreadPoolExecutor(max_workers=2)
jobs: dict[str, dict] = {}

class AnalysisRequest(BaseModel):
    ticker: str = Field(..., min_length=1, examples=["NVDA"])
    analysis_date: date
    llm_provider: str = Field(default="openai")
    deep_think_llm: str = Field(default="gpt-5.2")
    quick_think_llm: str = Field(default="gpt-5-mini")
    research_depth: int = Field(default=1, ge=1, le=5)

def run_analysis(job_id: str, payload: AnalysisRequest) -> None:
    jobs[job_id]["status"] = "running"
    jobs[job_id]["started_at"] = datetime.utcnow().isoformat()

    config = DEFAULT_CONFIG.copy()
    config["llm_provider"] = payload.llm_provider
    config["deep_think_llm"] = payload.deep_think_llm
    config["quick_think_llm"] = payload.quick_think_llm
    config["max_debate_rounds"] = payload.research_depth
    config["max_risk_discuss_rounds"] = payload.research_depth

    try:
        graph = TradingAgentsGraph(debug=False, config=config)
        _, decision = graph.propagate(
            payload.ticker,
            payload.analysis_date.isoformat(),
        )
        jobs[job_id].update(
            {
                "status": "completed",
                "finished_at": datetime.utcnow().isoformat(),
                "result": decision,
            }
        )
    except Exception as exc:
        jobs[job_id].update(
            {
                "status": "failed",
                "finished_at": datetime.utcnow().isoformat(),
                "error": str(exc),
            }
        )

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/analyses", status_code=202)
def create_analysis(payload: AnalysisRequest) -> dict:
    analysis_id = str(uuid4())
    jobs[analysis_id] = {
        "status": "queued",
        "ticker": payload.ticker,
        "analysis_date": payload.analysis_date.isoformat(),
        "created_at": datetime.utcnow().isoformat(),
    }
    executor.submit(run_analysis, analysis_id, payload)
    return {"analysis_id": analysis_id, "status": "queued"}

@app.get("/analyses/{analysis_id}")
def get_analysis(analysis_id: str) -> dict:
    job = jobs.get(analysis_id)
    if not job:
        raise HTTPException(status_code=404, detail="Analysis not found")
    return job

Start the API server:

uvicorn app:app --reload

FastAPI exposes:

  • http://localhost:8000/docs
  • http://localhost:8000/openapi.json (for Apidog import)

Step 5: Use TradingAgents Through the API

Trigger an Analysis

Send a POST /analyses request with:

{
  "ticker": "NVDA",
  "analysis_date": "2026-03-26",
  "llm_provider": "openai",
  "deep_think_llm": "gpt-5.2",
  "quick_think_llm": "gpt-5-mini",
  "research_depth": 2
}

Expected response:

{
  "analysis_id": "88f9f0f5-7315-4c73-8ed5-d0a71f613d31",
  "status": "queued"
}

Poll for the Result

Call GET /analyses/{analysis_id}:

{
  "status": "running",
  "ticker": "NVDA",
  "analysis_date": "2026-03-26",
  "created_at": "2026-03-26T06:00:00.000000",
  "started_at": "2026-03-26T06:00:01.000000"
}

When done:

{
  "status": "completed",
  "ticker": "NVDA",
  "analysis_date": "2026-03-26",
  "result": {
    "decision": "hold"
  }
}

If the run fails, the job ends in a failed status with an error message instead of a result.
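On the client side, these two calls combine into a simple polling loop. Here is a sketch with an injectable `fetch_status` callable (a hypothetical helper name — wire it to `requests.get` or `httpx` in practice), which keeps the loop transport-agnostic and easy to test:

```python
import time

def poll_analysis(fetch_status, analysis_id: str,
                  interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll until the job reaches a terminal status or the timeout expires.

    `fetch_status` is any callable that maps an analysis_id to the job dict
    returned by GET /analyses/{id} (e.g. a thin wrapper around requests.get).
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_status(analysis_id)
        if job["status"] in ("completed", "failed"):
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Analysis {analysis_id} still {job['status']}")
        time.sleep(interval)
```

Because `fetch_status` is injected, the same loop works against local, staging, or a stubbed backend in unit tests.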

Step 6: Import the API into Apidog

Import the OpenAPI schema in Apidog:

http://localhost:8000/openapi.json

Benefits:

  • Docs match the actual implementation.
  • Path parameters and request bodies are auto-generated.
  • Teammates avoid manual collection setup.

This is a significant upgrade over ad hoc cURL or less integrated tooling.

Step 7: Create an Apidog Environment

Define a local environment:

base_url = http://localhost:8000
analysis_id =

For authenticated APIs:

internal_api_key = your-local-dev-key

Advantages:

  • Quickly switch between local, staging, and prod.
  • Reusable requests.
  • Consistent headers/URLs for all teammates.

Apidog complements TradingAgents by managing workflow and documentation.

Step 8: Test the Full Workflow in Apidog

Request 1: Create the Analysis

Set up:

  • Method: POST
  • URL: {{base_url}}/analyses
  • Body:
{
  "ticker": "NVDA",
  "analysis_date": "2026-03-26",
  "llm_provider": "openai",
  "deep_think_llm": "gpt-5.2",
  "quick_think_llm": "gpt-5-mini",
  "research_depth": 2
}

Add a test script:

pm.test("Status is 202", function () {
    pm.response.to.have.status(202);
});

const data = pm.response.json();
pm.expect(data.analysis_id).to.exist;
pm.environment.set("analysis_id", data.analysis_id);

Request 2: Poll the Analysis

Configure:

  • Method: GET
  • URL: {{base_url}}/analyses/{{analysis_id}}

Assertions:

pm.test("Analysis has a valid status", function () {
    const data = pm.response.json();
    pm.expect(["queued", "running", "completed", "failed"]).to.include(data.status);
});

Check for completed result:

pm.test("Completed jobs include a result", function () {
    const data = pm.response.json();
    if (data.status === "completed") {
        pm.expect(data.result).to.exist;
    }
});

Chain Requests into a Scenario

Build a scenario:

  1. POST /analyses (store analysis_id)
  2. Wait a few seconds
  3. GET /analyses/{{analysis_id}}

This gives your team a reproducible way to validate the workflow end-to-end.

Step 9: Publish Internal Docs for Your Team

After successful testing, use Apidog to publish internal docs covering:

  • Allowed providers
  • The meaning of research_depth
  • Status values and expected durations
  • Error handling and retry logic
  • Research-only disclaimers

Clear documentation prevents bottlenecks and keeps the workflow team-friendly.

Common Mistakes When Using TradingAgents This Way

Treating the Framework Like a Hosted API

TradingAgents is a Python framework, not a managed service. Build your own API contract.

Passing Secrets Through Request Bodies

Keep provider keys in environment management—never in requests, examples, or docs.

Returning One Long Synchronous Response

Prefer job-based APIs to avoid blocking clients on long-running workflows.

Exposing Too Many Config Knobs

Start with a minimal, stable set of parameters. Don’t expose every internal option.
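One way to enforce this is an explicit allow-list that filters client payloads before anything touches the TradingAgents config. A sketch with illustrative names — the specific keys in `ALLOWED_OVERRIDES` are an example, not a recommendation:

```python
# Only these keys from a client payload ever reach the config.
# Note that backend_url and other internals are deliberately absent.
ALLOWED_OVERRIDES = {
    "llm_provider",
    "deep_think_llm",
    "quick_think_llm",
    "max_debate_rounds",
}

def safe_overrides(payload: dict) -> dict:
    """Drop any config keys a client sends that are not explicitly allowed."""
    return {k: v for k, v in payload.items() if k in ALLOWED_OVERRIDES}
```

Pydantic models already reject unknown fields at the request boundary; an allow-list like this adds a second guard if you ever accept freeform config dicts.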

Keeping Results Only In-Memory

The example uses an in-memory dict. For production, persist job state in Redis, Postgres, etc.
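As a minimal illustration of that persistence step, here is a sketch using the stdlib sqlite3 module — the `JobStore` class and its schema are assumptions made for this example, and a production deployment would more likely use Redis or Postgres:

```python
import json
import sqlite3

class JobStore:
    """Tiny durable job store; a drop-in for the in-memory `jobs` dict."""

    def __init__(self, path: str = "jobs.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, data TEXT)"
        )

    def save(self, job_id: str, job: dict) -> None:
        # Store the whole job dict as JSON, overwriting any previous state.
        self.conn.execute(
            "INSERT OR REPLACE INTO jobs (id, data) VALUES (?, ?)",
            (job_id, json.dumps(job)),
        )
        self.conn.commit()

    def load(self, job_id: str):
        row = self.conn.execute(
            "SELECT data FROM jobs WHERE id = ?", (job_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None
```

With this in place, job state survives a server restart, and multiple workers can share the same store.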

Hiding the Research Disclaimer

Always include the research-only disclaimer if you expose TradingAgents as a service.

Conclusion

The optimal TradingAgents setup depends on your goals. For solo exploration, use the CLI or Python package. For stable team workflows, wrap it in an API and leverage Apidog for testing and documentation.

To quickly go from repo to usable workflow:

  1. Install TradingAgents
  2. Confirm TradingAgentsGraph works locally
  3. Add POST /analyses and GET /analyses/{id} endpoints
  4. Import the schema into Apidog
  5. Build and automate your scenario

This approach scales better than relying on terminal commands and tribal knowledge.

FAQ

How do you use TradingAgents for the first time?

Install the repo, set model provider environment variables, and run the Python example with TradingAgentsGraph. Decide if you only need the CLI, or if you should wrap it in an API for broader use.

Does TradingAgents come with an official REST API?

No. As of March 26, 2026, TradingAgents provides a CLI and Python package. Teams typically add a FastAPI layer as shown above.

What is the easiest way to use TradingAgents in a frontend app?

Do not call the Python framework directly from the frontend. Use a backend API that returns an analysis_id, then poll for results.

Why use Apidog with TradingAgents?

Apidog lets you import your OpenAPI schema, manage environments, store example requests, add assertions, and share the workflow—no need for teammates to reverse-engineer Python code.

Which TradingAgents settings are worth exposing in an API?

Start with ticker, analysis_date, provider, model choices, and research depth. Expand only if justified by your use case.

Can I keep the example job state in memory?

Only for prototyping or demos. For production, persist job state to a durable backend.

Is TradingAgents suitable for live financial decisions?

No. The public repo states it is a research framework and not financial advice. Use it for experimentation unless you implement your own controls and validation.
