Kang

I Built an AI Hedge Fund with 6 Agents in 650 Lines of Python

Most AI agent tutorials build chatbots. I wanted to build something that makes decisions under uncertainty - like a hedge fund investment committee.

So I did. Six specialized agents. ~650 lines of Python. No frameworks.

What I Built

A multi-agent trading system where AI agents work together as an investment team. Each agent has one job:

  • Research Agent - scrapes news, analyzes sentiment
  • Quant Agent - computes RSI, MACD, Bollinger Bands
  • Fundamentals Agent - evaluates P/E, cash flow, competitive moat
  • Strategy Agent - combines all signals into a buy/sell/hold decision
  • Risk Agent - enforces position sizing and stop losses
  • Debate Agent - Bull vs Bear argue before any trade happens

The pipeline runs sequentially. Each agent passes structured data to the next. At the end, the Execution Agent paper trades based on the committee's verdict.
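For flavor, here's a minimal sketch of the kind of indicator the Quant Agent computes. This plain-Python RSI uses simple averages rather than Wilder's smoothing and is illustrative only, not the repo's implementation:

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """Relative Strength Index over the last `period` price changes.

    Uses simple averages of gains/losses (Wilder's smoothing omitted for brevity).
    Returns a value in [0, 100]; >70 is conventionally overbought, <30 oversold.
    """
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

The real agent feeds numbers like this into the LLM prompt as context rather than asking the model to do arithmetic.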

Why No Frameworks?

I started with LangChain. Deleted it after a day.

The abstractions didn't match what I needed. My agents don't need memory, retrieval, or chat history. They need to call an LLM with a structured prompt and parse JSON output. That's it.

Every agent follows the same pattern:

import os

import httpx

def call_llm(prompt: str, system: str = "") -> str:
    """Call Gemini or OpenAI with an optional system prompt."""
    gemini_key = os.environ.get("GEMINI_API_KEY")
    openai_key = os.environ.get("OPENAI_API_KEY")

    if gemini_key:
        url = f"https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key={gemini_key}"
        parts = []
        if system:
            parts.append({"text": f"[System] {system}\n\n{prompt}"})
        else:
            parts.append({"text": prompt})
        r = httpx.post(url, json={"contents": [{"parts": parts}]}, timeout=30)
        r.raise_for_status()
        return r.json()["candidates"][0]["content"]["parts"][0]["text"]

    if openai_key:
        msgs = []
        if system:
            msgs.append({"role": "system", "content": system})
        msgs.append({"role": "user", "content": prompt})
        r = httpx.post("https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {openai_key}"},
            json={"model": "gpt-4o-mini", "messages": msgs}, timeout=30)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    raise RuntimeError("Set GEMINI_API_KEY or OPENAI_API_KEY")

One function. Works with Gemini or OpenAI. No wrapper library needed.

The Debate Agent - My Favorite Part

Chapter 7 is where it gets interesting. Instead of blindly trading on signals, the system stages a debate.

A Bull persona argues for the trade. A Bear argues against. Then a Judge delivers the verdict.

def debate(ticker: str, context: str = "") -> dict:
    """Run a Bull vs Bear debate with a Judge verdict."""
    info = context or f"Stock: {ticker}. Use your knowledge of this company."

    bull_system = ("You are a BULL analyst - aggressively optimistic. "
                   "Find every reason this stock will go UP. Be specific with data.")
    bear_system = ("You are a BEAR analyst - deeply skeptical. "
                   "Find every risk and reason this stock will go DOWN. Be specific.")
    prompt = (f"Give exactly 2 short, punchy arguments (1 sentence each) about {ticker}.\n"
              f"Context: {info}\nRespond as a JSON list of 2 strings. No markdown.")

    bull_raw = call_llm(prompt, system=bull_system)
    bear_raw = call_llm(prompt, system=bear_system)
    bull_args = _parse_args(bull_raw)
    bear_args = _parse_args(bear_raw)

    judge_prompt = (
        f"You are the JUDGE on an investment committee for {ticker}.\n\n"
        f"Bull arguments:\n" + "\n".join(f"  - {a}" for a in bull_args) +
        f"\n\nBear arguments:\n" + "\n".join(f"  - {a}" for a in bear_args) +
        f"\n\nDeliver your verdict. Respond JSON only:\n"
        f'{{"verdict":"PROCEED"|"REJECT","confidence":0.0-1.0,'
        f'"vote":"X-Y (bull-bear)","reasoning":"one sentence"}}'
    )
    judge_raw = call_llm(judge_prompt,
        system="You are an impartial investment committee judge.")
    # ... parse judge_raw into the verdict dict and return it

Three LLM calls. That's the entire debate. The output looks like this:

============================================================
  DEBATE: Should we trade NVDA?
============================================================

  BULL Case:
     • Oversold RSI + earnings beat + new product cycle = buy the dip
     • Cash flow growth justifies premium. CUDA moat is unbreakable

  BEAR Case:
     • P/E of 45x in a rate-hiking cycle is dangerous
     • Customer concentration risk - top 5 customers = 40% revenue

  --------------------------------------------------------
  Judge's Verdict: PROCEED
  Confidence: 73%
  Vote: 4-1
  Strong product cycle and moat outweigh valuation concerns
============================================================

People keep screenshotting this part.

Putting It All Together (Chapter 9)

The hedge fund orchestrator imports all six agents and runs them as a pipeline:

def run_hedge_fund(ticker: str, demo: bool = False):
    """Run the full investment committee pipeline."""
    ticker = ticker.upper()
    print(f"\n{'='*60}")
    print(f"  === AI HEDGE FUND - Investment Committee Meeting ===")
    print(f"{'='*60}")

    # Each agent runs in sequence, passing data forward
    research = run_research(ticker)         # → sentiment + headlines
    quant = run_quant(ticker)               # → RSI, MACD, signals
    funda = run_fundamentals(ticker)        # → P/E, moat, value
    strategy = run_strategy(quant, funda, research)  # → BUY/SELL/HOLD
    risk = assess_risk(ticker, price, strategy["decision"])  # → sizing (price from quant data)
    dbt = debate(ticker, context)           # → GO/NO-GO (context = the agents' summaries)
    # ... execute trade if verdict is PROCEED
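`assess_risk` appears in the pipeline but isn't shown in the post. Here is a hedged sketch of what a position-sizing step could look like; the 2% risk budget, 5% stop, default portfolio value, and field names are my assumptions, not the repo's:

```python
def assess_risk(ticker: str, price: float, decision: str,
                portfolio_value: float = 100_000.0) -> dict:
    """Size a position so a 5% stop-loss risks at most 2% of the portfolio."""
    if decision == "HOLD" or price <= 0:
        return {"ticker": ticker, "shares": 0, "stop_loss": None}
    risk_budget = 0.02 * portfolio_value  # max dollars to lose on this trade
    stop_loss = price * 0.95              # exit 5% below entry
    risk_per_share = price - stop_loss
    shares = int(risk_budget / risk_per_share)
    return {"ticker": ticker, "shares": shares, "stop_loss": round(stop_loss, 2)}
```

The point of a deterministic risk step is that it can veto or shrink anything the LLM agents propose; no prompt engineering is involved.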

Clone it, set an API key, run it. That's it.

git clone https://github.com/NeuZhou/ai-hedge-fund.git
cd ai-hedge-fund
export OPENAI_API_KEY=sk-...
python chapters/09_hedge_fund/hedge_fund.py --ticker NVDA

What I Learned

Agents don't need frameworks. If your agent is "call LLM, parse output, pass data forward," a framework adds complexity without value.

Structured output matters more than prompt engineering. Asking for JSON and parsing it is more reliable than trying to extract meaning from natural language.
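To make that concrete, here is a minimal sketch of the ask-for-JSON-then-validate pattern; the key names and the HOLD fallback are illustrative choices, not the repo's schema:

```python
import json

def parse_decision(raw: str) -> dict:
    """Parse an LLM reply expected to be a JSON object with a known schema.

    Any malformed or incomplete reply degrades to a safe HOLD instead of crashing
    the pipeline.
    """
    try:
        data = json.loads(raw)
        if isinstance(data, dict) and {"decision", "confidence"} <= data.keys():
            return data
    except (json.JSONDecodeError, TypeError):
        pass
    return {"decision": "HOLD", "confidence": 0.0}
```

The safe default matters: in a trading pipeline, "do nothing" is the correct response to a reply you can't parse.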

The debate pattern is surprisingly effective. Forcing the system to argue both sides catches things a single-pass system misses.

Keep it readable. The biggest compliment: "I read the whole codebase in 30 minutes." That's the goal.
