I’ve been writing code for many years now. Not as much these days, but I started when I was eight, with BASIC. After this many years, you develop a particular mindset. It helps sometimes and causes problems other times: when you see a problem, you immediately think about writing a solution, when you should first investigate what’s already out there.
So when I see articles and hear people talking about AI agents and orchestration frameworks and multi-agent architectures and all that, my first instinct is: if you know code, you just need a few lines, a couple of API calls, and an LLM. That’s it. That’s your agent.
I wanted to test that instinct. So I mixed it with something I actually care about: trading.
What I Built
The idea was simple. I wanted an app that:
- Gets some input from me (optionally)
- Calls a few APIs to gather market data and news
- Sends everything to an LLM with the right context
- Saves the output for future runs (a basic form of memory)
- Sends me the result via email and push notification
I am not comfortable enough to let it place or cancel orders automatically. Maybe later. For now, I just want it to think and tell me what it thinks.
And what it tells me is pretty concrete. The output of each run includes a specific recommendation: WAIT, BUY at a particular price (or NOW), SELL at a particular price (or NOW), amend existing orders, and so on. It’s not a vague “the market looks bullish” — it’s an actionable next step with reasoning attached. I still make the final call, but the model does the legwork of pulling everything together into a decision.
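Since the recommendation has a fixed shape, the rest of the app can treat it as data, not prose. Here’s a sketch of how such a line could be parsed; the exact format (`WAIT`, `BUY NOW`, `BUY at 182.50`, and so on) is my assumption for illustration, not necessarily the app’s real output format.

```python
import re

# Hypothetical parser for the recommendation line. Assumes formats like
# "WAIT", "BUY NOW", "SELL at 190.00"; the real prompt may differ.
PATTERN = re.compile(
    r"^(?P<action>WAIT|BUY|SELL|AMEND)"
    r"(?:\s+(?:at\s+(?P<price>\d+(?:\.\d+)?)|(?P<now>NOW)))?\s*$",
    re.IGNORECASE,
)

def parse_recommendation(line: str) -> dict:
    m = PATTERN.match(line.strip())
    if not m:
        return {"action": "UNKNOWN", "raw": line}
    return {
        "action": m.group("action").upper(),
        "price": float(m.group("price")) if m.group("price") else None,
        "immediate": m.group("now") is not None,
    }
```

A structured result like this is what makes the output auditable: you can log it, diff it between runs, or refuse to act on anything that doesn’t parse.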
To keep things simple, I used Claude models. I already have a Pro account, so I just activated the API and topped it up. Then I asked Claude Code (Opus) to help me scaffold the whole thing. A Python app using LangChain, calling some APIs, talking to Sonnet, saving output, emailing me, sending push notifications.
I also put everything in a Docker container and deployed it to my local K3s cluster running on a Raspberry Pi 5. It runs as a set of cronjobs: once before the market opens, and a few times during market hours.
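For anyone curious what “a set of cronjobs on K3s” means concretely, here is roughly what one of those manifests looks like. The image name, schedule, and secret name are placeholders, not my actual values.

```yaml
# Sketch of one K3s CronJob; names and schedule are illustrative.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: trading-agent-premarket
spec:
  schedule: "30 7 * * 1-5"   # weekdays, before market open (cluster-local time)
  concurrencyPolicy: Forbid  # never let two runs overlap
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: agent
              image: registry.local/trading-agent:latest
              envFrom:
                - secretRef:
                    name: trading-agent-secrets  # API keys, broker credentials
```

The intraday runs are just more CronJobs with different schedules pointing at the same image.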
It does stuff. It actually works.
Now, I should be upfront: I am not a professional trader. I trade from time to time, and I am certainly not a millionaire. So I have to admit that part of it also feels a little bit speculative. But here’s the thing — the summary the LLM provides in its output, the rationale behind the decision, actually makes sense. When I have time to read the news myself (actual news, not AI-generated summaries), the reasoning lines up. It’s not just pulling recommendations out of thin air. It reads the same data I would read, and it reaches conclusions I can follow. That doesn’t make it right every time, but it makes it auditable, which is more than I can say for my gut feeling at 7 AM.
The Implementation Details
Let me go a bit deeper into what’s under the hood.
Why LangChain? Partly to learn it, and partly because I actually want to extend the solution. For example, in the future I could use Sonnet as a faster and cheaper model for the first pass — a quick sanity check — and then, if an actual trade decision is needed, escalate to Opus for verification. LangChain gives me a reasonable abstraction layer for that kind of routing. I also used Tavily for news data, configured as a tool inside LangChain. But I want to be clear — I could have done all of this without it. LangChain is a convenience here, not a requirement.
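The routing idea itself is small enough to show. The model calls are stubbed out here; in the real app they would be LangChain chat models (Sonnet for the cheap pass, Opus for verification), and the function names are mine.

```python
# Two-tier routing sketch: cheap first pass, escalate only when the
# cheap pass actually wants to trade. Model calls are stubbed.
from typing import Callable

def run_with_escalation(
    context: str,
    cheap_model: Callable[[str], str],   # fast first pass (e.g. Sonnet)
    strong_model: Callable[[str], str],  # verification pass (e.g. Opus)
) -> str:
    first = cheap_model(context)
    # A "WAIT" recommendation doesn't justify the cost of a second pass.
    if first.strip().upper().startswith("WAIT"):
        return first
    return strong_model(
        f"Verify this trade recommendation:\n{first}\n\nContext:\n{context}"
    )
```

The point is that the routing is ordinary control flow; LangChain just makes swapping the two models behind those callables convenient.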
Dynamic user prompt. The prompt I send to the model has several dynamic variables that get filled in at runtime: current position, pending orders, historical orders. To populate these, the app first makes API requests to my broker. This is just plain REST — no fancy orchestration needed.
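This step really is just string templating. A minimal version, with illustrative field names rather than my exact prompt:

```python
# Prompt assembly sketch. Template fields and data shapes are
# illustrative, not the app's real prompt.
PROMPT_TEMPLATE = """You are my trading analyst.

Current position:
{position}

Pending orders:
{pending_orders}

Recent order history:
{order_history}

Previous run summary:
{last_summary}
"""

def build_prompt(position, pending_orders, order_history, last_summary="(first run)"):
    return PROMPT_TEMPLATE.format(
        position=position,
        pending_orders="\n".join(pending_orders) or "(none)",
        order_history="\n".join(order_history) or "(none)",
        last_summary=last_summary,
    )
```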
No historical price data (yet). One gap worth mentioning: my broker doesn’t expose historical price data through its API, and I don’t want to pay a separate data provider for it. I’ll find a solution eventually, but for now the prompt doesn’t rely heavily on price charts or technical indicators. Instead, it leans on analysis and context from other sources on the internet — news, market commentary, analyst takes — which Tavily pulls in. It’s a trade-off, but it keeps things simple and cheap.
Local override mode. When the app runs locally (not as a cronjob), I have the option to feed it manual input. I can paste in some news I found interesting, or force a particular instrument price so the app doesn’t need to look one up. This turned out to be surprisingly useful for testing and for those moments when I want to say “forget the current price, assume it’s at X — what would you do?”
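The override mode is nothing more than a couple of command-line flags that short-circuit the data-fetching steps. The flag names here are illustrative:

```python
# Sketch of the local override flags; flag names are my invention.
import argparse

def parse_overrides(argv=None):
    parser = argparse.ArgumentParser(description="Run the agent locally")
    parser.add_argument("--price", type=float, default=None,
                        help="force an instrument price instead of looking it up")
    parser.add_argument("--news", type=str, default=None,
                        help="path to a file with manually pasted news")
    return parser.parse_args(argv)

# Later in the run, roughly:
#   price = args.price if args.price is not None else fetch_price_from_broker()
```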
Memory. I don’t save the whole LLM output. The system prompt has multiple steps, and one of them explicitly asks the model to produce a “run summary” — a condensed version of its reasoning and recommendation, designed to be useful as context on the next run. That summary gets saved to a file. On the next run, it’s loaded and injected into the prompt. So the model can see what it concluded last time, what it recommended, and whether conditions have changed. No vector database, no embeddings, no retrieval pipeline. Just a file with a structured summary that the model itself wrote for its future self.
Notifications. I use my own Gmail account to send emails to myself. For push notifications, I use my Home Assistant instance — it already knows how to push to my phone, so I just call its API. Simple, reliable, already running.
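The Home Assistant push is a single HTTP POST to its standard notify service endpoint. This sketch only builds the request; host, device name, and token are placeholders, and a real run would send it with `requests.post(url, headers=headers, json=payload)`.

```python
# Shape of the Home Assistant push call. Nothing is sent here; this
# just builds the request for HA's notify service API.
def build_ha_notification(host: str, device: str, token: str,
                          title: str, message: str):
    url = f"{host}/api/services/notify/mobile_app_{device}"
    headers = {
        "Authorization": f"Bearer {token}",  # HA long-lived access token
        "Content-Type": "application/json",
    }
    payload = {"title": title, "message": message}
    return url, headers, payload
```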
What It Costs
Each run costs about $0.20. That’s the Claude API usage for one cycle of: ingest context, read news, reason about the position, produce a recommendation. For the news data, I use Tavily’s free Researcher tier, which should be more than enough for the number of monthly runs I need at the moment. So the only real cost is the LLM.
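To put that in monthly terms, a back-of-the-envelope calculation (run counts are my rough estimates, not a hard schedule):

```python
# Rough monthly cost: one pre-market run plus a few intraday runs,
# weekdays only. All numbers are estimates.
COST_PER_RUN = 0.20   # USD, observed Claude API usage per cycle
RUNS_PER_DAY = 4      # 1 pre-market + 3 during market hours
TRADING_DAYS = 21     # typical trading days per month

monthly_cost = COST_PER_RUN * RUNS_PER_DAY * TRADING_DAYS
print(f"~${monthly_cost:.2f}/month")  # → ~$16.80/month
```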
A proper trader would probably want to optimise this. Maybe use a local model for some steps. Maybe look for a model specifically fine-tuned for financial reasoning. Maybe care about latency.
But for me? I just want to have fun and learn. $0.20 per run is a rounding error compared to what you’d lose on a single bad trade made on gut feeling at 7 AM before coffee.
The Architecture (If You Can Call It That)
Let me sketch what this looks like end to end:
- Cronjob triggers the Docker container on K3s (Raspberry Pi 5).
- Broker API calls fetch current positions, pending orders, and order history.
- Tavily tool (via LangChain) fetches relevant market news.
- Prompt assembly: all the dynamic data gets injected into the user prompt, along with the previous run’s output (memory).
- Claude Sonnet processes the full context and produces a recommendation.
- Output is saved to a file (next run’s memory).
- Email + push notification sent to me with the recommendation.
- Human in the loop (me) decides whether to act on the model’s recommendation.
That’s eight steps. Most of them are just API calls and string concatenation. The “agentic” part is really step 5 — the LLM reasoning over structured context with tool access. And step 8 is just a guy looking at his phone over coffee.
The Obvious Question
Now, the obvious question. If you’re a developer who can write code, why exactly do you need a fancy orchestration framework to build something like this?
I keep seeing products and platforms that promise to let you “build AI agents without code” or “orchestrate complex multi-agent workflows” with drag-and-drop interfaces and visual pipelines. And I get it — there’s a market for that. Not everyone writes code.
But if you do write code, what you need is:
- An LLM API
- A few REST calls
- Some string templating for prompts
- A file system (for memory)
- A way to send notifications
- A container runtime (optional, but nice)
- Some logging, if you need audit or debug data (add as needed — it’s just code)
That’s it. That’s the agent.
I’m not saying LangChain is strictly necessary either. I used it because I wanted to learn it and because the tool abstraction is convenient for the Tavily integration. But I could have done the same thing with raw API calls and a hundred lines of Python.
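To make that claim concrete, here is the raw-API version reduced to the one call that matters. This only builds the Anthropic Messages API request; sending it is a single `requests.post(url, headers=headers, json=body)`. The model name and key handling are placeholders.

```python
# Building the Anthropic Messages API call with no framework at all.
# Model name is a placeholder; the key comes from the environment.
import os

def build_llm_request(prompt: str, model: str = "claude-sonnet-4-20250514"):
    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "<your-key>"),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body
```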
The orchestration is just code. It always was.
The value of an LLM agent is not in the orchestration layer. It’s in the prompt design, the context assembly, and the quality of the model’s reasoning. Everything else is plumbing. And developers have been doing plumbing for decades.
Note: This is a personal learning project. Nothing here should be taken as financial advice. Also, as I’m not a native English speaker, I used an LLM to review and refine the language of this article while keeping my original tone and ideas.
Originally published on Medium.