I started by tracking Orca LP rewards on Solana and ended up building a contextual bandit that learns when to buy, sell, or hold SOL based on price patterns and past trade outcomes.
This project combines:
- A Flask web app
- Reinforcement learning (contextual bandits)
- Live price prediction
- SOL/USDC liquidity pool monitoring
It evolved from a passive analytics tool into a smarter system for simulating trades using real-time data and historical trends.
Liquidity pools like Orca’s SOL/USDC offer passive rewards, but I wanted to go one step further:
What if you reinvested those rewards into SOL using a machine-learning model that understood price context?
That idea led to building a trading simulator that:
- Learns from market indicators (Sharpe ratio, momentum, SMA; see the sketch after this list)
- Evaluates the impact of each trade
- Tracks portfolio performance over time
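As a rough illustration of what those indicators involve, here's how they might be computed from a price series with pandas. This is a sketch under assumed conventions (a 24-period window of closing prices), not the repo's actual code:

```python
import pandas as pd

def build_context(prices: pd.Series, window: int = 24) -> dict:
    """Derive simple market indicators from a series of closing prices.

    Assumes len(prices) >= window. Names and window are illustrative.
    """
    returns = prices.pct_change().dropna()
    return {
        "sma": prices.rolling(window).mean().iloc[-1],           # simple moving average
        "sharpe": returns.mean() / returns.std(),                # per-period, risk-free rate ~ 0
        "momentum": prices.iloc[-1] / prices.iloc[-window] - 1,  # return over the window
    }
```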
🛠️ Tech Stack
- Python + Flask for the web app
- SQLite for local data persistence
- LiveCoinWatch API for SOL price data
- Helius API for Orca LP activity
- Contextual bandits (via `river`, or Vowpal Wabbit conceptually)
- Chart.js + Tailwind for the frontend
A contextual bandit is a type of reinforcement learning algorithm that chooses actions based on the current state (called "context") and learns from the reward.
In this case:
- Context = features like price deviation from 24h low/high, rolling mean, Sharpe ratio, momentum, and current portfolio state
- Actions = `buy`, `sell`, or `hold`
- Reward = based on realized/unrealized profit, market timing, and trade quality
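To make that concrete, here's a minimal epsilon-greedy contextual bandit in the same spirit, with one `river` reward regressor per action. The feature names, epsilon value, and reward figure are illustrative assumptions, not the app's actual implementation:

```python
import random
from river import linear_model

ACTIONS = ["buy", "sell", "hold"]
EPSILON = 0.1  # exploration rate (illustrative)

# One reward regressor per action: each predicts expected reward given the context.
models = {a: linear_model.LinearRegression() for a in ACTIONS}

def choose_action(context: dict) -> str:
    """Epsilon-greedy: explore occasionally, otherwise exploit the best predicted reward."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: models[a].predict_one(context))

def update(context: dict, action: str, reward: float) -> None:
    """Only the chosen action's model learns from the observed reward."""
    models[action].learn_one(context, reward)

# Hypothetical context built from the features described above.
context = {"pct_from_24h_low": 0.03, "sharpe": 1.2, "momentum": -0.01, "sol_held": 0.5}
action = choose_action(context)
update(context, action, reward=0.002)  # reward computed once the trade outcome is known
```

Over many simulated trades the per-action models learn which contexts favor each action, while the epsilon term keeps the policy exploring early on.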
📈 Key Features
- Live portfolio tracking and trade logs
- Reward calculations that factor in profit, timing, and trend alignment
- Automatic SQLite logging for all trades and portfolio snapshots
- Model state saved between runs
- (Optional) Price predictor using rolling mean and recent volatility (see the sketch after this list)
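A naive version of that optional predictor could use the rolling mean as the point estimate and recent return volatility as an uncertainty band; the window length here is an assumption, not the project's tuned value:

```python
import pandas as pd

def predict_next_price(prices: pd.Series, window: int = 24) -> tuple[float, float]:
    """Rolling mean as the forecast, recent volatility (in price terms) as the band."""
    point = prices.rolling(window).mean().iloc[-1]
    band = prices.pct_change().rolling(window).std().iloc[-1] * prices.iloc[-1]
    return float(point), float(band)
```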
🔍 Challenges & Tradeoffs
- The model needs time to learn. Early trades are often naive.
- There’s no "real" trading yet—only simulation.
- We're still experimenting with the best reward functions.
- Future goal: add backtesting and deploy the simulator against live SOL rewards.
Try it!
GitHub: https://github.com/JHenzi/OrcaRewardDashboard
To run locally:
- Clone the repo
- Set up a `.env` file with your API keys (a hypothetical example below)
- Run `flask run`
- Visit `http://localhost:5030` to explore the dashboard
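The exact variable names depend on the repo, so treat this `.env` as a hypothetical example; `FLASK_RUN_PORT` is a standard Flask CLI variable that makes `flask run` serve on port 5030, and Flask loads `.env` automatically when `python-dotenv` is installed:

```
LIVECOINWATCH_API_KEY=your-livecoinwatch-key
HELIUS_API_KEY=your-helius-key
FLASK_RUN_PORT=5030
```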
🛣️ What's Next
- Backtest with historical price data
- Refactor to make predictions available via API (see the sketch after this list)
- Possibly deploy as a hosted dashboard for live tracking
- Improve reward function based on more advanced trading signals
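That prediction API could start as a single Flask route; this sketch is an assumption about how it might look, with `load_recent_prices` standing in for however the app reads its SQLite price history:

```python
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)

def load_recent_prices() -> pd.Series:
    """Placeholder: the real app would pull recent SOL closes from SQLite."""
    return pd.Series([151.2, 150.8, 152.1, 153.0, 152.4, 151.9] * 5)

@app.route("/api/prediction")
def prediction():
    prices = load_recent_prices()
    point = prices.rolling(24).mean().iloc[-1]
    band = prices.pct_change().rolling(24).std().iloc[-1] * prices.iloc[-1]
    return jsonify({"predicted_price": float(point), "volatility_band": float(band)})
```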
Let's Talk?
Would love to hear feedback on:
- Reward strategy improvements
- Trading signal features to add
- How you’d use this if it supported real trading
Drop a comment below or reach out on GitHub!