The Setup
Two weeks ago, I had an idea: What if I could build an AI system that predicts 15-minute price movements of Solana on Kalshi (a prediction market platform)?
Not a full trading algorithm. Not a crypto hedge fund. Just: can an automated system, using real technical signals + real-time sentiment analysis, consistently predict whether SOL goes up or down in 15 minutes?
By April 12, 2026, the answer was yes. The system is live. It's trading real money. It's winning 55% of the time overall, and 69% when it focuses on what actually works.
Here's what actually happened — not the polished version. The real one.
The Idea (April 4)
Kalshi runs binary markets. You bet "yes" or "no" on whether something happens. For crypto, they offer KXSOL15M — a market that resolves to YES if SOL is higher in 15 minutes, NO if it's lower.
Most retail traders hate binary markets because you're forced to pick a direction. No hedging. No sizing down. You're right or you're wrong.
But for an AI system, that's perfect. It's a classification problem: given all available data right now, will the next 15-minute candle close higher?
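As a toy illustration of that framing (candle field names are mine, not Kalshi's or Binance's), each 15-minute candle reduces to a binary label:

```javascript
// Toy framing: did the 15-minute candle close higher than it opened?
// That single bit is the classification target.
function labelCandle(candle) {
  return candle.close > candle.open ? "UP" : "DOWN";
}

const candles = [
  { open: 148.20, close: 149.05 }, // closed higher -> the YES side wins
  { open: 149.05, close: 148.70 }, // closed lower  -> the NO side wins
];
const labels = candles.map(labelCandle);
console.log(labels); // [ 'UP', 'DOWN' ]
```

Everything else in the system exists to predict that one bit better than a coin flip.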
I sketched out 8 signals:
- RSI (overbought/oversold)
- EMA crossovers
- Volume spikes
- Momentum
- Bollinger Bands
- Higher timeframe alignment
- Order book imbalance
- Sentiment analysis (via Perplexity + Gemini LLM)
Combine them, score directionally, place a bet when confidence is high.
That's the engine.
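A minimal sketch of what that scoring could look like. The weights, names, and voting scheme below are illustrative, not the actual pre-check.js; the point is the shape: each signal votes a direction, votes are weighted, and the stronger side's total becomes the confidence score.

```javascript
// Hypothetical weights per signal (sums to 100 for a 0-100 score).
const SIGNAL_WEIGHTS = {
  rsi: 15, emaCross: 15, volumeSpike: 10, momentum: 15,
  bollinger: 10, htfAlignment: 15, orderBookImbalance: 10, sentiment: 10,
};

// signals: { rsi: +1 | 0 | -1, ... } where +1 = bullish, -1 = bearish
function scoreSignals(signals) {
  let bull = 0, bear = 0;
  for (const [name, weight] of Object.entries(SIGNAL_WEIGHTS)) {
    const vote = signals[name] ?? 0; // missing signal = no vote
    if (vote > 0) bull += weight;
    if (vote < 0) bear += weight;
  }
  const direction = bull >= bear ? "UP" : "DOWN";
  return { direction, score: Math.max(bull, bear) };
}

// Four bullish signals firing together:
console.log(scoreSignals({ rsi: 1, emaCross: 1, momentum: 1, htfAlignment: 1 }));
// { direction: 'UP', score: 60 }
```

A trade only gets placed when the score clears a threshold, which is where the score brackets later in this post come from.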
Week 1: Everything Seemed Smart (April 5-7)
I built pre-check.js (the signal scorer), analyze-sentiment.js (the LLM layer), and paper-trade.js (simulate trades without real money).
The first few trades were winners. I was gassed. The signals were working. The sentiment analysis was firing. Everything looked intelligent.
Then I added a hard RSI filter. The logic was airtight: "If RSI is in a danger zone, skip the trade."
I paper-traded for a week with that filter active.
The result? Out of 11 filtered signals (trades I didn't take because of the RSI gate):
- 10 would have been winners
- 1 would have been a loser
The filter I was sure would protect the system was right only 9% of the time: of its 11 blocks, just one was justified.
It was killing winners to avoid one loser. I deleted it on April 7.
That decision alone taught me more about real trading than a year of reading about it. Sometimes your best defense is your worst offense.
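The only reason I could catch this was that blocked signals were logged anyway, then scored against what would have happened. A sketch of that audit (field names are mine):

```javascript
// When a gate blocks a signal, record it anyway and later check how
// often the block was actually justified (i.e. the trade would have lost).
function filterHitRate(blockedTrades) {
  const justified = blockedTrades.filter(t => t.wouldHaveWon === false).length;
  return justified / blockedTrades.length;
}

// The RSI gate's record: 11 blocked signals, 10 would have won.
const blocked = [
  ...Array.from({ length: 10 }, () => ({ wouldHaveWon: true })),
  { wouldHaveWon: false },
];
console.log((filterHitRate(blocked) * 100).toFixed(0) + "%"); // "9%"
```

If you never log what a filter blocks, you can never find out it's wrong.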
The Data Starts Talking (April 8-10)
I ran 69 paper trades. The stats engine broke down performance by:
- Session (NY Market, NY Afterhours, NY Evening, NY Overnight)
- Direction (UP vs DOWN)
- Score bracket (60-64, 65-69, 70-74, 75-79, 80+)
- Signal combination (which signals fired together)
The patterns were obvious:
Session breakdown:
- NY Afterhours: 68% win rate (THIS IS THE GOLDEN WINDOW)
- NY Market: 59% win rate
- NY Evening: 38% win rate
- NY Overnight: 38% win rate
Direction breakdown:
- UP signals: 69% win rate
- DOWN signals: 45% win rate
DOWN was a coin flip. UP was actually profitable. I could've spent months trying to make DOWN work. Instead, the data said: disable it.
Score bracket:
- 60-64: 50% win rate
- 65-69: 50% win rate
- 70-74: 54% win rate
- 75-79: 75% win rate
- 80+: 80% win rate
Higher scores = higher win rates. The system was working.
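All three breakdowns above are the same grouping pass with a different key. A sketch of that logic (trade shape is illustrative, not the actual stats engine):

```javascript
// Group trades by an arbitrary key and compute per-bucket win rates.
function winRateBy(trades, keyFn) {
  const buckets = {};
  for (const t of trades) {
    const k = keyFn(t);
    buckets[k] ??= { wins: 0, total: 0 };
    buckets[k].total++;
    if (t.won) buckets[k].wins++;
  }
  const out = {};
  for (const [k, b] of Object.entries(buckets)) out[k] = b.wins / b.total;
  return out;
}

// The three breakdowns are just three key functions:
//   winRateBy(trades, t => t.session)
//   winRateBy(trades, t => t.direction)
//   winRateBy(trades, t => `${Math.floor(t.score / 5) * 5}+`)
const trades = [
  { session: "NY Afterhours", won: true },
  { session: "NY Afterhours", won: false },
  { session: "NY Market", won: true },
];
console.log(winRateBy(trades, t => t.session));
// { 'NY Afterhours': 0.5, 'NY Market': 1 }
```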
The Risk Management Layer (April 10)
By April 10, the engine was winning, but compounding was risky. After a 3-trade winning streak, the balance had compounded from $500 → $781. Then a single loss at full size brought it back to $585.
I added three risk rules:
Rule 1: Drawdown Protection
If balance drops 20% from peak, cap all trades at $50 until recovery. Prevents catastrophic blowout during losing streaks.
Rule 2: Consecutive Loss Cooldown
After 2 consecutive losses, pause trading for 30 minutes. Forces the system to step back instead of revenge-trading.
Rule 3: Consecutive Win Reset
After 3 consecutive wins, reset the next trade to $50. Locks in compounded gains before re-exposing larger capital.
Each rule was added because the data showed a near-disaster that happened when I didn't have it.
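The three rules compose into a single pre-trade check. This is a sketch with my own state-field names; the thresholds ($50 floor, 20% drawdown, 2 losses / 30 minutes, 3 wins) are the ones described above.

```javascript
const BASE_SIZE = 50; // dollars

function riskCheck(state, now = Date.now()) {
  // Rule 2: after 2 consecutive losses, pause trading for 30 minutes.
  if (state.consecutiveLosses >= 2 &&
      now - state.lastLossAt < 30 * 60 * 1000) {
    return { allowed: false, size: 0 };
  }
  // Rule 1: balance down 20% from peak -> cap every trade at $50.
  if (state.balance <= state.peakBalance * 0.8) {
    return { allowed: true, size: BASE_SIZE };
  }
  // Rule 3: after 3 consecutive wins, reset to $50 to lock in gains.
  if (state.consecutiveWins >= 3) {
    return { allowed: true, size: BASE_SIZE };
  }
  return { allowed: true, size: state.nextSize };
}
```

Note the ordering: the cooldown is checked first, because no sizing rule matters if the system shouldn't be trading at all.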
Going Live (April 12)
By April 12, I was confident. The system had been tested on 69 paper trades, the risk management was solid, and the data was clear about what worked (UP in NY Afterhours at high confidence).
I wrote live-trade.js — the script that places real orders on Kalshi — flipped the LIVE_TRADING_ENABLED flag, and went live with $500.
The system now runs 24/7 on a Lenovo ThinkCentre M75n under Ubuntu. Every 5 minutes, it:
- Fetches SOL price data from Binance
- Calculates all 8 signals
- If confidence is high, fetches news sentiment via Perplexity
- If the LLM agrees, places a real order on Kalshi
- When 15 minutes pass, resolves the trade and updates balance
Total API cost: less than $1/day. Most of that is Perplexity. Gemini Flash is effectively free at this volume.
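A condensed sketch of one tick of that loop. Every external call is stubbed here so the flow is runnable; the real live-trade.js talks to Binance, Perplexity, and Kalshi instead, and the function names and threshold are placeholders of mine.

```javascript
const CONFIDENCE_THRESHOLD = 75; // illustrative cutoff

async function tick(deps) {
  const candles = await deps.fetchCandles();             // Binance SOL 15m data
  const { direction, score } = deps.scoreSignals(candles);
  if (score < CONFIDENCE_THRESHOLD) return "skipped: low confidence";
  const sentiment = await deps.fetchSentiment();         // Perplexity + Gemini
  if (sentiment !== direction) return "skipped: LLM disagrees";
  return deps.placeOrder("KXSOL15M", direction);         // real order on Kalshi
}

// Dry run with stubs standing in for the real integrations:
const deps = {
  fetchCandles: async () => [],
  scoreSignals: () => ({ direction: "UP", score: 80 }),
  fetchSentiment: async () => "UP",
  placeOrder: async (market, dir) => `ordered ${dir} on ${market}`,
};
tick(deps).then(console.log); // "ordered UP on KXSOL15M"

// In production: setInterval(() => tick(realDeps).catch(console.error), 5 * 60 * 1000);
```

Injecting the dependencies like this is also what makes paper trading and live trading share one code path: paper-trade.js can pass a fake `placeOrder` while everything upstream stays identical.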
What This Actually Teaches
Most posts about "building an AI system" skip the ugly parts. They skip the filters that don't work. They skip the directional asymmetry that contradicts your intuition. They skip the moment you realize your best idea was your worst idea.
This is the real story.
The framework works:
- Build something that has a theory behind it
- Test it on simulated data
- Look at the breakdowns — session, direction, confidence level, signal combinations
- Delete everything the data says isn't working
- Add risk management in response to real near-disasters, not hypotheticals
- Go live when you're confident, but small enough that you're still testing
This isn't specific to trading. This is how you build any autonomous system — whether it's a trading bot, a customer service AI, or a content generation engine.
You have an idea. Data tells you what's working. You ruthlessly remove what isn't. You compound what is.
What's Next
The engine is live and running. I'm monitoring it daily. Once we have 30+ live trades with strong performance in NY Afterhours, we'll know if this actually works with real money (psychology, slippage, all the things that change between paper and live).
If it does, the next questions are:
- Can we apply this to other markets on Kalshi? (BTC, ETH, macro events)
- Can we build prediction market bots for other platforms? (Polymarket, Metaculus)
- What happens when we expand the signal set?
But that's future work. Right now, the bot is live. The code is running. The results will tell us if we were right.
For Builders Reading This
If you're building your first autonomous system — whether it's trading, content generation, or anything else — here's what matters:
Get to a testable version fast. Don't spend months perfecting signals. Build something that works 60% of the time and iterate.
Let the data kill your ideas. You will have smart ideas that the data proves are dumb. The RSI filter was logically sound. It had a 9% accuracy rate. Delete it.
Add risk management in response to real problems, not hypotheticals. The consecutive win reset wasn't in the original design. A 3-trade streak compounding too far made it necessary. Let reality inform your architecture.
Go live small. Paper trading is useful, but it's not real. $500 on Kalshi teaches me more than 1,000 paper trades ever could.
If you want to build your own AI system (trading or otherwise), the infrastructure is out there. Kalshi's API works. Perplexity's API works. Gemini Flash costs pennies. Your laptop can run it 24/7.
The barrier to entry isn't capital or education. It's the willingness to build something that might not work, test it ruthlessly, and iterate.
That's the actual moat.
Vitoshi | @devCharizard
Built a BTC trading radar, Mewtwo (ops agent), and now this. The pattern is the same: code a hypothesis, run real data through it, delete what doesn't work, compound what does.
Want to build your own AI system? Start here: nulllimit.gg/agents — fully configured AI agent setup for whatever you're building.
Or if you want the full framework for your first autonomous system: nulllimit.gg/ebooks — the complete guide.