How I built an AI-powered stock research system that runs itself
The Problem
Researching stocks manually is time-consuming. Every time I wanted to analyze a company, I had to:
- Find a stock idea
- Look up the company on screener.in
- Analyze financials, business, management, valuation, risks
- Make a decision
- Track it somewhere
That's 5+ steps for every single stock. Multiply by dozens of stocks per month — and suddenly you're spending hours every week just on research.
I wanted to automate this. Enter OpenClaw.
What is OpenClaw?
OpenClaw is an AI agent platform that runs on your machine. It can:
- Execute code and scripts
- Read/write files
- Manage cron jobs (scheduled tasks)
- Control a browser
- Send messages via Telegram, Discord, etc.
Think of it as having a personal AI assistant that can actually do things for you — not just answer questions.
The Architecture
Here's what I built:
┌──────────────────────────────────────────────────────────────┐
│                        OpenClaw Agent                        │
│                                                              │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐    │
│  │    Stock     │───▶│ Deep Research│───▶│   Research   │    │
│  │   Screener   │    │    Agent     │    │     Log      │    │
│  └──────────────┘    └──────────────┘    └──────────────┘    │
│                             │                                │
│                             ▼                                │
│  ┌──────────────────────────────────────────────────────┐    │
│  │                 8 Specialized Skills                 │    │
│  │  • Business Understanding     • Management Quality   │    │
│  │  • Fundamental Analysis       • Risk Assessment      │    │
│  │  • Moat Analysis              • Valuation Analysis   │    │
│  │  • Trade Plan                 • Thesis Synthesis     │    │
│  └──────────────────────────────────────────────────────┘    │
│                             │                                │
│                             ▼                                │
│  ┌──────────────────────────────────────────────────────┐    │
│  │                 Automated Cron Jobs                  │    │
│  │  • Every 20 min: Research stocks                     │    │
│  │  • Every Sunday: Send recommendations to Telegram    │    │
│  └──────────────────────────────────────────────────────┘    │
└──────────────────────────────────────────────────────────────┘
Step 1: Finding Stocks — The Screener Skill
First, I needed a way to get stock ideas. I created a stock-screener skill that:
- Fetches popular stock screens from screener.in
- Extracts stock data using a Python script
- Filters out stocks I already own or track (via skip.json)
- Returns 50 fresh stock ideas
python3 skills/stock-screener/fetch_stocks.py --max-stocks 50
The results go into memory/stock-screener/stocks.md — ready for research.
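The skip-list filtering can be sketched in a few lines. This is an illustrative sketch, not the actual `fetch_stocks.py` (the function name `filter_stocks` and the assumption that skip.json holds a flat JSON array of tickers are mine):

```python
# Sketch of the skip.json filtering step: drop tickers we already
# own or track, keep the rest as fresh research candidates.
import json
from pathlib import Path

def filter_stocks(candidates: list[str], skip_file: str = "skip.json") -> list[str]:
    """Remove tickers listed in skip.json; pass everything else through."""
    skip_path = Path(skip_file)
    # Assumes skip.json is a JSON array of ticker strings, e.g. ["TCS"]
    skip = set(json.loads(skip_path.read_text())) if skip_path.exists() else set()
    return [t for t in candidates if t not in skip]

fresh = filter_stocks(["RELIANCE", "TCS", "INFY"])
```

If skip.json is missing, nothing is filtered, so the screener output passes through unchanged.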
Step 2: Deep Research Agent — 8 Skills in Sequence
This is the core. I built a stock_deep_research_agent skill that uses 8 specialized sub-agents:
Wave 1 (Parallel)
- Business Understanding — Company description, revenue segments, business model, industry context, lifecycle stage
- Fundamental Analysis — Financials, cash flow, balance sheet health, peer comparison
Wave 2 (Parallel)
- Moat Analysis — Competitive advantage, moat types (brand, network effects, switching costs, etc.)
- Management Quality — Capital allocation, promoter skin in the game, governance
Wave 3 (Parallel)
- Risk Assessment — Business risks, financial risks, macro risks, execution risks
- Valuation Analysis — PEG, PE, EV/EBITDA, DCF, P/FCF methods to find fair value
Wave 4 (Sequential)
- Trade Plan — Buy/sell price tiers, position sizing, hold duration, expected returns
- Thesis Synthesis — Bull case, bear case, conviction score, final recommendation
Each sub-agent runs as an isolated OpenClaw sub-agent and saves its output to markdown files. Finally, all outputs are compiled into a master report.
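The wave pattern above can be sketched with plain asyncio. The helper names here are hypothetical (OpenClaw's real sub-agent API is not shown in this post); the point is the shape: skills within a wave run concurrently, waves run in order because later skills read earlier outputs, and the final wave is sequential because the trade plan feeds the synthesis.

```python
# Illustrative orchestration sketch, not OpenClaw's actual API.
import asyncio

WAVES = [
    ["business-understanding", "fundamental-analysis"],  # Wave 1 (parallel)
    ["moat-analysis", "management-quality"],             # Wave 2 (parallel)
    ["risk-assessment", "valuation-analysis"],           # Wave 3 (parallel)
    ["trade-plan", "thesis-synthesis"],                  # Wave 4 (sequential)
]

async def run_skill(skill: str, ticker: str) -> str:
    # Placeholder for spawning an isolated sub-agent; in the real system
    # each skill writes its own markdown file under the ticker's folder.
    await asyncio.sleep(0)
    return f"memory/stock-research/{ticker}/{skill}.md"

async def research(ticker: str) -> list[str]:
    outputs: list[str] = []
    for i, wave in enumerate(WAVES):
        if i == len(WAVES) - 1:
            # Last wave runs one-by-one: synthesis depends on the trade plan.
            for skill in wave:
                outputs.append(await run_skill(skill, ticker))
        else:
            outputs += await asyncio.gather(*(run_skill(s, ticker) for s in wave))
    return outputs

reports = asyncio.run(research("RELIANCE"))
```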
Step 3: Automation — Cron Jobs
Here's where the magic happens. I set up two cron jobs:
Cron 1: Stock Research (Every 20 Minutes)
name: stock-research
schedule: every 20 minutes
message: |
  1. Read to_be_analysed.md for stocks to research
  2. Check log.md — skip if analyzed in last 30 days
  3. If no stocks left, fetch 50 new stocks from screener
  4. If stocks available, run deep research skill
  5. Update log.md with results
  6. Save all research to memory/stock-research/<ticker>/
How it works:
- Every 20 minutes, it checks if there's a stock to analyze
- If the queue is empty, it fetches 50 new stocks from screener.in
- If there's a stock, it runs the full 8-stage research
- Results are saved to markdown files
- Only processes ONE stock per run (to avoid timeouts)
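The per-run decision logic is simple enough to sketch. The file names come from the post; the parsed-log shape (a ticker-to-date mapping) and the function name `pick_next` are my assumptions:

```python
# Sketch of the per-run queue check: return one ticker to research,
# or None when the queue is exhausted (meaning: refill from the screener).
from datetime import datetime, timedelta
from typing import Optional

def pick_next(queue: list[str], log: dict[str, datetime], now: datetime) -> Optional[str]:
    """Pick the first queued ticker not analyzed in the last 30 days."""
    cutoff = now - timedelta(days=30)
    for ticker in queue:
        last = log.get(ticker)
        if last is None or last < cutoff:
            return ticker  # only ONE stock per run, to stay under the timeout
    return None  # empty/stale queue: fetch 50 fresh stocks from the screener

now = datetime(2026, 1, 15)
log = {"RELIANCE": datetime(2026, 1, 10)}  # analyzed 5 days ago, still fresh
next_ticker = pick_next(["RELIANCE", "TCS"], log, now)  # -> "TCS"
```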
Cron 2: Weekly Recommendations (Sunday 9 AM)
name: Weekly Stock Ideas
schedule: Sunday 9:00 AM
message: |
  1. Check log.md for stocks analyzed in last 7 days
  2. Filter for BUY or STRONG BUY recommendations
  3. Exclude stocks already in portfolio or watchlist
  4. Send to Telegram with ticker, price, fair value, conviction score
Every Sunday morning, I get a Telegram message like:
📈 Stock Recommendations
• TATASTEEL — ₹180 | Fair Value: ₹250 | Conviction: 82/100
• COALINDIA — ₹431 | Fair Value: ₹550 | Conviction: 85/100
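The digest filter behind that message boils down to three conditions. This is a sketch under assumed log fields (`ticker`, `rating`, `date`, `price`, `fair_value`, `conviction`); the actual log.md schema isn't shown in the post:

```python
# Sketch of the Sunday digest filter: recent + BUY-rated + not already held.
from datetime import datetime, timedelta

def weekly_picks(entries: list[dict], held: set[str], now: datetime) -> list[str]:
    """Build the Telegram digest lines from the last 7 days of log entries."""
    cutoff = now - timedelta(days=7)
    lines = []
    for e in entries:
        if (e["date"] >= cutoff
                and e["rating"] in {"BUY", "STRONG BUY"}
                and e["ticker"] not in held):
            lines.append(f"• {e['ticker']} — ₹{e['price']} | "
                         f"Fair Value: ₹{e['fair_value']} | "
                         f"Conviction: {e['conviction']}/100")
    return lines
```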
File Structure
All research lives in the workspace:
memory/stock-research/
├── log.md                   # Research execution log
├── to_be_analysed.md        # Queue of stocks to research
└── RELIANCE/
    ├── raw/profile.md       # Raw data from screener
    ├── business.md          # Business analysis
    ├── financials.md        # Financial analysis
    ├── moat.md              # Competitive moat
    ├── management.md        # Management quality
    ├── risk.md              # Risk assessment
    ├── valuation.md         # Valuation analysis
    ├── trade_plan.md        # Trade plan
    ├── synthesis.md         # Final thesis
    └── RELIANCE.md          # Master report
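Compiling the master report from this tree is just ordered concatenation. A minimal sketch (section order mirrors the tree above; the function name and header format are illustrative):

```python
# Sketch: stitch the per-skill section files into <ticker>.md.
from pathlib import Path

SECTIONS = ["business", "financials", "moat", "management",
            "risk", "valuation", "trade_plan", "synthesis"]

def compile_report(ticker: str, root: str = "memory/stock-research") -> str:
    """Concatenate the section files, in analysis order, into the master report."""
    base = Path(root) / ticker
    base.mkdir(parents=True, exist_ok=True)
    parts = [f"# {ticker} Research Report"]
    for name in SECTIONS:
        section = base / f"{name}.md"
        if section.exists():  # tolerate a skill that produced no output
            parts.append(section.read_text())
    report = "\n\n".join(parts)
    (base / f"{ticker}.md").write_text(report)
    return report
```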
Tools Used
| Tool | Purpose |
|---|---|
| stock-screener | Fetch stock ideas from screener.in |
| fundamental-stock-analysis | Analyze financials (from ClawHub) |
| business-understanding | Company business analysis |
| moat-analysis | Competitive advantage analysis |
| management-quality | Management team assessment |
| risk-assessment | Risk identification |
| valuation-analysis | Fair value calculation |
| trade-plan | Entry/exit strategy |
| thesis-synthesis | Final recommendation |
Results
- Coverage: 100+ stocks analyzed per week (automatically)
- Consistency: Same quality analysis every time
- Convenience: Weekly Telegram updates with buy ideas
This entire system runs on a MacBook Pro, automated via OpenClaw. No server needed — just scheduled AI power.