Raviteja Nekkalapu

I built an open-source, free stock analysis tool with 55+ data dimensions


The Problem

I invest in stocks on the side. My workflow looked like this:

  1. Open Screener.in for fundamentals
  2. Open TradingView for technicals
  3. Ask Perplexity for a quick summary
  4. Manually check insider trading on SEC EDGAR
  5. Look up institutional holders somewhere else

Five tools.
None of them talked to each other.
And I was still missing actual valuation models, moat analysis, and any kind of fact-checking on the AI summaries.

Bloomberg does all of this in one place. It also costs $24,000 per year.

So I built one tool that does it for free.

What It Does

You enter a stock ticker. Eight seconds later, you get a report with:

  • Nipun Score - A proprietary A+ to F letter grade based on a weighted composite of technicals (25%), fundamentals (25%), sentiment (20%), risk (15%), and insider activity (15%)
  • Three valuation models - DCF, Benjamin Graham Number, Peter Lynch Fair Value
  • Scenario analysis - Bull, Base, and Bear price targets with probability estimates
  • Competitive moat - Wide, Narrow, or None with moat sources (brand, network effects, switching costs, etc.)
  • SWOT analysis - AI-generated strengths, weaknesses, opportunities, threats
  • Insider trading - Recent executive buys and sells with dollar amounts
  • Institutional ownership - Top 10 holders with position changes
  • ...and about 20 more sections
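The Nipun Score's weighted composite can be sketched in a few lines. This is a minimal illustration of the weighting described above, not the project's actual implementation — the letter-grade cutoffs in particular are hypothetical, since the post only specifies the pillar weights:

```typescript
// Pillar scores on a 0–100 scale; weights are the ones stated in the post.
type Pillars = {
  technicals: number;
  fundamentals: number;
  sentiment: number;
  risk: number;
  insider: number;
};

const WEIGHTS: Pillars = {
  technicals: 0.25,
  fundamentals: 0.25,
  sentiment: 0.2,
  risk: 0.15,
  insider: 0.15,
};

function nipunScore(scores: Pillars): string {
  // Weighted composite: sum of pillar score × pillar weight.
  const composite = (Object.keys(WEIGHTS) as (keyof Pillars)[]).reduce(
    (sum, k) => sum + scores[k] * WEIGHTS[k],
    0
  );
  // Illustrative grade bands (assumed, not from the project).
  if (composite >= 90) return "A+";
  if (composite >= 80) return "A";
  if (composite >= 70) return "B";
  if (composite >= 60) return "C";
  if (composite >= 50) return "D";
  return "F";
}
```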

The Architecture

The backend is a serverless worker that runs a 4-phase pipeline:

Phase 1 - Data Collection (~2-3s)
10+ parallel API calls to Finnhub (financials, peers, insider trades, earnings), SEC EDGAR (10-K, 10-Q filings), Reddit RSS (sentiment), and Yahoo RSS (news).
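The fan-out pattern here is the interesting bit: every source fires at once, and one slow or failed source doesn't block the rest. A minimal sketch with `Promise.allSettled` (the fetcher map is illustrative, not the project's real API surface):

```typescript
// Each data source is an async fetcher; a rejected fetcher becomes null
// instead of failing the whole batch (Phase 1 tolerance for flaky APIs).
type Fetcher = () => Promise<unknown>;

async function collectAll(
  fetchers: Record<string, Fetcher>
): Promise<Record<string, unknown>> {
  const names = Object.keys(fetchers);
  // All calls run in parallel; allSettled never rejects.
  const results = await Promise.allSettled(names.map((n) => fetchers[n]()));
  const out: Record<string, unknown> = {};
  results.forEach((r, i) => {
    out[names[i]] = r.status === "fulfilled" ? r.value : null; // null → mock data later
  });
  return out;
}
```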

Phase 2 - Compute (~5ms)
Zero API calls.
Pure math.
This phase calculates Altman Z-Score, Piotroski F-Score, DCF valuation, Graham Number, momentum scoring, risk-reward ratios, and dividend safety. All deterministic.
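As an example of why this phase needs no API calls: the Graham Number mentioned above has a standard closed form, fair value ≈ √(22.5 × EPS × book value per share). A sketch (my own illustration, not the project's code):

```typescript
// Benjamin Graham's fair-value estimate: sqrt(22.5 × EPS × BVPS).
// The 22.5 constant comes from Graham's limits of P/E ≤ 15 and P/B ≤ 1.5.
function grahamNumber(eps: number, bookValuePerShare: number): number | null {
  // The formula is undefined for non-positive earnings or book value.
  if (eps <= 0 || bookValuePerShare <= 0) return null;
  return Math.sqrt(22.5 * eps * bookValuePerShare);
}

// e.g. grahamNumber(10, 40) = sqrt(22.5 * 10 * 40) = sqrt(9000) ≈ 94.87
```

Pure arithmetic like this is deterministic and essentially free, which is why the whole phase runs in ~5ms.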

Phase 3 - AI Synthesis (~3-5s)
Two parallel calls to Google Gemini.
One generates the main analysis.
The other generates premium insights (scenario analysis, moat, SWOT, investment thesis). I built a 5-model cascade: 2.5 Pro → 2.0 Flash → 1.5 Pro → Flash → Lite. If one model rate-limits, it falls through to the next automatically.
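The cascade is just an ordered fall-through loop. A sketch of the pattern, assuming a generic `call(model, prompt)` function — the model IDs here are illustrative stand-ins for the five tiers named above:

```typescript
// Try each model in priority order; any failure (rate limit, outage,
// model-specific error) falls through to the next tier.
const CASCADE = [
  "gemini-2.5-pro",
  "gemini-2.0-flash",
  "gemini-1.5-pro",
  "gemini-1.5-flash",
  "gemini-flash-lite",
];

async function generateWithFallback(
  prompt: string,
  call: (model: string, prompt: string) => Promise<string>
): Promise<string> {
  let lastError: unknown;
  for (const model of CASCADE) {
    try {
      return await call(model, prompt);
    } catch (err) {
      lastError = err; // rate-limited or down: try the next model
    }
  }
  throw lastError; // only reached if every tier failed
}
```

The user only sees an error if all five tiers fail in the same request.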

Phase 4 - Second Opinions (~1-2s)
Cerebras generates a contrarian take.
Cohere runs a fact audit, classifying every AI-generated claim as grounded, speculative, or unverifiable. Both are non-fatal — if they fail, the report still ships.

Security

This was important to me. The tool uses a BYOK (Bring Your Own Keys) model:

  • API keys encrypted client-side with AES-256-GCM
  • Key derivation: PBKDF2 with SHA-256, 100K iterations
  • 16-byte random salt, 12-byte random IV
  • Keys sent via X-Nipun-Keys header, never in request body
  • Worker processes keys in memory, never persists them
  • Everything uses the Web Crypto API — zero npm dependencies for crypto

The Interesting Technical Decisions

1. Mock data fallback on every API call

Every single external call is wrapped in try/catch with a fallback to mock data. This means:

  • A bad API response can never crash the app
  • Demo mode works using the same fallback path
  • You can run the full UI without any API keys

2. Cascading AI models

Instead of relying on one model, I chain five. Rate limits, outages, and model-specific failures are handled transparently. The user never sees an error.

3. Contrarian AI

Most AI tools agree with themselves. I deliberately ask a second model to disagree with the first. This gives you both sides of the argument instead of just confirmation bias.

How to Try It

```
npx nipun-ai
```

Or check the live demo at nipun-ai.pages.dev (uses mock data, no keys needed).

GitHub: github.com/myProjectsRavi/Nipun-AI

MIT licensed.
All APIs have free tiers.
Feedback welcome.
