A deep dive into building MoneySense.ai — from idea to 430+ users
I spent years reading financial news the hard way. Every morning, I'd open 15+ tabs — Seeking Alpha, Bloomberg, Reuters, Yahoo Finance — and spend hours trying to separate signal from noise. When earnings season hit, I'd wade through 200-page 10-K filings, desperately searching for the one paragraph that actually mattered.
One day, I thought: What if AI could do this for me?
Six months later, MoneySense.ai was born — a Chrome extension that instantly analyzes any financial article or SEC filing and gives you:
- A TL;DR summary
- Sentiment analysis (bullish/bearish/neutral)
- Key pros and cons
- Relevant stock tickers
In this post, I'll walk you through exactly how I built it — the tech stack, the challenges, and the lessons learned along the way.
Table of Contents
- The Problem I Was Solving
- Choosing the Tech Stack
- Building the Chrome Extension
- The AI Analysis Engine
- Handling SEC Filings
- Backend Architecture
- Authentication & Payments
- Challenges & Solutions
- Launch & Growth
- What I'd Do Differently
- Key Takeaways
The Problem I Was Solving
As a retail investor, I faced three major pain points:
1. Information Overload
The average 10-K filing is 150-250 pages. The average investor has maybe 30 minutes. The math doesn't work.
2. Jargon Fatigue
Financial documents are written by lawyers, for lawyers. Terms like "material adverse effects," "going concern," and "non-GAAP reconciliation" make most people's eyes glaze over.
3. Hidden Sentiment
Is this article bullish or bearish? Sometimes it's obvious. Often, it's buried under layers of hedging language and qualifications. Professional analysts are trained to spot these signals. Retail investors? Not so much.
I wanted to build something that would level the playing field — give individual investors the same quick-analysis capabilities that Wall Street analysts take for granted.
Choosing the Tech Stack
After evaluating several options, I landed on this stack:
Frontend (Chrome Extension)
- Framework: Vanilla JavaScript + Tailwind CSS
- Build Tool: Vite with CRXJS plugin
- UI Components: Custom-built, lightweight
Backend
- Runtime: Bun (blazing fast, TypeScript-native)
- Framework: Hono.js (lightweight, works everywhere)
- Database: PostgreSQL with Drizzle ORM
- Hosting: DigitalOcean App Platform
AI/ML
- Primary Model: Google Gemini 2.5 Flash-Lite
- Fallback: OpenAI GPT-4o-mini
- Embeddings: For future semantic search features
Payments
- Provider: Polar.sh (Merchant of Record)
- Why: Handles VAT, taxes, invoicing globally
Why These Choices?
Bun + Hono over Node + Express:
Bun starts in milliseconds, has native TypeScript support, and Hono is incredibly lightweight (14KB). For a side project where I'm paying for compute, every millisecond matters.
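To make that concrete, here's roughly what a Hono entrypoint looks like under Bun (a minimal sketch, not my actual server):
// index.ts — minimal sketch of a Bun + Hono entrypoint.
// Bun picks up the default export's fetch handler; no http.createServer boilerplate.
import { Hono } from 'hono';

const app = new Hono();
app.get('/health', (c) => c.json({ status: 'ok' }));

export default app; // `bun run index.ts` serves this (port 3000 by default)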
Gemini Flash-Lite over GPT-4:
Cost. At $0.075 per million input tokens (vs. $2.50 for GPT-4o), I can offer a $14.99/month subscription and still maintain 90%+ margins. The quality difference for summarization tasks is negligible.
Polar.sh over Stripe:
As a solo founder, I didn't want to deal with VAT calculations, tax remittance, or international invoicing. Polar handles all of that as a Merchant of Record. Their 4% + $0.40 fee is worth every penny for the time saved.
Building the Chrome Extension
Manifest V3
Chrome extensions now require Manifest V3, which fundamentally changes how extensions work. No more background pages — everything runs through service workers.
Here's my manifest.json:
{
"manifest_version": 3,
"name": "MoneySense AI",
"version": "1.1.0",
"description": "Analyze financial pages with AI-powered insights",
"permissions": [
"activeTab",
"storage",
"identity"
],
"host_permissions": [
"https://*.sec.gov/*",
"https://*.yahoo.com/*",
"https://*.bloomberg.com/*",
"https://*.reuters.com/*",
"https://*.seekingalpha.com/*"
],
"action": {
"default_popup": "popup.html",
"default_icon": {
"16": "icons/icon16.png",
"48": "icons/icon48.png",
"128": "icons/icon128.png"
}
},
"background": {
"service_worker": "background.js",
"type": "module"
},
"content_scripts": [
{
"matches": ["<all_urls>"],
"js": ["content.js"],
"css": ["content.css"]
}
]
}
Content Script Architecture
The content script is responsible for:
- Detecting if the current page is a financial article
- Extracting the article content
- Sending it to the popup for analysis
// content.js
const extractArticleContent = () => {
// Try multiple selectors for different sites
const selectors = [
'article',
'[role="article"]',
'.article-body',
'.post-content',
'#article-content',
// SEC-specific
'.formContent',
'#contentDiv'
];
for (const selector of selectors) {
const element = document.querySelector(selector);
if (element && element.textContent.length > 500) {
return {
title: document.title,
content: element.textContent.trim(),
url: window.location.href,
domain: window.location.hostname
};
}
}
// Fallback: get main content area
return {
title: document.title,
content: document.body.innerText.substring(0, 50000),
url: window.location.href,
domain: window.location.hostname
};
};
// Listen for requests from popup
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
if (request.action === 'extractContent') {
const content = extractArticleContent();
sendResponse(content);
}
return true; // Keep channel open for async response
});
The Popup UI
I kept the popup minimal — users want answers, not interfaces.
<!-- popup.html -->
<div id="app" class="w-[400px] min-h-[300px] bg-slate-900 text-white p-4">
<!-- Header -->
<div class="flex items-center justify-between mb-4">
<div class="flex items-center gap-2">
<img src="icons/icon48.png" class="w-8 h-8" />
<h1 class="text-lg font-semibold">MoneySense AI</h1>
</div>
<button id="settings-btn" class="text-slate-400 hover:text-white">
⚙️
</button>
</div>
<!-- Main Content -->
<div id="content">
<!-- Analysis results appear here -->
</div>
<!-- Analyze Button -->
<button id="analyze-btn" class="w-full bg-blue-600 hover:bg-blue-700
text-white font-medium py-3 px-4 rounded-lg mt-4
transition-colors duration-200">
🔍 Analyze This Page
</button>
<!-- Usage Counter -->
<div class="mt-4 text-center text-sm text-slate-400">
<span id="usage-count">7</span> / <span id="usage-limit">100</span>
analyses this month
</div>
</div>
Displaying Results
The analysis results are structured and easy to scan:
const displayResults = (analysis) => {
const content = document.getElementById('content');
content.innerHTML = `
<!-- Sentiment Badge -->
<div class="mb-4">
<span class="px-3 py-1 rounded-full text-sm font-medium
${analysis.sentiment === 'bullish' ? 'bg-green-900 text-green-300' : ''}
${analysis.sentiment === 'bearish' ? 'bg-red-900 text-red-300' : ''}
${analysis.sentiment === 'neutral' ? 'bg-slate-700 text-slate-300' : ''}
">
${analysis.sentiment === 'bullish' ? '📈' : ''}
${analysis.sentiment === 'bearish' ? '📉' : ''}
${analysis.sentiment === 'neutral' ? '➖' : ''}
${analysis.sentiment.toUpperCase()}
</span>
</div>
<!-- TL;DR -->
<div class="mb-4">
<h3 class="text-sm font-semibold text-slate-400 mb-2">TL;DR</h3>
<p class="text-white">${analysis.summary}</p>
</div>
<!-- Tickers -->
${analysis.tickers.length > 0 ? `
<div class="mb-4">
<h3 class="text-sm font-semibold text-slate-400 mb-2">
Mentioned Tickers
</h3>
<div class="flex flex-wrap gap-2">
${analysis.tickers.map(t => `
<span class="px-2 py-1 bg-slate-700 rounded text-sm">${t}</span>
`).join('')}
</div>
</div>
` : ''}
<!-- Pros -->
<div class="mb-4">
<h3 class="text-sm font-semibold text-green-400 mb-2">✅ Pros</h3>
<ul class="space-y-1">
${analysis.pros.map(p => `
<li class="text-sm text-slate-300">• ${p}</li>
`).join('')}
</ul>
</div>
<!-- Cons -->
<div class="mb-4">
<h3 class="text-sm font-semibold text-red-400 mb-2">⚠️ Cons</h3>
<ul class="space-y-1">
${analysis.cons.map(c => `
<li class="text-sm text-slate-300">• ${c}</li>
`).join('')}
</ul>
</div>
`;
};
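For completeness, here's roughly how the popup glues these pieces together: ask the content script for the page text, POST it to the backend, then render with displayResults. Treat the API hostname and storage key as placeholders rather than my real values.
// popup.js (simplified sketch)
document.getElementById('analyze-btn').addEventListener('click', async () => {
  // Ask the content script on the active tab for the extracted article
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });
  const page = await chrome.tabs.sendMessage(tab.id, { action: 'extractContent' });

  // Send it to the backend for analysis (JWT stored after magic-link login)
  const { token } = await chrome.storage.local.get('token');
  const res = await fetch('https://api.moneysense.ai/api/analyze', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      content: page.content,
      url: page.url,
      documentType: 'Financial Article' // or the result of detectDocumentType (covered later)
    })
  });

  const { analysis } = await res.json();
  displayResults(analysis);
});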
The AI Analysis Engine
This is where the magic happens. I use a carefully crafted prompt to get consistent, structured output from the AI.
The Prompt Template
const createAnalysisPrompt = (content, documentType) => {
const systemPrompt = `You are a financial analyst assistant.
Analyze the provided ${documentType} and return a JSON response with:
1. summary: A 2-3 sentence TL;DR of the key points
2. sentiment: "bullish", "bearish", or "neutral"
3. sentimentScore: A number from -100 (very bearish) to +100 (very bullish)
4. tickers: Array of stock ticker symbols mentioned
5. pros: Array of 3-5 positive points for investors
6. cons: Array of 3-5 risks or concerns for investors
7. keyMetrics: Object with any important numbers mentioned (revenue, EPS, etc.)
Guidelines:
- Be objective and balanced
- Focus on what matters to retail investors
- Avoid jargon — explain in plain English
- For SEC filings, focus on MD&A, Risk Factors, and recent developments
- Always identify the most material information
Return ONLY valid JSON, no markdown or explanation.`;
return {
systemPrompt,
userPrompt: `Analyze this ${documentType}:\n\n${content.substring(0, 100000)}`
};
};
Calling the Gemini API
import { GoogleGenerativeAI } from '@google/generative-ai';
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const analyzeContent = async (content, documentType) => {
const model = genAI.getGenerativeModel({
model: 'gemini-2.5-flash-lite',
generationConfig: {
temperature: 0.3, // Lower = more consistent
topP: 0.8,
maxOutputTokens: 2048,
}
});
const { systemPrompt, userPrompt } = createAnalysisPrompt(content, documentType);
const result = await model.generateContent({
contents: [{ role: 'user', parts: [{ text: userPrompt }] }],
systemInstruction: systemPrompt,
});
const response = result.response.text();
// Parse JSON response
try {
// Clean up response (remove markdown code blocks if present)
const cleanJson = response
  .replace(/```json\n?/g, '')
  .replace(/```\n?/g, '')
  .trim();
return JSON.parse(cleanJson);
} catch (error) {
console.error('Failed to parse AI response:', error);
throw new Error('Invalid AI response format');
}
};
Cost Optimization
With Gemini 2.5 Flash-Lite, my costs are incredibly low:
| Document Type | Tokens | Cost |
|---|---|---|
| News Article | ~5,000 | $0.0004 |
| 10-Q Filing | ~75,000 | $0.006 |
| 10-K Filing | ~200,000 | $0.016 |
At 100 analyses per user per month, my average AI cost is $0.50 per user — just 3.3% of the $14.99 subscription price.
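If you want to sanity-check those numbers, the math is just characters → tokens → price. A rough estimator (input tokens only; output tokens add a fraction of a cent):
// Back-of-the-envelope cost check, assuming ~4 characters per token
// and $0.075 per 1M input tokens for Gemini 2.5 Flash-Lite.
const estimateInputCost = (charCount, pricePerMillionTokens = 0.075) => {
  const tokens = charCount / 4;
  return (tokens / 1_000_000) * pricePerMillionTokens;
};

estimateInputCost(300_000); // ~10-Q sized input → ≈ $0.006
estimateInputCost(800_000); // ~10-K sized input → ≈ $0.015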
Handling SEC Filings
SEC filings are a special beast. They're long, structured differently from news articles, and contain legally mandated sections that are goldmines for investors.
Detecting SEC Filings
const detectDocumentType = (url, content) => {
// SEC EDGAR detection
if (url.includes('sec.gov')) {
if (content.includes('FORM 10-K') || content.includes('ANNUAL REPORT')) {
return '10-K';
}
if (content.includes('FORM 10-Q') || content.includes('QUARTERLY REPORT')) {
return '10-Q';
}
if (content.includes('FORM 8-K') || content.includes('CURRENT REPORT')) {
return '8-K';
}
if (content.includes('FORM S-1') || content.includes('REGISTRATION STATEMENT')) {
return 'S-1';
}
return 'SEC Filing';
}
// News article detection
return 'Financial Article';
};
Extracting Key Sections from 10-K
Instead of sending the entire 200-page filing, I extract the most valuable sections:
const extract10KSections = (content) => {
const sections = {};
// Key sections retail investors care about
const targetSections = [
{ name: 'business', pattern: /ITEM\s*1[.\s]+BUSINESS/i },
{ name: 'riskFactors', pattern: /ITEM\s*1A[.\s]+RISK\s*FACTORS/i },
{ name: 'mdna', pattern: /ITEM\s*7[.\s]+MANAGEMENT['']?S\s*DISCUSSION/i },
{ name: 'financials', pattern: /ITEM\s*8[.\s]+FINANCIAL\s*STATEMENTS/i },
];
for (const section of targetSections) {
const match = content.match(section.pattern);
if (match) {
const startIndex = match.index;
// Find next ITEM or end of document
const nextItemMatch = content.substring(startIndex + 100)
.match(/ITEM\s*\d+[A-Z]?[.\s]/i);
const endIndex = nextItemMatch
? startIndex + 100 + nextItemMatch.index
: startIndex + 50000;
sections[section.name] = content.substring(startIndex, endIndex);
}
}
return sections;
};
Section-Specific Analysis
For 10-K filings, I run focused analysis on each section:
const analyze10K = async (content) => {
const sections = extract10KSections(content);
// Analyze each section with specific prompts
const [businessAnalysis, riskAnalysis, mdnaAnalysis] = await Promise.all([
analyzeSection(sections.business, 'business description'),
analyzeSection(sections.riskFactors, 'risk factors'),
analyzeSection(sections.mdna, 'management discussion'),
]);
// Combine into unified analysis
return {
summary: mdnaAnalysis.summary,
sentiment: calculateOverallSentiment([
businessAnalysis, riskAnalysis, mdnaAnalysis
]),
pros: [...businessAnalysis.pros, ...mdnaAnalysis.pros],
cons: [...riskAnalysis.keyRisks],
keyMetrics: mdnaAnalysis.metrics,
// Section-specific insights
sections: {
business: businessAnalysis.summary,
risks: riskAnalysis.topRisks,
outlook: mdnaAnalysis.forwardLooking
}
};
};
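calculateOverallSentiment isn't shown above; it's a simple roll-up of the per-section sentimentScore values the prompt returns. A sketch of the idea (thresholds are arbitrary and tunable):
// Average the per-section sentimentScore values (-100..+100) and map back to a label.
const calculateOverallSentiment = (sectionAnalyses) => {
  const scores = sectionAnalyses
    .map(a => a?.sentimentScore)
    .filter(s => typeof s === 'number');
  if (scores.length === 0) return 'neutral';
  const avg = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  if (avg > 20) return 'bullish';
  if (avg < -20) return 'bearish';
  return 'neutral';
};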
Backend Architecture
The backend is simple but robust:
┌─────────────────────────────────────────────────────────────┐
│ Chrome Extension │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ Hono.js API │
│ ┌─────────────┐ ┌──────────────┐ ┌───────────────────┐ │
│ │ /analyze │ │ /auth/* │ │ /subscription/* │ │
│ └─────────────┘ └──────────────┘ └───────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│ │ │
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────────┐
│ Gemini AI │ │ PostgreSQL │ │ Polar.sh │
│ │ │ (Drizzle) │ │ (Payments) │
└──────────────┘ └──────────────┘ └──────────────────┘
API Routes
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { jwt } from 'hono/jwt';
const app = new Hono();
// Middleware
app.use('*', cors({ origin: '*' }));
app.use('/api/*', jwt({ secret: process.env.JWT_SECRET }));
// Health check
app.get('/health', (c) => c.json({ status: 'ok' }));
// Analyze content
app.post('/api/analyze', async (c) => {
const { content, url, documentType } = await c.req.json();
const userId = c.get('jwtPayload').sub;
// Check usage limits
const usage = await checkUsage(userId);
if (usage.remaining <= 0) {
return c.json({ error: 'Usage limit reached' }, 429);
}
// Run analysis
const analysis = await analyzeContent(content, documentType);
// Track usage
await incrementUsage(userId);
return c.json({
analysis,
usage: {
used: usage.used + 1,
limit: usage.limit,
remaining: usage.remaining - 1
}
});
});
// Get user subscription status
app.get('/api/subscription', async (c) => {
const userId = c.get('jwtPayload').sub;
const subscription = await getSubscription(userId);
return c.json(subscription);
});
export default app;
Rate Limiting & Usage Tracking
import { db } from './db';
import { usage, users } from './schema';
import { eq, and, gte } from 'drizzle-orm';
const checkUsage = async (userId) => {
const user = await db.query.users.findFirst({
where: eq(users.id, userId)
});
const startOfMonth = new Date();
startOfMonth.setDate(1);
startOfMonth.setHours(0, 0, 0, 0);
const monthlyUsage = await db.query.usage.findMany({
where: and(
eq(usage.userId, userId),
gte(usage.createdAt, startOfMonth)
)
});
const used = monthlyUsage.length;
const limit = user.subscriptionStatus === 'active' ? 100 : 10;
return {
used,
limit,
remaining: limit - used
};
};
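incrementUsage is the other half: insert one row per analysis so the monthly count above stays accurate. Something like this (column names assumed to mirror the schema):
const incrementUsage = async (userId) => {
  // One row per analysis; checkUsage counts rows since the start of the month
  await db.insert(usage).values({
    userId,
    createdAt: new Date()
  });
};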
Authentication & Payments
Magic Link Authentication
I chose passwordless auth for simplicity. Users enter their email, click a link, and they're in.
import { Resend } from 'resend';
const resend = new Resend(process.env.RESEND_API_KEY);
app.post('/auth/magic-link', async (c) => {
const { email } = await c.req.json();
// Generate token
const token = crypto.randomUUID();
const expires = new Date(Date.now() + 15 * 60 * 1000); // 15 minutes
// Store token
await db.insert(magicLinks).values({
email,
token,
expiresAt: expires
});
// Send email
await resend.emails.send({
from: 'MoneySense AI <auth@moneysense.ai>',
to: email,
subject: 'Your login link',
html: `
<h1>Login to MoneySense AI</h1>
<p>Click the link below to sign in:</p>
<a href="https://moneysense.ai/auth/verify?token=${token}">
Sign In
</a>
<p>This link expires in 15 minutes.</p>
`
});
return c.json({ success: true });
});
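The other end of that link is a verify endpoint: look up the token, check expiry, and mint the JWT the extension stores. A simplified sketch; the single-use token cleanup and the getOrCreateUser helper are assumptions, not shown here:
import { sign } from 'hono/jwt';

app.get('/auth/verify', async (c) => {
  const token = c.req.query('token');

  // Look up the magic link and make sure it hasn't expired
  const link = await db.query.magicLinks.findFirst({
    where: eq(magicLinks.token, token)
  });
  if (!link || link.expiresAt < new Date()) {
    return c.json({ error: 'Invalid or expired link' }, 401);
  }

  // Find (or create) the user, then issue the JWT the /api/* middleware expects
  const user = await getOrCreateUser(link.email); // helper assumed
  const jwtToken = await sign({ sub: user.id, email: user.email }, process.env.JWT_SECRET);

  return c.json({ token: jwtToken });
});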
Polar.sh Integration
Polar handles all payment complexity. Here's my webhook handler:
import { Webhooks } from '@polar-sh/hono';
app.post('/polar/webhooks', Webhooks({
webhookSecret: process.env.POLAR_WEBHOOK_SECRET,
onOrderPaid: async (payload) => {
const { customer, product } = payload.data;
// Activate subscription
await db.update(users)
.set({
subscriptionStatus: 'active',
subscriptionPlan: product.name,
polarCustomerId: customer.id
})
.where(eq(users.email, customer.email));
console.log(`✅ Subscription activated for ${customer.email}`);
},
onSubscriptionCanceled: async (payload) => {
const { customer, currentPeriodEnd } = payload.data;
// Mark as canceled but keep access until period ends
await db.update(users)
.set({
subscriptionStatus: 'canceled',
subscriptionEndsAt: new Date(currentPeriodEnd)
})
.where(eq(users.polarCustomerId, customer.id));
},
onSubscriptionRevoked: async (payload) => {
const { customer } = payload.data;
// Remove access immediately
await db.update(users)
.set({
subscriptionStatus: 'free',
subscriptionPlan: null
})
.where(eq(users.polarCustomerId, customer.id));
}
}));
Challenges & Solutions
Challenge 1: Content Extraction Across Different Sites
Problem: Every financial news site has a different HTML structure.
Solution: I built a multi-strategy extractor that tries multiple approaches:
const extractors = [
// Strategy 1: Semantic HTML
() => document.querySelector('article')?.textContent,
// Strategy 2: Common class names
() => document.querySelector('.article-body, .post-content')?.textContent,
// Strategy 3: Readability-style extraction
() => {
const paragraphs = document.querySelectorAll('p');
return Array.from(paragraphs)
.filter(p => p.textContent.length > 100)
.map(p => p.textContent)
.join('\n\n');
},
// Strategy 4: Fallback to body
() => document.body.innerText.substring(0, 50000)
];
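The wrapper around that list is just "first strategy that returns something substantial wins":
// Walk the strategies in order; take the first one that yields a decent amount of text.
const extractWithFallbacks = () => {
  for (const extract of extractors) {
    try {
      const text = extract();
      if (text && text.trim().length > 500) {
        return text.trim();
      }
    } catch {
      // A selector failing on one site shouldn't break extraction entirely
    }
  }
  return '';
};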
Challenge 2: SEC Filing Format Variations
Problem: SEC filings aren't standardized. Some use HTML, some use plain text, and formatting varies wildly.
Solution: Normalize everything to plain text and use pattern matching:
const normalizeSECContent = (html) => {
// Remove HTML tags but preserve structure
const text = html
.replace(/<br\s*\/?>/gi, '\n')
.replace(/<\/p>/gi, '\n\n')
.replace(/<[^>]+>/g, ' ')
.replace(/&nbsp;/g, ' ')
.replace(/&amp;/g, '&')
.replace(/\s+/g, ' ')
.trim();
return text;
};
Challenge 3: AI Response Consistency
Problem: LLMs sometimes return malformed JSON or unexpected formats.
Solution: Robust parsing with fallbacks:
const parseAIResponse = (response) => {
// Try direct parse
try {
return JSON.parse(response);
} catch (e) {
// Try extracting JSON from markdown
const jsonMatch = response.match(/```(?:json)?\s*([\s\S]*?)```/);
if (jsonMatch) {
return JSON.parse(jsonMatch[1]);
}
// Try finding JSON object
const objectMatch = response.match(/\{[\s\S]*\}/);
if (objectMatch) {
return JSON.parse(objectMatch[0]);
}
throw new Error('Could not parse AI response');
}
};
Challenge 4: Token Limits for Long Documents
Problem: 10-K filings can exceed token limits.
Solution: Smart chunking and section prioritization:
const prepareForAnalysis = (content, maxTokens = 100000) => {
const estimatedTokens = content.length / 4; // Rough estimate
if (estimatedTokens <= maxTokens) {
return content;
}
// Extract priority sections only
const sections = extract10KSections(content);
const priorityContent = [
sections.mdna, // Most important
sections.riskFactors,
sections.business
].filter(Boolean).join('\n\n---\n\n');
return priorityContent.substring(0, maxTokens * 4);
};
Launch & Growth
Pre-Launch (Week -2 to 0)
- Built in public — Shared progress on Twitter/X
- Beta testers — Got 20 people from finance Twitter
- Iterated on feedback — Fixed extraction bugs, improved UI
Launch Day (Product Hunt)
- Finished #12 Product of the Day
- Got 150+ upvotes
- 47 signups on day one
Post-Launch Growth
| Month | Users | MRR | Key Driver |
|---|---|---|---|
| 1 | 47 | $0 | Product Hunt launch |
| 2 | 89 | $149 | Reddit posts |
| 3 | 156 | $374 | SEO starting to work |
| 4 | 243 | $598 | Word of mouth |
| 5 | 358 | $897 | Newsletter sponsorship |
| 6 | 430+ | $1,200+ | Organic + referrals |
What Worked
- SEO Content — Blog posts ranking for "how to read 10-K" and similar
- Reddit — Genuine engagement in r/investing, r/stocks, r/SecurityAnalysis
- Finance Twitter — Building relationships with fintwit creators
- Product Hunt — Good for initial visibility, not sustainable growth
What Didn't Work
- Paid ads (initially) — CPA was too high at small scale
- Cold outreach — Low response rate, not worth the time
- Hacker News — Finance tools don't resonate there
What I'd Do Differently
1. Start with a Waitlist
I built for 3 months before validating demand. A landing page with a waitlist would have told me if people actually wanted this.
2. Charge from Day One
I launched with a 30-day free trial. Many users churned before ever paying. Now I'd do a 7-day trial or freemium with tight limits.
3. Focus on One Document Type First
I tried to support everything — news, 10-K, 10-Q, 8-K — from the start. I should have nailed 10-K analysis perfectly, then expanded.
4. Build the Referral Program Earlier
Word of mouth is my best channel. I didn't add referral rewards until month 4. Should have been there from launch.
Key Takeaways
For Developers Building AI Products
AI costs are lower than you think. Gemini Flash-Lite makes it possible to build profitable AI products at $15/month price points.
Prompt engineering is everything. I spent more time refining prompts than writing code. The same model can give terrible or excellent results depending on how you ask.
Structured output is your friend. Always ask for JSON. Always validate the response. Always have fallbacks.
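In practice, "validate" means a shape check after JSON.parse before anything reaches the UI. A minimal version (a schema library like Zod works just as well):
// Minimal shape check for the analysis object described in the prompt section.
const isValidAnalysis = (a) =>
  a &&
  typeof a.summary === 'string' &&
  ['bullish', 'bearish', 'neutral'].includes(a.sentiment) &&
  Array.isArray(a.tickers) &&
  Array.isArray(a.pros) &&
  Array.isArray(a.cons);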
For Indie Hackers
Solve your own problem. I built MoneySense because I needed it. That authenticity resonates with users.
Distribution > Product. The extension was 80% complete in month 1. Months 2-6 were all about getting it in front of people.
Merchant of Record services are worth it. Polar.sh, Lemon Squeezy, Paddle — they handle the stuff you don't want to think about.
For Finance Enthusiasts
AI won't replace analysis — it accelerates it. MoneySense doesn't tell you what to buy. It helps you read faster and spot patterns.
The edge is in primary sources. News articles are someone else's interpretation. SEC filings are the ground truth.
Retail investors can compete. Wall Street doesn't have a monopoly on good analysis tools anymore.
What's Next
I'm currently working on:
- SEC filing search — Find specific topics across all of a company's filings
- Earnings call transcripts — Analyze management tone and key quotes
- Watchlist integration — Get alerts when new filings drop for your stocks
- Mobile app — For reading on the go
If you want to try MoneySense.ai, it's available on the Chrome Web Store.