In 2025, I faced a decision that would determine whether my fintech startup lived or died:
Option A: Spend $200K+/year on Google Ads and SEO, competing with NerdWallet and Credit Karma for the same keywords.
Option B: Build infrastructure that turns AI assistants into my distribution channel.
I chose Option B. Here's what happened, and why I think AI-native distribution is the future.
Within 30 days of launching our MCP server, we hit 150+ installations across Claude, ChatGPT, and Cursor without spending a dollar on ads. Each installation represents a potential customer who found us because an AI assistant recommended us.
Why Traditional Fintech Distribution is Broken (And Why I Don't Want to Fix It)
Let me paint you a picture of the traditional fintech playbook:
- Spend millions on Google Ads bidding on keywords like "best business loans" and "SBA lender"
- Hire an SEO team to churn out comparison content
- Pay affiliates to drive traffic to your site
- Convert 2-5% of visitors into leads
- Repeat, bleeding cash at up to $800 CAC
The Three Fatal Flaws
Flaw #1: You're David fighting Goliath with a slingshot
Credit Karma, NerdWallet, and Bankrate have billion-dollar marketing budgets. They own page 1 of Google. You're competing for scraps on page 2.
Flaw #2: Users don't trust you (and they're right)
Everyone knows comparison sites are paid placements. The "best" loan is usually just the one that pays the highest affiliate fee. Users click around, then go directly to their bank anyway.
Flaw #3: You're one algorithm update away from zero
Google changes its algorithm. Your traffic drops 60%. Your startup dies. This isn't hypothetical—I've watched it happen to competitors.
The Insight That Changed Everything
One day I noticed something interesting in my analytics: people were copy-pasting loan questions into ChatGPT to "get a second opinion."
Then I asked ChatGPT the same question: "What's the best business loan for equipment financing?"
It hallucinated. Made up rates. Recommended lenders that don't even offer equipment loans.
That's when it clicked: People are already using AI assistants for financial advice. But the AI doesn't have the data (yet). It's just making educated guesses.
What if I could be the infrastructure layer that makes AI assistants actually useful for financial services?
What if, instead of competing for Google rankings, I could embed directly into the tools people are already asking for advice?
Enter Model Context Protocol (MCP)
If you haven't heard of MCP yet, you will. Anthropic released it in late 2024, and it's quietly becoming the standard for connecting AI assistants to external data.
What is MCP? (The 60-Second Version)
Think of MCP as an API standard for AI assistants. Instead of building a custom plugin for ChatGPT, a different one for Claude, another for Gemini, etc., you build one MCP server that works with all of them.
Technical overview:
- Open protocol by Anthropic (not vendor lock-in)
- Server-client architecture
- Standardized tool definitions (like OpenAPI, but for AI function calling)
- Transport over Streamable HTTP
- Works with Claude, ChatGPT (via MCP support), Cursor, Cline, and any other MCP-compatible client
Here's what blew my mind: when a user asks Claude "compare business loans for me," Claude can automatically discover and call my MCP server, get real loan data, and present it conversationally.
No app store approval. No plugin marketplace bureaucracy. Just a standard protocol.
Why This Matters for Distribution
Traditional software distribution:
User searches Google → Finds your site → Signs up → Uses product
MCP distribution:
User asks AI a question → AI discovers your MCP server → AI uses your product → User gets value
See the difference? The AI assistant becomes your sales team. It discovers your service when users ask relevant questions. Zero CAC.
The best part: One MCP server serves multiple platforms. Build it once, distribute everywhere.
This is like building for mobile in 2010. The platform is early, the standards are still forming, but the shift is inevitable.
The Bet I'm Making
In 3-5 years, most software discovery will happen through AI assistants, not Google search or app stores.
Users won't Google "best business loan rates" and click through comparison sites. They'll just ask Claude or ChatGPT, and the AI will search across all available MCP servers to find the best answer.
If you're building infrastructure for AI assistants now, you're positioned to capture that shift. If you're still optimizing for SEO... good luck competing with AI-generated content farms.
How I Built It (The Technical Deep Dive)
Alright, let's get into the architecture. If you're here just for the business strategy, feel free to skip to the next section. But if you're a developer thinking about building your own MCP server, this is for you.
System Architecture
User asks question in Claude
↓
Claude's MCP client discovers tools
↓
Claude calls: https://mcp.dev.securelend.ai/mcp
↓
My MCP server (Node.js on ECS Fargate)
↓
GraphQL API gateway (AppSync)
↓
Lending microservice
↓
Queries aggregated DynamoDB tables
↓
Results normalized, returned via Streamable HTTP
↓
Claude presents results conversationally to user
Tech stack:
- MCP Server: TypeScript, Express, ECS Fargate (stateless, horizontally scalable)
- Transport: Streamable HTTP (the older SSE transport is being deprecated)
- API Layer: AppSync (GraphQL) over microservices
- Data: DynamoDB with Global Tables (multi-region for SOC 2)
- Data Strategy: Pre-aggregated lender data in DynamoDB, not real-time API calls
- Infra: CDK for infrastructure-as-code
- Monitoring: CloudWatch, Pino structured logging
Key Design Decision #1: Pre-Aggregated Data vs Real-Time APIs
I initially planned to call hundreds of lender APIs in real-time for each query. Bad idea.
Problems:
- Inconsistent response times (20ms to 5+ seconds)
- Rate limiting across diverse APIs
- Lenders changing APIs without notice
- No way to guarantee data freshness
Better approach: Aggregate lender data into DynamoDB tables, refresh on a schedule.
We maintain tables for:
- lender-programs: Loan products, eligibility, rates
- lender-metadata: Operating hours, contact info, application URLs
- rate-updates: Historical rate tracking
Refresh strategy:
- Critical data (rates, availability): Multiple times daily
- Metadata (contact info, programs): Daily
- Historical data: On-demand
Result: Sub-second query performance (~800ms average), predictable costs, easier debugging.
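The refresh tiers above boil down to a staleness check per table. Here's a minimal sketch of that logic; the threshold values and the tier names are my illustrative assumptions, not the actual SecureLend implementation:

```typescript
// Refresh tiers from the strategy above: critical data several times a day,
// metadata daily, historical on demand. Thresholds are illustrative assumptions.
type RefreshTier = 'critical' | 'metadata' | 'historical';

const MAX_AGE_MS: Record<RefreshTier, number> = {
  critical: 6 * 60 * 60 * 1000,         // rates/availability: refresh every ~6h
  metadata: 24 * 60 * 60 * 1000,        // contact info, programs: daily
  historical: Number.POSITIVE_INFINITY, // on-demand only, never auto-refreshed
};

// True when a record is older than its tier allows and should be
// re-fetched from the lender before being served to an AI client.
function isStale(lastRefreshedAt: number, tier: RefreshTier, now: number = Date.now()): boolean {
  return now - lastRefreshedAt > MAX_AGE_MS[tier];
}
```

A scheduled job can then scan each table and re-aggregate only the rows where `isStale` returns true, which keeps costs predictable.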
Key Design Decision #2: Granular Tools vs Generic Search
I initially built one generic tool: searchLoans(query: string). Bad idea.
The AI didn't know how to structure queries properly. It would ask for "business loans" but not specify amount, credit score, or purpose. Results were garbage.
Better approach: 20+ specific tools, each with strongly-typed parameters.
Here's an example MCP tool definition:
{
name: "compareSBALoans",
description: "Compare SBA 7(a) loan offers from multiple lenders based on business profile",
inputSchema: {
type: "object",
properties: {
amount: {
type: "number",
description: "Loan amount in USD (minimum $50,000)"
},
creditScore: {
type: "number",
description: "Personal credit score (300-850)",
minimum: 300,
maximum: 850
},
annualRevenue: {
type: "number",
description: "Business annual revenue in USD"
},
businessAge: {
type: "number",
description: "Years in business (minimum 2 for SBA 7(a))"
},
purpose: {
type: "string",
enum: ["working_capital", "equipment", "real_estate", "acquisition"],
description: "Primary loan purpose"
},
veteran: {
type: "boolean",
description: "Is the applicant a veteran? (affects SBA guarantee fees)"
}
},
required: ["amount", "creditScore", "annualRevenue", "businessAge"]
}
}
Now ChatGPT, Claude, or any other LLM client knows exactly what data to collect from the user and how to call the tool properly.
Key Design Decision #3: Stateless Everything
MCP servers should be stateless. Each request is independent.
Why?
- Scales horizontally without coordination
- No session management complexity
- Simple deployment (just spin up more containers)
- Easy recovery from failures
Trade-off: Can't do multi-turn conversations where the MCP server "remembers" context. But that's fine—Claude handles conversation state, not the MCP server.
The Hard Parts Nobody Talks About
1. Financial compliance isn't just "add a disclaimer"
You can't let the AI hallucinate loan terms. Every rate, every fee must be accurate and sourced from real data.
But it's deeper than that. The labels matter:
- Show a business loan with "Finance Charge"? That's a TILA term for consumer loans. Legally incorrect.
- Show a personal loan with "Total Cost of Financing"? That's a commercial lending term. Confusing and potentially deceptive.
Solution: Dynamic disclosure modals that detect loan type (consumer vs. business) and show the legally correct terminology. This isn't about being pedantic - it's about not getting sued.
Audit trail for every recommendation: DynamoDB table logs every tool call, every data query, every result returned, and every disclosure shown. If a regulator asks "why did you recommend this loan?" I can show them the exact data. If they ask "did the user see the required disclosure?" I can prove it with timestamps.
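The audit trail described above can be sketched as one record per tool call. This is an illustrative shape, not the exact SecureLend schema; the field names are my assumptions:

```typescript
// One audit row per tool call: what was asked, what was returned, and
// which disclosure the user saw. Field names are illustrative assumptions.
interface AuditEntry {
  timestamp: string;        // ISO-8601, proves when the disclosure was shown
  toolName: string;         // e.g. "compareSBALoans"
  userId: string;
  request: unknown;         // the exact arguments the AI client sent
  resultIds: string[];      // which lender offers were returned
  disclosureShown: string;  // which disclosure template was rendered
}

function buildAuditEntry(
  toolName: string,
  userId: string,
  request: unknown,
  resultIds: string[],
  disclosureShown: string
): AuditEntry {
  return {
    timestamp: new Date().toISOString(),
    toolName,
    userId,
    request,
    resultIds,
    disclosureShown,
  };
}
```

Writing this record before returning results (rather than after) is what makes the "did the user see the required disclosure?" question answerable with timestamps.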
2. Multi-tenant auth in regulated industries
Some lenders need different API keys per customer. Some have webhooks. Some require OAuth. Some need IP whitelisting.
But here's the fintech twist: you also need to track which user saw which lender's offer for compliance. If a lender gets audited, they need to prove they only marketed to eligible borrowers.
Solution: Service-to-service auth with API keys stored in AWS Secrets Manager. Each service validates incoming requests against its own API key. Tenant isolation at the database level. Plus, every lender match gets logged with user geography, loan type, and eligibility criteria met.
3. AI unpredictability meets regulatory precision
Users phrase questions in infinite ways:
- "I need money for my business"
- "Compare SBA loans"
- "What's the cheapest way to finance equipment?"
- "I'm buying a franchise, help me with financing"
All of these should trigger the right tools, but with different parameters. The tool descriptions and schemas need to be extremely clear so the AI client understands when to use each one.
But here's the compliance challenge: The AI might recommend a loan the user doesn't qualify for. Or it might describe terms in a way that's technically accurate but could be misinterpreted.
Solution: The AI only has access to offers the user actually qualifies for based on their stated criteria. We pre-filter before the MCP tool returns results. And every rate is accompanied by the required disclosure format for that loan type and state.
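That pre-filtering step is a pure function over the aggregated lender programs. A minimal sketch, with assumed field names, showing why an ineligible offer can never reach the AI client:

```typescript
// Illustrative shapes; the real aggregated tables have more fields.
interface LenderProgram {
  lenderId: string;
  minCreditScore: number;
  minAnnualRevenue: number;
  minBusinessAgeYears: number;
}

interface BorrowerProfile {
  creditScore: number;
  annualRevenue: number;
  businessAgeYears: number;
}

// Only programs the borrower actually qualifies for ever leave the server,
// so the AI client cannot recommend an ineligible loan.
function eligiblePrograms(programs: LenderProgram[], p: BorrowerProfile): LenderProgram[] {
  return programs.filter(pr =>
    p.creditScore >= pr.minCreditScore &&
    p.annualRevenue >= pr.minAnnualRevenue &&
    p.businessAgeYears >= pr.minBusinessAgeYears
  );
}
```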
Code You Can Actually Use
Want to build your own MCP server? Don't build from scratch. Use existing tools:
Option 1: Use Skybridge (For ChatGPT Apps with React UI)
If you're building specifically for ChatGPT and want rich, interactive React components inside the chat interface, Skybridge is a full-stack framework that extends the official MCP SDK.
import { createServer } from 'skybridge/server';
const server = createServer({
name: "my-chatgpt-app",
version: "1.0.0"
});
// Define tools with widget support
server.registerTool({
name: "compareSBALoans",
description: "Compare SBA loans with interactive results",
widget: 'LoanComparison', // Renders a React component in ChatGPT
handler: async ({ amount, creditScore }) => {
const results = await yourBusinessLogic(amount, creditScore);
return results;
}
});
What Skybridge adds:
- Hot Module Reload for fast ChatGPT app development
- End-to-end type safety between server and React widgets
- Dev environment that doesn't require testing inside ChatGPT
- Widget-to-model synchronization for dual interaction surfaces
Use Skybridge if: You're building rich, interactive UI experiences inside ChatGPT conversations with custom React components.
Skip Skybridge if: You're building a standard MCP server for Claude, Cursor, or don't need custom UI.
Option 2: Use the Official MCP SDK (Standard approach)
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
const server = new Server({
name: "my-mcp-server",
version: "1.0.0"
}, {
capabilities: {
tools: {}
}
});
// Register your tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: "compareSBALoans",
description: "Compare SBA 7(a) loan offers",
inputSchema: {
type: "object",
properties: {
amount: { type: "number" },
creditScore: { type: "number" }
}
}
}
]
};
});
server.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name === "compareSBALoans") {
const results = await yourBusinessLogic(request.params.arguments);
return { content: [{ type: "text", text: JSON.stringify(results) }] };
}
throw new Error(`Unknown tool: ${request.params.name}`);
});
const transport = new StdioServerTransport();
await server.connect(transport);
Option 3: Try Our Implementation (Learn by example)
Use our Next.js example at github.com/SecureLend/sdk/tree/main/examples/nextjs-app to see how we connect to our MCP server and handle responses. This way:
- Your core API is platform-agnostic
- MCP is just another client interface
- Easy to add other protocols later (GraphQL, gRPC, etc.)
Don't reinvent the wheel. The MCP protocol has nuances around transport, error handling, and streaming that are easy to get wrong.
Full implementation details at: github.com/SecureLend/mcp-financial-services
Distribution Strategy: Getting Users to Actually Install This Thing
Building the MCP server was the straightforward part. Getting people to use it? That's the hard part.
Here's what I tried, what worked, and what didn't.
Multi-Channel Distribution (Because Platform Risk is Real)
Official Directories (for credibility):
- ✅ Smithery.ai: Live (immediate early adoption)
- ✅ Cursor MCP directory: Live
- ⏳ Claude MCP directory: Submitted, waiting for approval
- ⏳ ChatGPT app store: Submitted, still pending after 3+ weeks
Lesson: App store approval is slow and unpredictable. Don't bet your launch on it.
Developer Distribution (technical users find you organically):
- NPM package: @securelend/sdk
- PyPI package: securelend-sdk
- GitHub: Open-source MCP schema
- Documentation: docs.securelend.ai
Lesson: Developers love open source in fintech. It builds trust. "Show me the code" is a feature, not a risk.
One-Click Installer (reduce friction to zero):
Built extensions.securelend.ai with downloadable installers for macOS/Windows.
Automatically detects Claude Desktop config, adds the MCP server, done in 60 seconds.
Before installer: "Add this JSON to your config file at ~/Library/Application Support/Claude/claude_desktop_config.json..."
After installer: "Click download, click install, done."
Result: Installation rate increased significantly. Friction kills adoption.
What's Working
Developer communities love architecture stories. The "here's how I built it" angle gets way more traction than "try my product."
One-click installer is clutch. Manual config scares away 90% of potential users.
Open source builds trust. Multiple times people said "I checked the code before installing—looks legit."
Multi-platform hedging works. When ChatGPT app store delayed, I already had Cursor and Smithery users.
What's Not Working (Yet)
App store approval timelines suck. 30+ days and counting. Can't control it, can't optimize it.
User education gap. Most people don't know MCP exists. Have to explain "what is MCP" before "why use our MCP server."
Discovery problem. How do users find MCP servers? There's no MCP equivalent of the iOS App Store yet, but ChatGPT seems to have exactly this in mind: https://openai.com/index/introducing-apps-in-chatgpt/.
Early Metrics
After the first month of launch:
- Solid initial adoption across multiple platforms
- Strong activation rate (users who install actually use it)
- Real loan queries being processed daily
- Qualified leads generated proving the model works
- B2B interest from regional and community lenders
Not unicorn numbers (yet), but it validates the model: AI-native distribution works, zero ad spend required.
The Business Model (Or: How This Actually Makes Money)
Developers love technical details. But VCs and founders want to know: does this actually generate revenue?
Short answer: Yes. Here's how.
Dual Revenue Streams
Stream 1: B2C Lead Generation (the AI finds customers for me)
- User asks Claude: "I need a $300K business loan"
- Claude calls my MCP server
- I return matching lenders with rates/terms
- User selects a lender → we capture the lead
- Lender pays industry-standard lead generation fees
Lead pricing tiers:
- High-intent: User started application, provided documentation
- Medium-intent: User requested direct contact from lender
- Low-intent: User saved lender information for future reference
The AI's conversational qualification means we generate disproportionately more high-intent leads compared to traditional form-based comparison sites.
Stream 2: B2B SaaS (the MCP creates enterprise sales pipeline)
Lender sees inbound leads from MCP → realizes they need better origination platform → signs up for SecureLend SaaS.
Pricing: Monthly SaaS fee plus basis points on funded loan volume
Why this works: The MCP server is a trojan horse. Lenders get hooked on the lead flow, then realize they need the full platform to manage it efficiently.
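Basis-point pricing is simple arithmetic (one basis point is 1/100th of a percent). A quick sketch with made-up numbers, just to make the model concrete; none of these figures are SecureLend's actual pricing:

```typescript
// Monthly B2B revenue = flat SaaS fee + basis points on funded loan volume.
// One basis point = 0.0001. All numbers below are illustrative.
function monthlyRevenue(baseFeeUsd: number, bps: number, fundedVolumeUsd: number): number {
  return baseFeeUsd + fundedVolumeUsd * (bps / 10_000);
}
```

For example, a hypothetical $500 base fee plus 50 bps on $1M of funded volume works out to $500 + $5,000 = $5,500 for the month.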
Unit Economics (The Stuff That Matters)
B2C (per lead):
Revenue: Industry-standard lead gen rates (varies by loan type)
COGS: Minimal (API costs, infrastructure)
Gross margin: up to 90%
CAC: $0 (organic AI discovery)
B2B SaaS (per customer):
MRR: Base fee + volume-based pricing
COGS: AWS infrastructure per tenant
Gross margin: up to 90%
CAC: $0 (inbound from MCP leads)
LTV: High (multi-year contracts, low churn in fintech SaaS)
Yes, those margins are real. Fintech lead gen + B2B SaaS is a beautiful business model.
Projected Scale (Conservative vs Aggressive)
Conservative Case (End of 2026):
MCP Installations: Low thousands
Active users/month: 20-25% activation rate
Leads generated: Meaningful monthly volume
Monthly revenue: Path to profitability within first year
Annual revenue: Low seven figures ARR
B2B SaaS customers: Initial cohort of regional/community lenders
Total ARR: Solid foundation for sustainable growth
Not bad for a solo founder with zero ad spend.
Aggressive Case (End of 2027, with international expansion):
MCP Installations: Tens of thousands globally
Active users/month: Sustained 20%+ activation across markets
Leads generated: High-volume monthly flow
Monthly revenue: Substantial recurring revenue
Annual revenue: Eight-figure ARR potential
B2B SaaS customers: Hundreds of lenders across multiple countries
Total ARR: Unicorn trajectory becomes plausible
Multi-million dollar ARR starts looking realistic with AI-native distribution.
Why This Works as a Solo Founder
The magic of AI-native distribution:
❌ No sales team → AI does discovery and qualification
❌ No customer support → Comprehensive docs + AI-powered help
❌ No marketing team → MCP distribution + organic content
❌ No ops team → Everything is serverless and auto-scaling
❌ Minimal engineering → Microservices + CDK means I write business logic, AWS handles infrastructure
My entire "team":
- Me (full-stack dev + founder)
- AWS (infrastructure)
- Claude Sonnet (coding assistant, content generation)
- Stripe (payments)
- A few contractors (compliance, legal, design)
Monthly burn: Single-digit thousands (AWS, contractors, tools)
Break-even: Achievable within first few months of meaningful traction
This is the "one-person unicorn" model: AI automation + serverless infrastructure + AI-native distribution = infinite leverage.
Lessons Learned & What's Next
Five Things I Wish I Knew Before Starting
1. AI assistants are a real distribution channel
I was skeptical. Would people actually trust Claude or ChatGPT for something as important as a business loan?
Turns out: yes. Users trust AI recommendations more than comparison sites because they know comparison sites are paid placements.
AI feels like a neutral advisor. It's conversational. It asks follow-up questions. It explains trade-offs.
Zero-click experience: no website to navigate, no forms to fill out. Just conversation.
2. MCP is early but real
I thought MCP might be vaporware—a cool idea that never gets adopted. Nope.
Developer community is actively building. Platform support is expanding (Claude Desktop, Cursor, Cline, with ChatGPT coming soon).
First-mover advantage is real. If you can own a vertical niche (like financial services) before the market gets crowded, you win.
3. Open source builds trust in fintech
I was afraid to open-source the MCP schema. "What if competitors copy me?"
Reality: Open source increased adoption. Developers audit the code before installing. Lenders validate data accuracy. Everyone feels safer.
Plus, community contributions improve the product. Someone submitted a PR to add Canadian lender support. For free.
4. Multi-platform hedging is critical
Don't build on a single AI platform. ChatGPT app store approval took 30+ days (and still pending). If that was my only distribution channel, I'd be dead.
MCP's platform-agnostic design is a feature. Build once, distribute everywhere.
5. Compliance can't be an afterthought (and it's your moat)
I built SOC 2 controls from day one: audit trails, access controls, encryption, the works.
It cost more upfront; compliance infrastructure and expertise aren't cheap. I believe Vanta starts around $10K for the platform alone, not including the actual audit fee, which can easily be another $10K or more. But now I can sell to banks and enterprise lenders who require SOC 2.
If I'd built fast-and-loose, I'd have to rebuild everything later. Technical debt in fintech kills you.
But here's the strategic insight: Compliance is actually your competitive moat.
Most solo developers or lightly-funded startups will see the compliance requirements and walk away. They want to build fast and iterate. In fintech, you can't do that.
What compliance actually means in practice:
Dual regulatory regimes:
- Consumer loans governed by TILA (Truth in Lending Act) - must use specific terms like "Finance Charge" and "Amount Financed"
- Business loans governed by state laws (CA SB 1235, NY CFDL) - must use different terms like "Total Cost of Financing" and "Funds Provided"
- You can't just show generic "loan details" - the labels, headers, and disclosures must match the regulatory context
- One modal component with dynamic labels based on loan type, not two separate implementations
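That "one modal component with dynamic labels" idea reduces to a lookup keyed on loan type, using the exact terms above. A sketch only; a real implementation would also vary by state, and none of this is legal guidance:

```typescript
type LoanType = 'consumer' | 'business';

// TILA terminology for consumer loans vs. state commercial-disclosure
// terminology (e.g. CA SB 1235) for business loans, per the terms above.
// State-level variation is deliberately omitted from this sketch.
const DISCLOSURE_LABELS: Record<LoanType, { cost: string; principal: string }> = {
  consumer: { cost: 'Finance Charge', principal: 'Amount Financed' },
  business: { cost: 'Total Cost of Financing', principal: 'Funds Provided' },
};

function labelsFor(loanType: LoanType) {
  return DISCLOSURE_LABELS[loanType];
}
```

The modal component then renders `labelsFor(loan.type)` instead of hard-coding headers, which is what keeps one component legally correct in both regimes.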
State-by-state variations:
- California has different commercial lending disclosure requirements than New York
- Texas has different rules than Florida
- You need to track which state the borrower is in and show the correct disclosure format
- This isn't just good practice - it's legally required
Broker licensing complexity:
- In some states, showing loan comparisons might classify you as a "broker"
- Broker licenses can take 6-12 months and cost $10K-50K per state
- Some states require physical presence, bonds, or minimum capital requirements
- Many competitors give up here
UDAAP compliance:
- Unfair, Deceptive, or Abusive Acts or Practices regulations
- Can't show misleading rates, hide fees, or omit material terms
- Every rate displayed requires a disclosure explaining the calculation
- Audit trail showing exactly what the user saw and when they saw it
Data handling requirements:
- GLBA (Gramm-Leach-Bliley Act) for financial data
- FCRA (Fair Credit Reporting Act) if pulling credit data
- State privacy laws (CCPA, CDPA, etc.)
- Each has different retention periods, deletion requirements, breach notification timelines
The result: Most developers see this list and go build a SaaS tool or crypto project instead. The ones who start anyway usually give up after talking to their first compliance lawyer.
This is why NerdWallet and Credit Karma have huge compliance teams. It's not optional.
But as a solo founder, I found a hack: Build compliance into the architecture from day one, not as an afterthought.
- Every API response logged with timestamp and user ID
- Every disclosure shown tracked in DynamoDB
- Every rate calculation documented with source data
- Geographic detection determines which disclosure format to show
- Automated compliance checks before data is displayed to users
The infrastructure costs more upfront, but it means I can operate in multiple states. And it creates a natural moat against competitors who are just trying to ship fast.
What's Next (The Roadmap)
Q1 2026: Expand MCP capabilities
- Pre-qualification tools (not just comparison)
- Document upload via MCP (leverage existing AI document processing)
- Saved comparisons that persist across sessions
- Partner with business banking platforms (Brex, Mercury)
Q2 2026: International expansion
- Canada first: Similar regulations, shared time zones, bilingual opportunity (French/English)
- Partner with Canadian banks and credit unions
- Test international go-to-market before bigger markets
Q3 2026: Platform partnerships
- Integrate with accounting software (Xero, QuickBooks) for automatic financial data
- Co-marketing with AI tool vendors (Cursor, Cline)
Q4 2026: Agent-to-agent workflows
- Let AI agents apply for loans on user's behalf
- Agentic document collection and verification (less human involvement)
- Automated follow-up and negotiation with lenders
2027: Vertical expansion
- Insurance
- Merchant services
- Goal: Become the financial services infrastructure layer for AI
The Open Questions I'm Still Figuring Out
How do you maintain data quality at scale? Hundreds of lender programs, each updating rates daily. How do you keep data fresh and accurate while respecting rate limits and API constraints?
What's the right balance between AI autonomy and regulatory compliance? Should AI be able to submit loan applications on behalf of users? Or just recommend options? Where's the line between helpful and potentially violating lending regulations?
How do you handle multi-state compliance? Each state has slightly different regulations. Do you limit to certain states initially, or build for all 50 from day one?
Can a single-person company really achieve unicorn status? Or does regulatory complexity eventually require teams? I want to believe infinite leverage is possible, but compliance might force me to hire.
How do you defend against well-funded competitors? Once MCP distribution proves viable, what stops every fintech company from copying this playbook? Network effects? Proprietary data? Compliance moat? All of the above?
I don't have answers yet. But I'm figuring it out in public. That's the fun part.
The Thesis (And Why I Think You Should Care)
Here's what I believe:
In 3-5 years, AI assistants will be the primary interface for complex decision-making.
People won't Google "best business loans" and click through 10 comparison sites. They'll just ask Claude or ChatGPT, and the AI will search across all available data sources (via MCP servers) to find the best answer.
Companies that build infrastructure for AI rather than websites for humans will capture outsized value.
Why Fintech Specifically?
Four reasons:
High-intent, high-value transactions → A single loan can be worth $500K. A single lead pays $300. The economics work.
Complex decision-making → Choosing a loan involves comparing rates, terms, fees, eligibility requirements. Perfect use case for (Agentic) AI.
Fragmented data → 200+ lenders, opaque pricing, no standardization. Someone needs to aggregate and normalize this data.
Broken user experience → Forms. Phone calls. Weeks of waiting. AI can do it in minutes conversationally.
The Opportunity (If You're Building Something Similar)
Think of this as the Plaid moment for AI.
Plaid won because they made it easy for apps to access bank data. Every fintech company integrated Plaid instead of building bank connections themselves.
The same opportunity exists for AI-powered financial services. Be the infrastructure layer that makes it easy for AI to access lender data, insurance data, investment data, whatever.
Own the data layer → capture the value.
Want to Build Your Own MCP Server?
If you're inspired by this and want to build your own MCP server (for a different vertical), here's my advice:
Pick a vertical with fragmented data → Real estate, healthcare, legal services, B2B software. Anywhere data is scattered across hundreds of providers.
Start with one AI platform → Don't try to support everything at once. Build for ChatGPT or Claude Desktop, get 100 users, then expand.
Make installation dead simple → One-click installer or bust. Manual config kills adoption.
Open-source the schema → Builds trust, attracts contributors, shows you're not trying to hide anything.
Track everything → Logs, metrics, user behavior. You need data to iterate.
Don't wait for app store approval → Build your own distribution channels. LinkedIn, Reddit, Hacker News, Twitter, GitHub. Go direct.
Follow Along
I'm building this in public. If you want to follow the journey:
- LinkedIn: SecureLend
- GitHub: github.com/SecureLend/mcp-financial-services
- Docs: docs.securelend.ai
- Install the MCP server: extensions.securelend.ai
If you're building something similar, DM me. I'd love to compare notes.
The future of software distribution is being written right now. Might as well help write it.
Thanks for reading. If you found this useful, drop a comment, share it with someone building in the AI infrastructure space, or connect on LinkedIn. And if you're a developer who wants to test the MCP server, go ahead and install it—I'd love your feedback.
—Tobias, Founder of SecureLend