Lambo Poewert

How I Built a Profitable Solana Intelligence Platform on a Single Server — €132/month

Two months ago I started building MadeOnSol, a real-time trading intelligence platform for Solana. Today it has served 22,000+ API calls, tracks 1,000 KOL wallets across 11 DEXes, scores 17,000+ token deployers, and indexes 1.1 million buyer records.

It runs on a single dedicated server. It's profitable. Here's the full technical breakdown.

The Stack

  • Framework: Next.js (App Router, SSG)
  • Database: Self-hosted Supabase (PostgreSQL + Auth)
  • Styling: Tailwind CSS
  • Real-time data: Dual gRPC streams via Yellowstone (Triton) from Constant-K validator nodes
  • Process manager: PM2
  • Reverse proxy: Nginx
  • CDN/DNS: Cloudflare (free tier)
  • Email: Resend
  • Analytics: Umami (self-hosted)
  • Payments: USDC and SOL on Solana (no Stripe, no credit cards)

The Server

Hetzner dedicated server, €100/month:

  • Intel Xeon E-2176G (6 cores / 12 threads, 3.70GHz)
  • 64GB DDR4 ECC RAM
  • 2x Samsung PM983 960GB NVMe in RAID1 (database)
  • 2x Micron 5200 1.92TB SATA SSD in RAID1 (backups + historical data)
  • 1 Gbit unlimited traffic

Plus ~€32/month for two Constant-K gRPC validator nodes (Frankfurt + New York). Total: €132/month.

How the Data Pipeline Works

Solana processes thousands of transactions per second across dozens of DEX programs. I needed to capture every swap in real-time.

gRPC Streams

I use Yellowstone gRPC (by Triton) to subscribe to transaction updates from Solana validators. Two streams run in parallel:

  • Frankfurt node — primary, low latency for European users
  • New York node — failover, low latency for US users

Each stream subscribes to 11 DEX program IDs:

// Labels shown for readability — the actual gRPC subscription uses
// each program's base58 on-chain address, not these names.
const DEX_PROGRAMS = [
  'PumpFun',
  'PumpSwap',
  'Raydium',
  'Jupiter',
  'Orca',
  'Meteora',
  'LetsBonk',
  'Bags.app',
  // + 3 more
];

When a transaction comes in, the listener:

  1. Parses the instruction data (each DEX has its own format)
  2. Extracts: wallet, token mint, SOL amount, direction (buy/sell), DEX name
  3. Checks against 1,000 tracked KOL wallets (O(1) hash lookup)
  4. Checks against 17,000+ scored deployer wallets
  5. Stores the trade in PostgreSQL
  6. Pushes to active WebSocket subscribers

Detection latency: under 3 seconds from on-chain to API response.
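The watchlist checks in steps 3 and 4 can be sketched in a few lines. This is an illustrative sketch, not the production code — the names (`classifyTrade`, `deployerScores`) and the sample data are assumptions:

```javascript
// Illustrative: classify a parsed swap against the in-memory watchlists.
const kolWallets = new Set(['KOL_WALLET_1']);                    // 1,000 entries in prod
const deployerScores = new Map([['DEPLOYER_1', { tier: 'Elite' }]]); // 17,000+ in prod

function classifyTrade(trade) {
  return {
    ...trade,
    isKol: kolWallets.has(trade.wallet),                  // O(1) hash lookup
    deployer: deployerScores.get(trade.deployer) ?? null, // null if unscored
  };
}

const result = classifyTrade({
  wallet: 'KOL_WALLET_1',
  deployer: 'DEPLOYER_1',
  mint: 'TOKEN_MINT',
  solAmount: 2.5,
  side: 'buy',
  dex: 'pumpfun',
});
// result.isKol === true, result.deployer.tier === 'Elite'
```

Because both lookups are in-memory hash structures, the per-transaction cost stays flat no matter how many wallets are tracked.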

Dual-Region Failover

The two gRPC streams run independently. If Frankfurt drops, New York keeps processing. Reconnection is handled in-process — no PM2 restarts needed.

class GrpcStream {
  async connect() {
    try {
      this.stream = await this.client.subscribe();
      // Wrap the handler in an arrow fn so `this` survives the emitter call.
      this.stream.on('data', (tx) => this.handleTransaction(tx));
      this.stream.on('error', () => this.reconnect());
    } catch (err) {
      // Back off before retrying so a dead node isn't hammered.
      setTimeout(() => this.connect(), 5000);
    }
  }

  async reconnect() {
    this.stream?.destroy();
    await this.connect();
  }
}

KOL Tracking

We maintain a list of 1,000 Solana KOL (Key Opinion Leader) wallets. For every trade detected, we check if the wallet is a tracked KOL.

The lookup is a simple Set in memory:

const kolWallets = new Set(await loadKolWallets());

function handleTrade(trade) {
  if (kolWallets.has(trade.wallet)) {
    // Store KOL trade, calculate PnL, check coordination
  }
}

Coordination detection: when 3+ KOLs buy the same token within a time window, we flag it as a potential coordination signal.
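A minimal sketch of that windowed check, assuming a 10-minute window and illustrative names (the post doesn't specify the window length or implementation):

```javascript
// Flag a token when 3+ distinct KOL wallets buy it inside a time window.
const WINDOW_MS = 10 * 60 * 1000; // assumed 10-minute window
const buysByToken = new Map();    // mint -> [{ wallet, ts }]

function recordKolBuy(mint, wallet, ts) {
  // Drop buys that fell out of the window, then record the new one.
  const recent = (buysByToken.get(mint) ?? []).filter((b) => ts - b.ts <= WINDOW_MS);
  recent.push({ wallet, ts });
  buysByToken.set(mint, recent);
  // Coordination signal: 3+ distinct KOLs inside the window.
  const distinct = new Set(recent.map((b) => b.wallet));
  return distinct.size >= 3;
}

const t = Date.now();
recordKolBuy('MINT_A', 'kol1', t);
recordKolBuy('MINT_A', 'kol2', t + 1000);
const flagged = recordKolBuy('MINT_A', 'kol3', t + 2000); // → true
```

Counting distinct wallets (not raw buys) matters: one KOL buying three times should not trip the signal.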

Deployer Scoring

Every Pump.fun token has a deployer wallet. We track all 17,000+ deployers and score them by:

  • Bonding rate — what percentage of their tokens graduated from the bonding curve
  • Lifetime deploys — total tokens launched
  • Recent performance — last 10 tokens vs lifetime average

Tiers: Elite (50%+ bond rate), Good (30-50%), Average (10-30%), Poor (<10%)

The first 20 buyers on every Pump.fun token are also recorded. Over time this builds a dataset of 1.1 million+ buyer records that reveals which wallets consistently buy early on winning tokens.
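The tiering rule above reduces to a small pure function. This sketch uses the thresholds from the post; the function name and input shape are illustrative:

```javascript
// Tier a deployer by bonding rate: graduated tokens / total deploys.
function deployerTier({ graduated, deployed }) {
  if (deployed === 0) return 'Unrated';  // no track record yet
  const bondRate = graduated / deployed;
  if (bondRate >= 0.5) return 'Elite';   // 50%+ graduate
  if (bondRate >= 0.3) return 'Good';    // 30-50%
  if (bondRate >= 0.1) return 'Average'; // 10-30%
  return 'Poor';                         // <10%
}

deployerTier({ graduated: 6, deployed: 10 }); // → 'Elite'
deployerTier({ graduated: 0, deployed: 10 }); // → 'Poor'
```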

Database Optimization

Materialized Views

The KOL leaderboard and deployer stats are expensive queries (joins + aggregations across millions of rows). Running them on every API request would kill performance.

Solution: materialized views refreshed by pg_cron.

CREATE MATERIALIZED VIEW kol_pairs_summary AS
SELECT wallet, token_mint,
       count(*) AS trade_count,
       sum(sol_amount) AS total_volume
FROM kol_trades
GROUP BY wallet, token_mint;

-- CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX kol_pairs_summary_pk
  ON kol_pairs_summary (wallet, token_mint);

-- Refresh every 5 minutes
SELECT cron.schedule('refresh-kol-pairs', '*/5 * * * *',
  'REFRESH MATERIALIZED VIEW CONCURRENTLY kol_pairs_summary');

Result: /kol/pairs endpoint went from 500ms to 5ms. 100x improvement.

Connection Pooling

Self-hosted Supabase includes Supavisor for connection pooling. PostgreSQL's process-per-connection model means 200 connections = 200 OS processes. Supavisor multiplexes thousands of API requests through a smaller pool of actual database connections.
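Application code doesn't change; the client just points at the pooler instead of Postgres directly. A minimal illustration (credentials are placeholders; 6543 is Supavisor's usual transaction-mode port, 5432 is Postgres itself):

```text
# Direct to Postgres: one OS process per client connection
DATABASE_URL=postgres://app:secret@localhost:5432/madeonsol

# Via Supavisor: many clients multiplexed over a small pool
DATABASE_URL=postgres://app:secret@localhost:6543/madeonsol
```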

API Architecture

Three tiers:

Tier    Price     Rate limit     Features
Free    $0        200/day        REST endpoints
Pro     $49/mo    10,000/day     REST + 1 WebSocket + 3 webhooks
Ultra   $149/mo   100,000/day    REST + 3 WebSocket + DEX firehose + 10 webhooks

Authentication: Bearer token (msk_ prefixed API keys). Rate limiting: sliding window counter in PostgreSQL (considering Redis for this when load increases).
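The sliding-window logic is simple enough to sketch. This is an in-memory simplification for illustration only — per the post, production keeps the counters in PostgreSQL:

```javascript
// Sliding-window rate limiter: allow at most `limit` requests per
// rolling `windowMs` per API key (in-memory sketch of the PG version).
function makeLimiter(limit, windowMs) {
  const hits = new Map(); // apiKey -> [timestamps]
  return function allow(apiKey, now = Date.now()) {
    // Keep only timestamps still inside the rolling window.
    const recent = (hits.get(apiKey) ?? []).filter((t) => now - t < windowMs);
    if (recent.length >= limit) {
      hits.set(apiKey, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    hits.set(apiKey, recent);
    return true;
  };
}

const allow = makeLimiter(3, 1000); // 3 requests per rolling second
const results = [0, 1, 2, 3].map((ms) => allow('msk_demo', ms));
// → [true, true, true, false]
```

Unlike a fixed-window counter, this never allows a 2x burst at a window boundary, which is why it's worth the extra bookkeeping.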

Payments: USDC or SOL on Solana. No Stripe integration. Users connect their wallet, send payment to our treasury, and the API verifies the transaction on-chain.
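Once the transaction is fetched over RPC (e.g. with `@solana/web3.js`'s `connection.getTransaction(signature)`) and its transfer parsed, the verification itself is a simple comparison. This sketch is illustrative — the field names, treasury address, and prices are stand-ins:

```javascript
// Verify a parsed on-chain transfer against a plan's price.
const TREASURY = 'TREASURY_WALLET';    // hypothetical treasury address
const PLANS = { pro: 49, ultra: 149 }; // USD prices from the tier table

function verifyPayment(transfer, plan, usdPerUnit = 1) {
  return (
    transfer.destination === TREASURY &&
    transfer.mint === 'USDC' &&               // SOL would use spot price
    transfer.amount * usdPerUnit >= PLANS[plan]
  );
}

const ok = verifyPayment(
  { destination: 'TREASURY_WALLET', mint: 'USDC', amount: 49 },
  'pro'
); // → true
```

In practice you would also record the signature so a payment can't be replayed to activate two subscriptions.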

WebSocket & DEX Firehose

The DEX Firehose streams every swap across all 11 DEXes via WebSocket. Users subscribe with server-side filters:

{
  "stream": "dex_trades",
  "filters": {
    "dexes": ["pumpfun", "raydium"],
    "min_sol_amount": 5,
    "action_types": ["buy"]
  }
}

The server checks each incoming trade against all active subscriptions and only pushes matching events. One gRPC stream in, many filtered WebSocket streams out.
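The per-trade filter check follows directly from the subscription shape above. A minimal sketch (the function name and trade fields are illustrative):

```javascript
// Return true if a trade passes a subscription's server-side filters.
// Absent filter keys match everything.
function matchesFilters(trade, filters) {
  if (filters.dexes && !filters.dexes.includes(trade.dex)) return false;
  if (filters.min_sol_amount && trade.sol_amount < filters.min_sol_amount) return false;
  if (filters.action_types && !filters.action_types.includes(trade.action)) return false;
  return true;
}

const filters = { dexes: ['pumpfun', 'raydium'], min_sol_amount: 5, action_types: ['buy'] };
const hit  = matchesFilters({ dex: 'pumpfun', sol_amount: 7, action: 'buy' }, filters); // → true
const miss = matchesFilters({ dex: 'orca', sol_amount: 7, action: 'buy' }, filters);    // → false
```

Filtering server-side keeps bandwidth proportional to what each subscriber actually wants, not to total DEX volume.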

SEO: 9 to 7,652 Indexed Pages

Programmatic SEO at scale:

  • 1,000+ tool pages with unique descriptions
  • Comparison pages ("Tool A vs Tool B")
  • Alternatives pages ("Best X alternatives")
  • Best-of pages per category
  • Schema markup, proper meta tags, XML sitemap

Google indexed 94% of the sitemap. Bing's AI has cited the site 3,400+ times. ChatGPT and DuckDuckGo send referral traffic.

The insight: AI search engines cite structured, detailed, honest content automatically. I didn't optimize for AI — I just built useful pages.

Scraper Defense

With 7,652 indexed pages and growing traffic, scrapers showed up. Asian residential proxies running headless Chrome inflated the bounce rate from 54% to 91%.

Defense layers:

  1. Cloudflare Bot Fight Mode — catches obvious bots
  2. Cloudflare firewall rules — challenge traffic from VN/HK/JP/ID without a search engine referrer
  3. Nginx rate limiting — per-IP request caps
  4. Supabase RLS — intelligence data (KOL trades, deployer scores) requires authentication

The tool directory is public (it's SEO content), but the paid data is gated.
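Layer 3 is a few lines of standard Nginx config. An illustrative sketch — the zone name, rate, and upstream port are placeholders, not the production values:

```nginx
# Per-IP rate limit: 10 req/s steady state, bursts of 20 absorbed.
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    listen 80;
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }
}
```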

Security Audit Findings

Ran a full security audit in month 3. Key findings:

  • Critical: Missing Row Level Security on payment-related tables. Fixed with a migration adding RLS policies.
  • Critical: SSRF in webhook delivery — user-supplied URLs weren't validated. Fixed by reusing existing isPrivateUrl() validation.
  • High: Unauthenticated routes using admin Supabase client (bypassing RLS). Data was intentionally public, but pattern was risky.
  • Medium: 50+ any types from untyped Supabase queries. Fixed by generating types with supabase gen types typescript.
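For context on the SSRF fix, here is what an `isPrivateUrl()` style check might look like. This is a hedged sketch, not the actual helper, and it checks literal hosts only — a production version must also resolve DNS and re-check the resolved IP, or the check can be bypassed with a domain pointing at a private address:

```javascript
// Reject webhook URLs that target internal or link-local hosts.
function isPrivateUrl(raw) {
  let url;
  try { url = new URL(raw); } catch { return true; } // unparseable -> reject
  if (!['http:', 'https:'].includes(url.protocol)) return true;
  const host = url.hostname;
  return (
    host === 'localhost' ||
    host === '169.254.169.254' ||             // cloud metadata endpoint
    /^127\./.test(host) ||                    // loopback
    /^10\./.test(host) ||                     // RFC 1918
    /^192\.168\./.test(host) ||               // RFC 1918
    /^172\.(1[6-9]|2\d|3[01])\./.test(host)   // RFC 1918 172.16-31
  );
}

isPrivateUrl('http://169.254.169.254/latest/meta-data'); // → true
isPrivateUrl('https://example.com/webhook');             // → false
```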

Capacity Planning

Current load: ~2,250 API calls/day (0.03 requests/second).

Conservatively, the server can handle 17 million calls/day (200 requests/second under a mixed workload). That's 7,500x current usage.

Bottleneck order:

  1. PostgreSQL connections (fix: PgBouncer)
  2. CPU contention between gRPC + API + DB (fix: split servers)
  3. WebSocket file descriptors (fix: increase ulimit)

Memory, disk I/O, and network are not bottlenecks.
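For bottleneck 3, the fix is a one-time config change. An illustrative fragment — the username and values are placeholders, and PM2 under systemd would take `LimitNOFILE` in the unit file instead:

```text
# /etc/security/limits.conf — raise per-process file-descriptor caps
# so thousands of concurrent WebSockets don't exhaust the default 1024
deploy soft nofile 1048576
deploy hard nofile 1048576
```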

Scaling plan: this single server handles up to 500-1,000 active API users. When revenue justifies it, split into a database server (keep the Xeon) and a cheap app server (~€15/month).

Results After 2 Months

  • MRR: $196 (4 Pro subscribers, profitable)
  • Registered developers: 45
  • API calls: 22,000+
  • Data indexed: 1.1M buyer records, 161K wallets, 17K deployers
  • Google indexed pages: 7,652
  • AI citations: 3,400+
  • Infrastructure cost: €132/month
  • Marketing spend: €0
  • Team size: 1

The entire platform — real-time gRPC streams, PostgreSQL, Next.js, WebSocket server, API, analytics — runs on a single €100/month dedicated server and is profitable.

Key Takeaways

  1. Self-hosting is viable for SaaS. Supabase, PostgreSQL, Next.js on a dedicated server gives you full control at a fraction of cloud costs.
  2. Materialized views solve most performance problems. Before reaching for Redis, try pg_cron + materialized views.
  3. Programmatic SEO works if the content is useful. 7,652 pages indexed because each one answers a real question.
  4. AI search is a real distribution channel. 3,400 citations without optimization.
  5. Crypto payments reduce friction. USDC on Solana is faster than Stripe for our audience.
  6. Don't scale prematurely. One server handles 7,500x our current load. Scale when you need to, not when you think you might.

If you're building a data-intensive SaaS and wondering whether to go cloud or self-host, the math is clear: €132/month for a server that handles millions of daily requests vs $500+/month for equivalent cloud infrastructure.

Happy to answer questions about any part of the stack.

madeonsol.com
