Mateusz Sroka
Why 1-second polling doesn't scale (and the architectures that do)

I've watched this pattern destroy startups repeatedly. A team builds a crypto portfolio tracker, polls prices every second. Works perfectly at 50 users.

Then they hit 10,000 users. AWS bill jumps from $200 to $18,000 in one month. App slows to a crawl. Support tickets flood in. The team stays up three nights straight migrating to streaming. Every time, the same regret: "Should've switched months earlier."

1-second polling seems reasonable. Prices update fast, users want fresh data. But the math destroys you at scale. Here's exactly where it breaks and what works instead.

In this guide:

  • Real cost calculations showing where polling breaks down
  • The three limits that kill polling (connections, rate limits, bandwidth)
  • Architectures that scale (SSE, WebSocket, hybrid)
  • Migration strategies without downtime
  • When polling still makes sense

Polling starts cheap, then explodes

Polling looks simple. Send a request every N seconds. Get fresh data. Repeat. For small apps, it's perfect.

But there's a trap. Polling costs scale with users multiplied by update frequency. That multiplication crushes you.

Let me show you the numbers.

The math

Setup: 10,000 concurrent users, 1-second polling for crypto prices.

Requests per second:

10,000 users × 1 request/second = 10,000 req/sec

Requests per month:

10,000 req/sec × 60 sec × 60 min × 24 hr × 30 days = 25.9 billion requests

AWS API Gateway cost:

First 333M requests: $3.50 per million = $1,165
Next 667M requests: $2.80 per million = $1,868
Next 19B requests: $2.38 per million = $45,220
Remaining ~5.9B requests (rounded to 6B): $1.51 per million ≈ $9,060

Total: $57,313 per month

For polling. At 1-second intervals. For 10K users.
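The tier math is easy to fumble by hand. Here's the same calculation as code, using the per-million rates quoted above (note the breakdown above rounds the final tier up to 6B; computed exactly, the last tier is ~5.9B and the total lands near $57,200):

```javascript
// Sketch: tiered API Gateway pricing, using the per-million rates above
function apiGatewayCost(requests) {
  const tiers = [
    { size: 333e6, rate: 3.5 },     // first 333M
    { size: 667e6, rate: 2.8 },     // next 667M
    { size: 19e9, rate: 2.38 },     // next 19B
    { size: Infinity, rate: 1.51 }, // everything past 20B
  ];
  let remaining = requests;
  let cost = 0;
  for (const { size, rate } of tiers) {
    const inTier = Math.min(remaining, size);
    cost += (inTier / 1e6) * rate; // rates are per million requests
    remaining -= inTier;
    if (remaining <= 0) break;
  }
  return cost;
}

// 10,000 users polling once per second for a 30-day month
const monthlyRequests = 10_000 * 60 * 60 * 24 * 30; // 25.92 billion
console.log(apiGatewayCost(monthlyRequests).toFixed(0)); // "57192"
```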

Teams never believe this until they see the bill.

Where the money goes

It's not just API costs. Polling hits you everywhere:

Bandwidth costs:

  • 10,000 req/sec × 500 bytes response = 5 MB/sec
  • 5 MB/sec × 2.6M sec/month = roughly 13 TB/month
  • AWS bandwidth: 13 TB × $0.09/GB = $1,170/month

Database load:

  • 10,000 reads/sec continuous
  • RDS t3.medium maxes out around 3,000/sec
  • Need RDS r5.2xlarge: $730/month minimum
  • Still need read replicas: another $730/month

Application servers:

  • Each server handles ~1,000 req/sec (with caching)
  • Need 10 servers minimum
  • EC2 t3.medium × 10 = $300/month base
  • With load balancing and auto-scaling: around $500/month

Total monthly cost for 10K users with 1-second polling:

API Gateway:  $57,313
Bandwidth:    $1,170
Database:     $1,460
Servers:      $500
--------------------------
TOTAL:        $60,443/month

Per-user cost: $6.04/month

That's just infrastructure. Add development time debugging connection issues, rate limit errors, and database overload. I've seen teams burn two weeks fighting these problems.

Where polling breaks

Costs hurt, but they're not what kills you first. Polling dies from three hard limits.

Limit 1: Browser connection cap

Browsers allow 6 concurrent HTTP/1.1 connections per domain. Open 7 tabs with your app? Tab 7 hangs forever.

I've seen this kill deals. A hedge fund opens 20 price charts in separate tabs. Only 6 load. The rest show loading spinners eternally. They call it "broken" and cancel the trial. One team lost a $50K/year contract because of browser connection limits.

The math:

  • 6 connection limit per domain
  • Each poll request holds a connection for 100-200ms
  • At 1-second polling: 6 connections × (1000ms / 150ms avg) = around 40 requests/sec max
  • Need faster updates? Hit the limit immediately

HTTP/2 helps but doesn't solve it. Multiplexing lifts the connection cap, but every poll still pays per-request overhead, and the server still fields the full request volume.
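The ceiling above is just arithmetic, but it's worth making explicit. A quick sketch using the 150ms average hold time assumed in the list:

```javascript
// Sketch: throughput ceiling from the browser's per-domain connection cap
function maxPollRate(connectionCap, avgHoldMs) {
  // each connection can complete ~(1000 / avgHoldMs) requests per second
  return connectionCap * (1000 / avgHoldMs);
}

console.log(maxPollRate(6, 150)); // ~40 req/sec — the hard ceiling per domain
```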

Limit 2: Rate limiting hell

You're not polling your own API. You're polling Binance, Coinbase, Kraken for prices. They have rate limits.

Binance rate limits:

  • 1,200 requests/minute per IP
  • That's 20 req/sec
  • You have 10,000 users polling every second
  • You'd need 500 different IP addresses to stay under limit

CoinGecko Pro:

  • 500 requests/minute with paid plan
  • 10,000 users = you'd need 20 paid accounts
  • Cost: 20 × $129/month = $2,580/month just for rate limit headroom

Teams hit these limits at 50 concurrent users. They add exponential backoff, retry queues, distributed rate limiting. The code turns into spaghetti. I've watched teams spend a month untangling it.

Limit 3: Thundering herd on deploys

Here's a pattern I've analyzed multiple times. Team deploys new code. All servers restart. 10,000 connected clients lose their polling loops. They all reconnect simultaneously.

What happens:

T+0s:  10,000 clients reconnect
T+0s:  10,000 requests hit API at once
T+0s:  Load balancer sees spike, spawns more servers
T+2s:  New servers boot, load balancer adds them
T+2s:  Original servers already crashed from spike
T+4s:  New servers get 10,000 requests immediately, crash
T+6s:  Auto-scaling spawns MORE servers
T+10s: AWS bill climbing $500/hour from runaway scaling
T+30s: Engineer in console manually shutting down instances

I've seen this happen during routine 2 AM deploys. PagerDuty alerts, $2,000 AWS bill spikes, angry managers.

Streaming avoids this. Reconnections use exponential backoff with random jitter built in, so the load spreads over 30-60 seconds instead of hitting all at once.
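The jittered backoff itself is a few lines. A sketch of the delay schedule (the base delay, cap, and 1-second jitter window are illustrative values):

```javascript
// Jittered exponential backoff: 1s, 2s, 4s ... capped at 30s, plus up to
// 1s of random jitter so clients don't reconnect in lockstep
function backoffDelay(attempt, baseMs = 1000, capMs = 30_000) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp + Math.random() * 1000; // jitter de-synchronizes the herd
}

// first few delays for one client (jitter makes each run different)
console.log([0, 1, 2, 3].map((n) => Math.round(backoffDelay(n))));
```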

What works instead

Once you accept polling is doomed, three patterns work.

SSE for simplicity

Server-Sent Events is HTTP with a persistent connection. Server pushes updates as they happen. Browser handles reconnection automatically.

Cost comparison for 10K users:

Polling (baseline):

  • 25.9 billion requests/month
  • $60,443/month total cost

SSE:

  • 10,000 persistent connections
  • Each connection: around 1 price update/second
  • Bandwidth: 10K × 500 bytes/sec × 2.6M sec = 13 TB (same as polling)
  • But: Only 10,000 initial connection requests (vs 25.9 billion!)
  • CloudFront + Lambda@Edge: roughly $1,200/month
  • Savings: $59,243/month (98% reduction)

I've seen teams migrate from polling to SSE in one weekend. The bill drops by 95%. Connection code shrinks from 200 lines to 20.

Example using DexPaprika free SSE:

// Free SSE for crypto prices - no API key needed
const events = new EventSource(
  'https://streaming.dexpaprika.com/stream?method=t_p&chain=ethereum&address=0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48'
);

events.onmessage = (event) => {
  const data = JSON.parse(event.data);
  updatePrice(data.price);
};

// Browser handles reconnection automatically
// Last-Event-ID replay prevents data loss
// Works through corporate firewalls

When SSE wins:

  • Unidirectional data (server → client only)
  • Under 100K concurrent users
  • Enterprise customers (firewall friendly)
  • You need to ship fast

WebSocket for bidirectional needs

WebSocket creates a persistent TCP socket. Lower latency than SSE. Supports bidirectional communication. Better for trading apps.

Cost comparison for 10K users:

WebSocket infrastructure:

  • 10,000 persistent connections
  • Binary encoding (MessagePack) reduces payload 60%
  • Bandwidth: 10K × 200 bytes/sec × 2.6M sec = around 5.2 TB/month
  • EC2 with uWebSockets.js: around $800/month (can handle 240K connections on one 16-core server)
  • Savings vs polling: $59,643/month (98.7% reduction)
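The 60% payload reduction comes from binary framing. MessagePack or protobuf would do this in practice; a hand-rolled sketch with DataView shows the idea — the same price tick in 14 bytes instead of ~55 bytes of JSON:

```javascript
// Sketch: one price tick as 14 bytes of binary vs ~55 bytes of JSON
// (a real app would use MessagePack or protobuf, not hand-rolled frames)
function encodeTick(symbolId, price, tsMs) {
  const buf = new ArrayBuffer(14);             // 2 + 8 + 4 bytes
  const view = new DataView(buf);
  view.setUint16(0, symbolId);                 // symbol as a small integer id
  view.setFloat64(2, price);                   // price as an IEEE-754 double
  view.setUint32(10, Math.floor(tsMs / 1000)); // timestamp in unix seconds
  return buf;
}

const json = JSON.stringify({ symbol: 'BTC-USD', price: 97123.45, ts: 1700000000000 });
const binary = encodeTick(1, 97123.45, 1700000000000);
console.log(json.length, binary.byteLength); // JSON is roughly 4x larger
```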

But WebSocket adds complexity:

// WebSocket requires manual reconnection
class PriceFeed {
  constructor(url) {
    this.url = url;
    this.reconnectDelay = 1000;
    this.connect();
  }

  connect() {
    this.ws = new WebSocket(this.url);

    this.ws.onopen = () => {
      this.reconnectDelay = 1000;
      this.ws.send(JSON.stringify({
        subscribe: ['BTC-USD', 'ETH-USD']
      }));
    };

    this.ws.onmessage = (event) => {
      const data = JSON.parse(event.data);
      updatePrice(data);
    };

    this.ws.onclose = () => {
      // Exponential backoff prevents thundering herd
      const jitter = Math.random() * 1000;
      setTimeout(() => {
        this.connect();
        this.reconnectDelay = Math.min(this.reconnectDelay * 2, 30000);
      }, this.reconnectDelay + jitter);
    };
  }
}

When WebSocket wins:

  • Need bidirectional communication (trading, chat)
  • Binary data efficiency matters
  • Over 100K concurrent users
  • Mobile battery optimization critical

Hybrid approach (recommended pattern)

Don't pick one. Use both.

Many teams run SSE for public price feeds and WebSocket for authenticated trading. Different constraints, different tools.

Public price feeds (SSE):

  • No authentication needed
  • Works through firewalls
  • Browser auto-reconnect
  • Simple to implement
  • Free with DexPaprika

Authenticated trading (WebSocket):

  • Order placement needs bidirectional
  • Lower latency matters (sub-100ms)
  • Binary encoding reduces costs
  • Full control over connection

Cost breakdown for 10K users (hybrid):

SSE public feeds:     $800/month (8K users)
WebSocket trading:    $400/month (2K users)
---------------------------------
Total:                $1,200/month

vs Polling baseline:  $60,443/month
Savings:              $59,243/month (98%)

This isn't architectural purity. But it works. Infrastructure cost drops from $60K to $1.2K monthly. Same features. Better performance. Everyone sleeps better.
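The routing decision itself can be a few lines. A sketch of the hybrid split (channel names and URLs are placeholders):

```javascript
// Sketch of the hybrid split — public data over SSE, authenticated
// trading over WebSocket (endpoints are illustrative)
function transportFor(channel, user) {
  if (channel === 'prices') {
    // public feed: firewall-friendly, browser auto-reconnect
    return { kind: 'sse', url: 'https://example.com/prices/stream' };
  }
  if (channel === 'orders' && user.authenticated) {
    // order flow needs bidirectional, low-latency transport
    return { kind: 'websocket', url: 'wss://example.com/trade' };
  }
  throw new Error(`no transport for channel: ${channel}`);
}
```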

Migration path: from polling to streaming

You can't flip a switch. Here's how to do it without downtime.

First two weeks: Add streaming alongside polling

Deploy streaming endpoints. Don't remove polling yet. Run both in parallel.

// Feature flag for gradual rollout
const useStreaming = user.id % 100 < 10; // 10% of users

if (useStreaming) {
  connectSSE();
} else {
  startPolling();
}

Monitor error rates, connection stability, data consistency. Fix issues before expanding. Teams typically find bugs in the first week that would've crashed production with a full rollout.

Week 3: Increase streaming to 50%

Bump feature flag to 50%. Watch infrastructure costs drop. Handle edge cases:

  • Corporate firewalls blocking WebSocket? Fall back to SSE
  • SSE failing? Fall back to polling temporarily
  • Connection storms? Add random jitter to reconnect timing

This week reveals edge cases. Common issue: corporate proxies silently kill SSE connections after 60 seconds. Solution: Add keepalive pings.
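The keepalive is cheap because SSE comments are part of the wire format — any line starting with `:` is ignored by EventSource. A sketch of the frame helpers (server-side, you'd write the keepalive frame on a ~25-second timer):

```javascript
// Sketch: SSE wire-format helpers. Lines starting with ':' are comments
// per the SSE spec — EventSource ignores them, but they stop idle
// proxies from dropping the connection.
function sseEvent(data, id) {
  // optional id enables Last-Event-ID replay after reconnect
  const idLine = id !== undefined ? `id: ${id}\n` : '';
  return `${idLine}data: ${JSON.stringify(data)}\n\n`;
}

function sseKeepalive() {
  return ': keepalive\n\n';
}

// Server side, write a keepalive frame periodically:
//   setInterval(() => res.write(sseKeepalive()), 25_000);
console.log(sseEvent({ price: 42 }, 7));
```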

Week 4-5: 100% streaming, keep polling as fallback

Flip everyone to streaming. Keep polling code as fallback for failures.

class ResilientPriceFeed {
  constructor() {
    this.failureCount = 0;
    this.maxFailures = 3;
  }

  connect() {
    try {
      this.sse = new EventSource(streamingURL);
      this.sse.onerror = () => {
        this.failureCount++;
        if (this.failureCount > this.maxFailures) {
          this.sse.close();
          this.fallbackToPolling();
        }
      };
    } catch (error) {
      this.fallbackToPolling();
    }
  }

  fallbackToPolling() {
    console.warn('Streaming failed, using polling fallback');
    this.pollInterval = setInterval(() => {
      fetch(pollingURL)
        .then((res) => res.json()) // parse the body before updating
        .then(updatePrices);
    }, 5000); // Slower than before, but works
  }
}

Eventually: Remove polling code (or don't)

Once streaming proves stable for a month, you can remove polling. But many teams keep the fallback. It saves them during outages. The extra code is worth the insurance.

When polling still makes sense

Don't cargo cult streaming. Polling works fine when:

Low frequency updates (> 30 seconds)

If you're updating prices every minute, polling is simpler:

1,000 users × 1 req/min × 60 min × 24 hr × 30 days = 43.2M requests/month
Cost: around $150/month

vs SSE infrastructure: around $200/month setup + maintenance

Under 1,000 users with 60-second updates? Poll away.

One-off requests

User clicks "refresh prices" manually? Use polling. Streaming makes no sense for user-triggered actions.

Internal tools

Building a dashboard for your 20-person team? Polling is fine. Don't over-engineer. I've seen teams waste weeks building streaming infrastructure for 5 concurrent users. Just poll.

Testing and development

Polling is easier to debug. Start with polling to prove your concept. Migrate to streaming when scale demands it. Many teams prototype with polling, switch to streaming around 200 users.

Decision framework: polling vs streaming

Here's how to decide:

Quick reference:

Users    Update Freq   Use Polling      Use Streaming
< 100    Any           Simple           Optional
100-1K   > 60s         Works            Optional
100-1K   < 60s         Gets expensive   Recommended
1K-10K   > 5 min       Barely works     Recommended
1K-10K   < 5 min       Will fail        Required
> 10K    Any           Don't even try   Required
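The same table as a function, if you want the decision in code (thresholds taken directly from the rows above):

```javascript
// The quick-reference table as code (thresholds from the rows above)
function recommendTransport(users, updateIntervalSec) {
  if (users > 10_000) return 'streaming required';
  if (users > 1_000) {
    // 1K-10K users: streaming required under 5-minute intervals
    return updateIntervalSec < 300 ? 'streaming required' : 'streaming recommended';
  }
  if (users >= 100) {
    // 100-1K users: polling gets expensive under 60-second intervals
    return updateIntervalSec < 60 ? 'streaming recommended' : 'polling works';
  }
  return 'polling is fine';
}

console.log(recommendTransport(8_500, 1)); // "streaming required"
```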

Real-world migration results

Here's what the numbers look like for a typical portfolio tracker migration.

Before (polling at 1-second intervals):

  • Users: 8,500 concurrent
  • AWS monthly cost: $48,000
  • Average latency: 800ms
  • Error rate: 4% (mostly rate limits)
  • Database load: 95% CPU constant
  • Support tickets: 50/month about "slow prices"

After (SSE streaming):

  • Users: 8,500 concurrent (same)
  • AWS monthly cost: $1,100 (97.7% reduction)
  • Average latency: 150ms
  • Error rate: 0.1%
  • Database load: 15% CPU average
  • Support tickets: 2/month

Typical migration takes 3 weeks. Teams keep polling as fallback for 6 months (rarely need it). This kind of cost reduction can save a startup from running out of runway.

Free streaming changes everything

Here's what nobody talks about: cost isn't just infrastructure.

Traditional crypto data providers:

  • Binance WebSocket: Free but rate limited (20 req/sec)
  • CoinGecko Pro: $129/month for 500 req/min
  • CoinMarketCap: $499/month for real-time
  • Kaiko: $2,000+/month for institutional

For our 8,500 users:

  • Need around 150 req/sec minimum
  • Binance alone doesn't work (rate limits)
  • Would need 18 CoinGecko accounts = $2,322/month
  • Or one Kaiko enterprise plan = $5,000+/month

DexPaprika (docs at docs.dexpaprika.com):

  • Free SSE streaming
  • No rate limits
  • No API key required
  • Over 21 million tokens covered
  • 33+ chains

Teams routinely cut costs from $50K/month to $1.1K/month by switching to free streaming. The savings are real.

Summary

1-second polling dies at scale. Not from one problem but from three: costs multiply with users, you hit hard limits (connections, rate limits), and thundering herd issues crash your infrastructure.

Streaming fixes all three. SSE cuts costs 95-98%. WebSocket goes further with binary encoding. Both eliminate connection limits and rate limit hell. Random reconnection jitter prevents thundering herd.

For most crypto apps, SSE is the right choice. Simpler than WebSocket, works through firewalls, browsers handle reconnection automatically. Free options like DexPaprika make it a no-brainer.

Start with polling if you're under 100 users. Migrate to streaming before you hit 1,000. Your infrastructure costs and sleep schedule will thank you.

Frequently asked questions

At what point should I migrate from polling to streaming?

When your monthly infrastructure cost exceeds $500 or you're seeing rate limit errors. For most apps, this happens around 500-1,000 concurrent users with sub-10 second polling intervals. Better to migrate early before costs spiral.

Can I mix polling and streaming in the same application?

Yes, and you should. Use polling for user-triggered actions (manual refresh, one-off queries) and streaming for continuous price feeds. Hybrid approaches work well. The key is using the right tool for each use case.

What if streaming fails? Do I need a polling fallback?

Yes for production apps. Keep polling as a fallback for when streaming connections fail. A common pattern: trigger fallback after 3 consecutive streaming failures and poll at a slower rate (30-60 seconds). This saves teams during outages.

How do I handle the migration without downtime?

Use feature flags. Roll out streaming to 10% of users first, monitor for a week, then increase gradually. Keep both systems running for 30 days minimum. See the migration path section above for a detailed timeline.

Is SSE really free with DexPaprika? What's the catch?

Yes, completely free. No API key, no rate limits, no credit card. The catch is it's DEX data (not centralized exchange data). For most crypto apps showing token prices, DEX data works fine. Teams report stable production usage.
