Polling vs streaming for crypto prices: when each makes sense

Mateusz Sroka

I spent months helping teams pick between polling and streaming for their crypto apps. Most assumed streaming was always better: faster, more "real-time," more professional. Then I'd ask about their user count. 50 users. 60-second update frequency. Standard shared hosting.

They were about to spend 2 weeks building WebSocket infrastructure for a problem polling solves in 2 days.

The choice between polling (making repeated HTTP requests) and streaming (keeping a persistent connection open) depends on your update frequency, user scale, infrastructure, and team experience. Neither approach wins universally.

You'll learn:

  • When polling actually makes more sense than streaming
  • Real costs at different scales (50 users vs 5,000 users)
  • Production failures I've seen with both approaches
  • How to migrate between them without downtime

Disclosure: I work at CoinPaprika/DexPaprika, so the code examples use our APIs. Everything here applies to any polling or streaming implementation.


What you're choosing between

Polling: Your client asks "what's the price?" every N seconds. The server responds. Your client waits. Repeats. Simple, stateless, works everywhere.

Streaming: Your client opens a connection and says "tell me when the price changes." The server pushes updates. Connection stays open. Lower latency, more complexity.

Both keep your users informed of price changes. The trade-offs differ:

  • Latency: Polling = 0 to 1× your interval. 30-second polling means 0-30 seconds of staleness. Streaming = sub-second. (I measured this in What "Real-Time Crypto Prices" Actually Means)
  • Cost: Polling scales with users × requests. At 5,000 users with 30-second polling, you're making 10,000 requests per minute. Streaming has fixed overhead but lower per-user cost at scale.
  • Complexity: Polling = standard HTTP. Test with curl. Debug with logs. Streaming = connection lifecycle, reconnect logic, heartbeat monitoring.
  • Team learning: Polling took me 2 days to ship production-ready. Streaming took 2 weeks to handle all the edge cases.

A portfolio tracker with 50 users checking prices every minute doesn't need the same architecture as a trading dashboard with 5,000 users needing sub-second updates.


How polling works

You make periodic HTTP requests to check for new data. Your client sends a request every N seconds, gets current prices back, waits, repeats.

Basic polling in JavaScript:

// Poll DexPaprika API every 30 seconds (verified Dec 23, 2025)
const POLL_INTERVAL = 30000; // 30s in milliseconds

async function pollPrices() {
  try {
    // DexPaprika REST API: /networks/{network}/tokens/{address}
    const response = await fetch('https://api.dexpaprika.com/networks/ethereum/tokens/0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2');
    const data = await response.json();
    updateUI(data.summary.price_usd);
  } catch (error) {
    console.error('Poll failed:', error);
    // Production: add exponential backoff, circuit breaker
  }
}

setInterval(pollPrices, POLL_INTERVAL);

Each request is independent. If one fails, the next poll gets fresh data. No connection state to manage, no reconnect logic.

Production code needs exponential backoff for failures, jitter to avoid thundering herds (all clients polling simultaneously), and rate limit handling. The core stays simple: request, wait, repeat.
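
A minimal sketch of that hardening, building on pollPrices above (the backoff cap, jitter range, and 429 handling are illustrative assumptions, not provider-specific behavior):

// Polling loop with exponential backoff, jitter, and basic rate limit handling
const BASE_INTERVAL = 30000;  // normal 30s cadence
const MAX_BACKOFF = 300000;   // never wait longer than 5 minutes
let currentDelay = BASE_INTERVAL;

async function pollLoop() {
  try {
    const response = await fetch('https://api.dexpaprika.com/networks/ethereum/tokens/0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2');

    if (response.status === 429) {
      // Rate limited: back off exponentially instead of hammering the API
      currentDelay = Math.min(currentDelay * 2, MAX_BACKOFF);
    } else if (response.ok) {
      const data = await response.json();
      updateUI(data.summary.price_usd);
      currentDelay = BASE_INTERVAL; // success: reset to the normal cadence
    }
  } catch (error) {
    console.error('Poll failed:', error);
    currentDelay = Math.min(currentDelay * 2, MAX_BACKOFF);
  }

  // ±20% jitter so clients don't all poll at the same instant
  const jitter = currentDelay * (Math.random() * 0.4 - 0.2);
  setTimeout(pollLoop, currentDelay + jitter);
}

pollLoop();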

When polling is the right choice

Infrequent updates (30+ seconds acceptable)

Portfolio trackers showing daily P&L don't need sub-second prices. 30-60 second polling provides sufficient freshness. The overhead is negligible.

Personal finance app with 200 users, 60s polling = 200 requests/minute total. Any server handles this easily.

Small user base (< 100 concurrent users)

At 100 users with 30s polling, you're making ~200 requests/minute. Infrastructure cost: ~$30-50/month on standard hosting. Streaming infrastructure would cost more ($75-100/month for WebSocket-capable load balancers, connection management).

The crossover where streaming becomes cost-effective is around 800-1,000 users.

Standard infrastructure

Polling works with any HTTP server. No WebSocket support needed, no connection pooling, no special load balancer config. Shared hosting, standard CDNs, basic reverse proxies all work.

If your infrastructure doesn't support persistent connections, polling is your only option without an upgrade.

Team new to real-time systems

Polling is stateless HTTP, the same pattern used everywhere on the web. Developers learn it in days. Testing: curl works. Debugging: check HTTP logs. Error handling: retry the request.

Streaming requires understanding connection lifecycle, reconnect strategies, heartbeat logic, and stateful debugging. Learning curve: 1-2 weeks for production-ready.

Trade-offs you're making

Polling gives you:

  • Standard HTTP requests. No connection state, no lifecycle management. Test with curl, debug with HTTP logs.
  • Works with any web server, load balancer, CDN, or caching layer. No special configuration.
  • Predictable costs. N users × M requests/minute = clear bandwidth usage. Linear scaling makes budgeting simple.
  • Stateless debugging. Each request is independent. Failures don't cascade. Test individual requests in isolation.

Polling costs you:

  • Higher latency. Data freshness ranges from 0 (just polled) to 1× your interval (about to poll). 30s polling means 0-30s stale data. Users see price movement late.
  • Wasted bandwidth. Polling during periods of no change sends identical responses repeatedly. Bitcoin staying at $43,500 for 10 minutes? Your 30s polling sent 20 identical responses.
  • Rate limit pressure. Frequent polling exhausts API quotas fast. If your provider limits you to 10,000 requests/hour, 100 users at 10s polling = 36,000 requests/hour (3.6× the limit).
  • Server load spikes. Without jitter, all clients poll simultaneously. 1,000 users starting at midnight means 1,000 requests hit your server within milliseconds every 30s.

At 50 users with 60s polling, simplicity outweighs the small latency cost. At 5,000 users with 5s polling, you're burning $5,000/month in bandwidth that streaming would reduce to $400/month.


How streaming works

You establish a persistent connection between client and server. Instead of the client requesting data repeatedly, the server pushes updates as they occur. The connection stays open indefinitely (or until network failure).

For browser crypto apps, Server-Sent Events (SSE) is common:

// Stream price updates via SSE (verified Dec 23, 2025)
// DexPaprika Streaming API: /stream?method=t_p&chain={chain}&address={address}
const eventSource = new EventSource('https://streaming.dexpaprika.com/stream?method=t_p&chain=ethereum&address=0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2');

eventSource.addEventListener('t_p', (event) => {
  const data = JSON.parse(event.data);
  updateUI(data.p); // Update immediately when price changes
});

eventSource.onerror = (error) => {
  console.error('Connection failed:', error);
  // Production: exponential backoff reconnect
  // Production: switch to polling fallback after N failures
};

// Connection stays open, server pushes updates as prices change

The connection persists across multiple updates. Bitcoin's price changes 10 times in a minute? You receive 10 messages over one connection. No new HTTP requests.

Production code adds reconnect logic with exponential backoff, heartbeat monitoring (detect silent disconnects), connection health checks, and fallback to polling during extended outages. Core pattern: connect once, receive updates as they happen.

When streaming is the right choice

Fast updates (< 5 seconds)

Trading dashboards need prices within 1-2 seconds of market movement. Polling at 2s intervals creates visible lag and hammers your API (30 requests/minute per user). Streaming delivers updates in 100-500ms.

Active trading app with 500 users. Streaming = 500 connections, updates pushed as prices change. Polling at 2s = 15,000 requests/minute (rate limit nightmare).

Moderate to high users (> 1,000 concurrent)

At 1,000+ users, bandwidth costs favor streaming. Polling sends full state every interval. Streaming sends only changes.

Cost at 1,000 users (30s updates):

  • Polling: 2,000 requests/minute × ~1KB response = ~120GB/month = $500/month
  • Streaming: 1,000 connections × ~10KB/hour = ~7GB/month = $150/month

Crossover: ~800 users. Below that, polling is cheaper. Above, streaming wins.
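
To sanity-check the crossover for your own numbers, here's a rough back-of-the-envelope helper. It reuses the assumptions above (~1KB per polling response, ~10KB/hour per streaming connection) and counts payloads only, so real traffic with HTTP headers and TLS overhead runs higher:

// Rough monthly bandwidth estimate (payloads only, no HTTP/TLS overhead)
function estimateMonthlyGB(users, pollIntervalSeconds) {
  const minutesPerMonth = 60 * 24 * 30;

  const pollsPerMinute = users * (60 / pollIntervalSeconds);
  const pollingGB = (pollsPerMinute * minutesPerMonth * 1) / (1024 * 1024); // ~1KB per response

  const streamingGB = (users * 10 * 24 * 30) / (1024 * 1024); // ~10KB per connection per hour

  return { pollingGB, streamingGB };
}

console.log(estimateMonthlyGB(1000, 30)); // roughly { pollingGB: 82, streamingGB: 7 }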

Infrastructure supports persistent connections

Modern cloud platforms (AWS ALB, GCP Load Balancing, Cloudflare) support WebSocket and SSE. If your infrastructure is already capable, the barrier is lower.

Check your load balancer's connection limits. Default nginx: 1,024 connections. At 1,000 users, you're at 97% capacity. Configure higher limits.

Team comfortable with connection management

Streaming requires handling reconnects, exponential backoff, heartbeats, and monitoring active connections. Teams with prior WebSocket experience ship production-ready streaming in 1-2 weeks.

Teams new to persistent connections face 2-4 weeks to handle edge cases (silent disconnects, connection storms after outages, proxy timeouts).

Trade-offs you're making

Streaming gives you:

  • Low latency. Updates arrive in 100-500ms, not 0-30s. Users see price changes immediately. Critical for trading, real-time charts, urgent alerts.
  • Bandwidth efficiency. Only transmit when prices change. Bitcoin stays at $43,500 for 10 minutes? Streaming sends zero data. Polling sends 20 identical responses.
  • Real-time push. Server can send urgent updates (flash crashes, circuit breakers) without client requesting. Polling requires clients to discover updates on next poll.
  • Better UX at scale. Smooth, responsive interface. No visible polling lag. Prices update fluidly.

Streaming costs you:

  • Infrastructure complexity. Need WebSocket-capable load balancers, connection pooling, proxy timeout configuration. Default nginx proxy_read_timeout is 60s. Your connection drops after 1 minute without configuration.
  • Debugging difficulty. Connections are stateful. Can't test with curl. Need tools like wscat or browser DevTools. Reproducing connection-specific bugs is harder.
  • Connection management. Reconnect logic, exponential backoff, jitter, heartbeat monitoring, graceful shutdown. Each adds complexity and failure modes.
  • Higher cost at low scale. Connection infrastructure costs $75-100/month even at 10 users. Polling at 10 users costs $5-10/month. Streaming only becomes cost-effective at ~800+ users.

At 5,000 users with sub-second latency needs, streaming saves $4,600/month versus polling and delivers better UX. At 50 users with 60s updates, streaming adds unnecessary complexity.


Side-by-side comparison

  • Latency: Polling = 0 to 1× the polling interval (30s polling = 0-30s delay). Streaming = < 1 second (typically 100-500ms).
  • Bandwidth: Polling = high (full state every interval, even if unchanged). Streaming = low (sends only when data changes).
  • Complexity: Polling = low (standard HTTP, stateless). Streaming = medium (connection lifecycle, reconnect logic).
  • Infrastructure: Polling = any HTTP server, standard load balancer. Streaming = WebSocket-capable LB, connection pools, timeout config.
  • Debugging: Polling = easy (curl works, HTTP logs, stateless). Streaming = harder (stateful, needs wscat or other connection-aware tools).
  • Cost at 100 users: Polling = $30-50/month (bandwidth for 30s polling). Streaming = $75-100/month (infrastructure overhead).
  • Cost at 1,000 users: Polling = $400-500/month (bandwidth). Streaming = ~$150/month (efficient updates).
  • Cost at 10,000 users: Polling = ~$5,000/month (bandwidth + rate limit upgrades). Streaming = ~$400/month (scales linearly).
  • Rate limits: Polling = high impact (N users × M req/min). Streaming = low impact (one connection per user, data sent only on change).
  • Learning curve: Polling = 1-2 days. Streaming = 1-2 weeks for production-ready.

The cost crossover happens around 800-1,000 users. Below that, polling is cheaper. Above, streaming's bandwidth efficiency wins.

Latency is the clearest differentiator. If you need < 5s updates, streaming is effectively required. Polling at 5s intervals creates 0-5s staleness and high request rates.

Infrastructure matters. On shared hosting without WebSocket support, polling is your only option without infrastructure changes.

Team skills affect time-to-market. Polling ships in days. Streaming needs 1-2 weeks to handle reconnects, monitoring, edge cases.


When to use polling vs streaming

Use polling when:

  1. Update frequency > 30 seconds works

    • Portfolio tracker showing daily P&L
    • 100 users at 60s polling = ~$30/month
  2. User base < 100 concurrent

    • Internal company dashboard
    • Infrastructure simplicity outweighs bandwidth costs
    • Streaming would cost more ($75 vs $30) for minimal benefit
  3. You need simple infrastructure

    • Shared hosting, standard CDN, basic reverse proxy
    • No WebSocket support needed
    • Migration to WebSocket-capable hosting: $50-200/month
  4. Team is new to real-time

    • Small dev team, first real-time feature
    • Lower risk, faster shipping (1-2 days vs 1-2 weeks)
    • Learn with polling, migrate to streaming when proven

Use streaming when:

  1. Update frequency < 5 seconds required

    • Live trading dashboard, real-time price charts
    • 1,000 users at 5s polling = 12,000 req/min (unsustainable)
  2. User base > 1,000 concurrent

    • Public crypto price tracker
    • Bandwidth savings offset infrastructure complexity
    • Savings: $350/month at 1,000 users, $4,600/month at 10,000 users
  3. Infrastructure supports WebSocket

    • AWS ALB, GCP Load Balancing, Cloudflare, modern hosting
    • Connection infrastructure already available
    • Check: proxy_read_timeout configured for long connections
  4. Team comfortable with connections

    • Experienced team, prior WebSocket projects
    • Can handle reconnect logic, monitoring, edge cases

The gray area (100-1,000 users, 5-30s updates):

Either approach works. Context decides:

  • Polling if: Unsure of growth, want simplest solution, team is learning, infrastructure doesn't support WebSocket
  • Streaming if: Expect rapid growth (avoid migration later), team has experience, better UX is priority, infrastructure is ready

Can you migrate later? Yes. Start polling, migrate to streaming when you hit 3+ streaming criteria.


Migrating between polling and streaming

Polling → Streaming

When to migrate:

  • User base growing past 1,000 concurrent
  • Latency becoming a user complaint
  • Bandwidth costs exceeding $400/month
  • Rate limiting becoming an issue (> 80% of API quota used)

How I've done it:

Don't flip all users at once. Week 1: 10% of users to streaming (monitor errors, latency). Week 2: 50% of users (ensure infrastructure handles load). Week 3: 100% of users (keep polling code as fallback).

Use feature flags (LaunchDarkly, custom flag) to toggle streaming on/off per user segment. Instant rollback if issues arise. A/B test latency improvements. Gradual migration reduces risk.

Don't delete polling code. Use it during streaming outages (automatic graceful degradation). Some clients may have firewall/proxy issues with WebSocket. Fallback ensures service continuity.

Challenges I hit:

Different client code. Polling uses fetch(), streaming uses EventSource or WebSocket. Need separate code paths. Solution: abstraction layer that switches transport based on feature flag.
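
A minimal sketch of that abstraction layer (the useStreaming flag stands in for whatever feature-flag client you actually use, and the URLs are the same endpoints from the earlier examples):

// Transport abstraction: callers subscribe to a price feed without caring how it arrives.
// useStreaming comes from your feature-flag system (placeholder here).
function subscribeToPrice(restUrl, streamUrl, useStreaming, onPrice) {
  if (useStreaming) {
    const source = new EventSource(streamUrl);
    source.addEventListener('t_p', (event) => onPrice(JSON.parse(event.data).p));
    return () => source.close(); // unsubscribe
  }

  const timer = setInterval(async () => {
    const response = await fetch(restUrl);
    const data = await response.json();
    onPrice(data.summary.price_usd);
  }, 30000);
  return () => clearInterval(timer); // unsubscribe
}

// UI code stays identical regardless of transport:
// const unsubscribe = subscribeToPrice(restUrl, sseUrl, flags.streaming, updateUI);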

Connection monitoring needed. Track active connections, reconnect rates, message throughput. Solution: Add Prometheus metrics, Grafana dashboards before migration.
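
A sketch of the metrics side, assuming a Node connection server with the prom-client package (the metric names are illustrative):

// Connection metrics with prom-client (assumed dependency) for Prometheus to scrape
const promClient = require('prom-client');

const activeConnections = new promClient.Gauge({
  name: 'sse_active_connections',
  help: 'Currently open SSE connections',
});

const reconnects = new promClient.Counter({
  name: 'sse_reconnects_total',
  help: 'Client reconnect attempts',
});

// In your connection handlers:
// on connect:    activeConnections.inc();
// on disconnect: activeConnections.dec();
// on reconnect:  reconnects.inc();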

Reconnect storms. After outage, all clients reconnect simultaneously. Solution: Exponential backoff with jitter (randomize reconnect timing).

Timeframe: 2-4 weeks for production-ready streaming migration (code changes, testing, monitoring setup, gradual rollout).

Streaming → Polling fallback

When to fall back (temporarily):

  • Streaming infrastructure repeatedly failing
  • Emergency mitigation during outage
  • Client environments blocking WebSocket (corporate proxies)

Keep polling code alongside streaming. Use feature flag to toggle. During streaming outage, automatically switch users to polling.

If EventSource errors exceed a threshold (5 errors in 1 minute), fall back to polling for that user.
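
A sketch of that threshold check (the 5-errors-in-1-minute numbers mirror the example above; startPolling and scheduleStreamingRetry are placeholders for your own fallback and recovery logic):

// Count recent EventSource errors; past the threshold, close the stream and poll instead
let errorTimestamps = [];

eventSource.onerror = () => {
  const now = Date.now();
  errorTimestamps.push(now);
  errorTimestamps = errorTimestamps.filter((t) => now - t < 60000); // keep the last 60s only

  if (errorTimestamps.length >= 5) {
    eventSource.close();
    startPolling();           // placeholder: the polling code you kept around
    scheduleStreamingRetry(); // placeholder: try to restore streaming later
  }
};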

This isn't a permanent downgrade. It's graceful degradation. Streaming resumes when infrastructure recovers.

Hybrid approach

Can they coexist? Yes. I've seen streaming for power users (logged-in, active traders), polling for casual users (anonymous, checking prices occasionally).

Benefits: Optimize for each segment. Power users get best experience (streaming). Casual users don't require expensive infrastructure.

Complexity: Need to maintain both code paths. Only worth it if user segments are clearly distinct.

Crypto exchange example: streaming for logged-in traders (5,000 users), polling for anonymous price checkers (50,000 users). Saves infrastructure costs while delivering best experience to users who need it.


Production failures I've seen

Polling failures

Thundering herd (simultaneous polling)

All clients start polling at the same time (page load at midnight). Every 30 seconds, 10,000 requests hit within a 100ms window. Self-inflicted traffic spike overwhelms server.

One team told me: "We saw 12,000 requests hit within 200ms every 30s. Server CPU spiked to 95%, responses slowed to 5-8 seconds, causing more timeouts."

Solution: Add jitter to randomize polling interval by ±20%:

// Each client picks a random offset within ±20% of the base interval,
// so thousands of clients spread their polls instead of firing in lockstep
const baseInterval = 30000;
const jitter = Math.random() * 0.4 * baseInterval - 0.2 * baseInterval; // ±20%
const interval = baseInterval + jitter;
setInterval(pollPrices, interval);

This spreads 10,000 requests across a 12-second window instead of 200ms. Server load stays smooth.

Stale data during fast markets

Users see 30-second-old prices during high volatility. Bitcoin dumps 5%, user sees stale price, makes decision on outdated data, blames your app.

Feedback I heard: "During flash crash, our 30s polling showed prices 30s behind. Users lost trust. 'Your prices are wrong.'"

Solution: Show "Last updated X seconds ago" indicator. Transparency builds trust. Users understand data freshness.

Or reduce polling interval during detected volatility (price changes > 2% in 1 minute). Dynamic polling: 30s normally, 5s during volatility.
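
A sketch of that dynamic interval (the "2% move within 1 minute" trigger follows the example above; both thresholds are assumptions to tune):

// Poll every 30s normally, every 5s while the price is moving fast
let lastPrice = null;
let lastPriceTime = 0;

function nextInterval(currentPrice) {
  const now = Date.now();
  let volatile = false;

  if (lastPrice !== null && now - lastPriceTime <= 60000) {
    const changePct = Math.abs(currentPrice - lastPrice) / lastPrice * 100;
    volatile = changePct > 2; // > 2% move within a minute counts as volatility
  }

  lastPrice = currentPrice;
  lastPriceTime = now;
  return volatile ? 5000 : 30000;
}

// In the poll loop, schedule the next request with:
// setTimeout(poll, nextInterval(data.summary.price_usd));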

Rate limiting exhaustion

Frequent polling exhausts API quotas. Say your provider limits you to 100,000 requests/month. At 2,000 users with 30s polling, your clients make roughly 173 million requests/month, over 1,700× the limit.

One team: "We burned $1,000/month in overage charges before optimizing polling intervals and adding client-side caching."

Solutions:

  • Increase polling interval (30s → 60s cuts requests in half)
  • Add client-side caching (check cache before polling)
  • Server-side caching with short TTL (cache provider responses for 5-10s; see the sketch below)
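
A sketch of the server-side variant, assuming a Node 18+ backend with global fetch and a single instance (multi-instance deployments would typically use Redis or similar instead of an in-memory map):

// One upstream request per token per TTL window, no matter how many clients poll you
const CACHE_TTL = 10000; // 10s
const cache = new Map(); // url -> { data, fetchedAt }

async function getCachedPrice(url) {
  const entry = cache.get(url);
  if (entry && Date.now() - entry.fetchedAt < CACHE_TTL) {
    return entry.data; // fresh enough: serve from cache, skip the upstream call
  }

  const response = await fetch(url);
  const data = await response.json();
  cache.set(url, { data, fetchedAt: Date.now() });
  return data;
}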

Streaming failures

Connection pooling limits

Load balancers have default connection limits (nginx: 1,024). At 1,000 streaming users, you're at 97% capacity. User 1,025 gets connection refused.

Production experience: "Hit nginx connection limit at 980 users. Next 200 connection attempts failed with 'connection refused.' Users saw blank dashboards."

Solution: Configure higher limits in load balancer:

# nginx.conf (worker_connections lives inside the events block)
events {
    worker_connections 10240;  # default: 1024
}

Add connection server pool: distribute connections across multiple servers (each handles 2,000-5,000 connections).

Monitor: Track active connections, alert at 80% capacity.

Silent disconnects (zombie connections)

Connection appears open (no error thrown) but no data flows. Network middlebox (firewall, proxy) silently drops idle connections after 60s. Client thinks it's connected, server thinks it's connected, but messages don't flow.

One team found: "5% of connections were zombies. Open but not receiving data. Users saw stale prices for 10+ minutes before manually refreshing."

Solution: Heartbeat pings every 30s:

// Server side (pseudocode): emit a named "ping" event every 30s.
// With SSE this means writing "event: ping\ndata: {}\n\n" to each open response stream.
setInterval(() => {
  clients.forEach(client => client.send('ping'));
}, 30000);

// Client side: detect missing heartbeats and reconnect
let lastHeartbeat = Date.now();
eventSource.addEventListener('ping', () => {
  lastHeartbeat = Date.now();
});

setInterval(() => {
  if (Date.now() - lastHeartbeat > 45000) { // no heartbeat for 45s
    eventSource.close();
    reconnect();
  }
}, 10000);

Reconnect storms (post-outage avalanche)

Server goes down for 2 minutes. All 10,000 users disconnect. Server comes back up. All 10,000 users reconnect simultaneously. Server drowns in connection avalanche, goes down again. Repeat.

Production story: "After 5-minute outage, 8,000 users reconnected within 30 seconds. Connection server CPU hit 100%, crashed again. Took 45 minutes to stabilize."

Solution: Exponential backoff with jitter:

let reconnectDelay = 1000; // Start at 1s
const maxDelay = 60000; // Cap at 60s

function reconnect() {
  // Add jitter: ±50% randomization so clients don't reconnect in lockstep
  const jitter = reconnectDelay * (Math.random() - 0.5);
  const delay = reconnectDelay + jitter;

  setTimeout(() => {
    eventSource = new EventSource(url);
    reconnectDelay = Math.min(reconnectDelay * 2, maxDelay); // Double the delay, capped at 60s
  }, delay);
}

First reconnect attempts spread across 0.5-1.5s. If that fails, 1-3s. Then 2-6s, 4-12s, etc.

Result: "After implementing exponential backoff with jitter, post-outage reconnect storm dropped from 60% failure rate to 2%."

What to monitor

For polling:

  • Request latency (p50, p95, p99)
  • Rate limit headroom (requests used / requests available)
  • Cache hit rate (if using caching)
  • Error rate (failed polls / total polls)

For streaming:

  • Active connections (current / max capacity)
  • Reconnect rate (reconnects per minute)
  • Message throughput (messages sent per second)
  • Connection duration (how long connections stay alive)
  • Zombie connection rate (heartbeat timeouts / active connections)

Set up alerts:

  • Polling: Alert when rate limit > 80%, latency p95 > 2s
  • Streaming: Alert when connections > 80% capacity, reconnect rate > 100/min

What I learned

Neither polling nor streaming is universally better. Your context determines the right choice.

Polling works when updates > 30s acceptable, users < 100, standard hosting, team learning real-time systems.

Streaming works when updates < 5s required, users > 1,000, WebSocket infrastructure available, team experienced with connections.

Gray area exists: 100-1,000 users, 5-30s updates. Either works. Evaluate infrastructure, team skills, growth expectations.

Start simple. Begin with polling for MVPs and small user bases. Migrate to streaming when you hit 3+ streaming criteria (> 1,000 users, < 5s latency, WebSocket infra ready, team experienced).

Hybrid is valid. Use streaming for power users, polling for casual users. Optimize for each segment.

The decision isn't permanent. Architecture evolves as your app scales. Poll first, stream later is a proven path.

I work at CoinPaprika/DexPaprika, so the examples use our APIs: the polling examples hit DexPaprika's REST endpoints and the streaming examples use DexPaprika's real-time feeds. Both are free to use: no credit card, no rate limits, just start building.


Quick answers

Which is cheaper?

Depends on scale. At < 100 users, polling is cheaper ($30-50/month vs $75-100/month for streaming infrastructure). The crossover point is around 800-1,000 users. Above 1,000 users, streaming becomes significantly cheaper due to bandwidth efficiency.

Cost at 1,000 users: Polling = $400-500/month, Streaming = $150/month.

Which has lower latency?

Streaming has lower latency. Polling latency ranges from 0 to 1× your polling interval (30s polling = 0-30s stale data). Streaming delivers updates in < 1 second (typically 100-500ms). For detailed latency measurements, see What "Real-Time Crypto Prices" Actually Means.

Can I switch later?

Yes. Keep polling code as fallback, add streaming behind feature flag, roll out incrementally (10% → 50% → 100% of users). Timeframe: 2-4 weeks for production-ready migration.

When should I use each?

Polling: Updates > 30s acceptable, users < 100, standard hosting, team learning.

Streaming: Updates < 5s required, users > 1,000, WebSocket infrastructure available, team experienced.

Gray area (100-1,000 users, 5-30s updates): Either works. Start with polling, migrate to streaming when proven.

What infrastructure does streaming require?

Streaming requires WebSocket-capable load balancers (AWS ALB, GCP Load Balancing, Cloudflare), connection pooling configuration, and increased connection limits. Default nginx allows 1,024 connections. Configure higher for production. Polling works with any standard HTTP server.

