Allen Elzayn
Building Streaky: A GitHub Streak Guardian (Part 1 - The Journey)

Part 1: The Journey from "Simple App" to Distributed System

I thought building a GitHub streak reminder would take a weekend.

The "simple" idea:

  • Check users' GitHub contributions daily
  • Send Discord/Telegram notification if they haven't committed
  • That's it

What actually happened: four failed attempts, CPU limits, IP blocking, race conditions, and a complete architecture redesign before the fifth attempt finally worked.


The Problem

I kept losing my GitHub streak because I'd forget to commit on busy days. Existing solutions were either:

  • Self-hosted (requires always-on server)
  • Paid services
  • Missing Discord/Telegram integration

So I decided to build my own. How hard could it be?


Attempt 1: Sequential Processing (Failed)

The naive approach:

export async function checkAllUsersStreaks(env: Env) {
  const users = await env.DB.prepare(
    "SELECT * FROM users WHERE is_active = 1"
  ).all();

  // D1's .all() wraps the rows in a `results` array
  for (const user of users.results) {
    await checkGitHub(user);
    await sendNotification(user);
  }
}

Result: TLE (Time Limit Exceeded) after 5 users

Cloudflare Workers cap CPU time at 30 seconds for scheduled handlers. At roughly 3 seconds of work per user, 10 users consume the entire budget, and any overhead tips it into TLE.

Lesson learned: Workers aren't mini servers. They're edge functions with strict limits.


Attempt 2: Promise.all() Parallelism (Failed)

The "clever" approach:

export async function checkAllUsersStreaks(env: Env) {
  const users = await env.DB.prepare(
    "SELECT * FROM users WHERE is_active = 1"
  ).all();

  await Promise.all(users.results.map((user) => processUser(user)));
}

Result: CPU limit exceeded

It turns out parallelism doesn't reduce CPU usage. All the promises run in the same Worker instance and share a single CPU budget; Promise.all() only overlaps the I/O waits, the compute still adds up.

Lesson learned: CPU time is cumulative across all promises. Parallelism doesn't help with CPU limits.


Attempt 3: Batch Processing (Failed)

The "smarter" approach:

export async function checkAllUsersStreaks(env: Env) {
  const users = await getUsers(env);
  const batches = chunk(users, 5); // Process 5 at a time

  for (const batch of batches) {
    await Promise.all(batch.map(processUser));
  }
}
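The `chunk` helper isn't shown above; it's just a small generic utility, something like:

```typescript
// Split an array into fixed-size slices:
// chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4], [5]]
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```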

Result: Still hitting limits + Discord API returning 429 errors

Now I had TWO problems:

  1. Still hitting CPU limits
  2. Discord/Telegram rate-limiting my requests

Lesson learned: Cloudflare Workers share IP pools. Discord/Telegram detect this and rate-limit aggressively.


Attempt 4: Rust Proxy for Notifications (Partial Success)

The breakthrough (partial):

Deploy a Rust VPS on Koyeb to handle Discord/Telegram calls with a clean IP.

// Rust VPS receives encrypted credentials
pub async fn send_notification(
    Json(payload): Json<NotificationRequest>,
) -> Result<Json<NotificationResponse>, StatusCode> {
    // Decrypt credentials
    let webhook = decrypt_aes256(&payload.encrypted_webhook)
        .map_err(|_| StatusCode::BAD_REQUEST)?;

    // Send to Discord with a clean IP
    send_discord(&webhook, &payload.message)
        .await
        .map_err(|_| StatusCode::BAD_GATEWAY)?;

    Ok(Json(NotificationResponse { success: true }))
}

Result: Notifications work! But still hitting CPU limits on Worker side.

Lesson learned: Solved one problem, created another. Now I need to solve the CPU issue.
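On the Worker side, the hand-off to the proxy is an ordinary HTTPS POST carrying the shared secret in the `X-API-Secret` header. A minimal sketch (the function name, URL, and payload field names here are my assumptions, not the project's actual code):

```typescript
// What the Rust proxy expects: an already-encrypted webhook plus the message.
interface ProxyPayload {
  encrypted_webhook: string; // AES-256-GCM ciphertext, base64-encoded
  message: string;
}

// Build the request the Worker sends to the notification proxy.
// The webhook URL never leaves the Worker in plaintext.
function buildProxyRequest(
  proxyUrl: string,
  secret: string,
  payload: ProxyPayload
): Request {
  return new Request(proxyUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Secret": secret,
    },
    body: JSON.stringify(payload),
  });
}
```

The Worker would then pass this to `fetch()` inside `ctx.waitUntil()` so the notification doesn't block the response.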


Attempt 5: Service Bindings + Distributed Queue (Success!)

The final breakthrough:

Stop thinking of it as one job. Think of it as N independent jobs.

Architecture

┌─────────────────────────────────────────┐
│     Scheduler (Main Worker)             │
│  • Initialize queue in D1               │
│  • Dispatch N workers via Service       │
│    Bindings                             │
└────────────┬────────────────────────────┘
             │
             │ env.SELF.fetch()
             │ (Each call = new Worker instance!)
             ▼
┌─────────────────────────────────────────┐
│  Worker 1    Worker 2    Worker 3  ...  │
│  • Claim     • Claim     • Claim   ...  │
│  • Process   • Process   • Process ...  │
│  • Complete  • Complete  • Complete ... │
└─────────────────────────────────────────┘

Implementation

1. Queue Table (D1 SQLite):

CREATE TABLE cron_queue (
  id TEXT PRIMARY KEY,
  user_id TEXT NOT NULL,
  batch_id TEXT NOT NULL,
  status TEXT CHECK(status IN ('pending', 'processing', 'completed', 'failed')),
  created_at TEXT DEFAULT (datetime('now')),
  started_at TEXT,
  completed_at TEXT
);

CREATE INDEX idx_cron_queue_status ON cron_queue(status);

2. Scheduler (Main Worker):

export default {
  async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext) {
    // Get active users (D1's .all() wraps the rows in a `results` array)
    const users = await env.DB.prepare(
      "SELECT id FROM users WHERE is_active = 1"
    ).all();

    // Initialize batch
    const batchId = await initializeBatch(
      env,
      users.results.map((u) => u.id)
    );

    // Dispatch one worker per queued user via Service Bindings
    for (let i = 0; i < users.results.length; i++) {
      const queueItem = await claimNextPendingUserAtomic(env);
      if (!queueItem) break; // queue drained

      ctx.waitUntil(
        env.SELF.fetch("http://internal/api/cron/process-user", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            queueId: queueItem.id,
            userId: queueItem.user_id,
          }),
        })
      );
    }
  },
};

3. Atomic Queue Claiming:

export async function claimNextPendingUserAtomic(env: Env) {
  const result = await env.DB.prepare(
    `
    WITH next AS (
      SELECT id FROM cron_queue
      WHERE status = 'pending'
      ORDER BY created_at ASC
      LIMIT 1
    )
    UPDATE cron_queue
    SET status = 'processing', started_at = datetime('now')
    WHERE id IN (SELECT id FROM next)
    RETURNING id, user_id, batch_id
  `
  ).all();

  return result.results[0] ?? null; // null when the queue is empty
}

4. Worker Instance (Process Single User):

app.post("/process-user", async (c) => {
  const { queueId, userId } = await c.req.json();

  try {
    // Process single user
    await processSingleUser(c.env, userId);

    // Mark completed
    await markCompleted(c.env, queueId);

    return c.json({ success: true });
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);

    // Mark failed
    await markFailed(c.env, queueId, message);

    return c.json({ success: false, error: message });
  }
});
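The `markCompleted` and `markFailed` helpers aren't shown in the post either; they're simple status transitions against the `cron_queue` schema above. A sketch (the structural `Db` type and the logging of the failure reason are my assumptions):

```typescript
// Structural stand-in for D1's prepared-statement API.
type Db = {
  prepare(sql: string): {
    bind(...args: unknown[]): { run(): Promise<unknown> };
  };
};

// Transition a queue row to 'completed' and stamp the finish time.
async function markCompleted(env: { DB: Db }, queueId: string): Promise<void> {
  await env.DB.prepare(
    "UPDATE cron_queue SET status = 'completed', completed_at = datetime('now') WHERE id = ?"
  ).bind(queueId).run();
}

// Transition a queue row to 'failed'. The schema above has no error column,
// so this sketch just logs the reason; a real version might persist it.
async function markFailed(
  env: { DB: Db },
  queueId: string,
  reason: string
): Promise<void> {
  console.warn(`queue ${queueId} failed: ${reason}`);
  await env.DB.prepare(
    "UPDATE cron_queue SET status = 'failed', completed_at = datetime('now') WHERE id = ?"
  ).bind(queueId).run();
}
```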

Result: SUCCESS!

  • 10 users processed in ~10 seconds (parallel)
  • Each Worker uses <5 seconds CPU time
  • No TLE, no CPU limits
  • Scales to 1000+ users
  • 100% notification success rate

What I Learned

1. Cloudflare Workers Are Not Servers

  • 30-second CPU time limit (not wall time!)
  • Shared IP pools (rate limiting issues)
  • Stateless by design
  • Edge-first, not server-first

2. Service Bindings Are Powerful

  • Self-calling Workers = distributed processing
  • Each env.SELF.fetch() = new Worker instance
  • Fresh CPU budget per instance
  • Automatic load balancing by Cloudflare

3. D1 Can Be a Queue

  • SQLite is fast enough for job queues
  • Atomic operations with CTE + UPDATE + RETURNING
  • No need for external queue service (Redis, SQS, etc.)
  • Idempotency built-in

4. IP Blocking Is Real

  • Cloudflare Workers share IPs
  • Discord/Telegram rate-limit aggressively
  • Solution: Proxy through dedicated IP (Rust VPS)
  • End-to-end encryption maintained

5. Free Tiers Are Generous

  • Cloudflare Workers: 100k req/day
  • Cloudflare D1: 50k writes/day
  • Koyeb: 512MB VPS free
  • Vercel: Hobby tier covers the frontend
  • Total cost: $0/month

Performance Metrics

Before (Sequential):

  • 10 users = 30+ seconds
  • TLE errors
  • 0% success rate

After (Distributed):

  • 10 users = ~10 seconds (parallel)
  • No TLE errors
  • 100% success rate
  • Scales to 1000+ users

Notification Delivery:

  • Cold start (VPS sleeping): ~10 seconds
  • Warm (VPS active): ~3.6 seconds
  • Success rate: 100%

Final Architecture

┌─────────────────────────────────────────────────────────┐
│              Cloudflare Cron (12:00 UTC)                │
└────────────────────────┬────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│         Cloudflare Worker (Scheduler)                   │
│  • Query active users from D1                           │
│  • Initialize batch in queue                            │
│  • Dispatch N workers via Service Bindings             │
└────────────────────────┬────────────────────────────────┘
                         │
                         │ env.SELF.fetch()
                         ▼
┌─────────────────────────────────────────────────────────┐
│         Worker Instances (Parallel)                     │
│  • Atomic claim from queue                              │
│  • Check GitHub contributions                           │
│  • Send encrypted data to Rust VPS                     │
│  • Mark completed/failed                                │
└────────────────────────┬────────────────────────────────┘
                         │
                         │ HTTPS + X-API-Secret
                         ▼
┌─────────────────────────────────────────────────────────┐
│         Rust VPS on Koyeb (Axum)                        │
│  • Decrypt credentials (AES-256-GCM)                    │
│  • Send to Discord/Telegram (clean IP!)                │
│  • Return success/failure                               │
└─────────────────────────────────────────────────────────┘

Tech Stack

Frontend:

  • Next.js 15 (React 19, App Router)
  • NextAuth.js v5 (GitHub OAuth)
  • Tailwind CSS + shadcn/ui

Backend:

  • Cloudflare Workers (Hono framework)
  • Cloudflare D1 (SQLite)
  • Service Bindings for distributed processing

Notification Proxy:

  • Rust (Axum web framework)
  • AES-256-GCM encryption
  • Docker on Koyeb

Deployment:

  • Frontend: Vercel
  • Backend: Cloudflare
  • Proxy: Koyeb
  • Total cost: $0/month

Try It Out

Live App: streakyy.vercel.app

GitHub: github.com/0xReLogic/Streaky

Features:

  • Zero setup (GitHub OAuth)
  • Discord + Telegram notifications
  • Daily checks at 12:00 UTC
  • AES-256-GCM encryption
  • Free forever

What's Next?

In the next parts of this series, I'll dive deeper into:

  • Part 2: Building the Rust notification proxy (solving IP blocking)
  • Part 3: Distributed queue system with Service Bindings (deep dive)
  • Part 4: Zero-cost production architecture (free tier hacks)

Final Thoughts

What started as a "simple weekend project" turned into a deep dive into:

  • Edge computing constraints
  • Distributed systems
  • Queue management
  • IP blocking solutions
  • Zero-cost architectures

Key takeaway: Simple apps can have complex architectures. The journey is where the learning happens.

If you're building on Cloudflare Workers, remember:

  • CPU time ≠ wall time
  • Parallelism ≠ faster (in Workers)
  • Shared IPs = rate limiting
  • Service Bindings = distributed processing
  • Free tiers are powerful

Let's Connect

Have questions about the architecture? Hit me up in the comments!

Building something similar? I'd love to hear about your approach.

Found this helpful? Follow for Part 2!

GitHub: @0xReLogic
Project: Streaky
