Part 2: The Notification Proxy
In Part 1, I shared how I built a distributed cron system to process users in parallel. But there was another critical problem I had to solve: Cloudflare Workers' shared IP pool was being blocked by Discord and Telegram APIs.
This is the story of how I built a Rust notification proxy to solve that problem while maintaining end-to-end encryption.
The IP Blocking Problem
When I first tried sending Discord notifications directly from Cloudflare Workers, I got this:
{
"message": "You are being rate limited.",
"retry_after": 64.0,
"global": false,
"code": 429
}
Every. Single. Time.
But when I tested the same webhook URL with curl from my laptop, it worked perfectly. What was going on?
The Root Cause
Cloudflare Workers use a shared IP pool. Thousands of Workers across different accounts make requests from the same set of IP addresses. Discord and Telegram's APIs see this pattern and flag it as bot/spam traffic.
From their perspective:
- Same IP making thousands of requests per hour
- Different webhooks/tokens
- Looks like a bot farm
Result: Aggressive rate limiting or outright blocking.
Why This Matters
This isn't just a Streaky problem. Any Cloudflare Worker trying to call Discord, Telegram, or similar APIs will hit this issue. It's a fundamental constraint of the shared infrastructure.
Possible solutions:
- Use Cloudflare's paid tier with dedicated IPs (expensive)
- Proxy through a service with clean IPs (adds complexity)
- Build your own proxy (what I chose)
The Solution: Rust VPS Proxy
I decided to build a lightweight Rust proxy on Koyeb's free tier. The architecture:
Cloudflare Worker (Shared IP, blocked)
|
| HTTPS + Auth Header
| Data: ENCRYPTED
v
Rust VPS on Koyeb (Clean IP, not blocked)
|
| Decrypt credentials
| Forward to Discord/Telegram
v
Discord/Telegram APIs (Success!)
Why Rust?
The real reason: performance and resource efficiency on the free tier.
Koyeb's free tier gives you 512MB RAM and 0.1 vCPU. With those constraints, I needed something that could:
- Handle hundreds of concurrent requests
- Use minimal memory (< 50MB)
- Respond quickly (< 100ms processing)
- Produce a small binary (< 10MB)
Rust delivers:
- Blazing fast (comparable to C/C++)
- Tiny memory footprint (~20MB idle)
- Small binary size (5MB vs Node.js 50MB+)
- Zero-cost abstractions
- No garbage collection pauses
The math:
- Node.js: 50MB base + 10MB per 100 concurrent requests = 150MB+ for moderate load
- Rust: 20MB base + 2MB per 100 concurrent requests = 40MB for same load
On a 512MB VPS, Rust can handle 10x more traffic than Node.js. That's the difference between staying on free tier vs paying for resources.
Building the Rust Proxy
Tech Stack
- Framework: Axum (lightweight, fast, ergonomic)
- Async Runtime: Tokio
- HTTP Client: Reqwest
- Encryption: AES-GCM (aes-gcm crate)
- Serialization: Serde
- Deployment: Docker on Koyeb
Project Structure
server/
├── src/
│ ├── main.rs # Axum server setup
│ ├── handlers.rs # API endpoints
│ ├── encryption.rs # AES-256-GCM decryption
│ ├── discord.rs # Discord webhook sender
│ └── telegram.rs # Telegram bot API sender
├── Cargo.toml # Dependencies
└── Dockerfile # Container config
Dependencies (Cargo.toml)
[package]
name = "streaky-notification-proxy"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
reqwest = { version = "0.11", features = ["json"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
aes-gcm = "0.10"
base64 = "0.21"
chrono = "0.4"   # timestamps for the Discord embed
tower-http = { version = "0.5", features = ["cors"] }
Implementation
1. Main Server (main.rs)
use axum::{
    routing::{get, post},
    Router,
};
use std::net::SocketAddr;
use tower_http::cors::CorsLayer;

mod handlers;
mod encryption;
mod discord;
mod telegram;

#[tokio::main]
async fn main() {
    // Fail fast if the encryption key is missing (handlers read it per request)
    let _encryption_key = std::env::var("ENCRYPTION_KEY")
        .expect("ENCRYPTION_KEY must be set");

    // Build router
    let app = Router::new()
        .route("/", get(health_check))
        .route("/send-notification", post(handlers::send_notification))
        .layer(CorsLayer::permissive());

    // Start server (axum 0.7: bind a Tokio listener and hand it to axum::serve)
    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
    println!("Server listening on {}", addr);
    let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn health_check() -> &'static str {
    "OK"
}
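One small mismatch worth noting: the Koyeb config later in this post sets a PORT variable, while the snippet above hardcodes 3000. If you want the two to stay in sync, here is a sketch (my addition, not part of the original code) of reading the port from the environment:
use std::net::SocketAddr;

// Sketch: honor the PORT environment variable, falling back to 3000
// when it is missing or not a valid u16.
fn bind_addr() -> SocketAddr {
    let port: u16 = std::env::var("PORT")
        .ok()
        .and_then(|p| p.parse().ok())
        .unwrap_or(3000);
    SocketAddr::from(([0, 0, 0, 0], port))
}
main() would then call bind_addr() instead of hardcoding the address.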
2. Encryption Service (encryption.rs)
The critical part: decrypting credentials sent from Workers.
use aes_gcm::{
aead::{Aead, KeyInit},
Aes256Gcm, Nonce,
};
use base64::{engine::general_purpose, Engine as _};
pub struct EncryptionService {
cipher: Aes256Gcm,
}
impl EncryptionService {
pub fn new(key: &str) -> Result<Self, Box<dyn std::error::Error>> {
// AES-256 needs a 32-byte key; require at least 32 bytes and use the first 32
if key.len() < 32 {
return Err("Encryption key must be at least 32 characters".into());
}
let key_bytes = key.as_bytes();
let key_array: [u8; 32] = key_bytes[..32].try_into()?;
let cipher = Aes256Gcm::new(&key_array.into());
Ok(Self { cipher })
}
pub fn decrypt(&self, encrypted_data: &str) -> Result<String, Box<dyn std::error::Error>> {
// Decode from base64 (try STANDARD first, fallback to URL_SAFE)
let encrypted_bytes = general_purpose::STANDARD
.decode(encrypted_data)
.or_else(|_| general_purpose::URL_SAFE.decode(encrypted_data))?;
// Extract IV (first 12 bytes) and ciphertext (rest)
if encrypted_bytes.len() < 12 {
return Err("Invalid encrypted data: too short".into());
}
let (iv_bytes, ciphertext) = encrypted_bytes.split_at(12);
let nonce: &[u8; 12] = iv_bytes.try_into()?;
// Decrypt
let plaintext = self
.cipher
.decrypt(nonce.into(), ciphertext)
.map_err(|e| format!("Decryption failed: {:?}", e))?;
// Convert to string
Ok(String::from_utf8(plaintext)?)
}
}
Key points:
- AES-256-GCM for authenticated encryption
- 12-byte IV (nonce) prepended to ciphertext
- Base64 decoding with fallback (STANDARD or URL_SAFE)
- Same algorithm as the Worker encryption service (a sketch of the matching encrypt side follows below)
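For local testing it helps to have the mirror-image operation on the Rust side too. This is a hypothetical helper (the real encryption happens in the Worker, in TypeScript), but it shows the exact layout decrypt expects: a random 12-byte nonce, then the ciphertext, all base64-encoded.
use aes_gcm::aead::{AeadCore, OsRng}; // extra imports on top of the ones above

impl EncryptionService {
    /// Hypothetical test helper: encrypt with a fresh random nonce and prepend
    /// the nonce to the ciphertext, matching the layout `decrypt` expects.
    pub fn encrypt(&self, plaintext: &str) -> Result<String, Box<dyn std::error::Error>> {
        let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // 12 random bytes
        let ciphertext = self
            .cipher
            .encrypt(&nonce, plaintext.as_bytes())
            .map_err(|e| format!("Encryption failed: {:?}", e))?;
        let mut out = nonce.to_vec();
        out.extend_from_slice(&ciphertext);
        Ok(general_purpose::STANDARD.encode(out))
    }
}
Round-tripping a string through encrypt and decrypt is a quick way to confirm the Worker-side parameters (AES-256-GCM, 12-byte IV, base64) line up.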
3. Request Handler (handlers.rs)
use axum::{
extract::Json,
http::StatusCode,
};
use serde::{Deserialize, Serialize};
use crate::{discord, telegram, encryption::EncryptionService};
#[derive(Deserialize)]
pub struct NotificationRequest {
#[serde(rename = "type")]
pub notification_type: String,
pub encrypted_webhook: Option<String>,
pub encrypted_token: Option<String>,
pub encrypted_chat_id: Option<String>,
pub message: MessagePayload,
}
#[derive(Deserialize, Serialize)]
pub struct MessagePayload {
pub username: String,
pub current_streak: i32,
pub contributions_today: i32,
pub message: String,
}
#[derive(Serialize)]
pub struct NotificationResponse {
pub success: bool,
pub error: Option<String>,
}
pub async fn send_notification(
Json(payload): Json<NotificationRequest>,
) -> Result<Json<NotificationResponse>, StatusCode> {
// Read the shared secret so a missing VPS_SECRET fails loudly;
// in production, compare it against the X-API-Secret header (see Security Considerations)
let _vps_secret = std::env::var("VPS_SECRET")
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
// Initialize encryption service
let encryption_key = std::env::var("ENCRYPTION_KEY")
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
let encryption_service = EncryptionService::new(&encryption_key)
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
// Route based on notification type
match payload.notification_type.as_str() {
"discord" => {
let webhook = payload.encrypted_webhook
.ok_or(StatusCode::BAD_REQUEST)?;
// Decrypt webhook URL
let decrypted_webhook = encryption_service
.decrypt(&webhook)
.map_err(|_| StatusCode::BAD_REQUEST)?;
// Send to Discord
discord::send_discord_notification(&decrypted_webhook, &payload.message)
.await
.map(|_| Json(NotificationResponse {
success: true,
error: None,
}))
.map_err(|e| {
eprintln!("Discord error: {}", e);
StatusCode::INTERNAL_SERVER_ERROR
})
}
"telegram" => {
let token = payload.encrypted_token
.ok_or(StatusCode::BAD_REQUEST)?;
let chat_id = payload.encrypted_chat_id
.ok_or(StatusCode::BAD_REQUEST)?;
// Decrypt credentials
let decrypted_token = encryption_service
.decrypt(&token)
.map_err(|_| StatusCode::BAD_REQUEST)?;
let decrypted_chat_id = encryption_service
.decrypt(&chat_id)
.map_err(|_| StatusCode::BAD_REQUEST)?;
// Send to Telegram
telegram::send_telegram_notification(
&decrypted_token,
&decrypted_chat_id,
&payload.message,
)
.await
.map(|_| Json(NotificationResponse {
success: true,
error: None,
}))
.map_err(|e| {
eprintln!("Telegram error: {}", e);
StatusCode::INTERNAL_SERVER_ERROR
})
}
_ => Err(StatusCode::BAD_REQUEST),
}
}
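To pin down the JSON shape the handler expects, here is a sketch of a unit test (the payload values are made up) that could sit at the bottom of handlers.rs:
#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn deserializes_a_discord_request() {
        // Hypothetical payload mirroring what the Worker sends.
        let body = json!({
            "type": "discord",
            "encrypted_webhook": "base64-nonce-plus-ciphertext",
            "message": {
                "username": "octocat",
                "current_streak": 42,
                "contributions_today": 3,
                "message": "Keep the streak alive!"
            }
        });
        let req: NotificationRequest = serde_json::from_value(body).unwrap();
        assert_eq!(req.notification_type, "discord");
        assert!(req.encrypted_webhook.is_some());
        assert!(req.encrypted_token.is_none());
    }
}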
4. Discord Integration (discord.rs)
use reqwest::Client;
use serde_json::json;
use crate::handlers::MessagePayload;
pub async fn send_discord_notification(
webhook_url: &str,
message: &MessagePayload,
) -> Result<(), Box<dyn std::error::Error>> {
let client = Client::new();
// Build Discord embed
let embed = json!({
"title": format!("GitHub Streak Update - {}", message.username),
"description": message.message,
"color": if message.contributions_today > 0 { 0x00ff00 } else { 0xff0000 },
"fields": [
{
"name": "Current Streak",
"value": format!("{} days", message.current_streak),
"inline": true
},
{
"name": "Contributions Today",
"value": message.contributions_today.to_string(),
"inline": true
}
],
"timestamp": chrono::Utc::now().to_rfc3339()
});
let payload = json!({
"embeds": [embed]
});
// Send to Discord
let response = client
.post(webhook_url)
.json(&payload)
.timeout(std::time::Duration::from_secs(10))
.send()
.await?;
if !response.status().is_success() {
let error_text = response.text().await?;
return Err(format!("Discord API error: {}", error_text).into());
}
Ok(())
}
5. Telegram Integration (telegram.rs)
use reqwest::Client;
use serde_json::json;
use crate::handlers::MessagePayload;
pub async fn send_telegram_notification(
bot_token: &str,
chat_id: &str,
message: &MessagePayload,
) -> Result<(), Box<dyn std::error::Error>> {
let client = Client::new();
// Build Telegram message (Markdown format)
let text = format!(
"*GitHub Streak Update - {}*\n\n{}\n\n*Current Streak:* {} days\n*Contributions Today:* {}",
message.username,
message.message,
message.current_streak,
message.contributions_today
);
let payload = json!({
"chat_id": chat_id,
"text": text,
"parse_mode": "Markdown"
});
// Send to Telegram
let url = format!("https://api.telegram.org/bot{}/sendMessage", bot_token);
let response = client
.post(&url)
.json(&payload)
.timeout(std::time::Duration::from_secs(10))
.send()
.await?;
if !response.status().is_success() {
let error_text = response.text().await?;
return Err(format!("Telegram API error: {}", error_text).into());
}
Ok(())
}
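One optimization worth considering (not in the original code): both senders above build a fresh reqwest::Client per call. reqwest recommends reusing a single client so connections are pooled; a minimal sketch using std::sync::OnceLock:
use std::sync::OnceLock;
use reqwest::Client;

// One shared HTTP client for the whole process; reqwest pools connections internally.
fn http_client() -> &'static Client {
    static CLIENT: OnceLock<Client> = OnceLock::new();
    CLIENT.get_or_init(|| {
        Client::builder()
            .timeout(std::time::Duration::from_secs(10))
            .build()
            .expect("failed to build reqwest client")
    })
}
discord.rs and telegram.rs would then call http_client() instead of Client::new(), and could drop the per-request .timeout() calls.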
Deployment
Dockerfile
FROM rust:1.75 as builder
WORKDIR /app
# Copy manifests
COPY Cargo.toml Cargo.lock ./
# Copy source code
COPY src ./src
# Build release binary
RUN cargo build --release
# Runtime stage
FROM debian:bookworm-slim
# Install CA certificates for HTTPS (and libssl3 for reqwest's default TLS backend)
RUN apt-get update && \
apt-get install -y ca-certificates libssl3 && \
rm -rf /var/lib/apt/lists/*
# Copy binary from builder
COPY --from=builder /app/target/release/streaky-notification-proxy /usr/local/bin/
# Expose port
EXPOSE 3000
# Run binary
CMD ["streaky-notification-proxy"]
Koyeb Configuration
Environment Variables:
ENCRYPTION_KEY=your-32-character-encryption-key-here
VPS_SECRET=your-vps-secret-uuid-here
PORT=3000
Resources:
- Memory: 512MB (free tier)
- CPU: 0.1 vCPU
- Region: Frankfurt (closest to Cloudflare edge)
Deployment:
- Push to GitHub
- Connect Koyeb to repo
- Set environment variables
- Deploy from the server/ directory
- Get the public URL
Worker Integration
Now the Worker can call the Rust proxy instead of Discord/Telegram directly:
// web/backend/src/services/notifications.ts
export class NotificationService {
constructor(private env: Env) {}
async sendDiscordNotification(
encryptedWebhook: string,
message: NotificationMessage
): Promise<NotificationResult> {
try {
const response = await fetch(`${this.env.VPS_URL}/send-notification`, {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-API-Secret": this.env.VPS_SECRET,
},
body: JSON.stringify({
type: "discord",
encrypted_webhook: encryptedWebhook,
message,
}),
});
if (!response.ok) {
const error = await response.text();
return { success: false, error };
}
return { success: true };
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "Unknown error",
};
}
}
async sendTelegramNotification(
encryptedToken: string,
encryptedChatId: string,
message: NotificationMessage
): Promise<NotificationResult> {
try {
const response = await fetch(`${this.env.VPS_URL}/send-notification`, {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-API-Secret": this.env.VPS_SECRET,
},
body: JSON.stringify({
type: "telegram",
encrypted_token: encryptedToken,
encrypted_chat_id: encryptedChatId,
message,
}),
});
if (!response.ok) {
const error = await response.text();
return { success: false, error };
}
return { success: true };
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "Unknown error",
};
}
}
}
Security Considerations
1. End-to-End Encryption
Data flow:
- Credentials encrypted in D1 database
- Worker reads encrypted data (never decrypts)
- Worker sends encrypted data to Rust VPS
- Rust VPS decrypts and forwards
- Plaintext immediately discarded from memory
Even if HTTPS is compromised, credentials stay encrypted in transit.
2. API Authentication
Every request requires an X-API-Secret header:
// In production, add middleware (a sketch of wiring this up follows below)
use axum::http::{HeaderMap, StatusCode};

async fn validate_api_secret(
headers: &HeaderMap,
) -> Result<(), StatusCode> {
let secret = headers
.get("X-API-Secret")
.and_then(|v| v.to_str().ok())
.ok_or(StatusCode::UNAUTHORIZED)?;
let expected_secret = std::env::var("VPS_SECRET")
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
if secret != expected_secret {
return Err(StatusCode::UNAUTHORIZED);
}
Ok(())
}
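The handler itself never checks that header, so to actually enforce it, the validator above can be wired in as axum middleware. A sketch, assuming axum 0.7's middleware::from_fn and attached in main.rs:
use axum::{
    extract::Request,
    http::{HeaderMap, StatusCode},
    middleware::{self, Next},
    response::Response,
    routing::post,
    Router,
};

// Runs before the handler; rejects requests with a missing or wrong secret.
async fn require_api_secret(
    headers: HeaderMap,
    request: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    validate_api_secret(&headers).await?;
    Ok(next.run(request).await)
}

// The protected part of the router from main.rs:
fn protected_routes() -> Router {
    Router::new()
        .route("/send-notification", post(handlers::send_notification))
        .layer(middleware::from_fn(require_api_secret))
}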
3. Stateless Design
The Rust VPS stores nothing:
- No database
- No file storage
- No persistent state
- Credentials decrypted, used, and immediately dropped
Memory safety guaranteed by Rust's ownership system.
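One caveat on "immediately dropped": dropping a String frees the memory but does not overwrite it. If you want the plaintext actively wiped, the zeroize crate (an extra dependency, not in the Cargo.toml above) is the usual tool; a sketch:
use zeroize::Zeroizing;
use crate::encryption::EncryptionService;

// Decrypt into a buffer that is overwritten with zeros as soon as it is dropped.
fn decrypt_wiped(
    svc: &EncryptionService,
    encrypted: &str,
) -> Result<Zeroizing<String>, Box<dyn std::error::Error>> {
    Ok(Zeroizing::new(svc.decrypt(encrypted)?))
}
The handler would decrypt via decrypt_wiped and pass the result (it derefs to &str) to the Discord/Telegram senders; the plaintext is zeroed the moment it goes out of scope.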
4. Rate Limiting
Koyeb provides DDoS protection. For additional safety, add rate limiting with tower's RateLimitLayer, wrapped in a Buffer because the rate-limited service isn't Clone (requires the "buffer" and "limit" features of the tower crate):
use axum::{error_handling::HandleErrorLayer, http::StatusCode, BoxError};
use std::time::Duration;
use tower::{buffer::BufferLayer, limit::RateLimitLayer, ServiceBuilder};

let app = Router::new()
    .route("/send-notification", post(handlers::send_notification))
    .layer(ServiceBuilder::new()
        .layer(HandleErrorLayer::new(|_: BoxError| async { StatusCode::TOO_MANY_REQUESTS }))
        .layer(BufferLayer::new(1024))
        .layer(RateLimitLayer::new(100, Duration::from_secs(60)))); // 100 req/min
Performance Results
Before (Direct from Workers)
Success rate: 0%
Error: 429 Too Many Requests
Latency: N/A (blocked)
After (Through Rust Proxy)
Success rate: 100%
Cold start: ~10 seconds (VPS sleeping)
Warm: ~3.6 seconds
Latency breakdown:
- Worker → VPS: ~500ms
- VPS decrypt: ~10ms
- VPS → Discord/Telegram: ~3s
- Total: ~3.6s
Binary Size
Rust binary: 5.2 MB
Docker image: 85 MB (with Debian base)
Memory usage: ~0 MB (idle)
CPU usage: <1% (idle)
Compare to Node.js:
- Binary: 50+ MB
- Docker image: 200+ MB
- Memory: 50+ MB (idle)
Rust is perfect for resource-constrained environments.
Lessons Learned
1. Rust Has a Learning Curve
Coming from TypeScript, I found Rust's ownership system challenging. But the compiler errors are incredibly helpful. Once it compiles, it usually works.
2. Axum Is Excellent
Ergonomic, fast, and well-documented. The middleware system is powerful. Highly recommend for Rust web services.
3. Encryption Is Tricky
Matching encryption algorithms between TypeScript and Rust required careful attention to:
- IV/nonce size (12 bytes for GCM)
- Base64 encoding (STANDARD vs URL_SAFE)
- Key derivation (same 32-byte key)
4. Free Tiers Are Generous
Koyeb's free tier is perfect for this use case:
- 512MB RAM (plenty for Rust)
- Custom domains
- Automatic HTTPS
5. Stateless Is Simpler
No database = no migrations, no backups, no consistency issues. Just pure compute. Rust's ownership system makes this safe.
What's Next?
In Part 3, I'll dive deep into the distributed queue system with Cloudflare Service Bindings:
- Atomic queue claiming with SQL
- Idempotency protection
- Stale item requeuing
- Batch progress tracking
- Scaling to 1000+ users
Try It Out
Live App: streakyy.vercel.app
GitHub: github.com/0xReLogic/Streaky
Rust VPS Code: server/
Let's Connect
Building something similar? Have questions about Rust or Cloudflare Workers? Drop a comment!
GitHub: @0xReLogic
Project: Streaky