If you manage social media for a business like PropFirm Key with dozens of content variations — different topics, multiple languages, several platforms — you already know the pain. Manually scheduling posts across Instagram, X (Twitter), LinkedIn, Facebook, Reddit, Pinterest, and others is a full-time job.
I decided to build my own automation system. Not a SaaS product — a purpose-built engine that takes a pool of content, combines it across dimensions (topics, languages, platforms), schedules everything with anti-ban spacing rules, and handles failures gracefully.
This article walks through the architecture, the database design, the queue system, and the hard-earned lessons. All code is TypeScript, all battle-tested.
## The Problem: Combinatorial Explosion
Imagine you have 20 topics, 10 languages, and 12 platforms. That's 2,400 possible posts. You need a system that:
- Generates all valid combinations
- Schedules them with intelligent spacing
- Respects per-platform rate limits
- Tracks what's been published
- Retries failures without duplicating successful posts
## Architecture

```
┌─────────────────────────────────────┐
│         Next.js API Routes          │
├─────────────────────────────────────┤
│           Content Engine            │
├─────────────────────────────────────┤
│         Queue + Scheduler           │
├─────────────────────────────────────┤
│          Platform Adapters          │
└─────────────────────────────────────┘
                  │
             ┌────┴────┐
             │ SQLite  │
             └─────────┘
```
## Database Schema

```sql
CREATE TABLE posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  topic_slug TEXT NOT NULL,
  locale TEXT NOT NULL DEFAULT 'en',
  platform TEXT NOT NULL,
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  media_url TEXT,
  status TEXT NOT NULL DEFAULT 'draft'
    CHECK(status IN ('draft','queued','publishing','published','failed','skipped')),
  scheduled_at TEXT,
  published_at TEXT,
  external_id TEXT,
  error_message TEXT,
  retry_count INTEGER NOT NULL DEFAULT 0,
  created_at TEXT NOT NULL DEFAULT (datetime('now')),
  UNIQUE(topic_slug, locale, platform)
);

CREATE TABLE publications (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  post_id INTEGER NOT NULL REFERENCES posts(id),
  platform TEXT NOT NULL,
  external_id TEXT,
  published_at TEXT NOT NULL DEFAULT (datetime('now')),
  response_data TEXT
);

CREATE TABLE platform_logs (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  platform TEXT NOT NULL,
  action TEXT NOT NULL,
  status TEXT NOT NULL,
  duration_ms INTEGER,
  error TEXT,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
```
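The schema works as-is, but the queue processor filters `posts` by `status` and `scheduled_at`, and the monitoring endpoint scans `platform_logs` by `status` and `created_at`. Two indexes keep those queries fast as the tables grow — a suggestion on my part, not part of the original schema:

```sql
-- Speeds up the "due posts" lookup: WHERE status = 'queued' AND scheduled_at <= ...
CREATE INDEX IF NOT EXISTS idx_posts_status_scheduled
  ON posts(status, scheduled_at);

-- Speeds up the recent-errors query used by the monitoring endpoint
CREATE INDEX IF NOT EXISTS idx_logs_status_created
  ON platform_logs(status, created_at);
```

The `UNIQUE(topic_slug, locale, platform)` constraint also makes regeneration idempotent: inserting with `INSERT OR IGNORE` silently skips combinations that already exist.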
## The Combinatorial Content Engine

```typescript
interface ContentDimensions {
  topics: { slug: string; title: Record<string, string>; body: Record<string, string> }[];
  locales: string[];
  platforms: string[];
}

interface GeneratedPost {
  topicSlug: string;
  locale: string;
  platform: string;
  title: string;
  body: string;
}

function generateCombinations(dimensions: ContentDimensions): GeneratedPost[] {
  const posts: GeneratedPost[] = [];
  for (const topic of dimensions.topics) {
    for (const locale of dimensions.locales) {
      // Skip combinations that have no translation for this locale
      if (!topic.title[locale] || !topic.body[locale]) continue;
      for (const platform of dimensions.platforms) {
        posts.push({
          topicSlug: topic.slug,
          locale,
          platform,
          title: adaptForPlatform(topic.title[locale], platform),
          body: adaptForPlatform(topic.body[locale], platform),
        });
      }
    }
  }
  return posts;
}

function adaptForPlatform(text: string, platform: string): string {
  switch (platform) {
    case 'twitter':
      // Hard 280-character limit; reserve 3 characters for the ellipsis
      return text.length > 280 ? text.slice(0, 277) + '...' : text;
    case 'linkedin':
      return text + '\n\n#trading #finance';
    case 'reddit':
      // Reddit culture punishes hashtag spam
      return text.replace(/#\w+/g, '').trim();
    default:
      return text;
  }
}
```
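A quick sanity check of the per-platform rules, with `adaptForPlatform` re-declared from above so the snippet runs on its own:

```typescript
function adaptForPlatform(text: string, platform: string): string {
  switch (platform) {
    case 'twitter':
      return text.length > 280 ? text.slice(0, 277) + '...' : text;
    case 'linkedin':
      return text + '\n\n#trading #finance';
    case 'reddit':
      return text.replace(/#\w+/g, '').trim();
    default:
      return text;
  }
}

// Truncation keeps the result at exactly the platform limit
console.log(adaptForPlatform('x'.repeat(300), 'twitter').length); // 280

// Hashtags are stripped and leftover whitespace trimmed for Reddit
console.log(adaptForPlatform('Tips #trading #forex', 'reddit'));  // "Tips"
```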
## The Anti-Ban Queue
This is the most critical part. Post too fast and platforms will shadow-ban you. Rules I settled on after trial and error:
- 45 minutes minimum between posts on the same platform
- 24 hours minimum between posts on the same topic
- 8 posts maximum per platform per day
- Random jitter of 5-15 minutes on every scheduled time
```typescript
const DEFAULT_RULES = {
  minPlatformGapMinutes: 45,
  minTopicCooldownHours: 24,
  maxPostsPerPlatformPerDay: 8,
  jitterMinMinutes: 5,
  jitterMaxMinutes: 15,
};

function scheduleNextBatch(db: Database, rules = DEFAULT_RULES): number {
  const drafts = db.prepare(`
    SELECT id, topic_slug, platform
    FROM posts WHERE status = 'draft'
    ORDER BY RANDOM()
  `).all() as Array<{ id: number; topic_slug: string; platform: string }>;

  let scheduled = 0;
  for (const draft of drafts) {
    const nextSlot = findNextAvailableSlot(db, draft, rules);
    if (!nextSlot) continue; // daily cap reached; this draft waits for the next run

    const jitter = randomBetween(rules.jitterMinMinutes, rules.jitterMaxMinutes);
    const scheduledAt = addMinutes(nextSlot, jitter);

    db.prepare(`UPDATE posts SET status = 'queued', scheduled_at = ? WHERE id = ?`)
      .run(scheduledAt.toISOString(), draft.id);
    scheduled++;
  }
  return scheduled;
}
```
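The article references `findNextAvailableSlot`, `addMinutes`, and `randomBetween` without showing them. Here is one plausible sketch, written against plain arrays instead of the SQLite queries the real version would run, so it is self-contained; the signatures and every internal detail are my assumptions, not the original code:

```typescript
interface RecentPost {
  platform: string;
  topicSlug: string;
  scheduledAt: Date;
}

interface SpacingRules {
  minPlatformGapMinutes: number;
  minTopicCooldownHours: number;
  maxPostsPerPlatformPerDay: number;
}

function addMinutes(date: Date, minutes: number): Date {
  return new Date(date.getTime() + minutes * 60_000);
}

function randomBetween(min: number, max: number): number {
  return min + Math.random() * (max - min);
}

function latest(dates: Date[]): Date | null {
  return dates.length ? new Date(Math.max(...dates.map(d => d.getTime()))) : null;
}

function findNextAvailableSlot(
  draft: { platform: string; topicSlug: string },
  recent: RecentPost[], // queued + published posts from a recent window
  rules: SpacingRules,
  now: Date = new Date()
): Date | null {
  const samePlatform = recent.filter(p => p.platform === draft.platform);

  // Rule: hard daily cap per platform (rolling 24-hour window)
  const dayAgo = addMinutes(now, -24 * 60);
  const lastDayCount = samePlatform.filter(p => p.scheduledAt >= dayAgo).length;
  if (lastDayCount >= rules.maxPostsPerPlatformPerDay) return null;

  let slot = now;

  // Rule: minimum gap between posts on the same platform
  const lastOnPlatform = latest(samePlatform.map(p => p.scheduledAt));
  if (lastOnPlatform) {
    const earliest = addMinutes(lastOnPlatform, rules.minPlatformGapMinutes);
    if (earliest > slot) slot = earliest;
  }

  // Rule: cooldown for the same topic across all platforms
  const lastOnTopic = latest(
    recent.filter(p => p.topicSlug === draft.topicSlug).map(p => p.scheduledAt)
  );
  if (lastOnTopic) {
    const earliest = addMinutes(lastOnTopic, rules.minTopicCooldownHours * 60);
    if (earliest > slot) slot = earliest;
  }

  return slot;
}
```

The key property: the slot is the latest of "now", "last platform post + gap", and "last topic post + cooldown", so every rule is satisfied simultaneously.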
## Platform Adapters

Each platform gets an adapter implementing a common interface:

```typescript
interface PlatformAdapter {
  name: string;
  publish(post: PostPayload): Promise<PlatformResult>;
  validateCredentials(): Promise<boolean>;
}

interface PlatformResult {
  success: boolean;
  externalId?: string;
  error?: string;
  rawResponse?: unknown;
}
```
```typescript
class ProxyServiceAdapter implements PlatformAdapter {
  constructor(
    public name: string,
    private apiBase: string,
    private apiKey: string
  ) {}

  async publish(post: PostPayload): Promise<PlatformResult> {
    try {
      const response = await fetch(`${this.apiBase}/posts`, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${this.apiKey}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          platform: this.name,
          content: post.body,
          media: post.mediaUrl,
        }),
      });
      // Some gateways return non-JSON bodies on errors; don't let parsing throw
      const data: any = await response.json().catch(() => ({}));
      return {
        success: response.ok,
        externalId: data.id,
        error: data.error,
        rawResponse: data,
      };
    } catch (err) {
      // Network failures become a normal failed result, not an unhandled exception
      return { success: false, error: err instanceof Error ? err.message : String(err) };
    }
  }

  async validateCredentials(): Promise<boolean> {
    const res = await fetch(`${this.apiBase}/me`, {
      headers: { Authorization: `Bearer ${this.apiKey}` },
    });
    return res.ok;
  }
}
```
The adapter pattern means swapping from a direct API to a proxy service takes 20 minutes — zero changes to the queue, scheduler, or database.
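The queue processor resolves adapters through a `getAdapter` lookup that the article doesn't show. A minimal registry sketch — the `Map`, `registerAdapter`, and the error behavior are my additions, assuming the keys match the strings stored in `posts.platform`:

```typescript
interface PostPayload {
  title: string;
  body: string;
  mediaUrl?: string;
}

interface PlatformResult {
  success: boolean;
  externalId?: string;
  error?: string;
  rawResponse?: unknown;
}

interface PlatformAdapter {
  name: string;
  publish(post: PostPayload): Promise<PlatformResult>;
  validateCredentials(): Promise<boolean>;
}

// Adapters register themselves once at startup, keyed by platform name
const adapters = new Map<string, PlatformAdapter>();

function registerAdapter(adapter: PlatformAdapter): void {
  adapters.set(adapter.name, adapter);
}

function getAdapter(platform: string): PlatformAdapter {
  const adapter = adapters.get(platform);
  if (!adapter) {
    // Fail loudly: a typo in posts.platform should never silently drop posts
    throw new Error(`No adapter registered for platform: ${platform}`);
  }
  return adapter;
}
```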
## Failure Handling and Retry Logic
```typescript
const MAX_RETRIES = 3;
const RETRY_DELAYS = [5, 30, 120]; // minutes

async function processQueue(db: Database): Promise<void> {
  const due = db.prepare(`
    SELECT * FROM posts
    WHERE status = 'queued' AND scheduled_at <= datetime('now')
    ORDER BY scheduled_at ASC LIMIT 5
  `).all() as Post[];

  for (const post of due) {
    const adapter = getAdapter(post.platform);
    db.prepare(`UPDATE posts SET status = 'publishing' WHERE id = ?`).run(post.id);

    const startTime = Date.now();
    let result: PlatformResult;
    try {
      result = await adapter.publish({
        title: post.title,
        body: post.body,
        mediaUrl: post.media_url,
      });
    } catch (err) {
      // A throwing adapter must not crash the loop or leave the post stuck in 'publishing'
      result = { success: false, error: err instanceof Error ? err.message : String(err) };
    }
    const duration = Date.now() - startTime;

    db.prepare(`
      INSERT INTO platform_logs (platform, action, status, duration_ms, error)
      VALUES (?, 'publish', ?, ?, ?)
    `).run(post.platform, result.success ? 'ok' : 'error', duration, result.error ?? null);

    if (result.success) {
      db.prepare(`
        UPDATE posts SET status = 'published', published_at = datetime('now'),
        external_id = ? WHERE id = ?
      `).run(result.externalId, post.id);
    } else {
      handleFailure(db, post, result.error ?? 'Unknown error');
    }
  }
}
```
```typescript
function handleFailure(db: Database, post: Post, error: string): void {
  const newRetryCount = post.retry_count + 1;

  // Strictly greater-than, so all three delays (5, 30, 120) actually get used
  if (newRetryCount > MAX_RETRIES) {
    db.prepare(`UPDATE posts SET status = 'failed', error_message = ? WHERE id = ?`)
      .run(error, post.id);
    return;
  }

  const delayMinutes = RETRY_DELAYS[newRetryCount - 1] ?? 120;
  const retryAt = addMinutes(new Date(), delayMinutes);
  db.prepare(`UPDATE posts SET status = 'queued', scheduled_at = ?, retry_count = ? WHERE id = ?`)
    .run(retryAt.toISOString(), newRetryCount, post.id);
}
```
The intermediate publishing status prevents a race condition where a slow API call could cause the same post to be picked up by a concurrent queue processor. (To make the claim truly atomic, the status flip can be a conditional `UPDATE ... WHERE status = 'queued'`, skipping the post if the affected-row count is zero.)
## Monitoring
```typescript
// app/api/queue/stats/route.ts
import { NextResponse } from 'next/server';
import { db } from '@/lib/db'; // shared SQLite connection; adjust the path to your setup

export async function GET() {
  const stats = db.prepare(`
    SELECT status, platform, COUNT(*) as count
    FROM posts GROUP BY status, platform
  `).all();

  const recentErrors = db.prepare(`
    SELECT platform, error, created_at
    FROM platform_logs WHERE status = 'error'
    ORDER BY created_at DESC LIMIT 20
  `).all();

  return NextResponse.json({ queue: stats, recentErrors });
}
```
## Lessons Learned
**The 45-minute gap is the sweet spot.** I initially tried 20 minutes and got rate-limited on two platforms within a day.

**The topic cooldown matters more than you'd think.** Posting about the same subject across platforms within a few hours triggers cross-platform spam detection.

**SQLite is more than enough.** Even with thousands of posts tracked, queries run in under 1 ms.

**Retry delays should increase sharply.** Platform outages typically last 10-30 minutes, so the 5, 30, 120 minute schedule means later retries hit a working API.

**Build adapters, not integrations.** The adapter pattern meant swapping platforms or services required only a new adapter class, with no changes to anything else.

The full system is around 1,200 lines of TypeScript. The patterns are universal: generate, schedule, publish, retry, log.