Most people think consistency on LinkedIn is about discipline.
I treated it as a backend problem.
So I built a system that:
- fetches content from multiple sources
- generates LinkedIn posts using AI
- schedules them across the week
- and publishes automatically
No manual intervention.
High-Level Flow
The system runs in three stages:
- Content Fetching (Sunday)
- Slot Allocation + AI Generation (Monday)
- Scheduled Publishing (Tue–Thu)
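In code, the three stages reduce to a small schedule map. The cron expressions and task names below are illustrative assumptions for this sketch, not the repo's actual config:

```javascript
// Illustrative wiring of the three stages. Cron expressions and task names
// are assumptions: "0 9 * * 0" = Sunday 09:00, "0 9 * * 1" = Monday 09:00.
// Publishing has no cron of its own — it is driven by per-job queue delays.
const PIPELINE_SCHEDULES = {
  fetch:    { cron: "0 9 * * 0", task: "fetchAllSources" },          // Sunday
  allocate: { cron: "0 9 * * 1", task: "allocateSlotsAndGenerate" }, // Monday
  publish:  { cron: null,        task: "drainLinkedInQueue" },       // Tue–Thu
};

console.log(Object.keys(PIPELINE_SCHEDULES)); // stages in pipeline order
```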
1. Fetch Scheduler
Every Sunday, a scheduler pulls content from multiple platforms:
const sources = [
  "devto", "github", "medium", "npm",
  "Hashnode", "nodeweekly", "reddit"
];
Each source is processed and stored in MongoDB.
Fetch Logic
const rawItems = await FetcherService.fetchFromSource(source, keyword);

for (const item of rawItems) {
  await FetchedContent.updateOne(
    { url: item.url },
    {
      $set: {
        ...item,
        source,
      },
      $setOnInsert: {
        expiresAt: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000),
      },
    },
    { upsert: true }
  );
}
Why this design?
- upsert prevents duplicates
- TTL (expiresAt) auto-cleans old content
- decouples fetching from publishing
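The TTL cleanup only works if `expiresAt` is backed by a TTL index on the collection, e.g. `db.fetchedcontents.createIndex({ expiresAt: 1 }, { expireAfterSeconds: 0 })`. The helper below (the name is mine) computes the same seven-day expiry used in the `$setOnInsert` above:

```javascript
// Expiry timestamp for MongoDB TTL cleanup: with a TTL index on `expiresAt`
// (expireAfterSeconds: 0), documents are deleted once this date passes.
function ttlExpiry(days, now = Date.now()) {
  return new Date(now + days * 24 * 60 * 60 * 1000);
}

const expiry = ttlExpiry(7, Date.parse("2024-01-01T00:00:00Z"));
console.log(expiry.toISOString()); // 2024-01-08T00:00:00.000Z
```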
2. Slot Allocation Scheduler
This is the core of the system. Instead of posting randomly, I pre-allocate slots:
- Tuesday → 10 AM, 6 PM
- Wednesday → 10 AM, 6 PM
- Thursday → 10 AM, 6 PM
Slot Generation
const SLOT_TIMES = [
  { day: 2, hour: 10 },
  { day: 2, hour: 18 },
  { day: 3, hour: 10 },
  { day: 3, hour: 18 },
  { day: 4, hour: 10 },
  { day: 4, hour: 18 },
];
Slots are generated weekly using timezone-aware logic:
let slot = nowIST
  .startOf("isoWeek")
  .add(day, "day")
  .hour(hour)
  .minute(0);
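Assembled into a full helper, slot generation might look like the sketch below. I swap dayjs for plain UTC `Date` math so the snippet runs standalone (the real system is timezone-aware in IST), and I read `day` as a JavaScript weekday number (2 = Tuesday) — with a Monday week start that makes the offset `day - 1`:

```javascript
// Repeated here so the snippet is self-contained.
const SLOT_TIMES = [
  { day: 2, hour: 10 }, { day: 2, hour: 18 },
  { day: 3, hour: 10 }, { day: 3, hour: 18 },
  { day: 4, hour: 10 }, { day: 4, hour: 18 },
];

// Expand SLOT_TIMES into concrete timestamps for the week starting at
// `mondayUtc`. `day` uses JS weekday numbering (2 = Tuesday), so the
// offset from Monday is day - 1.
function generateSlots(mondayUtc, slotTimes = SLOT_TIMES) {
  return slotTimes.map(({ day, hour }) => {
    const slot = new Date(mondayUtc);
    slot.setUTCDate(slot.getUTCDate() + (day - 1));
    slot.setUTCHours(hour, 0, 0, 0);
    return slot;
  });
}

// Monday 2024-01-01 → first slot is Tuesday 2024-01-02, 10:00 UTC.
const slots = generateSlots(new Date(Date.UTC(2024, 0, 1)));
console.log(slots[0].toISOString()); // 2024-01-02T10:00:00.000Z
```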
Allocation Logic
- Skip already used slots
- Avoid duplicate articles
- Assign oldest unprocessed content
const contents = await FetchedContent.find({
  _id: { $nin: usedArticleIds },
}).sort({ createdAt: 1 });
Each allocated post is inserted as:
await GeneratedPost.create({
  articleId: contents[i]._id,
  status: "draft",
  publishAt,
});
Then queued for AI generation:
await aiQueue.add(
  JOB_TYPES.GENERATE_POST,
  { postId: post._id }
);
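Stripped of the database calls, the allocation rule is a zip of free slots against the oldest unused content. A pure sketch (function and field names are mine):

```javascript
// Pure version of the allocation rule: pair each free slot with the oldest
// content item not yet used. Names are illustrative, not from the repo.
function allocate(freeSlots, contents, usedArticleIds) {
  const used = new Set(usedArticleIds);
  const available = contents
    .filter((c) => !used.has(c.id))
    .sort((a, b) => a.createdAt - b.createdAt); // oldest first
  return freeSlots.slice(0, available.length).map((publishAt, i) => ({
    articleId: available[i].id,
    status: "draft",
    publishAt,
  }));
}

const posts = allocate(
  ["Tue 10:00", "Tue 18:00"],
  [
    { id: "a", createdAt: 2 },
    { id: "b", createdAt: 1 },
    { id: "c", createdAt: 3 },
  ],
  ["c"] // already allocated in an earlier run
);
console.log(posts.map((p) => p.articleId)); // oldest unused first: [ 'b', 'a' ]
```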
3. AI Worker (Content → LinkedIn Post)
This worker converts raw content into a LinkedIn-ready post.
Processing Flow
const post = await GeneratedPost.findOneAndUpdate(
  { _id: postId, status: "draft" },
  { $set: { status: "generating" } }
);
Fetch original content:
const content = await FetchedContent.findById(post.articleId);
Generate post using AI:
const text = await aiService.generateForContent(content);
Update and queue for publishing:
post.status = "queued";
post.text = text;
await post.save();

await linkedinQueue.add(
  JOB_TYPES.POST_TO_LINKEDIN,
  { postId },
  { delay }
);
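The `delay` passed to the queue is just the gap between now and the slot time, clamped at zero so overdue jobs fire immediately (the helper name is mine):

```javascript
// Milliseconds until publishAt; clamped so late posts publish right away.
function publishDelay(publishAt, now = new Date()) {
  return Math.max(0, publishAt.getTime() - now.getTime());
}

const now = new Date("2024-01-02T09:00:00Z");
console.log(publishDelay(new Date("2024-01-02T10:00:00Z"), now)); // 3600000 (1 hour)
console.log(publishDelay(new Date("2024-01-02T08:00:00Z"), now)); // 0 (already due)
```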
4. LinkedIn Worker (Final Stage)
This worker handles publishing at the exact scheduled time.
Safe Publishing Logic
const post = await GeneratedPost.findOneAndUpdate(
  {
    _id: postId,
    status: { $nin: ["posted", "publishing"] },
    publishAt: { $lte: new Date() },
  },
  { $set: { status: "publishing" } }
);
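The `$nin` guard is what makes this safe under concurrency: `findOneAndUpdate` is atomic, so only one worker's filter matches; everyone else gets `null` back and bails out. The same claim pattern in miniature (in-memory, names mine; the `publishAt` check is omitted):

```javascript
// In-memory sketch of the atomic claim: the first caller flips the status
// and receives the post; later callers see "publishing"/"posted" and get null.
function claim(store, postId) {
  const post = store.get(postId);
  if (!post || ["posted", "publishing"].includes(post.status)) return null;
  post.status = "publishing";
  return post;
}

const store = new Map([["p1", { status: "queued", text: "hello" }]]);
console.log(claim(store, "p1")?.status); // "publishing" — first worker wins
console.log(claim(store, "p1"));         // null — second worker bails out
```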
Publish Call
const result = await publishToLinkedIn({
  text: post.text,
  url: post.url,
  title: post.title,
});
Success Case
await GeneratedPost.findByIdAndUpdate(postId, {
  status: "posted",
  linkedinPostUrn: urn,
  postedAt: new Date(),
});
Failure Handling
await GeneratedPost.findByIdAndUpdate(postId, {
  $set: { status: "failed", error: err.message },
  $inc: { attempts: 1 },
});
System Design Highlights
1. Fully Asynchronous Pipeline
- Fetch → Allocate → Generate → Publish
- Each stage is independent
2. Status-Driven Workflow
draft → generating → queued → publishing → posted / failed
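That lifecycle can be enforced with a small transition map, so a post can never jump states, e.g. from draft straight to posted. A sketch (the map is mine; the repo enforces this implicitly through its guarded queries):

```javascript
// Allowed transitions for the post lifecycle.
const TRANSITIONS = {
  draft: ["generating"],
  generating: ["queued", "failed"],
  queued: ["publishing"],
  publishing: ["posted", "failed"],
  posted: [],   // terminal
  failed: [],   // terminal (until a retry resets it)
};

function canTransition(from, to) {
  return (TRANSITIONS[from] ?? []).includes(to);
}

console.log(canTransition("queued", "publishing")); // true
console.log(canTransition("draft", "posted"));      // false — must go through the pipeline
```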
3. Idempotency
- upsert on fetch
- guarded updates in workers
- prevents duplicate posts
4. Timezone Safety
- All slots generated in IST
- stored in UTC
5. TTL Cleanup
- both content and posts auto-expire after 7 days
Challenges I Faced
- Race conditions between workers
- Handling delayed jobs correctly
- Preventing duplicate publishing
- LinkedIn API inconsistencies
- Managing timezones reliably
What I'd Improve Next
- Add exponential backoff retries
- Distributed workers for scaling
- Observability (logs + metrics)
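For the retry item, the usual policy doubles the delay per attempt up to a cap; BullMQ supports this natively via job options (`attempts` plus `backoff: { type: "exponential", delay }`). The math itself is one line (pure sketch, names mine):

```javascript
// Exponential backoff: base * 2^(attempt - 1), capped at maxMs.
function backoffDelay(attempt, baseMs = 1000, maxMs = 60000) {
  return Math.min(maxMs, baseMs * 2 ** (attempt - 1));
}

console.log([1, 2, 3, 4].map((a) => backoffDelay(a))); // [ 1000, 2000, 4000, 8000 ]
```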
GitHub Repo
https://github.com/Jay-Solanki-31/Linkdin-Bot
Feedback
If you're into backend systems, I'd love your thoughts:
- Would you keep MongoDB as a queue or move fully to Redis?
- How would you scale this pipeline for multiple users?
- Any improvements in the worker idempotency design?
Open to suggestions and contributions.