I Built a VS Code Extension That Posts to Dev.to, Medium & 7 Social Platforms — Here's How v3.0 Works
A deep technical dive into DotShare v3.0 — the Publishing Suite update that added Dev.to & Medium integrations, a YAML frontmatter parser, platform-first navigation, and a unified PostExecutor architecture. All from inside VS Code.
TL;DR — DotShare v3.0 "The Publishing Suite" transforms the extension from a social poster into a full publishing platform. This post is a deep technical walkthrough of the architecture decisions, the blog API integrations, and the tricky edge cases that took the most time to solve.
🧠 The Problem I Was Solving
Every time I shipped a new feature, I'd write a commit message, then write a Dev.to draft, then rephrase it for LinkedIn, then trim it for Twitter, then reformat it for Medium. Four context switches. Four text editors. Thirty minutes of overhead before my actual audience saw anything.
I'm a VS Code developer. I live in this editor. So I built DotShare — an extension that lets me write once, then distribute everywhere. v1 and v2 handled social platforms. v3.0 adds Dev.to and Medium, and rearchitects the whole thing into something I'm finally proud of.
Let me walk you through exactly how I built it.
🗺️ The Big Picture: What Changed in v3.0
Before v3.0, DotShare had one mental model: a short post goes to social media. The UI reflected that — a single textarea, a platform grid, a share button.
v3.0 breaks that assumption. Now there are two distinct workflows:
┌──────────────────────────────────────────────────┐
│ DotShare v3.0 Workspace │
├──────────────────────┬───────────────────────────┤
│ SOCIAL WORKSPACE │ BLOG WORKSPACE │
│ │ │
│ • Single post │ • Title + Tags │
│ • Thread composer │ • Markdown body │
│ • 280–25K chars │ • Canonical URL │
│ • Image attach │ • Cover image URL │
│ │ • Draft/Publish toggle │
│ Platforms: │ │
│ X, LinkedIn, │ Platforms: │
│ Bluesky, Reddit, │ Dev.to, Medium │
│ Facebook, Discord, │ │
│ Telegram │ │
└──────────────────────┴───────────────────────────┘
The key architectural move was making platform selection drive the UI, not the other way around. Let me show you how that works.
🏗️ Architecture: platform-config.ts as Single Source of Truth
The old code had workspace logic scattered everywhere — HTML conditionals, JS checks, handler guards. I replaced all of it with one file:
// src/platforms/platform-config.ts
export type WorkspaceType = 'social' | 'blog' | 'thread';
export type AuthType = 'oauth' | 'apikey' | 'bearer' | 'bot';
export interface PlatformConfig {
id: string;
name: string;
icon: string;
workspaceType: WorkspaceType;
maxChars: number | null; // null = no limit
supportsThreads: boolean;
supportsMedia: boolean;
supportsScheduling: boolean;
charCountMethod: 'standard' | 'twitter';
authType: AuthType;
color: string;
}
export const PLATFORM_CONFIGS: Record<string, PlatformConfig> = {
x: {
id: 'x',
name: 'X (Twitter)',
icon: '𝕏',
workspaceType: 'social',
maxChars: 280,
supportsThreads: true,
supportsMedia: true,
supportsScheduling: true,
charCountMethod: 'twitter',
authType: 'oauth',
color: '#000000',
},
devto: {
id: 'devto',
name: 'Dev.to',
icon: '👨‍💻',
workspaceType: 'blog',
maxChars: 100000,
supportsThreads: false,
supportsMedia: false, // API doesn't support file uploads
supportsScheduling: false,
charCountMethod: 'standard',
authType: 'apikey',
color: '#0a0a0a',
},
medium: {
id: 'medium',
name: 'Medium',
icon: 'Ⓜ️',
workspaceType: 'blog',
maxChars: 100000,
supportsThreads: false,
supportsMedia: false,
supportsScheduling: false,
charCountMethod: 'standard',
authType: 'bearer',
color: '#00ab6c',
},
// ... linkedin, bluesky, reddit, etc.
};
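One payoff of centralizing this: platform lists can be derived from the config instead of hardcoded. A minimal sketch with a trimmed-down config (getPlatformsFor is my illustrative name, not necessarily what DotShare ships):

```typescript
type WorkspaceType = 'social' | 'blog' | 'thread';

interface MinimalConfig {
  id: string;
  workspaceType: WorkspaceType;
}

// Trimmed-down stand-in for PLATFORM_CONFIGS
const CONFIGS: Record<string, MinimalConfig> = {
  x: { id: 'x', workspaceType: 'social' },
  devto: { id: 'devto', workspaceType: 'blog' },
  medium: { id: 'medium', workspaceType: 'blog' },
};

// Derive the platform list for a workspace, so there's nothing to keep in sync by hand
function getPlatformsFor(workspace: WorkspaceType): string[] {
  return Object.values(CONFIGS)
    .filter(c => c.workspaceType === workspace)
    .map(c => c.id);
}

getPlatformsFor('blog'); // → ['devto', 'medium']
```

Adding a tenth platform then means adding one config entry; every derived list picks it up automatically.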
Now when the user clicks a platform icon in the sidebar, one function handles everything:
// media/webview/app.ts
function switchPlatform(platformId: string): void {
const config = PLATFORM_CONFIGS[platformId];
if (!config) return;
activeCommandPlatform = platformId;
// Workspace switching — driven entirely by config
const workspace = config.workspaceType;
document.querySelectorAll('.workspace').forEach(el => {
(el as HTMLElement).style.display = 'none';
});
const target = document.getElementById(`workspace-${workspace}`);
if (target) target.style.display = 'flex';
// Update platform header
updatePlatformHeader(config);
// Refresh character counter with new limits
updateCharCounter();
updateShareBtn();
}
No more scattered if (platform === 'devto') checks in 12 different places.
📝 The Blog Publish Page: Loading Your Active .md File
This was the feature I was most excited to build. The idea: you're editing a CHANGELOG.md or a blog-post.md in VS Code. Hit a button. The publish form fills itself.
Backend: Reading the Active Editor
// src/handlers/MessageHandler.ts
case 'loadActiveFile': {
const editor = vscode.window.activeTextEditor;
if (!editor) {
this.sendStatus('No active editor found', 'error');
return;
}
const doc = editor.document;
if (doc.languageId !== 'markdown') {
this.sendStatus('Active file is not a Markdown file', 'warning');
return;
}
const content = doc.getText();
const filePath = doc.uri.fsPath;
const fileName = path.basename(filePath, '.md');
// Parse frontmatter and send back to WebView
const parsed = parseFrontmatter(content);
this.webview.postMessage({
command: 'activeFileLoaded',
content: parsed.body,
frontmatter: parsed.data,
fileName,
});
break;
}
The YAML Frontmatter Parser
Dev.to and Medium both use YAML frontmatter. I built a parser that extracts the fields I care about:
// src/utils/frontmatterParser.ts
export interface FrontMatter {
title?: string;
description?: string;
tags?: string[];
cover_image?: string;
canonical_url?: string;
series?: string;
published?: boolean | 'draft' | 'unlisted';
}
export interface ParsedDocument {
data: FrontMatter;
body: string;
}
const FRONTMATTER_REGEX = /^---\r?\n([\s\S]*?)\r?\n---\r?\n([\s\S]*)$/;
export function parseFrontmatter(raw: string): ParsedDocument {
  const match = raw.match(FRONTMATTER_REGEX);
  if (!match) {
    return { data: {}, body: raw };
  }
  const yamlBlock = match[1];
  const body = match[2];
  // Minimal YAML parser — handles only the flat fields we care about
  const data: FrontMatter = {};
  const lines = yamlBlock.split('\n');
  let inTagsList = false;
  for (const line of lines) {
    // Multi-line list items: "tags:\n  - a\n  - b"
    const listItem = line.match(/^\s*-\s+(.+)$/);
    if (inTagsList && listItem) {
      (data.tags ??= []).push(listItem[1].trim().replace(/^['"]|['"]$/g, ''));
      continue;
    }
    inTagsList = false;
    const colonIdx = line.indexOf(':');
    if (colonIdx === -1) continue;
    const key = line.slice(0, colonIdx).trim();
    const value = line.slice(colonIdx + 1).trim().replace(/^['"]|['"]$/g, '');
    switch (key) {
      case 'title':
        data.title = value;
        break;
      case 'description':
        data.description = value;
        break;
      case 'cover_image':
        data.cover_image = value;
        break;
      case 'canonical_url':
        data.canonical_url = value;
        break;
      case 'series':
        data.series = value;
        break;
      case 'published':
        data.published = value === 'true' ? true : value === 'false' ? false : (value as 'draft' | 'unlisted');
        break;
      case 'tags': {
        if (value.startsWith('[')) {
          // Inline array: "tags: [a, b]"
          data.tags = value
            .replace(/[\[\]]/g, '')
            .split(',')
            .map(t => t.trim())
            .filter(Boolean);
        } else if (value === '') {
          // Block list: collect the "- item" lines that follow
          inTagsList = true;
        }
        break;
      }
    }
  }
  return { data, body };
}
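To make the regex concrete, here's how it splits a hypothetical document (note that the blank line after the closing fence ends up at the start of the body):

```typescript
const FRONTMATTER_REGEX = /^---\r?\n([\s\S]*?)\r?\n---\r?\n([\s\S]*)$/;

const sample = [
  '---',
  'title: "DotShare v3.0 Deep Dive"',
  'tags: [vscode, typescript]',
  'published: false',
  '---',
  '',
  'Body starts here.',
].join('\n');

const match = sample.match(FRONTMATTER_REGEX);
const yamlBlock = match?.[1]; // the three YAML lines between the fences
const body = match?.[2];      // '\nBody starts here.'
```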
When the WebView receives the activeFileLoaded message, it populates the form:
// media/webview/app.ts
case 'activeFileLoaded': {
const { content, frontmatter, fileName } = msg;
// Populate body
const bodyEditor = get<HTMLTextAreaElement>('blog-body');
if (bodyEditor) bodyEditor.value = content;
// Populate frontmatter fields if present
if (frontmatter.title) {
const titleEl = get<HTMLInputElement>('blog-title');
if (titleEl) titleEl.value = frontmatter.title;
}
if (frontmatter.tags?.length) {
frontmatter.tags.forEach((tag: string) => addTagChip(tag));
}
if (frontmatter.canonical_url) {
const canonicalEl = get<HTMLInputElement>('blog-canonical');
if (canonicalEl) canonicalEl.value = frontmatter.canonical_url;
}
if (frontmatter.cover_image) {
const coverEl = get<HTMLInputElement>('blog-cover');
if (coverEl) coverEl.value = frontmatter.cover_image;
}
toast(`Loaded: ${fileName}.md`, 'success');
break;
}
🔌 Dev.to API Integration
Dev.to's API is clean and well-documented. Here's the full integration:
// src/platforms/devto.ts
import axios from 'axios';
import { logger } from '../utils/logger';
export interface DevToArticle {
title: string;
body_markdown: string;
published: boolean;
tags?: string[]; // max 4 tags
description?: string;
cover_image?: string; // must be a public URL
canonical_url?: string;
series?: string;
}
export interface DevToResponse {
id: number;
url: string;
title: string;
published: boolean;
}
export async function shareToDevTo(
apiKey: string,
article: DevToArticle
): Promise<DevToResponse> {
// Dev.to enforces a 4-tag maximum
const sanitizedTags = (article.tags ?? [])
.slice(0, 4)
.map(t => t.toLowerCase().replace(/[^a-z0-9]/g, ''));
const payload = {
article: {
title: article.title,
body_markdown: article.body_markdown,
published: article.published,
tags: sanitizedTags,
description: article.description,
cover_image: article.cover_image ?? null,
canonical_url: article.canonical_url ?? null,
series: article.series ?? null,
},
};
try {
const response = await axios.post<{ article: DevToResponse }>(
'https://dev.to/api/articles',
payload,
{
headers: {
'api-key': apiKey,
'Content-Type': 'application/json',
},
}
);
logger.info(`[Dev.to] Published: ${response.data.article.url}`);
return response.data.article;
} catch (err: unknown) {
if (axios.isAxiosError(err)) {
const status = err.response?.status;
const detail = err.response?.data?.error ?? err.message;
if (status === 422) {
// Validation error — usually malformed tags or missing title
throw new Error(`Dev.to validation error: ${detail}`);
}
if (status === 401) {
throw new Error('Dev.to API key is invalid or expired');
}
}
throw err;
}
}
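For clarity, the tag sanitization above as a standalone rule (a copy for demonstration; Dev.to expects lowercase alphanumeric tags and at most four of them):

```typescript
// Dev.to tag rules: at most 4 tags, lowercase, alphanumeric only
function sanitizeDevToTags(tags: string[]): string[] {
  return tags.slice(0, 4).map(t => t.toLowerCase().replace(/[^a-z0-9]/g, ''));
}

sanitizeDevToTags(['VS Code', 'type-script', 'webdev', 'oauth2', 'extra']);
// → ['vscode', 'typescript', 'webdev', 'oauth2'] ('extra' is dropped by the 4-tag cap)
```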
The Image Policy Problem
Dev.to's API does not support file uploads. If you try to POST a local file as a cover image, it just silently drops it. I spent two hours debugging this before reading the docs carefully.
My solution: detect local file paths and warn the user instead of silently failing:
// src/handlers/PostHandler.ts
private validateBlogMedia(article: DevToArticle, platform: 'devto' | 'medium'): DevToArticle {
const { cover_image } = article;
if (cover_image && !cover_image.startsWith('http')) {
// Local file path — not supported
logger.warn(
`[${platform}] Local cover image "${cover_image}" skipped. ` +
`${platform === 'devto' ? 'Dev.to' : 'Medium'} only accepts public URLs. ` +
`Consider uploading to Cloudinary, GitHub, or Imgur first.`
);
return { ...article, cover_image: undefined };
}
return article;
}
Ⓜ️ Medium API Integration
Medium's API is older and has a few quirks worth documenting.
// src/platforms/medium.ts
import axios from 'axios';
export type MediumPublishStatus = 'public' | 'draft' | 'unlisted';
export type MediumContentFormat = 'markdown' | 'html';
export interface MediumPost {
title: string;
contentFormat: MediumContentFormat;
content: string;
tags?: string[]; // max 5 tags
canonicalUrl?: string;
publishStatus?: MediumPublishStatus;
}
// Medium's API uses "published" but their enum value is "public" — this trips people up
export function normalizeMediumPublishStatus(
status: string | boolean | undefined
): MediumPublishStatus {
if (status === true || status === 'published' || status === 'public') {
return 'public';
}
if (status === 'unlisted') return 'unlisted';
return 'draft'; // safe default
}
export async function shareToMedium(
bearerToken: string,
post: MediumPost
): Promise<{ id: string; url: string }> {
// Step 1: get the authenticated user's ID
const userResp = await axios.get('https://api.medium.com/v1/me', {
headers: { Authorization: `Bearer ${bearerToken}` },
});
const userId: string = userResp.data.data.id;
const sanitizedTags = (post.tags ?? []).slice(0, 5);
const payload = {
title: post.title,
contentFormat: post.contentFormat,
content: post.content,
tags: sanitizedTags,
canonicalUrl: post.canonicalUrl,
publishStatus: normalizeMediumPublishStatus(post.publishStatus),
};
const response = await axios.post(
`https://api.medium.com/v1/users/${userId}/posts`,
payload,
{
headers: {
Authorization: `Bearer ${bearerToken}`,
'Content-Type': 'application/json',
},
}
);
return {
id: response.data.data.id,
url: response.data.data.url,
};
}
The published → public gotcha: Medium's publish status enum uses "public", not "published". YAML frontmatter typically has published: true. If you pass that string directly to the API, Medium silently defaults to draft. The normalizeMediumPublishStatus() function handles all the variants.
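To make the mapping concrete, here's how the normalizer behaves on typical frontmatter values (the function is repeated from above so the snippet stands alone):

```typescript
type MediumPublishStatus = 'public' | 'draft' | 'unlisted';

function normalizeMediumPublishStatus(
  status: string | boolean | undefined
): MediumPublishStatus {
  if (status === true || status === 'published' || status === 'public') {
    return 'public';
  }
  if (status === 'unlisted') return 'unlisted';
  return 'draft'; // safe default
}

normalizeMediumPublishStatus(true);        // → 'public' (frontmatter: published: true)
normalizeMediumPublishStatus('published'); // → 'public' (the common mistake, caught here)
normalizeMediumPublishStatus(undefined);   // → 'draft'
```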
⚡ The PostExecutor: Decoupling from VS Code UI
Before v3.0, all posting logic lived inside PostHandler.ts, which was tightly coupled to the VS Code extension host. This made it impossible to use the same logic from the scheduler running in the background.
v3.0 introduces PostExecutor — a clean execution engine with no VS Code dependencies:
// src/services/PostExecutor.ts
export interface ExecutorCallbacks {
onProgress?: (platform: string, message: string) => void;
onSuccess?: (platform: string, url?: string) => void;
onError?: (platform: string, error: Error) => void;
}
export class PostExecutor {
constructor(
private readonly credentials: CredentialProvider,
private readonly history: HistoryService,
) {}
async executeBlogPost(
post: BlogPost,
targets: PublishTarget[],
callbacks: ExecutorCallbacks = {}
): Promise<PublishResult[]> {
const results: PublishResult[] = [];
for (const target of targets) {
callbacks.onProgress?.(target.platform, `Publishing to ${target.platform}...`);
try {
let url: string | undefined;
if (target.platform === 'devto') {
const apiKey = await this.credentials.getDevToApiKey();
const result = await shareToDevTo(apiKey, {
title: post.title,
body_markdown: post.body,
published: post.publishStatus !== 'draft',
tags: post.tags,
description: post.description,
cover_image: post.coverImage,
canonical_url: post.canonicalUrl,
series: post.series,
});
url = result.url;
} else if (target.platform === 'medium') {
const token = await this.credentials.getMediumToken();
const result = await shareToMedium(token, {
title: post.title,
contentFormat: 'markdown',
content: post.body,
tags: post.tags,
canonicalUrl: post.canonicalUrl,
publishStatus: normalizeMediumPublishStatus(post.publishStatus),
});
url = result.url;
}
results.push({ platform: target.platform, success: true, url });
callbacks.onSuccess?.(target.platform, url);
// Log to history
await this.history.addEntry({
platform: target.platform,
content: post.title,
timestamp: new Date().toISOString(),
success: true,
url,
});
} catch (err: unknown) {
const error = err instanceof Error ? err : new Error(String(err));
results.push({ platform: target.platform, success: false, error: error.message });
callbacks.onError?.(target.platform, error);
await this.history.addEntry({
platform: target.platform,
content: post.title,
timestamp: new Date().toISOString(),
success: false,
error: error.message,
});
}
}
return results;
}
}
Now PostHandler just wires VS Code messages to the executor:
// src/handlers/PostHandler.ts (simplified)
case 'shareToBlogs': {
const post: BlogPost = {
title: msg.title,
body: msg.body,
tags: msg.tags ?? [],
publishStatus: msg.publishStatus ?? 'draft',
canonicalUrl: msg.canonicalUrl,
coverImage: msg.coverImage,
description: msg.description,
series: msg.series,
};
const targets: PublishTarget[] = (msg.platforms as string[]).map(p => ({
platform: p,
}));
await this.executor.executeBlogPost(post, targets, {
onProgress: (platform, message) => {
this.sendStatus(message, 'info');
},
onSuccess: (platform, url) => {
this.sendStatus(`✅ Published to ${platform}${url ? ': ' + url : ''}`, 'success');
},
onError: (platform, error) => {
this.sendStatus(`❌ ${platform} failed: ${error.message}`, 'error');
},
});
this.webview.postMessage({ command: 'shareComplete' });
break;
}
🔑 CredentialProvider: The resolve() Refactor
Every credential-getter used to look like this:
// ❌ BEFORE — repeated in every method
async getDevToApiKey(): Promise<string> {
if (this.credentialsGetter) {
const creds = await this.credentialsGetter();
const key = creds.devtoApiKey;
if (!key) throw new Error('Dev.to API key not configured');
return key;
} else {
const key = await this.secretStorage.get('dotshare.devto.apiKey');
if (!key) throw new Error('Dev.to API key not configured');
return key;
}
}
The resolve() refactor eliminates the duplication:
// ✅ AFTER
export class CredentialProvider {
  private async resolve(
    getterKey: keyof Credentials, // camelCase property on the in-memory Credentials object
    secretKey: string,            // dotted id used in SecretStorage ("dotshare.<secretKey>")
    errorMessage: string
  ): Promise<string> {
    const value = this.credentialsGetter
      ? (await this.credentialsGetter())[getterKey]
      : await this.secretStorage.get(`dotshare.${secretKey}`);
    if (!value) throw new Error(errorMessage);
    return value;
  }

  async getDevToApiKey(): Promise<string> {
    return this.resolve('devtoApiKey', 'devto.apiKey', 'Dev.to API key not configured. Go to Settings → Dev.to.');
  }

  async getMediumToken(): Promise<string> {
    return this.resolve('mediumToken', 'medium.token', 'Medium integration token not configured.');
  }

  async getRedditSubreddit(): Promise<string> {
    return this.resolve('redditSubreddit', 'reddit.subreddit', 'Target subreddit not configured.');
  }
}
🐛 The Bugs That Took the Most Time
1. The Twitter URL Count Bug
Twitter counts URLs as exactly 23 characters, regardless of length. My original implementation used Math.min(url.length, 23), which was wrong — a 10-character URL would count as 10, not 23.
// ❌ Wrong
const urlChars = Math.min(url.length, 23);
// ✅ Correct — Twitter always counts URLs as exactly 23 chars
const urlChars = 23;
A one-line fix, but the bug made the counter under-report for any post containing a short URL, so a post could look within the 280-character limit locally and still be rejected by the API.
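With that fixed, the counting rule itself is small. A simplified sketch of what the 'twitter' charCountMethod boils down to (my simplification; the official algorithm lives in Twitter's twitter-text library and also weights certain Unicode ranges):

```typescript
const URL_REGEX = /https?:\/\/\S+/g;

// Twitter wraps every link in t.co, so every URL counts as exactly 23 characters
function twitterCharCount(text: string): number {
  const urls = text.match(URL_REGEX) ?? [];
  return text.replace(URL_REGEX, '').length + urls.length * 23;
}

twitterCharCount('check this https://a.io'); // → 34 (11 chars of text + 23 for the URL)
```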
2. Reddit's Hardcoded Subreddit
There was a subreddit: 'test' string buried in PostHandler.ts from early development. Reddit posts were going to r/test in production. Found it in a code review.
// ❌ Found this in production
const subreddit = 'test';
// ✅ Fixed
const subreddit = await this.credentials.getRedditSubreddit();
if (!subreddit) {
throw new Error('Configure your target subreddit in DotShare settings.');
}
3. PostHandler Reading Stale History Instead of the Current Message
handleShareToX() was calling historyService.getLastPost() instead of reading message.post. This meant editing a post and resharing would sometimes send the previous version.
// ❌ Reading from history
const content = await historyService.getLastPost();
// ✅ Reading from the actual incoming message
const content = message.post;
📊 v3.0 by the Numbers
| Metric | v2.4 | v3.0 | Change |
|---|---|---|---|
| Platforms supported | 7 | 9 | +2 |
| Lines in PostHandler.ts | ~800 | ~320 | −60% |
| Files in src/ | 14 | 23 | +9 new |
| TypeScript errors | 0 | 0 | ✅ |
| ESLint violations | 0 | 0 | ✅ |
| New types in types.ts | — | 14 | +14 |
🚀 What's Next: v3.1 & v3.2
v3.0 is the foundation. The next two releases focus on:
- v3.1 "The Polish Pass" — Toast notification engine, glassmorphism UI, character limit validation, and fixing a nasty media preview race condition
- v3.2 "The Media Expansion" — Multi-image support (up to 4 images), per-thumbnail previews, and JIT image compression
I'll be writing deep dives on both.
🔗 Links
- GitHub: github.com/kareem2099/DotShare
- VS Code Marketplace: https://marketplace.visualstudio.com/items?itemName=FreeRave.dotshare
- Open VSX (VSCodium / Gitpod): https://open-vsx.org/extension/freerave/dotshare
- Install via CLI: code --install-extension freerave.dotshare
- Support the project: buymeacoffee.com/freerave
If you're building in public or maintaining an open source project, give DotShare a try. I'd love to hear how you use it — drop a comment below 👇
Built with TypeScript, VS Code Extension API, and a lot of coffee.
Top comments (16)
posting to 3+ platforms without an abstraction layer is chaos. the YAML frontmatter parser would have saved me a lot of manual work.
I felt that chaos firsthand, which is exactly why I had to build the abstraction layer!
I'm really glad the YAML parser caught your eye, it's definitely my favorite time-saver. Thanks for reading!
yeah once you have it in place you realize how much you were tracking manually. what platforms are you publishing to now?
Right now, DotShare supports 9 platforms (X, LinkedIn, Reddit, Bluesky, FB, Telegram, Discord, Dev.to, and Medium). I’m currently planning to add Hashnode and Instagram next.
I’m deep into the documentation right now, but man... Meta's docs are something else. They definitely give you a 'brain shot'! haha. Trying to wrap my head around their flow before I start coding.
9 platforms already is impressive - that covers most meaningful distribution channels. Meta docs are notoriously painful (the API deprecation cycles alone are exhausting). Hashnode would be a solid next addition - strong developer audience overlap with Dev.to. How are you handling auth refresh and rate limit tracking across so many APIs?
Ah, a man of culture who understands the absolute labyrinth of Meta's API deprecation cycles! 😂
You hit exactly on the two biggest architectural bottlenecks. To solve them, I had to completely rethink the infrastructure. Here is how I engineered the pipeline:
The Auth & Token Lifecycle (Zero-State Backend):
I fully decoupled the OAuth execution by building a stateless Next.js 16 broker. It handles the PKCE challenges, state generation, and token exchange. Instead of storing tokens server-side, it constructs a secure deep link (vscode://freerave.dotshare/...) that fires back into the VS Code URI handler.
Once caught, the tokens are ingested directly into the OS-level keychain via VS Code’s native SecretStorage. My TokenManager singleton intercepts every outgoing Axios request, checks the expires_at timestamp, and silently executes a refresh grant in the background if we are within a 5-minute expiry buffer. The user never sees a 401.
Distributed Edge Rate Limiting:
This was the real headache. Naive in-memory rate limiting (Map) is completely useless on Vercel because each request might hit an isolated serverless instance (cold starts).
To fix this, I implemented Edge Rate Limiting using Upstash Redis. I wrote a sliding window algorithm at the Next.js proxy level (proxy.ts) that tracks IP-based hits across the distributed edge network. If an endpoint is hammered, it throws a 429 with an exact Retry-After header. On the client side, the extension catches the 429 and surfaces the delay gracefully.
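The core of the sliding window looks roughly like this (an in-memory simplification for illustration; the production version keeps the timestamps in Upstash Redis so every edge instance shares state):

```typescript
// In-memory sliding-window limiter. Illustration only: the real version
// stores hit timestamps in Upstash Redis so all edge instances share state.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private readonly limit: number,    // max requests per window
    private readonly windowMs: number  // window length in milliseconds
  ) {}

  /** Returns 0 if allowed, otherwise the Retry-After delay in ms. */
  check(key: string, now: number = Date.now()): number {
    const windowStart = now - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter(t => t > windowStart);
    if (recent.length >= this.limit) {
      // The oldest hit leaves the window at recent[0] + windowMs
      return recent[0] + this.windowMs - now;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return 0;
  }
}

const limiter = new SlidingWindowLimiter(2, 1000);
limiter.check('1.2.3.4', 0);   // → 0 (allowed)
limiter.check('1.2.3.4', 100); // → 0 (allowed)
limiter.check('1.2.3.4', 200); // → 800 (429, Retry-After ~800ms)
```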
I actually just published a deep-dive engineering post breaking down the exact Next.js proxy code, the Redis logic, and the UI abstraction layer. If you're a fan of system design, you'll probably enjoy the read:
dev.to/freerave/building-a-product...
Now I'm working on the background scheduling engine with exponential backoff for retries. It’s getting dangerously fun!
the stateless broker pattern is exactly right for this - keeps the token mess isolated from the rest of the app. did you hit any edge cases with Meta's refresh timing? that's always been the part that silently breaks on me around the 60-day mark.
Honestly, Meta's API is so weird, and there is almost zero clear documentation or help for us developers when it comes to this specific edge case.
But here is the exact logic I am implementing to solve the silent break. Instead of just passing the raw data, I'm calculating the exact expiry window server-side:
And since Meta sometimes returns weird or missing expires_in values, I am adding this normalizer to force a short-lived fallback so the client knows it needs to re-sync immediately rather than crashing silently:
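A sketch of the idea (illustrative; the field names follow Meta's OAuth token response, and the fallback window here is an arbitrary placeholder):

```typescript
interface MetaTokenResponse {
  access_token: string;
  token_type: string;
  expires_in?: number; // seconds; Meta sometimes omits this or returns 0
}

// Placeholder fallback: treat the token as short-lived so the client
// re-syncs immediately instead of failing silently weeks later
const FALLBACK_EXPIRES_IN = 60 * 60; // 1 hour

// Compute an absolute expires_at (ms) server-side, normalizing bad expires_in values
function normalizeExpiry(resp: MetaTokenResponse, now: number = Date.now()): number {
  const seconds =
    typeof resp.expires_in === 'number' && resp.expires_in > 0
      ? resp.expires_in
      : FALLBACK_EXPIRES_IN;
  return now + seconds * 1000;
}

normalizeExpiry({ access_token: 't', token_type: 'bearer' }, 0); // → 3600000 (fallback hour)
```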
It's a complete mess to handle, but this should finally give the VS Code client enough context to proactively warn the user before the token actually dies. Thanks for bringing this up, it’s good to know I'm not the only one suffering with Meta's token lifecycle!
yeah the expiry window calc is the right call - calculating it upfront beats having to poll and react. Meta auth docs being a black hole is honestly one of the more consistent things about their platform
I just wanted to say thank you so much for the feedback! You actually gave me some brilliant architectural ideas that helped me wrap this up way faster than planned. I’ve already pushed the update, and you can see the results in the sidebar—we now have a full 'Token Health' monitoring system with that proactive refresh window we discussed.
I’m actually writing a detailed technical breakdown/post-mortem about this update and how we handled these edge cases—should be out tomorrow.
Thanks again for the help, man. It’s rare to find someone who actually enjoys diving into the 'dark arts' of OAuth lifecycles! Cheers!
nice - Token Health is exactly the right abstraction for it. curious whether you went with a fixed refresh threshold or something that adapts based on observed drift. will definitely read the breakdown.
For this v1 of Token Health, I deliberately went with a strict fixed threshold (a 7-day preemptive window).
I actually considered an adaptive drift model, but given how erratic and undocumented Meta's token drops can be, relying on observed drift felt a bit too risky for a first pass. I wanted a guaranteed, mathematically predictable buffer to ensure absolute zero silent failures before adding more complexity.
That being said, tracking the actual drift over time and adjusting the window dynamically is a brilliant concept for a future iteration. I might actually start logging the delta between expected and actual expiries in the broker to see if a reliable pattern emerges.
The fixed 7-day window is the right call for v1. Adaptive drift models need signal to work off — without a baseline, you're just calibrating against noise. Ship the guardrail, collect the data, then revisit. Classic move for any system with unpredictable upstream behavior.
The platform-config.ts as single source of truth is a solid pattern, adding a tenth platform becomes config work instead of hunting through conditionals. But the core pitch ("30 minutes down to a single action") assumes the bottleneck is mechanical distribution, and I don't think it is. Most of that time is rephrasing: a Dev.to post with code blocks and technical depth reads nothing like a LinkedIn post competing with sales pitches for attention. The frontmatter parser pulling the same title, tags, and body for all targets nudges users toward identical cross-posts, which tends to underperform everywhere compared to platform-native content. The social/blog workspace split acknowledges this a little, but within each workspace the content is still largely shared. Have you considered building in per-platform content transforms (even simple ones like auto-truncation or tone hints) so the single-action workflow produces adapted posts rather than copies?
Could even run it through a low level agent call to transform content to better align with platform expectations while maintaining the overall message.
Honestly, you are an absolute genius! I genuinely love it when people drop brilliant ideas like this, and you hit the nail right on the head. I am 100% going to build this.
My only hold-up right now is that I am actively building out the Scheduling engine, so I told myself to push the AI/LLM agent transformations to a later phase until the core foundation is perfect. But you just shined a massive spotlight on how crucial this is for the actual workflow.
Thank you so much for taking the time to help and for the incredible advice. I will definitely be implementing this in the upcoming updates
Hey, glad you liked the idea! I'd be happy to collaborate on the idea if you need a hand but either way excited to see what you cook up!