A deep technical dive into DotShare v3.0 — the Publishing Suite update that added Dev.to & Medium integrations, a YAML frontmatter par...
posting to 3+ platforms without an abstraction layer is chaos. the YAML frontmatter parser would have saved me a lot of manual work.
I felt that chaos firsthand, which is exactly why I had to build the abstraction layer!
I'm really glad the YAML parser caught your eye; it's definitely my favorite time-saver. Thanks for reading!
yeah once you have it in place you realize how much you were tracking manually. what platforms are you publishing to now?
Right now, DotShare supports 9 platforms (X, LinkedIn, Reddit, Bluesky, FB, Telegram, Discord, Dev.to, and Medium). I’m currently planning to add Hashnode and Instagram next.
I’m deep into the documentation right now, but man... Meta's docs are something else. They definitely fry your brain! haha. Trying to wrap my head around their flow before I start coding.
9 platforms already is impressive - that covers most meaningful distribution channels. Meta docs are notoriously painful (the API deprecation cycles alone are exhausting). Hashnode would be a solid next addition - strong developer audience overlap with Dev.to. How are you handling auth refresh and rate limit tracking across so many APIs?
Ah, a man of culture who understands the absolute labyrinth of Meta's API deprecation cycles! 😂
You hit exactly on the two biggest architectural bottlenecks. To solve them, I had to completely rethink the infrastructure. Here is how I engineered the pipeline:
The Auth & Token Lifecycle (Zero-State Backend):
I fully decoupled the OAuth flow by building a stateless Next.js 16 broker. It handles the PKCE challenges, state generation, and token exchange. Instead of storing tokens server-side, it constructs a secure deep link (vscode://freerave.dotshare/...) that fires back into the VS Code URI handler.
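Stripped way down, the broker's callback is roughly this shape (not the exact code from the repo; the route path and the exchangeCodeForTokens helper here are placeholders):

```ts
// app/api/oauth/callback/route.ts (sketch; route path and helper below are placeholders)
import { NextRequest, NextResponse } from "next/server";

// Assumed helper: resolves the PKCE verifier from `state` and exchanges the code for tokens.
declare function exchangeCodeForTokens(code: string, state: string): Promise<{
  platform: string;
  accessToken: string;
  refreshToken: string;
  expiresAt: number;
}>;

export async function GET(req: NextRequest) {
  const code = req.nextUrl.searchParams.get("code");
  const state = req.nextUrl.searchParams.get("state");
  if (!code || !state) {
    return NextResponse.json({ error: "Missing code or state" }, { status: 400 });
  }

  const { platform, accessToken, refreshToken, expiresAt } =
    await exchangeCodeForTokens(code, state);

  // Nothing is persisted server-side: the tokens ride back to the editor in a deep link
  // that VS Code's URI handler catches.
  const deepLink = new URL(`vscode://freerave.dotshare/auth/${platform}`);
  deepLink.searchParams.set("access_token", accessToken);
  deepLink.searchParams.set("refresh_token", refreshToken);
  deepLink.searchParams.set("expires_at", String(expiresAt));

  return NextResponse.redirect(deepLink);
}
```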
Once caught, the tokens are ingested directly into the OS-level keychain via VS Code’s native SecretStorage. My TokenManager singleton intercepts every outgoing Axios request, checks the expires_at timestamp, and silently executes a refresh grant in the background if we are within a 5-minute expiry buffer. The user never sees a 401.
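On the extension side, the handoff looks roughly like this (simplified sketch; the secret key names, the per-request platform tag, and refreshAccessToken are stand-ins, not the shipped code):

```ts
// extension.ts (simplified sketch; key names and refreshAccessToken are stand-ins)
import * as vscode from "vscode";
import axios from "axios";

const EXPIRY_BUFFER_MS = 5 * 60 * 1000; // refresh when within 5 minutes of expiry

// Assumed helper: runs the refresh_token grant against the platform's token endpoint.
declare function refreshAccessToken(platform: string, refreshToken: string):
  Promise<{ accessToken: string; refreshToken: string; expiresAt: number }>;

export function activate(context: vscode.ExtensionContext) {
  // Catch the vscode://freerave.dotshare/... deep link fired by the broker.
  context.subscriptions.push(
    vscode.window.registerUriHandler({
      async handleUri(uri: vscode.Uri) {
        const params = new URLSearchParams(uri.query);
        const platform = uri.path.split("/").pop() ?? "unknown";
        // Straight into the OS keychain via SecretStorage; nothing touches disk.
        await context.secrets.store(`dotshare.${platform}`, JSON.stringify({
          accessToken: params.get("access_token"),
          refreshToken: params.get("refresh_token"),
          expiresAt: Number(params.get("expires_at")),
        }));
      },
    }),
  );

  // Every outgoing request passes through here: check expiry, refresh silently if needed.
  axios.interceptors.request.use(async (config) => {
    const platform = (config as any).platform as string | undefined; // hypothetical per-request tag
    if (!platform) return config;

    const raw = await context.secrets.get(`dotshare.${platform}`);
    if (!raw) return config;

    let creds = JSON.parse(raw);
    if (creds.expiresAt - Date.now() < EXPIRY_BUFFER_MS) {
      creds = await refreshAccessToken(platform, creds.refreshToken);
      await context.secrets.store(`dotshare.${platform}`, JSON.stringify(creds));
    }

    config.headers.Authorization = `Bearer ${creds.accessToken}`;
    return config;
  });
}
```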
Distributed Edge Rate Limiting:
This was the real headache. Naive in-memory rate limiting (Map) is completely useless on Vercel because each request might hit an isolated serverless instance (cold starts).
To fix this, I implemented Edge Rate Limiting using Upstash Redis. I wrote a sliding window algorithm at the Next.js proxy level (proxy.ts) that tracks IP-based hits across the distributed edge network. If an endpoint is hammered, it throws a 429 with an exact Retry-After header. On the client side, the extension catches the 429 and surfaces the delay gracefully.
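For context, the shape of it looks roughly like this (heavily simplified, using @upstash/ratelimit's built-in sliding window instead of my hand-rolled one, and assuming Next.js 16's proxy.ts keeps the old middleware-style signature; the real proxy code is in the post below):

```ts
// proxy.ts (simplified sketch; limits are illustrative, not DotShare's real ones)
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
import { NextRequest, NextResponse } from "next/server";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),                        // reads UPSTASH_REDIS_REST_URL / _TOKEN
  limiter: Ratelimit.slidingWindow(20, "60 s"),  // 20 hits per rolling 60-second window
});

export async function proxy(req: NextRequest) {
  // The counter lives in Redis, so it survives cold starts and isolated instances.
  const ip = req.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ?? "anonymous";
  const { success, reset } = await ratelimit.limit(ip);

  if (!success) {
    const retryAfter = Math.max(0, Math.ceil((reset - Date.now()) / 1000));
    return NextResponse.json(
      { error: "Too many requests" },
      { status: 429, headers: { "Retry-After": String(retryAfter) } },
    );
  }
  return NextResponse.next();
}
```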
I actually just published a deep-dive engineering post breaking down the exact Next.js proxy code, the Redis logic, and the UI abstraction layer. If you're a fan of system design, you'll probably enjoy the read:
dev.to/freerave/building-a-product...
Now I'm working on the background scheduling engine with exponential backoff for retries. It’s getting dangerously fun!
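The retry math itself is the boring part; roughly this shape (illustrative only, since that code isn't written yet, with jitter added because it's the usual companion to backoff):

```ts
// Generic retry helper with exponential backoff + full jitter (not DotShare code).
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 5, baseMs = 1_000, maxMs = 60_000 } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      // Delay ceiling doubles each attempt, capped; random jitter avoids thundering herds.
      const ceiling = Math.min(maxMs, baseMs * 2 ** attempt);
      const delay = Math.random() * ceiling;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```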
The platform-config.ts as a single source of truth is a solid pattern: adding a tenth platform becomes config work instead of hunting through conditionals. But the core pitch ("30 minutes down to a single action") assumes the bottleneck is mechanical distribution, and I don't think it is. Most of that time is rephrasing: a Dev.to post with code blocks and technical depth reads nothing like a LinkedIn post competing with sales pitches for attention.

The frontmatter parser pulling the same title, tags, and body for all targets nudges users toward identical cross-posts, which tend to underperform everywhere compared to platform-native content. The social/blog workspace split acknowledges this a little, but within each workspace the content is still largely shared. Have you considered building in per-platform content transforms (even simple ones like auto-truncation or tone hints) so the single-action workflow produces adapted posts rather than copies?
Could even run it through a lightweight agent/LLM call to transform the content to better align with platform expectations while keeping the overall message intact.
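Roughly what I have in mind, with made-up field names (nothing here is from DotShare's actual config):

```ts
// Hypothetical extension of platform-config.ts; field names are my own invention.
interface PlatformTransform {
  maxLength?: number;      // hard limit, e.g. X's 280 characters
  toneHint?: string;       // could be fed to an optional LLM rewrite pass later
  stripCodeBlocks?: boolean; // code fences rarely survive social feeds
}

interface PlatformConfig {
  id: string;
  transform?: PlatformTransform;
}

const platforms: PlatformConfig[] = [
  { id: "devto" }, // long-form: publish the post as-is
  { id: "x", transform: { maxLength: 280, stripCodeBlocks: true } },
  { id: "linkedin", transform: { toneHint: "conversational, no code, lead with the outcome" } },
];

// Cheapest possible transform: strip fences and truncate. An agent call could replace this.
function applyTransform(body: string, t?: PlatformTransform): string {
  if (!t) return body;
  let out = t.stripCodeBlocks ? body.replace(/`{3}[\s\S]*?`{3}/g, "").trim() : body;
  if (t.maxLength && out.length > t.maxLength) out = out.slice(0, t.maxLength - 1) + "…";
  return out;
}
```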
Honestly, you are an absolute genius! I genuinely love it when people drop brilliant ideas like this, and you hit the nail right on the head. I am 100% going to build this.
My only hold-up right now is that I am actively building out the Scheduling engine, so I told myself to push the AI/LLM agent transformations to a later phase until the core foundation is perfect. But you just shined a massive spotlight on how crucial this is for the actual workflow.
Thank you so much for taking the time to help and for the incredible advice. I will definitely be implementing this in upcoming updates.
Hey, glad you liked the idea! I'd be happy to collaborate if you need a hand, but either way I'm excited to see what you cook up!