When you're a solo developer with a day job, actually building the project is maybe 60% of the work. The other 40%—the part nobody talks about—is getting anyone to notice it exists. And honestly, that 40% is where most of us fall apart.
My usual pattern: ship something, post about it once on Twitter, maybe write a Qiita article if I'm feeling ambitious, get distracted by the next thing, and watch the project quietly die with 12 GitHub stars.
The problem isn't motivation. It's friction. Writing a technical article for Qiita, reformatting it for Dev.to, condensing it to 280 characters for Twitter, crafting a Reddit post that doesn't get immediately downvoted—each platform has different norms, different formats, different audiences. Doing all of that manually for every project update is genuinely exhausting, and when it competes with actually writing code, it loses every time.
So last year I started building tooling to automate this. Along the way I hit a bunch of interesting technical problems. This article is about those problems and how I solved them—the pipeline architecture, the AI integration patterns, and some gotchas that cost me several evenings.
## The Core Problem: Multi-Platform Content Distribution
The fundamental challenge is that each platform wants something different from the same underlying information:
- Twitter: 280 chars, casual tone, hashtags
- Dev.to: Markdown, technical depth, frontmatter metadata
- Qiita: Japanese-preferred, structured technical content
- Reddit: Community-appropriate framing, no obvious self-promotion
If you're maintaining all of these by hand, you're essentially doing N times the work for one project update.
The approach I landed on: use AI to transform a single structured project description into platform-appropriate content, then automate the distribution through GitHub Actions workflows triggered from my desktop.
## Step 1: Automated Project Analysis
Before you can generate content, you need structured data about your project. I built a scanner that reads project files—package.json, Cargo.toml, pubspec.yaml, go.mod, pyproject.toml—and infers the tech stack.
The dependency-to-tech-stack mapping approach worked better than I expected. Here's a simplified version of the npm pattern:
```typescript
// Map npm dependency names to human-readable tech-stack labels.
const NPM_STACK_MAP: Record<string, string> = {
  react: "React",
  next: "Next.js",
  vue: "Vue",
  svelte: "Svelte",
  "@tailwindcss/vite": "TailwindCSS",
  tailwindcss: "TailwindCSS",
  express: "Express",
  prisma: "Prisma",
  "drizzle-orm": "Drizzle ORM",
  three: "Three.js",
  electron: "Electron",
  trpc: "tRPC",
};

function inferStackFromPackageJson(deps: Record<string, string>): string[] {
  const detected = new Set<string>();
  for (const dep of Object.keys(deps)) {
    for (const [key, label] of Object.entries(NPM_STACK_MAP)) {
      // Deliberately loose: the substring match catches scoped and
      // suffixed packages like "@trpc/server" or "react-dom".
      if (dep === key || dep.includes(key)) {
        detected.add(label);
      }
    }
  }
  return Array.from(detected);
}
```
I built similar mappings for Flutter's pubspec.yaml (16 patterns covering riverpod, bloc, firebase, supabase, etc.) and Go modules.
One thing that tripped me up: category keyword ordering matters. I was classifying projects into 11 categories (crypto, saas, tool, finance, game, education, health, social, mobile-app, platform, other) using keyword matching, and "budget wallet" was getting classified as saas because finance wasn't being checked before looser matches. The fix was simple—order the category checks by specificity—but it took an embarrassingly long time to diagnose.
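The fix can be sketched as an ordered keyword table. The category names below match the article's list, but the keywords are my own illustrative guesses, not the actual mapping:

```typescript
// Hypothetical sketch of specificity-ordered category matching.
// More specific categories come first, so "budget wallet" hits
// finance before the looser saas keywords get a chance to match.
const CATEGORY_KEYWORDS: [string, string[]][] = [
  ["crypto", ["blockchain", "token", "defi"]],
  ["finance", ["budget", "wallet", "expense", "invoice"]],
  ["game", ["game", "puzzle", "arcade"]],
  ["saas", ["dashboard", "subscription", "app"]],
];

function classify(description: string): string {
  const text = description.toLowerCase();
  for (const [category, keywords] of CATEGORY_KEYWORDS) {
    if (keywords.some((kw) => text.includes(kw))) return category;
  }
  return "other";
}
```

With the finance entry ahead of saas, `classify("A budget wallet app")` lands on `finance` even though `"app"` would also match the saas keywords.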
For deeper analysis, I integrated Claude CLI via its `--print` flag:
```bash
claude --print --permission-mode bypassPermissions \
  "Analyze this project and return JSON with fields: name, category, targetAudience, techStack, tagline"
```
The `--print` flag was a complete game changer. My original approach used PowerShell's `SendKeys` + `AppActivate` to automate a visible terminal window—send keystrokes, wait fixed sleep durations, scrape output. It worked maybe 70% of the time. The other 30% it would hang indefinitely because the timing was off, usually when the machine was under load.
Switching to `--print` mode with stdout capture made the whole thing reliable and scriptable. No windows, no timing hacks, just a subprocess you can await. Analysis takes 30–120 seconds depending on project complexity, versus under 0.5 seconds for the file-based scan. Both have their place.
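The pattern boils down to a plain subprocess call. Here's a minimal Node sketch—the helper name, timeout, and buffer size are my own choices, not the app's actual code:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Run a CLI tool non-interactively and capture its stdout.
// No windows, no keystroke automation: just await the process.
async function runCli(cmd: string, args: string[]): Promise<string> {
  const { stdout } = await execFileAsync(cmd, args, {
    timeout: 180_000, // analysis can take a couple of minutes
    maxBuffer: 10 * 1024 * 1024,
  });
  return stdout.trim();
}

// Usage against Claude CLI would look roughly like:
// const json = await runCli("claude", [
//   "--print",
//   "--permission-mode", "bypassPermissions",
//   "Analyze this project and return JSON ...",
// ]);
```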
I also introduced a `.pr-meta.json` file that projects can include at their root:
```json
{
  "name": "MyProject",
  "nameJa": "マイプロジェクト",
  "tagline": "A tool that does X",
  "taglineJa": "Xをするツール",
  "category": "tool",
  "targetAudience": ["developers", "devops"],
  "techStack": ["Rust", "React", "Tauri"],
  "description": "Longer description here..."
}
```
When this file exists, the scanner returns complete data in under 0.5 seconds. This is particularly important for multilingual fields—there's no way to infer Japanese copy from source files, so you need a metadata file for that.
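The lookup order can be expressed as a small pure function. The field names follow the `.pr-meta.json` schema above; the function itself is an illustrative sketch, not the scanner's real code:

```typescript
interface ProjectMeta {
  name: string;
  category: string;
  techStack: string[];
}

// Prefer .pr-meta.json content when it exists; otherwise fall back
// to whatever can be inferred from package.json. File reading is
// left out so the decision logic stays pure and testable.
function resolveMeta(
  prMetaRaw: string | null,
  pkgRaw: string | null
): ProjectMeta {
  if (prMetaRaw) {
    // Trust the metadata file completely: it's the only source
    // for fields like Japanese copy that can't be inferred.
    return JSON.parse(prMetaRaw) as ProjectMeta;
  }
  const fallback: ProjectMeta = { name: "unknown", category: "other", techStack: [] };
  if (pkgRaw) {
    const pkg = JSON.parse(pkgRaw);
    if (typeof pkg.name === "string") fallback.name = pkg.name;
  }
  return fallback;
}
```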
## Step 2: Git as a Shared Data Store
The architecture decision I'm most happy with: using a GitHub repository as the shared data store between my desktop app and GitHub Actions workflows.
Here's the pattern:
```
my-hub-repo/
  content/
    projects/
      my-project.json        # Project definition + metadata
      another-project.json
    meta/
      posting-history.json   # What was posted, when, where
      traffic-history.json   # Accumulated analytics data
  .github/
    workflows/
      post-content.yml       # The actual posting automation
```
The desktop app reads and writes JSON files in this repo. GitHub Actions workflows also read from the same files to know what to post and where. Both sides stay in sync because Git is the source of truth.
This means I can trigger a workflow from my desktop with `gh workflow run post-content.yml --field project=my-project`, and the workflow reads `content/projects/my-project.json` to get everything it needs.
One problem with this approach: concurrent modifications. When I fetch fresh analytics data, I'm pulling from the remote while potentially having local uncommitted changes. The ordering matters here:
```bash
git stash
git pull --ff-only origin main
git stash pop  # NOT git stash drop
```
I originally had `stash drop` instead of `stash pop`. That meant every analytics refresh silently discarded any local work in progress. Not a fun bug to debug when you realize your edits are just... gone.
## Step 3: Accumulating Analytics Past the 14-Day Limit
The GitHub Traffic API only returns the last 14 days of data. If you're polling infrequently, you'll have gaps. If you want to track trends over months, you need to accumulate data yourself.
My approach: on every analytics fetch, merge new data into an existing traffic-history.json file, keying by date string to deduplicate:
```typescript
function mergeTrafficData(
  existing: TrafficEntry[],
  incoming: TrafficEntry[]
): TrafficEntry[] {
  const byDate = new Map<string, TrafficEntry>();
  // Load existing data first
  for (const entry of existing) {
    byDate.set(entry.date, entry);
  }
  // Incoming data overwrites (it's fresher)
  for (const entry of incoming) {
    byDate.set(entry.date, entry);
  }
  return Array.from(byDate.values()).sort(
    (a, b) => new Date(a.date).getTime() - new Date(b.date).getTime()
  );
}
```
This runs on every refresh and commits the updated file to the repo. Cheap, reliable, and you maintain a complete historical record without any external database.
## Step 4: Platform API Authentication
Each platform has a different auth pattern, and you have to handle them individually. There's no shortcut here.
```typescript
async function checkQiitaConnection(token: string): Promise<string> {
  const resp = await fetch("https://qiita.com/api/v2/authenticated_user", {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!resp.ok) throw new Error("Invalid token");
  const user = await resp.json();
  return user.id; // Qiita's "id" is the username
}

async function checkDevToConnection(token: string): Promise<string> {
  const resp = await fetch("https://dev.to/api/users/me", {
    headers: { "api-key": token }, // Different header name!
  });
  if (!resp.ok) throw new Error("Invalid token");
  const user = await resp.json();
  return user.username;
}
```
Qiita uses Authorization: Bearer, Dev.to uses api-key as a header. Small difference, but it'll bite you if you try to generalize.
Zenn and Reddit are worth calling out as special cases. Zenn has no public API, so connection checking is impossible—you can only post manually. Reddit's OAuth flow is complex enough that I ended up recommending manual posting there too. Being honest about platform limitations in your UI prevents user frustration.
For storing tokens, I use Tauri's plugin-store. The tokens sit in a local config file, which is convenient but means they're effectively plaintext on disk. To use them safely in GitHub Actions, I sync them as GitHub secrets:
```bash
gh secret set QIITA_API_TOKEN --body "$token"
gh secret set DEVTO_API_KEY --body "$token"
```
This way the Actions workflows can access them securely, and you only have to set them once from your desktop.
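The sync step is just a loop over whatever tokens are stored locally. A hedged sketch of how the desktop side might build those `gh` invocations—the platform keys and secret names here mirror the commands above, but the function is my own illustration:

```typescript
// Map local platform keys to the secret names the Actions
// workflows expect. Platforms without an API (e.g. Zenn) are
// simply absent, so they never produce a command.
const SECRET_NAMES: Record<string, string> = {
  qiita: "QIITA_API_TOKEN",
  devto: "DEVTO_API_KEY",
};

// Build argv arrays for `gh secret set`, one per known platform.
// Returning argv (not a shell string) avoids quoting issues when
// these are later passed to a subprocess spawner.
function buildSecretCommands(tokens: Record<string, string>): string[][] {
  return Object.entries(tokens)
    .filter(([platform]) => platform in SECRET_NAMES)
    .map(([platform, token]) => [
      "gh", "secret", "set", SECRET_NAMES[platform], "--body", token,
    ]);
}
```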
## Tauri-Specific Patterns Worth Knowing
If you're building anything with Tauri v2, a couple of things made my life significantly easier.
Type-safe IPC with generics: Tauri's invoke<T>() supports generic type parameters, which means you can get proper TypeScript types from your Rust commands:
```typescript
const projects = await invoke<Project[]>("get_all_projects");
const result = await invoke<ScanResult>("scan_project", { path: projectPath });
```
Memoize your API hook: I centralized all IPC calls in a useApi() hook. The problem is that React recreates the hook object on every render by default, which causes unnecessary re-renders in child components. The fix:
```typescript
const api = useMemo(() => createApi(), []);
```
Easy fix, but easy to miss. Without it, you get cascading re-renders that are annoying to trace.
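For context, here's roughly what a `createApi` factory could look like. The command names mirror the earlier `invoke` examples; injecting `invoke` as a parameter is my own variation so the factory can be exercised without a running Tauri backend:

```typescript
type Invoke = <T>(cmd: string, args?: Record<string, unknown>) => Promise<T>;

interface Project {
  id: string;
  name: string;
}

// Factory wrapping raw IPC calls in typed methods. In the app
// you'd pass Tauri's own invoke and memoize the result with
// useMemo so children don't re-render on every parent render.
function createApi(invoke: Invoke) {
  return {
    getAllProjects: () => invoke<Project[]>("get_all_projects"),
    scanProject: (path: string) => invoke<unknown>("scan_project", { path }),
  };
}
```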
Windows + Unicode paths: If your app runs on Windows and users might have Japanese or other non-ASCII characters in their directory paths, you'll hit encoding issues when spawning subprocesses. The workaround is converting to 8.3 short paths via the `Scripting.FileSystemObject` COM object from PowerShell before passing them to CLI tools. Not elegant, but it works.
## What I Actually Built
I ended up wrapping all of this into a Tauri desktop app I call PR Dashboard—a 10-screen app that handles project scanning, content generation via Claude, multi-platform posting through GitHub Actions workflows, and analytics visualization using Recharts. The whole thing uses my GitHub repo as its data store.
Was it worth building? For me, yes—I'm posting consistently now instead of in random bursts. But honestly, most of what I described above can be implemented in a much simpler form: a few shell scripts, a cron job, and some GitHub Actions workflows. You don't need a desktop app to get the core benefit.
The patterns that matter regardless of what you build:
- Use Claude CLI's `--print` mode for non-interactive AI integration
- Accumulate data that APIs only return in rolling windows
- Let Git be your shared data store between local tools and CI/CD
- Handle platform-specific auth individually, not generically
- Be explicit in your UI about what you can't automate
I wrote a more detailed guide on my blog covering the GitHub Actions workflow setup and the content generation prompts in more depth: https://mcw999.github.io/mcw999-hub/blog/pr-dashboard-guide/
If you're a solo developer who's shipped things that quietly disappeared because you didn't have time to promote them, this is a solvable problem. The tooling doesn't have to be fancy—it just has to reduce the friction enough that you actually do it.