Multi-platform publishing is not successful only when everything succeeds. It should support partial completion.
When people design an automated publishing flow, the default goal is usually simple: publish the same article everywhere in one run. That goal is reasonable. But a system that treats 100% success as the only valid outcome is usually weak in real operations.
That is because multi-platform publishing is not a single action. It is a task with shared input, multiple outputs, and independent failure surfaces.
The same article may go to Zenn through a git push. Qiita may depend on token scope. dev.to may care about the shape of the request body. Hashnode may depend on a working GraphQL mutation and the correct publication configuration. The theme, content, and timing are shared, but the failure mode is not.
So if one platform fails and the whole run aborts with nothing more than “publish failed,” the automation is still designed for an idealized environment.
The real target is not total success. It is the maximum explainable completion.
A more practical goal is this:
Preserve content consistency, complete as many destinations as possible, and record failures explicitly.
This is not about being tolerant of failure. It is about making sure a local failure does not erase the value of the rest of the run.
Take a common case:
- Zenn succeeds
- Hashnode succeeds
- dev.to succeeds
- Qiita returns 401 because the token expired or lost the required scope
The worst possible behavior here is to mark the entire run as failed and stop there.
From an operational point of view, that is false. The run did not fully fail. Most of the external distribution was completed. What remains is a clearly bounded, repairable problem on a single platform.
If the system cannot express that difference, the operator sees the wrong picture. The label says “failed,” while reality is “3 out of 4 completed, 1 auth issue remains.”
## Multi-platform work should settle results per platform
The stable design is not “one command tries its luck against all four platforms.” It is:
- generate the shared content artifacts first
- submit to each platform independently
- record each platform result independently
- emit a structured summary at the end
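The shape above can be sketched in a few lines. This is a minimal illustration, not a real API: the per-platform publish functions and the `PlatformResult` fields are assumptions, standing in for whatever each platform's client actually looks like.

```python
from dataclasses import dataclass

@dataclass
class PlatformResult:
    platform: str
    ok: bool
    detail: str  # URL on success, error summary on failure

def publish_all(article, publishers):
    """Submit to each platform independently; one failure never aborts the rest."""
    results = []
    for name, publish in publishers.items():
        try:
            url = publish(article)  # hypothetical per-platform publish function
            results.append(PlatformResult(name, True, url))
        except Exception as exc:  # auth, format, network, rate limiting...
            results.append(PlatformResult(name, False, f"{type(exc).__name__}: {exc}"))
    return results  # the structured summary: one record per destination
```

The key design choice is that the `try/except` boundary sits around each destination, not around the whole run, so a Qiita 401 lands in one result record instead of aborting Zenn, Hashnode, and dev.to.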
This gives you several concrete benefits.
1. One platform problem does not erase the others
If Qiita fails but Zenn and dev.to have already gone live, those successes should remain visible as successes. A late-stage error should not rewrite the whole run as if nothing happened.
2. Troubleshooting becomes faster
“publish failed” is nearly useless.
“Qiita: 401 Unauthorized; Zenn: success; Hashnode: success; dev.to: success” is useful. It immediately tells you to repair authentication first instead of suspecting the content, network, or entire publishing pipeline.
3. Retries become smaller and safer
If the output is structured, the next run only needs to retry the failed destinations.
That saves requests, but more importantly it reduces the risk of duplicate posts, duplicate commits, and duplicate notifications.
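With a structured result file from the previous run, computing the retry set is a one-line filter. The JSON shape here (a list of objects with `platform` and `ok` fields) is an assumption for illustration:

```python
import json

def platforms_to_retry(result_path):
    """Read the previous run's structured result and return only the failed destinations."""
    with open(result_path) as f:
        previous = json.load(f)  # e.g. [{"platform": "qiita", "ok": false}, ...]
    return [entry["platform"] for entry in previous if not entry["ok"]]
```

The next run submits only to these platforms, which is what keeps retries from producing duplicate posts on the destinations that already succeeded.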
The dangerous part of automation is not failure itself. It is opaque failure.
A common mistake in publishing workflows is to treat “automatic execution” as the main goal and “auditability” as a nice extra.
The priority should be reversed.
For a cron-driven publishing job, the most important question is not whether the script ran. It is whether someone else can immediately answer the following after it finishes:
- which article was published today
- which platforms received it
- which platform failed
- whether the failure was auth, format, network, or rate limiting
- where the artifacts and result records were stored
If those questions cannot be answered quickly, the workflow is still immature even if it partially succeeded.
## Treat results as first-class artifacts, not as disposable terminal output
A reliable publishing flow should always produce more than the article itself. It should also produce two kinds of artifacts:
- content artifacts for review: the source draft, the Japanese version, and the English version
- result artifacts for operations: per-platform status, URLs, HTTP codes, and failure reasons
That means the result should not live only in scrolling terminal output. It should be written as explicit files such as:
- `publish-result.json`
- `publish-report.md`
The first is for machines. The second is for humans.
With that structure, you do not need to inspect shell history the next day or guess where yesterday’s run got stuck. The evidence is already in the artifact directory.
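Emitting both artifacts from the same result list is straightforward. The file names come from the text above; the field names and report layout are assumptions for the sketch:

```python
import json
from pathlib import Path

def write_result_artifacts(results, out_dir):
    """Persist per-platform outcomes: JSON for machines, Markdown for humans."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Machine-readable record, e.g. for computing the next run's retry set.
    (out / "publish-result.json").write_text(json.dumps(results, indent=2))
    # Human-readable report, one line per destination.
    lines = ["# Publish report", ""]
    for r in results:
        status = "success" if r["ok"] else f"FAILED ({r['detail']})"
        lines.append(f"- {r['platform']}: {status}")
    (out / "publish-report.md").write_text("\n".join(lines) + "\n")
```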
## One practical test
I now use one sentence to judge whether a multi-platform publishing pipeline is well designed:
If one out of four platforms is temporarily broken, can the other three still complete, and can the broken point be recorded clearly?
If the answer is no, the system is not really automated publishing. It is just serialized luck.
Real automation should not depend on a perfect environment. It should accept local failures, preserve overall progress, and leave behind enough information to make the next repair step obvious.
The most valuable capability in multi-platform publishing is not getting a perfect run every time. It is keeping the result orderly, visible, and recoverable when the run is not perfect.