When I first started automating my blog publishing, I wrote monolithic scripts—big files that did everything: fetch content, format it, upload images, call APIs, send emails. They worked, but they were a nightmare to debug. A failure in one part meant wading through hundreds of lines to find the culprit.
So I changed my approach. I started building small, single-purpose scripts that do one thing well, and then chaining them together. This is the Unix philosophy applied to automation, and it's transformed how I work.
## The Problem with Monoliths
My first email notification script was 200 lines of PowerShell. It built the email body, conditionally added content, handled errors, sent the message, logged the result, and updated state files. It did too much.
When it failed (and it did, fairly often), I had to instrument it with verbose logging just to see which step broke. Debugging meant adding Write-Host statements everywhere and running it interactively. Modifying it was risky—a change in one section could unexpectedly affect another.
The worst part? I couldn't reuse any of it. If I wanted to send a different kind of notification, I had to copy-paste and modify the whole thing. That's not engineering; it's cargo cult programming.
## The Epiphany: Small Tools, Big Results
I saw the light when I built the Hashnode automation. Instead of one giant script that created a draft, waited, published it, and then added tags, I made separate tools:
- `create_draft.js` – Takes title, content, publication ID; outputs draft ID
- `publish_draft.js` – Takes draft ID; outputs post URL
- `add_tags.js` – Takes post ID and tags; adds them to the published post
Each script is under 100 lines. Each has one clear responsibility. Each can be tested independently.
The workflow became:
```bash
node create_draft.js > draft.json
draftId=$(extract-draft-id-from draft.json)
postId=$(node publish_draft.js "$draftId")
node add_tags.js "$postId" "nes" "retro-gaming" "some-game"
```
This is how real tools are built. git doesn't try to also diff and merge and rebase and cherry-pick in one binary. It's a collection of specialized commands. That's what makes it powerful.
## Composability Is Key
The magic happens when you can pipe outputs into inputs. My screenshot capture pipeline works the same way:
- `extract_rom.sh` – Finds a random ROM from the collection
- `capture_screenshots.sh` – Runs fceux with Lua, saves PNGs to `snaps/`
- `upload_images.sh` – POSTs each image to 0x0.st, outputs URLs
- `generate_post.sh` – Takes those URLs plus title, produces markdown draft
Each script writes to stdout and stderr appropriately. Each returns a proper exit code. Each can be run standalone for testing. When any step fails, the whole pipeline stops—thanks to set -e and Bash's natural error propagation.
I can swap out components easily. Need a different image host? Replace upload_images.sh (or better, write a new one and change the pipeline order). Want to use random screenshots instead of fixed intervals? Tweak capture_screenshots.sh without touching the others.
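A minimal sketch of that contract, with stub functions standing in for the real scripts (names, paths, and URLs below are illustrative), shows how `set -e` stops the chain at the first failure and why swapping one component is cheap:

```shell
#!/usr/bin/env bash
# Sketch of the pipeline contract: each stage takes arguments, prints its
# result to stdout, and exits non-zero on failure; set -e aborts the chain
# at the first failing stage. Functions stand in for the real scripts.
set -e

extract_rom()         { echo "roms/game.nes"; }           # stand-in for extract_rom.sh
capture_screenshots() { echo "snaps/shot1.png"; }         # stand-in for capture_screenshots.sh
upload_images()       { echo "https://0x0.st/abc.png"; }  # stand-in for upload_images.sh

rom=$(extract_rom)
shot=$(capture_screenshots "$rom")
url=$(upload_images "$shot")
echo "uploaded: $url"
```

If any stage returned non-zero, nothing after it would run; switching image hosts means replacing only the `upload_images` step.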
## Error Handling Becomes Simpler
Small scripts have simple error models. They either succeed (exit 0) or fail (exit non-zero). There's no "partial success" state to manage.
I add explicit checks:
```javascript
if (!response.ok) {
  console.error('HTTP error:', response.status, response.statusText);
  process.exit(1);
}
```
That's it. The caller decides what to do. If publish_draft.js fails, I don't have to clean up a half-created draft because there's no cleanup logic in that script—the script either fully creates the draft or doesn't. (Okay, sometimes I do need cleanup, but that's a separate concern.)
Contrast this with monoliths where a failure in the middle leaves resources allocated, files open, or temporary state inconsistent. Atomic operations are easier when the operation is small.
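On the caller's side, that contract is nothing more than a branch on the exit status. A sketch, with a stub function simulating a failed publish step (the function name is illustrative):

```shell
#!/usr/bin/env bash
# The called script is atomic: exit 0 means fully done, non-zero means
# nothing was created and there is nothing to clean up. The caller only
# inspects the exit status and decides what to do next.
publish_draft() { return 1; }   # stub simulating a failing publish step

publish_draft "draft-123"
status=$?
if [ "$status" -ne 0 ]; then
  echo "publish failed with exit code $status; stopping here" >&2
fi
```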
## Testing Is Actually Possible
Can you unit test a 500-line script that does everything? Not easily. Can you test a 50-line script that takes input, calls an API, and returns JSON? Absolutely.
For add_tags.js, the test is straightforward:
```javascript
// Mock fetch, verify the mutation is built correctly
// Check that the tags array has the right shape
```
I don't need to simulate the entire blog system. I just need to verify that given a postId and tag names, the script produces the correct GraphQL mutation.
My coverage isn't 100%, but I have confidence that each piece does its job because I can run it in isolation with known inputs.
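That kind of isolation test can itself be a tiny script: write a stand-in tool to a temp file, run it with a known input, and compare stdout to the expected output. The `tr`-based stand-in below is purely hypothetical; the real target would be something like `add_tags.js`:

```shell
#!/usr/bin/env bash
# Black-box test pattern: known input in, expected output compared.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/sh
# toy stand-in tool: lowercase the argument and replace spaces with dashes
echo "$1" | tr 'A-Z' 'a-z' | tr ' ' '-'
EOF
chmod +x "$tmp"

out=$("$tmp" "Retro Gaming")
if [ "$out" = "retro-gaming" ]; then
  echo "PASS"
else
  echo "FAIL: got '$out'" >&2
fi
rm -f "$tmp"
```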
## Debugging Is Less Painful
When something breaks in production, I can reproduce the failure by running just the problematic piece. If the post isn't getting tags, I run add_tags.js manually with the same arguments and see the error. If the GraphQL mutation fails, I copy the request body, test it with curl, check the response format. I'm not debugging through layers of abstraction.
Logging is simpler too. Each script writes what it's doing and its inputs/outputs. That's enough context to understand the failure without overwhelming detail.
## Reuse and Extensibility
The best part? I can reuse these scripts in new contexts. The Hashnode tag-adding script? I used it for all three Hashnode posts—Battletoads & Double Dragon, and any future ones. The image uploader? Same script for all NES games. The draft creator? Works for any content type.
If I ever need to add a new platform, I already have a pattern to follow: write a "create" script, a "publish" script, maybe a "set metadata" script. The mental model is established. The code structure is familiar.
## But It's Not All Roses
The trade-off is orchestration complexity. Now I have a shell script (or a Makefile, or a Node.js runner) that sequences these tools. That's another layer of indirection. I need to carefully pass arguments and handle intermediate files.
I've accepted that trade-off because the benefits outweigh the costs. The orchestration layer is usually straightforward Bash, and I can see the entire workflow at a glance. It's readable.
Also, debugging across script boundaries requires some discipline. I need to ensure data flows correctly between stages. If create_draft.js silently fails to output the draft ID, the next step will fail with a cryptic "missing argument" error. So I add validation and explicit error messages.
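That validation can be a short guard between stages. A sketch, assuming a `draft.json` with an `id` field as described above (the `sed` extraction stands in for a real JSON parser such as `jq`):

```shell
#!/usr/bin/env bash
# Guard between stages: refuse to continue with an empty draft ID instead
# of passing it downstream and failing later with a cryptic error.
printf '{"id":"abc123"}\n' > draft.json   # stand-in for create_draft.js output

draftId=$(sed -n 's/.*"id":"\([^"]*\)".*/\1/p' draft.json)
if [ -z "$draftId" ]; then
  echo "error: create step produced no draft id" >&2
  exit 1
fi
echo "draft id: $draftId"
rm -f draft.json
```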
## Real-World Examples from My Work
Hashnode tags as objects: I discovered that tags must be `{ name, slug }` objects. My `add_tags.js` accepts simple tag names and does the transformation internally. The caller doesn't need to know about the GraphQL detail. That's a perfect abstraction—hide platform complexity behind a simple interface.
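The shape of that abstraction can be sketched in shell: the caller passes plain tag names, the helper builds the objects. The slug rule here (lowercase, spaces to dashes) is an assumption for illustration, not Hashnode's documented algorithm:

```shell
#!/usr/bin/env bash
# Sketch of the interface idea: plain tag names in, {name, slug} JSON out.
# The slugification rule is an assumption, not the platform's actual one.
tags_json() {
  sep=""
  printf '['
  for t in "$@"; do
    slug=$(printf '%s' "$t" | tr 'A-Z ' 'a-z-')   # lowercase, spaces -> dashes
    printf '%s{"name":"%s","slug":"%s"}' "$sep" "$t" "$slug"
    sep=","
  done
  printf ']'
}

tags_json "NES" "Retro Gaming"
```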
Image URL extraction: The upload_images.sh script outputs URLs in a predictable format: one per line. The generate_post.sh script reads those lines and inserts them into the Markdown template. If upload fails for one image, the script exits non-zero and the pipeline stops. No broken images in the post.
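The one-URL-per-line contract makes the consumer side trivial. A sketch of the generator's half of that contract (the URLs are placeholders):

```shell
#!/usr/bin/env bash
# Consumer side of the "one URL per line" contract: each line from the
# uploader becomes a Markdown image tag. URLs here are placeholders.
set -e
md=""
while IFS= read -r url; do
  md="${md}![screenshot](${url})
"
done <<'EOF'
https://0x0.st/aaa.png
https://0x0.st/bbb.png
EOF
printf '%s' "$md"
```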
Email notifications: I have a separate send_email.sh script that uses the ProtonMail CLI. Its interface is `send_email.sh "recipient" "subject" "body"`. Any process can invoke it. No need to know how email sending works internally.
## Lessons Learned
- Single Responsibility – Each script should have one clear purpose. If you find yourself writing "and then it also does X," extract that into a separate script.
- Convention over Configuration – I established conventions: draft JSON has a `draft.id` field, image uploads print URLs to stdout, post titles are passed as quoted arguments. Once the contract is agreed upon, each script can trust the others.
- Fail Fast, Fail Loudly – Use `set -e` in Bash, check arguments, validate responses, and exit immediately on error. Let the orchestrator handle retries or cleanup if needed.
- Document the Interface – Each script has a comment header describing its inputs, outputs, and exit codes. This is the "man page" for my tools.
- Keep Scripts Language-Neutral When Possible – I use Bash for pipelining, Node.js for HTTP/JSON work, Python for text processing, PowerShell when on Windows. Each tool lives in the language that fits it best. They all communicate via stdin/stdout and files.
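A "man page" header is just a comment block at the top of the script, stating the contract before any code runs. A sketch of what one might look like (field names and wording are illustrative, and the body is a stub so the example runs):

```shell
#!/usr/bin/env bash
# upload_images.sh: sketch of a self-documenting comment header.
#
# Usage:   upload_images.sh FILE...
# Inputs:  image file paths as arguments
# Outputs: one hosted URL per line on stdout
# Exit:    0 on success, non-zero if any upload fails

upload_images() {
  for f in "$@"; do
    echo "https://example.invalid/$(basename "$f")"   # stub; the real script POSTs the file
  done
}

upload_images snaps/a.png snaps/b.png
```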
## The Bottom Line
Breaking work into small, focused scripts isn't just an aesthetic preference. It's a productivity multiplier. It makes your automation resilient to change, easier to understand, and simpler to debug. When something breaks, you know exactly where to look. When you need to add a feature, you can extend or replace one piece without rewriting the whole system.
If you're writing automation that does more than three things, consider splitting it up. Your future self—the one debugging at 2 AM—will thank you.
This post is part of my dev-to-diaries series documenting the technical journey behind automating blog publishing. See the whole series at https://dev.to/retrorom/series/35977