Last month, I had to process 16 YouTube Shorts.
Trim intros. Normalize audio. Add watermarks. Export multiple formats. Generate thumbnails.
Doing that manually in Premiere would have taken me most of an afternoon.
So I built a CLI skill instead.
It took about 2 hours to put together. On my machine, the batch itself finished in under 3 minutes once everything was set up.
Here’s the exact structure I used.
## What I mean by a CLI skill
For me, a CLI skill is a reusable shell workflow with:
- input validation
- sensible defaults
- predictable output
- error handling
- lightweight docs
Instead of retyping a long FFmpeg command every time, I run one script and get the same result.
```bash
# Instead of this:
ffmpeg -i input.mp4 -ss 00:00:02 -to 00:00:35 -vf "scale=1080:1920" -af "loudnorm=I=-14" -c:v libx264 -preset fast output.mp4

# I run this:
./process-short.sh input.mp4
```
That difference sounds small, but it removes the part that always breaks in real work: remembering the flags, the order, and the output steps.
## Step 1: List the manual steps first
Before I wrote anything, I wrote down the full workflow:
- trim the first 2 seconds
- cut after 35 seconds
- scale to 1080×1920
- normalize audio to -14 LUFS
- add watermark
- export MP4
- export WebM
- generate a thumbnail
That gave me a real pipeline instead of a vague automation idea.
## Step 2: Build one script that does the boring work
```bash
#!/bin/bash
set -euo pipefail

# Config
TRIM_START="00:00:02"
TRIM_END="00:00:35"
RESOLUTION="1080:1920"
AUDIO_TARGET="-14"
WATERMARK="./assets/watermark.png"
THUMB_TIME="00:00:05"

INPUT="${1:?Usage: process-short.sh <input.mp4>}"

if [[ ! -f "$INPUT" ]]; then
  echo "Error: File '$INPUT' not found."
  exit 1
fi

BASENAME=$(basename "$INPUT" .mp4)
OUTDIR="./output/${BASENAME}"
mkdir -p "$OUTDIR"

# Trim, scale, and normalize in one pass
ffmpeg -y -ss "$TRIM_START" -to "$TRIM_END" -i "$INPUT" \
  -vf "scale=${RESOLUTION}" \
  -af "loudnorm=I=${AUDIO_TARGET}" \
  -c:v libx264 -preset fast -crf 23 \
  "${OUTDIR}/trimmed.mp4"

# Watermark (bottom-right, 20px margin) if the asset exists
if [[ -f "$WATERMARK" ]]; then
  ffmpeg -y -i "${OUTDIR}/trimmed.mp4" -i "$WATERMARK" \
    -filter_complex "overlay=W-w-20:H-h-20" \
    "${OUTDIR}/${BASENAME}_final.mp4"
else
  cp "${OUTDIR}/trimmed.mp4" "${OUTDIR}/${BASENAME}_final.mp4"
fi

# WebM export
ffmpeg -y -i "${OUTDIR}/${BASENAME}_final.mp4" \
  -c:v libvpx-vp9 -crf 30 -b:v 0 \
  "${OUTDIR}/${BASENAME}.webm"

# Thumbnail
ffmpeg -y -i "${OUTDIR}/${BASENAME}_final.mp4" \
  -ss "$THUMB_TIME" -frames:v 1 \
  "${OUTDIR}/${BASENAME}_thumb.jpg"

# Clean up the intermediate file
rm -f "${OUTDIR}/trimmed.mp4"
```
A few choices mattered a lot:

- `set -euo pipefail` so failures don't get ignored
- `-y` because this is a pipeline, not an interactive tool
- temp file cleanup so output folders stay usable
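If those `set` options are unfamiliar, here's a contrived, self-contained demo of what each one catches. I've left `-e` off so the script can print both results instead of stopping at the first failure:

```shell
#!/bin/bash
set -uo pipefail

# pipefail: a pipeline fails if ANY stage fails, not just the last one.
if false | true; then PIPE_RESULT="passed"; else PIPE_RESULT="failed"; fi
echo "pipeline: $PIPE_RESULT"   # "failed" -- pipefail propagates the `false`

# -u: expanding an unset variable is a hard error, not a silent empty string.
if ( : "${NOT_SET}" ) 2>/dev/null; then UNSET_RESULT="tolerated"; else UNSET_RESULT="rejected"; fi
echo "unset variable: $UNSET_RESULT"   # "rejected"
```

Without `pipefail`, `false | true` would report success, and without `-u`, a typo'd variable name silently expands to nothing — both are exactly the kind of quiet failure you don't want in a batch run.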
## Step 3: Make it batch-capable
One file is a demo. A directory is the real use case.
```bash
#!/bin/bash
INPUT_DIR="${1:?Usage: batch-process.sh <directory>}"

PROCESSED=0
FAILED=0

for file in "$INPUT_DIR"/*.mp4; do
  [[ -f "$file" ]] || continue   # skip the literal glob when nothing matches
  if ./process-short.sh "$file"; then
    PROCESSED=$((PROCESSED + 1))
  else
    echo "FAILED: $file"
    FAILED=$((FAILED + 1))
  fi
done

echo "Batch complete: $PROCESSED processed, $FAILED failed"
```
This is where it actually became useful. I didn’t want a cool script. I wanted to stop babysitting repetitive exports.
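The loop needs real footage and `process-short.sh` to run for real, but the keep-going pattern itself can be sanity-checked anywhere. Here's the same structure with a stubbed processor (the `process` function and filenames are made up for the demo):

```shell
#!/bin/bash
set -uo pipefail

process() {
  [[ "$1" != *broken* ]]   # stub standing in for ./process-short.sh
}

PROCESSED=0
FAILED=0
for file in "good one.mp4" "broken.mp4" "good [two].mp4"; do
  if process "$file"; then
    PROCESSED=$((PROCESSED + 1))   # safe under set -e, unlike ((PROCESSED++))
  else
    echo "FAILED: $file"
    FAILED=$((FAILED + 1))
  fi
done
echo "Batch complete: $PROCESSED processed, $FAILED failed"
```

One file fails, the other two still get processed, and the summary line tells you which is which — instead of one bad clip killing the whole batch.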
## Step 4: Add docs, even if it's just for yourself
I also added a tiny SKILL.md with:
- what the script does
- requirements
- usage
- config variables
- output files
That sounds boring, but it matters. A script without docs becomes archaeology in two weeks.
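One way to make the docs exist from day one is to scaffold them from the shell. This is an illustrative skeleton (the section contents come from this post, but treat the exact wording as a placeholder):

```shell
#!/bin/bash
# Write a minimal SKILL.md next to the scripts. Edit the placeholders after.
set -euo pipefail

cat > SKILL.md <<'EOF'
# process-short

Trims, normalizes, watermarks, and exports a YouTube Short, then generates a thumbnail.

## Requirements
- ffmpeg with libx264 and libvpx-vp9

## Usage
- `./process-short.sh <input.mp4>`
- `./batch-process.sh <directory>`

## Config variables
TRIM_START, TRIM_END, RESOLUTION, AUDIO_TARGET, WATERMARK, THUMB_TIME

## Output
`output/<name>/<name>_final.mp4`, `<name>.webm`, `<name>_thumb.jpg`
EOF

echo "wrote SKILL.md"
```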
## Step 5: Test annoying edge cases
This was the part that caught real bugs:
- empty files
- videos without audio
- short clips
- weird filenames with spaces and brackets
That testing found multiple issues immediately. If I had skipped it, the batch version would have failed silently later.
One of the first things I ran into was how quickly “works on one file” falls apart on real footage. A weird filename, a silent clip, or a shorter-than-expected video is enough to break the whole flow if you never test for it.
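Those cases are cheap to guard against before ffmpeg ever runs. Here's a sketch of pre-flight checks — `preflight` is my own helper name, not part of the script above, and an ffprobe duration/audio-stream check would slot in the same way:

```shell
#!/bin/bash
set -uo pipefail

preflight() {
  local f="$1"
  [[ -f "$f" ]]       || { echo "missing: $f" >&2; return 1; }
  [[ -s "$f" ]]       || { echo "empty: $f" >&2; return 1; }
  [[ "$f" == *.mp4 ]] || { echo "not an .mp4: $f" >&2; return 1; }
}

# Quoting "$f" everywhere is what keeps spaces and brackets in names safe.
tmp=$(mktemp -d)
: > "$tmp/empty.mp4"                     # zero-byte file
printf 'x' > "$tmp/clip [draft] 1.mp4"   # awkward but valid name

OK=0; BAD=0
for f in "$tmp/empty.mp4" "$tmp/clip [draft] 1.mp4" "$tmp/missing.mp4"; do
  if preflight "$f"; then
    OK=$((OK + 1))
  else
    BAD=$((BAD + 1))
  fi
done
echo "$OK passed, $BAD rejected"
rm -rf "$tmp"
```

Rejecting bad inputs with a named reason, up front, is what turns "the batch died somewhere" into a one-line log entry you can act on.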
## The result
**Before:** most of an afternoon for 16 videos.
**After:** one command, then the machine does the repetitive part.

```bash
./batch-process.sh ./raw-videos/
```
It’s still less flexible than Premiere. If I want one-off polish, I’ll still do it manually.
But for repeatable batch work, the tradeoff is absolutely worth it.
That’s why I like building CLI skills.
Not because they’re clever. Because they turn something fragile and repetitive into something boring and reliable.
If you build little terminal workflows like this too, I’d genuinely love to hear what you’ve automated. I keep more reusable terminal workflow patterns on Terminal Skills.
What would you automate first?