German Yamil

I'm Using Python to Automate My Own Marketing — Here's Every Script

I built a pipeline that writes and publishes technical ebooks automatically.

Then I realized: why am I manually marketing it?

So I automated the marketing too. Here's every script I wrote to run a full content operation without logging into any dashboard.


🎁 Free: AI Publishing Checklist — 7 steps in Python · Full pipeline: germy5.gumroad.com/l/xhxkzz (pay what you want, min $9.99)


The stack: 4 APIs, 2 crons, 0 dashboards

| Task | Tool | Script |
| --- | --- | --- |
| Publish articles | Dev.to API | publish_devto.py |
| Auto-publish queue | Dev.to API + cron | auto_publish_queue.py |
| Update article tags | Dev.to API | inline script |
| Add CTAs to articles | Dev.to API | inline script |
| Generate cover images | Pillow | make_covers.py |
| Upload images | Imgur API | inline script |
| Attach covers to articles | Dev.to API | inline script |
| RSS ping | xmlrpc | daily_ping.py |
| Update product pricing | Gumroad API | inline script |
| Update product copy | Gumroad API | inline script |
| Track analytics | Dev.to API | check_stats.py |

No Buffer. No Hootsuite. No Notion. Pure Python.

Script 1: Auto-publish queue

The core script. Publishes one article per day from a JSON queue, called by cron at 10am.

#!/usr/bin/env python3
"""
Publishes one article from publish_queue.json to Dev.to.
Cron: 0 10 * * * python3 auto_publish_queue.py
"""
import os, re, json, requests
from datetime import date

QUEUE_FILE = "publish_queue.json"
TOKEN = os.environ["DEVTO_TOKEN"]

def load_queue():
    with open(QUEUE_FILE) as f:
        return json.load(f)

def save_queue(q):
    with open(QUEUE_FILE, "w") as f:
        json.dump(q, f, indent=2)

def parse_frontmatter(content):
    match = re.match(r'^---\n(.*?)\n---\n', content, re.DOTALL)
    fm = {}
    if not match:  # no frontmatter block at the top of the file
        return fm
    for line in match.group(1).splitlines():
        if line.startswith('tags: '):
            fm['tags'] = [t.strip() for t in line[6:].split(',')]
        elif ': ' in line:
            k, _, v = line.partition(': ')
            fm[k.strip()] = v.strip().strip('"')
    return fm

def publish(filepath):
    with open(filepath) as f:
        content = f.read()
    fm = parse_frontmatter(content)
    headers = {"api-key": TOKEN, "Content-Type": "application/json"}
    payload = {"article": {
        "title": fm["title"],
        "body_markdown": content,
        "published": True,
        "tags": fm.get("tags", []),
        "description": fm.get("description", ""),
    }}
    resp = requests.post("https://dev.to/api/articles", headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()

q = load_queue()
if not q["pending"]:
    print("Queue empty")
    exit(0)

item = q["pending"][0]
result = publish(item["filename"])
url = f"https://dev.to{result['path']}"
print(f"✅ Published: {url}")

q["pending"].pop(0)
q["published"].append({
    "filename": item["filename"],
    "title": item["title"],
    "date": str(date.today()),
    "url": url,
    "id": result["id"],
})
save_queue(q)
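For reference, the queue file the script reads and rewrites is just two lists. A minimal publish_queue.json that matches what load_queue()/save_queue() expect (the filename and title here are made up):

```python
import json

# Hypothetical queue contents — pending items get popped one per day,
# published items accumulate with date/url/id for the stats script
queue = {
    "pending": [
        {"filename": "drafts/article-07.md", "title": "Draft waiting to go out"}
    ],
    "published": []
}
print(json.dumps(queue, indent=2))
```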

Script 2: Bulk tag update

When I discovered #career gets 3-5x more views than #automation, I needed to update 6 articles immediately.

import os, re, time, requests

TOKEN = os.environ["DEVTO_TOKEN"]
HEADERS = {"api-key": TOKEN, "Content-Type": "application/json"}

# {article_id: new_tag_list}
UPDATES = {
    3507086: ["python", "tutorial", "career", "productivity"],
    3487145: ["python", "productivity", "career", "selfpublishing"],
    3511931: ["python", "career", "productivity", "selfpublishing"],
}

for article_id, new_tags in UPDATES.items():
    # Get current body
    r = requests.get(f"https://dev.to/api/articles/{article_id}", headers=HEADERS)
    body = r.json()["body_markdown"]

    # Replace tags line in frontmatter
    tags_str = ", ".join(new_tags)
    new_body = re.sub(r'^tags:.*$', f'tags: {tags_str}', body, flags=re.MULTILINE)

    # Update
    resp = requests.put(f"https://dev.to/api/articles/{article_id}",
        headers=HEADERS, json={"article": {"body_markdown": new_body}})

    print(f"{article_id}: {resp.json().get('tag_list')}")
    time.sleep(1.5)

Key insight: Dev.to ignores the tags field in PUT requests — it only reads tags from the frontmatter inside body_markdown. You have to update the markdown, not the metadata field.
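You can see that frontmatter rewrite in isolation — the same re.sub call applied to a toy body (sample content, not a real article):

```python
import re

body = """---
title: Sample Post
tags: python, automation
---

Body text here.
"""

new_tags = ["python", "tutorial", "career", "productivity"]
# MULTILINE makes ^/$ match per-line, so only the tags: line is replaced
new_body = re.sub(r'^tags:.*$', f'tags: {", ".join(new_tags)}',
                  body, flags=re.MULTILINE)
print(new_body.splitlines()[2])  # tags: python, tutorial, career, productivity
```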

Script 3: Bulk CTA injection

I added a dual CTA (free + paid) to all 22 articles in one run.

CTA_BLOCK = """
---
> **🎁 Free:** [AI Publishing Checklist](https://germy5.gumroad.com/l/vlvhld) \
· **Full pipeline:** [germy5.gumroad.com/l/xhxkzz](https://germy5.gumroad.com/l/xhxkzz) (min $9.99)
---

"""

def insert_cta(body: str, cta: str) -> str:
    lines = body.split('\n')
    # Find end of frontmatter
    fm_end = 0
    if lines[0].strip() == '---':
        for i in range(1, len(lines)):
            if lines[i].strip() == '---':
                fm_end = i + 1
                break

    # Find after 2nd paragraph break
    blank_count = 0
    insert_at = fm_end + 2
    for i in range(fm_end, min(fm_end + 50, len(lines))):
        if lines[i].strip() == '':
            blank_count += 1
            if blank_count >= 2:
                insert_at = i + 1
                break

    new_lines = lines[:insert_at] + cta.split('\n') + lines[insert_at:]
    return '\n'.join(new_lines)

Script 4: Cover image generation

Articles with cover images get 2-3x more clicks in the Dev.to feed. I generate them with Pillow.

from PIL import Image, ImageDraw, ImageFont
import textwrap

def make_cover(title: str, subtitle: str, output_path: str, accent: str = "#6366f1"):
    W, H = 1000, 420
    img = Image.new("RGB", (W, H), "#0f172a")
    draw = ImageDraw.Draw(img)

    # Left accent stripe (10px wide)
    draw.rectangle([0, 0, 10, H], fill=accent)

    # Top-right accent bar
    draw.rectangle([W-220, 0, W, 6], fill=accent)

    # Load fonts
    try:
        ft = ImageFont.truetype("/System/Library/Fonts/Helvetica.ttc", 38)
        fs = ImageFont.truetype("/System/Library/Fonts/Helvetica.ttc", 18)
        fb = ImageFont.truetype("/System/Library/Fonts/Helvetica.ttc", 13)
    except OSError:  # font file not found (non-macOS) — fall back to Pillow's default
        ft = fs = fb = ImageFont.load_default()

    # Tag line
    draw.text((40, 30), "dev.to · #python · #automation", fill="#94a3b8", font=fb)

    # Title
    y = 90
    for line in textwrap.wrap(title, 36)[:3]:
        draw.text((40, y), line, fill="#f8fafc", font=ft)
        y += 52

    # Subtitle
    y += 8
    for line in textwrap.wrap(subtitle, 60)[:2]:
        draw.text((40, y), line, fill="#94a3b8", font=fs)
        y += 28

    # Footer
    draw.rectangle([0, H-48, W, H], fill="#1e293b")
    draw.text((40, H-32), "germy5.gumroad.com/l/xhxkzz", fill="#94a3b8", font=fb)

    img.save(output_path, "PNG")
    return output_path
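The only fragile part is the title wrapping: textwrap.wrap at width 36, capped at three lines, decides how tall the text block gets. For example:

```python
import textwrap

title = "I'm Using Python to Automate My Own Marketing"
lines = textwrap.wrap(title, 36)[:3]  # greedy word wrap, max 3 lines
for line in lines:
    print(line)
# This title wraps to two lines, so the subtitle starts at y = 90 + 2*52 + 8
```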

Script 5: Imgur upload + Dev.to attachment

import base64, os, requests

DEVTO_TOKEN = os.environ["DEVTO_TOKEN"]

def upload_cover(img_path: str, article_id: int):
    # Upload to Imgur (anonymous — no auth needed)
    with open(img_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    r = requests.post("https://api.imgur.com/3/image",
        headers={"Authorization": "Client-ID YOUR_CLIENT_ID"},
        data={"image": b64, "type": "base64"})
    img_url = r.json()["data"]["link"]

    # Attach to Dev.to article
    headers = {"api-key": DEVTO_TOKEN, "Content-Type": "application/json"}
    requests.put(f"https://dev.to/api/articles/{article_id}",
        headers=headers,
        json={"article": {"main_image": img_url}})

    return img_url
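One gotcha I'd guard against: with a bad Client-ID the anonymous endpoint returns JSON with success: false, so r.json()["data"]["link"] can KeyError. A small check before touching Dev.to — the response shape is assumed from Imgur's v3 API envelope, and extract_link is my own helper name:

```python
def extract_link(resp_json):
    """Pull the image URL out of an Imgur v3 response dict, or raise."""
    if not resp_json.get("success"):
        raise RuntimeError(f"Imgur upload failed: status {resp_json.get('status')}")
    return resp_json["data"]["link"]

# Simulated responses (shape assumed from Imgur's v3 API):
ok = {"success": True, "status": 200,
      "data": {"link": "https://i.imgur.com/abc123.png"}}
bad = {"success": False, "status": 403,
       "data": {"error": "invalid client_id"}}

print(extract_link(ok))
```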

Script 6: RSS ping (daily cron, 9am)

#!/usr/bin/env python3
"""Ping RSS aggregators to request re-indexing."""
import xmlrpc.client
from datetime import datetime

FEED_URL = "https://dev.to/feed/german_yamil_e021eef8710d"

services = [
    "http://rpc.pingomatic.com/",
    "http://ping.blogs.yandex.ru/RPC2",
    "http://rpc.weblogs.com/RPC2",
]

for service in services:
    try:
        server = xmlrpc.client.ServerProxy(service)
        server.weblogUpdates.ping("AI Publishing Pipeline", FEED_URL)
    except Exception:
        pass  # aggregator unreachable — skip it and try the next one

print(f"{datetime.now():%Y-%m-%d %H:%M} — Pings sent to {len(services)} services")

Cron entries (note that cron runs with a minimal environment, so DEVTO_TOKEN must be set in the crontab itself or in a wrapper script):

0 9 * * * python3 /path/to/daily_ping.py >> ping.log 2>&1
0 10 * * * python3 /path/to/auto_publish_queue.py >> queue.log 2>&1

Stats script

import os, requests

TOKEN = os.environ["DEVTO_TOKEN"]
headers = {"api-key": TOKEN}

arts = requests.get("https://dev.to/api/articles/me", headers=headers).json()

total_views = sum(a["page_views_count"] for a in arts)
total_reactions = sum(a["positive_reactions_count"] for a in arts)

print(f"Articles: {len(arts)}")
print(f"Views:    {total_views}")
print(f"Reactions:{total_reactions}")

for a in sorted(arts, key=lambda x: x["page_views_count"], reverse=True)[:5]:
    print(f"  [{a['page_views_count']:3d}v] {a['title'][:60]}")
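One caveat: /api/articles/me is paginated (30 results per page by default, per the Forem API docs), so past 30 articles the totals above undercount. A sketch of the paging loop, with the HTTP call abstracted out so it runs standalone — fetch_all and get_page are my names, not Dev.to's:

```python
def fetch_all(get_page):
    """Collect paginated results until the API returns an empty page.

    get_page(page) -> list of article dicts; in the real script this
    would wrap requests.get(".../articles/me?page=N", headers=headers).
    """
    articles, page = [], 1
    while True:
        batch = get_page(page)
        if not batch:
            break
        articles.extend(batch)
        page += 1
    return articles

# Demo with a fake two-page API:
pages = {1: [{"id": 1}, {"id": 2}], 2: [{"id": 3}]}
print(len(fetch_all(lambda p: pages.get(p, []))))  # 3
```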

What this replaces

Before these scripts, this would have required Buffer ($18/mo), a content calendar tool ($12/mo), an analytics dashboard ($20/mo), and 2-3 hours/week of manual posting.

After: $0 in SaaS tools, cron handles posting, a single Python script handles analytics.

The whole marketing stack is seven short scripts, ~300 lines of Python, and 2 cron jobs.


The ebook pipeline that makes the product worth marketing: germy5.gumroad.com/l/xhxkzz — pay what you want, min $9.99.

