DEV Community

Otto Brennan

Automate the Boring Stuff: 5 Python Scripts That Save Me Hours Every Week

Every developer has tasks they do manually that could be automated in 20 lines of Python. Here are 5 scripts I actually use weekly — not toy examples, but real tools that save me real time.

1. Bulk File Organizer

Ever download 200 files into your Downloads folder and then spend 15 minutes sorting them? This script scans a directory and sorts files by extension into categorized subdirectories.

import shutil
from pathlib import Path

CATEGORIES = {
    'Images': ['.jpg', '.jpeg', '.png', '.gif', '.bmp', '.svg', '.webp'],
    'Documents': ['.pdf', '.doc', '.docx', '.txt', '.xlsx', '.csv', '.pptx'],
    'Code': ['.py', '.js', '.ts', '.html', '.css', '.json', '.yml'],
    'Archives': ['.zip', '.tar', '.gz', '.rar', '.7z'],
    'Videos': ['.mp4', '.avi', '.mkv', '.mov'],
    'Audio': ['.mp3', '.wav', '.flac', '.aac'],
}

def organize(directory):
    # expanduser() so paths like "~/Downloads" resolve to the real home dir
    path = Path(directory).expanduser()
    moved = 0
    for file in path.iterdir():
        if file.is_file():
            ext = file.suffix.lower()
            for category, extensions in CATEGORIES.items():
                if ext in extensions:
                    dest = path / category
                    dest.mkdir(exist_ok=True)
                    shutil.move(str(file), str(dest / file.name))
                    moved += 1
                    break
    print(f"Organized {moved} files")

organize("~/Downloads")

Time saved: ~15 min/week sorting downloads manually.
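One gotcha worth calling out: neither `pathlib` nor the `open()` builtin expands `~` in a plain string path, so iterating over `Path("~/Downloads")` directly would fail — you have to call `expanduser()` yourself. A quick check:

```python
from pathlib import Path

p = Path("~/Downloads")
# The tilde is kept literally until you expand it yourself
print(p)  # ~/Downloads
print(p.expanduser().is_absolute())
```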

2. CSV Deduplicator + Cleaner

I deal with CSVs constantly — exports from various tools, client data, logs. They always have duplicates, weird whitespace, and inconsistent formatting.

import csv

def clean_csv(input_file, output_file, key_columns=None):
    seen = set()
    rows_in = rows_out = dupes = 0

    with open(input_file, newline='') as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = []
        for row in reader:
            rows_in += 1
            # Clean whitespace
            cleaned = {k: v.strip() if isinstance(v, str) else v 
                      for k, v in row.items()}

            # Build dedup key
            if key_columns:
                key = tuple(cleaned.get(k, '') for k in key_columns)
            else:
                key = tuple(cleaned.values())

            if key not in seen:
                seen.add(key)
                rows.append(cleaned)
                rows_out += 1
            else:
                dupes += 1

    with open(output_file, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

    print(f"Processed: {rows_in} rows → {rows_out} unique ({dupes} duplicates removed)")

# Example: dedupe on the "email" column (filenames are illustrative)
clean_csv("export.csv", "export_clean.csv", key_columns=["email"])

Time saved: ~30 min/week vs. manual dedup in Excel.
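The core trick here — deduplicating rows via a tuple key in a set — works on any iterable of dicts, not just CSV files. A stripped-down sketch (the sample data is made up):

```python
rows = [
    {"name": "Ada ", "email": "ada@example.com"},
    {"name": "Ada",  "email": "ada@example.com"},   # duplicate after stripping
    {"name": "Grace", "email": "grace@example.com"},
]

seen, unique = set(), []
for row in rows:
    cleaned = {k: v.strip() for k, v in row.items()}
    key = (cleaned["email"],)  # dedup on email only
    if key not in seen:
        seen.add(key)
        unique.append(cleaned)

print(len(unique))  # 2
```

Because tuples are hashable, set membership checks stay O(1) no matter how many rows you process.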

3. Multi-URL Status Checker

When you manage multiple services, APIs, or client sites, checking if everything is up gets tedious. This script pings a list of URLs concurrently and reports status.

import asyncio
import aiohttp  # third-party: pip install aiohttp
import time

URLS = [
    "https://api.example.com/health",
    "https://mysite.com",
    "https://client-app.herokuapp.com",
]

async def check_url(session, url):
    try:
        start = time.time()
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            elapsed = time.time() - start
            return url, resp.status, f"{elapsed:.2f}s"
    except Exception as e:
        return url, "ERROR", str(e)

async def check_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [check_url(session, url) for url in urls]
        results = await asyncio.gather(*tasks)

        print(f"{'URL':<45} {'Status':<10} {'Time':<10}")
        print("-" * 65)
        for url, status, time_str in results:
            emoji = "✅" if status == 200 else "❌"
            print(f"{emoji} {url:<43} {status:<10} {time_str}")

asyncio.run(check_all(URLS))

Time saved: ~10 min/day vs. manually checking each URL.
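If you'd rather skip the aiohttp dependency, the same concurrency is achievable with only the standard library — threads are perfectly fine for I/O-bound checks. A sketch using `concurrent.futures` (the URL below is a placeholder):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def check(url, timeout=10):
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return url, resp.status, f"{time.time() - start:.2f}s"
    except Exception as e:
        return url, "ERROR", str(e)

def check_all(urls):
    # Threads release the GIL while waiting on the network
    with ThreadPoolExecutor(max_workers=10) as pool:
        return list(pool.map(check, urls))

for url, status, elapsed in check_all(["https://example.com"]):
    print(url, status, elapsed)
```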

4. JSON ↔ CSV Converter

APIs return JSON. Spreadsheet people want CSV. This comes up constantly.

import json
import csv

def json_to_csv(json_file, csv_file):
    with open(json_file) as f:
        data = json.load(f)

    if isinstance(data, dict):
        data = [data]

    # Flatten nested objects
    flat_data = []
    for item in data:
        flat = {}
        for key, value in item.items():
            if isinstance(value, (dict, list)):
                flat[key] = json.dumps(value)
            else:
                flat[key] = value
        flat_data.append(flat)

    # Get all unique keys
    fieldnames = list(dict.fromkeys(k for row in flat_data for k in row.keys()))

    with open(csv_file, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(flat_data)

    print(f"Converted {len(flat_data)} records to {csv_file}")

json_to_csv("data.json", "output.csv")
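The heading promises both directions, and the reverse is even shorter. A minimal `csv_to_json` sketch (note the caveat: `csv.DictReader` gives you every value back as a string):

```python
import csv
import json

def csv_to_json(csv_file, json_file, indent=2):
    with open(csv_file, newline='') as f:
        # Each row becomes one dict; all values are strings
        records = list(csv.DictReader(f))

    with open(json_file, 'w') as f:
        json.dump(records, f, indent=indent)

    print(f"Converted {len(records)} records to {json_file}")
```

If you need typed values (ints, bools), you'd have to cast columns yourself after reading.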

5. Batch Image Resizer

Need to resize 50 product photos for a website? Or create thumbnails from a folder of images?

from PIL import Image  # third-party: pip install Pillow
from pathlib import Path

def resize_images(input_dir, output_dir, max_width=800, max_height=800):
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    extensions = {'.jpg', '.jpeg', '.png', '.webp', '.bmp'}
    processed = 0

    for img_file in input_path.iterdir():
        if img_file.suffix.lower() in extensions:
            with Image.open(img_file) as img:
                img.thumbnail((max_width, max_height), Image.LANCZOS)
                output_file = output_path / img_file.name
                img.save(output_file, quality=85, optimize=True)
                processed += 1
                print(f"  {img_file.name}: {img.size[0]}x{img.size[1]}")

    print(f"\nResized {processed} images → {output_dir}")

resize_images("./photos", "./photos-resized", max_width=1200)

Time saved: ~20 min vs. opening each image in an editor.


The Pattern

All five scripts share the same design:

  • Single file, no config — just run it
  • Clear output — tells you what it did
  • Handles edge cases — doesn't crash on weird input
  • Under 50 lines — easy to read and modify

I packaged these (plus 5 more) into a complete automation toolkit if you want ready-to-use versions with CLI arguments, logging, and error handling baked in. But honestly, even these basic versions will save you hours.
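For reference, wiring CLI arguments into any of these takes only a few lines of argparse. Here's the skeleton I'd start from, using the file organizer as the example (the flag names are my own choice, not from the scripts above):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Organize files by extension")
    parser.add_argument("directory", help="folder to organize")
    parser.add_argument("--dry-run", action="store_true",
                        help="report what would move without moving anything")
    return parser

# argparse maps "--dry-run" to the attribute args.dry_run
args = build_parser().parse_args(["~/Downloads", "--dry-run"])
print(args.directory, args.dry_run)  # ~/Downloads True
```

In a real script you'd call `parse_args()` with no arguments so it reads `sys.argv`, then pass `args.directory` into the function.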

What scripts do you keep rewriting? Drop them in the comments — I might add them to the next pack.
