Otto

Python Automation for Beginners: 5 Scripts That Save You Hours Every Week

You don't need to be a senior engineer to automate your work.

These 5 Python scripts require zero experience to run and genuinely save time. I'll walk you through each one — what it does, the full code, and how to use it.


Prerequisites

You only need Python installed. That's it.

Check if you have it:

python3 --version

If not, download from python.org. Takes 3 minutes.


Script 1: Rename 100 Files in 3 Seconds

The problem: You have 100 photos with names like IMG_3847.jpg and need them renamed to 2026-03-22-photo-001.jpg, 2026-03-22-photo-002.jpg, and so on.

The script:

import os
from datetime import datetime

folder = "./photos"  # change this to your folder path
date = datetime.now().strftime("%Y-%m-%d")

files = [f for f in os.listdir(folder) if f.lower().endswith(('.jpg', '.png', '.jpeg'))]
files.sort()

for i, filename in enumerate(files):
    extension = os.path.splitext(filename)[1]
    new_name = f"{date}-photo-{i+1:03d}{extension}"

    old_path = os.path.join(folder, filename)
    new_path = os.path.join(folder, new_name)

    os.rename(old_path, new_path)
    print(f"Renamed: {filename} -> {new_name}")

print(f"\n✅ Done! {len(files)} files renamed.")

Time saved: 2-3 hours of manual renaming → 3 seconds.
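One safety tip before you point this at a folder you care about: preview the renames first. Here's a sketch of the same logic split into a dry-run variant; it prints the plan and only touches the disk when you pass --apply (the folder path and naming pattern are the ones from the script above):

```python
import os
import sys
from datetime import datetime

def plan_renames(folder, date=None):
    """Return (old_name, new_name) pairs without touching the disk."""
    date = date or datetime.now().strftime("%Y-%m-%d")
    files = sorted(f for f in os.listdir(folder)
                   if f.lower().endswith((".jpg", ".jpeg", ".png")))
    return [(f, f"{date}-photo-{i+1:03d}{os.path.splitext(f)[1]}")
            for i, f in enumerate(files)]

if __name__ == "__main__":
    folder = "./photos"  # change this to your folder path
    apply_changes = "--apply" in sys.argv
    for old, new in plan_renames(folder):
        print(f"{'RENAME' if apply_changes else 'would rename'}: {old} -> {new}")
        if apply_changes:
            os.rename(os.path.join(folder, old), os.path.join(folder, new))
```

Run it once with no arguments to check the plan, then again with --apply to commit.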


Script 2: Send Yourself a Daily Email Summary

The problem: You want a daily digest of your to-do list, sent to your inbox every morning.

The script:

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from datetime import datetime

# Config — use a Gmail App Password (not your main password)
sender = "youremail@gmail.com"
receiver = "youremail@gmail.com"
app_password = "xxxx xxxx xxxx xxxx"  # Google App Password

# Your daily tasks (edit this)
tasks = [
    "Review client proposal",
    "Follow up with 3 prospects",
    "Write 500 words for article",
    "Team standup at 10am",
    "Exercise: 30 min run",
]

date = datetime.now().strftime("%A, %B %d, %Y")
task_html = "\n".join([f"<li>☐ {task}</li>" for task in tasks])

html_body = f"""
<h2>📋 Daily Focus — {date}</h2>
<p>Your top priorities for today:</p>
<ul>
{task_html}
</ul>
<p>Make it count! 🚀</p>
"""

msg = MIMEMultipart("alternative")
msg["Subject"] = f"Daily Briefing — {date}"
msg["From"] = sender
msg["To"] = receiver
msg.attach(MIMEText(html_body, "html"))

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login(sender, app_password)
    server.send_message(msg)

print(f"✅ Daily briefing sent to {receiver}")
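One tweak worth making before you schedule this: don't leave the App Password sitting in the file. Read it from an environment variable instead. A minimal sketch (GMAIL_APP_PASSWORD is just a name I picked; set it once with `export GMAIL_APP_PASSWORD="xxxx xxxx xxxx xxxx"` in your shell profile):

```python
import os

def get_app_password(var="GMAIL_APP_PASSWORD"):
    """Fetch the Gmail App Password from the environment, failing loudly if missing."""
    password = os.environ.get(var)
    if not password:
        raise SystemExit(f"{var} is not set; refusing to send email without it.")
    return password
```

Then replace the hardcoded `app_password = "..."` line with `app_password = get_app_password()`.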

Set it to run every morning using cron (Mac/Linux):

0 8 * * * python3 /path/to/daily_email.py

Time saved: Never forgetting your priorities again. Priceless.


Script 3: Download All Images From a Webpage

The problem: You need all images from a webpage downloaded locally.

The script:

import urllib.request
import urllib.parse
import re
import os

url = "https://example.com"  # replace with your URL
output_folder = "./downloaded_images"

os.makedirs(output_folder, exist_ok=True)

# Fetch the page
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as response:
    html = response.read().decode("utf-8")

# Find all image URLs
img_pattern = r'<img[^>]+src=["\']([^"\']+)["\']'
img_urls = re.findall(img_pattern, html)

downloaded = 0
for img_url in img_urls:
    # Handle relative URLs
    if img_url.startswith("//"):
        img_url = "https:" + img_url
    elif img_url.startswith("/"):
        parsed = urllib.parse.urlparse(url)
        img_url = f"{parsed.scheme}://{parsed.netloc}{img_url}"
    elif not img_url.startswith("http"):
        continue

    try:
        filename = os.path.basename(urllib.parse.urlparse(img_url).path)
        if not filename or "." not in filename:
            filename = f"image_{downloaded+1}.jpg"

        filepath = os.path.join(output_folder, filename)
        urllib.request.urlretrieve(img_url, filepath)
        print(f"✅ Downloaded: {filename}")
        downloaded += 1
    except Exception as e:
        print(f"❌ Failed: {img_url} ({e})")

print(f"\n✅ Done! {downloaded} images saved to '{output_folder}'")

Time saved: Hours of right-click → save → repeat.
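A note on the regex: it works on most pages, but regexes over HTML are fragile (they can match `data-src` or `src` attributes inside comments). If you hit a page where it misbehaves, the standard library's html.parser is a more reliable way to do the extraction step; the download loop stays the same. A sketch:

```python
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    """Collect the src attribute of every <img> tag."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.urls.append(src)

def find_image_urls(html_text):
    parser = ImgCollector()
    parser.feed(html_text)
    return parser.urls
```

Swap `img_urls = re.findall(img_pattern, html)` for `img_urls = find_image_urls(html)` and the rest of the script works unchanged.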


Script 4: Convert CSV to Formatted HTML Report

The problem: You have sales data in a CSV and need a clean HTML report.

The script:

import csv
import html

input_file = "sales_data.csv"
output_file = "sales_report.html"

with open(input_file, newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    headers = reader.fieldnames
    rows = list(reader)

total_row = len(rows)

table_rows = ""
for row in rows:
    # html.escape keeps stray < > & characters in the data from breaking the report
    cells = "".join([f"<td>{html.escape(row[h])}</td>" for h in headers])
    table_rows += f"<tr>{cells}</tr>\n"

header_cells = "".join([f"<th>{html.escape(h)}</th>" for h in headers])

html = f"""<!DOCTYPE html>
<html>
<head>
<style>
  body {{ font-family: Arial, sans-serif; padding: 20px; }}
  table {{ border-collapse: collapse; width: 100%; }}
  th, td {{ border: 1px solid #ddd; padding: 8px; text-align: left; }}
  th {{ background-color: #4CAF50; color: white; }}
  tr:nth-child(even) {{ background-color: #f2f2f2; }}
  h1 {{ color: #333; }}
</style>
</head>
<body>
<h1>📊 Sales Report</h1>
<p><strong>{total_row} records</strong></p>
<table>
<thead><tr>{header_cells}</tr></thead>
<tbody>{table_rows}</tbody>
</table>
</body>
</html>"""

with open(output_file, "w", encoding="utf-8") as f:
    f.write(html)

print(f"✅ Report generated: {output_file} ({total_row} rows)")

Time saved: 30 minutes of manual formatting → 1 second.


Script 5: Monitor a Website and Alert You When It Changes

The problem: You want to know when a competitor changes their pricing, or when a job listing is updated.

The script:

import urllib.request
import hashlib
import json
import os
from datetime import datetime

url = "https://example.com/pricing"  # page to monitor
state_file = "monitor_state.json"

def get_page_hash(url):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as response:
        content = response.read()
    return hashlib.md5(content).hexdigest()

# Load previous state
state = {}
if os.path.exists(state_file):
    with open(state_file) as f:
        state = json.load(f)

current_hash = get_page_hash(url)
previous_hash = state.get(url)

if previous_hash is None:
    print(f"✅ First check — baseline saved for {url}")
elif current_hash != previous_hash:
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    print(f"🚨 CHANGE DETECTED at {timestamp}!")
    print(f"   URL: {url}")
    print(f"   Action: Check the page manually")
    # You can add email notification here (Script 2 pattern)
else:
    print(f"✅ No change detected — {url}")

# Save current state
state[url] = current_hash
with open(state_file, "w") as f:
    json.dump(state, f)

Run with cron every hour:

0 * * * * python3 /path/to/monitor.py

Time saved: Hours of manual checking. Never miss a competitor move again.
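One caveat: hashing the raw bytes means any dynamic element on the page (a timestamp, a rotating ad, a CSRF token) will trigger a false alarm. A cheap mitigation is to hash a normalized version of the page instead, with script/style blocks stripped and whitespace collapsed. A sketch of that idea (the regexes are deliberately crude; a real page may need more cleanup):

```python
import hashlib
import re

def normalized_hash(html_text):
    """Hash page text after stripping script/style blocks and collapsing
    whitespace, so cosmetic churn is less likely to look like a change."""
    text = re.sub(r"<(script|style)[^>]*>.*?</\1>", "", html_text,
                  flags=re.IGNORECASE | re.DOTALL)
    text = re.sub(r"\s+", " ", text).strip()
    return hashlib.md5(text.encode("utf-8")).hexdigest()
```

In the script above, you'd decode the response and call `normalized_hash(...)` instead of hashing `content` directly.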


What's Next?

These 5 scripts are a starting point. Once you're comfortable, the next level is:

  • Combine scripts — rename files then email yourself a summary
  • Add error handling — make scripts robust enough to run unattended
  • Sell your automations — freelancers who can automate repetitive tasks charge €50-150/hr
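On the error-handling point: for anything running unattended under cron, the first upgrade I'd make is a wrapper that logs failures instead of dying silently. A generic sketch (assumes you've refactored a script's body into a function you can pass in):

```python
import logging
import traceback

logging.basicConfig(
    filename="automation.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def run_safely(task, name):
    """Run a task function, logging success or the full traceback on failure."""
    try:
        task()
        logging.info("%s: OK", name)
        return True
    except Exception:
        logging.error("%s failed:\n%s", name, traceback.format_exc())
        return False

# Example: run_safely(rename_photos, "rename_photos")
```

Now a flaky network or a missing folder leaves a trail in automation.log instead of an invisible cron failure.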

If you're a freelancer looking to automate your invoicing and contracts, I built two Python tools specifically for that:

Invoice Generator Pro (€14.99) — generates professional invoices from JSON, auto-calculates VAT, outputs PDF-ready HTML

Freelance AI Power Kit (€14.99) — 40 prompts to automate your client communications, proposals, and content


Which script will you use first? Or do you have a repetitive task you want automated? Drop it in the comments — I read everything.
