AI is evolving rapidly, and many of us have gotten used to throwing tasks straight into a chatbox. But AI can be unreliable for automated work, and plenty of services aren't free. For tasks that need to run daily, process large files, or monitor system status, leaning on AI invites "hallucinations" and inconsistent results.
AI is great, but you don't always need a sledgehammer to crack a nut. Python is perfectly capable of handling these simple automation tasks.
Before writing any code, make sure your Python environment is ready.
I recommend using ServBay to manage the development environment. It supports one-click installation of Python, covering everything from the ancient Python 2.7 and 3.5 up to the latest Python 3.14, and all of these versions can coexist side by side. You don't need to configure environment variables by hand or worry about breaking your system's default setup. Install it, and you're ready to go in a minute.
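Once it's installed, a quick sanity check from inside Python confirms which interpreter you are actually running (the version shown in the comment is just an example; your output will differ):

import sys

# Print the interpreter version and its location so you can confirm
# you're on the version you meant to use (output is machine-specific)
print(sys.version)       # e.g. 3.12.x ...
print(sys.executable)    # full path to the active interpreter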
With a stable environment in place, here are a few automation script patterns frequently used in real-world work.
Automatic Retry Mechanism: Making Network Requests Robust
Network fluctuations are normal when writing crawlers or calling APIs. Instead of writing try-except blocks everywhere, encapsulate the retry logic. Professional scripts don't crash just because of a single timeout.
import time
import requests
from requests.exceptions import RequestException

def fetch_with_retry(url, max_retries=3, pause=2):
    """
    GET request with an automatic retry mechanism
    """
    for attempt in range(max_retries):
        try:
            # Setting a timeout is mandatory to prevent the program from hanging indefinitely
            response = requests.get(url, timeout=10)
            response.raise_for_status()  # Raise an exception for 4xx/5xx status codes
            return response
        except RequestException as e:
            print(f"Request failed (Attempt {attempt + 1}/{max_retries}): {e}")
            if attempt == max_retries - 1:
                raise  # Re-raise if the last attempt also fails
            time.sleep(pause)

# Usage Example
try:
    data = fetch_with_retry("https://api.github.com")
    print(f"Request successful, status code: {data.status_code}")
except RequestException:
    print("Failed after multiple retries, please check network or target service.")
Why it works: It absorbs occasional network instability at the code level. The biggest benefit is that the script no longer crashes over a minor network hiccup, so it can genuinely run "unattended", which is perfect for nightly batch tasks.
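If the target service rate-limits aggressively, a fixed pause between retries can be too blunt. A common variation is exponential backoff; the sketch below reuses the same idea with a doubling wait (the cap and doubling factor are arbitrary choices, not part of the script above):

import time
import requests
from requests.exceptions import RequestException

def fetch_with_backoff(url, max_retries=4, base_pause=1, max_pause=30):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response
        except RequestException as e:
            if attempt == max_retries - 1:
                raise
            # Double the wait after each failure, capped at max_pause seconds
            wait = min(base_pause * (2 ** attempt), max_pause)
            print(f"Retry {attempt + 1}/{max_retries} in {wait}s: {e}")
            time.sleep(wait)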
Rename Files Based on Content
Sometimes filenames tell you nothing about what's inside, and opening files one by one to check is time-consuming. Instead, write a script that reads the content and renames the file accordingly. This pattern shows up constantly with invoices, logs, and auto-generated reports.
import os

TARGET_DIR = "./reports"

def clean_filename(text):
    # Replace characters that are unsafe in filenames and cap the length at 50
    return "".join(c if c.isalnum() else "_" for c in text)[:50]

for filename in os.listdir(TARGET_DIR):
    full_path = os.path.join(TARGET_DIR, filename)
    # Ensure we only process files
    if os.path.isfile(full_path):
        try:
            with open(full_path, "r", encoding='utf-8') as f:
                # Read the first line and use it as the new filename
                first_line = f.readline().strip()
            if first_line:
                new_name = clean_filename(first_line) + ".txt"
                new_path = os.path.join(TARGET_DIR, new_name)
                # Prevent overwriting existing files
                if not os.path.exists(new_path):
                    os.rename(full_path, new_path)
                    print(f"Renamed: {filename} -> {new_name}")
        except Exception as e:
            print(f"Unable to process file {filename}: {e}")
Why it works: It solves the pain point of meaningless filenames (like scan_001.txt). Once files are named after their core content, finding the right one is quick and you never have to open them just to check.
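One edge case worth noting: if two files happen to start with the same first line, the second one is silently skipped. A small, hypothetical helper like the one below hands back a non-colliding path instead, which you could swap in for the os.path.exists check above:

import os

def unique_path(directory, base_name, ext=".txt"):
    # Return a path that doesn't collide: report.txt, report_1.txt, report_2.txt, ...
    candidate = os.path.join(directory, base_name + ext)
    counter = 1
    while os.path.exists(candidate):
        candidate = os.path.join(directory, f"{base_name}_{counter}{ext}")
        counter += 1
    return candidate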
Clean Up Long-Unaccessed Zombie Files
Disk space always seems to vanish, usually due to temporary files you downloaded and never opened again. This script cleans up files based on their last access time.
import os
import time

WATCH_DIR = "/path/to/cleanup"
EXPIRY_DAYS = 180  # Delete if not accessed in 6 months

current_time = time.time()

for filename in os.listdir(WATCH_DIR):
    filepath = os.path.join(WATCH_DIR, filename)
    if os.path.isfile(filepath):
        # Get the last access time (atime)
        last_access_time = os.path.getatime(filepath)
        # Compare the idle time against the expiry window
        if current_time - last_access_time > (EXPIRY_DAYS * 86400):
            try:
                os.remove(filepath)
                print(f"Deleted stale file: {filename}")
            except OSError as e:
                print(f"Deletion failed: {e}")
Why it works: Judging by "access time" rather than "creation time" is crucial because it accurately identifies files you truly no longer need. It acts like an invisible janitor, preventing temporary data from silently eating up your disk space.
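If deleting outright feels risky, a dry run that only reports what would be removed is a cheap safety net before you switch on the real cleanup. This sketch follows the same WATCH_DIR/EXPIRY_DAYS convention as above (the path is a placeholder):

import os
import time

WATCH_DIR = "/path/to/cleanup"
EXPIRY_DAYS = 180

now = time.time()
for filename in os.listdir(WATCH_DIR):
    filepath = os.path.join(WATCH_DIR, filename)
    if os.path.isfile(filepath):
        idle_days = (now - os.path.getatime(filepath)) / 86400
        if idle_days > EXPIRY_DAYS:
            # Report only; nothing is deleted in the dry run
            print(f"Would delete: {filename} (idle for {idle_days:.0f} days)")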
Monitor and Kill High-Load Zombie Processes
Some Python scripts (especially those involving multiprocessing or machine learning) can leave orphan processes behind after an unexpected exit, hogging CPU. Manually searching via Activity Monitor is too slow; use a script to self-check.
import psutil

# Set thresholds: CPU usage over 80% and a process name containing "python"
CPU_THRESHOLD = 80.0
PROCESS_NAME = "python"

for proc in psutil.process_iter(['pid', 'name', 'cpu_percent']):
    try:
        # Note: cpu_percent needs a measurement window; the first reading is 0.0,
        # so a real monitor should sample in a loop (see the sketch below)
        if proc.info['name'] and PROCESS_NAME in proc.info['name'].lower():
            if proc.info['cpu_percent'] > CPU_THRESHOLD:
                print(f"High load zombie process detected PID: {proc.info['pid']} (CPU: {proc.info['cpu_percent']}%)")
                proc.kill()
                print("Process terminated.")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass
Why it works: This is an active defense mechanism. It catches the runaway processes that memory leaks or infinite loops leave behind in long-running scripts, preventing a single out-of-control process from dragging down the whole system and saving you a round of manual debugging.
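The inline comment above points at a psutil quirk: the first cpu_percent() reading for a process is 0.0 because there is no previous sample to compare against. A minimal sketch of the two-pass approach (prime the counters, wait, then read) looks like this; the one-second window is an arbitrary choice:

import time
import psutil

CPU_THRESHOLD = 80.0

# First pass: prime the per-process CPU counters (this call returns 0.0)
procs = [p for p in psutil.process_iter(['pid', 'name']) if p.info['name']]
for p in procs:
    try:
        p.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(1)  # Let psutil accumulate a measurement window

# Second pass: now the readings are meaningful
for p in procs:
    try:
        usage = p.cpu_percent(interval=None)
        if usage > CPU_THRESHOLD:
            print(f"High load: PID {p.pid} ({p.info['name']}) at {usage:.1f}%")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass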
The "Undo Button" for Operations: Automatic Backup
Backups are mandatory before writing to config files or critical data. But we often forget. Let Python handle it.
import shutil
import time
import os

def safe_backup(filepath):
    if not os.path.exists(filepath):
        print(f"File does not exist: {filepath}")
        return
    # Generate a backup filename with a timestamp
    timestamp = time.strftime("%Y%m%d_%H%M%S")
    backup_path = f"{filepath}.{timestamp}.bak"
    try:
        shutil.copy2(filepath, backup_path)
        print(f"Backup created: {backup_path}")
    except IOError as e:
        print(f"Backup failed: {e}")
        raise  # A failed backup should interrupt whatever comes next

# Usage Scenario: Before modifying config
config_file = "app_config.yaml"
safe_backup(config_file)
# Execute write operation here...
Why it works: Forcing a backup before any destructive operation (like overwriting) is a basic safety rule in production environments, ensuring you can roll back instantly if an error occurs.
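Taking that one step further, you can pair the backup with an automatic rollback so a failed write restores the original file. The helper below is a hypothetical sketch of that pattern, separate from the script above; it creates its own .rollback.bak copy:

import shutil

def write_with_rollback(filepath, new_text):
    # Hypothetical helper: back up, write, and restore the backup if the write fails
    backup_path = filepath + ".rollback.bak"
    shutil.copy2(filepath, backup_path)
    try:
        with open(filepath, "w", encoding="utf-8") as f:
            f.write(new_text)
    except Exception:
        shutil.copy2(backup_path, filepath)  # Put the original content back
        raise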
System Notifications After Script Completion
Many scripts run for half an hour. You can't stare at the console the whole time. Having the script send a pop-up notification when it finishes is a small detail that improves the work experience.
import os
import platform
import time

def send_notification(title, text):
    system_type = platform.system()
    if system_type == "Darwin":  # macOS
        # Use AppleScript to trigger a notification
        cmd = f"""osascript -e 'display notification "{text}" with title "{title}"'"""
        os.system(cmd)
    elif system_type == "Linux":
        # Linux desktops usually ship notify-send
        cmd = f"""notify-send "{title}" "{text}" """
        os.system(cmd)
    else:
        print(f"Notification: [{title}] {text}")

# Simulate a long task
time.sleep(2)
send_notification("Task Finished", "Data processing script has completed.")
Why it works: Developers don't need to stare blankly at a black-and-white console. You can turn the "waiting for script" downtime into free time; the task will proactively notify you when it's done.
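One caveat with the os.system version: a title or message containing quote characters can break the shell command. A slightly more defensive sketch passes the arguments as a list via subprocess, which skips the shell entirely (macOS branch shown; embedded double quotes would still need escaping for AppleScript itself):

import platform
import subprocess

def notify(title, text):
    if platform.system() == "Darwin":
        script = f'display notification "{text}" with title "{title}"'
        # A list of arguments avoids shell quoting issues around spaces
        subprocess.run(["osascript", "-e", script], check=False)
    else:
        print(f"Notification: [{title}] {text}")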
Lightweight Data Collection
When you need to grab specific fields (like prices or titles) from web pages and organize them into a table, Python is the most efficient tool.
import requests
from bs4 import BeautifulSoup
import csv

def scrape_data(url, output_file):
    # Spoof the User-Agent to get past simple anti-bot checks
    headers = {'User-Agent': 'Mozilla/5.0'}
    try:
        response = requests.get(url, headers=headers, timeout=10)
        soup = BeautifulSoup(response.content, 'html.parser')
        # Suppose we want to scrape titles and links from an article list
        articles = soup.find_all('article')
        data_rows = []
        for article in articles:
            title_tag = article.find('h2')
            if title_tag and title_tag.a:
                title = title_tag.a.text.strip()
                link = title_tag.a['href']
                data_rows.append([title, link])
        # Write to CSV
        with open(output_file, 'w', newline='', encoding='utf-8') as f:
            writer = csv.writer(f)
            writer.writerow(['Title', 'Link'])
            writer.writerows(data_rows)
        print(f"Successfully scraped {len(data_rows)} items and saved to {output_file}")
    except Exception as e:
        print(f"Error during scraping: {e}")

# Example Call
# scrape_data("http://example-blog.com", "results.csv")
Why it works: It turns manual copy-paste into a structured data flow. Compared to doing it by hand, a script can churn through hundreds of pages without complaint, and the output format (like CSV) is uniform and ready to use. Convenient, right?
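In practice, "hundreds of pages" usually means an outer loop over paginated URLs plus a polite delay so you don't hammer the server. Here is a sketch of that loop, assuming the scrape_data function above and a hypothetical ?page= URL scheme:

import time

BASE_URL = "http://example-blog.com/?page={}"  # hypothetical pagination scheme

for page in range(1, 6):
    # One CSV per page keeps the example simple; merge them later if needed
    scrape_data(BASE_URL.format(page), f"results_page_{page}.csv")
    time.sleep(2)  # Be polite: pause between requests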
Automated File Categorization
I don't know about your computer, but my Downloads folder is usually a mess. This script automatically moves files into corresponding folders based on their extensions, so you don't have to drag them manually.
import os
import shutil
from pathlib import Path

SOURCE_DIR = Path("/Users/username/Downloads/MixedData")
DEST_DIR = Path("/Users/username/Documents/Sorted")

def organize_files():
    if not SOURCE_DIR.exists():
        return
    for file_path in SOURCE_DIR.iterdir():
        if file_path.is_file():
            # Get the extension, e.g. .pdf
            ext = file_path.suffix.lower()
            if ext:  # Ignore files without extensions
                # Drop the dot and use it as the folder name, e.g. PDF_Files
                folder_name = ext[1:].upper() + "_Files"
                target_folder = DEST_DIR / folder_name
                # Create the target folder if it doesn't exist yet
                target_folder.mkdir(parents=True, exist_ok=True)
                # Move the file
                try:
                    shutil.move(str(file_path), str(target_folder / file_path.name))
                    print(f"Moved: {file_path.name} -> {folder_name}")
                except shutil.Error as e:
                    print(f"Error moving file: {e}")

if __name__ == "__main__":
    organize_files()
Why it works: It's a free digital organizer. Using the most basic piece of metadata, the file extension, it instantly turns a chaotic download directory into an orderly space and greatly reduces the cognitive load of manual filing.
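If you'd rather group related extensions together (say, .jpg and .png both under Images) instead of creating one folder per extension, a small mapping dictionary slots into the same loop. The categories below are arbitrary examples:

# Map extensions to broader category folders; anything unlisted falls back to "Other"
CATEGORY_MAP = {
    ".jpg": "Images", ".png": "Images", ".gif": "Images",
    ".pdf": "Documents", ".docx": "Documents", ".txt": "Documents",
    ".zip": "Archives", ".tar": "Archives", ".gz": "Archives",
}

def category_for(ext):
    # Look up the folder name for an extension like ".pdf"
    return CATEGORY_MAP.get(ext.lower(), "Other")

Inside organize_files(), you would then use category_for(ext) in place of the ext[1:].upper() + "_Files" expression.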
Conclusion
These scripts aren't complex, but they form the foundation of an automated workflow. Start with a clean environment (ServBay takes care of version coexistence), add robust code logic, and you can offload a massive amount of time-consuming, repetitive work to machines. A true programmer isn't the one who types the fastest, but the one who knows how to make code work for them.

