DEV Community

ScrapeStorm
5 practical Python scripts

Here are 5 practical Python scripts covering file operations, automation, and web requests, with clear explanations and ready-to-use code:

1. Batch Rename Files

Use Case: Rename all files in a folder (add a prefix/suffix or change the extension).

```python
import os

def batch_rename(path, prefix="", suffix="", new_ext=None):
    for filename in os.listdir(path):
        name, ext = os.path.splitext(filename)
        new_name = f"{prefix}{name}{suffix}{new_ext if new_ext else ext}"
        os.rename(
            os.path.join(path, filename),
            os.path.join(path, new_name))

# Example: add the prefix "backup_" and force the .txt extension
batch_rename("/path/to/folder", prefix="backup_", new_ext=".txt")
```
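Note that on POSIX systems `os.rename` silently replaces an existing target, so it can help to preview the mapping before renaming anything. A minimal sketch (`preview_rename` is a hypothetical helper mirroring `batch_rename`'s arguments):

```python
import os

def preview_rename(path, prefix="", suffix="", new_ext=None):
    # Build (old_name, new_name) pairs without touching the filesystem
    pairs = []
    for filename in sorted(os.listdir(path)):
        name, ext = os.path.splitext(filename)
        new_name = f"{prefix}{name}{suffix}{new_ext if new_ext else ext}"
        pairs.append((filename, new_name))
    return pairs
```

Printing the pairs (or checking them for duplicates) before calling `batch_rename` is a cheap safety net.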

2. Download Images from URLs

Use Case: Download multiple images from a list of URLs.

```python
import os
import requests

def download_images(urls, save_dir="images"):
    os.makedirs(save_dir, exist_ok=True)
    for i, url in enumerate(urls):
        try:
            response = requests.get(url, stream=True, timeout=10)
            response.raise_for_status()
            with open(f"{save_dir}/image_{i+1}.jpg", "wb") as f:
                for chunk in response.iter_content(1024):
                    f.write(chunk)
        except Exception as e:
            print(f"Failed to download {url}: {e}")

# Example
urls = ["https://example.com/1.jpg", "https://example.com/2.jpg"]
download_images(urls)
```
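The loop above names every file `image_N.jpg` regardless of its real name or extension. If you would rather keep the original filename, one standard-library option (a sketch; `filename_from_url` is a hypothetical helper) is:

```python
import os
from urllib.parse import urlparse

def filename_from_url(url, default="image.jpg"):
    # Take the last path segment of the URL; fall back if it's empty
    name = os.path.basename(urlparse(url).path)
    return name or default
```

You could then write to `os.path.join(save_dir, filename_from_url(url))` inside the download loop.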

3. Convert CSV to Excel

Use Case: Automate data format conversion (requires pandas; writing .xlsx also needs openpyxl installed).

```python
import pandas as pd

def csv_to_excel(csv_path, excel_path):
    df = pd.read_csv(csv_path)
    df.to_excel(excel_path, index=False)

# Example
csv_to_excel("data.csv", "data.xlsx")
```

4. Schedule Computer Shutdown

Use Case: Shut down the system after a delay (Windows only).

```python
import os
import time

def schedule_shutdown(minutes):
    seconds = minutes * 60
    print(f"Shutting down in {minutes} minutes...")
    time.sleep(seconds)
    os.system("shutdown /s /t 1")

# Example: shut down after 30 minutes
schedule_shutdown(30)
```
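The `shutdown /s /t 1` command only exists on Windows. A hedged sketch of choosing the command per operating system (the Linux/macOS variant usually requires root privileges):

```python
import platform

def shutdown_command():
    # Return the shell command for an immediate shutdown on this OS
    if platform.system() == "Windows":
        return "shutdown /s /t 1"
    # Linux and macOS share the BSD-style flag (typically needs sudo)
    return "shutdown -h now"
```

`schedule_shutdown` could then call `os.system(shutdown_command())` instead of the hard-coded Windows string.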

5. Extract All Links from a Webpage

Use Case: Scrape hyperlinks from a URL.

```python
import requests
from bs4 import BeautifulSoup

def extract_links(url):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        return [a["href"] for a in soup.find_all("a", href=True)]
    except Exception as e:
        print(f"Error: {e}")
        return []

# Example
links = extract_links("https://example.com")
print(links)
```
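`extract_links` returns hrefs exactly as they appear in the HTML, so relative links like `/about` come back unresolved. They can be made absolute with `urllib.parse.urljoin` (`absolutize` is a hypothetical helper name):

```python
from urllib.parse import urljoin

def absolutize(base_url, hrefs):
    # Resolve each href (relative or absolute) against the page URL
    return [urljoin(base_url, h) for h in hrefs]
```

Already-absolute URLs pass through unchanged, so it is safe to apply to the whole list.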

