Jackson Studio

5 AI Automation Scripts That Save Me 10 Hours/Week

I used to spend my weekends doing repetitive coding tasks: checking APIs, reformatting data, writing boilerplate, debugging the same issues over and over.

Then I started building small AI-powered automation scripts. Not complex ML models — just practical Python scripts that leverage GPT-4 to handle boring work.

These 5 scripts now save me ~10 hours every week. I'm sharing them here so you can adapt them for your own workflow.

1. Smart Rate Limiter with Auto-Retry

The Problem

You're calling external APIs (Twitter, Dev.to, Stripe, whatever). Each has different rate limits. You hit the limit, get a 429 error, and your script crashes.

You could hard-code delays, but that's inefficient. Some APIs allow bursts, others are strict. You need something smarter.

The Solution

A rate limiter that reads API headers, respects limits, and auto-retries with exponential backoff.

# smart_rate_limiter.py
import time
import requests
from functools import wraps

class RateLimiter:
    def __init__(self, calls_per_minute=60):
        self.calls_per_minute = calls_per_minute
        self.min_interval = 60.0 / calls_per_minute
        self.last_call = 0

    def wait(self):
        """Ensure minimum interval between calls"""
        elapsed = time.time() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.time()

    def __call__(self, func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            self.wait()

            max_retries = 5
            for attempt in range(max_retries):
                try:
                    response = func(*args, **kwargs)

                    # Check for rate limit in response headers
                    if hasattr(response, 'headers'):
                        remaining = response.headers.get('X-RateLimit-Remaining')
                        if remaining and int(remaining) < 5:
                            reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
                            wait_time = max(reset_time - time.time(), 0)
                            print(f"⚠️ Low rate limit. Waiting {wait_time:.0f}s...")
                            time.sleep(wait_time)

                    return response

                except requests.exceptions.HTTPError as e:
                    if e.response.status_code == 429:  # Rate limited
                        wait_time = 2 ** attempt  # Exponential backoff
                        print(f"⏳ Rate limited. Retry {attempt+1}/{max_retries} in {wait_time}s...")
                        time.sleep(wait_time)
                    else:
                        raise

                except Exception as e:
                    print(f"❌ Error: {e}")
                    if attempt == max_retries - 1:
                        raise
                    time.sleep(2 ** attempt)

            return None

        return wrapper

# Usage
rate_limiter = RateLimiter(calls_per_minute=30)

@rate_limiter
def fetch_user_data(user_id):
    response = requests.get(
        f'https://api.example.com/users/{user_id}',
        headers={'Authorization': 'Bearer YOUR_TOKEN'}
    )
    response.raise_for_status()
    return response.json()

# Call it like normal - rate limiting happens automatically
for user_id in range(1, 100):
    data = fetch_user_data(user_id)
    print(f"Fetched user {user_id}")

Why It Works

  • Respects API headers (X-RateLimit-Remaining, X-RateLimit-Reset)
  • Exponential backoff prevents hammering the API
  • Auto-retry on transient failures
  • Works as a decorator — just add @rate_limiter to any function

I use this for every external API call now. Haven't hit a rate limit error in months.
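One thing worth knowing before you drop this decorator into a long-running job: the `2 ** attempt` schedule commits you to a bounded worst-case wait. A quick sanity check of the numbers:

```python
# Worst-case retry budget for the backoff schedule used above (2 ** attempt).
max_retries = 5
schedule = [2 ** attempt for attempt in range(max_retries)]
print(schedule)       # [1, 2, 4, 8, 16]
print(sum(schedule))  # 31 -- seconds of sleep, worst case, before giving up
```

If 31 seconds is too long for an interactive path, cap the exponent or add a max-wait parameter to the decorator.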

2. AI-Powered Code Review Bot

The Problem

You write code. You know it needs review. But you don't want to bother teammates with every small PR, and self-review misses obvious bugs.

The Solution

A script that uses GPT-4 to review your code, check for bugs, suggest improvements, and enforce style guidelines.

# code_reviewer.py
import os
from openai import OpenAI

client = OpenAI()

def review_code(file_path, language='python'):
    """AI-powered code review"""

    with open(file_path, 'r') as f:
        code = f.read()

    prompt = f"""
    Review this {language} code for:
    1. Bugs and potential errors
    2. Performance issues
    3. Security vulnerabilities
    4. Code style and readability
    5. Missing edge cases

    Code:
    ```{language}
    {code}
    ```

    Provide specific, actionable feedback. If the code is good, say so.
    Format your response as:
    - ✅ What's good
    - ⚠️ Warnings (potential issues)
    - ❌ Critical issues (must fix)
    - 💡 Suggestions (improvements)
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an experienced software engineer performing code review."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.3,  # Lower temp for more focused analysis
        max_tokens=1500
    )

    review = response.choices[0].message.content
    return review

def review_diff(diff_text):
    """Review a git diff instead of full files"""

    prompt = f"""
    Review this code diff. Focus on the changes being made.

    {diff_text}

    Check for:
    - Breaking changes
    - New bugs introduced
    - Performance implications
    - Missing tests for new code
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are reviewing a git diff."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.3
    )

    return response.choices[0].message.content

# Example usage
if __name__ == "__main__":
    import sys

    if len(sys.argv) < 2:
        print("Usage: python code_reviewer.py <file_path>")
        sys.exit(1)

    file_path = sys.argv[1]
    print(f"📝 Reviewing {file_path}...\n")

    review = review_code(file_path)
    print(review)

Usage

# Review a single file
python code_reviewer.py my_script.py

# Review uncommitted changes
git diff | python -c "
from code_reviewer import review_diff
import sys
print(review_diff(sys.stdin.read()))
"

Real Example Output

✅ What's good:
- Clean function separation
- Good error handling with try/except
- Descriptive variable names

⚠️ Warnings:
- Line 23: Large list comprehension might be slow for big datasets
- Line 45: No input validation on user_id parameter
- Consider adding type hints for better IDE support

❌ Critical issues:
- Line 67: SQL query is vulnerable to injection (use parameterized queries)
- Line 89: File handle not closed properly (use context manager)

💡 Suggestions:
- Add docstrings to public functions
- Consider caching the API responses (lines 34-40)
- Extract magic numbers to named constants

I run this before every commit. It's caught dozens of bugs that I would've missed.
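One caveat: review_code sends the entire file in a single prompt, which breaks down once a large file exceeds the model's context window. A minimal line-based chunker lets you review big files piece by piece (the function name and the 200-line default are my own choices, not part of the script above):

```python
def chunk_code(code, max_lines=200):
    """Split source into line-based chunks small enough for one review prompt."""
    lines = code.splitlines()
    return ['\n'.join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

# Each chunk can then go through a review_code-style prompt separately.
```

Chunking on line boundaries can split a function across chunks; for higher-quality reviews you could split on top-level definitions instead.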

3. Trend Analyzer for Content Ideas

The Problem

You want to write content that people actually care about. But how do you know what's trending? Manually checking HackerNews, Reddit, and Dev.to daily is tedious.

The Solution

A script that scrapes trending topics, analyzes them with AI, and suggests content ideas ranked by potential impact.

# trend_analyzer.py
import requests
from datetime import datetime
from openai import OpenAI
from collections import Counter

client = OpenAI()

def fetch_hackernews_trends():
    """Get top stories from HackerNews"""
    top_stories_url = 'https://hacker-news.firebaseio.com/v0/topstories.json'
    story_url = 'https://hacker-news.firebaseio.com/v0/item/{}.json'

    response = requests.get(top_stories_url)
    story_ids = response.json()[:50]  # Top 50 stories

    titles = []
    for story_id in story_ids:
        story_response = requests.get(story_url.format(story_id))
        story = story_response.json()
        if story and 'title' in story:
            titles.append(story['title'])

    return titles

def fetch_devto_trends():
    """Get trending posts from Dev.to"""
    url = 'https://dev.to/api/articles?per_page=30&top=7'  # Top posts this week

    response = requests.get(url)
    articles = response.json()

    return [article['title'] for article in articles]

def analyze_trends(titles):
    """Use AI to identify common themes and suggest content"""

    titles_text = '\n'.join(f"- {title}" for title in titles)

    prompt = f"""
    Analyze these trending titles from developer communities:

    {titles_text}

    Tasks:
    1. Identify the top 5 themes/topics that appear most frequently
    2. For each theme, suggest 2 blog post ideas that would resonate with this audience
    3. Rank suggestions by potential impact (engagement + traffic)

    Format:
    Theme: [theme name]
    Why it's hot: [explanation]
    Post ideas:
    1. [specific, actionable title]
    2. [specific, actionable title]
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a content strategist analyzing trends in the developer community."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.7,
        max_tokens=2000
    )

    return response.choices[0].message.content

def extract_keywords(titles):
    """Simple keyword extraction from titles"""
    # Remove common words
    stop_words = {'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for', 
                  'of', 'with', 'by', 'from', 'how', 'why', 'what', 'that', 'this', 'is'}

    words = []
    for title in titles:
        words.extend([word.lower().strip('.,!?') for word in title.split() 
                     if word.lower() not in stop_words and len(word) > 3])

    # Count frequency
    word_counts = Counter(words)
    return word_counts.most_common(15)

if __name__ == "__main__":
    print("🔍 Fetching trending topics...")

    hn_titles = fetch_hackernews_trends()
    devto_titles = fetch_devto_trends()

    all_titles = hn_titles + devto_titles

    print(f"📊 Analyzing {len(all_titles)} trending posts...")

    # Show keyword frequency
    keywords = extract_keywords(all_titles)
    print("\n🔥 Hot keywords:")
    for word, count in keywords[:10]:
        print(f"  {word}: {count} mentions")

    # AI analysis
    print("\n🤖 AI Analysis:\n")
    analysis = analyze_trends(all_titles)
    print(analysis)

    # Save results
    with open('trend_analysis.txt', 'w') as f:
        f.write(f"Trend Analysis - {datetime.now()}\n")
        f.write("=" * 50 + "\n\n")
        f.write(analysis)

    print("\n✅ Analysis saved to trend_analysis.txt")

Output Example

🔥 Hot keywords:
  ai: 18 mentions
  python: 12 mentions
  typescript: 9 mentions
  docker: 8 mentions
  security: 7 mentions

🤖 AI Analysis:

Theme: AI/LLM Integration
Why it's hot: Massive interest in practical AI applications, particularly around automation and productivity
Post ideas:
1. "Building a Personal AI Assistant with OpenAI API and Python"
2. "5 Real-World Problems I Solved by Adding GPT-4 to My Scripts"

Theme: Developer Productivity
Why it's hot: Developers are seeking ways to automate repetitive tasks and optimize workflows
Post ideas:
1. "How I Automated My Entire Development Workflow (And You Can Too)"
2. "10 CLI Tools That Made Me 3x More Productive"

[... more themes ...]

I run this every Monday morning. It gives me a week's worth of content ideas in 2 minutes.
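One practical tweak: fetch_hackernews_trends makes 50 HTTP requests one at a time, which dominates the script's runtime. A thread pool cuts that to a few seconds. This is a sketch, and fetch_one here stands in for the per-story requests.get call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_titles_concurrently(story_ids, fetch_one, max_workers=10):
    """Fetch story titles in parallel; fetch_one(story_id) returns a title or None."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order; None results (deleted/missing stories) are dropped
        results = list(pool.map(fetch_one, story_ids))
    return [title for title in results if title]
```

Ten workers is a polite default; the HN Firebase API has no auth, but hammering it with hundreds of threads is bad manners.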

4. Automatic Documentation Generator

The Problem

You write code. You know you should document it. But writing docs is boring, so you don't.

Three months later, you (or a teammate) stare at your code and have no idea what it does.

The Solution

A script that reads your code and generates comprehensive documentation automatically.

# doc_generator.py
import os
import ast
from openai import OpenAI

client = OpenAI()

def extract_functions(file_path):
    """Parse Python file and extract function signatures"""
    with open(file_path, 'r') as f:
        tree = ast.parse(f.read())

    functions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Get function name, args, and docstring
            args = [arg.arg for arg in node.args.args]
            docstring = ast.get_docstring(node) or ""

            functions.append({
                'name': node.name,
                'args': args,
                'docstring': docstring,
                'lineno': node.lineno
            })

    return functions

def generate_docs(file_path):
    """Generate documentation for a Python file"""

    with open(file_path, 'r') as f:
        code = f.read()

    # Not used in the prompt below, but handy if you want to
    # document functions individually or build a table of contents.
    functions = extract_functions(file_path)

    prompt = f"""
    Generate comprehensive documentation for this Python module.

    Code:
    ```python
    {code}
    ```

    Include:
    1. Module overview (what it does, main purpose)
    2. For each function: description, parameters, return value, example usage
    3. Dependencies and setup instructions
    4. Common use cases

    Format as clean markdown suitable for a README.
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a technical writer creating API documentation."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.3,
        max_tokens=3000
    )

    docs = response.choices[0].message.content

    # Save documentation
    doc_path = file_path.replace('.py', '.md')
    with open(doc_path, 'w') as f:
        f.write(docs)

    print(f"✅ Documentation generated: {doc_path}")
    return doc_path

# Batch process
def document_project(directory):
    """Generate docs for all Python files in a directory"""
    for root, dirs, files in os.walk(directory):
        for file in files:
            if file.endswith('.py') and not file.startswith('__'):
                file_path = os.path.join(root, file)
                print(f"📝 Documenting {file_path}...")
                generate_docs(file_path)

if __name__ == "__main__":
    import sys

    if len(sys.argv) < 2:
        print("Usage: python doc_generator.py <file_or_directory>")
        sys.exit(1)

    path = sys.argv[1]

    if os.path.isdir(path):
        document_project(path)
    else:
        generate_docs(path)

Usage

# Document a single file
python doc_generator.py my_module.py

# Document an entire project
python doc_generator.py ./src/

Now every Python file has a companion .md file with full documentation. When teammates ask "what does this function do?", I just link them to the auto-generated docs.
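If you're wondering what extract_functions actually pulls out of a file, here's the same ast pattern run on an inline snippet:

```python
import ast

source = '''
def greet(name, greeting="hi"):
    """Say hello."""
    return f"{greeting}, {name}!"
'''

tree = ast.parse(source)
# Find the first function definition, exactly as extract_functions does
func = next(n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef))

info = {
    'name': func.name,
    'args': [arg.arg for arg in func.args.args],
    'docstring': ast.get_docstring(func) or "",
}
print(info)  # {'name': 'greet', 'args': ['name', 'greeting'], 'docstring': 'Say hello.'}
```

Because ast parses rather than executes, this is safe to run on untrusted code, unlike importing the module to inspect it.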

5. Smart Log Analyzer

The Problem

Your app crashes in production. You open the log file. 10,000 lines of noise. Somewhere in there is the actual error.

You spend 20 minutes scrolling, searching, trying to find the root cause.

The Solution

A script that uses AI to analyze logs, identify errors, and suggest fixes.

# log_analyzer.py
import re
from openai import OpenAI

client = OpenAI()

def extract_errors(log_text):
    """Extract error lines and surrounding context"""
    lines = log_text.split('\n')
    errors = []

    error_patterns = [
        r'ERROR',
        r'FATAL',
        r'Exception',
        r'Traceback',
        r'failed',
        r'\[ERROR\]'
    ]

    pattern = '|'.join(error_patterns)

    for i, line in enumerate(lines):
        if re.search(pattern, line, re.IGNORECASE):
            # Get context: 3 lines before and after
            start = max(0, i - 3)
            end = min(len(lines), i + 4)
            context = '\n'.join(lines[start:end])
            errors.append(context)

    return errors

def analyze_logs(log_file_path):
    """Analyze log file and identify issues"""

    with open(log_file_path, 'r') as f:
        log_text = f.read()

    # Extract only error sections to save tokens
    errors = extract_errors(log_text)

    if not errors:
        print("✅ No errors found in logs")
        return

    errors_text = '\n\n---\n\n'.join(errors[:10])  # Max 10 errors

    prompt = f"""
    Analyze these application logs and identify issues:

    {errors_text}

    For each error:
    1. Explain what went wrong
    2. Identify the root cause
    3. Suggest a fix

    Prioritize by severity. Be specific.
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a senior DevOps engineer debugging production issues."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.3,
        max_tokens=2000
    )

    analysis = response.choices[0].message.content

    print("🔍 Log Analysis:\n")
    print(analysis)

    return analysis

if __name__ == "__main__":
    import sys

    if len(sys.argv) < 2:
        print("Usage: python log_analyzer.py <log_file>")
        sys.exit(1)

    log_file = sys.argv[1]
    analyze_logs(log_file)

Example Output

🔍 Log Analysis:

**Error 1: Database Connection Timeout**
Root cause: Connection pool exhausted. Max connections (20) reached, new requests timing out after 30s.
Fix:
- Increase max_connections in database config
- Add connection pooling with proper cleanup
- Implement retry logic with exponential backoff

**Error 2: Memory Leak in Background Worker**
Root cause: Worker process not releasing memory after processing large files. Heap grows unbounded.
Fix:
- Add explicit garbage collection after processing
- Stream large files instead of loading into memory
- Monitor worker memory and restart if threshold exceeded

**Error 3: API Rate Limit Exceeded**
Root cause: Making 300+ requests/minute to external API (limit: 100/min)
Fix:
- Implement request queue with rate limiting
- Cache API responses (TTL: 5 minutes)
- Add exponential backoff on 429 errors

This turns 20 minutes of manual log diving into a 30-second AI analysis.
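One addition that pays off on real production logs: the same error often repeats hundreds of times, and sending duplicates to GPT-4 wastes tokens and drowns out the rarer failures. A small dedup step run on extract_errors' output helps (dedupe_errors is my name for it, not part of the script above):

```python
from collections import Counter

def dedupe_errors(error_blocks):
    """Collapse repeated error blocks, annotating each with its occurrence count."""
    counts = Counter(error_blocks)  # insertion-ordered on Python 3.7+
    return [f"(x{n}) {block}" if n > 1 else block
            for block, n in counts.items()]
```

Pass the deduped list into the prompt instead of the raw errors[:10] slice; the count annotations also tell the model which failure is the noisiest.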

The Impact

These 5 scripts save me roughly:

  • Rate limiter: 1 hour/week (no more debugging API errors)
  • Code review bot: 2 hours/week (catch bugs before they hit production)
  • Trend analyzer: 1.5 hours/week (no more manual research)
  • Doc generator: 3 hours/week (documentation is automatic)
  • Log analyzer: 2 hours/week (faster debugging)

Total: ~9.5 hours/week saved

More importantly, they've eliminated entire categories of annoying, repetitive work. I can focus on building features instead of fighting infrastructure.

Getting Started

All these scripts follow the same pattern:

  1. Read input (file, API, logs)
  2. Send to GPT-4 with a clear prompt
  3. Parse and format the response
  4. Save or display the result

You can adapt them to your workflow in minutes. The key is writing good prompts and handling errors gracefully.
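That four-step pattern is concrete enough to write down once. Here's a skeleton of it; all four parameter names are placeholders for your own pieces, and the stub model below just uppercases the prompt so the wiring is visible:

```python
def run_pipeline(read_input, build_prompt, ask_model, handle_result):
    """The shape shared by all five scripts: read -> prompt -> model -> output."""
    raw = read_input()            # 1. read input (file, API, logs)
    prompt = build_prompt(raw)    # 2. wrap it in a clear prompt
    answer = ask_model(prompt)    # 3. send to GPT-4 (stubbed here)
    return handle_result(answer)  # 4. parse, save, or display the result

# Example wiring with stand-ins; swap ask_model for a real API call.
result = run_pipeline(
    read_input=lambda: "error: db timeout",
    build_prompt=lambda raw: f"Explain this log line: {raw}",
    ask_model=lambda prompt: prompt.upper(),  # stub model
    handle_result=lambda answer: answer,
)
```

Keeping the four steps as separate functions also makes each one testable without burning API credits.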

Resources


Enjoyed this? Support the project on PayPal

📘 Want more automation scripts? Check out my AI Automation Blueprint — 20+ ready-to-use scripts for developers.


What Would You Automate?

Drop a comment with your most annoying repetitive task. I might write a script for it and share it in a follow-up post!


Free Resource: AI Automation Cheat Sheet

If you're building automation pipelines like this, save yourself some debugging time:

Get the AI Automation Workflow Cheat Sheet (Free) — 5 production patterns for fallback chains, rate limiting, quality gates, cost optimization, and dead man's switch. Python code included, copy-paste ready.

Real data from 30 days of running this exact kind of pipeline.
