
Alex
I Automated My Entire Morning Workflow with 50 Lines of Python

Every morning I used to spend 45 minutes on repetitive tasks: checking emails, scanning security alerts, reviewing data dashboards, and updating spreadsheets. Now it takes 2 minutes.

Here's the exact Python script I use and how you can build your own.

The Problem

My morning routine looked like this:

| Task | Time |
| --- | --- |
| Check security alerts | 10 min |
| Scan data dashboards | 15 min |
| Update tracking spreadsheet | 10 min |
| Send team summary | 10 min |

Total: 45 minutes of brain-dead work before I even start coding.

The Solution: One Script to Rule Them All

```python
import csv
from datetime import datetime, timedelta
from pathlib import Path

def check_security_status(log_path):
    """Scan the last 24h of logs for WARN/ERROR entries."""
    cutoff = datetime.now() - timedelta(hours=24)
    alerts = []

    for line in Path(log_path).read_text().splitlines():
        parts = line.split("|")
        if len(parts) < 3:
            continue
        try:
            ts = datetime.fromisoformat(parts[0].strip())
        except ValueError:  # skip malformed timestamps instead of crashing
            continue
        if ts > cutoff and parts[1].strip() in ("WARN", "ERROR"):
            alerts.append(parts[2].strip())

    return alerts

def aggregate_metrics(csv_path):
    """Pull key metrics from the daily CSV export."""
    metrics = {"total": 0, "errors": 0, "latency_avg": 0.0}
    rows = 0

    with open(csv_path) as f:
        for row in csv.DictReader(f):
            metrics["total"] += int(row["requests"])
            metrics["errors"] += int(row["errors"])
            metrics["latency_avg"] += float(row["latency_ms"])
            rows += 1

    if rows:
        metrics["latency_avg"] /= rows
    return metrics

def generate_summary(alerts, metrics):
    """Build a human-readable summary."""
    status = "ALL CLEAR" if not alerts else f"{len(alerts)} ALERTS"

    # chr(10) is a newline; f-strings before Python 3.12 can't contain backslashes
    return f"""
=== Daily Dev Summary ({datetime.now().strftime('%Y-%m-%d')}) ===

Security: {status}
{chr(10).join(f'  - {a}' for a in alerts[:5])}

Metrics:
  Requests: {metrics['total']:,}
  Error rate: {metrics['errors']/max(metrics['total'],1)*100:.1f}%
  Avg latency: {metrics['latency_avg']:.0f}ms
"""

# Run everything
alerts = check_security_status("/var/log/app/security.log")
metrics = aggregate_metrics("/data/daily_metrics.csv")
summary = generate_summary(alerts, metrics)
print(summary)
```
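For reference, `check_security_status` assumes pipe-delimited log lines with an ISO 8601 timestamp in the first field. Here's what parsing one such line looks like (the sample line itself is made up):

```python
from datetime import datetime

# Hypothetical line in the assumed "timestamp | level | message" format
line = "2024-05-01T08:15:00 | ERROR | failed login burst from 10.0.0.8"

ts_str, level, message = (part.strip() for part in line.split("|"))
ts = datetime.fromisoformat(ts_str)  # requires ISO 8601, e.g. 2024-05-01T08:15:00

print(ts.date(), level, message)
```

If your logger emits a different timestamp format, swap `fromisoformat` for `datetime.strptime` with the matching format string.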

Key Takeaways

  1. Start small - automate ONE task first, then chain them
  2. Use pathlib - it handles cross-platform paths cleanly
  3. CSV is underrated - most dashboard exports support it
  4. Parameterize everything - paths, thresholds, recipients should be configurable
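For takeaway #4, the simplest route is a thin `argparse` wrapper so paths and the lookback window live on the command line instead of in the source. This is a sketch of my own, not from the script above, and the flag names are arbitrary:

```python
import argparse
from pathlib import Path

def parse_args(argv=None):
    # Hypothetical CLI wrapper; defaults mirror the hard-coded paths above
    parser = argparse.ArgumentParser(description="Morning summary bot")
    parser.add_argument("--log", type=Path, default=Path("/var/log/app/security.log"))
    parser.add_argument("--metrics", type=Path, default=Path("/data/daily_metrics.csv"))
    parser.add_argument("--hours", type=int, default=24, help="alert lookback window")
    return parser.parse_args(argv)

args = parse_args(["--hours", "12"])  # pass no argv to read sys.argv instead
print(args.hours, args.log)
```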

What I'd Add Next

  • Slack/Discord webhook for critical alerts
  • Cron job for truly hands-free operation
  • Historical trend tracking with pandas
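For the webhook idea, Slack's incoming webhooks accept a JSON body with a `"text"` field, so a stdlib-only sketch could look like this (the webhook URL is a placeholder; Slack issues the real one per channel):

```python
import json
import urllib.request

def build_slack_payload(text):
    # Slack incoming webhooks expect a JSON object with a "text" key
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(webhook_url, text):
    """POST a plain-text message to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=build_slack_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; fires the message
        return resp.status == 200

# Usage, once you have a real webhook URL:
# post_to_slack("https://hooks.slack.com/services/...", summary)
```

You'd only call this when `alerts` is non-empty, so quiet mornings stay quiet.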


What repetitive tasks are eating your mornings? Drop a comment 👇
