German Yamil
Python logging: Stop Using print() in Your Automation Scripts

You spent an hour debugging a cron job last night. The script ran fine manually, but in production it silently failed. You had no idea what happened because all your print() calls vanished into the void.

Here is the problem with print():

  • Cron mails or discards stdout — unless you explicitly redirect it, your output never lands anywhere you can inspect later
  • No timestamps — you cannot tell when something happened
  • No severity levels — a warning and an error look identical
  • No filtering — you cannot say "show me only errors"

The fix is one import away: Python's built-in logging module.

Free: AI Publishing Checklist — 7 steps in Python · Full pipeline: germy5.gumroad.com/l/xhxkzz (pay what you want, min $9.99)


The Five Logging Levels

Before writing any code, understand the severity ladder:

Level     Value  When to use
DEBUG     10     Detailed diagnostic info (dev only)
INFO      20     Confirmation that things are working
WARNING   30     Something unexpected, but the script continues
ERROR     40     A function failed, needs attention
CRITICAL  50     The script cannot continue
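
The level you configure acts as a threshold: any message below it is dropped before a handler ever sees it. A minimal sketch of that filtering:

```python
import logging

# Root logger set to WARNING: DEBUG and INFO messages are filtered out
logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")

logging.debug("suppressed")   # 10 < 30, dropped
logging.info("suppressed")    # 20 < 30, dropped
logging.warning("shown")      # 30 >= 30, printed
```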

The Basics: basicConfig in 5 Lines

import logging

logging.basicConfig(level=logging.DEBUG)

logging.debug("Reading config file...")
logging.info("Script started")
logging.warning("API rate limit at 80%")
logging.error("Could not connect to database")
logging.critical("Disk full — aborting")

Output:

DEBUG:root:Reading config file...
INFO:root:Script started
WARNING:root:API rate limit at 80%
ERROR:root:Could not connect to database
CRITICAL:root:Disk full — aborting

Better than print(), but there are still no timestamps, and output goes only to the console (stderr, by default).


Pattern 1: Log to File AND Console Simultaneously

This is the most useful pattern for automation scripts. You want to see logs in your terminal while they are also saved to disk.

import logging

def setup_logging(log_file="app.log", level=logging.DEBUG):
    fmt = "%(asctime)s %(levelname)-8s %(name)s %(message)s"
    datefmt = "%Y-%m-%d %H:%M:%S"

    logging.basicConfig(
        level=level,
        format=fmt,
        datefmt=datefmt,
        handlers=[
            logging.FileHandler(log_file),
            logging.StreamHandler(),          # console
        ],
    )

setup_logging()

log = logging.getLogger(__name__)
log.info("Logging to file and console at the same time")

Output (console and app.log):

2026-05-03 14:22:01 INFO     __main__ Logging to file and console at the same time

The format string %(asctime)s %(levelname)-8s %(name)s %(message)s gives you:

  • timestamp — exactly when each line was written
  • level — padded to 8 chars so columns align
  • name — which module produced the message
  • message — what you logged
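
The formatter understands more record attributes than those four; if you also want to know where a message came from, %(filename)s and %(lineno)d record the call site. A small sketch (the extra fields are my addition, not part of the format string above):

```python
import logging

# Add the call site (file and line) to the format string
fmt = "%(asctime)s %(levelname)-8s %(filename)s:%(lineno)d %(message)s"
logging.basicConfig(level=logging.INFO, format=fmt, datefmt="%Y-%m-%d %H:%M:%S")

logging.getLogger(__name__).info("this line shows its own location")
```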

Pattern 2: A Reusable get_logger() Helper

Instead of calling setup_logging() in every script, put a helper in a shared file (e.g., utils/logger.py) and import it everywhere:

# utils/logger.py
import logging
import os

def get_logger(name: str) -> logging.Logger:
    """Return a logger configured for the calling module."""
    fmt = "%(asctime)s %(levelname)-8s %(name)s %(message)s"
    datefmt = "%Y-%m-%d %H:%M:%S"

    logger = logging.getLogger(name)

    if not logger.handlers:   # avoid duplicate handlers on re-import
        level_name = os.getenv("LOG_LEVEL", "INFO").upper()
        level = getattr(logging, level_name, logging.INFO)
        logger.setLevel(level)

        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(fmt=fmt, datefmt=datefmt))
        logger.addHandler(handler)

    return logger

Usage in any script:

from utils.logger import get_logger

log = get_logger(__name__)
log.info("This module has its own named logger")

Named loggers (__name__) mean you can immediately see which file generated each log line — essential when a project grows beyond one file.


Pattern 3: Log Exceptions with exc_info=True

Never swallow exceptions silently. Log the full traceback with one extra argument:

import requests
from utils.logger import get_logger

log = get_logger(__name__)

def fetch_data(url: str) -> dict:
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.json()
    except Exception as e:
        log.error("Failed to fetch %s: %s", url, e, exc_info=True)
        return {}

exc_info=True appends the full traceback to the log line. You can also use log.exception("message") as a shortcut — it calls log.error with exc_info=True automatically.
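
A quick sketch of that shortcut — parse_port is a made-up helper; the point is that log.exception captures the traceback without an explicit exc_info argument:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def parse_port(raw: str) -> int:
    """Return the port as an int, or 0 if the value is unusable."""
    try:
        return int(raw)
    except ValueError:
        # Logs at ERROR level and appends the current traceback automatically
        log.exception("Invalid port value: %r", raw)
        return 0

parse_port("not-a-number")   # logs the ValueError with its full traceback
```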


Pattern 4: Different Log Levels per Environment

Use an environment variable so you get verbose output in development and quiet output in production:

import logging
import os

def get_level() -> int:
    """Read LOG_LEVEL from the environment, default to WARNING in prod."""
    env = os.getenv("APP_ENV", "production").lower()
    default = "DEBUG" if env == "development" else "WARNING"
    level_name = os.getenv("LOG_LEVEL", default).upper()
    return getattr(logging, level_name, logging.WARNING)

Run your script in different modes without changing a single line of code:

# Development — see everything
APP_ENV=development python sync.py

# Production — only warnings and above
python sync.py

# Temporarily increase verbosity in prod
LOG_LEVEL=DEBUG python sync.py
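
Wiring the helper into your startup code is a single call — a sketch combining get_level with basicConfig:

```python
import logging
import os

def get_level() -> int:
    """LOG_LEVEL wins if set; otherwise APP_ENV picks the default."""
    env = os.getenv("APP_ENV", "production").lower()
    default = "DEBUG" if env == "development" else "WARNING"
    # getattr falls back to WARNING if LOG_LEVEL holds a typo
    return getattr(logging, os.getenv("LOG_LEVEL", default).upper(), logging.WARNING)

# One call at startup; nothing to change between environments
logging.basicConfig(level=get_level())
```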

Pattern 5: Rotating Log Files (Don't Fill the Disk)

A long-running script that writes a new log line every minute will produce 1,440 lines per day. Over months, that adds up. Use RotatingFileHandler to cap the file size and keep a fixed number of backups:

import logging
from logging.handlers import RotatingFileHandler

def setup_rotating_log(log_file="app.log"):
    fmt = "%(asctime)s %(levelname)-8s %(name)s %(message)s"
    datefmt = "%Y-%m-%d %H:%M:%S"
    formatter = logging.Formatter(fmt=fmt, datefmt=datefmt)

    file_handler = RotatingFileHandler(
        log_file,
        maxBytes=1 * 1024 * 1024,   # 1 MB per file
        backupCount=5,               # keep app.log, app.log.1 … app.log.5
    )
    file_handler.setFormatter(formatter)

    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)

    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(file_handler)
    root.addHandler(console_handler)

When app.log reaches 1 MB, it is renamed to app.log.1, and a fresh app.log is created. After 5 rotations the oldest backup is deleted automatically. Disk usage stays bounded at 6 MB maximum.
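
If rotating by the clock suits your schedule better than rotating by size (say, one file per day), the standard library also ships TimedRotatingFileHandler — a sketch:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "app.log",
    when="midnight",   # roll over to a new file each day
    backupCount=7,     # keep one week of dated backups
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)-8s %(message)s", "%Y-%m-%d %H:%M:%S")
)

logger = logging.getLogger("timed_example")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("rotates by the clock instead of by file size")
```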


Complete Example: Automation Script with Proper Logging

Here is a realistic script that downloads a JSON feed, processes the records, and saves a report — using everything covered above:

#!/usr/bin/env python3
"""
fetch_report.py — Download JSON feed and write a summary report.
Usage: python fetch_report.py
       APP_ENV=development python fetch_report.py
"""

import json
import logging
import os
import urllib.request
from datetime import datetime, timezone
from logging.handlers import RotatingFileHandler
from pathlib import Path

# ── logging setup ──────────────────────────────────────────────────────────────

def get_logger(name: str) -> logging.Logger:
    fmt = "%(asctime)s %(levelname)-8s %(name)s %(message)s"
    datefmt = "%Y-%m-%d %H:%M:%S"
    formatter = logging.Formatter(fmt=fmt, datefmt=datefmt)

    env = os.getenv("APP_ENV", "production").lower()
    default_level = "DEBUG" if env == "development" else "WARNING"
    level = getattr(logging, os.getenv("LOG_LEVEL", default_level).upper(), logging.WARNING)

    logger = logging.getLogger(name)
    if not logger.handlers:
        logger.setLevel(level)

        # console
        ch = logging.StreamHandler()
        ch.setFormatter(formatter)
        logger.addHandler(ch)

        # rotating file
        fh = RotatingFileHandler("fetch_report.log", maxBytes=1_048_576, backupCount=3)
        fh.setFormatter(formatter)
        logger.addHandler(fh)

    return logger


log = get_logger(__name__)

# ── business logic ─────────────────────────────────────────────────────────────

FEED_URL = "https://jsonplaceholder.typicode.com/todos"
REPORT_PATH = Path("report.json")


def fetch_todos(url: str) -> list[dict]:
    log.info("Fetching data from %s", url)
    try:
        with urllib.request.urlopen(url, timeout=15) as resp:
            data = json.loads(resp.read().decode())
        log.debug("Received %d records", len(data))
        return data
    except Exception as exc:
        log.error("Fetch failed for %s: %s", url, exc, exc_info=True)
        return []


def build_report(todos: list[dict]) -> dict:
    if not todos:
        log.warning("No records to process — returning empty report")
        return {}

    completed = [t for t in todos if t.get("completed")]
    log.debug("%d of %d items are completed", len(completed), len(todos))

    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "total": len(todos),
        "completed": len(completed),
        "pending": len(todos) - len(completed),
    }


def save_report(report: dict, path: Path) -> None:
    if not report:
        log.warning("Empty report — skipping write to %s", path)
        return
    path.write_text(json.dumps(report, indent=2))
    log.info("Report saved to %s", path)


def main() -> None:
    log.info("=== fetch_report starting ===")
    todos = fetch_todos(FEED_URL)
    report = build_report(todos)
    save_report(report, REPORT_PATH)
    log.info("=== fetch_report finished ===")


if __name__ == "__main__":
    main()

Run it in development mode to see every log line:

APP_ENV=development python fetch_report.py

Run it in a cron job — only warnings and errors will appear (and they land in fetch_report.log):

0 6 * * * /usr/bin/python3 /home/user/fetch_report.py

No more silent failures.


Quick Reference

Task                   Code
Basic setup            logging.basicConfig(level=logging.INFO)
Named logger           log = logging.getLogger(__name__)
File + console         add FileHandler and StreamHandler to handlers=[]
Log an exception       log.exception("msg") or log.error("msg", exc_info=True)
Rotate files           RotatingFileHandler(file, maxBytes=1_048_576, backupCount=5)
Control level via env  LOG_LEVEL=DEBUG python script.py

What to do next

  1. Open your most-used automation script right now
  2. Replace every print() with the appropriate log level (info, warning, error)
  3. Add the get_logger(__name__) helper and the RotatingFileHandler
  4. Run it once with APP_ENV=development to verify the output

That's it. Your cron jobs will now have a paper trail.

The pipeline logs every chapter generation step with structured logging: germy5.gumroad.com/l/xhxkzz — pay what you want, min $9.99.

