Stop Babysitting Container Updates: Practical Podman Auto-Updates with Quadlet, Health Checks, and Rollback

If you run long-lived containers on Linux, "just pull the new image and restart it later" usually turns into "I'll do it this weekend". That is how drift sneaks in.

Podman already has a cleaner answer. Its auto-update flow can check for a new image, pull it, and restart the corresponding systemd unit. Better yet, it can roll back if the restart fails.

The catch is that you need to wire it up the right way. In practice, that means:

  • run the container through a systemd unit
  • use a fully qualified image reference for registry-based updates
  • add a readiness signal so rollback can detect bad starts reliably
  • add a health check so broken containers do not look healthy by accident

Here is a practical setup for a rootless container managed with Quadlet.

What Podman auto-update actually does

According to podman-auto-update(1), Podman can update containers that run inside systemd units. It checks containers marked for auto-update, pulls a newer image when available, and restarts the unit that owns the container.

It supports two policies:

  • registry, which checks the remote registry for a newer digest
  • local, which compares the container image to a newer image already present in local storage

For most people running pulled images, registry is the useful one.

One important limitation from the docs: registry requires a fully qualified image name like docker.io/library/nginx:1.27-alpine or quay.io/yourorg/app:latest. A short name is not enough.

Why Quadlet is the easiest way to do this

Quadlet lets you define Podman workloads as .container files that systemd turns into regular services at daemon reload time. Podman documents rootless Quadlet search paths such as:

  • ~/.config/containers/systemd/
  • $XDG_RUNTIME_DIR/containers/systemd/

That makes it a good fit for auto-updates, because Podman can restart the generated systemd service after pulling a new image.

Example: a rootless Quadlet with auto-update enabled

Create the Quadlet directory if needed:

mkdir -p ~/.config/containers/systemd

Now create ~/.config/containers/systemd/whoami.container:

[Unit]
Description=Traefik whoami demo container
After=network-online.target
Wants=network-online.target

[Container]
ContainerName=whoami
Image=docker.io/traefik/whoami:v1.10.1
AutoUpdate=registry
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always
RestartSec=5
TimeoutStartSec=180

[Install]
WantedBy=default.target

Then load and start it:

systemctl --user daemon-reload
systemctl --user enable --now whoami.service

Verify that it is running:

systemctl --user status whoami.service
podman ps --filter name=whoami
curl -fsS http://127.0.0.1:8080

A more realistic readiness + health-check pattern

The quick example above proves the wiring, but it does not give systemd much insight into application health.

Rollback works best when systemd can tell whether the new container actually became ready. Podman documents that podman auto-update --rollback is most reliable when the container sends the READY=1 notification through sdnotify.

For Quadlet, Notify=true maps to --sdnotify container.

That means your application should emit readiness only when it is genuinely ready to serve traffic. One straightforward pattern is a small wrapper entrypoint.

Containerfile

FROM python:3.12-slim
# systemd is installed only to provide the systemd-notify client;
# curl is used by the readiness loop and the health check
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl systemd \
 && rm -rf /var/lib/apt/lists/*
RUN pip install --no-cache-dir flask
WORKDIR /app
COPY app.py /app/app.py
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
EXPOSE 8000
CMD ["/app/entrypoint.sh"]

app.py

from flask import Flask
app = Flask(__name__)

@app.get("/healthz")
def healthz():
    return {"ok": True}

@app.get("/")
def index():
    return "hello from podman auto-update\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

entrypoint.sh

#!/bin/sh
set -eu

# Start the app in the background so we can probe it before signalling readiness
python /app/app.py &
pid=$!

# Poll the health endpoint for up to 30 seconds
for _ in $(seq 1 30); do
  if curl -fsS http://127.0.0.1:8000/healthz >/dev/null; then
    # Podman relays this notification to systemd when --sdnotify=container is set
    systemd-notify --ready
    wait "$pid"
    exit $?
  fi
  sleep 1
done

echo "application failed readiness check" >&2
kill "$pid"
wait "$pid" || true
exit 1
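If you would rather not pull the systemd package into a slim image just to get systemd-notify, the same READY=1 datagram can be sent from the application itself. Here is a minimal Python sketch of the sd_notify protocol (illustrative; libsystemd or a dedicated library is the robust option):

```python
import os
import socket


def sd_notify(state: str = "READY=1") -> bool:
    """Send a notification to the systemd notify socket, if one is set."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under a notify-aware supervisor
    # Abstract-namespace sockets are encoded with a leading '@'
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), addr)
    return True
```

With Notify=true in the Quadlet, Podman passes NOTIFY_SOCKET into the container, so calling sd_notify() once the app is ready replaces the wrapper's systemd-notify --ready.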

And the matching Quadlet:

[Container]
ContainerName=demo-api
Image=docker.io/yourname/demo-api:1.0.0
AutoUpdate=registry
Notify=true
PublishPort=127.0.0.1:8000:8000
HealthCmd=curl -fsS http://127.0.0.1:8000/healthz || exit 1
HealthInterval=30s
HealthTimeout=5s
HealthRetries=3
HealthOnFailure=kill

[Service]
Restart=always
TimeoutStartSec=180

[Install]
WantedBy=default.target

This gives you two useful signals:

  • systemd-notify --ready tells systemd the service really started
  • HealthCmd= keeps probing after startup and can kill the container if it becomes unhealthy

That combination is much safer than "container process started, so I guess the deploy worked".
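It also helps to know how long a broken container can linger before HealthOnFailure=kill fires. A failure has to repeat HealthRetries consecutive times, so with the values above the worst case is roughly:

```python
# HealthInterval, HealthTimeout, HealthRetries from the Quadlet above
interval, timeout, retries = 30, 5, 3

# Probes run every `interval` seconds and each failed probe can take up
# to `timeout` seconds; an upper bound on time-to-kill is therefore:
worst_case = retries * (interval + timeout)
print(worst_case)  # 105 seconds
```

If that window is too long for your service, tighten HealthInterval and HealthRetries, at the cost of more frequent probing.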

Test before you trust it

Before enabling unattended updates, do a dry run:

podman auto-update --dry-run

Or format the output to focus on what matters:

podman auto-update --dry-run --format '{{.Unit}} {{.Image}} {{.Updated}}'

If Podman sees a newer image, the Updated field shows pending in dry-run mode.
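If you script around the dry run, that formatted output is easy to parse. A small sketch, assuming the three-column --format string shown above:

```python
def pending_units(dry_run_output: str) -> list[str]:
    """Return systemd units whose Updated column reads 'pending'."""
    pending = []
    for line in dry_run_output.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2] == "pending":
            pending.append(parts[0])
    return pending


sample = (
    "whoami.service docker.io/traefik/whoami:v1.10.1 pending\n"
    "other.service docker.io/library/nginx:1.27-alpine false"
)
print(pending_units(sample))  # ['whoami.service']
```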

You can trigger an update manually as a controlled test:

systemctl --user start podman-auto-update.service

Then inspect what happened:

journalctl --user -u podman-auto-update.service -n 100 --no-pager
journalctl --user -u whoami.service -n 100 --no-pager

Change the schedule instead of accepting midnight

Podman ships podman-auto-update.timer, and the docs say it triggers daily at midnight by default.

If that is a bad maintenance window for you, override the timer instead of editing vendor files in place:

mkdir -p ~/.config/systemd/user/podman-auto-update.timer.d
cat > ~/.config/systemd/user/podman-auto-update.timer.d/override.conf <<'EOF'
[Timer]
OnCalendar=
OnCalendar=Sat *-*-* 03:15:00
Persistent=true
RandomizedDelaySec=15m
EOF

systemctl --user daemon-reload
systemctl --user restart podman-auto-update.timer
systemctl --user list-timers podman-auto-update.timer

Why the empty OnCalendar= first? In systemd drop-ins, that clears the original value before you set a new one.

Persistent=true is useful on machines that are not always on, because missed runs get caught up the next time the timer becomes active.
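To sanity-check an OnCalendar expression, systemd-analyze calendar 'Sat *-*-* 03:15:00' prints the next elapse time. The same "next Saturday at 03:15" logic, sketched in Python (ignoring RandomizedDelaySec and time zones):

```python
from datetime import datetime, timedelta


def next_saturday_0315(now: datetime) -> datetime:
    """Next occurrence of 'Sat *-*-* 03:15:00' after `now`."""
    days_ahead = (5 - now.weekday()) % 7  # Monday=0 ... Saturday=5
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=3, minute=15, second=0, microsecond=0)
    if candidate <= now:  # already past this week's slot
        candidate += timedelta(days=7)
    return candidate


# Wed 2024-05-01 12:00 -> Sat 2024-05-04 03:15
print(next_saturday_0315(datetime(2024, 5, 1, 12, 0)))
```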

Registry auth matters for private images

podman-auto-update(1) documents that registry auth is read from the normal Podman auth file path, typically ${XDG_RUNTIME_DIR}/containers/auth.json on Linux, with $HOME/.docker/config.json as a fallback.

So if your image is private, log in first as the same user that owns the rootless service:

podman login docker.io

If you need a non-default auth file, the docs also support:

  • podman auto-update --authfile /path/to/auth.json
  • the io.containers.autoupdate.authfile label
  • the REGISTRY_AUTH_FILE environment variable

Common mistakes that break auto-updates

1) Using a short image name

This often fails for registry updates:

Image=nginx:latest

Use a fully qualified reference instead:

Image=docker.io/library/nginx:1.27-alpine
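A rough way to vet a reference before committing it to a Quadlet: the first path component must look like a registry host. This heuristic is a simplification of Podman's short-name resolution, not the real algorithm:

```python
def is_fully_qualified(image: str) -> bool:
    """True if the first path component looks like a registry host."""
    if "/" not in image:
        return False
    host = image.split("/", 1)[0]
    # Registry hosts contain a dot (docker.io), a port (reg:5000),
    # or are the literal 'localhost'
    return "." in host or ":" in host or host == "localhost"


print(is_fully_qualified("nginx:latest"))                         # False
print(is_fully_qualified("docker.io/library/nginx:1.27-alpine"))  # True
```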

2) Running the container outside systemd

podman auto-update updates the systemd unit that owns the container. If you started the container with an ad hoc podman run -d ..., there is no systemd unit for Podman to restart.

3) Trusting latest without a rollback path

If you pull automatically, you need a rollback path. podman auto-update enables rollback by default, but rollback can only tell "started" apart from "working" when the container reports readiness, so pair it with sdnotify readiness notifications.

4) No health check

A process can stay alive while the application is unusable. HealthCmd= and friends give you an ongoing signal after startup.

A quick verification checklist

After setup, I like to verify these points:

systemctl --user cat whoami.service
podman inspect whoami --format '{{.Config.Labels}}'
podman auto-update --dry-run --format '{{.Unit}} {{.Policy}} {{.Updated}}'
systemctl --user status podman-auto-update.timer
systemctl --user list-timers podman-auto-update.timer

You should confirm that:

  • the generated service exists
  • the container carries the auto-update policy
  • dry run works cleanly
  • the timer is active on the schedule you expect

When to use local instead of registry

local is useful when another workflow places newer images into local storage first, for example:

  • a CI job pre-pulls or pre-loads images
  • you import signed images into an offline host
  • you promote images between local stores before restart

In that model, podman auto-update becomes a restart controller instead of a registry poller.

Final take

Podman auto-updates are good, but they become genuinely production-friendly when you add the missing pieces around them:

  • Quadlet for clean systemd ownership
  • fully qualified image names
  • health checks
  • readiness notifications
  • a deliberate timer schedule

That gets you much closer to "safe unattended updates" instead of "automatic surprises".
