k1lgor

Posted on • Originally published at dev.to

🐳 I Built a Container Dashboard for Your AI Coding Agent — And It's Awesome

The Problem

If you're like me, you live in your terminal. You've got Docker containers running for databases, Redis instances for caching, microservices doing their thing — and you're constantly context-switching to check on them.

# The old way:
docker ps
docker logs my-app -n 50
docker stats
docker inspect some_container
# ... back and forth, breaking your flow

Now imagine you're working with an AI coding agent — an LLM that can read files, write code, and run commands for you. Every time you need to check a container, you either:

  1. Break your flow by typing out commands manually
  2. Trust the AI to blindly run docker rm -f without confirmation (yikes!)
  3. Squint at raw JSON output from docker inspect

I wanted something better. So I built it.


🐳 Enter: Container Dashboard

Container Dashboard is a pi coding agent extension that brings full container lifecycle management into your AI agent. It's like having Docker Desktop — but inside your LLM-powered terminal, with safety guarantees baked in.

It works with Docker, Podman, and Nerdctl — all three major container runtimes.


✨ The Coolest Features

📊 Live TUI Widget

Your AI coding agent's sidebar shows a live container count at all times. You always know what's running without asking:

🐳 Docker v24.0.7  |  ▶ 3 running  |  ● 8 total
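
For reference, that status line is just string formatting over the detected runtime and the container counts. A minimal sketch in TypeScript — `renderStatusLine` and `WidgetState` are illustrative names, not the extension's actual API:

```typescript
// Sketch: format the sidebar status line from runtime info and counts.
interface WidgetState {
  runtime: string; // "docker" | "podman" | "nerdctl"
  version: string; // e.g. "v24.0.7"
  running: number;
  total: number;
}

function renderStatusLine(s: WidgetState): string {
  // Capitalize the runtime name for display: "docker" -> "Docker"
  const name = s.runtime.charAt(0).toUpperCase() + s.runtime.slice(1);
  return `🐳 ${name} ${s.version}  |  ▶ ${s.running} running  |  ● ${s.total} total`;
}
```

The widget just re-renders this line whenever the container counts change.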

🎯 14 Slash Commands

Type /docker:ps and instantly get a formatted table. /docker:stats shows CPU & memory. /docker:logs my-app -n 100 tails logs. Here's the full arsenal:

Command                             Superpower
/docker:ps                          List containers (with --running or --all flags)
/docker:logs <name>                 Tail logs with -n line control
/docker:stats                       Live CPU, memory, and network I/O
/docker:inspect <name>              Deep-dive into container config (JSON parsed beautifully)
/docker:top <name>                  See processes running inside a container
/docker:images                      Browse all pulled images with sizes
/docker:prune                       Remove stopped containers (+ --images or --all)
/docker:stop/start/restart <name>   Lifecycle control
/docker:rm <name>                   Remove containers or images
/docker:detect                      Re-detect the container runtime
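
Under the hood, each slash command boils down to an argv for the detected runtime. A hedged sketch of that mapping — `parseSlashCommand` is a hypothetical helper, and the extension's real parser handles more commands and flags:

```typescript
// Sketch: translate a /docker:* slash command into argv for the runtime.
function parseSlashCommand(input: string, runtime = "docker"): string[] {
  const [head, ...rest] = input.trim().split(/\s+/);
  const sub = head.replace(/^\/docker:/, ""); // "ps", "logs", "stats", ...
  switch (sub) {
    case "ps": {
      // /docker:ps [--all] -> list containers, one JSON object per line
      const all = rest.includes("--all");
      return [runtime, "ps", "--format", "{{json .}}", ...(all ? ["--all"] : [])];
    }
    case "logs": {
      // /docker:logs <name> [-n N] -> tail the last N lines (default 50)
      const nIdx = rest.indexOf("-n");
      const lines = nIdx >= 0 ? rest[nIdx + 1] : "50";
      const name = rest.find((a, i) => a !== "-n" && (nIdx < 0 || i !== nIdx + 1)) ?? "";
      return [runtime, "logs", "--tail", lines, name];
    }
    default:
      // stats, images, top, ... pass through with their arguments
      return [runtime, sub, ...rest];
  }
}
```

Because only the first argv element changes, the same mapping serves Docker, Podman, and Nerdctl.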

🤖 13 LLM Tools

Your AI agent can proactively manage containers using tools like container_ps, container_stats, container_logs, and container_prune_system. It can check what's running, diagnose issues, and clean up — all autonomously but safely.
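
A tool definition is essentially a name, a parameter schema the LLM can validate against, and an execute function. The sketch below uses a plain JSON Schema literal instead of TypeBox to stay dependency-free; the names and shape are illustrative, not the extension's actual SDK API:

```typescript
// Sketch: the shape of an LLM tool definition.
interface ToolDef {
  name: string;
  description: string;
  parameters: object; // JSON Schema the LLM sees for argument validation
  execute: (args: { name: string; lines?: number }) => string;
}

const containerLogs: ToolDef = {
  name: "container_logs",
  description: "Tail the last N lines of a container's logs",
  parameters: {
    type: "object",
    properties: {
      name: { type: "string" },
      lines: { type: "number", default: 50 },
    },
    required: ["name"],
  },
  // The real extension would shell out to the runtime here; this sketch
  // just returns the command that would run.
  execute: ({ name, lines = 50 }) => `docker logs --tail ${lines} ${name}`,
};
```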

🛡️ Safety First — Built-in Confirmations

This is my favorite part. Dangerous commands get intercepted:

const dangerousPatterns = [
  /(?:docker|podman|nerdctl)\s+(?:rm|container\s+rm)\s+-f/i,
  /(?:docker|podman|nerdctl)\s+system\s+prune\s+-a/i,
  /(?:docker|podman|nerdctl)\s+stop\s+\$\(docker\s+ps\s+-aq\)/i,
  // ...
];

Before the AI can:

  • Force-remove a running container
  • System prune everything
  • Stop ALL containers at once

...it hits a confirmation dialog. The AI literally asks "Are you sure?" before pulling the trigger. 🎯

No more accidental docker system prune -a wiping your CI cache while the AI was "just trying to help."
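
Conceptually, the whole gate is a pattern check in front of execution. A minimal sketch, where `confirm` stands in for the agent's real confirmation dialog:

```typescript
// Sketch: intercept dangerous commands and require confirmation first.
const dangerousPatterns = [
  /(?:docker|podman|nerdctl)\s+(?:rm|container\s+rm)\s+-f/i,
  /(?:docker|podman|nerdctl)\s+system\s+prune\s+-a/i,
];

function isDangerous(cmd: string): boolean {
  return dangerousPatterns.some((p) => p.test(cmd));
}

async function guardedRun(
  cmd: string,
  run: (cmd: string) => Promise<void>,
  confirm: (msg: string) => Promise<boolean>,
): Promise<boolean> {
  if (isDangerous(cmd) && !(await confirm(`Really run \`${cmd}\`?`))) {
    return false; // user declined, nothing executed
  }
  await run(cmd);
  return true;
}
```

Safe commands pass straight through; only commands matching a dangerous pattern ever hit the dialog.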


🧱 How It Works Under the Hood

The architecture is surprisingly clean — 5 TypeScript files, ~800 lines total:

container-dashboard/
├── index.ts       # Entry point, permission gates, lifecycle hooks
├── runtime.ts     # Runtime detection (docker → podman → nerdctl), CLI abstraction
├── commands.ts    # /docker:* slash commands with formatted output
├── tools.ts       # 13 LLM tools registered via TypeBox schemas
└── widget.ts      # Live TUI sidebar widget

Runtime Detection: Auto-Discovery

The extension auto-detects which container runtime you have installed by checking docker, then podman, then nerdctl in priority order. It also grabs the version string so you see Docker v24.0.7 instead of just "Docker."

const RUNTIMES = ["docker", "podman", "nerdctl"] as const;

export async function detectRuntime(pi: ExtensionAPI): Promise<RuntimeState> {
  for (const runtime of RUNTIMES) {
    try {
      const result = await pi.exec(runtime, ["--version"], { timeout: 3000 });
      if (result.code === 0 && result.stdout) {
        return { runtime, version: result.stdout.trim(), available: true };
      }
    } catch {
      continue;
    }
  }
  return { runtime: null, version: "", available: false };
}

Cross-Runtime Compatibility

Every function — listContainers, getContainerLogs, pruneSystem, getContainerStats — works identically across Docker, Podman, and Nerdctl because they all share the same CLI interface for basic operations. The extension parses JSON output from docker ps --format '{{json .}}', normalizes status fields, and handles the slight differences between Docker's and Podman's JSON schemas.
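
As a sketch, normalizing that NDJSON output might look like this — the field fallbacks are assumptions about where Docker's and Podman's schemas diverge, not an exhaustive mapping:

```typescript
// Sketch: normalize `ps --format '{{json .}}'` output across runtimes.
interface ContainerRow {
  id: string;
  name: string;
  image: string;
  state: "running" | "exited" | "other";
}

function parsePsOutput(stdout: string): ContainerRow[] {
  return stdout
    .split("\n")
    .filter((line) => line.trim().length > 0) // one JSON object per line
    .map((line) => {
      const raw = JSON.parse(line);
      // Docker emits "Names" as a string; Podman may emit an array.
      const name = Array.isArray(raw.Names) ? raw.Names[0] : raw.Names;
      const state = String(raw.State ?? "").toLowerCase();
      return {
        id: raw.ID ?? raw.Id ?? "",
        name: name ?? "",
        image: raw.Image ?? "",
        state: state === "running" ? "running" : state === "exited" ? "exited" : "other",
      };
    });
}
```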

Beautiful Terminal Tables

No more raw JSON. The commands render colorized, formatted tables with proper padding, truncation, and status colors:

 Containers

CONTAINER ID   NAME                IMAGE                    STATUS      PORTS
a1b2c3d4e5f6   my-postgres         postgres:16              ▶ running   5432→5432
b2c3d4e5f6a7   redis-cache         redis:7-alpine           ▶ running   6379→6379
c3d4e5f6a7b8   old-test-container  node:18                  ● exited    —
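
The padding-and-truncation logic behind such a table fits in a few lines — `renderTable` is illustrative, and the real renderer also applies status colors:

```typescript
// Sketch: pad and truncate cell values into aligned fixed-width columns.
function renderTable(headers: string[], rows: string[][], width = 18): string {
  // Truncate long values with an ellipsis, pad short ones to the column width.
  const fit = (s: string) =>
    s.length > width ? s.slice(0, width - 1) + "…" : s.padEnd(width);
  const lines = [headers.map(fit).join(" ")];
  for (const row of rows) lines.push(row.map(fit).join(" "));
  return lines.join("\n");
}
```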

Smart Inspect Parsing

/docker:inspect takes the raw JSON dump and extracts the useful bits — ports, environment variables, mounted volumes, IP address, command — and displays them as a clean summary instead of a JSON firehose.
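
A hedged sketch of that extraction, following Docker's inspect schema (`NetworkSettings`, `Config.Env`, `Mounts`); the helper names are illustrative:

```typescript
// Sketch: pull the useful fields out of `docker inspect` JSON.
interface InspectSummary {
  ip: string;
  ports: string[];  // "hostPort→containerPort/proto"
  env: string[];
  mounts: string[]; // "source:destination"
}

function summarizeInspect(json: any): InspectSummary {
  const net = json.NetworkSettings ?? {};
  // Ports come as { "5432/tcp": [{ HostPort: "5432" }, ...] }
  const ports = Object.entries(net.Ports ?? {}).flatMap(
    ([port, bindings]: [string, any]) =>
      (bindings ?? []).map((b: any) => `${b.HostPort}→${port}`),
  );
  return {
    ip: net.IPAddress ?? "",
    ports,
    env: json.Config?.Env ?? [],
    mounts: (json.Mounts ?? []).map((m: any) => `${m.Source}:${m.Destination}`),
  };
}
```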


📦 Installation (60 seconds)

# From npm (recommended)
pi install npm:container-dashboard

# Or from GitHub
pi install git:github.com/k1lgor/pi-container-dashboard

# Or load locally
pi -e ./path/to/index.ts

That's it. The extension auto-detects your container runtime at startup and starts tracking containers immediately.


🚀 Real-World Workflow

Here's what a typical session looks like:

You: "What containers are running?"

🤖 AI: *calls container_ps*
▶ my-postgres (running)
▶ redis-cache (running)
▶ api-gateway (running)

You: "Check the api-gateway logs, something's wrong"

🤖 AI: *calls container_logs("api-gateway", 100)*
📋 Logs for api-gateway:
Error: Connection refused to postgres:5432
    at ...

You: "Restart it"

🤖 AI: *calls container_restart("api-gateway")*
🔄 Restarted api-gateway

You: "Clean up old containers, but save the images"

🤖 AI: *calls container_prune*
🗑️ Pruned 3 stopped containers. Freed: 1.2GB

No context switching. No leaving your AI agent. No accidentally running dangerous commands.


💡 Why I Built This

I've been using AI coding agents for months, and the biggest friction point was always permission boundaries. I wanted my AI to be useful — to actually manage infrastructure, not just generate code. But giving an LLM raw access to docker commands is terrifying.

This extension solves that tension:

  • The AI gets agency — it can check logs, restart services, clean up disk space
  • You get safety — every destructive action requires confirmation
  • Everyone gets pretty output — formatted tables instead of JSON vomit

It's a pattern I think we'll see more of: AI agents with guardrails, not blacklists. Give them a sandbox, define safe patterns, and let them do real work.


🛠️ Tech Stack

  • TypeScript 5 — Fully typed, strict mode
  • pi coding agent SDK — Extension API hooks
  • TypeBox — Runtime type validation for LLM tool parameters
  • Zero external dependencies for the runtime logic — pure pi.exec() calls

📊 Stats & Facts

Metric               Value
Lines of code        ~800
Source files         5
Slash commands       14
LLM tools            13
Supported runtimes   3 (Docker, Podman, Nerdctl)
GitHub               k1lgor/pi-container-dashboard

🔗 Get Started

  • GitHub: k1lgor/pi-container-dashboard
  • Install: pi install npm:container-dashboard


💬 What Do You Think?

I'm excited about this pattern of guardrailed AI infrastructure management. Have you tried giving your AI coding agent access to Docker or other infrastructure tools? How do you handle the safety vs. agency tradeoff?

Drop a comment below — I'd love to hear your thoughts!


Built with ❤️ and 🤖 by @k1lgor
