DEV Community

Midas126

Beyond the Chatbot: A Practical Guide to Building Your Own AI-Powered CLI Tools

Why Your Next Productivity Hack Should Be an AI CLI

You’ve seen the headlines: AI is revolutionizing everything from code generation to image creation. But for many developers, the most tangible experience with AI is still a chat interface—typing prompts and waiting for responses. What if you could break out of the browser and weave AI directly into your daily workflow, right from the terminal? The command line is the developer's home turf, a place of speed, automation, and power. By building custom AI-powered CLI tools, you move from consuming AI to orchestrating it, creating personalized assistants that work exactly how you do.

This guide will walk you through the principles and code to build practical, focused AI tools that solve real problems, moving beyond generic chat to targeted automation.

The Philosophy: Narrow Beats General

The key to a useful AI CLI tool is specificity. A general-purpose chatbot in your terminal offers little advantage over a browser tab. The magic happens when you build a tool for a single, well-defined job.

Think about your workflow:

  • gitlog-ai: Summarize the last 10 commit messages into a plain-English status report.
  • error-decoder: Pass a stack trace and get the three most likely fixes.
  • bash-explainer: Pipe a complex shell command to get a step-by-step breakdown.

These tools are scoped, actionable, and fit seamlessly into existing processes. They use AI as a powerful processing engine, not a conversation partner.

Architecture of an AI CLI Tool

A typical AI CLI follows a simple pattern:

  1. Input: Capture arguments, stdin, or file contents.
  2. Processing: Format the input into a precise prompt for the AI.
  3. AI Call: Send the prompt to an AI model API.
  4. Output: Present the AI's response cleanly in the terminal.

The real art lies in steps 2 and 4—crafting the perfect prompt and presenting the output usefully.
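The four steps above can be condensed into a minimal skeleton. Everything here is illustrative — the prompt wording and the `build_prompt`/`run_tool` names are made up for this sketch, and the model call is stubbed out with a lambda so the flow is visible end to end:

```python
def build_prompt(raw_input: str) -> str:
    # Step 2: wrap the raw input in a precise, task-specific prompt
    return f"Explain the following shell command step by step:\n\n{raw_input}"

def run_tool(raw_input: str, call_model) -> str:
    # Step 3 is delegated to call_model so any backend can be plugged in;
    # step 4 just tidies the response before it reaches the terminal.
    prompt = build_prompt(raw_input)
    return call_model(prompt).strip()

# Step 1 in a real tool would be: raw_input = sys.stdin.read()
# Here a stub stands in for the AI call:
print(run_tool("tar -xzvf site.tar.gz", lambda p: "  Extracts a gzipped tar archive...  "))
```

Keeping the model call behind a plain function argument like this also makes the tool trivial to unit-test without touching the network.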

Building log-sensei: An AI Log File Analyzer

Let’s build a concrete example: log-sensei. This tool will take a log file, identify errors and critical patterns, and provide a concise summary and suggested actions.

We’ll use Python for its excellent CLI and API client libraries, and the OpenAI API (though the pattern works equally well for Anthropic, Gemini, or local models via Ollama).

Step 1: Project Setup

mkdir log-sensei && cd log-sensei
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install openai click rich

Create the main file:

touch log_sensei.py
chmod +x log_sensei.py

Step 2: The Core Script

Here’s the complete log_sensei.py:

#!/usr/bin/env python3
import click
import openai
from rich.console import Console
from rich.syntax import Syntax
from rich.panel import Panel
import sys
import os

console = Console()

# The OpenAI client reads its key from the OPENAI_API_KEY environment variable;
# fail early with a clear message if it is missing.
if not os.environ.get("OPENAI_API_KEY"):
    console.print("[red]Error: The OPENAI_API_KEY environment variable is not set.[/red]")
    sys.exit(1)

client = openai.OpenAI()

def analyze_logs(log_content):
    """Crafts a precise prompt and calls the AI API."""

    system_prompt = """You are a senior DevOps engineer analyzing application logs. Your task is to:
    1. Identify CRITICAL errors (e.g., exceptions, connection failures, crashes).
    2. Identify IMPORTANT warnings or patterns (e.g., high latency, repeated retries).
    3. Provide a very concise summary (max 3 bullet points).
    4. Suggest 1-2 immediate next steps for investigation.
    Be direct and use plain language. Format the output clearly."""

    user_prompt = f"""Analyze the following log file content:

    {log_content}

    Provide your analysis as structured above."""

    try:
        response = client.chat.completions.create(
            model="gpt-4-turbo-preview",  # or "gpt-3.5-turbo" for speed/cost
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt}
            ],
            temperature=0.2,  # Low temperature for more deterministic, focused output
            max_tokens=500
        )
        return response.choices[0].message.content
    except Exception as e:
        return f"[ERROR] Failed to call AI API: {e}"

@click.command()
@click.argument('log_file', type=click.Path(exists=True), required=False)
def main(log_file):
    """Analyze a log file with AI to pinpoint issues and suggestions."""

    # Get log content: from file argument or stdin
    if log_file:
        with open(log_file, 'r') as f:
            content = f.read()
    elif not sys.stdin.isatty():  # Check if stdin has data (piped or redirected)
        content = sys.stdin.read()
    else:
        console.print("[red]Error: Please provide a log file or pipe log content to stdin.[/red]")
        console.print("\nExamples:")
        console.print("  log-sensei /var/log/app/error.log")
        console.print("  tail -50 /var/log/app/error.log | log-sensei")
        sys.exit(1)

    if not content.strip():
        console.print("[yellow]The provided log file or input is empty.[/yellow]")
        sys.exit(0)

    console.print(Panel.fit("🤖 [bold cyan]log-sensei[/] Analyzing...", border_style="cyan"))

    # Show a preview of the log
    syntax = Syntax(content[:1000], "bash", theme="monokai", line_numbers=False)
    console.print(Panel(syntax, title="Log Preview", border_style="dim"))

    with console.status("[bold green]Calling AI for analysis...", spinner="dots"):
        analysis = analyze_logs(content[:6000])  # Limit token count for demo

    console.print("\n")
    console.print(Panel.fit(
        f"[bold]Analysis Results[/bold]\n\n{analysis}",
        border_style="green",
        title="✅ Summary"
    ))

if __name__ == '__main__':
    main()

Step 3: Making It a Global Tool

Create a minimal setup.py:

from setuptools import setup

setup(
    name="log-sensei",
    version="0.1.0",
    py_modules=["log_sensei"],
    install_requires=[
        "openai",
        "click",
        "rich",
    ],
    entry_points={
        'console_scripts': [
            'log-sensei=log_sensei:main',
        ],
    },
)

Then install it in development mode:

pip install -e .

Now you can run it from anywhere:

# Analyze a file
log-sensei /path/to/your/app.log

# Or pipe to it
docker logs your_container | log-sensei

Leveling Up: Advanced Patterns

Once you have the basic pattern, you can create more sophisticated tools.

1. Chaining Commands with subprocess

Imagine a tool that runs a test suite, captures the output, and asks the AI to suggest the most likely flaky test.

import subprocess
# ... inside a click command ...
result = subprocess.run(['pytest', '--tb=short'], capture_output=True, text=True)
test_output = result.stdout + result.stderr
# Send test_output to AI with a prompt like "Identify flaky tests..."

2. Context-Aware Tools with Embeddings

For a tool that answers questions about your codebase, you can use embeddings for semantic search before crafting the final prompt.

# Pseudocode for a `code-qa` tool
question = click.prompt("What do you want to know about the codebase?")
relevant_chunks = semantic_search(question, vector_database_of_code_chunks)
prompt = f"Based on this code: {relevant_chunks}\n\nAnswer: {question}"
# Call AI with prompt
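For illustration, `semantic_search` can be as simple as cosine similarity over pre-computed embedding vectors. The sketch below uses tiny made-up three-dimensional vectors; in a real tool each vector would come from an embeddings API and the chunks from your actual codebase:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vector, indexed_chunks, top_k=2):
    # indexed_chunks: list of (chunk_text, embedding_vector) pairs
    scored = sorted(
        indexed_chunks,
        key=lambda pair: cosine_similarity(query_vector, pair[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:top_k]]

# Toy index with 3-dimensional "embeddings" (real ones have hundreds of dimensions)
index = [
    ("def login(user): ...", [0.9, 0.1, 0.0]),
    ("def render_chart(data): ...", [0.1, 0.9, 0.0]),
    ("def logout(user): ...", [0.8, 0.2, 0.1]),
]
# A query vector "about authentication" should surface login/logout, not charts
print(semantic_search([1.0, 0.0, 0.0], index))  # → login and logout chunks
```

For anything beyond a toy index you would swap this for a vector database, but the retrieve-then-prompt shape stays the same.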

3. Streaming Responses for Long Tasks

For longer AI generations, stream the response to the terminal for a better user experience.

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    stream=True,
    temperature=0.7,
)

for chunk in response:
    if chunk.choices[0].delta.content is not None:
        # flush=True pushes each token to the terminal as it arrives
        print(chunk.choices[0].delta.content, end="", flush=True)

Choosing Your AI Backend

  • OpenAI API: Easiest start, great models (gpt-4-turbo, gpt-3.5-turbo). Mind costs and data privacy.
  • Anthropic Claude API: Excellent for long contexts and analysis.
  • Local Models (via Ollama/LM Studio): Full privacy, no network latency. Requires local GPU power. Great for iterating on prompt design offline.
  • Gemini API: Strong alternative, often competitive pricing.

The beauty of the pattern is that swapping the backend is often just changing a few lines of the API call.
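As a sketch of that swap: isolate the backend-specific settings in one place and feed them to an OpenAI-compatible client. Ollama does expose an OpenAI-compatible endpoint at localhost:11434/v1; the model names below are illustrative and will change over time:

```python
# Each backend reduces to a base URL plus a model name; the rest of the tool is unchanged.
BACKENDS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4-turbo-preview"},
    # Ollama serves an OpenAI-compatible API locally; it accepts any dummy API key.
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
}

def client_settings(backend: str) -> dict:
    """Return a copy of the settings needed to point an OpenAI-style client at a backend."""
    return dict(BACKENDS[backend])

print(client_settings("ollama")["base_url"])
```

With the openai package you would then construct the client as `openai.OpenAI(base_url=settings["base_url"], api_key=...)` and pass `settings["model"]` to the completion call, leaving everything else in the tool untouched.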

Start Building Your AI Workflow Today

The terminal is your cockpit. AI is a powerful new engine. By building your own CLI tools, you connect them directly, creating a workflow that is uniquely yours. Start small: automate a tedious documentation task, summarize daily logs, or explain complex configuration files.

Your Call to Action: This week, identify one repetitive, text-based task in your workflow. Sketch out a prompt that would solve it. Then, fork the log-sensei example and adapt it to your need. The leap from using AI to building with it is smaller than you think—and infinitely more powerful.

Share what you build! The best tools are often the hyper-specific ones we create for ourselves.
