DEV Community

Midas126

Beyond the Hype: A Practical Guide to Building Your Own AI-Powered CLI Tool

From Consumer to Creator: Why Build Your Own AI CLI?

Another week, another wave of AI articles. We've read the announcements, seen the Copilot demos, and maybe even won a challenge or two. It's incredible to consume these tools, but there's a deeper level of understanding and empowerment that comes from building. Instead of just using AI as a black-box assistant, what if you could embed it directly into your terminal workflow, tailored to your specific needs?

This guide is for developers ready to move past the hype and get their hands dirty. We're going to build a custom, locally-runnable AI Command Line Interface (CLI) tool from scratch. You'll learn how to structure a CLI, interact with the OpenAI API (or a local model), and create a practical utility that answers your tech stack questions right in the terminal. By the end, you'll have a blueprint for automating your own repetitive cognitive tasks.

What We're Building: stack-helper

Our tool, stack-helper, will be a simple Python-based CLI. You'll ask it a question about a technology, library, or concept, and it will return a concise, actionable answer.

Example usage:

$ stack-helper "Explain the difference between useEffect and useMemo in React"

And get a formatted, useful response directly in your shell.

Prerequisites

  • Python 3.8+ installed.
  • An OpenAI API key, available from the OpenAI platform dashboard. We'll use GPT-3.5-turbo for its speed and cost-effectiveness. (We'll also discuss local alternatives.)
  • Basic familiarity with the terminal and Python.

Step 1: Setting Up the Project

Create a new directory and set up a virtual environment—a crucial step for managing dependencies.

mkdir stack-helper
cd stack-helper
python3 -m venv venv

# Activate it:
# On macOS/Linux:
source venv/bin/activate
# On Windows (PowerShell):
.\venv\Scripts\Activate

# Now, install the essential libraries:
pip install openai click rich
  • openai: The official client library.
  • click: A fantastic package for creating beautiful, composable CLIs.
  • rich: For adding colors, styles, and pretty formatting to our terminal output.

Step 2: Crafting the Core AI Function

Let's create the brain of our operation. Create a file named ai_core.py.

# ai_core.py
import os
from typing import Optional

import openai

# It's best practice to load the API key from an environment variable.

class StackHelperAI:
    def __init__(self, api_key: Optional[str] = None):
        # Use provided key or fall back to environment variable
        key = api_key or os.getenv("OPENAI_API_KEY")
        if not key:
            raise ValueError("OpenAI API key must be provided or set as OPENAI_API_KEY environment variable.")
        self.client = openai.OpenAI(api_key=key)

    def ask(self, question: str, model: str = "gpt-3.5-turbo") -> str:
        """Sends a question to the AI model and returns the response."""
        try:
            # System message "primes" the AI with its role and behavior.
            response = self.client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": "You are a senior software engineer. Provide clear, concise, and practical answers about programming concepts, frameworks, and tools. Format responses for readability in a terminal. Use bullet points or short paragraphs."},
                    {"role": "user", "content": question}
                ],
                temperature=0.7,  # Controls creativity. 0.7 is a good balance for technical Q&A.
                max_tokens=500    # Limits response length.
            )
            return response.choices[0].message.content
        except openai.APIError as e:
            return f"API Error: {e}"
        except Exception as e:
            return f"An unexpected error occurred: {e}"

Key Takeaways:

  • The system message is your most powerful tool for steering the AI's behavior. We've cast it as a "senior software engineer."
  • temperature=0.7 ensures answers are focused but not robotic.
  • Robust error handling is essential for a good CLI experience.

Step 3: Building the CLI Interface with Click

Now, let's create the entry point for our tool. Create cli.py.

# cli.py
import click
from rich.console import Console
from rich.markdown import Markdown
from ai_core import StackHelperAI

console = Console()

@click.command()
@click.argument('question', required=True) # Defines our main command argument.
@click.option('--model', '-m', default='gpt-3.5-turbo', help='OpenAI model to use (e.g., gpt-4, gpt-3.5-turbo).')
def main(question, model):
    """Stack Helper - Get AI-powered answers for your tech stack questions."""
    console.print("[cyan]🤖 Thinking...[/cyan]")

    # Initialize our AI core (raises ValueError if no API key is configured).
    try:
        ai = StackHelperAI()
    except ValueError as e:
        console.print(f"[bold red]Error:[/bold red] {e}")
        raise SystemExit(1)

    # Get the answer
    answer = ai.ask(question, model=model)

    # Print the answer beautifully using Rich.
    # We treat the AI's response as Markdown for nice formatting.
    console.print("\n[bold green]Answer:[/bold green]")
    console.print(Markdown(answer))
    console.print("\n")

# This makes the file executable as a script.
if __name__ == '__main__':
    main()

Why Click? It handles argument parsing, help text generation (--help), and option management automatically, creating a professional-feeling tool.
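To see that machinery in isolation, here's a self-contained sketch (separate from stack-helper, exercised with Click's own CliRunner test utility so no AI backend is needed). The `demo` command is a hypothetical stand-in that mirrors stack-helper's interface:

```python
import click
from click.testing import CliRunner

# A stand-alone command mirroring stack-helper's argument/option layout,
# so parsing behavior can be exercised without calling any AI backend.
@click.command()
@click.argument("question")
@click.option("--model", "-m", default="gpt-3.5-turbo", help="Model to use.")
def demo(question, model):
    """Answer QUESTION with the chosen model (demo only)."""
    click.echo(f"[{model}] {question}")

runner = CliRunner()
result = runner.invoke(demo, ["What is REST?", "-m", "gpt-4"])
help_result = runner.invoke(demo, ["--help"])
```

Note that the `--help` text, including the `--model` description, is generated entirely from the decorators.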

Step 4: Making it a Runnable Package

We need to tell Python this is an installable package. Create a setup.py file.

# setup.py
from setuptools import setup, find_packages

setup(
    name="stack-helper",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "openai",
        "click",
        "rich",
    ],
    entry_points={
        'console_scripts': [
            'stack-helper=cli:main', # This links the command to our function!
        ],
    },
)

Now, install your package in "editable" mode. This is the magic step that creates the global stack-helper command.

pip install -e .

Set your OpenAI API key as an environment variable:

# On macOS/Linux:
export OPENAI_API_KEY='your-api-key-here'
# On Windows (PowerShell):
$env:OPENAI_API_KEY='your-api-key-here'

Step 5: Run It!

Your tool is now a system command. Test it out.

stack-helper "How do I handle errors in async/await syntax in JavaScript?"

You should see a formatted, helpful answer appear in your terminal.

Leveling Up: Advanced Considerations

Our basic tool works, but here’s how to make it production-ready and explore further.

1. Go Local for Privacy & Cost:
Swap the openai client for a local option: a library like llama-cpp-python runs models such as Llama 3 or Mistral in-process, while Ollama or LM Studio expose them through a local HTTP server. With a server, ai_core.py only needs to make HTTP requests.

# Example using requests against a local Ollama server (default port 11434)
import requests

class LocalAI:
    def ask(self, question: str, model: str = "llama3") -> str:
        response = requests.post(
            'http://localhost:11434/api/generate',
            json={'model': model, 'prompt': question, 'stream': False},
            timeout=120,  # local generation can be slow on CPU
        )
        response.raise_for_status()
        return response.json()['response']

2. Add Context & Memory:
Modify ai_core.py to maintain a conversation history in the messages list, allowing for follow-up questions.
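One sketch of that idea: a small wrapper that accumulates user/assistant messages and trims the oldest turns so the prompt stays within the model's context window. `Conversation` is a hypothetical helper, not part of the code above; the trimming threshold is an assumption you'd tune per model:

```python
class Conversation:
    """Keeps a rolling message history so follow-up questions have context."""

    def __init__(self, system_prompt: str, max_turns: int = 10):
        self.system = {"role": "system", "content": system_prompt}
        self.history = []  # alternating user/assistant messages
        self.max_turns = max_turns

    def add_user(self, text: str):
        self.history.append({"role": "user", "content": text})

    def add_assistant(self, text: str):
        self.history.append({"role": "assistant", "content": text})
        # Keep only the most recent turns (2 messages per turn).
        self.history = self.history[-2 * self.max_turns:]

    def messages(self):
        """The full list to pass as `messages=` in the API call."""
        return [self.system] + self.history
```

In `ask()`, you would call `conv.add_user(question)`, pass `conv.messages()` to the API, then record the reply with `conv.add_assistant(answer)`.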

3. Implement Caching:
Use diskcache or sqlite3 to cache responses for identical questions, saving API calls and speeding up repeated queries.
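As a minimal sketch using only the standard library, here's a sqlite3-backed cache keyed by a hash of the model and question. `ResponseCache` is a hypothetical helper; a real version would likely add an expiry column:

```python
import hashlib
import sqlite3

class ResponseCache:
    """Caches answers keyed by a hash of (model, question)."""

    def __init__(self, path: str = ":memory:"):
        # Use a real file path (e.g. ~/.stack-helper-cache.db) to persist.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, answer TEXT)"
        )

    def _key(self, model: str, question: str) -> str:
        return hashlib.sha256(f"{model}\x00{question}".encode()).hexdigest()

    def get(self, model: str, question: str):
        row = self.db.execute(
            "SELECT answer FROM cache WHERE key = ?", (self._key(model, question),)
        ).fetchone()
        return row[0] if row else None

    def put(self, model: str, question: str, answer: str):
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?)",
            (self._key(model, question), answer),
        )
        self.db.commit()
```

In `ask()`, check `cache.get(model, question)` first and only hit the API on a miss.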

4. Extend with Subcommands:
Use Click's command groups to add features.

stack-helper explain "GraphQL"
stack-helper translate "code from Python to Rust"
stack-helper summarize "article at https://..."
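A hedged sketch of the wiring, using Click's `@click.group()`: the subcommand bodies below are placeholders where the real tool would call `StackHelperAI` with a task-specific prompt.

```python
import click
from click.testing import CliRunner

@click.group()
def cli():
    """stack-helper with subcommands (sketch)."""

@cli.command()
@click.argument("topic")
def explain(topic):
    # Placeholder: the real tool would build an "explain" prompt here.
    click.echo(f"explain -> {topic}")

@cli.command()
@click.argument("spec")
def translate(spec):
    # Placeholder: the real tool would build a "translate" prompt here.
    click.echo(f"translate -> {spec}")

runner = CliRunner()
result = runner.invoke(cli, ["explain", "GraphQL"])
```

With a group in place, `stack-helper --help` lists every subcommand automatically, and the entry point in setup.py stays `cli:cli` style, one function for the whole tool.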

The Real Takeaway: Automate Your Own Workflow

You've just built more than a Q&A tool. You've built a framework for AI-powered terminal automation. Think about your own pain points:

  • A CLI that generates boilerplate code for your specific framework.
  • A tool that parses git diff and suggests commit messages.
  • An assistant that queries your internal documentation.

The pattern is the same: CLI Interface + AI Logic + Your Domain Knowledge.
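The commit-message idea, for instance, reduces to that same pattern. `build_commit_prompt` below is a hypothetical helper that packages a diff (obtained however you like, e.g. from `git diff --staged`) into chat messages for the AI core:

```python
def build_commit_prompt(diff: str, max_chars: int = 4000) -> list:
    """Turn a git diff into chat messages asking for a commit message.

    Truncates the diff to max_chars to stay within the model's context window.
    """
    return [
        {"role": "system",
         "content": "You write concise, conventional commit messages."},
        {"role": "user",
         "content": f"Suggest a commit message for this diff:\n\n{diff[:max_chars]}"},
    ]
```

Pass the result wherever `ask()` currently builds its `messages` list; everything else in the tool stays the same.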

Don't just consume the AI revolution—script it. Fork the stack-helper code from this gist, replace the ask() function with your own logic, and start building the tools you wish existed. What will you automate first?


The complete code for this project is available as a GitHub Gist.
