DEV Community

Midas126
Building Your Own AI-Powered CLI Assistant: Beyond Copilot

From Consumer to Creator: Why Build Your Own AI CLI?

GitHub Copilot’s CLI challenge winners showcased the incredible potential of AI to supercharge our terminals. It’s a powerful tool, but it represents a closed ecosystem. What if you could build your own, tailored assistant that integrates directly with your workflows, uses your preferred models, and operates on your terms? Moving from being a consumer of AI tools to a creator unlocks unparalleled customization and deeper technical understanding.

This guide will walk you through building a foundational, locally-runnable AI CLI assistant in Python. We'll move beyond simple command generation into creating a tool that can contextually understand your projects and execute safe, approved actions. Let's build devassist.

Architecture Overview: The Core Components

Our CLI assistant will rest on three pillars:

  1. The Orchestrator (cli.py): Handles user input, manages the flow, and presents output.
  2. The AI Engine (ai_engine.py): Processes natural language queries and returns structured intentions or commands.
  3. The Action Executor (executor.py): Safely interprets the AI's structured output and performs the approved, non-destructive tasks.

This separation of concerns keeps our code clean, testable, and easy to extend.
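
The hand-off between the three components can be sketched with stubs standing in for the real modules we build below (the stub functions here are illustrative, not part of the final files):

```python
# Minimal sketch of the orchestration flow. fake_engine and fake_executor
# are hypothetical stand-ins for the AI engine and executor built later.

def fake_engine(query: str) -> dict:
    # The real engine turns natural language into a structured action plan.
    return {"intent": "list files", "command": "ls",
            "explanation": "Lists files in the current directory.",
            "needs_confirmation": False}

def fake_executor(plan: dict) -> str:
    # The real executor validates the plan and runs the command safely.
    return f"would run: {plan['command']}" if plan.get("command") else "no action"

def orchestrate(query: str) -> str:
    plan = fake_engine(query)    # AI Engine: query -> structured plan
    return fake_executor(plan)   # Action Executor: plan -> (safe) execution

print(orchestrate("show my files"))  # would run: ls
```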

Step 1: Setting Up the Project and Environment

Create a new project directory and set up a virtual environment.

mkdir devassist && cd devassist
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

Create a requirements.txt file with our initial dependencies. We'll use the openai package for our AI engine, but the architecture allows you to swap in local models via llama-cpp-python or litellm later.

openai>=1.0.0
click>=8.0.0
python-dotenv>=1.0.0
rich>=13.0.0

Install them:

pip install -r requirements.txt

Create a .env file to store your API key securely (never hardcode it!).

# .env
OPENAI_API_KEY=your_api_key_here

Step 2: Building the AI Engine with Structured Outputs

The key to a useful assistant is moving from raw text to structured data. We'll use the OpenAI API with JSON mode to get consistent, parsable output.

Create ai_engine.py:

import os
import json
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

class AIEngine:
    def __init__(self, model="gpt-4o-mini"):
        self.client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        self.model = model

    def process_query(self, user_query, context=""):
        """Takes a natural language query and returns a structured JSON action plan."""

        system_prompt = """You are a helpful and precise CLI assistant. Your task is to analyze the user's request and output a JSON object with the following structure:
        {
            "intent": "describe_the_core_goal",
            "command": "the_safe_bash_command_to_execute_or_null",
            "explanation": "a_brief_explanation_of_what_the_command_will_do",
            "needs_confirmation": true/false
        }

        Rules:
        1. Only return commands that are SAFE and NON-DESTRUCTIVE (e.g., list files, search logs, get status). Do NOT return commands for `rm`, `dd`, `format`, or altering critical system files.
        2. If the request is ambiguous or requires clarification, set "command" to null and "needs_confirmation" to true.
        3. If the request is not a CLI task, set "command" to null.
        """

        full_prompt = f"Context: {context}\n\nUser Request: {user_query}"

        try:
            response = self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": full_prompt}
                ],
                response_format={"type": "json_object"},
                temperature=0.1
            )
            return json.loads(response.choices[0].message.content)
        except Exception as e:
            return {"error": str(e), "intent": "error", "command": None, "explanation": "AI engine failed."}

This engine forces the LLM to output valid JSON that our executor can reliably parse, significantly increasing safety and predictability.
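
Even with JSON mode, it is worth validating the parsed object before handing it to the executor, since the model can still omit keys or mistype values. A minimal sketch (this `validate_plan` helper is my own addition, not part of the files above):

```python
# Shape check for an action plan before execution.
REQUIRED_KEYS = {"intent", "command", "explanation", "needs_confirmation"}

def validate_plan(plan: dict) -> bool:
    """Return True only if the plan has all required keys with sane types."""
    if not isinstance(plan, dict) or not REQUIRED_KEYS.issubset(plan):
        return False
    # "command" may be a string or None; "needs_confirmation" must be a bool.
    return (plan["command"] is None or isinstance(plan["command"], str)) \
        and isinstance(plan["needs_confirmation"], bool)

print(validate_plan({"intent": "list files", "command": "ls",
                     "explanation": "Lists files.", "needs_confirmation": False}))  # True
print(validate_plan({"intent": "error"}))  # False: missing keys
```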

Step 3: Creating the Safe Action Executor

The executor is our safety gate. It validates the AI's proposed command against an allow-list or a block-list before any execution.

Create executor.py:

import subprocess
import shlex

class ActionExecutor:
    # A simple safety list: only these base commands are allowed.
    SAFE_BASE_COMMANDS = ['ls', 'find', 'grep', 'pwd', 'cat', 'head', 'tail', 'wc', 'ps', 'df', 'du', 'git', 'python', 'curl']

    def execute(self, action_plan):
        """Executes the command from the action plan if it passes safety checks."""

        if not action_plan.get("command"):
            print(f"🤖 Assistant: {action_plan.get('explanation', 'No action required.')}")
            return None

        proposed_command = action_plan["command"]
        base_cmd = proposed_command.split()[0] if proposed_command else None

        # Safety Check 1: Is the base command in our safe list?
        if base_cmd not in self.SAFE_BASE_COMMANDS:
            print(f"⚠️  Blocked: Command '{base_cmd}' is not on the safe list.")
            return None

        # Safety Check 2: Does the command contain obviously dangerous patterns?
        dangerous_patterns = ['rm ', '> /dev/sd', 'mkfs', 'dd of=', 'chmod 777', '; rm', '&& rm']
        if any(pattern in proposed_command for pattern in dangerous_patterns):
            print(f"🚨 Blocked: Command contains a dangerous pattern.")
            return None

        # If safe, execute
        if action_plan.get("needs_confirmation", False):
            confirm = input(f"Run: `{proposed_command}`? (y/N): ")
            if confirm.lower() != 'y':
                print("Cancelled.")
                return None

        print(f"⚡ Executing: `{proposed_command}`")
        try:
            # shlex.split + shell=False (subprocess.run's default with a list) avoids shell injection
            args = shlex.split(proposed_command)
            result = subprocess.run(args, capture_output=True, text=True, timeout=30)
            return {
                "stdout": result.stdout,
                "stderr": result.stderr,
                "returncode": result.returncode
            }
        except Exception as e:
            return {"error": str(e), "stdout": "", "stderr": "", "returncode": 1}
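
The two safety checks can be exercised in isolation. Here is a condensed, self-contained version of the same logic (the lists are shortened for brevity):

```python
import shlex

# Condensed versions of ActionExecutor's allow-list and block-list.
SAFE_BASE_COMMANDS = {'ls', 'find', 'grep', 'pwd', 'cat', 'git'}
DANGEROUS_PATTERNS = ['rm ', '> /dev/sd', 'mkfs', 'dd of=', 'chmod 777', '; rm', '&& rm']

def is_safe(command: str) -> bool:
    """Mirror of the executor's safety gate: allow-listed base command, no dangerous substrings."""
    if not command:
        return False
    base_cmd = shlex.split(command)[0]           # allow-list check on the base command
    if base_cmd not in SAFE_BASE_COMMANDS:
        return False
    return not any(p in command for p in DANGEROUS_PATTERNS)  # block-list check

print(is_safe("ls -la"))                  # True
print(is_safe("rm -rf /tmp/x"))           # False: 'rm' is not allow-listed
print(is_safe("cat notes.txt && rm x"))   # False: contains a dangerous pattern
```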

Step 4: Wiring It All Together with the CLI

Now, let's create the main entry point using the click library for a robust CLI interface.

Create cli.py:

import click
from rich.console import Console
from rich.syntax import Syntax
from ai_engine import AIEngine
from executor import ActionExecutor

console = Console()
ai = AIEngine()
executor = ActionExecutor()

@click.command()
@click.argument('query', nargs=-1)
@click.option('--context', '-c', default='', help='Project context (e.g., "I am in a Python project with a src/ directory").')
def main(query, context):
    """DevAssist - Your AI CLI Companion. Ask it to help with terminal tasks."""
    user_query = ' '.join(query)

    if not user_query:
        click.echo("Please provide a query. Example: `devassist 'find all Python files modified today'`")
        return

    with console.status("[bold green]Thinking..."):
        action_plan = ai.process_query(user_query, context)

    console.print(f"\n🎯 [bold cyan]Intent:[/bold cyan] {action_plan.get('intent')}")
    console.print(f"💡 [bold yellow]Explanation:[/bold yellow] {action_plan.get('explanation')}")

    if action_plan.get('command'):
        console.print(Syntax(action_plan['command'], "bash", theme="monokai"))

    result = executor.execute(action_plan)

    if result and not result.get('error'):
        if result['stdout']:
            console.print("\n[bold green]Output:[/bold green]")
            # Use rich to print stdout with syntax highlighting if it looks like code
            syntax = Syntax(result['stdout'], "bash", theme="monokai", line_numbers=False)
            console.print(syntax)
        if result['stderr']:
            console.print(f"\n[bold red]Errors:[/bold red]\n{result['stderr']}")
    elif result and result.get('error'):
        console.print(f"\n[bold red]Execution Error:[/bold red] {result['error']}")

if __name__ == '__main__':
    main()

Finally, make the project installable with a pyproject.toml (or a legacy setup.py), or simply run it with python cli.py. For a true CLI experience, add a shebang and make the script executable, or install the package in development mode:

pip install -e .
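
For `pip install -e .` to work, the project needs packaging metadata. A minimal pyproject.toml sketch (the entry point and module names assume the flat layout used in this guide; versions are illustrative):

```toml
[project]
name = "devassist"
version = "0.1.0"
dependencies = ["openai>=1.0.0", "click>=8.0.0", "python-dotenv>=1.0.0", "rich>=13.0.0"]

[project.scripts]
devassist = "cli:main"

[tool.setuptools]
py-modules = ["cli", "ai_engine", "executor"]

[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"
```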

Testing Your AI Assistant

Run it with various queries to see the structured thought process in action:

# List files in a detailed way
devassist "show me all markdown files in this directory"

# Find information in code
devassist "find all function definitions in the current directory that contain 'error'"

# Get system status
devassist "what's taking up space in the current directory?"

# Ambiguous request (will ask for confirmation or clarification)
devassist "clean up the logs"
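
For the first query, a well-behaved run might produce an action plan along these lines (illustrative output, not a guaranteed verbatim model response):

```json
{
  "intent": "list markdown files in the current directory",
  "command": "find . -maxdepth 1 -name '*.md'",
  "explanation": "Uses find to list only .md files at the top level of the current directory.",
  "needs_confirmation": false
}
```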

Leveling Up: Next Steps for Your Assistant

You've built the foundation. Here’s how to make it truly powerful:

  1. Add Persistent Context: Store conversation history or project-specific context (e.g., from a README) in a vector database (ChromaDB, LanceDB) to make your assistant project-aware.
  2. Integrate Local Models: Swap the AIEngine to use a local LLM via llama.cpp or ollama for complete privacy and offline use. The structured JSON output pattern remains crucial.
  3. Expand the Action Suite: Teach it to run project-specific scripts, make safe Git operations (git status, git log), or interact with Docker and Kubernetes in a read-only manner first.
  4. Implement a Plugin System: Allow yourself (or others) to add custom "skill" modules for specific frameworks or tools.

The Takeaway: Empowerment Through Building

While tools like GitHub Copilot CLI are fantastic, building your own AI assistant demystifies the "magic" and gives you ultimate control. You understand the safety mechanisms, the prompt engineering, and the integration points. This foundational knowledge lets you adapt faster as the AI landscape evolves.

Start with this blueprint, iterate on it for your specific needs, and share what you create. The future of developer tooling isn't just about using AI—it's about shaping it.

Your Challenge: Fork this blueprint, add one new feature (like reading project context from a file), and share your implementation on Dev. What unique superpower will you give your CLI?
