Chappie
Build a Local AI-Powered Git Commit Message Generator in 30 Minutes

Tired of writing "fixed stuff" or "updates" in your commit messages? This weekend, let's build a CLI tool that reads your staged changes and generates meaningful commit messages using local AI — no API keys, no cloud dependencies, completely private.

What We're Building

A simple Python CLI that:

  1. Reads your staged git diff
  2. Sends it to a local LLM (Ollama)
  3. Returns a well-formatted conventional commit message
  4. Optionally commits directly with that message

Total time: ~30 minutes. Total cost: $0.

Prerequisites

  • Python 3.10+
  • Ollama installed with a model (we'll use llama3.2:3b for speed)
  • Git (obviously)

Quick Ollama setup if you haven't:

# Install Ollama (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a fast, capable model
ollama pull llama3.2:3b

The Code

Create a new file called aicommit.py:

#!/usr/bin/env python3
"""
AI-powered git commit message generator using local Ollama.
"""

import subprocess
import sys
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2:3b"

SYSTEM_PROMPT = """You are a git commit message generator. Given a diff, write a conventional commit message.

Rules:
- Use conventional commit format: type(scope): description
- Types: feat, fix, docs, style, refactor, test, chore
- Keep the first line under 72 characters
- Be specific but concise
- No explanations, just output the commit message

Examples:
- feat(auth): add JWT refresh token rotation
- fix(api): handle null response in user endpoint
- refactor(utils): extract date formatting to helper
"""

def get_staged_diff() -> str:
    """Get the staged git diff."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--no-color"],
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        print("Error: Not a git repository or git not found")
        sys.exit(1)
    return result.stdout

def generate_message(diff: str) -> str:
    """Send diff to Ollama and get commit message."""
    if not diff.strip():
        print("Error: No staged changes. Stage files with 'git add'")
        sys.exit(1)

    # Truncate very large diffs
    if len(diff) > 8000:
        diff = diff[:8000] + "\n... (truncated)"

    prompt = f"{SYSTEM_PROMPT}\n\nDiff:\n```\n{diff}\n```\n\nCommit message:"

    payload = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": 0.3,
            "num_predict": 100
        }
    }).encode()

    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"}
    )

    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            result = json.loads(resp.read())
            return result["response"].strip().strip('"').strip("'")
    except Exception as e:
        print(f"Error connecting to Ollama: {e}")
        print("Make sure Ollama is running: ollama serve")
        sys.exit(1)

def main():
    auto_commit = "--commit" in sys.argv or "-c" in sys.argv

    print("📝 Reading staged changes...")
    diff = get_staged_diff()

    print("🤖 Generating commit message...")
    message = generate_message(diff)

    print(f"\n✨ Suggested commit message:\n")
    print(f"  {message}\n")

    if auto_commit:
        subprocess.run(["git", "commit", "-m", message])
        print("✅ Committed!")
    else:
        print("Run with --commit to auto-commit, or copy the message above.")

if __name__ == "__main__":
    main()

Make it executable and move it somewhere in your PATH:

chmod +x aicommit.py
sudo mv aicommit.py /usr/local/bin/aicommit

Usage

Stage your changes and run:

git add .
aicommit

Output:

📝 Reading staged changes...
🤖 Generating commit message...

✨ Suggested commit message:

  feat(api): add rate limiting middleware with Redis backend

Run with --commit to auto-commit, or copy the message above.

Or commit directly:

aicommit --commit

Making It Better

Here are some quick upgrades you can add:

1. Add a Git Alias

git config --global alias.ai '!aicommit'

Now you can just run git ai or git ai --commit.

2. Support Multiple Commit Types

Add a flag to bias toward specific commit types:

# Add after MODEL definition
COMMIT_TYPES = {
    "-f": "This is a new feature (feat)",
    "-x": "This is a bug fix (fix)", 
    "-r": "This is a refactor (refactor)",
    "-d": "This is documentation (docs)",
}

# In main(), parse flags and append to prompt
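To complete that sketch, here's one way the flag parsing might look. The `type_hint` helper is my own naming, not part of the original script; it looks up the first recognized flag in the `COMMIT_TYPES` dict above:

```python
def type_hint(argv: list[str], commit_types: dict[str, str]) -> str:
    """Return an extra prompt line for the first recognized type flag, or ''."""
    for flag, hint in commit_types.items():
        if flag in argv:
            return "\n" + hint
    return ""
```

In `main()`, you'd call `type_hint(sys.argv, COMMIT_TYPES)` and append the result to the prompt before sending it to Ollama.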

3. Interactive Mode

Let the user edit before committing:

import tempfile
import os

def edit_message(message: str) -> str:
    """Open the message in $EDITOR so the user can tweak it before committing."""
    editor = os.environ.get("EDITOR", "nano")
    # Write the message to a temp file, close it, then launch the editor.
    with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
        f.write(message)
        path = f.name
    subprocess.run([editor, path])  # subprocess is already imported at the top
    with open(path) as edited:
        result = edited.read().strip()
    os.unlink(path)  # clean up the temp file
    return result
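Wiring it into `main()` could look like this. The `--edit`/`-e` flag and the `maybe_edit` helper are my own naming, shown only as one possible shape:

```python
def maybe_edit(message: str, argv: list[str], editor_fn) -> str:
    """Run the message through editor_fn when --edit or -e is passed."""
    if "--edit" in argv or "-e" in argv:
        return editor_fn(message)
    return message
```

Then in `main()`, replace `message = generate_message(diff)` with `message = maybe_edit(generate_message(diff), sys.argv, edit_message)`.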

4. Multi-line Messages for Complex Changes

For larger changes, update the prompt to generate a body:

DETAILED_PROMPT = """
Write a conventional commit with a body for complex changes:

Format:
type(scope): short description

- Bullet point explaining a change
- Another change
- Keep it scannable
"""
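You'd then need to decide when to use the detailed prompt. A minimal sketch, switching on diff size — the 2000-character threshold is an arbitrary assumption, tune it for your repos:

```python
def pick_prompt(diff: str, short: str, detailed: str, threshold: int = 2000) -> str:
    """Use the detailed prompt for large diffs, the short one otherwise."""
    return detailed if len(diff) > threshold else short
```

In `generate_message()`, you'd build the prompt from `pick_prompt(diff, SYSTEM_PROMPT, DETAILED_PROMPT)` instead of `SYSTEM_PROMPT` directly.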

Why Local AI?

You might wonder why not just use Claude or GPT-4 APIs. Here's why I prefer local:

  1. Privacy — Your code never leaves your machine. Important for work projects.
  2. Speed — No network round-trip. The 3B model responds in ~2 seconds.
  3. Cost — $0/month vs. API costs that add up.
  4. Offline — Works on planes, trains, and coffee shops with bad WiFi.

The quality difference for this specific task is negligible. Commit messages don't need GPT-4-level reasoning.

Takeaways

  • Local LLMs are practical for developer tooling today
  • Ollama's HTTP API makes integration trivial (no SDKs needed)
  • Small models (3B parameters) are fast enough for CLI tools
  • Conventional commits become effortless with AI assistance

The full project is about 60 lines of Python with no dependencies beyond the standard library. You can extend it with your own conventions, team guidelines, or ticket number extraction.
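As an example of that ticket number extraction, here's a hedged sketch that pulls a JIRA-style ID out of the current branch name — the regex, helper name, and ID convention are assumptions, not part of the script above:

```python
import re

def ticket_from_branch(branch: str) -> str:
    """Extract a JIRA-style ticket ID (e.g. ABC-123) from a branch name, or ''."""
    m = re.search(r"[A-Z][A-Z0-9]+-\d+", branch)
    return m.group(0) if m else ""
```

You can get the branch name with `git rev-parse --abbrev-ref HEAD` and prepend the ticket ID to the generated message.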

Clone, customize, and never write "misc fixes" again.

More at dev.to/cumulus
