Jet Xu

Introducing llama-github: Enhance Your AI Agents with Smart GitHub Retrieval

Hello Dev Community!

I'm excited to introduce llama-github, a powerful tool designed to enhance LLM Chatbots, AI Agents, and Auto-dev Agents by retrieving relevant code snippets, issues, and repository information from GitHub. Whether you're tackling complex coding tasks or just need quick access to example code, llama-github is here to help.

High-Level Architecture

(Architecture diagram)

Key Features

  • Efficient GitHub Retrieval: Quickly find relevant code snippets, issues, and repository information.
  • Empowers AI Agents: Enhances LLM Chatbots, AI Agents, and Auto-dev Agents for complex coding tasks.
  • Advanced Caching: Speeds up searches and saves API tokens with innovative repository pool caching.
  • Smart Query Analysis: Leverages state-of-the-art language models to understand and process complex queries.
  • Asynchronous Processing: Handles multiple requests concurrently for improved performance (see the sketch after this list).
  • Flexible Integration: Easily integrates with various LLM providers and models.
  • Robust Authentication: Supports personal access tokens and GitHub App authentication.
  • Comprehensive Logging: Provides detailed logging and error handling for smooth operations.
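
llama-github processes its GitHub and LLM calls asynchronously under the hood. If you also want to fan several queries out from your own code, a minimal sketch is to run the synchronous retrieve_context calls (shown in the Quick Start below) in worker threads via Python's standard asyncio.to_thread. This is a client-side pattern, not part of llama-github's documented API, and the queries are placeholders:

import asyncio

from llama_github import GithubRAG

github_rag = GithubRAG(github_access_token="your_github_access_token")

# Placeholder queries for illustration
queries = [
    "How to create a NumPy array in Python?",
    "How to merge two pandas DataFrames?",
]

async def main():
    # Each synchronous retrieve_context call runs in its own worker
    # thread, so the queries are in flight concurrently.
    contexts = await asyncio.gather(
        *(asyncio.to_thread(github_rag.retrieve_context, q, simple_mode=True)
          for q in queries)
    )
    for query, context in zip(queries, contexts):
        print(query)
        print(context)

asyncio.run(main())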

Installation

You can install llama-github using pip:

pip install llama-github

Quick Start Guide

Here's a quick example to get you started with llama-github:

from llama_github import GithubRAG

# Initialize GithubRAG with your credentials
github_rag = GithubRAG(
    github_access_token="your_github_access_token", 
    openai_api_key="your_openai_api_key", # Optional in Simple Mode
    jina_api_key="your_jina_api_key" # Optional - only needed for high-concurrency production deployments (llama-github uses the s.jina.ai API)
)

# Retrieve context for a coding question (simple_mode defaults to False)
query = "How to create a NumPy array in Python?"
context = github_rag.retrieve_context(
    query, # In professional mode, a single query takes nearly a minute to generate the final context. Set the log level to INFO to monitor retrieval progress.
    # simple_mode = True
)

print(context)
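If you only need fast, lightweight retrieval, Simple Mode works with just a GitHub access token, since the OpenAI and Jina keys are optional there. Here's a minimal sketch using only the parameters shown above, plus Python's standard logging module to surface the INFO-level progress messages mentioned in the comments:

import logging

from llama_github import GithubRAG

# INFO-level logs show llama-github's retrieval progress.
logging.basicConfig(level=logging.INFO)

# In Simple Mode the OpenAI and Jina API keys can be omitted.
github_rag = GithubRAG(github_access_token="your_github_access_token")

context = github_rag.retrieve_context(
    "How to create a NumPy array in Python?",
    simple_mode=True,
)
print(context)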

Get Involved

We would love to hear your feedback and suggestions! Please feel free to try out llama-github and let us know your thoughts. You can find more information and contribute to the project on our GitHub repository.

If you like this project or believe it has potential, please give it a ⭐️. Your support is our greatest motivation!

Happy coding!


Jet Xu

