Open Source Project of the Day (Part 20): NanoBot, a Lightweight, Minimalist, and Efficient AI Agent Framework

Introduction

"The greatest truth is simplicity — the most powerful tools often have the simplest interfaces."

This is Part 20 of the "Open Source Project of the Day" series. Today we explore NanoBot (GitHub).

In the AI Agent framework space, tools like LangChain and CrewAI are powerful but come with steep learning curves, often too heavy for rapid prototyping or small projects. NanoBot was built to fill this gap: a lightweight, minimalist AI Agent framework from the Data Science Lab at the University of Hong Kong (HKUDS). NanoBot focuses on a clean API, a flexible architecture, and strong extensibility, letting developers build and deploy AI agents quickly without getting bogged down in configuration and abstraction layers.

Why this project?

  • 🪶 Lightweight design: Minimal codebase, easy to understand and customize
  • 🚀 Quick start: Clean API — build your first agent in minutes
  • 🔧 Flexible architecture: Modular design, extend as needed
  • 🎯 Core-focused: Focuses on essential Agent capabilities, avoids over-engineering
  • 🏫 Academic background: From the University of Hong Kong Data Science Lab, with solid theoretical foundations
  • 📦 Easy to integrate: Seamlessly integrates into existing projects
  • 🔌 Highly extensible: Supports custom tools, memory, planners, and other components

What You'll Learn

  • NanoBot's core architecture and design philosophy
  • How to quickly build and deploy AI Agents
  • Core mechanisms of Agent planning, execution, and tool use
  • How to extend and customize Agent functionality
  • Comparative analysis with other AI Agent frameworks
  • Real-world application scenarios and best practices
  • Design principles behind lightweight frameworks

Prerequisites

  • Basic understanding of AI Agent concepts
  • Familiarity with Python programming
  • Understanding of LLM (Large Language Model) basics
  • Basic knowledge of Function Calling (optional)

Project Background

Project Introduction

NanoBot is a lightweight, minimalist AI Agent framework developed by the University of Hong Kong Data Science Lab (HKUDS). It aims to provide a clean, efficient, easy-to-use framework so that developers can build and deploy AI agents quickly, without wrestling with complex configuration and abstraction layers.

Core problems the project solves:

  • Existing Agent frameworks are overly complex with steep learning curves
  • For small projects and rapid prototyping, existing frameworks are overkill
  • Lack of lightweight frameworks focused on core Agent functionality
  • Developers need a simple yet fully-featured Agent building tool
  • Academic research and teaching need a framework that's easy to understand and customize

Target user groups:

  • Developers who need to quickly build and deploy AI Agents
  • Researchers conducting AI Agent research and experiments
  • Students and developers who want to learn Agent framework design
  • Small projects needing lightweight Agent solutions
  • Technical users interested in the internals of Agent frameworks

Author/Team Introduction

Team: HKUDS (University of Hong Kong Data Science Lab)

  • Background: the Data Science Lab at the University of Hong Kong, focused on data science, machine learning, and AI systems research
  • Research areas: Data mining, machine learning, AI Agents, recommendation systems, and more
  • Philosophy: Build clean, efficient, easy-to-use AI tools and frameworks
  • Tech stack: Python, LLM, Agent systems

Project creation date: 2024–2025 (actively under development)

Project Stats

  • GitHub Stars: Growing (check GitHub for current count)
  • 🍴 Forks: Active community participation
  • 📦 Version: Continuously updated
  • 📄 License: Open source (see GitHub for details)
  • 🌐 Project address: https://github.com/HKUDS/nanobot
  • 💬 Community: GitHub Issues and Discussions
  • 📚 Documentation: Includes usage guides and API docs

Project highlights:

  • Lightweight: Minimal codebase with clear core functionality
  • Usability: Clean API, quick to get started
  • Flexibility: Modular design, easy to extend
  • Academic rigor: From a renowned university lab, with solid theoretical foundations

Main Features

Core Purpose

NanoBot's core purpose is to provide a lightweight, easy-to-use AI Agent framework, with main features including:

  1. Agent building: Quickly create and configure AI Agents
  2. Tool integration: Supports custom tools and function calling
  3. Planning and execution: Agent planning, execution, and reflection mechanisms
  4. Memory management: Short-term and long-term memory support
  5. Multi-Agent collaboration: Supports multiple Agents working together (if applicable)
  6. LLM integration: Supports multiple LLM providers
  7. Streaming responses: Supports streaming output and real-time interaction

Use Cases

NanoBot is suitable for a variety of AI Agent application scenarios:

  1. Rapid prototyping

    • Quickly validate Agent ideas
    • Build MVPs (Minimum Viable Products)
    • Experiment with different Agent architectures
  2. Research and teaching

    • AI Agent-related research
    • Teaching and demonstrating Agent concepts
    • Understanding the internal implementation of Agent frameworks
  3. Small projects

    • Personal projects and small applications
    • Scenarios that don't need complex frameworks
    • Agents that need to be quickly deployed
  4. Learning and experimentation

    • Learning Agent framework design
    • Experimenting with different Agent capabilities
    • Understanding how Agents work
  5. Tool integration

    • Integrating Agents into existing systems
    • Adding AI capabilities to applications
    • Building intelligent assistants and automation tools

Quick Start

Installation

NanoBot can be installed via pip:

```bash
# Method 1: Install from GitHub
pip install git+https://github.com/HKUDS/nanobot.git

# Method 2: Clone and install locally
git clone https://github.com/HKUDS/nanobot.git
cd nanobot
pip install -e .

# Method 3: If published to PyPI
pip install nanobot
```

System requirements:

  • Python 3.8+
  • A supported LLM API (OpenAI, Anthropic, etc.)
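
Most providers are configured through environment variables. A minimal setup sketch, assuming the `{PROVIDER}_API_KEY` naming convention used by the LLM class shown later in this article (check the project docs for the exact variable names NanoBot reads):

```bash
# Assumed convention: the LLM class reads f"{provider.upper()}_API_KEY"
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```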

Basic Usage

1. Create a simple Agent

```python
from nanobot import Agent, LLM

# Initialize LLM
llm = LLM(provider="openai", model="gpt-4")

# Create Agent
agent = Agent(
    name="Assistant",
    llm=llm,
    system_prompt="You are a helpful assistant."
)

# Chat with the Agent
response = agent.chat("Hello, how are you?")
print(response)
```

2. Add tool support

```python
from nanobot import Agent, Tool

# Define a tool function
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    # In a real tool, call a weather API here
    return f"Weather in {location}: Sunny, 25°C"

# Create the tool
weather_tool = Tool(
    name="get_weather",
    description="Get weather information",
    function=get_weather
)

# Create Agent with tools
agent = Agent(
    name="WeatherBot",
    llm=llm,
    tools=[weather_tool]
)

# Agent can automatically use tools
response = agent.chat("What's the weather in Hong Kong?")
print(response)
```

3. Use a planner

```python
from nanobot import Agent, Planner

# Create a planner
planner = Planner(llm=llm)

# Create Agent with planner
agent = Agent(
    name="PlannerBot",
    llm=llm,
    planner=planner
)

# Agent can plan complex tasks
response = agent.chat(
    "Plan a trip to Japan: research flights, hotels, and attractions"
)
print(response)
```

4. Streaming responses

```python
# Enable streaming responses
for chunk in agent.chat_stream("Tell me a story"):
    print(chunk, end="", flush=True)
```

Core Features

  1. Clean API

    • Intuitive interface design
    • Minimal configuration to get started
    • Clear code structure
  2. Flexible architecture

    • Modular design with replaceable components
    • Supports custom extensions
    • Easy to integrate into existing projects
  3. Tool system

    • Easily define and use tools
    • Automatic function calling
    • Tool chain composition
  4. Planning and execution

    • Built-in planner support
    • Task decomposition and execution
    • Reflection and optimization mechanisms
  5. Memory management

    • Conversation history management
    • Long-term memory support (if applicable)
    • Context window optimization
  6. Multi-LLM support

    • Supports multiple LLM providers
    • Unified interface abstraction
    • Easy to switch between models (see the provider-switch sketch after this list)
  7. Streaming responses

    • Real-time output support
    • Improved user experience
    • Reduced latency perception
  8. Easy to debug

    • Clear log output
    • Detailed execution tracing
    • Easy to locate issues
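
To make the multi-LLM point concrete, here is a minimal provider-switch sketch. It assumes the `LLM(provider=..., model=...)` constructor from the Quick Start; the model names are illustrative:

```python
# Swapping providers should only require changing the LLM construction;
# the Agent code stays the same (names as used throughout this article).
openai_llm = LLM(provider="openai", model="gpt-4")
claude_llm = LLM(provider="anthropic", model="claude-3-5-sonnet-20241022")

agent = Agent(
    name="Assistant",
    llm=claude_llm,  # or openai_llm; no other changes needed
    system_prompt="You are a helpful assistant."
)
```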

Project Advantages

Comparison with other AI Agent frameworks:

| Aspect | NanoBot | LangChain | CrewAI | AutoGPT |
| --- | --- | --- | --- | --- |
| Learning curve | ✅ Minimal, quick to start | ⚠️ Steep, many concepts | ⚠️ Medium, team concepts to learn | ⚠️ Complex, many configurations |
| Codebase size | ✅ Lightweight, minimal core | ⚠️ Large framework, feature-rich | ⚠️ Medium scale | ⚠️ Large project |
| Flexibility | ✅ Highly flexible, easy to customize | ⚠️ Many abstraction layers, complex to customize | ✅ Flexible, multi-Agent support | ⚠️ Relatively fixed |
| Use cases | ✅ Rapid prototyping, small projects | ✅ Production, complex apps | ✅ Multi-Agent collaboration | ✅ Autonomous Agents |
| Documentation | ✅ Clean and clear | ✅ Detailed and comprehensive | ✅ Good | ⚠️ Average |
| Community support | ⚠️ Emerging project | ✅ Mature, large community | ✅ Active community | ✅ Active community |
| Academic background | ✅ University lab | ❌ Commercial company | ❌ Commercial company | ❌ Community project |

Why choose NanoBot?

  • 🎯 Core-focused: Focuses on essential Agent functionality, avoids over-engineering
  • 🚀 Rapid development: Clean API for quick building and iteration
  • 🧠 Easy to understand: Clear code, good for learning and customization
  • 🏫 Academic support: From a renowned university lab, with theoretical backing
  • 🔧 Highly customizable: Modular design, extend as needed
  • 📦 Lightweight: Suitable for resource-constrained scenarios
  • 🎓 Learning value: Excellent material for understanding Agent framework design

Detailed Project Analysis

Architecture Design

NanoBot uses a modular, extensible architecture with clear separation of core components for easy understanding and customization.

Core Architecture

```
NanoBot/
├── Agent (Core)
│   ├── LLM Interface
│   ├── Tool System
│   ├── Planner
│   ├── Memory Management
│   └── Execution Engine
├── Tools
│   ├── Built-in Tools
│   ├── Custom Tools
│   └── Tool Chains
├── Planner
│   ├── Task Decomposition
│   ├── Step Planning
│   └── Execution Strategy
├── Memory
│   ├── Conversation History
│   ├── Long-term Memory
│   └── Context Management
└── LLM
    ├── Multi-provider Support
    ├── Unified Interface
    └── Streaming Responses
```
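
The tree maps directly onto how an agent is assembled in code. A wiring sketch using the constructor signature presented in the Agent Core Module below (all names as they appear in this article; the real API may differ):

```python
# Compose the components from the architecture tree into one Agent.
llm = LLM(provider="openai", model="gpt-4")
planner = Planner(llm=llm)
memory = SimpleMemory(max_history=100)

agent = Agent(
    name="Assistant",
    llm=llm,
    system_prompt="You are a helpful assistant.",
    tools=[weather_tool],  # e.g. the weather tool from the Quick Start
    planner=planner,
    memory=memory
)
```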

Design Principles

1. Simplicity first

```python
# NanoBot's philosophy: maximize functionality with minimum code
agent = Agent(llm=llm)
response = agent.chat("Hello")
```

2. Modular design

Each component is an independent module that can be used or replaced individually:

```python
# Use the planner independently
planner = Planner(llm=llm)
plan = planner.plan("Complex task")

# Use a tool independently (calculate is a user-defined helper
# that takes an `expression` argument)
tool = Tool(name="calculator", description="Evaluate arithmetic expressions",
            function=calculate)
result = tool.execute(expression="2 + 2")
```

3. Extensibility

Easily extend functionality through inheritance and composition:

```python
# Custom Agent
class CustomAgent(Agent):
    def custom_method(self):
        # Custom logic
        pass

# Custom Tool
class CustomTool(Tool):
    def execute(self, input):
        # Custom execution logic
        pass
```

Core Modules

1. Agent Core Module

Functions:

  • Agent creation and configuration
  • Conversation management and response generation
  • Tool invocation and planning execution
  • State management and context maintenance

Technical implementation:

```python
from typing import List

class Agent:
    def __init__(
        self,
        name: str,
        llm: LLM,
        system_prompt: str = None,
        tools: List[Tool] = None,
        planner: Planner = None,
        memory: Memory = None
    ):
        self.name = name
        self.llm = llm
        self.system_prompt = system_prompt
        self.tools = tools or []
        self.planner = planner
        self.memory = memory or SimpleMemory()
        self.conversation_history = []

    def chat(self, message: str) -> str:
        # 1. Add to conversation history
        self.conversation_history.append({
            "role": "user",
            "content": message
        })

        # 2. If a planner is set, plan first
        if self.planner:
            plan = self.planner.plan(message, self.conversation_history)
            # Execute plan...

        # 3. Check if tool calls are needed
        tool_calls = self._detect_tool_calls(message)
        if tool_calls:
            results = self._execute_tools(tool_calls)
            message = self._format_with_tool_results(message, results)

        # 4. Call LLM
        response = self.llm.chat(
            messages=self._build_messages(),
            tools=self._format_tools()
        )

        # 5. Save response
        self.conversation_history.append({
            "role": "assistant",
            "content": response
        })

        return response

    def _detect_tool_calls(self, message: str) -> List[dict]:
        # Use LLM to determine if tool calls are needed
        # Return list of tool calls
        pass

    def _execute_tools(self, tool_calls: List[dict]) -> List[dict]:
        # Execute tool calls
        results = []
        for tool_call in tool_calls:
            tool = self._find_tool(tool_call["name"])
            result = tool.execute(tool_call["arguments"])
            results.append(result)
        return results
```

2. Tool System Module

Functions:

  • Tool definition and registration
  • Tool call execution
  • Tool chain composition
  • Tool result formatting

Technical implementation:

```python
from typing import Any, Callable

class Tool:
    def __init__(
        self,
        name: str,
        description: str,
        function: Callable,
        parameters: dict = None
    ):
        self.name = name
        self.description = description
        self.function = function
        self.parameters = parameters or {}

    def execute(self, **kwargs) -> Any:
        # Validate parameters
        self._validate_parameters(kwargs)

        # Execute function
        try:
            result = self.function(**kwargs)
            return {
                "success": True,
                "result": result
            }
        except Exception as e:
            return {
                "success": False,
                "error": str(e)
            }

    def _validate_parameters(self, kwargs: dict) -> None:
        # Minimal check: ensure all required parameters are present
        for required in self.parameters.get("required", []):
            if required not in kwargs:
                raise ValueError(f"Missing required parameter: {required}")

    def to_openai_format(self) -> dict:
        # Convert to OpenAI function call format
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.parameters
            }
        }
```
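
A short usage sketch for the class above: a tool declared with an OpenAI-style JSON Schema for its parameters, then converted to the function-calling format (the schema layout follows the OpenAI tools API; the Tool class is exactly as shown):

```python
def get_weather(location: str) -> str:
    # Stubbed weather lookup for illustration
    return f"Weather in {location}: Sunny, 25°C"

weather_tool = Tool(
    name="get_weather",
    description="Get weather information for a location",
    function=get_weather,
    parameters={
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
    }
)

# What gets sent to the LLM for function calling
print(weather_tool.to_openai_format())

# Direct execution returns the {"success": ..., "result": ...} envelope
print(weather_tool.execute(location="Hong Kong"))
```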

3. Planner Module

Functions:

  • Task decomposition and step planning
  • Execution strategy formulation
  • Plan optimization and adjustment

Technical implementation:

```python
import json
from typing import List

class Planner:
    def __init__(self, llm: LLM):
        self.llm = llm

    def plan(self, task: str, context: List[dict] = None) -> dict:
        # Use LLM to generate a plan
        prompt = self._build_planning_prompt(task, context)

        response = self.llm.chat(
            messages=[{"role": "user", "content": prompt}],
            response_format="json"
        )

        plan = json.loads(response)
        return {
            "task": task,
            "steps": plan["steps"],
            "estimated_time": plan.get("estimated_time"),
            "dependencies": plan.get("dependencies", [])
        }

    def _build_planning_prompt(self, task: str, context: List[dict]) -> str:
        return f"""
        Given the following task, create a detailed plan:

        Task: {task}

        Context: {json.dumps(context, indent=2) if context else "None"}

        Please provide a JSON response with:
        - steps: List of steps to complete the task
        - estimated_time: Estimated time for each step
        - dependencies: Dependencies between steps
        """
```
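
A quick usage sketch, with the Planner exactly as defined above (the task string is illustrative):

```python
planner = Planner(llm=llm)
plan = planner.plan("Organize a two-day workshop on AI agents")

# The returned dict follows the structure built in plan()
for step in plan["steps"]:
    print(step)
print("Dependencies:", plan["dependencies"])
```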

4. Memory Management Module

Functions:

  • Conversation history management
  • Context window optimization
  • Long-term memory storage (if applicable)

Technical implementation:

```python
import time
from typing import List

class SimpleMemory:
    def __init__(self, max_history: int = 100):
        self.max_history = max_history
        self.history = []

    def add(self, role: str, content: str):
        self.history.append({
            "role": role,
            "content": content,
            "timestamp": time.time()
        })

        # Limit history length
        if len(self.history) > self.max_history:
            self.history = self.history[-self.max_history:]

    def get_context(self, max_tokens: int = None) -> List[dict]:
        # Return conversation context
        if max_tokens:
            # Intelligently truncate, preserving important information
            return self._truncate_by_tokens(self.history, max_tokens)
        return self.history

    def _truncate_by_tokens(self, history: List[dict], max_tokens: int) -> List[dict]:
        # Start from the most recent messages, add until token limit is reached
        truncated = []
        current_tokens = 0

        for message in reversed(history):
            message_tokens = self._count_tokens(message["content"])
            if current_tokens + message_tokens > max_tokens:
                break
            truncated.insert(0, message)
            current_tokens += message_tokens

        return truncated

    def _count_tokens(self, text: str) -> int:
        # Rough approximation (about 4 characters per token); a real
        # implementation would use the model's tokenizer, e.g. tiktoken
        return max(1, len(text) // 4)
```
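
And a brief usage sketch of the memory class, with the names as defined above:

```python
memory = SimpleMemory(max_history=100)
memory.add("user", "What is NanoBot?")
memory.add("assistant", "A lightweight AI Agent framework from HKUDS.")

# Full history, or a token-bounded window to fit the LLM's context
context = memory.get_context(max_tokens=2000)
```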

5. LLM Integration Module

Functions:

  • Multi-LLM provider support
  • Unified interface abstraction
  • Streaming response handling

Technical implementation:

```python
import os
from typing import List

class LLM:
    def __init__(self, provider: str, model: str, api_key: str = None):
        self.provider = provider
        self.model = model
        self.api_key = api_key or os.getenv(f"{provider.upper()}_API_KEY")
        self.client = self._create_client()

    def _create_client(self):
        if self.provider == "openai":
            import openai
            return openai.OpenAI(api_key=self.api_key)
        elif self.provider == "anthropic":
            import anthropic
            return anthropic.Anthropic(api_key=self.api_key)
        # Support more providers...

    def chat(
        self,
        messages: List[dict],
        tools: List[dict] = None,
        response_format: str = None
    ) -> str:
        # Unified chat interface; the provider-specific helpers wrap
        # the respective SDK calls (implementations omitted here)
        if self.provider == "openai":
            return self._openai_chat(messages, tools, response_format)
        elif self.provider == "anthropic":
            return self._anthropic_chat(messages, tools, response_format)

    def chat_stream(self, messages: List[dict], tools: List[dict] = None):
        # Streaming response
        if self.provider == "openai":
            stream = self.client.chat.completions.create(
                model=self.model,
                messages=messages,
                tools=tools,
                stream=True
            )
            for chunk in stream:
                if chunk.choices[0].delta.content:
                    yield chunk.choices[0].delta.content
```

Key Technical Implementations

1. Automatic Tool Calling

NanoBot uses the LLM's function calling capability to automatically detect and execute tools:

```python
# Sketch of an Agent method; assumes `import json` at module level
def _detect_and_execute_tools(self, message: str, context: List[dict]) -> str:
    # 1. Build messages with tool information
    messages = self._build_messages_with_tools(context)

    # 2. Call LLM with function calling enabled
    response = self.llm.chat(
        messages=messages,
        tools=[tool.to_openai_format() for tool in self.tools]
    )

    # 3. Check if there are tool calls
    if hasattr(response, "tool_calls") and response.tool_calls:
        # 4. Execute tools
        tool_results = []
        for tool_call in response.tool_calls:
            tool = self._find_tool(tool_call.function.name)
            result = tool.execute(**json.loads(tool_call.function.arguments))
            tool_results.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": tool_call.function.name,
                "content": json.dumps(result)
            })

        # 5. Add tool results to context, call LLM again
        messages.extend(tool_results)
        final_response = self.llm.chat(messages=messages)
        return final_response.content

    return response.content
```

2. Planning and Execution Loop

```python
def execute_with_planning(self, task: str) -> str:
    # 1. Generate plan
    plan = self.planner.plan(task, self.conversation_history)

    # 2. Execute each step in the plan
    results = []
    for step in plan["steps"]:
        # Check if tools are needed
        if step.get("requires_tool"):
            tool_result = self._execute_tool_for_step(step)
            results.append(tool_result)
        else:
            # Use LLM directly
            response = self.llm.chat(
                messages=self._build_messages() + [{
                    "role": "user",
                    "content": step["description"]
                }]
            )
            results.append(response.content)

        # 3. Reflect and adjust (optional)
        if step.get("requires_reflection"):
            reflection = self._reflect_on_step(step, results[-1])
            if reflection["should_adjust"]:
                plan = self._adjust_plan(plan, reflection)

    # 4. Summarize results
    summary = self._summarize_execution(plan, results)
    return summary
```
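
The loop references a reflection step without showing it. A hypothetical `_reflect_on_step` helper, purely illustrative of what such a check might look like (not taken from the NanoBot source; assumes `import json` at module level):

```python
def _reflect_on_step(self, step: dict, result: str) -> dict:
    # Ask the LLM whether the step's result actually satisfies its goal
    prompt = (
        f"Step: {step['description']}\n"
        f"Result: {result}\n"
        'Did the result achieve the goal? Reply as JSON: '
        '{"should_adjust": true or false, "reason": "..."}'
    )
    response = self.llm.chat(
        messages=[{"role": "user", "content": prompt}],
        response_format="json"
    )
    return json.loads(response)
```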

3. Context Window Optimization

```python
def optimize_context(self, messages: List[dict], max_tokens: int) -> List[dict]:
    # Strategy 1: Keep system prompt and most recent conversation
    system_messages = [msg for msg in messages if msg["role"] == "system"]
    recent_messages = messages[-10:]  # Keep last 10 messages

    # Strategy 2: If still over limit, use summarization
    current_tokens = sum(
        self._count_tokens(msg["content"])
        for msg in system_messages + recent_messages
    )
    if current_tokens > max_tokens:
        # Summarize older messages
        old_messages = messages[:-10]
        summary = self._summarize_messages(old_messages)

        return system_messages + [
            {"role": "assistant", "content": f"Previous conversation summary: {summary}"}
        ] + recent_messages

    return system_messages + recent_messages
```

Extension Mechanisms

Custom Agent

```python
class CustomAgent(Agent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.custom_state = {}

    def custom_method(self, input_data):
        # Custom logic
        result = self.llm.chat([
            {"role": "user", "content": f"Process: {input_data}"}
        ])
        self.custom_state["last_result"] = result
        return result
```

Custom Tool

```python
class DatabaseTool(Tool):
    def __init__(self, connection_string: str):
        super().__init__(
            name="query_database",
            description="Query a SQL database",
            function=self._query
        )
        # `connect` stands in for a DB-API connection or SQLAlchemy engine
        self.db = connect(connection_string)

    def _query(self, sql: str) -> dict:
        try:
            result = self.db.execute(sql)
            return {
                "success": True,
                "data": result.fetchall(),
                "columns": result.keys()
            }
        except Exception as e:
            return {
                "success": False,
                "error": str(e)
            }
```

Custom Planner

```python
class HierarchicalPlanner(Planner):
    def plan(self, task: str, context: List[dict] = None) -> dict:
        # 1. High-level planning (LLM-driven helper, implementation omitted)
        high_level_plan = self._high_level_planning(task)

        # 2. Detailed planning for each high-level goal
        detailed_steps = []
        for goal in high_level_plan["goals"]:
            steps = self._detailed_planning(goal)
            detailed_steps.extend(steps)

        return {
            "task": task,
            "high_level_goals": high_level_plan["goals"],
            "detailed_steps": detailed_steps
        }
```

Project Resources

Official Resources

  • GitHub repository: https://github.com/HKUDS/nanobot
  • Community: GitHub Issues and Discussions

Related Resources

  • HKUDS Lab: Resources from the University of Hong Kong Data Science Lab
  • AI Agent framework comparison: LangChain, CrewAI, AutoGPT, and more
  • LLM integration guides: OpenAI, Anthropic, and other LLM provider documentation

Similar Projects

If you want to explore more AI Agent frameworks:

  • LangChain: Feature-rich LLM application framework
  • CrewAI: Multi-Agent team collaboration framework
  • AutoGPT: Autonomous AI Agent system
  • AgentGPT: AI Agent building tool in the browser
  • SuperAGI: Open-source AI Agent development framework

Who Should Use This

NanoBot is ideal for the following developers:

  • AI Agent developers: Need to quickly build and deploy Agents
  • Researchers: Conducting Agent-related research and experiments
  • Students and educators: Learning Agent framework design and implementation
  • Rapid prototypers: Need a lightweight framework to validate ideas
  • Framework learners: Want to understand the internal workings of Agent frameworks
  • Small project developers: Scenarios that don't require complex frameworks

Learning value:

  • ✅ Design principles of lightweight frameworks
  • ✅ Implementation of core Agent mechanisms
  • ✅ Tool system and function calling
  • ✅ Planning and execution loop
  • ✅ Modular architecture design
  • ✅ LLM integration and multi-provider support
  • ✅ Code simplicity and maintainability

For more useful knowledge and interesting products, visit my personal homepage.
