Tian AI's Three-Layer Architecture: A Developer's Perspective
Tian AI is organized into three logical layers, each with a distinct responsibility.
Layer 1: Perception & Communication
Talker Module
- Handles conversation management
- Multi-turn context tracking
- Emotion-aware responses
- LLM-driven follow-up questions
File: talker/__init__.py
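The Talker's actual code isn't shown here, but the multi-turn context tracking it describes can be sketched as a bounded history window that renders into a prompt prefix. The class and method names below are illustrative, not Tian AI's real API:

```python
from collections import deque

class ConversationContext:
    """Minimal multi-turn context tracker: keeps the last N exchanges
    so each new prompt can be grounded in recent history."""

    def __init__(self, max_turns: int = 10):
        self.turns = deque(maxlen=max_turns)  # oldest turn evicted automatically

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def render(self) -> str:
        # Flatten history into a prompt prefix for the LLM.
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

ctx = ConversationContext(max_turns=3)
ctx.add("user", "What is FTS5?")
ctx.add("assistant", "A SQLite full-text search extension.")
ctx.add("user", "How fast is it?")
ctx.add("assistant", "Sub-millisecond for indexed queries.")
print(len(ctx.turns))  # 3 -- the first turn fell out of the window
```

A `deque(maxlen=...)` makes the eviction policy a one-liner; a real module would likely also truncate by token count rather than turn count.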
Layer 2: Reasoning & Knowledge
Thinker Module
- Three-layer reasoning (Fast/CoT/Deep)
- Query routing based on complexity
- Knowledge base integration
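How the Thinker actually scores complexity isn't documented here; a toy router in the same spirit might dispatch on surface features of the query (length, question words, multi-step cues). The heuristics and tier names below are assumptions based only on the Fast/CoT/Deep labels above:

```python
def route_query(query: str) -> str:
    """Toy complexity router: pick a reasoning tier (fast / cot / deep)
    from surface features of the query."""
    lowered = query.lower()
    words = lowered.split()
    # Explicit multi-step cues or very long queries go to deep reasoning.
    if any(cue in lowered for cue in ("step by step", "prove", "derive")) or len(words) > 40:
        return "deep"
    # Explanatory questions get chain-of-thought treatment.
    if len(words) > 12 or "why" in words or "how" in words:
        return "cot"
    # Everything else is a cheap direct lookup.
    return "fast"

print(route_query("What is 2+2?"))                                  # fast
print(route_query("How does reference counting work in CPython?"))  # cot
```

A production router would more plausibly use a classifier or the LLM itself, but the dispatch structure, i.e. cheap path by default, expensive path only when justified, is the point.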
Knowledge Module
- SQLite FTS5 knowledge retrieval
- Million-entry concept database
- 0.04-second average query time
Files: cot_engine.py, knowledge/__init__.py
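SQLite FTS5 with BM25 ranking explains how a million-entry table can answer in ~0.04 s: the index does the work. Here is a self-contained sketch with an in-memory table standing in for the concept database (the schema and `lookup` helper are hypothetical, not Tian AI's):

```python
import sqlite3

# In-memory FTS5 index standing in for the million-entry concept database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE concepts USING fts5(name, definition)")
conn.executemany(
    "INSERT INTO concepts VALUES (?, ?)",
    [
        ("photosynthesis", "process by which plants convert light into energy"),
        ("mitosis", "cell division producing two identical daughter cells"),
    ],
)

def lookup(term: str, limit: int = 5):
    """Rank matches with FTS5's built-in bm25() relevance function."""
    cur = conn.execute(
        "SELECT name FROM concepts WHERE concepts MATCH ? "
        "ORDER BY bm25(concepts) LIMIT ?",
        (term, limit),
    )
    return [row[0] for row in cur]

print(lookup("light energy"))  # ['photosynthesis']
```

Because FTS5 ships inside SQLite, this retrieval path needs no server and works offline, consistent with the developer notes below.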
Layer 3: Action & Evolution
Agent Module
- LLM-driven task planning
- TaskQueue with dependency resolution
- Safety whitelist for autonomous execution
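The TaskQueue and whitelist described above can be sketched with the standard library's topological sorter: dependencies resolve into an execution order, and any action outside the whitelist is rejected before anything runs. The action names and task schema here are invented for illustration:

```python
from graphlib import TopologicalSorter

SAFE_ACTIONS = {"read_file", "search", "summarize"}  # hypothetical whitelist

def plan_order(tasks: dict) -> list:
    """Resolve task dependencies into an execution order, refusing any
    action that is not on the safety whitelist."""
    for name, spec in tasks.items():
        if spec["action"] not in SAFE_ACTIONS:
            raise PermissionError(f"action {spec['action']!r} not whitelisted")
    # Map each task to its prerequisites and sort topologically.
    graph = {name: spec.get("deps", []) for name, spec in tasks.items()}
    return list(TopologicalSorter(graph).static_order())

tasks = {
    "fetch":  {"action": "read_file"},
    "index":  {"action": "search",    "deps": ["fetch"]},
    "report": {"action": "summarize", "deps": ["index"]},
}
print(plan_order(tasks))  # ['fetch', 'index', 'report']
```

Checking the whitelist before sorting means an unsafe plan fails fast, before any dependency of it executes, which is the safety property an autonomous agent needs.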
Self-Evolution Module
- AST-based code analysis
- Automated patch generation
- XP + leveling system
Files: agent/__init__.py, llm_agent.py, self_modify.py
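AST-based analysis is the first step of any automated patch pipeline: parse the source, walk the tree, and flag sites worth patching. A minimal example of the kind of check such a module might run (the `find_undocumented` helper is illustrative, not from `self_modify.py`):

```python
import ast

def find_undocumented(source: str) -> list:
    """Walk a module's AST and report functions missing docstrings --
    the kind of static check a self-modification pipeline might run
    before generating a patch."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

sample = '''
def documented():
    """Has a docstring."""

def bare():
    pass
'''
print(find_undocumented(sample))  # ['bare']
```

Working on the AST rather than raw text means the analyzer can't be fooled by comments or string literals that merely look like code.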
Cross-Cutting Concerns
LLM Management
- llm_manager.py: Process lifecycle
- llm_bridge.py: API communication
- prompt_cache.py: Response caching
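Response caching for LLMs usually means keying on a hash of the prompt so a repeated question never pays for a second inference. A minimal sketch of that idea (the class and method names are assumptions, not `prompt_cache.py`'s real interface):

```python
import hashlib

class PromptCache:
    """Hash-keyed response cache: identical prompts skip the LLM call."""

    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, prompt: str) -> str:
        # Hashing keeps keys fixed-size even for very long prompts.
        return hashlib.sha256(prompt.encode()).hexdigest()

    def get_or_compute(self, prompt: str, compute):
        k = self._key(prompt)
        if k in self._store:
            self.hits += 1          # cached: no model call
        else:
            self._store[k] = compute(prompt)  # compute stands in for the LLM
        return self._store[k]

cache = PromptCache()
cache.get_or_compute("hello", lambda p: p.upper())
cache.get_or_compute("hello", lambda p: p.upper())
print(cache.hits)  # 1
```

Given the 0.04 s-to-60 s response range quoted below, a cache hit on a deep-reasoning query is worth roughly three orders of magnitude in latency.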
Performance
- Project: 770+ Python files, 171K+ lines
- Core: 6 modules, 3 extension languages
- Response time: 0.04s (knowledge) to 60s (deep reasoning)
Developer Notes
- Each module is independently testable
- Minimal external dependencies (just Flask, Gradio, requests)
- Designed for easy module replacement
- All modules work offline
This architecture allows Tian AI to be both powerful and maintainable.