Tian AI: The Self-Evolving AI System Powered by Qwen2.5

What Is Tian AI?

Tian AI is an open-source, self-evolving AI system — the sister project to TFinancial OS. While TFinancial OS focuses on financial intelligence, Tian AI is a general-purpose autonomous AI framework that can run completely offline on consumer hardware.

Built on top of Qwen2.5-1.5B, Tian AI combines multiple specialized engines into a cohesive intelligence system that doesn't just answer questions — it learns, evolves, and improves its own code over time.

| Aspect | Detail |
| --- | --- |
| Project size | 771 Python files, ~171,380 lines of code |
| Core LLM | Qwen2.5-1.5B (via llama.cpp) |
| Knowledge base | SQLite with millions of indexed concepts |
| Backend | Flask REST API |
| License | Open source |
| GitHub | github.com/3969129510/tian-ai |

Core Architecture

Tian AI's architecture is built around five specialized engines that work together:

┌──────────────────────────────────────────────────────┐
│                    Web UI / API Layer                  │
├──────────────────────────────────────────────────────┤
│  ┌──────────┐  ┌──────────┐  ┌────────────────────┐  │
│  │  Thinker │  │  Talker  │  │ Knowledge Retriever │  │
│  │ (LLM Eng)│  │ (Dialog) │  │ (SQLite Search)    │  │
│  └────┬─────┘  └────┬─────┘  └─────────┬──────────┘  │
│       └──────────────┼──────────────────┘             │
│                      ▼                                │
│  ┌──────────────────────────────────────────────┐     │
│  │           Agent Scheduler                     │     │
│  │  (Task routing, orchestration, priority)     │     │
│  └──────────────────┬───────────────────────────┘     │
│                     ▼                                  │
│  ┌──────────────────────────────────────────────┐     │
│  │         Self-Evolution System                │     │
│  │  (XP tracking, capability unlock, code mod)  │     │
│  └──────────────────────────────────────────────┘     │
├──────────────────────────────────────────────────────┤
│           Qwen2.5-1.5B (llama.cpp backend)            │
└──────────────────────────────────────────────────────┘

1. Thinker — The LLM Reasoning Engine

The Thinker is the brain of Tian AI. It handles all LLM interaction and offers three distinct thinking modes:

  • Fast Mode — Single-pass responses for simple queries (~1-3s on mobile)
  • Chain-of-Thought Mode — Step-by-step reasoning for complex problems
  • Deep Mode — Multi-perspective analysis with reflection and synthesis

The Thinker manages prompt templates, the context window, and response parsing, ensuring that the small 1.5B model punches well above its weight through a smart prompting architecture.
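As a rough illustration, mode selection might look like the sketch below. The mode names, templates, and routing heuristic are assumptions for illustration, not Tian AI's actual API:

```python
from enum import Enum

class ThinkMode(Enum):
    FAST = "fast"              # single-pass answer
    CHAIN_OF_THOUGHT = "cot"   # step-by-step reasoning
    DEEP = "deep"              # multi-perspective analysis + reflection

# Illustrative prompt templates; the real Thinker's templates are not documented here.
TEMPLATES = {
    ThinkMode.FAST: "Answer concisely: {query}",
    ThinkMode.CHAIN_OF_THOUGHT: "Think step by step, then answer: {query}",
    ThinkMode.DEEP: "Analyze from several perspectives, reflect, then synthesize: {query}",
}

def pick_mode(query: str) -> ThinkMode:
    """Crude heuristic router: short questions go fast, why/how questions go deep."""
    if len(query) < 40 and "?" in query:
        return ThinkMode.FAST
    if any(w in query.lower() for w in ("why", "how", "compare", "design")):
        return ThinkMode.DEEP
    return ThinkMode.CHAIN_OF_THOUGHT

def build_prompt(query: str) -> str:
    return TEMPLATES[pick_mode(query)].format(query=query)
```

A real router would likely let the caller pin a mode explicitly; the heuristic above only shows how one prompt architecture can serve three reasoning styles.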

2. Talker — The Conversation System

The Talker manages multi-turn dialogue, maintaining context across interactions. Features include:

  • Short-term and long-term memory management
  • Conversation history summarization for context window efficiency
  • Personality and tone modulation
  • Multi-language support (Chinese + English)
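One way to combine short-term memory with history summarization is to keep recent turns verbatim and fold older turns into a running summary. This is a hypothetical sketch, not Tian AI's actual implementation; a real system would ask the LLM to write the summary instead of truncating:

```python
class ConversationMemory:
    """Keep the last N turns verbatim; compress older turns into a summary
    to save context-window tokens (illustrative sketch)."""

    def __init__(self, keep_recent: int = 4):
        self.keep_recent = keep_recent
        self.turns: list[tuple[str, str]] = []   # (role, text)
        self.summary = ""

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        # When history grows past the window, fold the oldest turn into the summary.
        while len(self.turns) > self.keep_recent:
            old_role, old_text = self.turns.pop(0)
            # Placeholder for an LLM-generated summary: just truncate here.
            self.summary += f"{old_role}: {old_text[:40]}... "

    def context(self) -> str:
        """Build the dialogue context to prepend to the next prompt."""
        recent = "\n".join(f"{r}: {t}" for r, t in self.turns)
        return f"[summary] {self.summary}\n{recent}" if self.summary else recent
```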

3. Knowledge Retriever

To compensate for the smaller model's knowledge limitations, Tian AI includes a massive pre-built SQLite knowledge base:

  • Millions of indexed concepts across 100+ domains
  • 30 question patterns per concept for flexible retrieval
  • Instant lookup — 0.04-0.1s response time
  • Fully local — no external API calls needed
  • Updateable — add new knowledge without retraining the model

The retriever first checks the knowledge base; if confidence is above 0.8, the answer is returned directly. Otherwise, knowledge context is injected into the LLM prompt for augmented generation.
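The "knowledge base first, LLM fallback" flow described above can be sketched as follows. The function names and callback signatures are assumptions; only the 0.8 threshold and the direct-return behavior come from the article:

```python
CONFIDENCE_THRESHOLD = 0.8  # threshold stated in the article

def answer(query: str, kb_lookup, ask_llm) -> str:
    """Return a KB hit directly when confident; otherwise augment the LLM prompt."""
    hit, confidence = kb_lookup(query)          # (text, score) from the SQLite KB
    if hit and confidence >= CONFIDENCE_THRESHOLD:
        return hit                              # instant path, ~0.04-0.1s
    # Low confidence: inject whatever the KB found as context for the LLM.
    context = f"Relevant knowledge: {hit}\n" if hit else ""
    return ask_llm(f"{context}Question: {query}")
```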

4. Agent Scheduler

The scheduler routes tasks to the appropriate engine, manages priority queues, and handles concurrent requests. It coordinates between the Thinker, Talker, and Knowledge Retriever to ensure efficient operation.
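A minimal asyncio sketch of priority-based routing, in the spirit of the scheduler described above (engine names and queue layout are illustrative assumptions):

```python
import asyncio

async def scheduler(queue: asyncio.PriorityQueue, engines: dict):
    """Pull (priority, engine, payload) tuples and dispatch to the matching engine."""
    while True:
        priority, name, payload = await queue.get()
        handler = engines[name]          # route to Thinker / Talker / Retriever
        await handler(payload)
        queue.task_done()

async def demo():
    results = []
    async def thinker(p): results.append(("thinker", p))
    async def talker(p): results.append(("talker", p))
    q = asyncio.PriorityQueue()
    # Lower number = higher priority, so "reason" is handled before "chat".
    await q.put((2, "talker", "chat"))
    await q.put((1, "thinker", "reason"))
    worker = asyncio.create_task(scheduler(q, {"thinker": thinker, "talker": talker}))
    await q.join()                       # wait until every queued task is done
    worker.cancel()
    return results
```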

5. Self-Evolution System

Tian AI's most distinctive feature is that it can grow itself:

  • XP System — Earn experience points through conversations and task completion
  • Leveling — Unlock new capabilities at milestone levels
  • Version Upgrades — Named releases (M1-E1-Theme format) mark system evolution
  • Self-Modifying Code — The system can analyze its own source code (via AST parsing), suggest improvements, and apply patches automatically

The evolution loop works like this:

  1. SCAN — Parse all Python files with AST analysis
  2. ANALYZE — Measure complexity, find duplication, detect issues
  3. SUGGEST — Send analysis to the LLM with structured improvement prompts
  4. APPLY — Apply suggested patches (with automatic backup)
  5. VERIFY — Syntax check with compile() and basic functional testing
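The SCAN, ANALYZE, and VERIFY steps can be approximated with the standard library alone, as in this hedged sketch (the real pipeline, including the LLM-driven SUGGEST and APPLY steps, is more involved):

```python
import ast

def scan(source: str) -> ast.Module:
    """SCAN: parse a Python file into an AST."""
    return ast.parse(source)

def analyze(tree: ast.Module) -> dict[str, int]:
    """ANALYZE: a crude complexity proxy - statement count per function."""
    return {
        node.name: sum(isinstance(n, ast.stmt) for n in ast.walk(node)) - 1
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }

def verify(patched_source: str) -> bool:
    """VERIFY: syntax-check a candidate patch with compile() before applying it."""
    try:
        compile(patched_source, "<patch>", "exec")
        return True
    except SyntaxError:
        return False
```

In this sketch a patch that fails `verify()` would be discarded and the automatic backup restored; functional testing beyond the syntax check is out of scope here.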

Technical Stack

| Layer | Technology | Purpose |
| --- | --- | --- |
| LLM engine | llama.cpp + Qwen2.5-1.5B GGUF | Local inference on ARM/CPU |
| Backend server | Flask (Python) | REST API, request handling |
| Knowledge base | SQLite (indexed) | Local knowledge retrieval |
| Frontend | Pure HTML/CSS/JS | Zero-dependency UI |
| Agent system | Python asyncio | Task scheduling & orchestration |
| Self-modify | Python AST + LLM | Code analysis & patching |
| Auth | Simple session | No external dependencies |

Project scale: 771 Python files totaling ~171,380 lines of code, making this a substantial local AI framework.


Why Local AI Matters

The AI industry is racing toward ever-larger cloud models. But there's a fundamental problem: your data leaves your device. Every query to ChatGPT, Claude, or Gemini is processed on servers you don't control.

Tian AI takes the opposite approach:

  • 100% Offline — No internet connection required after setup
  • Complete Privacy — Zero data leaves your device. Ever.
  • No Subscription Fees — Runs on your own hardware
  • Open Source — Fully auditable, modifiable, and improvable
  • Self-Evolving — Gets better over time without cloud dependency

Getting Started

# Clone the repository
git clone https://github.com/3969129510/tian-ai
cd tian-ai

# Install dependencies
pip install -r requirements.txt

# Download Qwen2.5-1.5B GGUF model
# Place it in ~/storage/downloads/qwen-1.5b-q4.gguf

# Start llama.cpp server
llama-server -m ~/storage/downloads/qwen-1.5b-q4.gguf --port 8080 -t 4 -c 2048

# Launch Tian AI
python run.py

Support the Project

Tian AI is completely free and open source. If you'd like to support development:

USDT (TRC-20): TNeUMpbwWFcv6v7tYHmkFkE7gC5eWzqbrs
BTC: bc1ph7qnaqkx4pkg4fmucvudlu3ydzgwnfmxy7dkv3nyl48wwa03kmnsvpc2xv


Tian AI — Your Private AI, Completely Offline.

GitHub: github.com/3969129510/tian-ai
