# 🤖 Collaborative Agents v2.1

https://github.com/Mijin-VT/Collaborative_agents_V2

Python 3.10+ · MIT License · Platform: LM Studio

A local AI system that lets three models work together as a collaborative team.
## What is this?

A system that lets three AI models installed on your computer work together as a team. Each AI has a specialized role and can read files, write code, execute commands, and collaborate to solve complex tasks.

- ✅ **100% local:** nothing is sent to the internet
- ✅ **No accounts or subscriptions:** everything runs on your machine
- ✅ **Secure:** agents work in an isolated folder with action confirmation
- ✅ **Configurable:** change models, URLs, and limits without touching code
## Complete system features
### 🤝 Agent collaboration

| Feature | Description |
| --- | --- |
| Automatic planning | The Coordinator decomposes tasks and assigns subtasks to agents |
| Sequential execution | Agents work one at a time, reading previous results |
| Error recovery | If an agent fails, another automatically takes its place |
| Accumulated context | Each agent receives the full history of previous work |
| Final integration | The Coordinator unifies all results into a coherent response |
### 📂 File management

| Feature | Description |
| --- | --- |
| Read files | Agents read workspace files with path validation |
| Create files | Generate new files with confirmation and preview |
| Modify files | Overwrite with an automatic backup before each change |
| List workspace | Explore the project's file and folder structure |
| Content cleaning | Extracts code from markdown blocks and removes unwanted markers |
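The content-cleaning step can be sketched as a small helper that pulls code out of a fenced markdown block before it is written to a file. This is an illustrative function under assumed behavior, not the project's actual implementation:

```python
import re

def extract_code(text: str) -> str:
    """Return the body of the first fenced code block, or the text unchanged."""
    match = re.search(r"```[\w+-]*\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()
```

A model reply wrapped in a fenced block is thus reduced to the bare code before being saved.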
### ⚡ Command execution

| Feature | Description |
| --- | --- |
| Run commands | Executes system commands with a 2-minute timeout |
| Risk analysis | Classifies each command: low 🟢, medium 🟡, high 🔴 |
| Mandatory confirmation | The user must approve each command before execution |
| Command editing | Lets you correct a command before approving it |
| Pipe detection | Recursively analyzes each part of a piped command |
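A rough sketch of how the low/medium/high classification with pipe detection could work; the tool lists below are illustrative assumptions, not the project's actual rules:

```python
# Illustrative risk tiers -- the real rules live in config.json (forbidden_commands).
HIGH_RISK = {"rm", "format", "shutdown", "sudo", "mkfs"}
MEDIUM_RISK = {"curl", "wget", "mv", "pip"}

def classify(command: str) -> str:
    """Rate a command low/medium/high; a piped command takes its worst part's rating."""
    levels = {"low": 0, "medium": 1, "high": 2}
    worst = "low"
    for part in command.split("|"):  # analyze each stage of a pipe separately
        tokens = part.strip().split()
        tool = tokens[0] if tokens else ""
        level = "high" if tool in HIGH_RISK else "medium" if tool in MEDIUM_RISK else "low"
        if levels[level] > levels[worst]:
            worst = level
    return worst
```

For example, `classify("curl https://example.com | sudo sh")` rates the whole pipeline high because of the `sudo` stage.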
### 🛡️ Integrated security

| Feature | Description |
| --- | --- |
| Isolated workspace | Agents only access `~/agents_workspace/` |
| Blocked paths | Blocks `C:\Windows`, `/usr/bin`, `/etc`, etc. |
| Banned commands | Blocks `format`, `rm -rf /`, `shutdown`, `sudo`, etc. |
| Automatic backups | A backup copy is made before modifying any file |
| Session logs | Every action is recorded with a timestamp |
| Cleanup on exit | Deletes logs and backups when the app closes |
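Workspace isolation typically comes down to resolving every requested path and rejecting anything that lands outside the sandbox. A minimal sketch assuming the `~/agents_workspace/` location from the table (not the project's actual validator):

```python
from pathlib import Path

WORKSPACE = (Path.home() / "agents_workspace").resolve()

def is_allowed(requested: str) -> bool:
    """Accept only paths that resolve inside the workspace (blocks ../ escapes)."""
    resolved = (WORKSPACE / requested).resolve()
    return resolved == WORKSPACE or WORKSPACE in resolved.parents
```

Resolving before comparing is what defeats traversal tricks such as `../../etc/passwd`.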
### 🎛️ External configuration

| Parameter | What it controls |
| --- | --- |
| `lm_studio_url` | LM Studio server URL |
| `workspace` | Agents' working folder |
| `max_tokens` | Maximum tokens per response |
| `max_context_tokens` | Maximum context tokens |
| `temperature` | Default temperature |
| `models.*.name` | Name of each model in LM Studio |
| `models.*.temperature` | Per-agent temperature |
| `forbidden_paths` | Blocked system paths |
| `forbidden_commands` | Blocked dangerous commands |
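Based on the parameter names in the table, a `config.json` might look roughly like this. The values, model identifier, and exact nesting are illustrative assumptions, not the shipped defaults:

```json
{
  "lm_studio_url": "http://localhost:1234/v1",
  "workspace": "~/agents_workspace",
  "max_tokens": 2048,
  "max_context_tokens": 8192,
  "temperature": 0.5,
  "models": {
    "coordinator": { "name": "qwen3-4b-instruct", "temperature": 0.3 }
  },
  "forbidden_paths": ["C:\\Windows", "/usr/bin", "/etc"],
  "forbidden_commands": ["format", "rm -rf /", "shutdown", "sudo"]
}
```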
### 📊 Context management

| Feature | Description |
| --- | --- |
| Token counting | Automatically estimates tokens (~1 token ≈ 3.5 characters) |
| Smart truncation | Preserves the system prompt, removes the oldest messages |
| Context recovery | If the limit is exceeded, reduces the context and retries automatically |
| Automatic retries | Up to 2 retries with a 3-second wait |
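The ~3.5-characters-per-token heuristic and the truncation rule above can be sketched like this (illustrative helpers, not the project's actual code):

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate using the ~1 token per 3.5 characters heuristic."""
    return int(len(text) / 3.5)

def truncate(messages: list[dict], limit: int) -> list[dict]:
    """Keep the system prompt at index 0; drop the oldest other messages until under limit."""
    kept = list(messages)
    while sum(estimate_tokens(m["content"]) for m in kept) > limit and len(kept) > 1:
        kept.pop(1)  # index 1 is the oldest non-system message
    return kept
```

Because the estimate is only approximate, a real implementation would still retry with a smaller context if the server reports an overflow.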
### 🎨 Visual interface

| Element | Description |
| --- | --- |
| Decorative panels | Unicode-bordered boxes around text |
| Progress bars | Visual indicator `[████████░░░░] 66%` per phase |
| Animated bars | Rotating spinner plus estimated progress during generation |
| Per-agent colors | Coordinator (cyan), Analyst (magenta), Developer (blue) |
| Loading indicators | "⏳ COORDINATOR thinking..." |
| Success indicators | "✅ COORDINATOR responded in 2.3s" |
| Per-flow icons | 🚀 Full, 💻 Code, 💬 Debate, 💬 Free chat |
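A text progress bar like the one shown is simple to render; this is a minimal sketch, not the project's actual UI code:

```python
def render_bar(fraction: float, width: int = 12) -> str:
    """Render a block-character bar such as [████████░░░░] 66%."""
    filled = round(fraction * width)
    return f"[{'█' * filled}{'░' * (width - filled)}] {round(fraction * 100)}%"
```

For example, `render_bar(0.66)` produces `[████████░░░░] 66%`.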
### 💬 Workflows

| Workflow | Phases | Recovery |
| --- | --- | --- |
| 🎯 Full task | Planning → Execution → Integration | An alternative agent steps in on failure |
| 💻 Development | Specifications → Implementation → Review | Analyst → Coordinator, Developer → Analyst |
| 💬 Debate | Analysis → Technical perspective → Synthesis | Individual agent fallback |
| 💬 Free chat | Direct conversation with a visual selector | N/A |
### 🌐 Indirect web access

| Feature | Description |
| --- | --- |
| Download with curl/wget | Agents can use `curl` or `wget` to download content |
| Confirmation required | Classified as medium risk; requires approval |
| Content as reference | Downloaded HTML is used as source material for the project |
## Included files

| File | Description |
| --- | --- |
| `INSTALL.bat` | Windows installer (run as administrator) |
| `install_full.ps1` | Windows installer logic (PowerShell, executed by `INSTALL.bat`) |
| `install.sh` | Linux/macOS installer (`chmod +x install.sh && ./install.sh`) |
| `RUN_AGENTS.bat` | Windows launcher (double-click to use) |
| `run_agents.sh` | Linux/macOS launcher (`chmod +x run_agents.sh && ./run_agents.sh`) |
| `collaborative_agents_v2.py` | Main script: the brain of the system with 3 collaborative agents |
| `config.json` | External config: models, URLs, workspace, tokens, banned commands |
| `test_agents.py` | Unit tests: 24 tests for security, analysis, and context management |
| `test_cleaning.py` | Cleaning tests: verifies code extraction and content cleaning |
| `test_progress_bar.py` | Progress bar tests: verifies the animated progress bar |
| `.gitignore` | Excludes logs, backups, workspace, and caches from version control |
| `DOCUMENTATION_EN.md` | Complete documentation in English |
## What's new in v2.1

- 🎛️ **External config (`config.json`):** change models, URLs, workspace, and limits without touching code
- 🔄 **Error recovery:** if an agent fails, another automatically takes its place
- 📊 **Context management:** token counting and automatic truncation to prevent overflows
- 🐧 **Linux/macOS support:** `install.sh` and `run_agents.sh` scripts
- 🧪 **Unit tests:** 24 tests covering security, command analysis, and context management
- 🎨 **Improved UI:** decorative panels, animated progress bars, loading indicators, per-agent colors
- 🧹 **Session cleanup:** deletes logs and backups on exit to prevent file accumulation
- 🪟 **Windows compatibility:** forced UTF-8 plus colorama for proper color and character rendering
- 📁 **`.gitignore`:** excludes logs, backups, and automatically generated files
## Quick start

```bat
REM 1. Install (Windows: run as admin)
INSTALL.bat

REM 2. Open LM Studio -> Load all 3 models -> Developer -> Start Server

REM 3. Run the system
RUN_AGENTS.bat
```
## Main menu

| Option | Function |
| --- | --- |
| 1. Full task | All 3 agents collaborate sequentially on your task |
| 2. Code development | Specifications → Implementation → Code review |
| 3. Debate/Evaluation | 3 perspectives on a topic with a final synthesis |
| 4. Free chat | Direct conversation with the agent of your choice |
| 5. View workspace | Explore project files and folders |
| 6. Exit | Clean up the session and close the application |
## System requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| GPU | NVIDIA, 12 GB VRAM | NVIDIA RTX 4080 Super (16 GB) |
| RAM | 32 GB | 48 GB |
| Disk | ~50 GB free | NVMe SSD |
| OS | Windows 10/11, Linux, macOS | Windows 11, Ubuntu 22.04+, macOS 13+ |
## AI models

| Agent | Model | Size | Role | Temperature |
| --- | --- | --- | --- | --- |
| 🤖 Coordinator | Qwen3 4B Instruct Heretic | ~8 GB | Organizes, integrates, decides | 0.3 |
| 🔍 Analyst | GPT-OSS 20B Heretic | ~22 GB | Reasons, plans, analyzes | 0.7 |
| 💻 Developer | Qwen3 Coder 30B A3B | ~15 GB | Writes code, executes scripts | 0.5 |
## If something fails

Check the troubleshooting section in [DOCUMENTATION_EN.md](https://github.com/Mijin-VT/Collaborative_agents_V2/blob/master/DOCUMENTATION_EN.md#12-troubleshooting).