guangda

LingTerm MCP Tutorial — Secure Terminal Access for AI Assistants

LingTerm MCP — Let AI Safely Control Your Terminal

A hands-on tutorial. After reading, you'll have AI executing terminal commands in Cursor or Claude — safely.

Quick Start

1. Install

Option A: Run with npx (recommended)

No clone needed — just use npx in your MCP config:

"ling-term-mcp": {
  "command": "npx",
  "args": ["-y", "ling-term-mcp"]
}

Option B: Install from source

git clone https://github.com/guangda88/ling-term-mcp.git
cd ling-term-mcp
npm install && npm run build

Or use the one-liner: bash quickstart.sh (auto-checks environment, installs deps, builds, and runs tests).

2. Connect to Cursor

Open Cursor Settings → MCP Servers, add:

{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "npx",
      "args": ["-y", "ling-term-mcp"]
    }
  }
}

If installing from source, change command to "node" and args to ["/your/absolute/path/ling-term-mcp/dist/index.js"]. Note: the path must be absolute.
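For a source install, the full entry looks like this (the path is a placeholder; substitute your own absolute path):

```json
{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "node",
      "args": ["/your/absolute/path/ling-term-mcp/dist/index.js"]
    }
  }
}
```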

Restart Cursor.

3. Connect to Claude Desktop

Edit your Claude Desktop config file and add the same mcpServers config.
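The config file is claude_desktop_config.json — on macOS it lives at ~/Library/Application Support/Claude/, on Windows at %APPDATA%\Claude\. Add the same entry as for Cursor:

```json
{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "npx",
      "args": ["-y", "ling-term-mcp"]
    }
  }
}
```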

Restart Claude Desktop.

4. Connect via HTTP (Remote / Multi-Client)

Stdio is great for local use, but what if you want to share one LingTerm instance across multiple AI clients? Or connect from a remote machine?

LingTerm supports the Streamable HTTP transport — the MCP protocol's modern standard for HTTP-based connections.

Start the HTTP server:

npx ling-term-mcp http
# → Listening on http://127.0.0.1:9529/mcp

Connect from any MCP client that supports HTTP transport:

The MCP endpoint is http://127.0.0.1:9529/mcp. Configure your client to use this URL as the MCP server address.
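The exact syntax varies by client. For clients that accept a URL-based server entry, the config is typically shaped like this (a sketch — check your client's documentation for the exact field names):

```json
{
  "mcpServers": {
    "ling-term-mcp": {
      "url": "http://127.0.0.1:9529/mcp"
    }
  }
}
```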

Health check:

curl http://127.0.0.1:9529/health
# → {"status":"ok"}

Configuration (environment variables):

Variable               Default     Description
LING_TERM_HTTP_PORT    9529        Port to listen on
LING_TERM_HTTP_HOST    127.0.0.1   Host to bind
LING_TERM_AUTH_TOKEN   (none)      Bearer token for authentication

Securing with a token:

export LING_TERM_AUTH_TOKEN="your-secret-token"
npx ling-term-mcp http

Clients must then include Authorization: Bearer your-secret-token in requests.

When to use HTTP vs Stdio:

              Stdio                   HTTP
Best for      Single local client     Multiple clients, remote access
Setup         Zero config             Start server + configure URL
Security      Process isolation       Token auth + rate limiting
Overhead      Minimal                 Slightly higher (HTTP)

5. Try It

In Cursor or Claude, say:

Show me what files are in the current directory

The AI will invoke LingTerm to execute ls -la (Linux/macOS) or dir (Windows) and return the result. It's that simple.

Real-World Scenarios

Scenario 1: Let AI Run Your Tests During Development

Run the project's unit tests

AI → npm test, returns test results.

Show me test coverage

AI → npm run test:coverage, returns the coverage report.

Scenario 2: Git Operations

What's the current git status?

AI → git status

Recent commits

AI → git log --oneline -10

What branch am I on?

AI → git branch

Scenario 3: Troubleshooting

Who's using port 3000?

AI → lsof -i :3000 or netstat -tlnp | grep 3000

How much disk space is left?

AI → df -h

Show me the last 20 lines of the nginx error log

AI → tail -20 /var/log/nginx/error.log

Scenario 4: Multi-Session Management

You're working on both a frontend and a backend project:

Create a session called "frontend" with working directory ~/projects/web
Create a session called "backend" with working directory ~/projects/api

Each session records its own working-directory and environment-variable metadata, making it easy to switch contexts.

Workflow Example: A Complete Development Task

Get my project running

AI will execute a multi-step workflow:

1. git clone https://github.com/your/project.git  → clone the repo
2. cd project && npm install                        → install deps
3. npm test                                         → run tests to confirm everything works
4. npm run build                                    → build the project

You said one sentence; the AI handled multiple steps within security boundaries — each command passed through the whitelist, blacklist, and injection-detection checks.
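To make the injection-detection idea concrete, here is a minimal sketch of the kind of pattern check such a filter performs before execution. This is purely illustrative — it is not LingTerm's actual implementation, and the patterns chosen are an assumption:

```shell
# Illustrative sketch (not LingTerm's real code): reject commands containing
# shell chaining or command-substitution metacharacters.
is_suspicious() {
  case "$1" in
    *';'*|*'&&'*|*'`'*|*'$('*) return 0 ;;  # chaining / substitution found
    *) return 1 ;;                          # looks like a single plain command
  esac
}

is_suspicious 'ls -la; rm -rf /tmp/x' && echo "rejected: injection pattern"
is_suspicious 'git status' || echo "allowed"
```

A real filter is more nuanced — note, for example, that a bare pipe can be legitimate (the troubleshooting scenario above pipes netstat into grep), which is why a naive metacharacter blacklist produces false positives and LingTerm pairs it with a whitelist.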

FAQ

Can't connect to AI assistant?

  1. Make sure the path is an absolute path (e.g. /Users/you/ling-term-mcp/dist/index.js, not a relative path)
  2. Confirm dist/index.js exists (run npm run build first)
  3. Confirm Node.js >= 18
  4. Restart the AI assistant

LingTerm not responding?

Check the MCP client's log output. In Cursor, open Developer Tools (View → Toggle Developer Tools) to see MCP connection logs. Confirm the config JSON is valid — no trailing commas.
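A quick way to catch invalid JSON before restarting the client is to run the config file through a JSON parser. The path below is a stand-in for your real config file:

```shell
# Write a sample config (stand-in for your actual config file), then validate.
cat > /tmp/mcp-config.json <<'EOF'
{
  "mcpServers": {
    "ling-term-mcp": {
      "command": "npx",
      "args": ["-y", "ling-term-mcp"]
    }
  }
}
EOF

# python3 -m json.tool exits non-zero on a parse error (e.g. a trailing comma).
python3 -m json.tool /tmp/mcp-config.json > /dev/null && echo "valid JSON"
```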

Command was rejected?

Check if it hit the blacklist or injection detection. If it's a false positive, adjust the whitelist in the config.

Does it support Windows?

Yes. Use backslashes for paths: "args": ["C:\\Users\\you\\ling-term-mcp\\dist\\index.js"]

Can I use it with anything other than Cursor and Claude?

Any client that supports the MCP protocol: GitHub Copilot (with MCP support), Windsurf, Cline, etc.
