ArshTechPro
DeepSeek-TUI: Run a DeepSeek Coding Agent Directly in Your Terminal

If you have spent time using AI coding tools through a browser or GUI, you already know the friction. You switch windows, lose context, and your workflow gets interrupted. DeepSeek-TUI removes that friction by bringing a full DeepSeek coding agent into your terminal.

This article walks you through what DeepSeek-TUI is, what you can do with it, and exactly how to get it running.


What is DeepSeek-TUI

DeepSeek-TUI is an open-source terminal user interface that connects to DeepSeek's language models and acts as an agentic coding assistant. It is written in Rust and installable via npm, which means you do not need a Rust toolchain to get started.

The key thing to understand is that this is not just a chat interface. It is an agent — meaning it can take actions on your behalf: edit files, run shell commands, make git commits, search the web, and interact with external services through MCP (Model Context Protocol) servers.

Everything runs inside your terminal. No browser tab. No Electron app. Your existing workflow stays intact.


Three Modes of Operation

DeepSeek-TUI has three visible modes you can cycle through with Tab or Shift+Tab:

Plan mode — Before the agent starts making changes, it shows you a plan. You review and approve before anything happens. Good for unfamiliar or risky tasks.

Agent mode — The default. The agent works interactively, uses tools step by step, and asks for approval on sensitive actions like running shell commands.

YOLO mode — Auto-approves all tool use. Useful in isolated, trusted environments where you want fully autonomous operation without confirmation prompts.

You can also set a default mode in your config, or launch straight into YOLO with deepseek-tui --yolo.
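If you want the config-file route, a default mode entry in ~/.deepseek/config.toml might look like the sketch below. The key name default_mode is an assumption; confirm it via the /config editor or the project README before relying on it.

```toml
# Hypothetical key name; verify with /config before relying on it
default_mode = "plan"   # "plan", "agent", or "yolo"
```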


Installation

Prerequisites

You need Node.js installed. That is it for the npm path.

Step 1 — Install via npm

npm install -g deepseek-tui

This is the quickest way and works on macOS, Linux, and Windows.

Step 2 — Get a DeepSeek API Key

Go to platform.deepseek.com and create an account. Generate an API key from the dashboard. DeepSeek's API pricing is notably low compared to other providers, which makes this tool cost-effective even for heavy use.

Step 3 — Set Your API Key

You have two options:

Option A — Interactive login (recommended for first-time setup)

deepseek-tui login

This prompts you for your API key and saves it to ~/.deepseek/config.toml.

Option B — Environment variable (useful for CI or scripting)

DEEPSEEK_API_KEY="your_key_here" deepseek-tui

Step 4 — Launch

deepseek-tui

On first launch, if no API key is configured, it will prompt you for one automatically.

Verify Your Setup

deepseek-tui doctor

This runs a diagnostics check: API key presence, model configuration, MCP status, shell tool availability, and API connectivity. If something is off, it tells you exactly what.


Alternative Installation Methods

If you prefer to install from source or via Rust's package manager:

Via cargo (requires Rust 1.85 or newer):

cargo install deepseek-tui --locked
cargo install deepseek-tui-cli --locked

Build from source:

git clone https://github.com/Hmbown/DeepSeek-TUI.git
cd DeepSeek-TUI
cargo install --path crates/tui --locked

Basic Usage

Once running, you interact with the agent through the TUI. Here are the most useful commands to know:

deepseek-tui                                   # start the interactive TUI
deepseek-tui -p "explain this codebase"        # one-shot prompt, no interactive UI
deepseek-tui --yolo                            # start in YOLO (auto-approve) mode
deepseek-tui models                            # list available DeepSeek models
deepseek-tui serve --http                      # run as an HTTP/SSE API server

Inside the TUI:

  • F1 opens help
  • Ctrl+K opens the command palette
  • Esc backs out of the current action
  • Tab / Shift+Tab cycles between Plan, Agent, and YOLO modes
  • /config opens the interactive config editor
  • /compact manually compresses session history when context gets long

To add local files as context, type @path/to/file in the composer. To attach an image from the clipboard, use Ctrl+V.


Configuration

The config file lives at ~/.deepseek/config.toml. A minimal working config looks like this:

api_key = "your_deepseek_api_key"
default_text_model = "deepseek-v4-pro"

Profiles

If you work with multiple providers or API keys, profiles let you switch between them:

api_key = "personal_key"
default_text_model = "deepseek-v4-pro"

[profiles.work]
api_key = "work_key"
base_url = "https://api.deepseek.com"

Switch profiles on launch:

deepseek-tui --profile work
# or
DEEPSEEK_PROFILE=work deepseek-tui

Key Environment Variables

Variable                    Purpose
DEEPSEEK_API_KEY            Your API key
DEEPSEEK_MODEL              Override the default model for one run
DEEPSEEK_BASE_URL           Point to a custom endpoint
DEEPSEEK_PROFILE            Select a named profile
DEEPSEEK_SANDBOX_MODE       Control file access: read-only, workspace-write, danger-full-access
DEEPSEEK_APPROVAL_POLICY    Tool approval behavior: on-request, untrusted, never
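These variables compose per invocation. As an illustration (the values come from the table above; it assumes a valid API key is already configured), a locked-down one-shot query might look like:

DEEPSEEK_MODEL=deepseek-v4-flash \
DEEPSEEK_SANDBOX_MODE=read-only \
DEEPSEEK_APPROVAL_POLICY=on-request \
deepseek-tui -p "summarize the open TODOs in this repo"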

Supported Providers

Beyond DeepSeek's own API, you can point the tool at other providers that host DeepSeek models:

  • deepseek — Default, uses https://api.deepseek.com
  • nvidia-nim — NVIDIA's hosted NIM endpoints
  • fireworks — Fireworks AI
  • sglang — Self-hosted, defaults to http://localhost:30000/v1
  • openrouter — OpenRouter
  • novita — Novita AI

Set the provider in your config or via DEEPSEEK_PROVIDER.
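For a self-hosted setup, provider selection in the config might look like this sketch. The provider key name is an assumption (only the DEEPSEEK_PROVIDER variable is documented above); the sglang URL is the default listed in the bullet list.

```toml
# Hypothetical config keys; confirm names in the project README
provider = "sglang"
base_url = "http://localhost:30000/v1"
```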


MCP Server Integration

MCP (Model Context Protocol) lets you connect external tools and services to the agent. DeepSeek-TUI reads MCP configuration from ~/.deepseek/mcp.json.

To scaffold the MCP directory:

deepseek-tui mcp init

Once configured, any MCP server listed in that file becomes available as a tool the agent can call. This is how you would connect databases, custom APIs, or other external systems to the agent's toolset.
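The mcp.json schema is not documented here, so the sketch below follows the convention common to other MCP clients (a command plus args per named server) and uses the @modelcontextprotocol/server-filesystem package as an example; the exact schema DeepSeek-TUI expects may differ, so treat this as a starting point, not a reference.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```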


Feature Flags

You can enable or disable individual capabilities:

[features]
shell_tool = true
subagents = true
web_search = true
apply_patch = true
mcp = true

Or override for a single session:

deepseek-tui --enable web_search
deepseek-tui --disable subagents

To see the current state of all flags:

deepseek-tui features list

Current Models

DeepSeek-TUI defaults to deepseek-v4-pro. Both current public models have 1M context windows and support thinking mode:

  • deepseek-v4-pro — Full capability model, default
  • deepseek-v4-flash — Faster, lighter variant

The legacy aliases deepseek-chat and deepseek-reasoner still work but map to deepseek-v4-flash. Run deepseek-tui models to see live model IDs from your configured endpoint.


Practical Tips

Use Plan mode for anything that touches production. It forces a review step before the agent starts modifying files. Five seconds of reading a plan is worth it.

Run doctor after any config change. It catches misconfiguration up front, before it surfaces mid-task.

Use @file references liberally. The more context you give the agent up front, the fewer clarification rounds you need.

Set sandbox_mode = "workspace-write" for normal development. This restricts the agent to your project directory, which is a sensible default. Use danger-full-access only when you explicitly need broader access.
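In config-file form, the same restriction might look like this. sandbox_mode is the key named in the tip above; approval_policy is a hypothetical key mirroring the DEEPSEEK_APPROVAL_POLICY variable, so verify it before use.

```toml
sandbox_mode = "workspace-write"   # confine writes to the project directory
approval_policy = "on-request"     # hypothetical; prompts before sensitive tool use
```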

Use --no-alt-screen if you want scrollback. By default the TUI takes over the alternate screen; running deepseek-tui --no-alt-screen keeps output in your normal terminal buffer so you can scroll back through it.

