Daniel Skov Jacobsen

I built an Open-Source CLI that stops your AI terminal from leaking secrets

I love AI coding tools. I use them every day. But there's a thing nobody building them seems to care about: everything you send goes straight to the model, unfiltered.

Your command history. Your environment variables. Credentials, tokens, customer data sitting in error logs. None of it gets redacted. There's no audit trail. You just have to trust that nothing sensitive slipped into your prompt.

That's not a great security posture. So I built something.

The quiet risk in AI-powered dev workflows

AI terminal assistants are incredible for productivity. I use one constantly. But most of them work like this:

Your full prompt → LLM API → response back

No filtering. No redaction. No audit trail.

That means if your prompt context includes credentials from command history, PII from error logs, internal hostnames from your environment, or tokens from your clipboard, it all goes to the model.

For side projects? Who cares. But the moment you're working with production systems, customer data, or proprietary code, this becomes a real compliance problem. Especially if you're in a regulated industry or working with enterprise clients.

So I built: Bast CLI

Bast is a free, open-source AI terminal assistant that routes through a security gateway before anything reaches the LLM.

It does everything you'd expect from an AI CLI (natural language to shell commands, command explanation, error recovery), but adds a security layer that:

  • Redacts PII automatically before it hits the model (emails, API keys, credentials, tokens), as sketched below
  • Blocks prompt injection and jailbreak attempts
  • Logs everything so you have full observability into what's being sent and when
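
To make the redaction point concrete, here's a purely illustrative before/after; the placeholder markers are just for illustration and the exact format may differ:

# Hypothetical illustration only; real redaction markers may differ
# What your prompt contains:
curl -H "Authorization: Bearer sk-ant-abc123" "https://internal.example.com/users?email=jane@acme.com"

# What the model receives after the gateway redacts it:
curl -H "Authorization: Bearer [REDACTED:API_KEY]" "https://internal.example.com/users?email=[REDACTED:EMAIL]"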

It's written in Go, uses Bubble Tea for the TUI, and is MIT licensed.

See It in Action

Here's a quick walkthrough:

Video: Bastio AI Security | BAST CLI Tool (recorded with Screen Studio)

Install (macOS / Linux):

curl -fsSL https://raw.githubusercontent.com/bastio-ai/bast/main/scripts/install.sh | sh
bast init

Generate commands from plain English:

$ bast run
> find all go files modified in the last week

find . -name "*.go" -mtime -7

[⏎ Run] [e Edit] [c Copy] [? Explain] [Esc Exit]

Understand a command before you run it:

$ bast explain "tar -xzvf archive.tar.gz"

Extracts a gzip-compressed tar archive. The flags: x=extract,
z=decompress gzip, v=verbose output, f=specify filename.

Fix a failed command:

$ git push origin feature/auth
! [rejected] feature/auth -> feature/auth (non-fast-forward)

$ bast fix
The remote branch has commits you don't have locally.
Suggested fix:
  git pull --rebase origin feature/auth

Beyond the security layer

Bast isn't just a security wrapper - it's a full-featured AI terminal assistant. A few highlights:

Dangerous command detection — automatically warns before rm -rf, git push --force, dd, and other destructive operations. There's a full list of protected git operations, including force push, hard reset, branch deletion, and history rewriting.

Git awareness — knows your current branch, staged changes, merge/rebase state, and recent commits. When you ask it to "commit my changes with a good message," it actually reads your diff to write something meaningful.

Agentic mode — type /agent for multi-step tasks. Bast can execute commands, read files, and iterate to complete complex workflows:

$ bast run
> /agent find all TODO comments in go files and summarize them

Tool Calls:
  run_command {"command": "grep -r 'TODO' --include='*.go' ."}

Found 2 TODO comments in the codebase:
1. internal/ai/anthropic.go — Add streaming support for responses
2. internal/tools/loader.go — Validate script permissions before execution

Shell integration — add eval "$(bast hook zsh)" to your shell config to get Ctrl+A (launch bast) and Ctrl+E (explain whatever command you're currently typing).
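
For zsh, that's a single line in ~/.zshrc:

# ~/.zshrc
eval "$(bast hook zsh)"   # registers the Ctrl+A (launch bast) and Ctrl+E (explain current command) keybindings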

Custom plugins — extend bast with your own tools using simple YAML manifests in ~/.config/bast/tools/. Great for deployment pipelines, database operations, or any workflow you want to make AI-aware.
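
As a rough sketch of the idea (the field names here are illustrative, not a guaranteed schema), a manifest could look something like this:

# ~/.config/bast/tools/deploy-staging.yaml (illustrative field names, not the exact schema)
name: deploy_staging
description: Deploy the current branch to the staging environment
command: ./scripts/deploy.sh staging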

The Gateway (and How to Skip It)

The Bastio security gateway is free for 100,000 requests/month. No credit card required. It handles PII redaction, injection blocking, and observability.

If you'd rather connect directly to the Anthropic API:

export ANTHROPIC_API_KEY=sk-ant-...
export BAST_GATEWAY=direct

You lose the security features, but everything else works.

Why Open Source?

This is simple: if a tool claims to protect your data, you should be able to read the code. A closed-source security tool is a contradiction.

The entire CLI is on GitHub under MIT. Read it, fork it, break it, improve it.

github.com/bastio-ai/bast

What's Next

This is v0.1.0 — the beginning. Some things I'm working on:

  • More LLM provider support beyond Anthropic
  • Team dashboards and policy controls
  • Expanded PII detection patterns
  • IDE integration

I'd Love Your Feedback

This is my first Dev.to post, and bast is a brand new project. If you're using AI coding tools in environments with sensitive data, I'd genuinely like to hear how you're handling it:

  • Are you running AI assistants against production codebases?
  • Does your team have any policies around what can be sent to LLMs?
  • Would you use something like this, or does it feel like overkill?

Star the repo if it's interesting to you, open an issue if something breaks, and tell me what I'm missing.

GitHub: github.com/bastio-ai/bast
Website: bastio.com
