
claude-switcher: The Concept of Piping Prompts into Unix

Originally published at blog.tumf.dev on 2026-01-16
Original article (Japanese): claude-switcher: プロンプトをUnixパイプに流し込む発想

Recently, I came across a post on Hacker News titled “Executable Markdown files with Unix pipes”. I couldn't help but think, "This is interesting."

With claude-switcher, you can make a Markdown file executable simply by writing #!/usr/bin/env claude-run at the top of the file. The file can then be chained into pipes just like any ordinary Unix command.

cat data.json | ./analyze.md > results.txt
git log -10 | ./summarize.md

The idea of "integrating LLMs into a pipeline" is refreshing, and I was eager to try it out.

What is claude-switcher?

claude-switcher is a tool that makes Markdown files executable via Claude Code. It was developed by the team at Andi Search and is released under the MIT license.

Key features include:

  • Shebang support: Markdown files can be executed directly.
  • Unix pipe support: stdin/stdout can be used, allowing for combinations with other commands.
  • Provider switching: You can switch between multiple cloud providers like AWS Bedrock, Vertex AI, and Azure.
  • Session isolation: Completely separate your personal Claude Code environment from automation scripts.

What’s Interesting About It?

The essence of this tool lies in the "combination of deterministic processing and LLMs."

It can handle tasks that were difficult to express in traditional shell scripts, such as:

  • Summarizing log files
  • Evaluating test results
  • Generating commit messages
  • Classifying and formatting data

These "ambiguous tasks that required human judgment" can now be delegated to LLMs. Moreover, they can be treated as part of a pipeline.

# Run tests and have LLM summarize the results
./run_tests.sh | ./summarize-results.md > report.txt

# Generate a changelog based on Git history
git log --oneline -20 | ./generate-changelog.md > CHANGELOG.md

The novelty is in connecting deterministic processing (shell scripts, command lines) with non-deterministic processing (LLMs) in the same pipeline.

Addressing the "Lack of Reproducibility" Issue

In the comments section of Hacker News, there were many criticisms regarding "nondeterministic shell scripting." LLMs return different outputs even with the same input. Therefore, unlike shell scripts, "consistent results" cannot be guaranteed.

However, I believe this comes down to how you use it.

Keep using plain shell scripts for the parts they solve well (file operations, data extraction, command execution, and so on), and delegate only the parts that require judgment, summarization, or classification to the LLM.

For example:

# Deterministic part: Run tests and extract logs
./run_tests.sh 2>&1 | tee test.log

# Nondeterministic part: LLM summarizes the logs
cat test.log | ./summarize.md > summary.txt

The wording of the summary may change from run to run, but the goal of "identifying and reporting the three failed tests" is still achieved.
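
One way to tighten this up further (my own sketch, not a pattern from the claude-switcher docs) is to pin the output format in the prompt, so that deterministic tools downstream can still parse the result:

cat > summarize.md << 'EOF'
#!/usr/bin/env claude-run
Summarize the test log from stdin.
Output exactly this format and nothing else:
FAILED: <count>
- <one line per failed test>
EOF

chmod +x summarize.md
cat test.log | ./summarize.md | grep '^FAILED:'

The phrasing of each bullet may still vary between runs, but the FAILED: line stays machine-checkable.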

Installation and Basic Usage

Prerequisites

  • Claude Code must be installed.
  • A macOS or Linux environment is required.

Setup

git clone https://github.com/andisearch/claude-switcher.git
cd claude-switcher
./setup.sh

The command will be installed in /usr/local/bin, and a configuration file will be created in ~/.claude-switcher/.
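
You can verify that the install landed where expected with standard commands:

which claude-run
ls ~/.claude-switcher/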

To update, run git pull and re-execute ./setup.sh:

cd claude-switcher
git pull
./setup.sh

Your First Executable Markdown

Let's start with a simple example:

cat > analyze.md << 'EOF'
#!/usr/bin/env claude-run
Analyze this codebase and summarize the architecture in 3 bullet points.
EOF

chmod +x analyze.md
./analyze.md

This will analyze the codebase in the current directory.
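
And since it behaves like any other executable, you can redirect or pipe its output as usual:

./analyze.md > architecture.txt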

Using in a Pipeline

Here’s an example that receives data from standard input:

cat > summarize-commits.md << 'EOF'
#!/usr/bin/env claude-run
Summarize the following git commits in plain Japanese, focusing on what changed and why.
EOF

chmod +x summarize-commits.md
git log --oneline -10 | ./summarize-commits.md

When you pipe the Git history, it will summarize it in Japanese.

Permission Management

By default, Executable Markdown does not have code execution permissions. This is a design choice for safety.

If code execution is necessary, you must explicitly allow it with a flag:

#!/usr/bin/env -S claude-run --permission-mode bypassPermissions
Run ./test/automation/run_tests.sh and summarize what passed and failed.

Specifying --permission-mode bypassPermissions allows the LLM to execute shell commands. When passing multiple arguments in a shebang line, use #!/usr/bin/env -S so that env splits them into separate arguments.

Important: Use this flag only with trusted scripts. There is a risk of the LLM inadvertently executing dangerous commands (e.g., rm -rf).

Provider Switching

If you want to separate your personal Claude Code environment from automation scripts, you can execute them via cloud provider APIs.

# Using AWS Bedrock
claude-run --aws task.md

# Using Google Vertex AI
claude-run --vertex task.md

# Using Anthropic API
claude-run --apikey task.md

Configuration is done in ~/.claude-switcher/secrets.sh:

nano ~/.claude-switcher/secrets.sh

# AWS Bedrock
export AWS_PROFILE="your-profile-name"
export AWS_REGION="us-west-2"

# Anthropic API
export ANTHROPIC_API_KEY="sk-ant-..."

This allows you to run automation scripts in the cloud without worrying about rate limits on your personal Claude Code subscription.
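
For example, a nightly job could generate a changelog entirely on Bedrock. This is a hypothetical crontab entry: it assumes claude-run accepts piped stdin alongside a script and a provider flag, and that the path is adjusted to your repository.

# Hypothetical: nightly changelog generated via AWS Bedrock
0 6 * * * cd /path/to/repo && git log --oneline -20 | claude-run --aws ./generate-changelog.md > CHANGELOG.md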

Practical Example: Slack Notifications for Test Results

Here is a practical example I tried that could be genuinely useful.

1. Test Execution Script (standard bash)

#!/bin/bash
# test-runner.sh
pytest tests/ --tb=short > test-output.txt 2>&1
echo $? > test-exit-code.txt

2. Markdown for Summarizing Results (summarize-test.md)

#!/usr/bin/env claude-run
Read test-output.txt and test-exit-code.txt.
If exit code is 0, output "✅ All tests passed".
Otherwise, summarize failed tests in Japanese (max 3 lines).

3. Connecting in a Pipeline

./test-runner.sh && ./summarize-test.md | slack-cli post -c dev-alerts

This setup allows the LLM to summarize the test results and post them to Slack.

Points to Note

Nondeterminism

Even with the same input, LLMs may return different outputs. It is not suitable for tasks expecting "exactly the same results."

Cost

When executing via API, token usage fees will apply. It is advisable to estimate costs before processing large amounts of data.
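
As a rough pre-flight check, you can estimate the input size with the common rule of thumb of about 4 characters per token for English text:

# Rough token estimate for a large input file
echo "$(( $(wc -c < data.json) / 4 )) tokens (approx.)"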

Security

When using --permission-mode bypassPermissions, code generated by the LLM will be executed. If dealing with untrusted input data, it should be run in an isolated environment, such as a DevContainer.
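
For instance, running the script in a throwaway container with a read-only mount limits what a runaway command can damage. This is only a sketch; claude-env is a hypothetical image with claude-switcher and credentials preconfigured.

# Run the script in a disposable container, repo mounted read-only
docker run --rm -v "$PWD:/work:ro" -w /work claude-env ./task.md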

Similar Tools

Similar tools mentioned in the Hacker News comments include:

  • mdflow: Supports variable expansion within Markdown.
  • Atuin Desktop: YAML format "Executable Runbook."
  • Runme: Executes code blocks within Markdown documents.

Each of these tools attempts to "make documents executable" in different ways.

Conclusion

The appeal of claude-switcher lies in the fact that "prompts become files."

  • They can be managed with Git (allowing for diffs and history).
  • They can be shared with teams (enabling reusable automation).
  • They can be integrated into Unix pipelines (allowing combinations with existing tools).

There was also a comment noting that it is "more readable than curl | bash." Indeed, with instructions written in Markdown, it is easier to see what is actually being done.

The idea of treating LLMs as "commands" is likely to become important in future workflow automation. If you're interested, I encourage you to give it a try.
