TL;DR
Built a terminal command that generates code/commands using any AI provider. No more alt-tabbing to ChatGPT. Works in vim too. pip install llmswap
→ llmswap generate "your command"
Last Tuesday, I was debugging a Kubernetes cluster at 3 AM. Had vim open with a YAML file that refused to work, ChatGPT open in Chrome, Stack Overflow in another tab, and Claude.ai in a third. The constant alt-tabbing between terminal and browser was driving me absolutely insane.
That's when it hit me - why am I leaving my terminal to get help from AI?
The Breaking Point
We've all been there. You know that one command exists, but can't remember the exact syntax. Maybe it's that Docker cleanup command, or the specific awk pattern you need, or how to properly configure nginx as a reverse proxy.
The routine is always the same: switch to browser, search, scroll through results, copy command, switch back to terminal, paste, test, fail, repeat.
I realized I was doing this dozens of times per day. Every context switch was breaking my flow. Every copy-paste felt like admitting defeat.
That's when I remembered - llmswap 4.1.1 had just shipped with a generate command. What if I could get AI help without ever leaving my terminal?
The Terminal Revolution
Here's what changed everything. Instead of alt-tabbing to ChatGPT, I could just type:
llmswap generate "show me all Docker containers eating memory"
And instantly get:
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
No browser. No tabs. No context switching. The answer appeared right where I needed it.
But it gets better. Need to save it to a file?
llmswap generate "backup script for PostgreSQL with rotation" > backup.sh
chmod +x backup.sh
Done. Script created, permissions set, ready to go.
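What does it actually hand back? It varies by provider and prompt, but it's usually something along these lines (a rough sketch, not verbatim output - the database name, backup directory, and 7-day retention below are placeholders):
#!/usr/bin/env bash
# Sketch of a generated backup script - paths, DB name, and retention are placeholders
set -euo pipefail
BACKUP_DIR="/var/backups/postgres"
DB_NAME="mydb"
STAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
# Dump the database and compress it
pg_dump "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_${STAMP}.sql.gz"
# Rotation: remove backups older than 7 days
find "$BACKUP_DIR" -name "${DB_NAME}_*.sql.gz" -mtime +7 -delete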
Real Examples from This Week
Let me show you how this actually works in practice. These are real commands I generated this week:
Monday morning: Client's website was down. Needed to check nginx logs quickly.
llmswap generate "find errors in nginx logs from last hour"
# Got: tail -f /var/log/nginx/error.log | grep $(date -d '1 hour ago' '+%d/%b/%Y:%H')
Tuesday: Setting up a new Django project. Needed that docker-compose I always forget.
llmswap generate "docker compose for django postgres redis development" --language yaml
Perfect 40-line docker-compose.yml appeared. With health checks, volume mounts, everything.
Wednesday: Database migration day. Needed to find duplicate emails before adding a unique constraint.
llmswap generate "PostgreSQL find duplicate emails in users table"
# Got: SELECT email, COUNT(*) FROM users GROUP BY email HAVING COUNT(*) > 1;
Thursday: Had to analyze some API logs. That awk command I can never remember.
llmswap generate "extract IP addresses and count from nginx access log"
# Got: awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr
Friday: Client needed 100 VMs spun up in OpenStack with delays to avoid overwhelming the API.
llmswap generate "script to create 100 VMs in openstack with 30 second delays"
Got a complete bash script with proper batching and error handling.
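The exact script differs from run to run, but the core was a loop roughly like this (image, flavor, and network names below are placeholders, not what the AI literally produced):
#!/usr/bin/env bash
# Sketch of the batching loop - image/flavor/network values are placeholders
set -euo pipefail
for i in $(seq 1 100); do
  if openstack server create --image ubuntu-22.04 --flavor m1.small --network private "vm-${i}"; then
    echo "created vm-${i}"
  else
    echo "failed on vm-${i}, moving on" >&2
  fi
  sleep 30  # pause so the API isn't overwhelmed
done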
Weekend: Emergency debugging session. Needed to grep through compressed log files.
llmswap generate "search for error patterns in gzipped log files"
# Got: zgrep -i "error\|fail\|exception" /var/log/app/*.gz | head -50
Each time, the answer appeared in seconds. No browser needed.
But Wait, It Gets Even Better
Here's where things got really interesting. I discovered you can use this inside vim too.
Working on a Node.js project, needed rate limiting middleware. Instead of googling and copy-pasting, I typed this inside vim:
:r !llmswap generate "Express rate limiting middleware"
And this appeared directly in my file:
const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: {
    error: 'Too many requests, please try again later.'
  },
  standardHeaders: true,
  legacyHeaders: false,
});

module.exports = limiter;
I just sat there staring at the screen. The code materialized exactly where my cursor was. No switching. No copying. Just… there.
The Vim Integration That Blew My Mind
The :r ! command in vim reads the output of any shell command directly into your buffer. Combine that with llmswap generate and you get instant code generation inside your editor.
Need a MongoDB schema?
:r !llmswap generate "User schema with authentication for MongoDB"
Need nginx config?
:r !llmswap generate "nginx reverse proxy config for Node.js app"
Need a bash script?
:r !llmswap generate "script to backup files older than 30 days"
Every time, the code appears right at your cursor. It's like having ChatGPT built into vim, but better - because it works with any LLM provider.
The Provider Freedom That Changes Everything
Here's what really gets me excited. While GitHub Copilot locks you into their $10/month subscription and their provider choices, llmswap gives you complete freedom.
Want to use Claude because it's better at systems programming? Set ANTHROPIC_API_KEY.
Want ultra-fast responses? Use Groq with GROQ_API_KEY.
Want to keep everything local? Run Ollama.
Already paying for Gemini API? Use that instead of another subscription.
export ANTHROPIC_API_KEY="your-key"
llmswap generate "whatever you need"
Same command, different AI behind it. Your choice.
My New Daily Workflow
This is how I work now:
Morning system check:
llmswap generate "check system health on Ubuntu server"
# Gets me: df -h; free -m; uptime; systemctl --failed
Database work:
llmswap generate "MySQL show slow queries from last hour"
# Gets me: mysqldumpslow -t 10 /var/log/mysql/slow.log
llmswap generate "MongoDB create user with read and write privileges"
# Gets me: mongo admin --eval 'db.createUser({user:"appuser",pwd:"password",roles:[{role:"readWrite",db:"myapp"}]})'
Git operations (because I always forget):
llmswap generate "git remove file from history completely"
# Gets me: git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch file.txt' HEAD
Quick scripts in vim:
:r !llmswap generate "Python function to read CSV with error handling"
:r !llmswap generate "JavaScript debounce function"
:r !llmswap generate "Bash script to rotate logs"
The Unexpected Use Cases
I started using it for everything. Literally everything.
AWS operations:
llmswap generate "AWS CLI list all EC2 instances with names and IPs"
Kubernetes debugging:
llmswap generate "kubectl commands to debug failing pod"
Performance analysis:
llmswap generate "find which process is using most CPU"
Security checks:
llmswap generate "check for failed SSH attempts in system logs"
Network debugging:
llmswap generate "test if port 443 is open on remote server"
Even configuration files. Need a systemd service?
llmswap generate "systemd service file for Python web app" > myapp.service
The answer is always there, in seconds, right in your terminal.
Why This Matters More Than You Think
Let's talk numbers. GitHub Copilot costs $10/month per developer. For a team of 10 developers, that's $1,200/year.
With llmswap:
- Use Gemini: Costs 96% less than GPT-4
- Use Ollama: Completely free, runs locally
- Use whatever API access you already have
- No subscriptions, no vendor lock-in
But it's not just about money. It's about choice. Maybe you want IBM Watson for enterprise features. Maybe you want Groq for speed. Maybe you want everything local with Ollama.
Your terminal, your AI, your choice.
The Setup (Embarrassingly Simple)
pip install llmswap
Set one environment variable for your preferred provider:
export OPENAI_API_KEY="your-key" # Or any other provider
That's it. Start generating:
llmswap generate "your first command"
What I Realized
I haven't opened ChatGPT in my browser for three weeks. My terminal became my AI assistant.
Think about your daily workflow. How many times do you:
- Ask ChatGPT for a command you've forgotten
- Search Stack Overflow for syntax
- Switch to Claude for code explanations
- Jump to Gemini for a different perspective
- Copy-paste from documentation sites
What if you never had to leave your terminal again?
The Real Test
Last Friday, I was setting up monitoring for a client. Needed Prometheus, Grafana, alertmanager - the whole stack. Instead of hunting through documentation:
llmswap generate "docker compose for Prometheus Grafana monitoring stack" > monitoring.yml
Got a complete 80-line docker-compose file with proper networking, volumes, and health checks. Launched it, configured alerts, done.
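Launching it, for the record, is a single command:
docker compose -f monitoring.yml up -d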
What used to take 2 hours took 20 minutes.
Try This Right Now
If you're in your terminal, try this:
echo "The current time is: $(date)"
See how the command output appears? Now imagine that with any code you need:
llmswap generate "function to validate email addresses in Python"
Or in vim:
:r !date
See how it inserts into your file? Now try:
:r !llmswap generate "that regex you always forget"
The Future is Here
We're developers. We live in terminals. Our AI assistants should live there too.
Every alt-tab breaks focus. Every context switch costs time. Every copy-paste is friction.
This shipped in llmswap 4.1.1 last week. It's already changed how I write code.
The best part? It's not replacing your thinking - it's eliminating the friction between your thoughts and your code.
Your terminal is waiting. Your AI assistant is one pip install away.
pip install llmswap
llmswap generate "the command you always google"
Welcome to coding without context switching.
Setup: Just set one API key for your preferred provider:
export ANTHROPIC_API_KEY="sk-..." # For Claude
export OPENAI_API_KEY="sk-..." # For GPT-4
export GEMINI_API_KEY="..." # For Google Gemini
# ... or any of the 8 supported providers
⭐ Star the project on GitHub if it saves you time - every star motivates continued development! ⭐
This is part 2 of my llmswap series. Read part 1: From Hackathon to Open Source to see how llmswap was born.
What's your biggest context-switching pain point when coding? Drop your thoughts in the comments!