Recently, I've noticed that LLM and RAG (Retrieval-Augmented Generation) applications have become particularly popular, especially here in the Bay Area. I decided to spend some downtime exploring the space and created a small open-source project called Termax (Github link).
Initially, its main function was to let programmers use natural language to generate the commands they want right in the command line. However, since everyone's environment differs, directly generating commands can fail (for example, generating PowerShell commands on Linux). So I thought about incorporating some of the user's environment data to improve the model's outputs.
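As a rough illustration of the idea, here is a minimal sketch of collecting environment facts and prepending them to the prompt. The field names and prompt wording are my own assumptions, not Termax's actual implementation:

```python
import os
import platform


def gather_environment() -> dict:
    """Collect basic environment facts to ground command generation.

    A minimal sketch; the real tool may collect more (or different) fields.
    """
    return {
        "os": platform.system(),                     # e.g. "Linux", "Darwin", "Windows"
        "release": platform.release(),
        "shell": os.environ.get("SHELL", "unknown"),
        "cwd": os.getcwd(),
    }


def build_prompt(request: str) -> str:
    """Embed the environment context into the prompt sent to the LLM."""
    env = gather_environment()
    context = "\n".join(f"{key}: {value}" for key, value in env.items())
    return (
        "Generate a single shell command for this environment:\n"
        f"{context}\n\n"
        f"User request: {request}\n"
        "Command:"
    )
```

With the OS and shell in the prompt, the model is far less likely to hand a Linux user a PowerShell one-liner.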
Another way to use RAG, which I think many LLM agents are adopting, is to cache 'successful' output samples for future model reference — making the model more attuned to user habits. Termax caches outputs that run successfully without errors for future requests, which has proven to significantly improve performance.
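To make the caching idea concrete, here is a toy in-memory version: store commands that ran cleanly, then retrieve the closest past requests by simple word overlap. This is a deliberately naive stand-in; a real system would likely use embeddings and persistent storage:

```python
def cache_success(cache: list[dict], request: str, command: str) -> None:
    """Record a command that ran without errors, keyed by the user's request."""
    cache.append({"request": request, "command": command})


def retrieve_similar(cache: list[dict], request: str, k: int = 3) -> list[dict]:
    """Naive retrieval: rank cached entries by word overlap with the new request.

    A stand-in for embedding-based similarity search.
    """
    words = set(request.lower().split())
    return sorted(
        cache,
        key=lambda entry: len(words & set(entry["request"].lower().split())),
        reverse=True,
    )[:k]
```

The retrieved examples can then be pasted into the prompt as few-shot demonstrations, nudging the model toward commands that already worked for this user.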
Here are some small demos:
BTW, as someone quite lazy... I feel like the large model can actually predict what my next command might be, such as the series of operations I frequently run: `git add .`, `git commit -m ...`, and `git push`. I wondered if the model could predict my intentions and guess what I'm about to do (similar to context-based command suggestions):
I implemented a small feature in Termax, but sometimes it doesn’t guess very accurately, Lol.
However, some might worry about sensitive information in their shell command history, so not everyone may want to enable this feature. We are also considering how to make it more secure.
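One direction we could take is redacting likely secrets from history entries before they ever leave the machine. A minimal sketch with a few hypothetical regex patterns (a real deployment would need a much broader, better-tested list):

```python
import re

# Hypothetical patterns for illustration only; not an exhaustive secret scanner.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)(--password[= ])\S+"),
    re.compile(r"(?i)(api[_-]?key[= ])\S+"),
    re.compile(r"(?i)(token[= ])\S+"),
]


def redact(command: str) -> str:
    """Mask likely secrets in a shell command before sending it anywhere."""
    for pattern in SENSITIVE_PATTERNS:
        command = pattern.sub(lambda m: m.group(1) + "<REDACTED>", command)
    return command
```

Redaction like this is best-effort at most, which is why the feature should stay opt-in.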
Nonetheless, we welcome everyone to help test this little gadget. We’ve also made plugins for bash, zsh, and fish, so you can easily use shortcuts to generate commands. Your support and suggestions are welcome!