DEV Community πŸ‘©β€πŸ’»πŸ‘¨β€πŸ’»

Halil Can Ozcelik

An NLP CLI App for Terminal Commands

This project is a command-line application that works with human language. The main benefits of using such a tool are:

  1. You don’t need to find different commands for the same job on each operating system. For instance, you use ls on a Linux-based OS, but dir on Windows for the same job. Of course, this is a very basic example, but if you consider more complicated, less-known commands and the number of different operating systems, working independently of the operating system becomes very beneficial.
  2. No need to memorize commands and parameters anymore. For well-known, simple commands, writing "list files" instead of ls may not seem effective. But when more advanced commands come in, or you need several parameters while executing them, writing a sentence instead of searching for them on the internet makes sense.
  3. You can use your native language on the command line. The application supports all languages available in the Wit.ai service, which includes almost all widely used languages such as Chinese, English, French, German, Russian, Spanish, and Turkish. You can check the list of supported languages at this link.

When it comes to the technical part, the project consists of two sub-projects. Both the client and server sides are developed with Node.js. Here is a more detailed explanation of them:

  1. Client-side project: It is an npm package that sends requests to the server and, depending on the response, either executes the related command or shows an error or confirmation message. You can check the GitHub link if you want to examine it in detail, or install it directly by running npm install sem-cli -g in your terminal.

  2. Server-side project: This project is developed with Azure Functions (Node.js), Cosmos DB, and Wit.ai, as a serverless system on Azure. Here is the GitHub repository of this project. The server side contains the main logic, which I will explain in detail below.

In the client-side package, there are three commands: sem-exec, sem-look, and sem-suggest. The first one runs a command, the second one looks up a command using human language, and the last one suggests new intent & command relations. The command coverage of this project will grow thanks to these suggestions from users.
Now let’s dig into how these commands work. But first, here is some information about the parameters used by these commands, so the rest of this article is easier to follow.

  • intent: A short explanation of the purpose of the command.
  • command: The related executable command.
  • message: The client’s message in human language. It doesn’t have to be free of typos; the AI service can handle many typing errors. It also supports many different languages. You can use your native language, but all examples in this article are in English.
  • dangerLevel: How dangerous it is to run this command. It can be “low”, “medium”, or “high”. (Commands with a “high” danger level are not run without the client’s approval.)
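To make the parameters concrete, here is a hedged sketch of them as a plain object, with a helper for the “high” danger rule. The field names follow the list above; the exact wire format used by sem-cli is an assumption on my part.

```javascript
// Valid values for the dangerLevel parameter described above
const DANGER_LEVELS = ['low', 'medium', 'high'];

function needsConfirmation(dangerLevel) {
  // only "high" commands require explicit user approval before running
  return dangerLevel === 'high';
}

// Illustrative example of the four parameters together
const example = {
  intent: 'list files',      // short explanation of the purpose
  command: 'ls',             // the related executable command
  message: 'list all fles',  // human-language input, typos tolerated
  dangerLevel: 'low',        // one of DANGER_LEVELS
};
```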

The workflow of running a command with sem-exec

sem-exec workflow
Let’s go through it step by step, following the numbers in the schema:

  1. The user enters a message. It doesn’t need to be 100% correctly typed; thanks to Wit.ai, we can analyze sentences despite some typos.
  2. The server asks Wit.ai to find the related intent, in other words, the meaning of the user’s sentence.
  3. If no intent is found, the server returns an error message to the client.
  4. The server queries the database for the related command, according to the intent and the operating system of the request.
  5. The result comes back from the database.
  6. If a command is found, it is returned together with its danger level; otherwise, a “not found” error message is returned. On the client side, the command is either run or the error or confirmation message is shown.
  7. For commands with a “high” danger level, the program asks for the client’s confirmation: Are you sure to run: <result-command>? (type 'y' for yes, 'n' for no). If the user accepts, the command is executed.
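The steps above can be sketched as a single server-side function. Here, witLookup and findCommand stand in for the real Wit.ai call and the Cosmos DB query; their names and shapes are my assumptions, not the actual implementation.

```javascript
// Simplified sketch of the sem-exec server flow (steps 2-7).
// witLookup: message -> intent name or null (Wit.ai stand-in)
// findCommand: (intent, os) -> { command, dangerLevel } or null (DB stand-in)
function resolveCommand(message, os, witLookup, findCommand) {
  const intent = witLookup(message);                  // step 2: semantic analysis
  if (!intent) return { error: 'Intent not found' };  // step 3: no result
  const record = findCommand(intent, os);             // steps 4-5: DB query
  if (!record) return { error: 'Command not found' }; // step 6: not found
  return {                                            // steps 6-7
    command: record.command,
    dangerLevel: record.dangerLevel,
    needsConfirmation: record.dangerLevel === 'high',
  };
}
```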

The workflow of running a command with sem-look

This command follows the same process as sem-exec.
However, it returns the corresponding command instead of executing it. For instance, if you run
sem-look compare files p="a.txt b.txt"
then it returns the following message:
Your command: "cmp a.txt b.txt" with danger level: "low" for your current operating system.
The current operating system is macOS in this example.
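On the client side, producing that message is just string formatting. The helper below is an illustration of how it could look, not the actual sem-cli code.

```javascript
// Format a sem-look result for display (assumed helper, illustrative only)
function formatLookResult(command, dangerLevel) {
  return `Your command: "${command}" with danger level: "${dangerLevel}" ` +
         `for your current operating system.`;
}
```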

The logic of sem-suggest is much simpler

sem-suggest workflow

  1. The user sends a suggestion triple, which must contain an intent, a command, and a danger level.
  2. The server adds the new suggestion to the database. These records are stored in the suggestions table and, in the current scenario, are evaluated manually.
  3. The database returns a response.
  4. The server returns a success or error message to the client.
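The sem-suggest handler can be sketched as a short validate-and-store function. Here saveSuggestion stands in for the Cosmos DB insert; the function name and response shape are assumptions.

```javascript
// Minimal sketch of steps 1-4: validate the suggestion triple, store it,
// and return a success or error result.
function handleSuggestion(triple, saveSuggestion) {
  const { intent, command, dangerLevel } = triple || {};
  if (!intent || !command || !['low', 'medium', 'high'].includes(dangerLevel)) {
    return { status: 'error', message: 'intent, command and dangerLevel are required' };
  }
  saveSuggestion({ intent, command, dangerLevel }); // step 2: suggestions table
  return { status: 'success' };                     // step 4
}
```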

Finally, I want to mention the database. There are two containers with the same document structure.

  1. commands
    • intent (the aim of the command)
    • command (the executable command)
    • os (the operating system the command works on)
    • dangerLevel (the danger level of the command)
  2. suggestions
    • intent (the aim of the command)
    • command (the executable command)
    • os (the operating system the command works on)
    • dangerLevel (the danger level of the command)
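A document in either container would look like this (the values are illustrative, not taken from the real database):

```javascript
// Example document shape shared by the commands and suggestions containers
const exampleCommand = {
  intent: 'compare files', // the aim of the command
  command: 'cmp',          // the executable command
  os: 'darwin',            // the operating system the command works on
  dangerLevel: 'low',      // the danger level of the command
};
```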

By the way, you don’t need to specify your operating system for either sem-look or sem-suggest, because it is detected by the client-side program and added to the request as a parameter. So please suggest commands that work on your current operating system.
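In Node.js, this kind of client-side detection can be done with the built-in process.platform; whether sem-cli uses exactly this mechanism is an assumption.

```javascript
// Detect the client's operating system using Node's built-in process.platform.
// It returns 'darwin' on macOS, 'win32' on Windows, and 'linux' on most
// Linux distributions.
function detectOs() {
  return process.platform;
}
```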

Current Status

The project works as expected. However, there is not enough data in the commands table yet, so it does not cover a wide variety of command requests for now. For this reason, enriching the commands list is the most important task in the current situation.

Future Improvements & Challenges

In my opinion, the main challenge for this project is enriching the database with new commands. It would be very difficult if only a few people added new commands, so the suggestion functionality was added to let everyone propose new commands. After these suggestions are reviewed, the useful ones are added to the database. This evaluation process is done manually for now.
Additionally, distinguishing the parameter differences between commands during semantic analysis will be another challenging point. Although we handle finding the correct command for a human-language request, when it comes to detecting differences in command parameters, training the AI is crucial. User messages and intents are matched in the Wit.ai panel, which increases the analysis power of our tool. The more the tool is used, the more matches occur, and the higher the confidence rate (which is between 0 and 1) becomes. We can then raise our matching threshold, which also helps to detect differences between quite similar messages such as "list files" => ls and "list all files" => ls -a. Again, this is a very basic example, and it is already detectable by our system :)
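The thresholding idea can be sketched as follows. Wit.ai returns candidate intents with a confidence score between 0 and 1; the function below picks the best candidate above a cutoff and rejects ambiguous matches. The threshold value and the result shape are illustrative assumptions.

```javascript
// Pick the highest-confidence intent above a threshold, or null if none
// qualifies. Raising the threshold makes matching stricter, which helps
// separate similar messages like "list files" vs "list all files".
function pickIntent(intents, threshold = 0.8) {
  const best = intents
    .filter((i) => i.confidence >= threshold)
    .sort((a, b) => b.confidence - a.confidence)[0];
  return best ? best.name : null;
}
```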

This is my project on an NLP approach to command-line usage. If you think it can be useful and you have some ideas, I will be happy to hear them. I am also eager to collaborate.
