ChatGPT in the Shell

Introduction:
Hello there amigo πŸ‘‹.

In this tutorial, I will show you how to create a CLI app that allows you to interact with ChatGPT from the command line. The app, named "ChatShell", will allow users to ask for shell commands for specific tasks or chat with ChatGPT directly. We'll be using the Go programming language and the Cobra CLI library to build our CLI app, and the go-openai library to interact with OpenAI's Chat Completions API (GPT-3.5 Turbo).

Please do give a thumbs-up if this was insightful in any way.

Prerequisites:

  • Go programming language installed (version 1.20.2 or higher)
  • Cobra CLI library installed
  • go-openai library installed
  • Viper library installed (for config and environment vars)
  • An OpenAI API key

End Goal:

  • Be able to run commands of the form:

chatshell ask "how to delete a local branch ?"
chatshell ask -c "when was gpt first built ?"

Steps:

A. Set up the project structure

Create a new directory for the project and navigate to it

$ mkdir chatshell
$ cd chatshell

Initialize a new Go module

$ go mod init github.com/{your_username}/chatshell
# note: replace {your_username} with actual github username

Install the Cobra CLI library and go-openai library

$ go install github.com/spf13/cobra-cli@latest
$ go get -u github.com/sashabaranov/go-openai
$ go get github.com/spf13/viper
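
Note that go install drops the cobra-cli binary into your Go bin directory (usually $HOME/go/bin). If the next step complains that cobra-cli cannot be found, that directory is likely not on your PATH; adding it looks something like this:

$ export PATH="$PATH:$(go env GOPATH)/bin"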

B. Create the CLI app

cobra-cli automatically generates boilerplate for your CLI app, which speeds up development.

Use the Cobra CLI to generate the basic structure of the CLI app:

$ cobra-cli init

This will generate the basic structure of the CLI app, including a main.go file, a cmd directory, and a LICENSE file.

You can have a look at the generated code if you're familiar with Cobra.

(Screenshot: project structure after running cobra-cli init)
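
For reference, the layout at this point looks roughly like this (the exact files depend on your cobra-cli version and configuration):

chatshell/
├── LICENSE
├── cmd/
│   └── root.go
├── go.mod
├── go.sum
└── main.go

main.go is just a thin entry point that hands control to the cmd package, roughly:

/*
Copyright © 2023 NAME HERE <EMAIL ADDRESS>
*/
package main

import "github.com/{your_username}/chatshell/cmd"

func main() {
    cmd.Execute()
}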

C. Implement the 'ask' command

Once again we will call on our friend, cobra-cli:

cobra-cli add ask

This will generate a file, ask.go, containing a stub implementation of the 'ask' command. We have to fine-tune this boilerplate to our needs; the fine-tuned code, commented for understanding, is walked through below.
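
For context, the stub that cobra-cli generates looks roughly like this (placeholder comments abridged; the exact text depends on your cobra-cli version):

/*
Copyright © 2023 NAME HERE <EMAIL ADDRESS>
*/
package cmd

import (
    "fmt"

    "github.com/spf13/cobra"
)

// askCmd represents the ask command
var askCmd = &cobra.Command{
    Use:   "ask",
    Short: "A brief description of your command",
    Long:  `A longer description that spans multiple lines.`,
    Run: func(cmd *cobra.Command, args []string) {
        fmt.Println("ask called")
    },
}

func init() {
    rootCmd.AddCommand(askCmd)
}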

Import necessary packages

/*
Copyright Β© 2023 Ugochukwu Onyebuchi <pyvinci@gmail.com>
*/
package cmd

import (
    "context"
    "fmt"
    "os"
    "runtime"

    "github.com/spf13/cobra"
    "github.com/spf13/viper"
    openai "github.com/sashabaranov/go-openai"
)

Next, initialize the askCmd:

  • Use: a short description of how to use the ask command; in our case we expect ask to be followed by a text in quotes
  • Short: a short description of the ask command
  • Long: a longer description
  • Args: we specify that we only want one argument, which is a text in quotes
  • Run: the function to be called; this function will handle the business logic

// askCmd represents the ask command
var askCmd = &cobra.Command{
    Use:   "ask [text]",
    Short: "ask ChatGPT",
    Long:  `ask is used to ask for shell commands for a particular task`,
    Args:  cobra.ExactArgs(1),
    Run:   runAsk,
}

// authToken holds your openai auth key
var authToken string
// initial operations
func init() {
    // add the ask command to root
    rootCmd.AddCommand(askCmd)

    // add flag to enable chat mode
    askCmd.PersistentFlags().BoolP("chat", "c", false, "Chat with ChatGPT")

    // Read the auth token from a configuration file in the .chatshell
    // directory of the user's home directory
    viper.SetConfigName("config")
    viper.AddConfigPath("$HOME/.chatshell")
    viper.AddConfigPath("$USERPROFILE/.chatshell") // home directory on Windows
    viper.AutomaticEnv()

    // A missing config file is not fatal: the token may still come from
    // the OPENAI_AUTH_TOKEN environment variable via AutomaticEnv
    if err := viper.ReadInConfig(); err != nil {
        fmt.Printf("Warning: could not read configuration file: %v\n", err)
    }

    // Handle a missing auth token
    authToken = viper.GetString("OPENAI_AUTH_TOKEN")
    if authToken == "" {
        fmt.Println("Error: OPENAI_AUTH_TOKEN is not set in the config file or as an environment variable")
        os.Exit(1)
    }
}

The prompt I use is a bit large, but for better correctness I get the OS type using runtime.GOOS and the shell type using os.Getenv("SHELL"). This helps ChatGPT give reasonable shell commands for the relevant OS; we don't want shell commands for macOS while we are using Windows. The prompt also makes ChatGPT respond with "Sorry I can't seem to find a command for that" if it can't find a command for your query. It took a while to arrive at a prompt this good; it's an iterative process.

// runAsk calls OpenAI and prints the response
func runAsk(cmd *cobra.Command, args []string) {
    client := openai.NewClient(authToken)
    osType := runtime.GOOS
    shell := os.Getenv("SHELL")
    content := ""

    chatMode, _ := cmd.Flags().GetBool("chat")

    // We use the state of the flag to determine the context
    if !chatMode {
        content = fmt.Sprintf(`You are a very helpful shell assistant that gives users only shell commands to achieve a task, just give out only the shell command(s), and nothing else, no preamble, greetings or explanation please, just the shell command. When you can't find a command for a query/prompt/greeting respond strictly with "Sorry I can't seem to find a command for that". Start now: "%v in %v os using %v shell"`, args[0], osType, shell)
    } else {
        content = args[0]
    }

    // Get ChatGPT's response
    resp, err := client.CreateChatCompletion(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: openai.GPT3Dot5Turbo,
            Messages: []openai.ChatCompletionMessage{
                {
                    Role:    openai.ChatMessageRoleUser,
                    Content: content,
                },
            },
        },
    )
    if err != nil {
        fmt.Printf("ChatCompletion error: %v\n", err)
        os.Exit(1)
    }

    fmt.Println(resp.Choices[0].Message.Content)
}

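As a concrete example, assuming a Linux machine with /bin/bash as the login shell, running ./chatshell ask "how to delete a local branch ?" makes the Sprintf above end the prompt with: "how to delete a local branch ? in linux os using /bin/bash shell".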

D. Update the configuration file
For all of this to work, you will need your OPENAI_AUTH_TOKEN set.
Remember, we configured Viper to look for a config file that should contain our OPENAI_AUTH_TOKEN.

Create a new directory called .chatshell in your home directory (on Windows, use %USERPROFILE% instead of $HOME):

$ mkdir ~/.chatshell

Create a new file called config.json in the .chatshell directory and add your OpenAI API key as the value for the OPENAI_AUTH_TOKEN variable:

{"OPENAI_AUTH_TOKEN":"your_api_key_here"}
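
Alternatively, because the init function above falls back to viper.AutomaticEnv(), you can skip the config file and export the key as an environment variable instead (the example below assumes a Unix-like shell):

$ export OPENAI_AUTH_TOKEN=your_api_key_here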

E. Build the CLI app
Navigate to the project's root directory and build the app:

$ go build

This will create an executable binary called 'chatshell'.
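
Optionally, move the binary somewhere on your PATH so it can be run from any directory; on a Unix-like system that could look like:

$ sudo mv chatshell /usr/local/bin/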

F. Use the ChatShell CLI app
Run the chatshell binary and provide a task description as an argument:

$ ./chatshell ask "how to delete a local branch ?"
output: git branch -d <branch_name>

The app will then interact with ChatGPT and return the shell command for the given task.

You can also use the -c flag to chat with ChatGPT directly:

$ ./chatshell ask -c "When was gpt first built?"
output: GPT (Generative Pre-trained Transformer) was first introduced by OpenAI in June 2018

Conclusion:

In this tutorial, we saw how to create a Cobra CLI app that interacts with ChatGPT from the command line. The ChatShell app can be used to obtain shell commands for specific tasks or to chat with ChatGPT directly. Feel free to explore the provided code and the go-openai library to extend the functionality of the app or adapt it to your needs.

Repository: https://github.com/ColeDrain/chatshell
