`safeaws` checks your AWS CLI commands before they are run

As a heavy AWS user, you probably know the struggle of needing to edit an existing AWS resource. Sometimes, the AWS CLI is just so much easier to use than the AWS Console, whether because it's more scriptable or because the API isn't yet supported in the Console.

You end up copying a nicely drafted AWS CLI snippet from StackOverflow or, more recently, one generated by GenAI.

Are you confident the AWS API call won't make any breaking changes? Are there any potential side effects? Have you familiarized yourself with all the arguments?

This is when I came up with the idea of the safeaws CLI wrapper (by the way, I pronounce it “Safe”-“W”-“S”):

safeaws cli demo GIF

How does it work

In short, it is a Python wrapper around the familiar aws CLI that does the following:

  1. Use it just like a normal AWS CLI command - simply change the aws prefix to safeaws.
  2. It fetches the CLI documentation with aws <service> <cmd> help and preprocesses it to strip the less useful parts of the docs.
  3. A prompt is generated from the CLI documentation and the command you want to execute, then sent to Amazon Bedrock's Claude 3 model.
  4. The model checks for any potential issues or things you should note.
  5. The CLI utility streams the checks back to your shell.
  6. After reviewing the checks, you are prompted to decide whether the AWS CLI command should still be run.
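The steps above can be sketched in Python roughly as follows. This is a simplified illustration, not the actual safeaws source: the function names, prompt wording, and default model ID are my own assumptions.

```python
import json
import subprocess


def fetch_docs(service: str, command: str) -> str:
    """Step 2: fetch the CLI documentation via `aws <service> <cmd> help`."""
    result = subprocess.run(
        ["aws", service, command, "help"],
        capture_output=True, text=True,
    )
    return result.stdout


def build_prompt(docs: str, cli_args: list[str]) -> str:
    """Step 3: combine the docs and the pending command into one prompt."""
    return (
        "Review the following AWS CLI command for potential issues, "
        "side effects, and missing arguments.\n\n"
        f"Documentation:\n{docs}\n\n"
        f"Command:\naws {' '.join(cli_args)}"
    )


def review_with_bedrock(
    prompt: str,
    model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",
) -> str:
    """Steps 3-4: send the prompt to a Claude 3 model on Amazon Bedrock."""
    import boto3  # requires boto3 and AWS credentials with Bedrock access
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]


def confirm_and_run(cli_args: list[str]) -> None:
    """Step 6: only execute the real command after explicit confirmation."""
    if input("Do you want to execute the command? (y/N) ").strip().lower() == "y":
        subprocess.run(["aws", *cli_args])
```

The key design choice is that the wrapper never runs the real command until you answer the final prompt, so the only AWS API calls made before that are the help lookup and the Bedrock invocation.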

Why pick the Claude 3 model on Amazon Bedrock?

First of all, you are using the aws CLI anyway, so only a little extra configuration is needed to call Amazon Bedrock's GenAI models right from your CLI.

Second, some AWS CLI docs are genuinely long. For example, the documentation for the aws ec2 run-instances command is around 17k tokens. Only the latest models can accept prompts with such a long context; Claude 3 models, with their 200k-token context windows, are more than suitable.

Lastly, Claude 3 models are very cost-effective and fast, at least compared to their GPT-4 counterparts; few other models offer a comparable cost/speed ratio.

Let’s run it!

Download the Python script and put it into your favorite bin directory, for example:

#!/bin/bash
# It’s assumed your system Python3 has `boto3` installed.
sudo curl https://raw.githubusercontent.com/gabrielkoo/safeaws-cli/main/safeaws.py \
  -o /usr/local/bin/safeaws && \
sudo chmod +x /usr/local/bin/safeaws

Be sure to validate its contents before using it. It’s always a good habit to do so for security:

cat /usr/local/bin/safeaws

Now just run your AWS command like you normally do, move your cursor to the beginning of the line (Ctrl + A in most shells, or Opt + ← word by word on macOS), and prefix the aws command with safe:

# From
> aws ec2 run-instances \
  --image-id ami-12345678 \
  --instance-type g6.48xlarge
# To
> safeaws ec2 run-instances \
  --image-id ami-12345678 \
  --instance-type g6.48xlarge

# I made up the responses below; the actual output will differ
It looks like you want to launch a new EC2 instance, but please be aware of the following:

1. g6.48xlarge is a very powerful instance type. Do you really need such a large instance? It would be very costly.

2. You did not specify --security-groups in your arguments, so the default security group will be used, which might violate your organization's security policy.

Tokens Usage:  Input: xxx, Output: yyy


Do you want to execute the command? (y/N)

Customization

The CLI wrapper can be configured with a few environment variables, namely the AWS profile to use, the Claude model, the maximum number of tokens, and the temperature.

You can also customize the GenAI prompt with your own checks, by modifying the PROMPT_TEMPLATE variable in the Python script.
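For illustration, reading those settings could look like the snippet below. The variable names here are my guesses, not necessarily the ones the script uses, so check safeaws.py for the real names and defaults.

```python
import os

# Hypothetical environment variable names for illustration only -
# check safeaws.py for the ones the script actually reads.
AWS_PROFILE = os.environ.get("SAFEAWS_AWS_PROFILE", "default")
MODEL_ID = os.environ.get(
    "SAFEAWS_MODEL_ID", "anthropic.claude-3-haiku-20240307-v1:0")
MAX_TOKENS = int(os.environ.get("SAFEAWS_MAX_TOKENS", "1024"))
TEMPERATURE = float(os.environ.get("SAFEAWS_TEMPERATURE", "0.5"))
```

Exporting any of these in your shell profile would then change the wrapper's behavior for every subsequent call.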

Conclusion

Of course, you shouldn't expect the GenAI to be right 100% of the time. But in a few seconds and at a very small cost (~US$0.005 per call, assuming the CLI docs are 20k tokens long), you get an industry-leading AI reviewing your world-changing AWS CLI calls - and it could save you the trouble of recovering wrongly modified or deleted AWS resources.
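That cost estimate can be sanity-checked with a quick back-of-the-envelope calculation, assuming Claude 3 Haiku's on-demand pricing of US$0.25 per million input tokens and US$1.25 per million output tokens (prices may change; check the Bedrock pricing page):

```python
# Rough per-call cost for one safeaws check.
input_tokens = 20_000    # e.g. the `aws ec2 run-instances` docs plus the command
output_tokens = 500      # a short list of warnings from the model

price_in = 0.25 / 1_000_000   # USD per input token (assumed Haiku pricing)
price_out = 1.25 / 1_000_000  # USD per output token

cost = input_tokens * price_in + output_tokens * price_out
print(f"~US${cost:.4f} per call")  # ~US$0.0056 per call
```

In other words, even a few hundred checks per month would cost well under two dollars.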

Source

You may find the source code of safeaws on my GitHub repo:
https://github.com/gabrielkoo/safeaws-cli
