
Shobhit Patkar

Lambda function for Instance management

Objective

Set up a system where an EC2 instance starts automatically when a Lambda function is triggered. Configure it so that when a new instance is created, its service automatically comes up. Additionally, ensure there is an option to terminate the instance using a Lambda function.

If developers are working on the server and want to save its state before the instance goes away, handle this by creating an AMI prior to termination. Additionally, provide an option to create the next server from that last AMI so the developers can resume their work where they left off.

Result

We created a Lambda function with four options: one creates a fresh EC2 instance with automatic configuration, one terminates the EC2 instance directly, one creates an EC2 instance from the last AMI and then deletes that AMI, and one creates an AMI and then terminates the EC2 instance.

We created Bash scripts to create a DynamoDB table, add data to it, and fetch data from it. Additionally, we developed a Bash script to encode and decode the stored user data using Base64. The same collection of scripts is also useful for dockerizing an application and provides a simple deployment console.

Procedure

Step 1: Create a Lambda function to manage the EC2 instance

`
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Helper functions such as get_instance_id, create_instance, first_instance,
# terminate_instance, create_ami, remove_ami and cancel_spot_request are
# defined elsewhere in the deployment package.


def lambda_handler(event, context):
    """
    AWS Lambda function to perform EC2 instance actions based on user input.
    """
    input_number = event.get('input_number')
    instance_name = "Dev_from_lambda"
    region = "ap-south-1"
    ami_name = "DevServerImage-Latest"
    first_ami = "ami-053b12d3152c0cc71"
    ec2_client = boto3.client('ec2', region_name=region)

    try:
        logger.info("log - going to check instance availability")
        instance_id = get_instance_id(ec2_client, instance_name, input_number)
        logger.info(f"log - got instance id result as - {instance_id}")
    except Exception as e:
        logger.info("log - instance id fetching - exceptional condition occurred")
        return {
            'statusCode': 500,
            'body': f"Error: {str(e)}"
        }

    create_instance_response = "None"
    termination_response = "None"
    create_ami_response = "None"
    remove_ami_response = "None"
    cancel_spot_response = "None"
    first_instance_response = "None"

    if input_number == 1:
        logger.info("log - You selected to create a new instance from the saved AMI")
        create_instance_response = create_instance(ec2_client, ami_name, instance_name)
        remove_ami_response = remove_ami(ec2_client, ami_name, input_number)
    elif input_number == 2:
        logger.info("log - You selected to save an AMI and terminate the instance")
        create_ami_response = create_ami(ec2_client, instance_id, ami_name)
        termination_response = terminate_instance(ec2_client, instance_id)
        cancel_spot_response = cancel_spot_request(ec2_client)
    elif input_number == 3:
        logger.info("log - You selected to create a fresh instance only")
        first_instance_response = first_instance(ec2_client, first_ami, instance_name)
    elif input_number == 4:
        logger.info("log - You selected to terminate the instance only")
        termination_response = terminate_instance(ec2_client, instance_id)
        cancel_spot_response = cancel_spot_request(ec2_client)
    else:
        logger.info("log - input_number must be between 1 and 4")

    try:
        response_data = {
            "statusCode": 200,
            "body": {
                "create_instance_response": create_instance_response,
                "termination_response": termination_response,
                "create_ami_response": create_ami_response,
                "remove_ami_response": remove_ami_response,
                "cancel_spot_response": cancel_spot_response,
                "first_instance_response": first_instance_response
            }
        }
        return response_data
    except Exception as e:
        return {
            'statusCode': 500,
            'body': f"Error: {str(e)}"
        }

`
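
The handler relies on helper functions (get_instance_id, create_instance, first_instance, terminate_instance, create_ami, remove_ami, cancel_spot_request) that live in the same deployment package but are not reproduced in this post. As a rough illustration only, here is a minimal sketch of two of them, assuming the instance is located by its Name tag; it is not the post's actual helper code.

`
# Hypothetical sketches of two helpers referenced by the handler above.
# The real implementations may differ; these only show the boto3 calls involved.

def get_instance_id(ec2_client, instance_name, input_number):
    """Return the ID of the instance tagged with instance_name, or None if absent."""
    response = ec2_client.describe_instances(
        Filters=[
            {'Name': 'tag:Name', 'Values': [instance_name]},
            {'Name': 'instance-state-name',
             'Values': ['pending', 'running', 'stopping', 'stopped']},
        ]
    )
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            return instance['InstanceId']
    return None


def terminate_instance(ec2_client, instance_id):
    """Terminate the given instance and return the state reported by EC2."""
    response = ec2_client.terminate_instances(InstanceIds=[instance_id])
    return response['TerminatingInstances'][0]['CurrentState']['Name']
`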

Step 2: Set up the payloads for the various operations

`

Create a fresh instance (option 3)

{
  "input_number": 3
}

Terminate the instance only (option 4)

{
  "input_number": 4
}

Start from the saved AMI (option 1)

{
  "input_number": 1
}

Stop and save an AMI (option 2)

{
  "input_number": 2
}
`
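
To test these payloads outside the console, the function can be invoked programmatically. A minimal sketch with boto3, assuming the function is named instance-manager (a placeholder for your actual function name):

`
import json

import boto3

# Hypothetical function name; replace with the actual Lambda function name.
FUNCTION_NAME = "instance-manager"

lambda_client = boto3.client("lambda", region_name="ap-south-1")

# Save an AMI and terminate the instance (option 2).
response = lambda_client.invoke(
    FunctionName=FUNCTION_NAME,
    InvocationType="RequestResponse",
    Payload=json.dumps({"input_number": 2}),
)

print(json.loads(response["Payload"].read()))
`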

Step 3: Attach the permissions policy to the Lambda function's execution role


`
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "ec2:*",
        "iam:CreateServiceLinkedRole",
        "iam:PutRolePolicy",
        "iam:PassRole"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
`
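
One way to attach this document is as an inline policy on the function's execution role. A sketch with boto3, assuming a role named lambda-ec2-manager-role (a placeholder for your actual execution role):

`
import json

import boto3

iam_client = boto3.client("iam")

# The policy document shown above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "ec2:*",
                "iam:CreateServiceLinkedRole",
                "iam:PutRolePolicy",
                "iam:PassRole"
            ],
            "Resource": ["*"]
        }
    ]
}

# Hypothetical role name; use the execution role attached to your Lambda function.
iam_client.put_role_policy(
    RoleName="lambda-ec2-manager-role",
    PolicyName="ec2-instance-management",
    PolicyDocument=json.dumps(policy_document),
)
`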

Step 4: Create an EventBridge schedule to trigger the Lambda function automatically

  1. Use the cron expression 30 9 ? * MON-FRI * with a recurring schedule.
  2. Set the flexible time window to off.
  3. Switch off the retry policy; if you face any issue, check the CloudWatch logs for the Lambda function.
  4. Select your Lambda function as the target.
  5. Give it a proper IAM role.
  6. Provide the corresponding payload (a programmatic sketch follows this list).
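
The same schedule can also be created programmatically. Below is a minimal sketch using the EventBridge Scheduler API via boto3; the schedule name, Lambda function ARN, and scheduler role ARN are placeholders, not values from the original setup.

`
import json

import boto3

scheduler_client = boto3.client("scheduler", region_name="ap-south-1")

# Placeholder ARNs; substitute your actual function and scheduler role ARNs.
LAMBDA_ARN = "arn:aws:lambda:ap-south-1:123456789012:function:instance-manager"
SCHEDULER_ROLE_ARN = "arn:aws:iam::123456789012:role/scheduler-invoke-lambda-role"

scheduler_client.create_schedule(
    Name="start-dev-server-weekdays",
    ScheduleExpression="cron(30 9 ? * MON-FRI *)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": LAMBDA_ARN,
        "RoleArn": SCHEDULER_ROLE_ARN,
        # Start from the saved AMI (option 1).
        "Input": json.dumps({"input_number": 1}),
        # Retries switched off, matching the manual setup above.
        "RetryPolicy": {"MaximumRetryAttempts": 0},
    },
)
`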

Step 5: Test your Lambda function and check the logs in CloudWatch

Use CloudWatch to view the logs emitted by the Lambda function.
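
If you would rather pull the logs from a script than from the console, here is a small sketch using boto3; the function name instance-manager is a placeholder, while the /aws/lambda/<function-name> log group convention is standard for Lambda.

`
import time

import boto3

logs_client = boto3.client("logs", region_name="ap-south-1")

# Lambda log groups follow the /aws/lambda/<function-name> convention.
LOG_GROUP = "/aws/lambda/instance-manager"  # placeholder function name

# Fetch events from the last hour.
start_time = int((time.time() - 3600) * 1000)

paginator = logs_client.get_paginator("filter_log_events")
for page in paginator.paginate(logGroupName=LOG_GROUP, startTime=start_time):
    for event in page["events"]:
        print(event["message"].rstrip())
`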

Theories

Theory 1: Management scripts

  1. Management Bash commands

`
# create the DynamoDB table
aws dynamodb create-table \
  --table-name UserScripts \
  --attribute-definitions AttributeName=Name,AttributeType=S \
  --key-schema AttributeName=Name,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# location of the user data on the instance
/var/lib/cloud/instances/INSTANCE_ID/user-data.txt

# upload all the files from the current directory
for file in *; do
  bash add_data.sh "$file"
done

# encode the source Bash file
cat source.sh | base64 > source_encrypted

# put the data into the table
variable=$(cat source_file)
aws dynamodb put-item \
  --table-name UserScripts \
  --item '{
    "Name": {"S": "Shobhit"},
    "data": {"S": "'"$variable"'"}
  }'
`

  1. first_script.sh

`
#!/bin/bash

# fetching the scripts
bash fetch_data.sh add_data.sh
bash fetch_data.sh all_start.sh
bash fetch_data.sh all_stop.sh
bash fetch_data.sh generate_logs.sh
bash fetch_data.sh get_logs.sh
bash fetch_data.sh run_backend.sh
bash fetch_data.sh run_containers.sh
bash fetch_data.sh stop_containers.sh
bash fetch_data.sh stop_logging.sh
bash fetch_data.sh delete_image.sh
bash fetch_data.sh deploy.sh
bash fetch_data.sh pull_image.sh
bash fetch_data.sh show_containers.sh
bash fetch_data.sh show_images.sh
bash fetch_data.sh instructions.txt
bash fetch_data.sh backend.env
bash fetch_data.sh automatic_elastic_ip.sh
bash fetch_data.sh set_ssl_files.sh
bash fetch_data.sh start_all.service

# project directory
mkdir /application

# path for environment credentials
mkdir -p /application/Environment/backend

# path for scripts
mkdir /application/auto-scripts/

# path for building images
mkdir /application/build-scripts

# path for deployments
mkdir /application/deploy-scripts

# path for systemd files
mkdir /application/systemd-files

# setting the .env file
mv backend.env /application/Environment/backend/.env

# make the scripts executable
sudo chmod +x *

# setting up systemd
sudo ln -s /usr/local/bin/start_all.service /etc/systemd/system/
sudo systemctl daemon-reload

# pull the image
bash pull_image.sh

# start the server
systemctl start start_all.service

# setting the SSL configuration
bash set_ssl_files.sh

# wait for the instance to be running
sleep 180

# setting up the Elastic IP
bash automatic_elastic_ip.sh
`
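
first_script.sh is what lets a freshly created instance configure itself. One possible way to wire it up is to pass a small bootstrap as EC2 user data when the Lambda's create_instance or first_instance helper launches the instance; the sketch below is an assumption about that wiring, not the post's actual helper code, and the instance type and script location are placeholders.

`
import boto3

ec2_client = boto3.client("ec2", region_name="ap-south-1")

# Hypothetical bootstrap: assumes fetch_data.sh and first_script.sh are already
# available on the AMI (for example under /usr/local/bin); adjust to your setup.
user_data = """#!/bin/bash
cd /usr/local/bin
bash first_script.sh
"""

response = ec2_client.run_instances(
    ImageId="ami-053b12d3152c0cc71",   # the base AMI used by the handler
    InstanceType="t3.micro",           # placeholder instance type
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # boto3 Base64-encodes this automatically
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "Dev_from_lambda"}],
        }
    ],
)

print(response["Instances"][0]["InstanceId"])
`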

  1. add_data.sh

`
#!/bin/bash

if [ $# -eq 0 ]; then
  echo "Usage: $0 <filename>"
  exit 1
fi

filename="$1"

# Read the content of the file and Base64-encode it
file_content=$(cat "$filename" | base64)
file_content=$(echo "$file_content" | tr -d '\n')

# Construct the DynamoDB put-item command
aws dynamodb put-item \
  --table-name UserScripts \
  --item '{
    "Name": {"S": "'"${filename}"'"},
    "data": {"S": "'"$file_content"'"}
  }'
`

  1. fetch_data.sh

`
#!/bin/bash

# Function to fetch data from DynamoDB and save it to a file.
# The argument is used both as the key to fetch and as the name of the new file.
fetch_and_save_data() {
  local key_name="$1"
  local filename="${key_name}"

  # Get the item from DynamoDB
  item=$(aws dynamodb get-item \
    --table-name UserScripts \
    --key '{"Name": {"S": "'"$key_name"'"}}' \
    --output json | jq -r '.Item.data.S')

  # Check if the item exists
  if [ -z "$item" ] || [ "$item" = "null" ]; then
    echo "No data found for key: $key_name"
    return 1
  fi

  # Decode the Base64 content
  item=$(echo "$item" | base64 -d)

  # Save the data to a file
  echo "$item" > "$filename"
  echo "Data saved to: $filename"
}

# Check if at least one argument is provided
if [ $# -eq 0 ]; then
  echo "Error: Please provide a key name as an argument."
  exit 1
fi

# Get the key name from the first argument
key_name="$1"

# Call the function to fetch and save the data
fetch_and_save_data "$key_name"
`
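
The same store-and-fetch round trip can also be done without jq. A small Python sketch with boto3, assuming the UserScripts table described above already exists:

`
import base64

import boto3

dynamodb_client = boto3.client("dynamodb", region_name="ap-south-1")


def put_script(name: str, path: str) -> None:
    """Base64-encode a local file and store it in the UserScripts table."""
    with open(path, "rb") as handle:
        encoded = base64.b64encode(handle.read()).decode("ascii")
    dynamodb_client.put_item(
        TableName="UserScripts",
        Item={"Name": {"S": name}, "data": {"S": encoded}},
    )


def fetch_script(name: str, path: str) -> None:
    """Fetch an item by name, decode it, and write it back to a file."""
    response = dynamodb_client.get_item(
        TableName="UserScripts",
        Key={"Name": {"S": name}},
    )
    item = response.get("Item")
    if item is None:
        raise KeyError(f"No data found for key: {name}")
    with open(path, "wb") as handle:
        handle.write(base64.b64decode(item["data"]["S"]))


# Example usage:
# put_script("deploy.sh", "./deploy.sh")
# fetch_script("deploy.sh", "./deploy.sh")
`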

  1. start_all.service

`
[Unit]
Description=Simple systemd service to start all the services
After=network.target

[Service]
ExecStart=all_start.sh
ExecStop=all_stop.sh
Restart=on-failure
User=root
Group=root

[Install]
WantedBy=multi-user.target
`
