DAY4 - Storage Configuration

Overview

Today, I'll run a hands-on lab on AWS storage services (Amazon S3, Amazon EBS and Amazon EFS).

Hands-on

1. Environment Preparation

1. Launch two private EC2 instances.

Launch two EC2 instances (Amazon Linux 2023) in private subnets, one in each Availability Zone, and attach an IAM role that includes the AmazonSSMManagedInstanceCore policy.
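
If you prefer the CLI, a rough sketch for launching one of the instances (the AMI ID, subnet ID, security group ID, and instance profile name are placeholders; repeat with the other AZ's subnet for the second instance):

# Launch one Amazon Linux 2023 instance into a private subnet with the SSM role attached
aws ec2 run-instances \
  --image-id <AL2023_AMI_ID> \
  --instance-type t3.micro \
  --subnet-id <PRIVATE_SUBNET_A_ID> \
  --security-group-ids <EC2_SG_ID> \
  --iam-instance-profile Name=<INSTANCE_PROFILE_NAME> \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=day4-private-a}]'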

2. Create Interface Endpoints in two private subnets to allow private EC2 instances to reach Systems Manager (SSM).

Create the following three endpoints so you can access the EC2 instances using SSM Session Manager. Replace <REGION> with your region.
com.amazonaws.<REGION>.ssm
com.amazonaws.<REGION>.ec2messages
com.amazonaws.<REGION>.ssmmessages

Enable Private DNS : ON
Security group : create a new one

3. Add rules to the security group attached to the VPC endpoints.

Inbound : HTTPS (443), source = the security group attached to the private EC2 instances
Outbound : all traffic (default)
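
The same endpoints can also be created from the CLI. A minimal sketch for the ssm endpoint (the VPC, subnet, and security group IDs are placeholders; repeat with the ec2messages and ssmmessages service names):

# Interface endpoint for SSM with Private DNS enabled, spanning both private subnets
aws ec2 create-vpc-endpoint \
  --vpc-id <VPC_ID> \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.<REGION>.ssm \
  --subnet-ids <PRIVATE_SUBNET_A_ID> <PRIVATE_SUBNET_B_ID> \
  --security-group-ids <ENDPOINT_SG_ID> \
  --private-dns-enabled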

2. EBS hands-on

1. Create an EBS volume

EBS → Volumes → Create volume
Type : gp3
Size : 8-10 GiB
AZ : AZ-a

2. Attach an additional EBS volume to the EC2 instance

Actions → Attach volume → choose the private EC2 instance in AZ-a
Device name : /dev/sdf
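
For reference, roughly the same two steps from the CLI (the AZ name, volume ID, and instance ID are placeholders):

# Create an 8 GiB gp3 volume in AZ-a
aws ec2 create-volume \
  --volume-type gp3 \
  --size 8 \
  --availability-zone <REGION>a

# Attach it to the instance in the same AZ as /dev/sdf
aws ec2 attach-volume \
  --volume-id <VOLUME_ID> \
  --instance-id <INSTANCE_ID> \
  --device /dev/sdf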

3. Set up the EBS volume

Connect to the EC2 instance with SSM Session Manager and execute the following commands.

Check block devices.

lsblk

You can see the new EBS volume attached to the instance ("nvme1n1", which isn't mounted anywhere yet).

Format the volume (initialize and create a filesystem)

sudo mkfs -t xfs /dev/nvme1n1


Mount the volume (attach it to the filesystem hierarchy).
Anything written under /data is stored on the EBS volume, because the volume is mounted at /data.

sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data

Check that the volume is mounted. If you see the filesystem mounted on /data, the EBS volume was mounted successfully.

df -h

Check the UUID (a globally unique identifier assigned to the filesystem).

sudo blkid


Persistence (make the OS remount the volume automatically after a reboot).
※Device names may change after a reboot, so specify the disk by its UUID.

sudo sh -c 'echo "UUID=xxxx-xxxx  /data  xfs  defaults,nofail  0  2" >> /etc/fstab'

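To confirm the fstab entry actually works without rebooting, you can run a quick check like this (if mount -a prints an error, fix the entry before you reboot):

# Unmount, then let fstab mount everything again
sudo umount /data
sudo mount -a
df -h /data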

Writing test.

echo "hello ebs" | sudo tee /data/hello.txt
cat /data/hello.txt

4. Create snapshot

EC2 → Volumes → Create snapshot

※You can restore the volume data from a snapshot.
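
The CLI equivalent would look something like this (the volume ID is a placeholder):

# Snapshot the data volume
aws ec2 create-snapshot \
  --volume-id <VOLUME_ID> \
  --description "day4 /data backup"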

3. EFS hands-on

1. Create an EFS file system

VPC : the VPC created in the Day 1 hands-on
Mount targets : two private subnets
Security group : create a new SG for EFS with the following settings.
Inbound - NFS (2049), source = the EC2 instances' SG
Outbound - All traffic (default)
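
Roughly the same setup from the CLI (a sketch; the file system ID, subnet ID, and security group ID are placeholders, and create-mount-target is run once per private subnet):

# Create an encrypted, general-purpose EFS file system
aws efs create-file-system \
  --performance-mode generalPurpose \
  --encrypted \
  --tags Key=Name,Value=day4-efs

# Add a mount target in one private subnet (repeat for the other subnet)
aws efs create-mount-target \
  --file-system-id <FILE_SYSTEM_ID> \
  --subnet-id <PRIVATE_SUBNET_A_ID> \
  --security-groups <EFS_SG_ID>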

2. Mount EFS from the EC2 instances

EFS → Attach → execute the following commands.
※Replace <FILE_SYSTEM_ID> with your file system ID shown on the EFS console.

sudo dnf -y install amazon-efs-utils
sudo mkdir -p /efs
sudo mount -t efs <FILE_SYSTEM_ID>:/ /efs
df -h | grep efs


Repeat the same steps on both EC2 instances so each one mounts the EFS file system (EFS can be shared across AZs).
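
If you also want the EFS mount to survive reboots, the amazon-efs-utils mount helper supports an fstab entry like the sketch below (the options shown are a common choice, not the only one):

# Mount <FILE_SYSTEM_ID> at /efs on boot, over TLS, without blocking boot if the mount fails
sudo sh -c 'echo "<FILE_SYSTEM_ID>:/  /efs  efs  _netdev,nofail,tls  0  0" >> /etc/fstab'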

3. Verify that both EC2 instances can access the same EFS filesystem.

Execute the following commands via SSM on the EC2 instance in AZ-a to create a test text file in the directory where EFS is mounted.

echo "from ec2-a" | sudo tee /efs/shared.txt
cat /efs/shared.txt


On the EC2 instance in AZ-b, you can see the "shared.txt" file created by the instance in AZ-a.

cat /efs/shared.txt


4. S3 hands-on

1. Create an S3 bucket

Block Public Access : ON
Default encryption : SSE-S3

2. Upload a test file to the bucket.
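
For reference, a CLI sketch of steps 1 and 2 (the bucket name and region are placeholders; the --create-bucket-configuration line isn't needed in us-east-1):

# Create the bucket
aws s3api create-bucket \
  --bucket <BUCKET_NAME> \
  --region <REGION> \
  --create-bucket-configuration LocationConstraint=<REGION>

# Block all public access
aws s3api put-public-access-block \
  --bucket <BUCKET_NAME> \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# SSE-S3 (AES256) is already the default, but it can be set explicitly
aws s3api put-bucket-encryption \
  --bucket <BUCKET_NAME> \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Upload a test file
aws s3 cp ./test.txt s3://<BUCKET_NAME>/test.txt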

3. Attach the S3 access permission to the EC2 instance

IAM → Create policy → set the following JSON to allow the EC2 instance to access the S3 bucket created in the previous step.
※Replace <BUCKET_NAME> with your S3 bucket name.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<BUCKET_NAME>"
    },
    {
      "Sid": "ObjectRW",
      "Effect": "Allow",
      "Action": ["s3:GetObject","s3:PutObject","s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
    }
  ]
}


Attach this policy to the IAM role attached to the private EC2 instances.
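
If you'd rather do this step from the CLI, a rough equivalent (the role name and account ID are placeholders, and the JSON above is assumed to be saved as s3-policy.json):

# Create the customer managed policy from the JSON document
aws iam create-policy \
  --policy-name day4-s3-access \
  --policy-document file://s3-policy.json

# Attach it to the role used by the private EC2 instances
aws iam attach-role-policy \
  --role-name <EC2_ROLE_NAME> \
  --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/day4-s3-access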

4. Ensure that the EC2 instance can access the S3 bucket

1. Verify the instance can't reach the public Internet (because it's in a private subnet).
curl -I https://aws.amazon.com || true


2. Ensure the instance can reach S3 through the S3 Gateway endpoint.

※Replace <BUCKET_NAME> and <REGION> with your bucket name and region.

aws s3 ls s3://<BUCKET_NAME> --region <REGION>


You should see the file you uploaded to the bucket from the console.

3. S3 test operation

Create a text file → upload it to the S3 bucket → check the bucket.

echo "hello from private ec2 via vpce" > /tmp/hello.txt

aws s3 cp /tmp/hello.txt s3://$BUCKET/day4/hello.txt --region $REGION
aws s3 ls s3://$BUCKET/day4/ --region $REGION

You can see the file uploaded by the EC2 instance on the S3 console.

You can also copy the file back from the S3 bucket → check it.

aws s3 cp s3://$BUCKET/day4/hello.txt /tmp/hello-from-s3.txt --region $REGION
cat /tmp/hello-from-s3.txt

Tidying up

  • Delete the S3 objects → delete the S3 bucket

  • Delete the EFS file system (mount targets are deleted automatically)

  • Terminate the two EC2 instances

  • Delete the EBS volumes

  • Delete the EBS snapshots

※Keep the S3 Gateway endpoint, as it has no hourly cost.

For the exam

Key exam points related to today's services.

Choosing the Right Storage Service
  • S3
    Object storage. Highly durable and requires no capacity management.

  • EBS
    Block storage. Low latency and high IOPS. Tied to a single AZ and normally attached to one EC2 instance at a time (not shared across instances).

  • EFS
    File storage. Can be shared across instances and spans multiple AZs.

  • FSx for Windows File Server
    File storage. SMB, NTFS, ACLs, and AD integration (useful for Windows servers).

  • FSx for Lustre
    File storage. Integrates with S3. High-speed parallel processing.

See you soon in Day5!
