AWS Storage & Data Management

S3

Go to Storage - S3 - Create bucket (add bucket name, Region) - Next - here you can enable Versioning (for backups) and server access logging, but these services cost extra - add tags - Next - Next - Create bucket - Upload - Add files - Upload - click the uploaded file to see its details - Overview - click the link - the file cannot be opened (it is not publicly accessible) - Overview - Make public -
click the link again and now the file opens
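
For reference, a rough CLI equivalent of the same steps looks like the sketch below; the bucket name my-demo-bucket and the file name are placeholders, and this assumes public ACLs are not blocked on the bucket:

aws s3 mb s3://my-demo-bucket --region us-east-1      # create the bucket (names are globally unique)
aws s3 cp ./hello.txt s3://my-demo-bucket/hello.txt   # upload a file
aws s3api put-object-acl --bucket my-demo-bucket --key hello.txt --acl public-read   # same as "Make public"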

S3 Lifecycle Policies

MFA Delete

S3 Encryption

Create a bucket - Permissions - Bucket Policy - Policy Generator - select Type of Policy (S3 Bucket Policy) - Effect (Deny), Principal (*) - this means anyone who tries to upload a file to the S3 bucket without using server-side encryption will be denied - AWS Service (Amazon S3), Actions (PutObject), ARN (go back to the bucket to copy its ARN) - Add Conditions (Condition = StringNotEquals, Key = s3:x-amz-server-side-encryption, Value = aws:kms) - Add Condition - Add Statement - Generate Policy - copy the code and paste it into the Bucket Policy console - Save - this will show an error - append /* so the resource reads "Resource":"arn:aws:s3:::..../*" - Save - upload a file to the S3 bucket - Upload - Add files - Next - Next - Encryption (enable AWS KMS master key, then choose aws/s3) - Next - Upload
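
The generated policy should come out looking roughly like the sketch below (the bucket name my-encrypted-bucket is a placeholder); you can paste it in the console or attach it from the CLI:

# save the generated policy to a file
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-encrypted-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
EOF

# attach it to the bucket
aws s3api put-bucket-policy --bucket my-encrypted-bucket --policy file://policy.json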

EC2 Volume Types

EC2 - Launch Instance - Add Storage (add three new EBS volumes - Volume Types: Cold HDD 500 GB, Throughput Optimized HDD 500 GB, Magnetic 8 GB), disable Delete on Termination for all volumes except the root - Launch - Volumes - Actions - Modify Volume - change the type and size according to your needs
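
If you prefer the CLI, a minimal sketch of the same modification (the volume ID is a placeholder):

# grow a volume and/or switch its type, e.g. to Throughput Optimized HDD (st1)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 600 --volume-type st1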

Select the root device volume (we need to stop the EC2 instance before taking a snapshot of the root volume) - Actions - Create Snapshot - add a description - Create Snapshot - Snapshots - select the snapshot - Actions - Create Volume - by changing the volume type, size, and Availability Zone you can create a new volume
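
A minimal CLI sketch of the same snapshot-and-restore flow (all IDs are placeholders):

# snapshot the stopped instance's root volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root volume backup"

# create a new volume from the snapshot, changing type, size, or AZ as needed
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1b --volume-type gp2 --size 16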

Select the snapshot - Actions - Copy - change the Region - Copy - go to that Region - Snapshots - select the snapshot - Actions - Create Image - Create - go to Images - AMIs - Launch - launch the instance
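
The cross-Region copy can also be done from the CLI; a sketch with placeholder Regions and snapshot ID:

# copy the snapshot from us-east-1 into us-west-2
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --region us-west-2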

Now go back to the first Region where we started - Instances - Actions - Instance State - Terminate - Yes, Terminate - Volumes - the root volume will disappear while the rest remain - select the remaining volumes - Actions - Delete Volumes - Yes, Delete

Encryption & Downtime

KMS and CloudHSM

AMIs and sharing AMIs

Snowball and Snowball Edge

Storage Gateway

Athena

CloudTrail - Create trail - add a trail name, enable 'Select all S3 buckets in your account', create a new S3 bucket and add a bucket name - Create - click on the bucket to see the log files - Overview - copy the link - go to Analytics - Athena - Get Started (the Region must be the same as that of the created S3 bucket) - click '+' - create a database with 'CREATE DATABASE databasename' - Run Query - select the database just created (left side) - click '+' - add the contents below

-- Maps the CloudTrail log files the trail delivers to the S3 bucket
CREATE EXTERNAL TABLE cloudtrail_logs (
  eventversion STRING,
  useridentity STRUCT<
    type:STRING,
    principalid:STRING,
    arn:STRING,
    accountid:STRING,
    invokedby:STRING,
    accesskeyid:STRING,
    userName:STRING,
    sessioncontext:STRUCT<
      attributes:STRUCT<
        mfaauthenticated:STRING,
        creationdate:STRING>,
      sessionissuer:STRUCT<
        type:STRING,
        principalId:STRING,
        arn:STRING,
        accountId:STRING,
        userName:STRING>>>,
  eventtime STRING,
  eventsource STRING,
  eventname STRING,
  awsregion STRING,
  sourceipaddress STRING,
  useragent STRING,
  errorcode STRING,
  errormessage STRING,
  requestparameters STRING,
  responseelements STRING,
  additionaleventdata STRING,
  requestid STRING,
  eventid STRING,
  resources ARRAY<STRUCT<
    ARN:STRING,
    accountId:STRING,
    type:STRING>>,
  eventtype STRING,
  apiversion STRING,
  readonly STRING,
  recipientaccountid STRING,
  serviceeventdetails STRING,
  sharedeventid STRING,
  vpcendpointid STRING
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://mycloudtrailbucket-faye/AWSLogs/757250003982/';


Change the LOCATION on the last line to match the link you copied from the S3 bucket overview:

LOCATION 's3://mycloudtrailbucket-faye/AWSLogs/757250003982/';

Run Query - the table will now be created - click '+' - add the contents below

SELECT
 useridentity.arn,
 eventname,
 sourceipaddress,
 eventtime
FROM cloudtrail_logs
LIMIT 100;


Run Query - you will get the required data

Go to CloudTrail - View trails - click on your trail - delete the trail and the S3 buckets

Introduction to EFS

Storage - EFS - Create file system - select 3 Availability Zones - Next Step - add key/value tags, Lifecycle Policy = 7 days since last access, enable Bursting throughput, General Purpose performance, enable encryption of data at rest, select a KMS master key - Next - Create File System
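
Roughly the same file system can be created from the CLI; a sketch (the file system ID in the second command is a placeholder):

# general purpose, bursting throughput, encrypted at rest
aws efs create-file-system --performance-mode generalPurpose --throughput-mode bursting --encrypted

# move files to the IA storage class 7 days after last access
aws efs put-lifecycle-configuration --file-system-id fs-0123456789abcdef0 --lifecycle-policies TransitionToIA=AFTER_7_DAYS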

Go to EC2 - Launch Instance - Configure Instance (check that the VPC is the same one the EFS file system was created in) - Launch

Go to EFS - select the created file system - open the link 'Amazon EC2 mount instructions (from local VPC)'

Log in to the created EC2 instance and run the commands shown below

sudo yum install -y amazon-efs-utils   # install the EFS mount helper
sudo mkdir efs                         # create a mount point
sudo mount -t efs fs-1b66ebea:/ efs    # mount the file system (use your own fs id)

The mount will show a connection timeout, because nothing yet allows NFS traffic from the instance to the file system

Go to EC2 Instances - select the instance - click its security group (at the bottom of the window) - copy the Group ID - find the default security group that your EFS uses - click Inbound - Edit - Add Rule - select NFS - Source (paste the Group ID you copied earlier) - Save
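
The same rule can be added from the CLI; a sketch where both group IDs are placeholders (the first is the EFS security group, the second the instance's):

# allow NFS (TCP 2049) into the EFS security group from the instance's group
aws ec2 authorize-security-group-ingress --group-id sg-0aaaabbbbccccdddd --protocol tcp --port 2049 --source-group sg-0eeeeffff00001111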

Try the command once again

sudo mount -t efs fs-1b66ebea:/ efs
df    # the EFS mount now appears in the output

Now you can see EFS mounted

cd efs
sudo touch test.txt   # create a test file on the shared file system
ls

After that, terminate the instance: Actions - Instance State - Terminate - Yes, Terminate.
Then go to EFS - select the file system - Actions - Delete file system
