<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: himwad05</title>
    <description>The latest articles on DEV Community by himwad05 (@himwad05).</description>
    <link>https://dev.to/himwad05</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F451815%2F9a921323-b333-4865-8f93-75b92526ece3.png</url>
      <title>DEV Community: himwad05</title>
      <link>https://dev.to/himwad05</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/himwad05"/>
    <language>en</language>
    <item>
      <title>Launch EC2 instances from Slack - AWS Chatbot</title>
      <dc:creator>himwad05</dc:creator>
      <pubDate>Sun, 18 Oct 2020 14:30:37 +0000</pubDate>
      <link>https://dev.to/himwad05/launch-ec2-instance-from-slack-using-aws-chatbot-2ofn</link>
      <guid>https://dev.to/himwad05/launch-ec2-instance-from-slack-using-aws-chatbot-2ofn</guid>
      <description>&lt;p&gt;Few days back I had this thought in my mind that it will be so cool if I can do some small AWS operations from the chat window rather than having to log into any of my instance or AWS console. Thus, I came across this AWS service called &lt;a href="https://docs.aws.amazon.com/chatbot/latest/adminguide/what-is.html"&gt;AWS Chatbot&lt;/a&gt; which can integrate with Slack and you can then use AWS commands on slack to trigger the API calls on AWS. Through this post, I would like to, if you have not already known, introduce how you can use AWS Chatbot for a very simple operation to launch an EC2 instance. To achieve this, I have created a lambda function which will then be executed from Slack and it is a very simple python code just to show you how you would be launching an instance from Slack. You can make modifications/additions to this code or even write a totally different code that satisfies your use case and then run it in slack as explained below. &lt;/p&gt;

&lt;p&gt;Below are the steps to integrate Slack and use the Lambda function to trigger the launch of an EC2 instance:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A. Configure Slack as a client in AWS Chatbot&lt;/strong&gt;&lt;br&gt;
This is a pretty straightforward process and AWS Documentation to &lt;a href="https://docs.aws.amazon.com/chatbot/latest/adminguide/getting-started.html#chat-client-setup"&gt;Setup Chat Clients&lt;/a&gt; covers it completely so I will not be providing the steps explicitly here. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: &lt;br&gt;
a. In step 9, while defining IAM permissions, please ensure you add these &lt;strong&gt;Policy Templates&lt;/strong&gt;: &lt;strong&gt;Read-only command permissions, Lambda-invoke command permissions, and Notification permissions&lt;/strong&gt;; otherwise the invocation of the Lambda from Slack will not work.&lt;/p&gt;

&lt;p&gt;b. You can choose not to configure SNS as we do not require it at the moment. An SNS topic is useful if you want to send alerts to a Slack channel, for example to be notified when a new EC2 instance is launched or when an S3 bucket has activity. All of this can be done using CloudWatch Events and SNS, but that is a topic for another day.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;B. Lambda function to launch a new instance&lt;/strong&gt;&lt;br&gt;
Below is a simple Python script for the Lambda function; you can modify it according to your use case. I have purposely left 2 attributes to be entered at runtime because I wanted to show you how to pass input values while invoking the Lambda function from Slack. Create a new Lambda function with Python 2.7 as the runtime and copy the code into the &lt;strong&gt;Function Code&lt;/strong&gt; section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

#Ami = 'ami-0f96495a064477ffb' ### Commented out to use them as inputs while executing the lambda function and you can remove this line
#InstanceType = 't2.micro'     ### Commented out to use them as inputs while executing the lambda function and you can remove this line
KeyName = 'xxxxxxx'            ### SSH Key name you have created earlier or create a new SSH keypair in EC2 console and use it here
SubnetId = 'subnet-xxxxxxx'    ### Input the subnet of your VPC where you want to launch the instance
Region = 'ap-southeast-2'      ### Region where you want to launch the instance

ec2 = boto3.client('ec2', region_name=Region)

def lambda_handler(event, context):

    instance = ec2.run_instances(
        ImageId=event['Ami'],
        InstanceType=event['InstanceType'],
        KeyName=KeyName,
        SubnetId=SubnetId,
        MaxCount=1,
        MinCount=1
    )

    instance_id = instance['Instances'][0]['InstanceId']
    print(instance_id)
    return instance_id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
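&lt;p&gt;As a small optional addition of my own (the helper name and fixed key list below are assumptions, not part of the original function), you could validate the Slack payload before calling run_instances, so a typo in a key name fails fast with a clear message instead of a KeyError:&lt;/p&gt;

```python
# Hypothetical helper to check the Slack invoke payload before it reaches
# ec2.run_instances. The handler above reads event['Ami'] and
# event['InstanceType'], so those exact key names are required.
REQUIRED_KEYS = ('Ami', 'InstanceType')

def validate_event(event):
    """Return the list of required keys missing from the payload."""
    return [key for key in REQUIRED_KEYS if key not in event]

# Example: a payload missing InstanceType is caught up front.
missing = validate_event({'Ami': 'ami-0f96495a064477ffb'})
print(missing)  # ['InstanceType']
```

&lt;p&gt;Calling this at the top of lambda_handler and returning the missing keys makes the error visible directly in the Slack response.&lt;/p&gt;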



&lt;p&gt;&lt;strong&gt;C. Add AWS Chatbot on Slack&lt;/strong&gt;&lt;br&gt;
To run any commands on Slack, you will have to add AWS Chatbot app in your channel and you can follow the steps in Slack documentation to &lt;a href="https://slack.com/intl/en-au/help/articles/202035138-Add-an-app-to-your-workspace#add-an-app"&gt;add an app&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;D. Invoke Lambda through Slack&lt;/strong&gt;&lt;br&gt;
The last step is to invoke the Lambda from Slack, using the command shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@aws  invoke StartEC2Instance --region ap-southeast-2 --payload {"AMI": "ami-0f96495a064477ffb", "INSTANCE_TYPE": "t2.micro"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command will first ask you to confirm whether AWS Chatbot can run it, so you will have to choose either &lt;strong&gt;yes or no&lt;/strong&gt;. AWS Chatbot will then run the command and show output similar to the below, which means the instance was launched successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ExecutedVersion: $LATEST
Payload: \"i-xxxxxxxxx\"
StatusCode: 200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;An important point to note is that the required input values are passed to the Lambda function using the --payload attribute, which accepts key-value pairs in JSON format. If you have to input multiple parameters, just use a comma as the delimiter to separate the attributes, as shown in the command above.&lt;/p&gt;
&lt;/blockquote&gt;
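&lt;p&gt;To make the payload format concrete, here is a small sketch (my own illustration) that builds the JSON object in Python, which also guarantees the key names match what the handler reads from the event:&lt;/p&gt;

```python
import json

# The --payload value is a single JSON object; multiple parameters are just
# comma-separated key/value pairs inside that object. The keys must match
# exactly what the Lambda handler reads (event['Ami'], event['InstanceType']).
payload = {'Ami': 'ami-0f96495a064477ffb', 'InstanceType': 't2.micro'}
payload_json = json.dumps(payload)

# The full Slack message then looks like this (StartEC2Instance being the
# function name used earlier):
command = '@aws invoke StartEC2Instance --region ap-southeast-2 --payload ' + payload_json
print(command)
```

&lt;p&gt;Serializing with json.dumps rather than hand-typing the braces avoids quoting mistakes when more parameters are added.&lt;/p&gt;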

&lt;p&gt;I hope the above paves the way for you to enter the world of ChatOps and introduces you to a whole new way of automation, especially when bots are so much in demand nowadays.&lt;/p&gt;

&lt;p&gt;If you have any feedback on this article, please do let me know and I will be happy to incorporate it after review.&lt;/p&gt;

</description>
      <category>launchec2fromslack</category>
      <category>awschatbot</category>
      <category>slackandawschatbot</category>
    </item>
    <item>
      <title>Onboard AWS EKS Cluster on Lens(Kubernetes IDE)</title>
      <dc:creator>himwad05</dc:creator>
      <pubDate>Wed, 23 Sep 2020 15:43:29 +0000</pubDate>
      <link>https://dev.to/himwad05/onboard-aws-eks-cluster-on-lens-kubernetes-ide-492l</link>
      <guid>https://dev.to/himwad05/onboard-aws-eks-cluster-on-lens-kubernetes-ide-492l</guid>
      <description>&lt;p&gt;Today, while working on a personal kubernetes project, I came across &lt;a href="https://k8slens.dev/"&gt;Lens - The Kubernetes IDE&lt;/a&gt; and was impressed by a couple of its features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built-in Prometheus monitoring with RBAC maintained for each user, so users see visualizations only for the resources they are permitted to access&lt;/li&gt;
&lt;li&gt;Built-in terminal which ensures the kubectl version matches the Kube API server version.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I felt that daily administration of and interaction with an EKS cluster could really be simplified with these 2 features, so I decided to onboard one of my AWS EKS clusters, but I was not able to find any documentation for Lens with AWS EKS. Lens only requires a kubeconfig - whether you paste it or upload it, it will connect to your cluster, authenticate with it, and load all the objects into Lens. Therefore, I decided to document the steps to make it easier for Lens users.&lt;/p&gt;

&lt;p&gt;For AWS EKS, Lens can be treated as just another client that requires kubectl access. You will need to download the kubeconfig file and save it in the ~/.kube folder so Lens can read it, contact the Kube API server, and obtain access to the EKS cluster through aws-auth. The process is well documented by AWS in the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html"&gt;Cluster Authentication&lt;/a&gt; section, and the steps work fine on both Windows and Linux. Even though I have only tried Lens on Windows, I have authenticated kubectl clients running on Linux servers numerous times, so I can say confidently that it should work.&lt;/p&gt;

&lt;p&gt;I will describe the steps below, even though they are documented elsewhere, so that you do not have to move between different documentation pages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Install aws-iam-authenticator on Windows using chocolatey&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;a href="https://chocolatey.org/install"&gt;Install Chocolatey&lt;/a&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Install command:
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

If the above command produces no errors, run the command below to print the Chocolatey version, which confirms it is correctly installed:
choco
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install aws-iam-authenticator using chocolatey
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Open a PowerShell terminal window and install the aws-iam-authenticator package with the following command:
choco install -y aws-iam-authenticator

Test that the aws-iam-authenticator works:
aws-iam-authenticator help
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Ensure AWS CLI is installed&lt;/strong&gt;&lt;br&gt;
If not, then browse through this &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html#cliv2-windows-install"&gt;documentation&lt;/a&gt;. Once it is installed, please add the installation directory to your PATH environment variable using this &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html#awscli-install-windows-path"&gt;link&lt;/a&gt; as Lens will throw an error otherwise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Configure the AWS CLI with the desired role or user&lt;/strong&gt;&lt;br&gt;
Use &lt;strong&gt;aws configure&lt;/strong&gt; command as shown in this &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods"&gt;documentation&lt;/a&gt;. Please ensure that the user or role has the permissions to use the eks:DescribeCluster API action otherwise you will not be able to update the kubeconfig file using AWS CLI in the next step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Create Kubeconfig file for AWS EKS&lt;/strong&gt; &lt;br&gt;
The steps are taken from the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html#create-kubeconfig-automatically"&gt;official AWS Documentation&lt;/a&gt; and I have tested them successfully.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm that you are using the correct role or user:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Generate the kubeconfig file automatically
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks --region region-code update-kubeconfig --name cluster_name

Note: replace the following with your desired values:
     region-code = Region where EKS cluster is located such as ap-southeast-1
     cluster_name = Name of the cluster in that region
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The kubeconfig should be located at C:\Users\&amp;lt;YOUR-WIN-USER&amp;gt;\.kube\config. Please replace &amp;lt;YOUR-WIN-USER&amp;gt; with the currently logged-in user to get to the .kube folder.&lt;/p&gt;
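&lt;p&gt;If you prefer not to type the path by hand, a tiny Python sketch (my own addition, not part of the AWS steps) resolves the default kubeconfig location portably:&lt;/p&gt;

```python
import os

# os.path.expanduser('~') resolves to C:\Users\YOUR-WIN-USER on Windows and
# to the home directory on Linux/macOS, so the same line works on both
# platforms that Lens supports.
kubeconfig = os.path.join(os.path.expanduser('~'), '.kube', 'config')
print(kubeconfig)
```

&lt;p&gt;This is the same file that the aws eks update-kubeconfig command writes, and the one you will point Lens at in the next step.&lt;/p&gt;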

&lt;p&gt;&lt;strong&gt;5. Upload the kubeconfig file in Lens&lt;/strong&gt;&lt;br&gt;
Click on + button on the top left corner which will give you an option to upload kubeconfig or paste it manually. Once you have selected the kubeconfig file, it will ask you to select the context, select the required context and then click on button at the bottom "Add cluster(s)" which will then start the authentication and add the objects into lens for your consumption.&lt;/p&gt;

&lt;p&gt;The above steps should let you onboard your EKS cluster into Lens, but please note that the steps will differ if you are not using AWS EKS. I hope this helps everyone using AWS EKS.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Cost Optimization #2-Techniques </title>
      <dc:creator>himwad05</dc:creator>
      <pubDate>Tue, 22 Sep 2020 15:00:08 +0000</pubDate>
      <link>https://dev.to/himwad05/aws-cost-optimization-2-techniques-5dk1</link>
      <guid>https://dev.to/himwad05/aws-cost-optimization-2-techniques-5dk1</guid>
      <description>&lt;p&gt;Cost Optimization is an activity which is aimed at driving down the business spending and the cost. I recently published an article &lt;a href="https://dev.to/himwad05/cost-optimization-1-aws-ebs-volume-type-io1-or-io2-1669"&gt;Cost Optimization #1: AWS EBS Volume type IO1 or IO2?&lt;/a&gt; where I mentioned that I will be talking about more such ways to optimize your AWS cost. Through this article, I will share different tips and techniques which can be adopted to minimize operation costs for compute resources on AWS. &lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Optimization Techniques
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Stop/Scale down the resources that are not in use&lt;/strong&gt;&lt;br&gt;
The biggest benefit of cloud computing is on-demand pricing. Therefore, the most effective way to reduce your compute costs is to turn off resources that are not in use and stop their billing. This requires you to prepare a schedule and then automate the stop and start of instances accordingly. Below are some of the ways to achieve this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A. AWS Instance Scheduler&lt;/strong&gt;&lt;br&gt;
This is by far one of the easiest ways to schedule the stop and start of EC2 and RDS instances. AWS built this solution on top of CloudWatch Events, Lambda and DynamoDB, and provides a CloudFormation template to simplify the deployment. Here are a couple of documentation links to help you get started - &lt;a href="https://docs.aws.amazon.com/solutions/latest/instance-scheduler/deployment.html"&gt;Automated Deployment&lt;/a&gt; and an &lt;a href="https://docs.aws.amazon.com/solutions/latest/instance-scheduler/deployment.html"&gt;example&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The only effort required from the user is to implement a schedule in DynamoDB and tag the instances you would like to stop and start on a schedule. Even if you are not familiar with DynamoDB, the documentation makes it easy, and here is a &lt;a href="https://docs.aws.amazon.com/solutions/latest/instance-scheduler/appendix-e.html"&gt;sample schedule&lt;/a&gt; which can be your reference point for forming your own custom schedule.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important points to remember&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Do not use the solution for instances in an Auto Scaling group: the moment such instances are stopped, the Auto Scaling group will terminate them and launch replacement instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cost of running the solution - Below content is picked up from &lt;a href="https://docs.aws.amazon.com/solutions/latest/instance-scheduler/overview.html#cost"&gt;Instance Scheduler Documentation&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As of the date of publication, the cost for running this solution with default settings in the US East (N. Virginia) Region is approximately $5.00 per month in AWS Lambda charges, or less if you have Lambda free tier monthly usage credit. This is independent of the number of Amazon EC2 instances you are running. The optional custom Amazon CloudWatch metric, if enabled, will cost an additional $0.90 per month per schedule or scheduled service.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;B. Lambda to stop and start EC2 instances&lt;/strong&gt;&lt;br&gt;
Another way to perform the scheduled stop and start of EC2 instances is to use AWS Lambda and CloudWatch Events. The solution is explained in this &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/"&gt;article along with the implementation&lt;/a&gt;. At a high level, you define the instances in the Lambda function and then use a CloudWatch Events rule with a cron expression to trigger it. You can define the instances in the Lambda function using a comma as the separator, as shown below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;instances = ['i-12345cb6de4f78g9h', 'i-08ce9b2d7eccf6d26']&lt;/p&gt;
&lt;/blockquote&gt;
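&lt;p&gt;To sketch the idea in code (the working hours and helper below are my own illustration, not from the linked article), the Lambda only needs a simple decision about which action applies at a given hour; two CloudWatch Events cron rules would then invoke it at the boundary hours:&lt;/p&gt;

```python
# Hypothetical schedule: start instances for business hours (UTC) and stop
# them outside that window. The instance IDs are the same example values
# shown above.
instances = ['i-12345cb6de4f78g9h', 'i-08ce9b2d7eccf6d26']

def desired_action(hour_utc, start_hour=8, stop_hour=18):
    """Return 'start' during the working window and 'stop' outside it."""
    if hour_utc in range(start_hour, stop_hour):
        return 'start'
    return 'stop'

print(desired_action(9))   # start
print(desired_action(22))  # stop
```

&lt;p&gt;Inside the real handler you would pass the instances list to ec2.start_instances or ec2.stop_instances depending on the returned action.&lt;/p&gt;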

&lt;p&gt;&lt;strong&gt;Important points to remember&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It cannot be used for instances placed in an Auto Scaling group, for the reasons explained above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost of running the solution - You pay for executing the Lambda function, as documented &lt;a href="https://aws.amazon.com/lambda/pricing/"&gt;here&lt;/a&gt;. Please ensure you select the correct region when checking the prices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;C. Schedule scaling for EC2 instances under autoscaling group&lt;/strong&gt;&lt;br&gt;
This feature of EC2 Auto Scaling groups can be used to achieve the same effect as stopping the instances, with a slight difference: here the instances are terminated by reducing the desired count of your Auto Scaling group at the schedule's start time. Once the schedule ends, the desired count returns to its original value, launching new instances to fulfil the desired capacity. To understand more about the solution, refer to this &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html"&gt;documentation&lt;/a&gt;. The steps are documented nicely; the only points to note are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You will have to create a scheduled action for each Auto Scaling group&lt;/li&gt;
&lt;li&gt;If you attempt to schedule an activity at a time when another scaling activity is already scheduled, the call is rejected with an error message noting the conflict. &lt;/li&gt;
&lt;li&gt;No additional costs are associated with this feature.&lt;/li&gt;
&lt;/ul&gt;
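&lt;p&gt;As a rough sketch of what one such scheduled action looks like through the API (the group and action names below are placeholders of my own, not from the documentation), these are the parameters you would pass to the Auto Scaling put_scheduled_update_group_action call:&lt;/p&gt;

```python
# One scheduled action per Auto Scaling group; Recurrence is a cron
# expression evaluated in UTC.
scale_down = dict(
    AutoScalingGroupName='my-asg',            # placeholder ASG name
    ScheduledActionName='nightly-scale-down',
    Recurrence='0 18 * * *',                  # every day at 18:00 UTC
    MinSize=0,
    DesiredCapacity=0,
)
scale_up = dict(scale_down, ScheduledActionName='morning-scale-up',
                Recurrence='0 8 * * *', MinSize=2, DesiredCapacity=2)

# With a boto3 autoscaling client you would then call, for each action:
# boto3.client('autoscaling').put_scheduled_update_group_action(**scale_down)
print(scale_up['Recurrence'])  # 0 8 * * *
```

&lt;p&gt;Scaling the desired capacity to 0 overnight and back up in the morning gives the same cost effect as stopping the instances.&lt;/p&gt;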

&lt;p&gt;&lt;strong&gt;D. Scale the capacity down using scaling policies&lt;/strong&gt;&lt;br&gt;
It is a similar approach to the above, but the difference is that you do not have to schedule the termination of instances. Instead, you use dynamic scaling to scale your EC2 instances up and down based on their utilization. For instance, you can scale up when the CPU utilization of the instances is above 70% and scale down when it reaches 20%. The advantage of this approach is that you can scale down at any time the system is not being used, which helps reduce your compute costs. Dynamic scaling is very nicely explained &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html"&gt;here&lt;/a&gt; along with the 3 scaling policies which can be used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html"&gt;Target Tracking Policy&lt;/a&gt; - Here you can configure a standard cloudwatch metric for utilization or you can also create a custom metric like memory utilization for scaling up and down according to your preferences.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html"&gt;Step and Simple Scaling Policies&lt;/a&gt; - These policies require you to define low and high thresholds as well as you would like to increase or decrease instances. Read the documentation to learn more about it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Free AWS services for Cost Optimization&lt;/strong&gt;&lt;br&gt;
AWS focuses heavily on the success of its customers, which is why it has built services that not only optimize costs but also analyze whether your infrastructure is optimized, secure, and built according to best practices. Some of the services I am aware of are listed below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A. AWS Compute Optimizer&lt;/strong&gt;&lt;br&gt;
AWS Compute Optimizer is a free service and you can enroll in an individual account or all of the member accounts in your organization. It reports whether your EC2 and Autoscaling groups are optimal, and generates optimization recommendations to reduce the cost and improve the performance of your workloads by analyzing the historical utilization metrics. You can use the recommendations to switch the current instance type running in your account to the recommended instance type to realize the performance and cost benefit. To get started with this service, please follow this &lt;a href="https://docs.aws.amazon.com/compute-optimizer/latest/ug/getting-started.html"&gt;documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This service can really prove beneficial as it saves a lot of the manual time needed to investigate past performance and determine whether an instance is over-provisioned or under-provisioned, an effort that grows exponentially when there are hundreds or thousands of EC2 instances and Auto Scaling groups in one or more accounts. Therefore, just by using the service (even without implementing the recommendations), you will still save days of performance-measurement effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;B. Trusted Advisor&lt;/strong&gt;&lt;br&gt;
AWS Trusted Advisor is an online tool which performs fixed checks to optimize your AWS infrastructure, increase security and performance, reduce your overall costs, and monitor service limits and you do not require any support plan to view the results of these checks. You will get additional checks besides the above ones under Trusted advisor if you have (or will) subscribe to any of the support plan. The biggest benefit of this service is that it checks various resources like RDS, Redshift, Elasticache, Route 53, ElasticSearch and so on for cost optimization. To get started with this service, just follow this &lt;a href="https://aws.amazon.com/premiumsupport/technology/trusted-advisor/"&gt;page&lt;/a&gt; or click on the Services button at the top of your AWS Console and search for Trusted Advisor to open it up.&lt;/p&gt;

&lt;p&gt;I will stop here for the moment so as not to make this article too long. I have other tips and techniques which will be presented in future articles. To give you a preview of what is to come in future articles, refer to the below list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tags to identify cost&lt;/li&gt;
&lt;li&gt;Tips to reduce EC2 costs &lt;/li&gt;
&lt;li&gt;AWS Budgets&lt;/li&gt;
&lt;li&gt;Optimize EBS snapshots retention&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  If you have any comments on the above methods or would like to know more, please do put your comments/feedback as I am doing this series to help people out there achieve real savings, especially during the pandemic.
&lt;/h6&gt;

</description>
      <category>awscostoptimization</category>
      <category>awscostsavings</category>
    </item>
    <item>
      <title>AWS IAM: How to achieve Logical OR effect with multiple IAM condition operators?  </title>
      <dc:creator>himwad05</dc:creator>
      <pubDate>Mon, 31 Aug 2020 16:54:34 +0000</pubDate>
      <link>https://dev.to/himwad05/aws-iam-how-to-achieve-logical-or-effect-with-multiple-iam-condition-operators-2h0p</link>
      <guid>https://dev.to/himwad05/aws-iam-how-to-achieve-logical-or-effect-with-multiple-iam-condition-operators-2h0p</guid>
      <description>&lt;p&gt;In AWS IAM (Identity and Access Management) world, it is well known fact that the evaluation logic for :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;condition operators with multiple keys, or multiple condition operators, is always a logical AND operation;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;conditions with a single key and multiple values is a logical OR operation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The example below gives context for the above 2 statements. You can see that the 2 values for the global condition key &lt;strong&gt;aws:SourceIp&lt;/strong&gt; are evaluated using OR, while the 3 separate condition operators &lt;strong&gt;(DateGreaterThan, DateLessThan, IpAddress)&lt;/strong&gt; are evaluated using AND. This effectively means that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The IpAddress condition operator will be true only if the SourceIp of the request belongs to either the 192.0.2.0/24 or 203.0.113.0/24 subnet, which is an OR operation.&lt;/li&gt;
&lt;li&gt;The overall condition block will be true only when all 3 condition operators are true, which is the same as an AND operation.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Condition" :  {
      "DateGreaterThan" : {
         "aws:CurrentTime" : "2019-07-16T12:00:00Z"
       },
      "DateLessThan": {
         "aws:CurrentTime" : "2019-07-16T15:00:00Z"
       },
       "IpAddress" : {
          "aws:SourceIp" : ["192.0.2.0/24", "203.0.113.0/24"]
      }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Problem Statement&lt;/strong&gt;&lt;br&gt;
With the above in mind, what if you have a requirement for condition operators that should be evaluated in a logical OR manner? For example: launch an EC2 instance if it has a particular tag Env:Dev, or if the source IP is 192.0.2.0/24. One way to achieve this is to duplicate your IAM statement block and put the 2 condition operators in separate blocks, but this is a tedious and complex method which makes the IAM policy messy, and you can come very close to the IAM managed policy limit of 6,144 characters (excluding whitespace) when multiple condition operators involve multiple actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
I will begin with the very basic concept of truth tables for the logical AND and logical OR operations, and then move on to the solution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logical AND:
=============
Input1   Input2      Output

True      True        True

False     False       False

True      False       False

False     True        False

Logical OR:
============
Input1   Input2      Output

True      True        True

False     False       False

True      False       True

False     True        True
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see above, the logical AND operation yields a true output only when both inputs are true, while logical OR yields a true output as long as at least one input is true. IAM policies are evaluated in the same manner: the Effect mentioned in an IAM statement block is applied only if the condition is true. Now, to perform a logical OR in the condition block, we use the following method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Not [(Not(Condition 1) AND Not(Condition 2)]

------------------------------------------
Let us try to solve the above block in truth table form with values where:
Condition 1 = Input 1
Condition 2 = Input 2

* When Input 1 = True and Input 2 = True:
Not [(Not(True) AND Not(True)] =&amp;gt; Not [False AND False] 
=&amp;gt; Not [False] =&amp;gt; True

* When Input 1 = True and Input 2 = False:
Not [(Not(True) AND Not(False)] =&amp;gt; Not [False AND True] 
=&amp;gt; Not [False] =&amp;gt; True

* When Input 1 = False and Input 2 = True:
Not [(Not(False) AND Not(True)] =&amp;gt; Not [True AND False] 
=&amp;gt; Not [False] =&amp;gt; True

* When Input 1 = False and Input 2 = False:
Not [(Not(False) AND Not(False)] =&amp;gt; Not [True AND True] 
=&amp;gt; Not [True] =&amp;gt; False

If we put the values in the table, then the table will look like this which matches that of Logical OR table shown above:


Input1   Input2      Output
======   ======      ======
True      True        True

False     False       False

True      False       True

False     True        True

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
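&lt;p&gt;You can also verify the equivalence mechanically; the short Python check below (my own addition) runs the identity against all 4 input combinations and compares it with a plain OR:&lt;/p&gt;

```python
import itertools

# Exhaustively check that Not[Not(C1) AND Not(C2)] agrees with C1 OR C2
# for every combination of inputs (De Morgan's law).
rows = []
for c1, c2 in itertools.product([True, False], repeat=2):
    rows.append((c1, c2, not (not c1 and not c2)))

for c1, c2, out in rows:
    assert out == (c1 or c2)
print(rows)
```

&lt;p&gt;The printed rows reproduce the logical OR truth table shown above.&lt;/p&gt;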



&lt;p&gt;Up until now it was all fundamentals, but how do you actually apply the above logic in a condition block? I will explain below with an example using Effect:Deny and "Not" condition operators such as StringNotEquals, taking the same example from the problem statement: launch an EC2 instance if it has the tag Env:Dev or if the source IP is 192.0.2.0/24. The following IAM policy statement achieves the desired logical OR effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowToDescribeAll",
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "192.0.2.0/24"
                    ]
                },
                "StringNotLike": {
                    "aws:RequestTag/Env": [
                        "Dev"
                    ]
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you are curious why the above policy works as a logical OR, let me show you its relation to the method we derived earlier:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not [Not(Condition 1) AND Not(Condition 2)]&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The above IAM policy can also be written as (for understanding the concept only):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deny [NotIpAddress(aws:SourceIp) AND StringNotLike(aws:RequestTag/Env)]&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* When awsSourceIP is not 192.0.2.0/24 (meaning it is false) and aws:RequestTag/Env is not Dev (meaning it is false) and the respective condition operators will become True and Effect will be denied:
Deny [(NotIpAddress (False) AND StringNotLike (False)] ==&amp;gt;
Deny [True AND True] ==&amp;gt; Deny [True] =&amp;gt; False

* When awsSourceIP is 192.0.2.0/24 (meaning it is true) and aws:RequestTag/Env is not Dev (meaning it is false) then this statement policy block will not apply. This will mean Effect:Allow will take effect and thereby allowing ec2:RunInstances action:
Deny [(NotIpAddress (True) AND StringNotLike (False)] ==&amp;gt;
Deny [False AND True] ==&amp;gt; Deny [False] =&amp;gt; True

* When awsSourceIP is not 192.0.2.0/24 (meaning it is false) and aws:RequestTag/Env is Dev (meaning it is true) then this statement policy block will not apply. This will mean Effect:Allow will take effect and thereby allowing ec2:RunInstances action:
Deny [(NotIpAddress (False) AND StringNotLike (True)] ==&amp;gt;
Deny [True AND False] ==&amp;gt; Deny [False] =&amp;gt; True

* When awsSourceIP is 192.0.2.0/24 (meaning it is true) and aws:RequestTag/Env is Dev (meaning it is true) then this statement policy block will not apply. This will mean Effect:Allow will take effect and thereby allowing ec2:RunInstances action:
Deny [(NotIpAddress (True) AND StringNotLike (True)] ==&amp;gt;
Deny [False AND False] ==&amp;gt; Deny [False] =&amp;gt; True

If we put the values in the table, then the table will look like this which matches that of Logical OR table shown above:


Input1   Input2      Output
======   ======      ======
True      True        True

False     False       False

True      False       True

False     True        True
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
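&lt;p&gt;The four cases above can also be sketched in a few lines of Python. This is only an illustration of the boolean logic, not AWS's actual policy-evaluation engine; the function name and inputs are hypothetical stand-ins for the two positive conditions:&lt;/p&gt;

```python
# Illustrative sketch only: models the Allow + Deny-with-Not-conditions
# pattern from the policy above, not the real AWS policy engine.

def run_instances_allowed(ip_matches: bool, has_dev_tag: bool) -> bool:
    """ip_matches / has_dev_tag are the positive conditions:
    aws:SourceIp is in 192.0.2.0/24, aws:RequestTag/Env equals Dev."""
    # NotIpAddress and StringNotLike negate the positive conditions;
    # the Deny statement fires only when BOTH negations are true.
    deny_applies = (not ip_matches) and (not has_dev_tag)
    # Effect:Allow grants ec2:RunInstances unless the Deny applies,
    # which is exactly the Logical OR of the two conditions.
    return not deny_applies

for ip in (True, False):
    for tag in (True, False):
        print(ip, tag, run_instances_allowed(ip, tag))
```

&lt;p&gt;The printed rows reproduce the Logical OR truth table derived above.&lt;/p&gt;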



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
IAM policies can be complicated. Until now, the only way to achieve the OR effect was to duplicate statement blocks that differed only in their condition statements; with this approach you can eliminate those duplicates.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Cost Optimization #1 : AWS EBS Volume type - IO1 or IO2?</title>
      <dc:creator>himwad05</dc:creator>
      <pubDate>Thu, 27 Aug 2020 12:49:22 +0000</pubDate>
      <link>https://dev.to/himwad05/cost-optimization-1-aws-ebs-volume-type-io1-or-io2-1669</link>
      <guid>https://dev.to/himwad05/cost-optimization-1-aws-ebs-volume-type-io1-or-io2-1669</guid>
      <description>&lt;p&gt;Benjamin Franklin once said - "Beware of little expenses. A small leak will sink a great ship". Usually storage costs are not high as compared to running an instance on cloud but when you require higher IOPS and throughput, storage can definitely bump up your bill significantly. Until today, AWS EBS (Elastic Block Storage) had io1 volume type which delivers higher performance (IOPS and throughput) but it comes at a cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Problem Statement&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let me show you an example that compares the monthly cost of two EBS volume types (GP2 and IO1) to illustrate the above statement. If you would like to know more about the GP2 and IO1 volume types, you can refer to the &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html"&gt;EBS Volume Types&lt;/a&gt; documentation: &lt;/p&gt;

&lt;p&gt;Assumptions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Region is N. Virginia (price depends on the region)&lt;/li&gt;
&lt;li&gt;The prices below are calculated for 1 month of usage and are taken from the &lt;a href="https://calculator.aws/#/"&gt;AWS Calculator&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Both the IO1 and GP2 volumes are sized at 100 GB, but for IO1 we will additionally provision 1,000 IOPS, as most people use IO1 to deliver higher IOPS at a smaller size.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cost for GP2:
==============
100 GB x 0.10 USD = 10.00 USD      (EBS Storage Cost)

Total cost (monthly): 10.00 USD

Cost for IO1:
==============
100 GB x 0.125 USD = 12.50 USD     (EBS Storage Cost)
1,000 iops x 0.065 USD = 65.00 USD (EBS IOPS Cost)
12.50 USD + 65.00 USD = 77.50 USD  (Total EBS storage cost)

Total cost (monthly): 77.50 USD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see from the above comparison that there is a huge price difference between the two volumes, and as the number of IO1 volumes in your environment grows, the operating cost of your infrastructure can increase significantly.&lt;/p&gt;
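&lt;p&gt;The numbers above are easy to reproduce. Here is a small Python sketch of the same calculation, using the N. Virginia prices quoted in this post (prices change over time, so treat the constants as assumptions and check the AWS Calculator for current values):&lt;/p&gt;

```python
# Monthly EBS cost sketch; the price constants are the N. Virginia
# prices quoted in this post and may differ from current pricing.
GP2_GB_MONTH = 0.10     # USD per GB-month (gp2)
IO1_GB_MONTH = 0.125    # USD per GB-month (io1)
IO1_IOPS_MONTH = 0.065  # USD per provisioned IOPS-month (io1)

def monthly_cost_gp2(size_gb):
    """Storage cost only; gp2 IOPS scale with volume size at no extra charge."""
    return round(size_gb * GP2_GB_MONTH, 2)

def monthly_cost_io1(size_gb, provisioned_iops):
    """Storage cost plus the per-IOPS charge."""
    return round(size_gb * IO1_GB_MONTH + provisioned_iops * IO1_IOPS_MONTH, 2)

print(monthly_cost_gp2(100))        # 10.0 USD
print(monthly_cost_io1(100, 1000))  # 77.5 USD
```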

&lt;h3&gt;
  
  
  &lt;strong&gt;Solution&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Recently AWS announced the launch of EBS IO2 volumes; using IO2 volumes instead of IO1 volumes can optimize your costs while you benefit from improved performance and reliability at the same time. You can read more about it in the &lt;a href="https://aws.amazon.com/blogs/aws/new-ebs-volume-type-io2-more-iops-gib-higher-durability/"&gt;blog post for the IO2 volume type&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Now the question that comes to mind is: how does the IO2 volume type translate to cost savings? I have explained it below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Better performance with a small volume size:&lt;/strong&gt; The biggest benefit of IO2 volumes is performance even at a smaller volume size. Let me explain - IO2 volumes can deliver a maximum of 500 IOPS per GB, whereas IO1 volumes can deliver a maximum of 50 IOPS per GB. Therefore, you can now create smaller volumes with higher IOPS, reducing the per GB-month cost of provisioned storage. For example, in the past, to achieve 10,000 IOPS you would create a 200 GB IO1 volume even though your actual data size might be only 75 GB - you had to over-provision by more than 100% just to meet the 50 IOPS/GB ratio. Now you can provision just 100 GB and still get the required 10,000 IOPS, cutting your storage cost component by 100 GB.&lt;/p&gt;
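&lt;p&gt;To make the over-provisioning point concrete, here is a small sketch that computes the minimum volume size for a target IOPS figure from the 50 and 500 IOPS/GB ratios above. It is a simplified model that ignores the absolute per-volume size and IOPS limits:&lt;/p&gt;

```python
import math

# Maximum IOPS-per-GB ratios quoted above; absolute per-volume
# caps (e.g. the 64,000 IOPS io1 ceiling) are ignored in this sketch.
MAX_IOPS_PER_GB = {"io1": 50, "io2": 500}

def min_volume_size_gb(volume_type, target_iops, data_size_gb):
    """Smallest volume (in GB) that both holds the data and can be
    provisioned with target_iops under the per-GB IOPS ratio."""
    size_for_iops = math.ceil(target_iops / MAX_IOPS_PER_GB[volume_type])
    return max(size_for_iops, data_size_gb)

# The example from the text: 10,000 IOPS with only 75 GB of data.
print(min_volume_size_gb("io1", 10_000, 75))  # 200, over-provisioned for IOPS
print(min_volume_size_gb("io2", 10_000, 75))  # 75, the post provisions 100 GB for headroom
```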

&lt;p&gt;&lt;strong&gt;2. Same price for IO1 and IO2:&lt;/strong&gt; Another factor that makes IO2 volumes attractive from a cost perspective is that AWS has kept the pricing the same for the IO1 and IO2 volume types, as can be seen on the &lt;a href="https://aws.amazon.com/ebs/pricing/"&gt;EBS Pricing Page&lt;/a&gt;. Rather than asking the avid reader to go there, I will share the pricing for the N. Virginia region below for the two volume types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Provisioned IOPS SSD (io2) Volumes    &lt;strong&gt;$0.125&lt;/strong&gt; per GB-month of provisioned storage AND &lt;strong&gt;$0.065&lt;/strong&gt; per provisioned IOPS-month&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provisioned IOPS SSD (io1) Volumes    &lt;strong&gt;$0.125&lt;/strong&gt; per GB-month of provisioned storage AND &lt;strong&gt;$0.065&lt;/strong&gt; per provisioned IOPS-month&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above numbers show that, at identical pricing, IO2 is definitely more lucrative than IO1. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Migrate existing IO1 to IO2&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For those who are interested in migrating existing EC2 instances from IO1 to IO2 volume types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If your intention is to reduce the volume size in the process, then unfortunately AWS EBS does not allow decreasing the size upon modification. Therefore, the only way to achieve this is to attach a new IO2 volume of the required size and then run the rsync command to copy your data from the current volume to the new one. You can refer to some of the examples and commands given in this document to make use of &lt;a href="https://www.linuxtechi.com/rsync-command-examples-linux/"&gt;rsync&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you would like to just change the volume from IO1 to IO2 without reducing the volume size, or increase the volume size in the process, then you can follow the &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/requesting-ebs-volume-modifications.html"&gt;EBS Modification Documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Using IO2 volumes certainly has an advantage over IO1 volumes, considering the performance and durability benefits, even though there are no cost savings between the two. Next time you provision an IO1 volume, think about the future: if you need more performance later, would you rather increase the volume size to get the desired IOPS, or simply increase the IOPS without changing the volume size?&lt;/p&gt;

&lt;h5&gt;
  
  
Note: This is the first of many future posts that will focus on optimizing your AWS infrastructure as well as costs. Stay tuned for more.
&lt;/h5&gt;

</description>
    </item>
    <item>
      <title>Strategy - Centralize all AWS KMS Keys  in one account and encrypt EBS volumes in another account</title>
      <dc:creator>himwad05</dc:creator>
      <pubDate>Wed, 26 Aug 2020 08:28:19 +0000</pubDate>
      <link>https://dev.to/himwad05/encrypting-ebs-volume-with-kms-key-from-another-aws-account-163l</link>
      <guid>https://dev.to/himwad05/encrypting-ebs-volume-with-kms-key-from-another-aws-account-163l</guid>
      <description>&lt;p&gt;&lt;strong&gt;Strategy: Centralize AWS KMS CMK in one account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the context of information security, it is important to maintain the Principle of Least Privilege. One way you can achieve this for AWS KMS (Key Management Service) is to maintain one centralized account for all your Customer Master Keys (CMKs), whose key administrators grant the necessary encryption/decryption permissions to key users in other accounts. This is analogous to an enterprise setting where HSM devices store and control all the encryption keys, and access to the HSM devices is restricted to certain HSM administrators. The main benefits of this strategy are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The AWS account hosting all KMS keys is used by only a limited number of people, making it easier to maintain, monitor, and alert on the actions carried out in the account through CloudTrail and CloudWatch Logs. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Permissions boundaries and Service Control Policies (SCPs) are relatively simple compared to an AWS account hosting multiple services like EC2 and EKS, where you end up creating complicated SCPs, permissions boundaries, and IAM identity/resource policies to ensure that only the necessary people have KMS access. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Implementation: Encrypt EBS volume in Account B (111122223333) from a AWS KMS CMK in Account A (444455556666)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On a high level, there are 3 steps to it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update the key policy for the CMK in Account A to share it with the desired AWS Account B&lt;/li&gt;
&lt;li&gt;(Optional) Create a grant if you are going to use Autoscaling group in Account B to make use of KMS CMK in Account A to launch new instances&lt;/li&gt;
&lt;li&gt;Create the EBS volume in the Account B OR Launch the instance from Autoscaling group&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;1. Update the Key Policy for the CMK in Account A&lt;/strong&gt;&lt;br&gt;
First, add the following two policy statements to the CMK's key policy in Account A, replacing the example ARN with the ARN of the external account (Account B) that will use the KMS keys. Please note that 111122223333 is the account ID for Account B, so you will have to replace it with your own Account B ID. These two policy statements ensure that Account B has key usage permissions only.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "Sid": "Allow external account 111122223333 use of the CMK",
   "Effect": "Allow",
   "Principal": {
       "AWS": [
           "arn:aws:iam::111122223333:root"
       ]
   },
   "Action": [
       "kms:Encrypt",
       "kms:Decrypt",
       "kms:ReEncrypt*",
       "kms:GenerateDataKey*",
       "kms:DescribeKey"
   ],
   "Resource": "*"
}


{
   "Sid": "Allow attachment of persistent resources in external account 111122223333",
   "Effect": "Allow",
   "Principal": {
       "AWS": [
           "arn:aws:iam::111122223333:root"
       ]
   },
   "Action": [
       "kms:CreateGrant"
   ],
   "Resource": "*"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. (Optional) Create a grant for Autoscaling group&lt;/strong&gt;&lt;br&gt;
With grants, you can programmatically delegate the use of KMS customer master keys (CMKs) to other AWS principals. Please click on &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/grants.html"&gt;Grants&lt;/a&gt; to read more about it. To achieve this, we will create a grant for the service-linked role for Autoscaling groups in Account B, which, if you have not customized it, will be "AWSServiceRoleForAutoScaling"; this service-linked role is created automatically when you create an Autoscaling group for the first time.&lt;/p&gt;

&lt;p&gt;The following example creates a grant on the AWS KMS CMK with the EC2 Auto Scaling service-linked role as the grantee principal. The create-grant command can be run by any IAM user or role configured in Account B (111122223333).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms create-grant \
--region us-west-2 \
--key-id arn:aws:kms:us-west-2:444455556666:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d \
--grantee-principal arn:aws:iam::111122223333:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling_CMK \
--operations "Encrypt" "Decrypt" "ReEncryptFrom" "ReEncryptTo" "GenerateDataKey" "GenerateDataKeyWithoutPlaintext" "DescribeKey" "CreateGrant"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Create an EBS volume or launch a new instance from Autoscaling group in Account B&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To create an EBS volume - Refer to the documentation in AWS to &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-volume.html#ebs-create-empty-volume"&gt;create a volume&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To launch an instance using Autoscaling group - Refer to the documentation for making use of launch configuration to &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg.html"&gt;launch a new instance from Autoscaling group&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
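&lt;p&gt;As a sketch of step 3, the following Python snippet builds the request parameters for an encrypted volume in Account B. The helper function is hypothetical; with boto3 you would pass the resulting dictionary to &lt;code&gt;ec2_client.create_volume(**params)&lt;/code&gt;. Note that for a cross-account CMK you must reference the key by its full ARN, not just the key ID:&lt;/p&gt;

```python
# Hypothetical helper: builds create-volume parameters for a volume
# encrypted with the Account A CMK. With boto3 (not used here) you
# would call boto3.client("ec2").create_volume(**params) from Account B.

def encrypted_volume_params(size_gb, availability_zone, cmk_arn):
    return {
        "Size": size_gb,
        "AvailabilityZone": availability_zone,
        "VolumeType": "gp2",
        "Encrypted": True,
        # Cross-account CMKs must be referenced by their full ARN.
        "KmsKeyId": cmk_arn,
    }

# Example using the Account A key ARN from step 2.
params = encrypted_volume_params(
    100,
    "us-west-2a",
    "arn:aws:kms:us-west-2:444455556666:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d",
)
print(params["Encrypted"])               # True
print(params["KmsKeyId"].split(":")[4])  # 444455556666 (Account A)
```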

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Finally, I would also like to mention that the above steps are not restricted to Autoscaling groups; you can create grants for any roles that you want to use for automating infrastructure provisioning via code. Once the necessary role has the grant, it will be able to create volumes or launch instances with encrypted volumes until you retire or revoke the grant.&lt;/p&gt;

</description>
      <category>awscrossaccountcmk</category>
      <category>awscrossaccountencryption</category>
    </item>
  </channel>
</rss>
