<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Totalcloud.io</title>
    <description>The latest articles on DEV Community by Totalcloud.io (@totalcloudio).</description>
    <link>https://dev.to/totalcloudio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F78451%2F1843b93c-afdf-4875-a0ef-6dcfb182342d.png</url>
      <title>DEV Community: Totalcloud.io</title>
      <link>https://dev.to/totalcloudio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/totalcloudio"/>
    <language>en</language>
    <item>
      <title>AWS Tutorial: Create An AWS Instance Scheduler With Terraform</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Wed, 29 Jul 2020 06:16:52 +0000</pubDate>
      <link>https://dev.to/totalcloudio/aws-tutorial-create-an-aws-instance-scheduler-with-terraform-ki0</link>
      <guid>https://dev.to/totalcloudio/aws-tutorial-create-an-aws-instance-scheduler-with-terraform-ki0</guid>
      <description>&lt;p&gt;Terraform is a popular IaaS tool used by many to create, update, and maintain their AWS architecture. If you use Terraform to provision your AWS architecture, you won’t be disappointed with our new AWS tutorial video.&lt;/p&gt;

&lt;p&gt;We show you how to set up your own instance scheduler with Terraform, and the video provides the scripts you need to work this magic. If you are unfamiliar with Terraform but still interested in learning this method, I suggest you first check out the &lt;a href="https://learn.hashicorp.com/terraform/getting-started/install"&gt;HashiCorp documentation for getting started with Terraform&lt;/a&gt; to get yourself up to speed.&lt;/p&gt;

&lt;p&gt;This video is a sequel to our previous video, which explains in depth &lt;a href="https://www.youtube.com/watch?v=oooxZsz0hS4"&gt;how to set up an AWS instance scheduler&lt;/a&gt; from the CloudFormation console. I recommend checking it out first, as those details won’t be revisited in this part 2.&lt;/p&gt;

&lt;p&gt;The tutorial is short and to the point: all you have to do is deploy the script we use and connect your AWS account with Terraform. Instance schedulers are a great way to control when your instances run and to cut your cloud costs substantially. With Terraform serving as the main provisioning tool for many users, an instance scheduler comes in handy for cloud management. There are very few tutorials, either as videos or articles, showcasing this feature, so I hope this serves as a great way to further your cloud management goals.&lt;/p&gt;

&lt;p&gt;You can find the video &lt;a href="https://www.youtube.com/watch?v=kq0qPUrZwwc"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>ec2instances</category>
    </item>
    <item>
      <title>Azure Vs AWS: What You Need To Know</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Wed, 29 Jul 2020 06:10:29 +0000</pubDate>
      <link>https://dev.to/totalcloudio/azure-vs-aws-what-you-need-to-know-n48</link>
      <guid>https://dev.to/totalcloudio/azure-vs-aws-what-you-need-to-know-n48</guid>
      <description>&lt;p&gt;Companies that have jumped the gun with cloud migration during this time of crisis have committed a fatal mistake. The knowledge gap among businesses that seek to migrate is often underestimated, leading to devastating expenditures and operational inefficiencies.  The rise of the Covid-19 pandemic has increased the demand for companies to opt for cloud migration. If you belong to such a group, you should read further to better prepare.&lt;/p&gt;

&lt;p&gt;It might be tempting to start with the most popular platform out there to avoid hours of research, but that tends to cause problems down the road. Learning about the leading service providers is essential so you can pick the one that best suits your goals, your budget, and your commitment.&lt;/p&gt;

&lt;h2&gt;How do I choose a public cloud provider?&lt;/h2&gt;

&lt;p&gt;Of the popular picks, AWS and Azure are among the lead performers. In 2020 the gap between AWS and Azure has narrowed further, and growth in areas like multi-cloud and containers makes the choice between the two worth discussing.&lt;/p&gt;

&lt;p&gt;Both come with distinct advantages and exclusive features. When it comes down to choosing between the two, the verdict depends on your business requirements. For example, if an organization needs a strong Platform-as-a-Service (PaaS) provider or Windows integration, Azure would be the preferable choice, while if an enterprise is looking for Infrastructure-as-a-Service (IaaS) or a diverse set of tools, AWS might be the better fit.&lt;/p&gt;

&lt;p&gt;When it comes to essential technical and hardware capabilities, Azure and AWS are neck and neck. Whether it is storage, networking, pricing, or compute, the differences are small. They also share similar features, such as an on-demand payment model, autoscaling, IAM, elasticity, security, and service provisioning.&lt;/p&gt;

&lt;p&gt;If we look at sheer user-base size, AWS is noticeably ahead of the competition: a million customers, 2 million servers, 100,000 compute cores, and $10 billion in annual revenue. AWS alone holds roughly a 40% market share in cloud computing; you could add the shares of the next three biggest platforms and still not match it. It is also the oldest of the major cloud services, with 11 years of successful business behind it.&lt;/p&gt;

&lt;p&gt;Meanwhile, Azure is growing at a rate of 120K new customers per month, with 5 million organizations using Azure Active Directory, nearly 5 million developers registered with Visual Studio Team Services, 1.4 million cloud databases, and 40% of its revenue generated from start-ups and ISVs. Azure is steadily closing the gap with AWS.&lt;/p&gt;

&lt;p&gt;So, to make the comparison, there are certain essentials to look into to see what your organization prioritizes. Let’s look at what each of these services excels at so you can reach your own verdict.&lt;/p&gt;

&lt;h2&gt;AWS vs Azure: What to look for?&lt;/h2&gt;

&lt;p&gt;Pricing Model:&lt;/p&gt;

&lt;p&gt;AWS has a pay-per-use model where you are charged hourly. You have three pricing plans to purchase an instance with:&lt;/p&gt;

&lt;p&gt;On-demand: Pay per use at a fixed rate for each machine&lt;br&gt;
Reserved: Commit to an instance for 1 or 3 years at a discounted fixed rate&lt;br&gt;
Spot: Bid on spare instance capacity&lt;/p&gt;

&lt;p&gt;Azure also has a pay-per-use model, but it charges per minute. Azure additionally offers short-term commitments on machines with prepaid or monthly charges. AWS has a similar model for its support plans, whereas Azure charges a fixed monthly rate.&lt;/p&gt;

&lt;p&gt;AWS so far has the more flexible pricing options and can work out to be the less expensive of the two if managed carefully.&lt;/p&gt;
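&lt;p&gt;To see why flexibility matters, here’s a back-of-the-envelope comparison in Python. The hourly rates are hypothetical placeholders, not current AWS prices: a reservation wins for a machine that runs around the clock, while on-demand wins for a machine that only runs during business hours.&lt;/p&gt;

```python
# Hypothetical hourly rates for illustration only -- real AWS prices vary
# by instance type, region, and term; check the AWS pricing pages.
ON_DEMAND_HOURLY = 0.0416       # e.g. a small general-purpose instance
RESERVED_1YR_HOURLY = 0.026     # effective hourly rate with a 1-year commitment

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate, hours_running=HOURS_PER_MONTH):
    """Cost of one instance running hours_running hours a month."""
    return round(hourly_rate * hours_running, 2)

always_on = monthly_cost(ON_DEMAND_HOURLY)
reserved = monthly_cost(RESERVED_1YR_HOURLY)
# On-demand billed only for business hours (say 12 h x 22 days) can beat
# a reservation that is paid for around the clock:
part_time = monthly_cost(ON_DEMAND_HOURLY, hours_running=12 * 22)

print(always_on, reserved, part_time)
```

The exact crossover point depends on real rates and usage patterns, which is why observing your workload first pays off.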

&lt;p&gt;Ease of Use:&lt;/p&gt;

&lt;p&gt;Amazon offers more features and configurations, but it has a learning curve. Power, customization, flexibility, and broad integration compatibility make that learning curve justified.&lt;/p&gt;

&lt;p&gt;Azure users familiar with Windows will have a far easier time making full use of the platform. The learning curve is minimal, and the platform offers a simple way to set up a hybrid server by integrating with on-premises Windows servers. Many third-party tools are also compatible.&lt;/p&gt;

&lt;h2&gt;Platform as a Service Functions&lt;/h2&gt;

&lt;p&gt;Azure and AWS offer similar PaaS capabilities for virtual networking, storage, and machines. However, Azure has an edge as it offers stronger and faster PaaS services.&lt;/p&gt;

&lt;p&gt;Microsoft Azure PaaS provides application developers with the environment and tools, giving them the building blocks they need to build and launch new cloud services quickly. It also provides essential DevOps connections, which are important for managing, monitoring, and continuously fine-tuning those apps. With Azure PaaS, much of the infrastructure management is handled behind the scenes by Microsoft, so you can focus entirely on innovation when developing Azure PaaS solutions.&lt;/p&gt;

&lt;p&gt;Integrated Environment: &lt;/p&gt;

&lt;p&gt;Azure has a noticeable advantage in offering an integrated environment for setting up a development pipeline, including testing and deployment. Clients get to choose their framework, and support for open development languages helps with Azure cloud migration. Migration and development-pipeline integration are otherwise complicated processes and a common source of user complaints.&lt;/p&gt;

&lt;p&gt;However, AWS has a wider range of services on offer, and AWS services integrate seamlessly with other Amazon services. On a global level, AWS also has many more servers, though this may not be an important consideration for your organization.&lt;/p&gt;

&lt;p&gt;Container as a Service capabilities:&lt;/p&gt;

&lt;p&gt;Amazon EC2 Container Service supports Docker containers: you can start and stop container-enabled applications with simple API calls, query the state of your cluster from a centralized service, and integrate with other Amazon EC2 features like security groups, EBS volumes, and IAM roles.&lt;/p&gt;

&lt;p&gt;Offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compatibility with Docker&lt;/li&gt;
&lt;li&gt;Managed Container Clusters&lt;/li&gt;
&lt;li&gt;Programmed Control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Azure Container Service lets you deploy and manage containers using the tools you choose. ACS optimizes the configuration of popular open-source tools and technologies specifically for Azure, and affords portability for both your containers and your application configuration. Choose the size, the number of hosts, and the orchestrator tools, and ACS handles everything else.&lt;/p&gt;

&lt;p&gt;Offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an Azure-optimized container hosting solution&lt;/li&gt;
&lt;li&gt;Compatibility with Apache Mesos/Docker Swarm&lt;/li&gt;
&lt;li&gt;Integration with other open-source, client-side tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both AWS and Azure offer Kubernetes as a service, with Azure having more flexible pricing options. Azure Kubernetes Service has its own resource monitoring feature, whereas you need to employ a third-party tool with AWS. AWS has auto-scaling of nodes and allows managing node groups.&lt;/p&gt;

&lt;p&gt;Security Features:&lt;/p&gt;

&lt;p&gt;Azure currently uses a security model based on the Security Development Lifecycle. Security is built in at its base, and private data and services stay protected while they are on Azure Cloud. Azure has an edge in the security department because of Microsoft’s long history securing enterprise software: its security policies have been continually reviewed and adapted to changing standards.&lt;/p&gt;

&lt;p&gt;Developer Tools:&lt;/p&gt;

&lt;p&gt;Both platforms offer their own set of developer tools. AWS has developer tools built by Amazon’s internal engineering teams, and the suite as a whole aims to support DevOps engineers. The tools include CodeCommit, which stores code in private Git repositories; CodeDeploy, which automates code deployments; and CodePipeline for continuous delivery. In addition, AWS offers a command-line interface (CLI) for controlling AWS services and writing automation scripts.&lt;/p&gt;

&lt;p&gt;Azure, on the other hand, has an IoT suite designed for scenarios like predictive maintenance and remote monitoring. It also offers core services for push notifications, IoT deployment monitoring, streaming analytics, and machine learning that combine with its cloud-based IoT services.&lt;/p&gt;

&lt;h2&gt;AWS vs Azure in 2020&lt;/h2&gt;

&lt;p&gt;We’ve heard users preferring AWS for its range of services, which benefits them in maintaining large-scale infrastructures, especially with SAP. Some users have found Azure to be cheaper and are actually migrating to it, and it has gained further traction during Covid-19 as demand grew for its SaaS cloud solutions. Azure users favor the Windows experience and also report more stable services, and its superior PaaS options are a bonus for users who prefer more automated provisioning. These two cloud providers continue to evolve, as do their competitors, so it’s also important to keep watch for their upcoming features and updates.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>aws</category>
      <category>cloud</category>
      <category>kubernetescontainers</category>
    </item>
    <item>
      <title>Cost Optimization With AWS Serverless Resource Scheduling </title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Thu, 02 Jul 2020 04:24:14 +0000</pubDate>
      <link>https://dev.to/totalcloudio/cost-optimization-with-aws-serverless-resource-scheduling-njo</link>
      <guid>https://dev.to/totalcloudio/cost-optimization-with-aws-serverless-resource-scheduling-njo</guid>
      <description>&lt;p&gt;Serverless Resource Scheduling&lt;/p&gt;

&lt;p&gt;Once we realized that we could save at least 40% of our server costs simply by switching them on and off as needed, the potential savings ran into millions of dollars. The concept of turning something ‘on’ &amp;amp; ‘off’ was easy enough to apply to resources like EC2 &amp;amp; RDS instances, because they inherently have the capability to be controlled.&lt;/p&gt;

&lt;p&gt;But why stop at servers? There are a ton of other resources in your cloud where the potential cost savings are massive - the only catch is that they don’t have that inbuilt switch. So instead of a simple on &amp;amp; off, we find other ways to schedule them and achieve similar results. And since AWS doesn’t provide a direct solution, we just have to create one.&lt;/p&gt;

&lt;p&gt;Take Redshift clusters, for example: at the time of turning ‘off’, you can take a snapshot &amp;amp; delete the cluster; when it’s time to turn it ‘on’, you create a new cluster &amp;amp; restore the snapshot. The same approach lets us schedule resources like EKS, ECS, Neptune databases, and almost 80% of your cloud. Imagine the cost &amp;amp; operational benefit of being able to park your entire cloud whenever there’s no need for it to be running.&lt;/p&gt;

&lt;p&gt;As cloud infrastructure has evolved, a considerable number of us have adopted serverless - following an on-demand execution model. This shift alone has saved us huge amounts - but since you’re charged per execution, any accidental executions or services left running longer than needed can result in bill shocks. So we wanted to extend the scheduling capability to serverless services, with the same goal in mind - save costs. The principle is to block the functions &amp;amp; key entry points that trigger serverless - during non-business hours &amp;amp; weekends, so there are no unintended executions &amp;amp; charges.    &lt;/p&gt;

&lt;p&gt;Here’s a deep dive into how serverless services can be ‘scheduled’ to achieve the best outcomes. We’ll also see an example of how we, at TotalCloud, use this concept to put our serverless architecture on a schedule.&lt;/p&gt;


&lt;p&gt;DynamoDB&lt;/p&gt;

&lt;p&gt;Reduce Read and Write Requests&lt;/p&gt;

&lt;p&gt;DynamoDB charges you in two ways: for storage past the free-tier limit and for the number of read and write request units. By reducing requests at certain times, or in response to changes in the architecture, you can save costs. Setting up a workflow that makes sure your RCU/WCU doesn’t go past its partition limit can also save you from accidental requests.&lt;/p&gt;
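&lt;p&gt;As a rough illustration, the partition-limit check can be sketched in a few lines of Python. The per-partition limits (3,000 RCU / 1,000 WCU) are AWS’s documented values, but the partition count here is a simplified assumption; real partitioning also depends on table size and history.&lt;/p&gt;

```python
# Sanity check that provisioned throughput stays under DynamoDB's
# per-partition limits (3,000 RCU / 1,000 WCU per partition).
RCU_PER_PARTITION = 3000
WCU_PER_PARTITION = 1000

def throughput_ok(rcu, wcu, partitions):
    """Return True if per-partition load stays within limits."""
    rcu_per_part = rcu / partitions
    wcu_per_part = wcu / partitions
    return not (rcu_per_part > RCU_PER_PARTITION or wcu_per_part > WCU_PER_PARTITION)

print(throughput_ok(2500, 800, 1))   # within limits
print(throughput_ok(9000, 500, 2))   # 4,500 RCU per partition: over the limit
```

A workflow alert can fire whenever this check fails, before the throttling (and the bill) surprises you.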

&lt;p&gt;Amazon S3&lt;/p&gt;

&lt;p&gt;Switch Between Different Amazon S3 Tiers&lt;/p&gt;

&lt;p&gt;Switching between storage tiers like S3 Glacier, Glacier Deep Archive, and One Zone-IA can bring down storage charges. However, there are certain conditions under which you should opt for such switches, and you can’t do it too frequently, as transitions between tiers incur charges as well.&lt;/p&gt;

&lt;p&gt;So, for example, let’s say there’s data stored in your Frequent Access tier that, despite the name, you haven’t been accessing frequently. You can set up a workflow from our platform that identifies it and moves it to the much cheaper Infrequent Access tier. All you need are the right permissions, and you can manage your Amazon S3 tiers with a single workflow.&lt;/p&gt;
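&lt;p&gt;The underlying decision can be sketched as a simple break-even check: move objects only when the savings from the cheaper tier outweigh the one-time transition charge. The prices below are illustrative assumptions, not current S3 rates.&lt;/p&gt;

```python
# Break-even check for an S3 tier transition. All rates are hypothetical
# placeholders -- substitute the published prices for your region.
STANDARD_GB_MONTH = 0.023
IA_GB_MONTH = 0.0125
TRANSITION_COST_PER_1000_OBJECTS = 0.01

def worth_moving(size_gb, n_objects, months_retained):
    """True if storage savings over the retention period beat the transition fee."""
    savings = (STANDARD_GB_MONTH - IA_GB_MONTH) * size_gb * months_retained
    transition_cost = TRANSITION_COST_PER_1000_OBJECTS * n_objects / 1000
    return savings > transition_cost

print(worth_moving(500, 10_000, 6))     # large, long-lived data: worth it
print(worth_moving(0.001, 100_000, 1))  # tiny objects, short retention: not worth it
```

This is also why frequent back-and-forth transitions erase the savings: each move pays the per-object fee again.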

&lt;p&gt;Lambda Functions&lt;/p&gt;

&lt;p&gt;Blocking The Function Execution at The Trigger&lt;/p&gt;

&lt;p&gt;You can block unwanted Lambda functions from executing by stopping the trigger, be it an API Gateway, an S3 event, or a DynamoDB event.&lt;/p&gt;

&lt;p&gt;Reduce Concurrency&lt;/p&gt;

&lt;p&gt;Reducing the window of Provisioned Concurrency on a Lambda function is also a smart way to save costs. Concurrent Lambda executions are a much-needed capability for many architectures, but you can always find ways to reduce the active provisioned time.&lt;/p&gt;

&lt;p&gt;CloudWatch Events&lt;/p&gt;

&lt;p&gt;Block Rules to Save Cost on all Resources&lt;/p&gt;

&lt;p&gt;You can shut down certain CloudWatch events at the trigger phase by blocking the rule, saving costs across all the services associated with it. This practice is useful when events are executed accidentally or are scheduled to run when they shouldn’t.&lt;/p&gt;

&lt;p&gt;What has always made this hard to achieve is that AWS provides no direct way to do it. With TotalCloud, you can create a workflow for each of these services and their schedules, with complete control over their execution.&lt;/p&gt;

&lt;p&gt;How we schedule our serverless architecture at TotalCloud&lt;/p&gt;

&lt;p&gt;As TotalCloud itself largely runs serverless, we’ve applied the same concept to our own architecture. In our case, we block the CloudWatch rules that trigger serverless executions, using our workflows to disable scheduled events during specific parts of the week. Our workflows are no-code, so you can set this up in minutes as a logical flow of instructions. But we’ve made it even easier: our ‘Scheduling solution’ lets you set up schedules for any resources in a simple UI, so you don’t have to create a workflow from scratch.&lt;/p&gt;

&lt;p&gt;Here’s a quick run-through of how we create our serverless schedules: &lt;/p&gt;

&lt;p&gt;Step 1: Choose the scheduling period&lt;/p&gt;

&lt;p&gt;With our Custom Schedules, you have the flexibility to choose how your schedules should function. You can set a specific time, make it a one-time event, or make it recurring. &lt;/p&gt;

&lt;p&gt;Step 2: Select the service and the resource associated with it&lt;/p&gt;

&lt;p&gt;In this case, the service is CloudWatch Events, and since we’re blocking the rule that acts as the trigger for the invocation of the function, select rules as your resource.&lt;/p&gt;


&lt;p&gt;Step 3: Choose the Key-Value pair to identify the Event&lt;/p&gt;

&lt;p&gt;Here, we specify the rule name associated with the events to filter those out specifically. &lt;/p&gt;

&lt;p&gt;Filters can be applied based on different metrics: a parameter, tags, or you can even write a function that filters at invocation time.&lt;/p&gt;

&lt;p&gt;In our case, we set the parameter where all of the rules that start with “ss” are filtered out to be scheduled.&lt;/p&gt;
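&lt;p&gt;Conceptually, the filter from this step boils down to a prefix match over rule names, something like this Python sketch (the rule names are made up for illustration):&lt;/p&gt;

```python
# Select only the CloudWatch Events rules whose names start with a
# given prefix -- the same "ss" convention described above.
def rules_to_park(rule_names, prefix="ss"):
    return [name for name in rule_names if name.startswith(prefix)]

rules = ["ss-nightly-report", "ss-cache-warmer", "prod-billing-sync"]
print(rules_to_park(rules))   # only the two 'ss' rules are scheduled
```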

&lt;p&gt;Step 4: Set the Parking action&lt;/p&gt;

&lt;p&gt;The parking action dictates what happens when the scheduled time arrives.&lt;/p&gt;

&lt;p&gt;Set the action as ‘disable rule’.&lt;/p&gt;

&lt;p&gt;Step 5: Set the Unparking action&lt;/p&gt;

&lt;p&gt;The unparking action lets you dictate what happens when the scheduled period comes to an end. It need not just reverse the parking action; it can also invoke a subsequent one.&lt;/p&gt;

&lt;p&gt;You’ll need the event to start running again once the schedule period is over. Just set the action as ‘enable rule’ in the unparking action.&lt;/p&gt;

&lt;p&gt;Step 6: Save and Deploy the schedule.&lt;/p&gt;

&lt;p&gt;With this, all our CloudWatch events starting with “ss” are temporarily disabled. This means we save costs on all the services used by those events.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cloudwatch</category>
      <category>s3</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>5 Best Practices For Tagging AWS Resources</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Thu, 02 Jul 2020 03:58:30 +0000</pubDate>
      <link>https://dev.to/totalcloudio/5-best-practices-for-tagging-aws-resources-435g</link>
      <guid>https://dev.to/totalcloudio/5-best-practices-for-tagging-aws-resources-435g</guid>
      <description>&lt;p&gt;Introduction to EC2 Tags&lt;br&gt;
A Tag is an identifying label you attach to your AWS resource. As the name suggests, they function similarly to the tags you’d find in the books of your library, something to categorize the individual items by or the specific sections they’ll be available in. AWS Tags, however, offer a wide range of benefits through its simple categorization.&lt;/p&gt;

&lt;p&gt;A tag has two components you get to define: a key and an optional value. Tags are a great way to organize your AWS resources into categories based on their purpose, environment, who maintains them, and so on. When you have n copies of the same resource, you are otherwise left without a way to tell them apart.&lt;/p&gt;

&lt;p&gt;Tags save you from that predicament, and you can also manipulate multiple resources under the same tag together, so you don’t have to assign a function to each of them separately. For example, you could define a set of tags for your account’s Amazon EC2 instances that helps you track each instance’s owner and stack level.&lt;/p&gt;

&lt;p&gt;Tagging Use Cases&lt;br&gt;
Tagging has several important use cases it can cover. A standardized approach to tagging is the surefire way to target these use cases.&lt;/p&gt;

&lt;p&gt;Common tagging use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost allocation&lt;/li&gt;
&lt;li&gt;Automation&lt;/li&gt;
&lt;li&gt;Access control&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cost Allocation&lt;br&gt;
Tags assist in cost allocation by way of AWS Cost Explorer and the Cost and Usage Report. These services can analyze your resources by categorizing them by tag. Analyzing the cost of resources pertaining to a specific project or one area of the business becomes feasible once tags are applied to those resources.&lt;/p&gt;

&lt;p&gt;So let’s say you have three different teams and you need to track the cost of these individual teams and assign responsibility to their utilization of resources. Tags could let you categorize the resources by departments or teams to give you clarity in exactly the flow of costs.&lt;/p&gt;

&lt;p&gt;Automation&lt;br&gt;
Tags can be the on/off switch for individual resources taking part in a collective automation function. Tags can filter specific groups of resources into automation activities across the whole environment, unrestricted by the walls of department, project, or application. Of course, this requires a tagging strategy and practice specific to your automation goals. A key example is starting/stopping AWS EC2 instances in non-business hours to cut costs. New resources launched with these tags are then automatically grouped into your cost-saving functions.&lt;/p&gt;

&lt;p&gt;Access Control&lt;br&gt;
Defining access control through tags is also widely supported. AWS Identity and Access Management policies let customers limit permissions by associating resources with tag keys and values. This way, access to test/prod/dev environments can be restricted based on resource tags.&lt;/p&gt;

&lt;p&gt;Security&lt;br&gt;
The applications hosted by distinct resources will have their individual security risk levels. Identifying and managing these various levels of security through the use of tags is the simplest way to improve security management across the infrastructure. Combining this with automation allows for automated compliance checks/access control and as a result, increased security.&lt;/p&gt;

&lt;p&gt;Why Implement a Tagging Strategy?&lt;/p&gt;

&lt;p&gt;As the cloud continues to grow, organizations migrate many of their resources over to the cloud and have an abundance of services to keep in check. This is in many ways a positive change but the maintenance of these resources is a chore especially when you have it all loaded up in one account. &lt;/p&gt;

&lt;p&gt;Tracking all these resources, in terms of who uses what and where the utilization goes, is the primary purpose of a tagging strategy. A strategy is required because it has to scale across your organization. This way, all resources can be filtered by application, the teams that run them, the owners of their expenses, or their type. These identification methods also pave the way for careful reporting and targeted alerts for analytics purposes.&lt;/p&gt;

&lt;p&gt;There are various tagging strategies you can employ based on the purpose of your tags. You can have multiple purposes based on what application or department it focuses on as well. &lt;/p&gt;

&lt;p&gt;The purpose is determined by the use cases you are focusing on. For example, if you are tagging for cost allocation then your tagging strategy should focus on clear communication of your financial dimensions.&lt;/p&gt;

&lt;p&gt;Typically, financial reporting covers a variety of dimensions, such as business unit, cost center, product, geographic area, or department. Aligning cost allocation tags with these financial reporting dimensions simplifies and streamlines your AWS cost management.&lt;/p&gt;

&lt;p&gt;Best Practices for EC2 Tagging&lt;br&gt;
Tag Every Resource&lt;br&gt;
Ensure each resource is tagged. Name tags are a unique identifier, so every resource that carries one can be individually manipulated. This comes in handy when you have multiple copies of the same resource. Name tags also aid in manipulating resources not only through AWS services but also through third-party tools and platforms built on top of them.&lt;/p&gt;

&lt;p&gt;Improve Identification with Camel Case and Namespaces&lt;br&gt;
Camel case is a simple convention of naming tags by joining multiple words into one string, with each word capitalized. This more descriptive style of tagging helps identification for your team and for anyone else who needs to use the resource.&lt;/p&gt;

&lt;p&gt;Namespaces are another convention that improves identification of a resource at a glance. Here, tags that all belong to one team, department, or other category are prefixed with a namespace, followed by the purpose or designation:&lt;/p&gt;

&lt;p&gt;Team:proj1&lt;br&gt;
Team:TestEnv2&lt;br&gt;
And so on.&lt;/p&gt;

&lt;p&gt;Limit Permission of Tags for Access Control&lt;/p&gt;

&lt;p&gt;Tags used to manage your access-control policies must have strict limits on who can create, delete, or modify them. You could create IAM policies that use conditional logic to restrict access, but this can be bypassed if an unauthorized party can modify the tags directly. Prevent this by applying deny rules on the ec2:CreateTags and ec2:DeleteTags actions. Also, have your information-security team analyze your tagged resources, their existing policies and permissions, and the vulnerabilities they pose.&lt;/p&gt;

&lt;p&gt;Design a Schematic of your Tagging Strategy&lt;br&gt;
When presenting your infrastructure to a separate entity, a schema of your resources, properly tagged using the earlier tips and a standardized approach, improves readability and comprehension. This is part of having a solid tagging strategy.&lt;/p&gt;

&lt;p&gt;Owner:department&lt;br&gt;
Owner:team&lt;br&gt;
Owner:application &lt;/p&gt;

&lt;p&gt;The above strategy lets a single owner’s various resources be identified and analyzed by the department they serve, the team they function in, and the application they power. Cost and utilization reports can then be analyzed per department, and the resources behind any attached application can be automated or rightsized. As stated earlier, your strategy should be devised around the objective of your tags: cost allocation, access control, or security. The purpose shapes the tagging strategy, and you can name your tags accordingly.&lt;/p&gt;

&lt;p&gt;Constrain Tag Values with AWS Service Catalog &lt;/p&gt;

&lt;p&gt;Keeping track of the values in a designated tag becomes a chore if you enter them manually. Tags entered through automation scripts get reviewed as part of the process, but manually entered tags open up opportunities for error. The TagOption library in AWS Service Catalog lets you specify the required tags and their allowed range of values, which reduces the likelihood of missing or invalid tags since the limits and required values are known up front.&lt;/p&gt;

&lt;p&gt;Common Mistakes&lt;br&gt;
Implementing tags in your environment comes with a few considerations. Some are minor, some are major, but all of them can leave lasting damage if you run a large infrastructure.&lt;/p&gt;

&lt;p&gt;Tags are case sensitive. This is a small thing, but when you have many resources to account for, overlooking it can come back to bite you. Hence, I recommend sticking to lowercase characters unless you are using namespaces.&lt;br&gt;
Spelling mistakes in names are another common error that leaves resources outside the categories you are trying to form.&lt;br&gt;
Tags attached to a single AWS resource, by default, apply to that resource alone and not to the resources attached to it as dependencies.&lt;/p&gt;
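&lt;p&gt;The case-sensitivity pitfall is easy to catch programmatically. This hypothetical sketch groups tag keys that differ only by case so they can be merged before they fragment your reports:&lt;/p&gt;

```python
# Flag tag keys that differ only by case, e.g. "Env" vs "env".
from collections import defaultdict

def case_conflicts(keys):
    groups = defaultdict(set)
    for key in keys:
        groups[key.lower()].add(key)
    # Keep only the lowercase forms that map to more than one spelling.
    return {k: sorted(v) for k, v in groups.items() if len(v) > 1}

print(case_conflicts(["Env", "env", "Owner", "team", "Team"]))
```

Running a check like this over your tag inventory periodically keeps cost reports from silently splitting one category into several.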

&lt;p&gt;Maintaining Tags&lt;br&gt;
Manually tagging each resource before you deploy it is wishful thinking; in practice you will end up with several resources missing tags, on top of the common mistakes mentioned earlier. Automating the tagging process by writing a script or creating a custom workflow eases the process and secures its success in the deployment pipeline.&lt;/p&gt;

&lt;p&gt;You can also use AWS Config to identify resources with missing tags by writing rules that alert you to them. The same can be achieved with the aforementioned workflows.&lt;/p&gt;
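&lt;p&gt;The core of such a missing-tag check is small enough to sketch in plain Python. The required keys and resource IDs below are hypothetical examples, not values from any real account:&lt;/p&gt;

```python
# Report which required tag keys each resource is missing.
REQUIRED = {"Owner", "Environment", "CostCenter"}

def missing_tags(resources):
    """resources maps a resource id to its tag dict; returns id to missing keys."""
    report = {}
    for rid, tags in resources.items():
        absent = REQUIRED - set(tags)
        if absent:
            report[rid] = sorted(absent)
    return report

fleet = {
    "i-0abc": {"Owner": "data-team", "Environment": "prod", "CostCenter": "42"},
    "i-0def": {"Owner": "web-team"},
}
print(missing_tags(fleet))   # only i-0def is flagged
```

Fed with a real tag inventory, the same logic drives the alerts an AWS Config rule or a workflow would raise.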

&lt;p&gt;Conclusion&lt;br&gt;
In essence, tagging is a very simple concept that gives your infrastructure an easy means of categorization, keeping track of everything that is happening and communicating developments clearly to people outside your own team. To maintain tagging policies at scale, it’s recommended to automate your tagging without long-winded scripts. A platform like TotalCloud can not only help you automate &amp;amp; keep track of all tags, but also report missing tags so you don’t lose out on anything important.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tags</category>
      <category>ec2</category>
      <category>bestpractices</category>
    </item>
    <item>
      <title>Helpful Tips For EC2 Rightsizing</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Thu, 02 Jul 2020 03:55:26 +0000</pubDate>
      <link>https://dev.to/totalcloudio/helpful-tips-for-ec2-rightsizing-2hn1</link>
      <guid>https://dev.to/totalcloudio/helpful-tips-for-ec2-rightsizing-2hn1</guid>
      <description>&lt;p&gt;Rightsizing and Its Effects on Cost Optimization&lt;br&gt;
Rightsizing is the definitive method to save costs on AWS EC2 or RDS instances and it is achieved by ensuring that your employed machines are all exactly what is needed to meet the capacity requirement and performance of the workloads. Nothing more, nothing less.&lt;/p&gt;

&lt;p&gt;Rightsizing starts with monitoring and analyzing the services you currently use. An observation period of at least two weeks, or even a month, will give you sufficient information on instance performance and usage patterns while also showing you the peak of your workload.&lt;/p&gt;

&lt;p&gt;The metrics that define instance performance include:&lt;/p&gt;

&lt;p&gt;vCPU utilization&lt;br&gt;
Memory utilization&lt;br&gt;
Network utilization&lt;br&gt;
Disk usage&lt;/p&gt;
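&lt;p&gt;A simple way to turn those metrics into a decision is to classify each instance from its observed utilization samples. The thresholds below are illustrative assumptions, not AWS recommendations:&lt;/p&gt;

```python
# Sketch: classify an instance from utilization samples (percent values)
# gathered over the observation window. The 20% and 90% thresholds are
# illustrative assumptions for this example.

def rightsizing_verdict(cpu_samples, memory_samples):
    """Suggest an action from average and peak vCPU/memory utilization."""
    avg_cpu = sum(cpu_samples) / len(cpu_samples)
    avg_mem = sum(memory_samples) / len(memory_samples)
    if max(avg_cpu, avg_mem) < 20:
        return "downsize"   # consistently idle capacity
    if max(cpu_samples) > 90 or max(memory_samples) > 90:
        return "upsize"     # peaks are saturating the machine
    return "keep"           # sized about right
```

&lt;p&gt;The same shape of check extends naturally to network utilization and disk usage.&lt;/p&gt;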

&lt;p&gt;What are the steps to Rightsizing an AWS environment?&lt;br&gt;
Chart a plan for your EC2 Project&lt;br&gt;
A project goal is essential in determining beforehand what the resources you need will be. A large cloud environment will be laden with multiple applications, servers, and databases. All these resources will be built upon On-demand, reserved, or spot instances. Without preemptive knowledge, you could invest in machines far beyond your budget limitations. &lt;/p&gt;

&lt;p&gt;A lot of teams start out believing that the pay-per-use model saves them from extra expenses because they can maintain the various machines themselves. This couldn’t be further from the truth. Worse, once they realize it, they rush into reserved instances and commit to durations that end up being too long. The duration of the instances you need depends on the volatility of your environment and the shift in workload demands on different occasions.&lt;/p&gt;

&lt;p&gt;A plan is essential so you know what goes where and have an idea on the type of resources you need. Eventually, you’ll be able to further break it down to identify the exact machines you require.&lt;/p&gt;

&lt;p&gt;At its core, the basic principle is to be aware of what your predictable workloads are so you can equip them with reserved instances. While reserved instances force you to commit to a duration, they are cheaper, and if purchased with foreknowledge, you won’t have to worry about rightsizing those instances later.&lt;/p&gt;

&lt;p&gt;Note: Try to plan your rightsizing a few weeks before the renewal of your reserved instances so you are prepared for your next commitment.&lt;/p&gt;

&lt;p&gt;Keep up with each new generation of instances that comes out, so you don't miss a cheaper or more powerful alternative to whatever you are looking for.&lt;/p&gt;

&lt;p&gt;Choosing the right EC2 Instance&lt;br&gt;
Once you’ve charted the general idea of the resources you’ll need for your initial setup, you can go deeper into the right instances you’ll need for the capacity requirements of your servers, applications, and other parts that make up your environment. &lt;/p&gt;

&lt;p&gt;Where do the instances you need for your application fall under? General purpose or compute optimized?&lt;br&gt;
What are your security needs?&lt;br&gt;
What about your storage specs?&lt;br&gt;
What will be the estimated flow of traffic?&lt;/p&gt;

&lt;p&gt;Break down all the targets you need to cover to work out which particular instances you’ll need, and do this across the environment.&lt;/p&gt;

&lt;p&gt;Analyzing performance data&lt;br&gt;
Analyzing the performance data of your infrastructure will let you pinpoint idle instances or instances with low CPU utilization. These rightsizing opportunities are a result of continuous monitoring. The two factors that you should keep an eye on include CPU usage and memory usage. While Amazon CloudWatch can help you monitor your resources and provide you with reports, you are forced to make the necessary rightsizing changes yourself. You may have come to the conclusion that your CPU is under a constant load of 70-80%, so you are planning to upgrade your machines but this process is entirely manual. Understanding when changes need to happen is easier than figuring out what those changes exactly are. Totalcloud can help save costs by automating the identification of rightsizing opportunities, recommending the right instances to purchase, and taking action by deploying, shutting down, or terminating the resources as per instruction. That’s a lot of birds with one stone.&lt;/p&gt;

&lt;p&gt;AWS RDS also benefits from responding to performance data. All the above principles still apply for RDS, but here you should pay attention to these specific factors:&lt;/p&gt;

&lt;p&gt;Average CPU utilization&lt;br&gt;
Maximum CPU utilization&lt;br&gt;
Minimum available RAM&lt;br&gt;
Average memory being read and written from disk per second&lt;/p&gt;

&lt;p&gt;Workload fluctuations&lt;br&gt;
The workload of your environment can fluctuate at different times of the year. High traffic on your servers or increased memory-volume requirements must all be responded to as early as possible.&lt;/p&gt;

&lt;p&gt;There are different ways you can tackle workload fluctuations. &lt;/p&gt;

&lt;p&gt;Setting up an Auto Scaling group will help you respond accurately to meet your desired targets and react to changes.&lt;br&gt;
AWS Auto Scaling can also be predictive, preparing itself for sudden spikes.&lt;br&gt;
Automating the rightsizing by scheduling it proactively is a viable option when you have an idea of when certain fluctuations could happen. Totalcloud lets you approach this scenario with scheduling templates.&lt;/p&gt;

&lt;p&gt;Setting auto-scaling groups alongside your rightsizing strategies covers you against all changing demands on your machinery. It is unwise to rely solely on your ASG to respond, as a lot of fluctuations and demands can be recognized through analysis of your performance data. Preparing beforehand, by either modifying the instances on hand or changing their specs, is a strategy that incentivizes you to always be aware of your environment.&lt;/p&gt;

&lt;p&gt;Rightsizing is an ongoing process&lt;br&gt;
Many companies adhere to forming strict plans that comply with their budget requirements but slack off on keeping up with the developments within the environment or outside.&lt;/p&gt;

&lt;p&gt;Rightsizing can’t stop after your first major changes. Aside from having a plan ready for when your reservations expire, constant monitoring will open new avenues for adjustments that can save you money. It also surfaces patterns that aid in predicting your workload.&lt;/p&gt;

&lt;p&gt;So starting out with a plan, monitoring the developments, and reworking the plan keeps you ahead of the curve against all possible demands faced during the runtime of your environment.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>cloudwatch</category>
    </item>
    <item>
      <title>The Ultimate Guide To AWS Savings Plan</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Mon, 04 May 2020 06:28:22 +0000</pubDate>
      <link>https://dev.to/totalcloudio/the-ultimate-guide-to-aws-savings-plan-12ei</link>
      <guid>https://dev.to/totalcloudio/the-ultimate-guide-to-aws-savings-plan-12ei</guid>
      <description>&lt;p&gt;AWS launched Elastic Cloud Compute in August 2006, back then there was only one pricing model. Aiming for flexibility, the pay-per-use model that we come to now call, “On-Demand” was the sole way to purchase the various resources AWS offered. Back then there was only one region and one size but as the instance families and the available regions grew, Amazon rolled out a new pricing model in 2009. Reserved Instances.&lt;br&gt;
RIs saved users billions of dollars with its 1-year or 3-year commitment plans. However, while it did save money, the very concept of RIs was at odds with many of the ideas associated with AWS services. The core promise given to AWS users was flexibility and elasticity of resources.&lt;br&gt;
Limiting yourself to a fixed price resource over time takes away the capacity to utilize any number of services with the possibility of elastic adjustments.&lt;br&gt;
Amazon went through hoops by bringing up new features like selling unwanted RIs and purchasing convertible RIs to fix the lack of freedom its customers have. But such roundabout strategies came with purchase complexities and management complications. Despite all this, RIs were still successful as users were attracted to their discounted pricing. With the cloud market filled with cloud management tools, many enterprises found the management of RIs, Spot and On-Demand services easier to handle. Some organizations paid little heed to the benefits of adjusting specifications strictly for organizational needs.&lt;/p&gt;

&lt;h2&gt;Introducing AWS Savings Plan&lt;/h2&gt;

&lt;p&gt;Addressing all of these constraints, AWS announced in November 2019 an entirely new discount program that maintains the flexibility and elasticity of its resources while guaranteeing lower pricing for longer user commitments. On paper, the AWS Savings Plan is a great introduction, but its two main challenges come from consumers who are unaware of how to take advantage of it or who are at odds with it because they already use Reserved Instances.&lt;br&gt;
Hopefully, by the end of this guide, you’ll not only have the full picture of how AWS Savings Plans work but also know what your next step should be if you already have other cost-saving plans in hand.&lt;/p&gt;

&lt;h2&gt;What is the AWS Savings plan?&lt;/h2&gt;

&lt;p&gt;With the AWS Savings Plan, users commit to spending a specific price per hour over a fixed period of time. In return, AWS offers substantial discounts against On-Demand rates for EC2 instances and the Fargate container service. Any consumption above the committed amount is charged at On-Demand rates.&lt;br&gt;
The purchasing process is far simpler with the AWS Savings Plan because it allows users to be more flexible with resource specifications. Purchasing an RI requires about 8 components to be determined, whereas you only need 5 for a Savings Plan.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5Odzrwf9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1dq5bmorblvqhx0pz1ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5Odzrwf9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1dq5bmorblvqhx0pz1ez.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The longer the commitment, the higher the discount, and customers who choose to pay all upfront get the benefit of further discounts. Customers can commit to as little as $0.001 per hour and then layer multiple Savings Plans together. For example, a customer can follow up on an instance that is about to have its discount expire with more Savings Plans, or split Savings Plans according to which instance requires more utilization.&lt;br&gt;
As mentioned earlier, there are two types of Savings Plan to choose from: the EC2 Instance Savings Plan and the Compute Savings Plan. The two plans, much like Standard and Convertible RIs, differentiate themselves by offering more discount for less flexibility.&lt;/p&gt;
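&lt;p&gt;The commitment mechanics described above can be sketched with a small worked example. The rates and discount below are illustrative figures, not published AWS prices:&lt;/p&gt;

```python
# Worked example (illustrative numbers): usage is billed against the hourly
# commitment at the discounted rate first; overflow falls back to On-Demand.
# The commitment is paid whether fully used or not.

def hourly_bill(usage_od_value, commitment, discount):
    """usage_od_value: what the hour's usage would cost at On-Demand rates.
    commitment: committed $/hour; discount: e.g. 0.30 for 30% off."""
    covered = commitment / (1 - discount)       # On-Demand value the plan covers
    overflow = max(0.0, usage_od_value - covered)
    return commitment + overflow

# $10/hr of On-Demand usage, a $3.50/hr commitment at a 30% discount:
# the plan covers $5 worth of usage, the remaining $5 is On-Demand.
```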

&lt;h2&gt;AWS EC2 Instance Savings Plan&lt;/h2&gt;

&lt;h3&gt;What is the EC2 Instance Savings Plan?&lt;/h3&gt;

&lt;p&gt;The EC2 Instance Savings Plan can offer discounts of up to 72% off the On-Demand rate, depending on the term of commitment, the payment option used, and the instance family chosen. The restriction is that the plan is tied to a specific instance family and region. You can, however, change the size, OS, and tenancy without losing the plan.&lt;/p&gt;

&lt;h3&gt;Standard RIs vs EC2 Instance Savings Plan&lt;/h3&gt;

&lt;p&gt;Both Standard RIs and EC2 Instance Savings Plan are the more restricted options within their respective discount programs. They aim to offer more discounts for fewer liberties.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V7lOWkyT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yxxicdq2nxn2thrrlhd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V7lOWkyT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yxxicdq2nxn2thrrlhd3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;AWS Compute Savings Plan&lt;/h2&gt;

&lt;h3&gt;What is the AWS Compute Savings Plan?&lt;/h3&gt;

&lt;p&gt;For a smaller discount of up to 66%, the Compute Savings Plan offers more freedom of choice than its counterpart, the most obvious feature being that the plan extends across EC2 and Fargate services. Along with that, you are given freedom of region, instance family, and migrated services (for example, if a container service is moved to Fargate).&lt;/p&gt;

&lt;h3&gt;Convertible RIs vs Compute Savings Plan&lt;/h3&gt;

&lt;p&gt;The Compute Savings Plan is a better option than Convertible RIs because it offers less purchase complexity along with improved flexibility, which is the main selling point of Convertible RIs.&lt;/p&gt;

&lt;h3&gt;The Drawbacks of Savings Plans&lt;/h3&gt;

&lt;p&gt;Savings Plans come with their own drawbacks:&lt;/p&gt;


&lt;ul&gt;
&lt;li&gt;You can’t purchase any Savings Plan for ECS, RDS, Redshift, and other services.&lt;/li&gt;
&lt;li&gt;There are no options to resell underutilized Savings Plans, so purchase carefully.&lt;/li&gt;
&lt;li&gt;Once you are past your Savings Plan commitment, you will be charged On-Demand prices.&lt;/li&gt;
&lt;li&gt;There are no capacity reservations.&lt;/li&gt;
&lt;li&gt;The discounts aren’t actually better than Reserved Instances; most of the time they are the same, and sometimes they are lower.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;When to choose between Savings Plan and Reserved instances&lt;/h3&gt;

&lt;p&gt;The best way to choose between these two options is to answer two questions: how predictable are your resources, and how many varieties of services are you using?&lt;br&gt;
If your resources are not going to change their region, size, etc. at all, then you don’t necessarily need a Savings Plan. This doesn’t mean you can’t benefit from one, but only under certain circumstances.&lt;/p&gt;

&lt;p&gt;If you have an EC2 instance that is predictable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You will need Reserved Instances if its capacity needs to be reserved&lt;/li&gt;
&lt;li&gt;You will need a Savings Plan if it will be running continuously with heavy utilization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Similarly, you can choose a Compute Savings plan instead of Convertible RIs to reduce the management overhead of the resources.&lt;/p&gt;

&lt;p&gt;If you are running all your projects on Fargate, then the Savings Plan is your definitive choice. However, customers with ElastiCache, RDS, Redshift, or container services can’t really benefit from the Savings Plan, so all these additional services still require RIs for lower prices.&lt;/p&gt;

&lt;h2&gt;How to Purchase a Savings Plan- A 5-step Guide&lt;/h2&gt;

&lt;h3&gt;Step 1: Determine Your Resource&lt;/h3&gt;

&lt;p&gt;Determining your resources is part of finding the specifics of your Savings Plan. For this, you need to be aware of the predictability of your AWS infrastructure. How often will you be changing the scope of your EC2 instances? Will you be utilizing Fargate as a service? What are your budget plans or your workload plans? Also, factor in the waiting period before new purchases. With Savings Plans, there’s no resale option like there is with RIs, so you can add resources to extend the plan but you can’t subtract.&lt;/p&gt;

&lt;h3&gt;Step 2: Infrastructure Rightsizing&lt;/h3&gt;

&lt;p&gt;Rightsizing is important to ensure you don’t spend your Savings Plan on cloud waste. Stricter management of your resources can avoid unnecessary expenditure. Opt for the most effective instance that suits all your technical requirements, nothing more and nothing less. This involves comparing the CPU, memory, disk consumption, and network type against your requirements. We have a couple of resources that can make this decision easier.&lt;/p&gt;

&lt;h3&gt;Step 3: Analyze your Reservations&lt;/h3&gt;

&lt;p&gt;This step is crucial if you have existing Reserved Instances. It’s important to optimize your existing RI fleet, by modifying Standard zonal RIs and exchanging Convertible RIs where necessary, in order to accurately determine how much Savings Plan coverage is required.&lt;br&gt;
Optimizing your existing RI fleet will help you better analyze and understand RI usage.&lt;br&gt;
If your Convertible RIs are allocated to dynamic resources that require constant adjustments by different teams, you can exchange them for a Compute Savings Plan to keep the benefit of making adjustments with no loss in discounts. Similarly, if your Standard zonal RIs are specific to EC2 instances and you have optimized their specs to match your technical requirements, you can layer an EC2 Instance Savings Plan on top of them after the reservation term is over.&lt;/p&gt;

&lt;h3&gt;Step 4: The Purchasing Process&lt;/h3&gt;

&lt;p&gt;Purchasing can be done in AWS Cost Explorer, and the process is similar to purchasing RIs, with the difference that customers paying partial upfront will have to manually set their payment amount.&lt;/p&gt;

&lt;p&gt;Go to Cost Explorer and you will find Savings Plan, click Recommendations for curated suggestions based on your current setup.&lt;/p&gt;

&lt;p&gt;Choose the recommendation options as you see fit.&lt;/p&gt;

&lt;p&gt;The recommendation suggests how much you will save monthly, for example 40% when committing to a 3-year term. Add the Savings Plan to the cart.&lt;/p&gt;

&lt;p&gt;Click on View Cart.&lt;/p&gt;

&lt;p&gt;Review your cart and click submit order. The savings plan will be in effect right away. You can utilize the Cost Explorer’s reports and analysis to review your savings.&lt;/p&gt;

&lt;h3&gt;Step 5: Monitoring your Savings Plan&lt;/h3&gt;

&lt;p&gt;Avoid under-using the Savings Plan you’ve set for yourself by constantly monitoring your resources. A Savings Plan isn’t a pay-per-use model: whatever the level of consumption, you will be charged what you committed to. And once you’ve used up your committed rate, On-Demand charges start to apply, which may force you to purchase additional plans. Ideally, you should control the consumption rate and seize opportunities to take up new Savings Plans.&lt;br&gt;
You can use utilization reports, coverage reports, and the AWS inventory to monitor and track your consumption. Additionally, you can set up budget plans using AWS Budgets to ensure your consumption rate doesn’t go past your limit.&lt;/p&gt;
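&lt;p&gt;The headline figure in a utilization report boils down to simple arithmetic: the share of your commitment actually applied to usage over the period. A minimal sketch:&lt;/p&gt;

```python
# Sketch of the arithmetic behind a Savings Plans utilization figure.
# Both inputs are dollar amounts over the same reporting period.

def sp_utilization_pct(used_commitment, total_commitment):
    """Percentage of the commitment actually applied to usage."""
    return 100.0 * used_commitment / total_commitment

# e.g. $70 of an $84 weekly commitment applied to usage is ~83% utilization;
# the remaining $14 was paid for but unused.
```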

&lt;h2&gt;Using RIs and Saving Plans together&lt;/h2&gt;

&lt;p&gt;Purchasing an AWS Savings Plan doesn’t exempt you from following cost optimization strategies. Many of the customers planning to take up a Savings Plan will already have purchased RIs for their cheaper prices. Abandoning your RIs by selling them all and taking up a Savings Plan isn’t an ideal solution for your budget. Contrary to some opinions, Reserved Instances aren’t getting shafted to accommodate Savings Plans. You can use both programs in conjunction to add smarter discounts to your resources.&lt;br&gt;
By using RIs as layers upon your AWS Savings Plan, you have a backup discount program that can reduce cloud costs until the expiration of your reservations. If you have predictable resources that aren’t covered by RIs, applying a Savings Plan to them can be a safe option.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awssavingsplan</category>
      <category>awsreservedinstances</category>
      <category>awscostoptimization</category>
    </item>
    <item>
      <title>Tips to Optimize AWS Auto Scaling Groups</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Mon, 27 Apr 2020 07:43:07 +0000</pubDate>
      <link>https://dev.to/totalcloudio/tips-to-optimize-aws-auto-scaling-groups-201i</link>
      <guid>https://dev.to/totalcloudio/tips-to-optimize-aws-auto-scaling-groups-201i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3abvXdi6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gg200888eeivtsnfrzmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3abvXdi6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gg200888eeivtsnfrzmf.png" alt="ASG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What is AWS Auto Scaling?&lt;/h2&gt;

&lt;p&gt;An Auto Scaling group treats a collection of EC2 instances as a logical group and automatically scales it in size and quality. It allows for dynamic management and gives you features that reduce the management overhead, two of which are scaling policies and health-check replacements. You can adjust the size of the group to meet the demand of your environment, either manually or automatically. An Auto Scaling group starts by launching enough instances to meet its desired capacity, then maintains that number by performing periodic health checks on the instances in the group. If an instance becomes unhealthy, the group terminates it and launches a replacement. Auto Scaling groups can give you your desired number of resources and, when the goal is achieved, automatically scale down to cut costs and management. Similarly, for updated deployments, you can bring in a new ASG with new instances.&lt;/p&gt;

&lt;h2&gt;How Do You Optimize an AWS Auto Scaling Group?&lt;/h2&gt;

&lt;p&gt;Once you have your ASG set up, you can optimize the process of scaling and managing your instances in a number of ways. There are many mistakes people make when setting up their applications, especially ASGs. There are also a lot of different tools and best practices available that make the process simpler, more efficient, or just better for your wallet.&lt;/p&gt;

&lt;h2&gt;Automatic vs Manual management&lt;/h2&gt;

&lt;p&gt;Most people set up their ASG by going to the AWS console. From there, you have the option to choose your name, the size, an AMI, a security group, and other tightly packed specifications. There’s plenty more. The problem is, if you mess up in this setup process, you have to do it all over again. Forgot to enter a detail or mistyped something? Start over. Just check this out: AWS has an entire guide on modifying ASG specs. On the other hand, there’s a different way to go about this process. AWS CloudFormation lets you create Auto Scaling groups (and other resources, in fact) based on your description of the specifications you want. It uses a template that details resources, provisions, and security policies all in one package, and here’s the kicker: you can even modify it. Besides the gift of specifying all aspects of your deployment, you can also apply version control to your AWS infrastructure in the same manner in which you version your code.&lt;/p&gt;
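&lt;p&gt;As a minimal sketch, such a template might look like the fragment below. The resource names, AMI, and subnet ID are placeholders, not values from a real account:&lt;/p&gt;

```yaml
# Illustrative CloudFormation fragment: an Auto Scaling group defined as code,
# so changes go through template edits and version control, not the console.
Resources:
  WebLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # placeholder AMI
        InstanceType: t3.micro
  WebAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "4"
      DesiredCapacity: "2"
      VPCZoneIdentifier:
        - subnet-0123456789abcdef0       # placeholder subnet
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
```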

&lt;h2&gt;Configure Your Health Check&lt;/h2&gt;

&lt;p&gt;An Auto Scaling group has a default health check policy called EC2 status checks. If your instances fail these checks, the ASG considers them unhealthy and replaces them. Health checks are vital to keeping your applications running optimally. Sadly though, the default EC2 health checks don’t guarantee a thorough look into whether your application will fail during deployment. There are factors the check doesn’t account for: the EC2 check could say the instance is A-ok while your application still dies running inside it. With the EC2 health check alone, you really don’t know if your application can handle requests or is still performing its duties correctly. AWS does offer the option to attach more target groups to broaden the scope of the health check. You can attach an ELB health check to add more conditions to the monitoring of your instances, and Application Load Balancer health checks to focus on your application. These health checks involve sending pings, trying to establish a connection, and sending requests.&lt;/p&gt;

&lt;h2&gt;Reactive/Proactive/Predictive Scaling&lt;/h2&gt;

&lt;p&gt;You will need to preemptively configure your scaling group to dynamically scale your instances. This is done by enabling a policy that triggers a resize through CloudWatch metrics.&lt;/p&gt;
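&lt;p&gt;As a sketch, a common form of such a policy is target tracking on average CPU. The parameters below are shaped for the boto3 Auto Scaling client's put_scaling_policy call; the group name and target value are illustrative:&lt;/p&gt;

```python
# Sketch: parameters for a target-tracking scaling policy, as you would pass
# them to boto3's autoscaling.put_scaling_policy(**params). The group name
# and target value are illustrative.

def cpu_target_policy(asg_name, target_cpu):
    """Build a policy that keeps the group's average CPU near target_cpu."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": asg_name + "-cpu-target",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": float(target_cpu),  # e.g. keep average CPU near 50%
        },
    }
```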

&lt;p&gt;There are three ways scaling can be done and intelligently managing these three different ways can give you optimal results.&lt;/p&gt;

&lt;p&gt;Reactive Scaling&lt;br&gt;
Proactive Scaling&lt;br&gt;
Predictive Scaling&lt;/p&gt;

&lt;h3&gt;Reactive Scaling&lt;/h3&gt;

&lt;p&gt;Reactive scaling dynamically resizes your instances based on demands that occur in real time. This is the standard use case of auto scaling and it comes at no extra cost. As you come upon increased usage, traffic, or load, your instances are automatically scaled to meet the changing demands. It is useful for applications whose traffic keeps going up and down with no clear pattern, where you want to maintain performance with less hassle.&lt;/p&gt;

&lt;h3&gt;Proactive Scaling&lt;/h3&gt;

&lt;p&gt;Proactive scaling is chosen when you know what the load will be in the upcoming future, so you schedule your scaling policies to trigger a resize when it is required. For example, if you have an application with high activity during the weekends and not so much on other days, you might not want to run a large number of EC2 instances all week just because of the weekend load. A better approach would be to use scheduled scaling.&lt;/p&gt;
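&lt;p&gt;The weekend example can be sketched as a pair of scheduled actions, shaped for boto3's put_scheduled_update_group_action call. The group name, sizes, and cron times are illustrative:&lt;/p&gt;

```python
# Sketch: a weekend scale-up expressed as scheduled actions, one per resize,
# as you would pass each to autoscaling.put_scheduled_update_group_action.
# Group name, capacities, and recurrence times are illustrative.

def weekend_schedule(asg_name, weekday_size, weekend_size):
    return [
        {   # scale up Friday 18:00 UTC (cron: minute hour day month weekday)
            "AutoScalingGroupName": asg_name,
            "ScheduledActionName": "weekend-scale-up",
            "Recurrence": "0 18 * * 5",
            "DesiredCapacity": weekend_size,
        },
        {   # scale back down Monday 06:00 UTC
            "AutoScalingGroupName": asg_name,
            "ScheduledActionName": "weekend-scale-down",
            "Recurrence": "0 6 * * 1",
            "DesiredCapacity": weekday_size,
        },
    ]
```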

&lt;h3&gt;Predictive Scaling&lt;/h3&gt;

&lt;p&gt;This is a feature AWS provides that hinges on its predictive capabilities to report possible upcoming resizing to you. It is very similar to reactive scaling in how the scaling is done, but you are given knowledge of the future, as with proactive scaling. Predictive scaling gives you a general trend of how your instances will be altered, so you can use this information to paint a picture of what your pricing will look like.&lt;/p&gt;

&lt;p&gt;Each of these scaling methods comes with its own benefits. You can use all three, depending on the nature of the environments you have created, to maximize the efficiency of your instances while reducing the price and staying aware of it as well. The ideal goal is not to rely entirely on reactive scaling, but also not to load up your management overhead.&lt;/p&gt;

&lt;h2&gt;Turning off Auto Scaling&lt;/h2&gt;

&lt;p&gt;You don’t need to keep your scaling policies active at all times. ASGs themselves don’t charge you any extra fees, but the CloudWatch metrics and alarms behind your scaling policies can incur charges. You can bring this down by carefully choosing when to turn your policies on and off. You can reduce the tedium of manually maintaining the status of your ASG by choosing a service that automates the process.&lt;/p&gt;

&lt;p&gt;Similar to Proactive Scaling, scheduling your scaling group is best done when the status of your application or environment is predictable. It is a small task but when you consistently find opportunities to stop your scaling groups, it will help ease your AWS bills.&lt;/p&gt;

</description>
      <category>awsautoscalinggroup</category>
      <category>ec2instances</category>
      <category>awsscaling</category>
      <category>awselb</category>
    </item>
    <item>
      <title>Instance Comparison Chart: Find The Right AWS EC2 Instance</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Wed, 15 Apr 2020 11:16:40 +0000</pubDate>
      <link>https://dev.to/totalcloudio/instance-comparison-chart-find-the-right-aws-ec2-instance-3hj0</link>
      <guid>https://dev.to/totalcloudio/instance-comparison-chart-find-the-right-aws-ec2-instance-3hj0</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xiHgya3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f2gacaf7mqr23bmbx233.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xiHgya3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f2gacaf7mqr23bmbx233.PNG" alt="EC2 instance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So you’ve decided to make AWS the primary provider of your cloud service and now you’re looking into setting up your environment. You’ve got your project ready to be deployed, and all you have left to do is choose an AWS instance that will run your machine image. But now, like many others before you, you are stumped by the countless choices of EC2 instances out there.&lt;/p&gt;

&lt;p&gt;Choose the wrong ones and you pay the price not only from your wallet but also in the effectiveness of your cloud environment. It is a very common obstacle for new AWS users to overcome. Along with the varied types of instances, the different AWS EC2 pricing options add fuel to the flames of uncertainty. &lt;/p&gt;

&lt;p&gt;For example, purchasing an i3en.3xlarge for its memory and CPU specs is a choice that could leave you with an extra $2k in your annual bill, when you can get the same capacity for a cheaper price by purchasing a z1d.3xlarge instead.&lt;/p&gt;

&lt;p&gt;Having a concrete budgetary plan with deep insight into the specs you are looking for can greatly help narrow down your choices. However, we don’t all come in with solid plans, and sifting through the choices is time-consuming and ineffective.&lt;/p&gt;
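&lt;p&gt;A quick bit of arithmetic shows how a small hourly price gap compounds over a year of 24x7 use. The rates below are illustrative figures, not current AWS prices:&lt;/p&gt;

```python
# Illustrative comparison: a small hourly price gap between two similarly
# sized instances compounds over a year. Names and rates are made up for
# this example, not real AWS prices.

HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate):
    """Annual cost of a machine running around the clock."""
    return hourly_rate * HOURS_PER_YEAR

example_rates = {"instance-a": 1.36, "instance-b": 1.12}  # $/hour, illustrative
gap = annual_cost(example_rates["instance-a"]) - annual_cost(example_rates["instance-b"])
# a $0.24/hour gap works out to roughly $2,100 per year for a 24x7 workload
```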

&lt;p&gt;Totalcloud’s Instance Comparison Chart lets you view the instances in a scatter chart with rich customization options. You can compare instances, filter them by your preferences, and view the different instances from the lowest to the highest EC2 price.&lt;br&gt;
Choose from compute-optimized instances, general-purpose instances, or any other instance type, filter them according to the memory or vCPU specs you have in mind, and view your narrowed-down list of instances by price from lowest to highest.&lt;br&gt;
Change the pricing model from AWS Reserved Instances for Windows or Linux to On-Demand pricing.&lt;br&gt;
Click on the data points to compare multiple instances.&lt;br&gt;
View the prices hourly, weekly, monthly, or annually.&lt;/p&gt;

&lt;p&gt;Whether you come in with everything you want already in mind or you’re completely blank on what you’re looking for, you’ll still gain a lot of information from the chart. We hope you can find the right EC2 instances for your projects; please provide us with feedback for any suggestions you might have.&lt;/p&gt;

&lt;p&gt;You can check the tool out &lt;a href="https://www.totalcloud.io/aws-instance-types"&gt;here&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pNc5Up9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u5xq62f1azs132umswgd.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pNc5Up9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/u5xq62f1azs132umswgd.PNG" alt="Table"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsec2instance</category>
      <category>awscostoptimization</category>
      <category>awsreservedinstance</category>
    </item>
    <item>
      <title>How DevOps can Switch to Remote Working</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Mon, 06 Apr 2020 10:39:04 +0000</pubDate>
      <link>https://dev.to/totalcloudio/effective-transition-to-remote-working-for-devops-1lld</link>
      <guid>https://dev.to/totalcloudio/effective-transition-to-remote-working-for-devops-1lld</guid>
      <description>&lt;p&gt;Covid-19 has left things in disarray for Agile development teams. Sudden transition into a remote working structure has baffled the blended approach to DevOps which combines work culture &amp;amp; automation tools. The lack of contact work will start by striking your work culture first &amp;amp; then affect infrastructure &amp;amp; tools. It becomes imperative for your operative modes to adapt to the new normal. We've covered both these areas of Agile practices, so you don't incur the cost of inflexibility.&lt;/p&gt;

&lt;h2&gt;Keeping Things Continuous&lt;/h2&gt;

&lt;p&gt;Everything in DevOps is continuous. Code integration, delivery through testing, reviews and deployment to end users. The primary concern for a DevOps team would be to keep things continuous during the transition to this new work environment.&lt;/p&gt;

&lt;h3&gt;Cloud Migration&lt;/h3&gt;

&lt;p&gt;A lot of DevOps teams run an on-premise environment, or a mix of cloud-based and on-premise, but with this shift, migrating to cloud solutions is a necessity. In fact, some suggest this mass cloud migration is a switch that will stay.&lt;/p&gt;

&lt;p&gt;You could either go for a private virtual machine vendor like VMware’s vCloud or you can choose the popular services like AWS or Azure. &lt;/p&gt;

&lt;p&gt;Icertis is one of the companies that realized the importance of the &lt;a href="https://www.icertis.com/blog/covid-19-and-the-cloud-how-icertis-is-leveraging-azure-to-stay-safe-during-the-pandemic/"&gt;on-premise to cloud migration&lt;/a&gt; to enable their DevOps to function without hassle. They used Azure’s Site-to-Site and Point-to-Point VPN functionalities to quickly solve their problem. &lt;/p&gt;

&lt;h3&gt;Their process entailed three primary steps&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Duplicate the on-premise network with Azure’s VPN.&lt;/li&gt;
&lt;li&gt;Give end-user employees access to it, removing the bottleneck of office connections.&lt;/li&gt;
&lt;li&gt;Enable Virtual Desktop Infrastructure to replicate the work environment, while also removing the bottleneck of home connections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Azure isn’t the only service that helps with this migration, AWS has similar features as well.&lt;/p&gt;

&lt;p&gt;This is a short-term solution compared to a multi-cloud migration, but it’s cheaper. When it comes to AWS and Azure, if you play it smart, you can save a lot of money too.&lt;/p&gt;

&lt;h3&gt;Automated Testing Tools&lt;/h3&gt;

&lt;p&gt;There’s an abundance of tools on offer and a lot of factors to consider. For one, there are many testing strategies that can be automated, and tools don’t come cheap. Continuous testing is a prerequisite for continuous delivery. Manual code review takes time and is prone to errors. Now, with everyone at home juggling work and life, manual testing is the last thing you need to adopt.&lt;/p&gt;

&lt;p&gt;Some smaller companies opt to write their own test automation scripts but, again, if you’re going that route in this circumstance, that would be adding more pressure to the already hectic DevOps work cycle. &lt;/p&gt;

&lt;p&gt;Zephyr and IBM Rational are two very popular testing tools, but if necessary, you can look into open-source tools like Selenium and Watir.&lt;/p&gt;

&lt;h3&gt;No Compromises in Security&lt;/h3&gt;

&lt;p&gt;Continuous security isn’t in the definition of DevOps, but this pandemic has increased vulnerability. Sacrificing security for throughput is the equivalent of standing in the line of fire naked. Make sure your security policies are updated for application deployment, and integrate security checks into every phase of your automated testing.&lt;/p&gt;

&lt;p&gt;OWASP’s very own ZAP is a popular open-source tool for automated security testing. You can find code vulnerabilities with Veracode. Contrast Security helps identify issues during runtime. AWS and Azure have their own tools for checking whether there are any vulnerabilities in your architecture. Evident.io, on the other hand, can help you out if you use a private vCloud server.&lt;/p&gt;

&lt;h3&gt;Identity Management&lt;/h3&gt;

&lt;p&gt;DevOps by nature increases the need for strict identity management policies, but with the fragility of how remote work is distributed among employees, it is vital to make more concrete plans about who has access to what. Make sure you don’t slow things down by over-restricting the process. Automating password management, identity lifecycles, and the detection of rogue accounts in the infrastructure reduces the chance of vulnerabilities through human error.&lt;/p&gt;

&lt;p&gt;Identity Automation is a popular service that offers all these facilities. OneLogin excels in password management and in finding rogue accounts via Vigilance AI.&lt;/p&gt;

&lt;h2&gt;Changing the Culture&lt;/h2&gt;

&lt;p&gt;So now you’ve got your tools and resources sorted out: enough tooling to take the workload off your team, but not so much that you are crippled financially. The adjustments you make to stay afloat financially have to be matched by a work culture that keeps the DevOps team highly productive yet flexible for individuals.&lt;/p&gt;

&lt;p&gt;Easier said than done, but there are some guaranteed practices you can focus on.&lt;/p&gt;

&lt;h3&gt;Collaboration &amp;amp; Communication Frequency&lt;/h3&gt;

&lt;p&gt;Collaboration is what made DevOps such a popular method of product development. Reinforcing this aspect of DevOps is the first step to avoiding bottlenecks in your project. Combine tools such as Jira or Kanban with communication channels like Slack to organize work and distribute them effectively between members. Keep frequent communication to encourage the collaboration process and resolve any issues with dependency. &lt;/p&gt;

&lt;h3&gt;Avoid Individual Specialization&lt;/h3&gt;

&lt;p&gt;You don’t want your team to be split according to specializations; distribute work through a shared queue so that any member can pick up any given task. The last thing you need is for one member to have their work on hold because of family issues and for that to block everyone else’s progress.&lt;/p&gt;

&lt;h3&gt;Quantify the Progress&lt;/h3&gt;

&lt;p&gt;Documenting each task, and even each phase of a task, creates a timeline for management to track progress and lets team members know where everyone is at. Tools like Slack, Trello, and Jira help keep track of work through simple documentation channels, and users can revisit anything they’ve missed by checking the history.&lt;/p&gt;

&lt;h3&gt;Judge based on results&lt;/h3&gt;

&lt;p&gt;Focusing on constant activity will not yield better output from your employees. In these tough times, scrutinizing workers over how long they’re online on Slack or how busy they look can negatively affect progress. Instead, focusing on how they’re progressing with the tasks they’ve laid out for themselves gives you a truer picture of their work rate.&lt;/p&gt;

&lt;h3&gt;Take up the Back-Burner projects&lt;/h3&gt;

&lt;p&gt;The way things are, it’s wiser not to chase further innovation. Instead, prioritizing the back-burner projects that were being held off can help your organization in the long term. Identify which of these projects promise better ROI and split your priorities accordingly.&lt;/p&gt;

&lt;p&gt;A healthy balance between automated tools and work culture has always been the factor that decides DevOps success, and even in this circumstance, it remains the same. It is no secret that people are panicking. One look at r/devops and you might catch some fright yourself. However, plan your changes accordingly and not only will you survive this, but it can reap benefits for the long-term success of your organization.&lt;/p&gt;

</description>
      <category>remotework</category>
      <category>cloudmigration</category>
      <category>devops</category>
      <category>automatedtesting</category>
    </item>
    <item>
      <title>Cloud Parking</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Fri, 20 Mar 2020 13:26:50 +0000</pubDate>
      <link>https://dev.to/totalcloudio/cloud-parking-86p</link>
      <guid>https://dev.to/totalcloudio/cloud-parking-86p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cWzEbv0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ticxd1gdm2ko4tfsdyhl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cWzEbv0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ticxd1gdm2ko4tfsdyhl.jpg" alt="Cloud Parking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;What is Cloud Parking?&lt;/h3&gt;

&lt;p&gt;The cloud’s model of operations lets you use resources on demand, but ironically you’re paying for them even when you aren’t using them. This system of cloud services is a double-edged sword to the unprepared. Cloud Parking is a concept that lets you apply the ‘pay for what you consume’ philosophy to every cloud resource that you use: turn a resource on when you’re using it, and park it when you’re not - across your entire cloud.&lt;/p&gt;

&lt;h3&gt;What can be parked?&lt;/h3&gt; 

&lt;p&gt;The very paradigm of Cloud Parking rests on a resource’s ability to be turned on &amp;amp; off. Some resources have an inherent ability to be started at a certain time and be parked at a certain time - the best known would be EC2 &amp;amp; RDS instances. These can be parked out of the box in multiple ways that have been explored before. &lt;/p&gt;

&lt;p&gt;But your cloud has a multitude of other resources that are charged 24/7 without being used around the clock. Almost 80% of your cloud can be parked when not in use, barring a few functions like serverless. Why not extend parking even to the ‘non-schedulable’ services - like Redshift clusters, ECS, and EKS? ‘Start and stop’ in their case involves a few additional actions: taking a backup/snapshot, deleting the resource, &amp;amp; restoring it from the backup when needed.&lt;/p&gt;
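&lt;p&gt;The backup-delete-restore cycle can be sketched as follows (assumptions: the boto3 Redshift calls are shown only as comments, and the cluster name is hypothetical):&lt;/p&gt;

```python
from datetime import date

def parking_snapshot_name(cluster_id, day):
    """Deterministic snapshot name so the unpark step can find the backup."""
    return f"park-{cluster_id}-{day.isoformat()}"

# Parking a Redshift cluster would look roughly like:
#   redshift = boto3.client("redshift")
#   redshift.delete_cluster(
#       ClusterIdentifier="analytics",  # hypothetical cluster name
#       FinalClusterSnapshotIdentifier=parking_snapshot_name("analytics", date.today()))
# ...and unparking restores from that snapshot:
#   redshift.restore_from_cluster_snapshot(
#       ClusterIdentifier="analytics",
#       SnapshotIdentifier=parking_snapshot_name("analytics", date.today()))
```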

&lt;h3&gt;What are the current ways to park?&lt;/h3&gt;

&lt;p&gt;The most widely practiced method of parking resources today is to manually create Lambda functions that start and stop them at a specified time. AWS has equipped users with the Instance Scheduler, a CloudFormation script used in conjunction with DynamoDB tables to set up schedules. There are also a couple of third-party tools solely dedicated to scheduling instances - automated, with an easy-to-use UI.&lt;/p&gt;
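&lt;p&gt;The Lambda route usually boils down to: find the instances carrying a schedule tag, then stop them. A minimal sketch of the filtering step, operating on the shape of boto3’s describe_instances response (the ‘schedule’ tag key and value are assumptions):&lt;/p&gt;

```python
def instances_to_stop(reservations, tag_key="schedule", tag_value="office-hours"):
    """Pick running instance IDs that carry the scheduling tag.

    `reservations` has the shape of ec2.describe_instances()["Reservations"].
    """
    ids = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get(tag_key) == tag_value and inst["State"]["Name"] == "running":
                ids.append(inst["InstanceId"])
    return ids

# Inside a scheduled Lambda you would then call (sketch):
#   ec2 = boto3.client("ec2")
#   ec2.stop_instances(InstanceIds=instances_to_stop(
#       ec2.describe_instances()["Reservations"]))
```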

&lt;h3&gt;Pitfalls of the existing parking methods&lt;/h3&gt;

&lt;p&gt;As Michael Wittig points out, EC2 has been around for 13 years. Despite this, the existing parking methods haven’t prevented multiple businesses from incurring losses running into thousands of dollars from unnecessary uptime.&lt;/p&gt;

&lt;h4&gt;They only park EC2 &amp;amp; RDS instances&lt;/h4&gt;

&lt;p&gt;The existing parking methods focus on just two out of the hundreds of resources that exist in your cloud. That hardly fulfills the paradigm of ‘Cloud Parking’. &lt;/p&gt;

&lt;h4&gt;The most widely used method is hardly automated&lt;/h4&gt;

&lt;p&gt;The biggest parking method, the Instance Scheduler, is a manual of instructions to follow to achieve a schedule; it doesn’t put your cloud on auto-pilot. Consider a set of windscreen wipers. In a slight drizzle, you might wipe manually whenever needed, but as soon as the rain gets heavier, you set them to run automatically. If car makers hadn’t included the automatic option, a heavy downpour would have you scrambling in chaos, with your visibility affected &amp;amp; your speed reduced. As your cloud scales &amp;amp; becomes more intricate, human error &amp;amp; redundant repetition become your enemies.&lt;/p&gt;

&lt;h4&gt;They only park based on time&lt;/h4&gt;

&lt;p&gt;The standard approach to parking is to set a start and stop time based on usual business hours for your non-prod instances. But in a lot of scenarios, especially for larger companies, time alone doesn’t capture the scheduling needs. If you’re dealing with a complex or cross-continental infra, you need a smarter parking method that’s dynamic and based on actual usage. As soon as an idle resource is identified, it should power off, without the hassle of calculating the ideal time or worrying about timezones.&lt;/p&gt;
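&lt;p&gt;Usage-based parking needs an idleness test. A minimal sketch: decide based on the CPU averages a CloudWatch query would return over a lookback window (the 5% threshold is an assumed cutoff, not a recommendation):&lt;/p&gt;

```python
def is_idle(cpu_averages, threshold_pct=5.0):
    """True if every CPU average in the lookback window sits below the threshold.

    `cpu_averages` would come from a CloudWatch CPUUtilization query.
    """
    if not cpu_averages:
        return False  # no data: don't park blindly
    return all(avg < threshold_pct for avg in cpu_averages)
```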

&lt;p&gt;I’m talking real-time power-off &amp;amp; on-demand turn-on (as easy as the flick of a switch, sticking with the metaphor). The periods when employees take breaks or shift their priorities between projects are all valid reasons to reduce uptime.&lt;/p&gt;

&lt;h4&gt;They do not allow collaboration&lt;/h4&gt;

&lt;p&gt;DevOps teams that operate in a multi-geo setup require collaboration-friendly tools &amp;amp; platforms. You wouldn’t want team meetings called on Slack just to park or unpark a resource. In a lot of cases, the creator and the users of a resource are different people; the intention is to give the actual user the control required to operate smoothly.&lt;/p&gt;

&lt;p&gt;Your engineers should be able to spin up a resource whenever they need without any hassle, through simple external URL triggers and integrations. This ability, synced with automated parking based on idleness, will be a well-oiled machine in a collaborative landscape.&lt;/p&gt;

&lt;h3&gt;In conclusion&lt;/h3&gt;

&lt;p&gt;We’re always looking for ways to create a better version of our architecture, applications, and offering - the same applies to every cloud management function. In the case of Cloud Parking, although Lambdas have been around for a while, it’s time we had something more scalable and dynamic - so that you don’t have to recreate it across resources, regions &amp;amp; accounts.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudmanagement</category>
      <category>awsec2</category>
      <category>cloudcostoptimization</category>
    </item>
    <item>
      <title>The Proven Practices for Successful AWS Cost Optimization</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Wed, 04 Mar 2020 11:04:15 +0000</pubDate>
      <link>https://dev.to/totalcloudio/the-proven-practices-for-successful-aws-cost-optimization-34</link>
      <guid>https://dev.to/totalcloudio/the-proven-practices-for-successful-aws-cost-optimization-34</guid>
      <description>&lt;p&gt;Cost optimization strategies for AWS services are abundant. Prioritizing between your options is necessary to make sure you don’t overload yourself with the wealth of information. Looking at the best practices in the industry right now and the practices that have now become obsolete would help you find stability in your finances.&lt;/p&gt;

&lt;h3&gt;Cost Visibility and Analysis&lt;/h3&gt;

&lt;p&gt;Before diving into the nitty-gritty details of the strategies to be employed, you need to monitor the current status of your infrastructure to gain insight into what you need. Accessing billing information and purchase reports enables you to analyze your expenses and directly restrict additional costs. Additional monitoring of your cloud resources, their utilization, accuracy &amp;amp; security can save massive costs. AWS has a few native tools that point out what you need to be doing by looking at where your money is going.&lt;/p&gt;

&lt;p&gt;The tools analyze your bill to predict future expenses or give you a detailed report on where your expenses went. Using this information, you can set up budget plans for your reserved instances. Identifying unallocated Elastic IPs or unused objects with these tools also lets you save costs by deleting them. &lt;/p&gt;

&lt;p&gt;The three valuable native AWS cost optimization tools are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Cost Explorer&lt;/li&gt;
&lt;li&gt;AWS Trusted Advisor&lt;/li&gt;
&lt;li&gt;Cost and Usage Report&lt;/li&gt;
&lt;/ul&gt;
 

&lt;p&gt;Detailed Billing used to provide this service, but it has since been discontinued and replaced by the Cost and Usage Report. Detailed Billing let you group your expenditure by the features, tools, or parts of your infrastructure to see where expenses ran high or low.&lt;/p&gt;

&lt;p&gt;Now that you have an understanding of how to analyze your expenses, that understanding is wasted if it isn’t actionable. Here are the three major changes you can undertake based on the information you have gathered.&lt;/p&gt;

&lt;p&gt;These are the pillars of AWS cost optimization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduling&lt;/li&gt;
&lt;li&gt;Rightsizing&lt;/li&gt;
&lt;li&gt;Storage optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Scheduling&lt;/h3&gt;

&lt;p&gt;Scheduling is the act of limiting the runtime of your resources so that you don’t pay for non-usage. Oftentimes, poor (or non-existent) scheduling strategies are the main culprit behind companies suffering massive losses. How often have there been stories about a server being turned on and left running for months or even years? Since the pricing model of many AWS services charges you for the duration a service stays up, every moment counts. This makes scheduling an integral element of cost optimization.&lt;/p&gt;

&lt;p&gt;The most widely scheduled resource is EC2, but there’s no need to stop at just that. We’ve observed that cost-savings can come from scheduling every other resource as well. Some of them inherently support start/stop functions (like RDS Instances &amp;amp; Redshift Clusters), and some don’t (ECS, EKS, Fargate). But even these can be optimized to save costs.&lt;/p&gt;

&lt;p&gt;The go-to metric to create a schedule is time. You’re maintaining uptime of your resources based on business hours or the time you deem necessary for them to be up. In a 168-hour week, you're most likely using your resources for only 40 hours (a typical 8-hour workday x 5). Shutting your resources down the remaining 76% of the time can be the difference between an optimized, cost-effective workload and one that gives you dreadful bill shocks.&lt;/p&gt;
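&lt;p&gt;That 76% figure is simple arithmetic:&lt;/p&gt;

```python
WEEK_HOURS = 7 * 24       # 168 hours in a week
BUSINESS_HOURS = 8 * 5    # 40 working hours

idle_fraction = (WEEK_HOURS - BUSINESS_HOURS) / WEEK_HOURS
print(f"{idle_fraction:.0%} of the week falls outside business hours")
```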

&lt;p&gt;A more automated &amp;amp; responsive method is usage-based scheduling, where resources shut down on the basis of idleness. You can set up your system to detect idle resources and park them in real time. This optimizes resource usage more closely, saving up to twice as much as the usual time-based scheduling.&lt;/p&gt;

&lt;p&gt;Now, you can automate the scheduling of resources with CloudWatch events or a third-party AWS scheduler. While CloudWatch is an AWS exclusive option that offers flexibility, it requires customers to write their own code to execute their automated tasks. A third-party &lt;a href="https://www.totalcloud.io/resource-scheduling?utm_source=Dev-to&amp;amp;utm_medium=dist&amp;amp;utm_campaign=Cost-Optimization"&gt;AWS Scheduler&lt;/a&gt; would be able to offer similar functionalities but with a more user-friendly approach.&lt;/p&gt;
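&lt;p&gt;With CloudWatch Events (now EventBridge), a time-based schedule is a cron expression attached to a rule. A minimal sketch (the rule name and Lambda ARN are hypothetical, and the boto3 wiring is shown only as comments):&lt;/p&gt;

```python
def weekday_cron(hour_utc, minute=0):
    """CloudWatch Events cron expression firing Mon-Fri at the given UTC time."""
    return f"cron({minute} {hour_utc} ? * MON-FRI *)"

# Wiring the rule to a stop-instances Lambda (sketch):
#   events = boto3.client("events")
#   events.put_rule(Name="stop-dev-instances",            # hypothetical rule name
#                   ScheduleExpression=weekday_cron(18))  # 18:00 UTC
#   events.put_targets(Rule="stop-dev-instances",
#                      Targets=[{"Id": "1", "Arn": stop_lambda_arn}])
```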

&lt;h3&gt;Rightsizing&lt;/h3&gt;

&lt;p&gt;Rightsizing achieves the lowest possible cost of running your instances by choosing the appropriate instance types and sizes according to your performance and hardware requirements. It is the strategy to adopt to avoid spending excess on hardware you don’t need.&lt;/p&gt;

&lt;p&gt;Purchasing the instances you need doesn’t end the rightsizing process. Monitoring the resources you have purchased periodically (monthly is recommended) is necessary to plan future purchases and to break down what works and what doesn’t.&lt;/p&gt;

&lt;p&gt;The four factors to monitor when deciding the right instances to purchase are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vCPU utilization&lt;/li&gt;
&lt;li&gt;Memory utilization&lt;/li&gt;
&lt;li&gt;Network I/O utilization&lt;/li&gt;
&lt;li&gt;Disk utilization&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Purchase Behaviour&lt;/h4&gt;

&lt;h4&gt;On-Demand Instances&lt;/h4&gt;

&lt;p&gt;On-Demand Instances are the most commonly purchased instance type, mainly because of how much control they give the customer. For long-time users, looking into the updated pricing list can benefit their AWS cost optimization: it is likely they are running instances that are now more expensive than their alternatives. Instances such as m1, c1, and t1 can be switched to m3, t2, and c3, a migration that comes with superior CPU performance, higher memory, and a lower price. Putting off these upgrades could cost you 10-20% more.&lt;/p&gt;

&lt;h4&gt;Reserved Instances&lt;/h4&gt;

&lt;p&gt;Purchasing the right AWS Reserved Instances can cut up to 60% of your current expenditure. Businesses that continuously use the AWS cloud should evaluate their Reserved Instances every month. Reserving an instance for a one-year or three-year duration, with usage parameters, can fetch an hourly rate up to 75% lower than On-Demand pricing.&lt;/p&gt;

&lt;p&gt;AWS Reserved Instances for DynamoDB charge based on provisioned throughput instead of running hours. That is, whether the capacity is being used or not, the reservation continues to charge.&lt;/p&gt;

&lt;h4&gt;Spot Instances&lt;/h4&gt;

&lt;p&gt;The last instance type to consider for cost optimization is the Spot Instance. Spots are spare Amazon EC2 instances that can be purchased at discounts of up to 90% off the On-Demand price. A Spot Instance is terminated if the Spot price exceeds the customer’s stated maximum price or the capacity becomes unavailable.&lt;/p&gt;

&lt;p&gt;Using the pricing history on the AWS console makes finding the right Spot Instance easy. Spot Instances are best used for workloads that can tolerate interruption, or that can fall back to On-Demand instances without backups and data restoration.&lt;/p&gt;

&lt;p&gt;Using a tool like the Spot Instance Advisor prepares you to purchase the right Spot Instance, with the least interruption and fair pricing. Spot Fleet helps further by keeping multiple Spot Instances ready to deploy, ranked by their value, in case of an interruption.&lt;/p&gt;

&lt;h3&gt;Storage Optimization&lt;/h3&gt;

&lt;p&gt;You are likely spending a lot of money on storage space that you are not utilizing at all, or are utilizing improperly. The main goal of cutting storage costs is to keep your services functional at optimal conditions. To ensure this, you have to balance your use of the Amazon S3 storage tiers and other AWS storage services properly. When evaluating storage requirements, segment your data by how available and durable it needs to be, the size of the data sets, throughput and IOPS thresholds, and regulatory requirements.&lt;/p&gt;

&lt;h4&gt;Amazon S3 Storage Lifecycle Optimization&lt;/h4&gt;

&lt;p&gt;The ideal way to optimize your storage is to set up an S3 storage lifecycle, which makes the optimization automatic. S3 has several storage classes, and you move your objects between them according to the purpose of each class. The various pricing models they offer can effectively cut your costs, as long as you handle the distribution wisely. Amazon S3 storage tiers include:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3P_fmeQj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xt97uh75oh2xixp5dq2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3P_fmeQj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xt97uh75oh2xixp5dq2j.png" alt="S3 Tiers"&gt;&lt;/a&gt;&lt;/p&gt;
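&lt;p&gt;A lifecycle policy that steps objects down through those tiers can be sketched as below (the bucket name, prefix, and day counts are illustrative assumptions; the boto3 call is shown only as a comment):&lt;/p&gt;

```python
# Standard -> Standard-IA after 30 days, Glacier after 90, delete after 365.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-old-logs",        # hypothetical rule name
        "Filter": {"Prefix": "logs/"},   # hypothetical prefix
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},
    }]
}

# Applied with boto3:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-log-bucket",          # hypothetical bucket
#       LifecycleConfiguration=lifecycle_config)
```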

&lt;p&gt;Deleting unused disk volumes opens up large amounts of space for backups and lets you move your data more freely. Keep tabs on your storage allocation and you can save a ton.&lt;/p&gt;

&lt;h3&gt;Structural Cost Optimization&lt;/h3&gt;

&lt;p&gt;You can also optimize costs by targeting the expenses made by individual members or teams of your organization. Miscommunication can lead to poor resource handling, which in turn causes purchases you might not need.&lt;/p&gt;

&lt;h4&gt;Complex Infrastructure&lt;/h4&gt;

&lt;p&gt;When your business is large, with a complex infrastructure that spans the globe, effective communication between overseas teams becomes difficult. The various departments of an organization, each with its own technical specialty, are a huge factor in this miscommunication. There are usually multiple teams: one that sets up the infrastructure and another that actually operates it, with the former controlling the runtime. Any miscommunication between the deploying team &amp;amp; the operating team can lead to unforeseen charges. Setting up policies and guidelines for your various teams aligns all employees to a common practice, and it removes potential communication errors by encouraging educated conversations with peers of other specialties.&lt;/p&gt;

&lt;h4&gt;The Human Element&lt;/h4&gt;

&lt;p&gt;The human element of running a business makes it prone to occasional errors. Forgetting to shut down a resource, failing to delete unused volumes, or using more expensive resources than needed are all common careless mistakes. The best solution is to balance the manual workload with automation: many of these strategies can be implemented with automation tools, and taking some of the burden off your employees lowers the chance of error.&lt;/p&gt;

&lt;h3&gt;How Cost Optimization can help other aspects of your business&lt;/h3&gt;

&lt;h4&gt;Detecting Rogue Infrastructure&lt;/h4&gt;

&lt;p&gt;If you’ve got infrastructure fulfilling purposes that the organization has not assigned to it, then it has gone rogue. Depending on the complexity of this unauthorized activity, that infrastructure might be racking up excess expense. Analyzing your bills with cost management tools can help you pinpoint which infrastructure is acting up.&lt;/p&gt;

&lt;h4&gt;Security holes&lt;/h4&gt;

&lt;p&gt;Analyzing your bills can also surface potential vulnerabilities. Unexpected spikes in spending could point to a breach in security.&lt;/p&gt;

&lt;h4&gt;Neglected projects&lt;/h4&gt;

&lt;p&gt;You can identify half-finished projects that continue to charge you but aren’t being worked on. Idle projects can either be discontinued, or you can strip their resources away temporarily.&lt;/p&gt;

&lt;p&gt;With the right strategies for your resources, and the right tools to help you put these practices into action, you are on your way to stabilizing the finances of your cloud management.&lt;/p&gt;

</description>
      <category>awscostoptimization</category>
      <category>awsreservedinstances</category>
      <category>awsspotinstances</category>
      <category>awss3</category>
    </item>
    <item>
      <title>AWS Cost Optimization Checklist</title>
      <dc:creator>Totalcloud.io</dc:creator>
      <pubDate>Tue, 03 Mar 2020 06:46:54 +0000</pubDate>
      <link>https://dev.to/totalcloudio/aws-cost-optimization-checklist-2fem</link>
      <guid>https://dev.to/totalcloudio/aws-cost-optimization-checklist-2fem</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2atc2hpO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9y7ypo0ruj1tk8ftxbwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2atc2hpO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9y7ypo0ruj1tk8ftxbwn.png" alt="Thumbnail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are you sure you're following all of the practices and strategies to optimize your expenses? AWS has many flexible methods of keeping your expenditure to a minimum but the myriad of services and strategies could overwhelm you. Check out the list of essential strategies below to see if you've got all your bases covered.&lt;/p&gt;

&lt;h4&gt;Instances&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Purchase instances that suit your organizational needs.&lt;/li&gt;
&lt;li&gt;Buy reserved instances with appropriate duration. Don't reserve instances long-term if you won't need them.&lt;/li&gt;
&lt;li&gt;Analyze the pricing history of the Spot market at least a week before purchase to know which instances are worth purchasing.&lt;/li&gt;
&lt;li&gt;Purchase Spot Instances for resources that aren't set up for production.&lt;/li&gt;
&lt;li&gt;Deploy some Spot instance tools so you don't have to worry about your instances being interrupted.&lt;/li&gt;
&lt;li&gt;Monitor your instances monthly. Purchase the latest instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Scheduling&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Schedule your instances. Keep them running only on business hours.&lt;/li&gt;
&lt;li&gt;Make use of AWS instance scheduler or third-party automatic &lt;a href="https://www.totalcloud.io/resource-scheduling?utm_source=Dev-to&amp;amp;utm_medium=dist&amp;amp;utm_campaign=Cost-Checklist"&gt;scheduling tools&lt;/a&gt; to take the task off your hands.&lt;/li&gt;
&lt;li&gt;Be even more thorough with your scheduling times. If you don't need instances running during certain working hours (like lunchtime), turn them off.&lt;/li&gt;
&lt;li&gt;Schedule your resources based on their activity; there's no need to keep them running if they're idle.&lt;/li&gt;
&lt;li&gt;Shut down resources other than instances like unused DB volumes or elastic IPs based on time or activity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Storage&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Optimizing your storage directly optimizes your costs. Store your resources on S3 and move them between tiers according to their activity.&lt;/li&gt;
&lt;li&gt;Set up a lifecycle policy&lt;/li&gt;
&lt;li&gt;Use S3 Intelligent-tiering for moving between frequent and infrequent access tiers automatically.&lt;/li&gt;
&lt;li&gt;Archive your less active resources on Glacier.&lt;/li&gt;
&lt;li&gt;Archive data backups long-term on Glacier Deep Archive.&lt;/li&gt;
&lt;li&gt;Back up large volumes of data on Elastic Block Store; store your EC2 instance or cluster data here.&lt;/li&gt;
&lt;li&gt;Applications with high workloads can be run on Amazon EFS as it provides auto-scaling and quick outputs.&lt;/li&gt;
&lt;li&gt;Delete old snapshots and unallocated disk volumes&lt;/li&gt;
&lt;li&gt;Delete unnecessary objects and buckets&lt;/li&gt;
&lt;li&gt;Make use of AWS auto-scaling to allocate sufficient storage capacity to the workload&lt;/li&gt;
&lt;li&gt;Choose open-source operating systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;AWS Cost Management Tools&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Analyze your expenses with Cost and Usage report.&lt;/li&gt;
&lt;li&gt;See your expense forecast with AWS Cost Explorer.&lt;/li&gt;
&lt;li&gt;Figure out unused resources with Trusted Advisor.&lt;/li&gt;
&lt;li&gt;Keep up with the trends in the market with AWS Cost Explorer&lt;/li&gt;
&lt;li&gt;Create budget plans for your resources&lt;/li&gt;
&lt;li&gt;Aggregate the expenses by services you run&lt;/li&gt;
&lt;li&gt;Equip the individual employees of your organization with cost optimization tools&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloudcostmanagement</category>
      <category>spotinstances</category>
      <category>reservedinstances</category>
    </item>
  </channel>
</rss>
